Patent classifications
H04N5/2621
Method and apparatus for video shooting, terminal device and storage medium
Techniques for video shooting are provided by the present disclosure, comprising: detecting a user's selection of a video shooting mode control on a target interface and a triggering of a video shooting control on the target interface; obtaining a video segment corresponding to the selected video shooting mode; in response to determining that the user's triggering operation on a next step control on the target interface is not detected, repeatedly detecting the selection of the video shooting mode control, the triggering of the video shooting control, and a release operation on the video shooting control to obtain corresponding video segments, until the user's triggering operation on the next step control is detected; and displaying, on a video storage interface, a result of synthesizing the video segments to obtain a target video.
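The capture loop the abstract describes — collect mode-tagged segments until the "next step" control fires, then synthesize them — can be sketched as follows. The event format, control names, and the `synthesize` step are illustrative assumptions, not the patent's implementation:

```python
# Hypothetical sketch of the capture loop: segments are collected per
# selected shooting mode until the "next step" control is triggered,
# then synthesized into a target video.

def capture_session(events):
    """events: iterable of (control, payload) tuples from the target interface."""
    segments = []
    mode = "normal"
    for control, payload in events:
        if control == "mode":       # selection on the video shooting mode control
            mode = payload
        elif control == "shoot":    # press-and-release on the video shooting control
            segments.append({"mode": mode, "clip": payload})
        elif control == "next":     # triggering on the next step control ends capture
            break
    return synthesize(segments)

def synthesize(segments):
    # Placeholder: concatenate clips into a target video.
    return "+".join(s["clip"] for s in segments)
```

Events arriving after the "next step" trigger are ignored, matching the abstract's "until ... is detected" loop condition.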
Gesture mapping for image filter input parameters
This disclosure pertains to systems, methods, and computer-readable media for mapping particular user interactions, e.g., gestures, to the input parameters of various image processing routines, e.g., image filters, in a way that provides a seamless, dynamic, and intuitive experience for both the user and the software developer. Such techniques may handle the processing of both “relative” gestures, i.e., those gestures having values dependent on how much an input to the device has changed relative to a previous value of the input, and “absolute” gestures, i.e., those gestures having values dependent only on the instant value of the input to the device. Additionally, inputs to the device beyond user-input gestures may be utilized as input parameters to one or more image processing routines. For example, the device's orientation, acceleration, and/or position in three-dimensional space may be used as inputs to particular image processing routines.
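The relative/absolute distinction can be illustrated with a minimal sketch, assuming a scalar filter parameter normalized to [0, 1]; the function names and gesture encoding are hypothetical, not the disclosure's API:

```python
# Illustrative mapping of gestures to a single filter input parameter.
# "absolute" gestures set the parameter from the instant input value;
# "relative" gestures offset it from its previous value.

def clamp(x, lo=0.0, hi=1.0):
    return max(lo, min(hi, x))

def map_gesture(param, gesture):
    """param: current parameter value; gesture: (kind, value) tuple."""
    kind, value = gesture
    if kind == "absolute":
        # e.g. a tap position: depends only on the instant input value
        return clamp(value)
    if kind == "relative":
        # e.g. a pinch delta: depends on the change from the previous value
        return clamp(param + value)
    return param
```

Clamping keeps either gesture type from driving the parameter outside the range the filter accepts.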
Electronic device and method for electronic device displaying image
According to various embodiments, an electronic device may comprise: a first camera arranged on a first surface of a housing of the electronic device; a second camera arranged on the first surface, apart from the first camera; a display; and a processor configured to apply a first image effect to at least a portion of a first input image and display the result on the display, on the basis of a first object area of the first input image obtained by using phase difference information of the first input image, from among the first input image obtained from the first camera and a second input image obtained from the second camera, and to apply a second image effect to at least a portion of the first input image and display the result on the display, on the basis of a second object area of the first input image obtained by using time difference information between the first input image and the second input image.
METHOD AND APPARATUS FOR PROVIDING IMAGE
Disclosed in various embodiments of the present disclosure are a method and an apparatus for providing an image in an electronic device. An electronic device according to various embodiments comprises a camera module, a display, a memory, and a processor, where the processor can display a preview image through the display, capture an image based at least on the preview image in response to a user input while displaying the preview image, perform image analysis based on the captured image, identify at least one class related to the captured image based on the image analysis result, identify at least one user preference based on the identified class, and provide, through the display, at least one recommended image related to the at least one user preference.
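A minimal sketch of the capture → analysis → class → preference → recommendation flow, assuming a simple class-to-preference lookup table and a tag-matched gallery; all names and data structures are illustrative, not the patent's:

```python
# Illustrative recommendation pipeline: identify classes in the captured
# image, map them to user preferences, and return matching gallery images.

def analyze(image):
    # Placeholder classifier: here an "image" is already a set of class labels.
    return image

def recommend(captured_image, preference_table, gallery):
    classes = analyze(captured_image)                 # image analysis step
    prefs = {preference_table[c] for c in classes if c in preference_table}
    # recommend images tagged with any identified user preference
    return [img for img, tags in gallery if prefs & tags]

table = {"beach": "landscape", "dog": "pets"}
gallery = [("img1", {"landscape"}), ("img2", {"portrait"})]
```

For example, a captured image classified as "beach" maps to the "landscape" preference and surfaces the gallery image tagged with it.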
VIDEO CAPTURING METHOD, APPARATUS, DEVICE AND STORAGE MEDIUM
A video capturing method, an apparatus, a device, and a storage medium are provided. The method is applied to a terminal device that includes a camera, and includes: in response to an input event of selecting a to-be-segmented target object from among a plurality of objects presented in an image acquired by the camera, segmenting the target object to obtain a target segmentation result; displaying the target segmentation result; and in response to a triggering operation for adjusting the target segmentation result, fusing the adjusted target segmentation result with a to-be-fused video scene to generate a video.
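The segment-then-fuse flow can be sketched with NumPy, assuming a binary mask as the segmentation result and a pixel offset as the user's adjustment; both assumptions are illustrative stand-ins for the patent's segmentation model and adjustment gestures:

```python
# Illustrative segment-and-fuse step on single-channel frames.
import numpy as np

def segment(image, mask):
    # Placeholder for the segmentation model: apply a binary mask
    # selecting the target object's pixels.
    return image * mask

def fuse(segmented, mask, scene, dx=0, dy=0):
    """Paste the (adjusted) segmented object into the to-be-fused scene,
    shifted by the user's adjustment offset (dx, dy)."""
    out = scene.copy()
    ys, xs = np.nonzero(mask)
    out[ys + dy, xs + dx] = segmented[ys, xs]
    return out
```

A real pipeline would repeat the fuse step per frame of the target video scene.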
Makeup mirror display with multiple cameras and variable color temperature light source
A makeup mirror display with multiple cameras and variable color temperature light source includes an adjustable makeup mirror, a makeup mirror display, at least two cameras, and at least one variable color temperature light source. The makeup mirror display is arranged at one end of the adjustable makeup mirror surface and includes at least one control button and a control circuit arranged therein. The at least two cameras are respectively arranged on a left side and a right side of the makeup mirror display and electrically connected with the makeup mirror display. The at least one variable color temperature light source is disposed on one side of the makeup mirror display and electrically connected with the makeup mirror display. Accordingly, the user can apply makeup more efficiently and achieve a better result, enhancing the overall practicability and convenience of the device.
Media content discard notification system
Systems and methods are provided for receiving a media content item, causing a media content item interface containing the media content item to be displayed by a display device, and determining that a first contextual element is associated with the media content item displayed in the media content item interface. The systems and methods further include determining that a destructive action is applied to the media content item, and generating an overlay dialog component that is overlaid on top of the media content item in response to determining that the first contextual element is associated with the media content item in the media content item interface and that the destructive action has been applied to the media content item.
Devices, methods, and graphical user interfaces for depth-based annotation
While displaying playback of a first portion of a video in a video playback region, a device receives a request to add a first annotation to the video playback. In response to receiving the request, the device pauses playback of the video at a first position in the video and displays a still image that corresponds to the first, paused position of the video. While displaying the still image, the device receives the first annotation on a first portion of a physical environment captured in the still image. After receiving the first annotation, the device displays, in the video playback region, a second portion of the video that corresponds to a second position in the video, where the first portion of the physical environment is captured in the second portion of the video and the first annotation is displayed in the second portion of the video.
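A simplified 2-D stand-in for the behavior described: an annotation anchored to a point in the physical environment at the paused frame is re-displayed in any later frame whose view contains that point. The data model and function names are hypothetical, and real depth-based tracking would use 3-D scene geometry rather than this flat model:

```python
# Illustrative annotation anchoring: notes attach to world coordinates,
# and a frame shows every note whose anchor falls inside its view.

def place_annotation(world_point, note):
    """Attach a note to a point in the captured physical environment."""
    return {"point": world_point, "note": note}

def render_frame(frame_view, annotations):
    """frame_view: axis-aligned (xmin, ymin, xmax, ymax) region of the
    world visible in this frame. Returns the notes visible in the frame."""
    xmin, ymin, xmax, ymax = frame_view
    return [a["note"] for a in annotations
            if xmin <= a["point"][0] <= xmax and ymin <= a["point"][1] <= ymax]
```

An annotation added on the paused still image thus reappears automatically in a later portion of the video when the camera revisits the same part of the environment.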
ZONE-ADAPTIVE VIDEO GENERATION
The present invention provides a system and a method for automatically generating an output video of a presentation given by at least one presenter, comprising displayed content, and performed in an environment. The system comprises: a plurality of zones defined within the environment; at least one camera configured to capture image frames of the presentation given by the presenter in the environment; means to detect when the at least one presenter changes zone; and a configuration associating, with each zone, a set of filming parameters for controlling the at least one camera when the at least one presenter is in said zone. The system is further configured to change the filming parameters when the at least one presenter changes zone, based on the configuration associated with the zone in which the at least one presenter is located, so as to provide an output video with different filming parameters.
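The zone-to-filming-parameters mapping can be sketched as follows; the rectangular zone regions, parameter names, and position-tracking input are illustrative assumptions rather than the invention's actual configuration format:

```python
# Illustrative zone-adaptive camera control: when the tracked presenter
# position enters a new zone, that zone's filming parameters are applied.

ZONES = {
    "podium":     {"region": (0, 0, 3, 3),  "params": {"zoom": 2.0, "framing": "close"}},
    "whiteboard": {"region": (3, 0, 10, 3), "params": {"zoom": 1.0, "framing": "wide"}},
}

def zone_of(position):
    x, y = position
    for name, z in ZONES.items():
        xmin, ymin, xmax, ymax = z["region"]
        if xmin <= x < xmax and ymin <= y < ymax:
            return name
    return None

def track(positions):
    """Yield (zone, filming_params) each time the presenter changes zone."""
    current = None
    for pos in positions:
        zone = zone_of(pos)
        if zone is not None and zone != current:
            current = zone
            yield zone, ZONES[zone]["params"]
```

Only zone *changes* emit a reconfiguration, so the camera is not disturbed while the presenter moves within a single zone.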
METHODS AND SYSTEMS OF COMBINING VIDEO CONTENT WITH ONE OR MORE AUGMENTATIONS TO PRODUCE AUGMENTED VIDEO
Data processing systems and methods are disclosed for combining video content with one or more augmentations to produce augmented video. Objects within video content may have associated bounding boxes, each of which may be associated with a respective RGBA value. Upon user selection of a pixel, the RGBA value of the pixel may be used to determine the bounding box associated with that RGBA value. The client device may transmit an indicator of the determined bounding box to an augmentation system to request augmentation data for the object associated with the bounding box. The augmentation system then uses the indicator to determine the augmentation data and transmits the augmentation data to the client device.
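The pixel-to-box resolution step can be sketched as a lookup keyed by RGBA values, assuming a hidden "hit map" aligned with the video frame; the data structures and names below are illustrative, not the system's actual wire format:

```python
# Illustrative pixel pick: each bounding box is keyed by a distinct RGBA
# value in a hit map aligned with the frame; tapping a pixel reads its
# RGBA value and resolves the associated box indicator.

BOXES_BY_RGBA = {
    (255, 0, 0, 255): "box_shirt",
    (0, 255, 0, 255): "box_shoes",
}

def pick(hit_map, x, y):
    """hit_map: 2-D grid of RGBA tuples aligned with the video frame."""
    rgba = hit_map[y][x]
    return BOXES_BY_RGBA.get(rgba)      # indicator of the selected box, or None

def request_augmentation(box_id):
    # Placeholder for the round-trip to the augmentation system, which
    # resolves the indicator to augmentation data for the object.
    return {"box": box_id, "overlay": f"augmentation-for-{box_id}"}
```

Pixels not covered by any object resolve to no box, so only selections on augmentable objects trigger a request.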