Patent classifications
H04N2005/2726
ELECTRONIC DEVICE FOR GENERATING VIDEO COMPRISING CHARACTER AND METHOD THEREOF
An electronic device and method are disclosed. The electronic device includes a display, a processor, and memory. The processor may implement the method, including: analyzing, by the processor, a first video to identify any characters included in the first video; displaying, via the display, one or more icons representing one or more characters identified in the first video; receiving, by input circuitry, a first user input selecting a first icon representing a first character from among the one or more icons; based on the first user input, selecting image frames of the first video that include the first character from among the image frames included in the first video; and generating, by the processor, a second video including the selected image frames. A second embodiment includes automatically selecting, from a gallery, images that include one or more characters for generation of a video.
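The selection step can be sketched in isolation: assuming the analysis pass has already produced a set of character IDs per frame (the frame and detection formats below are hypothetical, not from the patent), the second video is just the subsequence of frames containing the chosen character.

```python
def select_frames(frames, detections, target_character):
    """Return frames whose detected-character set includes the target.

    frames: ordered list of frame objects (placeholder strings here).
    detections: per-frame sets of character IDs, as produced by the
    analysis step described in the abstract.
    """
    return [frame for frame, chars in zip(frames, detections)
            if target_character in chars]

# Example: character "A" appears in frames 0 and 2 only.
frames = ["frame0", "frame1", "frame2"]
detections = [{"A"}, {"B"}, {"A", "B"}]
second_video = select_frames(frames, detections, "A")
print(second_video)  # ['frame0', 'frame2']
```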
GENERATING AUGMENTED REALITY CONTENT BASED ON USER-SELECTED PRODUCT DATA
In one or more implementations, information about a number of products may be obtained. Visual effects corresponding to each product may be applied to objects included in user content to change the appearance of the objects included in the user content. Augmented reality content may be implemented to cause changes to the appearance of one or more objects included in the user content. In various examples, a number of visual effects related to different products may be applied to objects included in the user content. A user interface including information about each of the products applied to objects included in the user content may be produced.
Analyzing 2D movement in comparison with 3D avatar
A processing device receives a two-dimensional (2D) video recording of a subject user performing a physical activity and provides a three-dimensional (3D) visualization comprising a virtual avatar performing the physical activity. The processing device causes display of the 3D visualization comprising the virtual avatar at a first key point in performing the physical activity, receives first user input to advance the 2D video recording to a first position corresponding to the first key point, and receives second user input comprising a first synchronization command. In response, the processing device generates a first synchronization marker to indicate the first position in the 2D video recording corresponding to the first key point.
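The synchronization markers described above amount to a mapping from avatar key points to positions in the 2D recording. A minimal sketch (the class and field names are hypothetical, not taken from the patent):

```python
from dataclasses import dataclass

@dataclass
class SyncMarker:
    key_point: int      # index of the key point in the 3D visualization
    video_pos_s: float  # matching position in the 2D recording, seconds

class SyncTimeline:
    """Collects markers created in response to synchronization commands."""

    def __init__(self):
        self.markers = []

    def add_marker(self, key_point, video_pos_s):
        # Called when the user issues a synchronization command after
        # advancing the 2D video to the position for this key point.
        self.markers.append(SyncMarker(key_point, video_pos_s))

    def position_for(self, key_point):
        # Look up where the 2D recording should seek for a key point.
        for marker in self.markers:
            if marker.key_point == key_point:
                return marker.video_pos_s
        return None

timeline = SyncTimeline()
timeline.add_marker(key_point=1, video_pos_s=4.2)
print(timeline.position_for(1))  # 4.2
```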
Communication terminal, display method, and non-transitory computer-readable medium for displaying images and controller
A communication terminal is communicable, via a network, with another communication terminal mounted on a mobile apparatus or with a communication device of the mobile apparatus. The communication terminal transmits operation instruction information for controlling the mobile apparatus to the other communication terminal or the communication device. The communication terminal includes circuitry configured to: receive a first video transmitted by one of the other communication terminal and the communication device, and a second video having a wide angle of view captured by a wide-angle image capturing apparatus; display, on a display device, at least one of the received first video and second video; and display, on the display device, a controller for controlling an operation of the mobile apparatus superimposed on the at least one of the first video and the second video.
Privacy Protection Camera
A video camera may create an anonymized video stream by detecting people's faces and then anonymizing them by pixelation. The camera may be contained in a single housing from which outbound transmissions may be restricted to anonymized content. Some devices may include a secure portal or access mechanism through which authorized users may access the raw video prior to anonymization, or may receive information that assists in identifying individual people in the video feed. The authorized users may provide credentials or use some other mechanism to gain access to the sensitive raw video feed. The devices may embed the anonymization routines in hardware or software such that a raw video feed is unavailable as initially installed.
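Pixelation itself is simple block averaging over the detected face rectangle. A self-contained sketch on a grayscale image represented as a 2D list (the detector that supplies the rectangle, and the block size, are out of scope and illustrative):

```python
def pixelate_region(image, top, left, height, width, block=2):
    """Replace a rectangular face region with block-averaged values.

    image: 2D list of grayscale pixel values, modified in place.
    (top, left, height, width): face rectangle from a detector.
    block: side length of each pixelation block.
    """
    for by in range(top, top + height, block):
        for bx in range(left, left + width, block):
            ys = range(by, min(by + block, top + height))
            xs = range(bx, min(bx + block, left + width))
            vals = [image[y][x] for y in ys for x in xs]
            avg = sum(vals) // len(vals)
            for y in ys:
                for x in xs:
                    image[y][x] = avg  # flatten the block to its mean
    return image

# A 2x2 "face" collapses to a single averaged block.
print(pixelate_region([[0, 2], [4, 6]], 0, 0, 2, 2, block=2))
# [[3, 3], [3, 3]]
```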
ELECTRONIC DEVICE FOR PROVIDING SHOOTING MODE BASED ON VIRTUAL CHARACTER AND OPERATION METHOD THEREOF
An electronic device according to an embodiment may include: a camera module; a display; and a processor, wherein the processor may be configured to: obtain a preview image corresponding to an external object using the camera module; determine attributes of the external object, based on the obtained preview image; synthesize the preview image with a virtual character image, based on the attributes of the external object; and output the synthesized preview image through the display. Other embodiments may be provided.
COMMUNICATION INTERFACE WITH HAPTIC FEEDBACK RESPONSE
A method for generating haptic feedback responses starts with a processor causing a communication interface for a communication session to be displayed on a first user interface and on a second user interface. The processor detects a first touch input on the first user interface. The processor causes the second user interface to display a first indicator element at a location on the second user interface of the second client device corresponding to the location of the first touch input on the first user interface. The first indicator element is displayed for a predetermined period of time. The processor detects a second touch input on the second user interface. In response to determining that the location of the second touch input on the second user interface corresponds to the location of the first indicator element on the second user interface, and that the second touch input is detected within the predetermined period of time, the processor causes the first user interface and the second user interface to generate a haptic feedback response. Other embodiments are described herein.
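The trigger condition combines a spatial match against the indicator and a temporal match against its display window. A sketch with illustrative thresholds (the abstract does not specify a match radius or a window length):

```python
import math

def should_trigger_haptics(indicator_pos, indicator_shown_at,
                           touch_pos, touch_at,
                           window_s=3.0, radius=20.0):
    """Return True when the second touch lands on the indicator and
    arrives within the indicator's display window.

    window_s and radius are assumed values for the "predetermined
    period of time" and the location-correspondence test.
    """
    in_time = 0.0 <= (touch_at - indicator_shown_at) <= window_s
    on_target = math.dist(indicator_pos, touch_pos) <= radius
    return in_time and on_target

# Touch 3 px from the indicator, 1 s after it appeared: feedback fires.
print(should_trigger_haptics((100, 100), 0.0, (103, 100), 1.0))  # True
# Same spot but 5 s later: the window has elapsed.
print(should_trigger_haptics((100, 100), 0.0, (103, 100), 5.0))  # False
```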
MEDIA CONTENT ITEMS WITH HAPTIC FEEDBACK AUGMENTATIONS
A method for generating haptic feedback responses using haptic augmentations starts with a processor receiving a media content item from a first client device and receiving, from the first client device, a selection of a haptic overlay associated with a haptic feedback response to be applied to the media content item. The processor generates a modified media content item by overlaying the haptic overlay on the media content item. The processor receives from the first client device a selection of a second user associated with a second client device and causes the modified media content item to be displayed by a second user interface of the second client device. The processor detects a selection of the haptic overlay from the second client device and, in response to detecting the selection, causes the second user interface to generate the haptic feedback response. Other embodiments are described herein.
Systems and methods for implementing personal camera that adapts to its surroundings, both co-located and remote
A computerized system comprising a processing unit and a memory, the system operating in connection with a real-time video conference stream containing a video of a user, wherein the memory embodies a set of computer-executable instructions that cause the computerized system to perform a method involving: receiving the real-time video conference stream containing the video of the user; detecting a first background in the received real-time video conference stream from the user; and matching the first background with a second background associated with a second user.
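One way to read the matching step is as choosing, for the second user, the candidate background closest to the first user's detected background. A toy sketch using mean-color distance (the mean-color feature and the candidate list are assumptions; real matching would likely use segmentation and richer features):

```python
def match_background(first_bg_color, candidate_bgs):
    """Pick the candidate background whose average RGB color is
    closest to the first user's detected background color.

    first_bg_color: (r, g, b) mean color of the detected background.
    candidate_bgs: list of dicts with hypothetical "name" and
    "avg_color" keys describing the second user's options.
    """
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(candidate_bgs,
               key=lambda bg: sq_dist(bg["avg_color"], first_bg_color))

candidates = [
    {"name": "office", "avg_color": (200, 200, 200)},
    {"name": "beach",  "avg_color": (80, 150, 220)},
]
# A light-gray detected background matches the office candidate.
print(match_background((210, 205, 195), candidates)["name"])  # office
```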
Creative camera
- Marcel Van Os
- Jessica L. Aboukasm
- Jean-Francois M. ALBOUZE
- David R. Black
- Jae Woo CHANG
- Robert M. Chinn
- Gregory L. Dudey
- Katherine K. Ernst
- Aurelio GUZMAN
- Christopher J. Moulios
- Joanna M. Newman
- Grant PAUL
- Nicolas Scapel
- William A. Sorrentino, III
- Brian E. Walsh
- Joseph-Alexander P. Weil
- Christopher WILSON
The present disclosure generally relates to displaying visual effects in image data. In some examples, visual effects include an avatar displayed on a user's face. In some examples, visual effects include stickers applied to image data. In some examples, visual effects include screen effects. In some examples, visual effects are modified based on depth data in the image data.