Patent classifications
H04N2005/2726
Extended reality system
Systems and methods are disclosed for recommending products or services by receiving a three-dimensional (3D) model of one or more products; performing motion tracking, understanding the environment through detected points or planes, and estimating light or color in the environment; and projecting the product into the environment.
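The light-estimation step above can be sketched as follows. This is a rough illustration, not the patent's actual pipeline: `estimate_light` and `tint_model` are hypothetical names, and the mean-color heuristic merely stands in for the per-frame lighting estimation a production AR engine performs.

```python
import numpy as np

def estimate_light(frame):
    """Crude light estimate: mean color and brightness of the camera frame.
    Returns a normalized RGB light color and a scalar brightness in [0, 1]."""
    mean_rgb = frame.reshape(-1, 3).mean(axis=0)
    brightness = mean_rgb.mean() / 255.0
    return mean_rgb / max(mean_rgb.max(), 1e-6), brightness

def tint_model(albedo, light_color, brightness):
    """Shade the product model's albedo texture with the estimated scene
    light so the projected product matches the environment."""
    return np.clip(albedo * light_color * brightness, 0, 255).astype(np.uint8)
```

In practice the estimated light would feed a real renderer rather than a per-pixel tint, but the data flow (camera frame in, shaded product out) is the same.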
Image editing method and electronic device supporting same
An embodiment of the present invention provides an electronic device comprising: a communication circuit; at least one camera device; a memory; a display; and a processor electrically connected to the communication circuit, the at least one camera device, the memory, and the display. The processor is configured to: determine a query on the basis of a user activity related to operation of the electronic device; confirm whether at least one external device connected through the communication circuit, or the memory, comprises an image effect filter corresponding to the query; acquire the image effect filter from the at least one external device or the memory when its presence is confirmed; output an image taken by the at least one camera device through the display; and provide the acquired image effect filter when an event related to editing of the output image occurs. In addition, various other embodiments recognized through the specification are possible.
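The local-first, then-external lookup the abstract describes can be sketched in a few lines. A minimal sketch with invented names (`acquire_filter`, and mappings standing in for the memory and the connected devices), not the claimed implementation:

```python
def acquire_filter(query, local_filters, external_devices):
    """Check the device's own memory first, then each connected external
    device, for an image effect filter matching the query.
    Returns the filter, or None if no source has it."""
    if query in local_filters:
        return local_filters[query]
    for device in external_devices:
        # a connected device is modeled here as a mapping of its filters,
        # as if fetched over the communication circuit
        if query in device:
            return device[query]
    return None
```

The returned filter would then be offered to the user when an image-editing event occurs.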
Method and apparatus for generating video file, and storage medium
A method and a device for generating a composite video, and a storage medium, are disclosed in embodiments of this disclosure. The method includes: obtaining pre-recorded source video frames; determining a first set and a second set of video rendering parameters; displaying the pre-recorded source video frames with a first image size at a first image position on a display screen of the terminal device according to the first set of video rendering parameters; capturing real-time video frames using an image acquisition component of the terminal device, in synchronization with and in response to the content of the displayed pre-recorded source video frames; displaying the real-time video frames with a second image size at a second image position on the display screen of the terminal device according to the second set of video rendering parameters; and generating a composite video with each video frame comprising a first corresponding frame from the pre-recorded source video frames and a second corresponding frame from the real-time video frames based on the first image size, the second image size, the first image position, and the second image position.
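The per-frame compositing step (place each source at its rendering parameters' position and size) can be sketched as below. This is a simplified illustration, assuming frames are NumPy arrays; `composite` and `resize_nn` are invented names, and nearest-neighbour resizing stands in for a real scaler:

```python
import numpy as np

def resize_nn(frame, size):
    """Nearest-neighbour resize to (h, w); a stand-in for a real scaler."""
    h, w = size
    ys = np.arange(h) * frame.shape[0] // h
    xs = np.arange(w) * frame.shape[1] // w
    return frame[ys][:, xs]

def composite(source, live, canvas_hw,
              src_pos, src_size, live_pos, live_size):
    """Place the pre-recorded frame and the real-time frame on one canvas
    according to their rendering parameters (position and size)."""
    canvas = np.zeros((*canvas_hw, 3), dtype=source.dtype)
    for frame, (y, x), size in ((source, src_pos, src_size),
                                (live, live_pos, live_size)):
        h, w = size
        canvas[y:y + h, x:x + w] = resize_nn(frame, size)
    return canvas
```

Running this once per frame pair, with the real-time frame captured in sync with the displayed source frame, yields the composite video the abstract describes.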
Creative camera
The present disclosure generally relates to displaying visual effects in image data. In some examples, visual effects include an avatar displayed on a user's face. In some examples, visual effects include stickers applied to image data. In some examples, visual effects include screen effects. In some examples, visual effects are modified based on depth data in the image data.
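One of the effects above, modifying a visual effect based on depth data, can be illustrated with a toy depth-gated screen effect. A sketch only: `apply_depth_effect`, the threshold gating, and the 0.3 dimming factor are all invented for illustration, not the disclosed method:

```python
import numpy as np

def apply_depth_effect(image, depth, threshold):
    """Dim pixels whose depth exceeds `threshold`, so the effect applies
    only to the background while the near subject is left untouched."""
    out = image.astype(np.float32)
    far = depth > threshold          # boolean mask from the depth data
    out[far] *= 0.3                  # arbitrary dimming factor for the demo
    return out.astype(np.uint8)
```

Real implementations would composite avatars, stickers, or screen effects using the same depth mask rather than simply dimming.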
ENHANCED HYBRID ANIMATION
Systems and methods are described for applying a unifying visual effect, such as posterization, to all or most of the visual elements in a film. In one implementation, a posterization standard includes a line work standard, a color palette, a plurality of color blocks characterized by one or more hard edges, and a gradient transition associated with each of the hard edges. The visual elements, including live actors and set pieces, are prepared in accordance with the posterization standard. The actors are filmed performing live among the set pieces. The live-action segments can be composited with digital elements. The result is a combination of both real and stylized elements, captured simultaneously, to produce an enhanced hybrid of live action and animation.
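The core of a posterization pass, quantizing each channel to a small palette so that hard-edged color blocks emerge, can be sketched as follows. This covers only the color-quantization aspect of the described standard (not line work or gradient transitions), and `posterize` is a hypothetical name:

```python
import numpy as np

def posterize(image, levels=4):
    """Quantize each channel to `levels` evenly spaced values, producing
    the hard-edged color blocks a posterization standard describes."""
    step = 256 // levels
    # snap every pixel to the midpoint of its quantization bucket
    return (image // step) * step + step // 2
```

A full pipeline would add edge (line-work) extraction and controlled gradients at the block boundaries on top of this quantization.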
Electronic device and method for providing augmented reality object therefor
Various embodiments of the present disclosure relate to an electronic device and a method of providing an augmented reality object thereof. The electronic device includes: a touchscreen; a first camera capturing a first image; a second camera capturing a second image; a processor operatively coupled with the touchscreen, the first camera, and the second camera; and a memory operatively coupled with the processor, wherein the memory stores instructions, when executed, causing the processor to: display the first image captured via the first camera on the touchscreen; receive a user input for adding at least one augmented reality object having at least one reflective surface on the first image; acquire the second image via the second camera in response to the user input; identify an angle of the reflective surface; and perform perspective transformation on the second image on the basis of the identified angle, and apply at least part of the perspective-transformed second image to each reflective surface of the augmented reality object. Other various embodiments are possible.
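The perspective transformation driven by the identified surface angle can be sketched with a simple pinhole-model homography. An illustrative sketch, not the patented method: `perspective_matrix` and `warp_point` are invented names, coordinates are normalized, and the focal length defaults to 1.

```python
import numpy as np

def perspective_matrix(angle_deg, focal=1.0):
    """Homography for rotating the image plane about its vertical axis by
    the identified surface angle, then projecting back (pinhole model)."""
    a = np.radians(angle_deg)
    return np.array([[np.cos(a),           0.0, 0.0],
                     [0.0,                 1.0, 0.0],
                     [-np.sin(a) / focal,  0.0, 1.0]])

def warp_point(H, x, y):
    """Apply the homography to one point (homogeneous coordinates)."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]
```

Warping the second camera's image with such a matrix, one matrix per reflective surface, foreshortens it consistently with the surface's angle before it is pasted onto the augmented reality object.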
SMART DEVICE WITH SWITCH TO ENABLE PRIVACY FEATURE
The present technology discloses a security camera that includes a housing and a hardware switch coupled to the housing and an audio component of the security camera. The hardware switch has an ON position and an OFF position. When the hardware switch is in an ON position, the audio component of the camera is operational. When the hardware switch is in an OFF position, the audio component of the camera is non-operational.
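The switch-gated behavior can be modeled in a few lines. This is only a software model of the described hardware gating; `SecurityCamera` and `Switch` are invented names:

```python
from enum import Enum

class Switch(Enum):
    ON = "on"
    OFF = "off"

class SecurityCamera:
    """Model of the privacy feature: the audio component is operational
    only while the hardware switch is in the ON position."""
    def __init__(self):
        self.switch = Switch.OFF     # switch defaults to privacy (OFF)

    def audio_operational(self):
        return self.switch is Switch.ON
```

In the actual device the gating is physical (the hardware switch interrupts the audio component), so it cannot be overridden in software.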
METHOD AND SYSTEM FOR DEEP LEARNING BASED FACE SWAPPING WITH MULTIPLE ENCODERS
A computer-implemented method of changing a face within an output image or video frame includes: receiving an input image that includes a face presenting a facial expression in a pose; separately encoding different portions of the image by, for each separately encoded portion, generating a latent space point of the portion, thereby generating a plurality of multi-dimensional vectors where each multi-dimensional vector is an encoded representation of a different portion of the input image; concatenating the plurality of multi-dimensional vectors into a combined latent space vector; and decoding the combined latent space vector to generate the output image in accordance with a desired facial identity but with the facial expression and pose of the face in the input image.
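The encode-separately, concatenate, decode pipeline can be sketched with toy linear encoders. A minimal sketch under stated assumptions: random linear maps stand in for trained CNN encoders and the identity-specific decoder, and all function names are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_encoder(in_dim, latent_dim):
    """Toy linear encoder standing in for a trained CNN encoder."""
    W = rng.standard_normal((latent_dim, in_dim)) * 0.01
    return lambda x: W @ x.ravel()

def encode_and_combine(portions, encoders):
    """Encode each image portion separately, then concatenate the latent
    points into one combined latent space vector."""
    return np.concatenate([enc(p) for p, enc in zip(portions, encoders)])

def make_decoder(latent_dim, out_shape):
    """Identity-specific decoder: combined latent vector -> output image."""
    W = rng.standard_normal((int(np.prod(out_shape)), latent_dim)) * 0.01
    return lambda z: (W @ z).reshape(out_shape)
```

Because each decoder is trained for one facial identity, feeding it a combined latent vector encoded from a different person's portions yields that identity with the source's expression and pose.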
Methods, systems, and devices for presenting obscured subject compensation content in a videoconference
A conferencing system terminal device includes an image capture device capturing images of a subject during a videoconference occurring across a network and a communication device transmitting the images to at least one remote electronic device engaged in the videoconference. The conferencing system terminal device includes one or more sensors determining that one or more portions of the subject are obscured in the images and one or more processors. The one or more processors apply obscured subject compensation content to the images at locations where the one or more portions of the subject are obscured, during the videoconference, and prior to the communication device transmitting the images to the at least one remote electronic device engaged in the videoconference.