Patent classifications
H04N5/275
CREATIVE CAMERA
- Marcel Van Os,
- Jessica ABOUKASM,
- Jean-Francois ALBOUZE,
- David R. Black,
- Jae Woo CHANG,
- Robert M. Chinn,
- Gregory L. DUDEY,
- Katherine K. Ernst,
- Aurelio GUZMAN,
- Christopher J. Moulios,
- Joanna M. Newman,
- Grant PAUL,
- Nicolas Scapel,
- William A. Sorrentino, III,
- Brian WALSH,
- Joseph-Alexander P. Weil,
- Christopher WILSON
The present disclosure generally relates to displaying visual effects in image data. In some examples, visual effects include an avatar displayed on a user's face. In some examples, visual effects include stickers applied to image data. In some examples, visual effects include screen effects. In some examples, visual effects are modified based on depth data in the image data.
Electronic apparatus and controlling method thereof
The disclosure provides an electronic apparatus and a controlling method thereof. The electronic apparatus of the disclosure includes a sensor, a camera, and a processor configured to control the electronic apparatus to: obtain information regarding a predetermined object among a plurality of objects included in an imaging region of the camera through the sensor, identify an object region corresponding to the predetermined object from the imaging region of the camera based on the obtained information, and based on the imaging region being captured through the camera, control the camera to not capture the object region.
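The exclusion step described above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation: it assumes the identified object region is a rectangular bounding box and that "not capturing" the region means filling its pixels with a blank value. The function name and frame layout are hypothetical.

```python
# Hedged sketch: blank out an identified object region so that it is
# excluded from the captured frame. Bounding-box representation and the
# fill value are illustrative assumptions.

def exclude_region(frame, box, fill=0):
    """Replace pixels inside `box` (top, left, bottom, right) with `fill`."""
    top, left, bottom, right = box
    return [[fill if top <= r < bottom and left <= c < right else px
             for c, px in enumerate(row)]
            for r, row in enumerate(frame)]

# Toy 3x3 grayscale frame; the object region covers rows 0-1, columns 1-2.
frame = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
captured = exclude_region(frame, (0, 1, 2, 3))
```

The original frame is left untouched; only the returned copy has the region blanked, which matches the abstract's framing of controlling what the camera captures rather than editing stored images.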
Image generation method and image synthesis method
There is provided an image generation method and an image synthesis method each of which can achieve an entire image having excellent image quality. An image generation method according to an embodiment of the present invention includes: arranging an image taking apparatus, a first polarizing plate, an object, and a second polarizing plate in the stated order; arranging a retardation plate between the first polarizing plate and the second polarizing plate; and monochromatizing a color of the second polarizing plate recognized by the image taking apparatus to a color complementary to that of the object through use of the retardation plate.
METHOD AND SYSTEM FOR DYNAMIC IMAGE CONTENT REPLACEMENT IN A VIDEO STREAM
The present invention relates to a method for dynamic image content replacement in a video stream comprising generating a set of key image data (K) comprising a sequence of at least two different key images (K1, K2), periodically displaying said set of key image data (K) on a physical display, generating at least a first original video stream (O1) of a scene which includes said physical display by recording said scene with a camera, wherein said at least one video stream (O1) comprises key video frames (FK1, FK2) captured synchronously with displaying each of said at least two different key images (K1, K2) of said set of key image data (K) on said physical display, generating a mask area (MA), corresponding to an active area of said physical display visible in said key video frames, from differential images (ΔFK) obtained from consecutive key video frames (FK1, FK2), generating at least one alternative video stream (V) by inserting alternative image content (I) into the mask area (MA) of an original video stream, and broadcasting said at least one alternative video stream.
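The mask-generation step above (a differential image from two consecutive key video frames) can be sketched as follows. This is an illustrative reading of the abstract, not the patented method: the threshold, grayscale frames, and function names are assumptions.

```python
# Hedged sketch: derive a mask area from two consecutive key video frames,
# then insert alternative content into that area. Pixels that change between
# the two key images belong to the display's active area; static scene
# pixels do not. Threshold and frame layout are assumptions.

THRESHOLD = 50  # minimum per-pixel difference to count as display area

def mask_from_key_frames(frame_k1, frame_k2, threshold=THRESHOLD):
    """Differential image thresholded into a boolean mask area."""
    return [[abs(a - b) > threshold for a, b in zip(row1, row2)]
            for row1, row2 in zip(frame_k1, frame_k2)]

def insert_alternative_content(original, mask, alternative):
    """Replace masked pixels of an original frame with alternative content."""
    return [[alt if m else orig
             for orig, m, alt in zip(o_row, m_row, a_row)]
            for o_row, m_row, a_row in zip(original, mask, alternative)]

# Toy 2x4 grayscale frames: the physical display occupies the right half,
# showing key image K1 (bright) then K2 (dark); the scene is static.
fk1 = [[10, 10, 200, 200],
       [10, 10, 200, 200]]
fk2 = [[10, 10,  30,  30],
       [10, 10,  30,  30]]
mask = mask_from_key_frames(fk1, fk2)
out = insert_alternative_content(fk1, mask, [[99] * 4, [99] * 4])
```

Because the scene outside the display is identical in both key frames, the differential image is zero there, so only the display's active area survives the threshold.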
LEARNING-BASED SAMPLING FOR IMAGE MATTING
A method of generating a training data set for training an image matting machine learning model includes receiving a plurality of foreground images, generating a plurality of composited foreground images by compositing randomly selected foreground images from the plurality of foreground images, and generating a plurality of training images by compositing each composited foreground image with a randomly selected background image. The training data set includes the plurality of training images.
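The two compositing stages above can be sketched with the standard alpha over-operator. This is a hedged illustration of the construction, not the patent's pipeline: the patent selects foregrounds and backgrounds randomly, while this sketch fixes them for determinism; single-channel unit-range pixels and the helper names are assumptions.

```python
# Hedged sketch: stack two foreground layers, then composite the result
# over a background to form one training image. Over-operator formula and
# data layout are illustrative assumptions.

def composite(fg, fg_alpha, bg):
    """Per-pixel over-operator: out = a * fg + (1 - a) * bg."""
    return [a * f + (1 - a) * b for f, a, b in zip(fg, fg_alpha, bg)]

def stacked_alpha(a1, a2):
    """Alpha of two stacked foregrounds: a = a1 + (1 - a1) * a2."""
    return [x + (1 - x) * y for x, y in zip(a1, a2)]

# One-pixel toy foregrounds (color, alpha); the patent picks these randomly.
f1, a1 = [0.8], [0.5]   # semi-transparent top foreground
f2, a2 = [0.2], [1.0]   # opaque lower foreground

fg = composite(f1, a1, f2)                 # composited foreground image
alpha = stacked_alpha(a1, a2)              # its combined alpha matte
train = composite(fg, alpha, [0.3])        # training image over a background
```

The combined alpha matte produced alongside each composited foreground is what makes the training pairs useful for supervising a matting model.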
Smart color-based background replacement
A streaming application setup assistant may receive an image captured by a camera, the image capturing a physical environment that is to be part of a live video stream. A plurality of pixels may be selected from the image. The plurality of pixels may be grouped, based on a pixel color value for each pixel, into a plurality of pixel groups. A key pixel color value may be calculated that is associated with an average pixel color value of pixels in a largest pixel group of the plurality of pixel groups. A similarity color range that encompasses a threshold percentage of the pixels in the largest pixel group may be identified based on the key pixel color value. The similarity color range may be utilized to configure settings for replacement of the background in the live video stream.
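The assistant's flow above (group sampled pixels by color, average the largest group into a key color, then derive a similarity range covering a threshold share of that group) can be sketched as follows. Grouping by coarse quantization, single-channel pixels, the 90% coverage figure, and the function name are all assumptions for illustration.

```python
# Hedged sketch of smart color-based background setup: bucket sampled
# pixels by color, take the largest bucket as the background, average it
# into a key color, and grow a similarity range around that key color
# until it covers a threshold share of the bucket's pixels.
from collections import defaultdict

def key_color_and_range(pixels, bucket=32, coverage=0.9):
    # Group sampled pixels into coarse color buckets.
    groups = defaultdict(list)
    for p in pixels:
        groups[p // bucket].append(p)
    largest = max(groups.values(), key=len)
    key = sum(largest) / len(largest)          # key pixel color value
    # Expand the range until it covers `coverage` of the largest group.
    diffs = sorted(abs(p - key) for p in largest)
    radius = diffs[min(len(diffs) - 1, int(coverage * len(diffs)))]
    return key, (key - radius, key + radius)

# Toy single-channel sample: mostly green-screen values near 100,
# plus two outliers (a dark shadow and a bright highlight).
sample = [96, 98, 100, 102, 104, 10, 250]
key, (lo, hi) = key_color_and_range(sample)
```

The coverage percentile (rather than the full min-max spread of the group) keeps stray outliers from inflating the similarity range, which matters when the range later drives background replacement in the live stream.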
CHAIR AND CHROMA KEY PHOTOGRAPHY BACKDROP ASSEMBLY THEREOF
A chair and chroma key photography backdrop assembly leverages the effect of chroma key photography with a collapsible chroma key panel integrated directly into a chair on which an actor sits during broadcast or video conference. The chair has at least one accessory retention arm that forms a cavity sized and dimensioned to retain broadcast equipment, including cameras and sound systems. The chair also has a shock absorber to absorb shock impulses from the actor who is sitting on the chair. Chroma key photography software in the cameras separates the actor from the solid color of the chroma key panel and replaces the panel's solid color background with a virtual image or video, such that the actor appears to be in front of the virtual image or video. By deploying the chroma key panel from the chair, stowage, mobility, and efficiency are enhanced during video production.
MULTI-SPECTRAL VOLUMETRIC CAPTURE
Video capture of a subject, including: a first IR camera, a second IR camera, and a color camera, for capturing video data of the subject; a post, where the first IR camera, the second IR camera, and the color camera are attached to the post, and where the color camera is positioned between the first IR camera and the second IR camera; at least one IR light source, for illuminating the subject; and a processor configured to: generate depth solve data for the subject using data from the first IR camera and the second IR camera; generate projected color data by using data from the color camera to project color onto the depth solve data; and generate final capture data by merging the depth solve data and the projected color data.
System and method for temporal keying in a camera
A system is provided for capturing a key signal within video frames that includes a camera that captures a sequence of media content of a live scene that includes an electronic display having a higher frame rate than an output frame rate of the camera, and a key signal processor that converts all frames in the sequence of media content to the output frame rate of the camera, analyzes a sequence of frames to determine the key signal based on the electronic display outputting a sequence of frames including media content and at least one key frame included in the sequence, and combines remaining frames of the sequence of frames to create a live output signal. Moreover, the key signal processor determines, for each pixel in the frames, whether the pixel has a set chromaticity, and generates a key mask for each pixel in each frame.
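The per-pixel chromaticity test that produces the key mask can be sketched as below. This is a minimal illustration under stated assumptions: the key chromaticity (pure green), the tolerance, and the function name are not values from the patent.

```python
# Hedged sketch of the key-mask step: flag every pixel whose color lies
# within a tolerance of a set key chromaticity. Key color and tolerance
# are illustrative assumptions.

KEY_RGB = (0, 255, 0)   # assumed key chromaticity (pure green)
TOLERANCE = 30           # assumed per-channel tolerance

def key_mask(frame, key=KEY_RGB, tol=TOLERANCE):
    """Return True for pixels within `tol` of the key color on every channel."""
    return [[all(abs(c - k) <= tol for c, k in zip(px, key)) for px in row]
            for row in frame]

# Toy 2x2 RGB frame: left column is near-key green, right column is not.
frame = [[(0, 250, 10), (200, 30, 40)],
         [(5, 255, 0), (90, 90, 90)]]
mask = key_mask(frame)
```

In the system described above, this mask would be computed on the key frames interleaved by the high-frame-rate display, then applied when combining the remaining frames into the live output signal.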