Patent classifications
H04N13/282
Control apparatus and control method for same
A control apparatus controls a virtual camera according to a user operation related to operating the virtual camera. When the control apparatus accepts the user operation, it determines whether or not to restrict movement of the virtual camera according to the accepted operation, depending on whether or not a predetermined condition for the virtual camera is fulfilled.
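The accept-then-decide flow described above can be sketched as follows. This is an illustrative reading, not the patented method: the "predetermined condition" is assumed here to be that the camera stays inside an allowed region, and all names are hypothetical.

```python
# Hypothetical sketch: a user operation that would move the virtual camera
# is applied only when a predetermined condition (here, remaining inside
# an allowed bounding region) is fulfilled; otherwise the move is restricted.
from dataclasses import dataclass

@dataclass
class VirtualCamera:
    x: float = 0.0
    y: float = 0.0

def handle_user_operation(camera, dx, dy, bounds=10.0):
    """Move the camera by (dx, dy) unless the move would violate the
    condition (leaving the allowed bounds). Returns True if applied."""
    nx, ny = camera.x + dx, camera.y + dy
    if abs(nx) > bounds or abs(ny) > bounds:  # condition not fulfilled
        return False                          # restrict the movement
    camera.x, camera.y = nx, ny
    return True
```

An in-bounds operation moves the camera; an out-of-bounds one is rejected and the camera pose is left unchanged.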
Image processing
Apparatus comprises a camera configured to capture images of a user in a scene; a depth detector configured to capture depth representations of the scene, the depth detector comprising an emitter configured to emit a non-visible signal; a mirror arranged to reflect at least some of the non-visible signal emitted by the emitter to one or more features within the scene that would otherwise be occluded by the user and to reflect light from the one or more features to the camera; a pose detector configured to detect a position and orientation of the mirror relative to at least one of the camera and depth detector; and a scene generator configured to generate a three-dimensional representation of the scene in dependence on the images captured by the camera and the depth representations captured by the depth detector and the pose of the mirror detected by the pose detector.
NON-RIGID STEREO VISION CAMERA SYSTEM
A long-baseline, long-depth-range stereo vision system is provided that is suitable for use in non-rigid assemblies, where relative motion between two or more cameras of the system does not degrade estimates of a depth map. The stereo vision system may include a processor that tracks camera parameters as a function of time to rectify images from the cameras even during fast and slow perturbations to camera positions. Neither factory calibration nor manual calibration during regular operation is needed, simplifying manufacture of the system.
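Tracking a camera parameter as a function of time, as described above, can be sketched with a simple recursive filter. This is an illustrative assumption, not the patented estimator: a single relative-rotation angle is smoothed with an exponential moving average, whereas a real system would estimate the full relative pose from feature correspondences before rectification.

```python
# Illustrative sketch: continuously track a camera parameter over time so
# rectification stays valid under both fast and slow perturbations.
# Here one scalar (e.g. a relative-rotation angle) is tracked with an
# exponential moving average; the smoothing factor trades responsiveness
# to fast perturbations against noise rejection.
class ParameterTracker:
    def __init__(self, alpha=0.5):
        self.alpha = alpha      # higher alpha reacts faster to change
        self.estimate = None

    def update(self, measurement):
        """Fold a new per-frame measurement into the running estimate."""
        if self.estimate is None:
            self.estimate = measurement
        else:
            self.estimate = (self.alpha * measurement
                             + (1 - self.alpha) * self.estimate)
        return self.estimate
```

Because the estimate is refreshed every frame, no one-time factory or manual calibration step is assumed.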
File generation apparatus, image generation apparatus based on file, file generation method and storage medium
A file generation apparatus generates a file which includes material data used to generate a virtual viewpoint image based on a multi-viewpoint image, together with type information specifying the type of the material data, and outputs the generated file.
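The pairing of material data with type information can be sketched as a simple container format. The type names, JSON encoding, and function names below are illustrative assumptions, not from the patent; the point is only that each block of material data carries a type tag a reader can select on.

```python
# Hypothetical sketch of a file pairing material data with type
# information. Each entry stores a type string and a payload, so a reader
# can pick out the material data it needs by type.
import json

MATERIAL_TYPES = {"point_cloud", "texture", "foreground_mask"}  # assumed set

def generate_file(materials):
    """Serialize (type, data) pairs into one file-like blob."""
    entries = []
    for mtype, data in materials:
        if mtype not in MATERIAL_TYPES:
            raise ValueError(f"unknown material type: {mtype}")
        entries.append({"type": mtype, "data": data})
    return json.dumps({"materials": entries})

def read_materials_of_type(blob, mtype):
    """Return all material data blocks tagged with the given type."""
    doc = json.loads(blob)
    return [e["data"] for e in doc["materials"] if e["type"] == mtype]
```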
MULTI-VIEW NEURAL HUMAN RENDERING
An image-based method of modeling and rendering a three-dimensional model of an object is provided. The method comprises: obtaining a three-dimensional point cloud at each frame of a synchronized, multi-view video of an object, wherein the video comprises a plurality of frames; extracting a feature descriptor for each point in the point cloud for the plurality of frames without storing the feature descriptor for each frame; producing a two-dimensional feature map for a target camera; and using an anti-aliased convolutional neural network to decode the feature map into an image and a foreground mask.
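One stage of the pipeline above, producing a two-dimensional feature map for a target camera from per-point descriptors, can be sketched as a depth-tested splat. The pinhole projection, grid size, and nearest-point rule here are illustrative assumptions; the actual method then decodes such a map with an anti-aliased convolutional neural network, which is not shown.

```python
# Simplified sketch: splat per-point feature descriptors into a 2D feature
# map for a target camera, keeping the nearest point per pixel.
def project_features(points, features, width, height, focal=1.0):
    """points: (x, y, z) tuples in camera coordinates (z > 0 is in front);
    features: one descriptor (list of floats) per point.
    Returns a height x width grid of descriptors (None where empty)."""
    fmap = [[None] * width for _ in range(height)]
    depth = [[float("inf")] * width for _ in range(height)]
    for (x, y, z), f in zip(points, features):
        if z <= 0:
            continue  # behind the camera
        u = int(focal * x / z + width / 2)   # pinhole projection
        v = int(focal * y / z + height / 2)
        if 0 <= u < width and 0 <= v < height and z < depth[v][u]:
            depth[v][u] = z                  # nearest point wins the pixel
            fmap[v][u] = f
    return fmap
```

The depth test gives the occlusion handling a feature map needs before decoding: when two points land on the same pixel, the descriptor of the nearer one is kept.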
IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM
A setting reception unit obtains information identifying an object that a user has selected from among the foreground objects as a target to be made part of the background. A backgrounded-target determination unit identifies the model ID of the selected object based on the obtained object-identifying information and three-dimensional shape data. Based on the three-dimensional shape data, the determination unit identifies the foreground ID corresponding to the identified model ID in a captured image from an actual camera. The determination unit obtains coordinate information and mask information from the foreground data corresponding to the identified foreground ID, generates a correction foreground mask, and sends the mask to a background correction unit in an image processing unit. The background correction unit generates a correction image by masking the captured image with the mask, superimposes the correction image onto the background image, and outputs the result as a corrected background image.
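The final correction step, masking the captured image and superimposing the result onto the background, can be sketched as below. Images are plain 2D lists of pixel values for illustration, and the function name is a hypothetical stand-in for the background correction unit.

```python
# Minimal sketch of the background correction step: cut the selected
# object out of the captured image using its correction foreground mask,
# then superimpose that cut-out onto the background image.
def correct_background(captured, background, mask):
    """Where mask is 1, take the captured pixel (the object being moved
    into the background); elsewhere keep the original background pixel."""
    return [
        [cap if m else bg
         for cap, bg, m in zip(cap_row, bg_row, m_row)]
        for cap_row, bg_row, m_row in zip(captured, background, mask)
    ]
```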