Patent classification: H04N2213/003
CAMERA MODULE
A camera module according to the present embodiment comprises: a light-emitting unit for outputting light to an object; a filter allowing the light to pass therethrough; at least one lens disposed on the filter and condensing the light reflected from the object; a sensor including a plurality of pixels aligned in an array and generating an electrical signal from the light condensed by the lens; and a tilt unit for tilting the filter such that an optical path of the light having passed through the filter is repeatedly moved according to a predetermined rule, wherein the tilt unit includes: a tilting driver for generating an output signal synchronized with an exposure cycle of the sensor on the basis of a trigger signal input from the sensor; and a tilting actuator for tilting the filter diagonally according to the output signal.
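The tilt unit's behaviour can be pictured as a small control loop: each sensor trigger advances the filter through a repeating schedule of diagonal offsets. The sketch below is illustrative only; the class, method names, and the specific half-pixel four-position pattern (a common super-resolution rule) are assumptions, not taken from the claims.

```python
# Hypothetical sketch of the tilting driver: on each sensor exposure
# trigger it advances a predetermined, repeating rule of diagonal
# filter offsets (values in pixels; the schedule is an assumption).
from itertools import cycle

TILT_SCHEDULE = [(0.0, 0.0), (0.5, 0.0), (0.5, 0.5), (0.0, 0.5)]

class TiltingDriver:
    def __init__(self, schedule=TILT_SCHEDULE):
        self._positions = cycle(schedule)

    def on_trigger(self):
        """Called once per sensor exposure; returns the next filter tilt."""
        return next(self._positions)

driver = TiltingDriver()
offsets = [driver.on_trigger() for _ in range(8)]  # two full cycles
```

Because the schedule is a cycle, the optical path returns to its starting position after every fourth exposure, which is what lets the sensor accumulate sub-pixel-shifted frames under a fixed rule.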
Methods and systems for producing content in multiple reality environments
This disclosure describes methods and systems that allow filmmakers to port their filmmaking and editing skills to produce content for other environments, such as video game, augmented reality, virtual reality, mixed reality, and non-linear storytelling environments.
Method for generating layered depth data of a scene
The invention relates to layered depth data. Multi-view images contain a large amount of redundancy between images. The Layered Depth Video (LDV) format is a well-known solution for formatting multi-view images that reduces the amount of redundant information between images. In LDV, a reference central image is selected, and information contributed by the other images of the multi-view set, mainly regions occluded in the central image, is provided. However, the LDV format contains a single horizontal occlusion layer and thus fails to render viewpoints that uncover multiple layers of dis-occlusions. The invention uses light field content, which offers disparities in every direction and enables a change in viewpoint in a plurality of directions distinct from the viewing direction of the considered image, making it possible to render viewpoints that may uncover multiple layers of dis-occlusions, as may occur with complex scenes viewed with a wide inter-axial distance.
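The multi-layer idea can be shown with a toy per-pixel structure: each pixel keeps a front-to-back list of (depth, color) samples, so a viewpoint shift that dis-occludes the front surface can still recover color from a deeper layer. This is a minimal sketch of the concept, not the claimed format.

```python
# Toy layered depth pixel: samples are stored front-to-back, so when a
# viewpoint change dis-occludes the front surface, rendering falls back
# to the next layer instead of leaving a hole (sketch only).

def visible_color(layers, front_disoccluded=False):
    """layers: list of (depth, color) samples sorted front-to-back."""
    usable = layers[1:] if front_disoccluded and len(layers) > 1 else layers
    return usable[0][1]

pixel = [(1.0, "foreground"), (3.5, "background")]
front = visible_color(pixel)                           # "foreground"
behind = visible_color(pixel, front_disoccluded=True)  # "background"
```

A single-layer format like LDV's one horizontal occlusion layer would have nothing to fall back to when a second surface is uncovered; storing multiple layers per pixel is what closes that gap.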
SYSTEMS AND METHODS FOR DETERMINING THREE DIMENSIONAL MEASUREMENTS IN TELEMEDICINE APPLICATION
A system and method for measuring a depth or length of an area of interest of a telemedicine patient, comprising: a first image capturing device that captures a two-dimensional (2D) image or video of a region of interest of a patient; a second image capturing device that generates a three-dimensional (3D) point cloud of the region of interest of the patient; a rendering system that processes a unified view for both the first and second image capturing devices where the 2D image and 3D point cloud are generated and registered; and a remote measurement processing system that determines a depth or length between two points selected from the 2D image of the region of interest by identifying associated points in the 3D point cloud and performing a measurement using the identified associated points in the 3D point cloud.
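The remote measurement step reduces to looking up the two clinician-selected 2D pixels in the registered 3D point cloud and taking the Euclidean distance between the associated 3D points. A minimal sketch, assuming the 2D-to-3D registration is exposed as a simple pixel-to-point mapping (the dict here stands in for that mapping):

```python
# Sketch of the measurement: map two selected 2D pixel coordinates to
# their registered 3D points and return the straight-line distance.
# The dict-based "point cloud" is an illustration of the registration,
# not the patent's actual data structure.
import math

def measure(point_cloud, p1, p2):
    """point_cloud maps 2D pixel coords -> (x, y, z) in metres."""
    a, b = point_cloud[p1], point_cloud[p2]
    return math.dist(a, b)

cloud = {(10, 20): (0.00, 0.00, 0.50), (40, 20): (0.03, 0.04, 0.50)}
length = measure(cloud, (10, 20), (40, 20))  # 0.05 m
```

Measuring in the 3D point cloud rather than in the 2D image is what makes the result a physical length, independent of camera perspective and zoom.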
CMOS IMAGE SENSOR FOR RGB IMAGING AND DEPTH MEASUREMENT WITH LASER SHEET SCAN
An imaging unit includes a light source and a pixel array. The light source projects a line of light that is scanned in a first direction across a field of view of the light source. The line of light is oriented in a second direction that is substantially perpendicular to the first direction. The pixel array is arranged in at least one row of pixels that extends in a direction that is substantially parallel to the second direction. At least one pixel in a row is capable of generating two-dimensional color information of an object in the field of view based on a first light reflected from the object and is capable of generating three-dimensional (3D) depth information of the object based on the line of light reflecting from the object. The 3D-depth information includes time-of-flight information.
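Since the 3D depth information is time-of-flight based, the depth at a pixel follows from the round-trip travel time of the reflected laser line: d = c · t / 2. A small sketch of that conversion (the function name and the example timing are illustrative, not from the abstract):

```python
# Time-of-flight depth conversion: the laser line's round trip covers
# twice the object distance, so depth is half the distance light
# travels in the measured interval.
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_depth(round_trip_seconds):
    """Depth in metres from a measured round-trip time in seconds."""
    return C * round_trip_seconds / 2.0

# A round trip of about 6.67 ns corresponds to roughly 1 m of depth.
depth_m = tof_depth(6.671e-9)
```

The factor of two is the usual ToF subtlety: the measured interval covers the path to the object and back, so only half of it corresponds to depth.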
Generating output video from video streams
A system and method are provided for generating an output video, such as a video panorama, from a plurality of video streams representing different recordings of a scene. The plurality of video streams may be analyzed to identify at least one part of at least one of the plurality of video streams which is to be used in the output video, thereby identifying a contributing part of a video stream. Orchestration metadata may be generated identifying the contributing part. The orchestration metadata may be provided to a stream source from which the video stream originated to enable the stream source to selectively stream the contributing part of the video stream. Effectively, a selection of the stream's video data may be made to avoid or reduce unnecessary bandwidth usage.
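The orchestration flow can be sketched in a few lines: analysis ranks the candidate spans, and the winning stream is sent metadata naming only its contributing part, so that is all it streams. The field names and the quality-score selection rule below are assumptions for illustration, not the patent's metadata format.

```python
# Sketch of orchestration-metadata generation: pick the stream part
# that contributes to the output video and describe it so the source
# can stream just that span (field names are illustrative).
from dataclasses import dataclass

@dataclass
class OrchestrationMetadata:
    stream_id: str
    start_s: float  # start of the contributing part, seconds
    end_s: float    # end of the contributing part, seconds

def select_contribution(analysis):
    """analysis: {stream_id: (start_s, end_s, quality)}; keep the best."""
    best = max(analysis, key=lambda sid: analysis[sid][2])
    start, end, _ = analysis[best]
    return OrchestrationMetadata(best, start, end)

meta = select_contribution({"cam-a": (0.0, 4.0, 0.7),
                            "cam-b": (1.0, 5.0, 0.9)})
```

The bandwidth saving comes from the direction of the metadata: it travels back to the stream source, which can then withhold the non-contributing video data entirely instead of uploading everything for server-side cropping.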
Method and apparatus for deriving VR projection, packing, ROI and viewport related tracks in ISOBMFF and supporting viewport roll signaling
A video processing method includes receiving a virtual reality (VR) content, obtaining a picture from the VR content, encoding the picture to generate a part of a coded bitstream, and encapsulating the part of the coded bitstream into ISO Base Media File Format (ISOBMFF) file(s). In one exemplary implementation, the ISOBMFF file(s) may include a transform property item that is set to enable at least one of a projection transformation, a packing transformation, a VR viewport selection, and a VR region of interest (ROI) selection in track derivation. In another exemplary implementation, the ISOBMFF file(s) may include a first parameter, a second parameter, and a third parameter associated with orientation of a viewport, with the first, second and third parameters indicating a yaw angle, a pitch angle and a roll angle of a center of the viewport, respectively. Further, an associated video processing apparatus is provided.
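The viewport-orientation signalling boils down to three angles, yaw, pitch, and roll of the viewport centre, carried as file-format parameters. The sketch below round-trips them through a fixed-point encoding; the 2^-16-degree units are borrowed from OMAF-style signalling as an assumption, since the abstract does not state the precision.

```python
# Sketch of viewport-orientation parameters: yaw, pitch and roll of the
# viewport centre, packed as fixed-point integers in units of 2^-16
# degrees (that scaling is an assumption, not stated in the abstract).
SCALE = 1 << 16

def pack_viewport(yaw_deg, pitch_deg, roll_deg):
    return (round(yaw_deg * SCALE),
            round(pitch_deg * SCALE),
            round(roll_deg * SCALE))

def unpack_viewport(packed):
    return tuple(v / SCALE for v in packed)

packed = pack_viewport(90.0, -10.5, 0.0)
```

Carrying roll alongside yaw and pitch is the point of the "viewport roll signaling" in the title: without it a receiver can centre the viewport but cannot reproduce a rotated (tilted-horizon) view.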
Multiple-viewpoints related metadata transmission and reception method and apparatus
Disclosed is a 360-degree video data processing method performed by a 360-degree video transmission apparatus, the method comprising: obtaining 360-degree video data captured by at least one image obtaining device; deriving a two-dimensional (2D) picture comprising an omnidirectional image by processing the 360-degree video data; generating metadata for the 360-degree video data; encoding information on the 2D picture; and performing encapsulation based on the encoded picture and the metadata, wherein the metadata comprises information on a viewpoint group ID, and wherein multiple viewpoints related to the 360-degree video data are categorized into at least one viewpoint group based on the viewpoint group ID.
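The grouping rule in the metadata is simple: every viewpoint carries a viewpoint group ID, and viewpoints sharing an ID form one group. A minimal sketch, with dict-based metadata entries standing in for the actual encoded signalling:

```python
# Sketch of viewpoint grouping: metadata entries that share a
# viewpoint group ID are categorized into the same group (the dict
# entries are an illustration, not the spec's encoding).
from collections import defaultdict

def group_viewpoints(viewpoints):
    """viewpoints: list of {'id': ..., 'group_id': ...} metadata entries."""
    groups = defaultdict(list)
    for vp in viewpoints:
        groups[vp["group_id"]].append(vp["id"])
    return dict(groups)

groups = group_viewpoints([
    {"id": "vp1", "group_id": 0},
    {"id": "vp2", "group_id": 0},
    {"id": "vp3", "group_id": 1},
])
```

Grouping lets a receiver treat related viewpoints (for example, cameras covering the same venue area) as one switchable set rather than an undifferentiated list.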