Patent classifications
H04N15/00
Method and apparatus for processing a 3D service
The present description discloses a method and apparatus for processing a 3D service. A 3D service apparatus according to one example of the present invention comprises: a receiving unit which receives a broadcast signal including the 3D service; an SI processing unit which extracts SI (signaling information) for the 3D service from the broadcast signal and decodes it; a first processing unit which decodes 3D view data and depth data; a second processing unit which takes the decoded signal output by the first processing unit as input and generates a virtual view at a certain viewpoint on the basis of the decoded SI; and an output formatter which generates 3D service data on the basis of the virtual view generated by the second processing unit, and outputs the generated 3D service data.
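The second processing unit's view synthesis from decoded view and depth data can be illustrated with a minimal depth-image-based rendering (DIBR) sketch. The function names, the pinhole disparity model, and the left-neighbour hole filling are assumptions for illustration, not the patent's actual method.

```python
# Minimal DIBR sketch: synthesize a virtual view by shifting each pixel
# horizontally by a disparity derived from its depth (pinhole model).
# All names and the disparity model are illustrative assumptions.

def synthesize_virtual_view(view, depth, baseline, focal):
    """view: 2D list of pixel values; depth: 2D list of depths (larger = farther)."""
    h, w = len(view), len(view[0])
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Disparity is inversely proportional to depth.
            disparity = int(round(baseline * focal / max(depth[y][x], 1e-6)))
            nx = x + disparity
            if 0 <= nx < w:
                out[y][nx] = view[y][x]
    # Fill disoccluded holes from the nearest filled left neighbour.
    for y in range(h):
        for x in range(w):
            if out[y][x] is None:
                out[y][x] = out[y][x - 1] if x > 0 and out[y][x - 1] is not None else 0
    return out
```

A real implementation would also handle occlusion ordering (nearer pixels overwriting farther ones); this sketch keeps only the warp-and-fill core.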
Three-dimensional collaboration
Remote collaboration of a subject and a graphics object in the same view of a 3D scene. In one embodiment, one or more cameras of a collaboration system may be configured to capture images of a subject and track the subject (e.g., the head of a user, or another physical object). The images may be processed and provided to another collaboration system along with a determined viewpoint of the user. The other collaboration system may be configured to render and project the captured images and a graphics object in the same view of a 3D scene.
Three-dimensional image apparatus and operation method thereof
A three-dimensional image apparatus and an operation method thereof are provided. The three-dimensional image apparatus includes a projection unit, a photographic apparatus, a display module and a control unit. In a scan mode, the control unit controls the projection unit to project a structured-light pattern onto an object to be captured at different angles relative to the object, and controls the photographic apparatus to capture a plurality of composition images corresponding to the different angles of the object. The composition images are converted into a three-dimensional image in a three-dimensional image format through an image conversion module, and the three-dimensional-format image may display an image of the object at a specific viewing angle through the display module. The projection unit and the photographic apparatus are located on the same side of the three-dimensional image apparatus.
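The depth-recovery step behind such a structured-light scan can be sketched as simple projector-camera triangulation: a stripe projected onto the object appears shifted, relative to where a flat reference plane would place it, by an amount inversely proportional to depth. The decoding, geometry, and all names below are simplifying assumptions, not the apparatus's actual method.

```python
# Illustrative structured-light triangulation sketch (assumed geometry).

def stripe_depth(observed_x, expected_x, baseline, focal):
    """Depth from the lateral shift of one projected stripe.

    observed_x: stripe position seen by the camera (pixels)
    expected_x: stripe position a flat reference plane would produce
    baseline:   projector-to-camera distance (they sit on the same side)
    focal:      camera focal length in pixels
    """
    shift = observed_x - expected_x
    if shift == 0:
        return float("inf")  # no displacement: surface at reference depth
    return baseline * focal / shift

def scan_to_depth_map(observed_rows, expected_rows, baseline, focal):
    # One depth value per decoded stripe position in each captured image.
    return [[stripe_depth(o, e, baseline, focal)
             for o, e in zip(obs, exp)]
            for obs, exp in zip(observed_rows, expected_rows)]
```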
Techniques for producing creative stereo parameters for stereoscopic computer animation
A computer-implemented method for determining a user-defined stereo effect for a computer-generated scene. A set of bounded-parallax constraints including a near-parallax value and a far-parallax value is obtained. A stereo-volume value is obtained, wherein the stereo-volume value represents a percentage of parallax. A stereo-shift value is also obtained, wherein the stereo-shift value represents a distance across one of: an area associated with a camera sensor of a pair of stereoscopic cameras adapted to film the computer-generated scene; and a screen adapted to depict a stereoscopic image of the computer-generated scene. A creative near-parallax value is calculated based on the stereo-shift value, the stereo-volume value, and the near-parallax value. A creative far-parallax value is also calculated based on the stereo-shift value and the product of the stereo-volume value and the far-parallax value. The creative near-parallax value and creative far-parallax value are stored in a computer memory as the user-defined stereo effect.
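The abstract names the inputs but not the exact combination, so the following is a hedged sketch assuming the stereo-volume value scales each bounded-parallax value and the stereo-shift value offsets the result; the function name and the blend rule are illustrative.

```python
# Assumed creative-parallax computation: volume scales, shift offsets.

def creative_parallax(near_parallax, far_parallax, stereo_volume, stereo_shift):
    """stereo_volume is a fraction (e.g. 0.5 for 50% of full parallax);
    stereo_shift is a signed distance in the same units as the parallax values."""
    creative_near = stereo_volume * near_parallax + stereo_shift
    creative_far = stereo_volume * far_parallax + stereo_shift
    return creative_near, creative_far
```

For example, halving the stereo volume of a (-10, 30) parallax range and shifting by 2 yields (-3, 17), a shallower scene pushed slightly behind the screen plane.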
Edge preserving depth filtering
A scene is illuminated with modulated illumination light that reflects from surfaces in the scene as modulated reflection light. Each of a plurality of pixels of a depth camera receives the modulated reflection light and observes a phase difference between the modulated illumination light and the modulated reflection light. For each of the plurality of pixels, an edginess of that pixel is recognized, and the phase difference of that pixel is smoothed as a function of that edginess.
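A minimal sketch of such edge-aware smoothing, assuming "edginess" is the local phase-gradient magnitude and that high edginess suppresses smoothing; the neighbourhood size, blend rule, and names are illustrative assumptions.

```python
# Edge-aware smoothing of per-pixel phase differences (assumed formulation).

def edginess(phase, x, y):
    # Gradient magnitude against the right and lower neighbours.
    h, w = len(phase), len(phase[0])
    gx = phase[y][min(x + 1, w - 1)] - phase[y][x]
    gy = phase[min(y + 1, h - 1)][x] - phase[y][x]
    return abs(gx) + abs(gy)

def smooth_phase(phase, strength=1.0):
    h, w = len(phase), len(phase[0])
    out = [row[:] for row in phase]
    for y in range(h):
        for x in range(w):
            # 3x3 box average around the pixel.
            nbrs = [phase[j][i]
                    for j in range(max(0, y - 1), min(h, y + 2))
                    for i in range(max(0, x - 1), min(w, x + 2))]
            avg = sum(nbrs) / len(nbrs)
            # Blend toward the average less at strong edges,
            # so depth discontinuities are preserved.
            w_smooth = 1.0 / (1.0 + strength * edginess(phase, x, y))
            out[y][x] = (1 - w_smooth) * phase[y][x] + w_smooth * avg
    return out
```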
Method of using a light-field camera to generate a three-dimensional image, and light field camera implementing the method
A method is provided to generate a 3D image using a light-field camera that includes a main lens, a micro-lens array, a light sensing component, and an image processing module. The micro-lens array forms a plurality of adjacent micro-images at different positions of the light sensing component. Each micro-image includes multiple image zones corresponding to different viewing angles. For each micro-image, the image processing module obtains image pixel values from the image zones, so as to generate camera images corresponding to different viewing angles. The image processing module combines the camera images to generate the 3D image.
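The gathering of image zones into per-viewing-angle camera images can be sketched as standard sub-aperture decoding: zone (u, v) of every micro-image is collected into the camera image for angle (u, v). The square micro-image layout and all names are simplifying assumptions about the method.

```python
# Illustrative sub-aperture extraction from a grid of micro-images.

def extract_views(sensor, micro_size):
    """sensor: 2D list covering a grid of micro-images, each micro_size x micro_size.
    Returns views[u][v] = 2D camera image for viewing angle (u, v)."""
    h, w = len(sensor), len(sensor[0])
    ny, nx = h // micro_size, w // micro_size  # number of micro-images
    return [[[[sensor[my * micro_size + u][mx * micro_size + v]
               for mx in range(nx)]          # one column per micro-image
              for my in range(ny)]           # one row per micro-image
             for v in range(micro_size)]
            for u in range(micro_size)]
```

Combining the resulting camera images into a 3D image (the final step in the abstract) would then be a disparity-based fusion, which is beyond this sketch.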
Digital receiver and method for processing caption data in the digital receiver
The present description provides a digital receiver which provides 3D caption data, and a method for processing 3D caption data in the digital receiver. A method for transmitting a broadcast signal for a 3D service according to one aspect of the present invention comprises the following steps: encoding a 3D video ES (elementary stream) including a 3D caption service; generating signaling information for signaling a 3D video service including the encoded 3D video ES; and transmitting a digital broadcast signal including the 3D video service and the signaling information, wherein the 3D caption service includes a first command code for generating left caption data and a second command code for indicating a disparity value for a caption window, and right caption data is generated on the basis of the first command code and the second command code.
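The receiver-side derivation of right caption data from the left caption plus the signaled disparity can be sketched as a horizontal offset of the caption window. The command-code parsing is omitted, and the window representation and sign convention below are illustrative assumptions.

```python
# Assumed derivation of right-eye caption data from left caption + disparity.

def make_right_caption(left_caption, disparity):
    """left_caption: dict with the caption window's anchor column and text.
    A positive disparity shifts the right-eye window left, so the caption
    appears in front of the screen plane (sign convention is an assumption)."""
    right = dict(left_caption)
    right["anchor_column"] = left_caption["anchor_column"] - disparity
    return right
```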
Apparatus and method for detecting a temporal synchronization mismatch between a first and a second video stream of a 3D video content
A video processing apparatus and a method for detecting a temporal synchronization mismatch between at least a first and a second video stream of a stereoscopic video content are described. An eye blink of a creature that is imaged in the video content is detected. The temporal synchronization mismatch is determined as the temporal offset between the reproduction of an eye blink in the first video stream and the reproduction of the same eye blink in the second video stream.
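Given blink timestamps detected independently in each stream, the mismatch estimate can be sketched as the median offset between matched blinks. The nearest-neighbour pairing and median robustness step are assumptions; the abstract only specifies that the offset between the two reproductions of a blink is determined.

```python
# Assumed estimation of stream synchronization mismatch from blink times.

def sync_mismatch(blinks_left, blinks_right):
    """Pair each left-stream blink with the nearest right-stream blink and
    return the median time offset (left minus right), in seconds."""
    if not blinks_left or not blinks_right:
        return None
    diffs = []
    for t in blinks_left:
        nearest = min(blinks_right, key=lambda r: abs(r - t))
        diffs.append(t - nearest)
    diffs.sort()
    return diffs[len(diffs) // 2]  # median is robust to a mis-detected blink
```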
Adjustable parallax distance, wide field of view, stereoscopic imaging system
An imaging system, and methods for using an imaging system, in which the operator is able to variably adjust the parallax distance for enhanced stereo performance are disclosed. In addition, by coordinating the parallax distance with the optical settings of the camera, artificial 3D experiences can be created that give a user the perception of observing a scene from a distance different from the one actually employed. The imaging system may also include a plurality of stereo camera supersets, wherein one or more stereo camera supersets are positioned at a different height relative to a first stereo camera superset. Novel specific uses of the camera system, such as capturing events of interest, are described. Useful techniques for extracting or encoding wide-field-of-view images from memory are also disclosed.
Ambiguity-free optical tracking system
An ambiguity-free optical tracking system (100) includes a position sensor unit (104), a control box (106), a computer (110) and software. The position sensor unit includes two cameras (121 and 122) that track a plurality of individual infra-red reflecting markers (160) affixed on a patient's skin. Coordinates of the cameras in a coordinate system of the cameras are determined as part of a system calibration process by intentionally creating ambiguous markers with two true markers. Using this information, an ambiguity elimination algorithm automatically identifies and eliminates ambiguous markers. A recursive backtracking algorithm builds a one-to-one correspondence between reference markers and markers optically observed during treatment. By utilizing a concept of null markers, the backtracking algorithm deals with missing, misplaced and occluded markers.
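The one-to-one reference/observed correspondence built by the recursive backtracking algorithm can be sketched as a search that maximizes the number of matched markers, assigning a null marker to any reference whose counterpart is missing or occluded. The distance gate, the exhaustive search, and all names are assumptions about the algorithm; the abstract does not give its details.

```python
# Assumed backtracking match of reference markers to observed markers.

NULL = None  # null marker: a reference marker with no observed counterpart

def match_markers(refs, observed, tol):
    """Assign each reference marker an observed marker (or NULL), using each
    observed marker at most once; refs/observed are (x, y) tuples."""

    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    best = {"count": -1, "assign": []}

    def backtrack(i, used, assignment, matched):
        if i == len(refs):
            if matched > best["count"]:
                best["count"], best["assign"] = matched, assignment[:]
            return
        for j, obs in enumerate(observed):
            if j not in used and dist(refs[i], obs) <= tol:
                backtrack(i + 1, used | {j}, assignment + [obs], matched + 1)
        # Also try leaving this reference unmatched (missing/occluded marker).
        backtrack(i + 1, used, assignment + [NULL], matched)

    backtrack(0, frozenset(), [], 0)
    return best["assign"]
```

The exhaustive search is exponential in the worst case; a clinical implementation would prune the search with the calibration-derived camera geometry mentioned in the abstract.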