Patent classifications
H04N15/00
Three-dimensional video production system
A number of elements within a venue of a live-action event to be televised are tagged with wireless tracking devices to provide accurate and timely location information for all of the elements. This location information facilitates direction of audiovisual capture devices such as cameras and microphones, automatic regulation of convergence, automatic vertical alignment of paired left- and right-eye views, and synthesis of part or all of 3D scenes when such are not otherwise available.
Two dimensional to three dimensional video conversion
A method of converting two-dimensional image data to three-dimensional image data includes dividing the image data into blocks, performing motion estimation on the blocks to produce block-based motion vectors, applying a global motion analysis and a local motion analysis to the block-based motion vectors to generate motion-based depth, applying a global image model and a local image model to the block-based motion vectors to generate image-based depth, and generating a three-dimensional view by fusing the motion-based depth and the image-based depth. Other conversion methods are also included.
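The final fusion step can be sketched as a weighted blend of the two depth estimates. The abstract does not specify the fusion rule, so the linear blend and the weight `alpha` below are assumptions for illustration:

```python
import numpy as np

def fuse_depth(motion_depth, image_depth, alpha=0.5):
    # Linear blend of the motion-based and image-based depth maps.
    # alpha is an assumed weight; the actual fusion rule is unspecified.
    return alpha * motion_depth + (1.0 - alpha) * image_depth

# Toy 2x2 block-level depth maps (arbitrary relative depth units).
motion_depth = np.array([[0.2, 0.4], [0.6, 0.8]])
image_depth = np.array([[0.4, 0.4], [0.4, 0.4]])
fused = fuse_depth(motion_depth, image_depth)  # e.g. fused[0, 0] == 0.3
```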
Conversion of 2D image to 3D video
A 3D video generator receives a two-dimensional input image to be converted to a three-dimensional video. The 3D video generator generates a depth map based on depth values for each of a plurality of pixels in the input image. Based on the depth map, the input image and a view disparity value, the 3D video generator creates a series of modified images that are associated with the input image. When combined, the series of modified images forms a 3D video.
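The "modified images" step resembles depth-image-based rendering: each pixel is shifted horizontally by a disparity proportional to its depth. A minimal sketch, assuming a simple per-pixel shift with no hole filling (the patent's actual warping may differ):

```python
import numpy as np

def render_view(image, depth, disparity_scale):
    # Shift each pixel horizontally by disparity_scale * depth.
    # Disoccluded pixels are left at zero; no hole filling is attempted.
    h, w = image.shape[:2]
    out = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            nx = x + int(round(disparity_scale * depth[y, x]))
            if 0 <= nx < w:
                out[y, nx] = image[y, x]
    return out

image = np.array([[10, 20, 30, 40]])
depth = np.array([[0, 0, 1, 1]])
# A series of views with increasing disparity forms the 3D video frames.
frames = [render_view(image, depth, s) for s in (0, 1)]
```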
Infrared video display eyewear
A wearable display apparatus for viewing video images of scenes and/or objects illuminated with infrared light. The display apparatus includes: a transparent display that is positioned in a user's field of vision when the display apparatus is worn; a stereoscopic video camera device including at least two cameras that each capture reflected infrared light images of a surrounding environment; and a projection system that receives the infrared light images from the stereoscopic camera device and simultaneously projects (i) a first infrared-illuminated video image in real-time onto a left eye viewport portion of the transparent display that overlaps the user's left eye field of vision and (ii) a second infrared-illuminated video image in real-time onto a right eye viewport portion of the transparent display that overlaps the user's right eye field of vision.
Dynamic adjustment of predetermined three-dimensional video settings based on scene content
Predetermined three-dimensional video parameter settings may be dynamically adjusted based on scene content. One or more three-dimensional characteristics associated with a given scene may be determined. One or more scale factors may be determined from the three-dimensional characteristics. The predetermined three-dimensional video parameter settings can be adjusted by applying the scale factors to the predetermined three-dimensional video parameter settings. The scene may be displayed on a three-dimensional display using the resulting adjusted set of predetermined three-dimensional video parameters.
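The scale-factor application can be sketched as an element-wise multiply over named parameters; the parameter names and factor values below are hypothetical:

```python
def adjust_settings(preset, scale_factors):
    # Multiply each predetermined parameter by its scene-derived scale
    # factor; parameters without a factor are left unchanged.
    return {name: value * scale_factors.get(name, 1.0)
            for name, value in preset.items()}

preset = {"depth_strength": 10.0, "convergence_offset": 2.0}
scales = {"depth_strength": 0.5}  # e.g. derived from the scene's depth range
adjusted = adjust_settings(preset, scales)
```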
Stereoscopic (3D) camera system utilizing a monoscopic (2D) control unit
A camera system comprising: stereoscopic optics; a right image sensor for acquiring a right image from the stereoscopic optics and a left image sensor for acquiring a left image from the stereoscopic optics; a horizontal line switch for receiving the right image from the right image sensor and the left image from the left image sensor and creating a composite image wherein the horizontal line signals from the right image sensor are alternated with the horizontal line signals from the left image sensor; and a single camera processor for receiving the composite image from the horizontal line switch and presenting it to a display.
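The horizontal line switch can be sketched as row interleaving; assigning even rows to the left sensor is an assumption, since the abstract only says the lines alternate:

```python
import numpy as np

def interleave_lines(left, right):
    # Composite frame: even rows from the left sensor, odd rows from
    # the right sensor (the even/odd assignment is assumed).
    assert left.shape == right.shape
    composite = np.empty_like(left)
    composite[0::2] = left[0::2]
    composite[1::2] = right[1::2]
    return composite

left = np.ones((4, 2), dtype=int)
right = np.zeros((4, 2), dtype=int)
composite = interleave_lines(left, right)  # row values: 1, 0, 1, 0
```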
Image processing apparatus, image processing method, program, and camera
Provided are an image processing apparatus, an image processing method, a program, and a camera which are capable of generating a joined image in which distortions and seams are less likely to occur. An image processing apparatus according to an exemplary embodiment generates a joined image by joining a plurality of textures based on a plurality of images, and includes a second derivation unit that derives a motion between images of the plurality of images, and a texture writing unit that writes, into a frame memory, the plurality of textures that form the joined image, based on the motion between the images. The plurality of textures include a texture having such a shape that at least a part of an outline thereof is curved.
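A minimal sketch of motion-guided texture writing, assuming purely translational inter-image motion and rectangular textures (the patent also covers textures with curved outlines, omitted here):

```python
import numpy as np

def stitch(textures, motions, canvas_shape):
    # Accumulate the per-image motion and write each texture into the
    # frame memory (canvas) at the accumulated offset.
    canvas = np.zeros(canvas_shape)
    ox = oy = 0
    for tex, (dx, dy) in zip(textures, motions):
        ox, oy = ox + dx, oy + dy
        h, w = tex.shape
        canvas[oy:oy + h, ox:ox + w] = tex
    return canvas

textures = [np.full((2, 2), 1.0), np.full((2, 2), 2.0)]
motions = [(0, 0), (2, 0)]  # second image shifted 2 px to the right
joined = stitch(textures, motions, (2, 4))
```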
Vehicle vision system with customized display
A vehicle vision system includes a plurality of cameras having respective fields of view exterior of the vehicle. A processor is operable to process image data captured by the cameras and to generate images of the environment surrounding the vehicle. The processor is operable to generate a three dimensional vehicle representation of the vehicle. A display screen is operable to display the generated images of the environment surrounding the vehicle and to display the generated vehicle representation of the equipped vehicle as would be viewed from a virtual camera viewpoint. At least one of (a) a degree of transparency of at least a portion of the displayed vehicle representation is adjustable by the system, (b) the vehicle representation comprises a vector model and (c) the vehicle representation comprises a shape, body type, body style and/or color corresponding to that of the actual equipped vehicle.
Two-channel reflector based single-lens 2D/3D camera with disparity and convergence angle control
A single-lens two-channel reflector provides the capability to switch between two-dimensional and three-dimensional imaging. The reflector includes laterally displaceable outward reflectors and displaceable inward reflectors that can simultaneously provide left and right images of a scene to an imager, and controllers for controlling relative distance between the outward and the inward reflectors, and for controlling deflection angle of the inward reflectors, so as to enable the adjustment of disparity and convergence angle.
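The relationship between baseline, subject distance, and convergence angle follows standard stereo geometry; the formula below is that textbook relation, not the patent's specific reflector control law:

```python
import math

def convergence_angle(baseline_m, distance_m):
    # Angle (radians) between two optical axes separated by baseline_m
    # and converging on a point distance_m away.
    return 2.0 * math.atan(baseline_m / (2.0 * distance_m))

# A ~65 mm baseline converging at 2 m gives roughly 0.032 rad (~1.9 deg).
angle = convergence_angle(0.065, 2.0)
```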
Image capturing device and image capturing method
A bus interface/camera control interface connects a processor and a plurality of imaging units. A control data pass-through logic exchanges data between the bus interface/camera control interface and the plurality of imaging units. A pass-through mask register includes: a write register used to specify, of the plurality of imaging units, one or more imaging units to be selected so as to selectively supply write data to the one or more imaging units; and a read register used to specify, of the plurality of imaging units, a single imaging unit to be selected so as to selectively receive read data from the single imaging unit. The control data pass-through logic can use the pass-through mask register to switch between input and output operations.
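The pass-through mask register can be sketched as a write bitmask selecting any subset of imaging units plus a single-unit read selector; all names here are illustrative, not taken from the patent:

```python
class PassThroughMask:
    def __init__(self, num_units):
        self.num_units = num_units
        self.write_mask = 0   # bit i set -> imaging unit i receives writes
        self.read_select = 0  # index of the single unit selected for reads

    def broadcast_write(self, units):
        # Program the write register; return the units that would
        # receive the next control-data write.
        self.write_mask = 0
        for u in units:
            self.write_mask |= 1 << u
        return [u for u in range(self.num_units)
                if self.write_mask & (1 << u)]

    def select_read(self, unit, unit_data):
        # Program the read register; return data from the selected unit.
        self.read_select = unit
        return unit_data[self.read_select]

mask = PassThroughMask(4)
targets = mask.broadcast_write([0, 2])  # units 0 and 2 get the write
value = mask.select_read(1, ["a", "b", "c", "d"])
```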