Patent classifications
G06T7/557
Direct camera-to-display system
In one embodiment, an electronic display assembly includes a sensor array located on one side of a circuit board and an electronic display array located on an opposite side of the circuit board from the sensor array. The sensor array includes a plurality of sensor pixel units. Each sensor pixel unit includes a plurality of sensor pixels. The electronic display array includes a plurality of display pixel units. Each display pixel unit includes a plurality of display pixels. Each particular one of the plurality of sensor pixel units is mapped to a corresponding one of the plurality of display pixel units such that display pixels of each particular one of the plurality of display pixel units display light corresponding to light captured by sensor pixels of its mapped sensor pixel unit.
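The unit-to-unit mapping described above can be sketched in a few lines. This is a minimal illustration, not the patented implementation: the unit size `UNIT` and the direct-copy transfer are assumptions made for the sketch.

```python
import numpy as np

UNIT = 2  # pixels per unit edge (assumed for this sketch)

def drive_display(sensor_frame: np.ndarray) -> np.ndarray:
    """Map each sensor pixel unit to its corresponding display pixel unit.

    Unit (i, j) on the sensor side drives unit (i, j) on the display side,
    so the display shows light corresponding to what each mapped sensor
    unit captured (modeled here as a direct copy).
    """
    h, w = sensor_frame.shape[:2]
    assert h % UNIT == 0 and w % UNIT == 0, "frame must tile into units"
    display_frame = np.empty_like(sensor_frame)
    for i in range(0, h, UNIT):
        for j in range(0, w, UNIT):
            display_frame[i:i + UNIT, j:j + UNIT] = \
                sensor_frame[i:i + UNIT, j:j + UNIT]
    return display_frame
```

In hardware the mapping would be wired through the circuit board rather than computed, but the correspondence between sensor units and display units is the same.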
Light field image rendering method and system for creating see-through effects
A light field image processing method is disclosed for removing occluding foreground objects and blurring objects not of interest, by differentiating objects located at different depths of field and objects belonging to distinct categories, to create see-through effects. In various embodiments, the image processing method may blur a background object behind a specified object of interest. The image processing method may also at least partially remove from the rendered image any occluding object that would prevent a viewer from viewing the object of interest. The image processing method may further blur areas of the rendered image that represent an object in the light field other than the object of interest. The method includes the steps of constructing a light field weight function comprising a depth component and a semantic component, where the weight function assigns each ray in the light field a weight, and conducting light field rendering using the weight function.
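A weight function with a depth component times a semantic component, as the abstract describes, might look like the following sketch for a single output pixel. The Gaussian depth falloff, the label encoding, and the parameter names (`sigma`, `target_label`) are illustrative assumptions, not the patent's formulation.

```python
import numpy as np

def render_pixel(ray_colors, ray_depths, ray_labels,
                 focus_depth, target_label, sigma=0.5):
    """Weighted light field rendering for one output pixel.

    ray_colors: (N, 3) colors of the N rays through this pixel.
    ray_depths: (N,) depth at which each ray hits the scene.
    ray_labels: (N,) semantic label of the object each ray hits.

    Depth component: Gaussian around the focal depth (defocuses other
    depths). Semantic component: near-zero weight for rays that hit
    anything other than the object of interest (removes occluders).
    """
    depth_w = np.exp(-((ray_depths - focus_depth) ** 2) / (2 * sigma ** 2))
    sem_w = np.where(ray_labels == target_label, 1.0, 0.05)
    w = depth_w * sem_w
    return (w[:, None] * ray_colors).sum(axis=0) / w.sum()
```

Down-weighting, rather than discarding, rays from occluders is what produces the see-through effect: enough rays around the occluder still reach the object of interest to reconstruct it.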
Systems and methods for estimating depth from projected texture using camera arrays
Systems and methods for estimating depth from projected texture using camera arrays are described. A camera array includes a conventional camera and at least one two-dimensional array of cameras, where the conventional camera has a higher resolution than the cameras in the two-dimensional array, and an illumination system configured to illuminate a scene with a projected texture. An image processing pipeline application directs the processor to: control the illumination system, via the illumination system controller application, to illuminate a scene with the projected texture; capture a set of images of the scene illuminated with the projected texture; and determine depth estimates for pixel locations in an image from a reference viewpoint using at least a subset of the set of images.
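Depth estimation from a pair of views of a texture-projected scene can be illustrated with basic block matching: the projected texture guarantees local contrast even on otherwise featureless surfaces, so a simple sum-of-absolute-differences (SAD) search along scanlines recovers disparity (which is inversely proportional to depth). This sketch assumes rectified grayscale views and is not the patent's pipeline.

```python
import numpy as np

def sad_disparity(ref, alt, patch=2, max_disp=8):
    """Per-pixel disparity by SAD block matching along scanlines.

    For each pixel in the reference view, compare its surrounding patch
    against patches in the alternate view shifted left by 0..max_disp
    pixels, and keep the shift with the lowest absolute difference.
    """
    h, w = ref.shape
    disp = np.zeros((h, w), dtype=int)
    for y in range(patch, h - patch):
        for x in range(patch + max_disp, w - patch):
            ref_patch = ref[y - patch:y + patch + 1, x - patch:x + patch + 1]
            best_cost, best_d = None, 0
            for d in range(max_disp + 1):
                alt_patch = alt[y - patch:y + patch + 1,
                                x - d - patch:x - d + patch + 1]
                cost = np.abs(ref_patch - alt_patch).sum()
                if best_cost is None or cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

With a random projected texture, the match at the true disparity is essentially unambiguous, which is precisely why structured illumination helps passive stereo.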
Light field reconstruction method and apparatus for a dynamic scene
The light field reconstruction method includes: obtaining a human segmentation result via a pre-trained semantic segmentation network, and obtaining an object segmentation result according to a pre-obtained scene background; fusing multiple frames of depth maps to obtain a geometric model, obtaining a complete human model according to a pre-trained human-model completion network, and registering the models by point cloud registration and fusing the registered models to obtain an object model, so as to obtain a complete human model with geometric details and the object model; tracking the motion of a rigid object through point cloud registration; reconstructing the complete human model with geometric details through human skeleton tracking and non-rigid tracking of human surface nodes; and performing a fusion operation in time sequence to obtain a reconstructed human model and a reconstructed rigid-object model.
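The rigid-object tracking step relies on point cloud registration. With known point correspondences, the least-squares rigid transform has a closed-form solution via the Kabsch algorithm; the sketch below assumes paired points, which a full pipeline like the one above would obtain from matching or iterative closest point.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) with dst ≈ src @ R.T + t.

    Kabsch algorithm: center both clouds, take the SVD of the
    cross-covariance, and correct the sign so R is a proper rotation
    (det(R) = +1, no reflection).
    """
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_mean).T @ (dst - dst_mean)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t
```

Tracking a rigid object over time amounts to re-running this registration between consecutive frames and chaining the recovered transforms.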
Multi-color flash with image post-processing
A multi-color flash system with image post-processing is described, in which a camera device with a multi-color flash uses post-processing to generate images. In one aspect, the system may be implemented by a controller configured to control a camera and flashes of at least two different colors. The controller may be configured to cause the camera to acquire a first image of a scene while the scene is illuminated with the first flash but not the second flash, then cause the camera to acquire a second image of the scene while the scene is illuminated with the second flash but not the first flash, and generate a final image of the scene in post-processing based on a combination of the first image and the second image.
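Combining the two single-flash exposures can be sketched as a per-pixel weighted blend. The well-exposedness weighting below is one plausible combination strategy, chosen for illustration; the patent does not specify this formula, and the function and parameter names are assumptions.

```python
import numpy as np

def fuse_flash_pair(img_a, img_b, sigma=0.2):
    """Fuse two single-flash exposures of the same scene.

    Each image is weighted per pixel by a 'well-exposedness' score
    (a Gaussian peaked at mid-gray), so each region of the final image
    favors whichever flash lit it better. Inputs are float images in
    [0, 1] with shape (H, W, 3).
    """
    def well_exposed(img):
        return np.exp(-((img.mean(axis=2) - 0.5) ** 2) / (2 * sigma ** 2))
    wa, wb = well_exposed(img_a), well_exposed(img_b)
    total = wa + wb + 1e-12                      # avoid division by zero
    return (wa[..., None] * img_a + wb[..., None] * img_b) / total[..., None]
```

Because the two flashes have different colors, a production combiner would also exploit the pair for white-balance or material estimation; the blend above only captures the exposure-combination idea.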
Powder leakage monitoring device and powder leakage monitoring method
The invention discloses a powder leakage monitoring device and a powder leakage monitoring method. The powder leakage monitoring device comprises a light field camera, a 3D pan-tilt-zoom (PTZ) unit, and a computer. The light field camera records original light field images of the monitored area; the 3D PTZ unit beneath the light field camera adjusts the camera's shooting angle as it rotates in a set direction; and the computer, connected to both the light field camera and the 3D PTZ unit, generates refocused images from the original light field images and determines the spatial coordinates of the powder leakage point and the hazard range of the leakage in the monitored area from the refocused images and the shooting angle. The invention thereby increases both the range and the accuracy of powder leakage monitoring.
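Generating refocused images from an original light field is classically done by shift-and-add: each sub-aperture view is shifted in proportion to its position on the aperture plane and the shifted views are averaged, with the shift scale selecting the focal depth. The sketch below follows that standard formulation; the function and parameter names are assumptions, not the patent's interface.

```python
import numpy as np

def refocus(subaperture_images, positions, alpha):
    """Shift-and-add light field refocusing.

    subaperture_images: list of (H, W) views from the light field.
    positions: (u, v) aperture-plane coordinates of each view.
    alpha: refocus parameter selecting the in-focus depth plane.
    """
    acc = np.zeros_like(subaperture_images[0], dtype=float)
    for img, (u, v) in zip(subaperture_images, positions):
        shift_y = int(round(alpha * v))
        shift_x = int(round(alpha * u))
        # Shift each view toward the chosen focal plane, then accumulate.
        acc += np.roll(img, (shift_y, shift_x), axis=(0, 1))
    return acc / len(subaperture_images)
```

Scene points on the selected plane align across all shifted views and stay sharp, while points at other depths are averaged into blur, which is how the computer can localize the leakage point in depth from the refocused stack.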