Patent classifications
H04N13/20
VELOCITY AND DEPTH AWARE REPROJECTION
In various embodiments, methods and systems for reprojecting images based on a velocity- and depth-aware late stage reprojection process are provided. A reprojection engine supports reprojecting images based on an optimized late stage reprojection process performed using both depth data and velocity data. Image data and its corresponding depth and velocity data are received. An adjustment to be made to the image data is determined based on motion data, the depth data, and the velocity data. The motion data corresponds to a device associated with displaying the image data. The velocity data supports determining calculated correction distances for portions of the image data. The image data is then adjusted accordingly, by integrating depth-data-based translation and velocity-data-based motion correction into a single-pass implementation.
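The single-pass adjustment described above can be sketched as a per-pixel shift that combines a depth-scaled correction for head motion with a velocity-based prediction over the display latency. The function name, the linear motion model, and all parameters below are illustrative assumptions, not details from the patent.

```python
import numpy as np

def reproject(pixels, depth, velocity, head_translation, latency):
    """Single-pass sketch: shift each pixel coordinate by a depth-scaled
    head-motion correction plus a velocity-based prediction over the
    display latency. `pixels` and `velocity` are (H, W, 2) arrays,
    `depth` is (H, W), `head_translation` is a 2-vector."""
    # Depth-based translation: nearer pixels (smaller depth) shift more.
    depth_shift = head_translation / np.maximum(depth[..., None], 1e-6)
    # Velocity-based correction: predict object motion over the latency.
    velocity_shift = velocity * latency
    # Both corrections are applied in one pass over the image.
    return pixels + depth_shift + velocity_shift
```

Folding both corrections into one pass avoids resampling the image twice, which is the efficiency motivation the abstract hints at.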
SYSTEMS AND METHODS FOR SPATIALLY SELECTIVE VIDEO CODING
Systems and methods for providing panoramic image and/or video content using spatially selective encoding and/or decoding. Panoramic content may include stitched spherical (360-degree) images and/or VR video. In some implementations, selective encoding functionality may be embodied in a spherical image capture device that may include two lenses configured to capture pairs of hemispherical images. Encoded source images may be decoded and stitched in order to obtain a combined image characterized by a greater field of view as compared to the source images. The stitched image may be encoded using a selective encoding methodology that includes partitioning the stitched image into multiple portions and determining whether each portion is to be re-encoded. If an image portion is to be re-encoded, it is re-encoded; otherwise, the previously encoded image portion is copied in lieu of encoding.
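The partition-then-decide loop can be sketched directly. The helper names (`needs_reencode`, `encode`, `cache`) are illustrative stand-ins for the decision function, encoder, and previously encoded bitstream, none of which are specified in the abstract.

```python
def selectively_encode(portions, needs_reencode, encode, cache):
    """Sketch of the spatially selective pass: re-encode only the flagged
    portions of the stitched image, and copy the previously encoded
    portion for the rest, in lieu of encoding."""
    out = []
    for i, portion in enumerate(portions):
        if needs_reencode(i, portion):
            out.append(encode(portion))   # fresh encode for changed portions
        else:
            out.append(cache[i])          # reuse previously encoded bytes
    return out
```

Skipping unchanged portions is what makes the scheme spatially selective: only the regions that actually differ pay the encoding cost.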
Critical alignment of parallax images for autostereoscopic display
A method is provided for generating an autostereoscopic display. The method includes acquiring a first parallax image and at least one other parallax image. At least a portion of the first parallax image may be aligned with a corresponding portion of the at least one other parallax image. Alternating views of the first parallax image and the at least one other parallax image may be displayed.
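The alignment step can be sketched as a brute-force search for the horizontal offset that best registers one parallax image against the other. The mean-absolute-difference metric and search range are assumptions; the patent does not specify the alignment criterion.

```python
import numpy as np

def align_horizontal(left, right, max_shift=8):
    """Find the horizontal shift (in pixels) that best aligns `right`
    with `left`, by minimizing mean absolute difference over a small
    search window. A brute-force sketch of the alignment step."""
    best_shift, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(right, s, axis=1)   # candidate horizontal offset
        err = np.abs(left - shifted).mean()
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift
```

Once the offset is found, the two views can be shifted into register before being displayed in alternation.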
Electronic device, control method, and non-transitory computer readable medium
In an electronic device according to the present invention, on a first screen, a range of a part of a VR content having a first video range is displayed as a display range, and the display range is changed in accordance with an orientation change of the electronic device or a display range change operation. On a second screen, a first image with a second video range that is narrower than the first video range and a second image covering the part of the first video range outside the second video range are displayed side by side. An edited VR content including the second video range is generated, and the second video range is changed in accordance with a video range change operation while the first image and the second image are displayed on the second screen.
MULTISENSORY DATA FUSION SYSTEM AND METHOD FOR AUTONOMOUS ROBOTIC OPERATION
A robotic system includes one or more optical sensors configured to separately obtain two dimensional (2D) image data and three dimensional (3D) image data of a brake lever of a vehicle, a manipulator arm configured to grasp the brake lever of the vehicle, and a controller configured to compare the 2D image data with the 3D image data to identify one or more of a location or a pose of the brake lever of the vehicle. The controller is configured to control the manipulator arm to move toward, grasp, and actuate the brake lever of the vehicle based on the one or more of the location or the pose of the brake lever.
Virtual endoscopic image generation device, method, and medium containing program
A structure extracting unit extracts a structure from a three-dimensional medical image, and a view point determining unit determines a view point position and a direction of line of sight of a virtual endoscopic image. An image generating unit calculates a distance between the view point position and the extracted structure, determines a display attribute of the extracted structure based on the distance and a plurality of different display attributes that correspond to different distances from the view point position and are defined for each of the structures, and generates, from the three-dimensional medical image, a virtual endoscopic image containing the structure having the determined display attribute. A display control unit displays the generated virtual endoscopic image on a WS display.
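The distance-to-attribute mapping can be sketched as a lookup over distance ranges. The range table and attribute names below are hypothetical; the patent only states that different display attributes correspond to different distances from the view point.

```python
def display_attribute(distance, attributes_by_range):
    """Pick the display attribute whose distance range contains `distance`.
    `attributes_by_range` is a hypothetical list of (max_distance, attribute)
    pairs sorted by increasing max_distance; one such table would be
    defined per structure."""
    for max_dist, attribute in attributes_by_range:
        if distance <= max_dist:
            return attribute
    return attributes_by_range[-1][1]  # beyond all ranges: farthest style
```

For example, a structure might be rendered opaque when close to the view point, semi-transparent at mid range, and as an outline when far away.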
Robust user detection and tracking
Systems and approaches are provided for robustly detecting and tracking a user. Image data can be captured and processed to provide an estimated position and/or orientation of the user. Other sensor data, such as from an accelerometer and/or gyroscope, can be determined for a more robust estimation of the user's position and/or orientation. Multiple user detection processes and/or motion estimation approaches and their corresponding confidence levels can also be combined to determine a final estimated position and orientation of the user. The multiple user pose estimations and/or motion estimations can be combined via an approach such as probabilistic system modeling and maximum likelihood estimation.
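Combining multiple pose estimates by their confidence levels can be sketched as a weighted mean. Under a Gaussian noise model with confidence interpreted as inverse variance, this weighted mean is the maximum likelihood estimate; that modeling choice is an assumption, since the abstract names the approach only in general terms.

```python
import numpy as np

def fuse_estimates(estimates, confidences):
    """Combine several pose/motion estimates into one final estimate by
    confidence weighting. With confidences read as inverse variances of
    independent Gaussian observations, the weighted mean below is the
    maximum likelihood estimate."""
    estimates = np.asarray(estimates, dtype=float)   # (n_estimators, dims)
    weights = np.asarray(confidences, dtype=float)
    weights = weights / weights.sum()                # normalize to sum to 1
    return (weights[:, None] * estimates).sum(axis=0)
```

An estimator with three times the confidence thus pulls the fused pose three times as hard toward its own estimate.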
Light Field Imaging Device and Method for Depth Acquisition and Three-Dimensional Imaging
A light field imaging device and method are provided. The device can include a diffraction grating assembly receiving a wavefront from a scene and including one or more diffraction gratings, each having a grating period along a grating axis and diffracting the wavefront to generate a diffracted wavefront. The device can also include a pixel array disposed under the diffraction grating assembly and detecting the diffracted wavefront in a near-field diffraction regime to provide light field image data about the scene. The pixel array has a pixel pitch along the grating axis that is smaller than the grating period. The device can further include a color filter array disposed over the pixel array to spatio-chromatically sample the diffracted wavefront prior to detection by the pixel array. The device and method can be implemented in backside-illuminated sensor architectures. Diffraction grating assemblies for use in the device and method are also disclosed.
3D DIGITAL PAINTING
A method of continuous and simultaneous three-dimensional digital painting and drawing is provided, with steps of: providing a digital electronic canvas having at least one display and capable of presenting two pictures, one for the right eye and one for the left eye; providing means for creating a continuous 3D virtual canvas by digitally changing the value and sign of the horizontal disparity between the right-eye and left-eye images, and their scaling on the digital electronic canvas, corresponding to the instant virtual distance between the painter and an instant image within the virtual 3D canvas; providing at least one multi-axis input control device that allows digital painting or drawing on the digital electronic canvas; and painting within the virtual 3D canvas by producing the simultaneous appearance of a similar stroke on the right-eye and left-eye images on the digital electronic canvas.
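The disparity-from-distance relationship above follows from similar triangles between the eyes, the screen plane, and the virtual point. The function below is a geometric sketch; the eye separation, screen distance, and pixel density defaults are illustrative assumptions, not values from the patent.

```python
def disparity_pixels(virtual_distance, eye_separation=0.065,
                     screen_distance=0.6, pixels_per_meter=4000):
    """Horizontal disparity (in pixels) for a point at `virtual_distance`
    meters from the viewer, from similar triangles. Points on the screen
    plane have zero disparity; nearer points get negative (crossed)
    disparity, farther points positive (uncrossed) disparity."""
    d = eye_separation * (virtual_distance - screen_distance) / virtual_distance
    return d * pixels_per_meter
```

As the painter moves a stroke deeper into the virtual canvas, the same stroke is drawn on both eye images with the disparity (and scaling) updated continuously from this distance.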