G06T3/18

Image processing apparatus and method, data, and recording medium

The present disclosure relates to an image processing apparatus and method, data, and a recording medium by which the invisibility of corresponding point detection can be improved. A pattern picked-up image is obtained by an image pickup section capturing a predetermined structured light pattern projected by a projection section. The pattern has a plurality of pattern elements (for example, patterns 101-1 and 101-2) disposed therein, each having a Gaussian luminance distribution (for example, curve 102-1 or 102-2). This picked-up image is used to detect corresponding points between a projection image projected by the projection section and a picked-up image captured by the image pickup section. The present disclosure can be applied, for example, to an image processing apparatus, a projection apparatus, an image pickup apparatus, a projection image pickup apparatus, a control apparatus, a projection image pickup system, and so forth.
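
One reason to give each pattern element a Gaussian luminance profile is that its position can be recovered to sub-pixel precision with an intensity-weighted centroid. The sketch below is illustrative only (the element size, sigma, and centroid method are assumptions, not the patented method):

```python
import math

def gaussian_dot(size, cx, cy, sigma):
    """Render one pattern element with a Gaussian luminance profile."""
    return [[math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
             for x in range(size)] for y in range(size)]

def centroid(img):
    """Sub-pixel peak location via intensity-weighted centroid."""
    total = sum(v for row in img for v in row)
    cx = sum(x * v for row in img for x, v in enumerate(row)) / total
    cy = sum(y * v for y, row in enumerate(img) for v in row) / total
    return cx, cy

dot = gaussian_dot(21, 10.3, 9.7, 2.0)
print(centroid(dot))  # recovers roughly (10.3, 9.7), well below one pixel of error
```

A detector matching such centroids between the projected pattern and the captured image yields the corresponding points the abstract describes.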

Single pass rendering for head mounted displays
10853988 · 2020-12-01

A method of rendering geometry of a 3D scene for display on a non-standard projection display projects the geometry of the 3D scene into a 2D projection plane in which image regions are defined, maps the geometry from the projection plane into an image space using transformations (one transformation defined for each image region), and renders the geometry in the image space to determine image values of an image to be displayed on the non-standard projection display. The transformations are configured to map the geometry into the image space so as to counteract distortion introduced by an optical arrangement of the non-standard projection display.
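
The per-region mapping can be pictured as a lookup followed by a region-specific transform. The two-region table and the scale/offset transforms below are hypothetical placeholders for whatever pre-distortion the display optics require:

```python
# Illustrative 2-region setup on a normalized [0, 1] projection plane.
# Each region carries its own (scale_x, scale_y, offset_x, offset_y).
REGIONS = [
    (0.0, 0.5, (1.0, 1.0, 0.0, 0.0)),   # central region: identity
    (0.5, 1.0, (0.8, 0.8, 0.1, 0.1)),   # peripheral region: compress
]

def map_to_image_space(x, y):
    """Apply the transformation of whichever region contains (x, y)."""
    for x_min, x_max, (sx, sy, ox, oy) in REGIONS:
        if x_min <= x < x_max or (x == 1.0 and x_max == 1.0):
            return (sx * x + ox, sy * y + oy)
    raise ValueError("point outside projection plane")

print(map_to_image_space(0.25, 0.25))  # identity region
print(map_to_image_space(0.75, 0.25))  # compressed peripheral region
```

In the actual method the chosen transforms invert the headset optics' distortion, so rendering once in this pre-distorted image space replaces a separate distortion pass.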

Dynamic adaptation of images for projection, and/or of projection parameters, based on user(s) in environment
10853911 · 2020-12-01

Implementations relate to dynamic adaptation of images for projection by a projector, based on one or more properties of user(s) that are in an environment with the projector. The projector can be associated with an automated assistant client of a client device. In some versions of those implementations, a pose of a user in the environment is determined and, based on the pose, a base image for projecting onto a surface is warped to generate a transformed image. The transformed image, when projected onto a surface and viewed from the pose of the user, mitigates perceived differences relative to the base image. The base image (on which the transformed image is based) can optionally be generated in dependence on a distance of the user. Some implementations additionally or alternatively relate to dynamic adaptation of projection parameters (e.g., a location for projection, a size of projection) based on one or more properties of user(s) that are in an environment with the projector.
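Pose-dependent warping of this kind is commonly expressed as a planar homography: the base image is mapped through a 3×3 matrix chosen so the projection appears undistorted from the user's viewpoint. A minimal sketch, with an assumed (not patent-specified) homography for an oblique pose:

```python
def apply_homography(H, x, y):
    """Map point (x, y) through 3x3 homography H with perspective divide."""
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / w, yh / w

# Hypothetical homography for an oblique viewing pose: it shears and
# perspective-scales the base image so it looks rectangular from that pose.
H = [[1.0, 0.1, 5.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.001, 1.0]]

corners = [(0, 0), (100, 0), (100, 100), (0, 100)]
print([apply_homography(H, x, y) for x, y in corners])
```

Warping every pixel of the base image through such a mapping yields the "transformed image" the abstract describes; the matrix itself would be derived from the detected user pose and the projection surface.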

Image fusion architecture

Embodiments relate to circuitry for performing fusion of a first image and a second image, captured with two different exposure times, to generate a fused image having a higher dynamic range. Information about first keypoints is extracted from the first image by processing pixel values of pixels in the first image. A model describing correspondence between the first image and the second image is then built by processing at least the information about the first keypoints. A processed version of the first image is warped using mapping information in the model to generate a warped version of the first image that is spatially more aligned to the second image than to the first image. The warped version of the first image is fused with a processed version of the second image to generate the fused image.
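Once the two exposures are aligned, the fusion step itself can be as simple as preferring the long exposure except where it clipped. The per-pixel rule below is an illustrative weighting, not the circuit's actual fusion logic:

```python
def fuse(short_exp, long_exp, ratio, sat=255):
    """Blend aligned short- and long-exposure pixel rows into a
    higher-dynamic-range row. Where the long exposure is saturated, fall
    back to the short exposure scaled by the exposure ratio (illustrative)."""
    out = []
    for s, l in zip(short_exp, long_exp):
        if l >= sat:                        # long exposure clipped here
            out.append(float(s * ratio))    # rescale short exposure instead
        else:
            out.append(float(l))            # long exposure is cleaner: keep it
    return out

print(fuse([10, 40], [80, 255], ratio=8))  # → [80.0, 320.0]
```

Note the second output pixel exceeds 255: the fused image lives in a wider range than either input, which is the point of the exposure pair.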

VIRTUAL REALITY CINEMA-IMMERSIVE MOVIE WATCHING FOR HEAD-MOUNTED DISPLAYS

Peripheral-vision expanded images are streamed to a video streaming client. The peripheral-vision expanded images are generated from source images in reference to view directions of the viewer at respective time points. View direction data is collected and received in real time while the viewer is viewing display images derived from the peripheral-vision expanded images. A second peripheral-vision expanded image is generated from a second source image in reference to a second view direction of the viewer at a second time point. The second peripheral-vision expanded image has a focal-vision image portion covering the second view direction of the viewer and a peripheral-vision image portion outside the focal-vision image portion. The second peripheral-vision expanded image is transmitted to the video streaming client.
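The split between a focal-vision portion and a peripheral-vision portion amounts to allocating detail by angular distance from the view direction. A toy density function (the 15° focal radius and the fall-off law are assumptions for illustration):

```python
def sample_density(pixel_angle_deg, gaze_angle_deg, focal_radius_deg=15.0):
    """Relative sampling density for a pixel at a given visual angle.
    Full resolution inside the focal-vision region around the view
    direction; coarser sampling further into the periphery (illustrative)."""
    offset = abs(pixel_angle_deg - gaze_angle_deg)
    if offset <= focal_radius_deg:
        return 1.0                        # focal-vision portion: full detail
    return focal_radius_deg / offset      # peripheral portion: density decays

print(sample_density(5, 0))    # inside the focal region → 1.0
print(sample_density(60, 0))   # far periphery → 0.25
```

Streaming images encoded this way saves bandwidth in the periphery while the real-time view-direction feedback keeps the full-detail region under the viewer's gaze.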

SYSTEMS AND METHODS FOR AUTOMATIC EYE GAZE REFINEMENT
20200371586 · 2020-11-26

In response to its front-facing camera capturing a digital image of an individual, a computing device applies facial landmark detection and identifies eye regions in the digital image. For at least one of the eye regions, the computing device is further configured to extract attributes of the eye region, determine an eye gaze score based on the extracted attributes, generate a modified eye region based on the eye gaze score, and output a modified digital image with the modified eye region.

Flow meter

Provided is a system for regulating fluid flow, having a processor configured to reduce image noise. The system includes an image sensor to capture an image of a drip chamber and a valve to regulate the fluid flowing from the drip chamber to a patient. The processor captures the image of the drip chamber using the image sensor, performs edge detection on the image to generate a first processed image, and performs an AND operation on a pixel on a first side of an axis of the first processed image with the corresponding mirror pixel on the second side of the axis to generate a second processed image.
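The mirror-AND step exploits the left-right symmetry of a hanging drop: real edges appear on both sides of the axis, so ANDing each pixel with its mirror keeps them while discarding one-sided noise. A minimal sketch on a binary edge image, assuming the symmetry axis is the vertical centre of each row:

```python
def mirror_and(edges):
    """AND each pixel of a binary edge image with its mirror across the
    vertical centre axis; symmetric edges survive, asymmetric noise does not."""
    return [[row[x] & row[len(row) - 1 - x] for x in range(len(row))]
            for row in edges]

# 1 = edge pixel. The symmetric pair in row 0 survives; the pixels in
# row 1 have no mirrored partners and are removed as noise.
edges = [[1, 0, 0, 1],
         [1, 1, 0, 0]]
print(mirror_and(edges))  # → [[1, 0, 0, 1], [0, 0, 0, 0]]
```

The cleaned second image makes downstream drop detection, and hence the valve's flow regulation, more robust.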

View synthesis using deep convolutional neural networks

Disclosed is a system and method for generating intermediate views between two received images. To generate the intermediate views, a rectification network rectifies the two images and an encoder network encodes the two rectified images to generate convolutional neural network features. These features are fed to a decoder network that decodes them to produce a correspondence between the two rectified images, along with blending masks that predict the visibility of pixels of the rectified images in the intermediate view images. Using the correspondence and the blending masks, a view morphing network synthesizes intermediate view images that depict an object present in both inputs from viewpoints between those of the two images.
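The final morphing step reduces, per pixel, to a visibility-weighted blend of the two warped inputs. The sketch below assumes the warps and masks have already been produced by the networks above; the weighting scheme is a standard view-morphing formulation, not necessarily this system's exact one:

```python
def blend_views(warp_a, warp_b, mask_a, mask_b, alpha):
    """Combine two warped (rectified) rows of pixels into an intermediate
    view. mask_a / mask_b give per-pixel predicted visibility in [0, 1];
    alpha positions the intermediate view between the two inputs."""
    out = []
    for pa, pb, ma, mb in zip(warp_a, warp_b, mask_a, mask_b):
        wa, wb = (1 - alpha) * ma, alpha * mb
        out.append((wa * pa + wb * pb) / (wa + wb))
    return out

# Second pixel is occluded in view B (mask 0), so it comes entirely from A.
print(blend_views([100, 200], [200, 0], [1, 1], [1, 0], alpha=0.5))
# → [150.0, 200.0]
```

The masks are what let the network handle occlusions: a pixel invisible in one rectified image simply contributes zero weight there.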

Policies and architecture to dynamically offload VR processing to HMD based on external cues

Virtual reality systems and methods are described. For example, one embodiment of an apparatus comprises: a communications interface to provide frame data of a virtual reality scene to a head mounted display (HMD); at least one performance monitor coupled to at least one component of the apparatus, the performance monitor to monitor performance of that component and to send an alert based on its performance; a processor to process the frame data; a controller to receive the alert and to offload processing of the frame data from the processor to the HMD; and a display to show the rendered view of the scene.
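The controller's role can be sketched as a small policy object that flips an offload flag when a performance alert shows the host missing its frame budget. The ~11.1 ms threshold (a 90 fps budget) and the alert payload are assumptions for illustration, not the patented architecture:

```python
class OffloadController:
    """Toggle HMD offload in response to performance-monitor alerts
    (hypothetical threshold policy for illustration)."""

    def __init__(self, threshold_ms=11.1):   # ~90 fps frame budget
        self.threshold_ms = threshold_ms
        self.offload_to_hmd = False

    def on_alert(self, frame_time_ms):
        # The alert is assumed to carry the monitored component's
        # frame processing time; offload when the budget is exceeded.
        self.offload_to_hmd = frame_time_ms > self.threshold_ms
        return self.offload_to_hmd

ctl = OffloadController()
print(ctl.on_alert(9.0))    # within budget → keep processing locally
print(ctl.on_alert(14.5))   # over budget → offload frame processing to HMD
```

The point of the external-cue design is that the decision reacts to measured load rather than a static configuration.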

Generating a new frame using rendered content and non-rendered content from a previous perspective

Disclosed is an approach for constructing a new frame using rendered content and non-rendered content from a previous perspective. Points of visible surfaces of a first set of objects from a first perspective are rendered. Both rendered content and non-rendered content from the first perspective are stored. A new frame from a second perspective is then generated using the rendered content and the non-rendered content from the first perspective.
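
The value of keeping non-rendered (occluded) content is that a perspective shift can expose it, letting the new frame be reprojected from stored points instead of re-rendered. A toy 1-D parallax sketch, where the shift rule and data layout are assumptions for illustration:

```python
def reproject(points, dx):
    """Build a frame from a second perspective shifted by dx, reusing points
    stored from the first perspective (toy 1-D parallax model). Stored
    'non-rendered' points that were occluded can become visible here."""
    frame = {}
    for x, depth, colour in points:          # stored rendered + non-rendered
        new_x = round(x + dx / depth)        # nearer surfaces shift more
        if new_x not in frame or depth < frame[new_x][0]:
            frame[new_x] = (depth, colour)   # keep nearest surface per pixel
    return {x: c for x, (d, c) in frame.items()}

# Two surfaces overlapped at pixel 5 in the first perspective: the near one
# was rendered, the far one stored as non-rendered content. After the shift
# they separate, and the previously hidden surface fills its own pixel.
pts = [(5, 1.0, "near"), (5, 4.0, "far")]
print(reproject(pts, dx=4))  # → {9: 'near', 6: 'far'}
```

Without the stored non-rendered content, pixel 6 would be a disocclusion hole that only a full re-render could fill.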