H04N13/275

Methods and apparatus for encoding, communicating and/or using images

Methods and apparatus for capturing, communicating and using image data to support virtual reality experiences are described. Images, e.g., frames, are captured at high resolution but at a lower frame rate than is used for playback. Interpolation is applied to the captured frames to generate interpolated frames, and the captured frames are communicated to the playback device along with the interpolated frame information. The combination of captured and interpolated frames corresponds to a second frame playback rate which is higher than the image capture rate. The cameras operate at a higher image resolution but slower frame rate than the same cameras could achieve at a lower resolution. Interpolation is performed prior to delivery to the user device, with the segments to be interpolated selected based on motion and/or lens field-of-view (FOV) information. A relatively small amount of interpolated frame data is communicated compared to captured frame data, making efficient use of bandwidth.
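The motion-based segment selection described above can be sketched as follows. This is an illustrative stand-in, not the patented method: the function name, block size, and the use of mean absolute difference as a motion proxy are all assumptions made for the example.

```python
import numpy as np

def interpolate_segments(frame_a, frame_b, motion_threshold=5.0, block=8):
    """Blend two captured frames into one interpolated frame, limiting
    interpolation to blocks whose motion (approximated here by mean
    absolute difference) exceeds a threshold, so that only a small
    amount of interpolated frame data needs to be communicated."""
    h, w = frame_a.shape
    interp = frame_a.astype(np.float32).copy()
    selected = []  # blocks worth transmitting as interpolated data
    for y in range(0, h, block):
        for x in range(0, w, block):
            a = frame_a[y:y + block, x:x + block].astype(np.float32)
            b = frame_b[y:y + block, x:x + block].astype(np.float32)
            if np.mean(np.abs(a - b)) > motion_threshold:
                interp[y:y + block, x:x + block] = 0.5 * (a + b)
                selected.append((y, x))
    return interp.astype(frame_a.dtype), selected
```

Static blocks are left as copies of the captured frame, so the transmitted interpolated data covers only the moving segments.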

Imaging Method and System Based on Wise-pixels with Valved Modulation
20230039767 · 2023-02-09

This disclosure presents a novel smart CMOS imaging sensor, together with methods and a system for imaging an object using it. The CMOS-implemented 3D imaging system comprises an imaging sensor containing wise-pixels and a scanning light point or beam, and achieves 3D shape reconstruction by recording the response of each wise-pixel to the incident light over the period of “valve modulation”. A “valve modulation” is a single cycle of accumulation and release of charge, and a frame period comprises multiple valve modulations. During the frame period, each wise-pixel repeatedly stores the incident light intensity temporarily and then releases it, while selecting a preferred intensity (e.g., the globally maximum intensity, the locally maximum intensities, and/or the intensities above a certain threshold) over the whole frame period; the selected intensity and its corresponding time are exported to the computing units. Selection of the different preferred light intensities is implemented by memory-based, threshold-based, and difference-based approaches, respectively. The obtained maximum intensity and time information can be used to reconstruct the 3D geometry of the object surface scanned by the moving light source.
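The memory-based, maximum-intensity variant of the per-pixel selection can be sketched as below. This is a minimal software analogue of the in-pixel circuit, assuming each valve modulation yields one intensity sample; the function name and interface are illustrative.

```python
def select_peak(samples):
    """Memory-based selection over one frame period: the wise-pixel
    accumulates and releases charge once per valve modulation, keeps
    only the maximum released intensity seen so far, and exports that
    intensity together with the modulation index at which it occurred."""
    best_intensity, best_time = samples[0], 0
    for t, intensity in enumerate(samples[1:], start=1):
        if intensity > best_intensity:
            best_intensity, best_time = intensity, t
    return best_intensity, best_time
```

Because the light point scans the scene, the exported time index maps to a known illumination direction, which is what allows the 3D surface position to be triangulated from (intensity, time) pairs.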

METHOD FOR DETERMINING THE COMPLEX AMPLITUDE OF THE ELECTROMAGNETIC FIELD ASSOCIATED WITH A SCENE

A method for determining the complex amplitude of the electromagnetic field associated with a scene, comprising: a) capturing a plurality of images of the scene by means of a photographic camera, the images being focused in planes of focus arranged at different distances, wherein the camera comprises a lens of focal length F and a sensor arranged at a certain distance from the lens in its image space; and b) taking at least one image pair from the plurality of images and determining the wavefront accumulated up to the conjugate plane, in object space, of the plane intermediate between the planes of focus of the two images of the pair.
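The conjugate-plane geometry used in step b) follows from the thin-lens equation. The sketch below, with illustrative function names and units left to the caller, locates the object-space conjugate of the image-space plane midway between the two focus positions of a pair:

```python
def conjugate_distance(F, sensor_distance):
    """Thin-lens conjugate: the object-space distance whose image falls
    at sensor_distance behind a lens of focal length F.
    1/F = 1/d_obj + 1/d_img  =>  d_obj = 1 / (1/F - 1/d_img)."""
    return 1.0 / (1.0 / F - 1.0 / sensor_distance)

def midpoint_conjugate(F, d_img1, d_img2):
    """Object-space conjugate of the plane intermediate (in image
    space) between the two focus positions of an image pair."""
    return conjugate_distance(F, 0.5 * (d_img1 + d_img2))
```

For example, with F = 50 and sensor positions 70 and 80 (same length units), the intermediate image plane at 75 is conjugate to an object plane at 150.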

Encoding apparatus and encoding method, decoding apparatus and decoding method
11716487 · 2023-08-01

There is provided an encoding apparatus, an encoding method, a decoding apparatus, and a decoding method that make it possible to acquire two-dimensional image data of viewpoints corresponding to a predetermined display image generation method, together with depth image data, independently of the viewpoint at the time of image pickup. A conversion unit generates, from three-dimensional data of an image pickup object, two-dimensional image data of a plurality of viewpoints corresponding to the predetermined display image generation method, and depth image data indicating the position of each pixel of the image pickup object in the depthwise direction. An encoding unit encodes the two-dimensional image data and the depth image data generated by the conversion unit, and a transmission unit transmits the encoded data. The present disclosure can be applied, for example, to an encoding apparatus.
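The conversion unit's job, producing a 2D image plus a depth image per chosen viewpoint from 3D data, can be sketched for a point-cloud input. This is an orthographic, z-buffered stand-in with illustrative names and parameters, not the patent's actual projection model:

```python
import numpy as np

def render_view(points, colors, cam_pos, size=(8, 8), scale=1.0):
    """Project a 3D point set to one viewpoint, producing a 2D color
    image and a depth image (position of each pixel in the depthwise
    direction); nearer points win via a z-buffer."""
    h, w = size
    image = np.zeros((h, w), dtype=np.float32)
    depth = np.full((h, w), np.inf, dtype=np.float32)
    for (x, y, z), c in zip(points, colors):
        d = z - cam_pos[2]              # depth along the viewing axis
        if d <= 0:
            continue                    # point is behind the camera
        u = int((x - cam_pos[0]) * scale) + w // 2
        v = int((y - cam_pos[1]) * scale) + h // 2
        if 0 <= u < w and 0 <= v < h and d < depth[v, u]:
            depth[v, u] = d
            image[v, u] = c
    return image, depth
```

Running this once per viewpoint yields the plurality of 2D image / depth image pairs that the encoding unit would then compress and the transmission unit send.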

THREE-DIMENSIONAL DISPLAY DEVICE, THREE-DIMENSIONAL DISPLAY METHOD, AND PROGRAM
20230028185 · 2023-01-26

Provided are a three-dimensional display device, a three-dimensional display method, and a program capable of notifying a user of an event having a causal relationship with damage. The three-dimensional display device (10) includes a memory (16) that stores a three-dimensional model of a structure, damage displayed in the three-dimensional model, and an event that has a causal relationship with the damage; a display unit (26); and a processor (20). In the three-dimensional display device (10), the processor (20) causes the display unit (26) to display the three-dimensional model, to superimpose the damage on the three-dimensional model, and to display the event having the causal relationship with the damage.
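The stored association between the 3D model's damage and its causally related events can be sketched as a small data model. The class and field names below are illustrative assumptions, not taken from the disclosure:

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    """An event with a causal relationship to the damage."""
    date: str
    description: str

@dataclass
class Damage:
    """Damage to be superimposed on the structure's 3D model."""
    location: tuple                               # position on the model surface
    kind: str                                     # e.g. "crack"
    events: list = field(default_factory=list)    # causally related events

def events_for(damage):
    """What the display step shows alongside the superimposed damage."""
    return [e.description for e in damage.events]
```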

MEDICAL IMAGE PROCESSING SYSTEM, SURGICAL IMAGE CONTROL DEVICE, AND SURGICAL IMAGE CONTROL METHOD
20230222740 · 2023-07-13

[Problem] An aspect of the present disclosure provides a medical image processing system that facilitates the recognition of the position of a 3D model image in a medical image.

[Solution] A medical image processing system includes: an acquisition unit that acquires a real-time 3D surgical image of an operation site, stereoscopically viewable by a surgeon, and a 3D model image that is a stereoscopic CG image associated with the 3D surgical image; and a superimposition unit that, when the 3D model image is superimposed at predetermined spatial positions on the stereoscopically viewed 3D surgical image, enhances the 3D model image (or the 3D surgical image) at the start of the superimposition, on the basis of information set for the 3D model image, so that the location of the 3D model image is emphasized.
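One plausible form of the start-of-superimposition enhancement is a brightness boost that decays back to normal, drawing the surgeon's eye to where the overlay appeared. The sketch below is an assumption for illustration; the function name, ramp shape, and parameter values are not from the disclosure:

```python
def enhancement_gain(t, t_start, ramp=1.0, peak=2.0):
    """Brightness gain applied to the 3D model image at the start of
    superimposition: boosted to `peak` when the overlay first appears,
    then decaying linearly back to 1.0 over `ramp` seconds so the
    model's spatial position is emphasized only briefly."""
    dt = t - t_start
    if dt < 0:
        return 1.0                          # overlay not yet shown
    return peak - (peak - 1.0) * min(dt / ramp, 1.0)
```

Multiplying the 3D model image's pixel values by this gain per rendered frame yields the transient emphasis; the same schedule could instead modulate outline thickness or color.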

MULTIVIEW DISPLAY SYSTEM AND METHOD WITH ADAPTIVE BACKGROUND
20230217000 · 2023-07-06

An adaptive-background multiview image display system and method provides improved multiview image quality. Systems and methods may involve generating crosstalk data that reduces crosstalk between a first view of a subject image and a second view of the subject image. The subject image may be a multiview image to be overlaid on a background image. A crosstalk violation may be detected in the subject image based on the crosstalk data. At least one of a color value or a brightness value of the background image is determined according to the degree of the crosstalk violation, to generate the background image. The subject image may then be overlaid on the generated background image.
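The detect-violation, adapt-background loop can be sketched as follows. The crosstalk model (a fixed leaked fraction of the inter-view difference), the violation limit, and the function name are all illustrative assumptions, not the patented formulation:

```python
import numpy as np

def adapt_background(view_a, view_b, background, leak=0.1, limit=8.0):
    """Estimate crosstalk between two views of the subject image as a
    leaked fraction of their difference; where that exceeds a limit
    (a crosstalk violation), darken the background in proportion to
    the degree of violation so the leak is less visible against it."""
    crosstalk = leak * np.abs(view_a.astype(np.float32)
                              - view_b.astype(np.float32))
    violation = np.clip(crosstalk - limit, 0.0, None)   # degree of violation
    degree = violation / (violation.max() + 1e-6)       # normalized 0..1
    adapted = background.astype(np.float32) * (1.0 - degree)
    return adapted.astype(background.dtype)
```

Regions with no violation keep the original background, while high-contrast regions of the subject image get a darker backdrop; the same scheme could modulate the background's color value instead of its brightness.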