H04N13/122

SMART WEARABLE DEVICE FOR VISION ENHANCEMENT AND METHOD FOR REALIZING STEREOSCOPIC VISION TRANSPOSITION
20230239447 · 2023-07-27 ·

The invention discloses a smart wearable device for vision enhancement and a method for realizing stereoscopic vision transposition. The device comprises a wearable device body provided with camera lenses, image sensors, an image information receiving and transmitting unit, image enhancement units, and near-to-eye optical systems. The optical axis and field angle of each near-to-eye optical system are matched with the optical axis and field angle of the corresponding camera lens, and each image sensor is arranged behind its camera lens. The real scene enters the image sensor through the imaging device for image acquisition, and the image enhancement unit enhances the low-light image collected by the smart wearable device in a low-light environment so that it is displayed clearly. The invention ensures the enhancement of real stereoscopic vision in dark environments and the interchange of remote, barrier-free stereoscopic real vision.
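The abstract does not name the enhancement algorithm used by the image enhancement unit. As an illustrative stand-in only, a minimal low-light enhancement can be sketched with gamma correction, which lifts dark tones while leaving fully bright pixels unchanged:

```python
import numpy as np

def enhance_low_light(image: np.ndarray, gamma: float = 0.4) -> np.ndarray:
    """Brighten a low-light image with simple gamma correction.

    `image` is a uint8 array in [0, 255]; gamma < 1 lifts dark tones.
    A stand-in for the patent's unspecified enhancement unit.
    """
    normalized = image.astype(np.float64) / 255.0
    enhanced = np.power(normalized, gamma)
    return (enhanced * 255.0).round().astype(np.uint8)
```

Real devices typically use more elaborate pipelines (denoising, tone mapping, multi-frame fusion); gamma correction only illustrates the enhance-then-display step.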

IMAGE RECTIFICATION METHOD AND DEVICE, AND ELECTRONIC SYSTEM
20230025058 · 2023-01-26 ·

Provided are an image rectification method and apparatus, and an electronic system. The image rectification method includes: acquiring a first image and a second image of the same shooting object by means of a first shooting apparatus and a second shooting apparatus that are coaxially disposed; and correcting the second image according to the shooting parameters of the first and second shooting apparatuses to obtain a rectified second image, such that the parallax between the rectified second image and the first image in the vertical or horizontal direction is zero. By taking the first image as a reference and using the shooting parameters to rectify only the second image, the method improves the operational efficiency of image rectification as well as the accuracy and stability of the rectification result.
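The core idea, rectifying only the second image so that its vertical parallax against the reference image becomes zero, can be sketched with a deliberately simplified stand-in: a pure vertical shift derived from the cameras' calibration. The real method would apply a full warp computed from the shooting parameters; the `vertical_offset` below is a hypothetical placeholder for that calibration result.

```python
import numpy as np

def rectify_second_image(second: np.ndarray, vertical_offset: int) -> np.ndarray:
    """Shift the second image vertically so corresponding points share a
    row with the first (reference) image, driving vertical parallax to 0.

    `vertical_offset` stands in for the offset derived from the two
    shooting apparatuses' parameters (illustrative, not the full warp).
    """
    return np.roll(second, -vertical_offset, axis=0)
```

Because only the second image is transformed, the reference image never needs resampling, which is the efficiency argument the abstract makes.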

IMAGE PROCESSING METHOD, VR DEVICE, TERMINAL, DISPLAY SYSTEM, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM

An image processing method includes: acquiring a fixation point position on a respective screen viewed by each of dominant eye(s); determining a fixation area of a left-eye screen and a fixation area of a right-eye screen according to fixation point position(s) corresponding to the dominant eye(s); rendering a first part of a left-eye image to be displayed on the left-eye screen at a first resolution, and rendering a second part of the left-eye image at a second resolution; rendering a first part of a right-eye image to be displayed on the right-eye screen at a third resolution, and rendering a second part of the right-eye image at a fourth resolution. A resolution of an image to be displayed in a fixation area of the respective screen is greater than resolutions of images to be displayed in other areas of the left-eye screen and the right-eye screen.
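The rendering scheme above is foveated rendering: full resolution inside the fixation area, reduced resolution elsewhere. A minimal sketch, assuming a circular fixation area and simulating the lower peripheral resolution by block-averaging (all names and the blending scheme are illustrative, not the patent's implementation):

```python
import numpy as np

def foveated_render(image, fovea_center, fovea_radius, peripheral_factor=4):
    """Keep full resolution inside the fixation area; elsewhere simulate a
    lower rendering resolution by averaging f x f pixel blocks."""
    h, w = image.shape[:2]
    f = peripheral_factor
    # Low-resolution periphery: downsample by block averaging, then upsample.
    low = image[:h - h % f, :w - w % f].reshape(h // f, f, w // f, f).mean(axis=(1, 3))
    low_up = np.repeat(np.repeat(low, f, axis=0), f, axis=1)
    out = np.zeros_like(image, dtype=float)
    out[:low_up.shape[0], :low_up.shape[1]] = low_up
    # Fixation mask: pixels near the fixation point keep their sharp values.
    yy, xx = np.mgrid[0:h, 0:w]
    mask = (yy - fovea_center[0]) ** 2 + (xx - fovea_center[1]) ** 2 <= fovea_radius ** 2
    out[mask] = image[mask]
    return out
```

In the patent's terms, this would be run once per eye, with each eye's fixation point taken from the dominant-eye gaze estimate, and with potentially different resolution pairs for the left and right screens.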

IMAGE SENSORS AND SENSING METHODS TO OBTAIN TIME-OF-FLIGHT AND PHASE DETECTION INFORMATION
20230232130 · 2023-07-20 ·

Indirect time-of-flight (i-ToF) image sensor pixels, i-ToF image sensors including such pixels, stereo cameras including such image sensors, and sensing methods to obtain i-ToF detection and phase detection information using such image sensors and stereo cameras. An i-ToF image sensor pixel may comprise a plurality of sub-pixels, each sub-pixel including a photodiode, a single microlens covering the plurality of sub-pixels and a read-out circuit for extracting i-ToF phase signals of each sub-pixel individually.
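For context on the i-ToF phase signals mentioned above: in standard 4-phase indirect time-of-flight demodulation, each pixel integrates the returned light at four phase offsets and recovers distance from the resulting phase angle. The sketch below shows that textbook computation; the sample sign convention varies between sensors, so treat it as illustrative rather than any particular sensor's readout.

```python
import math

def itof_distance(a0, a90, a180, a270, f_mod):
    """Recover distance from the four phase-shifted correlation samples of
    an indirect time-of-flight pixel (standard 4-phase demodulation).

    f_mod is the modulation frequency in Hz. Sign conventions for the
    samples differ between sensors; this follows one common convention.
    """
    c = 299_792_458.0  # speed of light, m/s
    phase = math.atan2(a90 - a270, a0 - a180) % (2 * math.pi)
    # Light travels to the target and back, hence the factor 4*pi.
    return c * phase / (4 * math.pi * f_mod)
```

The unambiguous range is c / (2 f_mod), e.g. about 7.5 m at 20 MHz; distances beyond that wrap around.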

Method for transmitting video, apparatus for transmitting video, method for receiving video, and apparatus for receiving video
11558597 · 2023-01-17 ·

An apparatus for receiving a video according to embodiments of the present invention comprises: a decoder configured to decode a bitstream based on viewing position and viewport information; an unpacker configured to unpack pictures in the decoded bitstream; a view regenerator configured to regenerate views from the unpacked pictures; and a view synthesizer configured to perform view synthesis on the regenerated pictures. A method of transmitting a video comprises: removing inter-view redundancy from pictures for multiple viewing positions; packing the redundancy-removed pictures; and encoding the packed pictures together with signaling information.
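The pack/unpack/synthesize flow can be sketched with toy stand-ins: packing as side-by-side concatenation of per-viewing-position pictures, and view synthesis as a blend between the two unpacked views. Real view synthesis warps pictures using depth and parallax, and real packing removes inter-view redundancy first; every function here is an illustrative simplification.

```python
import numpy as np

def pack_views(views):
    """Transmitter side: pack per-viewing-position pictures side by side
    into one frame (a toy stand-in for the patent's packing step)."""
    return np.concatenate(views, axis=1)

def unpack_views(frame, n_views):
    """Receiver side: split the packed frame back into individual pictures."""
    return np.split(frame, n_views, axis=1)

def synthesize_view(views, alpha):
    """Blend two unpacked views to approximate an intermediate viewpoint
    (real view synthesis would warp with depth/parallax information)."""
    return (1.0 - alpha) * views[0] + alpha * views[1]
```

The receiver mirrors the transmitter: decode, unpack, regenerate views, then synthesize the picture for the requested viewing position.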
Computer-generated image processing including volumetric scene reconstruction

An imagery processing system determines pixel color values for pixels of captured imagery from volumetric data, providing alternative pixel color values. A main imagery capture device, such as a camera, captures main imagery, such as still images and/or video sequences, of a live action scene. Alternative devices capture imagery of the live action scene in various spectra and forms, and capture information related to pixel color values at multiple depths of the scene, which can be processed to provide volumetric reconstruction.
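The per-depth color information can be pictured as a color volume indexed by depth slice; once the reconstruction selects a depth per pixel, resolving the alternative pixel colors is a gather over that volume. A minimal sketch of that final selection step (the data layout is an assumption for illustration, not the patent's representation):

```python
import numpy as np

def resolve_pixel_colors(color_volume, depth_index):
    """Pick each pixel's color from its selected depth slice.

    color_volume: [D, H, W, 3] color samples at D candidate depths.
    depth_index:  [H, W] integer slice chosen per pixel by reconstruction.
    """
    h, w = depth_index.shape
    yy, xx = np.mgrid[0:h, 0:w]
    return color_volume[depth_index, yy, xx]
```

The interesting work, choosing `depth_index` by fusing the main and alternative devices' captures, is what the volumetric reconstruction itself provides.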
Display systems and methods for clipping content to increase viewing comfort

Augmented and virtual reality display systems increase viewer comfort by reducing viewer exposure to virtual content that causes undesirable accommodation-vergence mismatches (AVM). The display systems limit displaying content that exceeds an accommodation-vergence mismatch threshold, which may define a volume around the viewer. The volume may be subdivided into two or more zones, including an innermost loss-of-fusion (LoF) zone in which no content is displayed, and one or more outer AVM zones in which the displaying of content may be stopped, or clipped, under certain conditions. For example, content may be clipped if the viewer is verging within an AVM zone and if the content is displayed within the AVM zone for more than a threshold duration. A further possible condition for clipping content is that the viewer is verging on that content. In addition, the boundaries of the AVM zone and/or the acceptable amount of time that the content is displayed may vary depending upon the type of content being displayed, e.g., whether the content is user-locked content or in-world content.
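The clipping rules above combine three tests: content inside the LoF zone is never shown; content inside an AVM zone is clipped only if the viewer is also verging there and the content has dwelt there past a threshold duration. A small sketch of that decision logic, with hypothetical threshold values (the patent's zone boundaries and durations are not given in the abstract and would also vary by content type):

```python
from dataclasses import dataclass

@dataclass
class ClipPolicy:
    lof_radius: float   # innermost loss-of-fusion zone boundary (metres)
    avm_radius: float   # outer accommodation-vergence-mismatch zone boundary
    max_dwell_s: float  # allowed display time inside the AVM zone (seconds)

    def should_clip(self, content_dist, vergence_dist, dwell_s):
        """True if the content must not be displayed, per the rules in the
        abstract (illustrative thresholds, not the patent's values)."""
        if content_dist < self.lof_radius:
            return True  # no content ever shown in the loss-of-fusion zone
        if content_dist < self.avm_radius:
            # Clip only if the viewer is also verging inside the AVM zone
            # and the content has dwelt there past the allowed duration.
            return vergence_dist < self.avm_radius and dwell_s > self.max_dwell_s
        return False
```

Per the abstract, a fuller implementation would select different `avm_radius` and `max_dwell_s` values for user-locked versus in-world content.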