Patent classifications
H04N13/122
Overscan for 3D display
A display processor and computer-implemented method are provided for processing three-dimensional [3D] image data for display on a 3D display. The 3D display is arranged for emitting a series of views of the 3D image data which enables stereoscopic viewing of the 3D image data at multiple viewing positions. The series of views may be displayed on the 3D display using overscan. The degree of overscan may be determined as a function of one or more depth range parameters, the one or more depth range parameters characterizing, at least in part, a degree of depth perceived by a viewer when the series of views is displayed on the 3D display.
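The abstract's core idea — the degree of overscan being a function of a depth range parameter — could be sketched as follows. The linear mapping, the clamping, and all parameter names are illustrative assumptions, not details taken from the patent:

```python
def overscan_factor(depth_range, max_depth_range=1.0,
                    min_factor=1.0, max_factor=1.2):
    """Map a normalized depth-range parameter to an overscan scale factor.

    A larger perceived depth range yields more overscan (the image is
    scaled up so that view-shifted content near the edges stays covered).
    The linear ramp and the 1.0-1.2 range are illustrative assumptions.
    """
    # Clamp the depth-range parameter to [0, 1] of its allowed range.
    t = min(max(depth_range / max_depth_range, 0.0), 1.0)
    # Interpolate linearly between no overscan and maximum overscan.
    return min_factor + t * (max_factor - min_factor)
```

Content with no perceived depth would then be shown without overscan, while content at the full depth range would be scaled by the maximum factor.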
DISPLAY SYSTEMS AND METHODS FOR CLIPPING CONTENT TO INCREASE VIEWING COMFORT
Augmented and virtual reality display systems increase viewer comfort by reducing viewer exposure to virtual content that causes undesirable accommodation-vergence mismatches (AVM). The display systems limit displaying content that exceeds an accommodation-vergence mismatch threshold, which may define a volume around the viewer. The volume may be subdivided into two or more zones, including an innermost loss-of-fusion zone (LoF) in which no content is displayed, and one or more outer AVM zones in which the displaying of content may be stopped, or clipped, under certain conditions. For example, content may be clipped if the viewer is verging within an AVM zone and if the content is displayed within the AVM zone for more than a threshold duration. A further possible condition for clipping content is that the user is verging on that content. In addition, the boundaries of the AVM zone and/or the acceptable amount of time that the content is displayed may vary depending upon the type of content being displayed, e.g., whether the content is user-locked content or in-world content.
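The zone-based clipping decision described in this abstract could be sketched as below. The zone boundaries, the dwell-time threshold, and the data layout are hypothetical; the abstract specifies only the conditions (content in an AVM zone, viewer verging in that zone, display time exceeding a threshold), not their representation:

```python
from dataclasses import dataclass

@dataclass
class Zone:
    near: float       # inner boundary, metres from the viewer
    far: float        # outer boundary, metres from the viewer
    max_dwell: float  # seconds content may be displayed before clipping

def should_clip(content_dist, dwell_time, verging_dist, avm_zones, lof_far):
    """Decide whether to clip (stop displaying) a piece of virtual content.

    Sketch of the two-tier scheme in the abstract: content inside the
    loss-of-fusion (LoF) zone is never displayed; content in an AVM zone
    is clipped once the viewer verges within that zone and the content
    has been displayed there longer than the zone's threshold duration.
    """
    if content_dist < lof_far:  # innermost LoF zone: always clip
        return True
    for zone in avm_zones:
        content_in_zone = zone.near <= content_dist < zone.far
        verging_in_zone = zone.near <= verging_dist < zone.far
        if content_in_zone and verging_in_zone and dwell_time > zone.max_dwell:
            return True
    return False
```

Per the abstract, the zone boundaries and allowed dwell times could additionally vary with content type (user-locked versus in-world), which this sketch omits.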
Photoelectric conversion apparatus, method of driving photoelectric conversion apparatus, photoelectric conversion system, and moving body
A photoelectric conversion apparatus includes a control unit configured to change a voltage of an input node from a first voltage toward a predetermined voltage during a predetermined time period after the voltage of the input node changes to the first voltage and before the voltage of the input node changes to a second voltage. A method of driving the photoelectric conversion apparatus includes controlling changing of the voltage of the input node from the first voltage toward the predetermined voltage during the predetermined time period.
Sensor misalignment compensation
Camera compensation methods and systems compensate for misalignment of sensors/cameras in stereoscopic camera systems. The compensation includes identifying a pitch angle offset between a first camera and a second camera, determining misalignment of the first and second cameras from the identified pitch angle offset, determining a relative compensation delay responsive to the determined misalignment, introducing the relative compensation delay to the image streams produced by the cameras, and producing a stereoscopic image on a display from the first and second image streams with the introduced delay.
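One plausible reading of the pitch-offset-to-delay step is a rolling-shutter interpretation: a pitch offset shifts one camera's image vertically by roughly f·tan(θ) pixels, and with a rolling shutter each row is exposed one row-time later, so a row shift maps to a timing delay between the two streams. That interpretation, and every parameter name below, is an assumption for illustration, not the patent's stated method:

```python
import math

def compensation_delay(pitch_offset_deg, focal_length_px, row_time_s):
    """Convert a pitch-angle offset between two cameras into a relative
    timing delay between their image streams.

    Assumes a rolling-shutter sensor: a pitch offset of theta shifts the
    image vertically by about focal_length_px * tan(theta) rows, and each
    row is read out row_time_s after the previous one, so the shift is
    equivalent to delaying one stream by row_shift * row_time_s seconds.
    """
    row_shift = focal_length_px * math.tan(math.radians(pitch_offset_deg))
    return row_shift * row_time_s
```

For example, a 1-degree pitch offset at a 1000-pixel focal length with a 10 microsecond row time corresponds to a delay of roughly 0.17 ms.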
Electronic apparatus and control method thereof
An electronic apparatus includes a stacked display including a plurality of panels, and a processor configured to obtain first light field (LF) images of different viewpoints, input the obtained first LF images to an artificial intelligence model for converting an LF image into a layer stack, to obtain a plurality of layer stacks to which a plurality of shifting parameters indicating depth information in the first LF images are respectively applied, and control the stacked display to sequentially and repeatedly display, on the stacked display, the obtained plurality of layer stacks. The artificial intelligence model is trained by applying the plurality of shifting parameters that are obtained based on the depth information in the first LF images.
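The application of a shifting parameter to a layer stack could look like the following sketch, where each panel layer is displaced laterally in proportion to its index so that content appears at the implied depth. The per-index linear shift via `np.roll` is an illustrative stand-in for the learned, depth-derived shifts the abstract describes:

```python
import numpy as np

def shift_layer_stack(layers, shift_px):
    """Apply a lateral shifting parameter to a stack of display-panel layers.

    Shifts layer i horizontally by i * shift_px pixels, so deeper panels
    are displaced more and the stack reproduces parallax consistent with
    the depth implied by shift_px. The wrap-around roll and the linear
    per-layer progression are simplifying assumptions.
    """
    return [np.roll(layer, shift=i * shift_px, axis=1)
            for i, layer in enumerate(layers)]
```

A set of such stacks, each built with a different shifting parameter, could then be displayed sequentially and repeatedly as the abstract describes.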
ENHANCED THREE DIMENSIONAL VISUALIZATION USING ARTIFICIAL INTELLIGENCE
Apparatus and methods for enhanced 3D visualization include receiving a plurality of images of an image scene from a plurality of image sensors. Depth information at locations of the image scene is received from a plurality of depth sensors. The depth information is combined with the plurality of images of the image scene using a machine learning model. A 3D representation of the image scene is generated based on the combined depth and image information.
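The depth-image combination step could be sketched as channel concatenation, one common way to fuse aligned depth and RGB data before feeding a learned model. The abstract does not specify the fusion scheme or model architecture, so this layout is an assumption:

```python
import numpy as np

def fuse_depth_and_images(images, depth_maps):
    """Combine RGB images with aligned per-pixel depth maps into one batch.

    Stacks each HxWx3 image with its HxW depth map as a fourth channel,
    producing an NxHxWx4 array suitable as input to a learned fusion
    model. Channel concatenation is an illustrative choice; the abstract
    leaves the combination method to the machine learning model.
    """
    fused = [np.concatenate([img, depth[..., None]], axis=-1)
             for img, depth in zip(images, depth_maps)]
    return np.stack(fused)
```

The fused batch would then be the input from which the 3D representation of the scene is generated.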