H04N13/398

Light field display
11700364 · 2023-07-11

A method of displaying a light field to at least one viewer of a light field display device, the light field based on a 3D model, the light field display device comprising a plurality of spatially distributed display elements, the method including the steps of: (a) determining the viewpoints of the eyes of the at least one viewer relative to the display device; (b) for each eye viewpoint and each of a plurality of the display elements, rendering a partial view image representing a view of the 3D model from the eye viewpoint through the display element; and (c) displaying, via each display element, the set of partial view images rendered for that display element.
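
A minimal sketch of the claimed rendering loop, with all names hypothetical and `render_view` standing in for any renderer that images the 3D model through one display element from one eye viewpoint:

```python
def render_view(model, eye, element):
    # Placeholder renderer: returns a tag identifying one partial view image.
    return f"view({eye}->{element})"

def render_light_field(model, eye_viewpoints, display_elements):
    """Step (b): render one partial view image per (eye viewpoint,
    display element) pair, grouped by element for step (c)."""
    partial_views = {element: [] for element in display_elements}
    for eye in eye_viewpoints:
        for element in display_elements:
            partial_views[element].append(render_view(model, eye, element))
    return partial_views

# Step (c): each display element shows the set of partial views
# rendered for it -- one per tracked eye viewpoint.
views = render_light_field("model", ["L", "R"], ["e0", "e1", "e2"])
assert len(views["e0"]) == 2  # one partial view per eye viewpoint
```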

Systems, methods, and computer-readable storage media for controlling aspects of a robotic surgical device and viewer adaptive stereoscopic display

A system includes a robotic arm, an autostereoscopic display, a user image capture device, an image processor, and a controller. The robotic arm is coupled to a patient image capture device. The autostereoscopic display is configured to display an image of a surgical site obtained from the patient image capture device. The image processor is configured to identify a location of at least part of a user in an image obtained from the user image capture device. The controller is configured to, in a first mode, adjust a three-dimensional aspect of the image displayed on the autostereoscopic display based on the identified location, and, in a second mode, move the robotic arm or instrument based on a relationship between the identified location and the surgical site image.
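
An illustrative sketch of the two-mode controller described in the abstract; the mode names, the parallax adjustment, and the arm-motion mapping are all hypothetical stand-ins:

```python
def control(mode, user_location, surgical_site_center=(0.0, 0.0)):
    """Dispatch on the controller mode using the user location
    identified by the image processor."""
    dx = user_location[0] - surgical_site_center[0]
    dy = user_location[1] - surgical_site_center[1]
    if mode == "display":
        # First mode: adjust a three-dimensional aspect of the displayed
        # image (here, a parallax offset) based on the user location.
        return {"action": "adjust_parallax", "offset": (dx, dy)}
    if mode == "control":
        # Second mode: move the arm/instrument based on the relationship
        # between the user location and the surgical site image.
        return {"action": "move_arm", "delta": (dx, dy)}
    raise ValueError(f"unknown mode: {mode}")

assert control("display", (1.0, 2.0))["action"] == "adjust_parallax"
assert control("control", (1.0, 2.0))["delta"] == (1.0, 2.0)
```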

HYPER-CONNECTED AND SYNCHRONIZED AR GLASSES

Systems and methods are described for selectively sharing audio and video streams amongst electronic eyewear devices. Each electronic eyewear device includes a camera arranged to capture a video stream in an environment of the wearer, a microphone arranged to capture an audio stream in the environment of the wearer, and a display. A processor of each electronic eyewear device executes instructions to establish an always-on session with other electronic eyewear devices and selectively shares an audio stream, a video stream, or both with other electronic eyewear devices in the session. Each electronic eyewear device also generates and receives annotations from other users in the session for display with the selectively shared video stream on the display of the electronic eyewear device that provided the selectively shared video stream. The annotation may include manipulation of an object in the shared video stream or overlay images registered with the shared video stream.

Electronic device and control method thereof

An electronic device according to the present invention includes: a processor; and a memory storing a program which, when executed by the processor, causes the electronic device to: perform control to change a display region of an image in accordance with an orientation change of the electronic device or with acceptance of a user operation, and display the display region of the image on a screen; and determine a clipping region to be clipped from the image based on a position of the display region of the image, wherein the image includes the display region and the clipping region, and the clipping region is wider than the display region.
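
A hypothetical sketch of deriving a clipping region from the display region's position: the clip rectangle contains the display region with a margin on every side, so the clipping region is wider than the region actually shown on screen. The rectangle layout and margin value are assumptions for illustration:

```python
def clipping_region(display_region, margin=64):
    """display_region = (left, top, width, height); returns a clipping
    region that contains it with `margin` pixels of padding per side."""
    left, top, width, height = display_region
    return (left - margin, top - margin,
            width + 2 * margin, height + 2 * margin)

clip = clipping_region((100, 100, 640, 480))
assert clip == (36, 36, 768, 608)
assert clip[2] > 640 and clip[3] > 480  # clipping region is wider
```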

Three-dimensional display device, three-dimensional display system, head-up display, and movable object

A three-dimensional display device includes a display panel, a barrier panel, and a controller that controls the display panel and the barrier panel. The controller defines multiple first image areas and multiple second image areas in the display panel, causes the first image areas to be at first intervals in a first direction, causes displaying of a first image viewable by a first eye of a user in the first image areas and a second image viewable by a second eye of the user in the second image areas, defines, in the barrier panel, multiple first transmissive areas transmissive to the image light at a first transmissivity and multiple second transmissive areas transmissive to the image light at a second transmissivity, causes the first transmissive areas to be at second intervals in the first direction, and performs an irregular process at third intervals in the first direction.

Mixed reality system

A mixed reality direct retinal projector system may include a headset that uses a reflective holographic combiner to direct light from a light engine into an eye box corresponding to a user's eye. The light engine may include light sources coupled to projectors that independently project light to the holographic combiner from different projection points. The light sources may be in a unit separate from the headset that may be carried on a user's hip, or otherwise carried or worn separately from the headset. Each projector may include a collimating and focusing element, an active focusing element, and a two-axis scanning mirror to project light from a respective light source to the holographic combiner. The holographic combiner may be recorded with a series of point-to-point holograms; each projector interacts with multiple holograms to project light onto multiple locations in the eye box.
