Patent classifications
G06T15/30
DISPLAY SYSTEMS AND METHODS FOR CLIPPING CONTENT TO INCREASE VIEWING COMFORT
Augmented and virtual reality display systems increase viewer comfort by reducing viewer exposure to virtual content that causes undesirable accommodation-vergence mismatches (AVM). The display systems limit the display of content that exceeds an accommodation-vergence mismatch threshold, which may define a volume around the viewer. The volume may be subdivided into two or more zones, including an innermost loss-of-fusion (LoF) zone in which no content is displayed, and one or more outer AVM zones in which the display of content may be stopped, or clipped, under certain conditions. For example, content may be clipped if the viewer is verging within an AVM zone and if the content is displayed within the AVM zone for more than a threshold duration. A further possible condition for clipping content is that the user is verging on that content. In addition, the boundaries of the AVM zone and/or the acceptable amount of time that the content is displayed may vary depending upon the type of content being displayed, e.g., whether the content is user-locked content or in-world content.
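The zone-based clipping policy above can be sketched as a simple decision function. This is a minimal illustration, not the patented implementation: the zone radii, the dwell-time threshold, and all names are assumptions.

```python
# Hypothetical sketch of the zone-based clipping policy described above.
# The zone radii and dwell threshold are illustrative assumptions.
LOF_RADIUS_M = 0.1   # innermost loss-of-fusion zone: never display here
AVM_RADIUS_M = 0.5   # outer boundary of the AVM zone
MAX_DWELL_S = 2.0    # allowed display time inside the AVM zone

def should_clip(content_dist_m, vergence_dist_m, dwell_s):
    """Return True if virtual content should be clipped (not displayed)."""
    if content_dist_m < LOF_RADIUS_M:
        return True  # loss-of-fusion zone: content is always clipped
    in_avm_zone = content_dist_m < AVM_RADIUS_M
    verging_in_zone = vergence_dist_m < AVM_RADIUS_M
    # Clip only if the viewer is verging within the AVM zone AND the
    # content has lingered there longer than the threshold duration.
    return in_avm_zone and verging_in_zone and dwell_s > MAX_DWELL_S
```

A per-content-type policy (user-locked vs. in-world) could be modeled by swapping in different radius and dwell constants for each type.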
Scene crop via adaptive view-depth discontinuity
A method, apparatus, and system provide the ability to crop a three-dimensional (3D) scene. The 3D scene is acquired and includes multiple 3D images (each image from a view angle of an image capture device) and a depth map for each image. The depth values in each depth map are sorted. Multiple initial cutoff depths are determined for the scene based on the view angles of the images in the scene. A cutoff relaxation depth is determined based on a jump between depth values. A confidence map is generated for each depth map and indicates whether each depth value is above or below the cutoff relaxation depth. The confidence maps are aggregated into an aggregated model. A bounding volume is generated from the aggregated model. Points are cropped from the scene based on the bounding volume.
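The core steps of relaxing a cutoff to the nearest depth discontinuity and building a per-pixel confidence map can be sketched as follows. This is an illustrative reading of the abstract; the `min_jump` parameter and all function names are assumptions.

```python
import numpy as np

def cutoff_relaxation_depth(depths, initial_cutoff, min_jump=0.5):
    """Relax an initial cutoff depth to sit just before the first large
    jump (discontinuity) in the sorted depth values at or beyond it.
    A sketch; the jump threshold is an assumed parameter."""
    d = np.sort(np.asarray(depths, dtype=float))
    beyond = d[d >= initial_cutoff]
    if beyond.size < 2:
        return float(initial_cutoff)
    gaps = np.diff(beyond)                    # spacing between sorted depths
    jumps = np.nonzero(gaps >= min_jump)[0]   # indices of large discontinuities
    # Place the relaxed cutoff at the last depth before the first big jump.
    return float(beyond[jumps[0]]) if jumps.size else float(initial_cutoff)

def confidence_map(depth_map, cutoff):
    """Binary confidence map: True where depth is at or below the cutoff."""
    return np.asarray(depth_map, dtype=float) <= cutoff
```

Aggregating the per-view confidence maps and fitting a bounding volume to the high-confidence points would then follow as described in the abstract.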
Endoscope guidance from interactive planar slices of a volume image
An endoscopic imaging system (10) employing an endoscope (20) and an endoscope guidance controller (30). In operation, the endoscope (20) generates an endoscopic video (23) of an anatomical structure within an anatomical region. The endoscope guidance controller (30), responsive to a registration between the endoscopic video (23) and a volume image (44) of the anatomical region, controls a user interaction (50) with a graphical user interface (31) including one or more interactive planar slices (32) of the volume image (44). Responsive to the user interaction (50) with the graphical user interface (31), the endoscope guidance controller (30) controls a positioning of the endoscope (20), derived from the interactive planar slices (32) of the volume image (44), relative to the anatomical structure. A robotic endoscopic imaging system (11) incorporates a robot (23) into the endoscopic imaging system (10), whereby the endoscope guidance controller (30) controls a positioning by the robot (23) of the endoscope (20) relative to the anatomical structure.
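One way to read the interaction above is that a point selected on an interactive planar slice is mapped, via the image-to-robot registration, into a 3D positioning target for the endoscope. The sketch below illustrates only that geometric mapping; the frames, the 4x4 registration transform, and all names are assumptions, not the patent's method.

```python
import numpy as np

def slice_point_to_target(click_uv, slice_origin, slice_u_axis, slice_v_axis,
                          T_image_to_robot):
    """Map a 2D click (u, v) on a planar slice of the volume image to a 3D
    target point in the robot/endoscope frame. The slice is parameterized
    by an origin and two in-plane axes in volume-image coordinates, and
    T_image_to_robot is an assumed 4x4 registration transform."""
    u, v = click_uv
    # 3D location of the click in volume-image coordinates.
    p_img = (np.asarray(slice_origin, dtype=float)
             + u * np.asarray(slice_u_axis, dtype=float)
             + v * np.asarray(slice_v_axis, dtype=float))
    # Apply the registration between the volume image and the robot frame.
    p_h = np.append(p_img, 1.0)                    # homogeneous coordinates
    return (np.asarray(T_image_to_robot, dtype=float) @ p_h)[:3]
```

A guidance controller would then command the endoscope (directly or via the robot) toward the returned target point.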
Automatic representation toggling based on depth camera field of view
One embodiment provides a method comprising determining a spatial relationship between an augmented reality (AR) device and a camera-equipped device. The AR device is worn by a user. The camera-equipped device is positioned in proximity to the user. The method further comprises determining a position of the user relative to a field of view of the camera-equipped device, and providing a representation of the user for display. The representation automatically switches between a real image of the user and a virtual avatar of the user based on the position of the user.
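The switching logic above reduces to a field-of-view test: if the user falls inside the depth camera's view cone, show the real image; otherwise fall back to the avatar. The sketch below works in a 2D top-down plane for brevity; the half-FOV angle and all names are assumptions.

```python
import math

def in_field_of_view(user_pos, cam_pos, cam_forward, half_fov_deg=35.0):
    """True if the user lies within the camera's horizontal FOV cone.
    A 2D top-down sketch; positions are (x, z) tuples and the half-FOV
    angle is an assumed default."""
    dx, dz = user_pos[0] - cam_pos[0], user_pos[1] - cam_pos[1]
    dist = math.hypot(dx, dz)
    if dist == 0:
        return True  # user at the camera location: trivially in view
    fx, fz = cam_forward
    cos_angle = (dx * fx + dz * fz) / (dist * math.hypot(fx, fz))
    return cos_angle >= math.cos(math.radians(half_fov_deg))

def representation(user_pos, cam_pos, cam_forward):
    """Pick the user's representation: real image inside the camera's
    field of view, virtual avatar outside it."""
    if in_field_of_view(user_pos, cam_pos, cam_forward):
        return "real_image"
    return "avatar"
```

A production system would also need the AR-device-to-camera spatial relationship (e.g., from tracking) to express both in a common frame before the test.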
DISPLAY METHOD, ELECTRONIC DEVICE AND STORAGE MEDIUM
A display method, an electronic device and a storage medium. A particular implementation of the method includes: determining eye position information of an object in an image; determining camera position information for naked-eye 3D display according to the eye position information; creating an eye space according to the camera position information; obtaining, according to object position information of a target object in the eye space, projection position information of the target object on a projection plane based on projection information; and displaying the target object according to the projection position information.
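The projection step can be illustrated as intersecting the ray from the tracked eye position through the target object with the display's projection plane. This is a simplified sketch under assumed conventions (the plane is z = 0 and coordinates are metric); it is not the patent's actual projection pipeline.

```python
import numpy as np

def project_to_plane(eye_pos, obj_pos, plane_z=0.0):
    """Perspective-project a target object onto the screen plane z = plane_z,
    using the tracked eye position as the center of projection. A sketch;
    the plane placement and coordinate conventions are assumptions."""
    eye = np.asarray(eye_pos, dtype=float)
    obj = np.asarray(obj_pos, dtype=float)
    dz = obj[2] - eye[2]
    if dz == 0:
        raise ValueError("object lies in the eye plane; cannot project")
    # Parameter t where the eye->object ray crosses the projection plane.
    t = (plane_z - eye[2]) / dz
    return eye + t * (obj - eye)
```

As the tracked eye moves, re-running this projection per frame shifts the displayed position of the target object, producing the naked-eye 3D parallax effect.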
Frustum Rendering in Computer Graphics
A graphics processing system includes a tiling unit configured to tile a first view of a scene into a plurality of tiles, a processing unit configured to identify a first subset of the tiles that are associated with regions of the scene that are viewable in a second view, and a rendering unit configured to render to a render target each of the identified tiles.
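The pipeline above (tile the first view, keep only the tiles whose scene regions are viewable in a second view, render those) can be sketched as below. The tile representation and the caller-supplied visibility predicate are assumptions; a real system would derive visibility from the second view's frustum.

```python
# A sketch of the tiling/culling stage described above. Tiles are
# (x0, y0, x1, y1) pixel rectangles; the visibility predicate stands in
# for a second-view frustum test and is an assumption.

def tiles(width, height, tile_size):
    """Tile a view of the given size into rectangular tiles."""
    for y in range(0, height, tile_size):
        for x in range(0, width, tile_size):
            yield (x, y, min(x + tile_size, width), min(y + tile_size, height))

def visible_tiles(width, height, tile_size, visible_in_second_view):
    """Keep only tiles whose scene region is viewable in the second view,
    as judged by the supplied predicate on the tile rectangle."""
    return [t for t in tiles(width, height, tile_size)
            if visible_in_second_view(t)]
```

A rendering unit would then shade only the returned subset into the render target, skipping work for tiles the second view cannot see.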