Patent classifications
G06F3/04815
Rendering portions of a three-dimensional environment with different sampling rates utilizing a user-defined focus frame
Methods, systems, and non-transitory computer readable storage media are disclosed for rendering portions of a three-dimensional environment at different sampling rates based on a focus frame within a graphical user interface. Specifically, the disclosed system provides a tool for marking a region of a graphical user interface displaying a three-dimensional environment. The disclosed system generates a focus frame based on the marked region of the graphical user interface and attaches the focus frame to a portion of the three-dimensional environment. The disclosed system assigns a first sampling rate to the portion of the three-dimensional environment displayed within the focus frame and a second sampling rate to portions of the three-dimensional environment outside the focus frame. The disclosed system renders the three-dimensional environment by sampling the portion within the focus frame at the first sampling rate and the portions outside the focus frame at the second sampling rate.
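The two-rate scheme in this abstract can be sketched roughly as follows. This is an illustrative reconstruction, not the patent's implementation; the names `FocusFrame` and `sampling_rate_for`, the rectangular frame, and the specific rate values are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class FocusFrame:
    """A user-marked rectangular region of the GUI, in screen coordinates."""
    x0: int
    y0: int
    x1: int
    y1: int

    def contains(self, x: int, y: int) -> bool:
        return self.x0 <= x < self.x1 and self.y0 <= y < self.y1

def sampling_rate_for(frame: FocusFrame, x: int, y: int,
                      inside_rate: float = 4.0,
                      outside_rate: float = 1.0) -> float:
    """Return samples per pixel: the first rate inside the frame, the second outside."""
    return inside_rate if frame.contains(x, y) else outside_rate
```

A renderer could call this per pixel (or per tile) to decide how many samples to take, concentrating work inside the user-defined frame.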
Mixed-reality surgical system with physical markers for registration of virtual models
An example method includes obtaining a virtual model of a portion of an anatomy of a patient, obtained from a virtual surgical plan for an orthopedic joint repair surgical procedure to attach a prosthetic to the anatomy; identifying, based on data obtained by one or more sensors, positions of one or more physical markers positioned relative to the anatomy of the patient; and registering, based on the identified positions, the virtual model of the portion of the anatomy with a corresponding observed portion of the anatomy.
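The registration step can be illustrated with a deliberately simplified sketch: estimating only the rigid translation that aligns the model's marker positions with the observed marker positions. A real implementation would also solve for rotation (e.g., with the Kabsch algorithm over the marker correspondences); the function names here are illustrative, not from the patent.

```python
def centroid(points):
    """Mean position of a list of (x, y, z) points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def register_translation(model_markers, observed_markers):
    """Return the (dx, dy, dz) translation moving the virtual model onto the
    observed anatomy, from corresponding marker positions (translation only;
    rotation is omitted in this sketch)."""
    cm = centroid(model_markers)
    co = centroid(observed_markers)
    return tuple(co[i] - cm[i] for i in range(3))
```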
TILTABLE USER INTERFACE
A programmable effects system for graphical user interfaces is disclosed. One embodiment comprises adjusting a graphical user interface in response to a tilt of a device. In this way, a graphical user interface may display a parallax effect in response to the device tilt.
METHOD AND APPARATUS FOR EGO-CENTRIC 3D HUMAN COMPUTER INTERFACE
In the method, a processor generates a three dimensional interface with at least one virtual object, defines a stimulus of the interface, and defines a response to the stimulus. The stimulus is an approach to the virtual object, with a finger or other end-effector, to within a threshold distance of the virtual object. When the stimulus is sensed, the response is executed. Stimuli may include touch, click, double click, peg, scale, and swipe gestures. The apparatus includes a processor that generates a three dimensional interface with at least one virtual object, and defines a stimulus for the virtual object and a response to the stimulus. A display outputs the interface and object. A camera or other sensor detects the stimulus, e.g., a gesture with a finger or other end-effector, whereupon the processor executes the response. The apparatus may be part of a head mounted display.
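The stimulus/response pattern described here reduces to a distance check followed by a callback. The sketch below shows an approach-style "touch" stimulus that fires when a tracked end-effector comes within a threshold distance of a virtual object; all names are illustrative, not from the patent.

```python
import math

def within_threshold(end_effector, obj_center, threshold):
    """True when the end-effector is within the threshold distance of the object."""
    return math.dist(end_effector, obj_center) <= threshold  # Python 3.8+

def handle_stimulus(end_effector, obj_center, threshold, response):
    """Execute the defined response when the approach stimulus is sensed."""
    if within_threshold(end_effector, obj_center, threshold):
        return response()
    return None
```

Other stimuli (click, swipe, scale) would pair different detector functions with their own responses under the same dispatch pattern.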
GROUND PLANE ADJUSTMENT IN A VIRTUAL REALITY ENVIRONMENT
An HMD device is configured to vertically adjust the ground plane of a rendered virtual reality environment that has varying elevations to match the flat real-world floor, so that the device user can move around to navigate and explore the environment while always remaining properly located on the virtual ground, neither above it nor underneath it. Rather than continuously adjusting the virtual reality ground plane, which can introduce cognitive-dissonance discomfort for the user, the HMD device establishes, when the user is not engaged in some form of locomotion (e.g., walking), a threshold radius around the user within which virtual ground plane adjustment is not performed. The user can make movements within the threshold radius without the HMD device shifting the virtual terrain. When the user moves past the threshold radius, the device performs an adjustment as needed to match the ground plane of the virtual reality environment to the real-world floor.
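The threshold-radius behavior can be sketched as a small stateful check: the virtual ground is only re-matched to the real floor once the user moves beyond a radius around the point of the last adjustment. The class and method names, and the convention that the real floor sits at height zero, are illustrative assumptions.

```python
import math

class GroundPlaneAdjuster:
    """Re-matches the virtual ground plane to the real floor only when the
    user leaves a threshold radius around the last adjustment point."""

    def __init__(self, threshold_radius: float):
        self.threshold_radius = threshold_radius
        self.anchor = (0.0, 0.0)    # user position at the last adjustment
        self.ground_offset = 0.0    # vertical shift applied to the virtual terrain

    def update(self, user_xy, virtual_elevation_at):
        """Return True if an adjustment was made for this user position."""
        if math.dist(user_xy, self.anchor) <= self.threshold_radius:
            return False  # small movements: leave the terrain alone
        # Shift the terrain so the local virtual elevation lands on the real
        # floor (taken here to be height 0).
        self.ground_offset = -virtual_elevation_at(user_xy)
        self.anchor = user_xy
        return True
```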
System and Methods for Interactive Hybrid-Dimension Map Visualization
A navigational system includes a hybrid-dimensional visualization scheme with a multi-modal interaction flow for digital mapping applications, such as in-car infotainment systems and online map services. The hybrid-dimensional visualization uses an importance-driven, or focus-and-context, visualization approach to combine the display of different map elements, including the 2D map, 2D building footprints, the 3D map, weather visualization, realistic day-night lighting, and POI information, into a single map view. The combination of these elements is guided by intuitive user interactions using multiple modalities simultaneously, so that the map information is filtered to best respond to the user's request and presented in a way that shows both the focus and the context of the map in an aesthetic manner. The system facilitates several common use cases, including destination preview, destination search, and virtual map exploration.
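One way to picture importance-driven filtering is a score per map element that the user's current focus boosts, with only elements above a threshold shown in the combined view. The scoring scheme, threshold, and element names below are invented for illustration and are not taken from the patent.

```python
def visible_elements(elements, focus, threshold=0.5):
    """Select map elements for the combined view.

    elements:  dict mapping element name -> base importance in [0, 1]
    focus:     set of element names the user's interaction emphasizes
    """
    scored = {name: importance + (0.5 if name in focus else 0.0)
              for name, importance in elements.items()}
    return sorted(name for name, score in scored.items() if score >= threshold)
```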
CLUSTER BASED PHOTO NAVIGATION
The technology relates to navigating imagery that is organized into clusters based on common patterns exhibited when the imagery is captured. For example, a set of captured images which satisfy a predetermined pattern may be determined. The images in the set of captured images may be grouped into one or more clusters according to the predetermined pattern. A request to display a first cluster of the one or more clusters may be received and, in response, a first captured image from the requested first cluster may be selected. The selected first captured image may then be displayed.
SELECTION OF OBJECTS IN THREE-DIMENSIONAL SPACE
A user may select or interact with objects in a scene using gaze tracking and movement tracking. In some examples, the scene may comprise a virtual reality scene or a mixed reality scene. A user may move an input object in an environment and be facing in a direction towards the movement of the input object. A computing device may use sensors to obtain movement data corresponding to the movement of the input object, and gaze tracking data corresponding to a location of the eyes of the user. One or more modules of the computing device may use the movement data and gaze tracking data to determine a three-dimensional selection space in the scene. In some examples, objects included in the three-dimensional selection space may be selected or otherwise interacted with.
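One hypothetical realization of such a selection space: the input object's movement path defines a capsule (a line segment swept by a radius), and objects whose positions fall inside it are selected. Gaze data could further gate or reshape the region; that refinement is omitted in this sketch, and all names are illustrative.

```python
import math

def _dist_point_segment(p, a, b):
    """Distance from 3D point p to the segment from a to b."""
    ab = [b[i] - a[i] for i in range(3)]
    ap = [p[i] - a[i] for i in range(3)]
    denom = sum(c * c for c in ab) or 1e-12  # guard degenerate segments
    t = max(0.0, min(1.0, sum(ap[i] * ab[i] for i in range(3)) / denom))
    closest = [a[i] + t * ab[i] for i in range(3)]
    return math.dist(p, closest)

def select_objects(objects, move_start, move_end, radius):
    """Return objects whose positions lie inside the capsule-shaped
    selection space derived from the input object's movement."""
    return [o for o in objects
            if _dist_point_segment(o["pos"], move_start, move_end) <= radius]
```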