Patent classifications
G06F3/0304
DIGITAL ASSISTANT REFERENCE RESOLUTION
Systems and processes for operating a digital assistant are provided. An example process for performing a task includes, at an electronic device having one or more processors and memory, receiving a spoken input including a request, receiving an image input including a plurality of objects, selecting a reference resolution module of a plurality of reference resolution modules based on the request and the image input, determining, with the selected reference resolution module, whether the request references a first object of the plurality of objects based on at least the spoken input, and in accordance with a determination that the request references the first object of the plurality of objects, determining a response to the request including information about the first object.
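As a hedged illustration only (not the patented implementation), the selection-and-resolution step described above could be sketched as follows. The module names, the demonstrative-word heuristic, and the response format are all assumptions for illustration:

```python
# Hypothetical sketch: pick one of several reference resolution modules based
# on the spoken request and the detected objects, then test whether the
# request references one of those objects. Heuristics are illustrative only.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class DetectedObject:
    label: str

def resolve_by_deixis(request: str, objects: List[DetectedObject]) -> Optional[DetectedObject]:
    # Resolve demonstratives like "this"/"that" to the first detected object.
    if any(w in request.lower() for w in ("this", "that")):
        return objects[0] if objects else None
    return None

def resolve_by_name(request: str, objects: List[DetectedObject]) -> Optional[DetectedObject]:
    # Resolve an explicit object name mentioned in the spoken request.
    for obj in objects:
        if obj.label in request.lower():
            return obj
    return None

def select_module(request: str, objects: List[DetectedObject]) -> Callable:
    # Choose the deictic resolver for demonstrative requests, else name matching.
    if "this" in request.lower() or "that" in request.lower():
        return resolve_by_deixis
    return resolve_by_name

def respond(request: str, objects: List[DetectedObject]) -> str:
    # In accordance with a determination that the request references an
    # object, the response includes information about that object.
    module = select_module(request, objects)
    target = module(request, objects)
    if target is not None:
        return f"Information about the {target.label}."
    return "I could not tell which object you meant."
```

For example, `respond("what is this?", [DetectedObject("mug")])` resolves the demonstrative to the detected mug, while a request naming "lamp" would be routed to the name-matching module instead.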
GRAPHICAL MENU STRUCTURE
A human interface including steps of presenting an image and then receiving a gesture from the user. The image is analyzed to identify the elements of the image, which are then compared to known images, and either an input is solicited from the user or a menu is displayed to the user. Comparing the image and/or graphical image elements may be effectuated using a trained artificial intelligence engine or, in some embodiments, with a structured data source, said data source including predetermined images and menu options. If the input from the user is known, then a predetermined menu is presented. If the image is not known, then an image or other menu options are presented, and the desired options are solicited from the user. Once the user selects an option, the resulting selection may be used to further train the AI system or added to the structured data source for future reference.
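A minimal sketch of the lookup-or-solicit flow described above, assuming the structured data source is a simple mapping from recognized image labels to predetermined menus; all names and menu contents are illustrative assumptions, not the claimed method:

```python
# Structured data source of known images mapped to predetermined menu options
# (illustrative contents only).
known_menus = {
    "thermostat": ["Raise temp", "Lower temp"],
    "doorbell": ["Answer", "Ignore"],
}

def present_menu(image_label, known_menus):
    # If the recognized image is known, return its predetermined menu;
    # otherwise return a generic menu so the user's choice can be solicited.
    if image_label in known_menus:
        return known_menus[image_label]
    return ["Describe object", "Add custom option"]

def learn_selection(image_label, selection, known_menus):
    # Add the user's chosen option to the data source for future reference,
    # mirroring the abstract's feedback step.
    known_menus.setdefault(image_label, []).append(selection)
    return known_menus
```

In an AI-based embodiment, `learn_selection` would instead feed the (image, selection) pair back as a training example for the recognition engine.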
Hands-Free Crowd Sourced Indoor Navigation System and Method for Guiding Blind and Visually Impaired Persons
The present invention discloses an indoor Electronic Traveling Aid (ETA) system for blind and visually impaired (BVI) people. The system comprises a headband, an intuitive tactile display with electromyographic (EMG) feedback, a controller, and server-based methods corresponding to three operation modalities. In the first modality, sighted users mark routes, map navigational directions, and create semantic comments for BVIs. This route information is continuously collected and estimated on ETA servers. In the second modality, BVIs choose routes from the servers and are thereby supplied with real-time navigational guidance. An EMG interface is also used, whereby the user's facial muscles are enabled to send commands to the ETA system. In the third modality, BVIs receive real-time audio guidance in complex or unforeseen situations: the ETA provides a crowd-assisted interface and real-time sensory (e.g., video) data, and crowd-assistants analyze the situation and help the BVI to navigate.
Prism based light redirection system for eye tracking systems
A head-mounted device (HMD) contains a display, an optics block, a redirection structure, and an eye tracking system. The display is configured to emit image light and provide it to an eye of a user. The optics block is configured to direct the emitted light in order to allow it to reach the eye. The eye tracking system contains a camera, an illumination source, and a controller. The camera is configured to capture image data using infrared light reflected from the eye. The controller is configured to use this image data to determine eye tracking information. The illumination source is configured to illuminate the eye with infrared light for the purpose of taking eye tracking measurements. The redirection structure is configured to direct infrared light reflected from the eye to the eye tracking system. In multiple embodiments, redirection structures may comprise prism arrays, lenses, liquid crystal layers, or grating structures.
Photodetector activations
An example computing device includes a photodetector to measure an amount of light incident on a detection surface of the photodetector. The example computing device includes a state sensor to activate the photodetector responsive to the computing device being in a detection state. The example computing device also includes a processor. An example processor identifies, during the detection state, a user gesture based on an output of the photodetector. The user gesture blocks light incident on the detection surface of the photodetector. The example processor also alters an operation of the computing device based on the user gesture.
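As a hedged sketch of the behavior described above, assuming a simple threshold model: the state sensor gates whether the photodetector output is interpreted at all, and a gesture is inferred when the measured light drops sharply because the user's hand blocks the detection surface. The thresholds, gesture names, and sampling model are assumptions, not from the patent:

```python
def identify_gesture(readings, baseline, detection_state, threshold=0.3):
    # Only interpret photodetector readings while the device is in the
    # detection state (the state sensor has activated the photodetector).
    if not detection_state:
        return None
    # A reading well below the baseline means the detection surface is blocked.
    covered = [r < baseline * threshold for r in readings]
    if readings and all(covered):
        return "cover"   # sustained occlusion, e.g. to mute or sleep the device
    if any(covered):
        return "swipe"   # brief occlusion interpreted as a swipe gesture
    return None          # no gesture: operation of the device is unchanged
```

The return value would then drive the "alters an operation" step, e.g. mapping `"cover"` to muting and `"swipe"` to dismissing a notification.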
Systems, methods, and media for displaying interactive augmented reality presentations
Systems, methods, and media for displaying interactive augmented reality presentations are provided. In some embodiments, a system comprises: a plurality of head mounted displays, a first head mounted display comprising a transparent display; and at least one processor, wherein the at least one processor is programmed to: determine that a first physical location of a plurality of physical locations in a physical environment of the head mounted display is located closest to the head mounted display; receive first content comprising a first three dimensional model; receive second content comprising a second three dimensional model; present, using the transparent display, a first view of the first three dimensional model at a first time; and present, using the transparent display, a first view of the second three dimensional model at a second time subsequent to the first time based on one or more instructions received from a server.
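The "closest physical location" determination above reduces to a nearest-neighbor query over the known locations. A minimal sketch, assuming the headset pose and the locations are points in the same coordinate frame (names are illustrative):

```python
import math

def closest_location(hmd_pos, locations):
    # Return the physical location nearest to the head-mounted display,
    # using straight-line distance between points in a shared frame.
    return min(locations, key=lambda p: math.dist(hmd_pos, p))
```

In the described system, the content (the three dimensional models) presented on the transparent display would then be chosen based on which location this returns.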
Device for a color-based detection of image contents, computing device, and motor vehicle including the device
An apparatus for color-dependent detection of image contents includes a light input coupling apparatus, a carrier medium, a measuring region, an output coupling region, and a camera apparatus. The light input coupling apparatus includes a light source to emit light at a first wavelength. The carrier medium receives the light and transmits the light by internal reflection to the measuring region. The measuring region includes a first diffraction structure that outputs light at the first wavelength. The first diffraction structure is formed as a multiplex diffraction structure to input light in a second wavelength range. The output coupling region includes a second diffraction structure formed as a multiplex diffraction structure that outputs light at the first wavelength and the second wavelength range. The camera apparatus captures light output from the carrier medium to the camera apparatus, and provides the light in the form of image data that correlates with the light.
Color-sensitive virtual markings of objects
Disclosed are systems, methods, and non-transitory computer readable media for making virtual colored markings on objects. Instructions may include receiving an indication of an object; receiving from an image sensor an image of a hand of an individual holding a physical marking implement; detecting in the image a color associated with the marking implement; receiving from the image sensor image data indicative of movement of a tip of the marking implement and locations of the tip; determining from the image data when the locations of the tip correspond to locations on the object; and generating, in the detected color, virtual markings on the object at the corresponding locations.
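A hedged sketch of the marking pipeline described above: detect the implement's color, track tip locations, and emit virtual marks wherever tracked locations correspond to the object. A rectangular bounding region stands in for the real image-to-object correspondence check; all names and the geometry are assumptions:

```python
def on_object(point, object_bounds):
    # object_bounds = (x0, y0, x1, y1): region occupied by the object in
    # image coordinates (a simplified stand-in for the correspondence step).
    x0, y0, x1, y1 = object_bounds
    x, y = point
    return x0 <= x <= x1 and y0 <= y <= y1

def generate_markings(tip_locations, color, object_bounds):
    # Keep only tip positions whose locations correspond to locations on the
    # object, pairing each with the color detected on the marking implement.
    return [(p, color) for p in tip_locations if on_object(p, object_bounds)]
```

Tip movement off the object is simply dropped, so only strokes over the object produce virtual markings in the detected color.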
Viewpoint dependent brick selection for fast volumetric reconstruction
A method for culling parts of a 3D reconstruction volume is provided. The method makes fresh, accurate, and comprehensive 3D reconstruction data available to a wide variety of mobile XR applications with low usage of computational resources and storage space. The method includes culling parts of the 3D reconstruction volume against a depth image. The depth image has a plurality of pixels, each of which represents a distance to a surface in a scene. In some embodiments, the method includes culling parts of the 3D reconstruction volume against a frustum. The frustum is derived from a field of view of an image sensor from which the image data used to create the 3D reconstruction is obtained.
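An illustrative sketch of the two culling passes, with the geometry deliberately simplified: the frustum test reduces to a range and field-of-view check for a camera at the origin looking along +z, and the depth test skips bricks well behind the surface observed in the depth image. Function names, the margin, and the `depth_at` callback are assumptions, not the patented method:

```python
import math

def in_frustum(center, fov_deg=90.0, max_depth=10.0):
    # Camera at the origin looking along +z; reject brick centers outside
    # the depth range or outside the image sensor's field of view.
    x, y, z = center
    if z <= 0 or z > max_depth:
        return False
    half = math.radians(fov_deg / 2)
    return abs(math.atan2(x, z)) <= half and abs(math.atan2(y, z)) <= half

def cull_bricks(brick_centers, depth_at, margin=0.5):
    # depth_at(x, y) gives the depth-image distance along the brick's ray.
    # Keep bricks inside the frustum that are not far behind the visible
    # surface; everything else is culled before reconstruction updates.
    kept = []
    for c in brick_centers:
        if not in_frustum(c):
            continue
        if c[2] <= depth_at(c[0], c[1]) + margin:
            kept.append(c)
    return kept
```

Culling against the frustum first is the cheaper test, so the per-brick depth lookup only runs for bricks that could possibly be visible.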
Systems and methods for providing mixed-reality experiences under low light conditions
Systems and methods are provided for facilitating computer vision tasks (e.g., simultaneous localization and mapping) and pass-through imaging. The systems include a head-mounted display (HMD) that includes a first set of one or more cameras configured for performing computer vision tasks and a second set of one or more cameras configured for capturing image data of an environment for projection to a user of the HMD. The first set of one or more cameras is configured to detect at least visible spectrum light and at least a particular band of wavelengths of infrared (IR) light. The second set of one or more cameras includes one or more detachable IR filters configured to attenuate IR light, including at least a portion of the particular band of wavelengths of IR light.