Patent classifications
G06F3/01
Hand controller for robotic surgery system
A robotic control system has a wand that emits multiple narrow beams of light. The beams fall on a light sensor array, or, with a camera, on a surface, defining the wand's changing position and attitude, which a computer uses to direct relative motion of robotic tools or remote processes, much as a mouse does, but in three dimensions. The system further includes motion compensation means and means for reducing latency.
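The pose recovery described above can be sketched with a deliberately simplified geometry: assume two beams diverge symmetrically from the wand at a known half-angle and the wand axis is perpendicular to the sensor plane, so the spot separation encodes range and the spot midpoint encodes lateral position. The function name and geometry are illustrative assumptions, not the patent's method.

```python
import math

def wand_pose(spot_a, spot_b, half_angle_rad):
    """Estimate lateral position and range from two beam spots.

    Toy model: two beams diverge symmetrically at half_angle_rad and the
    wand points straight at the sensor plane, so separation grows linearly
    with distance: sep = 2 * range * tan(half_angle).
    """
    sep = math.dist(spot_a, spot_b)
    rng = sep / (2.0 * math.tan(half_angle_rad))      # distance to the wand
    mid = ((spot_a[0] + spot_b[0]) / 2, (spot_a[1] + spot_b[1]) / 2)
    return mid, rng
```

A real controller would use more beams and solve for full attitude; this sketch only shows why spot geometry determines pose at all.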
Displaying a representation of a user touch input detected by an external device
A device includes a touch-sensitive display, one or more processors, and memory storing one or more programs including instructions for receiving data from an external device representing user input received over a duration of time at the external device. The programs may include instructions for determining whether the electronic device is actively executing an application for playback. The programs may further include instructions for, in accordance with a determination that the electronic device is not actively executing an application for playback, displaying an indication of the receipt of the data and displaying an affordance that, when selected, launches the application for playback and causes the electronic device to play back the received data according to the duration of time of the user input.
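The branching behavior in this abstract can be sketched as a small state machine. All names here (`PlaybackDevice`, `ReceivedInput`, the screen strings) are invented for illustration; the patent does not specify an API.

```python
from dataclasses import dataclass

@dataclass
class ReceivedInput:
    samples: list        # user-input samples captured at the external device
    duration_s: float    # duration over which the input was received

class PlaybackDevice:
    def __init__(self):
        self.playback_app_active = False
        self.screen = []                 # lines currently shown on the display
        self.pending = None

    def on_data_received(self, data: ReceivedInput):
        if self.playback_app_active:
            self.play(data)
        else:
            # Not executing the playback app: show an indication of the
            # received data plus an affordance that will launch the app.
            self.screen.append("indication: input received")
            self.screen.append("affordance: tap to play")
            self.pending = data

    def on_affordance_selected(self):
        self.playback_app_active = True  # selecting the affordance launches the app
        self.play(self.pending)

    def play(self, data: ReceivedInput):
        # Replay honors the duration of the original user input.
        self.screen.append(f"playing {len(data.samples)} samples over {data.duration_s}s")
```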
Systems, methods, and media for displaying interactive augmented reality presentations
Systems, methods, and media for displaying interactive augmented reality presentations are provided. In some embodiments, a system comprises: a plurality of head mounted displays, a first head mounted display comprising a transparent display; and at least one processor, wherein the at least one processor is programmed to: determine that a first physical location of a plurality of physical locations in a physical environment of the head mounted display is located closest to the head mounted display; receive first content comprising a first three dimensional model; receive second content comprising a second three dimensional model; present, using the transparent display, a first view of the first three dimensional model at a first time; and present, using the transparent display, a first view of the second three dimensional model at a second time subsequent to the first time based on one or more instructions received from a server.
Electronic devices with touch input components and haptic output components
An electronic device may include touch input components and associated haptic output components. Control circuitry may provide haptic output in response to touch input on the touch input components and may send wireless signals to the external electronic device based on the touch input. The haptic output components may provide local and global haptic output. Local haptic output may be used to guide a user to the location of the electronic device or to provide a button click sensation to the user in response to touch input. Global haptic output may be used to notify the user that the electronic device is aligned towards the external electronic device and is ready to receive user input to control or communicate with the external electronic device. Control circuitry may switch a haptic output component into an inactive mode to inform the user that a touch input component is inactive.
Mid-air volumetric visualization movement compensation
A wearable computing device generates a volumetric visualization at a first position in a three-dimensional space. The wearable computing device includes a volumetric source configured to create the volumetric visualization and one or more sensors configured to determine movement of the device. When the wearable computing device identifies a movement of itself, it adjusts the volumetric source based on that movement.
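The compensation step amounts to re-aiming the source by the inverse of the measured device motion so the visualization stays anchored in world space. This is a minimal sketch under that assumption; the class and method names are not from the patent.

```python
class VolumetricWearable:
    """Keeps a volumetric visualization anchored at a fixed world position."""

    def __init__(self, world_target):
        self.device_pos = [0.0, 0.0, 0.0]          # device position in world space
        self.world_target = list(world_target)     # where the image should appear

    def source_offset(self):
        # Offset from the device at which the source must form the image:
        # as the device moves toward the target, the offset shrinks.
        return [t - p for t, p in zip(self.world_target, self.device_pos)]

    def on_movement(self, delta):
        # Sensor-reported device movement; the next source_offset() call
        # reflects the compensated aim.
        self.device_pos = [p + d for p, d in zip(self.device_pos, delta)]
```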
Recognizing gestures based on wireless signals
In a general aspect, a motion detection system detects gestures (e.g., human gestures) and initiates actions in response to the detected gestures. In some aspects, channel information is obtained based on wireless signals transmitted through a space by one or more wireless communication devices. A gesture recognition engine analyzes the channel information to detect a gesture (e.g., a predetermined gesture sequence) in the space. An action to be initiated in response to the detected gesture is identified. An instruction to perform the action is sent to a network-connected device associated with the space.
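The pipeline above can be sketched end to end, under the assumption that "channel information" reduces to a window of channel-response magnitudes and that gestures are recognized by nearest-neighbour matching of a simple feature vector. The feature choice, templates, and action table are all invented for illustration.

```python
from statistics import pstdev

def features(window):
    # Per-window feature: (mean magnitude, magnitude spread). Motion in the
    # space perturbs the channel, which shows up in these statistics.
    mean = sum(window) / len(window)
    return (mean, pstdev(window))

def recognize(window, templates):
    # Nearest-neighbour match of the window's features against per-gesture
    # template feature vectors.
    feat = features(window)
    def dist(name):
        return sum((a - b) ** 2 for a, b in zip(feat, templates[name]))
    return min(templates, key=dist)

# Hypothetical mapping from detected gestures to actions for the space.
ACTIONS = {"wave": "lights_toggle", "push": "thermostat_up"}

def handle(window, templates):
    gesture = recognize(window, templates)
    # In a full system this would be sent as an instruction to a
    # network-connected device associated with the space.
    return ACTIONS.get(gesture)
```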
Systems and methods for adaptive input thresholding
The disclosed computer-implemented method may include detecting, by a computing system, a gesture that appears to be intended to trigger a response by the computing system, identifying, by the computing system, a context in which the gesture was performed, and adjusting, based at least on the context in which the gesture was performed, a threshold for determining whether to trigger the response to the gesture in a manner that causes the computing system to perform an action that is based on the detected gesture.
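The core idea, a trigger threshold adjusted by the context in which a gesture was performed, can be sketched in a few lines. The base threshold and the per-context adjustments are made-up values, not those of the disclosed method.

```python
BASE_THRESHOLD = 0.5

# Raise the bar in contexts where spurious motion is likely (assumed values).
CONTEXT_ADJUSTMENT = {
    "walking": 0.2,
    "seated": 0.0,
    "in_call": 0.3,
}

def should_trigger(gesture_confidence, context):
    """Decide whether a detected gesture triggers the response."""
    threshold = BASE_THRESHOLD + CONTEXT_ADJUSTMENT.get(context, 0.0)
    return gesture_confidence >= threshold
```

The same gesture confidence can thus trigger a response while seated but be suppressed while walking.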
Eye image selection
Systems and methods for eye image set selection, eye image collection, and eye image combination are described. Embodiments of the systems and methods for eye image set selection can include comparing a determined image quality metric with an image quality threshold to identify eye images passing the threshold, and selecting, from a plurality of eye images, a set of eye images that passes the threshold.
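The selection step is a straightforward filter on a quality metric. This sketch treats the metric as an arbitrary scoring callable and optionally caps the set size; both the function name and the `k` parameter are illustrative.

```python
def select_eye_images(images, metric, threshold, k=None):
    """Select eye images whose quality metric passes the threshold.

    `metric` is any callable scoring an image (e.g. sharpness); images are
    returned best-first, optionally truncated to the top k.
    """
    passing = [img for img in images if metric(img) >= threshold]
    passing.sort(key=metric, reverse=True)
    return passing if k is None else passing[:k]
```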
Apparatus and method for displaying contents on an augmented reality device
A system for displaying contents on an augmented reality (AR) device comprises a capturing module configured to capture a field of view of a user, a recording module configured to record the captured field of view, a user input controller configured to track the user's vision towards one or more objects, and a server. The server comprises a determination module, an identifier, and an analyser. The determination module is configured to determine at least one object of interest. The identifier is configured to identify a frame in which the determined object of interest disappears. The analyser is configured to analyse the identified frame based on at least one disappearance of the object of interest and generate analysed data. A display module is configured to display content of the object of interest on the AR device.
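The identifier's job, finding the frame where a previously visible object of interest drops out of the recorded field of view, can be sketched as a scan over detections. Representing each frame as a set of detected object labels is an assumption made for illustration.

```python
def disappearance_frame(frames, object_id):
    """Return the index of the first frame where a seen object is absent.

    `frames` is a sequence of per-frame detection sets; returns None if the
    object never disappears (or was never seen).
    """
    seen = False
    for index, detections in enumerate(frames):
        if object_id in detections:
            seen = True
        elif seen:
            return index   # first frame after the object was last visible
    return None
```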
Color-sensitive virtual markings of objects
Disclosed are systems, methods, and non-transitory computer readable media for making virtual colored markings on objects. Instructions may include receiving an indication of an object; receiving from an image sensor an image of a hand of an individual holding a physical marking implement; detecting in the image a color associated with the marking implement; receiving from the image sensor image data indicative of movement of a tip of the marking implement and locations of the tip; determining from the image data when the locations of the tip correspond to locations on the object; and generating, in the detected color, virtual markings on the object at the corresponding locations.
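The final step, generating virtual markings only where the tip's tracked locations fall on the object, reduces to a point-in-region test. Modeling the object as an axis-aligned rectangle and the helper names are assumptions for this sketch, not the disclosed implementation.

```python
def on_object(point, obj_rect):
    # Axis-aligned containment test: does the tip location lie on the object?
    x0, y0, x1, y1 = obj_rect
    x, y = point
    return x0 <= x <= x1 and y0 <= y <= y1

def generate_markings(tip_locations, obj_rect, color):
    """Emit (location, color) marks where tip locations land on the object.

    `color` would come from detecting the marking implement's colour in the
    image; here it is passed in directly.
    """
    return [(pt, color) for pt in tip_locations if on_object(pt, obj_rect)]
```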