Patent classifications
G06F3/017
Recognizing gestures based on wireless signals
In a general aspect, a motion detection system detects gestures (e.g., human gestures) and initiates actions in response to the detected gestures. In some aspects, channel information is obtained based on wireless signals transmitted through a space by one or more wireless communication devices. A gesture recognition engine analyzes the channel information to detect a gesture (e.g., a predetermined gesture sequence) in the space. An action to be initiated in response to the detected gesture is identified. An instruction to perform the action is sent to a network-connected device associated with the space.
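The pipeline described above — channel information in, recognized gesture out, action dispatched to a network-connected device — can be sketched as follows. This is a minimal illustration only: the variance-based recognizer, the gesture names, and the action table are assumptions for the sketch, not details from the patent.

```python
# Illustrative mapping from recognized gestures to actions (not from the patent).
GESTURE_ACTIONS = {"wave": "toggle_lights", "swipe_left": "previous_track"}

def detect_gesture(channel_info):
    """Toy recognizer: classifies a gesture from the spread of channel values.

    A real gesture recognition engine would analyze channel state information
    over time; here a simple range statistic stands in for that analysis.
    """
    spread = max(channel_info) - min(channel_info)
    if spread > 0.5:
        return "wave"
    if spread > 0.2:
        return "swipe_left"
    return None

def handle_channel_info(channel_info, send_instruction):
    """Detect a gesture, look up its action, and send the instruction."""
    gesture = detect_gesture(channel_info)
    if gesture in GESTURE_ACTIONS:
        action = GESTURE_ACTIONS[gesture]
        send_instruction(action)
        return action
    return None
```

A caller would pass `send_instruction` as whatever transport reaches the network-connected device (e.g., an MQTT publish or HTTP request).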
Systems and methods for adaptive input thresholding
The disclosed computer-implemented method may include detecting, by a computing system, a gesture that appears to be intended to trigger a response by the computing system, and identifying, by the computing system, a context in which the gesture was performed. Based at least on that context, the method adjusts a threshold for determining whether to trigger the response to the gesture, causing the computing system to perform an action that is based on the detected gesture.
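The context-dependent threshold adjustment might look like the following sketch. The base threshold, the context names, and the adjustment values are illustrative assumptions, not figures from the disclosure.

```python
def threshold_for_context(context):
    """Return a gesture-confidence threshold adjusted for the current context.

    A noisier context (e.g., walking) raises the bar to avoid false triggers;
    a stable context (e.g., seated) lowers it. Values are illustrative.
    """
    base = 0.7
    adjustments = {"walking": +0.15, "seated": -0.05}
    return base + adjustments.get(context, 0.0)

def should_trigger(gesture_confidence, context):
    """Trigger the response only if confidence clears the context threshold."""
    return gesture_confidence >= threshold_for_context(context)
```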
Color-sensitive virtual markings of objects
Disclosed are systems, methods, and non-transitory computer readable media for making virtual colored markings on objects. Instructions may include receiving an indication of an object; receiving from an image sensor an image of a hand of an individual holding a physical marking implement; detecting in the image a color associated with the marking implement; receiving from the image sensor image data indicative of movement of a tip of the marking implement and locations of the tip; determining from the image data when the locations of the tip correspond to locations on the object; and generating, in the detected color, virtual markings on the object at the corresponding locations.
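The marking step — intersecting the tracked tip locations with the object and painting in the detected color — reduces to something like the sketch below, with the object modeled as a set of cells and the tip path as a point sequence. The representation is an assumption for illustration.

```python
def virtual_markings(tip_color, tip_path, object_region):
    """Generate (location, color) markings where the tip path lies on the object.

    tip_color: color detected on the physical marking implement.
    tip_path: sequence of (x, y) tip locations from the image data.
    object_region: set of (x, y) cells occupied by the object.
    """
    return [(loc, tip_color) for loc in tip_path if loc in object_region]
```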
Range of motion control in XR applications on information handling systems
More realistic experiences can be provided to a user through the use of a wearable suit. The xR wearable suit may include materials with adjustable characteristics, such as friction, and electronics for controlling the materials to provide feedback to the user wearing the xR suit. In an xR game, the materials may be used to translate virtual damage into physical constraints on the user. For example, when an avatar is shot in the leg and debilitated, the user's leg motion can be constricted so that the user perceives that limitation and stays in sync with the avatar. Examples of such feedback materials include inflating ribs, sheet jamming, and mechanical devices.
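The translation from virtual damage to a material setting could be as simple as a proportional mapping, as in this sketch. The linear mapping and its scale are assumptions; the patent does not specify the control law.

```python
def constraint_level(damage, max_damage=100, max_friction=1.0):
    """Map virtual damage on a limb to a friction setting for the suit material.

    Damage is clamped at max_damage so the constraint never exceeds the
    material's maximum adjustable friction. Proportional mapping is illustrative.
    """
    return max_friction * min(damage, max_damage) / max_damage
```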
Systems, methods, and graphical user interfaces for updating display of a device relative to a user's body
An electronic device, while the electronic device is worn over a predefined portion of the user's body, displays, via a display generation component arranged on the electronic device opposite the predefined portion of the user's body, a graphical representation of an exterior view of a body part that corresponds to the predefined portion of the user's body. The electronic device detects a change in position of the electronic device with respect to the predefined portion of the user's body. The electronic device, in response to detecting the change in the position of the electronic device with respect to the predefined portion of the user's body, modifies the graphical representation of the exterior view of the body part that corresponds to the predefined portion of the user's body in accordance with the detected change in position of the electronic device with respect to the predefined portion of the user's body.
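The core of the modification step is keeping the rendered body part registered with the real one: when the device shifts relative to the body, the rendered image is counter-shifted. A 2D sketch under that assumption (the patent does not specify the coordinate math):

```python
def adjusted_render_position(base_position, device_shift):
    """Counter-shift the rendered body-part image when the device moves.

    If the device slides by (dx, dy) relative to the body part, shifting the
    graphic by (-dx, -dy) keeps it visually aligned with the underlying body.
    """
    bx, by = base_position
    dx, dy = device_shift
    return (bx - dx, by - dy)
```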
System and method for iterative classification using neurophysiological signals
A method of training an image classification neural network comprises: presenting a first plurality of images to an observer as a visual stimulus, while collecting neurophysiological signals from a brain of the observer; processing the neurophysiological signals to identify a neurophysiological event indicative of a detection of a target by the observer in at least one image of the first plurality of images; training the image classification neural network to identify the target in the image, based on the identification of the neurophysiological event; and storing the trained image classification neural network in a computer-readable storage medium.
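The labeling stage described above — using a detected neurophysiological event as the training label for each presented image — can be sketched as follows. The threshold-based detector is a toy stand-in for a real event classifier (e.g., one tuned to P300-like responses); all names and values are illustrative.

```python
def label_images_from_eeg(images, eeg_epochs, event_detector):
    """Pair each image with a label derived from the observer's brain response.

    Label 1 means the detector found a neurophysiological event indicating the
    observer detected a target in that image; 0 means no event was found.
    These (image, label) pairs then become training data for the classifier.
    """
    return [(img, 1 if event_detector(epoch) else 0)
            for img, epoch in zip(images, eeg_epochs)]

def toy_detector(epoch, threshold=3.0):
    """Toy event detector: flags an epoch whose peak amplitude is large."""
    return max(epoch) > threshold
```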
Systems and methods for enabling quick multi-application menu access to media options
Systems and methods for enabling quick access to media options are provided. A display of a plurality of icons is generated, wherein each of the plurality of icons represents a different one of a plurality of applications. A user input is detected that identifies a first of the plurality of icons associated with a first of the plurality of applications. In response to determining that the user input corresponds to a quick access operation, first and second media asset identifiers and corresponding media options are retrieved from second and third applications, respectively. A menu that includes the retrieved first and second media asset identifiers is generated for display with the plurality of icons.
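Building the quick-access menu amounts to collecting one media asset identifier (with its options) from each of the other applications. A sketch under that reading, with an assumed catalog structure not specified in the abstract:

```python
def quick_access_menu(selected_app, app_catalog):
    """Collect one media asset identifier from every app except the selected one.

    app_catalog maps application name -> list of (identifier, options) entries.
    The first entry of each other app is taken, mirroring the retrieval of
    first and second identifiers from second and third applications.
    """
    menu = []
    for app, assets in app_catalog.items():
        if app != selected_app and assets:
            menu.append(assets[0])
    return menu
```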
Touchless interaction using audio components
The present teachings relate to an electronic device comprising: a first module for generating an audio signal; a second module for generating an ultrasonic signal; a mixer for generating a combined signal; a transmitter for outputting an acoustic signal dependent upon the combined signal; and a processing means for controlling the ultrasonic signal; wherein, in response to receiving a first instruction signal for initiating the ultrasonic signal, the processing means is configured to increase the amount of the ultrasonic signal in the combined signal from an essentially zero value to a predetermined value over a predetermined enable time-period. The present teachings also relate to an electronic device configured to decrease the amount of the ultrasonic signal in the combined signal from the predetermined value to an essentially zero value over a predetermined disable time-period, to an electronic device configured to remove the audio signal from the combined signal whilst preventing pop-noise, and to an electronic device capable of replacing the ultrasonic signal whilst minimizing the processing time. The present teachings further relate to a method for reducing the occurrence of pop noise in an acoustic signal associated with: initiating the ultrasonic signal in the combined signal, terminating the ultrasonic signal in the combined signal, terminating the audio signal in the combined signal, and replacing the ultrasonic signal in the combined signal. The present teachings also relate to a computer software product for implementing any of the method steps disclosed herein, and to a computer storage medium storing the computer software herein disclosed.
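The enable-time ramp is the key anti-pop mechanism: ramping the ultrasonic gain from zero to its target over a time window, rather than switching it on instantly, avoids the discontinuity that produces an audible pop. A per-sample sketch, with a linear ramp and illustrative time constants assumed (the disclosure does not fix a ramp shape):

```python
def ramp_gain(t, enable_time, target_gain):
    """Linear gain ramp from 0 to target_gain over enable_time seconds.

    The gradual ramp (instead of an instant switch) is what suppresses
    pop noise when the ultrasonic component is initiated.
    """
    if t <= 0:
        return 0.0
    if t >= enable_time:
        return target_gain
    return target_gain * t / enable_time

def mix(audio_sample, ultrasonic_sample, t, enable_time=0.05, target_gain=1.0):
    """Combine audio and ramped ultrasonic samples into the combined signal."""
    return audio_sample + ramp_gain(t, enable_time, target_gain) * ultrasonic_sample
```

The disable path is the mirror image: ramp the gain back down to zero over the disable time-period before removing the ultrasonic component.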
Head mounted display and setting method
In a head mounted display 100, a memory 71 stores an application. An image pickup unit 74 captures an image of a body region of the user 1, and a position specifying unit 73 specifies a position and a direction of the head mounted display 100. A detector 75 detects a position indicated by the user 1 on the basis of the image taken by the image pickup unit 74, and a setting unit 76 sets a home position on the basis of the result detected by the detector 75 and the position and the direction specified by the position specifying unit 73.
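Combining the detected indicated position with the HMD's own position and direction amounts to a frame transform: the point indicated in the display's local frame is rotated by the HMD's heading and translated by its position to obtain the world-space home position. A 2D sketch under that assumption (the abstract does not give the math):

```python
import math

def set_home_position(indicated_point, hmd_position, hmd_yaw_deg):
    """Transform a locally indicated point into world coordinates.

    indicated_point: (x, y) detected from the camera image, in the HMD frame.
    hmd_position / hmd_yaw_deg: HMD pose from the position specifying unit.
    Returns the world-space point to store as the home position.
    """
    x, y = indicated_point
    yaw = math.radians(hmd_yaw_deg)
    wx = hmd_position[0] + x * math.cos(yaw) - y * math.sin(yaw)
    wy = hmd_position[1] + x * math.sin(yaw) + y * math.cos(yaw)
    return (wx, wy)
```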
Systems and methods for controlling virtual scene perspective via physical touch input
Systems, methods, and non-transitory computer readable media for controlling perspective in an extended reality environment are disclosed. In one embodiment, a non-transitory computer readable medium contains instructions to cause a processor to perform the steps of: outputting for presentation via a wearable extended reality appliance (WER-appliance), first display signals reflective of a first perspective of a scene; receiving first input signals caused by a first multi-finger interaction with the touch sensor; in response, outputting for presentation via the WER-appliance second display signals to modify the first perspective of the scene, causing a second perspective of the scene to be presented via the WER-appliance; receiving second input signals caused by a second multi-finger interaction with the touch sensor; and in response, outputting for presentation via the WER-appliance third display signals to modify the second perspective of the scene, causing a third perspective of the scene to be presented via the WER-appliance.
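Each multi-finger interaction maps to a perspective update, applied again and again as input signals arrive (first perspective, second, third). A sketch with an assumed perspective state and gesture vocabulary — pinch-to-zoom and two-finger rotate are illustrative choices, not gestures enumerated in the abstract:

```python
def update_perspective(perspective, interaction):
    """Apply one multi-finger touch interaction to the current scene perspective.

    perspective: dict with 'zoom' and 'rotation_deg' (illustrative state).
    interaction: dict describing the multi-finger input from the touch sensor.
    Returns the new perspective to render via the WER-appliance.
    """
    p = dict(perspective)  # leave the prior perspective unmodified
    if interaction["type"] == "pinch":
        p["zoom"] *= interaction["scale"]
    elif interaction["type"] == "two_finger_rotate":
        p["rotation_deg"] = (p["rotation_deg"] + interaction["angle"]) % 360
    return p
```

Feeding the second interaction the output of the first reproduces the first → second → third perspective chain described above.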