G06V40/28

AIR TRANSPORTATION SYSTEMS AND METHODS
20230049845 · 2023-02-16 ·

Systems and methods are disclosed for transporting people using air vehicles.

DISPLACED HAPTIC FEEDBACK
20230049900 · 2023-02-16 ·

Displaced haptic feedback can be provided to a person providing an input on a user interface. The user interface can be operatively connected to a processor. The processor can be configured to receive an input signal from the user interface. Responsive to receiving the input signal, the processor can be further configured to cause a haptic device to be activated. The haptic device can be physically separated from the user interface. The haptic device can provide haptic feedback to a user interacting with the user interface.
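The abstract above describes routing an input signal from one surface to an actuator elsewhere on the body. A minimal sketch of that control flow follows; all class and field names (`HapticDevice`, `DisplacedHapticController`, the `"pressure"` key) are hypothetical illustrations, not from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class HapticDevice:
    """Actuator physically separate from the touch surface (e.g. a wristband)."""
    activations: list = field(default_factory=list)

    def pulse(self, intensity: float) -> None:
        self.activations.append(intensity)

@dataclass
class DisplacedHapticController:
    """Routes UI input signals to the remote haptic device."""
    device: HapticDevice

    def on_input_signal(self, signal: dict) -> None:
        # Responsive to receiving the input signal, activate the haptic
        # device even though it is not part of the user interface itself.
        self.device.pulse(intensity=signal.get("pressure", 1.0))

device = HapticDevice()
controller = DisplacedHapticController(device)
controller.on_input_signal({"pressure": 0.5})
```

The point of the indirection is that the user interface never drives the actuator directly; the processor mediates, so the feedback location can be chosen independently of the input location.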

Determining Features based on Gestures and Scale
20230051467 · 2023-02-16 ·

A system, method, and computer-readable medium for associating a person’s gestures with specific features of objects are disclosed. Using one or more image capture devices, a person’s gestures and the location of that person in an environment are determined. Using determined distances between the person and objects in the environment, and scales associated with features of those objects, the list of specific features in the person’s field of view may be determined. Further, a facial expression of the person may be scored and that score associated with one or more specific features.
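One way to read the distance/scale step: each feature carries a scale that bounds the distance at which it is discernible, so only features whose scale covers the person's distance to the object enter the candidate list, and the expression score is attached to those. A minimal sketch under that assumption (the data model and all names are hypothetical):

```python
import math

# Hypothetical data model: each feature of an object has a "scale",
# treated here as the maximum distance at which it is discernible.
objects = {
    "storefront": {"position": (10.0, 0.0),
                   "features": {"sign": 50.0, "menu": 5.0}},
}

def features_in_view(person_pos, expression_score):
    """Return (feature, score) pairs for features whose scale covers
    the person's distance to the owning object."""
    visible = []
    for obj in objects.values():
        dx = obj["position"][0] - person_pos[0]
        dy = obj["position"][1] - person_pos[1]
        dist = math.hypot(dx, dy)
        for name, scale in obj["features"].items():
            if dist <= scale:
                visible.append((name, expression_score))
    return visible
```

From ten meters away the large sign qualifies but the small menu does not, so only the sign receives the person's expression score.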

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING SYSTEM, NON-TRANSITORY COMPUTER READABLE MEDIUM, AND INFORMATION PROCESSING METHOD

An information processing apparatus includes a processor configured to: obtain a video and an instruction to generate a still image from the video, the video being a video in which a work target is photographed, the work target being a target on which to work; generate the still image in response to the instruction, the still image being cut from the video including the work target; specify the work target in the video, position information, and a superimposition area by using the still image, the position information describing a position of the work target, the superimposition area being an area in which an image is superimposed, the image being obtained by using the position of the work target as a reference; receive instruction information indicating an instruction for work on the work target; and superimpose and display an instruction image in the superimposition area in the video, the instruction image being an image according to the instruction information.
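The superimposition area is defined relative to the work target's position, so once the target is located in a frame, the overlay rectangle follows from an offset. A minimal sketch of that geometry (function name, offset, and rectangle convention are all hypothetical, not from the patent):

```python
def superimposition_area(target_pos, instruction_size, offset=(0, -40)):
    """Compute the (x, y, width, height) rectangle in which the
    instruction image is superimposed, using the work target's
    position as the reference point."""
    tx, ty = target_pos
    w, h = instruction_size
    ox, oy = offset
    # Anchor the overlay relative to the target, e.g. just above it.
    return (tx + ox, ty + oy, w, h)
```

Because the area is expressed relative to the target rather than the frame, the instruction image stays attached to the work target as it moves through the video.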

Detecting interactions with non-discretized items and associating interactions with actors using digital images

Commercial interactions with non-discretized items such as liquids in carafes or other dispensers are detected and associated with actors using images captured by one or more digital cameras including the carafes or dispensers within their fields of view. The images are processed to detect body parts of actors and other aspects therein, and to not only determine that a commercial interaction has occurred but also identify an actor that performed the commercial interaction. Based on information or data determined from such images, movements of body parts associated with raising, lowering or rotating one or more carafes or other dispensers may be detected, and a commercial interaction involving such carafes or dispensers may be detected and associated with a specific actor accordingly.
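The two steps the abstract describes — detecting a raise/lower movement of a dispenser from tracked body parts, then attributing it to a specific actor — can be sketched very simply. The threshold, the 1-D positions, and the actor records below are hypothetical illustrations, not from the patent:

```python
def detect_lift(heights, threshold=0.1):
    """True if the tracked hand/carafe height rises by more than
    `threshold` between consecutive frames (a raising movement)."""
    return any(b - a > threshold for a, b in zip(heights, heights[1:]))

def associate_actor(event_pos, actors):
    """Attribute the detected interaction to the nearest tracked actor."""
    return min(actors, key=lambda a: abs(a["x"] - event_pos))
```

In a real system the heights would come from pose estimation across camera frames and positions would be 3-D, but the structure — gesture detection first, nearest-actor association second — is the same.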

Virtual and augmented reality signatures

A method implemented on a visual computing device to authenticate one or more users includes receiving a first three-dimensional pattern from a user. The first three-dimensional pattern is sent to a server computer. At a time of user authentication, a second three-dimensional pattern is received from the user. The second three-dimensional pattern is sent to the server computer. An indication is received from the server computer as to whether the first three-dimensional pattern matches the second three-dimensional pattern within a margin of error. When the first three-dimensional pattern matches the second three-dimensional pattern within the margin of error, the user is authenticated at the server computer. When the first three-dimensional pattern does not match the second three-dimensional pattern within the margin of error, the user is prevented from being authenticated at the server computer.
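The abstract leaves the match test abstract; one natural reading is a point-by-point comparison of the two traced patterns against a distance tolerance. A minimal sketch under that assumption (the representation as point lists and the per-point test are illustrative, not the patent's method):

```python
import math

def patterns_match(first, second, margin):
    """Compare two 3-D patterns (sequences of (x, y, z) points) point
    by point; they match when every corresponding pair of points is
    within `margin` of each other."""
    if len(first) != len(second):
        return False
    return all(math.dist(a, b) <= margin for a, b in zip(first, second))
```

The margin of error is what makes a hand-drawn 3-D signature usable: the second tracing never reproduces the first exactly, so the server accepts any attempt inside the tolerance.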

Hyper planning based on object and/or region

A vehicle computing system may implement techniques to predict the behavior of objects detected by a vehicle operating in the environment. The techniques may include determining a feature with respect to a detected object (e.g., the likelihood that the detected object will impact operation of the vehicle) and/or a location of the vehicle, and determining, based on the feature, a model to use to predict the behavior (e.g., estimated states) of proximate objects (e.g., the detected object). The model may be configured to use one or more algorithms, classifiers, and/or computational resources to predict the behavior. Different models may be used to predict the behavior of different objects and/or regions in the environment. Each model may receive sensor data as an input and output predicted behavior for the detected object. Based on the predicted behavior of the object, the vehicle computing system may control operation of the vehicle.
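The core idea — compute a feature per object, then route that object to a cheaper or more expensive prediction model — can be sketched in a few lines. The distance-based likelihood and the model names below are hypothetical stand-ins, not the patent's actual feature or models:

```python
def impact_likelihood(detected_object, vehicle):
    """Hypothetical feature: crude likelihood that the object affects
    the vehicle, decaying linearly with distance (clamped at zero)."""
    dist = abs(detected_object["x"] - vehicle["x"])
    return max(0.0, 1.0 - dist / 100.0)

def select_model(likelihood):
    """Route the object to a more or less expensive prediction model
    depending on how likely it is to matter."""
    return "high_fidelity" if likelihood > 0.5 else "coarse"
```

Spending the expensive model only on objects likely to impact the vehicle is what lets the system scale to many proximate objects per frame.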

System and method for an interactive digitally rendered avatar of a subject person
11582424 · 2023-02-14 ·

A system and method for an interactive digitally rendered avatar of a subject person to participate in a web meeting is described. In one embodiment, the method includes receiving an invite to a web meeting on a video conferencing platform, wherein the invite identifies a subject person and the video conferencing platform. The method also includes generating an interactive avatar of the subject person based on a data collection associated with the subject person stored in a database. The method further includes instantiating a platform integrator associated with the video conferencing platform identified in the invite and joining, by the interactive avatar of the subject person, the web meeting on the video conferencing platform. The platform integrator transforms outputs and inputs between the video conferencing platform and an interactive digitally rendered avatar system so that the interactive avatar of the subject person participates in the web meeting.
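The platform integrator is an adapter: platform events in, avatar responses out. A minimal sketch of that shape (the `Avatar` lookup, the message hook, and all names are hypothetical illustrations, not the patented system):

```python
class Avatar:
    """Stand-in for the interactive avatar: answers from a data
    collection associated with the subject person (hypothetical)."""
    def __init__(self, subject, data_collection):
        self.subject = subject
        self.data = data_collection

    def respond(self, utterance):
        return self.data.get(utterance, f"{self.subject}: let me get back to you")

class PlatformIntegrator:
    """Transforms inputs and outputs between the conferencing platform
    and the avatar system so the avatar can participate in the meeting."""
    def __init__(self, avatar):
        self.avatar = avatar

    def on_meeting_message(self, text):
        # Inbound platform event -> avatar input; avatar output -> reply.
        return self.avatar.respond(text)

avatar = Avatar("Ada", {"status?": "Ada: shipping on Friday"})
integrator = PlatformIntegrator(avatar)
```

Because each supported conferencing platform gets its own integrator, the avatar system itself stays platform-agnostic.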

3D user interface depth forgiveness
11579747 · 2023-02-14 ·

A head-worn device system includes one or more cameras, one or more display devices and one or more processors. The system also includes a memory storing instructions that, when executed by the one or more processors, configure the system to generate a virtual object, generate a virtual object collider for the virtual object, determine a conic collider for the virtual object, provide the virtual object to a user, detect a landmark on the user's hand in the real world, generate a landmark collider for the landmark, and determine a selection of the virtual object by the user based on detecting a collision of the landmark collider with both the conic collider and the virtual object collider.
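The "depth forgiveness" in the title comes from the conic collider: a cone from the viewpoint through the object tolerates depth error along the view ray, while the object's own collider still gates the hit. A minimal sketch of that double test, with spheres standing in for the colliders (all geometry choices and names here are illustrative assumptions, not the patent's):

```python
import math

def in_cone(apex, axis, half_angle_deg, point):
    """True if `point` lies inside the infinite cone from `apex`
    opening along `axis` with the given half-angle."""
    v = [p - a for p, a in zip(point, apex)]
    norm_v = math.sqrt(sum(c * c for c in v))
    norm_a = math.sqrt(sum(c * c for c in axis))
    if norm_v == 0.0:
        return True
    cos_angle = sum(vc * ac for vc, ac in zip(v, axis)) / (norm_v * norm_a)
    return cos_angle >= math.cos(math.radians(half_angle_deg))

def spheres_collide(c1, r1, c2, r2):
    return math.dist(c1, c2) <= r1 + r2

def selects(landmark, lm_radius, obj_center, obj_radius, viewpoint, half_angle):
    """Selection requires the landmark collider to hit both the conic
    collider (forgiving along the depth axis) and the object collider."""
    axis = [o - v for o, v in zip(obj_center, viewpoint)]
    return (in_cone(viewpoint, axis, half_angle, landmark)
            and spheres_collide(landmark, lm_radius, obj_center, obj_radius))
```

A fingertip slightly in front of the object but on the view ray still selects it; a fingertip at the right depth but well off the ray does not.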

Associating three-dimensional coordinates with two-dimensional feature points
11580662 · 2023-02-14 ·

An example method includes causing a light projecting system of a distance sensor to project a three-dimensional pattern of light onto an object, wherein the three-dimensional pattern of light comprises a plurality of points of light that collectively forms the pattern, causing a light receiving system of the distance sensor to acquire an image of the three-dimensional pattern of light projected onto the object, causing the light receiving system to acquire a two-dimensional image of the object, detecting a feature point in the two-dimensional image of the object, identifying an interpolation area for the feature point, and computing three-dimensional coordinates for the feature point by interpolating using three-dimensional coordinates of two points of the plurality of points that are within the interpolation area.
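The final step — computing 3-D coordinates for a 2-D feature point by interpolating between pattern points in its interpolation area — can be sketched with the simplest case of two neighboring dots. The linear projection onto the image segment is an illustrative choice; the patent's interpolation scheme and data layout may differ:

```python
def interpolate_3d(feature_2d, pattern_points):
    """pattern_points: two ((u, v), (x, y, z)) pairs mapping a projected
    dot's 2-D image position to its measured 3-D coordinates. Linearly
    interpolate the feature point's 3-D coordinates between them."""
    (u1, v1), p1 = pattern_points[0]
    (u2, v2), p2 = pattern_points[1]
    # Fraction of the way from dot 1 to dot 2 along the image segment.
    du, dv = u2 - u1, v2 - v1
    t = ((feature_2d[0] - u1) * du + (feature_2d[1] - v1) * dv) / (du * du + dv * dv)
    return tuple(a + t * (b - a) for a, b in zip(p1, p2))
```

This is why the sensor captures both images: the pattern image supplies sparse, known 3-D anchors, and the 2-D image supplies the feature point whose depth is then filled in between them.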