Patent classifications
G01S5/16
Visual-inertial tracking using rolling shutter cameras
Visual-inertial tracking of an eyewear device using one or more rolling shutter cameras. The eyewear device includes a position determining system. Visual-inertial tracking is implemented by sensing motion of the eyewear device. An initial pose is obtained for a rolling shutter camera, and an image of an environment is captured. The image includes feature points captured at particular capture times. A number of poses for the rolling shutter camera are computed based on the initial pose and the sensed movement of the device; the number of computed poses is responsive to the sensed movement of the eyewear device. A computed pose is selected for each feature point in the image by matching the feature point's capture time to the computation time of the computed pose. The position of the eyewear device within the environment is then determined using the feature points and their selected computed poses.
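The pose-selection step described above can be sketched as follows. This is a hypothetical simplification, not the patented implementation: poses are reduced to 3-D translations propagated with a constant IMU-sensed velocity, rotation handling is omitted, and all values are illustrative.

```python
import numpy as np

# Hypothetical sketch: propagate the initial pose to several times during the
# rolling-shutter readout, then give each feature point the computed pose
# whose time is closest to that feature's capture time.

def compute_poses(initial_pose, velocity, pose_times):
    """Propagate the initial pose to each computation time (constant velocity)."""
    return {t: initial_pose + velocity * t for t in pose_times}

def select_pose(feature_time, poses):
    """Match a feature's capture time to the nearest computed pose."""
    nearest_t = min(poses, key=lambda t: abs(t - feature_time))
    return poses[nearest_t]

initial = np.zeros(3)
velocity = np.array([0.1, 0.0, 0.0])   # m/s, from the inertial sensors
pose_times = [0.0, 0.01, 0.02, 0.03]   # denser when sensed motion is faster
poses = compute_poses(initial, velocity, pose_times)

pose = select_pose(0.012, poses)       # feature captured 12 ms into readout
```

The number of entries in `pose_times` stands in for the abstract's "number of computed poses responsive to the sensed movement": faster motion would warrant a denser set.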
Attitude determination system
An instrument (20) determines the attitude of a spacecraft (3) on which it is mounted, by interacting incident light (11) from the Sun with one or more light conditioning elements (12) and thereby forming a diffraction pattern at a photo-sensitive detector (13). The intensity distribution of light on the detector (13) is dependent on the angle of incidence of the light (11). An on-board computer (16) determines a direction vector to the Sun based on the light diffraction pattern detected by the detector (13).
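The angle-dependence described above can be illustrated with a toy centroid computation. This is an assumption-laden sketch: the real instrument evaluates a diffraction pattern, while here the intensity centroid is simply assumed to shift in proportion to the tangent of the incidence angle; all names and values are illustrative.

```python
import math

# Hypothetical sketch: estimate the Sun's incidence angle from the centroid
# of the intensity distribution on the detector, assuming a pinhole-like
# geometry where the pattern offset grows with tan(angle).

def incidence_angle(pixel_positions, intensities, pixel_pitch_m, standoff_m):
    """Angle from the detector normal, in radians."""
    total = sum(intensities)
    centroid_px = sum(p * i for p, i in zip(pixel_positions, intensities)) / total
    offset_m = centroid_px * pixel_pitch_m
    return math.atan2(offset_m, standoff_m)

angle = incidence_angle(
    pixel_positions=[-2, -1, 0, 1, 2],
    intensities=[0.0, 0.0, 1.0, 2.0, 1.0],  # pattern centred 1 px off-axis
    pixel_pitch_m=10e-6,                     # 10 um pixels
    standoff_m=1e-3,                         # 1 mm element-to-detector distance
)
```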
Three-dimensional object position tracking system
A hand-held controller and a positional reference device for determining the position and orientation of the hand-held controller within a three-dimensional volume relative to the location of the positional reference device. An input/output subsystem, in conjunction with processing and memory subsystems, can receive reference image data captured by a beacon sensing device combined with inertial measurement information from inertial measurement units within the hand-held controller. The position and orientation of the hand-held controller can be computed based on the linear distance between a pair of beacons on the positional reference device, the reference image data, and the inertial measurement information.
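The ranging idea above can be sketched with a pinhole camera model: the known physical separation of the two beacons and their pixel separation in the reference image yield the distance to the reference device. The focal length and coordinates below are illustrative assumptions, not values from the patent.

```python
import math

# Hypothetical sketch: distance to the positional reference device from the
# pixel separation of two beacons with known physical separation, using a
# pinhole camera model (range = f * D / d).

def range_from_beacons(px_a, px_b, beacon_separation_m, focal_px):
    """Distance to the reference device along the optical axis, in metres."""
    d_px = math.dist(px_a, px_b)
    return focal_px * beacon_separation_m / d_px

r = range_from_beacons((300, 240), (340, 240), 0.10, 800.0)
```

In the full system this single range would be fused with the inertial measurement information to recover the complete six-degree-of-freedom pose.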
POSITIONING METHOD AND SYSTEM, AND APPARATUS
The present disclosure relates to positioning methods, systems, and apparatuses. One example method includes receiving, by a server, a first message from a first terminal device, where the first message includes first location information determined by the first terminal device, determining, by the server, a first reference object based on the first location information, and sending, by the server, a second message to the first terminal device, where the second message includes identification information of the first reference object, and the identification information of the first reference object is used by the first terminal device to update the first location information.
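The message exchange above can be sketched as a minimal server-side handler. All names, the message shape, and the reference-object database are illustrative assumptions; the example only shows the first message in, reference object chosen by proximity, second message out.

```python
from dataclasses import dataclass

# Hypothetical sketch: the server receives a first message carrying the
# terminal's self-determined location, selects the nearest reference object,
# and replies with that object's identifier (the "second message").

@dataclass
class FirstMessage:
    terminal_id: str
    location: tuple  # (lat, lon) as determined by the first terminal device

REFERENCE_OBJECTS = {
    "lamp_post_7": (59.3294, 18.0687),
    "bus_stop_3": (59.3400, 18.0500),
}

def handle_first_message(msg):
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    ref_id = min(REFERENCE_OBJECTS,
                 key=lambda r: dist2(REFERENCE_OBJECTS[r], msg.location))
    return {"reference_object_id": ref_id}

reply = handle_first_message(FirstMessage("t1", (59.33, 18.07)))
```

The terminal would then use the returned identifier to observe the reference object and refine its own location estimate.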
IN-VEHICLE USER POSITIONING METHOD, IN-VEHICLE INTERACTION METHOD, VEHICLE-MOUNTED APPARATUS, AND VEHICLE
This application provides an in-vehicle user positioning method, an in-vehicle interaction method, a vehicle-mounted apparatus, and a vehicle. In an example, the in-vehicle user positioning method includes: obtaining a sound signal collected by an in-vehicle microphone; in response to recognizing a first voice command in the sound signal, determining a first user who sent the first voice command; and determining an in-vehicle location of the first user based on a mapping relationship between in-vehicle users and in-vehicle locations.
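The final mapping step can be sketched very simply: once the speaker of the voice command has been identified, the in-vehicle location is a lookup in a user-to-seat mapping. The user names and seat labels below are illustrative, not from the application.

```python
# Hypothetical sketch: map an identified in-vehicle user to their seat.
SEAT_MAP = {
    "alice": "driver",
    "bob": "front_passenger",
}

def locate_user(user_id, seat_map):
    """Return the in-vehicle location for a recognized user."""
    return seat_map.get(user_id, "unknown")

loc = locate_user("alice", SEAT_MAP)
```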
Localization using dynamic landmarks
A method, system and computer program product for determining a map position of an ego-vehicle are disclosed. The method includes acquiring map data comprising a road geometry, initializing at least one dynamic landmark by measuring a position and velocity, relative to the ego-vehicle, of a surrounding vehicle, and determining a first map position of the surrounding vehicle based on this measurement and the geographical position of the ego-vehicle. Further, the method includes predicting a second map position of the surrounding vehicle, and measuring a location, relative to the ego-vehicle, of the surrounding vehicle when it is estimated to be at the second map position, whereby the geographical position of the ego-vehicle can be computed and updated.
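The dynamic-landmark loop above can be sketched in three steps: initialize the landmark from a relative measurement, predict its map position from its velocity, and recover the ego position from a later relative measurement. Positions are simplified to 2-D map coordinates, the landmark velocity is assumed constant, and all values are illustrative.

```python
import numpy as np

# Hypothetical sketch of localization with a dynamic landmark
# (a surrounding vehicle used as a moving reference point).

def init_landmark(ego_pos, rel_pos):
    """Landmark map position from ego position and relative measurement."""
    return ego_pos + rel_pos

def predict_landmark(landmark_pos, landmark_vel, dt):
    """Predict the landmark's map position after dt seconds."""
    return landmark_pos + landmark_vel * dt

def update_ego(predicted_landmark_pos, rel_measurement):
    """Recover the ego map position from a later relative measurement."""
    return predicted_landmark_pos - rel_measurement

ego0 = np.array([100.0, 50.0])
lm0 = init_landmark(ego0, np.array([20.0, 0.0]))          # landmark at [120, 50]
lm1 = predict_landmark(lm0, np.array([10.0, 0.0]), 1.0)   # predicted [130, 50]
ego1 = update_ego(lm1, np.array([15.0, 0.0]))             # updated ego position
```

A production system would fold these steps into a filter that also weighs the prediction uncertainty against the measurement noise; the sketch keeps only the geometry.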
System and method for classifying agents based on agent movement patterns
Described is a system and method for the classification of agents based on agent movement patterns. In operation, the system receives position data of a moving agent from a camera or sensor. Motion data of the moving agent is then extracted and used to generate a predicted future motion of the moving agent using a set of pre-calculated Echo State Networks (ESN). Each ESN represents an agent classification and generates a predicted future motion. A prediction error is generated for each ESN by comparing the predicted future motion for each ESN with actual motion data. Finally, the agent is classified based on the ESN having the smallest prediction error.
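The final classification step can be sketched independently of the networks themselves: each class-specific predictor (standing in for a pre-trained Echo State Network) produces a forecast, and the class whose forecast has the smallest error against the observed motion wins. The class labels and motion values are illustrative.

```python
import numpy as np

# Hypothetical sketch: classify an agent by the per-class predictor
# (ESN stand-in) with the smallest prediction error.

def classify(predictions, actual):
    """Return the class whose predicted motion best matches the actual motion."""
    errors = {cls: float(np.linalg.norm(pred - actual))
              for cls, pred in predictions.items()}
    return min(errors, key=errors.get)

actual = np.array([1.0, 1.1, 1.2])
predictions = {
    "pedestrian": np.array([1.0, 1.05, 1.15]),
    "vehicle": np.array([2.0, 2.5, 3.0]),
}
label = classify(predictions, actual)
```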
OPTICAL DETECTOR
An optical detector (110) is disclosed, comprising: at least one optical sensor (122) adapted to detect a light beam (116) and to generate at least one sensor signal, wherein the optical sensor (122) has at least one sensor region (126), wherein the sensor signal of the optical sensor (122) is dependent on an illumination of the sensor region (126) by the light beam (116), wherein the sensor signal, given the same total power of the illumination, is dependent on a width of the light beam (116) in the sensor region (126); at least one focus-tunable lens (130) located in at least one beam path (132) of the light beam (116), the focus-tunable lens (130) being adapted to modify a focal position of the light beam (116) in a controlled fashion; at least one focus-modulation device (136) adapted to provide at least one focus-modulating signal (138) to the focus-tunable lens (130), thereby modulating the focal position; and at least one evaluation device (140), the evaluation device (140) being adapted to evaluate the sensor signal.
ELECTRONIC APPARATUS AND METHOD FOR DISPLAYING VIRTUAL ENVIRONMENT IMAGE
An electronic apparatus includes a controller, a display device coupled to the controller, and a detector coupled to the controller. The display device displays virtual environmental images. The detector detects a spatial parameter of a local space of the electronic apparatus in which the electronic apparatus is located. The controller receives the spatial parameter and controls the display device based on the spatial parameter.