
Image-based techniques for stabilizing positioning estimates
11249197 · 2022-02-15

A device implementing a system for estimating device location includes at least one processor configured to receive a first estimated position of the device at a first time. The at least one processor is further configured to capture, using an image sensor of the device, images during a time period defined by the first time and a second time, and determine, based on the images, a second estimated position of the device, the second estimated position being relative to the first estimated position. The at least one processor is further configured to receive a third estimated position of the device at the second time, and estimate a location of the device based on the second estimated position and the third estimated position.
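The fusion described above can be sketched as a blend of the dead-reckoned position (the first fix plus the image-derived displacement) with the independent second fix. The `fuse_position` helper and its fixed blending weight are illustrative assumptions, not the patent's actual estimator:

```python
import numpy as np

def fuse_position(first_fix, visual_delta, second_fix, weight=0.5):
    """Blend a dead-reckoned position (first fix plus the camera-derived
    displacement over [t1, t2]) with an independent fix at t2.
    `weight` is an assumed blending factor."""
    dead_reckoned = np.asarray(first_fix, float) + np.asarray(visual_delta, float)
    return weight * dead_reckoned + (1.0 - weight) * np.asarray(second_fix, float)

# First fix at t1, image-derived motion over the period, second fix at t2.
est = fuse_position([0.0, 0.0], [1.0, 2.0], [1.2, 1.8])
```

A Kalman filter would weight the two estimates by their covariances; the fixed weight here just makes the structure of the fusion visible.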

Method and apparatus for estimating 3D position and orientation through sensor fusion

An apparatus and method for estimating a three-dimensional (3D) position and orientation based on a sensor fusion process is provided. The method of estimating the 3D position and orientation may include estimating a strength-based position and a strength-based orientation of a remote apparatus, when a plurality of strength information is received, based on an attenuation characteristic whereby strength varies with distance and orientation; estimating an inertia-based position and an inertia-based orientation of the remote apparatus by receiving a plurality of inertial information; and estimating a fused position based on a weighted sum of the strength-based position and the inertia-based position, and a fused orientation based on a weighted sum of the strength-based orientation and the inertia-based orientation. The strength-based position and the strength-based orientation may be estimated based on a plurality of adjusted strength information from which noise has been removed using a plurality of previous strength information.
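The two operations the abstract names, noise removal using previous strength samples and a weighted sum of the two estimates, can be sketched directly. The smoothing constant and fusion weight are illustrative assumptions:

```python
import numpy as np

def smooth_strengths(current, previous, alpha=0.3):
    # Exponentially smooth noisy strength readings using previous samples
    # (one simple way to "remove noise using previous strength information").
    return alpha * np.asarray(current, float) + (1 - alpha) * np.asarray(previous, float)

def fuse(strength_est, inertia_est, w_strength=0.4):
    # Weighted sum of the strength-based and inertia-based estimates;
    # the same rule applies to position vectors and orientation angles.
    return (w_strength * np.asarray(strength_est, float)
            + (1 - w_strength) * np.asarray(inertia_est, float))

# Strength-based fix says (1, 0, 0); inertia-based dead reckoning says (0, 1, 0).
pos = fuse([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])
```

In practice the weights would be adapted to each sensor's current reliability, e.g. trusting inertia more when the strength readings are noisy.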

SYSTEM AND METHOD FOR NET-CAPTURE OF UNMANNED AERIAL VEHICLE
20220234756 · 2022-07-28

A system and method for capturing an unmanned aerial vehicle includes a net configured to receive the unmanned aerial vehicle, an infrared emitter arrangement including a plurality of infrared emitters arranged around the net, an infrared sensor mounted to the unmanned aerial vehicle and configured to detect the infrared emitter arrangement, and a processor that is in communication with the infrared sensor and configured to adjust an azimuth and elevation of the unmanned aerial vehicle based on the detected infrared emitter arrangement in a field-of-view of the infrared sensor.
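A minimal sketch of the azimuth/elevation adjustment: steer so the centroid of the detected emitters moves toward the center of the infrared sensor's field of view. The proportional gain and pixel-to-angle mapping are assumptions for illustration:

```python
def steering_command(emitter_pixels, image_size, gain=0.1):
    """Proportional steering toward the detected IR emitter arrangement:
    drive azimuth/elevation so the emitters' centroid approaches the
    image center. Gain and sign conventions are illustrative."""
    cx = sum(u for u, _ in emitter_pixels) / len(emitter_pixels)
    cy = sum(v for _, v in emitter_pixels) / len(emitter_pixels)
    w, h = image_size
    d_azimuth = gain * (cx - w / 2)    # positive -> yaw right
    d_elevation = gain * (h / 2 - cy)  # positive -> pitch up
    return d_azimuth, d_elevation

# Two emitters detected left of and above center in a 640x480 IR image.
cmd = steering_command([(300, 200), (340, 240)], (640, 480))
```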

METHOD FOR NAVIGATING A MOVABLE DEVICE ALONG AN INCLINED SURFACE
20210397197 · 2021-12-23

A method for navigating a movable device is disclosed. The movable device comprises a moving system arranged to move the movable device along a surface, a control system arranged to control movement of the movable device by means of the moving system, and a vision-based detector. At least one fiducial, mounted at or near a position of the movable device, is captured by means of the vision-based detector and recognised. A position and/or a pose of the movable device is determined, based on the recognised at least one fiducial. An inclination of the movable device is detected, and the determined position and/or pose of the movable device is corrected, based on the detected inclination of the movable device. Finally, the movable device is navigated based on the corrected position and/or pose of the movable device.
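One plausible form of the inclination correction: a displacement measured along an inclined surface decomposes into horizontal and vertical components by the detected tilt angle. This geometric model is an illustrative assumption, not the patent's exact correction:

```python
import math

def correct_for_inclination(along_surface, incline_deg):
    """Split a fiducial-derived along-surface displacement into
    horizontal and vertical components using the detected inclination.
    Assumes a planar inclined surface (illustrative model)."""
    theta = math.radians(incline_deg)
    horizontal = along_surface * math.cos(theta)
    vertical = along_surface * math.sin(theta)
    return horizontal, vertical

# 2 m travelled along a 30-degree incline.
h, v = correct_for_inclination(2.0, 30.0)
```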

SYSTEM AND METHOD FOR HUMAN INTERACTION WITH VIRTUAL OBJECTS
20210389818 · 2021-12-16

A system for human interaction with virtual objects comprises: a touch sensitive surface, configured to detect a position of a contact made on the touch sensitive surface; a reference layer rigidly attached to the touch sensitive surface and comprising one or more patterns; a display device, configured to display a virtual object that is registered in a reference coordinate frame fixed with respect to the touch sensitive surface; one or more image sensors rigidly attached to the display device, configured to capture an image of at least a portion of the one or more patterns; and at least one processor, configured to determine a position and an orientation of the display device with respect to the touch sensitive surface based on the captured image, and identify an interaction with the virtual object based on the detected position of the contact made on the touch sensitive surface.
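Because the virtual objects are registered in a frame fixed to the touch surface, identifying an interaction reduces to comparing the contact position with each object's footprint in that frame. A minimal hit-test sketch, assuming axis-aligned rectangular footprints (an assumption; real objects could have arbitrary shapes):

```python
def hit_test(contact_xy, objects):
    """Identify which virtual object a touch interacts with.  `objects`
    maps a name to an axis-aligned footprint (x0, y0, x1, y1) expressed
    in the surface-fixed reference frame (assumed representation)."""
    x, y = contact_xy
    for name, (x0, y0, x1, y1) in objects.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None  # contact missed every virtual object

objs = {"button": (0, 0, 10, 10), "slider": (20, 0, 40, 5)}
```

The display-pose estimate from the pattern image is what lets the rendered object and the surface-frame footprint stay registered as the display moves.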

Estimating pose in 3D space
11200420 · 2021-12-14

Methods and devices for estimating the position of a device within a 3D environment are described. Embodiments of the methods include sequentially receiving multiple image segments forming an image representing a field of view (FOV) comprising a portion of the environment. The image includes multiple sparse points that are identifiable based in part on a corresponding subset of image segments of the multiple image segments. The method also includes sequentially identifying one or more sparse points of the multiple sparse points when each subset of image segments corresponding to the one or more sparse points is received, and estimating a position of the device in the environment based on the identified one or more sparse points.
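The incremental identification step can be sketched as follows: flag each sparse point as soon as the subset of image segments it depends on has arrived, without waiting for the full image. The mapping of points to required segment indices is an assumed representation:

```python
def identify_sparse_points(segment_stream, points):
    """Incrementally identify sparse points as image segments arrive.
    `points` maps a point id to the set of segment indices that point
    needs (illustrative data model).  Returns ids in identification order."""
    received, identified = set(), []
    for seg in segment_stream:
        received.add(seg)
        for pid, needed in points.items():
            if pid not in identified and needed <= received:
                identified.append(pid)  # all required segments are in
    return identified

# Segments stream in order 0..3; each point depends on a small subset.
order = identify_sparse_points([0, 1, 2, 3], {"a": {0, 1}, "b": {2}, "c": {1, 3}})
```

The latency benefit is that pose estimation can begin with the earliest-identified points instead of after full-frame readout.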

TRACKING SYSTEM
20210385388 · 2021-12-09

A tracking system includes one or more cameras and machine-vision processors configured to track the positions of a plurality of optical tracking emitters. Each of the plurality of tracking emitters is modulated with a message. The message includes identification information, so that any of the cameras receiving the message can track the location of each emitter.
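One simple way such a modulated identification message could be recovered is on-off keying: the camera samples each emitter's brightness over consecutive frames and thresholds it into bits. The scheme and threshold are assumptions for illustration, not the patent's modulation:

```python
def decode_emitter_id(samples, threshold=0.5):
    """Recover an identification message from per-frame brightness
    samples of one emitter, assuming simple on-off keying (bright
    frame = 1, dark frame = 0).  Threshold is illustrative."""
    bits = "".join("1" if s > threshold else "0" for s in samples)
    return int(bits, 2)

# Four frames of brightness for one emitter encode the bits 1011.
emitter_id = decode_emitter_id([0.9, 0.1, 0.8, 0.9])
```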

Positioning method, apparatus and system, layout method of positioning system, and storage medium
11372075 · 2022-06-28

Provided are a positioning method, apparatus and system, a layout method for a positioning system, and a storage medium. A plurality of labels is laid with a preset positioning precision in an area to be positioned, where each label among the plurality of labels carries label information corresponding to position information of the position of that label. A position acquisition device reads first label information on a label at a distance not greater than the preset positioning precision from the position acquisition device, and the position acquisition device determines first position information of the position acquisition device according to the first label information.
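The core of the scheme is a lookup: each label encodes (or maps to) the position where it was laid, and the layout spacing bounds the positioning error. A minimal sketch with an assumed label-naming scheme:

```python
def read_position(label_info, label_map):
    """Resolve read label information to the position it encodes.
    Because labels are laid no farther apart than the preset positioning
    precision, the nearest readable label bounds the position error.
    The label ids and mapping are illustrative assumptions."""
    return label_map[label_info]

# Labels laid on a 0.5 m grid: the preset positioning precision.
labels = {"L-0001": (0.0, 0.0), "L-0002": (0.5, 0.0)}
pos = read_position("L-0002", labels)
```

The layout method's job is then to guarantee that, from any point in the area, some label lies within the preset precision of the reader.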

Firearm simulation and training system and method
11371794 · 2022-06-28

Disclosed embodiments provide systems and methods for simulation of firearm discharge and training of armed forces and/or law enforcement personnel. A motion tracking system tracks motion of one or more users. In embodiments, the users wear one or more sensors on their bodies to allow tracking by the motion tracking system. A scenario management system utilizes position, orientation, and motion information provided by the motion tracking system to evaluate user performance during a scenario. A weapon simulator includes sensors that indicate position of the weapon and/or orientation of the weapon. The weapon simulator may further provide trigger activation indications to the scenario management system. In embodiments, the scenario management system generates, plays, reviews, and/or evaluates simulations. The evaluation can include scoring based on reaction times, posture, body position, body orientation, and/or other attributes.
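The evaluation the abstract mentions, scoring based on reaction times among other attributes, could take a form like the following. The linear penalty and its constants are illustrative assumptions, not the patent's scoring rule:

```python
def reaction_score(stimulus_t, response_t, max_score=100.0, penalty_per_s=40.0):
    """Score a trainee's reaction to a scenario event: full marks for an
    instant response, linearly penalized with delay, floored at zero.
    All constants are illustrative."""
    delay = max(0.0, response_t - stimulus_t)
    return max(0.0, max_score - penalty_per_s * delay)

# Threat presented at t=10.0 s; trigger activation logged at t=10.5 s.
s = reaction_score(10.0, 10.5)
```

A full evaluation would combine several such terms (posture, body position, body orientation) into one weighted score per scenario.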

Imaging sensor-based position detection
11372455 · 2022-06-28

An example method for determining a mobile device position and orientation is provided. The method may include capturing a two-dimensional image of a surface including at least four markers, each marker having known coordinates in a coordinate frame, and determining a unit direction vector for each of the at least four markers based on its pixel location in the two-dimensional image. The method may further include determining apex angles between each pair of the unit direction vectors, and determining marker distances from the imaging sensor to each of the at least four markers via a first iterative process based on the apex angles. Additionally, the method may include determining the mobile device position with respect to the coordinate frame via a second iterative process based on the marker distances, the apex angles, and the coordinates of each of the at least four markers.
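Taking the first stage's output (ranges to each marker) as given, the position step can be sketched with standard trilateration: differencing the range equations against one reference marker gives a linear system in the camera position. This closed-form step stands in for the patent's second iterative process:

```python
import numpy as np

def position_from_ranges(markers, ranges):
    """Recover the device position from known marker coordinates and
    the sensor-to-marker distances found by the first iterative process.
    Uses linear trilateration: |C - Mi|^2 = ri^2 differenced against
    marker 0 eliminates the quadratic term (standard technique, an
    illustrative substitute for the patent's second iteration)."""
    m = np.asarray(markers, float)
    r = np.asarray(ranges, float)
    A = 2.0 * (m[1:] - m[0])
    b = (np.sum(m[1:] ** 2, axis=1) - np.sum(m[0] ** 2)
         - r[1:] ** 2 + r[0] ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Four non-coplanar markers with known coordinates; simulate true ranges.
markers = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
true_pos = np.array([0.2, 0.3, 1.5])
ranges = [np.linalg.norm(np.array(mk) - true_pos) for mk in markers]
est = position_from_ranges(markers, ranges)
```

At least four non-coplanar markers are needed so the differenced system has full rank in 3D, which is consistent with the abstract's "at least four markers" requirement.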