G01S5/163

System and method for human interaction with virtual objects
11640198 · 2023-05-02

A system for human interaction with virtual objects comprises: a touch sensitive surface, configured to detect a position of a contact made on the touch sensitive surface; a reference layer rigidly attached to the touch sensitive surface and comprising one or more patterns; a display device, configured to display a virtual object that is registered in a reference coordinate system fixed with respect to the touch sensitive surface; one or more image sensors rigidly attached to the display device, configured to capture an image of at least a portion of the one or more patterns; and at least one processor, configured to determine a position and an orientation of the display device with respect to the touch sensitive surface based on the captured image, and identify an interaction with the virtual object based on the detected position of the contact made on the touch sensitive surface.
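The pose determination above rests on imaging known planar patterns on the reference layer. One standard ingredient for such planar registration is a homography estimated from pattern-point correspondences via the direct linear transform; the sketch below is illustrative (function names, the four-plus-point setup, and the pure-numpy DLT are assumptions, not taken from the patent):

```python
import numpy as np

def estimate_homography(src, dst):
    """DLT estimate of the 3x3 homography H mapping src (x, y) to dst (u, v); N >= 4."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)          # null vector of the stacked constraints
    return H / H[2, 2]

def apply_homography(H, pt):
    """Map a 2-D point through H with homogeneous normalization."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]
```

Given such a mapping between pattern coordinates and image pixels, the display pose relative to the surface can be recovered by standard homography decomposition.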

Surgical field camera system that only uses images from cameras with an unobstructed sight line for tracking
20230017128 · 2023-01-19

A system and method for tracking an object within a surgical field are described. A system may include a mesh of cameras distributed around the surgical field, the mesh of cameras including, for example, at least three cameras. Cameras in the mesh of cameras may be in known positions or orientations relative to the surgical field or may be mobile, with position or orientation to be determined by the system. The system may include a computing system communicatively coupled to the mesh of cameras, the computing system including a processor and a memory device. The computing system may be used to generate tracking data for a tracked object from images or image data taken by one or more cameras of the mesh of cameras.
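The filter named in the title, using only cameras with an unobstructed sight line, can be sketched as a geometric visibility test. The version below models obstructions as bounding spheres and checks the camera-to-target segment against each one; the sphere model and all names are simplifying assumptions for illustration:

```python
import numpy as np

def sight_line_clear(cam, target, obstacles):
    """True if the camera-to-target segment misses every spherical obstacle."""
    cam, target = np.asarray(cam, float), np.asarray(target, float)
    d = target - cam
    length = np.linalg.norm(d)
    u = d / length
    for center, radius in obstacles:
        # closest point on the segment to the obstacle center
        t = np.clip(np.dot(np.asarray(center, float) - cam, u), 0.0, length)
        if np.linalg.norm(cam + t * u - center) < radius:
            return False
    return True

def usable_cameras(cams, target, obstacles):
    """Indices of cameras whose sight line to the target is unobstructed."""
    return [i for i, c in enumerate(cams) if sight_line_clear(c, target, obstacles)]
```

Only images from the returned cameras would then feed the tracking pipeline.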

Cellular-based navigation method
11812342 · 2023-11-07

A method for creating a correction function for improving the accuracy of a GPS device collects multiple time samples at multiple known locations, wherein each time sample consists of GPS coordinates and associated satellite data from multiple satellites. The satellite data includes or permits determination of (i) satellite azimuth and elevation of an associated satellite, (ii) Signal-to-Noise Ratio of a received signal from the associated satellite, and optionally (iii) pseudo-range. For each time sample, a respective error between the known location and the corresponding GPS coordinates is computed, and an error correction function is created as a function of the respective GPS coordinates and the satellite data by applying deep learning/machine learning techniques to the multiple time samples.
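The abstract leaves the learned correction function open ("deep learning/machine learning techniques"). As an illustrative stand-in, the same pipeline can be sketched with a linear least-squares model: fit per-sample satellite-derived features (e.g. azimuth, elevation, SNR summarized as a feature vector) against the measured position errors, then subtract the predicted error from a new fix. All names and the linear model are assumptions:

```python
import numpy as np

def fit_correction(features, errors):
    """Least-squares fit of position error as a linear function of features.

    features: (N, F) satellite-derived features per time sample
    errors:   (N, 2) east/north error between known location and GPS fix
    """
    X = np.hstack([features, np.ones((len(features), 1))])  # bias column
    W, *_ = np.linalg.lstsq(X, errors, rcond=None)
    return W

def correct_fix(W, feature_row, gps_fix):
    """Subtract the predicted error from a raw GPS fix."""
    x = np.append(feature_row, 1.0)
    return np.asarray(gps_fix, float) - x @ W
```

A deep network would replace the linear map, but the train-then-subtract structure is the same.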

Image-based approach for device localization based on a vehicle location
11570576 · 2023-01-31

Disclosed is an image-based approach for device localization. In particular, a mobile device may capture image(s) of a vehicle that is substantially proximate to the mobile device. Based on the image(s), the mobile device may (i) determine parameter(s) associated with the vehicle, and (ii) determine or obtain a location of the vehicle in accordance with the determined parameter(s). Additionally, the mobile device may use the image(s) as basis for determining a relative location indicating where the mobile device is located relative to the vehicle. Based on the location of the vehicle and on the relative location of the mobile device, the mobile device may then determine a location of the mobile device.
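Once the vehicle's location and the device's relative location are known, the final step is a frame composition. A minimal planar sketch (the 2-D pose parameterization and all names are assumptions for illustration; the patent does not specify this form):

```python
import math

def device_location(vehicle_xy, vehicle_heading_rad, rel_offset):
    """Compose the vehicle's global pose with the device's offset in the
    vehicle frame (forward, left) to get the device's global position."""
    c, s = math.cos(vehicle_heading_rad), math.sin(vehicle_heading_rad)
    dx = c * rel_offset[0] - s * rel_offset[1]
    dy = s * rel_offset[0] + c * rel_offset[1]
    return (vehicle_xy[0] + dx, vehicle_xy[1] + dy)
```

For example, a device two meters ahead of a north-facing vehicle lands two meters north of the vehicle's position.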

Mobile device locationing

A mobile computing device includes: a tracking sensor; a proximity sensor; and a controller coupled to the tracking sensor and the proximity sensor, the controller configured to: obtain a sequence of sensor datasets, each sensor dataset including: (i) a location of the mobile computing device, in a local coordinate system, generated using the tracking sensor, (ii) a proximity indicator generated using the proximity sensor, defining a range to a fixed reference device, and (iii) a predefined location of the reference device in a facility coordinate system; determine, from the sequence, an adjusted pose of an origin of the local coordinate system in the facility coordinate system; generate, using a current location of the mobile computing device in the local coordinate system and the adjusted pose, a corrected location of the mobile computing device in the facility coordinate system; and execute a control action based on the corrected location.
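Determining the adjusted pose of the local-coordinate origin in the facility frame is an alignment problem. One common solution, sketched here under the simplifying assumption that each proximity event yields a local/facility point correspondence (e.g. the device passing at a known minimal range from a reference device), is a 2-D Kabsch rigid fit; the names are illustrative:

```python
import numpy as np

def fit_rigid_2d(local_pts, facility_pts):
    """Kabsch fit: rotation R and translation t mapping local -> facility."""
    lc, fc = local_pts.mean(axis=0), facility_pts.mean(axis=0)
    H = (local_pts - lc).T @ (facility_pts - fc)    # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                        # enforce a proper rotation
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = fc - R @ lc
    return R, t

def to_facility(R, t, local_xy):
    """Correct a local-frame location into the facility frame."""
    return R @ np.asarray(local_xy, float) + t
```

The fitted (R, t) then converts every subsequent tracking-sensor location into facility coordinates.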

Firearm simulation and training system and method
20220299288 · 2022-09-22

Disclosed embodiments provide systems and methods for simulation of firearm discharge and training of armed forces and/or law enforcement personnel. A motion tracking system tracks motion of one or more users. In embodiments, the users wear one or more sensors on their bodies to allow tracking by the motion tracking system. A scenario management system utilizes position, orientation, and motion information provided by the motion tracking system to evaluate user performance during a scenario. A weapon simulator includes sensors that indicate position of the weapon and/or orientation of the weapon. The weapon simulator may further provide trigger activation indications to the scenario management system. In embodiments, the scenario management system generates, plays, reviews, and/or evaluates simulations. The evaluation can include scoring based on reaction times, posture, body position, body orientation, and/or other attributes.
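As one concrete example of the scoring the abstract mentions, a reaction-time component could be a clamped linear score. The threshold and scale below are invented purely for illustration and do not come from the disclosure:

```python
def reaction_score(reaction_s, max_s=1.5):
    """Clamped linear score: 100 at zero reaction time, 0 at or past max_s
    seconds (both numbers are assumed example thresholds)."""
    return max(0.0, 1.0 - reaction_s / max_s) * 100.0
```

In a real evaluation this would be one weighted term among posture, body position, and orientation scores.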

Measurement system, work machine, and measurement method

A measurement system includes an imaging unit mounted to a swing body of a work machine to image a shape around the work machine, a position detection unit which determines a position of the swing body, an imaging unit position calculation unit which calculates a position of the imaging unit when the imaging unit performs imaging while the swing body swings, and a three-dimensional position calculation unit which determines a three-dimensional position around the work machine during the imaging, on the basis of a position of the imaging unit calculated by the imaging unit position calculation unit.
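The imaging-unit position calculation reduces to rigid-body geometry: estimate the swing angle at the image timestamp (here by linear interpolation between encoder samples), then rotate the camera's mounting offset by that angle and add the swing-body position. A planar sketch with assumed names:

```python
import math

def swing_angle_at(t, t0, a0, t1, a1):
    """Linearly interpolate the swing angle between two encoder samples."""
    return a0 + (a1 - a0) * (t - t0) / (t1 - t0)

def camera_position(body_xy, swing_angle, mount_offset):
    """Rotate the camera's mounting offset (in the swing-body frame) by the
    swing angle and add the swing-body position."""
    c, s = math.cos(swing_angle), math.sin(swing_angle)
    return (body_xy[0] + c * mount_offset[0] - s * mount_offset[1],
            body_xy[1] + s * mount_offset[0] + c * mount_offset[1])
```

The per-image camera positions then anchor the three-dimensional reconstruction of the surroundings.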

Satellite module for attitude determination

A satellite module for attitude determination includes a containment body comprising at least one data acquisition board and a connection interface, at least one first-type sensor selected from a sun sensor, an earth sensor, a stellar sensor, a horizon sensor, in communication with the data acquisition board and at least one second-type sensor, different from the first type, selected from a sun sensor, an earth sensor, a stellar sensor, a horizon sensor, and in communication with the data acquisition board. The connection interface may be mounted on a first face of the containment body, the first-type sensor may be mounted on a second face of the containment body, and the second-type sensor may be mounted on a third face of the containment body.
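With two non-parallel direction measurements from sensors of different types (e.g. a sun vector and an earth vector), attitude can be computed deterministically. The classic TRIAD algorithm is one such method; it is shown here as an illustrative sketch, since the patent does not name a specific algorithm:

```python
import numpy as np

def triad(b1, b2, r1, r2):
    """TRIAD attitude solution.

    b1, b2: unit direction vectors measured in the body frame
    r1, r2: the same directions expressed in the reference frame
    Returns the rotation matrix mapping reference-frame vectors to body frame.
    """
    def frame(v1, v2):
        t1 = np.asarray(v1, float)
        t1 = t1 / np.linalg.norm(t1)
        t2 = np.cross(t1, v2)
        t2 = t2 / np.linalg.norm(t2)
        t3 = np.cross(t1, t2)
        return np.column_stack([t1, t2, t3])   # orthonormal triad
    return frame(b1, b2) @ frame(r1, r2).T
```

TRIAD trusts the first vector exactly and uses the second only to fix the remaining rotational freedom, which is why mixing sensor types of different accuracy is natural here.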

Real time position and orientation tracker

The present disclosure relates to a tracking system for tracking the position and/or orientation of an object in an environment, the tracking system including: at least one camera mounted to the object; a plurality of spaced apart targets, at least some of said targets viewable by the at least one camera; and one or more electronic processing devices configured to: determine target position data indicative of the relative spatial position of the targets; receive image data indicative of an image from the at least one camera, said image including at least some of the targets; process the image data to: identify one or more targets in the image; determine pixel array coordinates corresponding to a position of the one or more targets in the image; and use the processed image data to determine the position and/or orientation of the object by triangulation.
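The triangulation step, recovering the camera (and hence object) position from sightings of targets at known positions, can be sketched as a least-squares intersection of the rays back-projected from the targets toward the camera. The 2-D simplification and all names below are assumptions for illustration:

```python
import numpy as np

def intersect_lines(points, dirs):
    """Least-squares point closest to a set of 2-D lines, each given by a
    point on the line and a direction vector."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, u in zip(points, dirs):
        u = np.asarray(u, float)
        u = u / np.linalg.norm(u)
        P = np.eye(2) - np.outer(u, u)   # projector onto the line's normal
        A += P
        b += P @ np.asarray(p, float)
    return np.linalg.solve(A, b)
```

In the full system the directions come from the pixel-array coordinates of identified targets via the camera model, and the solve runs in 3-D with more than two targets for redundancy.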

Surgical field camera system that only uses images from cameras with an unobstructed sight line for tracking
11432877 · 2022-09-06

A system and method for tracking an object within a surgical field are described. A system may include a mesh of cameras distributed around the surgical field, the mesh of cameras including, for example, at least three cameras. Cameras in the mesh of cameras may be in known positions or orientations relative to the surgical field or may be mobile, with position or orientation to be determined by the system. The system may include a computing system communicatively coupled to the mesh of cameras, the computing system including a processor and a memory device. The computing system may be used to generate tracking data for a tracked object from images or image data taken by one or more cameras of the mesh of cameras.