G01S5/163

ESTIMATING POSE IN 3D SPACE
20220101004 · 2022-03-31

Methods and devices for estimating the position of a device within a 3D environment are described. Embodiments of the methods include sequentially receiving multiple image segments that form an image representing a field of view (FOV) comprising a portion of the environment. The image includes multiple sparse points, each identifiable based in part on a corresponding subset of the image segments. The method also includes sequentially identifying one or more of the sparse points as each corresponding subset of image segments is received, and estimating a position of the device in the environment based on the identified sparse points.
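A minimal sketch of the idea described above, not the patented method: image segments (e.g. pixel rows) stream in one at a time, each sparse point becomes identifiable once all of its segments have arrived, and a pose is estimated as soon as enough points are available. The segment bookkeeping, the landmark map, the `detect_fn` callback, and the use of `cv2.solvePnP` as the pose solver are all assumptions for illustration.

```python
import numpy as np
import cv2


class IncrementalPoseEstimator:
    def __init__(self, landmarks_3d, keypoint_segments, camera_matrix, min_points=6):
        # landmarks_3d: {point_id: (X, Y, Z)} known 3D positions of the sparse points
        # keypoint_segments: {point_id: set of segment indices needed to detect it}
        self.landmarks_3d = landmarks_3d
        self.pending = {pid: set(segs) for pid, segs in keypoint_segments.items()}
        self.camera_matrix = camera_matrix
        self.min_points = min_points
        self.observations = {}          # point_id -> (u, v) pixel coordinates

    def feed_segment(self, segment_index, detect_fn):
        """Mark a segment as received; detect_fn(point_id) returns (u, v) or None."""
        for pid, needed in list(self.pending.items()):
            needed.discard(segment_index)
            if not needed:              # all segments for this sparse point have arrived
                uv = detect_fn(pid)
                if uv is not None:
                    self.observations[pid] = uv
                del self.pending[pid]
        return self.try_estimate()

    def try_estimate(self):
        """Estimate device pose once enough sparse points have been identified."""
        if len(self.observations) < self.min_points:
            return None
        ids = list(self.observations)
        obj = np.float32([self.landmarks_3d[i] for i in ids])
        img = np.float32([self.observations[i] for i in ids])
        ok, rvec, tvec = cv2.solvePnP(obj, img, self.camera_matrix, None)
        return (rvec, tvec) if ok else None
```

In use, `feed_segment` would be called for every segment as it arrives, so a pose can be returned before the full image is available, which is the point of the sequential formulation.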

LIGHT DIRECTION DETECTOR SYSTEMS AND METHODS
20220107415 · 2022-04-07

The intensity of light from a light array comprising a plurality of light sources configured to illuminate in sequence may be detected at two optically isolated points of a motion tracker device. The optically isolated points may be disposed at a distance from one another such that the variation in light intensity due to shadowing effects from the plurality of light sources differs between the two points. The optically isolated points may be separated by a T-shaped wall. The motion tracker device may generate a current signal representing the photodiode differential between the two optically isolated points and proportional to the intensity of the light. The current signal may be used for sensor fusion with an inertial measurement unit.
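An illustrative sketch only: the geometry, the mapping from the differential pattern to an angle, and the complementary-filter fusion below are assumptions, not the patented circuit. It shows the basic idea of a differential signal taken between two shadow-separated detection points, sampled once per source as the array fires in sequence, then fused with an IMU rate measurement.

```python
import numpy as np


def differential_signal(intensity_a, intensity_b, responsivity=0.5):
    """Current proportional to the photodiode difference between the two points."""
    return responsivity * (np.asarray(intensity_a) - np.asarray(intensity_b))


def fuse_with_imu(angle_from_light, gyro_rate, prev_angle, dt, alpha=0.98):
    """Complementary filter: trust the gyro short-term, the light cue long-term."""
    gyro_angle = prev_angle + gyro_rate * dt
    return alpha * gyro_angle + (1.0 - alpha) * angle_from_light


# One sweep of the light array: per-source intensities at the two isolated points.
i_a = [0.80, 0.62, 0.31]          # point A, sources fired in sequence
i_b = [0.55, 0.60, 0.70]          # point B, shadowed differently by the T-shaped wall
diff = differential_signal(i_a, i_b)

# Toy mapping from the differential pattern to an angle estimate (assumption).
angle_from_light = float(np.arctan2(diff[0] - diff[-1], np.sum(np.abs(diff)) + 1e-9))
angle = fuse_with_imu(angle_from_light, gyro_rate=0.02, prev_angle=0.1, dt=0.01)
print(f"differential currents: {diff}, fused angle: {angle:.4f} rad")
```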

POSITION AND ORIENTATION CALCULATION METHOD, NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM AND INFORMATION PROCESSING APPARATUS
20220067964 · 2022-03-03

A position and orientation calculation method includes comparing a first image feature of first image information included in a plurality of environment maps with a second image feature of second image information acquired from a moving object or an imaging apparatus of the moving object, and specifying a calculation environment map to be used for calculating a position and orientation of the moving object or the imaging apparatus of the moving object, among the plurality of environment maps, based on a result of the comparison.
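A minimal sketch, under assumed data structures (one descriptor array per environment map), of the map-selection step: the environment map whose image features best match the moving object's current image is chosen as the calculation map, and pose would then be solved against that map. The matching metric and thresholds are placeholders, not the patented comparison.

```python
import numpy as np


def match_count(query_desc, map_desc, max_dist=0.4):
    """Count query descriptors whose nearest map descriptor is within max_dist."""
    count = 0
    for q in query_desc:
        d = np.linalg.norm(map_desc - q, axis=1)
        if d.min() < max_dist:
            count += 1
    return count


def select_calculation_map(query_desc, environment_maps):
    """environment_maps: {map_id: descriptor array}; returns the best map id and scores."""
    scores = {mid: match_count(query_desc, desc) for mid, desc in environment_maps.items()}
    return max(scores, key=scores.get), scores


rng = np.random.default_rng(0)
maps = {"floor1": rng.normal(size=(200, 32)), "floor2": rng.normal(size=(200, 32))}
query = maps["floor2"][:40] + rng.normal(scale=0.05, size=(40, 32))   # noisy re-observation
best, scores = select_calculation_map(query, maps)
print("calculation map:", best, scores)   # pose is then calculated against this map
```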

OPTICS-BASED CORRECTION OF THE POSITION OF A MOBILE UNIT
20230393233 · 2023-12-07

A method locates a mobile unit in an industrial manufacturing environment. The method includes: A) roughly locating the mobile unit by way of rough position data from a manufacturing execution system (MES); B) moving a sensor unit comprising a sensor to the mobile unit on the basis of the rough position data; C) receiving, by way of the sensor, a coded signal emitted by the mobile unit and finely locating the mobile unit, wherein the sensor ascertains the position of a first transmitter of the coded signal and provides this position in the form of signal position data; and D) identifying the mobile unit on the basis of the coded signal and correcting the rough position data on the basis of the signal position data.
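A sketch of the coarse-to-fine flow using invented data structures: a dictionary of MES rough positions, a sensor callback that returns the decoded unit identity and fine position after reading the coded signal, and a correction step that overwrites the rough coordinates. None of these names come from the patent.

```python
from dataclasses import dataclass


@dataclass
class LocatedUnit:
    unit_id: str
    position: tuple          # (x, y) in the shop-floor frame, metres


def locate_mobile_unit(unit_id, mes_positions, move_sensor_to, read_coded_signal):
    rough = mes_positions[unit_id]                      # A) rough position from the MES
    move_sensor_to(rough)                               # B) drive the sensor toward it
    decoded_id, signal_position = read_coded_signal()   # C) fine location of the emitter
    if decoded_id != unit_id:                           # D) identity check via the coded signal
        raise ValueError(f"expected {unit_id}, signal came from {decoded_id}")
    mes_positions[unit_id] = signal_position            # correct the rough MES data
    return LocatedUnit(unit_id, signal_position)


# Toy usage with stub hardware callbacks.
mes = {"AGV-7": (12.0, 3.5)}
unit = locate_mobile_unit(
    "AGV-7", mes,
    move_sensor_to=lambda pos: None,
    read_coded_signal=lambda: ("AGV-7", (12.4, 3.2)),
)
print(unit, mes["AGV-7"])
```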

System and method for human interaction with virtual objects using reference device with fiducial pattern
11144113 · 2021-10-12

A system for human interaction with virtual objects comprises: a touch sensitive surface, configured to detect a position of a contact made on the touch sensitive surface; a reference layer rigidly attached to the touch sensitive surface and comprising one or more patterns; a display device, configured to display a virtual object that is registered in a reference coordinate system fixed with respect to the touch sensitive surface; one or more image sensors rigidly attached to the display device, configured to capture an image of at least a portion of the one or more patterns; and at least one processor, configured to determine a position and an orientation of the display device with respect to the touch sensitive surface based on the captured image, and identify an interaction with the virtual object based on the detected position of the contact made on the touch sensitive surface.
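A hedged sketch of the homogeneous-transform bookkeeping only: the camera pose recovered from the fiducial image and the rigid pattern-to-surface offset are assumed inputs (in practice they would come from a marker-pose estimator and a calibration), and the hit test is a toy stand-in for the interaction logic.

```python
import numpy as np


def make_T(R, t):
    """Build a 4x4 homogeneous transform from a rotation matrix and translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T


# Assumed: pose of the display-mounted camera in the pattern (reference-layer) frame.
T_pattern_camera = make_T(np.eye(3), np.array([0.10, 0.05, 0.40]))
# Assumed: rigid offset between the pattern and the touch-sensitive surface.
T_surface_pattern = make_T(np.eye(3), np.array([0.00, 0.00, 0.002]))

# Display pose with respect to the touch surface (the reference coordinate system).
T_surface_camera = T_surface_pattern @ T_pattern_camera

# A contact on the touch surface, expressed in the surface frame (z = 0 plane),
# and a virtual object registered in that same surface-fixed frame.
contact_surface = np.array([0.12, 0.08, 0.0, 1.0])
object_center = np.array([0.11, 0.09, 0.0, 1.0])

hit = np.linalg.norm(contact_surface[:2] - object_center[:2]) < 0.03
print("display pose (surface frame):\n", T_surface_camera)
print("contact interacts with virtual object:", hit)
```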

Apparatus, Systems and Methodologies Configured to Enable Electrical Output Management of Solar Energy Infrastructure, Including Management via Remotely Operated Coating Application System and/or Wireless Monitoring Systems
20210309361 · 2021-10-07

Apparatus, systems and methods for dispensing a coating material onto a solar power infrastructure unit (140), such as a solar panel array (140), in order to reduce its electrical output to negligible and/or safe levels. An Unmanned Aerial Vehicle (UAV) (120, 130) or drone with a suitable dispensing apparatus (124) may be adapted in order to apply the coating material to the solar power unit (140). The UAV (120, 130) may be remotely controlled and/or operate autonomously.

Navigation Using Self-Describing Fiducials

In one embodiment, a navigation system may include at least one self-describing fiducial with a communication element that communicates navigation state estimation aiding information, comprising a geographic position of the self-describing fiducial with respect to one or more coordinate systems, and a first navigating object configured to receive navigation information from an exterior system, receive the navigation state estimation aiding information from the self-describing fiducial, and compare the navigation information received from the exterior system to the navigation state estimation aiding information to improve navigation parameters of the first navigating object.
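A minimal sketch, assuming the fiducial broadcasts its surveyed geographic position and the navigating object also measures the fiducial's offset relative to itself; the ordinary covariance-weighted blend below stands in for whatever estimator the patent's navigating object actually uses.

```python
import numpy as np


def blend(estimate, est_var, aiding_fix, aid_var):
    """Covariance-weighted update of the exterior-system estimate with the aiding fix."""
    k = est_var / (est_var + aid_var)          # weight toward the lower-variance source
    return estimate + k * (aiding_fix - estimate), (1.0 - k) * est_var


# Exterior navigation solution (e.g. a drifting dead-reckoning fix), in metres.
exterior_fix = np.array([105.2, 48.9])
exterior_var = 9.0

# The self-describing fiducial broadcasts its own geographic position; combined
# with the measured offset of the fiducial relative to the navigator, this gives
# an absolute fix for the navigator's own position (values are illustrative).
fiducial_geo = np.array([110.0, 52.0])
measured_offset = np.array([6.3, 2.8])         # fiducial position relative to navigator
fix_from_fiducial = fiducial_geo - measured_offset
fiducial_var = 0.25

position, variance = blend(exterior_fix, exterior_var, fix_from_fiducial, fiducial_var)
print(position, variance)   # navigation parameters improved by the fiducial aiding
```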

Redundant reciprocal surgical tracking system with three optical trackers
11103313 · 2021-08-31

The present invention relates to a redundant reciprocal tracking system composed of at least two trackers 10. A first tracker is able to sense partial or full pose data (orientation and position) of a second tracker in a first reference frame, and the second tracker is able to sense partial or full pose data of the first tracker in a second reference frame. The pose data of the first and second trackers are further transferred to a central processor 30, which computes the transformation between the first and second reference frames. The data generated by the trackers are designed such that they define an over-determined mathematical system (e.g. more than 6 degrees of freedom in a 3D setup). The over-determined information can be used to qualify and/or improve the transformation between the reference frames. In an embodiment of the invention, the tracking system is optical and the over-determined information defines an error metric used to check the validity of the transformation. Such a setup could be used in a surgical navigation system in order to reduce the risk of injury or death of the patient.
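A sketch of the reciprocity check only, reduced to positions rather than full pose: when tracker A's measurement of B and tracker B's measurement of A are both available, a candidate frame transform can be validated by how well it explains both measurements. The error metric, threshold and numbers are illustrative assumptions.

```python
import numpy as np


def reciprocity_error(R_ab, t_ab, p_b_in_a, p_a_in_b):
    """Error metric for the candidate transform x_a = R_ab @ x_b + t_ab."""
    # B's origin expressed in frame A should match A's direct measurement of B.
    e1 = np.linalg.norm(R_ab @ np.zeros(3) + t_ab - p_b_in_a)
    # A's origin, as measured by B and mapped into frame A, should land at A's origin.
    e2 = np.linalg.norm(R_ab @ p_a_in_b + t_ab)
    return e1 + e2


R_ab = np.eye(3)                                   # candidate rotation A <- B
t_ab = np.array([1.0, 0.0, 0.5])                   # candidate translation A <- B
p_b_in_a = np.array([1.0, 0.0, 0.5])               # tracker A measures B here
p_a_in_b = np.array([-1.0, 0.0, -0.5])             # tracker B measures A here

err = reciprocity_error(R_ab, t_ab, p_b_in_a, p_a_in_b)
print("reciprocity error:", err, "-> transform", "valid" if err < 1e-3 else "suspect")
```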

METHOD FOR LOCALIZING ROBOT, ROBOT, AND STORAGE MEDIUM
20210247775 · 2021-08-12

Provided is a method for localizing a robot. The robot may move from its current position to a new position during the localization process, so that more environment information is acquired during the movement; the acquired environment information is then compared with an environment map stored in the robot, which facilitates successfully localizing the pose of the robot in the stored map. In addition, because the environment information observed at different positions generally differs, similar regional environments can be distinguished during the movement and localization. This overcomes the problem that an accurate pose cannot be obtained when the robot remains at its original position for localization and a plurality of similar regional environments exist.
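A toy illustration under assumed 1-D range scans, not the patented matching method: two map regions look identical from the start pose, so matching is ambiguous; after the robot moves and a new observation is appended, only one region still matches.

```python
import numpy as np

# Stored environment map: reference range profiles for two similar-looking regions.
map_regions = {
    "corridor_A": np.array([2.0, 2.0, 2.0, 5.0, 1.0]),
    "corridor_B": np.array([2.0, 2.0, 2.0, 1.0, 5.0]),
}


def best_match(observed, regions):
    """Return the region whose stored profile is closest to the observed ranges."""
    scores = {name: float(np.linalg.norm(ref[:len(observed)] - observed))
              for name, ref in regions.items()}
    return min(scores, key=scores.get), scores


scan_at_start = np.array([2.0, 2.0, 2.0])          # identical in both regions -> ambiguous
print(best_match(scan_at_start, map_regions))

scan_after_move = np.array([2.0, 2.0, 2.0, 5.0])   # new observation breaks the tie
print(best_match(scan_after_move, map_regions))
```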

Augmented optical imaging system for use in medical procedures

An optical imaging system for imaging a target during a medical procedure is disclosed. The optical imaging system includes: a first camera for capturing a first image of the target; a second, wide-field camera for capturing a second image of the target; at least one path-folding mirror disposed in an optical path between the target and a lens of the second camera; and a processing unit for receiving the first image and the second image, the processing unit being configured to: apply an image transform to one of the first image and the second image; and combine the transformed image with the other one of the images to produce a stereoscopic image of the target.
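A hedged sketch of the transform-then-combine step: a projective transform aligns the wide-field image to the first camera's view, and the two are placed side by side as a stereoscopic pair. The homography values are placeholders; the real system would derive the transform from its calibration, and its combine step may differ.

```python
import numpy as np
import cv2


def make_stereo_pair(first_image, wide_image, H):
    """Warp the wide-field image into the first camera's view and join the pair."""
    h, w = first_image.shape[:2]
    aligned = cv2.warpPerspective(wide_image, H, (w, h))    # transform the wide-field image
    return np.hstack([first_image, aligned])                # left/right stereoscopic pair


first = np.full((240, 320, 3), 80, np.uint8)     # stand-in for the first camera frame
wide = np.full((240, 320, 3), 160, np.uint8)     # stand-in for the wide-field frame
H = np.array([[1.05, 0.0, -8.0],                 # placeholder homography (assumption)
              [0.0, 1.05, -6.0],
              [0.0, 0.0, 1.0]], dtype=np.float64)

stereo = make_stereo_pair(first, wide, H)
print(stereo.shape)   # (240, 640, 3)
```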