G01S5/163

STATE ESTIMATION AND LOCALIZATION FOR ROV-BASED STRUCTURAL INSPECTION
20200068125 · 2020-02-27

A vision-based state estimation framework for an underwater remotely operated vehicle (ROV) used to inspect an underwater structure, for example a nuclear reactor pressure vessel. The framework employs an external overhead pan-tilt-zoom (PTZ) camera as the primary sensing modality and incorporates prior knowledge of the structure's geometry.
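
A minimal sketch of one way such a framework could localize the ROV: cast a ray from a calibrated overhead PTZ camera through the pixel where the ROV is detected, and intersect it with a horizontal plane at the ROV's known depth (e.g., from a pressure sensor). The pinhole small-angle approximation, the depth-plane assumption, and all names and values below are illustrative, not the patent's method.

```python
import numpy as np

def rov_position(cam_pos, pan, tilt_down, u, v, fx, fy, cx, cy, rov_z):
    """Intersect the PTZ viewing ray through pixel (u, v) with z = rov_z."""
    # Fold the pixel offset into the pointing angles (small-angle approx.).
    az = pan + (u - cx) / fx
    el = tilt_down + (v - cy) / fy
    ray = np.array([np.cos(el) * np.cos(az),
                    np.cos(el) * np.sin(az),
                    -np.sin(el)])              # unit ray; z is down for el > 0
    t = (rov_z - cam_pos[2]) / ray[2]          # ray parameter at the depth plane
    return cam_pos + t * ray

cam = np.array([0.0, 0.0, 10.0])               # camera 10 m above the vessel floor
print(rov_position(cam, pan=0.3, tilt_down=1.2, u=700, v=520,
                   fx=1400, fy=1400, cx=640, cy=512, rov_z=2.0))
```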

Optical Tracking System
20200057147 · 2020-02-20

An electronic device includes an electromagnetic radiation source having an axis, a set of optics disposed about the axis, a reflector disposed about the axis non-symmetrically, and a controller configured to operate the electromagnetic radiation source while controlling a beam steering orientation (e.g., rotation) of the reflector. The reflector is disposed to reflect electromagnetic radiation emitted by the electromagnetic radiation source. The set of optics is disposed to shape electromagnetic radiation emitted by the electromagnetic radiation source and direct electromagnetic radiation received from the reflector into a panoramic field of view about the axis.

Methods and apparatus for light-based positioning and navigation

Systems, methods, mobile computing devices and computer-readable media are described herein relating to light-based positioning. In various embodiments, light sources (106) may be commissioned to selectively energize one or more LEDs (520) to emit light carrying a coded light signal. The coded light signal may convey information about a location of a lighting effect (102) projected by the one or more LEDs onto a surface (104). In various embodiments, mobile computing devices (100) such as smart phones or tablets may detect these coded light signals from the lighting effects and/or from the light sources, extract the location information, and utilize it to determine their locations within an environment.
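
A minimal sketch of the decode-and-look-up step, under an assumed encoding: each luminaire on/off-keys a short binary ID at a rate the camera can sample, and a commissioning table maps IDs to the locations of their lighting effects. The table contents, bit length, and threshold are illustrative only.

```python
COMMISSIONING_DB = {0b1011: (12.5, 4.0), 0b0110: (3.0, 9.5)}  # ID -> (x, y) in m

def decode_id(intensity_samples, threshold, bits=4):
    """Threshold per-frame brightness of one lighting effect into an ID."""
    ident = 0
    for sample in intensity_samples[:bits]:
        ident = (ident << 1) | (1 if sample > threshold else 0)
    return ident

def locate(intensity_samples, threshold=0.5):
    ident = decode_id(intensity_samples, threshold)
    return COMMISSIONING_DB.get(ident)   # None if the ID was not commissioned

print(locate([0.9, 0.1, 0.8, 0.95]))    # -> (12.5, 4.0) for ID 0b1011
```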

SYSTEM AND METHOD FOR HUMAN INTERACTION WITH VIRTUAL OBJECTS
20200042078 · 2020-02-06

A system for human interaction with virtual objects comprises: a touch sensitive surface, configured to detect a position of a contact made on the touch sensitive surface; a reference layer rigidly attached to the touch sensitive surface and comprising one or more patterns; a display device, configured to display a virtual object that is registered in a reference coordinate fixed with respect to the touch sensitive surface; one or more image sensors rigidly attached to the display device, configured to capture an image of at least a portion of the one or more patterns; and at least one processor, configured to determine a position and an orientation of the display device with respect to the touch sensitive surface based on the captured image, and identify an interaction with the virtual object based on the detected position of the contact made on the touch sensitive surface.
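
A minimal sketch of the pose step, assuming OpenCV and that the corners of one reference-layer pattern have already been detected in the image. The pattern size, camera intrinsics, and point values are placeholders; the patent does not prescribe a particular solver.

```python
import numpy as np
import cv2

# 3D pattern corners in the touch-surface frame (a 40 mm square, z = 0).
object_pts = np.array([[0, 0, 0], [0.04, 0, 0],
                       [0.04, 0.04, 0], [0, 0.04, 0]], dtype=np.float64)
# Corresponding pixel detections from the display-mounted image sensor.
image_pts = np.array([[410, 300], [520, 305], [515, 410], [405, 405]],
                     dtype=np.float64)
K = np.array([[800, 0, 640], [0, 800, 360], [0, 0, 1]], dtype=np.float64)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
if ok:
    R, _ = cv2.Rodrigues(rvec)         # display orientation w.r.t. the surface
    display_pos = (-R.T @ tvec).ravel()  # display position in the surface frame
    print(display_pos)
```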

System and method for augmented reality support using a lighting system's sensor data

Methods and systems for providing enhanced augmented reality features are disclosed, such as an AR support system (100) that uses lighting units (LU1) in a lighting system (100) to improve the performance of augmented reality devices (20). The lighting system (100) may also exploit features of the augmented reality devices (20) to improve its own safety and performance. The lighting units (LU1) include sensors and communication capabilities that detect situations in which an augmented reality device needs assistance from the lighting network. Finally, a method for providing assistance information to the augmented reality device while optimizing energy savings is also described.

METHOD OF ESTIMATING A DIRECTION OF ABSOLUTE ORIENTATION OF AN OPTRONIC SYSTEM
20190383609 · 2019-12-19

A method is provided for estimating the bearing of an optronic system, situated at a first position and denoted the first optronic system, in a geographical reference frame. It comprises the following steps: defining a collaborative configuration between the first optronic system and at least one other optronic system, the optronic systems being situated at separate positions and equipped with means for communicating with one another and with acquisition devices; acquiring, in a scene, one or more objects common to the optronic systems, the direction of orientation between each optronic system and each object being unknown; determining two positions from among those of the optronic systems; for at least one common object, measuring the relative angle with a relative angle measurement device fitted to the first optronic system, measuring the elevation of the object with an elevation measurement device fitted to the first optronic system, and performing additional measurements with each other optronic system, the two positions and the measurements together constituting observations; communicating, from the other optronic system(s) to the first optronic system, the observations the latter does not have; and estimating, at the first optronic system and on the basis of the observations, the bearing of the first optronic system.
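
A deliberately simplified 2D sketch of the underlying idea: if the collaborating system shares an absolute azimuth and a range to the common object, then (with both known positions) the object's location is fixed, and the first system's bearing follows from its own relative-angle measurement. The real method fuses more observation types (elevations, several objects); every name and value below is a made-up illustration.

```python
import math

def estimate_bearing(p1, p2, az2_to_obj, range2_to_obj, rel_angle1_to_obj):
    # Object position from the second system's shared observations
    # (azimuth measured clockwise from north; x east, y north).
    obj = (p2[0] + range2_to_obj * math.sin(az2_to_obj),
           p2[1] + range2_to_obj * math.cos(az2_to_obj))
    # Absolute azimuth from the first system to the object...
    az1_to_obj = math.atan2(obj[0] - p1[0], obj[1] - p1[1])
    # ...minus the first system's relative angle gives its bearing.
    return (az1_to_obj - rel_angle1_to_obj) % (2 * math.pi)

p1, p2 = (0.0, 0.0), (1000.0, 0.0)          # separate known positions (m)
bearing = estimate_bearing(p1, p2,
                           az2_to_obj=math.radians(315),  # shared by system 2
                           range2_to_obj=1414.2,
                           rel_angle1_to_obj=math.radians(10))
print(math.degrees(bearing))                 # ~350 degrees
```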

VISION-ENHANCED POSE ESTIMATION
20190385328 · 2019-12-19

This specification discloses computer-based systems, methods, devices, and other techniques for estimating the pose of a device, including estimating the pose based on images captured by a set of image sensors disposed around the device's periphery. Some implementations include a system that obtains visual data representing at least one image captured by one or more image sensors of a mobile device. The at least one image shows an environment of the mobile device, and the one or more image sensors are located at respective corners of the mobile device, or at other locations around its periphery. The system processes the visual data to determine a pose of the mobile device. Further, the system can determine a location of the mobile device in the environment based on the pose, and can present an indication of that location.
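
A minimal sketch of one plausible fusion step, assuming each peripheral camera has already produced its own pose estimate (position plus yaw) for the device, e.g., from landmarks it sees. Averaging positions and taking a circular mean of yaws is an illustrative choice, not the patent's stated method.

```python
import math

def fuse_poses(poses):
    """poses: list of (x, y, yaw_radians), one per corner camera."""
    n = len(poses)
    x = sum(p[0] for p in poses) / n
    y = sum(p[1] for p in poses) / n
    # Circular mean so yaws near +/-pi average correctly.
    yaw = math.atan2(sum(math.sin(p[2]) for p in poses),
                     sum(math.cos(p[2]) for p in poses))
    return x, y, yaw

corner_estimates = [(1.02, 2.0, 0.10), (0.98, 2.1, 0.12),
                    (1.00, 1.9, 0.09), (1.01, 2.0, 0.11)]
print(fuse_poses(corner_estimates))
```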

SELF-TRACKED CONTROLLER
20240095948 · 2024-03-21

The disclosed system may include a housing dimensioned to secure various components including at least one physical processor and various sensors. The system may also include a camera mounted to the housing, as well as physical memory with computer-executable instructions that, when executed by the physical processor, cause the physical processor to: acquire images of a surrounding environment using the camera mounted to the housing, identify features of the surrounding environment from the acquired images, generate a map using the features identified from the acquired images, access sensor data generated by the sensors, and determine a current pose of the system in the surrounding environment based on the features in the generated map and the accessed sensor data. Various other methods, apparatuses, and computer-readable media are also disclosed.
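
A skeleton of the described loop, using OpenCV's ORB as a stand-in feature extractor. Map storage, matching, and the pose solver are reduced to stubs or simplest-possible forms; the patent does not specify these components, so everything here is an illustrative assumption.

```python
import numpy as np
import cv2

orb = cv2.ORB_create(nfeatures=500)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
map_descriptors = None      # grows as the controller maps its environment

def process_frame(gray_frame, imu_pose_prior):
    """One iteration of the map-and-localize loop."""
    global map_descriptors
    kps, desc = orb.detectAndCompute(gray_frame, None)
    if desc is None:                     # featureless frame: trust the IMU
        return imu_pose_prior, 0
    if map_descriptors is None:          # first frame bootstraps the map
        map_descriptors = desc
        return imu_pose_prior, 0
    matches = bf.match(desc, map_descriptors)
    # A real system would solve PnP against the 3D positions of matched
    # map points and fuse the result with the IMU prior; stubbed here.
    map_descriptors = np.vstack([map_descriptors, desc])   # grow the map
    return imu_pose_prior, len(matches)

frame = np.random.randint(0, 255, (480, 640), dtype=np.uint8)
pose, n_matches = process_frame(frame, imu_pose_prior=np.eye(4))
print(n_matches)
```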

Systems and methods for supplemental navigation using distributed avionics processing

Disclosed are methods, systems, and non-transitory computer-readable media for distributed navigation processing for a vehicle. For instance, the method may include, by the vehicle: obtaining reference data from one or a combination of an imaging system, an antenna system, and/or a radar system of the vehicle; in response to obtaining the reference data, determining whether a GNSS signal is below a threshold; and, in response to determining the GNSS signal is below the threshold, transmitting a navigation supplementation request message including the reference data to an edge node or a cloud node. In response to receiving the navigation supplementation request message from the vehicle, the edge node or the cloud node performs a position resolution process, via one or more functions, to determine and transmit a position of the vehicle. The vehicle then performs a navigation control process based on the determined position.
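
A minimal sketch of the vehicle-side decision logic, with made-up message fields and a placeholder transport; the signal-quality metric, threshold value, and node protocol are illustrative assumptions, not the patent's specification.

```python
import json

GNSS_CN0_THRESHOLD_DBHZ = 30.0   # illustrative carrier-to-noise floor

def navigation_step(gnss_cn0_dbhz, reference_data, send_to_edge, gnss_fix):
    if gnss_cn0_dbhz >= GNSS_CN0_THRESHOLD_DBHZ:
        return gnss_fix                       # GNSS is healthy: use it directly
    request = json.dumps({"type": "nav_supplement_request",
                          "reference_data": reference_data})
    return send_to_edge(request)              # edge/cloud resolves the position

# Example with a stub edge node that "resolves" a fixed position.
stub_edge = lambda msg: {"lat": 40.7128, "lon": -74.0060}
pos = navigation_step(22.5,
                      {"radar": "radar_track_summary",
                       "imaging": "landmark_descriptors"},
                      stub_edge, gnss_fix=None)
print(pos)
```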

Locating system
11922653 · 2024-03-05

An object locating system (100) in which an observation device (104) observes at least three datums (106, 112, 114), each of which has a positioning system that reports its position to the observation device (104). The positioning systems of the datums (106, 112, 114) are calibrated so as to accurately report their relative positions. The observation device (104) has a camera whose field of view (116) contains an object (18) to be located as well as at least two of the datums (106, 112, 114), and a range finder that measures the distance (110) between the observation device (104) and at least one object (18) within the field of view (116) of the camera. A computing device calculates an azimuth (X1, X2) and elevation angle (Y1, Y2) between two datums (112, 114), or between the optical axis of the camera and each datum (112, 114), in the image, so as to triangulate the position and attitude of the camera (104) at the time the image was captured using received position data for each datum (106, 112, 114) at that time; it also calculates an azimuth and elevation angle between the optical axis of the camera and the object (18) in the image. Knowing the position and attitude of the camera (104) and the distance (110) to the object (18) at the time the image was captured, it triangulates the position of the object (18) at that time.
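
A minimal sketch of the final step: once the camera's position and attitude are known (here assumed already triangulated from the datums), the object's position follows from the measured azimuth/elevation to the object and the range-finder distance. Frame conventions and values are illustrative.

```python
import numpy as np

def object_position(cam_pos, cam_yaw, az_to_obj, el_to_obj, range_m):
    """az/el are relative to the camera's optical axis; yaw is absolute."""
    az = cam_yaw + az_to_obj                  # absolute azimuth of the ray
    direction = np.array([np.cos(el_to_obj) * np.sin(az),
                          np.cos(el_to_obj) * np.cos(az),
                          np.sin(el_to_obj)])  # unit ray (x east, y north, z up)
    return cam_pos + range_m * direction

cam = np.array([100.0, 50.0, 1.7])            # triangulated camera position (m)
print(object_position(cam, cam_yaw=np.radians(45),
                      az_to_obj=np.radians(-5), el_to_obj=np.radians(2),
                      range_m=250.0))
```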