Patent classifications
G01S5/163
Arrangement and Method for the Model-Based Calibration of a Robot in a Working Space
An arrangement for the model-based calibration of a mechanism in a workspace uses calibration objects that are either directed laser radiation patterns, together with an associated laser radiation-pattern generator, or radiation-pattern position sensors. Functional operation groups, each made up of at least one laser radiation pattern and at least one position sensor, interact in such a way that, when a radiation pattern impinges on a sensor, the measured sensor position values are passed to computing devices that determine the parameters of a mathematical mechanism model from these measurements. In the process, at least two different functional operation groups are used to calibrate the mechanism, and at least two calibration objects from different functional operation groups are rigidly connected to one another.
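The core computation named in the abstract, determining mechanism-model parameters from sensor measurements, reduces to a parameter fit. As an illustrative sketch only (the patent does not specify the mechanism model; the planar two-joint arm and all names below are assumptions), here is a least-squares fit of two link lengths from externally measured end-effector positions at known joint angles:

```python
import numpy as np

def fit_link_lengths(joint_angles, measured_xy):
    """Least-squares estimate of the two link lengths of a planar
    two-joint arm from end-effector positions measured by external
    sensors at known joint angles (a toy stand-in for the patent's
    mechanism-model parameters).

    x = L1*cos(q1) + L2*cos(q1+q2),  y = L1*sin(q1) + L2*sin(q1+q2)
    is linear in (L1, L2), so each measurement contributes two rows.
    """
    a_rows, b_rows = [], []
    for (q1, q2), (x, y) in zip(joint_angles, measured_xy):
        a_rows.append([np.cos(q1), np.cos(q1 + q2)])
        a_rows.append([np.sin(q1), np.sin(q1 + q2)])
        b_rows.extend([x, y])
    params, *_ = np.linalg.lstsq(np.asarray(a_rows), np.asarray(b_rows),
                                 rcond=None)
    return params  # [L1, L2]
```

With noisy measurements from several poses, the same least-squares structure over-determines the parameters, which is why the patent's use of multiple functional operation groups helps.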
INTERACTIVE SPATIAL ORIENTATION METHOD AND SYSTEM
Disclosed is an interactive spatial orientation method and system. The method includes: sequentially scanning, by a scanning apparatus, a receiving apparatus in a first direction and a second direction perpendicular to each other; converting, by the receiving apparatus, received optical signals generated from the first scanning and the second scanning into radio waves carrying results of the first scanning and the second scanning, and transferring the radio waves to a processing apparatus; synthesizing, by the processing apparatus, the results of the first scanning and the second scanning to obtain six degrees of freedom information of the receiving apparatus. The system includes a scanning apparatus; a receiving apparatus; and a processing apparatus.
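The two perpendicular scans amount to measuring two sweep angles, which together fix a direction from the scanner to the receiver. A minimal sketch, assuming a rotating laser plane with a periodic sync reference (the timing model and all names are illustrative, not taken from the patent):

```python
import math

def sweep_angle(t_hit, t_sync, period):
    """Convert the time at which a rotating laser plane hits the
    receiver into a sweep angle in radians: zero at the sync instant,
    one full turn per rotation period."""
    return 2.0 * math.pi * ((t_hit - t_sync) / period)

def direction_from_sweeps(azimuth, elevation):
    """Combine a horizontal-sweep angle (azimuth) and a vertical-sweep
    angle (elevation) into a unit direction vector from the scanner
    toward the receiver."""
    x = math.cos(elevation) * math.sin(azimuth)
    y = math.sin(elevation)
    z = math.cos(elevation) * math.cos(azimuth)
    return (x, y, z)
```

A single scanner fixes only a bearing; recovering the full six degrees of freedom, as the abstract describes, additionally requires synthesizing such measurements across multiple receiver photodiodes with known relative placement.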
System and method for measuring tracker system accuracy
The present invention relates to a simple and effective system and method for measuring the accuracy of a camera-based tracker system, especially a helmet-mounted tracker system, utilizing a Coordinate Measuring Machine (CMM). The method comprises the steps of: computing the spatial relation between the tracked object and a calibration pattern using the CMM; computing the relation between the reference camera and the tracker camera; computing the relation between the reference camera and the calibration pattern; computing the ground-truth relation between the tracker camera and the tracked object; obtaining actual tracker system results; comparing these results with the ground-truth relations to find the accuracy of the tracker system; recording the accuracy results; and testing whether a new calculation of the accuracy results is required. The system comprises: a reference camera; a calibration pattern visible to the reference camera; a camera spatial relation computation unit; a relative spatial relation computation unit; a memory unit; and a spatial relation comparison unit.
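The comparison step, measured tracker pose against CMM-derived ground truth, can be sketched with 4x4 homogeneous transforms. This is an illustrative formulation (the matrix representation and error metrics are assumptions; the patent does not prescribe them):

```python
import numpy as np

def compose(*transforms):
    """Chain 4x4 homogeneous transforms, e.g. reference-camera ->
    calibration-pattern -> tracked-object, into one relation."""
    out = np.eye(4)
    for t in transforms:
        out = out @ t
    return out

def pose_error(ground_truth, measured):
    """Residual between a measured tracker pose and the ground-truth
    pose: translation error (input units) and rotation error (radians).
    """
    delta = np.linalg.inv(ground_truth) @ measured
    trans_err = np.linalg.norm(delta[:3, 3])
    # Rotation angle recovered from the trace of the 3x3 rotation block.
    cos_angle = (np.trace(delta[:3, :3]) - 1.0) / 2.0
    rot_err = np.arccos(np.clip(cos_angle, -1.0, 1.0))
    return trans_err, rot_err
```

Recording these scalar errors over many poses is one straightforward way to realize the abstract's "comparing these results with the ground-truth relations" step.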
SURVEYING INSTRUMENT AND PROGRAM
A surveying instrument includes: a survey system; an image sensing system including first and second image sensing units, the second having a wider angle of view than the first; horizontal and vertical angle drivers to rotate the survey and image sensing systems around the instrument's vertical and horizontal axes, respectively; a data storage part; an angle detecting part; and a control unit. In response to the design data stored in the data storage part and the detected angle, the control unit causes the following to appear on a representation device: an image based on image data that the first or second image sensing unit generates after imaging; a design data object showing the locations of the design data portion included in the image; and coordinate measurement point objects showing the locations of the coordinate measurement points, to be surveyed, corresponding to that design data portion.
EVENT DRIVEN SENSOR (EDS) TRACKING OF LIGHT EMITTING DIODE (LED) ARRAY
An event driven sensor (EDS) is used for simultaneous localization and mapping (SLAM), in particular in conjunction with a constellation of light emitting diodes (LEDs) to simultaneously localize all LEDs and track the EDS pose in space. The EDS may be stationary or movable and can track movable LED constellations as rigid bodies. Each individual LED is distinguished at a high rate using minimal computational resources (no image processing). Thus, instead of a camera and image processing, rapidly pulsing LEDs detected by the EDS are used as feature points, such that each EDS event is related to only one LED at a time.
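One way to distinguish each LED "at a high rate using minimal computational resources" is to give every LED a unique pulse frequency and classify the inter-event intervals the sensor reports. A hedged sketch under that assumption (the frequency-coding scheme and names below are illustrative, not the patent's stated method):

```python
def identify_led(event_times, led_frequencies, tolerance=0.1):
    """Match a stream of event timestamps (seconds) observed at one
    sensor location to the LED whose known pulse frequency best
    explains the mean inter-event interval.

    led_frequencies maps led_id -> pulse rate in Hz. Returns the best
    led_id, or None if no frequency is within the relative tolerance.
    """
    if len(event_times) < 2:
        return None
    intervals = [b - a for a, b in zip(event_times, event_times[1:])]
    observed_hz = 1.0 / (sum(intervals) / len(intervals))
    best_id, best_err = None, float("inf")
    for led_id, hz in led_frequencies.items():
        err = abs(hz - observed_hz)
        if err < best_err:
            best_id, best_err = led_id, err
    return best_id if best_err <= tolerance * observed_hz else None
```

Because each classification needs only timestamp arithmetic, this kind of scheme avoids frame capture and image processing entirely, consistent with the abstract's claim.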
INDOOR LOCALIZATION OF A MULTI-ANTENNA RECEIVER
An approach to localization in an indoor environment makes use of a multiple-antenna receiver (e.g., in a smartphone, tablet, or camera) and knowledge of the locations of one or more radio transmitters, which may be part of a data communication infrastructure providing data communication services to devices in the environment. Successive measurements of transmissions from the transmitters are recorded at the receiver as the device is translated and rotated in the environment. Rotation-related measurements are also made at the device. The radio frequency and rotation-related measurements are used to infer the location and orientation, together referred to as the pose, of the device. Phase synchronization of the transmitters and the receiver is not required. In general, the accuracy of the pose estimate far exceeds that achievable using radio frequency measurements without taking into consideration the motion of the device, and far exceeds that achievable using the inertial measurements alone.
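One building block such a multi-antenna approach can exploit without any transmitter-receiver phase synchronization is the phase difference between two antennas of the same receiver, which yields an angle of arrival. A minimal sketch (the two-antenna far-field model is an illustrative assumption, not the patent's full inference procedure):

```python
import math

def angle_of_arrival(phase_delta, spacing, wavelength):
    """Estimate the angle of arrival (radians from broadside) from the
    phase difference (radians) measured between two antennas of the
    same receiver a fixed spacing apart.

    Uses the far-field relation delta_phi = 2*pi*spacing*sin(theta)/lambda;
    unambiguous only when spacing <= wavelength / 2.
    """
    s = phase_delta * wavelength / (2.0 * math.pi * spacing)
    s = max(-1.0, min(1.0, s))  # clamp against measurement noise
    return math.asin(s)
```

Only the relative phase between the receiver's own antennas enters, which is why no clock or phase synchronization with the transmitters is needed; fusing many such bearings with the rotation-related measurements is what the abstract's pose inference builds on.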
AERIAL VEHICLE SYSTEM
A system is provided for maneuvering a payload in an air space constrained by one or more obstacles, and may include first and second aerial vehicles coupled by a tether to a ground station. Sensor systems and processors in the ground station and aerial vehicles may track obstacles and the positions and attitudes of the tether and the vehicles in order to maneuver the payload and the tether to carry out a mission. The sensor system may include airborne cameras providing data for a scene reconstruction process and for simultaneous mapping of obstacles and localization of the aerial vehicles relative to the obstacles. The aerial vehicles may include a frame formed substantially of a composite material for preventing contact of the rotors with the tether segments.
Visible Light Based Indoor Positioning System
A method for enabling indoor positioning of a mobile receiver, including: detecting an orientation of the mobile receiver; measuring light intensities using at least three effective visible light receiving areas positioned on the mobile receiver, wherein the at least three effective visible light receiving areas are oriented such that the light intensity each of them measures from the same light source differs from that measured by the others; and producing an output which enables 3-dimensional indoor positioning of the mobile receiver relative to a second coordinate system.
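Once the differing intensity measurements have been converted into distances to known luminaires, the positioning step is classical trilateration. An illustrative 2-D sketch of that final step (the patent does not specify its output computation at this level, and the full method is 3-D):

```python
def trilaterate_2d(anchors, distances):
    """Solve for (x, y) from distances to three known anchor points
    (e.g. ceiling lights projected onto the floor plane) by
    subtracting pairs of circle equations, which linearizes the system.
    """
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = distances
    # Two linear equations from (circle1 - circle2) and (circle2 - circle3).
    a = 2 * (x2 - x1); b = 2 * (y2 - y1)
    c = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    d = 2 * (x3 - x2); e = 2 * (y3 - y2)
    f = r2**2 - r3**2 - x2**2 + x3**2 - y2**2 + y3**2
    det = a * e - b * d  # zero iff the anchors are collinear
    return (c * e - b * f) / det, (a * f - c * d) / det
```

The requirement that the three receiving areas measure different intensities from the same source is what makes the per-source distance (and direction) information recoverable in the first place.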
Position measurement systems using position sensitive detectors
Methods and devices for a remote control device for a display device are disclosed. In one embodiment, the remote control device may comprise a plurality of light sources, each of which has a light profile angled at a predetermined degree different from the other light sources. In another embodiment, the remote control device may comprise a controller and a plurality of optical detectors coupled to the controller. Each optical detector may generate a pair of electrical signals in response to incident light from a plurality of light sources located on a display device, and the controller may calculate the position of the remote control device based on the electrical signals.
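A position-sensitive detector of the kind that generates "a pair of electrical signals in response to incident light" typically encodes the spot location in the ratio of its two edge currents. A minimal sketch of that standard one-dimensional relation (the detector geometry is an assumption for illustration):

```python
def psd_position(i1, i2, length):
    """Spot position along one axis of a position-sensitive detector.

    i1, i2 are the photocurrents collected at the two edge electrodes;
    length is the active length of the detector. The normalized current
    difference scales linearly with displacement, with 0 at the center.
    """
    return (i2 - i1) / (i1 + i2) * (length / 2.0)
```

A 2-D detector repeats this relation on a second electrode pair; combining the per-detector spot positions across several detectors and light sources is what lets the controller solve for the remote's position.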
System and method for tracking
Systems and methods are provided for generating calibration information for a media projector. The method includes tracking at least the position of a tracking apparatus that can be positioned on a surface. The media projector shines a test spot on the surface, and the test spot corresponds to a known pixel coordinate of the media projector. The system includes a computing device in communication with at least two cameras, each of which is able to capture images of one or more light sources attached to an object. The computing device determines the object's position by comparing images of the light sources and generates an output comprising the real-world position of the object. This real-world position is mapped to the known pixel coordinate of the media projector.
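For a planar surface, mapping tracked real-world spot positions to projector pixel coordinates is a homography fit over the collected (position, pixel) correspondences. An illustrative sketch using the standard direct linear transformation (the planarity assumption and function names are not from the patent):

```python
import numpy as np

def fit_homography(world_pts, pixel_pts):
    """Fit a 3x3 homography mapping planar world points (tracked
    test-spot positions on the surface) to the projector pixel
    coordinates that produced them, from >= 4 correspondences,
    via DLT least squares."""
    rows = []
    for (x, y), (u, v) in zip(world_pts, pixel_pts):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The homography is the right null vector (smallest singular value).
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    h = vt[-1].reshape(3, 3)
    return h / h[2, 2]

def apply_homography(h, pt):
    """Map a world point through the fitted homography to pixels."""
    p = h @ np.array([pt[0], pt[1], 1.0])
    return p[0] / p[2], p[1] / p[2]
```

Each test spot the projector shines contributes one correspondence, so sweeping a handful of spots across the surface is enough to calibrate the whole projection.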