Patent classifications
G01C3/10
BALANCING COLORS IN A SCANNED THREE-DIMENSIONAL IMAGE
A method of balancing colors of three-dimensional (3D) points measured by a scanner from a first location and a second location. The scanner measures 3D coordinates and colors of first object points from the first location and of second object points from the second location. The scene is divided into local neighborhoods, each containing at least a first object point and a second object point. An adapted second color is determined for each second object point based at least in part on the colors of first object points in the local neighborhood.
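A minimal sketch of the neighborhood-based color adaptation described above, assuming points are `(x, y, z, (r, g, b))` tuples and a radius-based neighborhood; the function and parameter names, the neighborhood definition, and the equal-weight averaging are all illustrative assumptions, not taken from the patent:

```python
import math

def adapt_second_scan_colors(first_pts, second_pts, radius=0.5):
    """Adapt each second-scan point's color toward nearby first-scan colors.

    first_pts, second_pts: lists of (x, y, z, (r, g, b)) tuples.
    radius: size of the local neighborhood (an assumed definition).
    """
    adapted = []
    for x2, y2, z2, c2 in second_pts:
        # Collect colors of first-scan points in the local neighborhood.
        neighbors = [c1 for x1, y1, z1, c1 in first_pts
                     if math.dist((x1, y1, z1), (x2, y2, z2)) <= radius]
        if not neighbors:
            # No first-scan point nearby: keep the original color.
            adapted.append((x2, y2, z2, c2))
            continue
        # Average the neighborhood's first-scan colors per channel.
        avg = tuple(sum(ch) / len(neighbors) for ch in zip(*neighbors))
        # Adapted color: blend of neighborhood average and own color
        # (one simple choice of weighting).
        new_c = tuple((a + b) / 2 for a, b in zip(avg, c2))
        adapted.append((x2, y2, z2, new_c))
    return adapted
```

A real implementation would use a spatial index (e.g. a k-d tree) instead of the linear scan, but the structure of the adaptation step is the same.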
DISTANCE RECOMMENDATION DEVICE FOR GOLF, CAPABLE OF GENERATING RECOMMENDED DISTANCE AT THIRD LOCATION
A distance recommendation device for golf is proposed. The device provides a recommended distance from a ball to a target at a third location by a method that includes obtaining position information of the ball positioned in each hole, obtaining position information of the target positioned in each hole, and generating a recommended shot distance from the ball to the target on the basis of the obtained position information.
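The geometric core of such a recommendation is the distance between the two obtained positions. A minimal sketch, assuming `(x, y, z)` coordinates in meters; the actual patented recommendation may of course factor in more than straight-line geometry:

```python
import math

def recommended_shot_distance(ball_pos, target_pos):
    """Straight-line distance from ball to target, given (x, y, z)
    positions in meters (illustrative names and units)."""
    return math.dist(ball_pos, target_pos)
```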
Hybrid refractive gradient-index optics for time-of-fly sensors
Techniques are described for time-of-fly sensors with hybrid refractive gradient-index optics. Some embodiments are for integration into portable electronic devices with cameras, such as smart phones. For example, a time-of-fly (TOF) imaging subsystem can receive optical information along an optical path at an imaging plane. A hybrid lens can be coupled with the TOF imaging subsystem and disposed in the optical path so that the imaging plane is substantially at a focal plane of the hybrid lens. The hybrid lens can include a less-than-quarter-pitch gradient index (GRIN) lens portion, and a refractive lens portion with a convex optical interface. The portions of the hybrid lens, together, produce a combined focal length that defines the focal plane. The hybrid lens is designed so that the combined focal length is less than a quarter-pitch focal length of the GRIN lens portion and has less spherical aberration than either lens portion.
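The "combined focal length" of two cascaded lens elements can be illustrated with the textbook thin-lens combination formula; this is only a stand-in for the idea, since the patent's GRIN-plus-refractive design is not reducible to two thin lenses. All names here are assumptions:

```python
def combined_focal_length(f_grin, f_refractive, separation=0.0):
    """Thin-lens approximation for two elements with focal lengths
    f_grin and f_refractive, separated by `separation`:
        1/f = 1/f1 + 1/f2 - d/(f1*f2)
    Units are arbitrary but must be consistent."""
    inv = (1.0 / f_grin + 1.0 / f_refractive
           - separation / (f_grin * f_refractive))
    return 1.0 / inv
```

Note that for elements in contact the combined focal length is always shorter than either element's own, consistent with the abstract's statement that the combined focal length is less than the quarter-pitch focal length of the GRIN portion.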
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING PROGRAM
An information processing apparatus includes: a plurality of stereo cameras arranged so that directions of baseline lengths of the stereo cameras intersect each other; a depth estimation unit that estimates, from captured images captured by the plurality of stereo cameras, a depth of an object included in the captured images; and an object detection unit that detects the object based on the depth estimated by the depth estimation unit and reliability of the depth, the reliability being determined in accordance with an angle of a direction of an edge line of the object with respect to the directions of the baseline lengths of the plurality of stereo cameras.
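The angle-dependent reliability can be sketched as follows: an edge parallel to a stereo baseline produces ambiguous matches (low reliability), while a perpendicular edge matches well. Modeling reliability as the sine of the edge-to-baseline angle, and fusing per-camera depths by reliability weighting, is an assumption for illustration, not the patent's exact weighting:

```python
import math

def edge_reliability(edge_angle_deg, baseline_angle_deg):
    """Reliability of a stereo depth estimate for an edge, modeled as
    |sin| of the angle between edge direction and baseline direction.
    0 for an edge parallel to the baseline, 1 for perpendicular."""
    delta = math.radians(edge_angle_deg - baseline_angle_deg)
    return abs(math.sin(delta))

def fuse_depths(estimates):
    """Reliability-weighted fusion of per-camera depth estimates,
    each given as (depth, reliability)."""
    total_w = sum(w for _, w in estimates)
    if total_w == 0:
        return None  # no camera produced a reliable estimate
    return sum(d * w for d, w in estimates) / total_w
```

With two cameras whose baselines intersect (as in the abstract), every edge orientation is covered: an edge unreliable for one baseline is reliable for the other.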
OPTICAL DETECTION OF AN OBJECT IN ACCORDANCE WITH THE TRIANGULATION PRINCIPLE
An optoelectronic sensor in accordance with the triangulation principle is provided for the detection of an object in a monitored zone. It has a light transmitter for transmitting light into the monitored zone, and a light receiver having a plurality of light reception elements arranged to form an array for the generation of a respective received signal from a received light spot that the light remitted at the object generates on the light receiver. The light transmitter and the light receiver form a triangulation arrangement. The optoelectronic sensor has a control and evaluation unit that is configured to determine the incidence location of the received light spot on the light receiver and to determine distance information therefrom. The control and evaluation unit has a plurality of processing channels in which respective received signals from a group of light reception elements are combined.
ARRANGEMENT FOR, AND METHOD OF, DETERMINING A DISTANCE TO A TARGET TO BE READ BY IMAGE CAPTURE OVER A RANGE OF WORKING DISTANCES
A distance to a target to be read by image capture over a range of working distances is determined by directing an aiming light spot along an aiming axis to the target, and by capturing a first image of the target containing the aiming light spot, and by capturing a second image of the target without the aiming light spot. Each image is captured in a frame over a field of view having an imaging axis offset from the aiming axis. An image pre-processor compares first image data from the first image with second image data from the second image over a common fractional region of both frames to obtain a position of the aiming light spot in the first image, and determines the distance to the target based on the position of the aiming light spot in the first image.
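The comparison of the two captured frames can be sketched as a simple difference over the common region: the pixel with the largest brightness increase between the spot-off and spot-on frames is taken as the aiming-spot position. This is a minimal stand-in for the patent's image pre-processor, with frames assumed to be 2-D lists of brightness values:

```python
def locate_aiming_spot(frame_with_spot, frame_without_spot):
    """Return (row, col) of the pixel whose brightness increased most
    between the frame without the aiming spot and the frame with it."""
    best, best_pos = -1, None
    rows = zip(frame_with_spot, frame_without_spot)
    for r, (row_a, row_b) in enumerate(rows):
        for c, (a, b) in enumerate(zip(row_a, row_b)):
            if a - b > best:
                best, best_pos = a - b, (r, c)
    return best_pos
```

Given the spot position, the target distance follows from the known offset between the aiming axis and the imaging axis, via a triangulation relation like the one in the preceding sensor abstract.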
Systems and methods for position and pose determination and tracking
Systems and methods are disclosed for determining the position and pose of an object in a physical environment, and for tracking it, based on the emission and sensing of light signals. The derived position, pose and tracking information may be used in a VR/AR environment. The disclosed systems and methods allow for the improved tracking of both active and passive devices. In addition, the disclosed systems and methods enable an arbitrary number of light sensors to be disposed on an object, thereby increasing accuracy and mitigating the effects of occlusion of certain light sensors. Position and pose estimates may be refined and tracked using a filter lattice responsive to changes in observed system states and/or settings. Further, data received from an inertial measurement unit may be used to increase tracking accuracy as well as position and pose determination itself.
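The filtering idea can be illustrated with a simple alpha-beta tracker over noisy 1-D position measurements; this is a minimal stand-in for the abstract's filter lattice (which is more elaborate), and the parameter values are assumptions:

```python
def alpha_beta_track(measurements, dt=0.1, alpha=0.5, beta=0.1):
    """Alpha-beta filter over 1-D position measurements.

    Maintains position x and velocity v; each step predicts forward
    by dt, then corrects by the residual between the measurement and
    the prediction. Returns the filtered positions."""
    x, v = measurements[0], 0.0
    filtered = []
    for z in measurements[1:]:
        x_pred = x + v * dt          # predict
        r = z - x_pred               # measurement residual
        x = x_pred + alpha * r       # correct position
        v = v + (beta / dt) * r      # correct velocity
        filtered.append(x)
    return filtered
```

In the full system, an IMU would supply the prediction step (high-rate acceleration and rotation) while the optical sensors supply the corrections, with the filter responding to changes in observed states as the abstract describes.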