G05D1/0251

Coded localization systems, methods and apparatus

A coded localization system includes a plurality of optical channels arranged to cooperatively distribute electromagnetic energy from at least one object onto a plurality of detectors. Each of the channels includes a localization code that is different from any other localization code in other channels, to modify electromagnetic energy passing therethrough. Digital outputs from the detectors are processable to determine sub-pixel localization of the object onto the detectors, such that a location of the object is determined more accurately than by detector geometry alone. Another coded localization system includes a plurality of optical channels arranged to cooperatively distribute a partially polarized signal onto a plurality of pixels. Each of the channels includes a polarization code that is different from any other polarization code in other channels to uniquely polarize electromagnetic energy passing therethrough. Digital outputs from the detectors are processable to determine a polarization pattern.
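The sub-pixel localization idea, recovering an object position finer than the pixel pitch from the distribution of energy across detector pixels, can be sketched with an intensity-weighted centroid. This is a minimal illustration, not the patent's coding scheme; all names are hypothetical.

```python
def subpixel_centroid(pixels):
    """Estimate a sub-pixel x-position from a 1-D list of pixel intensities.

    The weighted centroid locates the signal peak more finely than the
    detector geometry (pixel grid) alone would allow.
    """
    total = sum(pixels)
    if total == 0:
        raise ValueError("no signal on detector")
    return sum(i * v for i, v in enumerate(pixels)) / total

# A point source centered between pixels 1 and 2 localizes to 1.5:
print(subpixel_centroid([0.0, 1.0, 1.0, 0.0]))
```

In the patented system the per-channel localization codes shape this distribution so that the joint digital outputs constrain the position further; the centroid above shows only the baseline sub-pixel step.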

Controller, control method, and program
11501461 · 2022-11-15

The present technology relates to a controller, a control method, and a program that enable self-localization with lower power consumption. Provided are a selection unit that selects, from a horizontal camera mounted in a horizontal direction and a downward camera mounted in a downward direction, a camera used for self-localization depending on speed, and a self-localization unit that performs self-localization using an image obtained by imaging with the horizontal camera or the downward camera selected by the selection unit. The selection unit selects the horizontal camera in a case where the speed is equal to or higher than a predetermined speed, and selects the downward camera in a case where the speed is not equal to or higher than the predetermined speed. The present technology can be applied to a self-localization system.
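The selection rule described above is a simple threshold test. A minimal sketch, with an illustrative threshold value not taken from the patent:

```python
def select_camera(speed, threshold=0.5):
    """Select the camera used for self-localization from the measured speed.

    The horizontal camera is chosen when speed is equal to or higher than
    the predetermined speed; otherwise the downward camera is chosen.
    """
    return "horizontal" if speed >= threshold else "downward"
```

Note that the boundary case (speed exactly at the threshold) selects the horizontal camera, matching the "equal to or higher" wording of the abstract.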

Multimodal sentiment detection

Described herein is a system for improving sentiment detection and/or recognition using multiple inputs. For example, an autonomously motile device is configured to generate audio data and/or image data and perform sentiment detection processing. The device may process the audio data and the image data using a multimodal temporal attention model to generate sentiment data that estimates a sentiment score and/or a sentiment category. In some examples, the device may also process language data (e.g., lexical information) using the multimodal temporal attention model. The device can adjust its operations based on the sentiment data. For example, the device may improve an interaction with the user by estimating the user's current emotional state, or can change a position of the device and/or sensor(s) of the device relative to the user to improve an accuracy of the sentiment data.
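One way to read the multimodal fusion step is as an attention-weighted combination of per-modality sentiment scores. The sketch below uses a softmax over per-modality confidences; this is an assumed simplification, not the patent's temporal attention model, and all names are hypothetical.

```python
import math

def fuse_sentiment(scores, confidences):
    """Combine per-modality sentiment scores (e.g., audio, image, lexical)
    using softmax attention weights derived from per-modality confidences."""
    exps = [math.exp(c) for c in confidences]
    z = sum(exps)
    weights = [e / z for e in exps]
    return sum(w * s for w, s in zip(weights, scores))
```

With equal confidences this reduces to a plain average; a modality with higher confidence dominates the fused score.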

Method for traveling on basis of characteristics of traveling surface, and robot for implementing same

The present disclosure relates to a method for traveling on the basis of characteristics of a traveling surface, and a robot for implementing the same. According to one embodiment, the method comprises the steps in which: a sensing module of the robot senses an adjacent traveling surface to generate characteristic information of the traveling surface, and a controller of the robot stores position and characteristic information of the traveling surface in a map storage of the robot; the controller sets a function to be applied to the traveling surface in response to the characteristic information, or generates a movement path selectively including the traveling surface between start and end points of the robot; and the controller controls a moving unit and a functional unit of the robot according to the set function or the movement path.
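The map-storage and function-selection steps can be sketched as a grid map keyed by position, with a lookup from surface characteristic to robot function. The surface types and functions below are illustrative assumptions, not taken from the patent.

```python
# Hypothetical mapping from sensed surface characteristic to robot function.
SURFACE_FUNCTIONS = {"carpet": "vacuum", "tile": "mop", "threshold": "avoid"}

class SurfaceMap:
    """Stores position and characteristic information of the traveling surface."""

    def __init__(self):
        self.cells = {}

    def store(self, position, characteristic):
        """Record the sensed characteristic at a (x, y) grid position."""
        self.cells[position] = characteristic

    def function_at(self, position):
        """Set the function to apply, based on the stored characteristic."""
        return SURFACE_FUNCTIONS.get(self.cells.get(position), "default")
```

A path planner could then weight or exclude cells whose function is "avoid" when generating the movement path between start and end points.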

Method of extracting feature from image using laser pattern and device and robot of extracting feature thereof
11493931 · 2022-11-08

Provided herein are a method of extracting a feature from an image using a laser pattern, an identification device, and a robot including the same. The identification device includes a first camera coupled to a laser filter and configured to generate a first image including a pattern of a laser reflected from an object, a second camera configured to capture an area overlapping an area captured by the first camera to generate a second image, and a controller configured to generate a mask for distinguishing an effective area using the pattern included in the first image and to extract a feature from the second image by applying the mask to the second image.
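The mask-and-apply step can be sketched as thresholding the laser-filtered image into a binary effective-area mask and multiplying it into the second image. The threshold value and function names are illustrative assumptions.

```python
def build_mask(laser_image, threshold=0.5):
    """Binary mask marking pixels where the laser pattern is visible
    in the first (laser-filtered) image."""
    return [[1 if v >= threshold else 0 for v in row] for row in laser_image]

def apply_mask(image, mask):
    """Keep only the pixels of the second image inside the effective area;
    features are then extracted from the surviving pixels."""
    return [[v * m for v, m in zip(irow, mrow)]
            for irow, mrow in zip(image, mask)]
```

Any standard feature extractor (corners, descriptors) would then run only on the masked region, suppressing responses outside the laser-defined effective area.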

Bounding box estimation and lane vehicle association

Disclosed are techniques for estimating a 3D bounding box (3DBB) from a 2D bounding box (2DBB). Conventional techniques to estimate 3DBB from 2DBB rely upon classifying target vehicles within the 2DBB. When the target vehicle is misclassified, the projected bounding box from the estimated 3DBB is inaccurate. To address such issues, it is proposed to estimate the 3DBB without relying upon classifying the target vehicle.
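A classification-free route from a 2DBB to depth can rely on ground-plane geometry alone: under a flat-road, pinhole-camera assumption, the image row where the box touches the road fixes the distance. This is a standard geometric sketch, not the patent's actual formulation; parameter names are hypothetical.

```python
def ground_plane_distance(y_bottom, y_horizon, focal_px, camera_height_m):
    """Distance to the ground contact point of a 2D bounding box.

    Assumes a flat road and a pinhole camera of known height: the farther
    the box bottom edge sits below the horizon row, the closer the target.
    No vehicle classification is needed.
    """
    if y_bottom <= y_horizon:
        raise ValueError("bottom edge must lie below the horizon")
    return focal_px * camera_height_m / (y_bottom - y_horizon)
```

With the distance fixed this way, the 3DBB footprint can be placed on the road without first deciding whether the target is a car, truck, or bus, avoiding the misclassification failure mode described above.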

Pool cleaning robot and a method for imaging a pool

A method for cleaning a region of a pool, the method may include moving a pool cleaning robot along a cleaning path that covers the region while acquiring, at first different points of time and by a sensing unit of the pool cleaning robot, first images of first scenes, at least one first scene at each first point of time; wherein the acquiring of the first images is executed while illuminating the first scenes by the pool cleaning robot; detecting, in at least one first image, illumination reflected or scattered as a result of the illuminating of the first scenes; removing from the at least one first image information about the illumination reflected or scattered; and determining, based at least in part on the first images, first locations of the pool cleaning robot; wherein the moving is responsive to the first locations of the pool cleaning robot.
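The step of removing the robot's own illumination from an acquired image can be sketched as a clamped subtraction of a detected-illumination estimate. This is a deliberately minimal model; the patent does not specify this formulation, and the names are illustrative.

```python
def remove_illumination(image, illumination):
    """Subtract the detected reflected/scattered illumination from an image,
    clamping at zero so over-subtraction cannot produce negative pixels."""
    return [max(v - r, 0.0) for v, r in zip(image, illumination)]
```

The cleaned images would then feed the localization step, so that glare from the robot's own lamps does not masquerade as scene features.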

Information processing apparatus, information processing method, information processing system, and storage medium
11573574 · 2023-02-07

An information processing apparatus for determining control values for controlling a position of a vehicle for conveying a cargo includes an acquisition unit configured to acquire first information for identifying a three-dimensional shape of the cargo based on a captured first image of the cargo, and second information for identifying, based on a captured second image of an environment where the vehicle moves, a distance between an object in the environment and the vehicle, and a determination unit configured to, based on the first information and the second information, determine the control values for preventing the cargo and the object from coming closer than a predetermined distance.
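A simple form of the control-value determination is a speed limit that shrinks as the cargo-to-object distance approaches the predetermined minimum. The linear gain below is an illustrative assumption, not the patent's control law.

```python
def limit_speed(distance_m, min_clearance_m, max_speed):
    """Determine a speed control value that keeps the cargo and an obstacle
    from coming closer than the predetermined minimum clearance."""
    margin = distance_m - min_clearance_m
    if margin <= 0:
        return 0.0  # already at or inside the clearance: stop
    # Illustrative gain: allow 1 m/s per metre of remaining margin.
    return min(max_speed, margin)
```

In the apparatus, the distance input would come from the second image (environment) while the clearance needed would account for the cargo's three-dimensional shape from the first image.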

Method and device for determining the geographic position and orientation of a vehicle

In a method for determining the geographic position and orientation of a vehicle, an image of the vehicle's surroundings is recorded by at least one camera of the vehicle, wherein the recorded image at least partially comprises regions of the vehicle's surroundings on the ground level. Classification information is generated for the individual pixels of the recorded image and indicates an assignment to one of several given object classes, wherein based on this assignment, a semantic segmentation of the image is performed. Ground texture transitions based on the semantic segmentation of the image are detected. The detected ground texture transitions are projected onto the ground level of the vehicle's surroundings. The deviation between the ground texture transitions projected onto the ground level of the vehicle's surroundings and ground texture transitions in a global reference map is minimized. The current position and orientation of the vehicle in space is output based on the minimized deviation.
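The final minimization step, finding the pose at which projected ground texture transitions best overlay the reference map, can be sketched as a brute-force search over translation offsets minimizing a nearest-neighbor squared distance. Real systems would optimize over full pose with a gradient or ICP-style method; this grid search is only an illustration.

```python
def alignment_error(points, map_points, dx, dy):
    """Sum of squared nearest-neighbor distances after shifting the
    detected ground texture transitions by (dx, dy)."""
    return sum(min((px + dx - mx) ** 2 + (py + dy - my) ** 2
                   for mx, my in map_points)
               for px, py in points)

def best_offset(points, map_points, search=range(-2, 3)):
    """Brute-force the translation that minimizes the deviation between
    projected transitions and the global reference map."""
    return min(((dx, dy) for dx in search for dy in search),
               key=lambda o: alignment_error(points, map_points, *o))
```

The offset returned corresponds to the correction applied to the vehicle's position estimate; orientation would enter as an additional rotation parameter in the same objective.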

Object pose estimation

A plurality of virtual three-dimensional (3D) points distributed on a 3D reference plane for a camera array including a plurality of cameras are randomly selected. The plurality of cameras includes a host camera and one or more additional cameras. Respective two-dimensional (2D) projections of the plurality of virtual 3D points for the plurality of cameras are determined based on respective poses of the cameras. For the respective one or more additional cameras, respective homography matrices are determined based on the 2D projections for the respective camera and the 2D projections for the host camera. The respective homography matrices map the 2D projections for the respective camera to the 2D projections for the host camera. A stitched image is generated based on respective images captured by the plurality of cameras and the respective homography matrices.
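Once each camera's 2D projections of the shared virtual 3D points are known, the per-camera homography to the host camera can be computed from the point correspondences. A standard way to do this is the direct linear transform (DLT); the sketch below assumes NumPy and at least four correspondences, and is not claimed to match the patent's exact procedure.

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct linear transform: the 3x3 homography H mapping each 2D point
    in src (an additional camera's projections) to the matching point in
    dst (the host camera's projections). Requires N >= 4 correspondences,
    no three collinear."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=float)
    # The homography is the right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2, 2] == 1
```

Warping each additional camera's image by its homography into the host camera's frame, then blending, yields the stitched image described above.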