G05D1/2435

Self-location estimation method
11874666 · 2024-01-16

The present invention provides a self-location estimation method including: a first step of estimating the self-location of a moving body (1) from the detection information of a plurality of sensors (5) to (8) by using a plurality of algorithms (11) to (13); a second step of determining a weighting factor for each algorithm from one or more state quantities A, B and C, which are obtained by estimation processing for each of a plurality of algorithms, by using a trained neural network (14); and a third step of identifying, as the self-location of the moving body (1), a location obtained by synthesizing the self-locations, which have been estimated by the algorithms, by using weighting factors.
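The third step above is a weighted synthesis of the per-algorithm estimates. A minimal sketch, assuming each algorithm outputs a 2D position and the trained network has already produced one weighting factor per algorithm (both the 2D representation and the example weights are illustrative assumptions, not taken from the patent):

```python
def synthesize_location(estimates, weights):
    """Combine per-algorithm (x, y) self-location estimates using
    the weighting factors produced for each algorithm."""
    total = sum(weights)
    x = sum(w * e[0] for w, e in zip(weights, estimates)) / total
    y = sum(w * e[1] for w, e in zip(weights, estimates)) / total
    return (x, y)

# Example: three algorithms (cf. algorithms 11-13), hypothetical weights
estimates = [(1.0, 2.0), (1.2, 1.8), (0.9, 2.1)]
weights = [0.5, 0.3, 0.2]
print(synthesize_location(estimates, weights))
```

The normalization by the weight sum makes the result independent of the scale of the network's outputs.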

SYSTEMS AND METHODS FOR SAFE OPERATION OF ROBOTS
20240100702 · 2024-03-28

Methods and apparatus for implementing a safety system for a mobile robot are described. The method comprises receiving first sensor data from one or more sensors, the first sensor data being captured at a first time, identifying, based on the first sensor data, a first unobserved portion of a safety field in an environment of a mobile robot, assigning, to each of a plurality of contiguous regions within the first unobserved portion of the safety field, an occupancy state, updating, at a second time after the first time, the occupancy state of one or more of the plurality of contiguous regions, and determining one or more operating parameters for the mobile robot, the one or more operating parameters based, at least in part, on the occupancy state of at least some regions of the plurality of contiguous regions at the second time.
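The occupancy bookkeeping described above can be sketched as follows; the region layout, state names, and the speed policy derived from them are illustrative assumptions, since the abstract does not specify the concrete operating parameters:

```python
UNKNOWN, FREE, OCCUPIED = "unknown", "free", "occupied"

def update_states(states, observations):
    """Second time step: overwrite regions' occupancy states with
    newly observed values from fresh sensor data."""
    updated = dict(states)
    updated.update(observations)
    return updated

def max_speed(states, full_speed=1.5, cautious_speed=0.5):
    """Illustrative operating parameter: reduce the speed limit while
    any region of the safety field is unknown or occupied."""
    if any(s in (UNKNOWN, OCCUPIED) for s in states.values()):
        return cautious_speed
    return full_speed

# First time: part of the safety field is unobserved
t1 = {"r0": FREE, "r1": UNKNOWN, "r2": UNKNOWN}
# Second time: new sensor data resolves the unobserved regions
t2 = update_states(t1, {"r1": FREE, "r2": FREE})
print(max_speed(t1), max_speed(t2))  # 0.5 1.5
```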

Depth estimation

An image processing system to estimate depth for a scene. The image processing system includes a fusion engine to receive a first depth estimate from a geometric reconstruction engine and a second depth estimate from a neural network architecture. The fusion engine is configured to probabilistically fuse the first depth estimate and the second depth estimate to output a fused depth estimate for the scene. The fusion engine is configured to receive a measurement of uncertainty for the first depth estimate from the geometric reconstruction engine and a measurement of uncertainty for the second depth estimate from the neural network architecture, and use the measurements of uncertainty to probabilistically fuse the first depth estimate and the second depth estimate.
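One standard way to fuse two estimates with associated uncertainties, under a Gaussian assumption, is inverse-variance weighting. The abstract does not name the exact fusion rule, so the following is an illustrative sketch rather than the patented method:

```python
def fuse_depth(d1, var1, d2, var2):
    """Probabilistically fuse two depth estimates, each carrying a
    measurement of uncertainty expressed as a variance."""
    w1, w2 = 1.0 / var1, 1.0 / var2          # inverse-variance weights
    fused = (w1 * d1 + w2 * d2) / (w1 + w2)  # precision-weighted mean
    fused_var = 1.0 / (w1 + w2)              # fused uncertainty
    return fused, fused_var

# Geometric reconstruction (higher variance) vs. network (lower variance)
d, v = fuse_depth(2.0, 0.04, 2.2, 0.01)
print(d, v)
```

Note that the fused estimate is pulled toward the lower-variance input, and the fused variance is smaller than either input variance.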

Systems and methods for enhancing performance and mapping of robots using modular devices
11940805 · 2024-03-26

Systems and methods for enhancing task performance and computer readable maps produced by robots using modular sensors are disclosed herein. According to at least one non-limiting exemplary embodiment, robots may perform a first set of tasks, wherein coupling one or more modular sensors to the robots may configure a robot to perform a second set of tasks, the second set of tasks including the first set of tasks and at least one additional task.

Electronic system for controlling the docking of a vehicle with a docking area, and corresponding method
11932418 · 2024-03-19

An electronic system and method controls automatic or semi-automatic docking of a vehicle with a given docking area, applicable, in particular, to the docking of an airport vehicle, such as a baggage belt loader, a catering vehicle, etc., to the fuselage of an aircraft, for example to the door of such an aircraft. The given docking area comprises at least one target. The system includes a first determination device configured to determine the position of the docking area by determining the type of target from a set of given types and its position, a second determination device configured to determine a guide path for guiding the vehicle towards the given docking area depending on the position of the docking area, and a third determination device configured to determine the type of docking destination, the second determination device being capable of determining one or more exclusion areas depending on the type of docking destination, by comparing the type of docking destination with types of docking destination stored in a database in association with exclusion areas, such that the guide path for guiding the vehicle towards the given docking area does not pass into any of the exclusion areas.
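The database lookup and exclusion-area constraint can be sketched as below; the destination-type names, the axis-aligned rectangular areas, and the point-sampled path check are illustrative assumptions:

```python
# Hypothetical database: docking-destination types stored in association
# with exclusion areas (here axis-aligned rectangles: (min corner, max corner))
EXCLUSION_DB = {
    "aircraft_door": [((0.0, 0.0), (2.0, 1.0))],  # e.g. keep-clear zone
    "baggage_hold": [],
}

def in_area(point, area):
    """Return True if a 2D point lies inside a rectangular exclusion area."""
    (x0, y0), (x1, y1) = area
    x, y = point
    return x0 <= x <= x1 and y0 <= y <= y1

def path_is_valid(path, destination_type):
    """Check that no sampled point of the guide path enters any
    exclusion area associated with the docking destination type."""
    areas = EXCLUSION_DB.get(destination_type, [])
    return not any(in_area(p, a) for p in path for a in areas)

print(path_is_valid([(3.0, 0.5), (4.0, 2.0)], "aircraft_door"))  # True
print(path_is_valid([(1.0, 0.5)], "aircraft_door"))              # False
```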

Method for operating a higher-level automated vehicle (HAV), in particular a highly automated vehicle

A method for operating a higher-level automated vehicle (HAV), in particular a highly automated vehicle, is provided, including: S1 for providing a digital map, which may be a highly accurate digital map, in a driver assistance system of the HAV; S2 for determining an instantaneous vehicle position and localizing the vehicle position in the digital map; S3 for providing an expected setpoint traffic density at the vehicle position; S4 for ascertaining an instantaneous actual traffic density in the surroundings of the HAV; S5 for comparing the actual traffic density to the setpoint traffic density and ascertaining a difference value as the result of the comparison; S6 for checking the vehicle position of the HAV for plausibility at least partially based on the difference value and/or S7 for updating the digital map at least partially based on the difference value. Also described are a corresponding driver assistance system and a computer program.
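Steps S3 through S6 reduce to comparing the expected (map-derived) traffic density with the measured one and judging plausibility from the difference value. A minimal sketch; the relative threshold is an illustrative assumption, as the abstract does not specify how the difference value is evaluated:

```python
def plausibility_check(setpoint_density, actual_density, threshold=0.3):
    """S5: ascertain the difference value between actual and setpoint
    traffic density; S6: derive a plausibility verdict from it."""
    difference = abs(actual_density - setpoint_density)
    plausible = difference <= threshold * max(setpoint_density, 1e-9)
    return difference, plausible

# Expected 20 vehicles in the map segment, 22 actually observed
diff, ok = plausibility_check(setpoint_density=20.0, actual_density=22.0)
print(diff, ok)  # 2.0 True
```

A large difference value could instead trigger step S7, updating the digital map rather than doubting the localization.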

Information processing apparatus and information processing method

There is provided an information processing apparatus including a controller that, when an autonomous mobile object estimates a self-position, determines which of a first estimation method using a result of sensing by a first sensor unit configured to sense internal world information in relation to the autonomous mobile object and a second estimation method using a result of sensing by a second sensor unit configured to sense external world information in relation to the autonomous mobile object is used by the autonomous mobile object based on whether a state of the autonomous mobile object is a stopped state.
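The selection logic can be sketched as a single branch on the stopped state. The abstract does not state which estimation method is chosen in which state, so the assignment below (external-world sensing while stopped, internal-world sensing while moving) is an illustrative assumption:

```python
def select_estimation_method(is_stopped):
    """Choose between the first estimation method (internal-world
    sensing, e.g. odometry/IMU) and the second (external-world
    sensing, e.g. camera/LiDAR) based on the stopped state."""
    # Assumption: while stopped, drift-free external sensing is usable;
    # while moving, internal sensing tracks the motion continuously.
    return "external" if is_stopped else "internal"

print(select_estimation_method(True), select_estimation_method(False))
```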

Vehicular control system with handover procedure for driver of controlled vehicle
11927954 · 2024-03-12

A vehicular control system includes a forward-viewing camera, a forward-sensing sensor and an in-cabin-sensing sensor. With the system controlling driving of the vehicle, the system determines a triggering event that triggers handing over driving of the vehicle to a driver of the vehicle before the vehicle encounters an event point associated with the triggering event. The vehicular control system (i) determines a total action time available before the vehicle encounters the event point, (ii) estimates a driver takeover time for the driver to take over control of the vehicle and (iii) estimates a handling time for the driver to control the vehicle to avoid encountering the event point. Responsive to the vehicular control system determining that the estimated driver takeover time is less than the difference between the determined total action time and the estimated handling time, control of the vehicle is handed over to the driver of the vehicle.
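The handover criterion above is a direct timing comparison, sketched here with hypothetical values in seconds:

```python
def should_hand_over(total_action_time, takeover_time, handling_time):
    """Hand control to the driver only if the estimated driver takeover
    time is less than the difference between the total action time and
    the estimated handling time."""
    return takeover_time < total_action_time - handling_time

# 10 s to the event point, driver needs 3 s to take over, 5 s to handle it
print(should_hand_over(10.0, 3.0, 5.0))  # True: 3.0 < 10.0 - 5.0
# With a slower takeover estimate the handover is not performed
print(should_hand_over(10.0, 6.0, 5.0))  # False: 6.0 >= 5.0
```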

Moving robot and traveling method thereof in corner areas
11915475 · 2024-02-27

The present disclosure relates to a moving robot that generates image information by accumulating results of sensing a corner area using a 3D camera sensor, and detects an obstacle by extracting an area for identifying the obstacle from the image information, and a traveling method thereof in corner areas.

Dock assembly for autonomous modular sweeper robot
11903554 · 2024-02-20

A dock assembly is provided. The dock assembly is configured for docking with a robot. An alignment platform of said dock assembly is configured to receive a sweeper module from the robot when the robot is docked and said sweeper module disengages from the robot. The alignment platform has a plurality of cones positioned on a top side of the alignment platform. The plurality of cones are configured to engage a plurality of holes positioned on an underside of the sweeper module when the sweeper module becomes disengaged from the robot. The plurality of cones enable self-alignment of the alignment platform to the sweeper module as the plurality of cones engage the plurality of holes. The alignment platform has a plurality of support pads positioned on a bottom side of the alignment platform. The support pads are configured to rest on a plurality of bearings that permit lateral movement of the alignment platform when the plurality of cones engage the plurality of holes and the alignment platform self-aligns to the sweeper module.