G05D1/249

AUTO-LOCATING AND POSITIONING RELATIVE TO AN AIRCRAFT
20240104758 · 2024-03-28

Techniques for auto-locating and positioning relative to an aircraft are disclosed. An example method can include a computing system receiving a multi-dimensional representation of a candidate target aircraft upon which a robot is to perform an operation. The computing system can extract a first feature from the multi-dimensional representation associated with the candidate target aircraft. The computing system can compare the first feature with a second feature associated with a target aircraft. The computing system can determine whether the candidate target aircraft is the target aircraft based on the comparison. The computing system can determine a path from the location of the robot to the target aircraft based at least in part on the determination of whether the candidate target aircraft is the target aircraft.
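The identify-then-plan flow described in this abstract can be sketched as follows. This is a minimal illustration, not the disclosed implementation: the feature vectors, the cosine-similarity matching, the 0.95 threshold, and the straight-line waypoint planner are all assumptions chosen for brevity.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def is_target_aircraft(candidate_feature, target_feature, threshold=0.95):
    """Declare a match when the extracted features are sufficiently similar."""
    return cosine_similarity(candidate_feature, target_feature) >= threshold

def plan_path(robot_pos, aircraft_pos, steps=5):
    """Placeholder planner: evenly spaced waypoints on a straight line."""
    return [
        (robot_pos[0] + (aircraft_pos[0] - robot_pos[0]) * t / steps,
         robot_pos[1] + (aircraft_pos[1] - robot_pos[1]) * t / steps)
        for t in range(steps + 1)
    ]

# Only plan a path once the candidate is confirmed as the target.
candidate = [0.9, 0.1, 0.4]
target = [0.88, 0.12, 0.41]
if is_target_aircraft(candidate, target):
    path = plan_path((0.0, 0.0), (10.0, 4.0))
```

In practice the features would come from a learned encoder over the multi-dimensional representation (e.g. a point cloud or depth image), and the planner would account for obstacles around the aircraft.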

Cleaning system and cleaning method

A cleaning system and a cleaning method configured for cleaning tasks on solar panels are provided. The cleaning system includes an operation region, cleaning robots, shuttle robots, and a data processing system. The cleaning method includes a first carrying step, a cleaning step, and a second carrying step.

System, method, and computer-readable medium for an autonomous vehicle to pass a bicycle

An autonomous vehicle configured to autonomously pass a cyclist includes an imaging device and processing circuitry configured to receive information from the imaging device. Additionally, the processing circuitry of the autonomous vehicle is configured to identify a cyclist passing situation based on the information received from the imaging device, and plan a path of the autonomous vehicle based on the cyclist passing situation. The autonomous vehicle also includes a positioning system, and the processing circuitry is further configured to receive information from the positioning system, determine whether the cyclist passing situation is sufficiently identified, and identify the cyclist passing situation based on the information from the imaging device and the positioning system when the cyclist passing situation is not sufficiently identified based on the information received from the imaging device alone.
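The fallback logic in this abstract, using the camera alone when it is confident and fusing in positioning data otherwise, can be sketched as below. The function name, the confidence threshold, and the simple averaging fusion are illustrative assumptions, not details from the disclosure.

```python
def identify_passing_situation(camera_confidence, camera_estimate,
                               positioning_estimate=None, threshold=0.8):
    """Return an estimate of the cyclist passing situation.

    If the imaging device is sufficiently confident (or no positioning data
    is available), trust the camera alone; otherwise fuse both sources.
    """
    if camera_confidence >= threshold or positioning_estimate is None:
        return camera_estimate
    # Not sufficiently identified from imaging alone: combine both sources
    # (naive per-field averaging stands in for a real fusion filter).
    return {k: (camera_estimate[k] + positioning_estimate[k]) / 2
            for k in camera_estimate}

camera = {"lateral_gap_m": 2.0}
positioning = {"lateral_gap_m": 4.0}
fused = identify_passing_situation(0.5, camera, positioning)
```

A production system would replace the averaging with a proper estimator (e.g. a Kalman filter) and feed the result into the path planner.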

Mobile robot having collision avoidance system for crossing a road from a pedestrian pathway

A collision avoidance method and system for a mobile robot crossing a road are disclosed. When a mobile robot approaches a road, it senses road conditions via at least one first sensor, and initiates road crossing if the road conditions are deemed suitable for crossing. As it crosses the road, the mobile robot senses, via at least one second sensor, a change in the road conditions indicating the presence of at least one hazardous moving object. In response to determining that at least one hazardous object is present, the mobile robot initiates a collision avoidance maneuver. A mobile robot configured to avoid collisions while crossing a road includes: at least one first sensor configured to sense road conditions, at least one second sensor configured to sense road conditions, and a processing component configured to carry out one or more collision avoidance maneuvers.
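The wait/cross/avoid behavior described above amounts to a small state machine. The sketch below is an illustrative simplification, assuming boolean sensor summaries and the state names shown; the actual system would operate on richer sensor data.

```python
def crossing_step(road_clear, hazard_detected, state):
    """One decision tick of the road-crossing logic.

    States: 'waiting' (on the pedestrian pathway), 'crossing',
    'avoiding' (collision avoidance maneuver in progress), 'crossed'.
    """
    if state == "waiting":
        # First sensor: initiate crossing only when conditions are suitable.
        return "crossing" if road_clear else "waiting"
    if state == "crossing":
        # Second sensor: a hazardous moving object triggers avoidance.
        return "avoiding" if hazard_detected else "crossing"
    if state == "avoiding":
        # Resume crossing once the hazard has passed.
        return "avoiding" if hazard_detected else "crossing"
    return state
```

Each tick consumes fresh sensor readings, so the robot can abort into an avoidance maneuver at any point mid-crossing.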

Determining drivable free-space for autonomous vehicles

In various examples, sensor data may be received that represents a field of view of a sensor of a vehicle located in a physical environment. The sensor data may be applied to a machine learning model that computes both a set of boundary points that correspond to a boundary dividing drivable free-space from non-drivable space in the physical environment and class labels for boundary points of the set of boundary points that correspond to the boundary. Locations within the physical environment may be determined from the set of boundary points represented by the sensor data, and the vehicle may be controlled through the physical environment within the drivable free-space using the locations and the class labels.
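Once the model has produced per-column boundary points and class labels, a downstream check of the kind described can be sketched as follows. The per-column row indices, class names, and margin values are invented for illustration; the disclosure does not specify them.

```python
# Hypothetical model output: for each image column, the pixel row of the
# boundary between drivable free-space and non-drivable space, plus a
# class label for what forms the boundary there.
boundary_rows = [120, 118, 115, 117, 121]
class_labels  = ["curb", "curb", "vehicle", "curb", "curb"]

def is_drivable(col, row):
    """Image rows grow downward, so free space lies below the boundary row."""
    return row > boundary_rows[col]

def safety_margin_px(col):
    """Keep extra clearance when the boundary is a dynamic obstacle class."""
    return 30 if class_labels[col] == "vehicle" else 10
```

Mapping those pixel locations into physical-world coordinates (via camera intrinsics and extrinsics) then gives the planner the locations it controls the vehicle within.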

GPS location augmentation and outage playthrough

Agricultural machines utilize global positioning systems (GPS) to acquire the location of the machine as well as the location of an event, which may be based upon an operation of the agricultural machine. Because of the possibility of outage and/or inaccuracy of the GPS, a GPS augmentation system can be included with the agricultural machine. The GPS augmentation system can supplement the location determination of the GPS, or can be used in place of the GPS when the GPS is not available. An unmanned vehicle can also be used as part of the augmentation system to provide additional information for the location of the agricultural machine and/or the event.
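A minimal version of "supplement GPS, or stand in for it during an outage" is the dead-reckoning fallback sketched below. The function name and the odometry-based model are assumptions for illustration; the patent's augmentation system may use other sources, including an unmanned vehicle.

```python
import math

def fused_position(gps_fix, last_position, heading_rad, speed_mps, dt_s):
    """Prefer a valid GPS fix; otherwise dead-reckon from heading and speed.

    gps_fix is an (x, y) tuple, or None during an outage.
    """
    if gps_fix is not None:
        return gps_fix
    # GPS unavailable: propagate the last known position using odometry.
    dx = speed_mps * dt_s * math.cos(heading_rad)
    dy = speed_mps * dt_s * math.sin(heading_rad)
    return (last_position[0] + dx, last_position[1] + dy)
```

Repeated dead-reckoning steps accumulate drift, which is why the abstract also describes supplementary sources to re-anchor the estimate.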

Line laser module and autonomous mobile device

Embodiments of the present disclosure provide a line laser module and an autonomous mobile device. The line laser module includes a fixed base, and a camera and a line laser emitter arranged on the fixed base. The line laser emitter is provided at one or more sides of the camera, and configured to emit a laser with a linear projection. The camera is configured to operate in conjunction with the line laser emitter, and to capture an environmental image. An infrared filter is arranged in front of the camera, and configured to allow only infrared light to enter the camera. The autonomous mobile device includes an infrared flashlight. The camera is configured to capture, at different time points, a first environmental image for distance measurement and a second environmental image for object identification.
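Distance measurement with a camera and an offset line-laser emitter is typically done by triangulation. The sketch below assumes a pinhole camera model with the emitter displaced by a known baseline; the focal length and baseline values are illustrative, not taken from the disclosure.

```python
def laser_distance_m(pixel_offset, focal_length_px, baseline_m):
    """Triangulated distance from the pixel displacement of the laser line.

    A nearer surface shifts the imaged laser line further from its
    reference position, so distance falls as pixel_offset grows.
    """
    if pixel_offset <= 0:
        raise ValueError("laser line not detected in the image")
    return focal_length_px * baseline_m / pixel_offset

# Example: 500 px focal length, 5 cm baseline, 50 px observed shift.
d = laser_distance_m(50, 500.0, 0.05)
```

The infrared filter in front of the camera makes the laser line easy to segment in the first (distance-measurement) image, while the infrared flashlight illuminates the scene for the second (object-identification) image.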

Method, apparatus and computer storage medium for training trajectory planning model

A method, an apparatus, and a computer storage medium for training a trajectory planning model are provided. The method may include: obtaining an image of the physical environment in which a vehicle is located via at least one sensor of the vehicle, the image including multiple objects surrounding the vehicle; obtaining, from a trajectory planning model based on the image, a feature map indicating multiple initial trajectory points of the vehicle in the image; segmenting the image to determine a first area associated with a road object among the multiple objects and a second area associated with a non-road object among the multiple objects; determining a planned trajectory point based on the positional relationships of the multiple initial trajectory points with respect to the first area and the second area; and training the trajectory planning model based on the planned trajectory point and the actual trajectory point of the vehicle.
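The training signal described here, matching planned points to actual points while using the road/non-road segmentation as a constraint, can be sketched as a loss function. The squared-error form and the penalty weight are assumptions for illustration; the disclosure does not specify the loss.

```python
def trajectory_loss(planned, actual, in_non_road_flags, penalty=10.0):
    """Toy training loss for a trajectory planning model.

    planned, actual: lists of (x, y) trajectory points.
    in_non_road_flags: 1 where a planned point falls in the second
    (non-road) area, 0 where it lies in the first (road) area.
    """
    # Supervised term: distance between planned and actual trajectory points.
    err = sum((p[0] - a[0]) ** 2 + (p[1] - a[1]) ** 2
              for p, a in zip(planned, actual))
    # Constraint term: penalize planned points in the non-road area.
    err += penalty * sum(in_non_road_flags)
    return err
```

Minimizing such a loss pushes the model both toward the recorded trajectory and away from segmented non-road regions.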