G05D1/0257

SYSTEM AND METHOD FOR CONTROLLING CROP UNLOADING TUBE POSITION OF AN AGRICULTURAL HARVESTER
20230027697 · 2023-01-26

An agricultural harvester includes one or more actuators configured to move a crop unloading tube of the harvester relative to a frame of the harvester. Additionally, the agricultural harvester includes a sensor configured to capture data indicative of a presence of a crop receiving vehicle within a crop unloading zone of the agricultural harvester. Moreover, the agricultural harvester includes a computing system communicatively coupled to the sensor. As such, the computing system is configured to determine when the crop receiving vehicle is present within the crop unloading zone based on the data captured by the sensor. In addition, when it is determined that the crop receiving vehicle is present within the crop unloading zone, the computing system is configured to control an operation of the one or more actuators such that the crop unloading tube is moved relative to the frame from a current position to a predetermined crop unloading position.
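The detect-then-move behavior described in the abstract can be illustrated with a minimal control-loop sketch. All names and values (the unloading angle, the step size) are illustrative assumptions, not details from the patent:

```python
# Hypothetical sketch: when the sensor reports a receiving vehicle inside
# the unloading zone, the actuators are commanded to move the unloading
# tube from its current position toward a predetermined unloading position.

UNLOAD_POSITION_DEG = 90.0  # assumed predetermined crop unloading position

def control_unloading_tube(vehicle_in_zone: bool, current_deg: float,
                           step_deg: float = 5.0) -> float:
    """Return the next tube angle commanded to the actuators."""
    if not vehicle_in_zone:
        return current_deg  # hold position until a vehicle is detected
    # Move toward the predetermined unloading position in bounded steps.
    error = UNLOAD_POSITION_DEG - current_deg
    if abs(error) <= step_deg:
        return UNLOAD_POSITION_DEG
    return current_deg + step_deg * (1 if error > 0 else -1)
```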

DATA FUSION FOR ENVIRONMENTAL MODEL GENERATION

A method for fusion of radar and visual information, the method comprises: obtaining visual information and radar information about a three dimensional (3D) space located within a field of view of a camera that acquired the visual information and within a field of view of a radar that acquired the radar information; finding, based on the visual information, estimated visual-detection-based (VDB) objects and estimated VDB locations of the estimated VDB objects within the 3D space; wherein the estimated VDB locations exhibit a distance ambiguity; determining hybrid-detection-based (HDB) objects and HDB locations of the HDB objects, based on (i) the radar information, (ii) the estimated VDB objects, and (iii) the estimated VDB locations of the VDB objects.
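One simple way to picture the VDB-to-HDB step is bearing-based association: camera detections give a reliable bearing but an ambiguous distance, while radar returns give an accurate range. The sketch below (not the patented algorithm; data layouts and the matching threshold are assumptions) fuses each visual detection with the nearest-bearing radar return:

```python
import math

# Each visual detection keeps the camera's bearing; the matched radar
# return resolves the distance ambiguity, yielding a hybrid (HDB) object.

def fuse(vdb_objects, radar_returns, max_bearing_diff=math.radians(3)):
    """vdb_objects: list of (label, bearing_rad); radar_returns: list of
    (bearing_rad, range_m). Returns hybrid (label, bearing, range) tuples."""
    hybrid = []
    for label, bearing in vdb_objects:
        best = min(radar_returns, key=lambda r: abs(r[0] - bearing),
                   default=None)
        if best is not None and abs(best[0] - bearing) <= max_bearing_diff:
            hybrid.append((label, bearing, best[1]))  # radar resolves range
    return hybrid
```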

Discovering and plotting the boundary of an enclosure

Provided is a process that includes: obtaining a first version of a map of a workspace; selecting a first undiscovered area of the workspace; in response to selecting the first undiscovered area, causing a robot to move to a position and orientation to sense data in at least part of the first undiscovered area; and obtaining an updated version of the map mapping a larger area of the workspace than the first version.
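The "select an undiscovered area" step resembles classic frontier exploration. A minimal sketch, assuming an occupancy-grid encoding (0 free, 1 occupied, -1 undiscovered) that is not specified in the abstract:

```python
# Pick a free cell adjacent to an undiscovered cell; sensing from there
# enlarges the mapped area, producing the updated map version.

def select_undiscovered_frontier(grid):
    """Return (row, col) of a free cell bordering an undiscovered cell."""
    rows, cols = len(grid), len(grid[0])
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != 0:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == -1:
                    return (r, c)  # frontier: free next to unknown
    return None  # map complete; no undiscovered area remains
```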

Rear-facing perception system for vehicles
11704909 · 2023-07-18

Devices, systems and methods for operating a rear-facing perception system for vehicles are described. An exemplary rear-facing perception system contains two corner units and a center unit, with each of the two corner units and the center unit including a camera module and a dual-band transceiver. A method for operating the rear-facing perception system includes pairing with a control unit by communicating, using the dual-band transceiver, over at least a first frequency band, transmitting a first trigger signal to the two corner units over a second frequency band non-overlapping with the first frequency band, and switching to an active mode. In an example, the first trigger signal causes the two corner units to switch to the active mode, which includes orienting the camera modules on the center unit and the two corner units to provide an unobstructed view of an area around a rear of the vehicle.
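The pairing-then-trigger sequence can be sketched as a small state machine. Band values and the unit classes below are illustrative assumptions; the abstract only requires that the two bands be non-overlapping:

```python
# Hedged sketch: the center unit pairs with the control unit on one band,
# wakes the corner units with a trigger on a second, non-overlapping band,
# and all three units switch to the active mode.

class Unit:
    def __init__(self, name):
        self.name = name
        self.mode = "standby"

    def receive_trigger(self):
        self.mode = "active"  # orient camera module, begin capture

def activate_rear_perception(center, corners,
                             pairing_band_ghz=2.4, trigger_band_ghz=5.8):
    assert pairing_band_ghz != trigger_band_ghz  # non-overlapping bands
    # Step 1: pair with the control unit over the first band (stubbed here).
    paired = True
    # Step 2: transmit the trigger to the corner units over the second band.
    for corner in corners:
        corner.receive_trigger()
    # Step 3: the center unit itself switches to the active mode.
    center.mode = "active"
    return paired
```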

SYSTEMS AND METHODS FOR OBSTACLE DETECTION FOR A POWER MACHINE
20230017850 · 2023-01-19

A retrofit kit for a power machine can include a detection module configured to be removably secured to the power machine to detect objects around the power machine. A control module can be configured to receive detected object data from the detection module and control a display module based on the detected object data to provide one or more indicators of an object detected by the detection module.

Continuous convolution and fusion in neural networks

Systems and methods are provided for machine-learned models including convolutional neural networks that generate predictions using continuous convolution techniques. For example, the systems and methods of the present disclosure can be included in or otherwise leveraged by an autonomous vehicle. In one example, a computing system can perform, with a machine-learned convolutional neural network, one or more convolutions over input data using a continuous filter relative to a support domain associated with the input data, and receive a prediction from the machine-learned convolutional neural network. A machine-learned convolutional neural network in some examples includes at least one continuous convolution layer configured to perform convolutions over input data with a parametric continuous kernel.
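The core idea of a parametric continuous kernel is that the kernel weight is a learned function of the continuous offset between points, so irregular point sets (e.g. lidar) can be convolved directly. A minimal NumPy sketch, with shapes and a tiny two-layer MLP as assumptions:

```python
import numpy as np

# Instead of a fixed discrete kernel, a small MLP g(.) maps the continuous
# offset between an output point and each support point to a kernel weight.

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 16))   # 2-D offset -> hidden
W2 = rng.normal(size=(16, 1))   # hidden -> scalar kernel weight

def kernel(offsets):
    """Parametric continuous kernel g(x_i - y_j)."""
    return np.tanh(offsets @ W1) @ W2          # (N, M, 1)

def continuous_conv(out_pts, support_pts, support_feats):
    """h_i = sum_j g(x_i - y_j) * f_j over the support domain."""
    offsets = out_pts[:, None, :] - support_pts[None, :, :]   # (N, M, 2)
    weights = kernel(offsets)                                 # (N, M, 1)
    return (weights * support_feats[None, :, :]).sum(axis=1)  # (N, F)
```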

Online agent predictions using semantic maps

A method for controlling a vehicle based on a prediction from a semantic map is presented. The method includes receiving a snapshot of an environment from one or more sensors. The method also includes generating the semantic map based on the snapshot and predicting an action of a dynamic object in the snapshot based on one or more surrounding objects. The method still further includes controlling an action of the vehicle based on the predicted action.
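A toy illustration of the predict-then-control flow: the semantic snapshot lists a dynamic agent and its surrounding objects, a rule predicts the agent's action, and the ego vehicle's action follows. The labels and rules are illustrative placeholders, not the patented model:

```python
# Predict a dynamic object's action from surrounding semantic objects,
# then choose the ego vehicle's action based on that prediction.

def predict_action(agent, surroundings):
    if agent == "pedestrian" and "crosswalk" in surroundings:
        return "cross"
    if agent == "vehicle" and "red_light" in surroundings:
        return "stop"
    return "continue"

def plan_ego(predicted_action):
    return "yield" if predicted_action in ("cross", "stop") else "proceed"
```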

Radar based three dimensional point cloud for autonomous vehicles
11698454 · 2023-07-11

Example embodiments described herein involve determining three dimensional data representative of an environment for an autonomous vehicle using radar. An example embodiment involves receiving radar reflection signals at a radar unit coupled to a vehicle and determining an azimuth angle and a distance for surfaces in the environment causing the radar reflection signals. The embodiment further involves determining an elevation angle for the surfaces causing the radar reflection signals based on phase information of the radar reflection signals and controlling the vehicle based at least in part on the azimuth angle, the distance, and the elevation angle for the surfaces causing the radar reflection signals. In some instances, the radar unit is configured to receive radar reflection signals using a staggered linear array with one or more radiating elements offset in the array.
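Deriving elevation from phase is standard interferometry: for two receive elements with a vertical baseline d, the phase difference is Δφ = 2πd·sin(θ)/λ, which can be solved for θ. The sketch below uses illustrative values (77 GHz automotive radar, half-wavelength baseline) that are assumptions, not details from the patent:

```python
import math

C = 3e8  # speed of light, m/s

def elevation_from_phase(phase_diff_rad, freq_hz=77e9, baseline_m=None):
    """Elevation angle (rad) from the inter-element phase difference."""
    wavelength = C / freq_hz
    if baseline_m is None:
        baseline_m = wavelength / 2  # assumed vertical element offset
    # delta_phi = 2*pi*d*sin(theta)/lambda  ->  solve for theta
    s = phase_diff_rad * wavelength / (2 * math.pi * baseline_m)
    return math.asin(max(-1.0, min(1.0, s)))  # clamp against noise
```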

Foothold position control system and method for biped robot

A foothold position control system and method for a biped robot are provided. 1) A feasible collision-free path is planned by using a path planning algorithm; 2) an available foothold area of a swing foot is determined according to the step-length constraints, movement capabilities, foot sizes, and center offsets of the biped robot; and 3) fuzzy processing is performed to determine a specific foothold position of the biped robot. The system and method realize the selection of suitable foothold positions on both sides of a path when a biped robot executes specific walking actions after finishing path planning. They have the advantages of being simple and easy to implement, imposing a low computational load, running quickly, exploiting the extreme movement capabilities of different biped robots, and enabling more flexible movement of the robots.
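Steps 2 and 3 can be sketched with simple fuzzy membership functions: candidate footholds inside the reachable area are scored on distance to the planned path and on step length, and the best-scoring candidate is chosen. The membership shapes and the preferred 0.3 m step are assumptions for illustration:

```python
def tri(x, lo, peak, hi):
    """Triangular fuzzy membership function on [lo, hi] peaking at peak."""
    if x <= lo or x >= hi:
        return 0.0
    return (x - lo) / (peak - lo) if x < peak else (hi - x) / (hi - peak)

def choose_foothold(candidates, path_dist, step_len):
    """candidates: ids; path_dist/step_len: dicts id -> value in meters."""
    def score(c):
        near_path = tri(path_dist[c], -0.01, 0.0, 0.4)  # prefer on-path
        good_step = tri(step_len[c], 0.1, 0.3, 0.5)     # prefer ~0.3 m step
        return min(near_path, good_step)                # fuzzy AND (min)
    return max(candidates, key=score)
```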

SYSTEM AND METHOD FOR COLLABORATIVE SENSOR CALIBRATION
20230213939 · 2023-07-06

The present teaching relates to a method, system, medium, and implementations for sensor calibration. An ego vehicle determines whether a sensor deployed on the ego vehicle to facilitate its autonomous driving needs to be calibrated and, if so, sends a request for assistance in collaborative calibration of the sensor, together with a first position of the ego vehicle or a first configuration of the sensor with respect to the ego vehicle. When a response to the request is received, an assisting vehicle is directed to travel near the ego vehicle to facilitate the calibration of the sensor by collaborating with the moving ego vehicle. The ego vehicle coordinates with the assisting vehicle so that the sensor can acquire information about a target present on the assisting vehicle for the collaborative calibration of the sensor.
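One simplified piece of this collaboration can be sketched as follows: the ego vehicle compares where its sensor reports the target on the assisting vehicle with where the target actually is (known from the assisting vehicle's reported pose) and estimates a translation offset as the mean residual. A real calibration would solve for full extrinsics; the functions and tolerance below are illustrative assumptions:

```python
# Estimate the sensor's 2-D translation offset from repeated observations
# of the calibration target, then decide whether calibration is needed.

def estimate_sensor_offset(observed_xy, true_xy):
    """Average residual between sensed and true target positions."""
    n = len(observed_xy)
    dx = sum(t[0] - o[0] for o, t in zip(observed_xy, true_xy)) / n
    dy = sum(t[1] - o[1] for o, t in zip(observed_xy, true_xy)) / n
    return (dx, dy)

def needs_calibration(offset, tol=0.05):
    """Flag the sensor for calibration if the offset exceeds tolerance."""
    return abs(offset[0]) > tol or abs(offset[1]) > tol
```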