G05D1/0251

Mobile robots to generate occupancy maps

An example control system includes a memory and at least one processor to obtain image data from a given region and perform image analysis on the image data to detect a set of objects in the given region. For each object of the set, the example control system may classify the object as having one of multiple predefined classifications of object permanency, including (i) a static and fixed classification, (ii) a static and unfixed classification, and (iii) a dynamic classification. The control system may generate at least a first layer of an occupancy map for the given region that depicts each detected object of the static and fixed classification and excludes each detected object that is either of the static and unfixed classification or of the dynamic classification.
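
As a minimal sketch of the layering idea described above (the Permanency labels and the DetectedObject record are illustrative assumptions, not the patent's actual data model), the first map layer might keep only static-and-fixed detections:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Permanency(Enum):
    STATIC_FIXED = auto()    # e.g. walls, built-in shelving
    STATIC_UNFIXED = auto()  # e.g. a chair: static now, but movable
    DYNAMIC = auto()         # e.g. people, pets

@dataclass
class DetectedObject:
    label: str
    cell: tuple              # (row, col) occupancy-grid cell
    permanency: Permanency

def first_layer(objects, shape):
    """Occupancy grid containing only static-and-fixed objects;
    unfixed and dynamic detections are deliberately excluded."""
    grid = [[0] * shape[1] for _ in range(shape[0])]
    for obj in objects:
        if obj.permanency is Permanency.STATIC_FIXED:
            r, c = obj.cell
            grid[r][c] = 1
    return grid
```

A chair or a person detected in the region would simply not appear in this layer, which is the point of separating map layers by permanency.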

Sensor arrangement for an agricultural vehicle
11703880 · 2023-07-18

A sensor arrangement for an agricultural vehicle includes a first electro-optical sensor including a first field of view having an optical axis, and a second electro-optical sensor including a second field of view having an optical axis. The first and second sensors are spaced apart from one another and oriented such that the optical axes of the two sensors intersect at a distance from the two sensors.
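
The geometry of the intersecting optical axes can be sketched as follows, assuming a symmetric arrangement in which each sensor is toed in by the same angle toward the midline (the function name and symmetric-toe-in assumption are illustrative, not from the patent):

```python
import math

def convergence_distance(baseline_m, toe_in_deg):
    """Distance from the sensor baseline at which two inward-angled
    optical axes intersect. Sensors sit baseline_m apart; each axis is
    rotated toe_in_deg toward the midline, so each axis covers half the
    baseline over the convergence distance."""
    theta = math.radians(toe_in_deg)
    return (baseline_m / 2.0) / math.tan(theta)
```

For example, sensors 1 m apart with 45° of toe-in each would converge 0.5 m in front of the baseline; shallower toe-in angles push the intersection farther out.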

Electronic apparatus and method for assisting with driving of vehicle

An electronic apparatus and method for assisting with driving of a vehicle are provided. The electronic apparatus includes: a processor configured to execute one or more instructions stored in a memory, to: obtain a surrounding image of the vehicle via at least one sensor, recognize an object from the obtained surrounding image, obtain three-dimensional (3D) coordinate information for the object by using the at least one sensor, determine a number of planar regions constituting the object, based on the 3D coordinate information corresponding to the object, determine whether the object is a real object, based on the number of planar regions constituting the object, and control a driving operation of the vehicle based on a result of the determining whether the object is the real object.
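
The real-versus-depicted decision described above can be reduced to a one-line heuristic (the threshold value is an illustrative assumption): a printed image of a car on a flat billboard presents a single planar region, while a real car exposes several faces such as the hood, windshield, and doors.

```python
def is_real_object(num_planar_regions, min_planes=2):
    """Judge an object real if its 3D points form at least min_planes
    distinct planar regions; a flat depiction collapses to one plane.
    min_planes is an assumed threshold for illustration."""
    return num_planar_regions >= min_planes
```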

Caster device, robot having the same, and method for driving robot

A caster device includes a caster wheel configured to rotate around a horizontal rotation axis, and a case configured to expose a lower surface of the caster wheel, cover the caster wheel, and have an inclined surface extending from a top of the horizontal rotation axis toward a bottom of the horizontal rotation axis.

LEGGED ROBOT MOTION CONTROL METHOD, APPARATUS, AND DEVICE, AND STORAGE MEDIUM

A legged robot motion control method, apparatus, and device, and a storage medium. The method includes: acquiring center of mass state data corresponding to a spatial path starting point and a spatial path ending point of a motion path; determining a candidate foothold of each foot in the motion path based on the spatial path starting point and the spatial path ending point; determining a variation relationship between a center of mass position variation coefficient and a foot contact force based on the center of mass state data; screening out, under restrictions of a constraint set, a target center of mass position variation coefficient and a target foothold that satisfy the variation relationship; determining a target motion control parameter according to the target center of mass position variation coefficient and the target foothold; and controlling a legged robot based on the target motion control parameter to move according to the motion path.

Systems and methods for multi-camera modeling with neural camera networks

Systems and methods for self-supervised depth estimation using image frames captured from a camera mounted on a vehicle comprise: receiving a first image from the camera mounted at a first location on the vehicle; receiving a second image from the camera mounted at a second location on the vehicle; predicting a depth map for the first image; warping the first image to a perspective of the camera mounted at the second location on the vehicle to arrive at a warped first image; projecting the warped first image onto the second image; determining a loss based on the projection; and updating the predicted depth values for the first image.
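
A simplified sketch of the warp-and-compare step above, assuming rectified, horizontally displaced cameras so that the general pose warp reduces to a per-pixel horizontal disparity of focal * baseline / depth (the function names and the rectified-stereo simplification are assumptions, not the full method):

```python
import numpy as np

def warp_horizontal(img, depth, focal, baseline):
    """Resample img into the second camera's view: each output pixel
    reads from the source column shifted by the predicted disparity.
    Pixels whose source falls outside the image are left at zero."""
    h, w = img.shape
    out = np.zeros_like(img)
    disparity = focal * baseline / depth
    for r in range(h):
        for c in range(w):
            src = c + int(round(disparity[r, c]))
            if 0 <= src < w:
                out[r, c] = img[r, src]
    return out

def photometric_loss(warped, target):
    """Mean absolute photometric error between the warped first image
    and the second image; minimizing this drives the depth update."""
    return float(np.mean(np.abs(warped - target)))
```

In training, the loss would be backpropagated through a differentiable version of this warp to update the predicted depth values; the loop above is only for readability.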

Continuous convolution and fusion in neural networks

Systems and methods are provided for machine-learned models including convolutional neural networks that generate predictions using continuous convolution techniques. For example, the systems and methods of the present disclosure can be included in or otherwise leveraged by an autonomous vehicle. In one example, a computing system can perform, with a machine-learned convolutional neural network, one or more convolutions over input data using a continuous filter relative to a support domain associated with the input data, and receive a prediction from the machine-learned convolutional neural network. A machine-learned convolutional neural network in some examples includes at least one continuous convolution layer configured to perform convolutions over input data with a parametric continuous kernel.
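
The parametric continuous kernel can be sketched as a small network that maps a relative offset in the support domain to a filter weight, so the "convolution" is a weighted sum over irregularly placed support points rather than a fixed grid stencil (the tiny MLP and single-channel setup are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # output weight

def kernel(rel):
    """Parametric continuous kernel: an MLP evaluated at any relative
    offset rel (N, 3) in the support domain, returning (N,) weights."""
    h = np.tanh(rel @ W1 + b1)
    return (h @ W2 + b2)[..., 0]

def continuous_conv(points, feats, query):
    """Output feature at `query`: sum over support points x_i of
    kernel(x_i - query) * f_i (single input/output channel)."""
    rel = points - query
    return float(np.sum(kernel(rel) * feats))
```

Because the kernel is a function rather than a lookup table, support points need not lie on a grid, which is what lets such layers operate directly on point clouds.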

Moving robot and control method thereof
11547261 · 2023-01-10

Disclosed are a moving robot and a control method thereof. The moving robot performs cleaning by moving based on a map and, using a global localization function, can determine its current position on the map no matter where it is placed. Even when the position of the moving robot is arbitrarily changed, the robot can recognize its position again and move to the exact designated area, so that it performs the designated cleaning and moves rapidly and accurately, thereby cleaning efficiently.

Method and system for integrated global and distributed learning in autonomous driving vehicles

The present teaching relates to a system, method, and medium for in-situ perception in an autonomous driving vehicle. A plurality of types of sensor data are received, acquired by a plurality of types of sensors deployed on the vehicle to provide information about the vehicle's surroundings. Based on at least one model, one or more surrounding items are tracked from a first of the plurality of types of sensor data acquired by a first type of sensor. At least some of the tracked items are automatically labeled via cross validation and are used to locally adapt, on-the-fly, the at least one model. Model update information, derived based on the labeled items, is received from a model update center, and the at least one model is updated using the model update information.

METHODS AND SYSTEMS FOR PROVIDING DRONE-ASSISTED MEDIA CAPTURE

A method may include receiving a request for drone-assisted media capture from a vehicle located at a first location, the request specifying one or more user preferences, selecting an unmanned aerial vehicle that is able to perform the drone-assisted media capture based on the user preferences, causing the selected unmanned aerial vehicle to travel to the first location, and causing the selected unmanned aerial vehicle to capture media at the first location based on the user preferences.
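
The selection step can be sketched as a simple capability match against the request's preferences (the field names on the request and drone records are illustrative assumptions):

```python
def select_drone(drones, preferences):
    """Return the first available drone whose capabilities cover every
    capability the request asks for, or None if no drone qualifies."""
    needed = set(preferences.get("capabilities", []))
    for drone in drones:
        if drone["available"] and needed <= set(drone["capabilities"]):
            return drone
    return None
```

A fuller implementation would also rank qualifying drones, for example by distance to the first location or remaining battery, before dispatching one.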