Patent classification: G05D1/249
Method, apparatus and computer storage medium for training trajectory planning model
A method, an apparatus, and a computer storage medium for training a trajectory planning model are provided. The method may include: obtaining, via at least one sensor of a vehicle, an image of the physical environment in which the vehicle is located, the image including multiple objects surrounding the vehicle; obtaining, from a trajectory planning model based on the image, a feature map indicating multiple initial trajectory points of the vehicle in the image; analyzing the image to determine a first area associated with a road object among the multiple objects and a second area associated with a non-road object among the multiple objects; determining planned trajectory points based on the positional relationships of the multiple initial trajectory points with respect to the first area and the second area; and training the trajectory planning model based on the planned trajectory points and the actual trajectory points of the vehicle.
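The training signal described in this abstract can be sketched as follows. This is a minimal illustration, not the patented method: it assumes trajectory points and road cells are integer pixel coordinates, and that "determining a planned trajectory point" amounts to snapping off-road points onto the nearest road cell before comparing against the ground-truth trajectory. All function names are hypothetical.

```python
def snap_to_road(point, road_cells):
    """Return the road cell nearest to `point` (squared Euclidean distance)."""
    return min(road_cells,
               key=lambda c: (c[0] - point[0]) ** 2 + (c[1] - point[1]) ** 2)

def planned_points(initial_points, road_cells):
    """Keep initial points already inside the road area; snap the rest onto it."""
    road = set(road_cells)
    return [p if p in road else snap_to_road(p, road_cells)
            for p in initial_points]

def trajectory_loss(planned, actual):
    """Mean squared distance between planned and actual trajectory points,
    usable as a supervision signal for the planning model."""
    return sum((px - ax) ** 2 + (py - ay) ** 2
               for (px, py), (ax, ay) in zip(planned, actual)) / len(planned)
```

In this reading, the road/non-road segmentation constrains the model's raw predictions before the loss is computed, so the model is rewarded for proposing points that both lie on the road and match the recorded trajectory.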
Systems and methods for enhancing performance and mapping of robots using modular devices
Systems and methods for enhancing task performance and computer-readable maps produced by robots using modular sensors are disclosed herein. According to at least one non-limiting exemplary embodiment, robots may perform a first set of tasks, wherein coupling one or more modular sensors to the robots may configure a robot to perform a second set of tasks, the second set of tasks including the first set of tasks and at least one additional task.
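The task-set relationship above can be read as a simple superset rule: the second set is the first set plus whatever the coupled modules enable. The sketch below assumes a hypothetical dictionary representation of a modular sensor (the `enables` key is illustrative, not from the patent):

```python
def available_tasks(base_tasks, modules):
    """Tasks a robot can perform: its base task set plus every task
    enabled by the modular sensors currently coupled to it. The result
    is always a superset of the base set."""
    tasks = set(base_tasks)
    for module in modules:
        tasks |= set(module.get("enables", []))
    return tasks
```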
Mobile robots and systems with mobile robots
Improved mobile robots, and systems and methods thereof, described herein can enhance security and monitoring services for grounds and property. Such mobile robots, systems, and methods can also enhance policing as well as customer service and help desk functionality. In some embodiments, they can enhance exploration, such as space exploration.
Applying multiple processing schemes to target objects
A method includes: obtaining, by a treatment system configured to implement a machine learning (ML) algorithm, one or more images of a region of an agricultural environment near the treatment system, wherein the one or more images are captured from a region of the real world where agricultural target objects are expected to be present; determining one or more parameters for use with the ML algorithm, wherein at least one of the parameters is based on one or more ML models related to identification of an agricultural object; determining a real-world target in the one or more images using the ML algorithm, wherein the ML algorithm is at least partly implemented using one or more processors of the treatment system; and applying a treatment to the target by selectively activating a treatment mechanism based on a result of determining the target.
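The detect-then-treat loop in this abstract can be sketched generically. In the sketch below, `detect` stands in for the ML model inference and `activate` for the treatment mechanism; both are injected callables because the patent abstract does not specify them:

```python
def treat_region(images, detect, activate):
    """Run detection on each captured image and selectively trigger the
    treatment mechanism once per detected target. Returns the list of
    targets that were treated, in detection order."""
    treated = []
    for image in images:
        for target in detect(image):   # ML inference on one image
            activate(target)           # selective actuation
            treated.append(target)
    return treated
```

For example, with a toy `detect` that flags items labeled `"weed"`, only those items trigger the actuator; crop items are left untouched.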
System and method for fisheye image processing
A system and method for fisheye image processing can be configured to: receive fisheye image data from at least one fisheye lens camera associated with an autonomous vehicle, the fisheye image data representing at least one fisheye image frame; partition the fisheye image frame into a plurality of image portions representing portions of the fisheye image frame; warp each of the plurality of image portions to map an arc of a camera projected view into a line corresponding to a mapped target view, the mapped target view being generally orthogonal to a line between a camera center and a center of the arc of the camera projected view; combine the plurality of warped image portions to form a combined resulting fisheye image data set representing recovered or distortion-reduced fisheye image data corresponding to the fisheye image frame; generate auto-calibration data representing a correspondence between pixels in the at least one fisheye image frame and corresponding pixels in the combined resulting fisheye image data set; and provide the combined resulting fisheye image data set as an output for other autonomous vehicle subsystems.
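The per-portion "arc to line" warp can be illustrated with a standard radial undistortion. The sketch below assumes an equidistant ("f-theta") fisheye projection model, which is a common choice but is not stated in the abstract; the patent's exact warping of image portions may differ.

```python
import math

def undistort_point(x, y, cx, cy, f):
    """Map a pixel from an equidistant fisheye projection to a rectilinear
    view: a circular arc around the optical axis straightens into a line.
    (cx, cy) is the distortion center and f the focal length in pixels."""
    dx, dy = x - cx, y - cy
    r_d = math.hypot(dx, dy)          # distorted radius
    if r_d == 0:
        return (x, y)                 # the center is a fixed point
    theta = r_d / f                   # ray angle under the f-theta model
    r_u = f * math.tan(theta)         # radius in the pinhole (rectilinear) image
    s = r_u / r_d
    return (cx + dx * s, cy + dy * s)
```

Because `tan(theta) > theta` for angles in (0, pi/2), every off-center pixel moves outward, which is the familiar "stretching" of fisheye edges during rectification; the pixel correspondences produced this way are exactly the kind of auto-calibration data the abstract describes.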
Navigation with a safe longitudinal distance
Systems and methods are provided for navigating a host vehicle. A processing device may be programmed to: receive an image representative of an environment of the host vehicle; determine a planned navigational action for the host vehicle; analyze the image to identify a target vehicle travelling toward the host vehicle; determine a next-state distance between the host vehicle and the target vehicle that would result if the planned navigational action were taken; determine a stopping distance for the host vehicle based on a braking profile, a maximum acceleration capability, and a current speed of the host vehicle; determine a stopping distance for the target vehicle based on a braking profile and a current speed of the target vehicle; and implement the planned navigational action if the determined next-state distance is greater than the sum of the stopping distances for the host vehicle and the target vehicle.
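The safety condition above reduces to elementary kinematics: each vehicle's stopping distance is the distance covered during a reaction interval (during which the host may still accelerate, per its maximum acceleration capability) plus the braking distance v²/2a, and the action is allowed only if the projected gap exceeds the sum. A minimal sketch, with all parameter names hypothetical and SI units assumed:

```python
def stopping_distance(speed, decel, accel=0.0, reaction_time=0.0):
    """Distance to come to a stop: travel during the reaction interval
    (possibly while accelerating at `accel`), then brake at `decel`
    from the resulting peak speed. speed in m/s, decel/accel in m/s^2."""
    v_peak = speed + accel * reaction_time
    reaction_dist = speed * reaction_time + 0.5 * accel * reaction_time ** 2
    return reaction_dist + v_peak ** 2 / (2 * decel)

def action_is_safe(next_state_distance, host, target):
    """Implement the planned action only if the projected next-state gap
    exceeds the sum of both vehicles' stopping distances."""
    return next_state_distance > (stopping_distance(**host)
                                  + stopping_distance(**target))
```

For example, two vehicles closing at 10 m/s each and braking at 5 m/s² each need 10 m apiece to stop, so a projected gap of 25 m is safe while 15 m is not.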
Low-profile robotic platform
Described herein are robotic platforms and associated features that may have applicability in a wide variety of applications and industries, but that may have particular applicability in automotive testing and the testing of vehicles having autonomous or semi-autonomous driving features. Robotic platforms may include a low-profile chassis, one or more rotational elements coupled to one or more drive motors and supported within the chassis, and a control system coupled to and controlling the drive motor(s). Also disclosed are suspension systems that may maintain the chassis of a robotic platform above the ground in use but that allow the chassis to ground out when subject to a pre-determined load, thereby spreading the load across the chassis.
Method for operating a system with two automatically moving floor processing devices as well as system for implementing such a method
A method for operating a system with a first automatically moving floor processing device and a second automatically moving floor processing device, in which the first floor processing device detects environmental features in its environment. The first floor processing device, or a shared computing device allocated to both floor processing devices, generates a first area map based on the detected environmental features. The first floor processing device also detects the second floor processing device, whereupon the position of the second floor processing device is stored within the generated first area map. The second floor processing device receives information about its current position within the first area map and controls a second floor processing activity as soon as the first floor processing device has detected it.
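The coordination protocol in this abstract can be sketched as a shared-map handshake: the first device builds the map, registers the second device's detected position in it, and that registration is what lets the second device localize and begin working. The data layout and function names below are illustrative only:

```python
def build_map(features):
    """First area map: detected environmental features plus a registry of
    device positions, as built by the first device (or a shared computer)."""
    return {"features": list(features), "devices": {}}

def register_device(area_map, device_id, position):
    """Store a detected device's position in the map and hand the position
    back, modeling the second device receiving its current location and
    becoming free to start its floor processing activity."""
    area_map["devices"][device_id] = position
    return position
```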