Patent classifications
G05D101/15
Robotic bees and mantis and insect-like robots with embodied artificial intelligence
This invention relates to insect-like robots empowered by generative artificial intelligence (Gen-AI), specifically designed to address critical challenges in agriculture and environmental monitoring. The robotic bee, mantis, and dragonfly can autonomously perform essential tasks such as crop pollination, pest control, and detailed farmland inspection. These robots feature lifelike designs with components such as heads with integrated high-resolution cameras, antennae for communication, specialized mouthparts, thoraxes housing CPUs and actuators, and wings with thin-film photovoltaic solar materials. The AI models function as the robots' brains, processing data captured by various sensors to provide real-time guidance and control commands. Leveraging advanced technologies and sustainable power sources, these insect-like robots enhance productivity, sustainability, and efficiency in agricultural practices, contributing to a more secure and eco-friendly future.
Methods and systems for jobsite creation
A technique is directed to methods and systems for creating digital representations of jobsites. The jobsite creation system can collect machine data, such as location data, from machines in a worksite and determine each machine's role in the worksite, such as loading, hauling, or drilling. The machines are grouped together and organized based on which machines operate in a geographic region together. The jobsite creation system generates a digital representation of a jobsite with a site boundary based on the machine groupings. A user can view the jobsite on a user interface to monitor the operation of the machines and track productivity and utilization data for the machines in the jobsite.
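The grouping-and-boundary step described above can be sketched as follows. This is a minimal illustration, not the patented method: it assumes machines report planar (x, y) positions, uses a greedy proximity grouping with a hypothetical `radius` parameter, and derives each site boundary as a padded axis-aligned bounding box.

```python
from math import hypot

def group_machines(machines, radius=500.0):
    """Greedily group machines whose positions lie within `radius` meters
    of any member of an existing group (illustrative threshold)."""
    groups = []
    for mid, x, y in machines:
        for g in groups:
            if any(hypot(x - gx, y - gy) <= radius for _, gx, gy in g):
                g.append((mid, x, y))
                break
        else:
            groups.append([(mid, x, y)])
    return groups

def site_boundary(group, margin=50.0):
    """Axis-aligned bounding box around a machine group, padded by `margin`."""
    xs = [x for _, x, _ in group]
    ys = [y for _, _, y in group]
    return (min(xs) - margin, min(ys) - margin, max(xs) + margin, max(ys) + margin)

# Two machines co-operating near the origin, one machine 5 km away:
machines = [("hauler-1", 0.0, 0.0), ("loader-2", 100.0, 50.0), ("drill-9", 5000.0, 5000.0)]
groups = group_machines(machines)  # two groups -> two candidate jobsites
```

In this sketch, `site_boundary(groups[0])` yields `(-50.0, -50.0, 150.0, 100.0)`; a production system would more likely fit a convex hull or geofence polygon than a bounding box.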
Unmanned platform with bionic visual multi-source information and intelligent perception
Disclosed is an unmanned platform with bionic visual multi-source information and intelligent perception. The unmanned platform is equipped with a bionic polarization vision/inertia/laser radar combined navigation module, a deep learning object detection module and an autonomous obstacle avoidance module; the bionic polarization vision/inertia/laser radar combined navigation module is configured to position and orient the unmanned platform in real time; the deep learning object detection module is configured to sense the environment around the unmanned platform according to RGB images of the surrounding environment collected by the bionic polarization vision/inertia/laser radar combined navigation module; and the autonomous obstacle avoidance module determines whether there are any obstacles around the unmanned platform during running according to the objects identified by the object detection module, and performs autonomous obstacle avoidance in combination with the carrier navigation and positioning information. Concealment, autonomous navigation, object detection and autonomous obstacle avoidance capabilities of the unmanned platform are thus improved.
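The obstacle-avoidance decision described above, combining detected objects with the platform's navigation pose, can be sketched as below. This is a hypothetical simplification of the module, not the disclosed implementation: the `safe_range` and `fov_deg` thresholds are illustrative, the pose is assumed to come from the combined navigation module and obstacle positions from the object detector.

```python
from math import atan2, degrees, hypot

def avoidance_command(pose, obstacles, safe_range=3.0, fov_deg=60.0):
    """Return a steering command given the platform pose (x, y, heading_deg)
    and detected obstacle positions [(x, y), ...].

    An obstacle triggers avoidance only if it is within `safe_range` meters
    AND within the forward field of view of width `fov_deg` degrees."""
    x, y, heading = pose
    for ox, oy in obstacles:
        if hypot(ox - x, oy - y) > safe_range:
            continue  # too far away to matter
        # Bearing of the obstacle relative to the platform heading,
        # normalized to [-180, 180) degrees.
        bearing = degrees(atan2(oy - y, ox - x)) - heading
        bearing = (bearing + 180.0) % 360.0 - 180.0
        if abs(bearing) <= fov_deg / 2:
            # Steer away from the side the obstacle is on.
            return "turn_right" if bearing > 0 else "turn_left"
    return "forward"
```

For example, with pose `(0, 0, 0)` an obstacle at `(2, 0.5)` lies about 2.1 m ahead and roughly 14 degrees to the left of the x-axis heading, so the sketch returns `"turn_right"`; a distant obstacle at `(10, 0)` leaves the command `"forward"`.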