Patent classifications
G06V20/17
Partitioning agricultural fields for annotation
Some implementations herein relate to a graphical user interface (GUI) that facilitates dynamically partitioning agricultural fields into clusters, on an individual-field basis, using agricultural features. A map of a geographic area containing a plurality of agricultural fields may be rendered as part of a GUI. The agricultural fields may be partitioned into a first set of clusters based on a first granularity value and on agricultural features of the individual agricultural fields. The individual agricultural fields may be visually annotated in the GUI to convey the first set of clusters of similar agricultural fields. Upon receipt of a second granularity value different from the first, the agricultural fields may be partitioned into a second set of clusters of similar agricultural fields, and the map of the geographic area may be updated so that individual agricultural fields are visually annotated to convey the second set of clusters.
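As a rough illustration of the partitioning step above, the sketch below clusters fields by their agricultural features, with the granularity value setting the number of clusters. The use of k-means and the example feature columns are assumptions for illustration, not details from the disclosure.

import numpy as np
from sklearn.cluster import KMeans

def cluster_fields(field_features: np.ndarray, granularity: int):
    # A higher granularity value yields more, finer-grained clusters.
    return KMeans(n_clusters=granularity, n_init=10,
                  random_state=0).fit_predict(field_features)

# Each row is one field: e.g., mean NDVI and a crop-type id (illustrative).
features = np.array([[0.61, 1.0], [0.58, 1.0], [0.22, 2.0], [0.25, 2.0]])
first_set = cluster_fields(features, granularity=2)   # first partitioning
second_set = cluster_fields(features, granularity=3)  # after new granularity
print(first_set, second_set)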
Cloud-based framework for processing, analyzing, and visualizing imaging data
Embodiments of the present disclosure provide methods, apparatus, systems, computing devices, computing entities, and/or the like for detecting objects located in an area of interest. In accordance with one embodiment, a method is provided comprising: receiving, via an interface provided through a general instance on a cloud environment, imaging data comprising raw images collected over the area of interest; upon receiving the images: activating a central processing unit (CPU) focused instance on the cloud environment and processing, via the CPU focused instance, the raw images to generate an image map of the area of interest; and after generating the image map: activating a graphics processing unit (GPU) focused instance on the cloud environment and performing object detection, via the GPU focused instance, on a region within the image map by applying one or more object detection algorithms to the region to identify locations of the objects in the region.
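The staged activation described above can be sketched as a simple orchestrator. The use of AWS EC2 via boto3, the instance IDs, the region, and the stub processing helpers below are hypothetical placeholders; the disclosure does not name a specific cloud provider or API.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

def stitch_on_cpu_instance(raw_images):
    # Placeholder: the real system would run photogrammetry remotely.
    return {"tiles": raw_images}

def detect_on_gpu_instance(image_map):
    # Placeholder: the real system would apply detection models remotely.
    return [{"object": "example", "location": (0, 0)}]

def run_pipeline(raw_images):
    # Stage 1: CPU-focused instance builds the image map from raw images.
    ec2.start_instances(InstanceIds=["i-cpu-placeholder"])  # hypothetical id
    image_map = stitch_on_cpu_instance(raw_images)
    ec2.stop_instances(InstanceIds=["i-cpu-placeholder"])

    # Stage 2: GPU-focused instance detects objects in a map region.
    ec2.start_instances(InstanceIds=["i-gpu-placeholder"])  # hypothetical id
    detections = detect_on_gpu_instance(image_map)
    ec2.stop_instances(InstanceIds=["i-gpu-placeholder"])
    return detections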
AERIAL VEHICLES WITH MACHINE VISION
An aerial vehicle is provided. The aerial vehicle can include a plurality of sensors mounted thereon, an avionics system configured to operate at least a portion of the aerial vehicle, and a machine vision controller in operative communication with the avionics system and the plurality of sensors. The machine vision controller is configured to perform a method. The method includes obtaining sensor data from at least one sensor of the plurality of sensors, determining performance data from the avionics system or an additional sensor of the plurality of sensors, processing the sensor data based on the performance data to compensate for movement of the aerial vehicle, identifying at least one geographic indicator based on processing the sensor data, and determining a geographic location of the aerial vehicle based on the at least one geographic indicator.
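A minimal sketch of the compensate-then-localize flow, assuming OpenCV, a grayscale frame, and a small template database of landmarks; the pixels-per-radian shift model and all names below are illustrative assumptions, not the patent's method.

import cv2
import numpy as np

def compensate_motion(frame, yaw_rate_rad_s, dt_s, px_per_rad=800.0):
    # Counter-shift the frame by the motion predicted from avionics data.
    dx = -yaw_rate_rad_s * dt_s * px_per_rad
    M = np.float32([[1, 0, dx], [0, 1, 0]])
    return cv2.warpAffine(frame, M, (frame.shape[1], frame.shape[0]))

def locate(frame, landmarks):
    # Match known geographic indicators; return the best landmark's fix.
    best = None
    for name, (template, lat, lon) in landmarks.items():
        score = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED).max()
        if best is None or score > best[0]:
            best = (score, name, lat, lon)
    return best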
Autonomous Aerial Vehicle Hardware Configuration
An autonomous aerial vehicle introduced herein can include multiple cameras that capture images of the surrounding physical environment for use in motion planning by an autonomous navigation system. In some embodiments, the cameras can be integrated into one or more rotor assemblies that house powered rotors, freeing up space within the body of the aerial vehicle. In an example embodiment, an aerial vehicle includes multiple upward-facing cameras and multiple downward-facing cameras with overlapping fields of view to enable stereoscopic computer vision in a plurality of directions around the aerial vehicle. Similar camera arrangements can also be implemented in fixed-wing aerial vehicles.
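Overlapping fields of view are what make depth recovery from a camera pair possible. The block-matching sketch below, with stand-in frames and illustrative parameters (none taken from the disclosure), shows the basic stereoscopic computation such a camera arrangement enables.

import cv2
import numpy as np

left = np.random.randint(0, 255, (480, 640), dtype=np.uint8)  # stand-in frame
right = np.roll(left, 4, axis=1)                              # fake 4-px disparity

matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right)  # larger values = closer obstacles
print(disparity.max())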
DISASTER INFORMATION PROCESSING APPARATUS, OPERATION METHOD OF DISASTER INFORMATION PROCESSING APPARATUS, OPERATION PROGRAM OF DISASTER INFORMATION PROCESSING APPARATUS, AND DISASTER INFORMATION PROCESSING SYSTEM
Provided are a disaster information processing apparatus, an operation method of a disaster information processing apparatus, an operation program of a disaster information processing apparatus, and a disaster information processing system capable of grasping a damage situation at a disaster site in a short time and without wasted effort. An RW control unit receives a first aerial image obtained by capturing a first imaging range including an area with a first camera mounted on a first drone. A first damage situation analysis unit analyzes a first damage situation of the disaster in the first imaging range based on the first aerial image. A second imaging range determination unit determines, based on a first analysis result, a second imaging range for a second camera mounted on a second drone, the second imaging range being narrower than the first imaging range.
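One way to picture the second-imaging-range determination: bound the most damaged cells of the wide first survey. The damage-score grid, threshold, and coordinate convention below are assumptions for illustration only.

import numpy as np

def second_imaging_range(damage_scores: np.ndarray, threshold: float = 0.7):
    # Bound the most heavily damaged cells of the wide first imaging range.
    rows, cols = np.where(damage_scores >= threshold)
    if rows.size == 0:
        return None  # no severe damage found; keep the wide survey
    return (rows.min(), cols.min(), rows.max(), cols.max())

grid = np.array([[0.1, 0.2, 0.1],
                 [0.2, 0.9, 0.8],
                 [0.1, 0.8, 0.3]])
print(second_imaging_range(grid))  # -> (1, 1, 2, 2): the narrower second range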
METHOD, SYSTEM, AND IMAGE PROCESSING DEVICE FOR CAPTURING AND/OR PROCESSING ELECTROLUMINESCENCE IMAGES, AND AN AERIAL VEHICLE
A method (400) of capturing and processing electroluminescence (EL) images (1910) of a PV array (40) is disclosed herein. In a described embodiment, the method (400) includes: controlling the aerial vehicle (20) to fly along a flight path to capture EL images (1910) of corresponding PV array subsections (512b) of the PV array (40); deriving respective image quality parameters from at least some of the captured EL images; dynamically adjusting a flight speed of the aerial vehicle along the flight path, based on the respective image quality parameters, for capturing the EL images (1910) of the PV array subsections (512b); extracting a plurality of frames (1500) of the PV array subsection (512b) from the EL images (1910); determining a reference frame having the highest image quality of the PV array subsection (512b) from among the extracted frames (2100); performing image alignment of the extracted frames (2100) to the reference frame to generate image-aligned frames (2130); and processing the image-aligned frames (2130) to produce an enhanced image (2140) of the PV array subsection (512b) having a higher resolution than the reference frame. A system, an image processing device, and an aerial vehicle for performing the method are also disclosed.
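A minimal sketch of the reference-selection and alignment stages, assuming same-size grayscale float32 frames and OpenCV. Laplacian variance as the quality measure, translation-only ECC alignment, and plain averaging (standing in for the disclosed resolution-enhancing step) are all assumptions.

import cv2
import numpy as np

def sharpness(frame):
    # Variance of the Laplacian as a simple focus/quality score.
    return cv2.Laplacian(frame, cv2.CV_64F).var()

def enhance(frames):
    ref = max(frames, key=sharpness)            # highest-quality reference
    aligned = []
    for f in frames:
        warp = np.eye(2, 3, dtype=np.float32)
        _, w = cv2.findTransformECC(ref, f, warp, cv2.MOTION_TRANSLATION)
        aligned.append(cv2.warpAffine(f, w, (f.shape[1], f.shape[0])))
    return np.mean(aligned, axis=0)             # fused estimate of the subsection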
Reinforcement learning-based remote control device and method for an unmanned aerial vehicle
A device and method for remotely controlling an unmanned aerial vehicle based on reinforcement learning are disclosed. An embodiment provides a device for remotely controlling an unmanned aerial vehicle based on reinforcement learning, where the device includes a processor and a memory connected to the processor. The memory includes program instructions executable by the processor to determine an inclination direction corresponding to the hand pose of a user, a movement direction of the hand, and an angle in the inclination direction, based on sensing data associated with the pose or movement of the hand acquired by way of at least one sensor, and to determine one of a movement direction, a movement speed, a mode change, a figural trajectory, and a scale of the figural trajectory of the unmanned aerial vehicle according to the determined inclination direction, movement direction, and angle.
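A hypothetical sketch of the final mapping from the sensed hand state to a flight command; the thresholds, the agreement rule, and the command names are illustrative assumptions, not the patent's learned policy.

def hand_to_command(tilt_direction: str, move_direction: str, tilt_deg: float):
    if tilt_deg < 10.0:
        return {"command": "hover"}  # small tilt: treat as neutral
    if tilt_direction == move_direction:
        # Agreeing tilt and motion: translate, with speed scaled by angle.
        return {"command": "move", "direction": move_direction,
                "speed": min(tilt_deg / 45.0, 1.0)}
    # Disagreeing cues: interpret as a mode-change gesture instead.
    return {"command": "mode_change"}

print(hand_to_command("forward", "forward", 30.0))  # move at ~0.67 speed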
UAV video aesthetic quality evaluation method based on multi-modal deep learning
The present disclosure provides a UAV video aesthetic quality evaluation method based on multi-modal deep learning, which establishes a UAV video aesthetic evaluation data set, analyzes the UAV video through a multi-modal neural network, extracts high-dimensional features, and concatenates the extracted features, thereby achieving aesthetic quality evaluation of the UAV video. The method has four steps: step one, establish a UAV video aesthetic evaluation data set, divided into positive samples and negative samples according to video shooting quality; step two, use SLAM technology to recover the UAV's flight trajectory and reconstruct a sparse 3D structure of the scene; step three, extract features of the input UAV video through a multi-modal neural network along the image branch, motion branch, and structure branch respectively; and step four, concatenate the features from the branches to obtain the final video aesthetic label and video scene type.
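A schematic three-branch network in PyTorch matching the structure of steps three and four; the feature dimensions, MLP encoders, and class counts are assumptions standing in for the disclosed branches.

import torch
import torch.nn as nn

class MultiModalAesthetics(nn.Module):
    def __init__(self, img_dim=512, motion_dim=64, struct_dim=128):
        super().__init__()
        # One encoder per modality: image, motion, and sparse 3D structure.
        self.image_branch = nn.Sequential(nn.Linear(img_dim, 128), nn.ReLU())
        self.motion_branch = nn.Sequential(nn.Linear(motion_dim, 128), nn.ReLU())
        self.structure_branch = nn.Sequential(nn.Linear(struct_dim, 128), nn.ReLU())
        self.aesthetic_head = nn.Linear(3 * 128, 2)   # positive vs. negative
        self.scene_head = nn.Linear(3 * 128, 8)       # assumed scene-type count

    def forward(self, img_feat, motion_feat, struct_feat):
        # Step four: concatenate the per-branch features, then classify.
        fused = torch.cat([self.image_branch(img_feat),
                           self.motion_branch(motion_feat),
                           self.structure_branch(struct_feat)], dim=-1)
        return self.aesthetic_head(fused), self.scene_head(fused)

model = MultiModalAesthetics()
aesthetic, scene = model(torch.randn(1, 512), torch.randn(1, 64), torch.randn(1, 128))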