Patent classifications
G06V10/757
CLOUD-EDGE-END COOPERATIVE CONTROL METHOD OF 5G NETWORKED UNMANNED AERIAL VEHICLE FOR SECURITY RESCUE
The present invention discloses a cloud-edge-end cooperative control method of a 5G networked UAV for security rescue, including: an image acquisition step: performing, by a single-chip microcomputer, attitude resolution on data acquired by a detection sensor, to obtain image data; a sparse landmark map building step: performing, by a control platform, front-end feature point matching, local map building and optimization, loop closure detection, and frame resolution on the image data, to generate a sparse landmark map; a three-dimensional dense map building step: generating, by an edge cloud, a three-dimensional dense map based on a key frame pose and key frame observation data of the sparse landmark map; a high-precision semantic map building step: obtaining a high-precision semantic map; and a UAV movement step: adjusting, by a driving mechanism, a pose of the UAV according to the three-dimensional dense map or the high-precision semantic map.
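Read as a pipeline, the division of labor in this abstract (microcomputer → control platform → edge cloud → driving mechanism) can be sketched with placeholder stages. All function names and data shapes below are illustrative assumptions, not the patent's implementation:

```python
def acquire_images(sensor_frames):
    # single-chip microcomputer: attitude resolution -> image data (stubbed)
    return [{"image": f} for f in sensor_frames]

def build_sparse_map(images):
    # control platform: feature matching, local mapping, loop closure (stubbed);
    # the sparse landmark map is reduced to a list of keyframes here
    return {"keyframes": [{"pose": i, "obs": img["image"]}
                          for i, img in enumerate(images)]}

def build_dense_map(sparse):
    # edge cloud: dense reconstruction from keyframe poses and observations (stubbed)
    return {"points": [(kf["pose"], kf["obs"]) for kf in sparse["keyframes"]]}

def adjust_uav_pose(dense):
    # driving mechanism: choose the next UAV pose from the map
    # (stubbed as the pose of the last map point)
    return dense["points"][-1][0]
```

The point of the sketch is only the staged responsibility split: each tier consumes the previous tier's output, so heavy reconstruction can sit on the edge cloud while the vehicle runs the light stages.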
SIMILAR AREA DETECTION DEVICE, SIMILAR AREA DETECTION METHOD, AND COMPUTER PROGRAM PRODUCT
A similar area detection device according to an embodiment includes an acquisition unit, a feature-point-extraction unit, a matching unit, an outermost contour extraction unit, and a detection unit. The acquisition unit acquires a first image and a second image. The feature-point-extraction unit extracts feature points from each of the first image and the second image. The matching unit associates the feature points extracted from the first image with the feature points extracted from the second image, and detects corresponding points between the images. The outermost contour extraction unit extracts an outermost contour from each of the first image and the second image. The detection unit detects a similar area from each of the first image and the second image based on the outermost contours and the number of corresponding points. Similar areas are partial areas of the first and second images that are similar to each other.
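The detection unit's criterion — report a pair of areas as similar when enough corresponding points land inside both outermost contours — can be sketched as follows. For brevity the contours are approximated by axis-aligned boxes, and the `min_points` threshold is an assumed parameter:

```python
def in_box(p, box):
    # axis-aligned containment test; box is (x0, y0, x1, y1)
    x0, y0, x1, y1 = box
    return x0 <= p[0] <= x1 and y0 <= p[1] <= y1

def detect_similar_areas(matches, contours_a, contours_b, min_points=3):
    """matches: list of ((xa, ya), (xb, yb)) corresponding-point pairs.
    A pair (i, j) of candidate areas is reported as similar when at least
    min_points corresponding points fall inside both outermost contours."""
    similar = []
    for i, ca in enumerate(contours_a):
        for j, cb in enumerate(contours_b):
            n = sum(1 for pa, pb in matches
                    if in_box(pa, ca) and in_box(pb, cb))
            if n >= min_points:
                similar.append((i, j, n))
    return similar
```

In practice the corresponding points would come from a feature matcher (e.g. descriptor matching between the two images); here they are taken as given.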
GENERATING AND EVALUATING MAPPINGS BETWEEN SPATIAL POINT SETS IN MULTI-LEVELS
A method for generating and evaluating N-to-1 mappings between spatial point sets in nD, n=2 or 3, implemented on a computing device comprising a programmable general-purpose processor, a programmable data-parallel coprocessor, and a memory coupled with them. Embodiments of the method comprise using the computing device to carry out steps comprising: receiving first and second spatial point sets in 2D or 3D, the first spatial point set comprising a first non-empty non-isolated portion of non-isolated points and a second constrained portion of constrained points; receiving an extended array of fixed correspondents comprising a first not-yet-fixed portion for the non-isolated portion of the first spatial point set and a second fixed portion for the constrained portion of the first spatial point set; receiving a CCISS or padded CCISS between the first non-empty non-isolated portion and the second spatial point set; dividing the first non-empty non-isolated portion into a number of sub-portions, and dividing the first not-yet-fixed portion of the extended array of fixed correspondents and the CCISS or the padded CCISS accordingly; and iteratively generating optimal N-to-1 mappings between the members of the sub-portions of the first non-empty non-isolated portion and updating the respective sub-portions of the extended array of fixed correspondents, one sub-portion at each iteration.
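The data layout in this abstract — an array of correspondents where some entries are fixed and the rest are filled in iteratively from per-point candidate lists — can be illustrated with a deliberately simplified sketch. The greedy nearest-candidate rule below is a stand-in for the patent's optimization (which it does not specify here), and the sentinel `-1` for "not yet fixed" is an assumption:

```python
import math

def fill_correspondents(points_a, points_b, fixed, candidates):
    """points_a, points_b: spatial point sets (2D or 3D tuples).
    fixed[i] is an index into points_b, or -1 if not yet fixed
    (the 'not-yet-fixed portion' of the extended array).
    candidates[i] lists candidate indices in points_b for point i
    (a crude stand-in for the CCISS).
    N-to-1 is allowed: several points of points_a may share a correspondent."""
    out = list(fixed)
    for i, c in enumerate(out):
        if c == -1:
            # greedy choice: nearest candidate correspondent
            out[i] = min(candidates[i],
                         key=lambda j: math.dist(points_a[i], points_b[j]))
    return out
```

A real embodiment would split the not-yet-fixed portion into sub-portions and solve each one optimally per iteration (on the data-parallel coprocessor); the sketch only shows the array-update shape.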
ACTION LEARNING METHOD, MEDIUM, AND ELECTRONIC DEVICE
An action learning method, including: acquiring human body moving image data; determining three-dimensional human body pose action data corresponding to the human body moving image data; matching the three-dimensional human body pose action data with atomic actions in a robot atomic action library to determine robot action sequence data corresponding to the human body moving image data; sequentially performing action continuity stitching on all robot sub-actions in the robot action sequence data; and determining a continuous action learned by a robot from the robot action sequence data subjected to the action continuity stitching.
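The match-then-stitch structure can be sketched as follows. Representing atomic actions as reference pose vectors, matching as nearest-neighbor lookup, and "continuity stitching" as merging consecutive repeats are all simplifying assumptions for illustration:

```python
def match_atomic_actions(pose_seq, library):
    """pose_seq: sequence of pose vectors extracted from the moving image data.
    library: dict name -> reference pose vector (a toy stand-in for the
    robot atomic action library)."""
    def nearest(p):
        # atomic action whose reference pose is closest (squared distance)
        return min(library,
                   key=lambda n: sum((a - b) ** 2 for a, b in zip(p, library[n])))
    seq = [nearest(p) for p in pose_seq]
    # continuity stitching, crudely: merge consecutive repeats of the same atom
    stitched = [seq[0]]
    for a in seq[1:]:
        if a != stitched[-1]:
            stitched.append(a)
    return stitched
```

The stitched list is the "continuous action" the robot learns; a real system would also blend the boundary poses between adjacent sub-actions rather than just deduplicate.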
Three-dimensional object estimation using two-dimensional annotations
A method includes obtaining a two-dimensional image, obtaining a two-dimensional image annotation that indicates presence of an object in the two-dimensional image, determining a location proposal based on the two-dimensional image annotation, determining a classification for the object, determining an estimated size for the object based on the classification for the object, and defining a three-dimensional cuboid for the object based on the location proposal and the estimated size.
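A minimal sketch of the proposal-plus-prior idea: take the annotation's box center as the location proposal, back-project it with a pinhole camera model, and attach a class-conditional size. The size table, the assumed depth input, and the intrinsics tuple are all illustrative assumptions, not the patent's method:

```python
# Assumed class-size priors (length, width, height in meters) -- illustrative only.
CLASS_SIZES = {"car": (4.5, 1.8, 1.5), "pedestrian": (0.6, 0.6, 1.7)}

def backproject(u, v, depth, fx, fy, cx, cy):
    # pinhole-camera back-projection of an image point at a given depth
    return ((u - cx) * depth / fx, (v - cy) * depth / fy, depth)

def estimate_cuboid(box2d, cls, depth, intrinsics):
    """box2d: (x0, y0, x1, y1) 2D annotation; cls: object classification;
    depth: assumed range estimate; intrinsics: (fx, fy, cx, cy).
    Returns (center_3d, size) -- the location proposal is simply the
    back-projected box center, and the size comes from the class prior."""
    u = (box2d[0] + box2d[2]) / 2
    v = (box2d[1] + box2d[3]) / 2
    return backproject(u, v, depth, *intrinsics), CLASS_SIZES[cls]
```

The cuboid here is (center, size); orientation estimation, which a full system would also need, is omitted.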
METHOD AND APPARATUS FOR DETECTING BODY
Embodiments of the present application disclose a method and apparatus for detecting a body. A particular embodiment of the method comprises: acquiring a set of candidate body image regions in a target image; for a candidate body image region in the set of candidate body image regions: acquiring position information and confidences of candidate body key points in the candidate body image region; determining the candidate body key points within a body contour according to body contour information in the candidate body image region and the acquired position information; and determining a confidence score of the candidate body image region according to a sum of the confidences of the candidate body key points within the body contour; and determining a body image region from the set of candidate body image regions according to the confidence scores of the candidate body image regions in the set of candidate body image regions.
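The scoring rule — sum the confidences of the key points that fall inside the body contour, then keep the best-scoring region — can be sketched as below. Approximating the contour by an axis-aligned box is a simplification for illustration:

```python
def region_score(keypoints, contour_box):
    """keypoints: list of ((x, y), confidence) candidate body key points;
    contour_box: (x0, y0, x1, y1) approximation of the body contour.
    Score = sum of confidences of the key points inside the contour."""
    x0, y0, x1, y1 = contour_box
    return sum(c for (x, y), c in keypoints
               if x0 <= x <= x1 and y0 <= y <= y1)

def select_body_region(regions):
    """regions: list of (keypoints, contour_box) candidates.
    Returns the index of the highest-scoring candidate region."""
    scores = [region_score(kps, box) for kps, box in regions]
    return max(range(len(scores)), key=scores.__getitem__)
```

Key points outside the contour contribute nothing, which is what suppresses candidate regions whose detections spill onto the background.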
High Resolution Alignment of 3D Imaging with 2D Imaging
Alignment of a 2D image to a corresponding 3D image is provided by writing a pattern into a 3D sample. The pattern is at known positions in the 3D image, and provides visible reference features in the 2D image. This permits accurate determination of the plane in the 3D image that corresponds to the 2D image.
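The plane-finding step reduces to: the written pattern is at known (x, y) positions in every z-slice of the 3D image, so the slice whose pattern best matches the marks observed in the 2D image is the corresponding plane. A minimal sketch, with an assumed sum-of-nearest-squared-distances score:

```python
def find_matching_plane(pattern_by_slice, observed):
    """pattern_by_slice: per-z-slice list of known reference-mark (x, y) positions;
    observed: reference marks visible in the 2D image.
    Returns the index of the z-slice whose written pattern best explains
    the observed marks (lowest total nearest-mark squared distance)."""
    def err(marks):
        return sum(min((mx - ox) ** 2 + (my - oy) ** 2 for mx, my in marks)
                   for ox, oy in observed)
    return min(range(len(pattern_by_slice)), key=lambda z: err(pattern_by_slice[z]))
```

Because the pattern positions are known a priori, this gives the 2D-to-3D registration without relying on the sample's own (possibly featureless) content.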
System for Determining when a Driver Accesses a Communication Device
The techniques of this disclosure relate to a system for modifying access to a communication device. The system includes a controller circuit configured to receive first-feature data generated by a first detector configured to detect identifying features of a driver of a vehicle. The controller circuit is also configured to receive second-feature data generated by a second detector configured to detect identifying features of a user of a communication device. The controller circuit is also configured to determine whether an identifying feature from the first-feature data matches a corresponding identifying feature from the second-feature data. The controller circuit is also configured to modify access to one or more functions of the communication device based on the determination. The system can reduce instances of driver distraction caused by the driver attempting to use the communication device.
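The controller's determination step is a feature comparison between the two detectors' outputs. A minimal sketch, where identifying features are modeled as opaque tokens (the string encoding is an assumption purely for illustration):

```python
def should_restrict(driver_features, user_features):
    """driver_features: identifying features from the first detector (driver);
    user_features: identifying features from the second detector (device user).
    Returns True when a feature matches -- i.e. the driver appears to be
    the device user, so access to device functions should be modified."""
    return bool(set(driver_features) & set(user_features))
```

On a match the controller would then disable or limit distracting functions of the communication device; with no match (a passenger is using the device) access is left unchanged.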
Vehicle controllers for agricultural and industrial applications
Systems and methods for vehicle controllers for agricultural and industrial applications are described. For example, a method includes accessing a map data structure storing a map representing locations of physical objects in a geographic area; accessing current point cloud data captured using a distance sensor connected to a vehicle; detecting a crop row based on the current point cloud data; matching the detected crop row with a crop row represented in the map; determining an estimate of a current location of the vehicle based on a current position in relation to the detected crop row; and controlling one or more actuators to cause the vehicle to move from the current location of the vehicle to a target location.
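The match-and-localize step can be sketched with a deliberately simple row model: each crop row is reduced to a (heading, lateral offset) line, the detected row is matched to the nearest map row, and the vehicle's lateral position follows from the measured offset to that row. The L1 matching metric and the 1D localization are illustrative assumptions:

```python
def match_crop_row(detected, map_rows):
    """detected, map_rows[i]: rows modeled as (heading_rad, lateral_offset_m).
    Returns the index of the map row closest to the detected row."""
    def dist(row):
        return abs(row[0] - detected[0]) + abs(row[1] - detected[1])
    return min(range(len(map_rows)), key=lambda i: dist(map_rows[i]))

def estimate_lateral_position(measured_offset, map_row_offset):
    # vehicle lateral position = map row's offset minus the offset
    # measured from the vehicle to the detected row
    return map_row_offset - measured_offset
```

A full system would fit the row lines from the point cloud and fuse this estimate with odometry before commanding the actuators; the sketch shows only the map-matching core.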
METHODS AND APPARATUS FOR AUTOMATIC HAND POSE ESTIMATION USING MACHINE LEARNING
Systems and methods for hand pose estimation are provided. For example, a computing device may obtain an image, such as an image of a hand. The computing device may apply one or more preprocessing processes to the image to generate an augmented image. Further, the computing device may apply a first machine learning process to the augmented image to generate a plurality of keypoints. The computing device may also apply a second machine learning process to the plurality of keypoints to generate a plurality of depth values. The computing device may further determine a plurality of angles based on the plurality of keypoints and the plurality of depth values. In some examples, the computing device may generate a model comprising a plurality of segments based on the plurality of angles. The computing device may store the plurality of angles and, in some examples, the model in a memory device.
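The final geometric step — combining 2D keypoints with predicted depths and computing joint angles from the resulting 3D points — can be sketched directly; the machine-learned stages that produce the keypoints and depths are taken as given:

```python
import math

def lift_keypoints(kps2d, depths):
    """Combine 2D keypoints with their predicted depth values into 3D joints."""
    return [(x, y, z) for (x, y), z in zip(kps2d, depths)]

def joint_angle(a, b, c):
    """Angle (radians) at joint b formed by the 3D points a-b-c,
    e.g. the flexion angle at a finger joint."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    return math.acos(dot / (n1 * n2))
```

Applying `joint_angle` along each finger's consecutive keypoint triples yields the plurality of angles from which the segmented hand model is built.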