G06V10/766

METHOD AND DEVICE FOR TRAINING OBJECT RECOGNIZER
20230125692 · 2023-04-27

The disclosure relates to an object recognizer training method and device. There may be provided a method for training an object recognizer comprising: obtaining images by capturing an object with a first sensor and a second sensor; obtaining first object recognition information by inputting the image captured by the first sensor to a first sensor-based object recognizer, and second object recognition information by inputting the image captured by the second sensor to a second sensor-based object recognizer; detecting an object recognition error in the second sensor-based object recognizer; if the object recognition error is detected, obtaining a predicted value of the second object recognition information corresponding to the first object recognition information based on previously created reference data; and training the second sensor-based object recognizer using the predicted value of the second object recognition information.
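The training flow above can be sketched as follows; the reference table, the label strings, and every function name here are illustrative assumptions, not taken from the patent.

```python
def predict_second_info(first_info, reference):
    """Look up the second-sensor recognition result that previously
    created reference data associates with a first-sensor result."""
    return reference.get(first_info)

def update_step(first_info, second_info, reference, train_fn):
    """If the second recognizer's output deviates from the reference-based
    prediction, an error is detected and the prediction is used as the
    training target for the second sensor-based recognizer."""
    predicted = predict_second_info(first_info, reference)
    if predicted is not None and predicted != second_info:
        train_fn(predicted)        # train on the predicted value
        return predicted
    return second_info             # outputs agree: no error detected

# Toy reference data associating first-sensor results with second-sensor results.
reference = {"car@camera": "car@lidar", "person@camera": "person@lidar"}
training_targets = []
result = update_step("car@camera", "truck@lidar", reference, training_targets.append)
```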

Landing tracking control method and system based on lightweight twin network and unmanned aerial vehicle

A landing tracking control method comprises two stages: a tracking model training stage and an unmanned aerial vehicle real-time tracking stage. The method uses a modified lightweight feature extraction network, Snet, so that feature extraction is faster and better meets the real-time requirement. Weights are allocated according to the importance of channel information to distinguish effective features more purposefully and exploit them, improving tracking precision. To improve the training effect of the network, the loss function of the RPN network is optimized: the regression precision of the target box is measured with CIOU, the calculation of the classification loss function is adjusted according to CIOU, and the relation between the regression network and the classification network is strengthened.
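The CIOU (complete IoU, usually written CIoU) measure used for box regression has a standard closed form: IoU minus a centre-distance penalty and an aspect-ratio penalty. A minimal sketch, assuming axis-aligned boxes given as (x1, y1, x2, y2):

```python
import math

def ciou(box_a, box_b):
    """CIoU = IoU - rho^2/c^2 - alpha*v, penalising centre distance and
    aspect-ratio mismatch in addition to area overlap."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # intersection and union
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union
    # squared centre distance over squared diagonal of the enclosing box
    rho2 = ((ax1 + ax2) - (bx1 + bx2)) ** 2 / 4 + ((ay1 + ay2) - (by1 + by2)) ** 2 / 4
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c2 = cw ** 2 + ch ** 2
    # aspect-ratio consistency term
    v = (4 / math.pi ** 2) * (math.atan((ax2 - ax1) / (ay2 - ay1))
                              - math.atan((bx2 - bx1) / (by2 - by1))) ** 2
    alpha = v / ((1 - iou) + v + 1e-9)
    return iou - rho2 / c2 - alpha * v
```

Identical boxes score 1.0; disjoint boxes score below 0 because the centre-distance term keeps penalising boxes even when the IoU gradient vanishes, which is what makes CIoU useful as a regression loss.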

Phase-robust matched kernel acquisition for qubit state determination

Systems, computer-implemented methods, and computer program products that can facilitate determining a state of a qubit are described. According to an embodiment, a system can comprise a memory that stores computer executable components and a processor that executes the computer executable components stored in the memory. The computer executable components can comprise an output receiving component that can receive, in response to a request, output representative of a quantum state of a qubit of a quantum computing device, and a classifying component that classifies the quantum state of the qubit of the quantum computing device based on the output representative of the quantum state of the qubit. The system can further include a configuring component that can configure the classifying component based on a characteristic of the request.
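The abstract does not give the kernel construction, but matched-kernel readout classification in general can be sketched as below; the kernel, signals, and threshold are invented toy values, and the magnitude of the complex projection is used so the decision does not depend on an overall readout phase.

```python
import numpy as np

def classify_qubit(signal, kernel, threshold):
    """Matched-kernel state discrimination (illustrative sketch): project
    the readout signal onto the kernel and take the magnitude, so a global
    phase rotation of the signal does not change the decision."""
    score = abs(np.vdot(kernel, signal))   # |<kernel, signal>|
    return 1 if score > threshold else 0

kernel = np.array([1.0 + 0.0j, 0.0 + 1.0j])      # toy matched kernel
excited = 2.0 * np.exp(0.7j) * kernel            # strong response, arbitrary phase
ground = np.array([0.05 + 0.0j, 0.0 + 0.02j])    # weak response
```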

DISTANCE DETERMINATION FROM IMAGE DATA

A computer includes a processor and a memory storing instructions executable by the processor to receive image data from a camera, generate a depth map from the image data, detect an object in the image data, apply a bounding box circumscribing the object to the depth map, mask the depth map by setting depth values for pixels in the bounding box in the depth map to a depth value of a closest pixel in the bounding box, and determine a distance to the object based on the masked depth map. The closest pixel is closest to the camera of the pixels in the bounding box.
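A minimal NumPy sketch of the masking step described above, assuming a depth map in metres and a pixel-coordinate bounding box (x1, y1, x2, y2) with exclusive upper bounds:

```python
import numpy as np

def mask_depth(depth_map, bbox):
    """Set every depth value inside the bounding box to the depth of the
    pixel closest to the camera, and return the masked map plus that
    closest depth as the distance to the object."""
    x1, y1, x2, y2 = bbox
    masked = depth_map.copy()
    closest = depth_map[y1:y2, x1:x2].min()   # pixel nearest the camera
    masked[y1:y2, x1:x2] = closest            # flatten the box to that depth
    return masked, closest

depth = np.array([[5.0, 5.0, 5.0],
                  [5.0, 2.0, 3.0],
                  [5.0, 4.0, 3.0]])
masked, distance = mask_depth(depth, (1, 1, 3, 3))
```

Taking the minimum over the box makes the distance conservative: background pixels that fall inside the box cannot inflate the estimate.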

Method for generating a three-dimensional working surface of a human body, system
11631219 · 2023-04-18

A method for generating a three-dimensional working surface of a human body includes receiving input data corresponding to images; generating a first point cloud from the input data, each point being associated with a three-dimensional spatial coordinate; determining a plurality of attributes at each point of the first point cloud; calculating a set of geometric parameters from a regression carried out by a series of matrix operations performed according to different layers of a neural network trained from a plurality of partial views of parametric models parameterised with different parameterisation configurations; and determining a parameterised model to generate a body model of a body including a first meshing.
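The regression step can be caricatured as below; the global pooling, layer shapes, and random weights are all invented for illustration and are not the patent's trained network.

```python
import numpy as np

rng = np.random.default_rng(0)

def regress_parameters(point_attributes, layers):
    """Regress geometric parameters from per-point attributes by a series
    of matrix operations (one per network layer), pooling over points
    first so the result is independent of point count and ordering."""
    x = point_attributes.mean(axis=0)      # simple global pooling
    for w in layers[:-1]:
        x = np.maximum(w @ x, 0.0)         # linear layer + ReLU
    return layers[-1] @ x                  # linear output layer

points = rng.normal(size=(1000, 6))        # toy cloud: xyz + 3 attributes
layers = [rng.normal(size=(16, 6)), rng.normal(size=(4, 16))]
params = regress_parameters(points, layers)   # 4 toy geometric parameters
```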

METHOD AND DEVICE FOR ASCERTAINING A CLASSIFICATION AND/OR A REGRESSION RESULT WHEN MISSING SENSOR DATA
20220327332 · 2022-10-13

A computer-implemented method for ascertaining a classification and/or a regression result based on a plurality of sensor values. The method includes: ascertaining a plurality of hypotheses regarding a missing sensor value using a machine learning system; ascertaining a plurality of outputs, each output being based on the plurality of sensor values and one hypothesis and characterizing a classification and/or a regression result; and providing an aggregation of the plurality of outputs as the classification and/or the regression result.
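The hypothesise-then-aggregate scheme can be sketched as follows; the hypothesis rules, the toy "model", and the mean aggregation are stand-ins for the machine learning system of the abstract, chosen only for illustration.

```python
import statistics

def impute_hypotheses(sensor_values, missing_idx, hypothesis_fns):
    """Produce one completed sensor vector per hypothesis about the
    missing value."""
    vectors = []
    for fn in hypothesis_fns:
        filled = list(sensor_values)
        filled[missing_idx] = fn(sensor_values, missing_idx)
        vectors.append(filled)
    return vectors

def aggregated_regression(sensor_values, missing_idx, hypothesis_fns, model):
    """Run the downstream model once per hypothesis and aggregate the
    outputs (here: mean) into a single regression result."""
    completed = impute_hypotheses(sensor_values, missing_idx, hypothesis_fns)
    return statistics.mean(model(v) for v in completed)

# Toy example: two hypotheses for the missing value; the "model" sums the vector.
hypotheses = [
    lambda vals, i: 0.0,                                               # pessimistic guess
    lambda vals, i: statistics.mean(v for j, v in enumerate(vals) if j != i),
]
result = aggregated_regression([1.0, None, 3.0], 1, hypotheses, lambda v: sum(v))
```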

Repair estimation based on images
11631165 · 2023-04-18

In one embodiment, a method includes accessing an image of a damaged object. The method further includes determining, using a plurality of image segmentation models, a plurality of objects in the image. The method further includes determining, using a plurality of visual inference models and the determined plurality of objects from the image segmentation models, a repair-relevant property vector for the damaged object in the image. The repair-relevant property vector includes a plurality of damaged object properties. The method further includes generating a repair report using the repair-relevant property vector and a price catalogue. The repair report includes an indication of the damaged object and a price associated with the repair or replacement of the damaged object. The method further includes providing the generated report for display on an electronic display device.
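The report-generation step, joining the repair-relevant properties against a price catalogue, can be sketched as below; the property-vector shape and catalogue keys are assumptions for illustration.

```python
def repair_report(property_vector, catalogue):
    """Join repair-relevant damage properties with a price catalogue into
    report rows. property_vector is a list of (part, damage) pairs;
    catalogue maps (part, damage) to a repair-or-replace price."""
    return [{"part": part, "damage": damage, "price": catalogue[(part, damage)]}
            for part, damage in property_vector]

catalogue = {("door", "dent"): 120.0, ("bumper", "crack"): 310.0}
report = repair_report([("door", "dent"), ("bumper", "crack")], catalogue)
```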

Method for recognizing distribution network equipment based on raspberry pi multi-scale feature fusion

Disclosed is a method for recognizing distribution network equipment based on Raspberry Pi multi-scale feature fusion. The method includes obtaining an initial sample data set; constructing an object detection network composed of an EfficientNet-B0 backbone network, a multi-scale feature fusion module and a regression-classification prediction head; training the object detection network with the initial sample data set as training samples; and finally detecting inspection pictures using the trained object detection network. The light-weight EfficientNet-B0 backbone network extracts more object features, the introduction of multi-scale feature fusion better adapts the network to small-object detection, and the light-weight y_pred regression-classification detection head can be effectively deployed on Raspberry Pi embedded equipment with tight resources and limited computing power.
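One simple form of multi-scale feature fusion, sketched below, upsamples every feature map to the finest resolution and sums them; the nearest-neighbour upsampling and single-channel maps are simplifying assumptions (the patent's module is not specified in this abstract), and scales are assumed to divide evenly.

```python
import numpy as np

def fuse_scales(features):
    """Upsample each feature map to the finest resolution by nearest-
    neighbour repetition, then sum them element-wise."""
    target_h = max(f.shape[0] for f in features)
    target_w = max(f.shape[1] for f in features)
    fused = np.zeros((target_h, target_w))
    for f in features:
        ry, rx = target_h // f.shape[0], target_w // f.shape[1]
        fused += np.repeat(np.repeat(f, ry, axis=0), rx, axis=1)
    return fused

coarse = np.array([[1.0]])   # 1x1 map from a deep, semantically strong layer
fine = np.ones((2, 2))       # 2x2 map from a shallow, spatially precise layer
out = fuse_scales([coarse, fine])
```

Summing (rather than concatenating) keeps the channel count fixed, which matters on resource-tight embedded hardware such as the Raspberry Pi targeted above.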