Patent classifications
G06V2201/12
Apparatus and methods for augmented reality measuring of equipment
Methods, apparatus, systems and articles of manufacture are disclosed for augmented reality measuring of equipment. An example apparatus disclosed herein includes an image comparator to compare camera data with reference information of a reference vehicle part to identify a vehicle part included in the camera data, and an inspection image analyzer to, in response to the image comparator identifying the vehicle part, measure the vehicle part by causing an interface generator to generate an overlay representation of the reference vehicle part on the camera data displayed on a user interface, and determining, based on one or more user inputs to adjust the overlay representation, a measurement corresponding to the vehicle part.
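The measurement step described in this abstract could be sketched as follows. This is a hypothetical illustration, not the patented implementation: the reference-part length, the function names, and the pinch-to-zoom scale inputs are all assumptions; the idea is only that successive user adjustments to the overlay's scale imply a real-world size for the imaged part.

```python
# Hypothetical sketch of the measurement step: the user scales a reference-part
# overlay until it matches the part on screen, and the part's size is derived
# from the known reference dimensions and the final accumulated scale.

REFERENCE_PART_LENGTH_MM = 120.0  # assumed known length of the reference vehicle part

def measure_from_overlay(scale_adjustments):
    """Apply successive user scale inputs to the overlay and return the
    implied real-world length of the imaged part."""
    scale = 1.0
    for factor in scale_adjustments:  # e.g. pinch-to-zoom multipliers
        scale *= factor
    return REFERENCE_PART_LENGTH_MM * scale

# The user enlarges the overlay by 10%, then shrinks it by 5%.
measurement = measure_from_overlay([1.10, 0.95])
```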
Detection of environmental changes to delivery zone
A technique for detecting an environmental change to a delivery zone via an unmanned aerial vehicle includes obtaining an anchor image and an evaluation image, each representative of the delivery zone, providing the anchor image and the evaluation image to a machine learning model to determine an embedding score associated with a distance between representations of the anchor image and the evaluation image within an embedding space, and determining an occurrence of the environmental change to the delivery zone when the embedding score is greater than a threshold value.
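The thresholding logic in this abstract can be sketched in a few lines, assuming the machine learning model has already mapped both images into an embedding space. The Euclidean-distance score and the example vectors below are illustrative assumptions, not details from the patent.

```python
import numpy as np

# Minimal sketch of the change-detection step: the "embedding score" is taken
# here to be the Euclidean distance between the anchor and evaluation
# embeddings, and a change is flagged when it exceeds the threshold.

def embedding_score(anchor_emb, eval_emb):
    return float(np.linalg.norm(np.asarray(anchor_emb) - np.asarray(eval_emb)))

def environment_changed(anchor_emb, eval_emb, threshold):
    return embedding_score(anchor_emb, eval_emb) > threshold

anchor = [0.1, 0.9, 0.2]        # embedding of the anchor image
unchanged = [0.12, 0.88, 0.21]  # evaluation image, delivery zone unchanged
changed = [0.9, 0.1, 0.7]       # evaluation image, zone visibly altered
```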
Goods sensing system and method for goods sensing based on image monitoring
A goods sensing system includes: a sample collector that collects a plurality of sets of image samples, where each set of the image samples comprises sample images of a type of goods at multiple angles, where a set of the image samples of a same type of goods are provided with a same group identification, and the group identification is the type of the goods corresponding to the set of image samples; a model trainer that trains a convolutional neural network model according to each sample image and a group identification of the sample image to obtain a goods identification model; a real-time image collector that continuously acquires at least one real-time image of space in front of a shelf, each real-time image including part or all of images of goods; and a goods category deriver that obtains a type and quantity of the goods displayed in the real-time image.
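The data flow around the model in this abstract can be sketched as below. The CNN itself is omitted; the sketch only shows the two bookkeeping steps the abstract names, with hypothetical function names and toy data: multi-angle samples sharing one group identification, and per-detection predictions tallied into a type-and-quantity result.

```python
from collections import Counter

# Hypothetical sketch: every sample image of a goods type carries that type as
# its group identification, and the model's per-item predictions on a
# real-time shelf image are collapsed into a type-and-quantity summary.

def build_training_set(samples):
    """samples: {goods_type: [image, ...]} -> list of (image, group_id)
    pairs, one labelled example per angle, as the model trainer consumes."""
    return [(img, goods_type) for goods_type, imgs in samples.items() for img in imgs]

def tally_goods(predictions):
    """Collapse per-detection type predictions into {type: quantity}."""
    return dict(Counter(predictions))

training = build_training_set({"cola": ["front", "side"], "chips": ["front"]})
result = tally_goods(["cola", "cola", "chips"])
```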
Deep learning for object detection using pillars
Among other things, we describe techniques for detecting objects in the environment surrounding a vehicle. A computer system is configured to receive a set of measurements from a sensor of a vehicle. The set of measurements includes a plurality of data points that represent a plurality of objects in a 3D space surrounding the vehicle. The system divides the 3D space into a plurality of pillars. The system then assigns each data point of the plurality of data points to a pillar in the plurality of pillars. The system generates a pseudo-image based on the plurality of pillars. The pseudo-image includes, for each pillar of the plurality of pillars, a corresponding feature representation of data points assigned to the pillar. The system detects the plurality of objects based on an analysis of the pseudo-image. The system then operates the vehicle based upon the detecting of the objects.
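The pillar step can be sketched under strong simplifying assumptions: points are binned into a 2D grid of pillars by their x/y coordinates, and each pillar's entry in the pseudo-image is just a point count. A real system would learn a richer per-pillar feature encoding; the grid size, cell size, and function name here are illustrative.

```python
import numpy as np

# Simplified sketch of the pillar assignment and pseudo-image generation:
# each 3D point falls into one pillar (a vertical column over a 2D grid
# cell), and the pseudo-image records a per-pillar feature (here, the count).

def pillar_pseudo_image(points, grid_size=4, cell=1.0):
    """points: (N, 3) array of x, y, z; returns a (grid_size, grid_size)
    count image, ignoring points outside the grid."""
    image = np.zeros((grid_size, grid_size))
    for x, y, _z in points:
        i, j = int(x // cell), int(y // cell)
        if 0 <= i < grid_size and 0 <= j < grid_size:
            image[i, j] += 1  # note the pillar collapses the z dimension
    return image

pts = np.array([[0.5, 0.5, 0.1],   # pillar (0, 0)
                [0.2, 0.7, 1.3],   # pillar (0, 0), different height
                [2.5, 3.1, 0.0]])  # pillar (2, 3)
pseudo = pillar_pseudo_image(pts)
```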
Ultrasound face scanning and identification apparatuses and methods
An electronic face-identification device, which performs face scanning with ultrasonic waves, includes a housing and an ultrasound device disposed within the housing. The ultrasound device may be configured to transmit ultrasonic waves through air to a face and scan the face with the ultrasonic waves, to receive reflected waves through the air corresponding to reflections of the ultrasonic waves from the face, and to perform a recognition process for the face based on reflections of the ultrasonic waves from the face. The ultrasound device may include a plurality of ultrasound transducers, and electronic circuitry configured to transmit signals to the ultrasound transducers and receive signals from the ultrasound transducers. The face-identification device may be incorporated into various electronic equipment, such as hand-held equipment in the form of smartphones and tablet computers, as well as in larger scale installations at airports, workplace entryways, and the like.
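The basic physical relation such a scanner relies on, converting each transducer's echo delay into a distance to the reflecting surface, can be sketched as follows. This is an illustrative time-of-flight calculation, not the patent's circuitry; the function names and example delays are assumptions.

```python
# Illustrative sketch: an ultrasound transducer's round-trip echo delay is
# converted to a face-surface distance via the speed of sound in air; a grid
# of such distances from many transducers forms a coarse depth scan.

SPEED_OF_SOUND_AIR = 343.0  # m/s, approximate at room temperature

def echo_distance(round_trip_seconds):
    """Distance to the reflecting surface for one transducer's echo
    (halved because the wave travels out and back)."""
    return SPEED_OF_SOUND_AIR * round_trip_seconds / 2.0

def depth_profile(delays):
    """One distance per transducer."""
    return [echo_distance(t) for t in delays]

profile = depth_profile([0.0010, 0.0012])  # echoes after 1.0 ms and 1.2 ms
```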
HAND POSTURE ESTIMATION METHOD, APPARATUS, DEVICE, AND COMPUTER STORAGE MEDIUM
Described are a hand posture estimation method, an electronic device, and a non-transitory computer-readable storage medium. The method includes: obtaining an initial feature map corresponding to a hand region in a candidate image; obtaining a fused feature map by performing feature fusion processing on the initial feature map; wherein the feature fusion processing is configured to fuse features around a plurality of key points; the plurality of key points represent skeleton key nodes of the hand region; obtaining a target feature map by performing deconvolution processing on the fused feature map; wherein the deconvolution processing is configured to adjust a resolution of the fused feature map; and obtaining coordinate information of the plurality of key points based on the target feature map to determine a posture estimation result of the hand region in the candidate image.
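The final step, reading key-point coordinates off the target feature map, can be sketched under a common assumption in keypoint estimation (not stated explicitly in the abstract): the map holds one heatmap per key point, and each key point's coordinates are the argmax of its heatmap.

```python
import numpy as np

# Sketch of coordinate extraction: for each key point's heatmap in the
# target feature map, take the location of the maximum response.

def keypoint_coordinates(feature_map):
    """feature_map: (K, H, W) array -> list of (row, col), one per key point."""
    coords = []
    for heatmap in feature_map:
        idx = int(np.argmax(heatmap))          # flat index of the peak
        coords.append(divmod(idx, heatmap.shape[1]))  # -> (row, col)
    return coords

fmap = np.zeros((2, 4, 4))
fmap[0, 1, 2] = 1.0  # key point 0 peaks at row 1, col 2
fmap[1, 3, 0] = 1.0  # key point 1 peaks at row 3, col 0
coords = keypoint_coordinates(fmap)
```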
Generation of synthetic 3-dimensional object images for recognition systems
Techniques are provided for generation of synthetic 3-dimensional object image variations for training of recognition systems. An example system may include an image synthesizing circuit configured to synthesize a 3D image of the object (including color and depth image pairs) based on a 3D model. The system may also include a background scene generator circuit configured to generate a background for each of the rendered image variations. The system may further include an image pose adjustment circuit configured to adjust the orientation and translation of the object for each of the variations. The system may further include an illumination and visual effect adjustment circuit configured to adjust illumination of the object and the background for each of the variations, and to further adjust visual effects of the object and the background for each of the variations based on application of simulated camera parameters.
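The variation loop the circuits implement can be sketched as a Cartesian product over the adjustable factors. The factor names and values below are assumptions for illustration; the point is only that each combination of background, pose, and illumination yields one synthetic training variation.

```python
import itertools

# Hypothetical sketch of the variation generator: each rendered view of the
# 3D model is paired with every background, pose, and illumination setting,
# multiplying out the synthetic training set.

def generate_variations(backgrounds, poses, illuminations):
    """Return one variation spec per (background, pose, illumination) combo."""
    return [
        {"background": b, "pose": p, "illumination": i}
        for b, p, i in itertools.product(backgrounds, poses, illuminations)
    ]

variations = generate_variations(
    backgrounds=["office", "street"],
    poses=[(0, 0, 0), (30, 0, 0)],     # (yaw, pitch, roll) in degrees
    illuminations=["soft", "harsh"],
)
```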
APPARATUS AND METHOD OF CONVERTING DIGITAL IMAGES TO THREE-DIMENSIONAL CONSTRUCTION IMAGES
A method implemented with instructions executed by a processor includes receiving a digital image of an interior space. At least one detected object is identified within the digital image. Dimensions of the detected object are determined. Image segmentation is applied to the digital image to produce a segmented image. Edges are detected in the segmented image to produce a combined output image. Geometric transformation, field of view and depth correction are applied to the combined output image to correct for image distortion to produce a geometrically transformed digital image. Dimensions are applied to the geometrically transformed digital image at least partially based on the dimensions of the detected object to produce a dimensionalized floorplan.
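The dimensioning step, deriving a scale from the detected object's known dimensions and applying it to the corrected image, can be sketched as below. The standard door width and the pixel measurements are illustrative assumptions.

```python
# Sketch of the floorplan dimensioning: a detected object of known real-world
# size (a hypothetical standard door here) yields a pixels-per-metre scale,
# which then converts any pixel length in the transformed image to metres.

KNOWN_DOOR_WIDTH_M = 0.9  # assumed real-world width of the detected door

def scale_from_detection(door_width_px):
    """Pixels per metre implied by the detected object's on-image size."""
    return door_width_px / KNOWN_DOOR_WIDTH_M

def dimension(length_px, pixels_per_metre):
    """Convert a measured pixel length on the floorplan to metres."""
    return length_px / pixels_per_metre

ppm = scale_from_detection(180.0)  # the door spans 180 px in the image
wall_m = dimension(800.0, ppm)     # a wall spanning 800 px
```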
SELF-SERVICE CHECKOUT TERMINAL, METHOD AND CONTROL DEVICE
In accordance with various embodiments, a self-service checkout terminal can comprise: a capture device having at least one sensor, wherein the capture device is configured: to capture first biometric data with reference to a person at the self-service checkout terminal; to capture second biometric data with reference to an official identity certificate if the identity certificate is presented to the capture device; to capture a product identifier of a product if the product is presented to the capture device; a control device configured for: firstly determining a sales restriction to which the product is subject, on the basis of the product identifier; comparing the first biometric data with the second biometric data; secondly determining whether the person satisfies a criterion of the sales restriction on the basis of a result of the comparing and on the basis of the second biometric data.
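The control device's two determinations can be sketched together: the live biometric must match the identity certificate's, and the certificate's birth date must satisfy the product's age restriction. The similarity score, threshold, and function names below are stand-in assumptions, not details of the patent.

```python
from datetime import date

# Hypothetical sketch of the two checks: a biometric similarity score in
# [0, 1] compared against a match threshold, then an age check against the
# product's sales restriction using the ID document's birth date.

MATCH_THRESHOLD = 0.8  # assumed minimum similarity for a biometric match

def may_sell(similarity, birth_date, min_age, today):
    """True only if the person matches the ID and meets the age limit."""
    if similarity < MATCH_THRESHOLD:
        return False  # live biometric does not match the certificate
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    return age >= min_age

ok = may_sell(0.93, date(2000, 5, 1), 18, today=date(2024, 6, 1))
blocked = may_sell(0.40, date(2000, 5, 1), 18, today=date(2024, 6, 1))
```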
Computer-implemented method and system for generating a virtual vehicle environment
A computer-implemented method for creating a virtual vehicle environment includes: receiving data of a real vehicle environment; generating a first feature vector representing a respective real object by applying a second machine learning algorithm to the respective real object and storing the first feature vector; providing a plurality of stored second feature vectors representing synthetically generated objects; identifying a second feature vector having a greatest degree of similarity to the first feature vector; selecting the identified second feature vector and retrieving a stored synthetic object that is associated with the second feature vector and that corresponds to the real object or procedurally generating the synthetic object that corresponds to the real object, depending on the degree of similarity of the identified second feature vector to the first feature vector; and integrating the synthetic object into a predetermined virtual vehicle environment.