Patent classifications
G06V10/752
VR PARTIAL PASSTHROUGH USING CONTOUR APPROXIMATION
According to at least one embodiment, a method, a computer system, and a computer program product for virtual reality are provided. The present invention may include detecting a selection of a real-world object via a passthrough function of a virtual reality headset; analyzing a contour of the selected real-world object via a contour approximation method; creating a first layer that maintains the passthrough function within the analyzed contour of the selected real-world object; creating a second layer that comprises a virtual simulated environment and one or more virtual objects; and displaying the first layer on top of the second layer by overlapping the first layer and the second layer.
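The abstract does not name a specific contour approximation method; a common choice for simplifying a detected object outline is the Ramer-Douglas-Peucker algorithm, sketched below in pure Python (function names and the epsilon tolerance are illustrative, not from the patent):

```python
import math

def _point_line_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    seg_len = math.hypot(dx, dy)
    if seg_len == 0.0:
        return math.hypot(px - ax, py - ay)
    # Cross-product magnitude is twice the triangle area; divide by the base.
    return abs(dx * (ay - py) - dy * (ax - px)) / seg_len

def approximate_contour(points, epsilon):
    """Ramer-Douglas-Peucker: keep only vertices that deviate more than
    epsilon from the chord joining the endpoints of the span."""
    if len(points) < 3:
        return list(points)
    max_d, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        d = _point_line_dist(points[i], points[0], points[-1])
        if d > max_d:
            max_d, idx = d, i
    if max_d <= epsilon:
        return [points[0], points[-1]]
    left = approximate_contour(points[: idx + 1], epsilon)
    right = approximate_contour(points[idx:], epsilon)
    return left[:-1] + right
```

The simplified polygon could then serve as the mask for the first (passthrough) layer, with everything outside it rendered from the virtual second layer.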
ANALYZING CONTROL OBJECTS IN A FREE SPACE GESTURE CONTROL ENVIRONMENT
Free-space machine interface and control can be facilitated by predictive entities useful in interpreting a control object's position and/or motion (including objects having one or more articulating members, i.e., humans, animals, and/or machines). Predictive entities can be driven using motion information captured as image information or the equivalent. Predictive information can be improved by applying techniques that correlate it with information from observations.
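The abstract does not specify how predictions are corrected by observations; one minimal illustration of the idea is an alpha-beta filter, which predicts the control object's next position from an estimated velocity and then corrects both using the observed position (all names and gains below are illustrative assumptions, not from the patent):

```python
class AlphaBetaTracker:
    """Minimal 1-D predict-then-correct tracker: a predictive entity
    for a control object's position, improved by correlating the
    prediction with each new observation."""

    def __init__(self, pos=0.0, vel=0.0, alpha=0.85, beta=0.005, dt=1.0):
        self.pos, self.vel = pos, vel
        self.alpha, self.beta, self.dt = alpha, beta, dt

    def update(self, observed_pos):
        predicted = self.pos + self.vel * self.dt    # predictive step
        residual = observed_pos - predicted          # correlate with observation
        self.pos = predicted + self.alpha * residual
        self.vel = self.vel + (self.beta / self.dt) * residual
        return self.pos
```

A full system would run one such predictor per articulating member and in three dimensions; the same predict/correct structure carries over.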
IMAGE ANALYSIS BASED ON ADAPTIVE WEIGHTING OF TEMPLATE CONTOURS
A method of characterizing an image is provided. The method includes accessing a template contour that corresponds to a set of contour points extracted from the image. The method includes comparing the template contour and the extracted contour points based on a plurality of distances between locations on the template contour and the extracted contour points. The plurality of distances is weighted based on the locations on the template contour and overlap of the locations on the template contour with a blocking structure in the image. The method includes determining, based on the comparison, a matching geometry and/or a matching position of the template contour with the extracted contour points from the image.
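A minimal sketch of the weighting idea, assuming a fixed down-weight for template locations that overlap a blocking structure and candidate translations as the search space (weights, names, and the candidate-offset search are illustrative assumptions):

```python
import math

def weighted_match_score(template_pts, contour_pts, blocked):
    """Weighted mean of nearest-point distances between template contour
    locations and extracted contour points. Template indices listed in
    `blocked` (overlapping a blocking structure) get a reduced weight."""
    total, weight_sum = 0.0, 0.0
    for i, (tx, ty) in enumerate(template_pts):
        w = 0.1 if i in blocked else 1.0   # down-weight blocked locations
        d = min(math.hypot(tx - cx, ty - cy) for cx, cy in contour_pts)
        total += w * d
        weight_sum += w
    return total / weight_sum

def best_offset(template_pts, contour_pts, blocked, offsets):
    """Pick the candidate translation with the lowest weighted score,
    i.e. a matching position of the template."""
    def shifted(dx, dy):
        return [(x + dx, y + dy) for x, y in template_pts]
    return min(offsets,
               key=lambda o: weighted_match_score(shifted(*o), contour_pts, blocked))
```

The patent's matcher also determines a matching geometry (e.g. scale or shape parameters); that would extend the search space beyond pure translation in the same way.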
LOAD CENTER DEVICE IDENTIFICATION AND LOCATION
Systems/methods for identifying devices in a load center and recording their respective slot locations provide a load center mapping app that performs automatic device identification using a digital image of the load center. The load center mapping app applies digital image processing algorithms to the digital image to outline a contour of each device, then applies optical character recognition (OCR) to each device to recognize the identifier for that device. The mapping app thereafter maps the identifier for each device to the slot number for the device, and stores the mapping information in a virtual load center table that can be used to generate an augmented load center. Such a mapping app is particularly useful in identifying devices that have an electronic label or similar indicia containing an identifier that indicates which branch is connected to that device, although other devices without an electronic label may also be identified and mapped.
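The identifier-to-slot mapping stage can be pictured as building a lookup table from the per-device OCR results; a minimal sketch (the input format and all names are hypothetical, standing in for the contour + OCR stage's output):

```python
def build_load_center_table(detections):
    """Build the virtual load center table mapping each recognized
    device identifier to its slot number. `detections` is a list of
    (slot_number, ocr_text) pairs from the image-processing stage."""
    table = {}
    for slot, text in detections:
        identifier = text.strip().upper()   # normalize the OCR output
        if identifier:                      # skip empty OCR results
            table[identifier] = slot
    return table
```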
Shape-based edge detection
Techniques are described for detecting a periphery of a surface based on a point set representing the surface. The surface may correspond to a display medium upon which content is projected. A shape model may be matched and aligned to a contour of the point set. A periphery or edge of the surface and corresponding display medium may be determined based on the aligned shape model.
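As a toy illustration of matching a shape model to a contour point set, the sketch below aligns a rectangular model to the tightest bounds of the points and returns its corners as the periphery (a real matcher would also handle rotation and non-rectangular models; this assumes an axis-aligned surface and all names are hypothetical):

```python
def fit_rectangle_model(points):
    """Align a rectangular shape model to a contour point set via its
    axis-aligned extents; the four corners approximate the periphery
    of the display surface."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    x0, x1, y0, y1 = min(xs), max(xs), min(ys), max(ys)
    return [(x0, y0), (x1, y0), (x1, y1), (x0, y1)]
```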
Wearable facial movement tracking devices
This technology provides systems and methods for tracking facial movements and reconstructing facial expressions by learning skin deformation patterns and facial features. Frontal-view images of a user making a variety of facial expressions are acquired to create a training data set for use in a machine-learning process. Head-mounted or neck-mounted wearable devices are equipped with one or more cameras or acoustic devices in communication with a data processing system. The cameras capture images of contours of the user's face from either the cheekbone or the chin profile of the user. The acoustic devices transmit and receive signals to calculate a representation of the skin deformation. A data processing system uses the images, the profile of the contours, or the skin deformation to track facial movement or to reconstruct facial expressions of the user based on the training data set.
Image analysis apparatus and method for determining shape of particle included in image of object
An image analysis apparatus according to an embodiment of the present invention includes: a processor; and a memory storing program instructions that cause the processor to: determine a shape of a particle included in a particle image extracted from an image of an object, so that an OK particle image, which is a particle image of an OK particle that satisfies a predetermined standard for shape, and a provisional NG particle image, which is a particle image of a provisional NG particle that does not satisfy the predetermined standard, are obtained; generate a pseudo image using a generative model; and determine whether the provisional NG particle image and the pseudo image are similar, wherein in a case where the provisional NG particle image and the pseudo image are determined to be similar, the provisional NG particle is determined to be an OK particle.
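The two-stage decision can be sketched as a provisional shape test followed by a similarity re-check. The patent does not state the shape standard or the similarity measure; circularity and a feature-distance tolerance are stand-ins here, and all names are hypothetical:

```python
import math

def circularity(area, perimeter):
    """4*pi*A / P^2: 1.0 for a perfect circle, lower for irregular shapes."""
    return 4.0 * math.pi * area / (perimeter ** 2)

def classify_particle(area, perimeter, threshold=0.85):
    """Provisionally split particles into OK / NG by a shape standard."""
    return "OK" if circularity(area, perimeter) >= threshold else "provisional NG"

def reexamine(ng_features, pseudo_features, tol=0.1):
    """Compare a provisional-NG particle with a pseudo image produced by
    a generative model; if similar enough, re-classify it as OK."""
    dist = max(abs(a - b) for a, b in zip(ng_features, pseudo_features))
    return "OK" if dist <= tol else "NG"
```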
Method for generating objective function, apparatus, electronic device and computer readable medium
A method for generating an objective function is provided. The method includes: performing normalization processing on a vector corresponding to each pixel in a target feature map set to generate a target vector, so as to obtain a target vector set; generating a hash code corresponding to each vector in the target vector set, to obtain a hash code set; determining a prior probability of each hash code in the hash code set; and generating an objective function based on an entropy of the prior probabilities.
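The pipeline (normalize, hash, estimate prior probabilities, take the entropy) can be sketched end to end. The patent does not specify the hash scheme; a sign-based binary code is assumed here, and all names are illustrative:

```python
import math
from collections import Counter

def normalize(vec):
    """L2-normalize a feature vector to produce the target vector."""
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def hash_code(vec):
    """Binary hash code: the sign of each normalized component."""
    return tuple(1 if v >= 0 else 0 for v in vec)

def hash_entropy(vectors):
    """Prior probability of each hash code over the set, then the entropy
    of that distribution. An objective built from this entropy (e.g. to
    be maximized) pushes codes to spread evenly over the hash space."""
    codes = [hash_code(normalize(v)) for v in vectors]
    counts = Counter(codes)
    n = len(codes)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

Four vectors mapping to four distinct 2-bit codes give the maximum entropy of 2 bits; identical vectors give 0, so the entropy directly measures how evenly the codes are used.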
Image processor, imaging device, robot and robot system
An image processor includes a memory configured to store the shape of an object, extracting circuitry configured to extract a second image of a target range from a first image of the object, distance detecting circuitry configured to process the second image to detect distances from the camera to at least three parts projected within the target range, plane estimating circuitry configured to estimate a plane projected within the target range using the distances to the at least three parts, angle detecting circuitry configured to detect an angle of the plane with respect to an optical axis of the camera, contour estimating circuitry configured to estimate, based on the shape of the object and the angle of the plane, a contour of the object projected on the first image, and identifying circuitry configured to identify the object on the first image based on the contour.
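The plane-estimation and angle-detection steps are standard geometry: three 3-D points define a plane, and the angle of that plane relative to the optical axis follows from the plane normal. A minimal sketch, assuming the optical axis is +z and all names are illustrative:

```python
import math

def plane_normal(p0, p1, p2):
    """Normal of the plane through three 3-D points (cross product of
    two in-plane edge vectors)."""
    u = [p1[i] - p0[i] for i in range(3)]
    v = [p2[i] - p0[i] for i in range(3)]
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def angle_to_optical_axis(p0, p1, p2, axis=(0.0, 0.0, 1.0)):
    """Angle in degrees between the estimated plane's normal and the
    camera's optical axis (assumed here to be +z)."""
    n = plane_normal(p0, p1, p2)
    dot = sum(a * b for a, b in zip(n, axis))
    n_len = math.sqrt(sum(c * c for c in n))
    a_len = math.sqrt(sum(c * c for c in axis))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n_len * a_len)))))
```

With this angle and the stored object shape, the projected contour can be predicted (e.g. a circle viewed at 45 degrees projects to an ellipse) and matched against the first image.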
Georeferencing a generated floorplan and generating structural models
Methods and systems for improved generation and georeferencing of floor plans are presented. In one embodiment, a method is presented that includes receiving images that depict sheets of a blueprint of a structure. Subsets of the images depicting floor sheets and elevation sheets may be identified. Exterior contours may be extracted from the images depicting floor sheets and elevation contours may be extracted from the images depicting elevation sheets. A corresponding structure within a three-dimensional map may be identified based on the exterior contours and the elevation contours. A three-dimensional contour of the exterior of the structure may be extracted from the three-dimensional map.
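The step of identifying the corresponding structure from its exterior contour can be pictured as a nearest-footprint search; a crude sketch using centroid alignment and mean nearest-vertex distance (the similarity measure and all names are illustrative assumptions, not the patent's method):

```python
import math

def _centered(poly):
    """Translate a polygon so its vertex centroid sits at the origin."""
    cx = sum(x for x, _ in poly) / len(poly)
    cy = sum(y for _, y in poly) / len(poly)
    return [(x - cx, y - cy) for x, y in poly]

def footprint_distance(contour, footprint):
    """Mean nearest-vertex distance after centroid alignment: a crude
    similarity between an extracted exterior contour and a candidate
    building footprint from the three-dimensional map."""
    a, b = _centered(contour), _centered(footprint)
    return sum(min(math.hypot(x - u, y - v) for u, v in b)
               for x, y in a) / len(a)

def identify_structure(contour, footprints):
    """Index of the candidate footprint most similar to the contour."""
    return min(range(len(footprints)),
               key=lambda i: footprint_distance(contour, footprints[i]))
```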