Patent classifications
G06T2207/20164
IMAGE MATTING METHOD AND APPARATUS
An image matting method and apparatus, an electronic device, and a computer-readable storage medium. The method comprises: performing feature point detection on an image so as to obtain a feature point; acquiring a first image region manually marked on the image; adjusting the first image region according to the feature point, so as to obtain a second image region; and performing matting on the image according to the second image region. The manually marked first image region is adjusted according to the feature point, so as to acquire the second image region which is more accurately positioned, and then matting can be performed according to the second image region so as to accurately extract a required region.
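The region-adjustment step described above can be sketched in plain Python; the function name, rectangle representation, and `margin` parameter are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch of the adjustment step: grow a manually marked
# rectangle (x0, y0, x1, y1) until it encloses every detected feature
# point, producing the more accurately positioned second region.

def adjust_region(first_region, feature_points, margin=2):
    """Expand the rectangle so it encloses every feature point plus a margin."""
    x0, y0, x1, y1 = first_region
    for (px, py) in feature_points:
        x0, y0 = min(x0, px - margin), min(y0, py - margin)
        x1, y1 = max(x1, px + margin), max(y1, py + margin)
    return (x0, y0, x1, y1)

points = [(12, 8), (40, 30), (25, 45)]           # detected feature points
second_region = adjust_region((15, 10, 35, 35), points)
print(second_region)  # (10, 6, 42, 47)
```

Matting would then crop or mask the image using `second_region` rather than the original manual mark.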
ROBOTIC SYSTEM FOR OBJECT SIZE DETECTION
A computing system including a processing circuit in communication with a camera having a field of view. The processing circuit obtains image information based on the objects in the field of view and defines a minimum viable region for a target open corner. Potential minimum viable regions are defined by identifying candidate edges of an object and determining potential intersection points based on the candidate edges. The minimum viable region may then be identified and validated from the potential minimum viable regions.
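The intersection-point step can be illustrated with standard 2D line geometry; the helper below is a generic sketch under the assumption that candidate edges are given as point pairs, and is not the patent's actual procedure.

```python
# Generic sketch: each candidate edge is a pair of (x, y) points; a
# potential intersection point (candidate corner of a minimum viable
# region) is where the infinite lines through two edges cross.

def line_intersection(p1, p2, p3, p4):
    """Intersection of the infinite lines through (p1, p2) and (p3, p4)."""
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3; x4, y4 = p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(denom) < 1e-9:
        return None  # parallel edges yield no candidate corner
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / denom
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

# a horizontal and a vertical candidate edge cross at (5, 0)
print(line_intersection((0, 0), (10, 0), (5, -3), (5, 7)))  # (5.0, 0.0)
```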
METHOD FOR PREDICTING DEFECTS IN ASSEMBLY UNITS
One variation of a method for predicting manufacturing defects includes: accessing a first set of inspection images of a first set of assembly units recorded by an optical inspection station over a first period of time; generating a first set of vectors representing features extracted from the first set of inspection images; grouping neighboring vectors in a multi-dimensional feature space into a set of vector groups; accessing a second inspection image of a second assembly unit recorded by the optical inspection station at a second time succeeding the first period of time; detecting a second set of features in the second inspection image; generating a second vector representing the second set of features in the multi-dimensional feature space; and, in response to the second vector deviating from the set of vector groups by more than a threshold difference, flagging the second assembly unit.
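The flagging logic can be sketched with a toy grouping model; the rounding-based grouping below is a crude stand-in for whatever clustering the method actually uses, and every name is illustrative.

```python
import math

# Toy sketch: group historical feature vectors, then flag a new vector
# that deviates from every group by more than a threshold distance.

def flag_outlier(history, new_vec, threshold):
    """True if new_vec is farther than threshold from every group centroid."""
    groups = {}
    for v in history:  # crude grouping by rounded coordinates (clustering stand-in)
        groups.setdefault(tuple(round(x) for x in v), []).append(v)
    centroids = [tuple(sum(c) / len(g) for c in zip(*g)) for g in groups.values()]
    nearest = min(math.dist(new_vec, c) for c in centroids)
    return nearest > threshold

history = [[1.0, 1.1], [0.9, 1.0], [5.0, 5.2]]   # vectors from past inspections
print(flag_outlier(history, [1.05, 1.0], 1.0))   # False: near an existing group
print(flag_outlier(history, [9.0, 0.0], 1.0))    # True: flag this assembly unit
```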
IMAGE RENDERING METHOD AND APPARATUS, DEVICE, MEDIUM, AND COMPUTER PROGRAM PRODUCT
Provided is an image rendering method performed by a computer device, the method including: determining a vertex coordinate of a virtual texture tile corresponding to an image in a virtual texture; loading, through a vertex shader, a physical texture corresponding to the vertex coordinate to a texture cache; determining, through the vertex shader for each rendering child tile in the virtual texture tile corresponding to the vertex coordinate, a physical texture coordinate corresponding to each rendering child tile in the texture cache, and transmitting the physical texture coordinate to a pixel shader; and sampling, through the pixel shader, a texel matched with the physical texture coordinate from the physical texture in the texture cache, and rendering, based on the texel, the image.
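The virtual-to-physical coordinate mapping that the shaders perform can be sketched in plain Python under common virtual-texturing assumptions (a page table mapping virtual tiles to cache slots); tile size, cache layout, and names are assumptions, not the patent's specifics.

```python
# Assumed layout: the virtual texture is divided into TILE x TILE tiles,
# and a page table records which cache slot holds each resident tile.

TILE = 128  # texels per virtual tile side (assumption)

def physical_coord(u, v, page_table):
    """Map virtual texel (u, v) to a coordinate in the physical texture cache."""
    tile = (u // TILE, v // TILE)        # which virtual tile contains the texel
    slot_x, slot_y = page_table[tile]    # where that tile sits in the cache
    return (slot_x * TILE + u % TILE, slot_y * TILE + v % TILE)

page_table = {(2, 3): (0, 1)}            # virtual tile (2, 3) cached at slot (0, 1)
print(physical_coord(300, 400, page_table))  # (44, 144)
```

The pixel shader would then sample the texel at the returned cache coordinate.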
IMAGE ANALYZATION METHOD AND IMAGE ANALYZATION DEVICE
An image analyzation method and an image analyzation device are disclosed. The method includes: obtaining a first image which presents at least a first object and a second object; analyzing the first image to detect a first central point between a first endpoint of the first object and a second endpoint of the second object; determining a target region based on the first central point as a center of the target region; capturing a second image located in the target region from the first image; and analyzing the second image to generate status information which reflects a gap status between the first object and the second object.
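The target-region step can be sketched directly; the square-region shape and `size` parameter are illustrative assumptions.

```python
import math

# Sketch: center a capture region at the midpoint of the two objects'
# facing endpoints, from which the gap between them is then analyzed.

def target_region(endpoint1, endpoint2, size):
    """Square region of the given size centered between the two endpoints."""
    cx = (endpoint1[0] + endpoint2[0]) / 2
    cy = (endpoint1[1] + endpoint2[1]) / 2
    half = size / 2
    return (cx - half, cy - half, cx + half, cy + half)

# endpoints of the two objects facing each other across the gap
region = target_region((100, 50), (140, 50), size=64)
print(region)                          # (88.0, 18.0, 152.0, 82.0)
print(math.dist((100, 50), (140, 50)))  # 40.0: a simple gap-width measure
```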
INTERACTIVE ENDOSCOPY FOR INTRAOPERATIVE VIRTUAL ANNOTATION IN VATS AND MINIMALLY INVASIVE SURGERY
A controller (522) for live annotation of interventional imagery includes a memory (52220) that stores software instructions and a processor (52210) that executes the software instructions. When executed by the processor (52210), the software instructions cause the controller (522) to implement a process that includes receiving (S210) interventional imagery during an intraoperative intervention and automatically analyzing (S220) the interventional imagery for detectable features. The process executed when the processor (52210) executes the software instructions also includes detecting (S230) a detectable feature and determining (S240) to add an annotation to the interventional imagery for the detectable feature. The process further includes identifying (S250) a location for the annotation as an identified location in the interventional imagery and adding (S260) the annotation to the interventional imagery at the identified location to correspond to the detectable feature. During the intraoperative intervention, a video is output (S270) as video output based on the interventional imagery and the annotation, including the annotation overlaid on the interventional imagery at the identified location.
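The annotation loop (S210–S270) can be reduced to a minimal sketch; the detector callback, label, and data shapes are hypothetical, not the controller's actual API.

```python
# Minimal sketch of the live-annotation process, with a hypothetical
# per-frame detector that returns a feature location or None.

def annotate_stream(frames, detect, label):
    """Yield (frame, annotations) pairs for a live interventional feed."""
    for frame in frames:                    # S210: receive imagery
        annotations = []
        location = detect(frame)            # S220/S230: analyze and detect
        if location is not None:            # S240/S250: decide and locate
            annotations.append({"label": label, "at": location})  # S260: add
        yield frame, annotations            # S270: output video with overlay

frames = ["frame0", "frame1", "frame2"]
detect = lambda f: (64, 64) if f == "frame1" else None
out = list(annotate_stream(frames, detect, "lesion"))
print(out[1])  # ('frame1', [{'label': 'lesion', 'at': (64, 64)}])
```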
AUTOMATIC MESH TRACKING FOR 3D FACE MODELING
The mesh tracking described herein involves mesh tracking on 3D face models. In contrast to existing mesh tracking algorithms which generally require user intervention and manipulation, the mesh tracking algorithm is fully automatic once a template mesh is provided. In addition, an eye and mouth boundary detection algorithm is able to better reconstruct the shape of eyes and mouths.
METHOD AND APPARATUS FOR CONSTRUCTING A 3D GEOMETRY
Aspects of the disclosure include methods, apparatuses, and non-transitory computer-readable storage mediums for generating a three-dimensional (3D) geometry of a room from a panorama image of the room. An apparatus includes processing circuitry that determines two-dimensional (2D) positions of wall corner points of the room in the panorama image based on a user input. Each wall corner point is in one of a floor plane or a ceiling plane of the room. The processing circuitry calculates 3D positions of the wall corner points based on the 2D positions of the wall corner points, a size of the panorama image, and a distance between the floor plane and a capture position of a device capturing the panorama image, determines a room layout based on an order of the wall corner points, and generates the 3D geometry based on the room layout and the 3D positions of the wall corner points.
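The 2D-to-3D step can be sketched with standard equirectangular-panorama geometry, assuming the camera at the origin with z up and the floor plane at z = −camera_height; the formulas are textbook panorama math, not quoted from the patent.

```python
import math

# Sketch: lift a floor-plane wall corner pixel (px, py) in a
# width x height equirectangular panorama to a 3D point, using the
# known distance from the floor to the capture position.

def floor_corner_3d(px, py, width, height, camera_height):
    """Return the 3D (x, y, z) position of a floor-plane wall corner."""
    theta = (px / width) * 2 * math.pi - math.pi   # longitude
    phi = math.pi / 2 - (py / height) * math.pi    # latitude (< 0 below horizon)
    direction = (math.cos(phi) * math.cos(theta),
                 math.cos(phi) * math.sin(theta),
                 math.sin(phi))
    t = -camera_height / direction[2]              # ray meets z = -camera_height
    return tuple(t * d for d in direction)

# a corner 3/4 of the way down a 2048x1024 panorama, camera 1.5 m above floor
x, y, z = floor_corner_3d(512, 768, 2048, 1024, 1.5)
print(round(z, 3))  # -1.5: the recovered point lies on the floor plane
```

Ceiling-plane corners would use the camera-to-ceiling distance with the sign flipped.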
SYSTEMS AND METHODS OF FACIAL AND BODY RECOGNITION, IDENTIFICATION AND ANALYSIS
Systems and methods for learning and recognizing features of an image are provided. A point detector identifies points in an image where there are two-dimensional changes. A geometric feature evaluator overlays at least one mesh on the image and analyzes geometric features on the at least one mesh. An internal calibrator transforms data from the point detector and the geometric feature evaluator into a three-dimensional point figure of the image, and a depth evaluator determines a final shape of the image. A three-dimensional object model of the image is constructed. The image could be a human face or body. An artificial intelligence unit learns and identifies a user's facial features including skull size, distance between eyes, and bone structure and body features including skeleton shape and body size. Exemplary systems and methods can construct and learn features of a human face based on a partial view where part of the face is covered. Systems and methods can unlock a mobile device based on recognition of the features of the user's face.
IMAGE CORRELATION FOR END-TO-END DISPLACEMENT AND STRAIN MEASUREMENT
A system for correlating image data includes a memory configured to store a sequence of images of a sample. The system also includes a processor operatively coupled to the memory and configured to crop a first pair of images to specify a region of interest in the first pair of images, where at least one image in the pair of images is from the sequence of images. The processor is also configured to calculate, using a first convolutional neural network, a displacement field for the first pair of images. The processor is also configured to calculate, using a second convolutional neural network, a strain field for the first pair of images. The processor is further configured to determine an amount of displacement or deformation of the sample based at least in part on the displacement field and the strain field.