Patent classifications
G06T7/75
IMAGE PROCESSING DEVICE, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM
An image processing device includes a reception interface and a processor. The reception interface is configured to receive image data corresponding to an image in which a subject is captured. The processor is configured to perform, with respect to the image data, processing that applies to the subject a skeleton model in which a plurality of feature points corresponding to the four limbs are connected to a center feature point corresponding to the center of a human body.
IMAGE PROCESSING DEVICE, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM
An image processing device includes a reception interface and a processor. The reception interface receives image data corresponding to an image in which a person is captured. The processor detects, based on the image data, a left shoulder feature point, a right shoulder feature point, and a face feature point of the person. The processor acquires a first value corresponding to a distance between the left shoulder feature point and the face feature point. The processor acquires a second value corresponding to a distance between the right shoulder feature point and the face feature point. The processor estimates presence or absence of a body twist of the person based on a ratio between the first value and the second value.
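The ratio-based twist estimation described above can be sketched as follows; the feature points are treated as 2D image coordinates, and the ratio threshold is an illustrative assumption (the abstract does not fix a value):

```python
import math

def estimate_body_twist(left_shoulder, right_shoulder, face,
                        ratio_threshold=1.3):
    """Estimate presence or absence of a body twist from 2D feature points.

    A frontal pose yields roughly equal face-to-shoulder distances, while
    a twisted torso makes one distance noticeably larger than the other.
    The 1.3 threshold is a hypothetical value, not taken from the patent.
    """
    first_value = math.dist(left_shoulder, face)    # left shoulder to face
    second_value = math.dist(right_shoulder, face)  # right shoulder to face
    ratio = max(first_value, second_value) / min(first_value, second_value)
    return ratio >= ratio_threshold
```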
SIMULTANEOUS LOCALIZATION AND MAPPING USING DEPTH MODELING
Embodiments of localization and mapping using depth modeling are described herein. In one example, frames of image data captured by sensor(s) from various poses within an environment are received over an interface. Keypoints are detected in the current frame, and matching keypoints are found in preceding frames. The pose of the current frame is determined based at least partially on depth models associated with the matching keypoints.
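The keypoint-matching step between the current frame and preceding frames can be sketched as a nearest-neighbour search in descriptor space; the descriptor representation and distance threshold below are illustrative assumptions, not details from the abstract:

```python
import numpy as np

def match_keypoints(curr_desc, prev_desc, max_dist=0.7):
    """Match current-frame keypoint descriptors to a preceding frame's
    descriptors by nearest neighbour in descriptor space.

    curr_desc, prev_desc: (N, D) arrays of keypoint descriptors.
    Returns a list of (current_index, previous_index) matches whose
    descriptor distance is within max_dist (a hypothetical threshold).
    """
    matches = []
    for i, d in enumerate(curr_desc):
        dists = np.linalg.norm(prev_desc - d, axis=1)  # distance to each previous descriptor
        j = int(np.argmin(dists))                      # nearest neighbour
        if dists[j] <= max_dist:
            matches.append((i, j))
    return matches
```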
OBJECT DETECTION DEVICE AND MODEL PATTERN EVALUATION DEVICE
Provided is an object detection device which can detect, with high accuracy, a symmetrical target object represented in an image. The object detection device includes a memory which stores a model pattern representing a plurality of predetermined features at mutually different positions on a target object when the target object is viewed from a predetermined direction; a feature extraction unit which extracts a plurality of the predetermined features from an image in which the target object is represented; and a collation unit which calculates a degree of coincidence representing a degree of matching between the predetermined features of the model pattern and the predetermined features extracted from a region of the image corresponding to the model pattern, while changing at least one of the relative position, relative direction, or relative size of the model pattern with respect to the image, and which judges that the target object is represented in the region of the image corresponding to the model pattern when the degree of coincidence is equal to or greater than a predetermined threshold. The predetermined features stored in the memory include a feature of interest which can be used for detecting a position of the target object in a specific direction in the image, or for detecting an angle in a rotational direction centered about a predetermined point of the target object in the image. When calculating the degree of coincidence, the collation unit gives a larger contribution to a match between the feature of interest of the model pattern and a predetermined feature in the image than to a match between a predetermined feature of the model pattern other than the feature of interest and a predetermined feature in the image.
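One way to realize the weighted degree-of-coincidence calculation is to give matches involving a feature of interest a larger weight; the weight value, distance tolerance, and point-based feature representation below are illustrative assumptions:

```python
import math

def degree_of_coincidence(model_pts, image_pts, interest_idx,
                          weight=2.0, tol=2.0):
    """Weighted matching score between model-pattern features and image features.

    model_pts: 2D positions of the model pattern's predetermined features.
    image_pts: 2D positions of features extracted from the image region.
    interest_idx: indices of model features designated as features of interest.
    A matched feature of interest contributes `weight` times as much as an
    ordinary matched feature; weight and tol are hypothetical parameters.
    """
    score = total = 0.0
    for i, mp in enumerate(model_pts):
        w = weight if i in interest_idx else 1.0
        total += w
        # A model feature "matches" if some image feature lies within tol.
        if any(math.dist(mp, ip) <= tol for ip in image_pts):
            score += w
    return score / total if total else 0.0
```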
COMPUTER SYSTEM AND METHOD FOR CONTROLLING GENERATION OF VIRTUAL MODEL
Model data of a virtual model imitating an object model is generated based on photographed data obtained by photographing the object model including a joint structure. A given applied joint structure is applied to the virtual model. The virtual model based on the model data is disposed in a given virtual space. Virtual model management data including the model data and data of the applied joint structure is stored in a predetermined storage section or is externally output as data for causing a joint of the virtual model to function.
Three-Dimensional Skeleton Mapping
A system includes processing hardware and a memory storing software code. When executed, the software code receives first skeleton data including a first location of each of multiple skeletal key-points from the perspective of a first camera, receives second skeleton data including a second location of each of the skeletal key-points from the perspective of a second camera, and correlates the first and second locations of some or all of the skeletal key-points to produce correlated skeletal key-point location data for each of at least some of the skeletal key-points. The software code further merges the correlated skeletal key-point location data for each of those skeletal key-points to provide merged location data, and generates, using the merged location data and the locations of the first and second cameras, a mapping of the 3D pose of a skeleton.
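A simple merge strategy for the correlated per-camera key-point locations is to average them; the abstract does not specify the exact merging rule, so this is only one plausible sketch:

```python
import numpy as np

def merge_keypoint_locations(first_locs, second_locs):
    """Merge per-camera 3D key-point locations into a single estimate.

    first_locs, second_locs: dicts mapping key-point name to a 3D location.
    Only key-points observed from both cameras (i.e. correlated) are merged;
    averaging is a hypothetical choice of merging rule.
    """
    merged = {}
    for name in first_locs.keys() & second_locs.keys():
        merged[name] = (np.asarray(first_locs[name], dtype=float)
                        + np.asarray(second_locs[name], dtype=float)) / 2.0
    return merged
```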
OBJECT POSE ESTIMATION
A depth image of an object can be input to a deep neural network to determine a first four degree-of-freedom pose of the object. The first four degree-of-freedom pose and a three-dimensional model of the object can be input to a silhouette rendering program to determine a first two-dimensional silhouette of the object. A second two-dimensional silhouette of the object can be determined based on thresholding the depth image. A loss function can be determined based on comparing the first two-dimensional silhouette of the object to the second two-dimensional silhouette of the object. Deep neural network parameters can be optimized based on the loss function and the deep neural network can be output.
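The depth-thresholding and silhouette-comparison steps might look like the following; the depth threshold and the use of 1 - IoU as the loss are illustrative assumptions rather than the patented formulation:

```python
import numpy as np

def silhouette_from_depth(depth, max_depth=2.0):
    """Second silhouette: threshold the depth image so that pixels closer
    than max_depth are treated as the object (threshold is hypothetical)."""
    return depth < max_depth

def silhouette_loss(rendered_sil, depth_sil):
    """Loss comparing the rendered silhouette with the thresholded one.

    1 - intersection-over-union is used here as a stand-in for whatever
    comparison the deep neural network is actually trained against.
    """
    inter = np.logical_and(rendered_sil, depth_sil).sum()
    union = np.logical_or(rendered_sil, depth_sil).sum()
    return 1.0 - inter / union if union else 0.0
```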
POSE DETECTION OF AN OBJECT IN A VIDEO FRAME
Aspects of the disclosure provide solutions for determining a position of an object in a video frame. Examples include: receiving a segmentation mask of an identified object in a video frame; adjusting a 3D representation of a moveable part of the object based on constraints for the moveable part; comparing the 3D model of the object to the segmentation mask of the object; determining that a match between the 3D model of the object and the segmentation mask of the object is above a threshold; and, based on the match being above the threshold, determining a position of the object.
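Adjusting a moveable part based on its constraints can be sketched as clamping each part's pose parameter to an allowed range; the per-joint (min, max) angle bounds are hypothetical:

```python
def apply_joint_constraints(pose, limits):
    """Clamp each moveable part's pose parameter to its allowed range.

    pose: dict mapping joint name to a scalar parameter (e.g. an angle).
    limits: dict mapping joint name to a hypothetical (min, max) bound.
    Returns the constrained pose for the joints named in limits.
    """
    return {joint: min(max(pose[joint], lo), hi)
            for joint, (lo, hi) in limits.items()}
```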
System and Method for Dimensioning Target Objects
A method comprising obtaining, from a sensor, depth data representing a target object; selecting a model to fit to the depth data; for each data point in the depth data: defining a ray from a location of the sensor to the data point; and determining an error based on a distance from the data point to the model along the ray; when the depth data does not meet a similarity threshold for the model based on the determined errors, selecting a new model and repeating the error determination for the depth data based on the new model; when the depth data meets the similarity threshold for the model, selecting the model as representing the target object; and outputting the selected model representing the target object.
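The per-point error determination can be sketched for one candidate model; a sphere is used here purely for illustration because it admits a closed-form ray intersection (the method fits arbitrary candidate models):

```python
import math

def ray_error_to_sphere(sensor, point, center, radius):
    """Distance from a depth data point to a sphere model, measured along
    the ray from the sensor through the data point.

    The sphere is a hypothetical model choice; returns None when the ray
    misses the model entirely.
    """
    # Unit direction of the ray from the sensor through the data point.
    d = [p - s for p, s in zip(point, sensor)]
    point_range = math.sqrt(sum(x * x for x in d))  # sensor-to-point distance
    d = [x / point_range for x in d]
    # Quadratic for ray-sphere intersection: t^2 + b*t + c = 0.
    oc = [s - c for s, c in zip(sensor, center)]
    b = 2.0 * sum(di * o for di, o in zip(d, oc))
    c = sum(o * o for o in oc) - radius ** 2
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None  # ray misses the model
    t_hit = (-b - math.sqrt(disc)) / 2.0  # nearest intersection distance
    return abs(point_range - t_hit)       # error along the ray
```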
Anatomically intelligent echocardiography for point-of-care
An apparatus includes an imaging probe and is configured for dynamically arranging the presentation of visual feedback for guiding manual adjustment, via the probe, of a location and orientation associated with the probe. The arranging is selectively based on comparisons between fields of view of the probe and respective results of segmenting image data acquired via the probe. In an embodiment, the apparatus includes a sensor which guides a decision that acoustic coupling quality is insufficient, the apparatus issuing a user alert upon the decision.
An apparatus includes an imaging probe and is configured for dynamically arranging presentation of visual feedback for guiding manual adjustment, via the probe, of a location, and orientation, associated with the probe. The arranging is selectively based on comparisons between fields of view of the probe and respective results of segmenting image data acquired via the probe. In an embodiment, the apparatus includes a sensor which guides a decision that acoustic coupling quality is insufficient, the apparatus issuing a user alert upon the decision.