Patent classifications
G06V10/426
System and method to predict parts dependencies for replacement based on heterogeneous subsystem analysis
A non-transitory computer readable medium (107, 127) stores instructions executable by at least one electronic processor (101, 113) to perform a component co-replacement recommendation method (200). The method includes: identifying components of a medical device by analyzing a technical document (130) related to the medical device; identifying component symbols (132) representing the components in drawings of the technical document; extracting relationships between the components of the medical device based on graphical connections (136) between the component symbols in the drawings of the technical document; generating a component connections graph (124) representing the relationships between the components of the medical device, the graph including nodes (138) corresponding to the components and connections (136) between the components; receiving an identification of a component to be replaced; and determining a co-replacement recommendation (122) for the component to be replaced based on the component connections graph.
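The abstract above describes building a component connections graph and recommending co-replacements from it. A minimal sketch of that idea, using an adjacency map and recommending the directly connected neighbors of the component to be replaced; the component names and connections here are hypothetical illustrations, not drawn from any real device document:

```python
from collections import defaultdict

def build_graph(edges):
    """Build an undirected component connections graph as an adjacency map."""
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)
    return graph

def co_replacement_candidates(graph, component):
    """Recommend directly connected components as co-replacement candidates."""
    return sorted(graph.get(component, set()))

# Hypothetical connections extracted from drawings of a technical document.
edges = [("power_supply", "main_board"),
         ("main_board", "detector"),
         ("main_board", "cooling_fan")]
graph = build_graph(edges)
print(co_replacement_candidates(graph, "main_board"))
# -> ['cooling_fan', 'detector', 'power_supply']
```

A production system would extract the edges automatically from the drawings (the graphical connections between component symbols); here they are supplied by hand to keep the sketch self-contained.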
Saliency prediction method and system for 360-degree image
The present disclosure provides a saliency prediction method and system for a 360-degree image based on a graph convolutional neural network. The method includes: firstly, constructing a spherical graph signal of an image of an equidistant rectangular projection format by using a geodesic icosahedron composition method; then inputting the spherical graph signal into the proposed graph convolutional neural network for feature extraction and generation of a spherical saliency graph signal; and then reconstructing the spherical saliency graph signal into a saliency map of an equidistant rectangular projection format by using a proposed spherical crown based interpolation algorithm. The present disclosure further proposes a KL divergence loss function with sparse consistency. The method can achieve excellent saliency prediction performance subjectively and objectively, and is superior to an existing method in computational complexity.
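The abstract mentions a KL divergence loss for comparing predicted and ground-truth saliency. The sparse-consistency term of the disclosure is not specified in the abstract, so the sketch below shows only the plain KL divergence between two saliency maps normalized into distributions; the epsilon smoothing is an assumption for numerical stability:

```python
import math

def kl_divergence(p, q, eps=1e-8):
    """KL divergence D(p || q) between two saliency maps normalized to sum to 1."""
    sp, sq = sum(p), sum(q)
    p = [v / sp for v in p]
    q = [v / sq for v in q]
    # eps guards against log(0) when a bin has zero saliency.
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))
```

The divergence is zero for identical maps and grows as the predicted distribution departs from the ground truth, which is what makes it usable as a training loss.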
DEPTH INFORMATION BASED POSE DETERMINATION FOR MOBILE PLATFORMS, AND ASSOCIATED SYSTEMS AND METHODS
A method includes determining a depth range where a subject is likely to appear in a current depth map based on one or more previous depth maps of the environment, filtering the current depth map based on the depth range to generate a reference depth map, identifying a plurality of candidate regions from the reference depth map, selecting a subset of the plurality of candidate regions, determining a main region from the subset of the plurality of candidate regions, associating the main region and one or more target regions, identifying a first pose component of the subject from a collective region, identifying a second pose component of the subject from the collective region, determining one or more vectors representing a spatial relationship between the identified first pose component and the identified second pose component, and controlling a movement of a movable object based on the one or more vectors.
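The first two steps above (estimating a depth range from earlier observations, then filtering the current depth map into a reference depth map) can be sketched as follows; the margin parameter and the zeroing-out of out-of-range pixels are assumptions for illustration:

```python
def estimate_depth_range(previous_depths, margin=0.5):
    """Estimate where the subject is likely to appear from earlier subject depths."""
    lo, hi = min(previous_depths), max(previous_depths)
    return lo - margin, hi + margin

def filter_depth_map(depth_map, depth_range):
    """Keep only pixels inside the range; zero out the rest (reference depth map)."""
    lo, hi = depth_range
    return [[d if lo <= d <= hi else 0.0 for d in row] for row in depth_map]

# Subject was observed around 2.0-2.4 m in previous frames.
depth_range = estimate_depth_range([2.0, 2.4])        # (1.5, 2.9)
reference = filter_depth_map([[1.0, 2.0],
                              [5.0, 2.5]], depth_range)
# -> [[0.0, 2.0], [0.0, 2.5]]
```

The resulting reference depth map suppresses background and foreground clutter, so the later candidate-region steps only consider depths where the subject plausibly is.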
Depth information based pose determination for mobile platforms, and associated systems and methods
A pose determination method for a subject includes identifying a plurality of candidate regions from depth data representing an environment based on a depth connectivity criterion, determining a first region including a first subset of the plurality of candidate regions based on an estimation regarding a first pose component of the subject, determining a second region including a second subset of the plurality of candidate regions that are disconnected from the first subset of the plurality of candidate regions based on relative locations of the first region and the second region, generating a collective region by associating the first region with the second region, identifying the first pose component and a second pose component of the subject from the collective region, determining a spatial relationship between the first pose component and the second pose component, and generating a controlling command based on the spatial relationship.
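The candidate-region step above groups depth pixels by a depth connectivity criterion. One common reading of that criterion is a flood fill that joins 4-connected pixels whose depths differ by less than a threshold; the threshold value and connectivity choice below are assumptions, not taken from the disclosure:

```python
def candidate_regions(depth, threshold=0.2):
    """Group 4-connected pixels whose depth difference is under the threshold."""
    rows, cols = len(depth), len(depth[0])
    seen = [[False] * cols for _ in range(rows)]
    regions = []
    for r in range(rows):
        for c in range(cols):
            if seen[r][c]:
                continue
            stack, region = [(r, c)], []
            seen[r][c] = True
            while stack:
                y, x = stack.pop()
                region.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols and not seen[ny][nx]
                            and abs(depth[ny][nx] - depth[y][x]) < threshold):
                        seen[ny][nx] = True
                        stack.append((ny, nx))
            regions.append(region)
    return regions
```

Regions found this way are then filtered and merged (first region, second region, collective region) in the later steps of the claimed method.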
Identification device, identification method, and identification program
An identification apparatus includes processing circuitry configured to determine whether or not a first image and a second image are similar based on feature points extracted from each of the first image and the second image, and to determine whether or not the first image and the second image are similar by comparing neighborhood graphs generated for each of the first image and the second image, with the feature points as nodes.
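A neighborhood graph over feature points can be built, for example, as a k-nearest-neighbor graph, and two graphs can be compared by the overlap of their edge sets. The choice of k-NN construction and Jaccard edge-set comparison below is an illustrative assumption, not necessarily the comparison used by the apparatus:

```python
import math

def knn_graph(points, k=2):
    """Build a k-nearest-neighbor graph over feature points (indices as nodes)."""
    edges = set()
    for i, p in enumerate(points):
        dists = sorted((math.dist(p, q), j) for j, q in enumerate(points) if j != i)
        for _, j in dists[:k]:
            edges.add((min(i, j), max(i, j)))  # undirected edge
    return edges

def graph_similarity(g1, g2):
    """Jaccard similarity between two edge sets (1.0 means identical structure)."""
    if not g1 and not g2:
        return 1.0
    return len(g1 & g2) / len(g1 | g2)
```

Because the graph encodes relative positions of feature points rather than absolute coordinates, two images of the same scene under translation yield the same edge structure, which is what makes graph comparison a useful second similarity check.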
RELATIONSHIP MODELING AND EVALUATION BASED ON VIDEO DATA
A method includes acquiring digital video data that portrays an interacting event, identifying a plurality of video features in the digital video data, analyzing the plurality of video features to create a relationship graph, determining a relationship score based on the relationship graph using a first computer-implemented machine learning model, and outputting the relationship score with a user interface. The interacting event comprises a plurality of interactions between a first individual and a second individual and each video feature of the plurality of video features corresponds to an interaction of the plurality of interactions. The relationship graph comprises a first node, a second node, and a first edge extending from the first node to the second node. The first node represents the first individual, the second node represents the second individual, and a weight of the first edge represents a relationship strength between the first individual and the second individual.
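The relationship graph above carries the relationship strength as the weight of the edge between two individuals. A minimal sketch of that data structure, accumulating per-interaction weights into an undirected weighted edge; the individual names, the interaction weights, and the use of accumulated weight as a stand-in for the learned relationship score are all hypothetical (the abstract uses a machine learning model for scoring):

```python
from collections import defaultdict

def build_relationship_graph(interactions):
    """Accumulate per-pair interaction weights into an undirected weighted edge map."""
    weights = defaultdict(float)
    for a, b, w in interactions:
        # Normalize the pair ordering so (a, b) and (b, a) share one edge.
        weights[(min(a, b), max(a, b))] += w
    return dict(weights)

def relationship_score(graph, a, b):
    """Read the edge weight between two individuals (0.0 if they never interacted)."""
    return graph.get((min(a, b), max(a, b)), 0.0)

graph = build_relationship_graph([("alice", "bob", 0.4), ("bob", "alice", 0.3)])
print(relationship_score(graph, "alice", "bob"))
```

In the claimed method the per-interaction weights would come from video features, and the final score from a trained model; the edge-weight bookkeeping shown here is only the graph layer underneath.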