Patent classifications
G06T2207/20072
Motion Capture and Character Synthesis
In some examples, a computing device can determine synthetic meshes based on source meshes of a source mesh sequence and target meshes of a target mesh sequence. The computing device can then place the respective synthetic meshes based at least in part on a rigid transformation to define a processor-generated character. For example, the computing device can determine subsets of the mesh sequences based on a similarity criterion. The computing device can determine modified first and second meshes (from the source and target subsets, respectively) having a connectivity corresponding to a common reference mesh. The computing device can then determine the synthetic meshes based on the modified first and second meshes. In some examples, the computing device can project source and target textures onto the synthetic mesh to provide projected source and target textures. The computing device can determine a synthetic texture registered to the synthetic mesh based on the projected source and target textures.
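The two core steps above, blending topologically matched meshes and rigidly placing the result, can be sketched as follows. This is a minimal illustration that assumes both meshes have already been remeshed to a shared reference connectivity; the function and variable names are illustrative, not from the patent.

```python
import numpy as np

def blend_meshes(source_verts, target_verts, alpha):
    """Linearly blend two meshes that share the same connectivity.

    Both vertex arrays must be (N, 3) and indexed consistently, i.e.
    already remeshed to the reference-mesh topology."""
    return (1.0 - alpha) * source_verts + alpha * target_verts

def place_mesh(verts, rotation, translation):
    """Apply a rigid transformation (rotation, then translation)."""
    return verts @ rotation.T + translation

# Two toy "meshes" with identical connectivity (4 vertices each).
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
tgt = src + np.array([2.0, 0.0, 0.0])

synthetic = blend_meshes(src, tgt, alpha=0.5)   # halfway between the two
R = np.eye(3)                                   # identity rotation
t = np.array([0.0, 0.0, 5.0])
placed = place_mesh(synthetic, R, t)
print(placed[0])  # first vertex: blended, then translated
```

A real pipeline would derive `alpha` per frame and estimate the rigid transform from character motion; here both are fixed constants for clarity.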
SYSTEMS, PROCESSES AND DEVICES FOR OCCLUSION DETECTION FOR VIDEO-BASED OBJECT TRACKING
Processes, systems, and devices for occlusion detection for video-based object tracking (VBOT) are described herein. Embodiments process video frames to compute histogram data and depth level data for the object, detect a subset of the video frames containing occlusion events, and generate output data that identifies each video frame of that subset. Threshold measurement values are used to attempt to reduce or eliminate false positives and to increase processing efficiency.
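One way to read the histogram-plus-threshold idea: compare each frame's object histogram against a reference and flag frames whose similarity falls below a threshold. A toy sketch, using the Bhattacharyya coefficient as the similarity; the threshold value and all names are assumptions, not taken from the patent.

```python
import numpy as np

def histogram_similarity(h1, h2):
    """Bhattacharyya coefficient between two normalized histograms."""
    return float(np.sum(np.sqrt(h1 * h2)))

def detect_occlusions(frame_histograms, reference, threshold=0.8):
    """Return indices of frames whose object histogram diverges from the
    reference below `threshold` -- a simple proxy for occlusion events."""
    return [i for i, h in enumerate(frame_histograms)
            if histogram_similarity(h, reference) < threshold]

ref = np.array([0.25, 0.25, 0.25, 0.25])
frames = [
    np.array([0.25, 0.25, 0.25, 0.25]),  # unchanged appearance
    np.array([0.9, 0.1, 0.0, 0.0]),      # appearance collapses: occluder
    np.array([0.3, 0.2, 0.25, 0.25]),    # minor change only
]
print(detect_occlusions(frames, ref))  # -> [1]
```

Tuning the threshold is exactly where the false-positive trade-off mentioned above lives: too high and minor appearance changes are flagged, too low and real occlusions are missed.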
NODE-BASED NEAR-MISS DETECTION
A system includes one or more video capture devices and a processor coupled to each video capture device. Each processor is operable to direct its respective video capture device to obtain an image of a monitored area and process the image to identify objects of interest represented in the image. The processor is also operable to generate bounding perimeter virtual objects for the identified objects of interest, each bounding perimeter virtual object surrounding at least part of its respective object of interest. The processor is further operable to determine danger zones for the identified objects of interest based on the bounding perimeter virtual objects. The processor is further operable to determine at least one near-miss condition based at least in part on an actual or predicted overlap of danger zones for multiple objects of interest, and may generate an alert at least partially in response to the near-miss condition.
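The danger-zone logic can be sketched with axis-aligned boxes: expand each bounding perimeter by a margin and report a near-miss when the expanded zones overlap while the original boxes do not. All names and the margin value are illustrative assumptions.

```python
def expand(box, margin):
    """Grow a bounding box (x1, y1, x2, y2) into a danger zone."""
    x1, y1, x2, y2 = box
    return (x1 - margin, y1 - margin, x2 + margin, y2 + margin)

def overlaps(a, b):
    """Axis-aligned rectangle intersection test."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def near_miss(box_a, box_b, margin=10):
    """Near-miss when danger zones overlap but the boxes themselves do not."""
    return (not overlaps(box_a, box_b)
            and overlaps(expand(box_a, margin), expand(box_b, margin)))

forklift = (0, 0, 50, 50)
worker = (60, 0, 80, 50)            # 10 units of clearance from the forklift
print(near_miss(forklift, worker))  # True: zones overlap, boxes do not
```

Predicted overlap, as mentioned in the abstract, would apply the same test to boxes extrapolated along each object's estimated velocity.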
COMPUTER-IMPLEMENTED METHOD FOR EVALUATING AN ANGIOGRAPHIC COMPUTED TOMOGRAPHY DATASET, EVALUATION DEVICE, COMPUTER PROGRAM AND ELECTRONICALLY READABLE DATA MEDIUM
At least one vascular tree supplying at least a part of the hollow organ in the computed tomography dataset is segmented, and a tree structure is determined from the blood vessel segmentation result up to the order that result permits. Perfusion information is assigned to each edge in the tree structure as at least one of the computed tomography data assigned to the corresponding blood vessel segment or at least one value derived therefrom. Adjacent hollow organ segments of the hollow organ are defined based on supply by adjacent blood vessels in the tree structure, and the tree structure and the perfusion information are analyzed to determine hemodynamic information to assign to the hollow organ segments. At least a part of the hemodynamic information is then visualized in at least one of the computed tomography dataset or a visualization dataset derived therefrom.
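As a rough illustration of assigning per-edge perfusion values and deriving a hemodynamic cue from the tree structure, the sketch below flags vessels whose perfusion drops sharply relative to their parent. The vessel names, values, and the 0.25 threshold are all hypothetical, not taken from the patent.

```python
# Tree edges: child vessel -> parent vessel.
edges = {"LAD": "LM", "LCx": "LM", "LM": "root", "D1": "LAD"}

# Per-edge perfusion information (e.g. a mean attenuation value).
perfusion = {"LM": 400.0, "LAD": 380.0, "LCx": 390.0, "D1": 220.0}

def relative_drop(vessel):
    """Hemodynamic cue: perfusion drop along the edge vs. its parent."""
    parent = edges[vessel]
    if parent not in perfusion:
        return 0.0  # the tree root has no measured parent
    return (perfusion[parent] - perfusion[vessel]) / perfusion[parent]

# Flag segments whose supplying vessel shows a large drop vs. its parent.
flagged = [v for v in perfusion if v in edges and relative_drop(v) > 0.25]
print(flagged)  # -> ['D1']
```

The flagged vessels would then map to the hollow organ segments they supply, which is where the segment-level hemodynamic information in the abstract comes from.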
IMAGE PROCESSING METHOD AND APPARATUS, DEVICE, AND MEDIUM
An image processing method is provided. In the method, a target video frame set is acquired from video data of a plurality of video frames. The target video frame set includes a subset of the video frames, selected based on characteristics of those frames. A global color feature and an image semantic feature of a reference video frame are acquired. An enhancement parameter of the reference video frame is acquired for each of at least one image information dimension according to the global color feature and the image semantic feature. Image enhancement is then performed separately on the video frames in the target video frame set according to each enhancement parameter of the reference video frame, to obtain target image data for each of those video frames.
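The flow of deriving one enhancement parameter from a reference frame's global color feature and then applying it to every frame in the set might look like the sketch below. The per-channel mean feature and the single brightness-gain parameter are simple stand-ins for the learned features and per-dimension parameters described in the abstract.

```python
import numpy as np

def global_color_feature(frame):
    """Per-channel mean as a minimal global color descriptor."""
    return frame.reshape(-1, 3).mean(axis=0)

def brightness_gain(color_feature, target_mean=128.0):
    """Derive one enhancement parameter (a brightness gain) from the feature."""
    return target_mean / max(float(color_feature.mean()), 1e-6)

def enhance(frame, gain):
    """Apply the shared enhancement parameter to one frame."""
    return np.clip(frame * gain, 0, 255)

reference = np.full((4, 4, 3), 64.0)          # a dark reference frame
gain = brightness_gain(global_color_feature(reference))
frames = [np.full((4, 4, 3), 64.0), np.full((4, 4, 3), 32.0)]
enhanced = [enhance(f, gain) for f in frames]
print(enhanced[0][0, 0])  # dark pixels lifted toward the target mean
```

In the patented method the parameter would come from both color and semantic features and cover several image information dimensions (e.g. contrast, saturation), not brightness alone.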
RESERVOIR COMPUTING NEURAL NETWORKS BASED ON SYNAPTIC CONNECTIVITY GRAPHS
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for implementing a reservoir computing neural network. In one aspect there is provided a reservoir computing neural network comprising: (i) a brain emulation sub-network, and (ii) a prediction sub-network. The brain emulation sub-network is configured to process the network input in accordance with values of a plurality of brain emulation sub-network parameters to generate an alternative representation of the network input. The prediction sub-network is configured to process the alternative representation of the network input in accordance with values of a plurality of prediction sub-network parameters to generate the network output. The values of the brain emulation sub-network parameters are determined before the reservoir computing neural network is trained and are not adjusted during training of the reservoir computing neural network.
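The split between a fixed sub-network and a trained prediction sub-network is the classic reservoir-computing recipe, sketched below. Random weights stand in for the synaptic-connectivity-derived brain emulation parameters; only the linear readout is fit.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Brain emulation" sub-network: weights fixed before training and never
# adjusted. (The patent derives these from a synaptic connectivity graph;
# random values stand in for that here.)
W_res = rng.normal(scale=0.5, size=(32, 4))

def reservoir(x):
    """Untrained nonlinear expansion: the alternative representation."""
    return np.tanh(x @ W_res.T)

# Prediction sub-network: a linear readout, the only part that is trained.
X = rng.normal(size=(200, 4))
y = X[:, 0] - 2.0 * X[:, 1]                     # toy regression target
H = reservoir(X)                                # alternative representations
W_out, *_ = np.linalg.lstsq(H, y, rcond=None)   # fit readout by least squares

mse = float(np.mean((reservoir(X) @ W_out - y) ** 2))
print(mse < float(np.mean(y ** 2)))  # beats the trivial zero predictor
```

Because only `W_out` is trained, the expensive part of learning reduces to a linear fit, which is the practical appeal of reservoir computing.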
Estimating bone mineral density from plain radiograph by assessing bone texture with deep learning
The present disclosure provides a computer-implemented method, a device, and a computer program product for radiographic bone mineral density (BMD) estimation. The method includes receiving a plain radiograph, detecting landmarks for a bone structure included in the plain radiograph, extracting a region of interest (ROI) from the plain radiograph based on the detected landmarks, and estimating the BMD for the extracted ROI by using a deep neural network.
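The landmark-to-ROI step can be sketched as cropping the radiograph around the detected points; a simple texture statistic stands in here for the deep neural network, which in the actual method would regress BMD from learned bone-texture features. Landmark positions and the padding value are hypothetical.

```python
import numpy as np

def extract_roi(image, landmarks, pad=2):
    """Crop a region of interest around detected (row, col) landmarks."""
    rows = [r for r, _ in landmarks]
    cols = [c for _, c in landmarks]
    r0, r1 = max(min(rows) - pad, 0), min(max(rows) + pad, image.shape[0])
    c0, c1 = max(min(cols) - pad, 0), min(max(cols) + pad, image.shape[1])
    return image[r0:r1, c0:c1]

def estimate_bmd(roi):
    """Stand-in for the deep neural network: a bone-texture statistic."""
    return float(roi.std())

radiograph = np.zeros((100, 100))
radiograph[40:60, 40:60] = 1.0        # bright "bone" patch
landmarks = [(40, 40), (60, 60)]      # hypothetical detector output
roi = extract_roi(radiograph, landmarks)
print(roi.shape)  # -> (24, 24)
```

Anchoring the crop to landmarks rather than fixed coordinates is what makes the ROI robust to patient positioning, which is the point of the landmark-detection step.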
Method and system for automatically processing point cloud based on reinforcement learning
A method and system for automatically processing a point cloud based on reinforcement learning are provided. The method according to an embodiment of the present disclosure includes scanning, through a lidar and a camera, to collect a point cloud (PCL) and an image; calibrating, by a controller, to match locations of the image and the point cloud through reinforcement learning that maximizes a reward including geometric and luminous intensity consistency of the image and the point cloud; and meshing, by the controller, the point cloud into a 3D image through reinforcement learning that minimizes a penalty including a difference between a shape of the image and a shape of the point cloud.
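A minimal sketch of the calibration reward: score a candidate lidar-camera pose higher when projected points land on image edges (geometric consistency) and their lidar intensities match the image intensities sampled at the same pixels (luminous intensity consistency). The weights and sample values are assumptions for illustration.

```python
import numpy as np

def calibration_reward(edge_response, image_intensity, point_intensity,
                       w_geo=0.5, w_lum=0.5):
    """Reward for one candidate lidar-camera calibration.

    `edge_response`: image edge strength sampled where points project.
    `image_intensity` / `point_intensity`: matched intensity samples."""
    geo = float(np.mean(edge_response))
    lum = 1.0 - float(np.mean(np.abs(image_intensity - point_intensity)))
    return w_geo * geo + w_lum * lum

# Well-aligned candidate: points on edges, intensities agree.
good = calibration_reward(np.array([0.9, 0.8, 1.0]),
                          np.array([0.5, 0.6, 0.4]),
                          np.array([0.5, 0.6, 0.4]))
# Misaligned candidate: off the edges, intensities disagree.
bad = calibration_reward(np.array([0.1, 0.0, 0.2]),
                         np.array([0.5, 0.6, 0.4]),
                         np.array([0.9, 0.1, 0.8]))
print(good > bad)  # the reward favors the aligned pose
```

An RL agent proposing calibration adjustments and receiving this reward would be pushed toward poses where the two modalities agree, which is the mechanism the abstract describes.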
Systems and methods for image regularization based on a curve derived from the image data
Disclosed are systems and associated methods for generating a regularized image from non-uniformly distributed image data based on a curve derived from the image data. The curve is used to reduce the non-uniformity in the image data without losing detail or changing the overall image. Regularizing the image includes obtaining a tree-based representation of the non-uniformly distributed image data, generating a regularization curve that models a particular distribution of the image data, and applying the regularization curve to the tree-based representation in order to select which nodes render decimated image data in place of the original image data and which original image data is preserved and rendered as part of the regularized image. Specifically, the system renders the original image data associated with leaf nodes and the decimated image data associated with parent nodes that intersect or are within the regularization curve.
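The node-selection step can be sketched as a tree walk: where the regularization curve's allowed density is exceeded, a subtree collapses to its parent's decimated point; elsewhere the original leaf points are kept. The tree layout, the density values, and the constant-threshold curve are all simplifications for illustration.

```python
def regularized_points(node, curve):
    """Select points from a tree of image data.

    `curve(position)` gives the allowed density at a position. Subtrees
    denser than the curve are replaced by the parent's decimated point;
    sparser subtrees keep their original leaf points."""
    if "children" not in node:                 # leaf: original image data
        return list(node["points"])
    if node["density"] > curve(node["position"]):
        return [node["decimated"]]             # collapse the dense subtree
    out = []
    for child in node["children"]:
        out += regularized_points(child, curve)
    return out

tree = {
    "position": 0.5, "density": 4.0, "decimated": (0.5,),
    "children": [
        {"position": 0.25, "density": 6.0, "decimated": (0.25,),
         "children": [{"points": [(0.2,), (0.3,)]}]},
        {"points": [(0.8,)]},
    ],
}
flat = regularized_points(tree, curve=lambda p: 5.0)
print(flat)  # dense left subtree collapsed; sparse right leaf kept
```

A real regularization curve would vary with position to model the data's distribution rather than being a constant threshold, but the selection mechanism is the same.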
POINT CLOUD ALIGNMENT
Examples of methods for point cloud alignment are described herein. In some examples, a method includes orienting a model point cloud or a scanned point cloud based on a set of initial orientations. In some examples, the method includes determining, using a first portion of a machine learning model, first features of the model point cloud and second features of the scanned point cloud. In some examples, the method includes determining, using a second portion of the machine learning model, correspondence scores between the first features and the second features based on the set of initial orientations. In some examples, the method includes globally aligning the model point cloud and the scanned point cloud based on the correspondence scores.
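The correspondence-scoring step can be sketched with cosine similarity between feature sets, choosing the initial orientation whose matches are strongest. The similarity measure and all names are stand-ins for the two learned portions of the model described above.

```python
import numpy as np

def correspondence_scores(feat_model, feat_scan):
    """Cosine similarity between every model/scan feature pair -- a
    stand-in for the second portion of the machine learning model."""
    a = feat_model / np.linalg.norm(feat_model, axis=1, keepdims=True)
    b = feat_scan / np.linalg.norm(feat_scan, axis=1, keepdims=True)
    return a @ b.T

def best_orientation(per_orientation_feats, feat_scan):
    """Score the scan against model features extracted under each initial
    orientation; keep the orientation with the strongest matches."""
    scores = [float(correspondence_scores(f, feat_scan).max(axis=1).mean())
              for f in per_orientation_feats]
    return int(np.argmax(scores))

rng = np.random.default_rng(1)
feat_scan = rng.normal(size=(5, 8))
aligned = feat_scan + 0.01 * rng.normal(size=(5, 8))  # near-identical features
misaligned = rng.normal(size=(5, 8))                  # unrelated features
print(best_orientation([misaligned, aligned], feat_scan))  # -> 1
```

Global alignment would then estimate a transform (e.g. by a Procrustes-style fit) from the high-scoring point correspondences; only the scoring stage is sketched here.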