Position and attitude estimation device, position and attitude estimation method, and storage medium

According to one embodiment, a position and attitude estimation device includes a processor. The processor is configured to acquire time-series images continuously captured by a capture device installed on a mobile object, estimate a first position and attitude of the mobile object based on the acquired time-series images, estimate a distance to a subject included in the acquired time-series images, and, based on the estimated distance, correct the estimated first position and attitude to a second position and attitude on an actual scale.
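
Monocular odometry recovers translation only up to an unknown scale; the abstract's correction step can be read as rescaling it with a separately estimated metric distance. A minimal sketch of that idea, assuming a median depth-ratio scale estimate (the patent does not specify the rule):

```python
import numpy as np

def correct_scale(translation, est_depths, rel_depths):
    """Rescale a unit-scale visual-odometry translation to metric scale.

    translation: 3-vector estimated up to an unknown scale
    est_depths:  metric depths estimated for points on the subject
    rel_depths:  the same points' depths in the odometry's relative scale
    """
    # The median ratio is a robust estimate of the metric/relative scale factor.
    scale = np.median(np.asarray(est_depths) / np.asarray(rel_depths))
    return scale * np.asarray(translation)

# Example: relative depths are half the metric depths, so the scale factor is 2.
t = correct_scale([0.1, 0.0, 0.5], [4.0, 6.0, 8.0], [2.0, 3.0, 4.0])
```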

Motion Capture and Character Synthesis

In some examples, a computing device can determine synthetic meshes based on source meshes of a source mesh sequence and target meshes of a target mesh sequence. The computing device can then place the respective synthetic meshes based at least in part on a rigid transformation to define a processor-generated character. For example, the computing device can determine subsets of the mesh sequences based on a similarity criterion. The computing device can determine modified first and second meshes having a connectivity corresponding to a reference mesh. The computing device can then determine the synthetic meshes based on the modified first and second meshes. In some examples, the computing device can project source and target textures onto the synthetic mesh to provide projected source and target textures. The computing device can determine a synthetic texture registered to the synthetic mesh based on the projected source and target textures.
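
Placing a synthetic mesh "based at least in part on a rigid transformation" amounts to rotating and translating its vertices. A minimal sketch, with an illustrative rotation matrix and offset (names and values are assumptions, not from the source):

```python
import numpy as np

def place_mesh(vertices, rotation, translation):
    """Place a synthetic mesh by applying a rigid transformation
    (rotation followed by translation) to every vertex."""
    V = np.asarray(vertices)
    return V @ np.asarray(rotation).T + np.asarray(translation)

# 90-degree rotation about the z axis, then a unit shift along x.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
placed = place_mesh([[1.0, 0.0, 0.0]], R, [1.0, 0.0, 0.0])
```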

Method and apparatus for mammographic multi-view mass identification

A method, applied to an apparatus for mammographic multi-view mass identification, includes receiving a main image, a first auxiliary image, and a second auxiliary image. The main image and the first auxiliary image are images of a breast of a person, and the second auxiliary image is an image of another breast of the person. The method further includes detecting the nipple location based on the main image and the first auxiliary image; generating a first probability map of the main image based on the main image, the first auxiliary image, and the nipple location; generating a second probability map of the main image based on the main image, the second auxiliary image, and the nipple location; and generating and outputting a fused probability map based on the first probability map and the second probability map.
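
The final step combines two per-pixel probability maps into one fused map. A minimal sketch, assuming a weighted average as the fusion rule (the abstract does not specify one):

```python
import numpy as np

def fuse_probability_maps(p1, p2, w=0.5):
    """Fuse two per-pixel mass-probability maps with a weighted average.
    The weight w is an illustrative assumption."""
    return w * np.asarray(p1) + (1.0 - w) * np.asarray(p2)

fused = fuse_probability_maps([[0.2, 0.8]], [[0.4, 0.6]])
```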

TEM-based metrology method and system

A metrology method for use in determining one or more parameters of a three-dimensional patterned structure, the method including performing a fitting procedure between measured TEM image data of the patterned structure and simulated TEM image data of the patterned structure, determining a measured lamella position of at least one measured TEM image in the TEM image data from a best-fit condition between the measured and simulated data, and generating output data indicative of the simulated TEM image data corresponding to the best-fit condition to thereby enable determination therefrom of the one or more parameters of the structure.
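
The fitting procedure can be read as selecting, among simulated images indexed by lamella position, the one closest to the measurement. A minimal sketch using a sum-of-squared-differences criterion (an assumption; the patent does not name the fit metric):

```python
import numpy as np

def best_fit_position(measured, simulated_by_position):
    """Return the lamella position whose simulated TEM image best matches
    the measured image under a least-squares criterion."""
    m = np.asarray(measured, dtype=float)
    errors = {pos: float(np.sum((m - np.asarray(sim)) ** 2))
              for pos, sim in simulated_by_position.items()}
    return min(errors, key=errors.get)

# Hypothetical simulated images keyed by lamella position (nm).
sims = {10: [[1.0, 2.0]], 20: [[1.1, 2.1]], 30: [[2.0, 3.0]]}
pos = best_fit_position([[1.1, 2.1]], sims)
```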

SYSTEMS AND METHODS FOR MAPPING AN ENVIRONMENT
20180012370 · 2018-01-11 ·

A method for mapping an environment by an electronic device is described. The method includes obtaining a set of sensor measurements. The method also includes determining a set of voxel occupancy probability distributions respectively corresponding to a set of voxels based on the set of sensor measurements. Each of the voxel occupancy probability distributions represents a probability of occupancy of a voxel over a range of occupation densities. The range includes partial occupation densities.
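
Each voxel here carries a distribution over occupation densities rather than a single occupied/free probability. A minimal sketch of updating such a distribution with a sensor measurement, assuming a discrete density grid and a Bayes update (both assumptions for illustration):

```python
import numpy as np

def update_density_distribution(prior, likelihoods):
    """Bayes update of a voxel's distribution over occupation densities.
    prior and likelihoods are arrays over a discrete set of densities,
    e.g. 0.0, 0.5, 1.0 for empty, partially occupied, and full."""
    post = np.asarray(prior) * np.asarray(likelihoods)
    return post / post.sum()

# Uniform prior over three densities; a measurement favouring partial occupancy.
post = update_density_distribution([1/3, 1/3, 1/3], [0.2, 0.6, 0.2])
```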

Asset tracking systems

The disclosed technology includes image-based systems and methods for object tracking within an asset area. Some exemplary methods include receiving an indication of a first object entering an asset area and receiving data indicative of a plurality of captured images. The methods also include performing, by at least one processor, object classification of the first object based on one or more of the plurality of captured images. The methods further include determining a first object location of the first object based at least in part on the object classification, and outputting an indication of the first object location.

SYSTEMS, PROCESSES AND DEVICES FOR OCCLUSION DETECTION FOR VIDEO-BASED OBJECT TRACKING
20180012078 · 2018-01-11 ·

Processes, systems, and devices for occlusion detection for video-based object tracking (VBOT) are described herein. Embodiments process video frames to compute histogram data and depth level data for the object to detect a subset of video frames for occlusion events and generate output data that identifies each video frame of the subset of video frames for the occlusion events. Threshold measurement values are used to attempt to reduce or eliminate false positives to increase processing efficiency.
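
The histogram-based detection step can be sketched as flagging frames whose object histogram drifts too far from a reference. The L1 distance and the threshold value below are illustrative assumptions, not details from the source:

```python
import numpy as np

def occlusion_frames(histograms, reference, threshold=0.5):
    """Flag frames whose object histogram deviates from the reference
    histogram by more than a threshold (L1 distance, after normalization)."""
    ref = np.asarray(reference, dtype=float) / np.sum(reference)
    flagged = []
    for i, h in enumerate(histograms):
        h = np.asarray(h, dtype=float) / np.sum(h)
        if np.abs(h - ref).sum() > threshold:
            flagged.append(i)  # likely occlusion event in this frame
    return flagged

# Frame 1's histogram shifts sharply, suggesting the object is occluded.
frames = occlusion_frames([[4, 4], [1, 7]], reference=[1, 1])
```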

Quantification of an influence of scattered radiation in a tomographic analysis
11707245 · 2023-07-25 ·

Systems and methods for quantification of an influence of scattered radiation in the analysis of an object based on a projection image are provided. Based on the projection image and on a characteristic of a tomography facility and/or of the object relating to the influence of the scattered radiation, at least one intermediate image is created. The at least one intermediate image is analyzed using an artificial neural network to quantify the influence of the scattered radiation.

INFORMATION PROCESSING DEVICE, MOUNTING DEVICE, AND INFORMATION PROCESSING METHOD
20230237667 · 2023-07-27 ·

An information processing device is used in a mounting device that picks up a component and places it on a substrate. In the information processing device, a control section acquires a captured image including the component, sets multiple detection lines in the acquired image for detecting brightness differences within the component, and obtains a reference position of an outer edge portion from the multiple detection lines. Next, for each of one or more outer edge portion candidates on a detection line, the control section computes a candidate value by applying a predetermined weight coefficient that tends to decrease as the distance from the reference position increases. The control section then selects the position of the outer edge portion on the detection line based on the computed candidate values.
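
The candidate-scoring step can be sketched as weighting each edge candidate's brightness difference by a coefficient that decays with distance from the reference position. The Gaussian form of the weight and its width are assumptions; the patent only says the weight tends to decrease with distance:

```python
import math

def select_edge_position(candidates, reference, sigma=5.0):
    """Score outer-edge candidates on a detection line and pick the best.
    Each candidate is a (position, brightness_difference) pair; the score
    weights the brightness difference by a distance-decaying coefficient."""
    def score(c):
        pos, strength = c
        weight = math.exp(-((pos - reference) ** 2) / (2 * sigma ** 2))
        return weight * strength
    return max(candidates, key=score)[0]

# A strong but distant candidate loses to a moderate one near the reference.
best = select_edge_position([(10, 0.9), (30, 1.0)], reference=12)
```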

ASSESSMENT OF PROBABILITY OF BONE FRACTURE
20230238136 · 2023-07-27 ·

A patient-specific assessment of fracture probability for the proximal end of the femur is provided. 3D locations of the femoral head center, a point on the femoral shaft centerline, and the femoral intercondylar notch are determined from a clinical image. A frontal plane, a perpendicular thereto, and a bone shaft axis are determined from the 3D locations. An FEA coordinate system is defined from the frontal plane, the perpendicular, and the axis. Two FEA analyses are performed, one for neck fracture and one for pertrochanteric fracture, with the same displacement constraints and the same load magnitude but different load angles. The proximal end of the femur is divided into four anatomically based regions. For each region and each load, maximum tensile and compressive principal strains are determined and, based on the body weight and the principal strains, a likelihood of fracture is obtained. The minimum of these eight likelihoods gives the probability of fracture.
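
The combination step above reduces to taking the minimum over the four regions times two load cases. A minimal sketch with hypothetical likelihood values:

```python
def fracture_probability(likelihoods):
    """Combine per-region, per-load fracture likelihoods (4 regions x 2
    load angles = 8 values) by taking their minimum, as the abstract
    describes."""
    assert len(likelihoods) == 8, "expected 4 regions x 2 load cases"
    return min(likelihoods)

# Hypothetical likelihoods for illustration only.
p = fracture_probability([0.9, 0.7, 0.8, 0.6, 0.5, 0.95, 0.85, 0.75])
```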