Patent classifications
G06K9/52
APPARATUS AND METHOD FOR VISUALIZING TISSUE MACRO- AND MICROSTRUCTURE FOR PATHOLOGY EVALUATION IN MAGNETIC RESONANCE IMAGING
A method improves the detection of a brain tissue pathology in magnetic resonance (MR) images of a patient. The method includes acquiring multiple MR imaging data for creating four different quantitative contrast maps of a patient brain. From the multiple MR imaging data, an estimation of gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) concentration is performed for each voxel of a part of the patient brain, and the part of the patient brain is segmented into different regions of interest (ROIs) according to a chosen atlas. For each voxel of each of the contrast maps, a deviation score is computed for the part of the patient brain. The method further includes creating, from the deviation score and for each of the quantitative contrast maps, a deviation map representing the part of the brain in dependence on the deviation score calculated for each voxel.
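The abstract does not define the deviation score; a common choice for such normative comparisons is a per-voxel z-score against per-ROI reference statistics. The sketch below assumes that interpretation, with flat voxel lists and hypothetical `normative_mean`/`normative_std` dictionaries standing in for a healthy-cohort database:

```python
def deviation_map(contrast_map, roi_labels, normative_mean, normative_std):
    """Per-voxel deviation (z-) score for one quantitative contrast map.

    contrast_map : flat list of quantitative values, one per voxel
    roi_labels   : flat list of atlas ROI labels (0 = background)
    normative_mean / normative_std : dicts, ROI label -> reference statistics
    """
    scores = []
    for value, roi in zip(contrast_map, roi_labels):
        if roi == 0:                      # background voxel: no score
            scores.append(0.0)
        else:
            scores.append((value - normative_mean[roi]) / normative_std[roi])
    return scores
```

One such map would be produced per quantitative contrast, yielding the four deviation maps the abstract describes.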
TOOTH TYPE JUDGMENT PROGRAM, TOOTH TYPE POSITION JUDGMENT DEVICE AND METHOD OF THE SAME
The tooth type judgment program includes: extracting point groups indicating a surface of three-dimensional profile data from inputted three-dimensional profile data; moving and/or rotating the three-dimensional profile data of a tooth corresponding to a specific type of tooth; calculating an arrangement relationship in which an error between a point group included in a region of the extracted point groups and the three-dimensional profile data of the tooth becomes minimum; and estimating a direction of the tooth included in the region based on the calculated arrangement relationship.
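The move/rotate-until-the-error-is-minimum step can be illustrated, in a deliberately simplified 2-D form, as a grid search over candidate rotations and translations of a tooth template, scoring each pose by mean nearest-neighbour distance to the scanned points. This is a sketch of the general idea only; the names and the exhaustive search are illustrative, not the patented procedure:

```python
import math

def alignment_error(points, template):
    """Mean nearest-neighbour distance from template points to scan points."""
    total = 0.0
    for t in template:
        total += min(math.dist(t, p) for p in points)
    return total / len(template)

def best_pose(points, template, angles, shifts):
    """Grid-search the rotation/translation minimising the alignment error.
    Returns (error, angle, (dx, dy)) for the best candidate pose."""
    best = None
    for a in angles:
        c, s = math.cos(a), math.sin(a)
        for dx, dy in shifts:
            moved = [(c * x - s * y + dx, s * x + c * y + dy) for x, y in template]
            err = alignment_error(points, moved)
            if best is None or err < best[0]:
                best = (err, a, (dx, dy))
    return best
```

In practice an iterative registration method (e.g. ICP) would replace the grid search, but the minimised quantity is the same.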
GESTURE CONTROL DEVICE AND METHOD
A gesture control system, for determining which one of a plurality of electronic devices is to be controlled by a gesture, acquires images of the gesture from each of the electronic devices; establishes a three-dimensional coordinate system for the gesture images; and calculates an angle between a first vector, from a start point of the gesture to a center point of each electronic device, and a second vector, from an end point of the gesture to the center point of each electronic device. The electronic device intended as the object to be controlled by the gesture can thereby be determined, according to whether the angle between the first vector and the second vector is less than a preset value. A gesture control method is also provided.
GESTURE CONTROL DEVICE AND METHOD
A device for recognizing control gestures and determining which one device out of a plurality is the target of control acquires images of a gesture from each electronic device. A three-dimensional coordinate system for each image is established, and the coordinates of a central point of each electronic device are determined. The extent of the gesture to the left and to the right at different depths is determined, and a regression plane equation is calculated. A distance between the regression plane and the center point of each electronic device is determined, and the electronic device with the closest center point (the shortest distance) is determined as the target device of the control gesture. A gesture control method is also provided.
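As a simplified illustration of the regression-plane step, the sketch below fits a least-squares line x = a·z + b through (lateral, depth) gesture samples, treats it as a vertical plane (independent of height), and picks the device center nearest that plane. The vertical-plane assumption and all names are simplifications for illustration:

```python
import math

def fit_vertical_plane(samples):
    """Least-squares line x = a*z + b through (x, z) gesture samples,
    treated as a vertical plane in 3-D."""
    n = len(samples)
    sx = sum(x for x, z in samples)
    sz = sum(z for x, z in samples)
    sxz = sum(x * z for x, z in samples)
    szz = sum(z * z for x, z in samples)
    a = (n * sxz - sx * sz) / (n * szz - sz * sz)
    b = (sx - a * sz) / n
    return a, b

def closest_device(plane, centers):
    """Index of the device center (x, y, z) nearest the plane, using the
    point-to-line distance |x - (a*z + b)| / sqrt(1 + a^2)."""
    a, b = plane
    dists = [abs(x - (a * z + b)) / math.sqrt(1 + a * a) for x, _, z in centers]
    return dists.index(min(dists))
```

A gesture swept straight ahead (all samples at x = 0) selects the device whose center is laterally closest to that plane.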
IMAGE PROCESSING SYSTEM AND METHOD OF PROCESSING IMAGES
The disclosure relates to systems and methods for processing images. The method includes selecting a predetermined reference structure, the predetermined reference structure having a known feature size/shape. The method also includes obtaining a reference image of the predetermined reference structure, and capturing a calibration image of the predetermined reference structure using an observation device. The calibration image includes a plurality of features. Additionally, the method includes identifying at least one portion of the plurality of features of the calibration image that includes a feature size/shape substantially similar to the known feature size/shape of the predetermined reference structure. Finally, the method includes combining the identified portion of the plurality of features of the calibration image to form a stacked feature image, and determining a point spread function (PSF) of the observation device by comparing the obtained reference image with the stacked feature image.
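The "stacked feature image" step can be sketched as averaging same-size patches cut around each matched feature: averaging suppresses noise while the blur common to every feature (the PSF footprint) survives. This is a minimal sketch of the stacking alone, assuming integer feature centers and a grayscale image as a list of rows; the PSF comparison against the reference image is not shown:

```python
def stack_features(image, centers, half):
    """Average square patches of side 2*half + 1 around each feature
    center (row, col); returns the stacked feature image."""
    size = 2 * half + 1
    acc = [[0.0] * size for _ in range(size)]
    for cy, cx in centers:
        for dy in range(-half, half + 1):
            for dx in range(-half, half + 1):
                acc[dy + half][dx + half] += image[cy + dy][cx + dx]
    n = len(centers)
    return [[v / n for v in row] for row in acc]
```

The PSF would then be estimated by comparing this stacked image with the (sharp) reference image, e.g. via deconvolution in the Fourier domain.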
INFORMATION PROCESSOR AND MOVABLE BODY APPARATUS
According to one embodiment, an information processor includes a memory and processing circuitry. The circuitry receives area information indicating a second area in a first area around a movable body apparatus and third areas in the first area, wherein the movable body apparatus is movable in the second area and an object is present in each of the third areas. The circuitry receives movement information including at least one of a velocity, a movement direction, or an acceleration of the apparatus. The circuitry acquires evaluation values, each indicative of the damage to be caused when the apparatus collides with the object in the corresponding third area, and determines, based on the evaluation values, a position corresponding to a first object which would cause the least damage.
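The final selection step is an argmin over the evaluation values. A minimal sketch, assuming each occupied third area is represented by a position and an object kind, and that a damage table stands in for the (unspecified) evaluation model:

```python
def least_damage_position(areas, damage):
    """Position of the object whose predicted collision damage is least.

    areas  : list of (position, object_kind) pairs, one per third area
    damage : dict mapping object_kind -> evaluation value (expected damage)
    """
    best_pos, best_val = None, float("inf")
    for pos, kind in areas:
        if damage[kind] < best_val:
            best_pos, best_val = pos, damage[kind]
    return best_pos
```

In the full system the evaluation values would also depend on the movement information (velocity, direction, acceleration) rather than on object kind alone.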
Lane mark recognition device
A lane mark recognition device detects an edge located in a proximal area in front of the vehicle, and determines a lane mark candidate on the basis of the detected edge and an edge that is located in a distal area farther than the proximal area in front of the vehicle and that is continuous with the detected edge. Therefore, an edge of another vehicle (a leading vehicle) or the like located in the distal area of a captured image is not detected as a lane mark candidate (it is excluded as a non-lane-mark candidate).
External recognition apparatus and excavation machine using external recognition apparatus
An external recognition apparatus and an excavation machine using the external recognition apparatus, the external recognition apparatus including: a three-dimensional distance measurement device configured to acquire distance information in a three-dimensional space in a predetermined region which is under a hydraulic shovel and which includes a region to be excavated by the hydraulic shovel; a plane surface estimation unit configured to estimate a plane surface in the predetermined region based on the distance information; and an excavation object region recognition unit configured to recognize the region to be excavated in the predetermined region based on the plane surface and the distance information.
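The plane-estimation step can be illustrated with a least-squares fit of z = a·x + b·y + c to the measured 3-D points, after which points deviating from the plane beyond a tolerance are flagged as candidate excavation-target (non-ground) regions. The fit below solves the 3x3 normal equations by Cramer's rule; all names and the threshold are illustrative assumptions:

```python
def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c through 3-D points."""
    sx = sy = sz = sxx = syy = sxy = sxz = syz = 0.0
    n = float(len(points))
    for x, y, z in points:
        sx += x; sy += y; sz += z
        sxx += x * x; syy += y * y; sxy += x * y
        sxz += x * z; syz += y * z
    # Normal equations  A * [a, b, c]^T = v, solved by Cramer's rule.
    A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    v = [sxz, syz, sz]
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    def solve(i):
        m = [row[:] for row in A]
        for r in range(3):
            m[r][i] = v[r]
        return det(m) / d
    return solve(0), solve(1), solve(2)   # a, b, c

def excavation_points(points, plane, tol=0.05):
    """Points deviating from the ground plane by more than tol."""
    a, b, c = plane
    return [p for p in points if abs(p[2] - (a * p[0] + b * p[1] + c)) > tol]
```

A robust estimator such as RANSAC would normally be preferred over plain least squares when the scene contains the excavation target itself.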
Object recognition
Approaches introduce a pre-processing and post-processing framework to a neural network-based approach to identify items represented in an image. For example, a classifier that is trained on several categories can be provided. An image that includes a representation of an item of interest is obtained. Rotated versions of the image are generated, and each of a subset of the rotated images is analyzed to determine a probability that a respective image includes an instance of a particular category. The probabilities can be used to determine a probability distribution of output category data, and the data can be analyzed to select an image of the rotated versions of the image. Thereafter, a categorization tree can be utilized, whereby for the item of interest represented in the image, the category of the item can be determined. The determined category can be provided to an item retrieval algorithm to determine primary content for the item of interest. This information can also be used to determine recommendations, advertising, or other supplemental content, within a specific category, to be displayed with the primary content.
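The rotate-classify-select loop reduces to: generate rotated versions, score each with the classifier, and keep the rotation whose top category probability is highest. A minimal sketch with the rotation and the classifier passed in as callables (both stand-ins for the real neural-network components):

```python
def best_orientation(image, rotate, classify, angles):
    """Rotate the image by each candidate angle, classify, and keep the
    rotation with the highest top-category probability.

    rotate   : callable(image, angle) -> rotated image
    classify : callable(image) -> dict of category -> probability
    Returns (probability, angle, category) of the best rotation.
    """
    best = None
    for angle in angles:
        probs = classify(rotate(image, angle))
        category, p = max(probs.items(), key=lambda kv: kv[1])
        if best is None or p > best[0]:
            best = (p, angle, category)
    return best
```

The selected image and category would then feed the categorization tree and the downstream item retrieval step.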
METHOD AND SYSTEM FOR ALIGNING A TAXI-ASSIST CAMERA
Apparatus and associated methods relate to aligning a taxi-assist camera such that each image frame of real-time video that the camera generates has a standard presentation format. The taxi-assist camera is configured to be mounted on an aircraft and oriented such that each image frame includes both a specific feature of the aircraft and nearby objects external to the aircraft. The specific feature of the aircraft is detected and its location within the image frame is determined. The determined location within the image frame is compared with a reference location. A transformation operator is generated to transform the image frame such that the specific feature of the aircraft will be located within the image at a location corresponding to the reference location. The transformation operator is then applied to each of the image frames of the real-time video that the camera generates.
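In its simplest form, the transformation operator is the offset that maps the detected feature location onto the reference location, applied to every frame thereafter. The sketch below assumes an integer pixel translation and a frame stored as a list of rows; a real system would likely use a more general transform (rotation, scale) and hardware-accelerated warping:

```python
def make_translation(detected, reference):
    """Offset (dx, dy) moving the detected feature pixel location (x, y)
    onto the reference location."""
    return (reference[0] - detected[0], reference[1] - detected[1])

def apply_translation(frame, offset, fill=0):
    """Shift a 2-D frame by an integer (dx, dy) offset, padding exposed
    pixels with `fill`."""
    dx, dy = offset
    h, w = len(frame), len(frame[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sy, sx = y - dy, x - dx
            if 0 <= sy < h and 0 <= sx < w:
                out[y][x] = frame[sy][sx]
    return out
```

Once computed against the reference location, the same offset is reapplied to each incoming frame of the real-time video.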