Patent classifications
G06T2207/10141
METHOD FOR DETERMINING OBJECT INFORMATION RELATING TO AN OBJECT IN A VEHICLE ENVIRONMENT, CONTROL UNIT AND VEHICLE
The disclosure relates to a method for determining object information relating to an object in an environment of a vehicle having a camera. The method includes: capturing the environment with the camera from a first position; changing the position of the camera; capturing the environment with the camera from a second position; determining object information relating to an object by selecting at least one first pixel in the first image and at least one second pixel in the second image, by selecting the first pixel and the second pixel such that they are assigned to the same object point of the object, and determining object coordinates of the assigned object point by triangulation. Changing the position of the camera is brought about by controlling an active actuator system in the vehicle. The actuator system adjusts the camera by an adjustment distance without changing a driving condition of the vehicle.
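The triangulation step described above can be sketched as follows. The midpoint method below is one common way to recover the object coordinates from two camera positions and the viewing rays through the matched pixels; the abstract does not name a specific algorithm, so this choice, and the example geometry, are assumptions.

```python
import numpy as np

def triangulate(p1, d1, p2, d2):
    """Closest point to two rays p_i + t_i * d_i (midpoint method).

    p1, p2: camera centres before/after the actuator adjustment;
    d1, d2: viewing directions toward the object point selected
    via the matched first and second pixels.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Solve for ray parameters t1, t2 minimising the gap between the rays.
    b = p2 - p1
    a = np.array([[d1 @ d1, -d1 @ d2],
                  [d1 @ d2, -d2 @ d2]])
    t1, t2 = np.linalg.solve(a, np.array([d1 @ b, d2 @ b]))
    q1 = p1 + t1 * d1          # closest point on first ray
    q2 = p2 + t2 * d2          # closest point on second ray
    return (q1 + q2) / 2       # object coordinates (midpoint)

# Camera moved by an adjustment distance of 1 along x; object at (0, 0, 5).
p1 = np.array([0.0, 0.0, 0.0]); d1 = np.array([0.0, 0.0, 1.0])
p2 = np.array([1.0, 0.0, 0.0]); d2 = np.array([-1.0, 0.0, 5.0])
print(np.round(triangulate(p1, d1, p2, d2), 3))  # → [0. 0. 5.]
```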
SYSTEM & METHOD FOR HANDBAG AUTHENTICATION
Systems and methods for authenticating handbags using a portable electronic device along with a bilinear convolutional neural network (CNN) model are described. One method includes using a portable electronic device comprising a camera, and a lens-accessory attached to the portable electronic device such that an optical feature of the lens-accessory is positioned in front of the camera. The portable electronic device acquires one or more pictures of a handbag and sends the one or more pictures to a bilinear CNN model via a network asset where an authenticity is determined. The systems and methods disclosed are capable of allowing the portable electronic device to be spaced apart from the handbag while acquiring pictures, and the lens-accessory can provide between 10× and 50× magnification.
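The distinguishing step in a bilinear CNN is pooling the outer product of two feature streams over all image locations before classification. A minimal sketch of that pooling step is below; the backbone networks, feature sizes, and normalisation choices are illustrative assumptions, not details from the abstract.

```python
import numpy as np

def bilinear_pool(fa, fb):
    """Bilinear pooling of two CNN feature maps.

    fa: (H*W, Ca) and fb: (H*W, Cb) are per-location feature vectors
    from two convolutional streams. Returns the normalised (Ca*Cb,)
    bilinear descriptor a classifier would score for authenticity.
    """
    # Sum of outer products over all spatial locations.
    b = fa.T @ fb                           # (Ca, Cb)
    x = b.flatten()
    x = np.sign(x) * np.sqrt(np.abs(x))     # signed square-root
    return x / (np.linalg.norm(x) + 1e-12)  # L2 normalisation

rng = np.random.default_rng(0)
fa = rng.standard_normal((14 * 14, 8))   # e.g. conv features, stream A
fb = rng.standard_normal((14 * 14, 8))   # stream B (may share weights)
desc = bilinear_pool(fa, fb)
print(desc.shape)  # → (64,)
```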
Automatic association between physical and visual skin properties
Techniques for automatic association between physical and visual skin properties are provided. A computer system receives a two-dimensional (2D) image and a three-dimensional (3D) image associated with a person and determines surface skin properties using the 2D image and physical skin properties using the 3D image. The computer system generates a skin disorder severity assessment using associations between the surface skin properties and the physical skin properties.
PHOTOGRAPHING CONDITION DETERMINING METHOD FOR METAL STRUCTURE, PHOTOGRAPHING METHOD FOR METAL STRUCTURE, PHASE CLASSIFICATION METHOD FOR METAL STRUCTURE, PHOTOGRAPHING CONDITION DETERMINING DEVICE FOR METAL STRUCTURE, PHOTOGRAPHING DEVICE FOR METAL STRUCTURE, PHASE CLASSIFICATION DEVICE FOR METAL STRUCTURE, MATERIAL PROPERTY ESTIMATING METHOD FOR METAL MATERIAL, AND MATERIAL PROPERTY ESTIMATING DEVICE FOR METAL MATERIAL
A photographing condition determining method includes: photographing a part of a metal structure of a metal material subjected to predetermined sample preparation under a predetermined photographing condition; assigning, to pixels corresponding to one or a plurality of predetermined phases of the metal structure, labels of respective phases for a photographed image; calculating one or more feature values for a pixel to which a label of one of the phases has been assigned; classifying the phases of the metal structure of the image by inputting a calculated feature value to a model, which has been learned in advance using feature values assigned with labels of respective phases as input and labels of the respective phases as output, and acquiring a label of a phase of a pixel corresponding to the input feature value; and determining a photographing condition when other parts of the metal structure are photographed based on a classification result.
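The per-pixel classification step can be illustrated with a stand-in model. The nearest-centroid classifier below substitutes for the pre-trained model in the abstract, and the one-dimensional brightness feature is purely illustrative.

```python
import numpy as np

def train_phase_model(features, labels):
    """Learn one centroid per labelled phase (a nearest-centroid
    stand-in for the pre-trained model in the abstract)."""
    return {p: features[labels == p].mean(axis=0) for p in np.unique(labels)}

def classify_phases(model, features):
    """Assign each pixel's feature vector the label of the nearest phase."""
    phases = list(model)
    centroids = np.stack([model[p] for p in phases])
    d = np.linalg.norm(features[:, None, :] - centroids[None], axis=2)
    return np.array([phases[i] for i in d.argmin(axis=1)])

# Toy per-pixel features (e.g. local brightness) for two labelled phases.
feats = np.array([[0.1], [0.2], [0.9], [0.8]])
labels = np.array([0, 0, 1, 1])
model = train_phase_model(feats, labels)
print(classify_phases(model, np.array([[0.15], [0.85]])))  # → [0 1]
```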
IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD
An image processing apparatus includes an information obtaining unit configured to obtain three-dimensional polarization sensitive tomographic information and three-dimensional motion contrast information about a subject based on tomographic signals of lights having different polarizations, the lights being obtained by splitting a combined light obtained by combining a returned light from the subject illuminated with a measurement light with a reference light corresponding to the measurement light, an obtaining unit configured to obtain a lesion region of the subject using the three-dimensional polarization sensitive tomographic information, and an image generation unit configured to generate an image in which the lesion region is superimposed on a motion contrast image generated using the three-dimensional motion contrast information.
Method and system for processing an image
A method of processing an image is disclosed. The method comprises decomposing the image into a plurality of channels, each being characterized by a different depth-of-field, and accessing a computer readable medium storing an in-focus dictionary defined over a plurality of dictionary atoms, and an out-of-focus dictionary defined over a plurality of sets of dictionary atoms, each set corresponding to a different out-of-focus condition. The method also comprises computing one or more sparse representations of the decomposed image over the dictionaries.
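Computing a sparse representation over a dictionary is typically done with a greedy pursuit. The abstract does not name a solver, so the orthogonal matching pursuit sketch below, with a random dictionary, is an assumption used only to illustrate the step.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: k-sparse code of y over dictionary D.

    D: (n, m) dictionary with unit-norm atom columns; y: (n,) signal.
    Returns x with at most k non-zeros such that D @ x ≈ y.
    """
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
D = rng.standard_normal((16, 32))
D /= np.linalg.norm(D, axis=0)           # unit-norm atoms
y = 2.0 * D[:, 3] - 1.5 * D[:, 10]       # 2-sparse ground truth
x = omp(D, y, k=2)
print(np.count_nonzero(x))  # → 2
```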
COHERENT ILLUMINATION FOR TOUCH POINT IDENTIFICATION
A system includes a sensor to capture multiple images of a portion of a first object illuminated by coherent illumination and a time of capture of each of the images; and a processor to compare two images of the multiple images to identify one or more touch points. Each touch point has a difference in value between the two images that is greater than a threshold. Upon determining a spatial shape formed by the identified touch points that corresponds to a pointing end of a pointing object, the system provides at least one of: i) a touch location of the pointing end relative to the first object, where the touch location is based on the spatial shape formed by the identified touch points, or ii) the time of capture of a second image of the two images that produced the spatial shape.
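The touch-point test reduces to thresholding the per-pixel difference between two captures of the coherently illuminated surface. A minimal sketch with a synthetic image pair is below; the array size and threshold are illustrative.

```python
import numpy as np

def touch_points(img_a, img_b, threshold):
    """Pixels whose value changes by more than `threshold` between two
    captures of the coherently illuminated surface: candidate touch
    points, whose spatial shape is then matched to a pointing end."""
    diff = np.abs(img_a.astype(float) - img_b.astype(float))
    return np.argwhere(diff > threshold)

a = np.zeros((5, 5))
b = np.zeros((5, 5))
b[2, 2] = b[2, 3] = 10          # speckle disturbed by a fingertip contact
pts = touch_points(a, b, threshold=5)
print(pts.tolist())  # → [[2, 2], [2, 3]]
```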
SYSTEM AND METHOD FOR EFFICIENT IDENTIFICATION OF DEVELOPMENTAL ANOMALIES
A system and method for identifying developmental anomalies. The method includes obtaining a first set of at least one multimedia content element showing at least one crop and captured using a first set of at least one capturing parameter; obtaining normal development data for the at least one crop, wherein the normal development data represents at least one normal development characteristic of the at least one crop; analyzing, via machine vision, the first set of at least one multimedia content element to identify a first set of at least one characteristic of the at least one crop; determining, based on the first set of at least one characteristic and the normal development data, whether a suspected anomaly is identified; and verifying whether the suspected anomaly is an anomaly using a second set of at least one multimedia content element captured using a second set of at least one capturing parameter.
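The anomaly-determination step compares measured crop characteristics against the normal development data. The sketch below is a hedged stand-in: the characteristic names and the relative tolerance are illustrative, not taken from the abstract.

```python
def suspected_anomalies(measured, normal, tolerance=0.15):
    """Flag crop characteristics that deviate from the normal
    development data by more than `tolerance` (relative).
    Characteristic names and tolerance are illustrative."""
    flags = {}
    for name, expected in normal.items():
        value = measured.get(name)
        if value is None:
            continue  # characteristic not identified by machine vision
        deviation = abs(value - expected) / expected
        flags[name] = deviation > tolerance
    return flags

normal = {"height_cm": 40.0, "leaf_count": 12}    # expected at this stage
measured = {"height_cm": 30.0, "leaf_count": 12}  # from machine vision
print(suspected_anomalies(measured, normal))
# → {'height_cm': True, 'leaf_count': False}
```

A flagged characteristic would then be re-checked against the second set of content elements, captured with different parameters, before being confirmed as an anomaly.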
SYSTEMS AND METHODS FOR PROGRESSIVE IMAGING
An imaging system includes an imaging unit, a display unit, and at least one processor. The at least one processor is configured to acquire a first type of diagnostic imaging information of the patient; reconstruct a first image using the first type of diagnostic imaging information; if a first stop criterion for terminating imaging is not satisfied, acquire a second type of diagnostic imaging information having an increased level of acquisitional burden; reconstruct a second image; if a second stop criterion for terminating imaging is not satisfied, acquire a third type of diagnostic imaging information having an increased level of acquisitional burden, wherein the patient is maintained on a table of the imaging unit during the acquisition of the second type of diagnostic imaging information, reconstruction of the second image, and acquisition of the third type of diagnostic imaging information; reconstruct a third image; and display the third image.
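The escalation logic above can be sketched as a loop over acquisition modes of increasing acquisitional burden, each followed by reconstruction and a stop-criterion check. The callables below are toy stand-ins for acquisition, reconstruction, and the stop criteria.

```python
def progressive_imaging(acquire_modes, reconstruct, stop):
    """Escalate through imaging modes of increasing acquisitional burden
    until a stop criterion is met; return the last reconstructed image.
    The patient stays on the table throughout the loop."""
    image = None
    for level, acquire in enumerate(acquire_modes):
        data = acquire()              # acquire this type of information
        image = reconstruct(data)     # reconstruct the image at this level
        if stop(image, level):        # diagnostic quality sufficient?
            break
    return image

# Toy stand-ins: each "mode" yields data of increasing quality.
modes = [lambda q=q: q for q in (0.3, 0.6, 0.9)]
result = progressive_imaging(modes,
                             reconstruct=lambda d: d,
                             stop=lambda img, lvl: img >= 0.5)
print(result)  # → 0.6
```

Here the second-level image already satisfies the stop criterion, so the third, most burdensome acquisition is skipped.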