Patent classifications
G06V10/803
Method and system for image processing
A method and system for image processing are provided. The method may include: obtaining an image data set that includes a first set of volume data; and determining, by at least one processor, a target anatomy of interest based on the first set of volume data. The determining of the target anatomy of interest may include: determining an initial anatomy of interest in the first set of volume data; and editing the initial anatomy of interest to obtain the target anatomy of interest. Each of the initial and target anatomies of interest may include at least one region of interest (ROI) or at least one volume of interest (VOI).
SYSTEM AND METHOD TO FUSE MULTIPLE SOURCES OF OPTICAL DATA TO GENERATE A HIGH-RESOLUTION, FREQUENT AND CLOUD-/GAP-FREE SURFACE REFLECTANCE PRODUCT
Aspects of the subject disclosure may include, for example, performing, by a processing system, image fusion using two or more groups of images to generate predicted images, wherein each of the two or more groups differs from another group in resolution, in temporal frequency pattern, or in both. Gap filling can be performed by the processing system to correct images of the two or more groups. Additional embodiments are disclosed.
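As a rough sketch of the gap-filling idea (not the patented method), cloud or gap pixels marked as NaN in one image can be filled from a co-registered image of an adjacent date, after a crude relative-radiometric normalization; the `fill_gaps` name and the mean-ratio scaling heuristic are illustrative assumptions:

```python
import numpy as np

def fill_gaps(target, reference):
    """Fill cloud/gap pixels (marked NaN) in `target` using the co-registered
    `reference` image from an adjacent date, scaled by the ratio of the two
    images' clear-pixel means (a crude relative-radiometric normalization)."""
    target = np.asarray(target, dtype=float)
    reference = np.asarray(reference, dtype=float)
    clear = ~np.isnan(target) & ~np.isnan(reference)   # pixels clear in both
    scale = target[clear].mean() / reference[clear].mean()
    filled = target.copy()
    gaps = np.isnan(target) & ~np.isnan(reference)     # fillable gap pixels
    filled[gaps] = reference[gaps] * scale
    return filled
```

The scale factor compensates for illumination or sensor differences between the two acquisition dates before copying values across.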
IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM
An image processing device according to one aspect of the present disclosure includes: at least one memory storing a set of instructions; and at least one processor configured to execute the instructions to: receive a visible image of a face; receive a near-infrared image of the face; adjust the brightness of the visible image based on the frequency distributions of pixel values of the visible and near-infrared images; specify a relative position that aligns the visible image with the near-infrared image; invert the adjusted brightness of the visible image; detect a pupil region from a synthetic image obtained by adding, based on the relative position, the brightness-inverted visible image and the near-infrared image; and output information on the detected pupil.
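The inversion-and-addition pipeline described above can be sketched as follows, with the distribution-based brightness adjustment reduced to a simple mean-matching heuristic and the images assumed pre-aligned; `adjust_brightness`, `detect_pupil`, and the threshold value are illustrative, not from the patent:

```python
import numpy as np

def adjust_brightness(visible, nir):
    # Mean-matching stand-in for aligning the two pixel-value distributions.
    scale = nir.mean() / max(visible.mean(), 1e-6)
    return np.clip(visible * scale, 0.0, 255.0)

def detect_pupil(visible, nir, threshold=300.0):
    """Invert the brightness-adjusted visible image, add the NIR image
    (assumed pre-aligned), and flag bright pixels of the sum as the pupil:
    the dark pupil becomes bright after inversion, and the NIR bright-pupil
    reflection reinforces it."""
    inverted = 255.0 - adjust_brightness(visible, nir)
    synthetic = inverted + nir
    return synthetic >= threshold   # boolean pupil mask

# Toy 4x4 face crop: pupil pixels are dark in visible, bright in NIR.
visible = np.full((4, 4), 200.0); visible[1:3, 1:3] = 20.0
nir = np.full((4, 4), 100.0); nir[1:3, 1:3] = 220.0
mask = detect_pupil(visible, nir)
```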
Head-Mounted Display With Low Light Operation
A display system includes a controller and a head-mounted display. The head-mounted display includes a display, a head support coupled to the display for supporting the display on a head of a user to be viewed by the user, and sensors coupled to the head support for sensing an environment from the head-mounted display unit in low light. The sensors include one or more of an infrared sensor for sensing the environment with infrared electromagnetic radiation, or a depth sensor for sensing distances to objects of the environment, and also include an ultrasonic sensor for sensing the environment with ultrasonic sound waves. The controller determines graphical content according to the sensing of the environment with the one or more of the infrared sensor or the depth sensor and with the ultrasonic sensor, and operates the display to provide the graphical content concurrent with the sensing of the environment.
Ophthalmologic apparatus, and method of controlling the same
An ophthalmologic apparatus of an example embodiment includes a front image acquiring device, a first search processor, and a second search processor. The front image acquiring device is configured to acquire a front image of the fundus of a subject's eye. The first search processor is configured to search for a region of interest corresponding to a site of interest of the fundus based on brightness variation in the front image. The second search processor is configured to search for the region of interest by template matching between the front image and a template image when the first search processor has not detected it.
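A minimal sketch of the two-stage search, assuming a peak-brightness heuristic for the first stage and brute-force sum-of-squared-differences matching for the second; function names and the contrast threshold are invented for illustration:

```python
import numpy as np

def search_by_brightness(img, contrast_min=50.0):
    """First stage: take the peak-brightness pixel as the site of interest,
    but only if it stands out from the image mean by at least contrast_min."""
    idx = np.unravel_index(np.argmax(img), img.shape)
    if img[idx] - img.mean() < contrast_min:
        return None   # not detected; caller falls back to template matching
    return idx

def search_by_template(img, template):
    """Second stage: brute-force sum-of-squared-differences template matching;
    returns the top-left corner of the best match."""
    th, tw = template.shape
    best, best_pos = np.inf, (0, 0)
    for r in range(img.shape[0] - th + 1):
        for c in range(img.shape[1] - tw + 1):
            ssd = np.sum((img[r:r + th, c:c + tw] - template) ** 2)
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

def find_roi(img, template):
    pos = search_by_brightness(img)
    return pos if pos is not None else search_by_template(img, template)
```

In practice the second stage would use a normalized correlation measure (e.g. OpenCV's `cv2.matchTemplate`) rather than raw SSD.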
Systems and methods of data augmentation for pre-trained embeddings
Systems and methods are provided for generating textual embeddings by tokenizing text data and generating vectors to be provided to a transformer system, where the textual embeddings are vector representations of the semantic meaning of the text data. The vectors may be averaged over every token of the generated textual embeddings, and the average output activations of two layers of the transformer system may be concatenated. Image embeddings may be generated from image data with a convolutional neural network (CNN), wherein the image embeddings are vector representations of the images in the image data. The textual embeddings and image embeddings may be combined to form combined embeddings to be provided to the transformer system.
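The pooling and concatenation steps can be sketched with plain arrays standing in for transformer layer activations; `pool_text_embedding` and `combine` are assumed names, and concatenation is only one possible way to "combine" the two modalities:

```python
import numpy as np

def pool_text_embedding(layer_a, layer_b):
    """Average the per-token activations of two transformer layers, then
    concatenate the two averages into a single text embedding.
    layer_a, layer_b: (num_tokens, hidden) activation matrices."""
    return np.concatenate([layer_a.mean(axis=0), layer_b.mean(axis=0)])

def combine(text_emb, image_emb):
    # Concatenation as one simple combination operator for the two modalities.
    return np.concatenate([text_emb, image_emb])
```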
Identity authentication method based on biometric feature, and identity authentication system thereof
The present invention relates to a biometric-based identity authentication method and system. The method includes: obtaining the mobile terminal numbers of all users entering a specified area through a base station associated with that area, to generate a first mobile terminal number list; recognizing a user's biometrics and, based on a pre-established binding between users' biometrics and their mobile terminal numbers, obtaining a second mobile terminal number list composed of the n mobile terminal numbers most similar to the recognized biometrics; and comparing the first list with the second list. When the intersection of the two lists is a single mobile terminal number, the user of that number is determined to be successfully authenticated; when the intersection contains more than one number, the user of the number with the highest biometric similarity in the intersection is determined to be successfully authenticated. In this way, the candidate pool N for face recognition is narrowed, and 1:N face recognition can be completed while the user carries only a mobile phone, with no additional operations.
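The intersection-and-highest-similarity rule is straightforward to sketch; the function name and data shapes below are illustrative assumptions:

```python
def authenticate(area_numbers, biometric_matches):
    """area_numbers: mobile numbers seen by the area's base station (first list).
    biometric_matches: [(number, similarity), ...] for the top-n biometric
    candidates (second list). Returns the authenticated number, or None
    when the intersection of the two lists is empty."""
    in_area = set(area_numbers)
    candidates = [(num, sim) for num, sim in biometric_matches if num in in_area]
    if not candidates:
        return None
    if len(candidates) == 1:
        return candidates[0][0]
    # More than one number in the intersection: pick the highest similarity.
    return max(candidates, key=lambda c: c[1])[0]
```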
Systems and methods for constructing and utilizing field-of-view (FOV) information
Described herein are systems, methods, and non-transitory computer readable media for constructing and utilizing vehicle field-of-view (FOV) information. The FOV information can be utilized in connection with vehicle localization such as localization of an autonomous vehicle (AV), sensor data fusion, or the like. A customized computing machine can be provided that is configured to construct and utilize the FOV information. The customized computing machine can utilize the FOV information, and more specifically, FOV semantics data included therein to manage various data and execution patterns relating to processing performed in connection with operation of an AV such as, for example, data prefetch operations, reordering of sensor data input streams, and allocation of data processing among multiple processing cores.
Plant identification using heterogeneous multi-spectral stereo imaging
A farming machine identifies and treats a plant as the farming machine travels through a field. The farming machine includes a pair of image sensors for capturing images of a plant. The image sensors are different, and their output images are used to generate a depth map to improve the plant identification process. A control system identifies a plant using the depth map. The control system captures images, identifies a plant, and actuates a treatment mechanism in real time.
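A hedged sketch of how a depth map from the stereo pair could feed plant identification, using the standard pinhole relation depth = f·B/d and a height-above-ground test; the focal length, baseline, and height threshold are invented for illustration:

```python
import numpy as np

def depth_map(disp, focal_px=700.0, baseline_m=0.12):
    """Pinhole stereo relation depth = f * B / d applied to a disparity map
    (in pixels); invalid (non-positive) disparities map to infinity."""
    disp = np.asarray(disp, dtype=float)
    depth = np.full(disp.shape, np.inf)
    valid = disp > 0
    depth[valid] = focal_px * baseline_m / disp[valid]
    return depth

def plant_mask(depth, ground_depth, min_height_m=0.05):
    # A pixel is "plant" if it stands at least min_height_m above the ground.
    return (ground_depth - depth) >= min_height_m
```

With heterogeneous sensors, the two images would first need rectification and intensity normalization before disparity can be computed at all.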
Method for Assessing Damage of Vehicle, Apparatus for Assessing Damage of Vehicle, and Electronic Device Using Same
A method for assessing vehicle damage, an apparatus for assessing vehicle damage, and an electronic device using the same are provided. The method includes: acquiring vehicle images; processing the vehicle images with a first model to obtain a component identification result that includes a component name and at least one of a component region and a component mask of a vehicle component; processing the vehicle images with a second model to obtain a damage identification result that includes a damage morphology and at least one of a damage region and a damage region mask of the vehicle component; and fusing the component identification result and the damage identification result to obtain a damage assessment result.
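One simple way to fuse the two models' outputs (the abstract does not fix the fusion operator) is to assign each detected damage to the component whose region it overlaps most; boxes are (x1, y1, x2, y2) and all names are illustrative:

```python
def overlap(a, b):
    # Rectangles as (x1, y1, x2, y2); returns the intersection area.
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(w, 0) * max(h, 0)

def fuse(components, damages):
    """components: [(name, box)] from the first model;
    damages: [(morphology, box)] from the second model.
    Assigns each damage to the component whose box it overlaps most,
    yielding (component name, damage morphology) pairs."""
    report = []
    for morph, dbox in damages:
        best = max(components, key=lambda c: overlap(c[1], dbox), default=None)
        if best is not None and overlap(best[1], dbox) > 0:
            report.append((best[0], morph))
    return report
```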