Patent classifications
G06T2207/20132
Systems and methods for displaying medical imaging data
A system for displaying medical imaging data comprising one or more data inputs, one or more processors, and one or more displays. The one or more data inputs are configured for receiving first image data generated by a first medical imaging device, wherein the first image data comprises a field of view (FOV) portion and a non-FOV portion. The one or more processors are configured for identifying the non-FOV portion of the first image data, generating cropped first image data by removing at least a portion of the non-FOV portion, and transmitting the cropped first image data for display in a first portion of the one or more displays and additional information for display in a second portion of the one or more displays.
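The cropping step above could be sketched as follows. This is a minimal illustration, not the patented implementation: it assumes the non-FOV portion is a border of zero-valued pixels, which the abstract does not specify.

```python
import numpy as np

def crop_to_fov(image: np.ndarray) -> np.ndarray:
    """Remove the non-FOV border, assumed here to be zero-valued pixels."""
    # Rows/columns containing at least one non-zero (FOV) pixel
    rows = np.any(image > 0, axis=1)
    cols = np.any(image > 0, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return image[r0:r1 + 1, c0:c1 + 1]

# Example: an 8x8 frame whose FOV occupies a 4x4 central region
frame = np.zeros((8, 8), dtype=np.uint8)
frame[2:6, 2:6] = 100
cropped = crop_to_fov(frame)
print(cropped.shape)  # (4, 4)
```

A real ultrasound or endoscopy feed would need a more robust FOV mask (e.g. thresholding plus morphology), but the bounding-box reduction is the same idea.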
DIGITAL BOOSTER FOR SIGHTS
A digital booster for an optical system includes an image acquisition unit. The image acquisition unit is configured to acquire an image frame from a non-magnified optic. The image frame includes an aiming reticle imposed by the non-magnified optic. The digital booster includes a display and a processor. The processor is configured to locate the aiming reticle on the image frame, select a sub-frame of the image frame that is centered on the aiming reticle and has a set aspect ratio, perform image inversion and rescaling of the sub-frame, and transmit the sub-frame to the display.
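The crop-invert-rescale pipeline could be sketched like this. It is a simplified sketch under assumptions the abstract does not state: inversion is modeled as a 180-degree flip, and rescaling as integer nearest-neighbor upsampling.

```python
import numpy as np

def boost(frame, reticle_rc, sub_h, sub_w, zoom):
    """Crop a sub-frame centered on the reticle, invert it, and upscale
    it by pixel repetition to emulate digital magnification."""
    r, c = reticle_rc
    # Clamp the crop so it stays fully inside the frame
    r0 = min(max(r - sub_h // 2, 0), frame.shape[0] - sub_h)
    c0 = min(max(c - sub_w // 2, 0), frame.shape[1] - sub_w)
    sub = frame[r0:r0 + sub_h, c0:c0 + sub_w]
    sub = sub[::-1, ::-1]  # image inversion (180-degree flip)
    # Nearest-neighbor rescale by an integer zoom factor
    return np.repeat(np.repeat(sub, zoom, axis=0), zoom, axis=1)

frame = np.arange(100 * 100, dtype=np.uint16).reshape(100, 100)
boosted = boost(frame, reticle_rc=(50, 50), sub_h=25, sub_w=25, zoom=4)
print(boosted.shape)  # (100, 100)
```

Cropping a 25x25 sub-frame and boosting it 4x fills the original 100x100 display area, which is the "booster" effect: digital magnification centered on the reticle rather than on the frame center.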
METHOD FOR HOSPITAL VISIT GUIDANCE FOR MEDICAL TREATMENT FOR ACTIVE THYROID EYE DISEASE, AND SYSTEM FOR PERFORMING SAME
According to the present application, a computer-implemented method of predicting thyroid eye disease is disclosed. The method comprises: preparing a conjunctival hyperemia prediction model, a conjunctival edema prediction model, a lacrimal edema prediction model, an eyelid redness prediction model, and an eyelid edema prediction model; obtaining a facial image of a subject; obtaining a first processed image and a second processed image from the facial image, wherein the first processed image is different from the second processed image; obtaining predicted values for each of conjunctival hyperemia, conjunctival edema, and lacrimal edema by applying the first processed image to the conjunctival hyperemia prediction model, the conjunctival edema prediction model, and the lacrimal edema prediction model; and obtaining predicted values for each of eyelid redness and eyelid edema by applying the second processed image to the eyelid redness prediction model and the eyelid edema prediction model.
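The routing of the two processed images to the five models could be sketched as below. The predict functions are hypothetical placeholders; the actual models, preprocessing, and score semantics are not described in the abstract.

```python
# Hypothetical stand-ins for the five trained prediction models
def make_model(sign):
    return lambda image: {"sign": sign, "score": 0.5}  # dummy score

# First processed image feeds the three conjunctival/lacrimal models
models_first = {s: make_model(s) for s in
                ("conjunctival_hyperemia", "conjunctival_edema", "lacrimal_edema")}
# Second processed image feeds the two eyelid models
models_second = {s: make_model(s) for s in
                 ("eyelid_redness", "eyelid_edema")}

def predict_ted_signs(first_image, second_image):
    preds = {s: m(first_image) for s, m in models_first.items()}
    preds.update({s: m(second_image) for s, m in models_second.items()})
    return preds

preds = predict_ted_signs("processed_img_a", "processed_img_b")
print(len(preds))  # 5
```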
AUTOMATED PLACENTAL MEASUREMENT
The present invention teaches a method of predicting the potential for manifestation of various medical conditions by analyzing the human placenta, including determining the need for early monitoring, intervention, or treatment for medical conditions likely to manifest as a child grows older. The method includes selecting and identifying a sample of the placenta to be analyzed by algorithms and preparing the sample for analysis. The sample is captured by obtaining a three-dimensional digital image of the chorionic surface of the sample with a selected capturing device. The physician corrects for errors in the digital image and loads the data into a computer for analysis. The digital image data is analyzed using algorithms to determine the vascular structure of the placenta, which is then interpreted to assess the potential for manifestation of various medical conditions.
System and method for providing similar or related products based on deep-learning
A method for providing similar or related products based on deep-learning, which is performed by a data processing unit of a shopping mall server, includes: acquiring an item image and item information for an item registered in a shopping mall; detecting bounding boxes for one or more objects by object-detecting the item image; setting a bounding box for an object associated with the item based on the item information; creating a main bounding box image by cropping a portion of the item image in the set bounding box; creating a padding image by padding-processing the main bounding box image; extracting a feature vector for the padding image; matching the feature vector with the item and storing the feature vector in a database; and creating the database for a similar or related product search service.
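The crop-and-pad steps could be sketched as follows. This is a minimal illustration with assumed details: the bounding box format and the padding policy (pad to a square canvas so the feature extractor receives a fixed aspect ratio) are choices made here, not taken from the abstract.

```python
import numpy as np

def pad_to_square(crop: np.ndarray, fill: int = 255) -> np.ndarray:
    """Pad a cropped bounding-box image to a square canvas, centering
    the crop, so downstream feature extraction sees a fixed aspect ratio."""
    h, w = crop.shape[:2]
    side = max(h, w)
    canvas = np.full((side, side) + crop.shape[2:], fill, dtype=crop.dtype)
    top, left = (side - h) // 2, (side - w) // 2
    canvas[top:top + h, left:left + w] = crop
    return canvas

item_image = np.zeros((120, 200, 3), dtype=np.uint8)
y0, x0, y1, x1 = 10, 20, 60, 180        # box chosen by the detector
main_crop = item_image[y0:y1, x0:x1]    # main bounding box image
padded = pad_to_square(main_crop)       # padding image
print(padded.shape)  # (160, 160, 3)
```

The padded image would then be run through an embedding network to produce the feature vector stored in the database for similarity search.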
USE OF DBSCAN FOR LANE DETECTION
A system and method of lane detection using density based spatial clustering of applications with noise (DBSCAN) includes capturing an input image with one or more optical sensors disposed on a motor vehicle. The method further includes passing the input image through a heterogeneous convolutional neural network (HCNN). The HCNN generates an HCNN output. The method further includes processing the HCNN output with DBSCAN to selectively classify outlier data points and clustered data points in the HCNN output. The method further includes generating a DBSCAN output selectively defining the clustered data points as predicted lane lines within the input image. The method further includes marking the input image by overlaying the predicted lane lines on the input image.
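The clustering stage could be sketched with a minimal DBSCAN over candidate lane points. This toy version (a direct implementation of the standard algorithm, not the patent's) labels dense groups as clusters and isolated points as noise (-1); in the described system, each cluster of network outputs would become one predicted lane line.

```python
import numpy as np

def dbscan(points: np.ndarray, eps: float, min_samples: int) -> np.ndarray:
    """Minimal DBSCAN: returns a label per point; -1 marks noise."""
    n = len(points)
    labels = np.full(n, -1)
    dist = np.linalg.norm(points[:, None] - points[None, :], axis=2)
    neighbors = [np.where(dist[i] <= eps)[0] for i in range(n)]
    core = [len(nb) >= min_samples for nb in neighbors]
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or not core[i]:
            continue
        labels[i] = cluster
        stack = [i]            # grow the cluster from this core point
        while stack:
            j = stack.pop()
            if not core[j]:
                continue       # border points join but do not expand
            for k in neighbors[j]:
                if labels[k] == -1:
                    labels[k] = cluster
                    stack.append(k)
        cluster += 1
    return labels

# Two dense "lane line" point groups plus one stray outlier
pts = np.array([[0, 0], [0, 1], [0, 2], [5, 0], [5, 1], [5, 2], [20, 20]],
               dtype=float)
labels = dbscan(pts, eps=1.5, min_samples=2)
print(labels)  # two clusters (0 and 1) and one noise point (-1)
```

In practice `sklearn.cluster.DBSCAN` would replace this O(n^2) toy, but the outlier/cluster split it produces is the behavior the method relies on.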
HYPER CAMERA WITH SHARED MIRROR
An imaging system can include a first and second camera configured to capture first and second sets of oblique images along first and second scan paths, respectively, on an object area. A drive is coupled to a scanning mirror structure, having at least one mirror surface, and configured to rotate the structure about a scan axis based on a scan angle. The first and second cameras each have an optical axis set at an oblique angle to the scan axis and include a respective lens to focus first and second imaging beams reflected from the mirror surface to an image sensor located in each of the cameras. The first and second imaging beams captured by their respective cameras can vary according to the scan angle. Each of the image sensors captures respective sets of oblique images by sampling the imaging beams at first and second values of the scan angle.
SYSTEM AND METHOD FOR REFINING AN ITEM IDENTIFICATION MODEL BASED ON FEEDBACK
A system for refining an item identification model detects a triggering event at a platform, where the triggering event corresponds to a user placing an item on the platform. The system captures images of the item. The system extracts a set of features from at least one of the images. The system identifies the item based on the set of features. The system receives an indication that the item is not identified correctly. The system receives an identifier of the item. The system identifies the item based on the identifier. The system feeds the identifier and the images to the item identification model. The system retrains the item identification model to learn to associate the item with the images. The system updates the set of features based on the determined association between the item and the images.
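The feedback loop described above could be sketched as follows. Everything here is a hypothetical stand-in: the model class, the confirmation callback, and the identifier source (e.g. a barcode scan) are assumptions for illustration, not the patented design.

```python
class ItemIdentifier:
    """Toy stand-in for the item identification model; a real system
    would wrap a trained network and a feature store."""
    def __init__(self):
        self.training_pairs = []          # (identifier, image) examples

    def identify(self, features):
        return "unknown"                  # placeholder prediction

    def retrain(self, identifier, images):
        # Associate the corrected identifier with the captured images
        self.training_pairs.extend((identifier, img) for img in images)

def feedback_loop(model, images, features, user_confirms, get_identifier):
    predicted = model.identify(features)
    if not user_confirms(predicted):
        true_id = get_identifier()        # e.g. user scans a barcode
        model.retrain(true_id, images)    # learn from the correction
        return true_id
    return predicted

model = ItemIdentifier()
result = feedback_loop(model, ["img1", "img2"], [0.1, 0.2],
                       user_confirms=lambda p: False,
                       get_identifier=lambda: "SKU-42")
print(result, len(model.training_pairs))  # SKU-42 2
```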
HAND DETECTION TRIGGER FOR ITEM IDENTIFICATION
A device configured to capture a first overhead depth image of the platform using a three-dimensional (3D) sensor at a first time instance and a second overhead depth image of a first object using the 3D sensor at a second time instance. The device is further configured to determine that a first portion of the first object is within a region-of-interest and a second portion of the first object is outside the region-of-interest in the second overhead depth image. The device is further configured to capture a third overhead depth image of a second object placed on the platform using the 3D sensor at a third time instance. The device is further configured to capture a first image of the second object using a camera in response to determining that the first object is outside of the region-of-interest and the second object is within the region-of-interest for the platform.
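The region-of-interest test on an overhead depth image could be sketched like this. It assumes (as a simplification not stated in the abstract) that object pixels are simply those nearer to the sensor than the empty platform surface.

```python
import numpy as np

def object_roi_state(depth: np.ndarray, platform_depth: float, roi):
    """Return (inside, outside): whether any object pixels fall inside
    and/or outside the rectangular region-of-interest."""
    r0, c0, r1, c1 = roi
    obj = depth < platform_depth        # pixels nearer than the platform
    mask = np.zeros_like(obj)
    mask[r0:r1, c0:c1] = True
    inside = bool(np.any(obj & mask))
    outside = bool(np.any(obj & ~mask))
    return inside, outside

depth = np.full((10, 10), 100.0)        # empty platform at 100 cm
depth[4:6, 4:6] = 60.0                  # an item inside the ROI
depth[0, 0] = 60.0                      # e.g. a hand at the platform edge
inside, outside = object_roi_state(depth, 90.0, (2, 2, 8, 8))
print(inside, outside)  # True True
```

Under this sketch, the camera capture would be triggered only once the first object (the hand) reads entirely outside the ROI while the second object (the item) reads inside it.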
REDUCING A SEARCH SPACE FOR ITEM IDENTIFICATION USING MACHINE LEARNING
A device configured to receive a first encoded vector and receive one or more feature descriptors for a first object. The device is further configured to remove one or more encoded vectors from an encoded vector library that are not associated with the one or more feature descriptors and to identify a second encoded vector in the encoded vector library that most closely matches the first encoded vector based on the numerical values within the first encoded vector. The device is further configured to identify a first item identifier in the encoded vector library that is associated with the second encoded vector and to output the first item identifier.
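The two-stage search could be sketched as below: descriptors prune the library, then a similarity search runs over the survivors. The library contents, descriptor vocabulary, and cosine-similarity metric are illustrative assumptions, not details from the abstract.

```python
import numpy as np

# Hypothetical encoded-vector library: each entry carries an item
# identifier, an embedding, and coarse feature descriptors
library = [
    {"item_id": "apple",  "vec": np.array([1.0, 0.0]), "descriptors": {"red", "round"}},
    {"item_id": "banana", "vec": np.array([0.0, 1.0]), "descriptors": {"yellow", "long"}},
    {"item_id": "tomato", "vec": np.array([0.8, 0.6]), "descriptors": {"red", "round"}},
]

def identify(query_vec, query_descriptors):
    # Stage 1: drop entries that lack the observed feature descriptors,
    # shrinking the search space before any vector comparison
    candidates = [e for e in library if query_descriptors <= e["descriptors"]]
    # Stage 2: closest match by cosine similarity over the reduced set
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    best = max(candidates, key=lambda e: cos(query_vec, e["vec"]))
    return best["item_id"]

print(identify(np.array([0.75, 0.66]), {"red"}))  # tomato
```

The descriptor filter never touches the embeddings, so it is cheap; the expensive vector comparison only runs against the entries that survive it.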