
AUTOMATED ASSESSMENT OF ENDOSCOPIC DISEASE

The application relates to devices and methods for analysing a colonoscopy video, or a portion thereof, and for assessing the severity of ulcerative colitis in a subject by analysing a colonoscopy video obtained from the subject. Analysing a colonoscopy video comprises using a first deep neural network classifier to classify image data from the subject colonoscopy video, or portion thereof, into at least a first severity class (more severe endoscopic lesions) and a second severity class (less severe endoscopic lesions). The first deep neural network has been trained at least in part in a weakly supervised manner using training image data from a plurality of training colonoscopy videos, the training image data comprising multiple sets of consecutive frames from the plurality of training colonoscopy videos, wherein frames in a set share the same severity class label. Devices and methods for providing a tool for analysing colonoscopy videos are also described.
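The weak supervision described here assigns a single severity label to each set of consecutive frames rather than labelling every frame individually. A minimal sketch of that labelling scheme, with hypothetical names (`make_weak_labels`, `set_size`) not taken from the application:

```python
import numpy as np

def make_weak_labels(video_frames, set_size, set_labels):
    """Assign one severity label to each set of consecutive frames.

    video_frames: per-frame data, shape (n_frames, H, W)
    set_size:     number of consecutive frames per labelled set
    set_labels:   one class label per set (0 = less severe, 1 = more severe)
    """
    n_sets = len(video_frames) // set_size
    assert len(set_labels) == n_sets
    labels = np.repeat(set_labels, set_size)    # frame-level weak labels
    frames = video_frames[: n_sets * set_size]  # drop trailing frames
    return frames, labels

# toy example: 6 frames, sets of 3, two severity classes
frames = np.zeros((6, 4, 4))
f, y = make_weak_labels(frames, set_size=3, set_labels=[1, 0])
print(y.tolist())  # [1, 1, 1, 0, 0, 0]
```

Every frame in a set inherits the set's label, so a classifier can be trained on individual frames from annotations made only once per set.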

INFORMATION PROCESSING APPARATUS, SENSING APPARATUS, MOBILE OBJECT, AND METHOD FOR PROCESSING INFORMATION
20230046397 · 2023-02-16

An information processing apparatus includes an input interface, a processor, and an output interface. The input interface obtains observation data from an observation space. The processor detects a detection target included in the observation data. The processor maps the coordinates of the detected target to coordinates in a virtual space, tracks the position and velocity of a material point representing the target in the virtual space, and maps the coordinates of the tracked material point in the virtual space to coordinates in a display space. The processor sequentially observes the size of the detection target in the display space and estimates the size of the target at the present time on the basis of the observed value of the size at the present time and past estimated values of the size. The output interface outputs output information based on the coordinates of the material point mapped to the display space and the estimated size of the detection target.
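Blending the current observation with past estimates is the core of the size-estimation step. The abstract does not specify the filter, so as one plausible reading, here is a simple recursive exponential filter (all names are illustrative):

```python
def estimate_size(observed_sizes, alpha=0.5):
    """Recursively estimate size: blend each new observation with
    the previous estimate (a simple exponential filter)."""
    estimate = observed_sizes[0]
    history = [estimate]
    for obs in observed_sizes[1:]:
        estimate = alpha * obs + (1.0 - alpha) * estimate
        history.append(estimate)
    return history

sizes = [10.0, 14.0, 12.0, 30.0]   # last value is a noisy spike
est = estimate_size(sizes, alpha=0.5)
print(est)  # [10.0, 12.0, 12.0, 21.0]
```

Note how the spike to 30.0 is damped to 21.0: the estimate at the present time depends on both the current observation and the accumulated past estimates, which stabilises the displayed size.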

Systems And Methods For Optical Evaluation Of Pupillary Psychosensory Responses
20230052100 · 2023-02-16

The present disclosure is directed to systems and methods for measuring and analyzing pupillary psychosensory responses. An electronic device is configured to receive video data with at least two frames. The electronic device then locates one or more eye objects in the video data and determines the pupil and iris sizes of the one or more eye objects. The electronic device determines the pupillary psychosensory responses of the one or more eye objects by tracking the ratio of pupil diameter to iris diameter throughout the video. Several metrics for the pupillary psychosensory responses can be determined (e.g., the velocity of change of the ratio, the peak-to-peak amplitude of the change in ratio over time, etc.). These metrics can be used as measures of an individual's cognitive ability and mental health in a single session, or tracked across multiple sessions.
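The two named metrics follow directly from the per-frame ratio series. A minimal sketch (the function name and frame rate are assumptions, not from the disclosure):

```python
def ratio_metrics(pupil_diams, iris_diams, fps):
    """Compute pupil/iris diameter ratios per frame, plus two metrics:
    peak-to-peak amplitude and maximum rate of change (per second)."""
    ratios = [p / i for p, i in zip(pupil_diams, iris_diams)]
    peak_to_peak = max(ratios) - min(ratios)
    # frame-to-frame velocity of the ratio, scaled by frames per second
    velocities = [(b - a) * fps for a, b in zip(ratios, ratios[1:])]
    max_velocity = max(abs(v) for v in velocities)
    return ratios, peak_to_peak, max_velocity

pupil = [4.0, 3.0, 2.0, 2.5]   # pupil diameters, e.g. in pixels
iris = [10.0, 10.0, 10.0, 10.0]
r, p2p, vmax = ratio_metrics(pupil, iris, fps=30)
```

Using the ratio rather than the raw pupil diameter normalises away camera distance, since both diameters scale together with magnification.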

APPARATUS AND SYSTEM FOR DISPENSING COSMETIC MATERIAL
20230052590 · 2023-02-16

A system is provided that includes a mobile user device (300) that executes an application and determines and transmits a recipe for generating a target cosmetic material, the recipe being based on a combination of a plurality of separate ingredients associated with the user. The system includes a dispensing device (100) configured to receive the transmitted recipe from the mobile user device (300) and dispense each of the plurality of separate ingredients onto a common dispensing surface such that, when the dispensed amounts of the separate ingredients are blended on the dispensing surface, the target cosmetic material is achieved.

SYSTEM AND METHOD FOR MEASURING DISTORTED ILLUMINATION PATTERNS AND CORRECTING IMAGE ARTIFACTS IN STRUCTURED ILLUMINATION IMAGING

A method for measuring distorted illumination patterns and correcting image artifacts in structured illumination microscopy. The method includes the steps of generating an illumination pattern by interfering multiple beams, modulating a scanning speed or an intensity of a scanning laser, or projecting a mask onto an object; taking multiple exposures of the object with the illumination pattern shifted in phase; and applying a Fourier transform to the multiple exposures to produce multiple raw images. Thereafter, the multiple raw images are used to form, and then solve, a linear equation set to obtain multiple portions of a Fourier-space image of the object. A circular 2-D low-pass filter and a Fourier transform are then applied to the portions. A pattern distortion phase map is calculated, and the distortion is corrected by making the coefficient matrix of the linear equation set vary in phase, the resulting equation set being solved in the spatial domain.
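The "form and then solve a linear equation set" step can be illustrated with the standard phase-shifting model, in which each exposure is a fixed linear combination of the components to be recovered. The model and variable names below are a generic textbook sketch, not the patent's specific formulation:

```python
import numpy as np

# Demodulate three phase-shifted structured-illumination exposures.
# Per pixel, model each raw exposure as I_k = a + b*cos(phi_k) + c*sin(phi_k);
# three known phase shifts give a 3x3 linear system whose solution
# recovers the components a (widefield), b and c (modulated parts).
phases = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
A = np.stack([np.ones(3), np.cos(phases), np.sin(phases)], axis=1)

# synthetic ground-truth components for a single pixel
a_true, b_true, c_true = 5.0, 2.0, -1.0
exposures = A @ np.array([a_true, b_true, c_true])

# solving the linear equation set recovers the components exactly
a, b, c = np.linalg.solve(A, exposures)
```

Pattern distortion makes the effective phases vary across the field of view, which is why the method described above lets the coefficient matrix vary in phase and solves per pixel in the spatial domain.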

INFORMATION PROCESSING DEVICE, PROGRAM, AND METHOD
20230049305 · 2023-02-16

An information processing device includes a control unit configured to track an object across images input in time series, using a tracking result obtained by performing tracking in units of a tracking region corresponding to a specific part of the object.

CRACK DETECTION DEVICE, CRACK DETECTION METHOD AND COMPUTER READABLE MEDIUM

In a crack detection device (10), an image acquisition unit (21) acquires image data obtained by taking an image of a road surface from an oblique direction with respect to the road surface. An image classification unit (22) classifies the acquired image data into an acceptable range, with a resolution higher than a standard value, and an unacceptable range, with a resolution equal to or less than the standard value. A data output unit (23) outputs acceptable data, being the image data of the part classified into the acceptable range, as data for detecting a crack on the road surface. An image display unit (24) displays the output data.
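Because an obliquely captured road image has higher resolution near the camera and lower resolution far away, the classification step is essentially a threshold on per-region resolution. A minimal sketch, with illustrative names and a made-up resolution unit:

```python
def classify_by_resolution(tiles, standard_value):
    """Split road-surface image tiles into an acceptable range
    (resolution strictly above the standard value) and an
    unacceptable range (resolution at or below it)."""
    acceptable = [t for t in tiles if t["resolution"] > standard_value]
    unacceptable = [t for t in tiles if t["resolution"] <= standard_value]
    return acceptable, unacceptable

# toy tiles with a resolution value per tile (e.g. pixels per mm)
tiles = [{"id": 0, "resolution": 2.0},
         {"id": 1, "resolution": 0.8},
         {"id": 2, "resolution": 1.5}]
ok, bad = classify_by_resolution(tiles, standard_value=1.0)
print([t["id"] for t in ok])  # [0, 2]
```

Only the acceptable tiles are then passed on as input to crack detection, so distant low-resolution regions never trigger false detections.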

DEVICE AND COMPUTER-IMPLEMENTED METHOD FOR OBJECT TRACKING
20230051014 · 2023-02-16

A device and computer-implemented method for object tracking. The method comprises providing a sequence of digital images and determining a sequence of relational graph embeddings. A first relational graph embedding of the sequence comprises a first object embedding representing a first object in a first digital image of the sequence of digital images, and a first relation embedding of a relation for the first object embedding. The first relation embedding relates the first object embedding to embeddings representing other objects of the first digital image in the first relational graph embedding, and to embeddings in a second relational graph embedding of the sequence that represent objects of a second digital image of the sequence of digital images.
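The abstract does not say how a relation embedding is computed, so as one common, simple choice, the sketch below builds a relation vector for each object pair by concatenating the two object embeddings and their difference, pairing each object with the other objects of its own frame and with the objects of the next frame (all names are illustrative):

```python
import numpy as np

def relation_embeddings(frame_a, frame_b):
    """For each object embedding in frame_a, build one relation embedding
    per partner object (same frame or next frame) by concatenating the
    pair and their difference -- a simple relational encoding."""
    relations = {}
    for i, e_i in enumerate(frame_a):
        partners = [e for j, e in enumerate(frame_a) if j != i] + list(frame_b)
        relations[i] = [np.concatenate([e_i, e_j, e_i - e_j]) for e_j in partners]
    return relations

frame_a = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]  # two objects, frame 1
frame_b = [np.array([0.5, 0.5])]                        # one object, frame 2
rel = relation_embeddings(frame_a, frame_b)
print(len(rel[0]), rel[0][0].shape)  # 2 (6,)
```

Relating objects both within a frame and across consecutive frames is what lets a downstream matcher use scene context, not just per-object appearance, when associating tracks.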

IMAGE PROCESSING METHOD, NETWORK TRAINING METHOD, AND RELATED DEVICE
20230047094 · 2023-02-16

This application provides an image processing method, a network training method, and a related device, and relates to image processing technologies in the artificial intelligence field. The method includes: inputting a first image including a first vehicle into an image processing network to obtain a first result output by the image processing network, where the first result includes location information of a two-dimensional (2D) bounding frame of the first vehicle, coordinates of a wheel of the first vehicle, and a first angle of the first vehicle, the first angle indicating an included angle between a side line of the first vehicle and a first axis of the first image; and generating location information of a three-dimensional (3D) outer bounding box of the first vehicle based on the first result.
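The wheel coordinates and the first angle together pin down the vehicle's side line in the image, which is the geometric ingredient for lifting the 2D box to a 3D one. A minimal sketch of just that step, with hypothetical names and a fixed illustrative segment length:

```python
import math

def side_line_through_wheel(wheel_xy, angle_deg, length=100.0):
    """Construct the vehicle's side line in image coordinates: a segment
    through the detected wheel contact point whose direction makes the
    reported first angle with the image x-axis."""
    theta = math.radians(angle_deg)
    dx, dy = math.cos(theta), math.sin(theta)
    x, y = wheel_xy
    p0 = (x - dx * length / 2, y - dy * length / 2)
    p1 = (x + dx * length / 2, y + dy * length / 2)
    return p0, p1

# wheel contact point at (200, 300), side line parallel to the x-axis
p0, p1 = side_line_through_wheel((200.0, 300.0), angle_deg=0.0)
print(p0, p1)  # (150.0, 300.0) (250.0, 300.0)
```

Intersecting such side lines with the edges of the 2D bounding frame is one plausible way the 3D outer bounding box's footprint could be constructed; the application's exact construction is not given in the abstract.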

METHOD AND APPARATUS FOR DETECTION AND TRACKING, AND STORAGE MEDIUM

In the field of video processing, a detection and tracking method and apparatus, and a storage medium, are provided. The method includes: performing feature point analysis on a video frame sequence to obtain feature points on each video frame thereof; performing target detection on an extracted frame through a first thread based on the feature points, to obtain a target box in the extracted frame; performing target box tracking in a current frame through a second thread based on the feature points and the target box in the extracted frame, to obtain a result target box in the current frame; and outputting the result target box. As target detection and target tracking are divided into two threads, the tracking frame rate is unaffected by the detection algorithm, and the target box of a video frame can be output in real time, improving real-time performance and stability.
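The two-thread split above can be sketched with a queue between a slow detection path and a fast tracking path. This is an illustrative skeleton, not the claimed implementation: the stand-in "detector" just emits a box per key frame, and the threads are run sequentially here so the demo output is deterministic.

```python
import queue
import threading

detections = queue.Queue()

def detect(frames, key_interval):
    """Slow path (first thread): detect a target box only on extracted
    (key) frames and publish it to the queue."""
    for i in range(0, len(frames), key_interval):
        box = (i, i, i + 10, i + 10)   # stand-in for a real detector
        detections.put((i, box))
    detections.put(None)               # signal end of stream

def track(frames, results):
    """Fast path (second thread): propagate the most recent detected box
    to every frame, so the tracking rate never waits on detection."""
    latest = None
    done = False
    for i in range(len(frames)):
        while not done and not detections.empty():
            item = detections.get()
            if item is None:
                done = True
            else:
                latest = item[1]
        results.append((i, latest))

frames = list(range(6))
results = []
t1 = threading.Thread(target=detect, args=(frames, 3))
t2 = threading.Thread(target=track, args=(frames, results))
t1.start(); t1.join()   # join before tracking so the demo is deterministic
t2.start(); t2.join()
print(results[-1])  # (5, (3, 3, 13, 13))
```

In a real system both threads run concurrently and the tracker refines the propagated box using the per-frame feature points; the key property shown is that the tracking loop only polls the queue and never blocks on the detector.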