
METHOD OF PROCESSING IMAGE, ELECTRONIC DEVICE, AND MEDIUM
20230049656 · 2023-02-16 ·

The present disclosure provides a method of processing an image, a device, and a medium. The method of processing the image includes: performing image processing on an original image to obtain a brightness component image of the original image; determining at least one of the original image and the component image as an image to be processed; classifying a pixel in the image to be processed, so as to obtain a classification result; processing the image to be processed according to the classification result, so as to obtain a target image; and determining an image quality of the original image according to the target image.
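
As a rough illustration only (the abstract does not specify the brightness transform, the pixel classes, or the quality measure), the following Python sketch uses a luma channel as the brightness component, classifies pixels by exposure, and scores quality from the classification result; all function names and thresholds are hypothetical.

```python
import numpy as np

# Hypothetical class labels for the per-pixel classification result.
UNDER, NORMAL, OVER = 0, 1, 2

def assess_image_quality(original_rgb, dark_thresh=0.1, bright_thresh=0.9):
    """Toy version of the described pipeline: brightness component ->
    per-pixel classification -> processed target image -> quality score."""
    # 1) Brightness (luma) component image of the original image.
    luma = (0.299 * original_rgb[..., 0]
            + 0.587 * original_rgb[..., 1]
            + 0.114 * original_rgb[..., 2])

    # 2) Classification result: label each pixel by its exposure.
    labels = np.full(luma.shape, NORMAL, dtype=np.uint8)
    labels[luma < dark_thresh] = UNDER
    labels[luma > bright_thresh] = OVER

    # 3) Process the image according to the classification result:
    #    keep only normally exposed pixels in the target image.
    target = np.where(labels == NORMAL, luma, 0.0)

    # 4) Image quality of the original image, derived from the target image:
    #    the fraction of pixels that survived the classification.
    quality = float(np.mean(labels == NORMAL))
    return target, quality

if __name__ == "__main__":
    frame = np.random.rand(480, 640, 3)          # stand-in for the original image
    _, score = assess_image_quality(frame)
    print(f"estimated quality: {score:.2f}")
```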

METHODS AND SYSTEMS FOR GENERATING END-TO-END DE-SMOKING MODEL

The disclosure herein relates to methods and systems for generating an end-to-end de-smoking model for removing smoke present in a video. Conventional data-driven de-smoking approaches are limited mainly by a lack of suitable training data. Further, the conventional data-driven de-smoking approaches are not end-to-end for removing the smoke present in the video. The de-smoking model of the present disclosure is trained end-to-end using synthesized smoky video frames obtained by a source-aware smoke synthesis approach. The end-to-end de-smoking model localizes and removes the smoke present in the video using dynamic properties of the smoke. Hence the end-to-end de-smoking model simultaneously identifies the regions affected by the smoke and performs the de-smoking with minimal artifacts, providing localized smoke removal and color restoration of a real-time video.
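
A minimal numpy sketch of how smoke synthesis could produce training pairs for such a model follows; the low-frequency density field and compositing used here are crude stand-ins for the disclosure's source-aware synthesis, and every name and parameter is hypothetical.

```python
import numpy as np

def synthesize_smoky_frame(clean_frame, cell=8, seed=None):
    """Toy smoke synthesis: composite a semi-transparent smoke layer over a
    clean frame, returning the smoky frame and a smoke-localization mask."""
    rng = np.random.default_rng(seed)
    h, w, _ = clean_frame.shape

    # Low-frequency random field as a crude stand-in for a smoke density map.
    coarse = rng.random(((h + cell - 1) // cell, (w + cell - 1) // cell))
    density = np.kron(coarse, np.ones((cell, cell)))[:h, :w]

    # Alpha-composite whitish smoke over the clean frame.
    smoke_color = np.array([0.9, 0.9, 0.9])
    smoky = (1.0 - density[..., None]) * clean_frame + density[..., None] * smoke_color

    # Ground-truth mask of regions affected by smoke, usable as the training
    # target for the localization part of an end-to-end model.
    mask = (density > 0.5).astype(np.float32)
    return smoky, mask

if __name__ == "__main__":
    clean = np.random.rand(240, 320, 3)          # stand-in for a clean video frame
    smoky, mask = synthesize_smoky_frame(clean, seed=0)
    print(smoky.shape, mask.mean())
```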

TECHNIQUES FOR QUANTITATIVELY ASSESSING TEAR-FILM DYNAMICS
20230049316 · 2023-02-16 ·

Aspects of the present disclosure provide techniques for quantitatively assessing tear-film dynamics associated with contact lenses. An example method includes projecting an image of one or more shapes on a tear film surface of the contact lens worn on the eye, capturing video data, comprising a plurality of image frames, of the one or more shapes projected on the tear film surface of the contact lens over a period of time, performing image segmentation on a plurality of reflection patterns included in the plurality of image frames, generating a plurality of maps of the tear film surface of the contact lens indicating changes to the tear film surface of the contact lens during the period of time, and outputting, based on the plurality of maps, one or more metrics quantifying the changes to the tear film surface of the contact lens over the period of time.
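
Purely as a sketch (the abstract does not define the segmentation method or the metrics), the following toy pipeline thresholds each frame to segment the reflected pattern, builds per-frame maps, and reports how much the pattern changes over time; the names and threshold are hypothetical.

```python
import numpy as np

def tear_film_change_metrics(frames, reflect_thresh=0.8):
    """Toy pipeline: segment the projected reflection pattern in each frame,
    build per-frame maps, and quantify how the pattern changes over time."""
    # Segmentation: bright pixels are treated as the reflected pattern.
    maps = [(frame > reflect_thresh).astype(np.uint8) for frame in frames]

    # Per-interval change: pixels whose reflection status flips between
    # consecutive frames indicate tear-film movement or break-up.
    changes = [np.mean(a != b) for a, b in zip(maps, maps[1:])]

    return {
        "mean_change_per_frame": float(np.mean(changes)),
        "max_change_per_frame": float(np.max(changes)),
    }

if __name__ == "__main__":
    # Stand-in for grayscale video frames of the projected shapes.
    video = [np.random.rand(120, 160) for _ in range(30)]
    print(tear_film_change_metrics(video))
```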

METHOD OF IN-PROCESS DETECTION AND MAPPING OF DEFECTS IN A COMPOSITE LAYUP

A method of detecting defects in a composite layup includes capturing, using an infrared camera, reference images of a reference layup being laid up by a reference layup head. The method also includes manually reviewing the reference images for defects, and generating reference defect masks indicating defects in the reference images. The method further includes training a neural network using the reference images and reference defect masks, creating a machine learning model that, given a production image as input, outputs a production defect mask indicating the location and type of each defect. The method also includes capturing, using an infrared camera, production images of a production layup being laid up by a production layup head, and applying the model to the production images to automatically generate production defect masks indicating each defect in the production images.
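
The abstract does not disclose the network architecture, so the sketch below only illustrates the general supervised setup: a tiny fully convolutional PyTorch model trained on (reference image, reference defect mask) pairs with a per-pixel loss. The DefectSegmenter class and all hyperparameters are hypothetical.

```python
import torch
import torch.nn as nn

class DefectSegmenter(nn.Module):
    """Minimal stand-in for the defect-mask model: a small fully convolutional
    net that maps an infrared image to a per-pixel defect logit."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 1),
        )

    def forward(self, x):
        return self.net(x)

def train_step(model, optimizer, ir_images, defect_masks):
    # One supervised step: reference images in, reference defect masks as targets.
    criterion = nn.BCEWithLogitsLoss()
    optimizer.zero_grad()
    loss = criterion(model(ir_images), defect_masks)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = DefectSegmenter()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    images = torch.rand(4, 1, 64, 64)                     # stand-in infrared reference images
    masks = (torch.rand(4, 1, 64, 64) > 0.9).float()      # stand-in reference defect masks
    print(train_step(model, opt, images, masks))
```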

TRANSPORT MECHANISMS FOR VIDEO STREAM MERGING WITH OVERLAPPING VIDEO

In various embodiments, a device receives a first video stream of a video conference. The device receives a second video stream of the video conference. The second video stream includes an indicated location for video of the second video stream relative to video of the first video stream. The device merges the first video stream and the second video stream into an overlapped video having the video of the second video stream located at the indicated location relative to the video of the first video stream. The device provides the overlapped video for display.
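
A minimal numpy sketch of the merge step, assuming the indicated location is a (row, column) offset of the second stream's video within the first stream's frame; names and shapes are illustrative only.

```python
import numpy as np

def merge_overlapping_streams(primary_frame, secondary_frame, location):
    """Toy merge: place the secondary stream's frame at the indicated (row, col)
    location on top of the primary stream's frame to form the overlapped frame."""
    merged = primary_frame.copy()
    row, col = location
    h, w = secondary_frame.shape[:2]

    # Clip the overlay region so it stays inside the primary frame.
    h = min(h, merged.shape[0] - row)
    w = min(w, merged.shape[1] - col)
    merged[row:row + h, col:col + w] = secondary_frame[:h, :w]
    return merged

if __name__ == "__main__":
    main_video = np.zeros((720, 1280, 3), dtype=np.uint8)   # first stream frame
    overlay = np.full((180, 320, 3), 255, dtype=np.uint8)   # second stream frame
    out = merge_overlapping_streams(main_video, overlay, location=(20, 940))
    print(out.shape)
```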

SYSTEM AND METHOD FOR CALIBRATING A TIME DIFFERENCE BETWEEN AN IMAGE PROCESSOR AND AN INERTIAL MEASUREMENT UNIT BASED ON INTER-FRAME POINT CORRESPONDENCE
20230049084 · 2023-02-16 ·

Systems and methods are used for calibrating a time difference between an image signal processor (ISP) and an inertial measurement unit (IMU) of an image capture device. An image capture device includes a lens, an image sensor, an IMU, and an ISP. The image sensor detects images as frames and the IMU captures motion data. The ISP detects one or more key points on the frames and matches the one or more key points between the frames. The ISP computes one or more calibration parameters. The one or more calibration parameters are based on the matched key points and a time difference between the ISP and the IMU. The ISP performs a calibration using the calibration parameters.
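
One common way to estimate such a time difference is to correlate a camera-derived rotation-rate signal (obtained from the inter-frame key point matches) against the gyro signal over candidate shifts. The sketch below shows that idea in numpy; it is not the patent's specific calibration, and all names and values are hypothetical.

```python
import numpy as np

def estimate_time_offset(image_rate, imu_rate, dt, max_shift=50):
    """Toy calibration: find the shift (in samples of length dt) that best aligns
    a camera-derived rotation-rate signal with the IMU gyro signal."""
    best_shift, best_score = 0, -np.inf
    for shift in range(-max_shift, max_shift + 1):
        # Slice both signals so image_rate[i + shift] lines up with imu_rate[i].
        a = image_rate[max(0, shift):len(image_rate) + min(0, shift)]
        b = imu_rate[max(0, -shift):len(imu_rate) + min(0, -shift)]
        n = min(len(a), len(b))
        score = float(np.dot(a[:n], b[:n]))       # unnormalized correlation
        if score > best_score:
            best_shift, best_score = shift, score
    return best_shift * dt                         # ISP-to-IMU time difference in seconds

if __name__ == "__main__":
    t = np.arange(0, 10, 0.01)
    gyro = np.sin(2 * np.pi * 0.5 * t)             # stand-in IMU rotation rate
    cam = np.roll(gyro, 12)                        # same motion, delayed by 12 samples
    print(estimate_time_offset(cam, gyro, dt=0.01))
```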

SYSTEMS AND METHODS FOR VISUAL INSPECTION AND 3D MEASUREMENT

Systems and methods for inspecting the outer skin of a honeycomb body are provided. The inspection system comprises a rotational sub-assembly configured to rotate the honeycomb body, a camera sub-assembly configured to image at least a portion of the outer skin of the honeycomb body as it rotates, a three-dimensional (3D) line sensor sub-assembly configured to obtain height information from the outer skin of the honeycomb body, and an edge sensor sub-assembly configured to obtain edge data from the circumferential edges of the honeycomb body. In some examples, the inspection system utilizes a universal coordinate system to synchronize or align the data obtained from each of these sources to prevent redundant or duplicative detection of one or more defects on the outer skin of the honeycomb body.
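
As a hypothetical illustration of the universal-coordinate idea, the sketch below maps detections from the camera and 3D line sensor sub-assemblies into a shared (arc position, axial position) frame on the rotating outer skin and merges detections that fall within a tolerance, so the same defect is not counted twice.

```python
import numpy as np

def to_universal_coords(detections, circumference_mm):
    """Toy mapping: each detection carries the rotation angle (degrees) at which
    it was observed and an axial position (mm); all sources are mapped into a
    common (arc position, axial position) coordinate on the outer skin."""
    return [
        (angle_deg / 360.0 * circumference_mm, axial_mm, source)
        for angle_deg, axial_mm, source in detections
    ]

def deduplicate(universal_detections, tol_mm=2.0):
    """Merge detections from different sub-assemblies that refer to the same
    defect, preventing redundant or duplicative detection."""
    kept = []
    for arc, axial, source in universal_detections:
        if not any(abs(arc - a) < tol_mm and abs(axial - x) < tol_mm for a, x, _ in kept):
            kept.append((arc, axial, source))
    return kept

if __name__ == "__main__":
    # (rotation angle in degrees, axial position in mm, detecting sub-assembly)
    raw = [(90.0, 40.0, "camera"), (90.3, 40.5, "3d_line_sensor"), (200.0, 12.0, "camera")]
    print(deduplicate(to_universal_coords(raw, circumference_mm=360.0)))
```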

SYSTEMS AND METHODS FOR PROVIDING DISPLAYED FEEDBACK WHEN USING A REAR-FACING CAMERA

A system includes a processor and a non-transitory computer-readable medium containing instructions that, when executed by the processor, cause the processor to perform operations comprising displaying a prompt to a user of a mobile device on a display of the mobile device to capture an image representing at least a portion of a mouth of the user using a rear-facing camera of the mobile device, where the rear-facing camera is on an opposite side of the mobile device from the side including the display. The operations further comprise controlling the rear-facing camera to enable the rear-facing camera to capture the image, receiving the image, and outputting user feedback based on the image, where the user feedback is outputted on the display that is on the opposite side of the mobile device from the rear-facing camera.
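
The abstract does not specify how the feedback is derived, so the following toy function merely illustrates the idea of analyzing the received image (here by brightness and a crude sharpness estimate) and producing a message for the display; the thresholds and wording are hypothetical.

```python
import numpy as np

def image_feedback(gray_image, blur_thresh=0.02, dark_thresh=0.25):
    """Toy feedback generator: inspect a captured frame and return a message
    that could be shown on the display while the user aims the rear-facing camera."""
    # Crude sharpness estimate: mean gradient magnitude of the frame.
    gy, gx = np.gradient(gray_image.astype(float))
    sharpness = float(np.mean(np.hypot(gx, gy)))

    if gray_image.mean() < dark_thresh:
        return "Too dark - move to better lighting"
    if sharpness < blur_thresh:
        return "Image is blurry - hold the phone steady"
    return "Looks good - image captured"

if __name__ == "__main__":
    frame = np.random.rand(480, 640)              # stand-in for the captured image
    print(image_feedback(frame))
```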

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING SYSTEM, NON-TRANSITORY COMPUTER READABLE MEDIUM, AND INFORMATION PROCESSING METHOD

An information processing apparatus includes a processor configured to: obtain a video and an instruction to generate a still image from the video, the video being a video in which a work target is photographed, the work target being a target on which to work; generate the still image in response to the instruction, the still image being cut from the video including the work target; specify the work target in the video, position information, and a superimposition area by using the still image, the position information describing a position of the work target, the superimposition area being an area in which an image is superimposed, the image being obtained by using the position of the work target as a reference; receive instruction information indicating an instruction for work on the work target; and superimpose and display an instruction image in the superimposition area in the video, the instruction image being an image according to the instruction information.
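
A minimal numpy sketch of the final superimposition step, assuming the work target position and the superimposition area are expressed as (row, column) coordinates in the frame; the names, offset, and blending factor are hypothetical.

```python
import numpy as np

def superimpose_instruction(frame, instruction_img, target_rc, offset=(10, 10), alpha=0.6):
    """Toy overlay: place the instruction image in a superimposition area defined
    relative to the work target's position, alpha-blended over the video frame."""
    out = frame.astype(float).copy()
    row = target_rc[0] + offset[0]
    col = target_rc[1] + offset[1]
    h, w = instruction_img.shape[:2]

    # Clip so the superimposition area stays inside the frame.
    h = min(h, out.shape[0] - row)
    w = min(w, out.shape[1] - col)
    region = out[row:row + h, col:col + w]
    out[row:row + h, col:col + w] = (1 - alpha) * region + alpha * instruction_img[:h, :w]
    return out.astype(frame.dtype)

if __name__ == "__main__":
    video_frame = np.zeros((480, 640, 3), dtype=np.uint8)
    arrow = np.full((40, 80, 3), 255, dtype=np.uint8)       # stand-in instruction image
    blended = superimpose_instruction(video_frame, arrow, target_rc=(200, 300))
    print(blended.shape)
```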

TECHNIQUES FOR THREE-DIMENSIONAL ANALYSIS OF SPACES

An example method includes receiving a 2D image of a 3D space from an optical camera and identifying, in the 2D image, a virtual image generated by an optical instrument refracting and/or reflecting light. The example method further includes identifying, in the 2D image, a first object depicting a subject disposed in the 3D space from a first direction extending from the optical camera to the subject, and identifying, in the virtual image, a second object depicting the subject disposed in the 3D space from a second direction extending from the optical camera to the subject via the optical instrument, the second direction being different than the first direction. A 3D image depicting the subject is generated based on the first object and the second object. Alternatively, a location of the subject in the 3D space is determined based on the first object and the second object.
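
Assuming the optical instrument can be modelled as a second, virtual viewpoint, the subject's location can be recovered by intersecting the two viewing rays. The sketch below shows a closest-point triangulation in numpy with illustrative, hypothetical values; it is not the patent's specific reconstruction.

```python
import numpy as np

def triangulate(cam_origin, dir_direct, virtual_origin, dir_via_instrument):
    """Toy triangulation: the subject is seen along two different directions, one
    directly from the camera and one through the optical instrument (modelled as a
    virtual viewpoint). Return the midpoint of the rays' closest approach as the
    estimated 3D location of the subject."""
    d1 = dir_direct / np.linalg.norm(dir_direct)
    d2 = dir_via_instrument / np.linalg.norm(dir_via_instrument)
    o1, o2 = np.asarray(cam_origin, float), np.asarray(virtual_origin, float)

    # Solve for ray parameters t1, t2 minimizing |(o1 + t1*d1) - (o2 + t2*d2)|.
    A = np.column_stack([d1, -d2])
    t1, t2 = np.linalg.lstsq(A, o2 - o1, rcond=None)[0]
    p1, p2 = o1 + t1 * d1, o2 + t2 * d2
    return (p1 + p2) / 2.0

if __name__ == "__main__":
    subject = np.array([1.0, 2.0, 5.0])
    camera = np.array([0.0, 0.0, 0.0])
    mirror_view = np.array([2.0, 0.0, 0.0])      # virtual viewpoint created by the instrument
    print(triangulate(camera, subject - camera, mirror_view, subject - mirror_view))
```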