
METHOD FOR INSPECTING AN OBJECT

A method for inspecting an object includes receiving or determining inspection image data, the inspection image data including an inspection image pixel array with at least one inspection image pixel in the inspection image pixel array having a pixel property associated therewith. The method includes receiving, via a processor, a user input associated with a continuous segment of inspection image pixels in the inspection image pixel array. The method includes determining a property of the object based on the pixel properties associated with the continuous segment of inspection image pixels in the inspection image pixel array.
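The abstract above can be illustrated with a minimal sketch: a property of the object (here, a mean intensity and a physical length) is derived from a user-selected contiguous run of pixels. The pixel values, the `MM_PER_PIXEL` calibration constant, and the choice of properties are illustrative assumptions, not from the patent.

```python
MM_PER_PIXEL = 0.1  # assumed spatial calibration of the inspection image

def property_from_segment(pixel_row, start, end):
    """Return (mean_intensity, length_mm) for pixels start..end inclusive."""
    segment = pixel_row[start:end + 1]
    mean_intensity = sum(segment) / len(segment)
    length_mm = len(segment) * MM_PER_PIXEL
    return mean_intensity, length_mm

# One row of an inspection image; the user drags across pixels 2..5,
# which cover a bright feature (e.g. a weld bead or defect).
row = [10, 12, 200, 210, 205, 198, 15, 11]
mean_i, length = property_from_segment(row, 2, 5)
```

The continuous-segment input maps naturally onto a slice of the pixel array, which is why the selection is constrained to be contiguous.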

METHOD FOR INSPECTING AN OBJECT

A method for inspecting an object includes determining a first inspection package that includes a first inspection image of the object and a first designation. The method includes determining data indicative of a second inspection package that includes a second inspection image of the object and a second designation. The method includes determining a first property of the object based on the first inspection image of the object, one or more property maps of the object, and the first designation. The method includes determining a second property of the object based on the second inspection image of the object, the one or more property maps of the object, and the second designation. The method includes displaying the first property and the second property or displaying data indicative of a comparison of the first property with the second property.
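A hypothetical sketch of the comparison step described above: two inspection packages, each an (image, designation) pair, yield one property each via a shared property map, and the difference is what gets displayed. The property extraction (mean pixel value over map cells matching the designation) is a stand-in; the patent does not define it.

```python
def measure(image, property_map, designation):
    """Mean image value over the pixels the property map labels with this designation."""
    vals = [image[r][c]
            for r in range(len(image))
            for c in range(len(image[0]))
            if property_map[r][c] == designation]
    return sum(vals) / len(vals)

# Two inspections of the same object at different times (invented data).
image_1 = [[5, 5], [9, 9]]
image_2 = [[6, 6], [9, 9]]
prop_map = [["coating", "coating"], ["base", "base"]]

p1 = measure(image_1, prop_map, "coating")  # first property
p2 = measure(image_2, prop_map, "coating")  # second property
comparison = p2 - p1                        # data indicative of a comparison
```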

METHOD AND ELECTRONIC DEVICE FOR PRODUCING MEDIA FILE WITH BLUR EFFECT
20230014805 · 2023-01-19

A method for producing a media file with a blur effect in an electronic device is provided. The method includes segmenting an image frame into a plurality of segments. Further, the method includes determining at least one segment from the plurality of segments comprising one of a foreground Region of Interest (ROI) and a background region of the ROI and detecting whether one of the foreground region of the ROI and the background region of the ROI comprises motion information and static information. Further, the method includes automatically applying a motion type blur effect and/or a static type blur effect on one of the foreground region of the ROI and the background region of the ROI. The method includes generating the media file based on the applied motion type blur effect and the static type blur effect and storing the media file.
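As an illustrative sketch only: a horizontal average stands in for the motion-type blur and a box average for the static-type blur, applied per pixel according to a foreground mask. The kernel sizes, the mask convention, and the motion test are assumptions, not the patent's actual detection logic.

```python
def motion_blur(img, r, c, k=1):
    """Average along the horizontal axis (crude motion-type blur)."""
    row = img[r]
    lo, hi = max(0, c - k), min(len(row) - 1, c + k)
    return sum(row[lo:hi + 1]) / (hi - lo + 1)

def box_blur(img, r, c, k=1):
    """Average over a (2k+1) x (2k+1) neighbourhood (crude static-type blur)."""
    vals = [img[rr][cc]
            for rr in range(max(0, r - k), min(len(img), r + k + 1))
            for cc in range(max(0, c - k), min(len(img[0]), c + k + 1))]
    return sum(vals) / len(vals)

def apply_blur(img, fg_mask, fg_has_motion):
    """Motion-blur the moving segment, box-blur the static one."""
    out = [row[:] for row in img]
    for r in range(len(img)):
        for c in range(len(img[0])):
            moving = fg_mask[r][c] if fg_has_motion else not fg_mask[r][c]
            out[r][c] = motion_blur(img, r, c) if moving else box_blur(img, r, c)
    return out

frame = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]
mask = [[False] * 3, [False, True, False], [False] * 3]
blurred = apply_blur(frame, mask, fg_has_motion=True)
```

In a real implementation the two blurs would be separable convolutions applied over whole segments at once rather than per pixel.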

Method of detecting wrinkles based on artificial neural network and apparatus therefor

According to various embodiments, a wrinkle detection service providing server for providing a wrinkle detection method based on artificial intelligence may include a data pre-processor for obtaining a skin image of a user from a skin measurement device and performing feature-point-based pre-processing on the skin image; a wrinkle detector for inputting the pre-processed skin image into an artificial neural network and generating a wrinkle probability map corresponding to the skin image; a data post-processor for post-processing the generated wrinkle probability map; and a wrinkle visualization service providing unit for superimposing the post-processed wrinkle probability map on the skin image and providing a wrinkle visualization image to a user terminal of the user.
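A minimal sketch of the four-stage pipeline described above, with the neural network replaced by a stub that scores darker pixels as more wrinkle-like. The function names, the 0.5 binarisation threshold, and the overlay colour are all assumptions for illustration.

```python
def preprocess(skin_image):
    """Normalise 8-bit values to [0, 1] (stands in for feature-point alignment)."""
    return [[v / 255 for v in row] for row in skin_image]

def wrinkle_net(image):
    """Stub 'network': darker pixels get higher wrinkle probability."""
    return [[1 - v for v in row] for row in image]

def postprocess(prob_map, threshold=0.5):
    """Binarise the probability map into a wrinkle mask."""
    return [[p >= threshold for p in row] for row in prob_map]

def visualize(skin_image, mask, color=255):
    """Superimpose detected wrinkles onto the original image."""
    return [[color if mask[r][c] else skin_image[r][c]
             for c in range(len(row))] for r, row in enumerate(skin_image)]

skin = [[200, 40], [210, 220]]          # one dark (wrinkle-like) pixel
mask = postprocess(wrinkle_net(preprocess(skin)))
overlay = visualize(skin, mask)
```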

SYSTEMS AND METHODS OF MEDIA PROCESSING

Media processing systems and techniques are described. A media processing system receives image data that represents an environment captured by an image sensor. The media processing system receives an indication of an object in the environment that is represented in the image data. The media processing system divides the image data into a plurality of regions, including a first region and a second region. The object is represented in one of the plurality of regions. The media processing system modifies the image data to obscure the first region without obscuring the second region based on the object being represented in the one of the plurality of regions. The media processing system outputs the image data after modifying the image data. In some examples, the object is depicted in the first region and not the second region. In some examples, the object is depicted in the second region and not the first region.
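A sketch of the region-wise obscuring step: the frame is split into a grid of regions and the region containing the indicated object is replaced by its mean value, while the other regions pass through unchanged. (The abstract covers the opposite case too, obscuring everything except the object's region.) The 2x2 grid and mean-fill obscuring are illustrative choices.

```python
def region_of(r, c, h, w, rows=2, cols=2):
    """Index of the grid cell (region) that pixel (r, c) falls in."""
    return (r * rows // h) * cols + (c * cols // w)

def obscure(image, object_rc, rows=2, cols=2):
    """Obscure the region containing the object; leave other regions intact."""
    h, w = len(image), len(image[0])
    target = region_of(*object_rc, h, w, rows, cols)
    cells = [(r, c) for r in range(h) for c in range(w)
             if region_of(r, c, h, w, rows, cols) == target]
    fill = sum(image[r][c] for r, c in cells) // len(cells)  # mean fill value
    out = [row[:] for row in image]
    for r, c in cells:
        out[r][c] = fill
    return out

frame = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12],
         [13, 14, 15, 16]]
masked = obscure(frame, object_rc=(0, 0))  # object indicated in top-left region
```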

Quantitative imaging for instantaneous wave-free ratio

Systems and methods for analyzing pathologies utilizing quantitative imaging are presented herein. Advantageously, the systems and methods of the present disclosure utilize a hierarchical analytics framework that identifies and quantifies biological properties/analytes from imaging data and then identifies and characterizes one or more pathologies based on the quantified biological properties/analytes. This hierarchical approach of using imaging to examine underlying biology as an intermediary to assessing pathology provides many analytic and processing advantages over systems and methods that are configured to directly determine and characterize pathology from underlying imaging data.

Apparatuses and methods for navigation in and local segmentation extension of anatomical treelike structures

A local extension method for segmentation of anatomical treelike structures includes receiving an initial segmentation of 3D image data including an initial treelike structure. A target point in the 3D image data is defined, and a region of interest based on the target point is extracted to create a sub-image. Highly tubular voxels are detected in the sub-image, and a spillage-constrained region growing is performed using the highly tubular voxels as seed points. Connected components are extracted from the results of the region growing. The extracted components are pruned to discard components not likely to be connected to the initial treelike structure, keeping only candidate components likely to be a valid sub-tree of the initial treelike structure. The candidate components are connected to the initial treelike structure, thereby extending the initial segmentation in the region of interest.
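A sketch of the spillage-constrained growing step: breadth-first growth from seed voxels, accepting neighbours within an intensity tolerance and aborting (treated as spillage) if the region exceeds a size budget. Shown in 2D with assumed tolerance and budget values; the patent's actual spillage criteria and tubularity detection are more involved.

```python
from collections import deque

def grow(image, seeds, tol=2, max_size=10):
    """Region growing from seeds; returns None on suspected spillage."""
    h, w = len(image), len(image[0])
    region, queue = set(seeds), deque(seeds)
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and (nr, nc) not in region
                    and abs(image[nr][nc] - image[r][c]) <= tol):
                region.add((nr, nc))
                if len(region) > max_size:  # spillage constraint tripped
                    return None
                queue.append((nr, nc))
    return region

# A bright vertical 'vessel' in a dark background (invented data).
vessel = [[0, 9, 0],
          [0, 9, 0],
          [0, 8, 0]]
segment = grow(vessel, seeds=[(0, 1)])  # tubular voxel used as seed point
```

The grown set would then be split into connected components, pruned, and reattached to the initial tree, as the abstract describes.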

IMAGING APPARATUS
20230012208 · 2023-01-12

An imaging apparatus includes: an image sensor that captures a subject image to generate image data; a first depth measurer that acquires first depth information indicating a depth at a first spatial resolution, the depth showing a distance between the imaging apparatus and a subject in an image indicated by the image data; a second depth measurer that acquires second depth information indicating the depth in the image at a second spatial resolution different from the first spatial resolution; and a controller that acquires third depth information indicating the depth at the first or second spatial resolution for each region of different regions in the image, based on the first depth information and the second depth information.
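A sketch of the controller's fusion step: for each image region, one of the two measurers' depths is selected under a simple rule (assumed here: the first measurer's reading when it exists, otherwise the second's). The region-wise layout and the `None`-means-no-reading convention are illustrative choices, not the patent's selection criterion.

```python
def fuse_depth(first_depth, second_depth):
    """Third depth information: per-region pick between the two measurers."""
    return [[a if a is not None else b
             for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(first_depth, second_depth)]

# Region-wise depths in metres; None marks regions the first measurer missed.
first = [[1.2, None], [None, 3.5]]
second = [[1.0, 2.0], [2.8, 3.0]]
third = fuse_depth(first, second)
```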

COMPUTER-IMPLEMENTED DETECTION AND PROCESSING OF ORAL FEATURES

Described herein are computer-implemented methods for identifying and classifying one or more regions of interest in a facial region and augmenting an appearance of the regions of interest in an image. For example, a region of interest may include one or more of: a teeth region, a lip region, a mouth region, or a gum region. User selected templates for teeth, gums, smile, etc. may be used to replace the analogous facial features in an input image provided by the user, for example from an image library or taken with an image sensor. The computer-implemented methods described herein may use one or more trained machine learning models and one or more algorithms to identify and classify regions of interest in an input image.

FOVEATED STITCHING
20230216981 · 2023-07-06

The present disclosure relates to a computer-implemented method for stitching images representing the surroundings of an automated vehicle into a stitched view and an image stitching system for an automated vehicle for use in said method. The method comprises the steps of: providing, by means of respective image capturing units, two images representing surroundings of the automated vehicle, wherein the two images share an overlapping region of the surroundings from different viewpoints of the respective image capturing units; determining an image transformation between the two images based on pre-calculated calibration information, or feature matching of discernible features of the surroundings visible in said two images; stitching the two images into a stitched view with a respective image seam between the two images based on said image transformation; displaying the stitched view to an operator of the automated vehicle; and receiving, by means of an operator input device, operator input data indicating the operator's viewpoint in the stitched view; determining a region of interest within the stitched view based on said operator input data; wherein the step of stitching the two images into a stitched view involves determining a set of stitching solutions between the two images and selecting a stitching solution that results in a stitched view with a stitching seam that is displaced a distance away from a point in the region of interest in a direction towards the outside of the region of interest.
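The seam-selection rule in the final step can be sketched as follows: among candidate vertical seam positions inside the overlap, choose the one farthest from the operator's region of interest, so the seam is pushed outside it. The candidate generation, the 1D seam model, and the circular ROI approximation are simplified assumptions.

```python
def pick_seam(candidate_columns, roi_center_x, roi_half_width):
    """Prefer seams outside the ROI; among those, take the farthest one."""
    def distance(col):
        return abs(col - roi_center_x)
    outside = [c for c in candidate_columns if distance(c) > roi_half_width]
    pool = outside if outside else candidate_columns  # fall back if ROI spans overlap
    return max(pool, key=distance)

# The overlap region allows seams at x = 40..60; the operator's gaze point
# is at x = 45 with an assumed ROI half-width of 8 pixels.
seam_x = pick_seam(range(40, 61), roi_center_x=45, roi_half_width=8)
```

Falling back to the full candidate set keeps the method well-defined even when the region of interest covers the entire overlap, in which case the least intrusive seam is still the one farthest from the gaze point.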