
Deep Saliency Prior

Techniques for tuning an image editing operator for reducing a distractor in raw image data are presented herein. The image editing operator can access the raw image data and a mask. The mask can indicate a region of interest associated with the raw image data. The image editing operator can process the raw image data and the mask to generate processed image data. Additionally, a trained saliency model can process at least the processed image data within the region of interest to generate a saliency map that provides saliency values. Moreover, a saliency loss function can compare the saliency values provided by the saliency map for the processed image data within the region of interest to one or more target saliency values. Subsequently, one or more parameter values of the image editing operator can be modified based at least in part on the saliency loss function.
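
As a toy illustration of the tuning loop described above, the sketch below uses stand-ins throughout: a one-parameter gain operator as the image editing operator, a deviation-from-mean proxy in place of the trained saliency model, and finite-difference gradient descent as the optimizer. None of these are the patent's actual components.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((32, 32))            # stand-in for the raw image data
mask = np.zeros((32, 32), dtype=bool)
mask[8:24, 8:24] = True                 # region of interest containing the distractor

def edit(img, gain):
    """Toy image-editing operator with one parameter: scale intensity in the ROI."""
    out = img.copy()
    out[mask] = np.clip(out[mask] * gain, 0.0, 1.0)
    return out

def saliency(img):
    """Crude saliency proxy (deviation from the global mean), standing in
    for the trained saliency model of the abstract."""
    return np.abs(img - img.mean())

def saliency_loss(gain, target=0.0):
    """Compare ROI saliency of the processed image to a target value."""
    return (saliency(edit(image, gain))[mask].mean() - target) ** 2

# Tune the operator parameter by finite-difference gradient descent.
gain, lr, eps = 1.0, 0.5, 1e-4
for _ in range(50):
    grad = (saliency_loss(gain + eps) - saliency_loss(gain - eps)) / (2 * eps)
    gain -= lr * grad
```

After tuning, the gain attenuates the masked region so its saliency moves toward the (hypothetical) target of zero, which is the distractor-reduction effect the abstract describes.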

X-ray diagnosis apparatus and image processing apparatus

A marker-coordinate detecting unit detects coordinates of a stent marker on a new image when the new image is stored in an image-data storage unit; a correction-image creating unit then creates a correction image from the new image through, for example, image transformation processing, so as to match the detected coordinates with reference coordinates, i.e., the coordinates of the stent marker already detected by the marker-coordinate detecting unit in a first frame. An image post-processing unit then creates an image for display by performing post-processing on the correction image, the post-processing including high-frequency noise reduction filtering, low-frequency component removal filtering, and logarithmic-image creation. A system control unit then controls display of a moving image of an enlarged image of a set region in the image for display, together with the original image.
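
A minimal sketch of the correction step, assuming a translation-only transform (the abstract's "image transformation processing" may be more general, e.g. affine):

```python
import numpy as np

def create_correction_image(new_frame, detected_xy, reference_xy):
    """Shift the new frame so the stent-marker coordinates detected in it
    land on the reference coordinates from the first frame.
    Translation-only stand-in for the image-transformation processing."""
    dx = reference_xy[0] - detected_xy[0]
    dy = reference_xy[1] - detected_xy[1]
    # np.roll applies the (row, column) shift to the whole frame at once.
    return np.roll(new_frame, shift=(dy, dx), axis=(0, 1))
```

Applying this per frame pins the stent marker to a fixed display position, which is what makes the enlarged moving image stable for viewing.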

Computer Vision Systems and Methods for Generating Building Models Using Three-Dimensional Sensing and Augmented Reality Techniques

Computer vision systems and methods for generating building models using three-dimensional sensing and augmented reality (AR) techniques are provided. Image frames including images of a structure to be modeled are captured by a camera of a mobile device such as a smart phone, as well as three-dimensional data corresponding to the image frames. An object of interest, such as a structural feature of the building, is detected using both the image frames and the three-dimensional data. An AR icon is determined based upon the type of object detected, and is displayed on the mobile device superimposed on the image frames. The user can manipulate the AR icon to better fit or match the object of interest in the image frames, and can capture the object of interest using a capture icon displayed on the display of the mobile device.

SYSTEMS AND METHODS FOR ARTIFICIAL INTELLIGENCE (AI) THREE-DIMENSIONAL MODELING
20220406007 · 2022-12-22

An Artificial Intelligence (AI) three-dimensional modeling system that analyzes and segments imagery of a room, generates a three-dimensional model of the room from the segmented imagery, identifies objects within the room, and conducts an assessment of the room based on the identified objects.

METHOD FOR GENERATING 3D REFERENCE POINTS IN A MAP OF A SCENE

A method of complementing a map of a scene with 3D reference points, comprising four steps. In a first step, data is collected and recorded based on samples of at least one of an optical sensor, a GNSS, and an IMU. A second step includes initial pose generation by processing the collected sensor data to provide a track of vehicle poses; a pose is based on a specific data set, on at least one data set recorded before that data set, and on at least one data set recorded after that data set. A third step includes SLAM processing of the initial poses and collected optical-sensor data to generate keyframes with feature points. In a fourth step, 3D reference points are generated by fusing and optimizing the feature points, using future and past feature points together with the feature point at the point of processing. The second and fourth steps yield significantly better results than the SLAM or VIO methods known from the prior art because they operate on recorded data: whereas a normal SLAM or VIO algorithm can only access data from the past, these steps can also look at positions ahead.
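
The advantage of looking both backward and forward in recorded data can be illustrated with a minimal smoothing sketch; the two averaging filters below are stand-ins for real SLAM/VIO estimators, not the method's actual processing:

```python
import numpy as np

def causal_smooth(track, k=2):
    """Online (SLAM/VIO-like): each pose estimate may only use the past k samples."""
    return np.array([track[max(0, i - k):i + 1].mean(axis=0)
                     for i in range(len(track))])

def offline_smooth(track, k=2):
    """Recorded-data version: each pose may also look k samples ahead."""
    return np.array([track[max(0, i - k):i + k + 1].mean(axis=0)
                     for i in range(len(track))])
```

On a moving trajectory, the causal filter lags behind the true pose while the centered, recorded-data filter does not, which is the intuition behind the second and fourth steps.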

Region-of-Interest Positioning for Laser-Assisted Bonding

A semiconductor device is formed by providing a semiconductor die. A laser-assisted bonding (LAB) assembly is disposed over the semiconductor die. The LAB assembly includes an infrared (IR) camera. The IR camera is used to capture an image of the semiconductor die. Image processing is performed on the image to identify corners of the semiconductor die. Regions of interest (ROI) are identified in the image relative to the corners of the semiconductor die. Parameters can be used to control the size and location of the ROI relative to the respective corners. The ROI are monitored for temperature using the IR camera while LAB is performed.
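
The corner-relative ROI parameterization can be sketched as follows; the offset/size parameter names and the temperature read-out are illustrative assumptions, not the patent's actual parameters:

```python
import numpy as np

def roi_near_corner(corner, offset, size):
    """Hypothetical parameterization: an ROI placed at a configurable
    offset from a detected die corner, with configurable size.
    Returns (x0, y0, x1, y1) in pixel coordinates."""
    (cx, cy), (dx, dy), (w, h) = corner, offset, size
    return (cx + dx, cy + dy, cx + dx + w, cy + dy + h)

def max_roi_temperature(ir_frame, roi):
    """Peak temperature inside an ROI of an IR-camera frame."""
    x0, y0, x1, y1 = roi
    return ir_frame[y0:y1, x0:x1].max()
```

One such ROI per detected corner would then be polled each IR frame while the laser-assisted bonding runs.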

VISUAL POSITIONING METHOD, MOBILE MACHINE USING THE SAME, AND COMPUTER READABLE STORAGE MEDIUM
20220392103 · 2022-12-08

A visual positioning method and a mobile machine using the same are provided. The method includes: extracting a plurality of corner feature points corresponding to a current image; determining whether a distance between each pair of the corner feature points is less than a first preset threshold; if yes, determining whether a grayscale value of each of the corner feature points with the distance less than the first preset threshold is within a second preset threshold range; if yes, obtaining cluster set(s) of the corner feature points; screening a plurality of valid feature points from the cluster set(s); determining a positioning reliability based on a ratio of the number of valid feature points to the number of corner feature points; and if the positioning reliability is within a preset range, performing visual positioning based on the positioning reliability.
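
The screening and reliability computation can be sketched as follows; the threshold names, the pairwise clustering, and the validity rule are illustrative readings of the abstract, not the claimed algorithm:

```python
import numpy as np

def positioning_reliability(points, grays, dist_max, gray_range):
    """A corner point is counted as 'valid' here when it lies closer than
    dist_max to another corner point AND its grayscale value falls within
    gray_range. Reliability = valid points / all corner points."""
    lo, hi = gray_range
    valid = set()
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            diff = np.asarray(points[i]) - np.asarray(points[j])
            if np.hypot(*diff) < dist_max:       # first threshold: distance
                for k in (i, j):
                    if lo <= grays[k] <= hi:     # second threshold: grayscale
                        valid.add(k)
    return len(valid) / n
```

Positioning would then proceed only when this ratio falls inside the preset range.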

Damage diagram creation method, damage diagram creation device, damage diagram creation system, and recording medium
11523013 · 2022-12-06

Provided are a damage diagram creation method, a damage diagram creation device, a damage diagram creation system, and a recording medium capable of detecting damage with high accuracy based on a plurality of images acquired by subjecting a subject to split imaging. In the damage diagram creation method, damage of the subject is detected from each image, in its uncomposed state, of the plurality of images acquired by split imaging; thus, damage detection performance does not deteriorate due to degraded image quality in an overlapping area. It is therefore possible to detect damage with high accuracy from the plurality of split images. Detection results for the respective images can then be composed using a composition parameter calculated from correspondence points between the images.
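
A minimal sketch of composing per-image detection results, assuming the composition parameter reduces to a translation estimated from correspondence points (a real system would typically fit a homography):

```python
import numpy as np

def composition_parameter(src_pts, dst_pts):
    """Least-squares translation between correspondence points of two
    overlapping images -- a minimal stand-in for the composition parameter."""
    return (np.asarray(dst_pts) - np.asarray(src_pts)).mean(axis=0)

def compose_detections(detections, t):
    """Map per-image damage detections into the common (composed) frame."""
    return np.asarray(detections) + t
```

Detections made on each uncomposed image are thus transferred into the composed diagram without ever running detection on blended overlap pixels.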

METHOD FOR MEASURING A DISTANCE SEPARATING A CAMERA FROM A REFERENCE OBJECT
20220383526 · 2022-12-01

A method for measuring a distance separating a camera from a reference object that includes a predetermined number of corners arranged in a pattern and at least two reference points separated by a reference length. The method includes: acquiring an image including the reference object with the camera; detecting corners in the image; attributing a class to each corner based on an orientation of the corner; detecting the reference object in the image based on the corners and attributed classes; placing the reference points in the image based on the corners of the reference object and measuring an imaged length separating the reference points; and comparing the reference length to the imaged length to obtain the distance separating the camera from the reference object.
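
Under a pinhole-camera assumption (not stated in the abstract, but the usual model behind such a length comparison), comparing the two lengths reduces to:

```python
def distance_to_object(focal_length_px, reference_length, imaged_length_px):
    """Pinhole-camera relation: distance = focal length * (reference
    length / imaged length). Units of the result follow reference_length;
    the focal length and imaged length are both in pixels."""
    return focal_length_px * reference_length / imaged_length_px
```

For example, a 0.2 m reference length imaged as 100 px by a camera with a 1000 px focal length implies a distance of 2 m.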

TECHNOLOGIES FOR AUTOMATICALLY DETERMINING AND DISPLAYING SALIENT PORTIONS OF IMAGES
20220383032 · 2022-12-01

Systems and methods for automatically determining and displaying salient portions of images are disclosed. According to certain aspects, an electronic device may support a design application that may apply a saliency detection learning model to a digital image, resulting in the application generating one or more salient portions of the digital image. The electronic device may generate a digital rendering of the salient portion of the image on digital models of items or products, and may enable a user to review the digital rendering. The user may also choose alternative salient portions of the digital image and/or aspect ratios for those salient portions for inclusion on a digital model of the item or product.
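
A brute-force sketch of selecting a salient portion of fixed size from a saliency map; the map here is a plain array (the patent's saliency comes from a learned model), and in practice the crop size would follow from a chosen aspect ratio:

```python
import numpy as np

def most_salient_crop(saliency_map, crop_h, crop_w):
    """Slide a crop_h x crop_w window over a saliency map and return the
    top-left (y, x) of the window with the highest total saliency.
    An integral image would make this O(1) per window; omitted for clarity."""
    H, W = saliency_map.shape
    best_score, best_yx = -np.inf, (0, 0)
    for y in range(H - crop_h + 1):
        for x in range(W - crop_w + 1):
            score = saliency_map[y:y + crop_h, x:x + crop_w].sum()
            if score > best_score:
                best_score, best_yx = score, (y, x)
    return best_yx
```

Offering alternative crops, as the abstract describes, would amount to ranking several window positions or sizes rather than returning only the best one.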