
Point cloud colorization system with real-time 3D visualization

Enabling colorization and color adjustments on 3D point clouds, which are projected onto a 2D view with an equirectangular projection. A user may color regions on the 2D view and preview the changes immediately in a 3D view of the point cloud. Embodiments render the color of each point in the point cloud by testing whether the 2D projection of the point is inside the colored region. Applications may include generation of a color 3D virtual reality environment using point clouds and color-adjusted imagery.
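The projection-and-test step described above can be sketched as follows. This is a minimal sketch under two assumptions not stated in the abstract: the painted region is an axis-aligned rectangle in the 2D view (real embodiments could use arbitrary polygons), and the function names are hypothetical.

```python
import math

def project_equirectangular(x, y, z, width, height):
    """Project a 3D point onto a 2D equirectangular view.

    Longitude maps to the horizontal axis, latitude to the vertical axis.
    """
    r = math.sqrt(x * x + y * y + z * z)
    lon = math.atan2(y, x)             # [-pi, pi]
    lat = math.asin(z / r)             # [-pi/2, pi/2]
    u = (lon + math.pi) / (2 * math.pi) * width
    v = (math.pi / 2 - lat) / math.pi * height
    return u, v

def point_in_region(u, v, region):
    """Test whether a projected point falls inside a colored region,
    here simplified to a rectangle (u_min, v_min, u_max, v_max)."""
    u_min, v_min, u_max, v_max = region
    return u_min <= u <= u_max and v_min <= v <= v_max
```

A point in the cloud would be recolored only when `point_in_region` returns true for its projection, which is what makes an immediate 3D preview of a 2D paint stroke possible.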

Systems and methods for automatic alignment of drawings
11625843 · 2023-04-11

Systems and methods are disclosed for automatically aligning drawings. One method comprises receiving a source drawing and a target drawing, determining the main axes of the source and target drawings respectively, and aligning the main axis of the source drawing to the main axis of the target drawing. A plurality of source feature point vectors (FPVs) and target FPVs may be generated from the source and target drawings once their main axes have been aligned. A predetermined number of matching FPV pairs may then be determined across the source and target drawings, and the source drawing may be aligned with the target drawing based on the matching FPV pairs.
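The main-axis step admits a compact sketch. Assuming, as one plausible reading, that a drawing's "main axis" is the first principal component of its feature points, the coarse alignment is a single rotation:

```python
import numpy as np

def main_axis(points):
    """Dominant axis of a 2D point set as the first principal
    component (eigenvector with the largest eigenvalue)."""
    centered = points - points.mean(axis=0)
    vals, vecs = np.linalg.eigh(centered.T @ centered)
    return vecs[:, np.argmax(vals)]

def align_main_axes(source, target):
    """Rotate source points so their main axis matches the target's."""
    a, b = main_axis(source), main_axis(target)
    angle = np.arctan2(b[1], b[0]) - np.arctan2(a[1], a[0])
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s], [s, c]])
    return source @ R.T
```

Note that a principal axis has a 180-degree sign ambiguity; the patent's subsequent FPV matching step would resolve which of the two orientations is correct.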

Method for temporal stabilization of landmark localization

Various embodiments set forth systems and techniques for training a landmark model. The techniques include determining, using the landmark model, a first landmark in a set of first landmarks associated with a first image; performing, on the first image, a first perturbation to obtain a second image; determining, using the landmark model, a second landmark in a set of second landmarks associated with the second image; determining, based on a first distance between the first landmark and the second landmark, a first loss function; and updating, based on the first loss function, a first parameter of the landmark model.
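The loss described above can be sketched concretely. This sketch assumes a translation as the perturbation (the claim covers perturbations generally) and treats the landmark model as an opaque callable; a real training loop would backpropagate this loss to update the model's parameters.

```python
import numpy as np

def stability_loss(landmark_model, image, shift=(2, 3)):
    """Temporal-stability loss: landmarks predicted on a perturbed
    image, mapped back through the known perturbation, should match
    the landmarks predicted on the original image."""
    dy, dx = shift
    first = landmark_model(image)                            # (N, 2) as (x, y)
    perturbed = np.roll(image, shift=(dy, dx), axis=(0, 1))  # translate
    second = landmark_model(perturbed) - np.array([dx, dy])  # undo the shift
    # Mean squared distance between corresponding landmarks.
    return np.mean(np.sum((first - second) ** 2, axis=1))
```

A perfectly stable model scores zero: its prediction simply follows the perturbation.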

TEACHING DATA CONVERSION DEVICE, TEACHING DATA CONVERSION METHOD, AND NON-TRANSITORY STORAGE MEDIUM
20230143661 · 2023-05-11

The first neural network is trained to output, when an object image is input, a geometric transformation parameter for that image. The object image is an image of an object identified from the object information of first teaching data, which comprises an image together with object information including a category, a position, and a size of an object in the image. A calculation unit calculates the orientation of the object from the geometric transformation parameter output by the first neural network. A generation unit then produces second teaching data by adding the calculated orientation to the first teaching data, yielding an image and object information including a category, a position, a size, and an orientation of the object in the image.
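The calculation and generation steps can be sketched as below. The abstract does not specify how the geometric transformation parameter is encoded; this sketch assumes the common 2×2 linear part of an affine transform, with all helper names hypothetical.

```python
import math

def orientation_from_affine(a, b, c, d):
    """Recover an object's orientation in degrees from the 2x2 linear
    part [[a, b], [c, d]] of a geometric transformation: the angle of
    the image of the unit x-axis."""
    return math.degrees(math.atan2(c, a))

def add_orientation(first_teaching_data, params):
    """Second teaching data = first teaching data plus the calculated
    orientation appended to its object information."""
    second = dict(first_teaching_data)
    second["orientation"] = orientation_from_affine(*params)
    return second
```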

Deep learning-based multi-site, multi-primitive segmentation for nephropathology using renal biopsy whole slide images

Embodiments discussed herein facilitate segmentation of histological primitives from stained histology of renal biopsies via deep learning and/or training deep learning model(s) to perform such segmentation. One example embodiment is configured to access a first histological image of a renal biopsy comprising a first type of histological primitives, wherein the first histological image is stained with a first type of stain; provide the first histological image to a first deep learning model trained based on the first type of histological primitive and the first type of stain; and receive a first output image from the first deep learning model, wherein the first type of histological primitives is segmented in the first output image.

FRAME CALIBRATION FOR ROBUST VIDEO SYNTHESIS

A method for calibrating an animation includes obtaining a static source image of a source face; capturing, by an image capturing device, a driving video of a human; determining whether a driving face, of the human, is present in each of the video frames; measuring a driving face position and a driving face size in a calibration reference frame of the driving video; generating each of one or more modified video frames based on the calibration reference frame and each of one or more subsequent video frames; and outputting, for each of the driving video frames, the source image in response to determining that the driving face is not present, and outputting the source image and the modified video frame in response to determining that the driving face is present.
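The per-frame output logic reduces to a small routing loop. A sketch, with `detect_face` and `modify` as hypothetical callables standing in for the face detector and the calibration-based frame modification, and `None` standing in for "source image output alone":

```python
def calibrated_frames(source_image, driving_frames, detect_face, modify):
    """For each driving frame, yield (source_image, None) when no
    driving face is detected, or (source_image, modified_frame) when
    one is, using the first face-bearing frame as calibration reference."""
    reference = next((f for f in driving_frames if detect_face(f)), None)
    for frame in driving_frames:
        if detect_face(frame):
            yield source_image, modify(reference, frame)
        else:
            yield source_image, None
```

`driving_frames` is assumed to be a reusable sequence (it is iterated once to pick the calibration reference and again to produce outputs).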

IMAGE UPSAMPLING

An example system includes a feature extraction engine to identify features in a compressed image at a plurality of scales. The features include features corresponding to compression artifacts and features corresponding to image content to be upsampled. The feature extraction engine is to identify the features in the compressed image at a first scale based on noncontiguous pixels. The system also includes a reconstruction engine to refine the features corresponding to the image content to be upsampled and mitigate the features corresponding to the compression artifacts. The system includes an upsampling engine to generate an upsampled version of the compressed image based on the refined and mitigated features.
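Dilated (strided) sampling is one common way to read "features based on noncontiguous pixels"; a sketch of such a sampling window, with boundary handling omitted (NumPy's negative indices wrap, so callers should keep the window inside the image):

```python
import numpy as np

def dilated_patch(image, row, col, dilation=2, size=3):
    """Sample a size x size neighborhood of noncontiguous pixels
    around (row, col), spaced `dilation` pixels apart, as a first-scale
    feature extractor might."""
    offsets = (np.arange(size) - size // 2) * dilation
    rows = (row + offsets)[:, None]   # column vector of row indices
    cols = (col + offsets)[None, :]   # row vector of column indices
    return image[rows, cols]          # broadcasts to (size, size)
```

Sampling spaced-out pixels enlarges the receptive field cheaply, which helps separate block-structured compression artifacts from genuine image content.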

Method of plane tracking
11688033 · 2023-06-27

A method of plane tracking comprising: capturing by a camera a reference frame of a given plane from a first angle; capturing by the camera a destination frame of the given plane from a second angle different than the first angle; defining coordinates of matching points in the reference frame and the destination frame; calculating, using the first and second angles, first and second respective rotation transformations to a simulated plane parallel to the given plane; applying an affine transformation between the reference frame coordinate on the simulated plane and the destination frame coordinate on the simulated plane; and applying a projective transformation on the simulated plane destination frame coordinate to calculate the destination frame coordinate.
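The rotation-to-simulated-plane step can be sketched as a homography. Assuming a known camera intrinsic matrix K and a tilt about the camera x-axis (the abstract does not fix the rotation axis), warping a frame toward the fronto-parallel "simulated" plane and back is:

```python
import numpy as np

def rotation_homography(K, angle_x):
    """Homography K @ R @ K^-1 that re-renders image coordinates as if
    the camera were rotated by angle_x (radians) about its x-axis,
    warping the view toward the simulated (fronto-parallel) plane."""
    c, s = np.cos(angle_x), np.sin(angle_x)
    R = np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    return K @ R @ np.linalg.inv(K)

def apply_homography(H, pt):
    """Apply a 3x3 homography to a 2D point (projective division)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])
```

In the method's terms: reference and destination coordinates are each rotated onto the simulated plane, an affine transform relates them there, and the inverse of the destination rotation (itself projective) maps the result back to destination-frame coordinates.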

Image-based circular plot recognition and interpretation

A device includes software instructions for a circular plot analysis agent and at least one circular plot definition. The circular plot analysis agent obtains a digital image of a circular plot, detects a perimeter of the circular plot within the digital image, detects a plurality of edges within the perimeter, identifies a set of endpoints on the perimeter as a function of the plurality of edges, generates a plot descriptor from the set of endpoints, and initiates a transaction with a second device as a function of the plot descriptor.
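The endpoint and descriptor steps can be sketched geometrically. This assumes the perimeter has already been fit as a circle (center, radius) and each edge meets it at a known angle; the descriptor form, sorted angular gaps between endpoints (which is rotation-invariant), is an illustrative assumption, not the patent's definition.

```python
import math

def perimeter_endpoints(center, radius, edge_angles):
    """Place each detected edge's endpoint on the plot perimeter."""
    cx, cy = center
    return [(cx + radius * math.cos(a), cy + radius * math.sin(a))
            for a in edge_angles]

def plot_descriptor(edge_angles):
    """Rotation-invariant descriptor: sorted angular gaps between
    consecutive endpoints around the perimeter."""
    angles = sorted(a % (2 * math.pi) for a in edge_angles)
    gaps = [(b - a) % (2 * math.pi)
            for a, b in zip(angles, angles[1:] + angles[:1])]
    return sorted(gaps)
```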

GEO-POSITIONING
20170364250 · 2017-12-21

The invention is a method of geo-positioning geographic data for visualisation of a geographic area, particularly a mine site, and a device configured to perform the method. The method includes the steps of: importing two or more data sources having geographic data of the geographic area; selecting a first control in a first data source of the two or more data sources and the same first control in a second data source; selecting a second control in the first data source and the same second control in the second data source; and applying an algorithm in a processor to overlay, rotate and scale the data sources until the first control in the first data source matches the first control in the second data source and the second control in the first data source matches the second control in the second data source.
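With exactly two matched controls, the overlay, rotate and scale step has a closed-form solution. A sketch assuming 2D geographic coordinates and a similarity model (translation, rotation, uniform scale):

```python
import numpy as np

def two_control_transform(p1, p2, q1, q2):
    """Similarity transform (scale, rotation R, translation t) that
    maps control point p1 onto q1 and p2 onto q2."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    q1, q2 = np.asarray(q1, float), np.asarray(q2, float)
    dp, dq = p2 - p1, q2 - q1
    scale = np.linalg.norm(dq) / np.linalg.norm(dp)
    angle = np.arctan2(dq[1], dq[0]) - np.arctan2(dp[1], dp[0])
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s], [s, c]])
    t = q1 - scale * R @ p1           # so that p1 lands exactly on q1
    return scale, R, t

def transform_points(scale, R, t, points):
    """Apply the similarity transform to an (N, 2) array of points."""
    return scale * (np.asarray(points, float) @ R.T) + t
```

Both controls match exactly by construction, so applying `transform_points` to the whole first data source overlays it on the second.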