Patent classifications
G06T3/153
Point cloud data processing device, point cloud data processing method, and point cloud data processing program
The working efficiency of position matching between multiple point cloud data sets displayed on a display is improved. A point cloud data processing device includes a highlight controlling part. While a first point cloud containing multiple markers for position matching and a second point cloud containing multiple markers for position matching are displayed on the display, when a marker of either point cloud is specified, the highlight controlling part highlights the corresponding marker of the other point cloud.
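The correspondence-highlighting idea can be illustrated with a minimal sketch: markers in the two clouds are paired by a shared marker ID, so selecting a marker in one cloud immediately yields its counterpart to highlight in the other. The ID-based pairing and all names here are assumptions for illustration, not the patent's actual implementation.

```python
def build_marker_index(markers):
    """Map marker ID -> marker record for quick lookup."""
    return {m["id"]: m for m in markers}

def corresponding_marker(selected_id, other_cloud_markers):
    """Return the marker in the other cloud sharing the selected ID,
    or None when no counterpart exists."""
    index = build_marker_index(other_cloud_markers)
    return index.get(selected_id)

first_cloud = [{"id": "A", "xyz": (0.0, 0.0, 0.0)},
               {"id": "B", "xyz": (1.0, 0.0, 0.0)}]
second_cloud = [{"id": "B", "xyz": (1.1, 0.05, 0.0)},
                {"id": "A", "xyz": (0.1, -0.02, 0.0)}]

# Selecting marker "B" in the first cloud finds its counterpart to highlight:
highlight = corresponding_marker("B", second_cloud)
```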
Thermal-depth fusion imaging
An imaging system is provided. The imaging system includes a 3D image capture device, which is configured to capture a depth image of an object, and a thermal image capture device, which is configured to capture a thermal image of the object. The imaging system also includes a processing system, which is coupled with the 3D image capture device and the thermal image capture device. The processing system is configured to process the depth image and the thermal image to produce a thermal-depth fusion image by aligning the thermal image with the depth image, and assigning a thermal value derived from the thermal image to a plurality of points of the depth image.
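The value-assignment step can be sketched in a few lines, assuming the thermal image has already been aligned (resampled) onto the depth image's pixel grid; fusion then reads the thermal value at each valid depth pixel. The function name, data layout, and zero-as-missing convention are illustrative assumptions.

```python
def fuse_thermal_depth(depth, thermal):
    """depth, thermal: 2D lists of equal shape, already aligned.
    Returns fused points (row, col, depth_value, thermal_value),
    skipping invalid (zero) depth readings."""
    fused = []
    for r, depth_row in enumerate(depth):
        for c, d in enumerate(depth_row):
            if d > 0:  # 0 marks a missing depth sample
                fused.append((r, c, d, thermal[r][c]))
    return fused

depth_img = [[0.0, 1.2],
             [0.9, 0.0]]
thermal_img = [[20.5, 36.6],
               [36.9, 21.0]]

points = fuse_thermal_depth(depth_img, thermal_img)
```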
Systems and methods for processing images with edge detection and snap-to feature
A method for creating image products includes the following steps. Image data and positional data corresponding to the image data are captured and processed to create geo-referenced images. Edge detection procedures are performed on the geo-referenced images to identify edges and produce geo-referenced, edge-detected images. The geo-referenced, edge-detected images are saved in a database. A user interface for viewing and interacting with the geo-referenced images is also provided, such that users can consistently select the same Points of Interest across multiple interactions and multiple users.
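The "snap-to" behavior can be sketched simply: given edge pixels detected in a geo-referenced image, a user's click is snapped to the nearest edge point, so different users selecting the same feature obtain identical coordinates. The function name and the plain Euclidean nearest-point rule are assumptions, not the patented procedure.

```python
def snap_to_edge(click, edge_points):
    """Return the detected edge point closest to the clicked (x, y)."""
    return min(edge_points,
               key=lambda p: (p[0] - click[0]) ** 2 + (p[1] - click[1]) ** 2)

# Hypothetical edge pixels from an edge-detection pass:
edges = [(10, 10), (10, 11), (25, 40)]
snapped = snap_to_edge((11, 9), edges)
```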
Method and Apparatus for Registration of Different Mammography Image Views
A method of identifying potential lesions in mammographic images may include operations executed by an image processing device including receiving first image data of a first type, receiving second image data of a second type, registering the first image data and the second image data by employing a convolutional neural network (CNN) using pixel-level registration or object-level registration, determining whether a candidate detection of a lesion exists in both the first image data and the second image data based on the registering of the first image data and the second image data, and generating display output identifying the lesion.
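The "exists in both views" determination can be sketched as follows: once registration has mapped candidate detections into a shared coordinate frame, a candidate is confirmed when a detection from each view lies within a tolerance of the other. The CNN registration itself is out of scope here; the distance gate, tolerance value, and names are illustrative assumptions.

```python
def confirmed_candidates(dets_a, dets_b, tol=5.0):
    """Return pairs of detections (one per view) closer than tol
    in the shared, registered coordinate frame."""
    pairs = []
    for a in dets_a:
        for b in dets_b:
            if ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5 <= tol:
                pairs.append((a, b))
    return pairs

# Candidate detections already registered into one frame:
view1 = [(100.0, 120.0), (300.0, 40.0)]
view2 = [(102.0, 118.0), (500.0, 500.0)]
matches = confirmed_candidates(view1, view2)
```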
View synthesis using neural networks
A video stitching system combines video from different cameras to form a panoramic video that, in various embodiments, is temporally stable and tolerant to strong parallax. In an embodiment, the system provides a smooth spatial interpolation that can be used to connect the input video images. In an embodiment, the system applies an interpolation layer to slices of the overlapping video sources, and the network learns a dense flow field to smoothly align the input videos with spatial interpolation. Various embodiments are applicable to areas such as virtual reality, immersive telepresence, autonomous driving, and video surveillance.
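The smooth spatial interpolation across an overlap region can be illustrated in miniature: two camera strips are blended column by column with a weight that ramps from one source to the other. The patent's learned dense flow field is replaced here by a fixed linear ramp purely for illustration; this is a toy stand-in, not the claimed network.

```python
def blend_overlap(left_cols, right_cols):
    """left_cols, right_cols: per-column values over the overlap region.
    Returns a blended column list ramping smoothly from left to right."""
    n = len(left_cols)
    out = []
    for i in range(n):
        w = i / (n - 1) if n > 1 else 0.5  # 0 at left edge, 1 at right edge
        out.append((1 - w) * left_cols[i] + w * right_cols[i])
    return out

blended = blend_overlap([10.0, 10.0, 10.0], [20.0, 20.0, 20.0])
```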
Beautifying freeform drawings using arc and circle center snapping
Embodiments of the present invention are directed to beautifying freeform input paths in accordance with paths existing in the drawing (i.e., resolved paths). In some embodiments of the present invention, freeform input paths of a curved format can be modified or replaced to more precisely illustrate a path desired by a user. As such, a user can provide a freeform input path that resembles a path of interest but is not as precise as desired. Based on existing paths in the electronic drawing, one or more path suggestions can be generated to rectify, modify, or replace the input path with a more precise path. In some cases, the user can then select a desired path suggestion, and the selected path replaces the initially provided freeform input path.
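Circle snapping can be sketched in its simplest form: estimate a circle from a freeform stroke (centroid as center, mean distance to the centroid as radius) and replace the stroke with that idealized circle. Real beautification fits against existing resolved paths in the drawing; this self-contained centroid approximation is an assumption for illustration.

```python
def fit_circle(points):
    """Return (cx, cy, r) fitted to stroke points via centroid + mean radius."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    r = sum(((p[0] - cx) ** 2 + (p[1] - cy) ** 2) ** 0.5
            for p in points) / len(points)
    return cx, cy, r

# A wobbly, roughly unit-radius stroke around (5, 5):
stroke = [(6.0, 5.0), (5.0, 6.1), (4.1, 5.0), (5.0, 3.9)]
cx, cy, r = fit_circle(stroke)
```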
Face replacement and alignment
A face replacement system for replacing a target face with a source face can include a facial landmark determination model having a cascade multichannel convolutional neural network (CMC-CNN) to process both the target and the source face. A face warping module is able to warp the source face using determined facial landmarks that match the determined facial landmarks of the target face, and a face selection module is able to select a facial region of interest in the source face. An image blending module is used to blend the target face with the selected source region of interest.
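Landmark-driven warping can be shown in its simplest form: derive a uniform scale and translation that map two source landmarks (say, the eye centers) onto the matching target landmarks, then apply that transform to every source point. In the patent the CMC-CNN predicts the landmarks; here they are given, and the rotation-free transform is a simplifying assumption.

```python
def similarity_from_two_points(src_a, src_b, dst_a, dst_b):
    """Return (scale, tx, ty) mapping source landmarks onto target landmarks
    (axis-aligned; rotation is omitted for brevity)."""
    src_d = ((src_b[0] - src_a[0]) ** 2 + (src_b[1] - src_a[1]) ** 2) ** 0.5
    dst_d = ((dst_b[0] - dst_a[0]) ** 2 + (dst_b[1] - dst_a[1]) ** 2) ** 0.5
    s = dst_d / src_d
    tx = dst_a[0] - s * src_a[0]
    ty = dst_a[1] - s * src_a[1]
    return s, tx, ty

def warp(point, s, tx, ty):
    """Apply the scale-and-translate transform to one point."""
    return (s * point[0] + tx, s * point[1] + ty)

# Source eye centers -> target eye centers:
s, tx, ty = similarity_from_two_points((10, 10), (30, 10), (100, 50), (140, 50))
warped = warp((30, 10), s, tx, ty)
```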
Methods, systems, and computer readable media for deriving a three-dimensional (3D) textured surface from endoscopic video
Methods, systems, and computer readable media for deriving a three-dimensional (3D) textured surface from endoscopic video are disclosed. According to one method for deriving a 3D textured surface from endoscopic video, the method comprises: performing video frame preprocessing to identify a plurality of video frames of an endoscopic video, wherein the video frame preprocessing includes informative frame selection, specularity removal, and key-frame selection; generating, using a neural network or a shape-from-motion-and-shading (SfMS) approach, a 3D textured surface from the plurality of video frames; and optionally registering the 3D textured surface to at least one CT image.
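The informative-frame-selection step can be sketched with a simple heuristic: keep frames whose mean intensity and contrast exceed thresholds, discarding dark or flat frames before reconstruction. The thresholds and the mean/range metric are illustrative stand-ins; the abstract does not specify the actual criteria.

```python
def frame_stats(frame):
    """frame: 2D list of grayscale values. Returns (mean, value range)."""
    flat = [v for row in frame for v in row]
    return sum(flat) / len(flat), max(flat) - min(flat)

def select_informative(frames, min_mean=30, min_range=40):
    """Return indices of frames bright and contrasty enough to keep."""
    keep = []
    for i, f in enumerate(frames):
        mean, rng = frame_stats(f)
        if mean >= min_mean and rng >= min_range:
            keep.append(i)
    return keep

frames = [
    [[5, 6], [5, 7]],       # too dark -> rejected
    [[40, 120], [80, 10]],  # bright and high-contrast -> kept
    [[50, 55], [52, 51]],   # bright but flat -> rejected
]
informative = select_informative(frames)
```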
Computer-implemented method for drawing a polyline in a three-dimensional scene
A computer-implemented method of drawing a polyline in a three-dimensional scene: a) draws a segment (S1) of said polyline in said three-dimensional scene, said segment having a starting point (P1) and an endpoint (P2); b) displays, in the three-dimensional scene, a graphical tool (PST) representing a set of three orthogonal planes (PLA, PLB, PLC), one of said planes being orthogonal to the segment; c) selects one of said planes; and d) draws another segment of the polyline (S2), having a starting point coinciding with the endpoint of the segment drawn in step a) and lying in the plane (PLA) selected in step c). Steps a), c) and d) are carried out based on input commands provided by a user. A computer program product, a non-volatile computer-readable data-storage medium, and a Computer Aided Design or three-dimensional illustration authoring system carry out such a method.
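The constrained-drawing step can be sketched with basic vector math: one of the three orthogonal planes passes through the segment's endpoint with the segment direction as its normal, and the next segment's endpoint is constrained to the chosen plane by projecting the user's point onto it. The helper names and projection approach are illustrative assumptions, not the claimed implementation.

```python
def normalize(v):
    n = (v[0] ** 2 + v[1] ** 2 + v[2] ** 2) ** 0.5
    return (v[0] / n, v[1] / n, v[2] / n)

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def project_onto_plane(point, plane_point, normal):
    """Project `point` onto the plane through `plane_point` with `normal`."""
    n = normalize(normal)
    d = dot((point[0] - plane_point[0],
             point[1] - plane_point[1],
             point[2] - plane_point[2]), n)
    return (point[0] - d * n[0], point[1] - d * n[1], point[2] - d * n[2])

p1, p2 = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)
seg_dir = (p2[0] - p1[0], p2[1] - p1[1], p2[2] - p1[2])
# Plane PLA: orthogonal to segment S1, passing through its endpoint P2.
# Constrain the user's picked point to PLA to start segment S2:
next_point = project_onto_plane((3.0, 2.0, 1.0), p2, seg_dir)
```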