
IMAGE PROCESSING DEVICE, DISPLAY CONTROL METHOD, AND RECORDING MEDIUM

An image processing device includes a hardware processor. The hardware processor designates, from one frame image of a dynamic image acquired by dynamic imaging of a movement of a locomotorium, a plurality of regions or points on a structure included in the locomotorium, sets an alignment reference based on the designated regions or points, tracks the designated regions or points in a plurality of frame images of the dynamic image, aligns a line segment connecting the regions or the points to each other in the plurality of frame images based on the alignment reference, and causes a display to display the line segment so as to be superimposed on a representative frame image of the dynamic image.
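The alignment step described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the alignment reference is assumed here to be the segment midpoint of the first frame, and each tracked segment is translated so its midpoint coincides with that reference. All names are illustrative.

```python
# Hypothetical sketch of the alignment step: line segments connecting two
# tracked points are translated so that a chosen alignment reference
# (assumed here: the first frame's segment midpoint) coincides across frames.

def midpoint(p, q):
    return ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)

def align_segments(tracked_segments):
    """tracked_segments: list of (p, q) point pairs, one per frame image."""
    ref = midpoint(*tracked_segments[0])       # alignment reference
    aligned = []
    for p, q in tracked_segments:
        m = midpoint(p, q)
        dx, dy = ref[0] - m[0], ref[1] - m[1]  # shift midpoint onto reference
        aligned.append(((p[0] + dx, p[1] + dy), (q[0] + dx, q[1] + dy)))
    return aligned
```

The aligned segments could then all be drawn superimposed on one representative frame image.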

System for facilitating medical image interpretation

A system for facilitating medical image interpretation includes a processing unit and a display control unit. The processing unit includes a location information module generating a reference location indicator, and a feature marking module generating indication markers. The display control unit is in signal connection with the processing unit and a display device. The display control unit includes an image displaying module controlling the display device to display tissue images, and an auxiliary information displaying module controlling the display device to display, for each of the tissue images displayed by the display device, the reference location indicator and the indication markers together on the tissue image.

Information processing apparatus, information processing method, and non-transitory computer readable medium

An information processing apparatus (10) is for supporting work by a user who uses drawings for a plant. The information processing apparatus (10) includes a controller (15). The controller (15) is configured to convert a drawing including elements constituting the plant into an abstract model represented by element information indicating the elements and connection information indicating a connection relationship between the elements. When it is judged that a difference exists between one abstract model based on one drawing and another abstract model based on another drawing, the controller (15) is configured to generate display information for displaying the differing portion in a different form from the other portions.
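The comparison of two abstract models can be sketched as a set difference over element information and connection information. The data model below (element IDs as a set, connections as a set of pairs) is an assumption for illustration, not the apparatus's actual representation.

```python
# Illustrative sketch of diffing two abstract models, each given as
# element information (a set of element IDs) and connection information
# (a set of (element, element) pairs). Names are assumptions.

def model_diff(elements_a, connections_a, elements_b, connections_b):
    """Return the portions that differ between two abstract models."""
    return {
        "elements_only_in_a": elements_a - elements_b,
        "elements_only_in_b": elements_b - elements_a,
        "connections_only_in_a": connections_a - connections_b,
        "connections_only_in_b": connections_b - connections_a,
    }
```

The differing portions returned here are what a display would render in a different form from the unchanged portions.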

Method, apparatus, and system for determining polyline homogeneity

An approach is provided for an asymmetric evaluation of polygon similarity. The approach, for instance, involves receiving a first polygon representing an object depicted in an image. The approach also involves generating a transformation of the image comprising image elements whose values are based on a respective distance that each image element is from a nearest image element located on a first boundary of the first polygon. The approach further involves determining a subset of the image elements of the transformation that intersect with a second boundary of a second polygon. The approach further involves calculating a polygon similarity of the second polygon with respect to the first polygon based on the values of the subset of image elements normalized to a length of the second boundary of the second polygon.
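The measure can be sketched on a pixel grid as follows. This is a brute-force illustration under stated assumptions (distances computed cell-by-cell, boundary length taken as the number of boundary cells); a real system would use an efficient distance transform.

```python
# Sketch of the described similarity: each grid cell gets its distance to
# the nearest cell on the first polygon's boundary; similarity sums those
# values over the second polygon's boundary cells, normalized by length.
import math

def distance_map(width, height, boundary_a):
    return {
        (x, y): min(math.hypot(x - bx, y - by) for bx, by in boundary_a)
        for x in range(width) for y in range(height)
    }

def polygon_similarity(dist_map, boundary_b):
    # Asymmetric score of polygon B against polygon A: 0 means B's boundary
    # lies on A's boundary; larger values mean B strays farther from A.
    return sum(dist_map[c] for c in boundary_b) / len(boundary_b)
```

Note the asymmetry: scoring B against A's distance map generally differs from scoring A against B's.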

SYMBOL RECOGNITION FROM RASTER IMAGES OF P&IDs USING A SINGLE INSTANCE PER SYMBOL CLASS

Traditional systems that enable extracting information from Piping and Instrumentation Diagrams (P&IDs) lack accuracy due to noise in the images, or require a significant volume of annotated symbols for training if deep learning models that provide good accuracy are utilized. Conventional few-shot/one-shot learning approaches require a significant number of training tasks for meta-training. The present disclosure provides a method and system that utilizes a one-shot learning approach enabling symbol recognition using a single instance per symbol class, which is represented as a graph with points (pixels) sampled along the boundaries of the different symbols present in the P&ID, and subsequently utilizes a Graph Convolutional Neural Network (GCNN), or a GCNN appended to a Convolutional Neural Network (CNN), for symbol classification. Accordingly, given a clean symbol image for each symbol class, all instances of that symbol class may be recognized from noisy and crowded P&IDs.
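The graph-construction step can be sketched as below: boundary pixels are sampled at a fixed stride and each sampled point is linked to its nearest sampled neighbours, producing the node and edge lists a GCNN would consume. The sampling stride, neighbour count, and names are illustrative assumptions, not the disclosed method's parameters.

```python
# Hypothetical sketch: sample points along a symbol boundary and connect
# each sampled point to its k nearest sampled neighbours.
import math

def boundary_graph(boundary_pixels, step=2, k=2):
    nodes = boundary_pixels[::step]                 # sample every `step` pixels
    edges = []
    for i, (x, y) in enumerate(nodes):
        dists = sorted(
            (math.hypot(x - nx, y - ny), j)
            for j, (nx, ny) in enumerate(nodes) if j != i
        )
        edges.extend((i, j) for _, j in dists[:k])  # link k nearest neighbours
    return nodes, edges
```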

ANTI-ALIASING TWO-DIMENSIONAL VECTOR GRAPHICS USING A COMPRESSED VERTEX BUFFER
20230038647 · 2023-02-09

Techniques for rendering two-dimensional vector graphics are described. The techniques include using a central processing unit to generate tessellated triangles along a vector path, in which each of the tessellated triangles is represented by a set of vertices. From the tessellated triangles, an index buffer and a compressed vertex buffer are generated. The index buffer includes a vertex index for each vertex of each of the tessellated triangles. The compressed vertex buffer includes a vertex buffer entry for each unique vertex that maps to one or more vertex indices of the index buffer. The index buffer and the compressed vertex buffer are provided to a graphics processing unit to render the vector path with anti-aliasing.
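The buffer-generation step is essentially vertex deduplication, which can be sketched as follows. Names and the tuple vertex representation are illustrative; a real renderer would pack the vertex buffer for GPU upload.

```python
# Illustrative sketch: store each unique vertex once in the compressed
# vertex buffer, and re-express every triangle corner as an index into it.

def compress_vertices(triangles):
    """triangles: list of 3-tuples of (x, y) vertices."""
    vertex_buffer = []            # one entry per unique vertex
    index_of = {}                 # vertex -> its index in vertex_buffer
    index_buffer = []             # one index per triangle corner
    for tri in triangles:
        for v in tri:
            if v not in index_of:
                index_of[v] = len(vertex_buffer)
                vertex_buffer.append(v)
            index_buffer.append(index_of[v])
    return index_buffer, vertex_buffer
```

For two triangles sharing an edge, the vertex buffer holds four unique vertices while the index buffer holds six indices, which is the compression the title refers to.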

Photography-based 3D modeling system and method, and automatic 3D modeling apparatus and method

The present disclosure discloses a photography-based 3D modeling system and method, and an automatic 3D modeling apparatus and method, including: (S1) attaching a mobile device and a camera to the same camera stand; (S2) obtaining multiple images used for positioning from the camera or the mobile device during movement of the stand, and obtaining a position and a direction of each photo capture point, to build a tracking map that uses a global coordinate system; (S3) generating 3D models on the mobile device or a remote server based on an image used for 3D modeling at each photo capture point; and (S4) placing the individual 3D models of all photo capture points in the global three-dimensional coordinate system based on the position and the direction obtained in S2, and connecting the individual 3D models of multiple photo capture points to generate an overall 3D model that includes multiple photo capture points.
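Step S4's placement of a local model into the global coordinate system can be sketched, under the simplifying assumption of a planar case with a single yaw angle, as a rotation by the capture direction followed by a translation by the capture position. Names are illustrative.

```python
# Minimal sketch (assumed 2D/planar case) of placing a capture point's
# local model vertices into the global coordinate system using the
# position and direction (yaw, radians) obtained in step S2.
import math

def to_global(local_points, position, yaw):
    c, s = math.cos(yaw), math.sin(yaw)
    px, py = position
    return [(c * x - s * y + px, s * x + c * y + py) for x, y in local_points]
```

A full system would use a 3D rotation (e.g. a quaternion) rather than a single yaw angle.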

Rendering portions of a three-dimensional environment with different sampling rates utilizing a user-defined focus frame
11551409 · 2023-01-10

Methods, systems, and non-transitory computer readable storage media are disclosed for rendering portions of a three-dimensional environment at different sampling rates based on a focus frame within a graphical user interface. Specifically, the disclosed system provides a tool for marking a region of a graphical user interface displaying a three-dimensional environment. The disclosed system generates a focus frame based on the marked region of the graphical user interface and attaches the focus frame to a portion of the three-dimensional environment. The disclosed system assigns a first sampling rate to the portion of the three-dimensional environment displayed within the focus frame and a second sampling rate to portions of the three-dimensional environment outside the focus frame. The disclosed system renders the three-dimensional environment by sampling the portion within the focus frame at the first sampling rate and the portions outside the focus frame at the second sampling rate.
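The rate-assignment step can be sketched as a simple per-pixel test against the focus frame. The rectangle representation, default rates, and names are illustrative assumptions, not the disclosed system's API.

```python
# Hypothetical sketch: pixels inside the user-defined focus frame get the
# first (higher) sampling rate, pixels outside get the second.
# `focus` is (x0, y0, x1, y1) in screen coordinates.

def sampling_rate(px, py, focus, rate_inside=4, rate_outside=1):
    x0, y0, x1, y1 = focus
    inside = x0 <= px < x1 and y0 <= py < y1
    return rate_inside if inside else rate_outside
```

A renderer would consult this per sample to concentrate work inside the focus frame.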

Glyph Accessibility System
20230008785 · 2023-01-12

Glyph accessibility techniques, as implemented by a digital content processing system, involve accessing glyphs and glyph alternatives. These techniques include preprocessing techniques in which a base font is used to determine the similarity of glyphs within the base font to each other. Glyph metadata that describes this similarity is cached in a storage device and used during runtime to increase efficiency in locating similar glyphs in other fonts.
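The preprocessing step can be sketched as computing pairwise glyph similarity once and caching it. The similarity measure below (overlap of flat 0/1 bitmaps) is a toy assumption for illustration; the actual metadata and measure are not specified here.

```python
# Illustrative sketch: compute pairwise similarity of glyphs in a base
# font once, and cache the results for reuse at runtime.

def overlap_score(bitmap_a, bitmap_b):
    same = sum(a == b for a, b in zip(bitmap_a, bitmap_b))
    return same / len(bitmap_a)

def build_glyph_cache(base_font):
    """base_font: dict mapping glyph name -> flat 0/1 bitmap list."""
    names = sorted(base_font)
    return {
        (a, b): overlap_score(base_font[a], base_font[b])
        for a in names for b in names if a < b
    }
```

At runtime, similar-glyph lookups in other fonts would consult this cached metadata instead of recomputing similarity.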

Navigating a vehicle based on data processing using synthetically generated images
11594016 · 2023-02-28

A user-generated graphical representation can be sent into a generative network to generate a synthetic image of an area including a road, the user-generated graphical representation including at least three different colors and each color from the at least three different colors representing a feature from a plurality of features. A determination can be made that a discrimination network fails to distinguish between the synthetic image and a sensor-detected image. The synthetic image can be sent, in response to determining that the discrimination network fails to distinguish between the synthetic image and the sensor-detected image, into an object detector to generate a non-user-generated graphical representation. An objective function can be determined based on a comparison between the user-generated graphical representation and the non-user-generated graphical representation. A perception model can be trained using the synthetic image in response to determining that the objective function is within a predetermined acceptable range.
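The objective-function step can be sketched as a per-element comparison between the user-generated map and the detector's regenerated map, with training gated on the mismatch staying within an accepted range. The label-list representation and threshold are illustrative assumptions.

```python
# Sketch: compare the user-generated colour-label map with the map
# regenerated by the object detector, and gate training on the result.

def objective(user_map, regenerated_map):
    """Fraction of mismatching labels between the two representations."""
    mismatches = sum(u != r for u, r in zip(user_map, regenerated_map))
    return mismatches / len(user_map)

def within_acceptable_range(user_map, regenerated_map, threshold=0.1):
    # Train the perception model on the synthetic image only if the
    # objective falls inside the predetermined acceptable range.
    return objective(user_map, regenerated_map) <= threshold
```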