Patent classifications
G06T2210/12
Neural network model trained using generated synthetic images
Training deep neural networks requires a large amount of labeled training data. Conventionally, labeled training data is generated by gathering real images that are manually labeled, which is very time-consuming. Instead of manually labeling a training dataset, a domain randomization technique is used to generate training data that is automatically labeled. The generated training data may be used to train neural networks for object detection and segmentation (labeling) tasks. In an embodiment, the generated training data includes synthetic input images generated by rendering three-dimensional (3D) objects of interest in a 3D scene. In another embodiment, the generated training data includes synthetic input images generated by rendering 3D objects of interest on a two-dimensional (2D) background image. The 3D objects of interest are objects that a neural network is trained to detect and/or label.
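As a rough illustration of the automatic-labeling idea, the sketch below substitutes a simple 2D paste compositor for the 3D renderer; every name in it (Sample, make_example) is hypothetical and not taken from the publication. The point is that the label comes for free, because the generator knows where it placed the object.

```python
# Minimal sketch of domain-randomized training data generation, assuming a
# 2D paste compositor stands in for a full 3D renderer.
import random
from dataclasses import dataclass

import numpy as np

@dataclass
class Sample:
    image: np.ndarray  # H x W x 3 synthetic input image
    bbox: tuple        # (x0, y0, x1, y1) label, known by construction
    class_id: int

def make_example(h: int = 256, w: int = 256, class_id: int = 0) -> Sample:
    # Randomized background image (domain randomization: vary colors/noise).
    image = np.random.randint(0, 256, (h, w, 3), dtype=np.uint8)
    # "Render" the object of interest: here just a randomly placed, randomly
    # colored patch standing in for a projected 3D object.
    ow, oh = random.randint(20, 80), random.randint(20, 80)
    x0, y0 = random.randint(0, w - ow), random.randint(0, h - oh)
    image[y0:y0 + oh, x0:x0 + ow] = np.random.randint(0, 256, 3, dtype=np.uint8)
    # The label is automatic: the compositor knows where it drew the object.
    return Sample(image, (x0, y0, x0 + ow, y0 + oh), class_id)
```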
Intersection testing in a ray tracing system using ray bundle vectors
Ray tracing systems and computer-implemented methods are described for performing intersection testing on a bundle of rays with respect to a box. Silhouette edges of the box are identified from the perspective of the bundle of rays. For each of the identified silhouette edges, components of a vector providing a bound to the bundle of rays are obtained and it is determined whether the vector passes inside or outside of the silhouette edge. Results of determining, for each of the identified silhouette edges, whether the vector passes inside or outside of the silhouette edge, are used to determine an intersection testing result for the bundle of rays with respect to the box.
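The following is a loose 2D analogue of the silhouette-edge test, assuming a bundle of rays that shares one origin and is bounded by two direction vectors sweeping counter-clockwise from dir_lo to dir_hi by less than 180 degrees; the publication itself works in 3D with bound vectors per silhouette edge, so all names here are illustrative.

```python
# Conservative 2D analogue: in 2D the "silhouette edges" of an axis-aligned
# box, seen from the bundle origin, reduce to its extreme corners.
import numpy as np

def cross2(a, b):
    return a[0] * b[1] - a[1] * b[0]

def bundle_may_hit_box(origin, dir_lo, dir_hi, box_min, box_max) -> bool:
    """Can any ray between dir_lo and dir_hi hit the box? The bundle misses
    only if the whole box lies outside one of the bounding directions.
    (This sketch ignores the box-behind-origin case; it only errs towards
    reporting a possible hit, so it stays conservative.)"""
    corners = [np.array([x, y]) for x in (box_min[0], box_max[0])
                                for y in (box_min[1], box_max[1])]
    to_corners = [c - origin for c in corners]
    # Box entirely clockwise of dir_lo: every ray passes outside that bound.
    if all(cross2(dir_lo, v) < 0 for v in to_corners):
        return False
    # Box entirely counter-clockwise of dir_hi: same, on the other side.
    if all(cross2(dir_hi, v) > 0 for v in to_corners):
        return False
    return True  # conservative: the bundle may intersect the box
```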
IMAGE-BASED INSTRUMENT IDENTIFICATION AND TRACKING
Disclosed is a computer-implemented method of transmitting identification information of a medical instrument. The method encompasses comparing a digital image of an instrument tray and an instrument to a digital image of just the instrument tray to determine the identity of the instrument. A characteristic geometry such as its envelope is assigned to the instrument, and a characteristic quantity of the envelope, such as its aspect ratio, may be used to identify the instrument. By determining, from the image of the instrument and the instrument tray, the relative position between those two entities, the method establishes whether the instrument has been taken from the instrument tray, and informs a medical computing system of this determination. The medical computing system may then determine whether the correct instrument has been taken from the instrument tray, for example by comparison with medical procedure planning data.
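A minimal sketch of the envelope/aspect-ratio matching step, assuming grayscale images registered to the same camera pose; the threshold values and the instrument catalogue below are invented for illustration.

```python
# Identify an instrument from the pixels that differ between the two images.
import numpy as np

CATALOGUE = {"scalpel": 8.0, "forceps": 5.0, "clamp": 2.5}  # name -> aspect ratio

def identify_instrument(tray_only: np.ndarray, tray_with_inst: np.ndarray,
                        diff_thresh: int = 30, ratio_tol: float = 0.5):
    # Pixels that changed between the two images belong to the instrument.
    diff = np.abs(tray_with_inst.astype(int) - tray_only.astype(int)) > diff_thresh
    ys, xs = np.nonzero(diff)
    if xs.size == 0:
        return None  # no instrument visible
    # Characteristic geometry: the axis-aligned envelope of the changed pixels.
    w, h = xs.max() - xs.min() + 1, ys.max() - ys.min() + 1
    aspect = max(w, h) / min(w, h)  # characteristic quantity of the envelope
    # Match the aspect ratio against the known instrument catalogue.
    name, ref = min(CATALOGUE.items(), key=lambda kv: abs(kv[1] - aspect))
    return name if abs(ref - aspect) <= ratio_tol * ref else None
```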
INTERSECTION TESTING IN A RAY TRACING SYSTEM
A ray tracing unit and method for processing a ray in a ray tracing system performs intersection testing for the ray by performing one or more intersection testing iterations. Each intersection testing iteration includes: (i) traversing an acceleration structure to identify the nearest intersection of the ray with a primitive that has not been identified as the nearest intersection in any previous intersection testing iterations for the ray; and (ii) if, based on a characteristic of the primitive, a traverse shader is to be executed in respect of the identified intersection: executing the traverse shader in respect of the identified intersection; and if the execution of the traverse shader determines that the ray does not intersect the primitive at the identified intersection, causing another intersection testing iteration to be performed. When the intersection testing for the ray is complete, an output shader is executed to process a result of the intersection testing for the ray.
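The iteration structure can be sketched as follows; the scene object, the shaders, and the idea of excluding rejected primitives by identity are simplifying placeholders, not the publication's mechanism.

```python
# Sketch of the per-ray intersection testing loop described above.
def trace_ray(ray, scene, traverse_shader, output_shader):
    rejected = set()  # primitives ruled out by the traverse shader
    hit = None
    while True:
        # (i) Traverse the acceleration structure for the nearest intersection
        # not identified as nearest in any previous iteration for this ray.
        hit = scene.nearest_intersection(ray, exclude=rejected)
        if hit is None:
            break  # no more candidate intersections: miss
        # (ii) Some primitives (e.g. alpha-tested ones) require a traverse
        # shader to confirm the hit, based on a characteristic of the primitive.
        if hit.primitive.needs_traverse_shader:
            if not traverse_shader(ray, hit):
                rejected.add(hit.primitive)
                continue  # perform another intersection testing iteration
        break  # hit confirmed, or no traverse shader needed
    # Intersection testing complete: run the output shader on the result
    # (hit may be None for a miss).
    return output_shader(ray, hit)
```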
IMAGE PROCESSING APPARATUS, METHOD AND PROGRAM, LEARNING APPARATUS, METHOD AND PROGRAM, AND DERIVATION MODEL
An image processing apparatus includes at least one processor. From a tomographic image including a structure, the processor derives three-dimensional coordinate information that defines both the position of the structure in the tomographic plane and the position of an end part of the structure outside the tomographic plane, in a direction intersecting the tomographic image.
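A thin sketch of that interface, assuming a learned model that maps a 2D slice to the coordinate information; the names, array shapes, and model call are all assumptions rather than details from the publication.

```python
# Hypothetical interface: one slice in, in-plane and out-of-plane coordinates out.
from dataclasses import dataclass

import numpy as np

@dataclass
class StructureCoordinates:
    in_plane_xy: tuple   # position of the structure within the tomographic plane
    end_offset_z: float  # signed distance of the structure's end part along
                         # the direction intersecting the tomographic image

def derive_coordinates(slice_2d: np.ndarray, model) -> StructureCoordinates:
    # model.predict is a placeholder for whatever derivation model is used;
    # add batch and channel axes for a typical 2D-image model input.
    x, y, z = model.predict(slice_2d[None, ..., None])[0]
    return StructureCoordinates(in_plane_xy=(float(x), float(y)),
                                end_offset_z=float(z))
```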
TECHNIQUES FOR INTRODUCING ORIENTED BOUNDING BOXES INTO BOUNDING VOLUME HIERARCHY
Described herein is a technique for modifying a bounding volume hierarchy. The technique includes combining preferred orientations of child nodes of a first bounding box node to generate a first preferred orientation; based on the first preferred orientation, converting one or more child nodes of the first bounding box node into one or more oriented bounding box nodes; combining preferred orientations of child nodes of a second bounding box node to generate a second preferred orientation; and, based on the second preferred orientation, maintaining one or more children of the second bounding box node as non-oriented bounding box nodes.
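One way to picture the decision is sketched below, assuming each node stores a unit "preferred orientation" vector and that agreement is measured by the length of the averaged vectors; the threshold and helper names are hypothetical.

```python
# Combine child orientations; convert children to OBBs only when they agree.
import numpy as np

AGREEMENT_THRESHOLD = 0.9  # how aligned child orientations must be

def combine_orientations(children):
    # Average the children's unit orientation vectors. The length of the
    # average (0..1) measures how well the orientations agree.
    s = np.sum([c.preferred_orientation for c in children], axis=0)
    n = np.linalg.norm(s)
    return (s / n, n / len(children)) if n > 0 else (None, 0.0)

def maybe_convert_to_obb(node):
    combined, agreement = combine_orientations(node.children)
    if combined is not None and agreement >= AGREEMENT_THRESHOLD:
        # Children largely share an orientation: refit them as oriented boxes.
        for child in node.children:
            child.convert_to_oriented(combined)  # placeholder conversion
    # Otherwise keep the children as ordinary (non-oriented) bounding boxes.
```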
3-D graphics rendering with implicit geometry
Aspects relate to tracing rays in 3-D scenes that comprise objects defined by or with implicit geometry. In an example, a trapping element defines a portion of 3-D space in which implicit geometry exists. When a ray is found to intersect a trapping element, a trapping element procedure is executed. The trapping element procedure may comprise marching the ray through a 3-D volume and evaluating a function that defines the implicit geometry at each current 3-D position of the ray. An intersection with the implicit geometry may be detected concurrently with intersections of the same ray with explicitly-defined geometry, and data describing these intersections may be stored with the ray and resolved.
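A trapping-element procedure of this kind can be sketched with sphere tracing against a signed distance function; the SDF, step bound, and parameters below are illustrative assumptions rather than the publication's method.

```python
# March a ray through the trapped volume, evaluating the implicit function
# at each current position along the ray.
import numpy as np

def march_trapping_element(origin, direction, t_enter, t_exit, sdf,
                           eps=1e-4, max_steps=128):
    """Returns the hit distance t along the ray, or None for a miss."""
    t = t_enter
    for _ in range(max_steps):
        if t > t_exit:
            return None  # ray left the trapping element without a hit
        p = origin + t * direction
        d = sdf(p)       # evaluate the implicit geometry at the current position
        if d < eps:
            return t     # on (or within eps of) the implicit surface
        t += d           # safe step: the SDF bounds the distance to the surface
    return None

# Example implicit surface: a unit sphere centred in the trapping element.
unit_sphere = lambda p: np.linalg.norm(p) - 1.0
```

The returned distance can then be compared against the nearest hit of the same ray with explicitly-defined geometry, matching the resolution step described above.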
Physical object boundary detection techniques and systems
Physical object boundary detection techniques and systems are described. In one example, an augmented reality module generates three-dimensional point cloud data that describes depths at respective points within a physical environment including the physical object. A physical object boundary detection module then filters the point cloud data by removing points that correspond to a ground plane. The module performs a nearest neighbor search to locate a subset of the points within the filtered point cloud data that correspond to the physical object, and projects that subset of points onto the ground plane to generate a two-dimensional boundary. The two-dimensional boundary is then extruded based on a height determined from the point in the filtered point cloud data having the maximum distance from the ground plane.
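Condensed into code, the pipeline might look like the sketch below, which assumes the ground plane is z = 0, uses scipy's cKDTree for the nearest-neighbor search, and grows the object cluster from a hypothetical user-supplied seed point; all thresholds are illustrative.

```python
# Filter ground points, cluster the object, project to 2D, extrude by height.
import numpy as np
from scipy.spatial import cKDTree

def object_bounds(points: np.ndarray, seed: np.ndarray,
                  ground_eps: float = 0.02, nn_radius: float = 0.05):
    # 1. Filter: drop points lying on the ground plane (z close to 0).
    pts = points[points[:, 2] > ground_eps]
    # 2. Nearest-neighbor search: grow a cluster of object points outward
    #    from the point closest to the seed.
    tree = cKDTree(pts)
    seen, frontier = set(), [int(tree.query(seed)[1])]
    while frontier:
        i = frontier.pop()
        if i in seen:
            continue
        seen.add(i)
        frontier.extend(tree.query_ball_point(pts[i], nn_radius))
    obj = pts[list(seen)]
    # 3. Project the subset onto the ground plane -> 2D boundary (here a box).
    (x0, y0), (x1, y1) = obj[:, :2].min(0), obj[:, :2].max(0)
    # 4. Extrude by the maximum distance of any object point from the ground.
    height = obj[:, 2].max()
    return (x0, y0, x1, y1), height
```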
Phrase recognition model for autonomous vehicles
Aspects of the disclosure relate to training and using a phrase recognition model to identify phrases in images. As an example, a selected phrase list including a plurality of phrases is received, where each phrase of the plurality of phrases includes text. An initial plurality of images may be received, and a training image set may be selected from the initial plurality of images by identifying the phrase-containing images that include one or more phrases from the selected phrase list. Each given phrase-containing image of the training image set may be labeled with information identifying the one or more phrases from the selected phrase list included in that image. The model may be trained based on the training image set such that the model is configured to, in response to receiving an input image, output data indicating whether a phrase of the plurality of phrases is included in the input image.
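The training-set selection step can be sketched as follows, assuming a hypothetical helper phrases_in(image) that reports candidate text found in an image; the model training itself is elided.

```python
# Select and label the phrase-containing images for the training set.
def build_training_set(images, selected_phrases, phrases_in):
    training_set = []
    for image in images:
        # Keep only phrase-containing images: those showing at least one
        # phrase from the selected phrase list.
        found = [p for p in selected_phrases if p in phrases_in(image)]
        if found:
            # Label the image with the selected phrases it contains; a model
            # trained on these pairs learns to report, for an input image,
            # which of the selected phrases (if any) appear in it.
            training_set.append((image, found))
    return training_set
```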
VIRTUAL REALITY SYSTEM WITH INSPECTING FUNCTION OF ASSEMBLING AND DISASSEMBLING AND INSPECTION METHOD THEREOF
A virtual reality system with an inspecting function for assembling and disassembling, and an inspection method for assembling and disassembling based on virtual reality, are presented. A learning-end acquires inspection data and a teaching assembling-disassembling record in which a plurality of checkpoints is set, plays the teaching assembling-disassembling record, and modifies a learning assembling-disassembling status of a plurality of virtual objects based on the user's operations for assembling or disassembling. The learning-end issues an assembling-disassembling error reminder when the learning assembling-disassembling status is inconsistent with the teaching assembling-disassembling status at any of the checkpoints.
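A minimal sketch of the checkpoint comparison, assuming each assembling-disassembling status is a mapping from virtual-object identifier to object state; every name below is hypothetical.

```python
# Compare the learner's status against the teaching status at each checkpoint.
def inspect_at_checkpoints(checkpoints, teaching_status_at, learner_status_at,
                           issue_reminder):
    for cp in checkpoints:
        teaching = teaching_status_at(cp)  # expected status at this checkpoint
        learning = learner_status_at(cp)   # learner's actual status
        if learning != teaching:
            # Statuses are inconsistent: issue the assembling-disassembling
            # error reminder for this checkpoint.
            issue_reminder(cp, expected=teaching, actual=learning)
```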