Patent classifications
G06T15/00
Virtual and augmented reality systems and methods
A method for displaying virtual content to a user includes determining an accommodation of the user's eyes. The method also includes delivering, through a first waveguide of a stack of waveguides, light rays having a first wavefront curvature based at least in part on the determined accommodation, wherein the first wavefront curvature corresponds to a focal distance of the determined accommodation. The method further includes delivering, through a second waveguide of the stack of waveguides, light rays having a second wavefront curvature, the second wavefront curvature associated with a predetermined margin of the focal distance of the determined accommodation.
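A minimal sketch of the waveguide-selection logic the abstract describes: pick a primary waveguide whose focal distance matches the determined accommodation, plus secondary waveguides within a predetermined margin. The focal distances, accommodation value, and margin below are illustrative assumptions, not values from the patent.

```python
def select_waveguides(accommodation_m, waveguide_focal_m, margin_m):
    """Pick a primary waveguide whose focal distance best matches the user's
    accommodation, plus any other waveguides within a predetermined margin."""
    # Primary: focal distance closest to the determined accommodation.
    primary = min(range(len(waveguide_focal_m)),
                  key=lambda i: abs(waveguide_focal_m[i] - accommodation_m))
    # Secondary: other waveguides whose focal distance lies within the margin.
    secondary = [i for i, f in enumerate(waveguide_focal_m)
                 if i != primary and abs(f - accommodation_m) <= margin_m]
    return primary, secondary

# Example: a stack with focal planes at 0.5 m, 1 m, 2 m and 4 m, and a user
# accommodating at roughly 1.1 m with a 1 m margin (hypothetical numbers).
primary, secondary = select_waveguides(1.1, [0.5, 1.0, 2.0, 4.0], margin_m=1.0)
print(primary, secondary)   # -> 1 [0, 2]
```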
Neural network model trained using generated synthetic images
Training deep neural networks requires a large amount of labeled training data. Conventionally, labeled training data is generated by gathering real images that are manually labeled, which is very time-consuming. Instead of manually labeling a training dataset, a domain randomization technique is used to generate training data that is automatically labeled. The generated training data may be used to train neural networks for object detection and segmentation (labeling) tasks. In an embodiment, the generated training data includes synthetic input images generated by rendering three-dimensional (3D) objects of interest in a 3D scene. In an embodiment, the generated training data includes synthetic input images generated by rendering 3D objects of interest on a 2D background image. The 3D objects of interest are objects that a neural network is trained to detect and/or label.
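A toy sketch of why domain randomization yields automatically labeled data: because the generator places the objects of interest itself, it knows every label exactly. A real pipeline renders 3D objects with a renderer; this hypothetical version only records randomized placements and the labels that come "for free".

```python
import random

def generate_sample(background_size=(640, 480), num_objects=3):
    """Generate one synthetic training sample's labels by randomly placing
    objects of interest over a background of the given size."""
    w, h = background_size
    labels = []
    for _ in range(num_objects):
        # Randomize object class, position and size (the domain randomization).
        cls = random.choice(["car", "pedestrian", "sign"])
        bw, bh = random.randint(20, 120), random.randint(20, 120)
        x, y = random.randint(0, w - bw), random.randint(0, h - bh)
        # The bounding-box label is known exactly because we placed the object.
        labels.append({"class": cls, "bbox": (x, y, x + bw, y + bh)})
    # A real implementation would also render and return the image here.
    return labels

print(generate_sample())
```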
Intersection testing in a ray tracing system using ray coordinate system basis vectors
A method and an intersection testing module for performing intersection testing of a ray with a box in a ray tracing system. The ray and the box are defined in a 3D space using a space-coordinate system, and the ray is defined with a ray origin and a ray direction. A ray-coordinate system is used to perform intersection testing, wherein the ray-coordinate system has an origin at the ray origin, and the ray-coordinate system has three basis vectors. A first of the basis vectors is aligned with the ray direction. A second and a third of the basis vectors: (i) are both orthogonal to the first basis vector, (ii) are not parallel with each other, and (iii) have a zero as one component when expressed in the space-coordinate system. A result of performing the intersection testing is outputted for use by the ray tracing system.
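One plausible construction of the two auxiliary basis vectors with the properties the abstract lists: both orthogonal to the ray direction, not parallel to each other, and each with a zero component in the space-coordinate system. The sketch below picks the largest-magnitude direction component as the "major" axis to avoid degenerate vectors; it is an illustrative choice, not necessarily the patented one.

```python
import numpy as np

def ray_basis(direction):
    """Build the second and third basis vectors of a ray-coordinate system
    whose first basis vector is aligned with the ray direction."""
    d = np.asarray(direction, dtype=float)
    # Use the largest-magnitude component as the "major" axis so neither
    # constructed vector collapses to zero.
    k = int(np.argmax(np.abs(d)))          # index of the major axis
    i, j = (k + 1) % 3, (k + 2) % 3        # the other two axes
    p, q = np.zeros(3), np.zeros(3)
    # p has a zero in component j, q has a zero in component i;
    # both are orthogonal to d by construction.
    p[i], p[k] = d[k], -d[i]
    q[j], q[k] = d[k], -d[j]
    return p, q

d = np.array([0.2, -0.9, 0.4])
p, q = ray_basis(d)
print(np.dot(d, p), np.dot(d, q))   # both ~0: orthogonal to the ray direction
```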
APPARATUS AND METHOD FOR INTERLOCKING LESION LOCATIONS BETWEEN A GUIDE IMAGE AND 3D TOMOSYNTHESIS IMAGES COMPOSED OF A PLURALITY OF 3D IMAGE SLICES
Provided are a method and an apparatus for interlocking a lesion location between a 2D medical image and 3D tomosynthesis images including a plurality of 3D image slices.
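The abstract gives no algorithmic detail, so the following is a purely illustrative guess at what "interlocking" could look like: mapping a point marked on the 2D guide image to a slice index plus in-slice coordinates of the tomosynthesis stack, assuming the guide image is a projection of the stack and both share a pixel grid. The function, its parameters, and the numbers are hypothetical.

```python
def guide_to_slices(point_2d, lesion_depth_mm, slice_spacing_mm, num_slices):
    """Return (slice_index, (row, col)) for a lesion marked on the guide image,
    assuming a shared pixel grid and a known lesion depth."""
    slice_index = min(num_slices - 1, round(lesion_depth_mm / slice_spacing_mm))
    return slice_index, point_2d   # in-slice coordinates assumed shared

print(guide_to_slices((120, 85), lesion_depth_mm=22.3,
                      slice_spacing_mm=1.0, num_slices=60))   # -> (22, (120, 85))
```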
INTERSECTION TESTING IN A RAY TRACING SYSTEM
A ray tracing unit and method for processing a ray in a ray tracing system performs intersection testing for the ray by performing one or more intersection testing iterations. Each intersection testing iteration includes: (i) traversing an acceleration structure to identify the nearest intersection of the ray with a primitive that has not been identified as the nearest intersection in any previous intersection testing iterations for the ray; and (ii) if, based on a characteristic of the primitive, a traverse shader is to be executed in respect of the identified intersection: executing the traverse shader in respect of the identified intersection; and if the execution of the traverse shader determines that the ray does not intersect the primitive at the identified intersection, causing another intersection testing iteration to be performed. When the intersection testing for the ray is complete, an output shader is executed to process a result of the intersection testing for the ray.
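A schematic, self-contained sketch of the per-ray loop described above. The scene here is just a list of primitive records, and the traversal and shader stages are simple stand-ins; a real ray tracing system traverses a hierarchical acceleration structure and runs compiled shader programs.

```python
def traverse(scene, exclude):
    """Stand-in for acceleration-structure traversal: return the nearest
    primitive not already identified in a previous iteration."""
    candidates = [p for p in scene if id(p) not in exclude]
    return min(candidates, key=lambda p: p["t"]) if candidates else None

def traverse_shader(hit):
    """Stand-in traverse shader: accept opaque hits, reject transparent ones."""
    return hit["opaque"]

def process_ray(scene):
    rejected = set()
    hit = None
    while True:
        # (i) Nearest intersection not identified in any previous iteration.
        hit = traverse(scene, rejected)
        if hit is None:
            break                       # ray misses everything that remains
        # (ii) If the primitive's characteristics require it, run a traverse shader.
        if hit["needs_shader"] and not traverse_shader(hit):
            rejected.add(id(hit))       # shader rejected the hit: iterate again
            continue
        break                           # accepted hit: intersection testing done
    # Intersection testing complete: run the output shader on the result.
    return "miss" if hit is None else f"hit at t={hit['t']}"

scene = [
    {"t": 2.0, "needs_shader": True,  "opaque": False},  # rejected by the shader
    {"t": 5.0, "needs_shader": False, "opaque": True},   # accepted
]
print(process_ray(scene))   # -> hit at t=5.0
```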
EFFICIENT CONVOLUTION OPERATIONS
A method of operation of a texturing/shading unit in a GPU pipeline is used for efficient convolution operations. The method uses texture hardware to collectively fetch all the texels required to calculate properties for a group of output pixels without any duplication. The method then bypasses bilinear filter hardware in the texture hardware and passes the fetched and unfiltered texel data from the texture hardware unit to shader hardware in the texturing/shading unit. The shader hardware uses the fetched texel data to perform a plurality of convolution operations to calculate the properties of each of the output pixels.
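A minimal numpy sketch of the data reuse the abstract describes: fetch the union of texels needed by a 2x2 group of output pixels once, then compute a convolution for each output pixel from that shared, unfiltered tile. The 2x2 group size, 3x3 kernel, and texture contents are illustrative assumptions.

```python
import numpy as np

def convolve_group(texture, x0, y0, kernel):
    """Compute a 3x3 convolution for the 2x2 output pixels at (x0..x0+1, y0..y0+1)."""
    k = kernel.shape[0] // 2
    # Single shared fetch: one 4x4 tile covers all four 3x3 footprints.
    tile = texture[y0 - k : y0 + 2 + k, x0 - k : x0 + 2 + k]
    out = np.empty((2, 2))
    for dy in range(2):
        for dx in range(2):
            # Each output pixel reuses the shared tile; no texel is fetched twice.
            window = tile[dy : dy + 2 * k + 1, dx : dx + 2 * k + 1]
            out[dy, dx] = np.sum(window * kernel)
    return out

texture = np.arange(64, dtype=float).reshape(8, 8)
kernel = np.full((3, 3), 1.0 / 9.0)           # simple box blur
print(convolve_group(texture, 3, 3, kernel))  # 2x2 block of convolved outputs
```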
IMAGE PROCESSING DEVICE, IMAGE PROCESSING SYSTEM, IMAGE DISPLAY METHOD, AND IMAGE PROCESSING PROGRAM
An image processing device is an image processing device configured to cause a display to display, as a three-dimensional image, three-dimensional data representing a biological tissue having a longitudinal lumen. The image processing device includes: a control unit configured to calculate centroid positions of a plurality of cross sections in a lateral direction of the lumen of the biological tissue by using the three-dimensional data, set a pair of planes intersecting at a single line passing through the calculated centroid positions as cutting planes, and form, in the three-dimensional data, an opening exposing the lumen of the biological tissue from a region interposed between the cutting planes in the three-dimensional image.
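A minimal, hypothetical sketch of the steps described above: compute the lumen centroid of each lateral cross-section of a binary volume, then carve an opening by removing tissue inside the angular wedge between two cutting half-planes that share the centroid line. The array layout, wedge angles, and synthetic tube volume are illustrative assumptions, and the per-slice tissue centroid is used as an approximation of the lumen centroid.

```python
import numpy as np

def carve_opening(volume, wedge=(np.deg2rad(-30), np.deg2rad(30))):
    """volume: boolean array (slices, y, x), True where tissue is present."""
    opened = volume.copy()
    for z in range(volume.shape[0]):
        ys, xs = np.nonzero(volume[z])
        if len(xs) == 0:
            continue
        # Centroid of this cross-section (approximating the lumen centroid).
        cy, cx = ys.mean(), xs.mean()
        # Angle of every voxel in the slice about the centroid line.
        yy, xx = np.mgrid[0:volume.shape[1], 0:volume.shape[2]]
        ang = np.arctan2(yy - cy, xx - cx)
        # Remove voxels between the two cutting planes to expose the lumen.
        opened[z] &= ~((ang >= wedge[0]) & (ang <= wedge[1]))
    return opened

# Toy volume: a thick-walled tube (biological tissue with a longitudinal lumen).
zz, yy, xx = np.mgrid[0:4, 0:32, 0:32]
r = np.hypot(yy - 16, xx - 16)
tube = (r > 6) & (r < 12)
print(tube.sum(), carve_opening(tube).sum())   # fewer voxels after the opening
```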