G06T15/60

Reality vs virtual reality racing

A method for displaying a virtual vehicle includes: calculating a virtual world comprising the virtual vehicle and a representation of a physical object at a virtual position; calculating a virtual position of a point of view within the virtual world based on a position of the point of view at a racecourse; and calculating a portion of the virtual vehicle within the virtual world that is visible from the virtual position of the point of view, wherein the portion of the virtual vehicle visible from the virtual position of the point of view comprises a portion of the virtual vehicle that is unobscured, from the virtual position of the point of view, by the representation of the physical object at the virtual position of the physical object.
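The visibility step above can be sketched as an occlusion test: a point on the virtual vehicle is visible if the segment from the virtual viewpoint to that point does not pass through the volume representing the physical object. The axis-aligned box and all names below are illustrative assumptions, not details from the abstract.

```python
def segment_hits_aabb(p0, p1, box_min, box_max):
    """Slab test: does the segment p0 -> p1 intersect the axis-aligned box?"""
    t_min, t_max = 0.0, 1.0
    for a in range(3):
        d = p1[a] - p0[a]
        if abs(d) < 1e-12:
            # Segment parallel to this slab: must already lie inside it.
            if p0[a] < box_min[a] or p0[a] > box_max[a]:
                return False
        else:
            t0 = (box_min[a] - p0[a]) / d
            t1 = (box_max[a] - p0[a]) / d
            if t0 > t1:
                t0, t1 = t1, t0
            t_min, t_max = max(t_min, t0), min(t_max, t1)
            if t_min > t_max:
                return False
    return True

def visible_points(viewpoint, vehicle_points, box_min, box_max):
    """Return the vehicle points unobscured by the object's bounding box."""
    return [p for p in vehicle_points
            if not segment_hits_aabb(viewpoint, p, box_min, box_max)]
```

In a full renderer this per-point test would be replaced by depth buffering or ray casting against the object's actual geometry; the sketch only shows the unobscured-portion idea.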

Shadow neutral 3-D garment rendering

A tool allows a user to create new designs for apparel and preview these designs in three dimensions before manufacture. Software and lasers are used in finishing apparel to produce a desired wear pattern or other design. Based on a laser input file with a pattern, a laser will burn the pattern onto apparel. With the tool, the user will be able to create, make changes, and view images of a design, in real time, before burning by a laser. Input to the tool includes fabric template images, laser input files, and damage input. The tool allows adding of tinting and adjusting of intensity and bright point. The user can also move, rotate, scale, and warp the image input.
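The intensity and bright-point adjustment described above can be sketched on a grayscale laser-pattern image stored as a 2D grid of values in [0, 1]. Here "bright point" is interpreted, as an assumption, as the input value that maps to full brightness; the real tool's semantics may differ.

```python
def adjust(pattern, intensity=1.0, bright_point=1.0):
    """Scale pixel values by intensity, normalize so bright_point maps to 1.0,
    and clamp the result to the valid [0, 1] range."""
    return [[min(1.0, v * intensity / bright_point) for v in row]
            for row in pattern]
```

A preview loop in the tool would re-run such an adjustment on each slider change, so the user sees the pattern before it is written to the laser input file.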

Shadow Neutral 3-D Visualization of Garment

A tool allows a user to create new designs for apparel and preview these designs in three dimensions before manufacture. Software and lasers are used in finishing apparel to produce a desired wear pattern or other design. Based on a laser input file with a pattern, a laser will burn the pattern onto apparel. With the tool, the user will be able to create, make changes, and view images of a design, in real time, before burning by a laser. Input to the tool includes fabric template images, laser input files, and damage input. The tool allows adding of tinting and adjusting of intensity and bright point. The user can also move, rotate, scale, and warp the image input.

Efficiently determining an absorption coefficient of a virtual volume in 3D computer graphics
11481959 · 2022-10-25

Disclosed is a method to derive the absorption coefficient, transparency, and/or the scattering coefficient from the user-specified parameters including roughness, phase function, index of refraction (IOR), and color by performing the simulation once, and storing the results of the simulation in an easy-to-retrieve representation, such as a lookup table or an analytic function. To create the analytic function, one or more analytic functions can be fitted to the results of the simulation for the multiple parameters including roughness, phase function, IOR, and color. The lookup table can be combined with the analytic representation. For example, the lookup table can be used to represent the color, roughness, and phase function, while the IOR can be represented by an analytic function. For example, when the IOR is above 2, the lookup table becomes three-dimensional and the IOR is calculated using the analytic function.
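The hybrid scheme above can be sketched as follows: an expensive simulation runs once per grid point to fill a lookup table over one parameter (here roughness), while another parameter (here IOR) is handled by an analytic factor. The `simulate` stand-in and the analytic form are placeholders, not the patented simulation or fit.

```python
ROUGHNESS_GRID = [0.0, 0.25, 0.5, 0.75, 1.0]

def simulate(roughness):
    """Stand-in for the one-time, expensive light-transport simulation."""
    return 1.0 - 0.5 * roughness

# One-time precomputation: run the simulation per grid point, store results.
LUT = {r: simulate(r) for r in ROUGHNESS_GRID}

def absorption_coefficient(roughness, ior):
    """Combine the table part (roughness) with an analytic part (IOR)."""
    # Table part: nearest-neighbour lookup on the roughness grid.
    i = min(range(len(ROUGHNESS_GRID)),
            key=lambda k: abs(ROUGHNESS_GRID[k] - roughness))
    base = LUT[ROUGHNESS_GRID[i]]
    # Analytic part: a placeholder monotone correction in IOR.
    return base * (ior - 1.0)
```

At render time only the cheap lookup and the analytic evaluation run; the simulation itself is never repeated, which is the efficiency claim of the abstract.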

METHOD FOR REMOVING OBJECTS FROM TEXTURE

A method for creating a texture from input images, thereby removing representations of objects from the texture, the method comprising performing semantic segmentation in a plurality of digital input images with a plurality of semantic classes, at least one of the semantic classes relating to a target object class; identifying, in at least a first input image, one or more possible instances of representations of objects belonging to the target object class, each possible instance being constituted by a plurality of contiguous instance pixels of the image pixels; determining whether the instance pixels are target pixels, target pixels being pixels constituting a representation of an object belonging to the target object class; and replacing target pixels in the texture with replacement pixels derived from one or more of the plurality of digital input images.
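The replacement step can be sketched, under simplifying assumptions, as: given per-pixel semantic labels for each aligned input image, a target-class pixel in the first image is filled from the first other image whose corresponding pixel is not in the target class. Real systems would also register the images and choose replacement pixels more carefully; the labels and class name here are toy values.

```python
TARGET = "car"  # illustrative target object class

def build_texture(images, labels):
    """images/labels: lists of equally sized 2D grids (lists of lists).
    Returns the first image with target-class pixels replaced."""
    h, w = len(images[0]), len(images[0][0])
    texture = [row[:] for row in images[0]]
    for y in range(h):
        for x in range(w):
            if labels[0][y][x] == TARGET:          # pixel shows a target object
                for img, lab in zip(images[1:], labels[1:]):
                    if lab[y][x] != TARGET:        # usable replacement pixel
                        texture[y][x] = img[y][x]
                        break
    return texture
```

The semantic segmentation producing `labels` would come from a trained model in practice; only the pixel-substitution logic is shown here.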

System and method for localization of fluorescent targets in deep tissue for guiding surgery

A method for identifying a source of fluorescence is disclosed which includes shining light on a subject at a first wavelength, causing emission of light at a second wavelength from the source of fluorescence, filtering out light at the first wavelength, capturing at least one two-dimensional (2D) image of the subject having a plurality of pixels at the second wavelength, establishing information about an approximate location of the source of fluorescence within a tissue of the subject, selectively generating a 3D geometric model adapted to provide a model representation of the at least one captured 2D image, comparing the modeled at least one 2D image to the captured at least one 2D image and iteratively adjusting the model to minimize the difference, and outputting the location and geometric configuration of the source of fluorescence within the tissue within the region of interest.
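The compare-and-adjust loop above can be sketched as a forward model that renders a predicted 2D profile from a candidate source depth, with the depth iteratively adjusted to minimize the squared difference against the captured data. The Gaussian forward model and golden-section search below are toy stand-ins, not the patent's tissue-optics model or optimizer.

```python
import math

def forward_model(depth, xs):
    """Toy model: deeper sources produce broader, dimmer intensity profiles."""
    return [math.exp(-x * x / (2.0 * depth * depth)) / depth for x in xs]

def fit_depth(captured, xs, lo=0.1, hi=5.0, iters=60):
    """Golden-section search on the squared-error objective over depth."""
    phi = (math.sqrt(5.0) - 1.0) / 2.0

    def err(d):
        pred = forward_model(d, xs)
        return sum((p - c) ** 2 for p, c in zip(pred, captured))

    a, b = lo, hi
    for _ in range(iters):
        c, d = b - phi * (b - a), a + phi * (b - a)
        if err(c) < err(d):
            b = d        # minimum lies in [a, d]
        else:
            a = c        # minimum lies in [c, b]
    return (a + b) / 2.0
```

A full implementation would adjust position and shape as well as depth, and the forward model would account for scattering and absorption in tissue; the loop structure, however, mirrors the iterative minimization the abstract describes.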
