Patent classifications
G06T2210/36
VOXELIZATION OF A 3D STRUCTURAL MEDICAL IMAGE OF A HUMAN'S BRAIN
A computer-implemented method for voxelizing a 3D structural medical image of a human's brain. The method includes obtaining a 3D structural medical image of the human's brain, including a reference frame; generating a voxelized 3D structural medical image; obtaining parameters of at least one EEG electrode sensor, namely, for each EEG electrode sensor, a localization in the voxelized 3D structural medical image's reference frame and a sensor detection distance; obtaining a regular 3D grid of voxels; and, for each voxel of the 3D grid, iteratively subdividing the voxel while the distance between the voxel and the localization of any electrode sensor is smaller than or equal to the sensor detection distance and while the size of the voxel is greater than a predetermined length, each subdivided voxel joining a finite number of voxels of the voxelized 3D structural medical image.
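The iterative subdivision described above resembles an octree-style refinement around the electrode locations. A minimal sketch in Python, assuming the voxel-to-sensor distance is measured from the voxel center (the abstract does not specify the distance metric, so that choice is an assumption):

```python
import math
from dataclasses import dataclass

@dataclass
class Voxel:
    center: tuple   # (x, y, z) in the image's reference frame
    size: float     # edge length

def subdivide(voxel, sensors, detection_distance, min_size):
    """Recursively split a voxel into 8 children while it lies within the
    detection distance of any EEG electrode sensor and its edge length is
    still greater than the predetermined minimum length."""
    near = any(math.dist(voxel.center, s) <= detection_distance for s in sensors)
    if not near or voxel.size <= min_size:
        return [voxel]                      # leaf voxel, no further split
    half, quarter = voxel.size / 2, voxel.size / 4
    cx, cy, cz = voxel.center
    leaves = []
    for dx in (-quarter, quarter):
        for dy in (-quarter, quarter):
            for dz in (-quarter, quarter):
                child = Voxel((cx + dx, cy + dy, cz + dz), half)
                leaves.extend(subdivide(child, sensors, detection_distance, min_size))
    return leaves
```

A voxel far from every sensor is returned unsplit, so refinement is concentrated only where electrode measurements can resolve detail.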
Systems and methods for supporting multi-language display view capabilities in a process control plant
Techniques for configuring and presenting multiple languages at a user interface executing in an operating environment of a process plant include configuring, in a configuration environment, a multi-language interface object to indicate a plurality of languages that may be presented at the user interface. The multi-language interface object includes a parameter whose value is changeable, within the operating environment, to indicate a desired language that is to be presented on the user interface. A configuration of the graphical display view that references the configured multi-language interface object is downloaded into the operating environment. Thus, during runtime, changes in the language utilized at the user interface are implemented without any additional downloads from or communications with the configuration environment. Independently selectable user controls may be provided to independently control the language utilized by fixtures of the user interface and the language in which graphical display view content is presented.
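The core idea — a configured set of languages plus one runtime-changeable parameter selecting among them — can be sketched as follows. Class and method names here are illustrative, not taken from the patent:

```python
class MultiLanguageInterfaceObject:
    """Sketch of a multi-language interface object: configured once with a
    set of languages, then switched at runtime by changing a parameter
    value, with no further download from the configuration environment."""

    def __init__(self, translations):
        # translations: {language: {string_id: localized text}}
        self._translations = translations
        self.language = next(iter(translations))  # runtime-changeable parameter

    def set_language(self, language):
        if language not in self._translations:
            raise ValueError(f"language {language!r} was not configured")
        self.language = language

    def text(self, string_id):
        return self._translations[self.language].get(string_id, string_id)
```

Because every configured language is downloaded with the display view, switching languages during runtime is a local parameter change rather than a round trip to the configuration environment.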
DISPLACEMENT-CENTRIC ACCELERATION FOR RAY TRACING
Aspects and features of the present disclosure provide a direct ray tracing operator with a low memory footprint for surfaces enriched with displacement maps. A graphics editing application can be used to manipulate displayed representations of a 3D object that include surfaces with displacement textures. The application creates an independent map of a displaced surface. The application ray-traces bounding volumes on the fly and uses the intersection of a query ray with a bounding volume to produce rendering information for a displaced surface. The rendering information can be used to generate displaced surfaces for various base surfaces without significant re-computation so that updated images can be rendered quickly, in real time or near real time.
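The "ray-traces bounding volumes on the fly" step rests on intersecting a query ray with an axis-aligned bounding volume. A standard slab-method test, shown here as a self-contained sketch (the patent's displacement-map handling built on top of it is not reproduced):

```python
def ray_aabb_intersect(origin, direction, box_min, box_max):
    """Slab-method intersection of a ray with an axis-aligned bounding
    volume; returns the entry parameter t (0 if the origin is inside),
    or None on a miss."""
    t_near, t_far = -float("inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:
            if o < lo or o > hi:
                return None            # ray parallel to and outside this slab
            continue
        t0, t1 = (lo - o) / d, (hi - o) / d
        if t0 > t1:
            t0, t1 = t1, t0
        t_near, t_far = max(t_near, t0), min(t_far, t1)
        if t_near > t_far or t_far < 0:
            return None                # slabs do not overlap, or box behind ray
    return max(t_near, 0.0)
```

The entry parameter returned here is what a displacement-aware tracer would use to begin marching or refining within the bounding volume.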
Using social connections to define graphical representations of users in an artificial reality setting
Systems, methods, and non-transitory computer-readable media are disclosed for variably rendering graphical representations of co-users in VR environments based on social graph information between the co-users and a VR user. For example, the disclosed systems can identify a co-user within a VR environment. Furthermore, the disclosed systems can determine a social relevancy (e.g., a social relationship type) between the co-user and a VR user within the VR environment based on social graph information. Then, the disclosed systems can select and/or determine a graphical representation and/or other capabilities of the co-user within the VR environment based on the social relevancy. Additionally, the disclosed systems can display the co-user within the VR environment using the determined graphical representation type (e.g., from the perspective of the VR user).
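The selection step — mapping a social relationship type to a graphical representation — can be sketched with a simple lookup. The relationship tiers and representation names below are hypothetical examples, not taken from the disclosure:

```python
# Hypothetical relevancy tiers and representation choices, for illustration.
REPRESENTATION_BY_RELATIONSHIP = {
    "close_friend": "full_custom_avatar",
    "friend": "custom_avatar",
    "acquaintance": "generic_avatar",
    "stranger": "anonymous_silhouette",
}

def select_representation(social_graph, vr_user, co_user):
    """Pick a graphical representation for a co-user based on the
    relationship type recorded in the social graph (sketch); co-users
    with no recorded relationship are treated as strangers."""
    relationship = social_graph.get((vr_user, co_user), "stranger")
    return REPRESENTATION_BY_RELATIONSHIP[relationship]
```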
Trimming search space for nearest neighbor determinations in point cloud compression
A search space for performing nearest neighbor searches for encoding point cloud data may be trimmed. Ranges of a space filling curve may be used to identify search space to exclude or reuse, instead of generating nearest neighbor search results for at least some of the points of a point cloud located within some of the ranges of the space filling curve. Additionally, neighboring voxels may be searched to identify any neighboring points missed during the trimmed search based on the ranges of the space filling curve.
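A common space-filling curve for point clouds is the Morton (Z-order) curve, where contiguous code ranges correspond to spatial regions that a trimmed search can skip or reuse wholesale. A minimal sketch of the index and a range filter (the patent's actual exclusion/reuse logic is more involved):

```python
def morton3(x, y, z, bits=10):
    """Interleave the bits of integer voxel coordinates into a Morton
    (Z-order) code, a space-filling-curve index for point cloud points."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i)
        code |= ((y >> i) & 1) << (3 * i + 1)
        code |= ((z >> i) & 1) << (3 * i + 2)
    return code

def points_in_range(points, lo, hi, bits=10):
    """Points whose Morton codes fall in [lo, hi); a trimmed search can
    handle such a range as a unit instead of visiting each point."""
    return [p for p in points if lo <= morton3(*p, bits=bits) < hi]
```

Because nearby codes tend to be spatially close (but not always — hence the patent's follow-up search of neighboring voxels for missed neighbors), whole ranges can stand in for explicit per-point searches.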
Systems and methods for supplementing digital media with three-dimensional (3D) models
High-fidelity three-dimensional (3D) models and other high-fidelity digital media that depict objects with a high-level of detail may be computationally demanding to display on some devices. According to some embodiments of the present disclosure, digital media may be supplemented with one or more 3D models to improve the overall level of detail provided by the digital media without excessively increasing computational requirements. An example computer-implemented method includes instructing a user device to display digital media depicting an object, receiving an indication selecting a region of the depicted object, and instructing the user device to display a 3D model corresponding to the selected region of the depicted object, where the 3D model is different from the digital media.
Reducing volumetric data while retaining visual fidelity
Managing volumetric data, including: defining a view volume in a volume of space, wherein the volumetric data has multiple points in the volume of space and at least one point is in the view volume and at least one point is not in the view volume; defining a grid in the volume of space, the grid having multiple cells and dividing the volume of space into respective cells, wherein each point has a corresponding cell in the grid, and each cell in the grid has zero or more corresponding points; and reducing the number of points for a cell in the grid where that cell is outside the view volume.
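The steps above can be sketched directly: assign each point a grid cell, keep all points whose cells touch the view volume, and reduce the rest. Keeping one representative point per outside cell is one possible reduction policy, assumed here for illustration:

```python
def reduce_points(points, cell_size, view_min, view_max):
    """Thin volumetric data outside an axis-aligned view volume: points
    in cells overlapping the view volume are kept as-is; cells entirely
    outside it are reduced to a single representative point (a sketch)."""

    def cell_of(p):
        return tuple(int(c // cell_size) for c in p)

    def cell_outside(cell):
        # A cell is outside if its box does not overlap the view volume.
        for i in range(3):
            lo, hi = cell[i] * cell_size, (cell[i] + 1) * cell_size
            if hi <= view_min[i] or lo >= view_max[i]:
                return True
        return False

    kept, seen_outside = [], set()
    for p in points:
        c = cell_of(p)
        if not cell_outside(c):
            kept.append(p)              # inside the view volume: keep all
        elif c not in seen_outside:
            seen_outside.add(c)         # outside: keep one point per cell
            kept.append(p)
    return kept
```

Reduction happens only outside the view volume, so visual fidelity inside the viewer's region of interest is unchanged.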
Generating modified digital images utilizing a global and spatial autoencoder
The present disclosure relates to systems, methods, and non-transitory computer readable media for generating a modified digital image from extracted spatial and global codes. For example, the disclosed systems can utilize a global and spatial autoencoder to extract spatial codes and global codes from digital images. The disclosed systems can further utilize the global and spatial autoencoder to generate a modified digital image by combining extracted spatial and global codes in various ways for various applications such as style swapping, style blending, and attribute editing.
LEVEL OF DETAIL DETERMINATION
Examples described herein relate to an apparatus that includes at least one memory and at least one processor. In some examples, the at least one processor is to: determine at least one level of detail (LOD) associated with at least one mipmap, stored in the at least one memory, based on values in an affine map, wherein the affine map comprises one or more values in a matrix for conversion of an object from screen space to texture space and provide a texture to apply to the object in screen space based on the at least one mipmap. In some examples, to determine at least one LOD associated with at least one mipmap, stored in the at least one memory, based on values in an affine map, the at least one processor is to: determine a length of a minor axis of an ellipse in texture space and based on the length of the minor axis of the ellipse, determine at least one LOD.
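The minor-axis computation can be sketched numerically: the 2×2 screen-to-texture mapping carries the unit pixel circle to an ellipse in texture space whose semi-axes are the matrix's singular values, and the LOD is taken as log2 of the minor-axis length (clamping at 0 is an assumption here; the claim does not state a clamp):

```python
import math

def lod_from_affine(J):
    """Given a 2x2 screen-to-texture matrix J (as [[a, b], [c, d]]),
    compute the minor axis of the texture-space ellipse (the smaller
    singular value of J) and derive a LOD as log2 of its length."""
    (a, b), (c, d) = J
    # Singular values of J are the square roots of the eigenvalues of
    # J J^T; for 2x2 this has the closed form sqrt(e +/- f) below.
    e = (a * a + b * b + c * c + d * d) / 2
    f = math.hypot((a * a + b * b - c * c - d * d) / 2, a * c + b * d)
    minor = math.sqrt(max(e - f, 0.0))
    return max(math.log2(minor), 0.0) if minor > 0 else 0.0
```

For a mapping that scales screen x by 4 and y by 2 in texture space, the minor axis has length 2, giving LOD 1 (mipmap level 1).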
ARTIFICIAL INTELLIGENCE (AI) LIFELIKE 3D CONVERSATIONAL CHATBOT
A 3D conversational chatbot is disclosed. The conversational chatbot is embodied in an avatar to provide a human-like experience for end-users. The chatbot is an artificial intelligence-based chatbot. The chatbot is configured with the knowledge of the chatbot owner. The knowledge may depend on the owner, such as the products and/or services provided by the owner. For example, the chatbot is customized with AI for the specific needs of its owner. The avatar communicates with the user, such as a customer, to answer questions with life-like speech and facial movement.