G06T2219/2012

AUTOMATED AND ASSISTED IDENTIFICATION OF STROKE USING FEATURE-BASED BRAIN IMAGING
20230215153 · 2023-07-06

Provided herein are systems and methods for automated identification of volumes of interest in volumetric brain images using artificial intelligence (AI) enhanced imaging to diagnose and treat acute stroke. The methods can include receiving image data of a brain having header data and voxel values that represent an interruption in the blood supply to the brain when imaged, extracting the header data from the image data, populating an array of cells with the voxel values, applying a segmenting analysis to the array to generate a segmented array, applying a morphological neighborhood analysis to the segmented array to generate a features relationship array, where the features relationship array includes features of interest in the brain indicative of stroke, identifying three-dimensional (3D) connected volumes of interest in the features relationship array, and generating output, for display at a user device, indicating the identified 3D volumes of interest.
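As a rough illustration of the claimed pipeline, the sketch below uses intensity thresholding as the segmenting analysis, binary closing as the morphological neighborhood analysis, and 26-connected component labeling to identify 3D connected volumes of interest. The threshold, structuring element, and minimum component size are illustrative assumptions, not values from the patent.

```python
import numpy as np
from scipy import ndimage

def find_volumes_of_interest(voxels, threshold=0.5, min_size=10):
    """Segment a voxel array and return labeled 3D connected volumes.

    `threshold` and `min_size` are hypothetical stand-ins for the
    patent's unspecified segmenting and size criteria.
    """
    # Segmenting analysis: binarize voxel intensities.
    segmented = voxels > threshold
    # Morphological neighborhood analysis: close small gaps between features.
    features = ndimage.binary_closing(segmented)
    # Identify 3D connected volumes of interest (26-connectivity).
    labeled, n = ndimage.label(features, structure=np.ones((3, 3, 3)))
    # Keep only components large enough to be of interest.
    sizes = ndimage.sum(features, labeled, range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_size]
    return labeled, keep

# Toy volume with one bright blob standing in for a stroke lesion.
vol = np.zeros((20, 20, 20))
vol[5:10, 5:10, 5:10] = 1.0
labels, kept = find_volumes_of_interest(vol)
```

The labeled array and the list of kept component IDs together correspond to the "identified 3D volumes of interest" that would be rendered for display at a user device.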

Three-dimensional model distribution method, three-dimensional model receiving method, three-dimensional model distribution device, and three-dimensional model receiving device

A three-dimensional model distribution method includes generating a depth image from a three-dimensional model; and distributing the depth image and information for restoring the three-dimensional model from the depth image.
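A minimal sketch of this scheme, assuming a pinhole camera model: the depth image is rendered by projecting 3D points, and the shared intrinsics (`FX`, `FY`, `CX`, `CY`, all hypothetical values) stand in for the "information for restoring the three-dimensional model" that is distributed alongside it.

```python
import numpy as np

# Hypothetical pinhole intrinsics: the "information for restoring"
# that would be distributed alongside the depth image.
FX = FY = 100.0
CX = CY = 32.0

def points_to_depth(points, h=64, w=64):
    """Project 3D points (N, 3) into a depth image, keeping the nearest hit."""
    depth = np.full((h, w), np.inf)
    for x, y, z in points:
        u = int(round(FX * x / z + CX))
        v = int(round(FY * y / z + CY))
        if 0 <= v < h and 0 <= u < w:
            depth[v, u] = min(depth[v, u], z)
    return depth

def depth_to_points(depth):
    """Restore 3D points from the depth image using the shared intrinsics."""
    pts = []
    for v, u in zip(*np.where(np.isfinite(depth))):
        z = depth[v, u]
        pts.append(((u - CX) * z / FX, (v - CY) * z / FY, z))
    return np.array(pts)

pts = np.array([[0.1, 0.2, 2.0], [-0.32, 0.0, 4.0]])
restored = depth_to_points(points_to_depth(pts))
```

On the receiving side, the model is recovered up to pixel quantization; real systems would also distribute camera extrinsics and handle occlusion, which this sketch omits.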

Methods and apparatus to generate photo-realistic three-dimensional models of a photographed environment

Methods and apparatus to generate photo-realistic three-dimensional models of a photographed environment are disclosed. An apparatus includes an object position calculator to determine a three-dimensional (3D) position of an object detected within a first image of an environment and within a second image of the environment. The apparatus further includes a 3D model generator to generate a 3D model of the environment based on the first image and the second image. The apparatus also includes a model integrity analyzer to detect a difference between the 3D position of the object and the 3D model. The 3D model generator automatically modifies the 3D model based on the difference in response to the difference satisfying a confidence threshold.
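A toy sketch of the model integrity check described above: the "difference" is taken as the distance from the detected object's 3D position to the nearest model point, and the automatic modification is a naive append of the observed position. Both choices, and the threshold value, are illustrative assumptions.

```python
import numpy as np

def check_model_integrity(model_points, object_pos, threshold=0.1):
    """Return (difference, needs_update): distance from the detected object
    position to the nearest model point, and whether that difference
    satisfies the confidence threshold for an automatic model update."""
    dists = np.linalg.norm(model_points - object_pos, axis=1)
    difference = dists.min()
    return difference, difference > threshold

def update_model(model_points, object_pos, needs_update):
    """Naive 'automatic modification': fold the observed position into the
    model so the model accounts for the detected object."""
    if needs_update:
        return np.vstack([model_points, object_pos])
    return model_points

model = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
obj = np.array([0.5, 0.5, 0.0])
diff, flag = check_model_integrity(model, obj)
model2 = update_model(model, obj, flag)
```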

Creating 3D Objects and Digital 3D Objects
20230213913 · 2023-07-06

The disclosure includes an object comprising a front lens layer made from at least one of transparent material or translucent material, having a lens with curved surfaces that provide refractive behaviors and a backing layer embedded with patterns. The disclosure also includes a method for designing an object with lenticular effects. The disclosure further includes a method for designing a textile for 3D printing. The disclosure also includes a candy or lollipop comprising a front layer comprising a plurality of at least one of elongated or standalone transparent geometries with defined heights, curvatures and shapes that provide refractive behaviors and a backing layer with at least one of colors or patterns. The disclosure also includes barrier-based object designs that create optical illusions.

METHOD AND APPARATUS FOR PROVIDING USER-INTERACTIVE CUSTOMIZED INTERACTION FOR XR REAL OBJECT TRANSFORMATION
20230215121 · 2023-07-06

A method of providing a user-interactive customized interaction for a transformation of an extended reality (XR) real object includes segmenting a target object from an input image received through a camera, extracting a similar target object having a highest similarity to the target object from three-dimensional (3D) model data that has been previously learned, extracting the texture of the target object through the camera and mapping the texture to the similar target object, transforming a shape of the similar target object by incorporating intention information of a user based on a user interaction, and rendering and outputting the transformed similar target object.
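The "highest similarity" retrieval step could be sketched as a nearest-neighbor search over feature descriptors of the learned 3D models; the cosine-similarity metric, the library entries, and the feature vectors below are all hypothetical stand-ins.

```python
import numpy as np

def most_similar(target_feat, library):
    """Return the library key whose feature vector has the highest cosine
    similarity to the segmented target object's features."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(library, key=lambda k: cos(target_feat, library[k]))

# Hypothetical descriptors standing in for previously learned 3D model data.
library = {
    "chair": np.array([1.0, 0.0, 0.2]),
    "table": np.array([0.1, 1.0, 0.0]),
    "lamp":  np.array([0.0, 0.2, 1.0]),
}
target = np.array([0.9, 0.1, 0.1])  # features of the segmented target object
best = most_similar(target, library)
```

The retrieved model would then receive the camera-extracted texture and any user-driven shape transformation before rendering.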

Display apparatus, image processing apparatus, and control method

A display apparatus includes a display screen, and a controller that causes the display screen to display a composite image in which a first image acquired by imaging a space by a camera and a second image representing at least one type of aerosol existing in the space are combined. The position of the at least one type of aerosol as seen in a depth direction in the first image is reflected in the second image.

Dynamic, interactive signaling of safety-related conditions in a monitored environment

Systems and methods for determining safe and unsafe zones in a workspace—where safe actions are calculated in real time based on all relevant objects (e.g., some observed by sensors and others computationally generated based on analysis of the sensed workspace) and on the current state of the machinery (e.g., a robot) in the workspace—may utilize a variety of workspace-monitoring approaches as well as dynamic modeling of the robot geometry. The future trajectory of the robot(s) and/or the human(s) may be forecast using, e.g., a model of human movement and other forms of control. Modeling and forecasting of the robot may, in some embodiments, make use of data provided by the robot controller that may or may not include safety guarantees.
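A greatly simplified sketch of real-time zone classification: the safe action is chosen from the minimum separation between the forecast robot trajectory and the human's position. The distance thresholds and three-state output are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

def min_separation(robot_traj, human_pos):
    """Minimum distance between any forecast robot pose and the human."""
    return float(np.linalg.norm(robot_traj - human_pos, axis=1).min())

def zone_state(robot_traj, human_pos, unsafe=0.5, warn=1.5):
    """Classify the workspace from forecast separation distance.
    Thresholds (in metres) are hypothetical."""
    d = min_separation(robot_traj, human_pos)
    if d < unsafe:
        return "unsafe"   # stop the machinery
    if d < warn:
        return "warning"  # slow down and signal the condition
    return "safe"

# Forecast robot trajectory over the next few control cycles (x, y in metres).
traj = np.array([[0.0, 0.0], [0.2, 0.0], [0.4, 0.0]])
state = zone_state(traj, np.array([1.0, 0.0]))
```

A real implementation would forecast human motion as well, reason over full robot geometry rather than points, and attach safety guarantees to the controller data, as the abstract notes.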

Reducing volumetric data while retaining visual fidelity

Managing volumetric data, including: defining a view volume in a volume of space, wherein the volumetric data has multiple points in the volume of space and at least one point is in the view volume and at least one point is not in the view volume; defining a grid in the volume of space, the grid having multiple cells and dividing the volume of space into respective cells, wherein each point has a corresponding cell in the grid, and each cell in the grid has zero or more corresponding points; and reducing the number of points for a cell in the grid where that cell is outside the view volume.
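The claimed reduction can be sketched directly: points inside the view volume are kept untouched, while cells outside it are decimated to a point budget. The axis-aligned view volume, uniform grid, and `keep_per_cell` budget are illustrative choices.

```python
import numpy as np

def reduce_points(points, view_min, view_max, cell=1.0, keep_per_cell=1):
    """Keep all points inside the axis-aligned view volume; outside it,
    keep at most `keep_per_cell` points per grid cell."""
    inside = np.all((points >= view_min) & (points <= view_max), axis=1)
    kept = [p for p in points[inside]]
    seen = {}
    for p in points[~inside]:
        key = tuple((p // cell).astype(int))  # the point's cell in the grid
        if seen.get(key, 0) < keep_per_cell:
            kept.append(p)
            seen[key] = seen.get(key, 0) + 1
    return np.array(kept)

pts = np.array([
    [0.5, 0.5, 0.5],   # inside the view volume: always kept
    [5.2, 5.3, 5.1],   # outside, cell (5, 5, 5): kept
    [5.6, 5.7, 5.9],   # outside, same cell: dropped
    [9.1, 0.0, 0.0],   # outside, cell (9, 0, 0): kept
])
out = reduce_points(pts, np.array([0, 0, 0]), np.array([1, 1, 1]))
```

Points far from the viewer thus shrink to one representative per cell, reducing data volume while the visible region retains full fidelity.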

COMPUTATIONAL PHOTOGRAPHY FEATURES WITH DEPTH
20220414834 · 2022-12-29

A method including receiving an image as a portion of a real-world space, placing an anchor on the image, determining a position of the anchor, determining a depth associated with the position of the anchor, applying an image editing algorithm to the image based on the depth, and rendering the edited image.
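As a toy stand-in for the depth-driven editing algorithm, the sketch below dims pixels that lie behind the anchor's depth, a simplified depth-of-interest effect; the falloff model and values are illustrative assumptions.

```python
import numpy as np

def apply_depth_edit(image, depth, anchor, falloff=1.0):
    """Dim pixels whose depth is farther than the anchor's depth - a
    hypothetical example of a depth-based image editing algorithm."""
    v, u = anchor
    anchor_depth = depth[v, u]                 # depth at the placed anchor
    behind = np.clip(depth - anchor_depth, 0.0, None)
    weight = 1.0 / (1.0 + falloff * behind)    # 1.0 at or before anchor depth
    return image * weight[..., None]

# Toy 2x2 grey image with a near (1 m) and a far (3 m) region.
img = np.full((2, 2, 3), 0.8)
depth = np.array([[1.0, 1.0], [3.0, 3.0]])
edited = apply_depth_edit(img, depth, anchor=(0, 0))
```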

Dynamically estimating light-source-specific parameters for digital images using a neural network

This disclosure relates to methods, non-transitory computer readable media, and systems that can render a virtual object in a digital image by using a source-specific-lighting-estimation-neural network to generate three-dimensional ("3D") lighting parameters specific to a light source illuminating the digital image. To generate such source-specific lighting parameters, for instance, the disclosed systems utilize a compact source-specific-lighting-estimation-neural network comprising both common network layers and network layers specific to different lighting parameters. In some embodiments, the disclosed systems further train such a source-specific-lighting-estimation-neural network to accurately estimate spatially varying lighting in a digital image based on comparisons of predicted environment maps from a differentiable-projection layer with ground-truth environment maps.