Patent classifications
G06T3/02
METHOD TO PRODUCE 3D MODEL FROM ONE OR SEVERAL IMAGES
The present invention provides a method to produce a 3D model of a person or an object from one or several images. The method uses a neural network trained on pairs of 3D models of human heads and their frontal images; given a new image, the network infers a 3D model.
Image Processing Method and Transmission Electron Microscope
An image processing method includes: acquiring an optical microscope image of a specimen; acquiring a transmission electron microscope image of the specimen after it has been thinned; and superimposing the optical microscope image on the transmission electron microscope image and causing a display section to display the superimposed images. In the superimposing and displaying step, the optical microscope image is distorted so that its field of view matches the field of view of the transmission electron microscope image.
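The field-of-view matching step can be illustrated with a minimal sketch. The patent does not specify the distortion model; the version below assumes a purely uniform scale difference expressed as known physical pixel sizes (`optical_nm_per_px`, `tem_nm_per_px` are hypothetical parameter names), with nearest-neighbour resampling and a simple alpha blend for the superimposed display:

```python
import numpy as np

def match_field_of_view(optical, optical_nm_per_px, tem_nm_per_px, tem_shape):
    """Rescale (distort) the optical image so one pixel covers the same
    physical area as one TEM pixel, sampled onto the TEM frame."""
    scale = optical_nm_per_px / tem_nm_per_px  # optical px -> TEM px
    h, w = tem_shape
    # Nearest-neighbour resampling: map each TEM pixel back to an optical pixel.
    rows = np.clip((np.arange(h) / scale).astype(int), 0, optical.shape[0] - 1)
    cols = np.clip((np.arange(w) / scale).astype(int), 0, optical.shape[1] - 1)
    return optical[np.ix_(rows, cols)]

def superimpose(optical_matched, tem, alpha=0.5):
    """Alpha-blend the distortion-corrected optical image over the TEM image."""
    return alpha * optical_matched + (1 - alpha) * tem
```

A real implementation would use a full 2D warp (e.g. an affine or projective transform estimated from fiducial points) rather than a single scale factor.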
POSITIONAL BIOMARKERS BASED ON BRAIN IMAGING
Methods and systems for skull-based brain imaging. In some examples, a method includes receiving a first brain image of a brain taken at a first time and a second brain image of the brain taken at a second time; characterizing a structural change of brain tissue of the brain between the first time and the second time by: determining a global linear change of brain tissue using the first brain image and the second brain image; and determining a local deformation of brain tissue using the first brain image, the second brain image, and the global linear change of brain tissue; and outputting one or more parameters characterizing the structural change based on the global linear change of brain tissue and the local deformation of brain tissue.
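The two-stage characterization above (a global linear change, then a residual local deformation) can be sketched in a toy form. This is only an illustration under strong assumptions: it treats the brain images as co-registered intensity arrays and models the global linear change as a single least-squares multiplicative factor, which is far simpler than a full linear (affine) registration:

```python
import numpy as np

def global_linear_change(img1, img2):
    """Least-squares global factor s minimizing ||img2 - s * img1||."""
    a = img1.ravel().astype(float)
    b = img2.ravel().astype(float)
    return float(a @ b / (a @ a))

def local_deformation(img1, img2, s):
    """Per-voxel residual change once the global linear change is removed."""
    return img2.astype(float) - s * img1.astype(float)
```

The parameters characterizing the structural change would then be the factor `s` together with summary statistics of the residual map.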
Systems and methods for conducting the launch of an application in a dual-display device
Embodiments are described for handling the launch of applications in a multi-screen device. In embodiments, a first touch sensitive display of a first screen receives input to launch an application. In response, the application is launched. A determination is made as to whether the first touch sensitive display already has windows in its stack. If there are no windows in the stack of the first touch sensitive display, a new window of the application is displayed on the first touch sensitive display. If there are windows in the stack, a determination is made as to whether a second display has windows in its stack. If not, the new window is displayed on the second display. If the second display also has windows in its stack, the new window is displayed on the first touch sensitive display.
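The decision sequence above reduces to a small placement rule. A minimal sketch (function and return labels are illustrative, not from the patent):

```python
def choose_display_for_new_window(first_stack, second_stack):
    """Decide which display receives a newly launched application window.

    Mirrors the described sequence:
    - first display's stack is empty        -> first display
    - first occupied, second stack is empty -> second display
    - both stacks occupied                  -> first display
    """
    if not first_stack:
        return "first"
    if not second_stack:
        return "second"
    return "first"
```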
Point cloud colorization system with real-time 3D visualization
Enabling colorization and color adjustments on 3D point clouds, which are projected onto a 2D view with an equirectangular projection. A user may color regions on the 2D view and preview the changes immediately in a 3D view of the point cloud. Embodiments render the color of each point in the point cloud by testing whether the 2D projection of the point is inside the colored region. Applications may include generation of a color 3D virtual reality environment using point clouds and color-adjusted imagery.
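The per-point test described above (project the point with an equirectangular projection, then check whether it falls inside the painted region) can be sketched as follows. The rectangular region shape is an assumption for illustration; real regions would be arbitrary painted shapes:

```python
import math

def equirectangular_project(x, y, z):
    """Project a 3D point (relative to the scan origin) to 2D (u, v):
    u is azimuth in [-pi, pi], v is elevation in [-pi/2, pi/2]."""
    u = math.atan2(y, x)
    v = math.atan2(z, math.hypot(x, y))
    return u, v

def point_is_colored(point, region):
    """Test whether the 2D projection of a point lies inside an axis-aligned
    colored rectangle (u_min, v_min, u_max, v_max) painted on the 2D view."""
    u, v = equirectangular_project(*point)
    u_min, v_min, u_max, v_max = region
    return u_min <= u <= u_max and v_min <= v <= v_max
```

Running this test per point for each frame is what allows the 3D preview to update immediately as the user paints the 2D view.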
Managing objects in panorama display to navigate spreadsheet
A panorama display application shows objects from a spreadsheet, such as charts, on the primary screen of a mobile device adjoined by left and right virtual screens. The application overlays interaction controls such as sort and filter functions on the object. The application also provides additional interaction controls for the object on the left virtual screen and links to associated objects on the right virtual screen. The application may expose the additional interaction controls and the associated object links by overlaying portions of the virtual screens on the primary screen. The application fluidly shifts content from the virtual screens to the primary screen following a detected user action on the overlaid portions.
Manipulation of 3-D RF imagery and on-wall marking of detected structure
A radio frequency (RF) imaging device comprising a display receives a three-dimensional (3D) image that is a superposition of two or more images having different image types including at least a 3D RF image of a space disposed behind a surface. A plurality of input control devices receive a user input for manipulating the display of the 3D image. Alternatively or additionally, the radio frequency (RF) imaging device may receive a three-dimensional (3D) image that is a weighted combination of a plurality of images including a 3D RF image of a space disposed behind a surface, an infrared (IR) image of the surface, and a visible light image of the surface. A user input may specify changes to the weighted combination. In another embodiment, the RF imaging device may include an output device that produces a physical output indicating a detected type of material of an object in the space.
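The weighted-combination embodiment can be illustrated with a minimal sketch, assuming the RF, IR, and visible-light images are already co-registered arrays of the same shape (the weight renormalization is an illustrative choice, not specified by the patent):

```python
import numpy as np

def combine_images(rf, ir, visible, w_rf, w_ir, w_vis):
    """Weighted combination of co-registered RF, IR, and visible-light images.
    Weights are renormalized so the result stays in the input intensity range."""
    total = w_rf + w_ir + w_vis
    return (w_rf * rf + w_ir * ir + w_vis * visible) / total
```

A user input changing the weights would simply re-invoke this combination with the new values.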
Method for training convolutional neural network to reconstruct an image and system for depth map generation from an image
A method for training a convolutional neural network to reconstruct an image. The method includes forming a common loss function based on the left and right images (I_L, I_R), the reconstructed left and right images (Î_L, Î_R), the disparity maps (d_L, d_R), the reconstructed disparity maps (d̂_L, d̂_R) for the left and right images, and the auxiliary images, and training the neural network based on the formed loss function.
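The abstract does not give the exact form of the common loss; a hedged sketch with hypothetical L1 photometric and disparity-consistency terms (the weights `w_img` and `w_disp` are illustrative) might look like:

```python
import numpy as np

def common_loss(I_L, I_R, I_L_rec, I_R_rec,
                d_L, d_R, d_L_rec, d_R_rec,
                w_img=1.0, w_disp=0.1):
    """L1 photometric term between images and their reconstructions, plus an
    L1 consistency term between disparity maps and their reconstructions."""
    photo = np.abs(I_L - I_L_rec).mean() + np.abs(I_R - I_R_rec).mean()
    disp = np.abs(d_L - d_L_rec).mean() + np.abs(d_R - d_R_rec).mean()
    return w_img * photo + w_disp * disp
```

In a real training loop this scalar would be backpropagated through the network; the auxiliary-image terms mentioned in the abstract would add further components of the same shape.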
Method for designing a three dimensional modeled object in a three dimensional scene by extruding a curve
According to an embodiment of the invention, there is provided a computer-implemented method for designing a three dimensional modeled object in a three dimensional scene, wherein the method comprises the steps of: providing a first curve; duplicating the first curve to obtain a second curve; determining a set of at least one starting point belonging to the first curve; determining a set of at least one target point belonging to the second curve, each target point being associated with at least one starting point; and linking the starting points with their associated target points by using at least one connecting curve.
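The steps above can be sketched for the simple case of a polyline curve, a pure-translation duplication, and straight-segment connecting curves (all simplifying assumptions; the patented method covers general curves and linkings):

```python
def extrude_curve(curve, offset):
    """Duplicate a polyline curve, translate the copy by `offset`, and link
    each starting point to its target point with a straight connecting segment."""
    dx, dy, dz = offset
    duplicate = [(x + dx, y + dy, z + dz) for (x, y, z) in curve]
    # One connecting "curve" (here a segment) per starting/target point pair.
    connections = list(zip(curve, duplicate))
    return duplicate, connections
```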
System and method for holographic image-guided percutaneous endovascular procedures
Holographic image-guidance can be used to track an interventional device during an endovascular percutaneous procedure. The holographic image guidance can be provided by a head-mounted device by transforming tracking data and vasculature image data to a common coordinate system and creating a holographic display relative to a patient's vasculature to track the interventional device during the endovascular percutaneous procedure. The holographic display can also include graphics to provide guidance for the physical interventional device as it travels through the patient's anatomy (e.g., the vasculature).
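The core registration step, transforming tracking data into the common coordinate system shared with the vasculature image data, amounts to applying a rigid transform. A minimal sketch using a 4x4 homogeneous matrix (the matrix itself would come from a calibration/registration procedure not shown here):

```python
import numpy as np

def to_common_frame(points, transform):
    """Apply a 4x4 homogeneous transform mapping tracker coordinates of an
    interventional device into the common (image) coordinate system."""
    pts = np.asarray(points, dtype=float)
    homo = np.hstack([pts, np.ones((pts.shape[0], 1))])  # append w = 1
    return (homo @ transform.T)[:, :3]
```

Once tracking data and vasculature data share one frame, the hologram can be rendered relative to the patient and updated as the device moves.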