Patent classifications
G06T2207/20108
View-independent decoding for omnidirectional video
For omnidirectional video such as 360-degree Virtual Reality (360VR) video, a video system is provided that supports independent decoding of different views of the omnidirectional video. A decoder for such a system can extract a specified part of a bitstream to decode a desired perspective/face/view of an omnidirectional image without decoding the entire image, while suffering minimal or no loss in coding efficiency.
Method and apparatus for user guidance for the choice of a two-dimensional angiographic projection
Systems and methods provide guidance for selecting projection perspectives to use in obtaining complementary combinations of projection images of an object. The systems and methods provide a two-dimensional first image of the object obtained from a first perspective. A map of values associated with different candidate perspectives relative to the first perspective is determined, wherein the value associated with a given candidate perspective is determined from at least one parameter indicative of the degree to which that candidate perspective complements the first perspective and at least one weighting parameter. The map can be displayed or evaluated to select at least one candidate perspective to use in acquiring a complementary combination of projection images.
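The value map described in this abstract could be sketched as follows. The complementarity measure here (|sin| of the angular separation, so that projections 90 degrees apart score highest) and the function names are assumptions for illustration, not the patented parameters:

```python
import math

def complement_degree(first_angle_deg, candidate_angle_deg):
    # Hypothetical complementarity parameter: angiographic projections
    # taken 90 degrees apart complement each other best, so score by
    # |sin| of the angular separation between the two perspectives.
    return abs(math.sin(math.radians(candidate_angle_deg - first_angle_deg)))

def perspective_value_map(first_angle_deg, candidate_angles_deg, weight=1.0):
    # Value for each candidate = weighting parameter * complementarity
    # parameter, producing the map to display or evaluate.
    return {a: weight * complement_degree(first_angle_deg, a)
            for a in candidate_angles_deg}
```

With a first projection at 0 degrees, an orthogonal candidate at 90 degrees scores highest and a duplicate at 0 degrees scores zero.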
TREATMENT PROCEDURE PLANNING SYSTEM AND METHOD
A system and method for planning a surgical procedure, including a treatment zone setting view presenting at least one slice of a 3D reconstruction generated from CT image data including a target. The treatment zone setting view presents a treatment zone marker defining the location and size of a treatment zone and is configured to adjust the treatment zone marker in response to received user input. The system and method further include a volumetric view presenting a 3D volume derived from the 3D reconstruction and a 3D representation of the treatment zone marker relative to structures depicted in the 3D volume.
Image processing apparatus, image processing system, operation method of image processing apparatus, and computer-readable recording medium
An image processing apparatus is configured to acquire a first image group and a second image group and execute image processing on the first and second image groups. The image processing apparatus includes: a processor comprising hardware, wherein the processor is configured to: compare the number of images of interest in the first image group with the number of images of interest in the second image group; and determine, based on a result of the comparison, the priority of processing on the first image group and processing on the second image group.
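The comparison step could be sketched as follows. The predicate for identifying an image of interest and the tie-break rule (equal counts keep the first group first) are assumptions; the function names are hypothetical:

```python
def prioritize_groups(first_group, second_group, is_of_interest):
    # Count the images of interest in each group using a caller-supplied
    # predicate (in the apparatus this would be a detection step).
    n_first = sum(1 for image in first_group if is_of_interest(image))
    n_second = sum(1 for image in second_group if is_of_interest(image))
    # The group with more images of interest is processed first;
    # on a tie, the first group keeps priority (an assumption).
    if n_second > n_first:
        return [second_group, first_group]
    return [first_group, second_group]
```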
Method of correlating a slice profile
A method and system for correlating slice profiles associated with a series of magnetic resonance images taken at a plurality of positions. The method comprises first positioning a patient in a first position in the imaging volume of the magnet. A scout scan is then acquired. Selection is then made of an anatomical landmark in the scout scan, which will be referred to as an anatomical fiducial. A particular slice, typically one of a stack of slices to be acquired in a subsequent scan, is selected and precisely positioned at the location of the anatomical fiducial in the scout scan. Following completion of the scan, the patient may be repositioned, necessitating a new scout scan to set up parameters for a second scan.
Optimizing user interactions in segmentation
A system and a computer-implemented method are provided for segmenting an object in a medical image using a graphical segmentation interface. The graphical segmentation interface may comprise a set of segmentation tools for enabling a user to obtain a first segmentation of the object in the image. This first segmentation may be represented by segmentation data. Interaction data may be obtained which is indicative of a set of user interactions of the user with the graphical segmentation interface by which the first segmentation of the object was obtained. The system may comprise a processor configured for analyzing the segmentation data and the interaction data to determine an optimized set of user interactions which, when carried out by the user, obtains a second segmentation similar to the first segmentation, yet in a quicker and more convenient manner. A video may be generated for training the user by indicating the optimized set of user interactions to the user.
CUT-SURFACE DISPLAY OF TUBULAR STRUCTURES
A method for visualizing a tubular object from a set of volumetric data may include the steps of: determining a viewing direction for the tubular object; selecting a constraint subset of the tubular object within the volumetric data; defining a cut-surface through the volumetric data and including the constraint subset of the tubular object within the volumetric data; and rendering an image based upon the determined viewing direction and the volumetric data of the tubular object along the intersection of the volumetric data and the defined cut-surface. Additionally or alternatively, the method may identify a plurality of bifurcations in the tubular object; assign a weighting factor to each identified bifurcation; determine a bifurcation normal vector associated with each bifurcation; determine a weighted average of the bifurcation normal vectors; and render an image of the volumetric data from a perspective parallel to the weighted average of the bifurcation normal vectors.
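The second branch of the abstract (weighted average of bifurcation normals as the rendering direction) could be sketched as follows. How the per-bifurcation weighting factors are assigned (e.g. by branch size) is left to the caller; the function name is hypothetical:

```python
import math

def render_direction(bifurcation_normals, weights):
    # Weighted average of the per-bifurcation normal vectors.
    total = sum(weights)
    avg = [sum(w * n[i] for n, w in zip(bifurcation_normals, weights)) / total
           for i in range(3)]
    # Normalize to a unit vector so it can serve as a viewing direction.
    length = math.sqrt(sum(c * c for c in avg))
    return [c / length for c in avg]
```

The rendered image is then taken from a perspective parallel to this vector.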
Methods and software for creating a 3D image from images of multiple histological sections and for mapping anatomical information from a reference atlas to a histological image
Methods and software assist a user in working with images of histological sections, increasing the user's productivity and decreasing the need for extensive expertise in anatomy. In some embodiments, the methods include methods of assisting a user in creating a 3D volume image of a tissue block from a series of images of histological sections taken from the tissue block. In some embodiments, the methods include methods of automatedly registering a live-view or stored histological section image to a tissue block atlas. In some embodiments, the methods include methods of annotating a histological section image with information from a tissue block atlas based on user input(s) associated with the tissue block atlas. In some embodiments, the methods include methods of automatedly controlling operation of an imaging modality, such as an optical microscope, based on user input(s) associated with a tissue block atlas. These and other methods may be embodied in various configurations of software.
Three-dimensional posture estimating method and apparatus, device and computer storage medium
The present disclosure provides a three-dimensional posture estimating method and apparatus, a device and a computer storage medium, wherein the method comprises: obtaining two-dimensional posture information of an object in an image and three-dimensional size information of the object; determining coordinates of key points of the object in an object coordinate system according to the three-dimensional size information of the object; and determining a transformation relationship between a camera coordinate system and the object coordinate system according to a geometrical relationship between the coordinates of the key points of the object in the object coordinate system and the two-dimensional posture information of the object. Applied in the field of autonomous driving, this approach can map a two-dimensional obstacle detection result into three-dimensional space to obtain the obstacle's posture.
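The first geometric step of this abstract (key-point coordinates from the 3D size) could be sketched as follows, taking the key points to be the eight corners of the object's 3D bounding box in an object coordinate system assumed centered at the box center (the choice of key points and frame origin is an assumption). Solving the camera-to-object transformation from these points and the 2D posture is then a perspective-n-point problem:

```python
def box_key_points(length, width, height):
    # Eight bounding-box corners in the object coordinate system,
    # assumed centered at the box center with x along length,
    # y along width, z along height.
    l, w, h = length / 2.0, width / 2.0, height / 2.0
    return [(sx * l, sy * w, sz * h)
            for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)]
```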