Patent classifications
G06T2207/20221
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND COMPUTER PROGRAM PRODUCT
An information processing apparatus configured to paste a full-spherical panoramic image along an inner wall of a virtual three-dimensional sphere; calculate an arrangement position for arranging a planar image closer to a center point of the virtual three-dimensional sphere than the inner wall, in such an orientation that a line-of-sight direction from the center point to the inner wall and a perpendicular line of the planar image are parallel to each other, the planar image being obtained by pasting an embedding image to be embedded in the full-spherical panoramic image onto a two-dimensional plane; and display a display image on a display unit. The display image is a two-dimensional image viewed from the center point in the line-of-sight direction in a state in which the full-spherical panoramic image is pasted along the inner wall of the virtual three-dimensional sphere and the planar image is arranged at the arrangement position.
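The geometric placement described above can be sketched in a few lines of vector math: the planar image sits on the line of sight at some fraction of the sphere radius, with its normal parallel to that line. The function name and the `depth_ratio` parameter are illustrative assumptions, not terms from the patent:

```python
import numpy as np

def arrangement_position(center, line_of_sight, sphere_radius, depth_ratio=0.5):
    """Place a planar image along the line of sight, closer to the center
    than the sphere's inner wall (depth_ratio < 1), with the plane's
    perpendicular parallel to the line-of-sight direction."""
    d = np.asarray(line_of_sight, dtype=float)
    d = d / np.linalg.norm(d)                  # unit line-of-sight direction
    position = np.asarray(center, dtype=float) + depth_ratio * sphere_radius * d
    normal = d                                 # plane normal is parallel to the line of sight
    return position, normal

pos, n = arrangement_position(center=[0, 0, 0], line_of_sight=[0, 0, 1], sphere_radius=10.0)
```

With a radius of 10 and the default ratio, the plane lands halfway between the center and the inner wall, at (0, 0, 5), facing the viewer.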
Design-to-wafer image correlation by combining information from multiple collection channels
At least three dark field images of a feature on a semiconductor wafer can be formed using an optical inspection system. Each of the at least three dark field images is from a different channel of the optical inspection system using an aperture that is fully open during image generation. The dark field images can be fused into a pseudo wafer image that is aligned with a corresponding design. This alignment can improve care area placement.
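The fusion of per-channel dark field images into a single pseudo wafer image could, in its simplest form, be a weighted per-pixel combination; the helper below is a hedged sketch (uniform weights by default), not the patent's actual fusion algorithm:

```python
import numpy as np

def fuse_dark_field(channels, weights=None):
    """Fuse dark-field images from multiple collection channels into one
    pseudo wafer image via a weighted per-pixel average (a simplifying
    assumption; the real fusion may be more sophisticated)."""
    stack = np.stack([np.asarray(c, dtype=float) for c in channels])
    if weights is None:
        weights = np.ones(len(channels)) / len(channels)
    weights = np.asarray(weights, dtype=float).reshape(-1, 1, 1)
    return (stack * weights).sum(axis=0)

a = np.full((4, 4), 0.2)   # channel 1
b = np.full((4, 4), 0.4)   # channel 2
c = np.full((4, 4), 0.6)   # channel 3
pseudo = fuse_dark_field([a, b, c])
```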
IMAGING DEVICE AND METHOD FOR GENERATING AN UNDISTORTED WIDE VIEW IMAGE
Certain aspects of the technology disclosed herein involve combining images to generate a wide view image of a surrounding environment. Images can be recorded using a stand-alone imaging device having wide angle lenses and/or normal lenses. Images from the imaging device can be combined using methods described herein. In an embodiment, a pixel correspondence between a first image and a second image can be determined, based on a corresponding overlap area associated with the first image and the second image. Corresponding pixels in the corresponding overlap area associated with the first image and the second image can be merged based on a weight assigned to each of the corresponding pixels.
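The weighted merge of corresponding pixels in an overlap area can be sketched as a per-pixel blend; the linear ramp used for the weight map here is an illustrative assumption, since the abstract only states that each corresponding pixel pair carries a weight:

```python
import numpy as np

def merge_overlap(img_a, img_b, weight_a):
    """Merge corresponding pixels of two overlapping images: each output
    pixel is weight_a * img_a + (1 - weight_a) * img_b."""
    w = np.asarray(weight_a, dtype=float)[..., None]  # broadcast over color channels
    return w * img_a + (1.0 - w) * img_b

h, w = 2, 5
img_a = np.ones((h, w, 3))
img_b = np.zeros((h, w, 3))
ramp = np.tile(np.linspace(1.0, 0.0, w), (h, 1))  # a's influence falls off across the overlap
merged = merge_overlap(img_a, img_b, ramp)
```

A ramped weight map like this produces a seamless transition from the first image to the second across the overlap area.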
Image Registration with Device Data
Systems and methods for image registration using data collected by an electronic device, such as a mobile device, capable of simultaneous localization and mapping are provided. An electronic device, such as a mobile device, can be configured to collect data using a variety of sensors as the device is carried or transported through a space. The collected data can be processed and analyzed to generate a three-dimensional representation of the space and objects in the space in near real time as the device is carried through the space. The data can be used for a variety of purposes, including registering imagery for localization and image processing.
Systems and methods for generating panning images
Images may be captured by a moving image capture device. A reference image and a background image may be selected from the images. The reference image may include depiction of an object, with the object blocking view of the background. The background image may include depiction of the background blocked by the object in the reference image. An object layer may be generated by segmenting the depiction of the object from the reference image. A background layer may be generated by combining the depiction of the background in the background image with the reference image. The background layer may be blurred and combined with the object layer to generate a panning image.
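The layering step above (blur the background, keep the object sharp, composite) can be sketched with a simple horizontal box blur standing in for motion blur; the helper names, kernel size, and use of a boolean mask are illustrative assumptions:

```python
import numpy as np

def horizontal_blur(img, k):
    """1-D horizontal box blur, a crude stand-in for motion blur
    along the panning direction."""
    kernel = np.ones(k) / k
    return np.apply_along_axis(lambda row: np.convolve(row, kernel, mode='same'), 1, img)

def panning_image(background, object_layer, mask, k=5):
    """Blur the background layer, then composite the sharp object layer
    over it wherever the segmentation mask is true."""
    blurred = horizontal_blur(background, k)
    return np.where(mask, object_layer, blurred)

bg = np.random.rand(8, 16)                 # background layer (grayscale)
obj = np.ones((8, 16))                     # segmented object layer
mask = np.zeros((8, 16), dtype=bool)
mask[2:6, 6:10] = True                     # region occupied by the object
out = panning_image(bg, obj, mask)
```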
DISEASE CHARACTERIZATION FROM FUSED PATHOLOGY AND RADIOLOGY DATA
Methods and apparatus distinguish invasive adenocarcinoma (IA) from adenocarcinoma in situ (AIS). One example apparatus includes a set of circuits, and a data store that stores three dimensional (3D) radiological images of tissue demonstrating IA or AIS. The set of circuits includes a classification circuit that generates an invasiveness classification for a diagnostic 3D radiological image, a training circuit that trains the classification circuit to identify a texture feature associated with IA, an image acquisition circuit that acquires a diagnostic 3D radiological image of a region of tissue demonstrating cancerous pathology and that provides the diagnostic 3D radiological image to the classification circuit, and a prediction circuit that generates an invasiveness score based on the diagnostic 3D radiological image and the invasiveness classification. The training circuit trains the classification circuit using a set of 3D histological reconstructions combined with the set of 3D radiological images.
Three-dimensional data creation method, three-dimensional data transmission method, three-dimensional data creation device, and three-dimensional data transmission device
A three-dimensional data creation method includes: creating first three-dimensional data from information detected by a sensor; receiving encoded three-dimensional data that is obtained by encoding second three-dimensional data; decoding the received encoded three-dimensional data to obtain the second three-dimensional data; and merging the first three-dimensional data with the second three-dimensional data to create third three-dimensional data.
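The merge step of the method can be sketched as combining a locally sensed point set with a decoded, received point set; representing the data as an N×3 array and deduplicating exact repeats are simplifying assumptions (the encoding and decoding steps are out of scope here):

```python
import numpy as np

def merge_point_clouds(first, second):
    """Merge first 3D data (from the local sensor) with second 3D data
    (decoded from received encoded data) into third 3D data,
    dropping exactly duplicated points."""
    combined = np.vstack([first, second])
    return np.unique(combined, axis=0)

first = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])   # locally sensed points
second = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])  # decoded received points
third = merge_point_clouds(first, second)
```

The shared point appears once in the merged result, so the third data set contains three distinct points.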
Photoacoustic image evaluation apparatus, method, and program, and photoacoustic image generation apparatus
A photoacoustic image evaluation apparatus includes a processor configured to acquire a first photoacoustic image generated at a first point in time and a second photoacoustic image generated at a second point in time before the first point in time, the first and second photoacoustic images being photoacoustic images generated by detecting photoacoustic waves generated inside a subject, who has been subjected to blood vessel regeneration treatment, by emission of light into the subject; acquire a blood vessel regeneration index, which indicates a state of a blood vessel resulting from the regeneration treatment, based on a difference between a blood vessel included in the first photoacoustic image and a blood vessel included in the second photoacoustic image; and display the blood vessel regeneration index on a display.
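One simple way the regeneration index could compare vessels across the two time points is as a ratio of thresholded vessel areas; the threshold value and the ratio formulation are illustrative assumptions, since the abstract specifies only that the index is based on a difference between the imaged vessels:

```python
import numpy as np

def regeneration_index(first_img, second_img, threshold=0.5):
    """Toy regeneration index: ratio of vessel pixel count in the later
    (first) photoacoustic image to that in the earlier (second) image.
    Values above 1 suggest vessel growth since the earlier scan."""
    vessels_now = (np.asarray(first_img) > threshold).sum()
    vessels_before = (np.asarray(second_img) > threshold).sum()
    return vessels_now / max(vessels_before, 1)   # guard against empty baseline

earlier = np.zeros((10, 10)); earlier[4:6, :] = 1.0   # 20 vessel pixels
later = np.zeros((10, 10)); later[3:7, :] = 1.0       # 40 vessel pixels
idx = regeneration_index(later, earlier)
```

Here the vessel area doubles between scans, giving an index of 2.0.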
Methods and systems for image processing with multiple image sources
Various methods and systems are provided for image processing for multiple cameras. In one embodiment, a method comprises acquiring image frames with a plurality of image frame sources configured with different acquisition settings, processing the image frames based on the different acquisition settings to generate at least one final image frame, and outputting the at least one final image frame. In this way, information from different image frame sources such as cameras may be leveraged to achieve increased frame rates with improved image quality and a desired motion appearance.
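The increased-frame-rate idea can be sketched by interleaving time-stamped frames from multiple sources into one ordered sequence; the tuple layout `(timestamp, data)` and function name are illustrative assumptions:

```python
def interleave_frames(streams):
    """Interleave time-stamped frames from several image frame sources
    into a single sequence ordered by timestamp, raising the effective
    frame rate of the combined output."""
    merged = [frame for stream in streams for frame in stream]
    return sorted(merged, key=lambda frame: frame[0])   # frame = (timestamp, data)

cam_a = [(0.0, 'a0'), (2.0, 'a1'), (4.0, 'a2')]   # one camera at a lower rate
cam_b = [(1.0, 'b0'), (3.0, 'b1')]                # second camera, offset in time
combined = interleave_frames([cam_a, cam_b])
```

Two offset sources at the same per-camera rate yield roughly double the output frame rate, after which per-source acquisition settings would be compensated in the processing step.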
MULTI-DOMAIN CONVOLUTIONAL NEURAL NETWORK
In one embodiment, an apparatus comprises a memory and a processor. The memory is to store visual data associated with a visual representation captured by one or more sensors. The processor is to: obtain the visual data associated with the visual representation captured by the one or more sensors, wherein the visual data comprises uncompressed visual data or compressed visual data; process the visual data using a convolutional neural network (CNN), wherein the CNN comprises a plurality of layers, wherein the plurality of layers comprises a plurality of filters, and wherein the plurality of filters comprises one or more pixel-domain filters to perform processing associated with uncompressed data and one or more compressed-domain filters to perform processing associated with compressed data; and classify the visual data based on an output of the CNN.
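The filter-bank structure can be illustrated with a minimal sketch that routes visual data to pixel-domain or compressed-domain filters depending on its form; the tiny convolution helper and the toy filters are assumptions for demonstration and do not reflect the actual CNN architecture:

```python
import numpy as np

def conv2d_valid(img, kern):
    """Minimal single-channel 2-D valid cross-correlation."""
    kh, kw = kern.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * kern).sum()
    return out

def multi_domain_features(data, is_compressed, pixel_filters, compressed_filters):
    """Apply pixel-domain filters to uncompressed data and
    compressed-domain filters to compressed data, mirroring a CNN whose
    layers contain both kinds of filters."""
    filters = compressed_filters if is_compressed else pixel_filters
    return [conv2d_valid(data, k) for k in filters]

edge = np.array([[1.0, -1.0]])        # toy pixel-domain filter (horizontal gradient)
dct_like = np.ones((2, 2)) / 4.0      # toy compressed-domain filter
img = np.arange(16, dtype=float).reshape(4, 4)
feats = multi_domain_features(img, is_compressed=False,
                              pixel_filters=[edge], compressed_filters=[dct_like])
```

The resulting feature maps would then feed later layers and a classifier head in a full CNN.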