G06T3/4007

SYSTEM AND METHOD FOR GENERATING A STAINED IMAGE

A system and method for generating a stained image, including the steps of: obtaining a first image of a key sample section; and processing the first image with a multi-modal stain learning engine arranged to generate at least one stained image, wherein the at least one stained image represents the key sample section stained with at least one stain.

IMAGE PROCESSING METHODS, ELECTRONIC DEVICES, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIA

An image processing method includes: according to position coordinates of any interpolation pixel in a target image, determining position coordinates of the interpolation pixel in an original image; calculating a two-dimensional image entropy of an n×n neighborhood of the interpolation pixel in the original image; when the entropy is greater than or equal to a preset entropy threshold, calculating a pixel value of the interpolation pixel based on all original pixels in the neighborhood; when the entropy is less than the preset entropy threshold, calculating gradient values in at least two edge directions within the neighborhood and determining whether there is a strong-edge direction; if so, calculating the pixel value of the interpolation pixel based on a plurality of original pixels in the strong-edge direction; and if not, calculating the pixel value of the interpolation pixel based on all the original pixels.
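The branching logic above can be sketched in a few lines of NumPy. This is a minimal illustration, not the claimed method: the two-dimensional image entropy is approximated by a plain gray-level histogram entropy, only horizontal and vertical edge directions are tested, and the neighborhood size, `entropy_thresh`, and `edge_ratio` values are assumptions chosen for the example.

```python
import numpy as np

def entropy2d(patch, bins=16):
    """Gray-level histogram entropy of a neighborhood (a simplified
    stand-in for the two-dimensional image entropy in the abstract)."""
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 256.0))
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def interpolate_pixel(img, y, x, entropy_thresh=2.0, edge_ratio=4.0):
    """Pixel value at fractional position (y, x) in the original image.

    - entropy >= threshold: use all four surrounding pixels (bilinear);
    - entropy <  threshold: compare horizontal vs. vertical gradients;
      if one direction changes far less (a strong edge runs that way),
      blend only along that direction, else use all four pixels.
    """
    y0, x0 = int(y), int(x)
    fy, fx = y - y0, x - x0
    q = img[y0:y0 + 2, x0:x0 + 2].astype(float)  # 2x2 original pixels

    # entropy of a 4x4 neighborhood around the interpolation point
    patch = img[max(y0 - 1, 0):y0 + 3, max(x0 - 1, 0):x0 + 3]
    strong_edge = None
    if entropy2d(patch) < entropy_thresh:
        gh = abs(q[0, 1] - q[0, 0]) + abs(q[1, 1] - q[1, 0])  # change along x
        gv = abs(q[1, 0] - q[0, 0]) + abs(q[1, 1] - q[0, 1])  # change along y
        if gh > edge_ratio * max(gv, 1e-6):
            strong_edge = "vertical"    # edge runs vertically: blend along y
        elif gv > edge_ratio * max(gh, 1e-6):
            strong_edge = "horizontal"  # edge runs horizontally: blend along x

    if strong_edge == "vertical":
        col = q[:, 0] if fx < 0.5 else q[:, 1]  # nearest column to (y, x)
        return (1 - fy) * col[0] + fy * col[1]
    if strong_edge == "horizontal":
        row = q[0] if fy < 0.5 else q[1]        # nearest row to (y, x)
        return (1 - fx) * row[0] + fx * row[1]
    # all four original pixels (bilinear)
    top = (1 - fx) * q[0, 0] + fx * q[0, 1]
    bot = (1 - fx) * q[1, 0] + fx * q[1, 1]
    return (1 - fy) * top + fy * bot
```

Near a sharp vertical edge this snaps to the nearest column instead of blending across the edge, which is the behavior the strong-edge branch is meant to protect.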

Mobile and augmented reality based depth and thermal fusion scan

Systems and methods are described for mobile and augmented reality-based depth and thermal fusion scan imaging. Some embodiments of the present technology fuse information from the thermal and depth imaging channels to achieve synergistic effects for object recognition and personal identification. The techniques used in various embodiments thus give first responders, disaster-relief agents, search-and-rescue teams, and law-enforcement officials a better way to gather detailed forensic data. Some embodiments provide a series of unique features, including small size, wearable devices, and the ability to feed fused depth and thermal streams into AR glasses. In addition, some embodiments use a two-layer architecture: device-local fusion, and a cloud-based platform for integrating data from multiple devices and for cross-scene analysis and reconstruction.

Method and apparatus for image processing

An image processing method is provided for processing images obtained using a Red-Clear-Clear-Blue (RCCB) color filter array. The method, for an RCCB image obtained by an image capturing device comprising an array of photosensors with a mosaic of RCCB filters, comprises interpolating the RCCB image to obtain dense C-channel data from the sparse C-channel, and chromatic filtering, based on the RCCB image data and the dense C-channel, to obtain an RGB image by recovering dense R- and B-channels and filtering them using the dense C-channel as a guide image. The computational load on the image processing equipment and the processing time are reduced, while the obtained image quality is enhanced, including in low-light conditions.
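The chromatic-filtering step, with the dense C-channel acting as guide image, resembles a classic guided filter. The sketch below makes that assumption and is purely illustrative: it filters full-resolution planes rather than mosaiced sensor data, and the radius `r` and regularizer `eps` are example values, not parameters from the method.

```python
import numpy as np

def box_mean(a, r):
    """Mean over a (2r+1) x (2r+1) window via an integral image."""
    k = 2 * r + 1
    pad = np.pad(a, r, mode="edge")
    c = np.cumsum(np.cumsum(pad, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))  # zero row/col for window sums
    s = c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]
    return s / (k * k)

def guided_filter(guide, src, r=2, eps=1e-3):
    """Edge-preserving filtering of `src` steered by `guide`.

    Locally fits src ~ a * guide + b, so edges present in the guide
    (here, the dense C-channel) are transferred into the filtered
    R- or B-channel while noise in `src` is smoothed away.
    """
    mean_g = box_mean(guide, r)
    mean_s = box_mean(src, r)
    corr = box_mean(guide * src, r)
    var = box_mean(guide * guide, r) - mean_g * mean_g
    a = (corr - mean_g * mean_s) / (var + eps)
    b = mean_s - a * mean_g
    return box_mean(a, r) * guide + box_mean(b, r)
```

In a demosaicing pipeline one would call `guided_filter(dense_c, dense_r)` and `guided_filter(dense_c, dense_b)` after interpolating each channel, so that the high-frequency structure of the clear channel constrains the sparser color channels.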

Immersive video experience including rotation

Techniques for video manipulation based on an immersive video experience including rotation are disclosed. Parameters pertaining to a video are determined. Second parameters pertaining to a video display are determined. A minimum scale value is calculated to inscribe a rectangle within an oval based on a height and a width of the video, where a height and a width of the video display define the rectangle. A gravity sensor input is preprocessed using a low-pass filter before it is used to determine the minimum scale value. The video is preprocessed using a video inset and a viewport inset. The video can be trimmed, and the rectangle can be scaled. A rectangular portion of the video is rendered on the video display, wherein the rectangular portion lies on or inside the boundaries of the oval.
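Inscribing the display rectangle in an oval reduces to a corner-on-ellipse condition. A sketch, assuming the oval's semi-axes are half the scaled video width and height (that geometric model, and the exponential low-pass on the gravity input, are illustrative assumptions):

```python
import math

def min_scale(video_w, video_h, disp_w, disp_h):
    """Minimum scale factor for the video so that the display rectangle
    is inscribed in the oval whose semi-axes are half the scaled video
    width and height. The rectangle corner (disp_w/2, disp_h/2) must
    lie on or inside the scaled ellipse:

        (disp_w/2)^2 / (s*video_w/2)^2 + (disp_h/2)^2 / (s*video_h/2)^2 <= 1

    which solves to s = sqrt((disp_w/video_w)^2 + (disp_h/video_h)^2).
    """
    return math.hypot(disp_w / video_w, disp_h / video_h)

def smooth_gravity(samples, alpha=0.1):
    """First-order low-pass (exponential moving average) applied to raw
    gravity-sensor samples before they feed the scale computation."""
    out, acc = [], samples[0]
    for s in samples:
        acc = alpha * s + (1 - alpha) * acc
        out.append(acc)
    return out
```

For a square video on a square display, `min_scale` returns √2: the video must be enlarged until its inscribed circle's diameter covers the display diagonal, so rotation never exposes a region outside the oval.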

Electronic apparatus and controlling method thereof

An electronic apparatus includes a memory storing information on an artificial intelligence (AI) model comprising a plurality of layers, and a processor configured to obtain an output image that is processed from an input image using the AI model. The processor is configured to, based on a number of non-zero data values included in operation data output from a first layer among the plurality of layers, compress the operation data according to at least one of a plurality of coding modes and store the compressed data in an internal memory, obtain restoration data corresponding to the operation data by decompressing the compressed data stored in the internal memory, and provide the obtained restoration data to a second layer among the plurality of layers.
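Choosing a coding mode from the number of non-zero values can be sketched with a two-mode codec: sparse (index, value) pairs when activations are mostly zero, raw dense storage otherwise. The abstract's "plurality of coding modes" is reduced to these two for illustration, and the `sparse_frac` cutoff is an assumed heuristic.

```python
import numpy as np

def compress(acts, sparse_frac=0.25):
    """Compress a layer's operation data by mode selection: if the
    non-zero count is small, store (indices, values); else store the
    dense buffer unchanged."""
    flat = acts.ravel()
    nz = np.flatnonzero(flat)
    if nz.size <= sparse_frac * flat.size:
        return ("sparse", acts.shape, nz, flat[nz])
    return ("dense", acts.shape, flat.copy())

def decompress(blob):
    """Restore the operation data exactly, ready to feed the next layer."""
    if blob[0] == "sparse":
        _, shape, idx, vals = blob
        flat = np.zeros(int(np.prod(shape)), dtype=vals.dtype)
        flat[idx] = vals
        return flat.reshape(shape)
    return blob[2].reshape(blob[1])
```

ReLU-style layers often emit mostly zeros, so the sparse mode shrinks the internal-memory footprint, while dense layers fall back to uncompressed storage rather than paying index overhead.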

Enhancing the resolution of a video stream

In one embodiment, a method includes: accessing first-resolution images corresponding to frames of a video; computing a motion vector based on a first-resolution image of a first frame in the video and a first-resolution image of a second frame in the video; generating a second-resolution warped image associated with the second frame by using the motion vector to warp a second-resolution reconstructed image associated with the first frame; generating a second-resolution intermediate image associated with the second frame based on the first-resolution image associated with the second frame; computing adjustment parameters by processing the first-resolution image associated with the second frame and the second-resolution warped image associated with the second frame using a machine-learning model; and adjusting pixels of the second-resolution intermediate image associated with the second frame based on the adjustment parameters to reconstruct a second-resolution reconstructed image associated with the second frame.
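One recurrent step of this pipeline can be sketched as follows. Everything here is a simplified stand-in: nearest-neighbor warping instead of a learned warp, nearest-neighbor upsampling for the intermediate image, and `model` is a hypothetical callable in place of the machine-learning model that predicts the adjustment parameters.

```python
import numpy as np

def warp(image, flow):
    """Backward-warp `image` by per-pixel motion vectors `flow`
    (H x W x 2, in pixels), nearest-neighbor sampling for brevity."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    sy = np.clip(np.rint(ys - flow[..., 1]).astype(int), 0, h - 1)
    sx = np.clip(np.rint(xs - flow[..., 0]).astype(int), 0, w - 1)
    return image[sy, sx]

def reconstruct_frame(prev_hr, lr_frame, flow_lr, model, upscale=2):
    """Reconstruct the second frame at high resolution: warp the previous
    high-resolution reconstruction with the upscaled motion vector,
    upsample the current low-resolution frame to an intermediate image,
    then apply model-predicted per-pixel adjustments."""
    flow_hr = np.repeat(np.repeat(flow_lr, upscale, 0), upscale, 1) * upscale
    warped = warp(prev_hr, flow_hr)
    intermediate = np.repeat(np.repeat(lr_frame, upscale, 0), upscale, 1)
    adjust = model(lr_frame, warped)  # adjustment parameters (same shape)
    return intermediate + adjust
```

The output is fed back in as `prev_hr` for the next frame, which is what makes the scheme temporally coherent: each high-resolution reconstruction inherits detail from its warped predecessor rather than being upscaled from scratch.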

COMPOSITOR LAYER EXTRAPOLATION
20230128288 · 2023-04-27

In one embodiment, a method may obtain, from an application, (a) an image and (b) a layer frame having a first pose in front of the image. The method may generate, for a first viewpoint associated with a first time, a first display frame by separately rendering the image and the layer frame having the first pose into a display buffer. The method may display the first display frame at the first time. The method may determine an extrapolated pose for the layer frame based on the first pose of the layer frame and a second pose of a previously submitted layer frame. The method may generate, for a second viewpoint associated with a second time, a second display frame by separately rendering the image and the layer frame having the extrapolated pose into the display buffer. The method may display the second display frame at the second time.
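Determining the extrapolated pose from the current and previously submitted poses amounts to linear extrapolation: assume the pose keeps changing at the rate observed between the two submissions. A sketch, with poses reduced to simple (x, y, z, yaw) vectors, a hypothetical simplification of the full transforms a real compositor would use:

```python
import numpy as np

def extrapolate_pose(current, previous):
    """Predict the layer frame's pose for the next display time by
    carrying the last observed per-frame change forward:
        extrapolated = current + (current - previous)."""
    current = np.asarray(current, dtype=float)
    previous = np.asarray(previous, dtype=float)
    return current + (current - previous)
```

The compositor can then re-render the layer frame at this predicted pose for the second viewpoint even if the application has not submitted a new frame in time.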

Context-aware synthesis for video frame interpolation
11475536 · 2022-10-18

Systems, methods, and computer-readable media for context-aware synthesis for video frame interpolation are provided. Bidirectional flow may be used in combination with a flexible frame-synthesis neural network to handle occlusions and the like, and to accommodate inaccuracies in motion estimation. Contextual information may be used to enable the frame-synthesis neural network to perform informative interpolation. Optical flow may be used to provide an initialization for the interpolation. Other embodiments may be described and/or claimed.

Method and system to assess pulmonary hypertension using phase space tomography and machine learning

Phase space tomography methods and systems facilitate the analysis and evaluation of complex, quasi-periodic systems by generating computed phase-space tomographic images and mathematical features as a representation of the dynamics of quasi-periodic cardiac systems. The computed phase-space tomographic images can be presented to a physician to assist in assessing the presence or non-presence of disease. In some implementations, the phase-space tomographic images are used as input to a trained neural network classifier configured to assess for the presence or non-presence of pulmonary hypertension, including pulmonary arterial hypertension.