G06T3/40

High efficiency dynamic contrast processing

A high efficiency method of processing images to provide perceptual high-contrast output. Pixel intensities are calculated by a weighted combination of a fixed number of static bounding rectangle sizes. This is more performant than incrementally growing the bounding rectangle size and performing expensive analysis on resultant histograms. To mitigate image artifacts and noise, blurring and down-sampling are applied to the image prior to processing.
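
A minimal sketch of how such a scheme might look, assuming OpenCV and NumPy; the function name, the three fixed window sizes, the blend weights, and the use of local min/max as the per-rectangle statistic are illustrative assumptions rather than details taken from the abstract:

```python
import cv2
import numpy as np

def dynamic_contrast(gray, window_sizes=(15, 31, 63), weights=(0.5, 0.3, 0.2)):
    # Pre-processing: blur and down-sample to suppress noise and artifacts.
    small = cv2.pyrDown(cv2.GaussianBlur(gray, (5, 5), 0))
    img = small.astype(np.float32)

    out = np.zeros_like(img)
    for size, w in zip(window_sizes, weights):
        kernel = np.ones((size, size), np.uint8)
        local_min = cv2.erode(img, kernel)   # min over the fixed bounding rectangle
        local_max = cv2.dilate(img, kernel)  # max over the fixed bounding rectangle
        stretched = (img - local_min) / (local_max - local_min + 1e-6)
        out += w * stretched                 # weighted combination across sizes

    return (255 * np.clip(out, 0, 1)).astype(np.uint8)
```

Because every window size is fixed, each pass is a constant-cost filtering step, which is where the claimed efficiency over iteratively grown rectangles and histogram analysis would come from.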

AI frame engine for mobile edge

Aspects of the disclosure provide a device for processing frames with aliasing artifacts. For example, the device can include a motion estimation circuit, a warping circuit coupled to the motion estimation circuit, and a temporal decision circuit coupled to the warping circuit. The motion estimation circuit can estimate a motion value between a current frame and a previous frame. The warping circuit can warp the previous frame based on the motion value such that the warped previous frame is aligned with the current frame, and can determine whether the current frame and the warped previous frame are consistent. The temporal decision circuit can generate an output frame based on both the current frame and the warped previous frame when the two are consistent, or on the current frame alone when they are not consistent.
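
A rough software analogue of the three circuits, assuming grayscale uint8 frames and OpenCV; Farneback optical flow, the mean-absolute-difference consistency test, the threshold value, and the 50/50 blend are stand-ins chosen for illustration, not the circuits described:

```python
import cv2
import numpy as np

def temporal_filter(prev_gray, cur_gray, consistency_thresh=10.0):
    # Motion estimation: dense flow from the current frame to the previous frame.
    flow = cv2.calcOpticalFlowFarneback(cur_gray, prev_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = cur_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)

    # Warping: align the previous frame with the current frame.
    warped_prev = cv2.remap(prev_gray, map_x, map_y, cv2.INTER_LINEAR)

    # Temporal decision: use both frames only when they are consistent.
    diff = np.abs(cur_gray.astype(np.float32) - warped_prev.astype(np.float32))
    if diff.mean() < consistency_thresh:
        return cv2.addWeighted(cur_gray, 0.5, warped_prev, 0.5, 0)
    return cur_gray
```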

Distinguishing real from virtual objects in immersive reality
11580734 · 2023-02-14

Aspects of the subject disclosure may include, for example, a camera positioned to capture image information of an immersive experience presented to one or more users engaged in the immersive experience and located in an immersive experience space, a processing system and a memory that stores executable instructions to facilitate performance of operations including receiving the image information from the camera, detecting objects located in the immersive experience space with the one or more users, the objects including at least one virtual object created by the immersive experience, determining the at least one virtual object is a projected virtual object of the immersive experience, generating a signal indicating the at least one virtual object is a projected virtual object, and a projector, responsive to the signal, to provide a visual indication in the immersive experience space to identify the projected virtual object as a virtual object to the one or more users engaged in the immersive experience. Other embodiments are disclosed.
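
A purely illustrative sketch of the signalling step, assuming some upstream detector has already classified each object in the captured image; the DetectedObject structure and the signal format are hypothetical, since the abstract does not specify how projected virtual objects are recognized:

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    bounds: tuple              # (x, y, w, h) within the immersive experience space
    is_projected_virtual: bool # result of the "projected virtual object" determination

def signal_virtual_objects(frame_objects):
    """Return signals telling the projector which objects to visually mark."""
    return [{"action": "highlight", "bounds": obj.bounds}
            for obj in frame_objects if obj.is_projected_virtual]

# Example: one real object and one projected virtual object.
objs = [DetectedObject((0, 0, 50, 50), False),
        DetectedObject((60, 10, 30, 30), True)]
print(signal_virtual_objects(objs))  # only the virtual object gets a visual indication
```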

Image processing apparatus, image processing method, and non-transitory computer-readable medium
11580620 · 2023-02-14

There is provided with an image processing apparatus. A noise reduction unit generates a noise-reduced image in which noise is reduced from an input image in which a plurality of types of pixels that represent mutually different types of color information are arranged in one plane. An extraction unit generates a high-frequency emphasized image in which a high-frequency component of the input image is emphasized. A demosaicing unit generates a demosaiced image having a plurality of planes that each represent one type of color information by demosaicing processing to the noise-reduced image. A generation unit generates an output image by correcting the demosaiced image by using the high-frequency emphasized image.
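
One way the four units could be chained, sketched with OpenCV and NumPy; the median-filter noise reduction, the Gaussian high-pass emphasis, and the RGGB Bayer pattern are illustrative choices rather than the methods actually claimed:

```python
import cv2
import numpy as np

def process_raw(bayer):
    """bayer: single-plane uint8 mosaic where each pixel carries one color type."""
    # Noise reduction unit: denoise the mosaiced input image.
    denoised = cv2.medianBlur(bayer, 3)

    # Extraction unit: high-frequency emphasized image from the original input.
    low = cv2.GaussianBlur(bayer.astype(np.float32), (5, 5), 0)
    high_freq = bayer.astype(np.float32) - low

    # Demosaicing unit: interpolate the noise-reduced image into three color planes.
    demosaiced = cv2.cvtColor(denoised, cv2.COLOR_BayerRG2BGR).astype(np.float32)

    # Generation unit: correct the demosaiced image with the high-frequency image.
    out = demosaiced + high_freq[..., None]
    return np.clip(out, 0, 255).astype(np.uint8)
```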

Apparatus and methods for pre-processing and stabilization of captured image data
11582387 · 2023-02-14

Apparatus and methods for the pre-processing of image data so as to enhance the quality of subsequent encoding and rendering. In one embodiment, a capture device is disclosed that includes a processing apparatus and a non-transitory computer readable apparatus comprising a storage medium having one or more instructions stored thereon. The one or more instructions, when executed by the processing apparatus, are configured to: receive captured image data (such as that sourced from two or more separate image sensors) and pre-process the data to enable stabilization of the corresponding images prior to encoding. In some implementations, the pre-processing includes combination (e.g., stitching) of the captured image data associated with the two or more sensors to facilitate the stabilization. Advantageously, undesirable artifacts such as object “jitter” can be reduced or eliminated. Methods and non-transitory computer readable apparatus are also disclosed.
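
A simplified sketch of the pre-processing idea, assuming grayscale frames and OpenCV; plain horizontal concatenation stands in for real stitching, and phase-correlation translation estimation stands in for the stabilization described:

```python
import cv2
import numpy as np

def preprocess_for_encoding(frame_a, frame_b, prev_stitched=None):
    # Combine (stitch) the image data from the two sensors.  A plain
    # horizontal concatenation is used here purely for illustration.
    stitched = np.hstack([frame_a, frame_b])

    if prev_stitched is None:
        return stitched  # first frame: nothing to stabilize against yet

    # Stabilize prior to encoding: estimate the global translation against
    # the previous combined frame via phase correlation and shift it out.
    shift, _ = cv2.phaseCorrelate(np.float32(prev_stitched), np.float32(stitched))
    m = np.float32([[1, 0, -shift[0]], [0, 1, -shift[1]]])
    h, w = stitched.shape[:2]
    return cv2.warpAffine(stitched, m, (w, h))
```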

Method of matching images to be merged and data processing device performing the same

Each input image from a plurality of input images is divided into a plurality of image tiles. A feature point map including a plurality of feature point tiles respectively corresponding to the plurality of image tiles is generated by extracting feature points included in each image tile of the plurality of image tiles. A descriptor map including a plurality of descriptor tiles respectively corresponding to the plurality of feature point tiles is generated by generating descriptors of feature points included in the feature point map. Mapping information containing matching relationships between feature points included in different input images of the plurality of input images is generated based on a plurality of descriptor maps respectively corresponding to the plurality of input images. Image merging performance may be enhanced by dividing the input image into the plurality of image tiles to increase distribution uniformity of the feature points.
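
A compact sketch of tile-wise feature extraction and matching, assuming OpenCV; ORB features, a 4x4 tile grid, and brute-force Hamming matching are illustrative substitutes for whatever detector, descriptor, and matcher the method actually uses:

```python
import cv2
import numpy as np

def tiled_features(gray, tiles=4, per_tile=100):
    """Extract keypoints and descriptors tile by tile for more uniform coverage."""
    orb = cv2.ORB_create(nfeatures=per_tile)
    h, w = gray.shape
    th, tw = h // tiles, w // tiles
    keypoints, descriptors = [], []
    for i in range(tiles):
        for j in range(tiles):
            tile = gray[i * th:(i + 1) * th, j * tw:(j + 1) * tw]
            kps, desc = orb.detectAndCompute(tile, None)
            if desc is None:
                continue
            for kp in kps:  # map tile coordinates back to image coordinates
                kp.pt = (kp.pt[0] + j * tw, kp.pt[1] + i * th)
            keypoints.extend(kps)
            descriptors.append(desc)
    return keypoints, np.vstack(descriptors)

def match_images(gray1, gray2):
    """Build mapping information between feature points of two input images."""
    kps1, desc1 = tiled_features(gray1)
    kps2, desc2 = tiled_features(gray2)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    return sorted(matcher.match(desc1, desc2), key=lambda m: m.distance)
```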

Medical image segmentation method based on U-Net

A medical image segmentation method based on a U-Net, including: sending a real segmentation image and an original image to a generative adversarial network for data enhancement to generate a composite image with a label; then adding the composite image to the original data set to obtain an expanded data set, and sending the expanded data set to an improved multi-feature fusion segmentation network for training. A Dilated Convolution Module is added between the shallow and deep feature skip connections of the segmentation network to obtain receptive fields with different sizes, which enhances the fusion of detail information and deep semantics, improves the adaptability to the size of the segmentation target, and improves the medical image segmentation accuracy. The over-fitting problem that occurs when training the segmentation network is alleviated by using the expanded data set from the generative adversarial network.
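
A minimal PyTorch sketch of a dilated module placed on a skip connection; the parallel dilation rates (1, 2, 4) and the 1x1 fusion convolution are assumptions, as the abstract does not give the module's exact layout:

```python
import torch
import torch.nn as nn

class DilatedConvModule(nn.Module):
    """Parallel dilated convolutions provide several receptive-field sizes,
    applied to the shallow feature map before fusion with deep features."""
    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3,
                      padding=d, dilation=d) for d in dilations)
        self.fuse = nn.Conv2d(channels * len(dilations), channels, kernel_size=1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

# Skip-connection fusion: shallow encoder features pass through the dilated
# module before being concatenated with up-sampled decoder features (U-Net style).
shallow = torch.randn(1, 64, 128, 128)   # encoder feature map
deep_up = torch.randn(1, 64, 128, 128)   # decoder feature map after up-sampling
fused = torch.cat([DilatedConvModule(64)(shallow), deep_up], dim=1)
print(fused.shape)  # torch.Size([1, 128, 128, 128])
```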
