H04N9/74

Dynamically rendering 360-degree videos using view-specific-filter parameters

This disclosure relates to methods, non-transitory computer readable media, and systems that generate and dynamically change filter parameters for a frame of a 360-degree video based on detecting a field of view from a computing device. As a computing device rotates or otherwise changes orientation, for instance, the disclosed systems can detect a field of view and interpolate one or more filter parameters corresponding to nearby spatial keyframes of the 360-degree video to generate view-specific-filter parameters. By generating and storing filter parameters for spatial keyframes corresponding to different times and different view directions, the disclosed systems can dynamically adjust color grading or other visual effects using interpolated, view-specific-filter parameters to render a filtered version of the 360-degree video.
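The interpolation step this abstract describes can be sketched as follows. This is a minimal illustration only: the inverse-angular-distance weighting, the keyframe dictionaries, and every name here are assumptions for demonstration, not the patented method.

```python
import math

def interp_filter_params(view_dir, keyframes):
    """Blend filter parameters of nearby spatial keyframes by inverse
    angular distance to the current view direction (illustrative scheme)."""
    weights = []
    for kf in keyframes:
        # angle between the current view and the keyframe's view direction
        dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(view_dir, kf["dir"]))))
        angle = math.acos(dot)
        weights.append(1.0 / (angle + 1e-6))  # closer keyframes weigh more
    total = sum(weights)
    blended = {}
    for key in keyframes[0]["params"]:
        blended[key] = sum(w * kf["params"][key]
                           for w, kf in zip(weights, keyframes)) / total
    return blended

# two hypothetical keyframes with color-grading parameters
keyframes = [
    {"dir": (1.0, 0.0, 0.0), "params": {"saturation": 1.2, "exposure": 0.0}},
    {"dir": (0.0, 0.0, 1.0), "params": {"saturation": 0.8, "exposure": 0.5}},
]
params = interp_filter_params((1.0, 0.0, 0.0), keyframes)
```

When the view direction coincides with a keyframe, its parameters dominate; as the device rotates toward another keyframe, the blend shifts smoothly.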

Display apparatus and control method thereof

Disclosed is a display apparatus. The display apparatus obtains first characteristic information, which is provided according to a plurality of sections of content and corresponds to an image characteristic of a section to be displayed among the plurality of sections, from a signal received in the signal receiver; obtains first image-quality setting information for setting image quality of the section based on the obtained first characteristic information; obtains second characteristic information corresponding to an image characteristic of a frame included in the section from the frame; obtains second image-quality setting information for setting image quality of the frame based on the obtained first image-quality setting information and the obtained second characteristic information; and controls the display to display an image of the frame, the image quality of the frame being set based on the obtained second image-quality setting information.
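The two-stage flow above (a section-level setting refined per frame) can be sketched as follows; the characteristic fields and the adjustment formulas are invented purely for illustration.

```python
def section_quality(section_char):
    """First image-quality setting, derived from section-level
    characteristic information (hypothetical fields)."""
    return {"contrast": 1.0 + 0.5 * section_char["dynamic_range"],
            "brightness": 0.5}

def frame_quality(section_setting, frame_char):
    """Second setting: refine the section-level setting using
    per-frame characteristic information."""
    setting = dict(section_setting)
    setting["brightness"] += 0.2 * (frame_char["mean_luma"] - 0.5)
    return setting

sec = section_quality({"dynamic_range": 0.8})
frm = frame_quality(sec, {"mean_luma": 0.3})
```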

IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD, AND IMAGE CAPTURING APPARATUS
20210344846 · 2021-11-04 ·

An image processing apparatus comprises: an acquisition unit that acquires a first image obtained through shooting and distance information of the first image; a detection unit that detects a main subject from the first image; an extraction unit that extracts another subject from the first image based on the distance information of the main subject; a setting unit that sets parameters of one or more virtual light sources that emit virtual light to the main subject and the extracted other subject; and a processing unit that generates from the first image a second image in which the main subject and the other subject are illuminated with the virtual light using the parameters set by the setting unit.
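As a toy model of the relighting stage, the sketch below boosts pixel values by a virtual light whose strength falls off with the depth gap between a pixel and the light; the falloff formula and all names are assumptions, not the apparatus's actual processing.

```python
def relight(pixels, depths, light_depth, intensity):
    """Brighten single-channel pixels with a virtual light placed at
    light_depth; gain decays quadratically with the depth gap."""
    out = []
    for p, d in zip(pixels, depths):
        gain = intensity / (1.0 + (d - light_depth) ** 2)
        out.append(min(255, int(p + 255 * gain)))  # clamp to 8-bit range
    return out

# main subject at depth 1.0, another subject at 2.0, background at 5.0
lit = relight([100, 100, 100], [1.0, 2.0, 5.0], light_depth=1.0, intensity=0.2)
```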

Image and text typesetting method and related apparatus thereof

A method includes determining a first importance measurement value of a pixel in an image, obtaining at least one text box area stacked on the image, obtaining, based on the first importance measurement value of a pixel in a background image corresponding to each text box area, a second importance measurement value of the background image corresponding to each text box area, obtaining an importance measurement value gravity center or an importance measurement value mass center of the image based on the first importance measurement value, determining, based on a preset principle and a location relationship between the importance measurement value gravity center and a central area of the image or a location relationship between the importance measurement value mass center and the central area of the image, information about a balance degree value of each text box area relative to the image, and selecting, from the at least one text box area, one text box area to typeset a word.
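The importance mass center named above is a standard center-of-mass computation over a per-pixel importance map; the sketch below shows only that step, with an invented 3x3 map.

```python
def importance_center(imp):
    """Center of mass (x, y) of a per-pixel importance map,
    given as rows of non-negative floats."""
    total = cx = cy = 0.0
    for y, row in enumerate(imp):
        for x, v in enumerate(row):
            total += v
            cx += x * v
            cy += y * v
    return (cx / total, cy / total)

# all importance concentrated at column 2, row 1
imp = [[0, 0, 0],
       [0, 0, 4],
       [0, 0, 0]]
center = importance_center(imp)
```

The offset of this center from the image's central area is what the method would compare against when scoring the balance of each candidate text box.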

Ambient light suppression

According to an aspect, there is provided a system (200) comprising: an image projection unit (210) configured to project an illumination pattern onto at least a portion of a scene, an imaging unit (220) configured to capture a plurality of images of the scene while the illumination pattern is projected onto the scene, and a processing unit (230) configured to: demodulate the plurality of images based on the illumination pattern and with respect to a target section in the plurality of captured images, wherein the target section corresponds to one of: a portion of the scene on which the illumination pattern is selectively projected while the plurality of images were captured, a portion of the scene at which the projected illumination pattern is resolvable, a portion of the scene with pixel depth values which satisfy a predetermined range; and generate an ambient light suppressed image of the scene based on results of the demodulation.
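The demodulation step can be illustrated with a lock-in-style sketch: correlating each pixel's time series with a zero-mean version of the known illumination carrier cancels any constant ambient component. The carrier shape and frame layout here are assumptions for demonstration.

```python
def demodulate(frames, carrier):
    """Correlate each pixel's values over time with the zero-mean
    illumination carrier; a constant (ambient) term contributes nothing."""
    n = len(frames)
    h, w = len(frames[0]), len(frames[0][0])
    mean_c = sum(carrier) / n
    out = [[0.0] * w for _ in range(h)]
    for t, frame in enumerate(frames):
        c = carrier[t] - mean_c  # zero-mean carrier sample
        for y in range(h):
            for x in range(w):
                out[y][x] += c * frame[y][x]
    return out

# 1x1 scene: ambient level 10 plus a projected component of amplitude 5
carrier = [1, 0, 1, 0]
frames = [[[10 + 5 * c]] for c in carrier]
img = demodulate(frames, carrier)
```

The recovered value reflects only the modulated (projected) light; the ambient offset of 10 is suppressed.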

Dynamically generating and changing view-specific-filter parameters for 360-degree videos

This disclosure relates to methods, non-transitory computer readable media, and systems that generate and dynamically change filter parameters for a frame of a 360-degree video based on detecting a field of view from a computing device. As a computing device rotates or otherwise changes orientation, for instance, the disclosed systems can detect a field of view and interpolate one or more filter parameters corresponding to nearby spatial keyframes of the 360-degree video to generate view-specific-filter parameters. By generating and storing filter parameters for spatial keyframes corresponding to different times and different view directions, the disclosed systems can dynamically adjust color grading or other visual effects using interpolated, view-specific-filter parameters to render a filtered version of the 360-degree video.

Lighting, color vector, and virtual background correction during a video conference session

An information handling system executing a multimedia multi-user collaboration application (MMCA) may include a memory; a power management unit; a camera to capture video of a user participating in a video conference session; and a processor configured to execute code instructions of a trained intelligent collaboration contextual session management system (ICCSMS) neural network to receive as inputs: the type of AV processing instruction modules enabled, descriptive of how to visually transform a video frame during a video conference session executed by the MMCA; and sensor data from a plurality of sensors, including an ambient light sensor to detect ambient light around a participant of the video conference session and a color sensor to detect color vectors in the video frame. The processor applies AV processing instruction adjustments to the enabled AV processing instruction modules, received as output from the trained ICCSMS machine learning module, to adjust the lighting and color vectors of the video frame based on the sensor inputs and the type of AV processing instruction modules.

Removing moving objects from a video scene captured by a moving camera
11436708 · 2022-09-06 ·

Methods, an apparatus, and software media are provided for removing unwanted information such as moving or temporary foreground objects from a video sequence. The method performs, for each pixel, a statistical analysis to create a background data model whose color values can be used to detect and remove the unwanted information. The method assumes that for each pixel the background is present in a majority of the frames. The camera that records the video sequence may move relative to the geometry of the video scene. A pixel in a first frame is matched to a location in the geometry. The method determines color values of pixels, matched to the location in the geometry, in successive frames and clusters color values to determine a background color value range. It may use quadratic or better interpolation and extrapolation to determine background color values for unavailable frames.
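The per-pixel statistical analysis described above can be sketched with a simple one-dimensional color clustering; under the abstract's own assumption that the background appears in a majority of frames, the largest cluster's mean is taken as the background value. The tolerance and sample values are invented for illustration.

```python
def background_color(samples, tol=10):
    """Cluster one pixel's values across frames by a simple distance
    threshold and return the mean of the largest (background) cluster."""
    clusters = []  # each entry: [representative value, member list]
    for v in samples:
        for cl in clusters:
            if abs(v - cl[0]) <= tol:
                cl[1].append(v)
                break
        else:
            clusters.append([v, [v]])
    members = max(clusters, key=lambda cl: len(cl[1]))[1]
    return sum(members) / len(members)

# one pixel tracked across 7 frames; moving objects cover it in 2 of them
bg = background_color([120, 122, 240, 119, 121, 60, 120])
```

In the patented method this runs after each pixel has been matched through the scene geometry to compensate for camera motion, so the samples compared really belong to the same scene location.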

WINDOWS MANAGEMENT IN A TELEVISION ENVIRONMENT

Media content is received in a windows management application. The media content is drawn from a set of content that includes zero or more items of television signal content and zero or more items of application content. The media content is incorporated into a television signal containing a window configuration. The television signal is then sent from the windows management application to a television, where it is displayed.