H04N21/45455

CONTROLLING A DISPLAY TO PROVIDE A USER INTERFACE
20220255982 · 2022-08-11

Visual content to be displayed on a display of a user device is received. Obfuscation data for obscuring the visual content is generated and the obscured visual content is displayed on the display. A request to stop obscuring the visual content is transmitted from the user device to a remote device, in response to a drag gesture detected at the user device. As the drag gesture is performed, the obfuscation data is modified to reduce a level of obfuscation applied to the visual content before the request has been accepted, so that the visual content remains obscured but with a lower level of obfuscation. If the request is subsequently accepted at the remote device, the display is controlled to stop obscuring the visual content, thereby rendering the visual content fully visible on the display.
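The drag-driven reduction of obfuscation described in this abstract can be sketched as a small state machine: the level drops with drag progress but never below a floor while the remote request is pending, and only an acceptance clears it entirely. The class and constant names below are illustrative assumptions, not from the patent.

```python
class ObfuscationController:
    """Reduces obfuscation as a drag gesture progresses, but keeps the
    content obscured until the remote device accepts the request."""

    MAX_LEVEL = 1.0   # fully obscured
    MIN_LEVEL = 0.3   # lowest level while the request is still pending (assumed floor)

    def __init__(self):
        self.level = self.MAX_LEVEL
        self.request_accepted = False

    def on_drag(self, progress: float) -> float:
        """progress in [0, 1]; interpolate toward MIN_LEVEL, never below it."""
        progress = max(0.0, min(1.0, progress))
        if not self.request_accepted:
            self.level = self.MAX_LEVEL - progress * (self.MAX_LEVEL - self.MIN_LEVEL)
        return self.level

    def on_request_accepted(self) -> float:
        """Remote device accepted: stop obscuring, content fully visible."""
        self.request_accepted = True
        self.level = 0.0
        return self.level
```

Keeping a nonzero floor until acceptance is what makes the content "remain obscured but with a lower level of obfuscation" during the gesture.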

SENSITIVITY ASSESSMENT FOR MEDIA PRODUCTION USING ARTIFICIAL INTELLIGENCE

A method for automatically flagging sensitive portions of a digital dataset for media production includes receiving the digital dataset comprising at least one of audio data, video data, or audio-video data for producing at least one media program. A processor identifies sensitive portions of the digital dataset likely to be in one or more defined content classifications, based at least in part on comparing unclassified portions of the digital dataset with classified portions of prior media productions using an algorithm, and generates a plurality of sensitivity tags each signifying a sensitivity assessment for a corresponding one of the sensitive portions. The processor may save the plurality of sensitivity tags, each correlated to its corresponding one of the sensitive portions, in a computer memory for use by a media production or localization team.
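One way to read "comparing unclassified portions with classified portions" is a similarity search against previously classified segments. The sketch below is an assumption-laden stand-in: cosine similarity over hypothetical feature vectors replaces whatever algorithm the patent actually uses.

```python
def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def tag_sensitive_portions(unclassified, classified, threshold=0.8):
    """unclassified: list of (portion_index, feature_vector).
    classified: list of (feature_vector, content_class) from prior productions.
    Returns sensitivity tags for portions that match a defined class."""
    tags = []
    for idx, feats in unclassified:
        best_class, best_sim = None, 0.0
        for ref_feats, content_class in classified:
            sim = cosine(feats, ref_feats)
            if sim > best_sim:
                best_class, best_sim = content_class, sim
        if best_sim >= threshold:
            tags.append({"portion": idx, "class": best_class,
                         "score": round(best_sim, 3)})
    return tags
```

Each tag correlates a portion index to a classification and a score, matching the abstract's notion of a sensitivity tag saved per portion.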

Filtering images of live stream content

A method of filtering images of live stream content may include defining a prohibited frame content template; analyzing live stream content at a frame level to determine content within each frame of the live stream content; and comparing a frame of the live stream content against the prohibited frame content template to detect prohibited content in the frame that matches prohibited frame content as defined by the prohibited frame content template.
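The claim leaves the matching technique open; a minimal sketch, assuming the prohibited frame content template is a set of perceptual hashes and frames are 2D grayscale arrays, might look like this:

```python
def average_hash(frame):
    """frame: 2D list of grayscale pixels; returns a bit string where each
    bit marks whether a pixel is at or above the frame's mean brightness."""
    flat = [p for row in frame for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p >= mean else "0" for p in flat)

def matches_template(frame, prohibited_hashes, max_distance=2):
    """Compare a frame's hash against each template hash by Hamming distance."""
    h = average_hash(frame)
    for ph in prohibited_hashes:
        distance = sum(a != b for a, b in zip(h, ph))
        if distance <= max_distance:
            return True
    return False

def filter_stream(frames, prohibited_hashes):
    """Analyze the stream at the frame level: yield (index, prohibited?)."""
    return [(i, matches_template(f, prohibited_hashes)) for i, f in enumerate(frames)]
```

In practice frames would be downsampled before hashing; the tiny frames here keep the example self-contained.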

MASKING IN VIDEO STREAM
20220101883 · 2022-03-31

Methods and devices for combining a mask with a selectively progressing video stream may include receiving a selection of at least one mask with a mask zone that obscures at least a portion of the video stream. The methods and devices may include receiving a selection to emplace the at least one mask at a first location within the video stream. The methods and devices may include receiving a selection to enable a tracking icon to move the at least one mask to a second location within the video stream while the video stream progresses. The methods and devices may include generating a combined output of the video stream and the selective emplacement and movement of the at least one mask during the video stream progression.
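The emplacement and tracking described above can be sketched as a mask whose position is updated per frame from a path supplied by the tracking icon. The data structures are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Mask:
    width: int
    height: int
    x: int  # current top-left position of the mask zone within the frame
    y: int

def apply_mask(frame, mask, fill=0):
    """Obscure the mask zone in a 2D frame (list of pixel rows)."""
    for r in range(mask.y, min(mask.y + mask.height, len(frame))):
        for c in range(mask.x, min(mask.x + mask.width, len(frame[0]))):
            frame[r][c] = fill
    return frame

def combine_output(frames, mask, path):
    """Generate the combined output: path gives the per-frame (x, y)
    positions as the tracking icon moves the mask while the stream progresses."""
    out = []
    for frame, (x, y) in zip(frames, path):
        mask.x, mask.y = x, y
        out.append(apply_mask([row[:] for row in frame], mask))
    return out
```

The first path entry corresponds to the "first location" and later entries to the movement toward the "second location" in the claim language.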

Controlling a display to provide a user interface
11277459 · 2022-03-15

Visual content to be displayed on a display of a user device is received. Obfuscation data for obscuring the visual content is generated and the obscured visual content is displayed on the display. A request to stop obscuring the visual content is transmitted from the user device to a remote device, in response to a drag gesture detected at the user device. As the drag gesture is performed, the obfuscation data is modified to reduce a level of obfuscation applied to the visual content before the request has been accepted, so that the visual content remains obscured but with a lower level of obfuscation. If the request is subsequently accepted at the remote device, the display is controlled to stop obscuring the visual content, thereby rendering the visual content fully visible on the display.

DIRECTING USER FOCUS IN 360 VIDEO CONSUMPTION

Aspects of the subject disclosure may include, for example, a method including obtaining media content and an identification of a plurality of points of interest in the media content, receiving a request from a user to view the media content, obtaining information about the user, identifying one or more highlight and/or degrade points based on the information about the user, modifying the media content to create highlighted content, the highlighted content being the media content modified to attract attention of the user to the highlight point and drive the attention of the user away from the degrade point, presenting the highlighted content to the user, and monitoring the user's consumption of the content. Other embodiments are disclosed.
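A simple reading of "identifying highlight and/or degrade points based on information about the user" is matching each point of interest against the user's profile; the rule below (brighten matches, dim the rest) is an assumption standing in for whatever modification the disclosure actually applies.

```python
def classify_points(points_of_interest, user_interests):
    """points_of_interest: list of (point_id, topic).
    Returns (highlight_ids, degrade_ids) based on the user's interests."""
    highlight, degrade = [], []
    for point_id, topic in points_of_interest:
        if topic in user_interests:
            highlight.append(point_id)
        else:
            degrade.append(point_id)
    return highlight, degrade

def modify_content(media, highlight, degrade):
    """media: dict point_id -> brightness. Attract attention to highlight
    points and drive attention away from degrade points (factors assumed)."""
    return {
        pid: level * (1.25 if pid in highlight else 0.5 if pid in degrade else 1.0)
        for pid, level in media.items()
    }
```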

Video privacy using machine learning

A method, system and computer program product for providing video privacy is provided herein. First video data captured by a video camera is received. A context for the first video data is determined. It is determined that the context matches a privacy context from a set of privacy contexts identified using machine learning. In response to the context matching the privacy context, at least a portion of second video data is blocked that is captured by the video camera subsequent to the first video data.
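A minimal sketch, assuming the machine-learned privacy contexts are available as centroids in some feature space; a nearest-centroid test stands in for the learned classifier, and blocking is modeled as replacing subsequent frames with `None`.

```python
def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def matches_privacy_context(context_features, privacy_contexts, radius=1.0):
    """privacy_contexts: centroids identified offline via machine learning
    (assumption). A context matches if it falls within radius of any centroid."""
    return any(euclidean(context_features, c) <= radius for c in privacy_contexts)

def process_stream(first_features, privacy_contexts, subsequent_frames):
    """Determine the context of the first video data; if it matches a
    privacy context, block the video data captured subsequently."""
    if matches_privacy_context(first_features, privacy_contexts):
        return [None for _ in subsequent_frames]  # blocked
    return subsequent_frames
```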

Blurring privacy masks
11240510 · 2022-02-01

Methods and apparatus, including computer program products, implementing and using techniques for encoding, by an encoder, a video sequence comprising a plurality of image frames are described. An image frame is received from a video stream. An input is received, which indicates one or more regions in the received image frame for which a privacy mask should be applied. The one or more regions are represented by one or more coding units. The image frame is encoded into an output frame, wherein image data in the one or more regions is replaced by intra-predicted coding units with transform coefficients set to zero, the intra-predicted coding units being obtained from a prediction stage in the encoder.
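The core encoding decision can be sketched as a per-coding-unit mode map: units overlapping a masked region are emitted as intra-predicted units whose transform coefficients are forced to zero, so the original pixel data in those regions never reaches the bitstream. The 16-pixel coding-unit size and the mode labels are illustrative assumptions.

```python
CU = 16  # coding-unit size in pixels (assumed; real encoders vary this)

def encode_frame(width, height, mask_regions):
    """mask_regions: list of (x, y, w, h) rectangles in pixels.
    Returns a grid of per-CU modes: 'inter' or 'intra_zero'."""
    cols, rows = width // CU, height // CU
    grid = [["inter"] * cols for _ in range(rows)]
    for (mx, my, mw, mh) in mask_regions:
        # cover every CU the rectangle touches (ceiling division on the far edge)
        for r in range(my // CU, -(-(my + mh) // CU)):
            for c in range(mx // CU, -(-(mx + mw) // CU)):
                if 0 <= r < rows and 0 <= c < cols:
                    # Intra prediction with all coefficients zeroed: the
                    # decoder reconstructs the block purely from neighboring
                    # pixels, producing the characteristic blurred mask.
                    grid[r][c] = "intra_zero"
    return grid
```

Because the zero-coefficient intra units are predicted from surrounding (unmasked) pixels, the masked area decodes to a smeared extrapolation of its neighborhood rather than the original content.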

Information processing device

A captured-image obtaining section obtains a captured image from a camera. A face recognizing section detects the face images of a plurality of users in the captured image. A display user determining section has a function of determining a user to be included in a display image. When an instruction receiving section receives a changing instruction, the display user determining section changes the user included in the display image. A face image clipping section clips a region including the face image of the determined user from the captured image. A display image generating section generates the display image including the clipped region.
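The pipeline of sections described above can be sketched in a few functions; face detection is stubbed out with precomputed bounding boxes, which is an assumption, as are all names below.

```python
def determine_display_user(detected_faces, current_index, change_instruction):
    """Display user determining section: cycle to the next detected user
    when a changing instruction is received."""
    if change_instruction:
        return (current_index + 1) % len(detected_faces)
    return current_index

def clip_face_region(image, box, margin=1):
    """Face image clipping section: image is a 2D list of pixels,
    box is an (x, y, w, h) rectangle around the detected face."""
    x, y, w, h = box
    top, left = max(0, y - margin), max(0, x - margin)
    bottom = min(len(image), y + h + margin)
    right = min(len(image[0]), x + w + margin)
    return [row[left:right] for row in image[top:bottom]]
```

A display image generating section would then composite the clipped region into the output frame.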