Patent classifications
H04N21/45455
Method for sharing a digital image between a first user terminal and at least one second user terminal over a communications network
A method for sharing a digital image between a first user terminal and at least one second user terminal over a communications network. The method includes: displaying on the second terminal a “degraded image”, the degraded image being obtained from a “first image” by digital processing that visually conceals the content of the first image; and, following detection of an interaction of a user with the screen of the second terminal: defining a zone of interaction of the degraded image depending on the location of the interaction on the screen; obtaining the portion of the first image corresponding to the defined zone of the degraded image; and displaying on the screen of the second terminal that portion of the first image in place of the corresponding area of the degraded image, for the duration of the user's interaction with the screen.
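The touch-to-reveal mechanism can be sketched roughly as follows. This is a minimal illustration, not the patent's implementation: images are plain 2-D lists of pixel values, `degrade` stands in for whatever concealing transform is used, and `reveal_zone` is a hypothetical helper that restores the original pixels inside the interaction zone.

```python
def degrade(image):
    """Placeholder digital processing that conceals content (here: zero every pixel)."""
    return [[0 for _ in row] for row in image]

def reveal_zone(degraded, original, x, y, radius):
    """Copy the original pixels inside the interaction zone back into the degraded view.

    (x, y) is the touch location; radius defines the square zone of interaction.
    """
    shown = [row[:] for row in degraded]
    for i in range(max(0, y - radius), min(len(original), y + radius + 1)):
        for j in range(max(0, x - radius), min(len(original[0]), x + radius + 1)):
            shown[i][j] = original[i][j]
    return shown
```

On touch-release, the app would simply redisplay the degraded image, so the original content is visible only for the duration of the interaction.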
PRIVACY-PRESERVING VIDEO ANALYTICS
Generally discussed herein are devices, systems, and methods for privacy-preserving video. A method can include identifying which classes of objects are present in video data, for each class of the classes identified in the video data, generating respective video streams that include objects of the class and exclude objects not of the class, and providing each of the respective video streams to a content distribution network.
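The per-class stream splitting can be illustrated with a small sketch. Assumptions not in the abstract: detections arrive as `(class_name, detection)` tuples per frame, and "excluding" an object simply means omitting it from that class's stream.

```python
def split_streams(frames):
    """Split detections into one stream per object class.

    frames: list of frames, each a list of (class_name, detection) tuples.
    Returns {class_name: per-frame lists containing only that class's objects}.
    """
    classes = {cls for frame in frames for cls, _ in frame}
    return {
        cls: [[det for c, det in frame if c == cls] for frame in frames]
        for cls in classes
    }
```

Each resulting stream could then be published to the content distribution network under its own access policy, which is what makes the split privacy-preserving.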
Video zoom controls based on received information
In some examples, information is sensed by an optical sensor responsive to light from a marker arranged to indicate a boundary of a physical user collaborative area to receive user-input marks during a video conference session, where the marker is distinct from the physical user collaborative area. Based on the received information, the boundary of the physical user collaborative area is determined. Based on the determined boundary, a video zoom into the physical user collaborative area during the video conference session is controlled.
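The boundary determination and zoom step can be sketched as below. This assumes the optical sensor yields marker points as `(x, y)` pixel coordinates and that the "zoom" is a simple crop to the determined boundary; both are simplifications of the abstract.

```python
def boundary_from_markers(marker_points):
    """Axis-aligned boundary of the collaborative area from sensed marker points."""
    xs = [x for x, _ in marker_points]
    ys = [y for _, y in marker_points]
    return (min(xs), min(ys), max(xs), max(ys))

def zoom_crop(frame, boundary):
    """Crop a frame (2-D list of pixels) to the determined boundary."""
    x0, y0, x1, y1 = boundary
    return [row[x0:x1 + 1] for row in frame[y0:y1 + 1]]
```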
RENDERING IMAGE CONTENT AS TIME-SPACED FRAMES
Methods, systems, and apparatus, including a media player and computer-readable mediums, are described for rendering media content at a frame rate that is safe to a user. A system, or its media player, obtains media content that includes video content having multiple frames. The system determines a frame rate representing a rate for sequentially displaying the frames to the user when the media player plays the media content. Image content of each frame is scanned and data describing different photosensitivity thresholds is obtained. Based on a photosensitivity of the user, the system determines that one or more frames in a portion of the media content include image content that is unsafe to the user when the media player plays the media content. The system selectively decreases a frame playback rate for the portion of the media content as a function of an input value that is selectable by the user.
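The selective slowdown can be sketched as a per-frame rate decision. The `flash_scores` metric and the names below are illustrative assumptions; the abstract only specifies that unsafe portions are played at a decreased rate controlled by a user-selectable input.

```python
def safe_frame_rate(base_rate, flash_scores, user_threshold, slowdown):
    """Return a playback rate per frame.

    Frames whose photosensitivity score exceeds the user's threshold are played
    at base_rate * slowdown, where slowdown in (0, 1] is user-selectable.
    """
    return [base_rate * slowdown if s > user_threshold else base_rate
            for s in flash_scores]
```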
CONTROLLING A DISPLAY TO PROVIDE A USER INTERFACE
Visual content to be displayed on a display of a user device is received. Obfuscation data for obscuring the visual content is generated and the obscured visual content is displayed on the display. A request to stop obscuring the visual content is transmitted from the user device to a remote device, in response to a drag gesture detected at the user device. As the drag gesture is performed, the obfuscation data is modified to reduce a level of obfuscation applied to the visual content before the request has been accepted, so that the visual content remains obscured but with a lower level of obfuscation. If the request is subsequently accepted at the remote device, the display is controlled to stop obscuring the visual content, thereby rendering the visual content fully visible on the display.
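The drag-driven obfuscation can be sketched as an interpolation with a floor. The `floor` value and linear interpolation are assumptions for illustration; the key behavior from the abstract is that dragging lowers the obfuscation level but never fully reveals the content until the remote device accepts the request.

```python
def obfuscation_level(drag_progress, accepted, full=1.0, floor=0.3):
    """Obfuscation level in [0, 1] as a drag gesture progresses.

    drag_progress in [0, 1]; while the un-obscure request is pending, the level
    decreases toward `floor` but never below it. Once accepted, it drops to 0.
    """
    if accepted:
        return 0.0
    level = full - drag_progress * (full - floor)
    return max(level, floor)
```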
Masking in video stream
Methods and devices for combining a mask with a selectively progressing video stream may include receiving a selection of at least one mask with a mask zone that obscures at least a portion of the video stream. The methods and devices may include receiving a selection to emplace the at least one mask at a first location within the video stream. The methods and devices may include receiving a selection to enable a tracking icon to move the at least one mask to a second location within the video stream while the video stream progresses. The methods and devices may include generating a combined output of the video stream and the selective emplacement and movement of the at least one mask during the video stream progression.
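The moving-mask compositing can be sketched as follows, treating frames as 2-D pixel grids and the tracking icon's output as one `(x, y)` position per frame. Both representations are assumptions for illustration.

```python
def apply_moving_mask(frames, positions, w, h):
    """Obscure a w-by-h mask zone at the tracked position in each frame.

    frames: list of 2-D pixel grids; positions: one (x, y) per frame,
    e.g. produced by the tracking icon. Masked pixels are set to 0.
    """
    out = []
    for frame, (x, y) in zip(frames, positions):
        f = [row[:] for row in frame]
        for i in range(y, min(len(f), y + h)):
            for j in range(x, min(len(f[0]), x + w)):
                f[i][j] = 0
        out.append(f)
    return out
```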
METHOD AND ELECTRONIC DEVICE FOR DISPLAYING BULLET SCREENS
A method for displaying bullet screens can include: acquiring detection boxes by detecting human-body parts in a plurality of image frames in a target video; and determining a masked region of each of the image frames based on the detection boxes in that image frame and the ratio of the area of those detection boxes to the area of the image frame to which they belong, wherein a client player does not display bullet screens in the masked region when playing the target video.
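A simplified version of the masked-region decision might look like the sketch below. The cutoff policy (skip masking when the boxes cover too much of the frame) and the `max_ratio` value are assumptions; the abstract only says the region depends on the boxes and their area ratio.

```python
def masked_region(boxes, frame_area, max_ratio=0.5):
    """Choose the region where bullet screens are hidden.

    boxes: (x, y, w, h) human-body detection boxes. If the boxes would cover
    more than max_ratio of the frame, mask nothing (hypothetical policy);
    otherwise the boxes themselves form the masked region.
    """
    area = sum(w * h for _, _, w, h in boxes)
    if area / frame_area > max_ratio:
        return []
    return boxes
```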
Automated cinematic decisions based on descriptive models
In one embodiment, a method includes accessing foreground visual data that comprises a set of coordinate points that correspond to a plurality of surface points of a person in an environment; generating a bounding box for the set of coordinate points, wherein the bounding box comprises every coordinate point in the set of coordinate points; providing instructions to collect background visual data for an area in the environment that is outside of the bounding box; and providing the foreground visual data and the background visual data to an intelligent director associated with the computing device.
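The bounding-box step can be sketched directly; `is_background` is a hypothetical helper for deciding where background visual data would be collected.

```python
def bounding_box(points):
    """Axis-aligned box containing every (x, y) coordinate point."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (min(xs), min(ys), max(xs), max(ys))

def is_background(point, box):
    """True if a point lies outside the bounding box, i.e. in the area
    where background visual data would be collected."""
    x, y = point
    x0, y0, x1, y1 = box
    return not (x0 <= x <= x1 and y0 <= y <= y1)
```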
Method for generating video mask information, method for preventing occlusion from barrage, server and client
Some embodiments of the present disclosure provide a method for generating video mask information, a method for preventing occlusion from barrage, a server and a client. The method for generating video mask information comprises: identifying, for any video frame in the video data to be parsed, an area where a target object is located in the video frame; selecting a plurality of geometric figures to fit the area where the target object is located, so that the combination of the plurality of geometric figures covers that area; and generating the mask information of the video frame according to the fitting parameters of the plurality of geometric figures, and sending the mask information and the data of the video frame to a client synchronously. Some embodiments of the present disclosure avoid occlusion of the video image by barrage information while retaining the vast majority of the barrage information and reducing the bandwidth consumption of the client.
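The geometric fitting can be sketched with the crudest possible figure set: fixed-size squares on a grid. This is a stand-in for whatever figures and fitting the disclosure actually uses; the point is that the compact list of figure parameters, not a per-pixel mask, is what gets sent to the client, which is where the bandwidth saving comes from.

```python
def fit_rectangles(mask, cell=2):
    """Cover a target-object area with cell-by-cell squares.

    mask: 2-D 0/1 grid marking the target object's area. Returns the squares'
    parameters (x, y, w, h) as the per-frame mask information.
    """
    rects = []
    for y in range(0, len(mask), cell):
        for x in range(0, len(mask[0]), cell):
            if any(mask[i][j]
                   for i in range(y, min(len(mask), y + cell))
                   for j in range(x, min(len(mask[0]), x + cell))):
                rects.append((x, y, cell, cell))
    return rects
```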