Patent classifications
H04N19/40
Virtual file system for cloud-based shared content
A server in a cloud-based environment interfaces with storage devices that store shared content accessible by two or more users. Individual items within the shared content are associated with respective object metadata that is also stored in the cloud-based environment. Download requests initiate downloads of instances of a virtual file system module to two or more user devices associated with two or more users. The downloaded virtual file system modules capture local metadata that pertains to local object operations directed by the users over the shared content. Changed object metadata attributes are delivered to the server and to other user devices that are accessing the shared content. Peer-to-peer connections can be established between the two or more user devices. Objects can be divided into smaller portions such that processing the individual smaller portions of a larger object reduces the likelihood of a conflict between user operations over the shared content.
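The conflict-reduction idea in the last sentence can be sketched as follows. This is a hypothetical illustration, not the patented implementation: the chunk size, the chunk-index representation of an operation, and the conflict rule are all assumptions made for the example.

```python
# Hypothetical sketch: splitting a shared object into fixed-size chunks so
# that concurrent user operations touching different chunks do not conflict.
CHUNK_SIZE = 4  # bytes per chunk; a real system would use far larger chunks

def split_into_chunks(data: bytes, chunk_size: int = CHUNK_SIZE):
    """Divide an object into smaller portions (chunks)."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def conflicting(op_a: set, op_b: set) -> bool:
    """Two operations conflict only if they touch a common chunk index."""
    return bool(op_a & op_b)

obj = b"hello shared world!"
chunks = split_into_chunks(obj)        # 5 chunks for a 19-byte object
# User A edits chunk 0 while user B edits chunk 3: no conflict arises,
# whereas whole-object locking would have serialized the two edits.
print(conflicting({0}, {3}))           # False
print(conflicting({0, 1}, {1}))        # True
```

Smaller chunks shrink the window in which two users' edits can collide, at the cost of tracking more per-chunk metadata.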
Systems and methods for providing transcoded portions of a video
Multiple videos having individual time durations may be obtained, including a first video with a first time duration. The videos may include visual information defined by one or more electronic media files. An initial portion of the first time duration over which the one or more electronic media files are to be transcoded may be determined. This includes determining whether the first time duration is greater than a predefined threshold and, if so, determining the initial portion to be an initial time duration that is less than the first time duration. One or more transcoded media files may be generated during the initial portion. A request for the first video may be received from a client computing platform. In response to receipt of the request, the one or more transcoded media files may be transmitted to the client computing platform for display.
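The threshold logic above can be captured in a few lines. The specific values for the predefined threshold and the initial time duration are assumptions for illustration; the abstract does not give them.

```python
THRESHOLD = 60.0        # seconds; assumed value of the predefined threshold
INITIAL_PORTION = 30.0  # seconds; assumed initial time duration (< THRESHOLD)

def initial_transcode_portion(duration: float) -> float:
    """Return the span of the video to transcode up front.

    If the video's duration exceeds the predefined threshold, only an
    initial portion shorter than the full duration is transcoded;
    otherwise the whole video is transcoded.
    """
    if duration > THRESHOLD:
        return INITIAL_PORTION
    return duration

print(initial_transcode_portion(120.0))  # 30.0 (long video: partial transcode)
print(initial_transcode_portion(45.0))   # 45.0 (short video: full transcode)
```

Transcoding only an initial portion lets playback of a long video start immediately while the remainder is transcoded on demand.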
Image coding method on basis of non-separable secondary transform and device therefor
An image decoding method performed by means of a decoding device according to the present disclosure comprises the steps of: deriving transform coefficients of a target block from a bitstream; deriving a non-separable secondary transform (NSST) index with respect to the target block; performing inverse transform with respect to the transform coefficients of the target block on the basis of the NSST index and thus deriving residual samples of the target block; and generating a reconstructed picture on the basis of the residual samples.
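The decoding steps above can be sketched numerically. This is a minimal illustration of a non-separable secondary transform, not the codec's actual kernels: the candidate transform set, the 4x4 sub-block size, and the convention that index 0 means "no secondary transform" are assumptions for the example.

```python
import numpy as np

# Assumed candidate set: orthonormal 16x16 kernels for a 4x4 coefficient
# sub-block (QR of a random matrix yields an orthonormal Q).
rng = np.random.default_rng(0)
NSST_KERNELS = [np.linalg.qr(rng.standard_normal((16, 16)))[0] for _ in range(3)]

def inverse_nsst(coeffs_4x4: np.ndarray, nsst_idx: int) -> np.ndarray:
    """Apply the inverse secondary transform selected by the NSST index.

    'Non-separable' means the 4x4 block is flattened to a 16-vector and
    multiplied by a single matrix, rather than transforming rows and
    columns separately. Index 0 is taken to mean no secondary transform.
    """
    if nsst_idx == 0:
        return coeffs_4x4
    kernel = NSST_KERNELS[nsst_idx - 1]
    vec = coeffs_4x4.reshape(16)
    # Inverse of an orthonormal kernel is its transpose.
    return (kernel.T @ vec).reshape(4, 4)
```

After this inverse secondary transform, the decoder would apply the primary inverse transform to obtain the residual samples used to reconstruct the picture.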
Method for transcoding video and related electronic device
Embodiments of the present disclosure provide a method for transcoding a video. An input attribute of a video is obtained and a target attribute is obtained. A segment transcoding speed of the video is determined based on the input attribute and the target attribute. The segment transcoding speed indicates a transcoding speed of a video segment. The number of video segments of the video is determined based on a preset target transcoding speed and the segment transcoding speed. The video is segmented based on a video length of the video and the number of video segments to obtain the video segments. The video segments are transcoded based on the segment transcoding speed.
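The segment-count computation can be sketched under one assumption the abstract implies but does not state: segments transcoded in parallel multiply effective throughput, so the number of segments is roughly the ratio of the target transcoding speed to the per-segment speed.

```python
import math

def num_segments(target_speed: float, segment_speed: float) -> int:
    """Number of video segments needed so that parallel per-segment
    transcoding reaches the preset target speed (assumed model)."""
    return max(1, math.ceil(target_speed / segment_speed))

def segment_bounds(video_length: float, n: int):
    """Split the video length into n roughly equal (start, end) ranges."""
    step = video_length / n
    return [(i * step, (i + 1) * step) for i in range(n)]

n = num_segments(target_speed=8.0, segment_speed=2.0)
print(n)                          # 4
print(segment_bounds(120.0, n))   # four 30-second segments
```

Each resulting segment would then be transcoded at the segment transcoding speed determined from the input and target attributes.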
Receiving compressed video frames in a video conference
One disclosed example method includes receiving, by a video conference provider, video frames from a plurality of existing participants in a video conference; receiving, by the video conference provider, a request from a new user to join the video conference, and in response: generating, by the video conference provider, an instantaneous decoder refresh (IDR) frame; determining, by the video conference provider, one or more prior video frames previously acknowledged by each existing participant of the plurality of existing participants; generating, by the video conference provider, a benchmark frame for each of the plurality of existing participants based on at least one of the determined one or more prior video frames and the IDR frame; transmitting, by the video conference provider, the IDR frame to the new user; and transmitting, by the video conference provider, a message comprising the benchmark frame to each of the plurality of existing participants.
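The join flow above can be sketched as a small data-flow function. The string-valued frame identifiers and the `benchmark(...)` combination rule are hypothetical stand-ins for the actual frame data; only the overall flow (IDR to the new user, a per-participant benchmark frame built from that participant's last-acknowledged frame and the IDR) follows the abstract.

```python
# Hypothetical sketch of the video-conference join flow.
def handle_join(acked_frames: dict, idr_frame: str):
    """acked_frames maps each existing participant's id to the prior
    video frame that participant last acknowledged.

    Returns the frame sent to the new user (the IDR) and a per-participant
    message carrying a benchmark frame derived from that participant's
    acknowledged frame and the IDR.
    """
    benchmarks = {
        pid: f"benchmark({prior},{idr_frame})"  # combine prior frame + IDR
        for pid, prior in acked_frames.items()
    }
    return idr_frame, benchmarks

new_user_frame, msgs = handle_join({"alice": "f41", "bob": "f39"}, "idr42")
print(new_user_frame)  # idr42
print(msgs["bob"])     # benchmark(f39,idr42)
```

Basing each benchmark frame on frames a participant has already acknowledged lets existing participants resynchronize without every sender restarting from a full IDR.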
PERCEPTUAL LUMINANCE NONLINEARITY-BASED IMAGE DATA EXCHANGE ACROSS DIFFERENT DISPLAY CAPABILITIES
A handheld imaging device has a data receiver that is configured to receive reference encoded image data. The data includes reference code values, which are encoded by an external coding system. The reference code values represent reference gray levels, which are selected using a reference grayscale display function that is based on the perceptual non-linearity of human vision adapted, at different light levels, to spatial frequencies. The imaging device also has a data converter that is configured to access a code mapping between the reference code values and device-specific code values of the imaging device. The device-specific code values are configured to produce gray levels that are specific to the imaging device. Based on the code mapping, the data converter is configured to transcode the reference encoded image data into device-specific image data, which is encoded with the device-specific code values.
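The code-mapping transcode can be sketched as a nearest-gray-level lookup table. The tiny code spaces and the nearest-level matching rule are assumptions for illustration; a real device would derive its mapping from the reference grayscale display function and its own display characteristics.

```python
# Minimal sketch: map each reference code value to the device-specific
# code whose gray level best approximates the reference gray level.
def build_code_mapping(ref_levels, device_levels):
    """ref_levels[c] / device_levels[d] give the gray level produced by
    reference code c / device code d (normalized luminance, assumed)."""
    mapping = {}
    for ref_code, ref_gray in enumerate(ref_levels):
        mapping[ref_code] = min(
            range(len(device_levels)),
            key=lambda d: abs(device_levels[d] - ref_gray),
        )
    return mapping

def transcode(ref_image, mapping):
    """Re-encode reference code values as device-specific code values."""
    return [mapping[c] for c in ref_image]

ref = [0.0, 0.25, 0.5, 0.75, 1.0]   # gray level per reference code (assumed)
dev = [0.0, 0.5, 1.0]               # gray level per device code (assumed)
m = build_code_mapping(ref, dev)
print(transcode([0, 1, 2, 3, 4], m))  # [0, 0, 1, 1, 2]
```

Because the mapping targets gray levels rather than raw code values, the same reference encoding can be exchanged across displays with very different capabilities.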
PROGRAM, DEVICE, AND METHOD FOR GENERATING SIGNIFICANT VIDEO STREAM FROM ORIGINAL VIDEO STREAM
A program for generating a significant video stream causes a computer to function as coding parameter extraction means for extracting a coding parameter of each macroblock for each frame from an original video stream, macroblock selection means for selecting a significant macroblock whose coding parameter satisfies a predetermined condition, and significant video stream generation means for generating a significant video stream in which the frames of the original video stream that are temporally synchronized with the significant macroblocks' coding parameters are combined in time series.
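The three means above can be sketched as a single selection pass. The choice of coding parameter (here a per-macroblock magnitude such as a motion-vector length) and the threshold condition are assumptions for the example; the abstract only requires some predetermined condition on the extracted parameter.

```python
# Hypothetical sketch: keep only frames that contain at least one
# "significant" macroblock, i.e. one whose coding parameter satisfies
# the predetermined condition (here: parameter >= threshold).
def significant_frames(stream, threshold=8.0):
    """stream: list of frames; each frame is a list of per-macroblock
    coding parameters extracted from the original video stream.

    Returns time-ordered indices of significant frames, which would be
    combined in time series to form the significant video stream.
    """
    return [
        i for i, macroblocks in enumerate(stream)
        if any(param >= threshold for param in macroblocks)
    ]

stream = [[1.0, 2.0], [9.5, 0.5], [3.0, 12.0], [4.0, 4.0]]
print(significant_frames(stream))  # [1, 2]
```

Because selection operates on coding parameters already present in the compressed stream, the significant stream can be produced without fully decoding the original video.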