Patent classifications
H04N21/21805
MULTI-CAMERA LIVE-STREAMING METHOD AND DEVICES
The embodiments disclose a method including capturing video footage of a youth sports event using at least one video camera running a mobile application; transmitting the captured game footage to at least one network server with internet and Wi-Fi connectivity; recording the captured game footage on at least one database coupled to the network server; processing and displaying the multi-camera footage, using at least one network computer coupled to the at least one network server, for a live video streaming game broadcast on a plurality of subscribed viewer digital devices; and mixing advertising into the processed broadcast using the at least one network computer.
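The claimed capture-record-broadcast-and-mix pipeline can be sketched as follows. This is a minimal illustrative stand-in, not the patented implementation; the `StreamServer` class and its methods are hypothetical names invented for this sketch.

```python
class StreamServer:
    """Toy stand-in for the network server, coupled database, and network computer."""

    def __init__(self):
        self.recorded = []  # stands in for the database coupled to the server

    def ingest(self, camera_id, footage):
        # Record every received camera segment before processing.
        self.recorded.append((camera_id, footage))

    def broadcast(self, ad):
        # Process all recorded camera feeds into one stream and mix in an ad.
        stream = [f"{cam}:{clip}" for cam, clip in self.recorded]
        stream.append(f"ad:{ad}")
        return stream

server = StreamServer()
server.ingest("cam1", "first-half")
server.ingest("cam2", "sideline")
stream = server.broadcast("sponsor-spot")
```

The ordering mirrors the claim: footage is recorded on arrival, and advertising is mixed in only at the processing/broadcast stage.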
INFORMATION PROCESSING SYSTEM AND INFORMATION PROCESSING METHOD
An information processing system for obtaining an audio content file for video data providing video content representing a sport event, including: a receiver configured to receive a data stream including the video data; a preference data obtainer configured to obtain preference data, wherein the preference data indicate a selected competitor participating in the sport event; a category identifier obtainer configured to obtain a category identifier from a machine learning algorithm into which the video data is input, wherein the machine learning algorithm is trained to classify a scene represented in the video content into a category of a predetermined set of categories associated with the sport event, wherein the category identifier indicates the category into which the scene is classified; an audio content file obtainer configured to obtain, based on the obtained category identifier and the obtained preference data, the audio content file from a prestored set of audio content files, wherein the audio content file provides audio content associated with the category of the scene and the preference data; and a synchronizer configured to synchronize the audio content and the video content for synchronized playback of the scene by a media player configured to play back the video content and the audio content file.
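The chain from scene classification to preference-based audio selection can be sketched as below. The classifier is stubbed with a trivial rule, and every name, category, and file path is an illustrative assumption rather than anything from the patent.

```python
def classify_scene(frames):
    # Stand-in for the trained machine learning classifier: maps a scene
    # to a category identifier from a predetermined set.
    return "goal" if "goal" in frames else "open_play"

# Prestored set of audio content files, keyed by (category, selected competitor).
AUDIO_FILES = {
    ("goal", "team_a"): "cheer_team_a.wav",
    ("goal", "team_b"): "cheer_team_b.wav",
    ("open_play", "team_a"): "ambient.wav",
    ("open_play", "team_b"): "ambient.wav",
}

def select_audio(frames, preferred_competitor):
    # Combine the obtained category identifier with the preference data
    # to look up the matching audio content file.
    category_id = classify_scene(frames)
    return AUDIO_FILES[(category_id, preferred_competitor)]

chosen = select_audio("goal scored in minute 12", "team_a")
```

The synchronizer step is omitted; in practice the selected file would be aligned to the scene's presentation timestamps before playback.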
Video Playing Method, Apparatus, and System, and Computer Storage Medium
This application discloses a video playing method, apparatus, and system, and a computer storage medium, belonging to the field of video processing technologies. In this application, after receiving a rotation fragment, the terminal decodes the rotation fragment, so that surround playing of a video picture can be implemented, and the resolution of the played video picture can be the same as the resolution of the video picture in the rotation fragment. This application is not limited by the quantity of cameras used for front-end shooting and can be widely applied.
CONSTRUCTION OF ENVIRONMENT VIEWS FROM SELECTIVELY DETERMINED ENVIRONMENT IMAGES
A computing system may include a client device and a server. The client device may be configured to access a stream of image frames that depict an environment, determine, from the stream of image frames, environment images that satisfy selection criteria, and transmit the environment images to the server. The server may be configured to receive the environment images from the client device, construct a spatial view of the environment based on position data included with the environment images, and navigate the spatial view, including by receiving a movement direction and progressing from a current environment image depicted for the spatial view to a next environment image based on the movement direction.
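The client/server split above can be sketched as follows: the client filters frames against a selection criterion, and the server orders the surviving images by position and steps through them in a movement direction. All names and the one-dimensional position model are assumptions made for illustration.

```python
def select_frames(frames, min_sharpness=0.5):
    # Client side: keep only image frames that satisfy the selection criterion.
    return [f for f in frames if f["sharpness"] >= min_sharpness]

class SpatialView:
    # Server side: order environment images by their position data and
    # progress through them according to a received movement direction.
    def __init__(self, images):
        self.images = sorted(images, key=lambda f: f["x"])
        self.index = 0

    def navigate(self, direction):
        # direction: +1 progresses forward along x, -1 backward.
        self.index = max(0, min(len(self.images) - 1, self.index + direction))
        return self.images[self.index]

frames = [
    {"x": 2.0, "sharpness": 0.9},
    {"x": 1.0, "sharpness": 0.2},   # rejected: fails the selection criterion
    {"x": 0.0, "sharpness": 0.8},
]
view = SpatialView(select_frames(frames))
```

Filtering on the client keeps transmission cost down, which is the point of the selective determination in the abstract.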
LOCALIZED DYNAMIC VIDEO STREAMING SYSTEM
A computerized system operable to provide multiple video streams of an event. In an ideal embodiment, the system provides live and dynamic streaming of an event such as a sporting event, concert, march, rally, and the like, to allow viewers to watch video of the event from nearly any angle and vantage point.
TRANSMITTING DEVICE AND RECEIVING DEVICE
A transmitting device (30, 30a) is configured to transmit, to a receiving device (40, 40a), a plurality of video signals captured from different positions, the plurality of video signals being grouped by a plurality of groups depending on imaging positions at which the video signals are captured. The transmitting device (30, 30a) comprises: a controller (32) configured to assign an ID for identifying each of the plurality of groups; and a communication interface (37) configured to transmit a video signal to which the ID is assigned, to the receiving device (40, 40a).
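Grouping signals by imaging position and assigning a per-group ID might look like the sketch below, which bins positions into cells. The cell-based grouping rule and all names are assumptions for illustration; the patent does not specify how positions map to groups.

```python
def assign_group_ids(signals, cell_size=10.0):
    # Derive a group ID from each imaging position so that signals captured
    # near each other fall into the same group; the ID would then be
    # transmitted along with each video signal.
    groups = {}
    for signal in signals:
        cell = (int(signal["x"] // cell_size), int(signal["y"] // cell_size))
        signal["group_id"] = groups.setdefault(cell, len(groups))
    return signals

tagged = assign_group_ids([
    {"x": 1.0, "y": 2.0},    # same cell as the next camera
    {"x": 4.0, "y": 7.0},
    {"x": 55.0, "y": 3.0},   # different cell, new group
])
```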
Method and system for media content production
A method, and a system configured to execute the method, are suggested, where the method captures media content associated with at least one object using a plurality of media capturing devices, each carried by a mobile communication device, together forming a mobile media device. The method comprises: controlling each of the mobile media devices according to a respective predefined role and role-specific rules for mobile media device movements while capturing media content, following the movement of the at least one determined object; acquiring sensor data, indicative of the mobile media device movements, from the mobile media devices; and updating the roles of the mobile media devices based on the acquired sensor data.
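The control loop of roles, rules, and sensor-driven role updates can be sketched as follows. The concrete roles, the speed-based reassignment rule, and all identifiers are invented for this sketch; the patent leaves the update policy open.

```python
ROLE_RULES = {
    # Role-specific movement rules (illustrative values only).
    "wide_shot": {"max_speed": 3.0},
    "close_up": {"max_speed": 1.0},
}

def update_roles(devices, sensor_data):
    # Illustrative update rule: the device whose sensors report the highest
    # movement speed takes the wide shot; the others take close-ups.
    fastest = max(sensor_data, key=sensor_data.get)
    for device in devices:
        device["role"] = "wide_shot" if device["id"] == fastest else "close_up"
    return devices

devices = [{"id": "drone1"}, {"id": "drone2"}]
updated = update_roles(devices, {"drone1": 0.4, "drone2": 2.7})
```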
ELECTRONIC DEVICE, SERVER AND METHODS FOR VIEWPORT PREDICTION BASED ON HEAD AND EYE GAZE
A method performed by an electronic device for requesting tiles relating to a viewport of an ongoing omnidirectional video stream is provided. The ongoing omnidirectional video stream is provided by a server to be displayed to a user of the electronic device. The electronic device predicts for an impending time period, a future head gaze of the user in relation to a current head gaze of the user, based on: A current head gaze relative to a position of shoulders of the user, a limitation of the head gaze of the user bounded by the shoulders position of the user, and a current eye gaze and eye movements of the user. The electronic device then sends a request to the server. The request requests tiles relating to the viewport for the impending time period, selected based on the predicted future head gaze of the user.
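The two steps of the abstract, predicting a future head gaze bounded by the shoulder orientation and then selecting viewport tiles for it, can be sketched in one dimension (yaw only). The specific clamp value, tile layout, and function names are assumptions for illustration.

```python
def predict_head_gaze(head_yaw, shoulder_yaw, eye_yaw_offset, max_turn=90.0):
    # The eyes lead the head, but the head cannot rotate more than
    # max_turn degrees away from the shoulder orientation (the limitation
    # of the head gaze bounded by the shoulders position).
    predicted = head_yaw + eye_yaw_offset
    return max(shoulder_yaw - max_turn, min(shoulder_yaw + max_turn, predicted))

def tiles_for_viewport(yaw, num_tiles=12, span=90.0):
    # Select the indices of equal-width yaw tiles whose centres fall
    # within the viewport centred on the predicted yaw.
    tile_width = 360.0 / num_tiles
    selected = set()
    for i in range(num_tiles):
        centre = i * tile_width + tile_width / 2
        diff = abs((centre - yaw + 180.0) % 360.0 - 180.0)
        if diff <= span / 2:
            selected.add(i)
    return selected

future_yaw = predict_head_gaze(head_yaw=30.0, shoulder_yaw=0.0, eye_yaw_offset=80.0)
tiles = tiles_for_viewport(future_yaw)
```

The request to the server would then name only the selected tile indices for the impending time period, rather than the full omnidirectional frame.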
A Method, An Apparatus and a Computer Program Product for Video Encoding and Video Decoding
The embodiments relate to a method including generating a bitstream defining a presentation including omnidirectional visual media content; encoding into the bitstream a parameter to indicate viewport-control options for viewing the presentation, wherein the viewport-control options include options controllable by a receiving device and options not controllable by the receiving device; sending the bitstream to the receiving device; receiving one of the indicated viewport-control options from the receiving device as a response; and streaming the presentation to the receiving device. When the response includes an indication of a viewport control controllable by the receiving device, the method also includes receiving information on viewport definitions from the receiving device during streaming of the presentation and adapting the presentation accordingly; when the response includes an indication of a viewport control not controllable by the receiving device, the presentation is streamed to the receiving device according to the viewport control specified in the response.
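The negotiation branch, receiver-controlled versus sender-controlled viewport, can be sketched as below. The option table, option names, and return values are hypothetical; the patent only specifies that a parameter signals the options and the receiver picks one.

```python
# Hypothetical table of signalled viewport-control options.
VIEWPORT_CONTROL_OPTIONS = {
    "free_viewport": {"receiver_controlled": True},
    "directors_cut": {"receiver_controlled": False},
}

def handle_response(chosen_option):
    # Sender side: decide how to stream based on the option the receiving
    # device returned in its response.
    if VIEWPORT_CONTROL_OPTIONS[chosen_option]["receiver_controlled"]:
        # Expect viewport definitions from the receiver during streaming
        # and adapt the presentation accordingly.
        return "adapt_to_receiver_viewport"
    # Stream according to the fixed viewport control named in the response.
    return "stream_fixed_viewport"

mode = handle_response("free_viewport")
```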
A Method, An Apparatus and a Computer Readable Storage Medium for Video Streaming
A method comprising: requesting, by a client, an independently coded first representation of a video content component from a server; receiving and playing a first set of data units of the independently coded first representation; requesting a second set of data units of a second representation, said second set of data units being dependently coded on one or more requested or buffered data units of the first set; and requesting a third set of independently coded data units of a third representation.
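The request-ordering constraint, a dependently coded unit may only be requested once the units it depends on are requested or buffered, can be sketched as follows. The unit identifiers and dictionary layout are assumptions made for illustration.

```python
def can_request(unit, requested, buffered):
    # A dependently coded data unit may be requested only once every unit
    # it depends on has already been requested or sits in the buffer.
    return all(dep in requested or dep in buffered for dep in unit["deps"])

# Representation 1 is independently coded; representation 2 is dependently
# coded on it; representation 3 is again independently coded.
units = [
    {"id": "rep1/seg1", "deps": []},
    {"id": "rep2/seg2", "deps": ["rep1/seg1"]},
    {"id": "rep3/seg3", "deps": []},
]

requested, buffered = set(), set()
order = []
for unit in units:
    if can_request(unit, requested, buffered):
        requested.add(unit["id"])
        order.append(unit["id"])
```

If the dependency of `rep2/seg2` had been neither requested nor buffered, that request would simply be withheld, which is the gating behaviour the claim describes.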