Patent classifications
H04N21/21805
Adaptive video streaming
A method, system and apparatus for image capture, analysis and transmission are provided. A link aggregation method involves identifying controller network ports to a source connected to the same subnetwork; producing packets associating corresponding controller network ports selected by the source CPU for substantially uniform selection; and transmitting the packets to their corresponding network ports. An image analysis method involves producing by a camera an indication whether a region of an image differs by a threshold extent from a corresponding region of a reference image; transmitting the indication and image data to a controller via a communications network; and storing at the controller the image data and the indication in association therewith. The controller may perform operations according to positive indications. A transmission method involves receiving user input in respect of a video stream and transmitting, in accordance with the user input, selected data packets of selected image frames thereof.
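The camera-side step above (indicating whether an image region differs from the corresponding reference region by a threshold extent) can be sketched as a simple per-region comparison. This is an illustrative reading only; the function and its mean-absolute-difference metric are assumptions, not taken from the patent.

```python
def region_differs(region, reference_region, threshold):
    """Return True if the mean absolute pixel difference between an image
    region and the corresponding reference region exceeds the threshold.
    Regions are flat sequences of pixel intensities (hypothetical layout)."""
    if len(region) != len(reference_region):
        raise ValueError("regions must be the same size")
    total = sum(abs(a - b) for a, b in zip(region, reference_region))
    return (total / len(region)) > threshold
```

A controller receiving a positive indication could then store the image data together with the indication and act only on flagged regions.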
Image acquisition system and method
A method of capturing free viewpoint content at a location includes recording video on each of a plurality of portable video recording devices at the location; each portable video recording device detecting a wireless synchronisation signal transmitted at the location; and each portable video recording device periodically adding a timestamp to its respective recorded video; where the timestamp is responsive to the detected wireless synchronisation signal, thereby enabling synchronisation of a plurality of recorded videos responsive to the timestamps.
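One way to read the timestamping step above: each device maps its local clock onto the shared timeline defined by the wireless synchronisation signal, so that videos from devices with different clocks can be aligned. The function below is a minimal sketch under that assumption; its names and the offset arithmetic are illustrative.

```python
def sync_timestamp(local_time, last_sync_local_time, sync_signal_time):
    """Map a device-local time onto the shared timeline: offset local time
    by the difference between the sync signal's broadcast time and the
    local time at which the device detected it."""
    return local_time + (sync_signal_time - last_sync_local_time)
```

Two devices observing the same event then produce matching timestamps even though their local clocks disagree.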
Method and apparatus for transmitting video content using edge computing service
An example method, performed by an edge data network, of transmitting video content includes: obtaining first bearing information from an electronic device connected to the edge data network; determining second predicted bearing information based on the first bearing information; determining a second predicted partial image corresponding to the second predicted bearing information; transmitting, to the electronic device, a second predicted frame generated by encoding the second predicted partial image; obtaining, from the electronic device, second bearing information corresponding to a second partial image; comparing the second predicted bearing information to the obtained second bearing information; generating, based on a result of the comparing, a compensation frame using at least two of a first partial image corresponding to the first bearing information, the second predicted partial image, or the second partial image corresponding to the second bearing information; and transmitting the generated compensation frame to the electronic device based on the result of the comparing.
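The prediction-and-comparison loop above can be sketched with a linear bearing extrapolation and a tolerance test deciding whether a compensation frame is needed. The patent does not specify the prediction model; linear extrapolation and the scalar-bearing simplification here are assumptions for illustration.

```python
def predict_bearing(prev_bearing, curr_bearing):
    """Assumed linear model: extrapolate the next bearing by continuing
    the most recent change in bearing."""
    return curr_bearing + (curr_bearing - prev_bearing)

def needs_compensation(predicted_bearing, actual_bearing, tolerance):
    """Compare predicted with reported bearing; a compensation frame is
    generated only when the error exceeds the tolerance."""
    return abs(predicted_bearing - actual_bearing) > tolerance
```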
PORTABLE DIGITAL VIDEO CAMERA CONFIGURED FOR REMOTE IMAGE ACQUISITION CONTROL AND VIEWING
A wearable digital video camera (10) is equipped with wireless connection protocol and global navigation and location positioning system technology to provide remote image acquisition control and viewing. The Bluetooth® packet-based open wireless technology standard protocol (400) is preferred for providing control signals or streaming data to the digital video camera and for accessing image content stored on or streaming from the digital video camera. The GPS technology (402) is preferred for tracking the location of the digital video camera as it records image information. A rotating mount (300) with a locking member (330) on the camera housing (22) allows adjustment of the pointing angle of the wearable digital video camera when it is attached to a mounting surface.
DISPLAY SYSTEM AND METHOD
A system for obtaining content for display to a user of a head-mountable display device, HMD, the system comprising one or more audio detection units operable to capture audio in the environment of the user, a motion prediction unit operable to predict motion of the HMD in dependence upon the captured audio, and a content obtaining unit operable to obtain content for display in dependence upon the predicted motion of the HMD.
Enabling motion parallax with multilayer 360-degree video
Systems and methods are described for simulating motion parallax in 360-degree video. In an exemplary embodiment for producing video content, a method includes obtaining a source video; determining, based on information received from a client device, a selected number of depth layers; producing, from the source video, a plurality of depth layer videos corresponding to the selected number of depth layers, wherein each depth layer video is associated with at least one respective depth value, and wherein each depth layer video includes regions of the source video having depth values corresponding to the respective associated depth value; and sending the plurality of depth layer videos to the client device.
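The layer-production step above amounts to partitioning source pixels among a small set of representative depth values. A minimal sketch of that assignment follows; the nearest-depth rule and the flat depth-map representation are assumptions, not details from the abstract.

```python
def assign_depth_layers(depth_map, layer_depths):
    """Assign each pixel to the depth layer whose representative depth
    value is nearest. depth_map is a flat sequence of per-pixel depths;
    returns one layer index per pixel."""
    return [min(range(len(layer_depths)),
                key=lambda i: abs(d - layer_depths[i]))
            for d in depth_map]
```

Each resulting index set defines the regions that go into one depth layer video.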
SPORTS BETTING APPARATUS AND METHOD
An apparatus, including a processor which provides an electronic forum capable of providing a video or audio broadcast of a sporting event to users and capable of allowing the users to communicate with one another before, during, or after, the sporting event via text messaging, video conferencing, or audio conferencing, place a bet or bets on an outcome of, or on an event occurring during, the sporting event, and receive information regarding bets available, betting odds, changes in betting odds, or analytics information; a transmitter which transmits the electronic forum to a user communication device; and a receiver which receives information transmitted from the user communication device. The apparatus provides, via the electronic forum, player performance tracking data and betting activity information or betting market activity information. The player performance tracking data is obtained by an optical camera, a local positioning, or a GPS/GNSS player performance tracking system.
Apparatus and system for virtual camera configuration and selection
A system and method for virtual camera configuration and selection. For example, one embodiment of a system comprises: a decode subsystem comprising circuitry to concurrently decode a plurality of video streams captured by cameras at an event to generate decoded video streams from a perspective of corresponding virtual cameras (VCAMs); video evaluation logic to apply at least one video quality metric to determine a quality value for the decoded video streams or a subset thereof, and to rank the decoded video streams based, at least in part, on the quality values associated with the decoded video streams; preview logic to provide the decoded video streams or modified versions thereof to one or more computing devices accessible to one or more video production team members and to further provide the quality values and/or the rank generated by the video evaluation logic; stream selection hardware logic to select a subset of the plurality of decoded video streams based on input from the one or more video production team members; and transcoder hardware logic to transcode the subset of the plurality of decoded video streams for live transmission over a public or private network.
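The ranking step above (ordering decoded streams by their quality values so the production team previews the best candidates first) can be sketched as a simple sort. The function name, the scalar quality values, and the top-k cutoff are illustrative assumptions.

```python
def rank_streams(quality_values, k):
    """Rank stream indices by descending quality value and return the
    top-k candidate streams for preview."""
    order = sorted(range(len(quality_values)),
                   key=lambda i: quality_values[i], reverse=True)
    return order[:k]
```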
Methods for timed metadata priority rank signaling for point clouds
Embodiments herein provide techniques for signaling of priority information (e.g., priority ranking) and/or quality information in a timed metadata track associated with point cloud content. For example, embodiments include procedures for signaling of priority information and/or quality information in a timed metadata track to support viewport-dependent distribution of point cloud content, e.g., based on MPEG's International Organization for Standardization (ISO) Base Media File Format (ISOBMFF). In some embodiments, metadata samples of the timed metadata track may include priority information and/or quality information for a point cloud bounding box of a point cloud media presentation (e.g., for one or more point cloud objects in the point cloud bounding box). Other embodiments may be described and claimed.
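For viewport-dependent distribution as described above, a client might use the signaled priority ranks to order point-cloud objects for fetching. The sketch below assumes metadata samples are simple dictionaries and that a lower rank means higher priority; both are illustrative conventions, not drawn from the ISOBMFF signaling itself.

```python
def order_by_priority(metadata_samples):
    """Order point-cloud objects by ascending priority rank (assumed
    convention: lower rank value = higher priority), as a client might
    when scheduling which bounding-box content to fetch first."""
    return sorted(metadata_samples, key=lambda s: s["priority_rank"])
```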
Methods and apparatus for re-timing and scaling input video tracks
The techniques described herein relate to methods, apparatus, and computer readable media configured to access multimedia data comprising a hierarchical track structure comprising at least a first track at a first level of the hierarchical track structure comprising first media data, wherein the first media data comprises a first sequence of video media units, and a second track at a second level in the hierarchical track structure different than the first level of the first track, the second track comprising metadata specifying a re-timing derivation operation. Output video media units are generated according to the second track, comprising performing the re-timing derivation operation on the first sequence of video media units to modify a timing of the first sequence of video media units by removing one or more video media units associated with the re-timing derivation operation and/or shifting timing information of the first sequence of video media units.
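The re-timing derivation above removes some video media units and/or shifts the timing of the remainder. A minimal sketch under assumed data shapes (units as id/timestamp dictionaries, a removal list, and a constant time shift, none of which are specified by the abstract):

```python
def retime(units, remove_ids, shift):
    """Apply a re-timing derivation: drop the units whose ids are listed
    in remove_ids, then shift the remaining units' timestamps by a
    constant offset."""
    removed = set(remove_ids)
    kept = [u for u in units if u["id"] not in removed]
    return [{"id": u["id"], "t": u["t"] + shift} for u in kept]
```

The output sequence corresponds to the video media units generated according to the second (metadata) track.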