Patent classifications
H04N21/2743
Methods and Systems for Detecting Persons in a Smart Home Environment
The various implementations described herein include methods, devices, and systems for detecting motion and persons. In one aspect, a method is performed at a smart home system that includes a video camera, a server system, and a client device. The video camera captures video and audio, and wirelessly communicates, via the server system, the captured data to the client device. The server system: (1) receives and stores the captured data from the video camera; (2) determines whether an event has occurred, including detected motion; (3) in accordance with a determination that the event has occurred, identifies video and audio corresponding to the event; and (4) assigns at least one classification to the event. The client device receives information indicative of the identified events, displays a user interface for reviewing the video and audio stored by the remote server system, and displays the at least one classification for the event.
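The four server-side steps above can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: frames are stood in for by lists of pixel values, motion is detected by naive frame differencing, and all function names, thresholds, and the person-flag input are assumptions.

```python
def detect_motion(prev_frame, frame, threshold=30):
    # Flag motion when more than 10% of pixels change between frames
    # (threshold and ratio are illustrative, not from the patent).
    changed = sum(1 for a, b in zip(prev_frame, frame) if abs(a - b) > threshold)
    return changed > len(frame) // 10

def classify_event(has_person):
    # Step (4): assign a classification to the event.
    return "person" if has_person else "motion"

def process_stream(frames, person_flags):
    """Steps (2)-(4): detect events, identify the frame span of each,
    and classify it. Returns (start_index, end_index, classification)."""
    events = []
    start = None
    for i in range(1, len(frames)):
        moving = detect_motion(frames[i - 1], frames[i])
        if moving and start is None:
            start = i                       # event begins
        elif not moving and start is not None:
            events.append((start, i, classify_event(any(person_flags[start:i]))))
            start = None                    # event ends
    if start is not None:                   # event still open at end of stream
        events.append((start, len(frames), classify_event(any(person_flags[start:]))))
    return events
```

A real system would operate on camera frames and a person detector; here the boundary between "identify the event's video" and "classify it" is the point of the sketch.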
Video File Processing Method and Device
Embodiments of the present disclosure may provide a video file processing method, including: receiving a splitting instruction of a video file; determining splitting nodes corresponding to the splitting instruction; splitting the video file into multiple sub-video files using the splitting nodes; and storing the multiple sub-video files. Embodiments of the present disclosure may further provide a video file processing device. With the embodiments of the present disclosure, the video file may be displayed in segments, which may optimize management of the video file.
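The splitting and storing steps can be sketched as below. This is an assumed illustration: the video file is modeled as a byte sequence, splitting nodes as byte offsets, and storage as a dictionary; none of these details come from the disclosure.

```python
def split_video(video, nodes):
    """Split `video` (bytes standing in for a file) at the given
    splitting nodes (offsets) and return the sub-video files."""
    bounds = [0] + sorted(nodes) + [len(video)]
    # Drop empty segments produced by duplicate or boundary nodes.
    return [video[a:b] for a, b in zip(bounds, bounds[1:]) if a < b]

def store_segments(segments, store):
    """Store each sub-video file under a generated key."""
    for i, seg in enumerate(segments):
        store[f"part_{i}"] = seg
```

In practice the nodes would be keyframe-aligned timestamps rather than raw offsets, but the receive-determine-split-store flow is the same.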
MEDIA CLIP CREATION AND DISTRIBUTION SYSTEMS, APPARATUS, AND METHODS
Various embodiments for creating media clips are disclosed. In one example, a method is performed by a server for managing the creation and distribution of media clips, where the server associates a content capture device with an event, the content capture device for recording at least a portion of the event, receives a tag notification from a content tagging device via a network interface, generates a media clip creation command, sends the media clip creation command to the content capture device via the network interface, and receives a media clip created by the content capture device in response to receiving the media clip creation command.
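The associate/tag/command/receive sequence can be sketched with a toy server and capture device. All class and method names are assumptions made for illustration; the real system would exchange these messages over a network interface.

```python
class ClipServer:
    """Associates capture devices with events, reacts to tag
    notifications by commanding a clip, and collects the result."""
    def __init__(self):
        self.event_devices = {}   # event_id -> content capture device
        self.clips = []

    def associate(self, event_id, device):
        self.event_devices[event_id] = device

    def on_tag_notification(self, event_id, timestamp):
        device = self.event_devices[event_id]
        command = {"action": "create_clip", "at": timestamp}
        clip = device.create_clip(command)   # send command, receive clip
        self.clips.append(clip)
        return clip

class FakeCaptureDevice:
    """Stand-in for the content capture device recording the event."""
    def create_clip(self, command):
        return {"clip_at": command["at"]}
```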
METHOD AND TERMINAL FOR UGC FEEDBACK AND FEEDBACK INFORMATION DISPLAY
The disclosure includes a method for providing feedback on UGC (user generated content) by a user. The method includes: displaying UGC provided by a social-network friend of the user; detecting a starting time-point of a continuous operation on the displayed UGC; counting time from the starting time-point of the continuous operation to obtain a timing duration in real-time; playing a sequence of quantified feedback images that vary as the timing duration increases, until the playing of the sequence of the quantified feedback images is completed or the continuous operation ends; and generating quantified feedback information matching the quantified feedback image displayed when the playing of the sequence of the quantified feedback images is completed or the continuous operation ends, and notifying a terminal logged in by the social-network friend.
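The timing logic above, where a continuous press advances through a sequence of quantified feedback images until the sequence completes or the press ends, can be sketched as a mapping from press duration to image index. The frame interval, frame count, and the rule that the feedback value matches the image shown are assumptions for illustration.

```python
def feedback_frame(duration, frame_interval=0.5, total_frames=5):
    """Index of the quantified-feedback image shown after a press of
    `duration` seconds; the sequence stops at its last frame."""
    return min(int(duration // frame_interval), total_frames - 1)

def quantified_feedback(duration, frame_interval=0.5, total_frames=5):
    """Quantified feedback value matching the image displayed when the
    continuous operation ends (or the sequence completes)."""
    return feedback_frame(duration, frame_interval, total_frames) + 1
```

A longer press yields a larger feedback value, capped once the image sequence has played to completion, which is the behavior the abstract describes.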
Comprehensive video collection and storage
A video collection system is disclosed comprising a body-wearable video camera, a camera dock, and a video collection manager. The camera dock is configured to interface with the body-wearable video camera having a camera-memory element. The camera dock includes a dock-memory element configured to receive and store video data from the camera-memory element. The video collection manager is communicatively coupled with the camera dock. The camera dock sends at least a portion of the video data to the video collection manager.
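The camera-to-dock-to-manager data path can be sketched as below. Class names and the detail that the camera's memory is cleared after ingest are illustrative assumptions, not claims from the patent.

```python
class CollectionManager:
    """Stand-in for the video collection manager."""
    def __init__(self):
        self.stored = []

    def receive(self, data):
        self.stored.extend(data)

class CameraDock:
    """Pulls video data from the camera-memory element into the
    dock-memory element, then forwards it to the manager."""
    def __init__(self, manager):
        self.dock_memory = []
        self.manager = manager

    def ingest(self, camera_memory):
        self.dock_memory.extend(camera_memory)
        camera_memory.clear()           # assumed: free the camera to record

    def upload(self):
        # Send at least a portion of the video data to the manager;
        # here, all of it.
        self.manager.receive(list(self.dock_memory))
```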
METHODS AND SYSTEMS FOR GENERATING AND PROVIDING PROGRAM GUIDES AND CONTENT
Systems and methods for identifying, assembling, and transmitting content are described in the illustrative context of electronic program guides and program channels. A first system causes an interactive interstitial to be presented on a remote first device of a user in conjunction with a scheduled program. The first system determines if a second device of the user is available to receive an interstitial interaction request. At least partly in response to determining that the second device is available to receive an interstitial interaction request, the interstitial interaction request is presented via a client hosted on the second device. At least partly in response to determining that the user has provided an interaction via the second device, the interaction is stored in memory. Optionally, an interstitial is composed based at least in part on the user interaction. The composed interstitial is optionally displayed via the first device of the user in conjunction with a scheduled program.
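The second-device decision flow above can be sketched as follows. The callable standing in for the second-device client, the string-based interstitial composition, and all names are assumptions for illustration.

```python
def present_interstitial(second_device_available, get_interaction):
    """Route the interstitial interaction request to the user's second
    device when it is available, and store any interaction received."""
    stored = []
    if second_device_available:
        interaction = get_interaction()   # client hosted on second device
        if interaction is not None:
            stored.append(interaction)
    return stored

def compose_interstitial(interactions, base="promo"):
    """Optionally compose an interstitial based at least in part on the
    stored user interactions, for display with a scheduled program."""
    return f"{base}:{','.join(interactions)}" if interactions else base
```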
MULTIMEDIA CONTENT MANAGEMENT SYSTEM AND METHOD
A multimedia content management system includes a mobile computing device and a backend server. The mobile computing device includes a memory programmed with a mobile application, a processor module, a wireless communication module configured to communicate over a wireless communication link, and a first multimedia device configured to capture a first video and a second video. The mobile application is configured to transmit the first video and the second video over the wireless communication link via the wireless communication module. The backend server is communicably coupled to the mobile computing device via the wireless communication link and configured to execute a persistent internet accessible request protocol for receiving, updating, and storing transmitted videos. The multimedia content management system is useful for hosting and remotely managing multimedia content.
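The backend's request protocol for receiving, updating, and storing videos can be sketched as a minimal request handler. The operation names, request shape, and return strings are assumptions; the abstract does not specify the protocol's wire format.

```python
class BackendServer:
    """Minimal stand-in for the persistent, internet-accessible request
    protocol: handles store/update/get requests for transmitted videos."""
    def __init__(self):
        self.videos = {}

    def handle(self, request):
        op, video_id = request["op"], request["id"]
        if op == "store":
            self.videos[video_id] = request["data"]
            return "stored"
        if op == "update":
            if video_id not in self.videos:
                return "not found"
            self.videos[video_id] = request["data"]
            return "updated"
        if op == "get":
            return self.videos.get(video_id, "not found")
        return "bad request"
```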
Systems and methods for encoding and sharing content between devices
Systems and methods for sharing content between devices are disclosed. To request a shared piece of media content, a playback device generates and sends a request to a content server. The playback device includes information in the request that indicates the playback capabilities of the device. The content server receives the request and determines the playback capabilities of the playback device from the information in the request. The content server then determines the assets that may be used by the playback device to obtain the media content and generates a top level index file for the playback device that includes information about the determined assets. The top level index file is then sent to the playback device, which may then use the top level index file to obtain the media content using the indicated assets.
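The server-side selection step can be sketched as filtering an asset catalog by the capabilities reported in the request and emitting a top-level index. The asset names, capability fields, and index layout are all illustrative assumptions.

```python
# Hypothetical asset catalog held by the content server.
ASSETS = {
    "h264_480p":  {"codec": "h264", "max_height": 480},
    "h264_1080p": {"codec": "h264", "max_height": 1080},
    "hevc_2160p": {"codec": "hevc", "max_height": 2160},
}

def build_top_level_index(capabilities):
    """Determine which assets the requesting playback device can use and
    generate a top-level index file listing them."""
    usable = [
        name for name, asset in ASSETS.items()
        if asset["codec"] in capabilities["codecs"]
        and asset["max_height"] <= capabilities["max_height"]
    ]
    return {"version": 1, "assets": sorted(usable)}
```

In a real deployment this index would resemble an HLS or DASH top-level manifest pointing at per-asset stream files.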