H04N5/9201

Systems and methods for categorizing motion events

The various embodiments described herein include methods, devices, and systems for categorizing motion events. In one aspect, a method is performed at a camera device. The method includes: (1) capturing a plurality of video frames via an image sensor of the camera device, the plurality of video frames corresponding to a scene in a field of view of the camera; (2) sending the video frames to a remote server system in real-time; (3) while sending the video frames to the remote server system in real-time: (a) determining that motion has occurred within the scene; (b) in response to determining that motion has occurred within the scene, characterizing the motion as a motion event; and (c) generating motion event metadata for the motion event; and (4) sending the generated motion event metadata to the remote server system concurrently with the video frames.
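The camera-side pipeline described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the frame representation, the differencing-based motion test, and the names `process_stream`, `send_frame`, and `send_metadata` are all assumptions introduced here.

```python
# Sketch: stream frames in real time while detecting motion and emitting
# motion-event metadata concurrently with the video frames.
import time
from dataclasses import dataclass, field

@dataclass
class MotionEvent:
    start_time: float
    frame_indices: list = field(default_factory=list)

def frame_delta(prev, curr):
    """Mean absolute pixel difference between two frames (lists of ints)."""
    return sum(abs(a - b) for a, b in zip(prev, curr)) / len(curr)

def process_stream(frames, send_frame, send_metadata, threshold=10.0):
    prev = None
    event = None
    for i, frame in enumerate(frames):
        send_frame(frame)                                # (2) stream in real time
        if prev is not None:
            if frame_delta(prev, frame) > threshold:     # (3a) motion detected
                if event is None:                        # (3b) characterize as event
                    event = MotionEvent(start_time=time.time())
                event.frame_indices.append(i)
            elif event is not None:
                send_metadata(event)                     # (3c)/(4) metadata alongside video
                event = None
        prev = frame
    if event is not None:
        send_metadata(event)
```

Here the metadata is sent on the same loop that streams frames, mirroring the claim that metadata travels concurrently with the video rather than after capture ends.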

Live style transfer on a mobile device

Various embodiments of the present invention relate generally to systems and processes for transforming a style of video data. In one embodiment, a neural network is used to interpolate native video data received from a camera system on a mobile device in real-time. The interpolation converts the live native video data into a particular style. For example, the style can be associated with a particular artist or a particular theme. The stylized video data can be viewed on a display of the mobile device in a manner similar to that in which native live video data is output to the display. Thus, the stylized video data shown on the display remains consistent with the current position and orientation of the camera system.
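The per-frame flow can be sketched as below. The `stylize` function stands in for the neural network described in the abstract; as a placeholder it merely inverts pixel values, which is purely an assumption for illustration.

```python
# Sketch: each native frame from the camera is passed through a
# style-transfer function before being shown, so the displayed output
# tracks the camera's current position and orientation in real time.

def stylize(frame):
    """Placeholder for a neural style-transfer pass over one frame."""
    return [255 - p for p in frame]

def live_style_loop(camera_frames, display):
    """Display stylized frames in the order the camera produces them."""
    for frame in camera_frames:
        display(stylize(frame))
```

The key property is that stylization happens inline, frame by frame, rather than as a post-processing step over recorded video.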

Systems and methods for synchronizing surface data management operations for virtual reality

An exemplary virtual reality system includes a management device, a synchronization device, and a gatekeeper device communicatively coupled to the management device and the synchronization device by way of a network. The gatekeeper device is configured to receive, at a particular time, a frame of a surface data frame sequence that the gatekeeper device is responsible for processing. The gatekeeper device is also configured to transmit, to the synchronization device, the particular time at which the gatekeeper device received the frame, and to receive, from the synchronization device, a timeframe during which the gatekeeper device is to transmit the frame. The gatekeeper device is further configured to transmit, to the management device during the timeframe received from the synchronization device, the frame of the surface data frame sequence. Corresponding systems and methods are also disclosed.
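The three-party handshake described above can be sketched as follows. All class and method names here are invented for illustration, as is the window-assignment policy (the abstract does not specify how the synchronization device chooses the timeframe).

```python
# Sketch: the gatekeeper reports its receipt time to the synchronization
# device, receives a transmit window back, and delivers the frame to the
# management device inside that window.

class SyncDevice:
    def assign_window(self, receipt_time, slot=1.0):
        # Assumed policy: window opens one slot after receipt, lasts one slot.
        return (receipt_time + slot, receipt_time + 2 * slot)

class ManagementDevice:
    def __init__(self):
        self.received = []
    def accept(self, frame, at_time):
        self.received.append((frame, at_time))

class Gatekeeper:
    def __init__(self, sync, mgmt):
        self.sync, self.mgmt = sync, mgmt
    def handle_frame(self, frame, receipt_time):
        start, end = self.sync.assign_window(receipt_time)
        transmit_at = start                  # transmit as the window opens
        assert start <= transmit_at <= end
        self.mgmt.accept(frame, transmit_at)
        return transmit_at
```

Centralizing window assignment in the synchronization device is what lets multiple gatekeepers, each handling a different surface data frame sequence, deliver frames to the management device in a coordinated order.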

CONDITIONAL CAMERA CONTROL VIA AUTOMATED ASSISTANT COMMANDS

Implementations set forth herein relate to an automated assistant that can control a camera according to one or more conditions specified by a user. A condition can be satisfied when, for example, the automated assistant detects that a particular environmental feature is apparent. In this way, the user can rely on the automated assistant to identify and capture certain moments without necessarily requiring the user to constantly monitor a viewing window of the camera. In some implementations, a condition for the automated assistant to capture media data can be based on application data and/or other contextual data that is associated with the automated assistant. For instance, a relationship between content in a camera viewing window and other content of an application interface can be a condition upon which the automated assistant captures certain media data using a camera.
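Condition-gated capture can be sketched as a predicate over contextual data that triggers the camera when satisfied. The names `make_assistant`, the context dictionary keys, and the example condition are all illustrative assumptions, not the patent's API.

```python
# Sketch: the user registers a condition (a predicate over contextual
# data); the assistant captures the viewfinder contents only when the
# condition is satisfied, so the user need not monitor the camera.

def make_assistant(condition, capture):
    def on_context_update(context):
        if condition(context):
            capture(context.get("viewfinder"))
            return True
        return False
    return on_context_update
```

For example, a condition might test whether a detected-feature list contains "smile", so capture fires only when a smiling face appears in the viewing window.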

VIDEO SURVEILLANCE SYSTEM
20220150442 · 2022-05-12

A video surveillance system comprising a video management system and one or more digital devices, each of the one or more digital devices being configured to emulate at least one physical video camera, and to send video streams and/or video metadata via a computer network to a video-receiving system, wherein the video management system comprises: an input interface for receiving one or more video streams from one or more video cameras and/or other video sources, a processing unit configured to receive one or more input video streams from the input interface, each input video stream corresponding to one of the received video streams, and to store the input video streams in a video repository, and an output interface configured to send, via a computer network, one or more of the input video streams and/or one or more of the stored video streams to one or more of the digital devices.
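The claimed data flow can be sketched as below. The class and method names are assumptions introduced here; the sketch only illustrates the store-and-forward relationship between the video management system and the digital devices.

```python
# Sketch: the VMS ingests input streams, stores them in a repository,
# and forwards frames to subscribed digital devices, each of which
# emulates a physical camera toward a downstream video receiver.

class VideoManagementSystem:
    def __init__(self):
        self.repository = {}      # stream_id -> stored frames
        self.subscribers = {}     # stream_id -> digital devices
    def subscribe(self, stream_id, device):
        self.subscribers.setdefault(stream_id, []).append(device)
    def ingest(self, stream_id, frame):
        self.repository.setdefault(stream_id, []).append(frame)   # store
        for device in self.subscribers.get(stream_id, []):        # forward
            device.receive(stream_id, frame)

class DigitalDevice:
    """Emulates at least one physical video camera."""
    def __init__(self):
        self.frames = []
    def receive(self, stream_id, frame):
        self.frames.append((stream_id, frame))
```

Because the device receives the same frames the repository stores, a video-receiving system downstream of the digital device sees it as just another camera.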

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, IMAGE CAPTURING APPARATUS, AND STORAGE MEDIUM
20230260299 · 2023-08-17

An image processing apparatus comprises a generation unit configured to generate an image file of captured image data, the generation unit generating the image file with estimation results related to the image data added thereto as metadata, wherein the generation unit generates the metadata so that a first estimation result and a second estimation result are distinguishable from each other, the first estimation result being based on data that is included in the image file, the second estimation result being based on data that is not included in the image file.
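One way to keep the two kinds of estimation results distinguishable is to tag each result with its data source in the metadata. The field names below are assumed for illustration; the abstract does not specify a metadata schema.

```python
# Sketch: metadata that distinguishes estimation results computed from
# data contained in the image file ("in-file") from results computed
# from data not contained in the file ("external").

def build_metadata(in_file_estimates, external_estimates):
    """Return metadata keeping the two kinds of results distinguishable."""
    return {
        "estimations": (
            [{"result": r, "source": "in-file"} for r in in_file_estimates] +
            [{"result": r, "source": "external"} for r in external_estimates]
        )
    }
```

A reader of the file can then judge whether an estimation is reproducible from the file alone or depended on data that is no longer attached.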

Electronic key photo album, program for creating electronic key photo album, and method for creating electronic key photo album
11315297 · 2022-04-26

To securely create an electronic album of photos of various keys owned by a person, regardless of the types of keys, the electronic key photo album (10) includes: a dummy photo generator (11) that cuts out a key image from a photo of a key taken together with its background and complements the photo with a dummy image to generate a dummy photo; a photo storage controller (12) that stores the dummy photo in a first storage (101) together with metadata of the photo of the key, and stores the key image in a second storage (102) in association with the dummy photo, together with the relative position of the key image within the photo; a photo restorer (13) that reads out a dummy photo from the first storage, reads out from the second storage the key image associated with that dummy photo and its relative position, and pastes the key image at that relative position in the dummy photo to restore the original photo of the key; and a photo forwarder (14) that transfers the original photo of the key to a display device.
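The split-and-restore scheme can be sketched as below, modeling a photo as a flat list of pixels and the key as a contiguous slice. This is a deliberate simplification: a real implementation would work on 2-D image regions, and all function names here are assumptions.

```python
# Sketch: cut the key image out of the photo, fill the gap with dummy
# pixels (dummy photo generator), keep the key image and its relative
# position separately, and later paste it back (photo restorer).

def split_photo(photo, key_start, key_end, dummy_pixel=0):
    """Cut out the key region and fill it with dummy pixels."""
    key_image = photo[key_start:key_end]
    dummy_photo = (photo[:key_start]
                   + [dummy_pixel] * (key_end - key_start)
                   + photo[key_end:])
    return dummy_photo, key_image, key_start       # relative position retained

def restore_photo(dummy_photo, key_image, key_start):
    """Paste the key image back at its stored relative position."""
    return (dummy_photo[:key_start]
            + key_image
            + dummy_photo[key_start + len(key_image):])
```

Security comes from separation: neither storage alone holds enough to reproduce the key, since the first holds only the dummy photo and the second only the cut-out image and its position.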