Patent classifications
H04N5/9201
Conditional camera control via automated assistant commands
Implementations set forth herein relate to an automated assistant that can control a camera according to one or more conditions specified by a user. A condition can be satisfied when, for example, the automated assistant detects that a particular environmental feature is present. In this way, the user can rely on the automated assistant to identify and capture certain moments without necessarily requiring the user to constantly monitor a viewing window of the camera. In some implementations, a condition for the automated assistant to capture media data can be based on application data and/or other contextual data that is associated with the automated assistant. For instance, a relationship between content in a camera viewing window and other content of an application interface can be a condition upon which the automated assistant captures certain media data using a camera.
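The condition-driven capture loop this abstract describes can be sketched as follows. This is a minimal illustration, not the patented implementation; the names `CaptureCondition`, `AssistantCamera`, and the dictionary-based frame context are all assumptions introduced for the example.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class CaptureCondition:
    """A user-specified condition (hypothetical helper, not from the patent text)."""
    description: str
    predicate: Callable[[dict], bool]  # tests contextual/application data per frame

@dataclass
class AssistantCamera:
    conditions: List[CaptureCondition] = field(default_factory=list)
    captured: List[dict] = field(default_factory=list)

    def on_frame(self, context: dict) -> bool:
        # Capture media when any registered condition is satisfied,
        # so the user need not monitor the viewfinder themselves.
        if any(c.predicate(context) for c in self.conditions):
            self.captured.append(context)
            return True
        return False

cam = AssistantCamera()
cam.conditions.append(
    CaptureCondition("capture when a birthday cake appears",
                     lambda ctx: "cake" in ctx.get("detected_objects", [])))

cam.on_frame({"detected_objects": ["table"]})          # condition not met
cam.on_frame({"detected_objects": ["table", "cake"]})  # condition met, frame captured
```

A condition here could equally test application data (e.g., a calendar entry) rather than detected objects, matching the abstract's notion of contextual conditions.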
MODULAR CAMERA BLOCKS FOR VIRTUAL REALITY CAPTURE
An apparatus comprises: a camera module for obtaining a first image, the camera module having at least one port, each port being associated with an attachment position for receiving a second camera module for obtaining a second image; a processor for detecting a position of a second camera module and providing, to an image processing controller, information relating to at least one of the position of the second camera module and the first image obtained by the camera module; and a memory for storing the information relating to at least one of the position of the second camera module and the first image obtained by the camera module.
INTELLIGENT PROCESSING METHOD AND SYSTEM FOR VIDEO DATA
The present application discloses an intelligent processing method and system for video data, wherein an intelligent camera sets a warning rule. The method comprises: the intelligent camera collecting video data and analyzing the collected video data in real time, and generating intelligent data if the warning rule is met, the intelligent data containing an encoder identifier and motion trajectory information; the intelligent camera packaging the video data and the intelligent data into a program stream and sending it to a frame analyzing component in a cloud storage system; the frame analyzing component unpacking the received program stream to obtain the video data and the intelligent data, and storing the video data and the intelligent data in respective storage components; and the storage components sending the storage address information of the video data and the intelligent data to an index server for recording. The solutions of the present application can perform intelligent processing of the collected video data in real time.
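The pack/unpack flow between the camera and the frame analyzing component can be sketched as below. The JSON-plus-length-prefix framing is an illustrative assumption; a real camera would emit an MPEG program stream, and the field names are hypothetical.

```python
import json

def pack_program_stream(video_data: bytes, intelligent_data: dict) -> bytes:
    """Bundle video data and intelligent data into one stream (what the
    camera sends to the cloud storage system). Framing is illustrative."""
    header = json.dumps(intelligent_data).encode()
    return len(header).to_bytes(4, "big") + header + video_data

def unpack_program_stream(stream: bytes):
    """What the frame-analyzing component does: split the stream back into
    intelligent data and video data so each can be stored separately."""
    hlen = int.from_bytes(stream[:4], "big")
    intelligent = json.loads(stream[4:4 + hlen])
    video = stream[4 + hlen:]
    return video, intelligent

meta = {"encoder_id": "cam-01", "trajectory": [[0, 0], [3, 4]]}
stream = pack_program_stream(b"\x00\x01video-bytes", meta)
video, intelligent = unpack_program_stream(stream)
```

After unpacking, the two parts would be written to their storage components, whose addresses are then reported to the index server.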
VIDEO PROCESSING SYSTEM
A video processing system includes: an object movement information acquiring means for detecting a moving object moving in a plurality of segment regions from video data obtained by shooting a monitoring target area, and acquiring movement segment region information as object movement information, the movement segment region information representing segment regions where the detected moving object has moved; an object movement information and video data storing means for storing the object movement information in association with the video data corresponding to the object movement information; a retrieval condition inputting means for inputting a sequence of the segment regions as a retrieval condition; and a video data retrieving means for retrieving the object movement information in accordance with the retrieval condition and outputting video data stored in association with the retrieved object movement information, the object movement information being stored by the object movement information and video data storing means.
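The retrieval step this abstract claims, matching a queried sequence of segment regions against stored object movement information, can be sketched like this. The in-memory list, region IDs, and file names are stand-ins for the storing means; contiguous-subsequence matching is an assumption about how the retrieval condition is evaluated.

```python
def contains_subsequence(path, query):
    """True if `query` appears as a contiguous run of segment regions in `path`."""
    n = len(query)
    return any(path[i:i + n] == query for i in range(len(path) - n + 1))

# Object movement information stored in association with its video data
# (an in-memory stand-in for the storing means; region IDs are illustrative).
store = [
    {"path": ["A1", "A2", "B2", "B3"], "video": "clip_001.mp4"},
    {"path": ["C1", "B1", "A1"], "video": "clip_002.mp4"},
]

def retrieve(query):
    """Return video data whose movement information matches the retrieval condition."""
    return [rec["video"] for rec in store
            if contains_subsequence(rec["path"], query)]

matches = retrieve(["A2", "B2"])  # movement through region A2 then B2
```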
SYSTEMS AND METHODS FOR CAPTURING IMAGE DATA FOR MAPPING
A method of collecting visual data using a mobile image capture device is provided. The method comprises the steps of: capturing image data with the mobile image capture device and associating time and location data with each image; and storing the image data and associated time and location data. The method further comprises monitoring the position and orientation of the image capture device. The method further comprises: defining an area surrounding or next to the location of the mobile image capture device; and identifying a characteristic of the defined area based on one or more of: the density of the image data in the area; the age of the image data in the area; and/or data demand values associated with locations within the defined area. The timing of capture of image data is based on the characteristic of the defined area.
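The capture-timing decision the abstract describes, driven by the density, age, and demand characteristics of the defined area, can be sketched as a simple policy. The thresholds and the `demand` score in [0, 1] are illustrative assumptions, not values from the patent.

```python
from datetime import datetime, timedelta

def should_capture(image_timestamps, demand, now,
                   min_density=5, max_age=timedelta(days=30),
                   demand_threshold=0.8):
    """Decide whether to trigger capture in the area surrounding the device,
    based on the three characteristics named in the abstract."""
    if len(image_timestamps) < min_density:       # too little existing coverage
        return True
    if now - max(image_timestamps) > max_age:     # existing imagery is stale
        return True
    return demand > demand_threshold              # high data-demand value

now = datetime(2024, 6, 1)
fresh = [now - timedelta(days=d) for d in (1, 2, 3, 4, 5)]

should_capture([], 0.0, now)     # empty area: capture
should_capture(fresh, 0.1, now)  # dense, fresh, low demand: skip
```

In practice such a policy would run continuously against the monitored position and orientation of the device, capturing only when the surrounding area warrants it.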
Reproduction device, reproduction method, and recording medium
The present technology relates to a reproduction device, a reproduction method, and a recording medium that enable content having a wide dynamic range of brightness to be displayed with an appropriate brightness. A recording medium, on which the reproduction device of one aspect of the present technology performs reproduction, records coded data of an extended video that is a video having a second brightness range that is wider than a first brightness range, brightness characteristic information that represents a brightness characteristic of the extended video, and brightness conversion definition information used when performing a brightness conversion of the extended video to a standard video that is a video having the first brightness range. The reproduction device decodes the coded data and converts the extended video obtained by decoding the coded data to the standard video on the basis of the brightness conversion definition information.
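The brightness conversion from extended video to standard video can be sketched as a knee-curve tone map. The knee parameters here stand in for the patent's "brightness conversion definition information"; the specific curve shape is an illustrative assumption, not the claimed method.

```python
def convert_to_standard(extended_luma, knee_point=0.5, peak=4.0):
    """Map an extended-range (wide dynamic range) luminance sample down to the
    standard range [0, 1]. Values below the knee pass through unchanged; values
    above it are compressed into the remaining standard-range headroom."""
    if extended_luma <= knee_point:
        return extended_luma                      # linear below the knee
    excess = extended_luma - knee_point
    return knee_point + (1.0 - knee_point) * excess / (peak - knee_point)

# Extended-video samples spanning the wider brightness range.
sdr = [round(convert_to_standard(v), 3) for v in (0.25, 0.5, 2.0, 4.0)]
```

A reproduction device would apply such a conversion only when the attached display cannot handle the extended range, guided by the brightness characteristic information recorded alongside the coded data.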
System and method for event data collection and video alignment
Software is provided to collect data on events that transpire during an activity. Captured video of the activity can then be aligned to those events by calculating an offset between the timestamps provided by the devices used to capture the action data and the film data. A video clip can then be sourced to review the events that occurred.
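The offset-based alignment can be sketched in a few lines. The sync samples and the fixed clip window are illustrative assumptions; in practice the two device clocks would be related by a shared sync moment or marker.

```python
def align(event_ts, action_clock_at_sync, film_clock_at_sync):
    """Translate an event timestamp from the action-data device's clock onto
    the film device's clock, using the offset between the two clocks observed
    at a shared sync moment (all times in seconds)."""
    offset = film_clock_at_sync - action_clock_at_sync
    return event_ts + offset

def clip_window(video_time, before=5.0, after=5.0):
    """Source a short clip around the aligned event for review."""
    return (max(0.0, video_time - before), video_time + after)

# Both devices observe the same sync moment on their own clocks.
video_time = align(125.0, action_clock_at_sync=100.0, film_clock_at_sync=40.0)
clip = clip_window(video_time)
```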
Food quantum tracking tools and methods related thereto
The present invention relates to novel methods, tools and systems that provide an individual with the ability to define a personalized portion and evaluate food in real-world situations to offer guidance to the individual on the amount of the food the individual should eat as part of a controlled and consistent diet based on food volume. In particular, the methods, tools and systems of the present invention define food quanta personalized to the user, and further utilize augmented reality to overlay a selected food quantum on real-world food in order to guide the user to select a definable food quantum, e.g., to achieve a dietary goal.