Patent classifications
H04N5/92
Miniature high definition camera for visible, infrared, and low light applications
A miniature high definition camera system which converts parallel data to serial data at the camera and then back to parallel data at a remote digital video recorder to avoid signal attenuation issues known to occur with parallel data transmitted across data cables. The camera system features video sensors to permit recording in visible, infrared, and ultraviolet wavelengths as well as in low light for night vision.
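The core of this design is a serializer/deserializer: parallel data words are converted to a serial bit stream at the camera and reassembled at the recorder. The sketch below is a minimal, hypothetical illustration of that round trip in software; the function names and the MSB-first bit ordering are assumptions, not details from the abstract.

```python
def serialize(words, width=8):
    """Convert parallel data words into a serial bit stream (MSB first)."""
    bits = []
    for word in words:
        for i in reversed(range(width)):
            bits.append((word >> i) & 1)
    return bits


def deserialize(bits, width=8):
    """Reassemble a serial bit stream into parallel data words."""
    words = []
    for i in range(0, len(bits), width):
        word = 0
        for bit in bits[i:i + width]:
            word = (word << 1) | bit
        words.append(word)
    return words
```

Sending one bit at a time over the cable avoids the skew and attenuation differences that arise when many parallel lines must stay in lockstep.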
LIVE STYLE TRANSFER ON A MOBILE DEVICE
Various embodiments of the present invention relate generally to systems and processes for transforming a style of video data. In one embodiment, a neural network is used to interpolate native video data received from a camera system on a mobile device in real-time. The interpolation converts the live native video data into a particular style. For example, the style can be associated with a particular artist or a particular theme. The stylized video data can be viewed on a display of the mobile device in a manner similar to that in which native live video data is output to the display. Thus, the stylized video data, which is viewed on the display, is consistent with a current position and orientation of the camera system.
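The live pipeline described above can be sketched as a per-frame transform applied as frames arrive, so the display always shows the camera's current view, just stylized. In this hypothetical sketch a simple sepia color transform stands in for the neural style model; the frame representation (nested lists of RGB tuples) and all function names are illustrative assumptions.

```python
def sepia_style(frame):
    """Stand-in 'style': a sepia color transform applied per pixel."""
    styled = []
    for row in frame:
        out_row = []
        for (r, g, b) in row:
            tr = min(255, int(0.393 * r + 0.769 * g + 0.189 * b))
            tg = min(255, int(0.349 * r + 0.686 * g + 0.168 * b))
            tb = min(255, int(0.272 * r + 0.534 * g + 0.131 * b))
            out_row.append((tr, tg, tb))
        styled.append(out_row)
    return styled


def stylize_live(frames, style_fn):
    """Apply the style to each incoming frame in capture order,
    yielding stylized frames for immediate display."""
    for frame in frames:
        yield style_fn(frame)
```

A real implementation would run the style network on the GPU per frame; the generator structure above is what keeps the stylized output tied to the camera's current position and orientation.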
Display processing apparatus and method, and storage medium
A display processing apparatus acquires a plurality of images and time data corresponding to the respective images. The display processing apparatus selects, as the time interval at which the plurality of images are switched and displayed one by one, either a time interval based on the difference between time data corresponding to images before and after switching, or a predetermined interval. The display processing apparatus switches the plurality of images at the selected time interval, and displays them on a display device.
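The interval-selection step can be sketched as a small decision function: use the gap between the two images' time data when it is usable, otherwise fall back to a predetermined interval. This is an illustrative reading of the abstract; the threshold and default values are assumptions.

```python
def choose_interval(t_prev, t_next, default=3.0, max_interval=10.0):
    """Pick the display interval for switching from one image to the next:
    the timestamp gap when it is positive and not too long, otherwise a
    predetermined default (threshold values are illustrative)."""
    gap = t_next - t_prev
    if 0 < gap <= max_interval:
        return gap
    return default
```

Driving a slideshow with this function makes bursts of photos flip quickly while widely spaced photos fall back to a steady cadence.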
Imaging device, imaging method, and program
An imaging device including an image sensor that captures a video; and a processor configured to extract a first frame from a plurality of frames constituting the video captured by the image sensor to generate a first static image file, generate a video file constituted of a plurality of frames including the first frame from the video, and store the video file, the first static image file, and additional information indicating a position of the first frame in the video file.
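The storage layout described, a static image file, a video file, and additional information locating the extracted frame within the video, can be sketched as follows. The function and key names are hypothetical; frames are modeled as opaque objects.

```python
def package_capture(frames, snapshot_index):
    """Bundle a capture: a static image for one frame, the video file,
    and metadata recording that frame's position in the video."""
    static_image = frames[snapshot_index]
    video_file = list(frames)
    metadata = {"static_frame_position": snapshot_index}
    return static_image, video_file, metadata
```

Keeping the position in metadata lets a player jump straight from the still photo to the corresponding moment in the video.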
Image pickup apparatus having function of recording voice data, control method for image pickup apparatus, and storage medium
An image pickup apparatus which reduces variations in volume of voice data recorded by a voice memo function without increasing the number of components therein. The image pickup apparatus, having a first and a second display part, determines a display destination of image data according to a detection result of an eye approach detection part, and performs synthesis processing of adding voice data recorded by a sound collecting member to the image data, wherein in a case where user's eye approach is detected by the eye approach detection part and the image data is displayed on the second display part, when voice recording is started by a user operation, a first sound collecting sensitivity adjustment process of adjusting a sound collecting sensitivity of the sound collecting member is performed.
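The conditional adjustment can be sketched as a function that returns the sensitivity to use when a voice memo starts: the first adjustment applies only when the eye-approach sensor fires and the image is on the second (in-finder) display. The numeric gain values here are illustrative assumptions, not figures from the abstract.

```python
def memo_sensitivity(eye_at_finder, shown_on_second_display,
                     base=1.0, boost=2.0):
    """Return the sound-collecting sensitivity for a voice memo.
    Boost only when the user's eye approach is detected and the image
    is shown on the second display (gain values are hypothetical)."""
    if eye_at_finder and shown_on_second_display:
        return base * boost
    return base
```

Adjusting gain in software, rather than adding a second microphone, is what keeps the component count unchanged.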
INTERACTIVE PLAYBACK OF A VIDEO CONFERENCE
Systems and methods for interactive playback of a video conference are provided. A request is received for a playback of a video conference between a plurality of participants of a plurality of client devices that each generated one of a plurality of source video streams, where each source video stream was presented during a live stream of the video conference according to a particular layout in a user interface (UI) on a first client device of the plurality of client devices. Playback of the video conference is caused at a second client device, wherein causing playback of the video conference comprises transmitting, to a second client device, each source video stream of the plurality of source video streams for visual rendering according to the particular layout in a UI on the second client device; and capturing a first set of user interaction events associated with one or more time points during playback of the video conference, wherein each user interaction event of the first set is visually rendered at a corresponding time point during playback of the video conference.
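The event-capture side of this playback can be sketched as a session object that renders the source streams in the original layout while recording each user interaction event together with the playback time point at which it occurred. Class and method names below are illustrative assumptions.

```python
class PlaybackSession:
    """Replays source streams in the original layout and records user
    interaction events keyed by playback time point."""

    def __init__(self, streams, layout):
        self.streams = streams      # one stream per original participant
        self.layout = layout        # layout used in the live conference UI
        self.events = []            # (time_point, event) pairs

    def record_event(self, time_point, event):
        """Capture a user interaction event at a playback time point."""
        self.events.append((time_point, event))

    def events_at(self, time_point):
        """Return the events to render at a given playback time point."""
        return [e for t, e in self.events if t == time_point]
```

On a later replay, the recorded events can be visually rendered at their corresponding time points, layering each viewer's interactions onto the conference.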
CONDITIONAL CAMERA CONTROL VIA AUTOMATED ASSISTANT COMMANDS
Implementations set forth herein relate to an automated assistant that can control a camera according to one or more conditions specified by a user. A condition can be satisfied when, for example, the automated assistant detects a particular environment feature is apparent. In this way, the user can rely on the automated assistant to identify and capture certain moments without necessarily requiring the user to constantly monitor a viewing window of the camera. In some implementations, a condition for the automated assistant to capture media data can be based on application data and/or other contextual data that is associated with the automated assistant. For instance, a relationship between content in a camera viewing window and other content of an application interface can be a condition upon which the automated assistant captures certain media data using a camera.
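The condition-gated capture can be sketched as a loop that evaluates every user-specified condition against the current context (environment features, application data, and so on) and captures a frame only when all of them hold. The predicate-based design below is an illustrative assumption.

```python
def conditional_capture(conditions, frames, contexts):
    """Capture a frame only when every user-specified condition is
    satisfied by the frame's context (a minimal sketch)."""
    captured = []
    for frame, ctx in zip(frames, contexts):
        if all(cond(ctx) for cond in conditions):
            captured.append(frame)
    return captured
```

Because conditions are plain predicates over context data, the assistant can combine camera-derived features with application data in a single check.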
System for creating a composite image and methods for use therewith
A system includes a video device for capturing, at a viewing time, a first video image corresponding to a foundation scene at a setting, the foundation scene viewed at the viewing time from a vantage position. A memory stores a library of image data including media generated at a time prior to the viewing time. A vantage position monitor tracks the vantage position of a human viewer and generates vantage position data. A digital video data controller selects from the image data in the library, at the viewing time and based on the vantage position data, a plurality of second images corresponding to a modifying scene at the setting, the modifying scene further corresponding to the vantage position. A combiner combines the first video image and the plurality of second images to create a composite image for display.
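The select-and-combine step can be sketched as follows: library images are keyed by vantage position, and the combiner overlays their non-empty pixels onto the live image. The keying scheme and the "non-None pixels win" overlay rule are simplifying assumptions, not the patented combiner.

```python
def create_composite(first_image, library, vantage):
    """Select library images matching the current vantage position and
    overlay them on the live (foundation) image. Pixels are opaque
    unless None (an illustrative simplification)."""
    modifiers = library.get(vantage, [])
    out = [row[:] for row in first_image]       # copy the live image
    for mod in modifiers:
        for y, row in enumerate(mod):
            for x, px in enumerate(row):
                if px is not None:              # overlay opaque pixels
                    out[y][x] = px
    return out
```

Because selection is driven by the tracked vantage position, the overlaid material stays registered with what the viewer actually sees from that position.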