Patent classifications
H04N5/92
IMAGE CAPTURING METHOD AND DISPLAY METHOD FOR RECOGNIZING A RELATIONSHIP AMONG A PLURALITY OF IMAGES DISPLAYED ON A DISPLAY SCREEN
An image capturing apparatus includes a capture part that acquires image data and a display that displays an image based on the image data. First information acquired for each image includes at least either an azimuth angle or an elevation angle as the direction of the image, and at least either the angle of view of the image or angle-of-view-related information from which the angle of view can be calculated. When a second direction of a second image falls within the range of a first angle of view centered on a first direction of a first image, the second image is associated with the first image. The first image is then displayed on the display, and the associated second image, or second information indicating the second image, is displayed overlapped on the first image.
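The association rule above can be sketched as a simple angular test: a second image is associated with a first image when its direction falls inside the first image's angle of view. This is an illustrative sketch only; the function name and the use of a plain angular difference (with azimuth wraparound) are assumptions, not the patented method.

```python
def within_angle_of_view(first_azimuth, first_elevation, angle_of_view,
                         second_azimuth, second_elevation):
    """Return True when the second image's direction (azimuth, elevation,
    in degrees) lies inside the first image's angle of view."""
    # Smallest signed difference between the two azimuth angles,
    # normalized into [-180, 180) to handle wraparound at 360 degrees.
    d_az = (second_azimuth - first_azimuth + 180.0) % 360.0 - 180.0
    d_el = second_elevation - first_elevation
    half = angle_of_view / 2.0
    return abs(d_az) <= half and abs(d_el) <= half

# A second image captured 10 degrees right and 5 degrees up of a first
# image with a 60-degree angle of view would be associated (overlaid).
print(within_angle_of_view(0.0, 0.0, 60.0, 10.0, 5.0))  # True
```

With such a predicate, the display step would overlay each associated second image (or an indicator for it) on the first image.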
Fast in-place FMP4 to MP4 conversion
In-place conversion of a fragmented MP4 (FMP4) file into the MP4 format, without having to create separate files, is discussed herein. Audio and video data are captured by a multimedia application and stored in an FMP4 file with an initial moov atom and one or more fragment headers assigned to portions of the audio/video data. Once a capturing session is completed (e.g., the user stops recording audio and video), the FMP4 file is converted to an MP4 file by attaching a final moov atom to the FMP4 file and changing the header designation of the initial moov(i) atom from “moov” to an “mdat” designation. This change in designation makes the initial moov(i) atom of the FMP4 file opaque to a media player and converts the FMP4 file to the MP4 format.
Conditional camera control via automated assistant commands
Implementations set forth herein relate to an automated assistant that can control a camera according to one or more conditions specified by a user. A condition can be satisfied when, for example, the automated assistant detects that a particular environment feature is apparent. In this way, the user can rely on the automated assistant to identify and capture certain moments without having to constantly monitor a viewing window of the camera. In some implementations, a condition for the automated assistant to capture media data can be based on application data and/or other contextual data associated with the automated assistant. For instance, a relationship between content in a camera viewing window and other content of an application interface can be a condition upon which the automated assistant captures certain media data using a camera.
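The condition-gated capture loop described above can be sketched as follows. The function name, the frame representation, and the `condition`/`capture` callables are all hypothetical stand-ins for the assistant's feature detector and camera API, not the implementations set forth in the abstract.

```python
def capture_when_condition(frames, condition, capture):
    """Monitor camera frames and capture media only when the
    user-specified condition is satisfied (illustrative sketch)."""
    captured = []
    for frame in frames:
        if condition(frame):  # e.g. "a particular environment feature is apparent"
            captured.append(capture(frame))
    return captured

# Hypothetical usage: capture only frames in which a "ball" feature appears.
frames = [{"features": []}, {"features": ["ball"]}, {"features": ["tree"]}]
shots = capture_when_condition(frames,
                               lambda f: "ball" in f["features"],
                               lambda f: f)
print(len(shots))  # 1
```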
Apparatus and method for video-audio processing, and program for separating an object sound corresponding to a selected video object
The present technique relates to an apparatus, a method, and a program for video-audio processing, each of which enables a desired object sound to be separated more simply and accurately. A video-audio processing apparatus includes a display control portion configured to cause a video object based on a video signal to be displayed; an object selecting portion configured to select a predetermined video object from the one video object or from among a plurality of video objects; and an extraction portion configured to extract an audio signal of the video object selected by the object selecting portion as an audio object signal.
SURGICAL TRACKING AND PROCEDURAL MAP ANALYSIS TOOL
In some embodiments, methods and systems are provided for accessing a surgical dataset including surgical data collected during performance of a surgical procedure. The surgical data can include video data of the surgical procedure. Using the surgical data, a plurality of procedural states associated with the surgical procedure can be determined. For a procedural state of the plurality of procedural states, temporal information can be identified that identifies a part of the video data to be associated with the procedural state. For the procedural state, electronic data can be generated that characterizes that part of the video data, and the electronic data associated with the plurality of procedural states can be output.
Video display device and video display method
A video display device includes: a video receiver that receives video data including a video and dynamic luminance characteristics indicating a time-dependent change in luminance characteristics of the video; a tone mapping processor that, in the case where a luminance region having a luminance less than or equal to a first luminance is defined as a low luminance region, and a luminance region having a luminance exceeding the first luminance is defined as a high luminance region, (i) performs first tone mapping using first conversion characteristics when first luminance characteristics exceed a predetermined threshold value, and (ii) performs second tone mapping using second conversion characteristics when the first luminance characteristics are less than or equal to the predetermined threshold value.
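The threshold-based selection between the two tone-mapping operations can be sketched as below. The curve shapes and the 500/600-nit figures are placeholders chosen for illustration; only the selection rule (first conversion characteristics when the luminance characteristic exceeds the threshold, second otherwise) comes from the abstract.

```python
def tone_map(luminance_values, first_luminance_stat, threshold,
             first_curve, second_curve):
    """Apply the first or second tone-mapping curve depending on whether
    the frame's dynamic luminance characteristic exceeds the threshold."""
    curve = first_curve if first_luminance_stat > threshold else second_curve
    return [curve(v) for v in luminance_values]

# Hypothetical curves: compress highlights above 500 nits vs. pass through.
compress = lambda v: min(v, 500) + max(v - 500, 0) * 0.25
identity = lambda v: v

# A bright frame (peak 1000 nits) exceeding a 600-nit threshold uses the
# compressing curve; a dimmer frame would use the identity curve instead.
print(tone_map([100, 1000], 1000, 600, compress, identity))  # [100.0, 625.0]
```

Because the characteristic is dynamic (per scene or per frame), the selected curve can change over the course of the video.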
Video distribution system, video generation method, and reproduction device
A video generation device receives sound information and first timing information from a distribution apparatus. The video generation device causes a video-taking device to take a video and reproduces one or more sounds indicated by the received sound information while the video is being taken. Based on the received first timing information, the video generation device identifies one or more timings at which the one or more sounds indicated by the received sound information are reproduced. The video generation device then causes video information to be generated that indicates the taken video and includes second timing information indicating the one or more identified timings.