Patent classifications
G11B31/006
Information processing device, information processing method, and program
There is provided an information processing device including a data processing section configured to play back content associated with a feature image, based on detection of the feature image in a captured image acquired by an imaging section, and a specifying section configured to specify a resume point, which is a playback position of the content, according to the timing at which detection of the feature image becomes impossible. The data processing section resumes playback of the content from the position corresponding to the resume point upon re-detection of the feature image after detection has become impossible.
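The abstract above describes a simple state machine: play while the feature image is detected, store a resume point when detection is lost, and resume from that point on re-detection. The following is an illustrative sketch of that logic, not the patent's actual implementation; the class and method names are hypothetical.

```python
# Hypothetical playback controller: pauses when the feature image is lost,
# records a resume point, and resumes from it when the image is re-detected.

class PlaybackController:
    def __init__(self):
        self.position = 0.0       # current playback position, in seconds
        self.resume_point = None  # position stored when detection was lost
        self.playing = False

    def on_frame(self, feature_detected: bool, dt: float = 1.0):
        """Advance the controller by one captured frame."""
        if feature_detected:
            if not self.playing:
                # Re-detection: resume from the stored resume point.
                if self.resume_point is not None:
                    self.position = self.resume_point
                self.playing = True
            self.position += dt
        else:
            if self.playing:
                # Detection lost: remember where playback stopped.
                self.resume_point = self.position
                self.playing = False

ctrl = PlaybackController()
for detected in [True, True, False, False, True]:
    ctrl.on_frame(detected)
```

After this frame sequence the controller has stored a resume point at the moment of loss and resumed playback from it on the final frame.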
DIGITAL CAMERA WITH AUDIO, VISUAL AND MOTION ANALYSIS
A digital camera with audio, visual and motion analysis includes a digital processor, an input processing system, and one or more imaging sensors, sound sensors, and motion sensors. In a non-limiting embodiment, the input processing system includes non-transitory computer-readable media containing code segments, executable by the digital processor, for real-time audio, visual and motion analysis to develop a digital model of the digital camera's ambient environment from data derived from the imaging sensor(s), sound sensor(s) and motion sensor(s).
Direction indicators for panoramic images
Devices, systems and methods are disclosed for improving a display of panoramic video data by including a first angle indicator, as a visual representation of a direction of a displayed portion of the panoramic video data relative to a reference location, along with a second angle indicator indicating an object of interest. The second angle indicator may display a fixed angle or a recommended angle to improve navigation within the panoramic video data. The fixed angle may be determined by the user or the device and may remain stationary during playback of the panoramic video data, allowing the user to switch between directional views without panning. The recommended angle may be determined based on a location of a tracked object, allowing the user to display the tracked object without panning. The second angle indicator may be represented by an icon illustrating the object of interest.
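The abstract describes angle indicators as angular directions relative to a reference location. A minimal sketch of how such an angle might be computed is shown below; the helper function and coordinate convention are assumptions for illustration, not the patent's method.

```python
import math

def indicator_angle(target_xy, reference_xy):
    """Angle in degrees (0-360) from a reference location to a target,
    measured counter-clockwise from the +x axis. Hypothetical helper
    illustrating the 'angle indicator' idea."""
    dx = target_xy[0] - reference_xy[0]
    dy = target_xy[1] - reference_xy[1]
    return math.degrees(math.atan2(dy, dx)) % 360.0

# First indicator: direction of the currently displayed portion.
view_angle = indicator_angle((1.0, 1.0), (0.0, 0.0))
# Second indicator: recommended angle toward a tracked object of interest.
object_angle = indicator_angle((0.0, -2.0), (0.0, 0.0))
```

The second indicator could be held fixed during playback (the "fixed angle" case) or recomputed each frame from the tracked object's location (the "recommended angle" case).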
SURROUND VIDEO PLAYBACK
Methods and systems are disclosed including a computing device configured to allow a user to view a multi-stream video from a selected angle/direction with respect to the contents of the multi-stream video, under the user's control. The multi-stream video is generated using multiple Image Acquisition Devices (IAD), such as cameras, simultaneously, consecutively, or independently filming a scene, each IAD having a different position with respect to each of the other IADs. Each image data stream obtained from each IAD may be uniquely identified to allow selective real-time playback of image data streams under user control. Each image data stream represents a corresponding viewing angle to the user. The user may dynamically change the selection of an image stream, and thus the viewing angle, while viewing a recorded scene. Multiple image streams of the same scene may be selected and viewed simultaneously to provide 3D or other visual effects.
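Since each uniquely identified image stream corresponds to a viewing angle, selecting a stream amounts to finding the camera whose angular position best matches the requested direction. The sketch below assumes a simple data layout (stream id mapped to camera angle) purely for illustration.

```python
# Assumed layout: each IAD's stream is keyed by a unique id and tagged with
# the camera's angular position (degrees) around the scene.

def select_stream(streams, requested_angle):
    """Return the id of the stream whose camera angle is closest to the
    requested viewing angle, with wrap-around at 360 degrees."""
    def angular_distance(a, b):
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)
    return min(streams, key=lambda sid: angular_distance(streams[sid], requested_angle))

streams = {"iad-0": 0.0, "iad-1": 90.0, "iad-2": 180.0, "iad-3": 270.0}
```

Dynamically changing the viewing angle during playback then reduces to calling `select_stream` with the new angle and switching to the returned stream id.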
Perspective view entertainment system and method
In a method according to the present disclosure, a director's view version of a film is recorded. The film is then recorded from the viewpoints of different characters in the film. The director's view version and the character view versions are time-synchronized to create a film that allows a user to switch at any time between the director's view version and one or more of the character view versions during viewing. A system according to the method uses a director's view camera and at least one character view camera to record a scene. A recording processor communicates with the cameras and receives and stores in memory director's view camera data and character view camera data. The recording processor further time-synchronizes the director's view camera data and character view camera data. A viewing system has a viewing screen, a viewer-operated controller, and a viewing processor configured to display on the viewing screen at least one of the perspective views of the film scene and to switch between the perspective views of the film scene upon actuation of the viewer-operated controller by the viewer.
Sound processing system and processing method that emphasize sound from position designated in displayed video image
A recorder receives, from a user, designation of a video to be reproduced. If, during reproduction or pause of the video, the recorder receives from the user via an operation unit the designation of one or more locations on the display screen where sound is to be emphasized, a signal processing unit performs an emphasis process on the audio data: using the audio data recorded in the recorder, it emphasizes sound arriving from the directions directed from a microphone array toward the positions corresponding to the designated locations. A reproducing device reproduces the emphasis-processed audio data and the video data in synchronization with each other.
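Emphasizing sound from a chosen direction with a microphone array is commonly done with delay-and-sum beamforming: each channel is shifted by the propagation delay from the designated location and the channels are summed, reinforcing sound from that direction. The sketch below illustrates that general technique; it is a simplified assumption, not the recorder's actual signal path.

```python
# Minimal delay-and-sum sketch: remove each channel's integer sample delay
# (relative to the designated location) before summing, so sound from that
# direction adds coherently while sound from other directions does not.

def delay_and_sum(channels, delays_samples):
    """channels: list of equal-length sample lists.
    delays_samples: integer delay (in samples) to remove from each channel."""
    n = len(channels[0])
    out = [0.0] * n
    for ch, d in zip(channels, delays_samples):
        for i in range(n):
            j = i + d
            if 0 <= j < n:
                out[i] += ch[j]
    return [x / len(channels) for x in out]
```

With a unit impulse arriving one sample later on the second microphone, removing that one-sample delay realigns the two channels so the impulse sums coherently.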
Imaging apparatus
There has been a problem that continuous recording of monitoring data may be impeded by inconsistencies in setting values caused by changes to the settings of an imaging unit or compression encoding unit made during recording. Provided are a recording source which outputs a stream including compression-encoded video data; instructing means which change a parameter of the recording source according to a command transmitted from an external device connected via a network; and control means which, in accordance with the recording state of a recording job, switch whether or not a command from the instructing means to the recording source correlated with that recording job can be accepted.
SYSTEM AND METHOD FOR PRESENTING VIRTUAL REALITY CONTENT TO A USER
This disclosure describes a system configured to present primary and secondary, tertiary, etc., virtual reality content to a user. Primary virtual reality content may be displayed to a user, and, responsive to the user turning his view away from the primary virtual reality content, a sensory cue is provided to the user that indicates to the user that his view is no longer directed toward the primary virtual reality content, and secondary, tertiary, etc., virtual reality content may be displayed to the user. Primary virtual reality content may resume when the user returns his view to the primary virtual reality content. Primary virtual reality content may be adjusted based on a user's interaction with the secondary, tertiary, etc., virtual reality content. Secondary, tertiary, etc., virtual reality content may be adjusted based on a user's progression through the primary virtual reality content, or interaction with the primary virtual reality content.
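The core of the described behavior is a view-direction check: while the user's gaze stays within the angular extent of the primary content, the primary content plays; once the gaze leaves it, a sensory cue fires and secondary content is shown. A hypothetical sketch of that decision, with an assumed yaw-span representation:

```python
# Illustrative gaze-routing logic, not the disclosed system's implementation.
# primary_span is the assumed angular extent (yaw, degrees) of the primary
# virtual reality content.

def content_for_gaze(yaw_deg, primary_span=(-45.0, 45.0)):
    """Return (content, cue): content is 'primary' or 'secondary'; cue is
    True when the gaze has left the primary content's span, i.e. when the
    sensory cue should be provided to the user."""
    lo, hi = primary_span
    inside = lo <= yaw_deg <= hi
    return ("primary" if inside else "secondary", not inside)
```

Resuming primary content when the user looks back corresponds to the gaze re-entering the span, at which point the function again returns `'primary'` with no cue.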
GENERATING CONTENT FOR A VIRTUAL REALITY SYSTEM
The disclosure includes a system and method for generating virtual reality content. For example, the disclosure includes a method for generating, with a processor-based computing device programmed to perform the generating, virtual reality content that includes a stream of three-dimensional video data and a stream of three-dimensional audio data; providing the virtual reality content to a user; detecting a location of the user's gaze at the virtual reality content; and suggesting an advertisement based on the location of the user's gaze. Another example includes receiving, with a processor-based computing device programmed to perform the receiving, virtual reality content for a first user that includes a stream of three-dimensional video data and a stream of three-dimensional audio data; generating a social network for the first user; and generating a social graph that includes user interactions with the virtual reality content.
INTEGRATING DATA FROM MULTIPLE DEVICES
A recording system for an emergency response unit includes a first data collection device configured to record a first video, audio or data segment with an incident identifier and transmit a message including the incident identifier. A second data collection device may receive the message and, as appropriate, record at least a second video, audio or data segment with the incident identifier, allowing the first segment and the second segment to be associated using the incident identifier. In other embodiments, a first recording device may begin recording video, audio or legal evidence data with an incident identifier, and a control system may receive a message including the incident identifier from the first recording device, identify one or more additional recording devices located within a certain distance of the first recording device, and obtain recordings from the one or more additional recording devices.
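The association mechanism in this abstract is essentially grouping by a shared incident identifier: every device stamps its segment with the same id, so all segments for one incident can be collected afterwards. A minimal sketch of that grouping, with an assumed tuple layout for segments:

```python
# Sketch of associating recorded segments by incident identifier. Assumed
# record layout: (incident_id, device_id, segment_name).

from collections import defaultdict

def group_by_incident(segments):
    """Group recorded segments by their shared incident identifier."""
    grouped = defaultdict(list)
    for incident_id, device_id, name in segments:
        grouped[incident_id].append((device_id, name))
    return dict(grouped)
```

A control system could then retrieve every recording for an incident, regardless of which responding device captured it, by a single lookup on the incident identifier.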