Patent classifications
H04N5/2627
Photobooth kiosk
The present inventive concept relates to a kiosk design for an advanced photographic system. More specifically, the present inventive concept relates to a self-contained, automated photobooth kiosk. In embodiments of the present inventive concept, the photobooth kiosk is capable of taking a 360-degree panoramic photograph or sequence of photographs of a subject and the surrounding background. For instance, a customer of the photobooth kiosk may stand in the center of the photobooth and have his or her picture taken by a plurality of specialized machine vision cameras, with the images sent to a central processor such as a computer for processing into a 360-degree panoramic photograph or video clip. After the photo-taking session, the customer may collect prints of the pictures at the kiosk, similar to presently available photobooths. The photobooth kiosk may be fully automated such that no operator is necessary, and all options and features desired by the customer may be self-selected by the customer before, during, and after the photo-taking session.
METHOD FOR ACHIEVING BULLET TIME CAPTURING EFFECT AND PANORAMIC CAMERA
Provided in the present invention are a method for achieving a bullet time capturing effect and a panoramic camera. The method comprises: acquiring a panoramic video captured while a panoramic camera rotates around a capture target; extracting from the panoramic video the hemispherical images facing the capture target; stitching the hemispherical images to generate a stitched image; and fixing a viewpoint of the stitched image, thereby achieving a bullet time capturing effect. According to the present invention, only one panoramic camera is needed to capture the bullet time effect, so the capturing cost is low. Moreover, because the effect is obtained by capturing a panoramic video while the panoramic camera rotates around the capture target and then processing that video, the precision is high.
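The fixed-viewpoint step described above can be sketched as follows, under the simplifying assumptions that the panoramic video is already decoded into equirectangular frames and that the camera's per-frame yaw toward the target is known; the function name and the re-centering-by-column-shift approach are illustrative, not the patent's actual implementation:

```python
import numpy as np

def fix_viewpoint(frames, yaws_deg):
    """For each equirectangular frame, shift the panorama horizontally so the
    direction of the capture target (at yaw yaws_deg[i]) lands at column 0,
    emulating a fixed viewpoint while the camera orbits the target.

    frames: list of HxWx3 uint8 arrays; yaws_deg: per-frame yaw toward target.
    """
    out = []
    for frame, yaw in zip(frames, yaws_deg):
        h, w, _ = frame.shape
        # Column offset: yaw in degrees maps onto the 360-degree image width.
        shift = int(round((yaw / 360.0) * w))
        out.append(np.roll(frame, -shift, axis=1))
    return out
```

Because every output frame is re-centred on the same world direction, playing the sequence back gives the orbit-around-a-frozen-subject look associated with bullet time.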
METHOD TO CONFIGURE A VIRTUAL CAMERA PATH
A computer-implemented system and method of configuring a path of a virtual camera. The method comprises receiving user steering information to control the path of the virtual camera in a scene; determining a primary target based upon a field of view of the virtual camera; and estimating a future path and a corresponding future field of view of the virtual camera, based on the received steering information. The method further comprises determining a secondary target of the scene proximate to the estimated future path of the virtual camera based on a preferred perspective of the secondary target; and configuring the path to capture the secondary target from the preferred perspective.
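As an illustration of the claimed steps, a minimal sketch might extrapolate a future path from the current steering state and then scan candidate targets for one near that path; the function names, the linear-extrapolation model, and the fixed proximity radius are assumptions for illustration, not the patented method:

```python
import numpy as np

def estimate_future_path(pos, vel, steps=30, dt=1.0 / 30.0):
    """Linearly extrapolate the virtual camera path from its current
    position and steering velocity (a deliberate simplification of the
    'estimating a future path' step)."""
    return np.array([pos + vel * dt * i for i in range(1, steps + 1)])

def find_secondary_target(path, targets, radius=5.0):
    """Return the first candidate target whose distance to the estimated
    path is within `radius`, i.e. a secondary target 'proximate to the
    estimated future path'; returns None if no candidate qualifies."""
    for t in targets:
        if np.min(np.linalg.norm(path - t, axis=1)) <= radius:
            return t
    return None
```

A full implementation would then bend the path so the camera passes the secondary target from its preferred perspective; only the detection step is sketched here.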
View interpolation for visual storytelling
A plurality of frames of a video recorded by a video camera and depth maps of the plurality of frames are stored in a data storage. One or more target video camera positions are determined. Each frame of the plurality of frames is associated with one or more of the target video camera positions. For each frame, one or more synthesized frames from the viewpoint of the one or more target camera positions associated with that frame are generated by applying a view interpolation algorithm to that frame using the color pixels of that frame and the depth map of that frame. Users can provide their input about the new camera positions and other camera parameters through multiple input modalities. The synthesized frames are concatenated to create a modified video. Other embodiments are also described and claimed.
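A minimal depth-based forward warp conveys the idea of synthesizing a frame from a new target camera position; the pinhole model, the pure-translation camera move, and the absence of disocclusion hole filling are simplifying assumptions, not the patent's view interpolation algorithm:

```python
import numpy as np

def synthesize_view(color, depth, fx, fy, cx, cy, t):
    """Forward-warp one frame to a camera translated by t = (tx, ty, tz),
    using the frame's depth map: back-project each pixel to 3-D, move the
    camera, and re-project (point splatting; holes are left black).

    color: HxWx3 uint8; depth: HxW metric depth; fx, fy, cx, cy: intrinsics.
    """
    h, w, _ = color.shape
    ys, xs = np.mgrid[0:h, 0:w]
    z = depth
    # Back-project pixels to 3-D camera coordinates.
    X = (xs - cx) * z / fx
    Y = (ys - cy) * z / fy
    # Translate to the target camera and re-project to pixel coordinates.
    Xn, Yn, Zn = X - t[0], Y - t[1], z - t[2]
    un = np.round(Xn * fx / Zn + cx).astype(int)
    vn = np.round(Yn * fy / Zn + cy).astype(int)
    out = np.zeros_like(color)
    valid = (un >= 0) & (un < w) & (vn >= 0) & (vn < h) & (Zn > 0)
    out[vn[valid], un[valid]] = color[ys[valid], xs[valid]]
    return out
```

Running this per frame for each associated target camera position, then concatenating the outputs, mirrors the modified-video pipeline the abstract describes.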
Pulsating Image
A system for creating a pulsating image, comprising an electronic device that comprises a camera and is configured to communicate with a storage, and a designated application running on the electronic device. The designated application comprises: a capturing module configured to communicate with the camera and receive, upon a user request, a micro video having predetermined characteristics; and a packetizing module configured to receive the micro video, create from it a pulsating image and a pulsating thumbnail of that image, and packetize the pulsating image and the pulsating thumbnail.
Video processing device, video processing method, and program
The present technology relates to a video processing device, a video processing method, and a program for providing a bullet time video centered on a moving object. The video processing device acquires viewpoint position information indicating the range of movement of a viewpoint position for an object in input image frames in which the object is imaged in chronological order at multiple different viewpoint positions. Time information indicates a time range within the chronological order in which the input image frames were captured. The device processes the input image frames such that the object remains at a specific position on an output image frame while the viewpoint position moves within the time range indicated by the time information and within the movement range indicated by the viewpoint position information.
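The idea of keeping the moving object at a specific position on the output frame can be sketched as a per-frame shift driven by the tracked object position; the wrap-around `np.roll` shift (a real system would pad or crop instead) and the function names are illustrative assumptions:

```python
import numpy as np

def stabilize_on_object(frames, obj_positions, target_xy):
    """Shift each frame so the tracked object's (x, y) position maps to the
    fixed point target_xy on the output frame, so the object stays put while
    the viewpoint orbits around it.

    frames: list of HxW(xC) arrays; obj_positions: per-frame (x, y) of the
    object; target_xy: desired (x, y) of the object on every output frame.
    """
    out = []
    for frame, (ox, oy) in zip(frames, obj_positions):
        dx, dy = target_xy[0] - ox, target_xy[1] - oy
        # np.roll wraps pixels around the border; adequate for a sketch.
        out.append(np.roll(np.roll(frame, dy, axis=0), dx, axis=1))
    return out
```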
METHOD AND SYSTEM FOR SYNTHESIZING A LANE IMAGE
A method for synthesizing a lane image is proposed in the present application. The method includes the following steps. M continuous image frames are retrieved at a frame rate f from a video image capture device. A quantity N of image frames for mapping is determined based on the dash length L of a dashed lane line and the distance S between two adjacent dashes. A frame interval for mapping image frames is determined based on the dash length L, the distance S, a velocity v, and the frame rate f. At least N image frames are retrieved from the M continuous image frames at the frame interval, and the at least N image frames are synthesized by an image synthesizing device to obtain the lane image.
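The abstract gives no formulas, but one plausible reading is that the capture platform advances v / f metres per frame, so selected frames should be spaced one dash-plus-gap period (L + S) apart; the sketch below, including the hypothetical `road_length` parameter used to pick N, is an assumption for illustration, not the patented computation:

```python
import math

def plan_frame_selection(L, S, v, f, road_length):
    """Hypothetical reading of the selection scheme: space selected frames
    one dash cycle (L + S metres) apart, given speed v (m/s) and frame
    rate f (frames/s), and map enough frames to cover road_length metres.

    Returns (N, frame_interval): how many frames to map and how many
    captured frames lie between consecutive selected ones.
    """
    period = L + S                         # metres per dash-plus-gap cycle
    frame_interval = max(1, round(period * f / v))
    N = math.ceil(road_length / period)    # dash cycles covering the stretch
    return N, frame_interval
```

For example, with 3 m dashes, 6 m gaps, 18 m/s, and 30 fps, one dash cycle passes every 15 captured frames, so every 15th frame would be mapped.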
Image processing apparatus
An image processing apparatus determines, when a freeze signal is input, an image of a freeze target, or determines, when no freeze signal is input, the latest image as the image to be displayed. It performs color-balance adjustments using first and second parameters on the imaging signal of the determined image, thereby generating first and second imaging signals, respectively; generates a display-purpose imaging signal based on the generated first imaging signal; detects signals of plural color components included in the second imaging signal; and calculates, based on the detected signals, a color-balance parameter for the color-balance adjustment. When no freeze signal is input, the apparatus sets the latest calculated color-balance parameter as both the first and the second parameters; when the freeze signal is input, it sets the color-balance parameter corresponding to the image of the freeze target as the first parameter and the latest color-balance parameter as the second parameter.
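The dual-parameter scheme can be sketched as a small state machine: during a freeze the display keeps the frozen image's parameter while measurement continues on live frames, so the balance is current the moment the freeze is released. The class and method names, and the scalar parameters standing in for full white-balance gains, are illustrative assumptions:

```python
class FreezeColorBalance:
    """Minimal sketch of the dual color-balance parameter scheme: `first`
    is applied to the displayed image, `second` to the measurement signal."""

    def __init__(self, initial_param=1.0):
        self.first = initial_param    # parameter for the displayed image
        self.second = initial_param   # parameter for the measured signal
        self.frozen = False

    def on_new_frame(self, measured_param):
        # measured_param: color-balance parameter computed from the color
        # components of the latest frame's second imaging signal.
        if self.frozen:
            self.second = measured_param   # keep tracking live frames only
        else:
            self.first = measured_param    # no freeze: both follow latest
            self.second = measured_param

    def set_freeze(self, frozen, frozen_image_param=None):
        # On freeze, lock the display parameter to the frozen image's value.
        self.frozen = frozen
        if frozen and frozen_image_param is not None:
            self.first = frozen_image_param
```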