Patent classifications
H04N5/2625
METHOD AND APPARATUS FOR IMAGING A SAMPLE USING A MICROSCOPE SCANNER
A microscope scanner is provided comprising a detector array for obtaining an image from a sample and a sample holder configured to move relative to the detector array. The sample holder can be configured to move to a plurality of target positions relative to the detector array in accordance with position control signals issued by a controller, and the detector array is configured to capture images during an imaging scan based on the position control signals.
IMAGING DEVICE AND IMAGE PROCESSING METHOD
There is provided an imaging device including: an image processing unit; a face recognition processing unit; a storage unit; and a composition processing unit for generating composite data by a composition process so that the persons photographed in each of a plurality of image data items appear in a single image. The face recognition processing unit recognizes a first person by performing a face recognition process on first image data. When second image data, obtained by photographing a second person against the same background as the first image data but at a different photographing timing, is recorded in the storage unit, the composition processing unit generates composite data in which the first person and the second person are superimposed on the same background.
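The compositing step described above can be sketched as follows, assuming the two photos share a pixel-aligned background and that an upstream segmentation step (not described in the abstract) supplies a boolean mask for the second person. The function name, mask representation, and grayscale pixel format are illustrative assumptions, not details from the patent.

```python
def composite_same_background(first_image, second_image, second_person_mask):
    """Paste the second person's pixels onto the first image.

    first_image / second_image: 2-D lists of pixel values taken against
    the same background; second_person_mask: 2-D list of booleans, True
    where the second person appears. The first image already contains
    the first person and the shared background, so copying only the
    masked pixels yields both persons on one background.
    """
    return [
        [s if m else f for f, s, m in zip(f_row, s_row, m_row)]
        for f_row, s_row, m_row in zip(first_image, second_image, second_person_mask)
    ]
```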
METHOD FOR TRANSFERRING AT LEAST ONE IMAGE CONTENT TO AT LEAST ONE VIEWER
A method for transmitting at least one image content to at least one viewer. The image contents in the method may be represented by a plurality of individual images. The method may also have a display periodically showing the plurality of individual images one after the other, and a camera outputting a plurality of pictures of a scene at least partially containing the display. The camera outputs at least one of the shown individual images for each picture, and the individual images of at least one of the image contents are transmitted to the at least one viewer.
VISUAL EXPERIENCE MODULATION BASED ON STROBOSCOPIC EFFECT
An approach is disclosed for modifying a viewing experience in real time by removing or reinforcing the stroboscopic effect in its associated images. The approach includes identifying video clips, detecting environmental parameters, and calculating display settings. The approach also analyzes the display settings using recommendations from a generative adversarial network (GAN), outputs the display settings on an AR display, and receives feedback from the user.
Display control device, display control method, and program
The present invention relates to a display control device, a display control method, and a program that enable various playback displays using frame images subjected to compositing processing as a motion transition image. A recording medium has recorded therein, as a series of recording frame image data, composited image data generated by sequentially performing image compositing processing on each input of frame image data; the composited image data enables a moving-image display in which the moving subject image of each frame image data is sequentially placed so as to be arrayed in a predetermined direction, with each recording frame image data being playable. In accordance with an operation of an operation input unit such as a touch panel, a frame position to be played is determined from the series of recording frame image data, and the recording frame image data at the determined frame position is played. Thus, the user can easily view a desired scene from the motion transition image.
SYSTEM AND METHOD FOR REAL-TIME CAMERA TRACKING TO FORM A COMPOSITE IMAGE
A system and method for tracking the movement of a recording device to form a composite image is provided. The system has a user device with a sensor array capturing motion data and velocity vector data of the recording device when the recording device is in motion, an attachment member for coupling the user device to the recording device, and a server with program modules. The program modules described are a calibration module for calibrating a position of the user device relative to a position of a lens of the recording device, a recorder module for receiving the motion data and velocity vector data from the sensor array, and a conversion module for combining the position of the user device relative to the lens of the recording device with the motion data and velocity vector data and transforming the data into a file that is usable by a compositing suite, a three-dimensional application, or both.
APPARATUS AND METHOD FOR FILMING A SCENE USING LIGHTING SETUPS ACTUATED REPEATEDLY DURING EACH ENTIRE FRAME WITHOUT VISIBLE FLICKER ON SET WHILE ACQUIRING IMAGES SYNCHRONOUSLY WITH THE LIGHTING SETUPS ONLY DURING A PORTION OF EACH FRAME
Apparatus and method for filming a scene using a plurality of strobable lighting setups in rapid sequence to concurrently record a plurality of motion picture clips of the scene, one motion picture clip for each strobable lighting setup. The apparatus includes a plurality of strobable light sources that are coordinated to form the plurality of lighting setups, a controller to actuate the strobable lighting setups at a constant rate in a sequence that repeats multiple times during each macro frame, and a camera to capture a burst sequence of images within each macro frame. The burst sequence of images shows the scene illuminated by each one of the plurality of lighting setups in sequence during each macro frame. Since the constant rate is above the flicker threshold, people on set viewing the scene illuminated by the repeating sequence of strobable lighting setups perceive apparently continuous (non-flickering) illumination of the scene.
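The macro-frame scheme above can be sketched in Python: the burst of images captured within the macro frames is demultiplexed into one clip per lighting setup, and the setup actuation rate is compared against the flicker-fusion threshold. The function names, the assumption that images arrive in the same fixed order the controller actuates the setups, and the ~60 Hz threshold figure are all illustrative assumptions, not details stated in the abstract.

```python
def demux_macro_frames(burst_images, num_setups):
    """Split burst images into one clip per lighting setup.

    Assumes the camera captures exactly one image per setup actuation
    and that the setups repeat in a fixed order, so image i belongs to
    setup i mod num_setups.
    """
    clips = [[] for _ in range(num_setups)]
    for i, image in enumerate(burst_images):
        clips[i % num_setups].append(image)
    return clips


def setup_actuation_rate_hz(num_setups, repeats_per_macro_frame, macro_frame_rate_hz):
    """Rate at which the lighting sequence advances; the abstract requires
    this to exceed the human flicker threshold (commonly taken as roughly
    60 Hz, an assumed figure) so the crew perceives continuous light."""
    return num_setups * repeats_per_macro_frame * macro_frame_rate_hz
```

For example, three setups repeated four times per macro frame at a 24 fps macro-frame rate would advance the lighting sequence 288 times per second, well above a 60 Hz flicker threshold.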
Method and System for Synthesizing a Lane Image
A method for synthesizing a lane image is proposed in the present application. The method includes the following steps. M continuous image frames are retrieved at a frame rate f from a video image capture device. A quantity N for image mapping is determined based on a dash length L of a dashed lane line and a distance S between two dashes of the dashed line. A frame interval for mapping image frames is determined based on the dash length L, the distance S, a velocity v, and the frame rate f. At least N image frames are retrieved from the M continuous image frames at the frame interval, and the at least N image frames are synthesized by an image synthesizing device to obtain the lane image.
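The frame-selection step can be sketched as below. The abstract only states which quantities N and the frame interval depend on, so the exact formulas here are assumptions: one dash-plus-gap cycle spans L + S metres, N frames are chosen to tile the road span being reconstructed, and the frame interval corresponds to the time the vehicle takes to travel one cycle.

```python
def plan_lane_synthesis(L, S, v, f, road_length):
    """Sketch of the frame-selection step (all formulas assumed).

    L: dash length (m); S: gap between dashes (m);
    v: velocity of the capture device (m/s); f: frame rate (frames/s);
    road_length: road span to reconstruct (m).
    Returns (N, frame_interval): the number of frames to map and the
    spacing, in captured frames, between consecutive mapped frames.
    """
    period = L + S                          # one dash-plus-gap cycle (m)
    N = max(1, -(-int(road_length) // int(period)))  # ceil division
    # Frames elapsed while the vehicle travels one dash period.
    frame_interval = max(1, round(f * period / v))
    return N, frame_interval
```

With a 3 m dash, 6 m gap, 15 m/s velocity, and 30 fps capture, mapping a 90 m span needs 10 frames taken every 18 captured frames.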
APPARATUS AND METHOD FOR RECORDING A SCENE FOR A PLURALITY OF LIGHTING SETUPS USING A VARIABLE FRAME RATE CAMERA TO CAPTURE MICRO FRAMES DURING ONLY A PORTION OF EACH CINEMATIC FRAME
Apparatus for recording a scene using a plurality of lighting setups in rapid sequence to concurrently record a plurality of motion picture clips of the scene, one motion picture clip for each lighting setup, the plurality of clips together exhibiting negligible motion offset. The apparatus includes multiple light sources, a controller to define the plurality of lighting setups using the multiple light sources and to actuate the lighting setups in sequence, a variable frame rate camera to capture a sequence of micro frames showing the scene illuminated by each one of the plurality of lighting setups in sequence during each micro frame, and optionally a processing module to process the sequence of micro frames to generate a motion picture clip of the scene for each of the lighting setups. The duration of the micro frame sequence is short enough to minimize the need for an algorithm for removing motion artifacts.
METHOD AND APPARATUS FOR DETERMINING PHOTOGRAPHING DELAY TIME, AND PHOTOGRAPHING DEVICE
A method and an apparatus for determining a photographing delay time, and a photographing device, are disclosed. The method includes: controlling the photographing device to form, according to a preset imaging cycle, an image of a photographed object, and storing the imaged photo; each time photographing is initiated, displaying the imaged photos and receiving a first operation instruction entered by a user; determining, according to the first operation instruction, a corresponding target imaged photo; calculating the difference between the initiation moment of each photographing and the imaging moment of the corresponding target imaged photo, to obtain a delay time for a single photographing; and calculating the average of the delay times corresponding to at least two photographing operations, where the average may be used as a standard delay time caused by the user's operation of the photographing device.
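The averaging step above reduces to a short computation. This sketch assumes timestamps in seconds and matched lists of initiation moments and target-photo imaging moments; the function name and the sign convention (initiation minus imaging) are assumptions based on the abstract's wording.

```python
def standard_delay_time(initiation_moments, imaging_moments):
    """Average photographing delay per the abstract: the mean difference
    between each photographing initiation moment and the imaging moment
    of its user-selected target photo. The abstract requires at least
    two photographing operations.
    """
    if len(initiation_moments) != len(imaging_moments):
        raise ValueError("timestamp lists must be the same length")
    if len(initiation_moments) < 2:
        raise ValueError("need delay times from at least two photographings")
    delays = [t_init - t_img
              for t_init, t_img in zip(initiation_moments, imaging_moments)]
    return sum(delays) / len(delays)
```

For instance, delays of 0.3 s and 0.2 s across two photographings yield a standard delay time of 0.25 s.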