Patent classifications
H04N23/80
System and method for image stitching
A system for stitching images together is disclosed. The images are sometimes referred to as frames, such as frames in a video sequence. The system comprises one or more imagers (e.g., cameras) that work in coordination with a corresponding number of custom code modules. The system achieves image stitching using approximately one third of the Field of View (FOV) of each imager (camera) and by increasing the number of imagers above a predetermined threshold. The system displays the stitched images or frames on a computer monitor, in both still-image and video contexts. These tasks would normally involve a great deal of computation, but the system achieves these effects while managing the computational load. In stitching the images together, it is sometimes necessary to introduce some image distortion (faceting) into the combined image. The system ensures there are no gaps in any captured view and assists a viewer in achieving full situational awareness.
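A minimal sketch (not the patented method) of the coverage arithmetic the abstract describes: if only roughly one third of each imager's FOV contributes to the stitched view, the number of imagers needed to cover a target angle without gaps rises accordingly. The function name and parameters are illustrative assumptions.

```python
import math

def imagers_needed(total_fov_deg: float, imager_fov_deg: float,
                   usable_fraction: float = 1.0 / 3.0) -> int:
    """Minimum imager count so the usable FOV slices tile total_fov_deg with no gaps."""
    usable_deg = imager_fov_deg * usable_fraction
    return math.ceil(total_fov_deg / usable_deg)

# Example: 360-degree situational awareness from 90-degree imagers,
# each contributing ~30 usable degrees -> 12 imagers.
print(imagers_needed(360, 90))  # 12
```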
Conference device with multi-videostream capability
A conference device comprising a first image sensor for provision of first image data, a second image sensor for provision of second image data, a first image processor configured for provision of a first primary videostream and a first secondary videostream based on the first image data, a second image processor configured for provision of a second primary videostream and a second secondary videostream based on the second image data, and an intermediate image processor in communication with the first image processor and the second image processor and configured for provision of a field-of-view videostream and a region-of-interest videostream, wherein the field-of-view videostream is based on the first primary videostream and the second primary videostream, and wherein the region-of-interest videostream is based on one or more of the first secondary videostream and the second secondary videostream.
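An illustrative data-flow sketch (all names invented) of the stream routing the claim describes: the field-of-view videostream combines both primary streams, while the region-of-interest videostream draws on one or more of the secondary streams, here selected by a simple score.

```python
def field_of_view_frame(primary_1, primary_2):
    """Compose the FOV frame from both primary streams (naive side-by-side)."""
    return primary_1 + primary_2

def region_of_interest_frame(secondary_1, secondary_2, score_1, score_2):
    """Pick the secondary stream whose region currently scores higher."""
    return secondary_1 if score_1 >= score_2 else secondary_2

print(field_of_view_frame(["L"], ["R"]))                  # ['L', 'R']
print(region_of_interest_frame(["A"], ["B"], 0.2, 0.9))   # ['B']
```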
Image processing apparatus
An image processing apparatus includes: an image acquisition unit configured to acquire a photographed image; an editing unit configured to edit the photographed image by executing an action including an application and image processing that is a function of the application; an information acquisition unit configured to acquire photographing information on photographing of the photographed image; a first acquisition unit configured to acquire first history information indicating a history of actions for the photographed image; a second acquisition unit configured to acquire second history information based on at least a history of actions for an image different from the photographed image; and a control unit configured to determine a candidate for an action that is to be executed next on the basis of the first history information and the second history information and perform control to notify the determined candidate.
Electronic device capable of controlling image display effect, and method for displaying image
An electronic device includes a first camera, a second camera, a display, a memory, and a processor. The processor collects a first image obtained by the first camera with respect to an external object and a second image obtained by the second camera with respect to the external object, generates a third image with respect to the external object using a first area of the first image and a second area of the second image, which corresponds to the first area, identifies an input associated with the third image displayed through the display, and displays an image generated using at least one of the first image, the second image, or depth information in response to the input. The generating operation of the third image includes generating the depth information with respect to the third image.
Signal processing apparatus, photoelectric conversion apparatus, photoelectric conversion system, control method of signal processing apparatus, and non-transitory computer-readable storage medium
A signal processing apparatus that processes image data output from a photoelectric conversion unit including a light-receiving region and a light-blocking region. The apparatus includes a control data generation unit that outputs control data used to generate correction data for correcting the image data using a trained model generated through machine learning, and a signal processing unit that generates the correction data on the basis of light-blocked image data and the control data, the light-blocked image data being image data, among the image data, that is from the light-blocking region, and corrects light-received image data in accordance with the correction data without applying the trained model, the light-received image data being image data, among the image data, that is from the light-receiving region.
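A hedged, classical sketch of the correction flow this abstract outlines: statistics from the light-blocked (optically black) region drive a correction applied to the light-receiving region. The trained model and control-data path are omitted; only the subtraction step is shown, with invented helper names.

```python
def correct_with_black_level(light_received, light_blocked):
    """Subtract the mean optically-black level from each active pixel."""
    black_level = sum(light_blocked) / len(light_blocked)
    return [p - black_level for p in light_received]

corrected = correct_with_black_level([110, 112, 108], [10, 10, 10])
print(corrected)  # [100.0, 102.0, 98.0]
```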
Medical image processing apparatus and medical observation system
A medical image processing apparatus includes an image processor configured to: receive a plurality of first image data captured at different times and generated by illumination of light in a first wavelength band in sequence; receive a plurality of second image data captured at different times and generated by illumination of light in a second wavelength band different from the first wavelength band in sequence; generate first and second images based on the received first and second image data, respectively; and output the generated first image and second image to a display in chronological order of the first and second images and in accordance with a preset display pattern of the first and second images.
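A minimal sketch of the display sequencing described above: two image streams captured under different wavelength bands are emitted to the display in chronological order, following a preset display pattern. The pattern encoding ("A" for the first band, "B" for the second) is a hypothetical convention, not from the source.

```python
def interleave_by_pattern(first_images, second_images, pattern):
    """Emit images from the two band streams according to pattern characters."""
    it1, it2 = iter(first_images), iter(second_images)
    return [next(it1) if ch == "A" else next(it2) for ch in pattern]

print(interleave_by_pattern(["f1", "f2"], ["s1", "s2"], "ABAB"))
# ['f1', 's1', 'f2', 's2']
```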
Camera detection of human activity with co-occurrence
Methods, systems, and apparatus for camera detection of human activity with co-occurrence are disclosed. A method includes detecting a person in an image captured by a camera; in response to detecting the person in the image, determining optical flow in portions of a first set of images; determining that particular portions of the first set of images satisfy optical flow criteria; in response to determining that the particular portions of the first set of images satisfy optical flow criteria, classifying the particular portions of the first set of images as indicative of human activity; receiving a second set of images captured by the camera after the first set of images; and determining that the second set of images likely shows human activity based on analyzing portions of the second set of images that correspond to the particular portions of the first set of images classified as indicative of human activity.
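A simplified, stdlib-only sketch of the co-occurrence idea: regions that showed motion (here, mean frame difference stands in for optical flow) while a person was detected are remembered, and later images are judged by activity in those remembered regions only. The region layout, threshold, and person detector are placeholders.

```python
def active_regions(frame_a, frame_b, threshold):
    """Return indices of regions whose mean absolute difference exceeds threshold."""
    active = []
    for i, (ra, rb) in enumerate(zip(frame_a, frame_b)):
        motion = sum(abs(a - b) for a, b in zip(ra, rb)) / len(ra)
        if motion > threshold:
            active.append(i)
    return active

def likely_human_activity(frame_a, frame_b, learned_regions, threshold):
    """True if any previously learned region shows motion in the new frames."""
    return any(i in learned_regions
               for i in active_regions(frame_a, frame_b, threshold))

# Frames are lists of regions; each region is a list of pixel values.
first_a = [[0, 0], [10, 10]]
first_b = [[0, 0], [50, 50]]                 # region 1 moves while a person is detected
learned = active_regions(first_a, first_b, threshold=5)   # -> [1]
print(likely_human_activity([[9, 9], [0, 0]],
                            [[9, 9], [40, 40]], learned, 5))  # True
```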
Image signal processing device, imaging device, flicker check method in imaging device, and server
This invention enables checking of flicker in an HFR (high frame rate) image signal. A display image signal of a first frame rate for flicker checking is obtained on the basis of an image signal of a second frame rate. For example, the display image signal of the first frame rate is generated from the image signal of the second frame rate by a frame-thinning process. In this case, the frames to be thinned are determined from the relationship between the second frame rate and the light source frequency. For example, the number of frames corresponding to a flicker period is obtained from the second frame rate and the light source frequency, and the frames to be thinned are determined so that a run of consecutive frames spanning the flicker period remains.
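A small arithmetic sketch of the frame-thinning rule described above: the number of captured frames spanning one flicker period follows from the capture frame rate and the flicker frequency, and thinning keeps a run of that many consecutive frames per display frame. Whether the flicker frequency equals the mains frequency or twice it depends on the light source, so it is passed in directly; both function names are invented.

```python
def frames_per_flicker_period(capture_fps: float, flicker_hz: float) -> int:
    """How many consecutive captured frames cover one flicker period."""
    return round(capture_fps / flicker_hz)

def thinned_indices(total_frames: int, capture_fps: float,
                    display_fps: float, flicker_hz: float):
    """Keep runs of consecutive frames, one flicker period long, at the display rate."""
    run = frames_per_flicker_period(capture_fps, flicker_hz)
    stride = round(capture_fps / display_fps)
    kept = []
    for start in range(0, total_frames, stride):
        kept.extend(range(start, min(start + run, total_frames)))
    return kept

# 240 fps capture under 100 Hz flicker: ~2 consecutive frames per flicker
# period survive in each run kept for the 60 fps display signal.
print(frames_per_flicker_period(240, 100))   # 2
print(thinned_indices(8, 240, 60, 100))      # [0, 1, 4, 5]
```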
Camera Module, Imaging Method, and Imaging Apparatus
This application provides a camera module, an imaging method, and an imaging apparatus. The camera module in this application includes a filter module and a sensor module. The filter module is configured to output target optical signals of different bands in optical signals incident on the filter module to a same pixel on the sensor module at different times. The sensor module is configured to: convert the target optical signals incident on the sensor module into electrical signals, and output the electrical signals.
TRANSMISSION ELEMENT IMAGING DEVICE AND TRANSMISSION ELEMENT IMAGING METHOD
Disclosed is a transmission element imaging device capturing an image of a transmission element that transmits a signal, including a reception element group including a plurality of reception elements each receiving the signal transmitted from the transmission element, a direction detection unit detecting a direction of the transmission element on the basis of the signal received by each of the plurality of reception elements, a camera whose relative positional relation with the reception element group is determined and which captures an image in the direction of the transmission element, and an image processing unit generating an image in which a marker indicating the direction of the transmission element is added to the image captured by the camera.