H04N23/80

SELECTIVELY INCREASING DEPTH-OF-FIELD IN SCENES WITH MULTIPLE REGIONS OF INTEREST
20230012219 · 2023-01-12 ·

The present disclosure provides systems, apparatus, methods, and computer-readable media that support multi-frame depth-of-field (MF-DOF) for deblurring background regions of interest (ROIs), such as background faces, that may be blurred due to a large aperture size or other characteristics of the camera used to capture the image frame. The processing may use two image frames captured at two different focus points corresponding to the multiple ROIs in the image frame. The corrected image frame may be determined by deblurring one or more ROIs of the first image frame using an AI-based model and/or local gradient information. MF-DOF may allow selectively increasing the depth-of-field (DOF) of an image to provide focused capture of multiple regions of interest, without reducing the aperture (and consequently the amount of light available for photography) or sacrificing background blur that may be desired for photography.
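As a rough illustration (not the patented method), the fusion step described above could use local gradient energy to decide, per ROI, which of the two differently focused frames is sharper; all names here are invented for illustration, and frames are plain nested lists of intensity values:

```python
def gradient_energy(frame, mask):
    """Sum of squared horizontal/vertical differences inside the mask —
    a crude sharpness measure (sharper regions have stronger gradients)."""
    h, w = len(frame), len(frame[0])
    e = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x]:
                if x + 1 < w:
                    e += (frame[y][x + 1] - frame[y][x]) ** 2
                if y + 1 < h:
                    e += (frame[y + 1][x] - frame[y][x]) ** 2
    return e

def fuse_rois(frame_a, frame_b, roi_masks):
    """For each ROI, copy pixels from whichever frame is sharper there;
    pixels outside every ROI keep frame_a (the primary capture), which
    preserves any intentional background blur."""
    out = [row[:] for row in frame_a]
    for mask in roi_masks:
        sharper_b = gradient_energy(frame_b, mask) > gradient_energy(frame_a, mask)
        src = frame_b if sharper_b else frame_a
        for y in range(len(out)):
            for x in range(len(out[0])):
                if mask[y][x]:
                    out[y][x] = src[y][x]
    return out
```

A real implementation would align the frames first and feather the ROI boundaries; the AI-based deblurring the abstract mentions is not modeled here.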

Image Content Removal Method and Related Apparatus
20230217097 · 2023-07-06 ·

This application discloses an image content removal method, and relates to the field of computer vision. The method includes: enabling a camera application; displaying a photographing preview interface of the camera application; obtaining a first preview picture and a first reference frame picture that are captured by a camera; determining a first object in the first preview picture as a to-be-removed object; and determining to-be-filled content in the first preview picture based on the first reference frame picture, where the to-be-filled content is image content that is of a second object and that is shielded by the first object in the first preview picture. The terminal generates a first restored picture based on the to-be-filled content and the first preview picture. In this way, image content that a user does not want in a picture or a video shot by the user can be removed.
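At its core, the fill step above replaces pixels of the removed object with co-located pixels from the reference frame. A minimal sketch, assuming the two frames are already aligned and the object mask is known (all names are invented for illustration):

```python
def remove_object(preview, reference, object_mask):
    """Generate a restored picture: where the mask marks the
    to-be-removed object, take the co-located pixel from the reference
    frame (which shows the content the object occludes); elsewhere keep
    the preview pixel."""
    h, w = len(preview), len(preview[0])
    return [
        [reference[y][x] if object_mask[y][x] else preview[y][x]
         for x in range(w)]
        for y in range(h)
    ]
```

In practice the reference frame must be warped to the preview's viewpoint and the seam blended; this sketch shows only the per-pixel selection.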

Image processing apparatus, image processing method, and electronic apparatus

An image processing apparatus includes a first acquisition unit that acquires a first pixel signal output from a first pixel, a second acquisition unit that acquires a second pixel signal output from a second pixel having a size smaller than that of the first pixel, a temperature detection unit that detects a temperature, a composition gain determination unit that determines a composition gain corresponding to the detected temperature, and a composition unit that composes the first pixel signal and the second pixel signal multiplied by the composition gain.
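The abstract does not specify how the gain is derived from temperature or how the signals are composed; as one hypothetical reading, the gain could come from a calibrated lookup and the composition could be a simple sum (the table values and additive merge below are assumptions for illustration only):

```python
def composition_gain(temp_c, table=((0, 2.0), (25, 1.8), (60, 1.5))):
    """Pick the gain whose temperature entry is closest to the measured
    temperature; a real device would interpolate a calibrated curve."""
    return min(table, key=lambda entry: abs(entry[0] - temp_c))[1]

def compose(large_pixel_signal, small_pixel_signal, temp_c):
    """Compose the first (large-pixel) signal with the second
    (small-pixel) signal multiplied by the temperature-dependent gain."""
    return large_pixel_signal + small_pixel_signal * composition_gain(temp_c)
```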

VISION SENSOR, IMAGE PROCESSING DEVICE INCLUDING THE SAME, AND OPERATING METHOD OF THE VISION SENSOR

Provided is a vision sensor including a pixel array including a plurality of pixels disposed in a matrix form, an event detection circuit configured to detect whether an event has occurred in the plurality of pixels and generate event signals corresponding to pixels from among the plurality of pixels in which an event has occurred, a map data processor configured to generate a timestamp map based on the event signals, and an interface circuit configured to transmit vision sensor data including at least one of the event signals and the timestamp map to an external processor, wherein the timestamp map includes timestamp information indicating polarity information, address information, and an event occurrence time of a pixel included in an event signal corresponding to the pixel.
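A timestamp map as described above can be sketched as a per-pixel grid keyed by address, holding the occurrence time and polarity of the latest event at each pixel. This is a minimal sketch, with event tuples and the "latest event wins" policy assumed for illustration:

```python
def build_timestamp_map(events, width, height):
    """Build a timestamp map from event signals.

    Each event is (x, y, polarity, timestamp); each map cell keeps the
    (timestamp, polarity) of the most recent event at that pixel
    address, or None where no event has occurred."""
    tmap = [[None] * width for _ in range(height)]
    for x, y, polarity, ts in events:
        cell = tmap[y][x]
        if cell is None or ts > cell[0]:
            tmap[y][x] = (ts, polarity)
    return tmap
```

The map thus encodes the address implicitly (by cell position) alongside the polarity and occurrence time, matching the three pieces of information the abstract lists.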

MANAGEMENT OF VIDEO PLAYBACK SPEED BASED ON OBJECTS OF INTEREST IN THE VIDEO DATA
20230215464 · 2023-07-06 ·

Systems, methods, and software described herein manage the playback speed of video data based on processing objects in the video data. In one example, a video processing service obtains video data from a video source and identifies objects of interest in the video data. The video processing service further determines complexity in frames of the video data related to the objects of interest and updates playback speeds for segments of the video data based on the complexity of the frames.
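One plausible mapping from per-segment complexity to playback speed is a linear ramp: segments dense with objects of interest slow toward normal speed, simple segments play fast. The scoring range and speed bounds below are assumptions, not from the patent:

```python
def segment_speeds(complexities, base_speed=2.0, min_speed=1.0):
    """Map per-segment complexity scores in [0, 1] to playback speeds:
    complexity 0 plays at base_speed, complexity 1 slows to min_speed,
    with linear interpolation in between."""
    return [base_speed - (base_speed - min_speed) * c for c in complexities]
```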

Method and device for correcting vehicle view cameras
11548452 · 2023-01-10 ·

A method for correcting a camera using a plurality of pattern members placed on the ground around a vehicle includes receiving pattern information of the plurality of pattern members using a plurality of cameras disposed around the periphery of the vehicle while it is driven, calculating a first parameter on the basis of the pattern information, calculating trajectory information of the vehicle using the pattern information, and calculating a second parameter by correcting the first parameter on the basis of the trajectory information of the vehicle.

FRAMING AN IMAGE OF A USER REQUESTING TO SPEAK IN A NETWORK-BASED COMMUNICATION SESSION
20230217113 · 2023-07-06 ·

Disclosed in some examples are methods, systems, and machine-readable mediums for allowing participants of communication sessions who join from conference rooms to indicate a desire to speak using their own personal computing devices, and to automatically frame that user with the conference room camera using an automatically determined position of the user. For example, a user may have a communication application executing on their mobile device that is logged into the network-based conference. If the user wishes to speak, they may activate a control within the application instance executing on their mobile device. The in-room meeting device may then automatically locate the user and may direct the camera to pan, tilt, or zoom so that the camera frames the user.
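Once the user is located as a bounding box in the room camera's view, the pan/tilt/zoom adjustment reduces to centering and scaling that box. A minimal sketch, with the normalized offset convention and `fill` fraction assumed for illustration:

```python
def frame_user(bbox, frame_w, frame_h, fill=0.5):
    """Compute normalized pan/tilt offsets (-1..1) that center the
    user's bounding box, and a zoom factor so the box fills `fill` of
    the frame along its tighter dimension. bbox = (x, y, w, h) pixels."""
    x, y, w, h = bbox
    cx, cy = x + w / 2, y + h / 2
    pan = (cx - frame_w / 2) / (frame_w / 2)   # positive: pan right
    tilt = (cy - frame_h / 2) / (frame_h / 2)  # positive: tilt down
    zoom = min(fill * frame_w / w, fill * frame_h / h)
    return pan, tilt, zoom
```

A real PTZ controller would convert these offsets to motor angles using the camera's field of view and clamp the zoom to hardware limits.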

IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM

An image processing device that can facilitate setting related to layer information based on distance information is provided.

The image processing device includes an image acquisition unit configured to acquire an image including a subject through a lens unit, a distance information acquisition unit configured to acquire distance information indicating a distance to the subject, a layer information generation unit configured to generate layer information on a layer for each distance based on the distance information, and a setting unit configured to set a reference for generating the layer information and switch the display of a settable setting value in accordance with lens information of the lens unit.
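Generating "layer information on a layer for each distance" can be sketched as binning a per-pixel distance map into bands. The band boundaries below are illustrative values, not from the patent:

```python
def generate_layers(distance_map, boundaries):
    """Assign each pixel a layer index based on which distance band it
    falls in; `boundaries` are ascending distances (e.g., meters) that
    separate consecutive layers, so len(boundaries) + 1 layers result."""
    def layer_of(d):
        for i, b in enumerate(boundaries):
            if d < b:
                return i
        return len(boundaries)
    return [[layer_of(d) for d in row] for row in distance_map]
```

The setting unit described above would then adjust `boundaries` (the reference for generating layer information) to values appropriate for the attached lens, e.g., its focus range.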