Patent classifications
H04N23/675
Carrier-assisted tracking
A method includes: receiving selection of a target within an image captured by an image sensor of a payload and displayed on a user interface of the payload; detecting a deviation of the target from an expected target state within the image; generating, based at least partly on the deviation, a payload control signal including a first angular velocity for rotating the payload about an axis of the carrier to reduce the deviation about the axis in a subsequent image; and generating a base support control signal including a second angular velocity for rotating the payload with respect to the axis. When the first and second angular velocities are received, the carrier is controlled to rotate the payload at a third angular velocity about the axis. The third angular velocity is the first angular velocity, the second angular velocity, or a combination of both.
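The control loop in this abstract amounts to a proportional controller: the target's pixel deviation is converted to an angular rate about one axis and split between the payload (carrier) signal and the base support signal. A minimal sketch, assuming a unit gain, a 50/50 split, and a small-angle pixel-to-degree conversion (none of these values come from the patent):

```python
def control_signals(target_px, expected_px, fov_deg, width_px, gain=1.0, split=0.5):
    """Convert the target's pixel deviation from its expected position into an
    angular rate (deg/s) about one carrier axis, then split that rate between
    the payload control signal and the base support control signal."""
    deviation_px = target_px - expected_px
    deviation_deg = deviation_px * fov_deg / width_px   # small-angle conversion (assumed)
    total_rate = gain * deviation_deg                   # proportional control (assumed)
    payload_rate = split * total_rate                   # "first angular velocity"
    base_rate = (1.0 - split) * total_rate              # "second angular velocity"
    return payload_rate, base_rate

# Target drifted 60 px right in a 1920-px-wide frame with a 60-degree FOV.
payload_rate, base_rate = control_signals(1020, 960, fov_deg=60.0, width_px=1920)
```

Setting `split` to 1 or 0 reproduces the claim's cases where the third angular velocity equals the first or the second alone; intermediate values give the "combination of both" case.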
Image pickup device and electronic system including the same
An image pickup device includes first and second cameras, and first and second image signal processors (ISP). The first camera obtains a first image of an object. The second camera obtains a second image of the object. The first ISP performs a first auto focusing (AF), a first auto white balancing (AWB) and a first auto exposure (AE) for the first camera based on a first region-of-interest (ROI) in the first image, and obtains a first distance between the object and the first camera based on a result of the first AF. The second ISP calculates first disparity information associated with the first and second images based on the first distance, moves a second ROI in the second image based on the first disparity information, and performs a second AF, a second AWB and a second AE for the second camera based on the moved second ROI.
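Deriving disparity from the AF distance, as the second ISP does here, follows the standard stereo relation d = f·B/Z (disparity in pixels from focal length, baseline, and object distance). A sketch under that assumption; the baseline, focal length, and shift direction below are hypothetical values, not from the patent:

```python
def shift_roi(roi, distance_m, baseline_m, focal_px):
    """Estimate stereo disparity from the object distance obtained by the
    first camera's AF (d = f * B / Z, in pixels), then shift the second
    camera's ROI horizontally so both ROIs cover the same object."""
    disparity_px = focal_px * baseline_m / distance_m
    x, y, w, h = roi
    # Shift direction depends on the physical camera layout (assumed here).
    return (x - disparity_px, y, w, h)

# Object 2 m away, 12 mm baseline, 1500 px focal length -> 9 px disparity.
moved_roi = shift_roi((900, 500, 200, 200), distance_m=2.0, baseline_m=0.012, focal_px=1500)
```

The point of the shift is that the second camera's AF/AWB/AE statistics are then gathered over the same physical object as the first camera's, rather than over whatever happens to sit at the same pixel coordinates.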
Apparatus, method thereof, and recording medium
An apparatus includes a reference coordinate selection unit configured to select reference coordinates of two points from a focus frame area set by a setting unit, and to determine arrangement intervals of focus frames based on the number of focus frames and on coordinates, in the image data before correction, that correspond to the coordinates selected by the reference coordinate selection unit.
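The interval determination described here reduces to dividing the span between the two reference coordinates by the frame count. A sketch that lays focus-frame centres out at even spacing (even spacing is an assumption; the patent's pre-correction coordinate mapping is not modeled):

```python
def place_focus_frames(ref_a, ref_b, num_frames):
    """Given coordinates of two reference points and the number of focus
    frames, compute the arrangement interval along each axis and place the
    frame centres at that spacing."""
    if num_frames < 2:
        raise ValueError("need at least two focus frames")
    dx = (ref_b[0] - ref_a[0]) / (num_frames - 1)   # arrangement interval, x
    dy = (ref_b[1] - ref_a[1]) / (num_frames - 1)   # arrangement interval, y
    return [(ref_a[0] + i * dx, ref_a[1] + i * dy) for i in range(num_frames)]

# Five frames between (0, 0) and (100, 40): spacing of (25, 10) per frame.
centres = place_focus_frames((0, 0), (100, 40), 5)
```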
AUTOMATIC FOCUS DETECTION METHOD OF CAMERA AND ELECTRONIC DEVICE SUPPORTING SAME
An electronic device includes a camera device configured to adjust a focus; a distance extraction device; and a processor configured to: obtain a first image using the camera device; set a first region of interest for focus detection in a portion of the first image; obtain first depth information using the distance extraction device, the first depth information including a depth distance corresponding to at least one pixel included in the first image; set a second region of interest in another portion of the first image based on at least two portions which differ in depth distance being included in the first region of interest based on the first depth information; and capture an image based on a focus determined corresponding to the second region of interest.
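The condition for setting the second region of interest is that the first ROI spans at least two portions that differ in depth distance (e.g. a subject edge against the background), which makes focus on that ROI ambiguous. A sketch of that check, assuming a depth map in millimetres and a hand-picked split threshold (both are illustrative, not from the patent):

```python
import numpy as np

def needs_second_roi(depth_map, roi, depth_split=300.0):
    """Return True when the first ROI mixes at least two depth layers, in
    which case focus detection on that ROI is ambiguous and a second ROI
    should be set in another portion of the image. The threshold (in the
    depth map's units) is an assumed tuning parameter."""
    x, y, w, h = roi
    patch = depth_map[y:y + h, x:x + w]
    return float(patch.max() - patch.min()) > depth_split

# Toy 8x8 depth map: left half 1000 mm, right half 2000 mm.
depth = np.full((8, 8), 1000.0)
depth[:, 4:] = 2000.0
print(needs_second_roi(depth, (2, 2, 4, 4)))  # ROI straddles both layers -> True
print(needs_second_roi(depth, (0, 0, 3, 3)))  # single layer -> False
```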
Focus detection apparatus, imaging apparatus, and focus detection method
A focus detection apparatus includes a selection unit configured to select, as the focus detection area, either a first focus detection area or a second focus detection area including the first focus detection area and its periphery, and an information acquiring unit configured to acquire first information on whether or not an object moving within the imaging screen can be continuously captured in the first focus detection area. The selection unit selects the first focus detection area when the first information indicates that the object can be continuously captured there, and selects the second focus detection area when the first information indicates that it cannot.
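The selection logic can be sketched as follows, with the "first information" reduced to whether the object's predicted position lies inside the tight area (how the apparatus actually acquires that information is outside this sketch; areas are hypothetical `(x, y, w, h)` tuples):

```python
def inside(area, point):
    x, y, w, h = area
    px, py = point
    return x <= px < x + w and y <= py < y + h

def select_focus_area(predicted_pos, first_area, second_area):
    """Keep the tight first focus detection area while the moving object can
    still be captured inside it; otherwise widen to the second area (the
    first area plus its periphery)."""
    return first_area if inside(first_area, predicted_pos) else second_area

first = (100, 100, 50, 50)     # tight area
second = (75, 75, 100, 100)    # first area + periphery
area = select_focus_area((120, 120), first, second)       # trackable -> first
area_wide = select_focus_area((160, 120), first, second)  # escaped -> second
```

Preferring the tight area keeps background clutter out of the focus computation; the wider area is only a fallback so the object is not lost entirely.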
IMAGING APPARATUS CAPABLE OF SWITCHING DISPLAY METHODS
An imaging apparatus comprises an image pickup unit; a cutout image generation unit for cutting out a specified area of a pickup image taken by the image pickup unit to generate a cutout image enlarged at a specified magnification; an image display unit for displaying one or both of the pickup image taken by the image pickup unit and the cutout image generated by the cutout image generation unit; a display image control unit for controlling how the image display unit displays an image; a manual focus operation unit through which the user manually controls the focus position of the image pickup unit; and a manual zoom operation unit through which the user controls the zoom magnification of the image pickup unit.
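The cutout step (crop a specified area, enlarge at a specified magnification) can be sketched with plain lists standing in for image buffers; nearest-neighbour repetition is an assumed stand-in for the device's scaler:

```python
def cutout(image, area, magnification):
    """Crop the specified (x, y, w, h) area from the picked-up image (a list
    of pixel rows) and enlarge it by an integer magnification using
    nearest-neighbour repetition."""
    x, y, w, h = area
    crop = [row[x:x + w] for row in image[y:y + h]]
    return [[px for px in row for _ in range(magnification)]
            for row in crop for _ in range(magnification)]

image = [[ 1,  2,  3,  4],
         [ 5,  6,  7,  8],
         [ 9, 10, 11, 12],
         [13, 14, 15, 16]]
enlarged = cutout(image, (1, 1, 2, 2), 2)   # 2x2 area shown at 2x -> 4x4 image
```

During manual focus, the display image control unit would typically show this enlarged cutout (alone or alongside the full pickup image) so the user can judge focus at pixel level.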
Dynamic-Ledger-Enabled Edge-Device Query Processing
A method for processing a query for data stored in a distributed database includes: receiving, at an edge device, the query from a query device; causing, by the edge device, the query to be stored on a dynamic ledger maintained by the distributed database; detecting, by the edge device, that summary data has been stored on the dynamic ledger; generating, by the edge device, an approximate response to the query based on the summary data stored on the dynamic ledger; and transmitting the approximate response to the query device.
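The edge-device flow can be sketched with the ledger reduced to a local append-only list (in the patent it is maintained by the distributed database); the summary-data shape and the averaging rule below are assumed examples, not from the abstract:

```python
class DynamicLedger:
    """Toy append-only ledger; a local list stands in for the replicated
    ledger maintained by the distributed database."""
    def __init__(self):
        self.entries = []
    def append(self, entry):
        self.entries.append(entry)
    def find(self, kind):
        return [e for e in self.entries if e["kind"] == kind]

def edge_answer(ledger, query):
    """Record the query on the ledger, look for summary data other nodes
    have stored, and build an approximate response from it (averaging
    per-shard averages is an assumed example of 'approximate')."""
    ledger.append({"kind": "query", "body": query})
    summaries = ledger.find("summary")
    if not summaries:
        return None     # summary data not yet detected on the ledger
    return sum(s["avg"] for s in summaries) / len(summaries)

ledger = DynamicLedger()
ledger.append({"kind": "summary", "avg": 10.0})
ledger.append({"kind": "summary", "avg": 20.0})
print(edge_answer(ledger, "AVG(temperature)"))  # -> 15.0
```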
Event-assisted autofocus methods and apparatus implementing the same
A focus method and an image sensing apparatus are disclosed. The method includes capturing, by a plurality of event sensing pixels, event data of a targeted scene, wherein the event data indicates which pixels of the event sensing pixels have changes in light intensity, accumulating the event data for a predetermined time interval to obtain accumulated event data, determining whether a scene change occurs in the targeted scene according to the accumulated event data, obtaining one or more interest regions in the targeted scene according to the accumulated event data in response to the scene change, and providing at least one of the one or more interest regions for a focus operation. The image sensing apparatus comprises a plurality of image sensing pixels, a plurality of event sensing pixels, and a controller configured to perform said method.
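The accumulate-threshold-localize pipeline in this abstract can be sketched as follows; the change threshold, the fixed cell grid, and the "above the mean activity" rule for picking interest regions are all assumptions standing in for the patent's tuning:

```python
import numpy as np

def interest_regions(event_frames, change_thresh=50, cell=4):
    """Accumulate per-pixel event counts over a time window; if the total
    activity exceeds a threshold, treat it as a scene change and return the
    most active cells as interest regions for the focus operation."""
    acc = np.sum(event_frames, axis=0)          # accumulated event data
    if acc.sum() <= change_thresh:
        return []                               # no scene change detected
    h, w = acc.shape
    regions = []
    for y in range(0, h, cell):
        for x in range(0, w, cell):
            if acc[y:y + cell, x:x + cell].sum() > acc.mean() * cell * cell:
                regions.append((x, y, cell, cell))
    return regions

# Ten event frames with activity only in the top-left quadrant.
frames = np.zeros((10, 8, 8))
frames[:, :4, :4] = 1
```

Because event pixels only fire on intensity changes, this localizes moving or newly appearing content cheaply, and the returned regions can seed a conventional contrast- or phase-based focus operation.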
IMAGING APPARATUS
An imaging apparatus includes: an image sensor that captures a subject image to generate image data; a first depth measurer that acquires first depth information indicating a depth at a first spatial resolution, the depth showing a distance between the imaging apparatus and a subject in an image indicated by the image data; a second depth measurer that acquires second depth information indicating the depth in the image at a second spatial resolution different from the first spatial resolution; and a controller that acquires third depth information indicating the depth at the first or second spatial resolution for each region of different regions in the image, based on the first depth information and the second depth information.
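The per-region combination of the two depth measurements can be sketched as a masked selection; using a validity mask as the selection rule, and assuming both maps are already resampled to a common grid, are simplifications of the patent's region-wise logic:

```python
import numpy as np

def fuse_depth(coarse, fine, fine_valid):
    """Per-region fusion of two depth maps measured at different spatial
    resolutions: keep the high-resolution depth where that measurer produced
    a valid value, fall back to the (already upsampled) low-resolution depth
    elsewhere."""
    return np.where(fine_valid, fine, coarse)

coarse = np.full((2, 2), 5.0)                     # e.g. a ToF sensor, upsampled
fine = np.array([[1.0, 2.0], [3.0, 4.0]])         # e.g. phase-detection pixels
valid = np.array([[True, False], [False, True]])
fused = fuse_depth(coarse, fine, valid)           # the "third depth information"
```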
IMAGING DEVICE WITH SPECTROMETER AND METHODS FOR USE THEREWITH
A user device for imaging a scene includes a first plurality of optical sensors coupled to a substrate for collecting an image of a scene and a second plurality of optical sensors coupled to the substrate for collecting spectral information from the image. A plurality of sets of interference filters are associated with the second plurality of optical sensors, where each interference filter of a set of interference filters is configured to pass light in one of a plurality of wavelength ranges to one or more optical sensors of the second plurality of optical sensors and each optical sensor of the plurality of optical sensors is associated with a spatial area of the image. A processor is adapted to receive an output from the first plurality of optical sensors and the second plurality of optical sensors and determine, based on the spectral information, a target area within the scene. The processor is further adapted to retrieve focus data for the scene, determine a focus distance for the target area and output user-perceptible information to an output display.