Patent classifications
H04N23/70
IMAGE READING DEVICE CONTROL METHOD, IMAGE READING DEVICE, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM WITH STORED PROGRAM
The present invention provides a control method of an image reading device, an image reading device, and a non-transitory computer-readable medium having a program stored thereon, which avoid a reduction in reading resolution. The control method of the image reading device includes: a first step (ST101) of generating N line-shaped images representing N line-shaped regions (W11) in an imaging target (T1) being conveyed in a conveying direction (X1), by imaging the N line-shaped regions (W11) extending in a direction perpendicular to the conveying direction (X1); a second step (ST102) of performing the same step as the first step (ST101) at a point in time when the imaging target (T1) has been conveyed by an amount corresponding to the width (11a) of N−1 line-shaped regions; and a third step (ST103) of generating a read image by arranging the N line-shaped images generated in each of the steps in ascending order.
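The stepwise scan described above can be sketched in code: capture a band of N lines, advance the target by the width of N−1 lines, repeat, and assemble the captured lines in ascending order. The 1-pixel line width and NumPy representation are assumptions for illustration, not details from the abstract.

```python
import numpy as np

def scan(target: np.ndarray, n_lines: int) -> np.ndarray:
    """Sketch of the claimed control method. Each line-shaped region is
    modeled as one pixel row of a 2-D array (a hypothetical simplification)."""
    height, width = target.shape
    step = n_lines - 1          # conveyance between captures, in line widths
    bands = []
    pos = 0
    while pos + n_lines <= height:
        # first step (ST101): image N line-shaped regions at the current position
        bands.append(target[pos:pos + n_lines, :])
        pos += step             # second step (ST102): convey by N-1 line widths
    # final step (ST103): arrange the captured line images in ascending order;
    # overlapping lines from consecutive bands are simply overwritten here
    out = np.zeros((pos - step + n_lines, width), dtype=target.dtype)
    p = 0
    for band in bands:
        out[p:p + n_lines, :] = band
        p += step
    return out
```

Because consecutive bands overlap by one line, every row of the target is imaged at least once, which is how the method covers the full target without reducing resolution.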
Determination of luminance values using image signal processing pipeline
Apparatuses, systems, and techniques to receive, at one or more processors associated with an image signal processing (ISP) pipeline for a camera, an image generated using an image sensor of the camera, wherein the image comprises a plurality of channels associated with color information of the image; process, by the one or more processors, the plurality of channels of the image to generate a plurality of luminance and/or radiance values; generate, by the one or more processors, an updated version of the image using the plurality of luminance and/or radiance values; and output the updated version of the image.
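The pipeline stages named above (receive channels, compute luminance, generate an updated image, output it) can be illustrated with a minimal sketch. The BT.709 luma weights and the gain-based update are assumptions; the abstract does not specify a color space or what the "updated version" does with the luminance values.

```python
import numpy as np

# Hypothetical choice: ITU-R BT.709 luma coefficients for RGB channels.
LUMA_WEIGHTS = np.array([0.2126, 0.7152, 0.0722])

def luminance_update(image: np.ndarray, gain: float = 1.2) -> np.ndarray:
    """Process the color channels into per-pixel luminance values, then
    generate an updated image by rescaling each pixel so its luminance
    matches the adjusted value (one possible 'updated version')."""
    luma = image @ LUMA_WEIGHTS                       # H x W luminance map
    target = np.clip(luma * gain, 0.0, 1.0)           # adjusted luminance
    scale = np.where(luma > 0, target / np.maximum(luma, 1e-8), 1.0)
    return np.clip(image * scale[..., None], 0.0, 1.0)
```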
AUTO-ALIGNMENT OF MULTI-SENSOR OVERLAY USING BI-DIRECTIONAL DETECTOR/DISPLAY
An optical device includes an underlying device configured to output light to an optical output so as to output an image of objects in an environment to a user. The light is output in a first spectrum. A stacked device is configured to be coupled in an overlapping fashion to an optical output of the underlying device. The stacked device is transparent, according to a first transmission efficiency, to light in the first spectrum. The stacked device includes a plurality of electro-optical circuits including: a plurality of light emitters configured to output light, and a plurality of detectors configured to detect light in the first spectrum from the underlying device that can be used to detect the objects in the image. The light emitters are configured to output light dependent on light detected by the detectors and additional information about characteristics of the objects in the environment.
Information processing device, information processing method, image capturing device, and program
In an image capturing unit 121 (221), a plurality of pixel output units receives object light that enters without passing through either an image capturing lens or a pinhole, and the pixel outputs of at least two of the pixel output units have incident angle directivities modulated to differ according to the incident angle of the object light. An information compression unit 311 performs compression processing to reduce the amount of data of the pixel output information generated by the image capturing unit 121 (221). For example, by computing a difference between set reference value information and the pixel output information, by a linear calculation of a set calculation parameter and the pixel output information, or the like, the information compression unit 311 reduces the word length of the pixel output information and thereby reduces the amount of data of the pixel output information generated according to the object light that enters without passing through either an image capturing lens or a pinhole.
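The difference-based word-length reduction mentioned above can be sketched as follows. The 16-bit raw values, the reference value of 512, and the 8-bit signed residual format are hypothetical; the abstract only states that a difference from set reference value information shortens the word length.

```python
import numpy as np

def compress(pixel_outputs: np.ndarray, reference: int = 512) -> np.ndarray:
    """Subtract a set reference value from each pixel output so the
    residuals fit in a shorter word (here, hypothetically, 16-bit -> 8-bit)."""
    diff = pixel_outputs.astype(np.int32) - reference
    return np.clip(diff, -128, 127).astype(np.int8)

def decompress(residuals: np.ndarray, reference: int = 512) -> np.ndarray:
    """Restore pixel outputs by adding the same reference value back."""
    return residuals.astype(np.int32) + reference
```

Residuals outside the 8-bit range are clipped in this sketch; a real implementation would need an escape mechanism for such outliers to stay lossless.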
Learning device, image generation device, learning method, image generation method, and program
A second learning data acquisition section acquires an input image. A wide angle-of-view image generation section generates, in response to an input of the input image, a generated wide angle-of-view image that is an image having a wider angle of view than the input image. The second learning data acquisition section acquires a comparative wide angle-of-view image that is an image to be compared with the generated wide angle-of-view image. A second learning section performs learning for the wide angle-of-view image generation section by, on the basis of a comparison result between the generated wide angle-of-view image and the comparative wide angle-of-view image, updating parameter values of the wide angle-of-view image generation section such that the update amounts of the parameter values concerning the pixels are increased according to the luminance levels of pixels in the comparative wide angle-of-view image or the luminance levels of pixels in the generated wide angle-of-view image.
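The luminance-dependent update described above amounts to weighting each pixel's contribution to the parameter gradient by its brightness. A toy sketch, in which the generator's parameters are collapsed to a single scalar bias (purely a stand-in, not the patented architecture):

```python
import numpy as np

def weighted_update(params: float, generated: np.ndarray,
                    comparative: np.ndarray, lr: float = 0.1) -> float:
    """One hypothetical update step: weight each pixel's error by the
    luminance of the comparative image so that update amounts are larger
    for brighter pixels, as the abstract describes."""
    error = (params + generated) - comparative             # per-pixel residual
    weights = comparative / max(comparative.max(), 1e-8)   # luminance weights
    grad = np.mean(weights * error)                        # weighted gradient
    return params - lr * grad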
ELECTRONIC DEVICE AND METHOD FOR CAPTURING IMAGE THEREOF
An electronic device, including a memory; a display panel; an image sensor disposed at a lower end of the display panel; a processor operatively connected to the image sensor; and a display driver integrated circuit operatively connected to the display panel and the processor, and configured to sense that a first frame is output on the display panel and to transmit a first signal to the processor based on the first frame being output on the display panel, wherein the processor is configured to: generate the first frame having a designated pixel value based on a shoot command being received from a user, control the display panel to output the first frame, and control the image sensor to capture an image based on the first signal being received from the display driver integrated circuit, and store the captured image in the memory.
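The capture sequence above is essentially a handshake: the processor outputs a designated frame, and capture is triggered only after the display driver integrated circuit reports that the frame is actually on the panel. A toy event-flow sketch (all class and method names are hypothetical stand-ins, not the device's actual firmware interfaces):

```python
class DisplayDriverIC:
    """Stand-in for the display driver integrated circuit."""
    def __init__(self, processor):
        self.processor = processor

    def output_frame(self, frame):
        # Sense that the frame has been output on the panel, then send
        # the first signal to the processor.
        self.processor.on_first_signal(frame)

class Processor:
    DESIGNATED_PIXEL_VALUE = 0  # e.g. black, so the panel does not glare
                                # into the under-display image sensor

    def __init__(self):
        self.memory = []
        self.ddic = DisplayDriverIC(self)

    def shoot(self):
        # On a shoot command, generate the first frame with the designated
        # pixel value and hand it to the display path.
        frame = [self.DESIGNATED_PIXEL_VALUE] * 4   # toy 2x2 "frame"
        self.ddic.output_frame(frame)

    def on_first_signal(self, frame):
        # First signal received: control the image sensor to capture,
        # then store the captured image in memory.
        self.memory.append("captured")
```

Gating the capture on the first signal, rather than on the frame submission, is what guarantees the sensor only fires while the designated frame is really displayed.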
APERTURE AND APERTURE CONTROL METHOD, IMAGING LENS, AND ELECTRONIC DEVICE
Embodiments of this application provide an aperture and an aperture control method, an imaging lens, and an electronic device. The aperture includes a first substrate and a second substrate, and a first area and a second area are defined between the first substrate and the second substrate. A drive electrode array on the second substrate is located in the first area, a common electrode on the second substrate is located in the second area, and the common electrode is covered by a first fluid located in the second area. The drive electrode array includes transparent drive electrodes arranged in an array. The aperture further includes a second fluid, and the second fluid covers the first fluid and the drive electrode array. The first fluid is an opaque electrolyte, the second fluid is a transparent liquid, and the first fluid is immiscible with the second fluid.
CAMERA AGNOSTIC CORE MONITOR INCORPORATING PROJECTED IMAGES WITH HIGH SPATIAL FREQUENCY
A camera agnostic core monitor for an enhanced flight vision system (EFVS) is disclosed. In embodiments, a structured light projector (SLP) generates and projects a precise geometric pattern or other like artifact, which is reflected by collimating elements into the EFVS optical path. Within the optical path, the EFVS focal plane array is illuminated by, and detects, the projected artifacts within the scene imagery captured for display by the EFVS. Image processors assess the presentation of the detected artifacts (e.g., position/orientation relative to the expected presentation of the detected artifact within the scene imagery) to verify that the displayed EFVS imagery is not misleading.
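The integrity check performed by the image processors reduces to comparing each detected artifact's presentation against its expected one. A minimal sketch, assuming the artifacts are reduced to 2-D positions and a hypothetical pixel tolerance:

```python
def imagery_not_misleading(detected, expected, tol=1.5):
    """Sketch of the monitor check described above: compare the detected
    position of each projected artifact with its expected presentation;
    the imagery is flagged as misleading if any artifact is displaced
    beyond the (hypothetical) pixel tolerance."""
    for (dx, dy), (ex, ey) in zip(detected, expected):
        if ((dx - ex) ** 2 + (dy - ey) ** 2) ** 0.5 > tol:
            return False
    return True
```

Because the projected pattern has high spatial frequency, even a small shear or offset in the optical path moves some artifact outside the tolerance, which is what makes the monitor camera agnostic.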
Monitoring device and image capturing method
A monitoring device includes an image capturing module, a photosensitive element, a first processing module and a second processing module. The photosensitive element obtains a lighting parameter of a monitored environment. The first processing module is electrically connected to the photosensitive element, stores the lighting parameter, and continuously updates the lighting parameter. The second processing module is electrically connected to the image capturing module and the first processing module, and the second processing module has a sleep mode. The second processing module includes an exposure function control unit. When the second processing module is awakened and converted to a recording mode, the exposure function control unit receives the updated lighting parameter from the first processing module and calculates an exposure parameter according to the lighting parameter to control the image capturing module.
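The benefit of the design above is that on wake-up the exposure function control unit can compute an exposure immediately from the continuously updated lighting parameter instead of converging over several frames. A toy calculation in that spirit, assuming the lighting parameter is ambient illuminance in lux (all constants are hypothetical):

```python
def exposure_from_lux(lux: float, base_exposure_ms: float = 33.0,
                      reference_lux: float = 100.0) -> float:
    """Derive an exposure time from the most recently updated lighting
    parameter by scaling inversely with the light level, then clamping
    to assumed sensor limits."""
    lux = max(lux, 1e-3)                      # avoid division by zero in the dark
    exposure = base_exposure_ms * reference_lux / lux
    return min(max(exposure, 1.0), 500.0)     # clamp to [1 ms, 500 ms]
```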