Patent classifications
H04N101/00
Camera multi-line time-division exposure processing method and system
Provided are a camera multi-line time-division exposure processing method and system. The n sensor lines correspond to the n light sources in a one-to-one manner and are configured to respectively collect image data of an object moving through the camera's field of view along one direction under the corresponding light sources. The method includes: obtaining a trigger signal that triggers, one at a time, the n light sources to turn on and off sequentially; collecting image data of the object exposed under a turned-on light source and extracting, as valid data, the image data obtained by the sensor line corresponding to the turned-on light source; splicing all the valid data of the same portion of the object to obtain a spliced image under the different light sources; and cyclically outputting the spliced image to obtain a complete image of the object.
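The extract-and-splice step above can be sketched in Python; the nested-list data layout (trigger cycle → sub-exposure → line readout) and the function name are illustrative assumptions, not the patent's implementation:

```python
def splice_time_division(cycles, n):
    """Splice valid line data from time-division exposures.

    cycles[k][i][j] is, in trigger cycle k while light source i is on,
    the readout of sensor line j (a hypothetical layout).  Only sensor
    line i, the line paired with the turned-on source, is treated as
    valid; every other readout in that sub-exposure is discarded.
    """
    return [[cycle[i][i] for i in range(n)] for cycle in cycles]
```

With two sources/lines, each trigger cycle contributes one valid readout per source, and the per-cycle rows are concatenated into the spliced image.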
Buffer management for plug-in architectures in computation graph structures
A computer vision processing device is provided which comprises a memory configured to store data and a processor. The processor is configured to store captured image data in a first buffer and to acquire access to the captured image data in the first buffer when the captured image data is available for processing. The processor is also configured to execute a first group of operations in a processing pipeline, each of which processes the captured image data accessed from the first buffer, and to return the first buffer for storing the next captured image data when a last operation of the first group of operations executes.
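The acquire/process/return handoff described above can be sketched as a small buffer pool; the class and function names are assumptions, not the patent's API:

```python
class BufferPool:
    """Minimal sketch of the described buffer handoff."""

    def __init__(self, size):
        self.free = list(range(size))   # buffer ids available for capture
        self.data = {}

    def store(self, frame):
        buf = self.free.pop(0)          # acquire a free buffer for captured data
        self.data[buf] = frame
        return buf

    def release(self, buf):
        self.free.append(buf)           # buffer may now hold the next frame


def run_group(pool, buf, ops):
    """Run a group of operations that all read the same buffer, then
    return the buffer once the last operation has executed."""
    frame = pool.data[buf]
    results = [op(frame) for op in ops]
    pool.release(buf)                   # last operation done: return the buffer
    return results
```

The point of the pattern is that the buffer is recycled exactly once, after the final consumer in the group, rather than after each operation.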
Display device, display controlling method, and computer program
A display device, method, computer-readable storage medium and user interface, each of which detects contact to or proximity of an object with respect to a generated image, and responsive to detection of contact to or proximity of the object to the generated image, disables any operational functions associated with a first portion of the generated image. Additionally, operation associated with a second portion of the generated image is allowed responsive to the detection of contact to or proximity of the object to the generated image, where the second portion of the generated image is different from the first portion of the generated image. An indication corresponding to the second portion of the generated image for which operation is enabled may be displayed on the generated image.
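The enable/disable routing over the two portions of the generated image can be illustrated with a hypothetical hit test; the rectangle representation and return values are assumptions for the sketch:

```python
def handle_touch(point, disabled_rect, enabled_rect):
    """Route a detected contact/proximity point between two portions.

    Rects are (x0, y0, x1, y1).  Functions under disabled_rect are
    suppressed, while functions under the different enabled_rect still
    respond (names and encoding are illustrative assumptions).
    """
    def inside(p, r):
        return r[0] <= p[0] < r[2] and r[1] <= p[1] < r[3]

    if inside(point, enabled_rect):
        return "operate"   # second portion: operation allowed
    if inside(point, disabled_rect):
        return "blocked"   # first portion: operational functions disabled
    return "no-op"
```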
Apparatus that performs zooming operation, control method therefor, and storage medium
An image pickup apparatus which enhances usability in shooting in a case where a user sets an initial lens position. A system control unit starts shifting a zoom position of a taking lens in response to start of a zooming operation in which the zoom position is changed, and stops shifting the zoom position in response to termination of the zooming operation. When the zoom position reaches a predetermined zoom position, which is inside an optical zoom range and is neither an optical wide-angle end nor an optical telephoto end, while the zoom position is being shifted in a first direction in response to a first zooming operation for shifting the zoom position in the first direction, the system control unit provides control to fix the zoom position at the predetermined zoom position even when the first zooming operation has not been terminated.
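The stop-at-preset behavior can be sketched as a single zoom step; the 0-100 range, units, and function signature are illustrative assumptions:

```python
def step_zoom(position, step, preset=None, wide_end=0, tele_end=100):
    """Advance the zoom by a signed step while a zooming operation runs.

    If the move would cross (or land on) the preset position, which lies
    strictly inside the optical range, the zoom is pinned there even
    though the zooming operation itself has not been terminated.
    """
    new = max(wide_end, min(tele_end, position + step))
    crossed = preset is not None and min(position, new) < preset <= max(position, new)
    return preset if crossed else new
```

Starting exactly at the preset, a further step moves past it, matching the idea that the preset acts as a stop only when reached mid-shift.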
Imaging apparatus and information processing method
There is provided an imaging apparatus including a user interface control section. The user interface control section performs a process of displaying a plurality of images in a stacked form as a first display mode of an image group including the plurality of images, a process of individually displaying each of the plurality of images as a second display mode of the image group, and a process of detecting an operation of recording a voice note corresponding to a selected image selected in the second display mode.
Image selection method and electronic device
Disclosed are an image selection method and an electronic device. In the method, the electronic device detects feedback information related to user operations. The feedback information may include an optimal image selected by a decision model and a changed optimal image; images that are deleted, browsed, added to favorites, or shared, together with the corresponding operation records; and a facial feature in a gallery along with the proportion of images in the gallery that include the facial feature. The electronic device adjusts, according to the feedback information, parameters of the decision model configured to perform image selection, to obtain an updated decision model, and then performs image selection according to the updated decision model. By implementing this technical solution, the selected optimal image better matches user habits, improving convenience.
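A toy stand-in for the feedback-driven parameter update can make the idea concrete; the per-feature weight model, the +1/-1 feedback encoding, and all names are assumptions, not the patent's decision model:

```python
def update_decision_model(weights, feedback, lr=0.1):
    """Adjust per-feature weights from user feedback.

    feedback is a list of (feature, signal) pairs: +1 for images the
    user kept, shared, or favorited; -1 for deleted ones.
    """
    updated = dict(weights)
    for feature, signal in feedback:
        updated[feature] = updated.get(feature, 0.0) + lr * signal
    return updated


def select_best(images, weights):
    """images: (name, feature-list) pairs; the highest summed weight wins."""
    return max(images, key=lambda im: sum(weights.get(f, 0.0) for f in im[1]))[0]
```

Repeated positive feedback on a feature (e.g. smiling faces) eventually flips which image the model selects as optimal.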
Image processing circuitry and image processing method
An image processing circuitry configured to: store, based on an obtained command, a first image identifier of first image data, or the first image data, on a write-once-read-many memory, wherein the first image identifier is generated based on the first image data such that it is unique to the first image data.
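A content-derived identifier over a write-once store can be sketched with a hash; using SHA-256 and this class shape is an assumption for illustration, not the circuitry's actual scheme:

```python
import hashlib


class WormStore:
    """Write-once-read-many sketch: the identifier is derived from the
    image bytes, so identical data always maps to the same identifier,
    and data already stored under an identifier is never overwritten."""

    def __init__(self):
        self._store = {}

    def put(self, image_bytes):
        image_id = hashlib.sha256(image_bytes).hexdigest()  # unique per content
        if image_id not in self._store:
            self._store[image_id] = image_bytes             # write once
        return image_id

    def get(self, image_id):
        return self._store[image_id]                        # read many
```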
Memories and moments in augmented reality (AR)
In one aspect, the method includes causing presentation of a camera interface, the camera interface to display a first image captured by a camera of a computing device of a first user; causing presentation of a plurality of image augmentation icons within the camera interface, each of the image augmentation icons being associated with a respective image augmentation mechanism; detecting user selection of a second augmentation icon of the plurality of image augmentation icons, the second augmentation icon being associated with a second image augmentation mechanism; determining a second image selection criterion for the second image augmentation mechanism; selecting a second image based on the second image selection criterion, the second image being selected from a plurality of photographs associated with the first user; and overlaying the second image on the first image to generate a composite image that is presented within the camera interface.
Reducing camera throughput to downstream systems using an intermediary device
Reducing camera throughput to downstream systems using an intermediary device, including: receiving, by a device and from a camera via a first data link between the device and the camera, a frame; selecting, by the device, an area of focus for the frame; generating, by the device from the frame, a downsampled frame and a cropped frame, wherein the cropped frame is based on the area of focus; and providing, by the device to a computing system of an autonomous vehicle via a second data link between the device and the computing system, the downsampled frame and the cropped frame instead of the frame, wherein the second data link has a lower bandwidth than the first data link.
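The two reduced outputs the intermediary device substitutes for the full frame can be sketched in pure Python; real hardware would resample rather than decimate, and the frame layout and names are assumptions:

```python
def reduce_frame(frame, focus, crop_size, factor):
    """Produce (downsampled, cropped) in place of the full frame.

    frame: 2D list of pixels; focus: (row, col) area of focus.  The
    downsampled frame keeps every factor-th pixel; the crop is a
    crop_size x crop_size window around the focus, clamped to the frame.
    """
    down = [row[::factor] for row in frame[::factor]]
    r0 = max(0, min(focus[0] - crop_size // 2, len(frame) - crop_size))
    c0 = max(0, min(focus[1] - crop_size // 2, len(frame[0]) - crop_size))
    crop = [row[c0:c0 + crop_size] for row in frame[r0:r0 + crop_size]]
    return down, crop
```

Sending both reduced images over the second, lower-bandwidth link preserves a coarse view of the whole scene plus full resolution where it matters.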
Image processing apparatus capable of converting image file such that all annotation information can be used, control method therefor, and storage medium
An image processing apparatus that is capable of converting an image file such that all annotation information recorded in the image file can be used. The apparatus converts a first-format image file, including image data and annotation information on the image data, to a second-format image file in which the memory capacity of a predetermined area where annotation information is recorded is less than that of the first-format image file. When the annotation information included in the first-format image file is not predetermined annotation information whose data amount is equal to or greater than a predetermined value, the apparatus records the annotation information into the predetermined area in the second-format image file; when the annotation information is the predetermined annotation information, the apparatus records link information on the annotation information in the predetermined area in the second-format image file.
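The embed-or-link decision above can be sketched as follows; the dict representation of the predetermined area and the "link:" scheme are assumptions for illustration:

```python
def convert_annotations(annotations, threshold):
    """Build the second format's predetermined annotation area.

    annotations: name -> annotation bytes from the first-format file.
    Entries whose data amount is below the threshold are recorded
    directly; larger ones are recorded as link information, so no
    annotation becomes unusable in the smaller area.
    """
    area = {}
    for name, data in annotations.items():
        if len(data) < threshold:
            area[name] = data               # record the annotation itself
        else:
            area[name] = "link:" + name     # record only link information
    return area
```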