Patent classifications
H04N23/45
Imaging apparatus, image data processing method of imaging apparatus, and program
An imaging apparatus includes a plurality of imaging elements, at least one signal processing circuit, and a transfer path, in which each of the plurality of imaging elements includes a memory that is incorporated in the imaging element and stores image data obtained by imaging a subject, and a communication interface that is incorporated in the imaging element and outputs output image data based on the image data stored in the memory, the transfer path connects the plurality of imaging elements and a single signal processing circuit in series, and the communication interface of each of the plurality of imaging elements outputs the output image data to an imaging element in a rear stage or the signal processing circuit through the transfer path.
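The serially connected transfer path described in this abstract can be sketched as a daisy chain in which each imaging element holds its frame in on-chip memory and forwards output data to the next stage. A minimal sketch, assuming illustrative names (`ImagingElement`, `SignalProcessingCircuit` and the forwarding behavior are not specified by the patent):

```python
class SignalProcessingCircuit:
    """The single processing circuit at the end of the serial transfer path."""
    def __init__(self):
        self.received = []

    def receive(self, frame):
        self.received.append(frame)

class ImagingElement:
    def __init__(self, name):
        self.name = name
        self.memory = None        # in-sensor frame memory
        self.downstream = None    # imaging element in the rear stage, or the circuit

    def capture(self, subject):
        self.memory = f"{self.name}:{subject}"

    def output(self):
        # Communication interface outputs the stored frame downstream.
        self.downstream.receive(self.memory)

    def receive(self, frame):
        # Frames arriving from upstream elements are passed through unchanged.
        self.downstream.receive(frame)

# Wire three imaging elements in series ahead of one processing circuit.
dsp = SignalProcessingCircuit()
s1, s2, s3 = (ImagingElement(n) for n in ("cam1", "cam2", "cam3"))
s1.downstream, s2.downstream, s3.downstream = s2, s3, dsp

for s in (s1, s2, s3):
    s.capture("subject")
for s in (s1, s2, s3):
    s.output()

print(dsp.received)  # all three frames arrive over the single serial path
```

The key design point is that only one physical link per element is needed: upstream frames ride the same path as the element's own output.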
Array Imaging Module and Molded Photosensitive Assembly and Manufacturing Method Thereof for Electronic Device
An array imaging module includes a molded photosensitive assembly which includes a supporting member, at least a circuit board, at least two photosensitive units, at least two lead wires, and a mold sealer. The photosensitive units are coupled at the chip coupling area of the circuit board. The lead wires electrically connect the photosensitive units to the circuit board at the chip coupling area. The mold sealer includes a main mold body and has two optical windows. When the main mold body is formed, the lead wires, the circuit board and the photosensitive units are sealed and molded by the main mold body of the mold sealer, such that after the main mold body is formed, the main mold body and at least a portion of the circuit board are integrally formed together, with the photosensitive units aligned with the optical windows respectively.
Computer Vision Based Driver Assistance Devices, Systems, Methods and Associated Computer Executable Code
The present invention includes computer vision based driver assistance devices, systems, methods and associated computer executable code (hereinafter collectively referred to as: “ADAS”). According to some embodiments, an ADAS may include one or more fixed image/video sensors and one or more adjustable or otherwise movable image/video sensors, characterized by different dimensions of fields of view. According to some embodiments of the present invention, an ADAS may include improved image processing. According to some embodiments, an ADAS may also include one or more sensors adapted to monitor/sense an interior of the vehicle and/or the persons within. An ADAS may include one or more sensors adapted to detect parameters relating to the driver of the vehicle and processing circuitry adapted to assess mental conditions/alertness of the driver and directions of driver gaze. These may be used to modify ADAS operation/thresholds.
Image capture display terminal
The control module outputs a control signal to control the first image capture module and the second image capture module to be in a working state in a time-sharing manner. A first signal interface is electrically connected to the first node. The first optimization unit is electrically connected between the first node and the first image capture module, and the second optimization unit is electrically connected between the first node and the second image capture module. The first optimization unit is configured to smooth the waveform of a first image signal corresponding to a first image captured while the first image capture module is in the working state, and the second optimization unit is configured to smooth the waveform of a second image signal corresponding to a second image captured while the second image capture module is in the working state.
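The time-sharing arrangement in this abstract can be sketched as follows; the class names, the 3-tap moving average standing in for the optimization unit, and the signal values are all illustrative assumptions, not details from the patent:

```python
def smooth(signal):
    # Simple 3-tap moving average standing in for an optimization unit.
    padded = [signal[0]] + signal + [signal[-1]]
    return [sum(padded[i:i + 3]) / 3 for i in range(len(signal))]

class Terminal:
    def __init__(self, module_a, module_b):
        self.modules = [module_a, module_b]

    def capture(self, active):
        # Control signal selects exactly one capture module per time slot,
        # so only one module drives the shared first node at a time.
        raw = self.modules[active]()
        return smooth(raw)

cam_a = lambda: [10.0, 14.0, 10.0]   # illustrative image-signal samples
cam_b = lambda: [50.0, 50.0, 50.0]
term = Terminal(cam_a, cam_b)

print(term.capture(0))   # smoothed signal from the first capture module
print(term.capture(1))   # [50.0, 50.0, 50.0]
```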
Image alignment for computational photography
Image frames for computational photography may be corrected, such as through rolling shutter correction (RSC), prior to fusion of the image frames to reduce wobble and jitter artifacts present in a video sequence of HDR-enhanced image frames. First and second motion data regarding motion of the image capture device may be determined for times corresponding to the capturing of the first and second image frames, respectively. The rolling shutter correction (RSC) may be applied to the first and second image frames based on both the first and second motion data. The corrected first and second image frames may then be aligned and fused to obtain a single output image frame with higher dynamic range than either of the first or second image frames.
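The pipeline described above (per-frame rolling shutter correction driven by per-frame motion data, followed by alignment and exposure fusion) can be sketched numerically. This is a toy illustration under stated assumptions: a linear row-shift motion model, a fixed exposure gain, and a saturation threshold of 250 are all invented for the example, not taken from the patent:

```python
import numpy as np

def rolling_shutter_correct(frame, row_shift_px):
    # Shift each row horizontally by the motion accumulated up to its
    # readout time (linear model: later rows shift more).
    h, _ = frame.shape
    out = np.empty_like(frame)
    for r in range(h):
        shift = int(round(row_shift_px * r / (h - 1)))
        out[r] = np.roll(frame[r], -shift)
    return out

def fuse_hdr(short_exp, long_exp, long_gain=4.0):
    # Simple exposure fusion: prefer the long exposure except where it
    # saturates, in which case fall back to the short exposure.
    long_lin = long_exp / long_gain
    saturated = long_exp >= 250
    return np.where(saturated, short_exp, long_lin)

h, w = 8, 16
short_frame = np.full((h, w), 40.0)
long_frame = np.full((h, w), 160.0)

# Each frame gets its own correction, driven by its own motion sample.
short_c = rolling_shutter_correct(short_frame, row_shift_px=2)
long_c = rolling_shutter_correct(long_frame, row_shift_px=3)

fused = fuse_hdr(short_c, long_c)
print(fused[0, 0])   # 40.0 (unsaturated long exposure scaled down: 160 / 4)
```

Correcting both frames before fusion is what keeps the per-frame rolling-shutter distortions from being baked into the fused output as wobble or jitter.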
METHODS AND APPARATUS TO OPERATE A MOBILE CAMERA FOR LOW-POWER USAGE
Disclosed examples include accessing sensor data; recognizing, by executing an instruction with programmable circuitry, a feature in the sensor data based on a convolutional neural network; and transitioning, by executing an instruction with the programmable circuitry, a mobile device between at least two of motion feature detection, audio feature detection, or camera feature detection after the feature is recognized in the sensor data, the mobile device to operate at a different level of power consumption after the transition than before the transition.
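The staged transitions described in this abstract resemble a power state machine: the device idles in a cheap detection state and escalates to more power-hungry stages only after a feature is recognized. A hedged sketch follows; the stage ordering, power figures, and the stand-in for the convolutional-network detector are illustrative assumptions:

```python
POWER_MW = {"motion": 2, "audio": 15, "camera": 120}   # illustrative figures
NEXT_STAGE = {"motion": "audio", "audio": "camera", "camera": "camera"}

class MobileDevice:
    def __init__(self):
        self.stage = "motion"   # start in the lowest-power detection state

    def power(self):
        return POWER_MW[self.stage]

    def process(self, sensor_data, cnn):
        # `cnn` stands in for the convolutional-network feature detector;
        # recognizing a feature triggers a transition to the next stage,
        # changing the device's level of power consumption.
        if cnn(self.stage, sensor_data):
            self.stage = NEXT_STAGE[self.stage]

fake_cnn = lambda stage, data: data.get(stage, False)

dev = MobileDevice()
print(dev.power())                       # 2   (motion feature detection only)
dev.process({"motion": True}, fake_cnn)
print(dev.stage, dev.power())            # audio 15
dev.process({"audio": True}, fake_cnn)
print(dev.stage, dev.power())            # camera 120
```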
ELECTRIC SHAVER WITH IMAGING CAPABILITY
System and method for improving the shaving experience by providing improved visibility of the skin shaving area. A digital camera is integrated with the electric shaver for close image capturing of the shaving area, and displaying it on a display unit. The display unit can be an integral part of the electric shaver casing, or housed in a separate device which receives the image via a communication channel. The communication channel can be wireless (using radio, audio or light) or wired, such as dedicated cabling or powerline communication. A light source is used to better illuminate the shaving area. Video compression and digital image processing techniques are used to provide improved shaving results. The wired communication medium can simultaneously carry power from the electric shaver assembly to the display unit, or from the display unit to the electric shaver.
Multi-Baseline Camera Array System Architectures for Depth Augmentation in VR/AR Applications
Embodiments of the invention provide a camera array imaging architecture that computes depth maps for objects within a scene captured by the cameras, using a near-field sub-array of cameras to compute depth to near-field objects and a far-field sub-array of cameras to compute depth to far-field objects. In particular, a baseline distance between cameras in the near-field sub-array is less than a baseline distance between cameras in the far-field sub-array in order to increase the accuracy of the depth map. Some embodiments provide a near-IR illumination source for use in computing depth maps.
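The baseline/accuracy trade-off this abstract relies on follows from the standard stereo relation Z = f·B/d (depth Z, focal length f in pixels, baseline B, disparity d in pixels): a fixed one-pixel disparity error causes a depth error that shrinks as the baseline grows. A back-of-the-envelope sketch, with focal length, baselines, and distances chosen purely for illustration:

```python
def depth(f_px, baseline_m, disparity_px):
    # Standard pinhole stereo relation: Z = f * B / d.
    return f_px * baseline_m / disparity_px

def depth_error_for_1px(f_px, baseline_m, z_m):
    # Depth error caused by a one-pixel disparity underestimate at depth z.
    d = f_px * baseline_m / z_m            # true disparity at depth z
    return abs(depth(f_px, baseline_m, d - 1) - z_m)

f = 1000.0       # focal length in pixels (illustrative)
z_far = 10.0     # a far-field object at 10 m

short_err = depth_error_for_1px(f, 0.02, z_far)   # 2 cm baseline
long_err = depth_error_for_1px(f, 0.10, z_far)    # 10 cm baseline
print(round(short_err, 2), round(long_err, 2))    # 10.0 1.11
```

At 10 m the 2 cm baseline yields only 2 px of disparity, so a 1 px error doubles the depth estimate, while the 10 cm baseline yields 10 px and keeps the error near 1.1 m; conversely, wide baselines make very close objects hard to match, which motivates the short-baseline near-field sub-array.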