Patent classifications
H04N13/271
Systems and Methods for Thermal Imaging
A technology is described for thermal imaging. In one example of the technology, a plurality of thermal sensors in a non-collinear configuration are used to simultaneously image scene regions of an ambient environment. Series of synchronized thermal image sets may be obtained from the thermal sensors, and virtual-stereo pairs of image tiles may be defined by selecting image tiles from a plurality of undetermined pairs of image tiles. Thereafter, two-dimensional (2D) correlation may be performed on the virtual-stereo pairs of thermal image tiles to form 2D correlation tiles for the scene region of the ambient environment, and a depth map of the ambient environment may be generated after consolidating the 2D correlation tiles corresponding to the same environmental objects to increase contrast of objects represented in the depth map.
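The 2D correlation step described above can be sketched with phase correlation, a standard FFT-based way to find the shift between two image tiles. This is a minimal illustration of correlating a virtual-stereo tile pair to recover a disparity, not the patented pipeline; the function name and the synthetic tiles are assumptions for the example.

```python
import numpy as np

def tile_disparity(tile_a, tile_b):
    """Estimate the 2D shift between two thermal image tiles via
    FFT-based phase correlation (one way to realize 2D correlation
    on a virtual-stereo tile pair)."""
    a = tile_a - tile_a.mean()
    b = tile_b - tile_b.mean()
    # Normalized cross-power spectrum; its inverse FFT peaks at the shift.
    f = np.fft.fft2(b) * np.conj(np.fft.fft2(a))
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-9)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped peak indices to signed shifts.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

# Synthetic check: tile_b is tile_a circularly shifted by (3, 5).
rng = np.random.default_rng(0)
tile_a = rng.random((64, 64))
tile_b = np.roll(tile_a, (3, 5), axis=(0, 1))
```

In a full depth pipeline the recovered per-tile disparity would then be converted to depth using the known sensor baseline.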
Three-Dimensional Sensor Acuity Recovery Assistance
A computing device includes: a three-dimensional (3D) sensor configured to capture point cloud data from a field of view (FOV); an auxiliary sensor configured to capture reference depth measurements corresponding to a surface within the FOV; a controller connected with the 3D sensor and the auxiliary sensor, the controller configured to: detect a reference depth capture condition; when the reference depth capture condition satisfies a quality criterion, control the auxiliary sensor to capture a reference depth corresponding to the surface within the FOV; and initiate, based on the captured reference depth, generation of corrective data for use at the 3D sensor to capture the point cloud data.
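One simple form the corrective data could take is a per-sensor depth bias: compare the 3D sensor's point cloud depths over the reference surface with the auxiliary sensor's reference depth, and carry the difference forward as an offset. This is a hypothetical sketch of that idea; the patent does not specify the correction model, and both function names are assumptions.

```python
import statistics

def corrective_offset(point_depths, reference_depth):
    """Depth bias between a 3D sensor's measurements over a surface
    and an auxiliary reference depth for that surface (a hypothetical
    form of 'corrective data')."""
    # Median resists outlier points in the cloud.
    return reference_depth - statistics.median(point_depths)

def apply_correction(point_depths, offset):
    """Apply the stored corrective offset to newly captured depths."""
    return [d + offset for d in point_depths]

# Sensor reads ~2.05 m over a wall the auxiliary sensor says is 2.00 m away.
offset = corrective_offset([2.05, 2.06, 2.04], 2.00)
```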
VEHICLE TERRAIN CAPTURE SYSTEM AND DISPLAY OF 3D DIGITAL IMAGE AND 3D SEQUENCE
A system to simulate a 3D image of a terrain includes a vehicle having a geocoding detector to identify coordinate reference data as the vehicle traverses the terrain, a memory device for storing instructions, a processor, and a capture module in communication with the processor and connected to the vehicle. The capture module has a 2D RGB digital camera to capture a series of 2D digital images of the terrain and a digital elevation capture device to capture a series of digital elevation scans, from which a digital elevation model of the terrain is generated with the coordinate reference data. The series of 2D digital images is overlaid on the digital elevation model of the terrain while the coordinate reference data is maintained, a key subject point is identified in the series of 2D digital images, and a display is configured to display a multidimensional digital image/sequence.
CAMERA MODULE
A camera module according to an embodiment of the present invention comprises: a light output portion for successively outputting a first output light signal and a second output light signal, which are emitted to an object, during a single period; a lens portion for concentrating a first input light signal and a second input light signal, which are reflected from the object, the lens portion comprising an infrared (IR) filter and at least one lens disposed on the IR filter; an image sensor for generating a first electric signal and a second electric signal from the first input light signal and the second input light signal, which have been concentrated by the lens portion; a tilting portion for shifting optical paths of the first input light signal and the second input light signal according to a predetermined rule; and an image control portion for acquiring depth information of the object by using the first electric signal and a phase difference between the first output light signal and the first input light signal, and acquiring a 2D image of the object by using the second electric signal.
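The depth-from-phase-difference principle used by the image control portion is the standard indirect time-of-flight relation: the reflected signal lags the emitted signal by a phase proportional to the round-trip distance. A minimal sketch of that formula (not the module's actual processing chain; the function name is an assumption):

```python
import math

# Speed of light in m/s.
C = 299_792_458.0

def tof_depth(phase_diff_rad, mod_freq_hz):
    """Depth from the phase difference between an output light signal
    and the reflected input light signal (indirect time of flight).
    The round trip covers 2*d, so d = c * delta_phi / (4 * pi * f)."""
    return C * phase_diff_rad / (4 * math.pi * mod_freq_hz)
```

At a 20 MHz modulation frequency a phase difference of pi corresponds to roughly 3.75 m, i.e. half the ~7.5 m unambiguous range at that frequency.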
CAMERA DEVICE
A camera device according to an embodiment of the present invention includes a light output unit that outputs an output light signal to be irradiated to an object, a lens unit that condenses an input light signal reflected from the object, an image sensor that generates an electric signal from the input light signal condensed by the lens unit, and an image processing unit that extracts a depth map of the object using at least one of a time difference and a phase difference between the output light signal and the input light signal received by the image sensor, the lens unit including an IR (infrared) filter, a plurality of solid lenses disposed on the IR filter, and a liquid lens disposed on the plurality of solid lenses or disposed between the plurality of solid lenses, the camera device further including a first driving unit that controls shifting of the IR filter or the image sensor and a second driving unit that controls a curvature of the liquid lens, an optical path of the input light signal being repeatedly shifted according to a predetermined rule by one of the first driving unit and the second driving unit, and the optical path of the input light signal being shifted according to predetermined control information by the other one of the first driving unit and the second driving unit.
PROCESSING APPARATUS, PROCESSING SYSTEM, IMAGE PICKUP APPARATUS, PROCESSING METHOD, AND MEMORY MEDIUM
An apparatus includes at least one processor configured to execute a plurality of tasks including a first normal acquiring task configured to acquire first normal information of an object, a designated portion acquiring task configured to acquire a designated portion of the object, the designated portion being designated by a user, a second normal acquiring task configured to acquire second normal information of the object, the second normal information being normal information having a lower frequency than a frequency of the first normal information, a virtual light source determining task configured to determine a virtual light source condition based on the second normal information corresponding to the designated portion, and a rendering task configured to generate a rendering image using the first normal information and the virtual light source condition.
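The rendering task above combines per-pixel normal information with a virtual light source condition. A minimal sketch of such a step, assuming simple Lambertian shading under one virtual directional light (the patent does not fix the shading model, and the function name is an assumption):

```python
import numpy as np

def render_lambertian(normals, albedo, light_dir, intensity=1.0):
    """Render an image from per-pixel unit surface normals under a
    single virtual directional light (Lambertian shading).

    normals: array of shape (H, W, 3); light_dir: direction toward the light.
    """
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    # Per-pixel n . l, clamped so back-facing surfaces render dark.
    shade = np.clip(np.tensordot(normals, l, axes=([-1], [0])), 0.0, None)
    return intensity * albedo * shade

# A flat patch facing the light renders at full brightness.
flat = np.zeros((4, 4, 3))
flat[..., 2] = 1.0  # all normals point along +z
```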
ENDOSCOPY SYSTEM AND METHOD OF RECONSTRUCTING THREE-DIMENSIONAL STRUCTURE
An endoscopy system including a flexible insertion tube, a motion sensing device, and a processor is provided. The flexible insertion tube has a central axis. The motion sensing device includes a housing, a plurality of patterns, and a plurality of sensors. The patterns are disposed at a surface of the flexible insertion tube according to an axial orientation distribution and an angle distribution based on the central axis. During relative motion between the flexible insertion tube and the motion sensing device through a guiding hole, the sensors sense a motion state of the patterns so as to obtain a motion-state sensing result. The processor determines insertion depth information and insertion tube rotation angle information based on the motion-state sensing result, the axial orientation distribution, and the angle distribution. A method of reconstructing a three-dimensional structure is also provided.
ADVANCED DRIVER ASSIST SYSTEM AND METHOD OF DETECTING OBJECT IN THE SAME
An advanced driver assist system (ADAS) includes a processing circuit and a memory which stores instructions executable by the processing circuit. The processing circuit executes the instructions to cause the ADAS to receive, from a vehicle that is in motion, a stereo video sequence, generate a position image including at least one object included in the stereo image, generate second position information associated with the at least one object based on reflected signals received from the vehicle, determine regions each including at least a portion of the at least one object as candidate bounding boxes based on the stereo image and the position image, and selectively adjust class scores of respective ones of the candidate bounding boxes associated with the at least one object based on whether respective first position information of the respective ones of the candidate bounding boxes matches the second position information.
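The selective score adjustment can be sketched as a simple fusion rule: boost a candidate box's class score when its camera-derived position agrees with a radar/reflected-signal return, and damp it otherwise. The tolerance and scale factors below are hypothetical; the patent does not specify the weighting.

```python
def adjust_scores(boxes, radar_positions, tol=2.0, boost=1.2, damp=0.8):
    """Scale each candidate bounding box's class score up when its
    first (image-derived) position matches a second (reflected-signal)
    position, and down when it does not.

    boxes: list of dicts with 'score' in [0, 1] and 'pos' = (x, y) in metres.
    """
    adjusted = []
    for box in boxes:
        matched = any(
            abs(box["pos"][0] - rx) <= tol and abs(box["pos"][1] - ry) <= tol
            for rx, ry in radar_positions
        )
        factor = boost if matched else damp
        adjusted.append({**box, "score": min(1.0, box["score"] * factor)})
    return adjusted

# One box confirmed by a nearby radar return, one unconfirmed.
result = adjust_scores(
    [{"score": 0.5, "pos": (10.0, 5.0)}, {"score": 0.5, "pos": (40.0, 0.0)}],
    radar_positions=[(10.5, 5.2)],
)
```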
System, method, and apparatus for determining a high dynamic range image
Systems and methods are disclosed for image signal processing. For example, systems may include an image sensor and a processing apparatus. The image sensor captures image data using a plurality of selectable exposure times. The processing apparatus receives a first image from the image sensor captured with a first exposure time and receives a second image from the image sensor captured with a second exposure time that is less than the first exposure time. A high dynamic range image is determined based on the first image and the second image, wherein an image portion of the high dynamic range image is based on a corresponding image portion of the second image when a pixel of a corresponding image portion of the first image is saturated. An output image that is based on the high dynamic range image is stored, displayed, or transmitted.
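The merge rule described above — take the short-exposure data wherever the long exposure clips — can be sketched as follows. The gain-matching by exposure ratio and the fixed saturation level are assumptions for the example, not details claimed by the abstract.

```python
import numpy as np

def merge_hdr(long_img, short_img, exposure_ratio, sat_level=255):
    """Combine a long- and a short-exposure frame: where the long
    exposure saturates, fall back to the short exposure scaled into
    the long exposure's radiometric units."""
    long_f = long_img.astype(float)
    # exposure_ratio = long_exposure_time / short_exposure_time.
    short_f = short_img.astype(float) * exposure_ratio
    saturated = long_img >= sat_level
    return np.where(saturated, short_f, long_f)

# Second pixel is clipped in the long exposure, so the scaled
# short-exposure value is used there instead.
hdr = merge_hdr(np.array([100, 255]), np.array([25, 40]), exposure_ratio=4)
```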