Patent classifications
H04N13/211
Stereo imaging miniature endoscope with single imaging and conjugated multi-bandpass filters
An endoscope includes a housing with a distal end insertable into a cavity; an image capture device at the distal end to obtain 3D images, and process them to form a video signal; and a folded substrate folded into a U-shape having first and second legs. The image capture device includes a detector and a lens system with right and left multi-band pass filters having right pass bands that are complements of left pass bands. The lens system receives the 3D images including right and left images. The detector faces the lens system to obtain the right and left images. A processing circuit faces the proximal end behind the detector to process signals from the detector. The folded substrate includes the detector at an outer side of the first leg facing the lens system and the processing circuit at an outer side of the second leg facing the proximal end.
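The "conjugated" filter arrangement can be illustrated with a small sketch. This is a hypothetical model, not the patent's specification: the right filter passes a comb of wavelength bands and the left filter passes the interleaved complement, so a single detector can separate the two stereo views.

```python
# Illustrative passbands in nanometres; the real filter combs are not
# disclosed in the abstract, these values are assumptions.
RIGHT_BANDS = [(400, 450), (500, 550), (600, 650)]
LEFT_BANDS = [(450, 500), (550, 600), (650, 700)]

def bands_overlap(a, b):
    """Return True if two (lo, hi) wavelength intervals overlap."""
    return a[0] < b[1] and b[0] < a[1]

def are_complements(right, left):
    """True when no right pass band overlaps any left pass band."""
    return all(not bands_overlap(r, l) for r in right for l in left)

ok = are_complements(RIGHT_BANDS, LEFT_BANDS)
```

Because the band sets never overlap, light admitted through the right filter is rejected by the left filter and vice versa, which is what lets one detector capture both images.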
METHOD FOR DETERMINING OBJECT INFORMATION RELATING TO AN OBJECT IN A VEHICLE ENVIRONMENT, CONTROL UNIT AND VEHICLE
The disclosure relates to a method for determining object information relating to an object in an environment of a vehicle having a camera. The method includes: capturing the environment with the camera from a first position; changing the position of the camera; capturing the environment with the camera from a second position; and determining object information relating to an object by selecting at least one first pixel in the first image and at least one second pixel in the second image such that they are assigned to the same object point of the object, and determining object coordinates of the assigned object point by triangulation. Changing the position of the camera is brought about by controlling an active actuator system in the vehicle. The actuator system adjusts the camera by an adjustment distance without changing a driving condition of the vehicle.
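The triangulation step can be sketched in two dimensions. This is a minimal geometric illustration under assumed conventions (bearing angles measured from the baseline axis), not the patent's actual computation:

```python
import math

def triangulate(angle1, angle2, baseline):
    """Intersect two rays observing the same object point.

    angle1, angle2: bearing angles (radians) of the object point as seen
    from camera positions (0, 0) and (baseline, 0) respectively.
    Returns the (x, y) coordinates of the object point.
    """
    # Ray 1 from (0, 0):        y = x * tan(angle1)
    # Ray 2 from (baseline, 0): y = (x - baseline) * tan(angle2)
    t1, t2 = math.tan(angle1), math.tan(angle2)
    x = baseline * t2 / (t2 - t1)
    y = x * t1
    return x, y

# Object point at (2, 3), camera moved by an adjustment distance of 1.0
a1 = math.atan2(3, 2)        # bearing from the first position (0, 0)
a2 = math.atan2(3, 2 - 1.0)  # bearing from the second position (1.0, 0)
x, y = triangulate(a1, a2, 1.0)
```

The known adjustment distance produced by the actuator system plays the role of the stereo baseline: without it, the two rays could not be scaled into metric object coordinates.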
IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM
There is provided an image processing apparatus. An obtainment unit obtains a first circular fisheye image accompanied by a first missing region in which no pixel value is present. A generation unit generates a first equidistant cylindrical projection image by performing first equidistant cylindrical transformation processing based on the first circular fisheye image. The generation unit generates the first equidistant cylindrical projection image such that a first corresponding region corresponding to the first missing region has a pixel value in the first equidistant cylindrical projection image.
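The core of an equidistant cylindrical (equirectangular) transformation is a per-pixel mapping from longitude/latitude to fisheye image coordinates. The sketch below assumes an equidistant-projection fisheye model; the field-of-view and radius parameters are illustrative, not taken from the patent:

```python
import math

def equirect_to_fisheye(lon, lat, fov_deg=180.0, radius=1.0):
    """Map (longitude, latitude) in radians to normalized fisheye (x, y)."""
    # Direction vector for the equirectangular sample; z = optical axis
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    theta = math.acos(max(-1.0, min(1.0, z)))       # angle off the optical axis
    r = radius * theta / math.radians(fov_deg / 2)  # equidistant model: r ~ theta
    phi = math.atan2(y, x)
    return r * math.cos(phi), r * math.sin(phi)

center = equirect_to_fisheye(0.0, 0.0)        # lands at the fisheye center
edge = equirect_to_fisheye(math.pi / 2, 0.0)  # lands on the fisheye rim
```

Directions beyond the fisheye field of view map to r greater than the image radius, i.e. into the missing region where no source pixel exists; the generation unit described above is what assigns a pixel value to the corresponding region of the output.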
THREE-DIMENSIONAL IMAGES CREATIONS
Examples described herein relate to an imaging device. For instance, the imaging device can comprise a plurality of lenses to receive light, a reflector to transmit the light from the plurality of lenses through a shaft, a mirror to receive the light transmitted by the reflector and reflect the light into a sensor, a motor to rotate the mirror to allow the mirror to channel light into the sensor, the sensor to generate a set of frames of image data based on the light received from the mirror, and a processing resource to synchronize a motor speed based on the set of frames.
MULTI-TIER CAMERA RIG FOR STEREOSCOPIC IMAGE CAPTURE
In one general aspect, a camera rig can include a first tier of image sensors including a first plurality of image sensors where the first plurality of image sensors are arranged in a circular shape and oriented such that a field of view of each of the first plurality of image sensors has an axis perpendicular to a tangent of the circular shape. The camera rig can include a second tier of image sensors including a second plurality of image sensors where the second plurality of image sensors are oriented such that a field of view of each of the second plurality of image sensors has an axis non-parallel to the field of view of each of the first plurality of image sensors.
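The geometry of one tier can be sketched as follows. This is an illustrative placement computation under assumed conventions (sensors evenly spaced, radially outward axes), not the rig's published design:

```python
import math

def ring_poses(n_sensors, radius):
    """Return (position, outward unit axis) for each sensor on one tier.

    Each optical axis points radially outward, i.e. perpendicular to the
    tangent of the mounting circle at that sensor's position.
    """
    poses = []
    for i in range(n_sensors):
        ang = 2 * math.pi * i / n_sensors
        pos = (radius * math.cos(ang), radius * math.sin(ang))
        axis = (math.cos(ang), math.sin(ang))  # radial, perpendicular to tangent
        poses.append((pos, axis))
    return poses

tier1 = ring_poses(8, 0.15)  # e.g. 8 sensors on a 15 cm circle (assumed values)
```

A second tier could reuse the same placement with each axis tilted out of the circle's plane, which makes its fields of view non-parallel to those of the first tier, as the abstract requires.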
Still-image extracting method and image processing device for implementing the same
A still-image extracting method is disclosed. Frames of an object are extracted as still images from a moving image stream chronologically continuously captured by a camera. The camera moves relative to the object. First frames are extracted from the moving image stream. Image capture times of the extracted first frames are obtained. Image capture positions of the camera at the image capture times of the first frames are identified based on the first frames. Image capture times of the frames captured at image capture positions spaced at equal intervals are estimated based on both the image capture positions, identified by the first frames, of the camera and the obtained image capture times. Second frames at the estimated image capture times are extracted as frames captured and obtained at image capture positions spaced apart at equal intervals from the moving image stream.
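The estimation step amounts to interpolating time as a function of position along the camera's path. The sketch below is a hypothetical one-dimensional version using linear interpolation; the patent does not specify the interpolation scheme:

```python
def estimate_times(positions, times, n_intervals):
    """Estimate capture times at positions spaced at equal intervals.

    positions, times: capture positions and times of the first frames,
    with positions strictly increasing along the path.
    Returns n_intervals + 1 estimated times at equally spaced positions.
    """
    start, end = positions[0], positions[-1]
    targets = [start + (end - start) * k / n_intervals
               for k in range(n_intervals + 1)]
    est = []
    j = 0
    for p in targets:
        # Advance to the segment of known positions that contains p
        while j < len(positions) - 2 and positions[j + 1] < p:
            j += 1
        frac = (p - positions[j]) / (positions[j + 1] - positions[j])
        est.append(times[j] + frac * (times[j + 1] - times[j]))
    return est

# Uneven motion: the camera covered positions 0, 1, 4 at times 0, 2, 3,
# so the frame nearest the halfway position 2 is not at the halfway time.
est = estimate_times([0.0, 1.0, 4.0], [0.0, 2.0, 3.0], 2)
```

Second frames are then extracted from the moving image stream at these estimated times, yielding still images spaced at equal intervals in position rather than in time.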
Method for controlling electronic device
According to one embodiment, a method for controlling an electronic device includes illuminating an object while moving a light emitting area formed by turning on light emitting elements simultaneously, capturing a shadow generated by the object by image sensing elements on the same substrate as the light emitting elements, and creating three-dimensional data about an outer shape of the object based on a shadow image.
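One way shadow geometry yields shape data can be sketched with similar triangles. This is an illustrative model (a point light and a flat sensor plane are assumed), not the patent's reconstruction method:

```python
def edge_height(light_x, light_h, object_x, shadow_x):
    """Infer an object edge's height from its shadow on the sensor plane.

    The light at (light_x, light_h), the object edge at (object_x, h),
    and the shadow tip at (shadow_x, 0) are collinear, so by similar
    triangles: h / (shadow_x - object_x) = light_h / (shadow_x - light_x).
    """
    return light_h * (shadow_x - object_x) / (shadow_x - light_x)

# Light at x=0 and height 10; an edge at x=4 casts its shadow tip at x=5
h = edge_height(0.0, 10.0, 4.0, 5.0)
```

Moving the light emitting area sweeps the shadow across the sensing elements, and each light position gives another such constraint, which is how a full outer shape can be accumulated from shadow images alone.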