Patent classifications
G06T7/571
Methods and apparatus for absolute and relative depth measurements using camera focus distance
A depth measuring apparatus includes a camera assembly configured to capture a plurality of images of a target at a plurality of distances from the target. The depth measuring apparatus further includes a controller configured to, for each of a plurality of regions within the plurality of images: determine corresponding gradient values within the plurality of images; determine a corresponding maximum gradient value from the corresponding gradient values; and determine, based on the corresponding maximum gradient value, a depth measurement for a region of the plurality of regions.
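The depth-from-focus idea in this abstract (per region, find the image in the focus stack with the maximum gradient, then map that image back to a capture distance) can be sketched as follows. This is an illustrative reading, not the patented implementation; the function name `depth_from_focus` and the per-pixel (rather than per-region) granularity are assumptions.

```python
import numpy as np

def depth_from_focus(image_stack, distances):
    """For each pixel, pick the capture distance whose image has the
    maximum local gradient magnitude, i.e. is sharpest there.

    image_stack: (N, H, W) grayscale images captured at N distances.
    distances:   length-N list of camera-to-target distances.
    """
    grads = []
    for img in image_stack:
        gy, gx = np.gradient(np.asarray(img, dtype=float))
        grads.append(np.hypot(gx, gy))       # gradient magnitude per image
    grads = np.stack(grads)                  # (N, H, W)
    best = np.argmax(grads, axis=0)          # index of sharpest image per pixel
    return np.asarray(distances)[best]       # map index -> capture distance
```

In a real system the gradient would typically be pooled over small regions and the maximum interpolated between focus steps, rather than taken per pixel as here.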
Method for performing region-of-interest-based depth detection with aid of pattern-adjustable projector, and associated apparatus
A method for performing region-of-interest (ROI)-based depth detection with aid of a pattern-adjustable projector and associated apparatus are provided. The method includes: utilizing a first camera to capture a first image, wherein the first image includes image contents indicating one or more objects; utilizing an image processing circuit to determine a ROI of the first image according to the image contents of the first image; utilizing the image processing circuit to perform projection region selection to determine a selected projection region corresponding to the ROI among multiple predetermined projection regions, wherein the selected projection region is selected from the multiple predetermined projection regions according to the ROI; utilizing the pattern-adjustable projector to project a predetermined pattern according to the selected projection region, for performing depth detection; utilizing a second camera to capture a second image; and performing the depth detection according to the second image to generate a depth map.
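The projection-region-selection step described above (choosing, among multiple predetermined projection regions, the one corresponding to the ROI) could plausibly be an overlap test between bounding boxes. The sketch below assumes axis-aligned boxes and a maximum-overlap criterion; neither detail is stated in the abstract.

```python
def select_projection_region(roi, regions):
    """Pick the predetermined projection region with the largest area of
    overlap with the ROI. Boxes are (x0, y0, x1, y1) tuples."""
    def overlap(a, b):
        # intersection width and height, clamped at zero when disjoint
        w = max(0, min(a[2], b[2]) - max(a[0], b[0]))
        h = max(0, min(a[3], b[3]) - max(a[1], b[1]))
        return w * h
    return max(regions, key=lambda r: overlap(roi, r))
```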
Position and attitude estimation device, position and attitude estimation method, and storage medium
According to one embodiment, a position and attitude estimation device includes a processor. The processor is configured to acquire time-series images continuously captured by a capture device installed on a mobile object, estimate a first position and attitude of the mobile object based on the acquired time-series images, estimate a distance to a subject included in the acquired time-series images, and correct the estimated first position and attitude to a second position and attitude based on an actual scale, using the estimated distance.
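Monocular visual odometry recovers a trajectory only up to an unknown scale; the correction described above can be read as rescaling the estimated pose by the ratio of a measured subject distance to its estimated counterpart. A minimal sketch of that scale correction, assuming a single reference subject (the patent may use a more elaborate scheme):

```python
def apply_metric_scale(positions, est_depth, measured_depth):
    """Rescale an up-to-scale trajectory to metric units using one subject
    whose estimated depth (est_depth, scale-free) can be matched against
    a measured depth in real-world units."""
    s = measured_depth / est_depth  # global scale factor
    return [(s * x, s * y, s * z) for (x, y, z) in positions]
```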
VARIED DEPTH DETERMINATION USING STEREO VISION AND PHASE DETECTION AUTO FOCUS (PDAF)
Disclosed are systems, methods, and non-transitory computer-readable media for varied depth determination using stereo vision and phase detection auto focus (PDAF). Computer stereo vision (stereo vision) is used to extract three-dimensional information from digital images. To utilize stereo vision, two optical sensors are displaced horizontally from one another and used to capture images depicting two differing views of a real-world environment from two different vantage points. The relative depth of the objects captured in the images is determined using triangulation by comparing the relative positions of the objects in the two images. For example, the relative positions of matching objects (e.g., features) identified in the captured images are used along with the known orientation of the optical sensors (e.g., the distance between the optical sensors and their vantage points) to estimate the depth of the objects.
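The triangulation step in the abstract corresponds to the standard rectified-stereo relation Z = f·B/d, where f is the focal length in pixels, B the horizontal baseline between the sensors, and d the disparity between matched features. A minimal sketch of that relation (the PDAF fusion described in the title is not modeled here):

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth from rectified stereo: Z = f * B / d.
    focal_px:     focal length in pixels
    baseline_m:   horizontal distance between the two optical sensors
    disparity_px: pixel offset of a matched feature between the two views
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px
```

For example, with a 700-pixel focal length, a 10 cm baseline, and a 35-pixel disparity, the feature lies 2 m from the cameras.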
IMAGE DISPLAY METHOD, IMAGE DISPLAY DEVICE AND RECORDING MEDIUM
An image display method includes the following operations (a) to (e). Operation (a) obtains a plurality of two-dimensional images by two-dimensionally imaging a specimen, in which a plurality of objects to be observed are present three-dimensionally, at a plurality of mutually different focus positions. Operation (b) obtains image data representing a three-dimensional shape of the specimen. Operation (c) obtains a three-dimensional image of the specimen based on the image data. Operation (d) obtains, as an integration two-dimensional image, either a two-dimensional image selected from the plurality of two-dimensional images or a two-dimensional image generated, based on the plurality of two-dimensional images, to be focused on the plurality of objects. Operation (e) integrates the integration two-dimensional image obtained in (d) with the three-dimensional image obtained in (c) and displays the integrated image on a display unit.
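The "generated to be focused on the plurality of objects" branch of operation (d) is commonly realized as focus stacking: per pixel, take the value from the stack slice with the highest local sharpness. A sketch under that assumption (the patent does not specify the sharpness measure; a simple gradient sum is used here):

```python
import numpy as np

def all_in_focus(image_stack):
    """Build a single all-in-focus image by choosing, per pixel, the value
    from the focus-stack slice with the highest local gradient energy."""
    stack = np.asarray(image_stack, dtype=float)   # (N, H, W)
    sharp = np.abs(np.gradient(stack, axis=2))     # horizontal gradient
    sharp += np.abs(np.gradient(stack, axis=1))    # plus vertical gradient
    idx = np.argmax(sharp, axis=0)                 # sharpest slice per pixel
    return np.take_along_axis(stack, idx[None], axis=0)[0]
```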
METHOD, MOBILE DEVICE AND CLEANING ROBOT FOR SPECIFYING CLEANING AREAS
A method for specifying a cleaning area to a cleaning robot without a built-in map uses a hand-held mobile device to capture a two-dimensional code label arranged on top of a cleaning robot parked on a charging base, and obtains the positional relationship between the mobile device and the cleaning robot from the captured image. The cleaning robot is controlled to enter a cleaning mode under the guidance of the mobile device. Using captured images, a user can specify an area within the environment for cleaning and, through a touch display screen, control the cleaning robot to go to the specified area for cleaning. A mobile device and a cleaning robot employing the method are also disclosed.
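One ingredient of the positional relationship above is the camera-to-label distance, which a pinhole model gives from the known physical size of the two-dimensional code and its apparent size in pixels. The sketch below shows only that distance estimate; it ignores perspective tilt and the full 6-DoF pose a production system would recover, and the function name is illustrative.

```python
def marker_distance(focal_px, marker_size_m, apparent_size_px):
    """Pinhole estimate of camera-to-marker distance: a code of physical
    side length marker_size_m that appears apparent_size_px pixels wide
    lies at distance f * S / s from the camera."""
    return focal_px * marker_size_m / apparent_size_px
```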