Patent classifications
G06T7/571
METHOD, MOBILE DEVICE AND CLEANING ROBOT FOR SPECIFYING CLEANING AREAS
A method for specifying a cleaning area to a cleaning robot without a built-in map uses a hand-held mobile device to capture a two-dimensional code label arranged on the top of a cleaning robot parked on a charging base, and obtains the positional relationship between the mobile device and the cleaning robot from the captured image. The cleaning robot is controlled to enter a cleaning mode under the guidance of the mobile device. Using the captured images, a user can specify an area within the environment for cleaning, and through a touch display screen can direct the cleaning robot to go to the specified area for cleaning. A mobile device and a cleaning robot employing the method are also disclosed.
IMAGE PROCESSING APPARATUS AND CONTROL METHOD OF IMAGE PROCESSING APPARATUS
An image processing apparatus includes an acquisition unit configured to acquire an image captured by an imaging unit, and image capturing information at the time of image capturing of the image, and a calculation unit configured to calculate an object side pixel dimension of a target subject in the image based on the image capturing information and a pixel dimension of the imaging unit, wherein the acquisition unit acquires in-focus information indicating an in-focus state of a subject in an image, as the image capturing information, and wherein the calculation unit calculates the object side pixel dimension based on the in-focus information.
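As a hedged illustration of the quantity this abstract describes, the object-side pixel dimension can be approximated from the sensor's pixel pitch and the magnification implied by the in-focus distance. The thin-lens model, function name, and all numeric values below are assumptions for the sketch, not taken from the patent.

```python
def object_side_pixel_dimension(pixel_pitch_m: float,
                                focal_length_m: float,
                                object_distance_m: float) -> float:
    """Size on the subject covered by one sensor pixel (thin-lens approximation).

    The magnification m = f / (u - f) follows from the thin-lens equation
    for an object in focus at distance u; one pixel of pitch p then spans
    p / m on the object side. All inputs are illustrative assumptions.
    """
    magnification = focal_length_m / (object_distance_m - focal_length_m)
    return pixel_pitch_m / magnification

# Example: a 4 um pixel, 50 mm lens, subject in focus at 2 m
# -> each pixel covers about 0.156 mm on the subject.
print(object_side_pixel_dimension(4e-6, 0.05, 2.0))
```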
IMAGE PROCESSING DEVICE, IMAGING APPARATUS, IMAGE PROCESSING METHOD, AND RECORDING MEDIUM
An image processing device includes a shape acquisition unit configured to acquire shape information of a subject, a first region detection unit configured to detect a first region generating a shadow of the subject, a second region detection unit configured to detect a second region onto which the shadow is projected, a virtual light source direction setting unit configured to determine a direction of a virtual light source in which the first region projects the shadow onto the second region on the basis of the shape information, the first region, and the second region, and an image generation unit configured to generate an image with the shadow on the basis of the shape information and the determined direction of the virtual light source.
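One hedged way to sketch the virtual-light-source step described above is to pick the light's propagation direction as the vector from the shadow-casting region toward the shadow-receiving region, so that rays through the caster land on the receiver. The centroid-based heuristic, function names, and coordinates are assumptions for illustration, not the patented method.

```python
import numpy as np

def virtual_light_direction(caster_pts, receiver_pts) -> np.ndarray:
    """Unit vector of light propagation chosen so the caster region
    (first region) projects its shadow onto the receiver region
    (second region): rays travel caster -> receiver.

    A simple heuristic: use the vector between the two regions' centroids.
    """
    caster = np.asarray(caster_pts, dtype=float).mean(axis=0)
    receiver = np.asarray(receiver_pts, dtype=float).mean(axis=0)
    d = receiver - caster
    return d / np.linalg.norm(d)

# Example: a caster 1 m above a ground-plane receiver yields light
# travelling straight down.
print(virtual_light_direction([[0.0, 0.0, 1.0]], [[0.0, 0.0, 0.0]]))
```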
Mobile terminal and remote operation method
A mobile terminal to be carried by a user of a vehicle acquires a captured image of the vehicle, acquires distance information on a distance to the vehicle based on the captured image, determines whether the distance to the vehicle is within a predetermined allowable distance based on the distance information, and transmits an operation signal corresponding to an operation content input by a user to the vehicle when the distance to the vehicle is determined to be within the allowable distance.
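The distance-gating logic above can be sketched as follows. A minimal assumption is that distance is estimated from the vehicle's apparent size in the captured image via a pinhole-camera model; the constants, function names, and the size-based estimate itself are illustrative, not taken from the patent.

```python
FOCAL_LENGTH_PX = 1000.0      # camera focal length in pixels (assumed)
VEHICLE_WIDTH_M = 1.8         # known real-world vehicle width (assumed)
ALLOWABLE_DISTANCE_M = 6.0    # predetermined allowable distance (assumed)

def estimate_distance(vehicle_width_px: float) -> float:
    """Pinhole-model estimate: distance = f * real_width / pixel_width."""
    return FOCAL_LENGTH_PX * VEHICLE_WIDTH_M / vehicle_width_px

def maybe_send_operation(vehicle_width_px: float, operation: str) -> bool:
    """Transmit the operation signal only when the vehicle is in range."""
    distance = estimate_distance(vehicle_width_px)
    if distance <= ALLOWABLE_DISTANCE_M:
        print(f"sending '{operation}' (distance {distance:.1f} m)")
        return True
    print(f"blocked '{operation}' (distance {distance:.1f} m)")
    return False
```

A vehicle spanning 400 px would be estimated at 4.5 m away and the operation sent; at 200 px (9.0 m) it would be blocked.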
Image processing apparatus, image processing method, and non-transitory computer-readable storage medium
An image processing apparatus including a division unit configured to divide first image data having a first dynamic range into a plurality of regions, an obtaining unit configured to obtain distance information indicating a distance from a focal plane in each of the plurality of regions, a determining unit configured to determine a conversion characteristic of each of the plurality of regions based on the distance information, a conversion unit configured to convert each of the plurality of regions into second image data having a second dynamic range smaller than the first dynamic range by using the conversion characteristic determined by the determining unit, and a storage unit configured to store a first conversion characteristic and a second conversion characteristic that can be used for the conversion.
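A hedged sketch of the region-wise conversion described above: the image is split into regions, and each region is mapped from a larger dynamic range to a smaller one using one of two stored conversion characteristics, selected by that region's distance from the focal plane. The gamma curves, threshold, and grid layout are illustrative assumptions, not the patented characteristics.

```python
import numpy as np

def compress_region(region: np.ndarray, distance: float,
                    threshold: float = 1.0) -> np.ndarray:
    """Map HDR values in [0, 4) down to an SDR range [0, 1).

    Two stored conversion characteristics (gamma curves); the region's
    distance from the focal plane selects which one is applied.
    """
    gamma = 1.0 / 2.2 if distance < threshold else 1.0 / 3.0
    return (region / 4.0) ** gamma

def convert(image: np.ndarray, distances: np.ndarray,
            grid: int = 2) -> np.ndarray:
    """Divide the image into grid x grid regions and convert each one."""
    out = np.empty_like(image, dtype=np.float64)
    h, w = image.shape
    rh, rw = h // grid, w // grid
    for i in range(grid):
        for j in range(grid):
            ys = slice(i * rh, (i + 1) * rh)
            xs = slice(j * rw, (j + 1) * rw)
            out[ys, xs] = compress_region(image[ys, xs], distances[i, j])
    return out
```

With a uniform input, regions near the focal plane and regions far from it end up with different output values, since a different conversion characteristic is applied to each.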
VARIED DEPTH DETERMINATION USING STEREO VISION AND PHASE DETECTION AUTO FOCUS (PDAF)
Disclosed are systems, methods, and non-transitory computer-readable media for varied depth determination using stereo vision and phase detection auto focus (PDAF). Computer stereo vision (stereo vision) is used to extract three-dimensional information from digital images. To utilize stereo vision, two optical sensors are displaced horizontally from one another and used to capture images depicting two differing views of a real-world environment from two different vantage points. The relative depth of the objects captured in the images is determined using triangulation by comparing the relative positions of the objects in the two images. For example, the relative positions of matching objects (e.g., features) identified in the captured images are used along with the known orientation of the optical sensors (e.g., distance between the optical sensors, vantage points of the optical sensors) to estimate the depth of the objects.
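The triangulation step described above reduces, for a rectified horizontal stereo pair, to the classic relation Z = f * B / d, where f is the focal length in pixels, B the baseline between the two sensors, and d the disparity (horizontal shift of a matched feature between the two images). The function name and numbers below are illustrative assumptions, not taken from the patent.

```python
def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Estimate the depth of a matched feature from its stereo disparity.

    Assumes a rectified pair with horizontally displaced sensors:
    depth Z = f * B / d. Larger disparity means a closer object.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (zero = at infinity)")
    return focal_px * baseline_m / disparity_px

# Example: a feature shifted 50 px between images from sensors 10 cm
# apart, with a 1000 px focal length, lies 2.0 m away.
print(depth_from_disparity(1000.0, 0.10, 50.0))
```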
Imaging device, distance measurement method, distance measurement program, and recording medium
There are provided an imaging device, a distance measurement method, a distance measurement program, and a recording medium capable of accurately measuring a distance to a subject without depending on the color of the subject. The device includes: a bifocal imaging lens having a first region and a second region with different focusing distances; an image sensor having a first pixel and a second pixel that pupil-divide and selectively receive luminous flux incident through the first region of the imaging lens, and a third pixel and a fourth pixel corresponding to the second region; a first image acquisition unit (41-1) and a second image acquisition unit (41-2) that acquire a first image and a second image having asymmetric blurs from a first pixel group (22A) and a third pixel group (22C) of the image sensor; a third image acquisition unit (43-1) and a fourth image acquisition unit (43-2) that add pixel values of adjacent ones of the first and second pixels and of adjacent ones of the third and fourth pixels to acquire a third image and a fourth image having symmetric blurs; and a distance calculation unit (45) that calculates a distance to a subject in the image based on the acquired first and third images or the acquired second and fourth images.