G06V10/16

METHOD OF CONSTRUCTING FRONT PANORAMA OF SHELVING FROM ARBITRARY SERIES OF FRAMES BASED ON SHELVING 3D MODEL

The present invention relates to methods for visual display of images of real shelving with products for analysis of shelving contents. Methods for constructing a shelving front panorama are provided. A method comprises: capturing color image frames of the shelving that show the shelving and its contents; reconstructing a shelving 3D model based on depth data and capturing-device position data for each frame; determining a projection plane corresponding to the shelving front edge; and stitching the color image frames of the shelving by projective transformation of each color image frame onto the projection plane. The resulting shelving front panorama displays the shelving and its contents. The disclosure eliminates the need to maintain a frontal orientation of each shelving image frame when capturing images of the shelving; the capturing can be performed along an arbitrary path.
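
A minimal sketch of the stitching step, assuming OpenCV and NumPy, known per-frame intrinsics K and pose (R, t) relative to a virtual frontal view, and a projection plane n·X = d taken from the reconstructed 3D model; the function names, arguments, and averaging blend are illustrative assumptions, not the patented method:

import cv2
import numpy as np

def plane_induced_homography(K, R, t, n, d):
    # Plane-induced homography H = K (R - t n^T / d) K^-1 mapping a frame
    # onto a virtual frontal view of the plane n.X = d.
    H = K @ (R - np.outer(t, n) / d) @ np.linalg.inv(K)
    return H / H[2, 2]

def stitch_onto_front_plane(frames, poses, K, plane_normal, plane_dist, canvas_size):
    # Warp every color frame onto a common front-plane canvas and average overlaps.
    w, h = canvas_size
    canvas = np.zeros((h, w, 3), np.float32)
    weight = np.zeros((h, w, 1), np.float32)
    for frame, (R, t) in zip(frames, poses):
        H = plane_induced_homography(K, R, t, plane_normal, plane_dist)
        warped = cv2.warpPerspective(frame.astype(np.float32), H, (w, h))
        mask = (warped.sum(axis=2, keepdims=True) > 0).astype(np.float32)
        canvas += warped * mask
        weight += mask
    return (canvas / np.maximum(weight, 1)).astype(np.uint8)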

Multi-sensor analysis of food

In an embodiment, a method for estimating a composition of food includes: receiving a first three-dimensional (3D) image; identifying food in the first 3D image; determining a volume of the identified food based on the first 3D image; and estimating a composition of the identified food using a millimeter-wave radar.
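
A sketch of the volume step only, under the assumption that the food has already been identified and segmented in the depth channel of the 3D image; the millimeter-wave composition estimate is hardware-specific and not reproduced, and all names are illustrative:

import numpy as np

def estimate_food_volume(depth_map, food_mask, plate_depth, pixel_area_m2):
    # depth_map: per-pixel distance to the sensor (m); food_mask: boolean mask
    # of the identified food; plate_depth: reference plane depth (m);
    # pixel_area_m2: approximate area covered by one pixel on the plate.
    height = np.clip(plate_depth - depth_map, 0, None)  # food rises toward the sensor
    return float(np.sum(height[food_mask]) * pixel_area_m2)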

METHOD FOR THE COMPUTER-ASSISTED LEARNING OF AN ARTIFICIAL NEURAL NETWORK FOR DETECTING STRUCTURAL FEATURES OF OBJECTS

A method for the computer-aided training of an artificial neural network (ANN) for recognizing structural features on objects, by means of which identified structural features on objects can be recognized rapidly and reliably. This is achieved by using a convolutional neural network (CNN) having a multiplicity of neurons for the training of an ANN for feature recognition on objects. Said network comprises a multiplicity of convolutional and/or pooling layers for the extraction of information from images of individual objects. In this case, the images of the objects are respectively scaled up and/or down from layer to layer. During the scaling of the images, information about the structural features of the objects is maintained, specifically independently of the scaling of the images.
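
A minimal PyTorch sketch of the kind of network described, a stack of convolutional and pooling layers that rescales the feature maps from layer to layer while extracting structural features; the layer sizes, class count, and use of adaptive pooling are illustrative assumptions, not the claimed architecture:

import torch
import torch.nn as nn

class StructuralFeatureCNN(nn.Module):
    def __init__(self, num_features: int = 10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                 # scale the feature map down by 2
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                 # scale down again
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),         # pooling that is independent of input scale
        )
        self.classifier = nn.Linear(64, num_features)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        x = self.backbone(images)
        return self.classifier(x.flatten(1))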

Apparatus for assisting driving of a host vehicle based on augmented reality and method thereof
11557201 · 2023-01-17

An apparatus for assisting driving of a host vehicle based on augmented reality and a method thereof are provided. The apparatus for assisting driving of a host vehicle based on augmented reality includes an image sensor configured to capture an image of surroundings of the host vehicle, and a controller communicatively connected to the image sensor. The controller is configured to transmit the captured image to a cloud server through a wireless communicator, determine whether information about the captured image is lost due to another vehicle neighboring the host vehicle, receive a panoramic image of a corresponding location from the cloud server through the wireless communicator if the information of the captured image is lost, process the received panoramic image in combination with the image whose information is lost so as to generate an augmented-reality image, and perform autonomous driving according to the augmented-reality image or display the augmented-reality image on a display.
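
A sketch of the compositing step alone, assuming the panoramic crop received from the cloud server has already been aligned to the camera image and the occluded region has been detected; the function and mask names are illustrative:

import numpy as np

def compose_augmented_image(camera_img, panorama_crop, lost_mask):
    # camera_img, panorama_crop: HxWx3 uint8 arrays of the same size;
    # lost_mask: HxW boolean array marking pixels occluded by the neighboring vehicle.
    augmented = camera_img.copy()
    augmented[lost_mask] = panorama_crop[lost_mask]
    return augmented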

Multi-Angle Object Recognition
20230215126 · 2023-07-06

Methods, systems, and apparatus for controlling smart devices are described. In one aspect, a method includes capturing, by a camera on a user device, a plurality of successive images for display in an application environment of an application executing on the user device; performing an object recognition process on the images, the object recognition process including determining that a plurality of images, each depicting a particular object, are required to perform object recognition on the particular object; in response to the determination, generating a user interface element that indicates a camera operation to be performed, the camera operation capturing two or more images; determining that a user, in response to the user interface element, has caused the indicated camera operation to be performed to capture the two or more images; and, in response, determining whether the particular object is positively identified from the plurality of images.
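
A sketch of the final decision step only, assuming a recognizer elsewhere returns per-image confidence scores for the captured views; the aggregation rule and thresholds are illustrative assumptions, not the claimed logic:

import numpy as np

def positively_identified(per_image_scores, label, score_threshold=0.8, min_views=2):
    # per_image_scores: list of dicts mapping label -> confidence, one per captured image.
    scores = np.array([s.get(label, 0.0) for s in per_image_scores])
    # Declare a positive identification only if enough views agree with high confidence.
    return int((scores >= score_threshold).sum()) >= min_views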

ELECTRONIC DEVICE INCLUDING CAMERA AND METHOD FOR GENERATING VIDEO RECORDING OF A MOVING OBJECT
20230215018 · 2023-07-06

An electronic device includes a display, a memory, and a processor. The memory includes instructions for causing the processor to, when generating a video, receive a video signal including an object to obtain a plurality of frame images having a first size, estimate a movement trajectory of the object included in the plurality of frame images, stitch together frame images in which overlapping portions of the video signal are arranged to overlap according to the movement trajectory of the object to generate a stitched image having a second size larger than the first size, and store the generated stitched image and the plurality of frame images as a video file.
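
A sketch of the stitching step, assuming the per-frame canvas offsets have already been derived from the estimated movement trajectory; the names and the simple overwrite handling of overlaps are illustrative assumptions:

import numpy as np

def stitch_along_trajectory(frames, offsets, canvas_shape):
    # frames: list of HxWx3 uint8 arrays of the first size; offsets: list of
    # (row, col) placements on the canvas; canvas_shape: (H2, W2) of the larger image.
    canvas = np.zeros((*canvas_shape, 3), np.uint8)
    for frame, (r, c) in zip(frames, offsets):
        h, w = frame.shape[:2]
        canvas[r:r + h, c:c + w] = frame   # later frames overwrite the overlapping portion
    return canvas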

Image stitching device and image stitching method

An image stitching method includes: receiving a first image and a second image; determining that both the first image and the second image include a target object; obtaining a first brightness value and a second brightness value, the first brightness value being a brightness value of the target object in the first image, and the second brightness value being a brightness value of the target object in the second image; adjusting a brightness value of the first image and a brightness value of the second image according to the first brightness value and the second brightness value, so as to obtain a first image to be stitched and a second image to be stitched; and stitching the first image to be stitched and the second image to be stitched to obtain a first stitched image.
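
A sketch of the brightness-matching and stitching steps, assuming masks of the target object in both images are available; the gain rule and the side-by-side concatenation standing in for the actual stitching are illustrative assumptions:

import numpy as np

def match_brightness_and_stitch(img1, img2, obj_mask1, obj_mask2):
    b1 = img1[obj_mask1].mean()                 # first brightness value (target object in image 1)
    b2 = img2[obj_mask2].mean()                 # second brightness value (target object in image 2)
    target = (b1 + b2) / 2.0
    adj1 = np.clip(img1.astype(np.float32) * (target / b1), 0, 255).astype(np.uint8)
    adj2 = np.clip(img2.astype(np.float32) * (target / b2), 0, 255).astype(np.uint8)
    return np.hstack([adj1, adj2])              # first stitched image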

IMAGE CROPPING USING DEPTH INFORMATION

A device is configured to capture a first image of an item on a platform using a camera and to determine a first number of pixels in the first image that correspond with the item. The device is further configured to capture a first depth image of the item on the platform using a three-dimensional (3D) sensor and to determine a second number of pixels within the first depth image that correspond with the item. The device is further configured to determine that the difference between the first number of pixels in the first image and the second number of pixels in the first depth image is less than a difference threshold value, to extract the plurality of pixels corresponding with the item in the first image from the first image to generate a second image, and to output the second image.
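
A sketch of the consistency check and crop, assuming item masks have already been obtained from the color image and the depth image; the threshold value and the bounding-box crop are illustrative assumptions:

import numpy as np

def crop_item_with_depth_check(color_img, color_mask, depth_mask, diff_threshold=500):
    first_count = int(color_mask.sum())          # pixels of the item in the first image
    second_count = int(depth_mask.sum())         # pixels of the item in the first depth image
    if abs(first_count - second_count) >= diff_threshold:
        return None                              # counts disagree; do not generate the second image
    rows, cols = np.nonzero(color_mask)
    return color_img[rows.min():rows.max() + 1, cols.min():cols.max() + 1].copy()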

3D object sensing system
11538183 · 2022-12-27

A 3D object sensing system includes an object positioning unit, an object sensing unit, and an evaluation unit. The object positioning unit has a rotatable platform and a platform position sensing unit. The object sensing unit includes two individual sensing systems, each of which has a sensing area. A positioning unit defines a positional relation of the individual sensing systems to one another. The two individual sensing systems sense object data of object points of the 3D object and provide the object data to the evaluation unit. The evaluation unit includes respective evaluation modules for each of the at least two individual sensing systems, an overall evaluation module, and a generation module.
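
A sketch of the overall evaluation step, assuming each individual sensing system yields object points in its own frame and the positioning unit supplies the rigid transform between the two; the names are illustrative:

import numpy as np

def merge_sensor_points(points_a, points_b, R_ab, t_ab):
    # points_a, points_b: Nx3 arrays of object points in each sensor's frame;
    # (R_ab, t_ab): rigid transform taking sensor-B coordinates into sensor-A coordinates,
    # as defined by the positioning unit.
    points_b_in_a = points_b @ R_ab.T + t_ab
    return np.vstack([points_a, points_b_in_a])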

ABSOLUTE GEOSPATIAL ACCURACY MODEL FOR REMOTE SENSING WITHOUT SURVEYED CONTROL POINTS

Estimating absolute geospatial accuracy in input images without the use of surveyed control points is disclosed. For example, the absolute geospatial accuracy of satellite images may be estimated without the use of ground control points (GCPs). The absolute geospatial accuracy of the input images may be estimated based on a statistical measure of relative accuracies between pairs of overlapping images. The estimation of the absolute geospatial accuracy may include determining a root mean square error of the relative accuracies between pairs of overlapping images. For example, the absolute geospatial accuracy of the input images may be estimated by determining a root mean square error of the shears of respective pairs of overlapping images. The estimated absolute geospatial accuracy may be used to curate GCPs, evaluate a digital elevation map, generate a heatmap, or determine whether to adjust the images until a target absolute geospatial accuracy is met.
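
A sketch of the statistical step, assuming per-pair shear magnitudes between overlapping images have already been measured; the function name and units are illustrative:

import numpy as np

def absolute_accuracy_estimate(pair_shears_m):
    # pair_shears_m: iterable of relative-accuracy (shear) values in meters,
    # one per pair of overlapping images.
    shears = np.asarray(list(pair_shears_m), dtype=float)
    return float(np.sqrt(np.mean(shears ** 2)))   # root mean square error over all pairs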