Patent classifications
G06V10/16
FUSION AND ASSOCIATION OF TRAFFIC OBJECTS IN DRIVING ENVIRONMENT
A method is provided. The method includes: obtaining first environmental information and second environmental information, where the first environmental information and the second environmental information are acquired by different sensors; determining, based on the first environmental information, information about a first lane of a first traffic object in the first environmental information; determining, based on the second environmental information, information about a second lane of a second traffic object in the second environmental information; and determining whether the first traffic object and the second traffic object have an association relationship.
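As a rough illustration of the association step described in this abstract, the sketch below pairs objects reported by two different sensors when their lane assignments agree and their along-lane positions are close. The data model (TrackedObject, lane_id, longitudinal_m) and the distance threshold are assumptions for illustration, not details taken from the abstract.

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    object_id: int
    lane_id: int           # lane the object is assigned to
    longitudinal_m: float  # position along the lane, in metres

def associate(first_objects, second_objects, max_gap_m=2.0):
    """Return (first_id, second_id) pairs likely referring to the same object."""
    pairs = []
    for a in first_objects:
        for b in second_objects:
            same_lane = a.lane_id == b.lane_id
            close = abs(a.longitudinal_m - b.longitudinal_m) <= max_gap_m
            if same_lane and close:
                pairs.append((a.object_id, b.object_id))
    return pairs

# Example: one object seen by a camera and one by a radar, same lane, ~1 m apart.
camera_objects = [TrackedObject(1, lane_id=2, longitudinal_m=35.0)]
radar_objects = [TrackedObject(7, lane_id=2, longitudinal_m=34.2)]
print(associate(camera_objects, radar_objects))  # [(1, 7)]
```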
Systems and methods for matching audio and image information
Systems and methods for processing audio signals are disclosed. In one implementation, a system may comprise a wearable camera configured to capture a plurality of images from an environment of a user; at least one microphone configured to capture sounds from the environment of the user; and a processor. The processor may be configured to receive at least one image of the plurality of images, the at least one image comprising a plurality of image portions associated with corresponding image portion timestamps; receive at least one audio signal representative of the sounds captured by the at least one microphone; identify an audio timestamp associated with a portion of the audio signal; identify an image portion from among the plurality of image portions, the image portion having an image portion timestamp associated with the audio timestamp; and analyze the image portion to identify a voice originating from an object represented in the image.
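A minimal sketch of the timestamp-matching step: pick the image portion whose timestamp is closest to the timestamp of the audio segment. The tuple representation and the example values are assumptions for illustration, not the disclosed data structures.

```python
def find_matching_portion(image_portions, audio_timestamp):
    """image_portions: list of (portion_id, timestamp_s); return the closest one."""
    return min(image_portions, key=lambda p: abs(p[1] - audio_timestamp))

# Hypothetical portions of one image, each with its own capture timestamp.
portions = [("face_a", 12.40), ("face_b", 12.55), ("background", 12.90)]
portion_id, _ = find_matching_portion(portions, audio_timestamp=12.52)
print(portion_id)  # "face_b" -- this portion would then be analyzed for the speaker
```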
METHOD OF PROCESSING IMAGE, ELECTRONIC DEVICE, AND STORAGE MEDIUM
A method of processing an image, an electronic device, and a storage medium, which relate to the field of artificial intelligence, and in particular to the fields of computer vision and intelligent transportation technologies. The method includes: determining at least one key frame image in a scene image sequence captured by a target camera; determining a camera pose parameter associated with each key frame image in the at least one key frame image, according to a geographic feature associated with the key frame image; and projecting each scene image in the scene image sequence, according to the camera pose parameter associated with the key frame image, to obtain a target projection image, so as to generate a scene map based on the target projection image. The geographic feature associated with any key frame image indicates localization information of the target camera at the time instant of capturing that key frame image.
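One piece of such a pipeline that lends itself to a small example is key frame selection from per-frame localization. The sketch below keeps a frame as a key frame whenever the camera has moved a minimum distance since the previous key frame; the 2D localization format and the 5 m threshold are assumptions, and the pose estimation, projection, and map-generation steps are not shown.

```python
import math

def select_key_frames(frames, min_move_m=5.0):
    """frames: list of (frame_id, (x_m, y_m)) localization per captured frame."""
    keys, last = [], None
    for frame_id, (x, y) in frames:
        if last is None or math.hypot(x - last[0], y - last[1]) >= min_move_m:
            keys.append(frame_id)  # camera moved far enough: new key frame
            last = (x, y)
    return keys

frames = [(0, (0.0, 0.0)), (1, (2.0, 0.0)), (2, (6.0, 0.5)), (3, (11.0, 1.0))]
print(select_key_frames(frames))  # [0, 2, 3]
```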
DEVICE AND METHOD FOR ACQUIRING DEPTH OF SPACE BY USING CAMERA
A device and method of obtaining a depth of a space are provided. The method includes obtaining a plurality of images by photographing a periphery of a camera a plurality of times while sequentially rotating the camera by a preset angle, identifying a first feature region in a first image and an n-th feature region in an n-th image, the n-th feature region being identical with the first feature region, by comparing adjacent images between the first image and the n-th image from among the plurality of images, obtaining a base line value with respect to the first image and the n-th image, obtaining a disparity value between the first feature region and the n-th feature region, and determining a depth of the first feature region or the n-th feature region based on at least the base line value and the disparity value.
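The depth computation described here rests on the standard stereo relation depth = focal length (in pixels) × baseline / disparity, assuming a pinhole camera model; the function and numbers below are a generic worked example rather than the patent's exact formulation.

```python
def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Standard pinhole-stereo relation: depth = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Example: 800 px focal length, 0.12 m baseline between the two views,
# 16 px disparity for the matched feature region -> depth of 6 m.
print(depth_from_disparity(800.0, 0.12, 16.0))  # 6.0
```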
Systems and Methods for Image Based Perception
Systems and methods for image-based perception. The methods comprise: obtaining, by a computing device, images captured by a plurality of cameras with overlapping fields of view; generating, by the computing device, spatial feature maps indicating locations of features in the images; defining, by the computing device, predicted cuboids at each location of an object in the images based on the spatial feature maps; and assigning, by the computing device, at least two cuboids of said predicted cuboids to a given object when predictions from images captured by separate cameras of the plurality of cameras should be associated with a same detected object.
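A hedged sketch of the cuboid-to-object assignment: predictions from two overlapping cameras are greedily paired when their centroids in a shared world frame nearly coincide. The centroid-only representation and the 1 m threshold are illustrative assumptions; the abstract's spatial feature maps are not modeled here.

```python
def group_cuboids(cuboids_cam_a, cuboids_cam_b, max_centroid_dist_m=1.0):
    """Each cuboid is the (x, y, z) of its centroid in a shared world frame."""
    groups, used_b = [], set()
    for a in cuboids_cam_a:
        best, best_d = None, max_centroid_dist_m
        for j, b in enumerate(cuboids_cam_b):
            if j in used_b:
                continue
            d = sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
            if d <= best_d:
                best, best_d = j, d
        if best is not None:
            used_b.add(best)
            groups.append((a, cuboids_cam_b[best]))  # two views of one object
        else:
            groups.append((a, None))                 # seen by camera A only
    return groups

cam_a = [(10.0, 2.0, 0.0)]
cam_b = [(10.4, 2.1, 0.0)]
print(group_cuboids(cam_a, cam_b))  # one group containing both predictions
```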
CLUSTER BASED PHOTO NAVIGATION
The technology relates to navigating imagery that is organized into clusters based on common patterns exhibited when imagery is captured. For example, a set of captured images which satisfy a predetermined pattern may be determined. The images in the set of captured images may be grouped into one or more clusters according to the predetermined pattern. A request to display a first cluster of the one or more clusters may be received and, in response, a first captured image from the requested first cluster may be selected. The selected first captured image may then be displayed.
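As one concrete (assumed) example of grouping images by a capture pattern, the sketch below clusters consecutive shots taken within a short time of each other; the timestamp metadata and the 10-second gap are illustrative, not from the abstract.

```python
def cluster_by_capture_time(images, max_gap_s=10.0):
    """images: list of (image_id, capture_time_s), assumed sorted by time."""
    clusters, current, last_t = [], [], None
    for image_id, t in images:
        if last_t is not None and t - last_t > max_gap_s:
            clusters.append(current)  # long pause: start a new cluster
            current = []
        current.append(image_id)
        last_t = t
    if current:
        clusters.append(current)
    return clusters

shots = [("a", 0.0), ("b", 3.0), ("c", 5.0), ("d", 60.0), ("e", 62.0)]
print(cluster_by_capture_time(shots))  # [['a', 'b', 'c'], ['d', 'e']]
```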
Cloud-based framework for processing, analyzing, and visualizing imaging data
Embodiments of the present disclosure provide methods, apparatus, systems, computing devices, computing entities, and/or the like for detecting objects located in an area of interest. In accordance with one embodiment, a method is provided comprising: receiving, via an interface provided through a general instance on a cloud environment, imaging data comprising raw images collected on the area of interest; upon receiving the images: activating a central processing unit (CPU) focused instance on the cloud environment and processing, via the CPU focused instance, the raw images to generate an image map of the area of interest; and after generating the image map: activating a graphical processing unit (GPU) focused instance on the cloud environment and performing object detection, via the GPU focused instance, on a region within the image map by applying one or more object detection algorithms to the region to identify locations of the objects in the region.
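A minimal orchestration sketch of the two-stage flow described above, with stub stages standing in for the CPU-focused and GPU-focused instances; the class and method names are hypothetical, not the disclosed cloud APIs.

```python
class StubInstance:
    """Stand-in for a cloud compute instance; real provisioning is not shown."""
    def __init__(self, kind):
        self.kind = kind
    def stitch(self, raw_images):
        return {"tiles": len(raw_images)}  # placeholder for the generated image map
    def detect_objects(self, image_map):
        return [{"region": i, "objects": []} for i in range(image_map["tiles"])]

def run_pipeline(raw_images):
    cpu = StubInstance("cpu-focused")      # stage 1: CPU-focused instance builds the image map
    image_map = cpu.stitch(raw_images)
    gpu = StubInstance("gpu-focused")      # stage 2: GPU-focused instance runs detection per region
    return gpu.detect_objects(image_map)

print(run_pipeline(["img1.tif", "img2.tif"]))
```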
User feedback for real-time checking and improving quality of scanned image
A smartphone may be freely moved in three dimensions as it captures a stream of images of an object. Multiple image frames may be captured at different orientations and distances from the object and combined into a composite image representing an image of the object. The image frames may be formed into the composite image based on representing features of each image frame as a set of points in a three-dimensional point cloud. Inconsistencies between the image frames may be adjusted when projecting respective points in the point cloud into the composite image. Quality of the image frames may be improved by processing the image frames to correct errors. Distracting features, such as the finger of a user holding the object being scanned, can be replaced with background content. As the scan progresses, a direction for capturing subsequent image frames is provided to the user as real-time feedback.
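One small, easily illustrated piece of the real-time feedback is suggesting where to move the camera next by comparing the area already covered by captured frames with the object's estimated bounds. The bounding-box representation below is an assumption for illustration, not the disclosed feedback mechanism.

```python
def next_capture_direction(covered, target):
    """covered/target: (xmin, ymin, xmax, ymax) in the same document coordinates."""
    if covered[2] < target[2]:
        return "move right"
    if covered[0] > target[0]:
        return "move left"
    if covered[3] < target[3]:
        return "move down"
    if covered[1] > target[1]:
        return "move up"
    return "scan complete"

# Covered region so far vs. the estimated extent of the scanned document.
print(next_capture_direction((0, 0, 60, 100), (0, 0, 100, 100)))  # move right
```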
IMAGE DISPLAY METHOD, IMAGE DISPLAY DEVICE AND RECORDING MEDIUM
An image display method includes the following operations (a) to (e). Operation (a) obtains a plurality of two-dimensional images by two-dimensionally imaging a specimen, in which a plurality of objects to be observed are present three-dimensionally, at a plurality of mutually different focus positions. Operation (b) obtains image data representing a three-dimensional shape of the specimen. Operation (c) obtains a three-dimensional image of the specimen based on the image data. Operation (d) obtains, as an integration two-dimensional image, either a two-dimensional image selected from the plurality of two-dimensional images or a two-dimensional image generated, based on the plurality of two-dimensional images, to be focused on the plurality of objects. Operation (e) integrates the integration two-dimensional image obtained in (d) with the three-dimensional image obtained in (c) and displays the integrated image on a display unit.
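The image "generated to be focused on the plurality of objects" in operation (d) can be illustrated with a conventional focus-stacking step: for each pixel, keep the value from the focus position with the highest local sharpness. The grayscale stack and the Laplacian-based sharpness measure below are assumptions, not the patent's exact procedure.

```python
import numpy as np

def integrate_focus_stack(stack):
    """stack: array of shape (num_focus_positions, height, width), grayscale."""
    # Approximate per-pixel sharpness with the absolute Laplacian of each slice.
    lap = np.abs(np.gradient(np.gradient(stack, axis=1), axis=1)
                 + np.gradient(np.gradient(stack, axis=2), axis=2))
    best = np.argmax(lap, axis=0)            # sharpest focus position per pixel
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]           # all-in-focus two-dimensional image

stack = np.random.rand(5, 64, 64)            # 5 focus positions, 64x64 pixels
print(integrate_focus_stack(stack).shape)    # (64, 64)
```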
WATER AREA OBJECT DETECTION SYSTEM AND MARINE VESSEL
A water area object detection system includes a first imager to image an object around a hull, a second imager provided on the hull such that an imaging direction of the second imager is the same or substantially the same as an imaging direction of the first imager and operable to image the object around the hull, and a controller configured or programmed to perform a control to create a water area map around the hull based on images captured by the first imager and the second imager. The second imager is spaced apart in an upward-downward direction of the hull from the first imager, and the first imager is spaced apart in the imaging direction from the second imager so as not to overlap the second imager in the upward-downward direction perpendicular to the imaging direction.
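As a rough illustration of the map-creation step, the sketch below marks detected objects on a simple grid covering the water area around the hull, given ranges and bearings estimated elsewhere (for example from the vertical-baseline disparity between the two imagers); the grid representation, cell size, and input format are assumptions for illustration.

```python
import math

def build_water_map(detections, cell_m=5.0, extent_m=100.0):
    """detections: list of (range_m, bearing_deg) relative to the hull."""
    n = int(2 * extent_m / cell_m)
    grid = [[0] * n for _ in range(n)]
    for rng, bearing in detections:
        x = rng * math.sin(math.radians(bearing))   # starboard offset
        y = rng * math.cos(math.radians(bearing))   # forward offset
        col = int((x + extent_m) / cell_m)
        row = int((extent_m - y) / cell_m)
        if 0 <= row < n and 0 <= col < n:
            grid[row][col] = 1                      # occupied cell: object present
    return grid

water_map = build_water_map([(40.0, 10.0), (75.0, -30.0)])
print(sum(map(sum, water_map)))  # 2 occupied cells
```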