G06T2207/20088

Apparatus, method, and program with verification of detected position information using additional physical characteristic points

Provided is a position detection unit configured to detect position information of a first imaging device and a second imaging device on the basis of characteristic points that correspond between a first characteristic point, detected as a physical characteristic point of a subject imaged by the first imaging device, and a second characteristic point, detected as a physical characteristic point of the same subject imaged by the second imaging device. The present technology can be applied to an information processing apparatus that specifies the positions of a plurality of imaging devices.
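A minimal sketch of the correspondence step the abstract relies on (an illustration, not the patent's implementation): characteristic points detected by two imaging devices are paired by nearest-descriptor search, and the resulting pairs are what a position-detection unit would consume (e.g. to estimate relative pose). Descriptors and values below are hypothetical.

```python
# Hypothetical sketch: pair "characteristic points" from two imaging devices
# by greedy nearest-neighbour search over their descriptor vectors.

def match_characteristic_points(desc_a, desc_b):
    """desc_a, desc_b: lists of equal-length descriptor vectors.
    Returns (index_in_a, index_in_b) pairs, one per descriptor in desc_a."""
    def dist(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v)) ** 0.5

    matches = []
    for i, da in enumerate(desc_a):
        j = min(range(len(desc_b)), key=lambda k: dist(da, desc_b[k]))
        matches.append((i, j))
    return matches

# Example: two devices observe the same three physical points.
cam1 = [[0.0, 1.0], [1.0, 0.0], [0.5, 0.5]]
cam2 = [[0.9, 0.1], [0.4, 0.6], [0.1, 0.9]]  # same points, reordered/noisy
print(match_characteristic_points(cam1, cam2))  # → [(0, 2), (1, 0), (2, 1)]
```

In practice the matched pairs would feed a pose solver (e.g. an essential-matrix estimate) to recover the devices' relative positions.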

REAL-TIME ANOMALY DETECTION FOR INDUSTRIAL PROCESSES

In one embodiment, a device comprises interface circuitry and processing circuitry. The processing circuitry receives, via the interface circuitry, a video stream captured by a camera during performance of an industrial process, wherein the video stream comprises a sequence of frames; detects, based on analyzing the sequence of frames, a degree of particle scatter that occurs during performance of the industrial process; and determines, based on the degree of particle scatter, that an anomaly occurs during performance of the industrial process.
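One way to read "degree of particle scatter," sketched under stated assumptions: compare consecutive frames, take the fraction of pixels whose intensity changed beyond a threshold as the scatter degree, and flag an anomaly when it exceeds a limit. The metric and both thresholds are illustrative, not the patent's.

```python
# Hypothetical scatter metric: fraction of pixels changed between frames.

def scatter_degree(prev_frame, frame, diff_thresh=30):
    """Fraction of pixels whose intensity changed by more than diff_thresh."""
    changed = 0
    total = 0
    for row_p, row_c in zip(prev_frame, frame):
        for p, c in zip(row_p, row_c):
            total += 1
            if abs(p - c) > diff_thresh:
                changed += 1
    return changed / total

def is_anomalous(frames, scatter_limit=0.25):
    """True if any consecutive frame pair shows excessive scatter."""
    return any(
        scatter_degree(a, b) > scatter_limit
        for a, b in zip(frames, frames[1:])
    )

quiet = [[[10, 10], [10, 10]], [[12, 11], [10, 9]]]     # small drift only
burst = [[[10, 10], [10, 10]], [[200, 10], [180, 10]]]  # sudden scatter
print(is_anomalous(quiet), is_anomalous(burst))  # → False True
```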

Method and device for recognising distance in real time

A device for recognizing distance in real time includes first, second, and third cameras, the third camera being arranged nearer the first camera than the second camera. The first, second, and third cameras simultaneously acquire first, second, and third images, respectively, and an electronic circuit of the device estimates the distance of an object as a function of a stereoscopic correspondence established between first and second elements representative of the object. The first and second elements belong to the first and second images, respectively. The stereoscopic correspondence is established by a relationship between the first elements and corresponding third elements belonging to the third image.
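For context, the standard relation that turns an established stereoscopic correspondence into a distance (the third, nearer camera in the abstract helps establish the correspondence; it does not change this formula): with disparity d between matched elements, focal length f in pixels, and baseline B, depth is z = f·B/d. Values below are illustrative.

```python
# Standard pinhole stereo depth: z = f * B / d.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth (metres) from focal length (px), baseline (m), disparity (px)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

print(depth_from_disparity(700.0, 0.12, 42.0))  # → 2.0 metres
```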

SYSTEMS AND METHODS FOR PRODUCT IDENTIFICATION USING IMAGE ANALYSIS AND TRAINED NEURAL NETWORK
20210248178 · 2021-08-12

Disclosed are methods, systems, and non-transitory computer-readable medium for analysis of images including wearable items. For example, a method may include obtaining a first set of images, each of the first set of images depicting a product; obtaining a first set of labels associated with the first set of images; training an image segmentation neural network based on the first set of images and the first set of labels; obtaining a second set of images, each of the second set of images depicting a known product; obtaining a second set of labels associated with the second set of images; training an image classification neural network based on the second set of images and the second set of labels; receiving a query image depicting a product that is not yet identified; and performing image segmentation of the query image and identifying the product in the image by performing image analysis.
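A toy stand-in for the two-stage pipeline (segment, then classify), under loud assumptions: real use would involve the trained neural networks described, whereas here simple thresholding plays the segmentation role and a nearest-centroid rule plays the classifier. All names and numbers are illustrative.

```python
# Hypothetical two-stage pipeline: segmentation followed by classification.

def segment(image, thresh=0.5):
    """Foreground mask standing in for the segmentation network's output."""
    return [[1 if v > thresh else 0 for v in row] for row in image]

def classify(mask, centroids):
    """Assign the label whose centroid is closest to the mask's fill ratio,
    standing in for the classification network."""
    fill = sum(map(sum, mask)) / (len(mask) * len(mask[0]))
    return min(centroids, key=lambda label: abs(centroids[label] - fill))

query = [[0.9, 0.8, 0.1], [0.7, 0.9, 0.2], [0.1, 0.1, 0.1]]  # query image
known = {"sneaker": 0.45, "watch": 0.9}  # "learned" fill-ratio centroids
print(classify(segment(query), known))  # → sneaker
```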

MULTI-VIEW POSITIONING USING REFLECTIONS
20210304435 · 2021-09-30

A device determines the positioning of objects in a scene by implementing a robust and deterministic method. The device obtains object detection data (ODD) which identifies the objects and locations of reference points of the objects in views of the scene. The obtained ODD is processed to identify a first image object of a first view as a mirror reflection of a real object. A virtual view associated with a virtual camera position is created, including the ODD associated with the first image object of the first view. The ODD associated with the first image object is removed from the first view. Based on the ODD associated with at least said virtual view and a further view of the one or more views, a position of said first image object is computed.
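The geometric core of the virtual-camera idea can be sketched as follows (an illustration under assumed plane parameters, not the patent's full method): a mirror plane through point o with unit normal n maps a real camera position p to the virtual position p' = p − 2((p − o)·n)·n, and observations of a mirrored image object can then be treated as direct observations from that virtual camera.

```python
# Reflect a camera position across a mirror plane to get the virtual camera.

def reflect_across_plane(p, o, n):
    """Reflect point p across the plane through o with unit normal n."""
    d = sum((pi - oi) * ni for pi, oi, ni in zip(p, o, n))  # signed distance
    return tuple(pi - 2 * d * ni for pi, ni in zip(p, n))

# Example: the mirror is the plane x = 1, normal pointing along +x.
camera = (3.0, 2.0, 0.5)
virtual = reflect_across_plane(camera, o=(1.0, 0.0, 0.0), n=(1.0, 0.0, 0.0))
print(virtual)  # → (-1.0, 2.0, 0.5)
```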

Object tracking by an unmanned aerial vehicle using visual sensors

Systems and methods are disclosed for tracking objects in a physical environment using visual sensors onboard an autonomous unmanned aerial vehicle (UAV). In certain embodiments, images of the physical environment captured by the onboard visual sensors are processed to extract semantic information about detected objects. Processing of the captured images may involve applying machine learning techniques such as a deep convolutional neural network to extract semantic cues regarding objects detected in the images. The object tracking can be utilized, for example, to facilitate autonomous navigation by the UAV or to generate and display augmentative information regarding tracked objects to users.
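A minimal sketch of the association step such tracking implies (hypothetical, far simpler than the described system): per-frame detections, e.g. CNN-produced object centroids, are matched to the nearest existing track, with unmatched detections opening new tracks. The distance gate is an assumed parameter.

```python
# Hypothetical nearest-centroid track association across frames.

def update_tracks(tracks, detections, max_dist=50.0):
    """tracks: {track_id: (x, y)}. Associates each detection with the nearest
    track within max_dist, else starts a new track. Returns updated tracks."""
    next_id = max(tracks, default=-1) + 1
    for det in detections:
        best = None
        for tid, pos in tracks.items():
            d = ((det[0] - pos[0]) ** 2 + (det[1] - pos[1]) ** 2) ** 0.5
            if d <= max_dist and (best is None or d < best[1]):
                best = (tid, d)
        if best is not None:
            tracks[best[0]] = det     # re-associate existing track
        else:
            tracks[next_id] = det     # open a new track
            next_id += 1
    return tracks

tracks = update_tracks({}, [(10, 10), (200, 200)])      # two new tracks
tracks = update_tracks(tracks, [(14, 12), (205, 198)])  # both re-associated
print(tracks)  # → {0: (14, 12), 1: (205, 198)}
```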

Streaming-based deep learning models for computer vision tasks
10891525 · 2021-01-12

A method and system for processing images by a neural network are provided. A first computer can sequentially transfer at least two versions of an image having increasingly greater resolution to a second computer. The second computer performs an image identification process on each of the sequentially transferred at least two versions until the image is identified.
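The claimed flow can be sketched as an early-exit loop (an illustration under stated assumptions; `fake_classifier` is a stand-in for the second computer's model): image versions arrive in order of increasing resolution, and identification stops as soon as the model's confidence clears a threshold, so higher-resolution transfers can be skipped.

```python
# Hypothetical early-exit loop over increasingly high-resolution versions.

def identify_streaming(versions, classify, confidence_thresh=0.9):
    """Run classify on each (resolution, image) version in order; stop once
    confidence is sufficient. Returns (label, resolution_used)."""
    result = None
    for resolution, image in versions:
        label, confidence = classify(image)
        result = (label, resolution)
        if confidence >= confidence_thresh:
            break
    return result

def fake_classifier(image):
    # Toy stand-in: confidence grows with the amount of data available.
    return ("cat", min(1.0, len(image) / 64.0))

versions = [(16, "x" * 16), (32, "x" * 32), (64, "x" * 64)]
print(identify_streaming(versions, fake_classifier))  # → ('cat', 64)
```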

MAPPING AND TRACKING SYSTEM WITH FEATURES IN THREE-DIMENSIONAL SPACE
20200404245 · 2020-12-24

LK-SURF, Robust Kalman Filter, HAR-SLAM, and Landmark Promotion SLAM methods are disclosed. LK-SURF is an image processing technique that combines Lucas-Kanade feature tracking with Speeded-Up Robust Features to perform spatial and temporal tracking using stereo images, producing 3D features that can be tracked and identified. The Robust Kalman Filter is an extension of the Kalman Filter algorithm that improves the ability to remove erroneous observations using Principal Component Analysis and the X84 outlier rejection rule. Hierarchical Active Ripple SLAM (HAR-SLAM) is a new SLAM architecture that breaks the traditional state space of SLAM into a chain of smaller state spaces, allowing multiple tracked objects, multiple sensors, and multiple updates to occur in linear time with linear storage with respect to the number of tracked objects, landmarks, and estimated object locations. In Landmark Promotion SLAM, only reliable mapped landmarks are promoted through various layers of SLAM to generate larger maps.
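The X84 rejection rule mentioned above, sketched on scalar residuals: an observation is rejected when it lies more than k median absolute deviations (MADs) from the median; k = 5.2 is the constant commonly associated with X84 (roughly 3.5 standard deviations for Gaussian data), though the patent's exact use may differ.

```python
# X84 outlier rejection: keep values within k MADs of the median.

def x84_inliers(values, k=5.2):
    """Return the values within k * MAD of the median."""
    s = sorted(values)
    n = len(s)
    median = s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])
    devs = sorted(abs(v - median) for v in values)
    mad = devs[n // 2] if n % 2 else 0.5 * (devs[n // 2 - 1] + devs[n // 2])
    return [v for v in values if abs(v - median) <= k * mad]

# One grossly erroneous observation (42.0) is rejected.
print(x84_inliers([1.0, 1.1, 0.9, 1.05, 0.95, 42.0]))
# → [1.0, 1.1, 0.9, 1.05, 0.95]
```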

METHOD FOR MEASURING OBJECTS IN DIGESTIVE TRACT BASED ON IMAGING SYSTEM
20200342596 · 2020-10-29

A method for measuring objects in a digestive tract based on an imaging system is provided. The imaging system captures a detection image in the measurement stage. The depth distance z_i from a target point P to a board is calculated, and a correction factor is obtained according to the predicted brightness g^1(z_i) of a reference point P in the detection image. The depth image z(x, y), from the actual position of each pixel to the board, is calibrated by the correction factor. The scale r of each pixel is calculated according to the depth image z(x, y). The actual two-dimensional coordinates S_i of each pixel are calculated using the scale r, and the actual three-dimensional coordinates (S_i, z(x, y)) of each pixel are obtained. The distance between any two pixels in the detection image, or the area within any range, can then be calculated.
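A pinhole-model reading of the scale step (an assumption about the abstract's notation, not the patent's calibration procedure): with calibrated depth z(x, y) and focal length f in pixels, each pixel spans r = z / f in world units, so actual coordinates are the pixel offsets from the principal point times r, from which distances follow directly.

```python
# Hypothetical pinhole mapping from pixel + depth to actual 3-D coordinates.

def pixel_to_actual(px, py, depth, focal_px, cx, cy):
    """Map pixel (px, py) at the given depth to actual (X, Y, Z).
    (cx, cy) is the principal point; focal_px is the focal length in pixels."""
    r = depth / focal_px  # world units per pixel at this depth (the scale r)
    return ((px - cx) * r, (py - cy) * r, depth)

def distance_3d(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

p1 = pixel_to_actual(320, 240, 0.05, 500.0, 320, 240)  # at principal point
p2 = pixel_to_actual(420, 240, 0.05, 500.0, 320, 240)  # 100 px to the right
print(distance_3d(p1, p2))  # ≈ 0.01, i.e. 1 cm apart at 5 cm depth
```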