Patent classifications
G06T2207/20088
Multi-view positioning using reflections
A device determines the positioning of objects in a scene by implementing a robust and deterministic method. The device obtains object detection data (ODD) which identifies the objects and locations of reference points of the objects in views of the scene. The obtained ODD is processed to identify a first image object of a first view as a mirror reflection of a real object. A virtual view associated with a virtual camera position is created, including the ODD associated with the first image object of the first view. The ODD associated with the first image object is removed from the first view. Based on the ODD associated with at least said virtual view and a further view of the one or more views, a position of said first image object is computed.
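The geometric idea behind the virtual view is that a mirror reflection looks as if it were observed by a virtual camera obtained by mirroring the real camera across the mirror plane. A minimal sketch of that reflection step (pure Python; the function name and plane parameterisation are my own, not from the abstract):

```python
def reflect_across_plane(point, normal, d):
    """Reflect a 3-D point across the plane n . x + d = 0.

    Placing a virtual camera: mirroring the real camera position across
    the mirror plane yields the viewpoint from which the reflected image
    object appears as a direct observation, so standard multi-view
    triangulation can then be applied.
    """
    nn = sum(c * c for c in normal)                             # |n|^2
    dist = (sum(p * c for p, c in zip(point, normal)) + d) / nn # signed distance factor
    return tuple(p - 2.0 * dist * c for p, c in zip(point, normal))

# Real camera at (0, 0, 5); the mirror is the plane z = 0, i.e. n = (0, 0, 1), d = 0.
virtual_cam = reflect_across_plane((0.0, 0.0, 5.0), (0.0, 0.0, 1.0), 0.0)
# The virtual camera sits symmetrically behind the mirror at (0, 0, -5).
```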
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM
Provided is a position detection unit configured to detect position information of a first imaging device and a second imaging device on the basis of corresponding characteristic points, namely a first characteristic point detected as a physical characteristic point of a subject imaged by the first imaging device, and a second characteristic point detected as a physical characteristic point of the same subject imaged by the second imaging device. The present technology can be applied to an information processing apparatus that specifies positions of a plurality of imaging devices.
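The abstract does not name an algorithm, but recovering relative device positions from corresponding feature points is classically done by rigid alignment. A 2-D analogue (Kabsch/Procrustes-style; all names are illustrative assumptions) that recovers the rotation and translation between two matched point sets:

```python
import math

def rigid_align_2d(src, dst):
    """Recover the rotation angle and translation mapping src onto dst.

    A 2-D stand-in for estimating relative pose from corresponding
    characteristic points detected in two imaging devices' views.
    """
    n = len(src)
    csx = sum(p[0] for p in src) / n
    csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n
    cdy = sum(p[1] for p in dst) / n
    s_dot = s_cross = 0.0
    for (x, y), (u, v) in zip(src, dst):
        ax, ay = x - csx, y - csy          # centred source point
        bx, by = u - cdx, v - cdy          # centred destination point
        s_dot += ax * bx + ay * by         # accumulated dot products
        s_cross += ax * by - ay * bx       # accumulated 2-D cross products
    theta = math.atan2(s_cross, s_dot)     # optimal rotation angle
    # Translation mapping the rotated source centroid onto dst's centroid.
    tx = cdx - (csx * math.cos(theta) - csy * math.sin(theta))
    ty = cdy - (csx * math.sin(theta) + csy * math.cos(theta))
    return theta, (tx, ty)

src = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)]
dst = [(0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]   # src rotated by +90 degrees
theta, t = rigid_align_2d(src, dst)             # theta comes out near pi/2
```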
Object Tracking By An Unmanned Aerial Vehicle Using Visual Sensors
Systems and methods are disclosed for tracking objects in a physical environment using visual sensors onboard an autonomous unmanned aerial vehicle (UAV). In certain embodiments, images of the physical environment captured by the onboard visual sensors are processed to extract semantic information about detected objects. Processing of the captured images may involve applying machine learning techniques such as a deep convolutional neural network to extract semantic cues regarding objects detected in the images. The object tracking can be utilized, for example, to facilitate autonomous navigation by the UAV or to generate and display augmentative information regarding tracked objects to users.
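The tracking stage implied by the abstract must associate detections across frames. A minimal greedy association sketch by bounding-box overlap (the IoU metric and threshold are common practice, not taken from the abstract; the real system additionally fuses the CNN's semantic cues):

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def associate(tracks, detections, min_iou=0.3):
    """Greedy frame-to-frame association: match each existing track to
    the best-overlapping unused detection above the IoU threshold."""
    matches, used = {}, set()
    for tid, tbox in tracks.items():
        best, best_iou = None, min_iou
        for i, dbox in enumerate(detections):
            overlap = iou(tbox, dbox)
            if i not in used and overlap > best_iou:
                best, best_iou = i, overlap
        if best is not None:
            matches[tid] = best
            used.add(best)
    return matches

tracks = {1: (0, 0, 10, 10)}
detections = [(50, 50, 60, 60), (1, 1, 11, 11)]
m = associate(tracks, detections)   # track 1 matches detection index 1
```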
IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND NON-TRANSITORY COMPUTER READABLE STORAGE MEDIUM
An image processing apparatus according to the present application includes a reception unit and a specification unit. The reception unit receives image data produced through image capturing by a predetermined image capturing apparatus and including an elliptical figure. The specification unit performs projection transform of the image data so that the elliptical figure included in the image data received by the reception unit appears to be an exact circle, and specifies, based on characteristic information on the exact circle obtained through the projection transform, the exact circle to be a marker used in predetermined processing on the image data.
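The projection-transform step can be illustrated for the affine case: undoing the ellipse's translation, rotation, and axis scaling maps the candidate marker onto an exact (unit) circle. A sketch under that affine assumption (a full perspective distortion would need a homography; names are my own):

```python
import math

def ellipse_normalizer(cx, cy, a, b, phi):
    """Return a map sending points on a rotated ellipse (centre (cx, cy),
    semi-axes a, b, tilt phi) onto the unit circle, so circle-based
    characteristic information can be checked on the normalized shape."""
    cos_p, sin_p = math.cos(phi), math.sin(phi)
    def to_circle(x, y):
        dx, dy = x - cx, y - cy            # remove the centre offset
        u = cos_p * dx + sin_p * dy        # rotate by -phi
        v = -sin_p * dx + cos_p * dy
        return u / a, v / b                # rescale each axis
    return to_circle

# A point on an ellipse centred at (2, 3), semi-axes 4 and 2, tilted 30 degrees.
phi = math.radians(30.0)
t = math.radians(70.0)
x = 2.0 + 4.0 * math.cos(t) * math.cos(phi) - 2.0 * math.sin(t) * math.sin(phi)
y = 3.0 + 4.0 * math.cos(t) * math.sin(phi) + 2.0 * math.sin(t) * math.cos(phi)
u, v = ellipse_normalizer(2.0, 3.0, 4.0, 2.0, phi)(x, y)
radius = math.hypot(u, v)   # the normalized point lies on the unit circle
```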
IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD AND NON-TRANSITORY COMPUTER READABLE MEDIUM
An object is to provide an image processing apparatus capable of appropriately detecting changes of a target object. An image processing apparatus may include: object-driven feature extractor means to extract relevant features of a target object from input images; feature merger means to merge the features extracted from the input images into a merged feature; change classifier means to predict a probability of each change class based on the merged feature; object classifier means to predict a probability of each object class based on the extracted features of each image; multi-loss calculator means to calculate a combined loss from a change classification loss and an object classification loss; and parameter updater means to update the parameters of the object-driven feature extractor means.
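The multi-loss calculator combines a change-classification loss with an object-classification loss. A minimal sketch of that combination using cross-entropy (the weighting scheme and the mean over images are illustrative assumptions; the abstract only states that the two losses are combined):

```python
import math

def cross_entropy(probs, label):
    """Negative log-likelihood of the true class under predicted probabilities."""
    return -math.log(probs[label])

def combined_loss(change_probs, change_label, obj_probs_per_image,
                  obj_labels, weight=0.5):
    """Weighted sum of the change-classification loss and the mean
    object-classification loss over the input images."""
    l_change = cross_entropy(change_probs, change_label)
    l_object = sum(cross_entropy(p, y)
                   for p, y in zip(obj_probs_per_image, obj_labels)
                   ) / len(obj_labels)
    return weight * l_change + (1.0 - weight) * l_object

loss = combined_loss(
    change_probs=[0.7, 0.3], change_label=0,        # "changed" vs "unchanged"
    obj_probs_per_image=[[0.9, 0.1], [0.8, 0.2]],   # per-image object-class probs
    obj_labels=[0, 0],
)
```

In training, `loss` would be backpropagated so the parameter updater can adjust the feature extractor; here it is just evaluated numerically.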
ELECTRICAL EQUIPMENT MANAGEMENT
Techniques are described for electrical equipment management. Embodiments include a method for electrical equipment refit management, comprising: receiving a selection of an order of an existing electrical equipment system to disassemble and, in response, retrieving and providing a plurality of disassembly instructions; and performing an iterative disassembly process to manage the disassembly of the existing electrical equipment system. A selection of an order of a refit electrical equipment system to assemble is received and, in response, a plurality of refit instructions is retrieved and provided. An iterative assembly process is performed to manage the assembly of the refit electrical equipment system. Digital artifacts of the refit electrical equipment system are recorded during performance of the plurality of refit instructions as part of the iterative assembly process. A final assembly report for the refit electrical equipment system is generated and stored together with a unique identifier corresponding to the assembled refit electrical equipment system.
SYSTEMS AND METHODS FOR PRODUCT IDENTIFICATION USING IMAGE ANALYSIS AND TRAINED NEURAL NETWORK
Disclosed are methods, systems, and non-transitory computer-readable medium for analysis of images including wearable items. For example, a method may include obtaining a first set of images, each of the first set of images depicting a product; obtaining a first set of labels associated with the first set of images; training an image segmentation neural network based on the first set of images and the first set of labels; obtaining a second set of images, each of the second set of images depicting a known product; obtaining a second set of labels associated with the second set of images; training an image classification neural network based on the second set of images and the second set of labels; receiving a query image depicting a product that is not yet identified; and performing image segmentation of the query image and identifying the product in the image by performing image analysis.
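The claimed flow is a two-stage pipeline: segment the wearable item in the query image, then classify the segmented region. A structural sketch with a mask-to-bounding-box helper (the helper names and the crop-then-classify design are assumptions; the trained networks are represented by a stand-in callable):

```python
def mask_bounding_box(mask):
    """Bounding box (top, left, bottom, right), inclusive, of the
    nonzero region of a binary segmentation mask."""
    rows = [r for r, row in enumerate(mask) if any(row)]
    cols = [c for c in range(len(mask[0]))
            if any(row[c] for row in mask)]
    return rows[0], cols[0], rows[-1], cols[-1]

def identify_product(image, mask, classifier):
    """Two-stage pipeline: crop the segmented item, then hand the crop
    to the classification model (`classifier` is a stand-in)."""
    top, left, bottom, right = mask_bounding_box(mask)
    crop = [row[left:right + 1] for row in image[top:bottom + 1]]
    return classifier(crop)

mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 0]]
bbox = mask_bounding_box(mask)   # -> (1, 1, 2, 2)
```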
METHOD AND DEVICE FOR NEURAL NETWORK-BASED OPTICAL COHERENCE TOMOGRAPHY (OCT) IMAGE LESION DETECTION, AND MEDIUM
A method and device for neural network-based optical coherence tomography (OCT) image lesion detection, and a medium are provided. The method includes the following. An OCT image is obtained. The OCT image is inputted into a lesion-detection network model. A position, a category score, and a positive score of each lesion box in the OCT image are outputted through the lesion-detection network model. A lesion detection result of the OCT image is obtained according to the position, the category score, and the positive score of each lesion box. The lesion-detection network model includes a category detection branch configured to obtain, for each of a plurality of anchor boxes, a position and a category score of the anchor box, and a lesion positive score regression branch configured to obtain, for each of the anchor boxes, a positive score indicating whether the anchor box belongs to a lesion, reflecting the severity of the lesion.
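The final detection result combines each anchor box's category score with its lesion-positive score. A sketch of one plausible fusion (the product rule and the threshold are illustrative assumptions; the abstract only states that both scores feed the result):

```python
def fuse_detections(boxes, threshold=0.25):
    """Combine each anchor box's category score with its lesion-positive
    score and keep confident detections.

    `boxes` is a list of (position, category_score, positive_score),
    where position is an (x1, y1, x2, y2) tuple.
    """
    results = []
    for position, category_score, positive_score in boxes:
        final = category_score * positive_score   # joint confidence
        if final >= threshold:
            results.append((position, final))
    return results

detections = fuse_detections([
    ((10, 20, 40, 60), 0.9, 0.8),   # strong lesion candidate: kept
    ((5, 5, 15, 15), 0.6, 0.2),     # low positive score: discarded
])
```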