Patent classifications
G06V10/759
Aircraft door camera system for engine inlet monitoring
A camera with a field of view toward an external environment of an aircraft is disposed within an aircraft door such that an engine inlet of an engine of the aircraft is within the field of view of the camera. A display device is disposed within an interior of the aircraft. A processor is operatively coupled to the camera and to the display device. The processor analyzes image data captured by the camera to determine whether persons or foreign objects are present near the engine inlet, or to detect damage or deformation of the engine inlet.
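The abstract leaves the image analysis unspecified; the sketch below covers only the person-detection part of that step, assuming OpenCV's stock HOG pedestrian detector and a hypothetical pixel rectangle marking where the engine inlet appears in the door camera's frame.

```python
# Minimal sketch, not the patent's method: flag a frame when a pedestrian
# detection overlaps an assumed inlet region of the door camera image.
import cv2

INLET_ROI = (800, 300, 1400, 900)   # assumed (x1, y1, x2, y2) of the inlet in camera pixels

def overlaps(a, b):
    """Axis-aligned overlap between two (x1, y1, x2, y2) rectangles."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def person_near_inlet(frame):
    """True if any pedestrian detection overlaps the inlet region of the frame."""
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    rects, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    return any(overlaps((x, y, x + w, y + h), INLET_ROI) for (x, y, w, h) in rects)

frame = cv2.imread("door_camera_frame.jpg")   # hypothetical saved frame from the door camera
print(person_near_inlet(frame))
```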
Method of identifying a target subject, apparatus and non-transitory computer readable medium
In one aspect, a method includes receiving an image of a target subject; determining a direction in response to the receipt of the image, the direction being one in which the target subject was likely to move during a time period in the past or is likely to move during a time period in the future; determining a target area within which another image of the target subject can be expected to appear based on the determined direction; and determining if a portion of a subsequent image is outside the determined target area to identify if the subsequent image is one relating to the target subject, wherein the subsequent image is one taken during the time period in the past or during the time period in the future.
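A minimal sketch of the target-area test described above, assuming a linear motion model and a circular target area; the function names, speed and direction inputs, and radius are illustrative, not taken from the patent.

```python
# Predict where the target subject should appear from its estimated direction
# of motion, then test whether a later detection falls inside that area.
import math

def target_area(last_pos, direction_rad, speed_px_s, dt_s, radius_px=80.0):
    """Circle (cx, cy, r) where the subject is expected after dt_s seconds."""
    cx = last_pos[0] + speed_px_s * dt_s * math.cos(direction_rad)
    cy = last_pos[1] + speed_px_s * dt_s * math.sin(direction_rad)
    return cx, cy, radius_px

def is_same_subject(detection_xy, area):
    """A detection outside the predicted target area is treated as not the target."""
    cx, cy, r = area
    return math.hypot(detection_xy[0] - cx, detection_xy[1] - cy) <= r

area = target_area(last_pos=(320, 240), direction_rad=0.0, speed_px_s=50, dt_s=2.0)
print(is_same_subject((430, 250), area))   # True: inside the predicted area
```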
USER INTERFACE TO SELECT FIELD OF VIEW OF A CAMERA IN A SMART GLASS
A wearable device for use in immersive reality applications is provided. The wearable device includes a frame with eyepieces to provide a forward-image to a user, a first forward-looking camera mounted on the frame and having a field of view, a processor configured to identify a region of interest within the forward-image, and an interface device to indicate to the user that the field of view of the first forward-looking camera is misaligned with the region of interest. Methods of use of the device, and a memory storing instructions and a processor to execute the instructions to cause the device to perform the methods of use, are also provided.
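One way the misalignment indication could be driven, assuming the field of view and the region of interest are both rectangles in the same forward-image coordinates; the containment test and the directional hint are illustrative assumptions.

```python
# Rough sketch: report misalignment and a coarse correction hint when the
# region of interest does not sit fully inside the camera's field of view.
def contains(fov, roi):
    """True if the ROI rectangle (x1, y1, x2, y2) lies fully inside the FOV."""
    return fov[0] <= roi[0] and fov[1] <= roi[1] and roi[2] <= fov[2] and roi[3] <= fov[3]

def alignment_hint(fov, roi):
    """User-facing hint; a real device might use haptics or a display overlay."""
    if contains(fov, roi):
        return "aligned"
    dx = (roi[0] + roi[2]) / 2 - (fov[0] + fov[2]) / 2   # ROI centre offset, image x
    dy = (roi[1] + roi[3]) / 2 - (fov[1] + fov[3]) / 2   # ROI centre offset, image y (down)
    return f"turn {'right' if dx > 0 else 'left'}, {'down' if dy > 0 else 'up'}"

print(alignment_hint(fov=(200, 100, 1000, 700), roi=(1050, 300, 1250, 500)))
```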
Methods and systems for automated cross-browser user interface testing
Methods and apparatuses are described for automated cross-browser user interface testing. A computing device captures (i) a first image file corresponding to a first current user interface view of a web application on a first testing platform and (ii) a second image file corresponding to a second current user interface view of the web application on a second testing platform. The computing device prepares the image files, and compares the prepared image files using a structural similarity index measure. The computing device determines that the prepared first image file and the prepared second image file represent a common user interface view when the structural similarity index measure is within a predetermined range. The computing device highlights corresponding regions that visually diverge from each other in each of the prepared image files and transmits a notification message comprising the highlighted image files.
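A sketch of the comparison pipeline using scikit-image's structural_similarity; the resize "preparation" step, the 0.95 acceptance floor, the 0.6 per-pixel divergence threshold, and the red highlight are assumed values, not taken from the patent.

```python
# Compare two UI screenshots with SSIM and paint diverging regions red.
import numpy as np
from skimage import io, color, transform
from skimage.metrics import structural_similarity

def load_gray(path):
    return color.rgb2gray(io.imread(path)[..., :3])      # assumes RGB(A) screenshots

def compare_views(path_a, path_b, ssim_floor=0.95):
    a = load_gray(path_a)
    b = transform.resize(load_gray(path_b), a.shape, anti_aliasing=True)  # common size
    score, diff_map = structural_similarity(a, b, full=True, data_range=1.0)
    same_view = score >= ssim_floor                        # within the accepted range
    divergent = diff_map < 0.6                             # pixels that visually diverge
    highlighted = np.dstack([a, a, a])
    highlighted[divergent] = [1.0, 0.0, 0.0]               # mark divergent regions in red
    return same_view, score, highlighted
```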
USER INTERFACE TO SELECT FIELD OF VIEW OF A CAMERA IN A SMART GLASS
A wearable device for use in immersive reality applications is provided. The wearable device has a frame including an eyepiece to provide a forward-image to a user, a first forward-looking camera mounted on the frame and having a field of view within the forward-image, a sensor configured to receive a command from the user indicating a region of interest within the field of view, and an interface device to indicate to the user that the field of view of the first forward-looking camera is aligned with the region of interest. Methods of use of the device, and a memory storing instructions and a processor to execute the instructions to cause the device to perform the methods of use, are also provided.
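This variant adds a user command that designates the region of interest. A small sketch, assuming the command arrives as a gaze point in forward-image coordinates and the region of interest is a fixed-size square around it; both assumptions are illustrative.

```python
# Turn a commanded gaze point into an ROI and confirm it sits inside the
# camera's field of view before signalling alignment to the user.
def roi_from_gaze(gaze_xy, half_size=100):
    """Square ROI centred on the commanded gaze point."""
    x, y = gaze_xy
    return (x - half_size, y - half_size, x + half_size, y + half_size)

def confirm_alignment(camera_fov, gaze_xy):
    roi = roi_from_gaze(gaze_xy)
    inside = (camera_fov[0] <= roi[0] and camera_fov[1] <= roi[1]
              and roi[2] <= camera_fov[2] and roi[3] <= camera_fov[3])
    return "ROI aligned with camera field of view" if inside else "ROI outside field of view"

print(confirm_alignment(camera_fov=(0, 0, 1280, 720), gaze_xy=(640, 360)))
```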
System and method of visual attribute recognition
A system and method of automatic product attribute recognition receive training images having bounding boxes associated with one or more products in the training images, receive attribute values for each of the one or more products in the training images, and train a first convolutional neural network (CNN) model to generate bounding boxes for, and identify, each of the one or more products in the training images until the accuracy of the first CNN model is above a first predetermined threshold. The system and method further train a second CNN model, for each of the products associated with images cropped using the generated bounding boxes, until the second CNN model generates attribute values for the one or more attributes with an accuracy above a second predetermined threshold, and automatically recognize the one or more attributes for a new product image by presenting the product image to the first and second CNN models.
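A rough sketch of the two-stage pipeline, with generic torchvision components standing in for the patent's first (detection) and second (attribute) CNN models; the untrained resnet18 attribute head, the eight attribute classes, and the 224-pixel crop size are illustrative assumptions.

```python
# Stage 1: detect product boxes. Stage 2: classify an attribute per cropped box.
import torch
from torchvision.models import resnet18
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor, resized_crop
from PIL import Image

detector = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()   # stand-in for the first CNN model
attribute_head = resnet18(num_classes=8).eval()                # stand-in for the second CNN model (untrained)

def recognize_attributes(image_path, score_thresh=0.7):
    image = to_tensor(Image.open(image_path).convert("RGB"))
    results = []
    with torch.no_grad():
        det = detector([image])[0]
        for box, score in zip(det["boxes"], det["scores"]):
            if score < score_thresh:
                continue
            x1, y1, x2, y2 = box.int().tolist()
            crop = resized_crop(image, y1, x1, y2 - y1, x2 - x1, [224, 224])  # crop via the generated box
            logits = attribute_head(crop.unsqueeze(0))                        # attribute values for this crop
            results.append((box.tolist(), logits.argmax(1).item()))
    return results
```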
SYSTEMS AND METHODS FOR INTERPRETABLE CLASSIFICATION OF IMAGES USING INHERENTLY EXPLAINABLE NEURAL NETWORKS
An artificial intelligence-based image processing system comprises a processor that executes instructions stored on a memory to classify an input image with a prototypical part neural network including a backbone subnetwork, a prototype subnetwork, and a readout subnetwork to produce an interpretable classification of the input image including one or a combination of a classification result of the input image and an interpretation of the classification result. The backbone subnetwork is trained with machine learning to process the input image with an incomplete sequence of active convolutional layers producing feature embeddings representing features extracted from pixels of different regions of the input image. The prototype subnetwork is trained to compare the feature embeddings with prototypical feature embeddings to produce results of comparison and the readout subnetwork is configured to analyze the results of comparison to produce the interpretable classification of the input image.
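A compact sketch in the spirit of the backbone/prototype/readout split described above (ProtoPNet-style); the layer sizes, prototype count, and the use of negated distance as similarity are illustrative assumptions, not the patent's architecture.

```python
# Backbone produces per-region embeddings, the prototype layer compares them
# with learned prototypes, and a linear readout maps the evidence to classes.
import torch
from torch import nn

class PrototypicalPartClassifier(nn.Module):
    def __init__(self, num_prototypes=10, num_classes=5, channels=64):
        super().__init__()
        self.backbone = nn.Sequential(                                   # backbone subnetwork
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU())
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, channels))  # prototype subnetwork
        self.readout = nn.Linear(num_prototypes, num_classes, bias=False)      # readout subnetwork

    def forward(self, x):
        feats = self.backbone(x)                          # (B, C, H, W) feature embeddings
        flat = feats.flatten(2).transpose(1, 2)           # (B, H*W, C): one embedding per image region
        dists = torch.cdist(flat, self.prototypes[None])  # distance of every region to every prototype
        similarity = (-dists).amax(dim=1)                 # best-matching region per prototype
        logits = self.readout(similarity)                 # classification result
        return logits, similarity                         # similarity scores support the interpretation

model = PrototypicalPartClassifier()
logits, evidence = model(torch.randn(1, 3, 64, 64))
```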
Tile-based digital image correspondence
A computing device may obtain a first captured image of a scene and a second captured image of the scene. For a plurality of m×n pixel tiles of the first captured image, the computing device may determine respective distance matrices. The distance matrices may represent respective fit confidences between the m×n pixel tiles and pluralities of target p×q pixel tiles in the second captured image. The computing device may approximate the distance matrices with respective bivariate surfaces. The computing device may upsample the bivariate surfaces to obtain respective offsets for pixels in the plurality of m×n pixel tiles. The respective offsets, when applied to pixels in the plurality of m×n pixel tiles, may map parts of the first captured image to their estimated locations in the second captured image.
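A hedged sketch of the per-tile step, assuming an SSD distance matrix over a small search window and a separable quadratic refinement standing in for the bivariate-surface fit; the tile handling, search radius, and epsilon guard are illustrative choices.

```python
# Build a distance matrix for one tile over a displacement window, then refine
# the best displacement to sub-pixel precision with a quadratic fit.
import numpy as np

def tile_distance_matrix(tile, target, top, left, radius=4):
    """SSD between an m x n tile and equally sized windows of the target image."""
    m, n = tile.shape
    d = np.empty((2 * radius + 1, 2 * radius + 1))
    for i, dy in enumerate(range(-radius, radius + 1)):
        for j, dx in enumerate(range(-radius, radius + 1)):
            win = target[top + dy: top + dy + m, left + dx: left + dx + n]
            d[i, j] = np.sum((tile - win) ** 2)
    return d

def subpixel_offset(d):
    """Quadratic refinement around the integer minimum of the distance matrix."""
    i, j = np.unravel_index(np.argmin(d), d.shape)
    if 0 < i < d.shape[0] - 1 and 0 < j < d.shape[1] - 1:
        dy = 0.5 * (d[i - 1, j] - d[i + 1, j]) / (d[i - 1, j] - 2 * d[i, j] + d[i + 1, j] + 1e-12)
        dx = 0.5 * (d[i, j - 1] - d[i, j + 1]) / (d[i, j - 1] - 2 * d[i, j] + d[i, j + 1] + 1e-12)
    else:
        dy = dx = 0.0
    radius = d.shape[0] // 2
    return (i - radius + dy, j - radius + dx)   # refined (row, col) offset for the tile
```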
IMAGE RECOGNITION APPARATUS, COMMODITY INFORMATION PROCESSING APPARATUS AND IMAGE RECOGNITION METHOD
An image recognition apparatus includes an acquisition unit and a controller. The acquisition unit acquires an image, captured by photography, of a pattern indicative of an object. The controller is configured to specify a pattern area in a first image acquired by the acquisition unit, recognize the pattern contained in the specified pattern area, acquire a second image from the acquisition unit, determine whether the disposition of the object in the first image coincides with the disposition of the object in the second image, and, if the dispositions are determined to be non-coincident, specify a pattern area in the second image and recognize the pattern contained in that pattern area.
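A simplified sketch of the coincidence test, assuming the object's disposition in each image is summarized by a bounding box and compared by IoU; the 0.9 threshold and the callback for re-specifying the pattern area are assumptions.

```python
# Re-detect the pattern area in the second image only when the object's
# disposition no longer coincides with its disposition in the first image.
def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def pattern_area_for_second_image(first_object_box, second_object_box,
                                  first_pattern_area, specify_pattern_area,
                                  second_image, coincide_iou=0.9):
    """Reuse the first pattern area if dispositions coincide, else re-specify it."""
    if iou(first_object_box, second_object_box) >= coincide_iou:
        return first_pattern_area                 # dispositions coincide: keep earlier result
    return specify_pattern_area(second_image)     # non-coincident: new pattern area from image 2
```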
ARTICLE RECOGNITION APPARATUS AND IMAGE PROCESSING METHOD FOR ARTICLE RECOGNITION APPARATUS
According to one embodiment, an article recognition apparatus includes an image acquisition unit, a recognition unit, a region detection unit, a storage unit, and a determination unit. The recognition unit recognizes each of the articles in an image acquired by the image acquisition unit. The region detection unit determines article region information. The storage unit stores article information including a reference value for the article region information. The determination unit determines that an unrecognized article exists if the reference values for the article region information of the articles recognized by the recognition unit do not match the detected article region information.
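A guessed concretisation of that mismatch test, assuming the article region information is a pixel area and the stored reference values are per-article areas; the item names, numbers, and tolerance below are invented for illustration.

```python
# Flag an unrecognized article when the recognized articles' stored reference
# areas cannot account for the detected article region.
REFERENCE_AREA_PX = {"cup_noodle": 5200, "canned_coffee": 3100}   # assumed stored reference values

def unrecognized_article_exists(recognized_ids, detected_region_area_px, tolerance=0.1):
    """True if the recognized articles do not match the detected region information."""
    expected = sum(REFERENCE_AREA_PX[i] for i in recognized_ids)
    return abs(detected_region_area_px - expected) > tolerance * max(expected, 1)

print(unrecognized_article_exists(["cup_noodle"], detected_region_area_px=8300))  # True
```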