Patent classifications
G06V10/809
Automated Usability Assessment Of Buildings Using Visual Data Of Captured In-Room Images
Techniques are described for automated operations that analyze visual data from images captured in rooms of a building, optionally together with additional captured data about the rooms, to assess room layout and other usability information for the building's rooms and optionally for the overall building, and that subsequently use the assessed usability information in one or more further automated manners, such as to improve navigation of the building. The automated operations may include identifying one or more objects in each of the rooms to assess; evaluating one or more target attributes of each object; assessing the usability of each object from its target attributes' evaluations, and of each room from its objects' assessments and other room information, with respect to an indicated purpose; and combining the assessments of multiple rooms in the building with other building information to assess the usability of the building with respect to its indicated purpose.
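The hierarchical roll-up described above can be sketched as follows. This is a hypothetical illustration, not the patented method: the abstract does not specify how scores combine, so a simple averaging of object scores into room scores and room scores into a building score is assumed.

```python
# Hypothetical sketch: per-object usability scores (0..1) roll up into
# per-room scores, which roll up into a building score. The averaging
# rule is an illustrative assumption, not from the patent.

def assess_room(object_scores):
    """A room's usability as the mean of its objects' usability scores."""
    return sum(object_scores) / len(object_scores)

def assess_building(rooms):
    """rooms: list of per-room lists of object usability scores.
    The building's usability is the mean of its rooms' scores."""
    room_scores = [assess_room(objs) for objs in rooms]
    return sum(room_scores) / len(room_scores)
```

In practice each level could also weight scores by room purpose or other room and building information mentioned in the abstract.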
Method and device for selecting a fingerprint image in a sequence of images for authentication or identification
A method for selecting an image of a fingerprint for identifying an individual is described. The method includes: acquiring a current image comprising a fingerprint and segmenting said fingerprint; determining a value representing the stability of said current image; determining a value representing the sharpness of said current image; determining a score, said score being a combination of said stability value, said sharpness value and the number of segmented fingerprints; and selecting said current image for identifying said individual in the case where said score is higher than a first threshold value, otherwise storing said current image in memory as the best image in the case where its score is higher than a best score value, and repeating this method.
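The selection loop above can be sketched as follows. Field names, weights, and the threshold are illustrative assumptions; the abstract says only that the score combines stability, sharpness, and the number of segmented fingerprints, without fixing the combination.

```python
# Hedged sketch of the "select or keep best and repeat" loop; the weighted
# sum below is one plausible combination, not the patent's actual formula.

def score(frame):
    """Combine stability, sharpness, and the number of segmented prints."""
    return (0.4 * frame["stability"]
            + 0.4 * frame["sharpness"]
            + 0.2 * min(frame["n_prints"], 4) / 4)

def select_fingerprint_image(frames, first_threshold=0.8):
    """Return the first frame whose score exceeds the threshold; otherwise
    remember the best-scoring frame seen and fall back to it at the end."""
    best, best_score = None, float("-inf")
    for frame in frames:
        s = score(frame)
        if s > first_threshold:
            return frame                  # good enough for identification
        if s > best_score:
            best, best_score = frame, s   # store as "best image" and continue
    return best
```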
Image recognition method for offline and online synchronous operation
An image recognition method for offline and online synchronous operation is disclosed. When a mobile device focuses on a recognition target, image frames of the recognition target are captured and sent to a recognition server. The mobile device executes an offline recognition operation while the recognition server synchronously executes an online recognition operation. A plurality of matching data is stored on the mobile device, and a plurality of recognition data is stored on the recognition server, wherein the recognition data is larger than the matching data. When the mobile device generates a recognition result first, that result is displayed for the user's reference. If the mobile device receives a recognition result from the recognition server before completing its own recognition, the result returned from the recognition server is displayed on the display monitor instead.
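The display policy above ("first result wins, server result overrides") can be simulated as follows. The event representation is a hypothetical simplification for illustration.

```python
# Sketch of the display policy: whichever result arrives first is shown,
# and a later server result (backed by the larger database) replaces it.

def displayed_results(events):
    """events: time-ordered (source, result) pairs, source in
    {'local', 'server'}. Returns the sequence of results shown."""
    shown = []
    server_seen = False
    for source, result in events:
        if source == "server":
            shown.append(result)      # server result always gets displayed
            server_seen = True
        elif not server_seen:
            shown.append(result)      # local result shown only until the
                                      # server has answered
    return shown
```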
Method of detecting object in image and image processing device
At least one example embodiment discloses a method of detecting an object in an image. The method includes receiving an image, generating first images for performing a first classification operation based on the received image, reviewing first-image features of the first images using a first feature extraction method with first-type features, first classifying at least some of the first images as second images, the classified first images having first-image features matching the first-type features, reviewing second-image features of the second images using a second feature extraction method with second-type features, second classifying at least some of the second images as third images, the classified second images having second-image features matching the second-type features and detecting an object in the received image based on results of the first and second classifying.
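The two-stage filtering above is a classifier cascade: a first feature test narrows the candidate set, and a second, more selective test re-examines the survivors. A minimal sketch, with the matchers abstracted as predicates (the actual feature extraction methods are not specified here):

```python
# Minimal cascade sketch: stage 1 keeps first images whose features match
# the first-type features (these become "second images"); stage 2 keeps
# second images matching the second-type features ("third images").

def cascade_detect(first_images, first_match, second_match):
    """Return the candidates that pass both classification stages."""
    second_images = [c for c in first_images if first_match(c)]
    third_images = [c for c in second_images if second_match(c)]
    return third_images   # detections are based on both stages' results
```

A cheap first stage followed by a more expensive second stage is the usual reason for this structure: most candidates are rejected before the costly test runs.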
LEVERAGING ON LOCAL AND GLOBAL TEXTURES OF BRAIN TISSUES FOR ROBUST AUTOMATIC BRAIN TUMOR DETECTION
A method for performing cellular classification includes generating a plurality of local dense Scale Invariant Feature Transform (SIFT) features based on a set of input images and converting the plurality of local dense SIFT features into a multi-dimensional code using a feature coding process. A first classification component is used to generate first output confidence values based on the multi-dimensional code and a plurality of global Local Binary Pattern Histogram (LBP-H) features are generated based on the set of input images. A second classification component is used to generate second output confidence values based on the plurality of LBP-H features and the first output confidence values and the second output confidence values are merged. Each of the set of input images may then be classified as one of a plurality of cell types using the merged output confidence values.
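The final merge of the two classifiers' outputs can be sketched as below. The abstract says only that the confidence values are "merged"; a weighted average followed by an argmax is one plausible reading, and the weight is an assumption.

```python
import numpy as np

# Hedged sketch: merge per-class confidence vectors from the SIFT-coding
# classifier and the LBP-histogram classifier, then pick the top class.

def merge_and_classify(conf_sift, conf_lbph, weight=0.5, cell_types=None):
    """Weighted average of the two confidence vectors; returns the winning
    class index, or its label if cell_types is given."""
    merged = weight * np.asarray(conf_sift) + (1 - weight) * np.asarray(conf_lbph)
    idx = int(np.argmax(merged))
    return cell_types[idx] if cell_types else idx
```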
RETRIEVING IMAGES THAT CORRESPOND TO A TARGET SUBJECT MATTER WITHIN A TARGET CONTEXT
Techniques are provided herein for retrieving images that correspond to a target subject matter within a target context. Although useful in a number of applications, the techniques provided herein are particularly useful in contextual product association and visualization. A method is provided to apply product images to a neural network. The neural network is configured to classify the products in the images. The images are associated with a context representing the combination of classified products in the images. These techniques leverage both seller-provided images of products and user-generated content, which potentially includes hundreds or thousands of images of the same or similar products as the seller-provided images. A graphical user interface is configured to permit a user to select the context of interest in which to visualize the products.
System and method for image reconstruction
A method and a system for image reconstruction are provided. The method may include acquiring raw image data, wherein the raw image data may include a plurality of frequency domain undersampled image data samples. The method may include generating a first reconstruction result based on the raw image data using a first reconstruction method, and generating a second reconstruction result based on the raw image data using a second reconstruction method. The method may further include fusing the first reconstruction result and the second reconstruction result, and generating a reconstructed image based on a result of the fusion.
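The fusion step can be illustrated as below. This is a deliberately simple stand-in, not the patented method: a pixel-wise weighted average of the two reconstruction results, where a real system might instead vary the weight per pixel or per frequency band.

```python
import numpy as np

# Illustrative fusion sketch: combine two reconstructions of the same
# undersampled raw data into one image by a weighted average.

def fuse_reconstructions(recon_a, recon_b, weight_a=0.5):
    """Fuse the first and second reconstruction results pixel-wise."""
    a = np.asarray(recon_a, dtype=float)
    b = np.asarray(recon_b, dtype=float)
    return weight_a * a + (1 - weight_a) * b
```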
IMAGE RECOGNITION DEVICE, SOLID-STATE IMAGING DEVICE, AND IMAGE RECOGNITION METHOD
Provided are an image recognition device (1), a solid-state imaging device (CIS 2), and an image recognition method capable of improving accuracy in recognizing a subject. The image recognition device (1) according to the present disclosure includes an imaging unit (4) and a recognition unit (9). The imaging unit (4) captures a plurality of images having different sensitivities in one frame period to generate image data of the plurality of images. The recognition unit (9) recognizes the subject from the image data of each of the images, and recognizes the subject captured in the image of one frame based on the results of recognizing the subject in each of the images.
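Combining the per-image recognitions into a single per-frame result could look like the sketch below. The combination rule is not specified in the abstract; a majority vote over the labels recognized in the differently exposed captures is one plausible assumption.

```python
from collections import Counter

# Hedged sketch: per-frame recognition as a majority vote over the labels
# recognized in each capture (e.g. high- and low-sensitivity images taken
# within one frame period).

def recognize_frame(per_image_labels):
    """Return the most frequently recognized subject label for the frame."""
    return Counter(per_image_labels).most_common(1)[0][0]
```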
Vision-Based Frictionless Self-Checkouts for Small Baskets
A vision-based self-checkout terminal is provided. Purchased items are placed on a base, and multiple cameras take multiple images of each item placed on the base. A location for each item on the base is determined, along with the depth and dimensions of each item at its given location. Each item's images are then cropped, and item recognition is performed for each item on that item's cropped images together with that item's corresponding depth and dimension attributes. An item identifier for each item is obtained along with a corresponding price, and a transaction associated with the items is completed.
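The per-item pipeline above can be sketched as follows. The detection record layout, the recognizer, and the price catalog are all hypothetical names introduced for illustration.

```python
# Sketch of the checkout pipeline: crop each item's images to its bounding
# box, recognize it using the crops plus depth/dimension attributes, and
# total the prices. Data shapes here are illustrative assumptions.

def crop(img, box):
    """Crop a 2-D image (list of rows) to the item's bounding box."""
    x0, y0, x1, y1 = box
    return [row[x0:x1] for row in img[y0:y1]]

def checkout(detections, recognize, price_lookup):
    """detections: per-item dicts with images, box, depth, dims.
    Returns the transaction total for all recognized items."""
    total = 0.0
    for det in detections:
        crops = [crop(img, det["box"]) for img in det["images"]]
        item_id = recognize(crops, det["depth"], det["dims"])
        total += price_lookup[item_id]
    return total
```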
Appearance and Movement Based Model for Determining Risk of Micro Mobility Users
The systems and methods disclosed herein provide a risk prediction system that uses trained machine learning models to predict that a vulnerable road user (VRU) will take a particular action. The system first receives, in a video stream, an image depicting a VRU operating a micro-mobility vehicle and extracts the depictions from the image. The extraction process may be performed by bounding box classifiers trained to identify various VRUs and micro-mobility vehicles. The system feeds the extracted depictions to machine learning models and receives, as an output, risk profiles for the VRU and the micro-mobility vehicle. A risk profile may include data associated with the VRU or micro-mobility vehicle determined based on classifications of the VRU and the micro-mobility vehicle. The system may then generate a prediction that the VRU operating the micro-mobility vehicle will take a particular action based on the risk profile.