Patent classifications
G06K9/20
Collation/retrieval system, collation/retrieval server, image feature extraction apparatus, collation/retrieval method, and program
The present invention is a collation/retrieval system that collates a product manufactured by or delivered from a producer or distributor with a product to be collated, comprising: a storage unit that stores an image feature of a predetermined collation area of the product, the collation area being determined in advance at a position relative to a reference section common to every product; a to-be-collated product feature extraction unit that receives an image of the product to be collated, detects the reference section of the product in the received image, and extracts an image feature of the collation area determined by reference to the reference section; and a collation unit that collates the stored image feature with the image feature of the collation area of the product to be collated.
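A minimal sketch of the collation step described above, assuming the image feature is a pixel patch taken at a fixed offset from the detected reference section and matched by normalized correlation (the abstract fixes neither the feature nor the matcher):

```python
import numpy as np

def extract_collation_feature(image, ref_pos, offset=(5, 5), size=(4, 4)):
    """Extract an image feature from the collation area located at a fixed
    position relative to the detected reference section (here, a patch)."""
    r, c = ref_pos[0] + offset[0], ref_pos[1] + offset[1]
    patch = image[r:r + size[0], c:c + size[1]]
    return patch.astype(float).ravel()

def collate(stored_feature, candidate_feature, threshold=0.95):
    """Declare a match when normalized correlation exceeds a threshold."""
    a = stored_feature - stored_feature.mean()
    b = candidate_feature - candidate_feature.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0:
        return False
    return float(a @ b) / denom >= threshold
```

In a real system the feature would be something more robust (e.g. a texture descriptor) and the reference section would be found by a detector, but the store-then-compare structure is the same.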
Device and method for identifying an object at least partially covered by a transparent material
A device and method for identifying an object at least partially covered by a transparent material.
Apparatus and methods for safe navigation of robotic devices
Apparatus and methods for navigation of a robotic device configured to operate in an environment comprising objects and/or persons. The location of objects and/or persons may change prior to and/or during operation of the robot. In one embodiment, a bistatic sensor comprises a transmitter and a receiver. The receiver may be spatially displaced from the transmitter. The transmitter may project a pattern on a surface in the direction of robot movement. In one variant, the pattern comprises an encoded portion and an information portion. The information portion may be used to communicate information related to robot movement to one or more persons. The encoded portion may be used to determine the presence of one or more objects in the path of the robot. The receiver may sample a reflected pattern and compare it with the transmitted pattern. Based on a similarity measure breaching a threshold, an indication of object presence may be produced.
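The transmit/receive comparison can be sketched as follows, assuming the patterns are intensity vectors and the similarity measure is normalized correlation (the abstract leaves both open):

```python
import numpy as np

def object_present(transmitted, reflected, threshold=0.8):
    """Compare the sampled reflected pattern with the transmitted pattern;
    similarity below the threshold suggests an object has distorted the
    projected pattern, so an obstacle indication is produced."""
    t = transmitted - transmitted.mean()
    r = reflected - reflected.mean()
    denom = np.linalg.norm(t) * np.linalg.norm(r)
    similarity = float(t @ r) / denom if denom else 0.0
    return similarity < threshold  # True -> indicate object in path
```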
Determining regions of interest based on user interaction
A system and method provide for determining regions of interest within an image based on viewer interaction with the image. At least one image associated with a location is provided for display in a viewport, and pose data related to user interaction with the at least one image is identified. Weights are assigned to portions of the at least one image based on the pose data, each weight indicating at least a period of time during which the corresponding portion of the image is generally at the center of the viewport. Based on the assigned weights, image regions of interest of the at least one image are determined.
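The weighting scheme can be illustrated with a toy implementation, assuming pose data is reduced to one sampled "center region" per time step and a dwell-time threshold selects the regions of interest (both are assumptions; the abstract does not fix the representation):

```python
from collections import defaultdict

def weight_regions(pose_samples, dt=0.1):
    """Accumulate, per image region, the time that region sat at the
    viewport center; each sample covers dt seconds."""
    weights = defaultdict(float)
    for region in pose_samples:
        weights[region] += dt
    return dict(weights)

def regions_of_interest(weights, min_seconds=0.5):
    """Regions whose accumulated dwell time meets a threshold."""
    return {r for r, w in weights.items() if w >= min_seconds}
```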
Fingerprint and Palmprint Image Collector with Honeycomb Structure, and Terminal Device
A fingerprint and palmprint image collector with a honeycomb structure and a terminal device are provided. The image collector includes a light guide plate and a light source for emitting at least part of its light into the light guide plate. One surface of the light guide plate is provided with a honeycomb plate. A plurality of vias parallel to each other are densely disposed on the honeycomb plate. The diameter of each via is in a range from 0.5 micrometer to 50 micrometers, and the size of an acquired fingerprint or palmprint is equal to the size of the acquired image. The thickness of the honeycomb plate is more than five times the diameter of the vias. The distance between the centers of adjacent vias is less than or equal to 50.8 micrometers. The other surface of the honeycomb plate is provided with an image sensor. The image collector can be integrated into the terminal device. The fingerprint and palmprint image collector with the honeycomb structure according to the invention has a compact structure and small thickness, and can improve the contrast of the fingerprint image and the adaptability to dry and wet fingers. The fingerprint and palmprint image collector is integrated into the terminal device so that the terminal device gains a fingerprint and palmprint acquisition function with few additional components and at low cost.
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, PROGRAM, AND INFORMATION PROCESSING SYSTEM
The present technology relates to an information processing device, an information processing method, a program, and an information processing system that realize smooth entrance with an electronic ticket. ID information used for determining admittance or non-admittance of entrance with an electronic ticket is extracted from a captured image of a superimposition image, which contains a predetermined image and the ID information superimposed on the predetermined image; admittance or non-admittance of entrance is then determined on the basis of the electronic ticket and the ID information. For example, the present technology is applicable to an entrance gate system or the like provided to check entrance into an event site.
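A toy version of the extract-and-check flow, assuming (purely for illustration; the abstract does not specify the embedding) that the ID bits are superimposed in the least-significant bits of the predetermined image's first pixels:

```python
import numpy as np

def superimpose_id(image, id_bits):
    """Embed ID bits in the least-significant bits of the first pixels
    of a predetermined image (an assumed, illustrative embedding)."""
    out = image.copy()
    for i, bit in enumerate(id_bits):
        out.flat[i] = (out.flat[i] & 0xFE) | bit
    return out

def extract_id(captured, n_bits):
    """Recover the superimposed ID bits from the captured image."""
    return [int(captured.flat[i] & 1) for i in range(n_bits)]

def admit(ticket_id_bits, extracted_bits):
    """Admittance is determined from the electronic ticket's ID and the
    ID extracted from the captured superimposition image."""
    return ticket_id_bits == extracted_bits
```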
OBJECT RECOGNITION DEVICE, OBJECT RECOGNITION METHOD, AND PROGRAM
An object recognition device includes an acquisition unit configured to acquire a recognition target image that serves as an object to be recognized; a retrieval unit configured to search an image database storing a plurality of image data in association with tag information and retrieve similar images that match the recognition target image; and a recognition unit configured to recognize the object included in the recognition target image on the basis of tag information associated with the similar images obtained by the retrieval unit. The recognition unit may select the tag information that appears most frequently among the tag information associated with the similar images as a recognition result. The recognition unit may also compute a tag information reliability score from the similar images in the retrieval result and recognize an object taking said reliability score into account.
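The majority-tag selection with a simple reliability score can be sketched as follows (the frequency-ratio reliability is an assumption; the abstract does not define the score):

```python
from collections import Counter

def recognize(similar_image_tags):
    """Select the tag appearing most frequently among the tags of the
    retrieved similar images; its frequency ratio serves as a simple
    reliability score for the recognition result."""
    counts = Counter(similar_image_tags)
    tag, n = counts.most_common(1)[0]
    reliability = n / len(similar_image_tags)
    return tag, reliability
```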
SYSTEM AND METHOD FOR DYNAMIC CAMOUFLAGING
Systems and methods for dynamic camouflaging are disclosed. A computer-implemented method can be used with the system including determining, by a computing device, if current environment image data is available for a location of one or more users, and instructing, by the computing device, at least one image-enabled clothing system of the one or more users to display a camouflage image based on the determining. The camouflage image is based on the current environment image data when the current environment image data is available, and the camouflage image is based on historic image data associated with the location of the one or more users when the current environment image data is not available.
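The availability-based selection logic reduces to a simple fallback, sketched here with location-keyed image stores (the data structures are assumptions for illustration):

```python
def choose_camouflage(location, current_images, historic_images):
    """Prefer current environment imagery for the user's location; fall
    back to historic imagery for that location when no current
    environment image data is available."""
    if location in current_images:
        return current_images[location]
    return historic_images.get(location)
```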
IMAGE PROCESSING SYSTEMS AND/OR METHODS
The present invention provides a method (100, 200) for identifying, retrieving and/or processing one or more images (12ₙ) from one or more source network locations (14ₙ) for display at one or more predetermined target network locations (16ₙ). The method includes the steps of: acquiring an address (36ₙ) for each of the one or more source network locations (14ₙ); perusing data available at each of the one or more source network locations (14ₙ) to identify one or more images (12ₙ) suitable for display at the one or more target network locations (16ₙ); retrieving any images (12ₙ) identified as being suitable for display at the one or more target network locations (16ₙ); processing the retrieved images (12ₙ), as required or desired, in order to adapt the images (12ₙ) for display at the one or more target network locations (16ₙ); and selectively displaying the retrieved and/or processed image or images (12ₙ) at the one or more target network locations (16ₙ). Also provided is an associated system (10) for use with the method (100, 200) of the invention.
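The identify-retrieve-process steps can be sketched with a minimum-size suitability test and a scale-to-width adaptation (both assumed for illustration; the abstract leaves the criteria and processing open):

```python
def select_suitable(images, min_width=200, min_height=200):
    """Peruse candidate images at a source location and keep those
    suitable for display at the target location (here, a minimum-size
    rule stands in for the suitability test)."""
    return [img for img in images
            if img["width"] >= min_width and img["height"] >= min_height]

def adapt(image, target_width=200):
    """Process a retrieved image for the target location, e.g. scale it
    to the target width while preserving its aspect ratio."""
    scale = target_width / image["width"]
    return {**image, "width": target_width,
            "height": round(image["height"] * scale)}
```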
INSPECTION METHOD AND INSPECTION APPARATUS
An inspection method according to the embodiments includes applying light of a light source to an inspection target; receiving light from the inspection target to obtain a first image of the inspection target by a sensor; based on an image of a first pattern comprising repetitive patterns unresolvable with a wavelength of the light source in the first image, calculating a deviation of luminance values with respect to each of the first regions in the first pattern by a processor; obtaining a second image of the inspection target by the sensor; correcting luminance values of the second image by the processor based on the deviations of the luminance values; and comparing the repetitive patterns of the corrected second image with each other by a comparer.
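The calibrate-then-correct idea can be sketched numerically: the unresolvable repetitive pattern should image as a uniform field, so each region's departure from the global mean is treated as a sensor/illumination deviation and subtracted from the second image before comparison (region shape and the subtractive model are assumptions):

```python
import numpy as np

def luminance_deviations(first_image, region_size):
    """From the first image (which should be uniform, since the pattern
    is unresolvable), compute each region's mean deviation from the
    global mean luminance."""
    h, w = first_image.shape
    rs = region_size
    devs = np.zeros((h // rs, w // rs))
    mean = first_image.mean()
    for i in range(h // rs):
        for j in range(w // rs):
            devs[i, j] = first_image[i*rs:(i+1)*rs, j*rs:(j+1)*rs].mean() - mean
    return devs

def correct(second_image, devs, region_size):
    """Subtract the per-region luminance deviations from the second image
    before the repetitive patterns are compared with each other."""
    out = second_image.astype(float).copy()
    rs = region_size
    for i in range(devs.shape[0]):
        for j in range(devs.shape[1]):
            out[i*rs:(i+1)*rs, j*rs:(j+1)*rs] -= devs[i, j]
    return out
```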