Patent classifications
G06K9/64
INFORMATION PROCESSING APPARATUS AND NON-TRANSITORY COMPUTER READABLE MEDIUM STORING PROGRAM
An information processing apparatus includes a processor configured to set a first reference object, and if a second reference object identical or similar to the first reference object is recognized, virtually display a target object in relation to the second reference object. The target object is recognized in advance together with the first reference object.
Privacy-preserving sanitization for visual computing queries
In one embodiment, an apparatus comprises a communication interface and a processor. The communication interface is to communicate with a visual computing device over a network. The processor is to: access visual data captured by a camera; detect a particular feature in the visual data, wherein the particular feature comprises a visual indication of privacy-sensitive information; sanitize the visual data to mask the privacy-sensitive information associated with the particular feature, wherein sanitizing the visual data causes sanitized visual data to be produced; and transmit, via the communication interface, the sanitized visual data to the visual computing device over the network, wherein the visual computing device is to use the sanitized visual data to process a visual query associated with the visual data.
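The detect-then-mask flow described in this abstract can be sketched as follows. This is a minimal illustration, not the patented implementation: the brightness-threshold "detector", the list-of-lists frame layout, and the black-out masking are all assumptions standing in for a real feature detector and sanitization step.

```python
def detect_sensitive_region(frame):
    """Stand-in detector: returns a bounding box (r0, c0, r1, c1) around a
    privacy-sensitive feature, or None. Here, any pixel brighter than 200
    is treated as the sensitive feature."""
    for r, row in enumerate(frame):
        for c, px in enumerate(row):
            if px > 200:
                return (r, c, min(r + 1, len(frame) - 1),
                        min(c + 1, len(row) - 1))
    return None

def sanitize(frame):
    """Mask the detected region so the sensitive pixels never leave the
    capturing device; only the sanitized copy would be transmitted."""
    box = detect_sensitive_region(frame)
    if box is None:
        return frame
    r0, c0, r1, c1 = box
    out = [row[:] for row in frame]          # leave the original intact
    for r in range(r0, r1 + 1):
        for c in range(c0, c1 + 1):
            out[r][c] = 0                    # black out the sensitive pixels
    return out

frame = [[10, 20, 30],
         [40, 250, 60],                      # 250 simulates a sensitive feature
         [70, 80, 90]]
clean = sanitize(frame)
```

The visual computing device would then answer the visual query against `clean` only, never seeing the original pixel values.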
CASCADE CONVOLUTIONAL NEURAL NETWORK
In one embodiment, an apparatus comprises a communication interface and a processor. The communication interface is to communicate with a plurality of devices. The processor is to: receive compressed data from a first device, wherein the compressed data is associated with visual data captured by sensor(s); perform a current stage of processing on the compressed data using a current CNN, wherein the current stage of processing corresponds to one of a plurality of processing stages associated with the visual data, and wherein the current CNN corresponds to one of a plurality of CNNs associated with the plurality of processing stages; obtain an output associated with the current stage of processing; determine, based on the output, whether processing associated with the visual data is complete; if the processing is complete, output a result associated with the visual data; if the processing is incomplete, transmit the compressed data to a second device.
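The cascade's early-exit control flow can be sketched as below. The per-stage "models" are toy stand-ins (a mean score against a threshold) for the CNNs of the abstract; the structure of interest is that each stage either finishes the query or hands the compressed data onward.

```python
def make_stage(threshold):
    """Build a stand-in processing stage: a real system would run a CNN
    here; we score by the mean of the input and accept above a threshold."""
    def run(x):
        score = sum(x) / len(x)
        return score, score >= threshold
    return run

def cascade(x, stages):
    """Run stages in order. Return (stage_index, score) at the first stage
    whose processing is complete; otherwise fall through to the last
    stage's result (in the patent, incomplete data is transmitted to the
    next device instead)."""
    for i, run in enumerate(stages):
        score, done = run(x)
        if done:
            return i, score                  # processing complete: output result
    return len(stages) - 1, score            # exhausted the cascade

stages = [make_stage(0.9), make_stage(0.5), make_stage(0.0)]
idx, score = cascade([0.6, 0.8], stages)     # mean 0.7: stage 0 defers, stage 1 accepts
```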
FEATURE AMOUNT GENERATION METHOD, FEATURE AMOUNT GENERATION DEVICE, AND FEATURE AMOUNT GENERATION PROGRAM
Low-dimensional feature values that capture the semantic factors of content are generated from the relevance between two types of content.
Based on a relation indicator that identifies, as pairs of groups, which of the second-type content groups are related to which of the first-type content groups, an initial feature value extracting unit 11 extracts initial feature values of the first type of content and the second type of content. A content pair selecting unit 12 selects a content pair from each pair of groups indicated by the relation indicator by choosing one first-type content item and one second-type content item. A feature value conversion function generating unit 13 generates feature value conversion functions 31 that convert the initial feature values into low-dimensional feature values, based on the content pairs selected from the pairs of groups.
Image processing system, image processing method, and program
An image processing system, an image processing method, and a program capable of implementing an association of a person appearing in a video image through a simple operation are provided. The image processing system includes an input device which accepts input of video images captured by a plurality of video cameras, a display screen generating unit which causes a display device to display at least one video image among the video images inputted from the input device, and a tracked person registering unit which is capable of registering one or more persons appearing in the video image displayed by the display device. When a person appears in the video image displayed by the display device, the display screen generating unit selectably displays person images of one or more persons, which are associable with the person appearing in the video image and which are registered by the tracked person registering unit, in a vicinity of the video image.
MULTI-DOMAIN CONVOLUTIONAL NEURAL NETWORK
In one embodiment, an apparatus comprises a memory and a processor. The memory is to store visual data associated with a visual representation captured by one or more sensors. The processor is to: obtain the visual data associated with the visual representation captured by the one or more sensors, wherein the visual data comprises uncompressed visual data or compressed visual data; process the visual data using a convolutional neural network (CNN), wherein the CNN comprises a plurality of layers, wherein the plurality of layers comprises a plurality of filters, and wherein the plurality of filters comprises one or more pixel-domain filters to perform processing associated with uncompressed data and one or more compressed-domain filters to perform processing associated with compressed data; and classify the visual data based on an output of the CNN.
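The routing idea in this abstract can be illustrated with a toy sketch: the first "layer" sends raw pixels through a pixel-domain filter and already-compressed coefficients through a compressed-domain filter, and a shared step classifies the result. Both filters and the scoring rule are illustrative assumptions, not the patented network.

```python
def pixel_filter(x):
    """Stand-in pixel-domain filter: normalize raw 8-bit pixel values."""
    return [v / 255.0 for v in x]

def compressed_filter(x):
    """Stand-in compressed-domain filter: operate directly on transform
    coefficients, with no decode step."""
    return [abs(v) for v in x]

def classify(visual_data, is_compressed):
    """Route the input through the matching domain filter, then classify
    on a shared score (a real CNN head would go here)."""
    if is_compressed:
        features = compressed_filter(visual_data)
    else:
        features = pixel_filter(visual_data)
    score = sum(features) / len(features)
    return "positive" if score > 0.5 else "negative"

label = classify([200, 220, 180, 240], is_compressed=False)
```

The point of the mixed-filter design is that one network can accept either representation, so compressed inputs need not be decoded before inference.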
METHOD FOR EVALUATING ENVIRONMENTAL NOISE OF DEVICE, APPARATUS, MEDIUM AND ELECTRONIC DEVICE
The present disclosure provides a method, apparatus, medium, and electronic device for evaluating the environmental noise of a device. The method comprises: obtaining original image data to be displayed; determining at least part of the original image data to be displayed as source data; obtaining comparison data according to the source data; obtaining a difference value according to the comparison data and the source data; and evaluating the environmental noise of the device according to the difference value.
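The difference step at the core of this method can be sketched as follows. The mean absolute difference as the "difference value" and the hard-coded comparison data (standing in for a capture or readback of the displayed image) are assumptions for illustration.

```python
def mean_abs_diff(source, comparison):
    """Difference value between source data and comparison data:
    here, the mean absolute per-pixel difference."""
    return sum(abs(s - c) for s, c in zip(source, comparison)) / len(source)

source = [100, 100, 100, 100]        # part of the image data to be displayed
comparison = [102, 98, 101, 99]      # simulated readback perturbed by noise
noise = mean_abs_diff(source, comparison)
```

A larger difference value indicates noisier environmental conditions for the device.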
Image processing methods and devices
A method for image processing includes: acquiring features of multiple images of a target object and a standard feature of the target object; and determining trusted images of the target object from the multiple images of the target object according to similarities between the features of the multiple images of the target object and the standard feature thereof, wherein similarities between features of the trusted images of the target object and the standard feature of the target object meet a preset similarity requirement. The image processing method may be applied to application scenarios such as image comparison, identity recognition, target object search, and similar target object determination.
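The trusted-image selection can be sketched in a few lines, under the assumptions that features are real-valued vectors and that the "preset similarity requirement" is a cosine-similarity threshold (the abstract does not fix either choice).

```python
def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

def trusted_images(features, standard, threshold=0.9):
    """Indices of images whose feature is similar enough to the target
    object's standard feature to be considered trusted."""
    return [i for i, f in enumerate(features)
            if cosine(f, standard) >= threshold]

standard = [1.0, 0.0]                          # standard feature of the target
features = [[0.99, 0.1],                       # close match
            [0.0, 1.0],                        # orthogonal: untrusted
            [1.0, 0.05]]                       # close match
keep = trusted_images(features, standard)
```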
3D/2D Vascular Registration Method and Its Means
A 3D/2D vascular registration method includes: obtaining a first vascular image model according to topological information of vessels in a 3D vascular image, and obtaining a second vascular image model according to topological information of vessels in a 2D vascular image; and obtaining, according to the first vascular image model and the second vascular image model, a spatial transformation relationship between the 3D vascular image and the 2D vascular image, wherein the spatial transformation relationship is used to register the 3D vascular image and the 2D vascular image. Because the vascular image models are built from the topological information of the vessels in the vascular images and registration is performed on those models, the method achieves both high accuracy and high computational efficiency.
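A heavily simplified registration sketch, under strong assumptions not in the abstract: each vascular "model" is reduced to its branch-point coordinates, the 3D image is projected by dropping the depth axis, and the spatial transformation is a pure 2D translation estimated by centroid alignment. A real 3D/2D method would estimate a full projective transform from the topological models.

```python
def centroid(pts):
    """Centroid of a list of equal-dimension points."""
    n = len(pts)
    return tuple(sum(p[i] for p in pts) / n for i in range(len(pts[0])))

def register(points_3d, points_2d):
    """Return the 2D translation (tx, ty) mapping projected 3D branch
    points onto the 2D branch points, via centroid alignment."""
    proj = [(x, y) for x, y, _z in points_3d]  # drop the depth axis
    cx3, cy3 = centroid(proj)
    cx2, cy2 = centroid(points_2d)
    return (cx2 - cx3, cy2 - cy3)

pts3 = [(0, 0, 5), (2, 0, 5), (1, 3, 5)]       # 3D branch points
pts2 = [(10, 20), (12, 20), (11, 23)]          # same layout, shifted in 2D
tx, ty = register(pts3, pts2)
```

Working on a handful of branch points rather than full images is what makes model-based registration cheap; accuracy comes from the points encoding the vessel topology.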
Image attribute processing system, image attribute processing apparatus, and method of processing image attributes
An information processing system includes circuitry configured to: store, in a memory, one or more feature value patterns associated with appearance attributes of one or more groups of persons, calculated from a plurality of acquired image data using machine learning, each of the groups being assumed to have a unique group value that differs from group to group; receive image data of a target person input as analysis target data; analyze an appearance attribute of the target person in the image data using the one or more feature value patterns associated with the appearance attributes of the one or more groups stored in the memory; and output a response corresponding to an analysis result of the appearance attribute of the target person.