Patent classifications
G06F16/51
INFORMATION PROCESSING UNIT, INFORMATION PROCESSING METHOD, AND PROGRAM
An information processing unit includes: a diagnostic image input section that inputs a diagnostic image; an operation information obtaining section that obtains display operation history information representing an operation history of a user who controls displaying of the diagnostic image; a query image generation section that extracts a predetermined region of the input diagnostic image to generate a query image; a diagnosed image obtaining section that supplies the generated query image and the display operation history information to a diagnosed image search unit and obtains a diagnosed image returned as a search result by the diagnosed image search unit; and a display control section that displays the diagnostic image and the obtained diagnosed image for comparison.
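The query-image generation step above can be sketched as a simple rectangular crop. This is an illustrative interpretation, not the patent's implementation; the function name, the list-of-lists image representation, and the region bounds are all assumptions.

```python
# Hypothetical sketch: extract a predetermined region of a diagnostic
# image (here a 2D list of pixel values) to serve as a query image.
def extract_query_image(image, top, left, height, width):
    """Return the sub-image covering the given rectangular region."""
    return [row[left:left + width] for row in image[top:top + height]]

# Toy 10x10 "diagnostic image" with pixel value = row*10 + column.
diagnostic_image = [[r * 10 + c for c in range(10)] for r in range(10)]
query_image = extract_query_image(diagnostic_image, top=2, left=3, height=2, width=4)
# query_image is the 2x4 crop starting at row 2, column 3:
# [[23, 24, 25, 26], [33, 34, 35, 36]]
```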
METHOD FOR AUTOMATICALLY NAMING PHOTOS BASED ON MOBILE TERMINAL, SYSTEM, AND MOBILE TERMINAL
A method for automatically naming photos based on a mobile terminal, a system, and a mobile terminal are proposed. The method includes presetting a photo naming rule and storing the photo naming rule in the mobile terminal; updating, in real time, calendar information of a naming resource provided in the mobile terminal; and, when a new photo is detected being stored, searching the calendar information for the naming resource corresponding to the current time, automatically naming the new photo according to the preset naming rule, and storing the named photo in a specific category.
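The naming flow above can be sketched as a calendar lookup keyed by the photo's timestamp. The rule format, the `calendar_info` structure, and all names here are hypothetical stand-ins for the patent's "naming resource" and "naming rule".

```python
# Illustrative sketch, assuming the naming resource is a date-keyed
# calendar and the naming rule is a format string (both assumptions).
from datetime import datetime

calendar_info = {"2024-01-01": "NewYear", "2024-12-25": "Christmas"}

def name_photo(taken_at: datetime, rule: str = "{event}_{time}.jpg") -> str:
    """Look up the naming resource for the photo's date and apply the rule."""
    event = calendar_info.get(taken_at.strftime("%Y-%m-%d"), "Photo")
    return rule.format(event=event, time=taken_at.strftime("%H%M%S"))

name_photo(datetime(2024, 1, 1, 9, 30, 0))  # -> "NewYear_093000.jpg"
```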
METHOD AND APPARATUS FOR RECOMMENDING AN INTERFACE THEME
A method and an apparatus for recommending an interface theme are provided. An exemplary embodiment of the method includes: obtaining a target image which includes an image of a target person; obtaining characteristic information of the target person based on the target image; obtaining a selection list of recommended themes, wherein the recommended themes are interface themes that match the characteristic information of the target person; and outputting the selection list of recommended themes.
Method and Apparatus for Accessing a Terminal Device Camera to a Target Device
The present disclosure discloses a method and apparatus for accessing a terminal device camera to a target device. The method includes: establishing a connection channel between the terminal device camera and the target device; starting the terminal device camera and obtaining image data captured by the terminal device camera; and transmitting the obtained image data to the target device through the established connection channel.
Machine-learning for enhanced machine reading of non-ideal capture conditions
Implementations of the present disclosure include receiving a training image, providing a hash pattern that is representative of the training image, applying a plurality of filters to the training image to provide a respective plurality of filtered training images, identifying a filter to be associated with the hash pattern based on the plurality of filtered training images, and storing a mapping of the filter to the hash pattern within a set of mappings in a data store.
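The mapping step above can be sketched as follows. The average-hash-style pattern, the toy filters, and the brightness-sum "readability" score are all illustrative assumptions; the disclosure does not specify these.

```python
# Hypothetical sketch: choose, per hash pattern, the filter whose output
# scores best, and store that mapping. Filters and score are stand-ins.
def hash_pattern(image):
    """Bit string: '1' where the pixel exceeds the image mean."""
    flat = [p for row in image for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def brighten(image):
    return [[min(p + 50, 255) for p in row] for row in image]

def identity(image):  # placeholder for a no-op filter
    return image

def best_filter(image, filters, score):
    """Identify the filter to associate with this image's hash pattern."""
    return max(filters, key=lambda name: score(filters[name](image)))

filters = {"brighten": brighten, "identity": identity}
score = lambda img: sum(p for row in img for p in row)  # toy readability proxy

mappings = {}  # hash pattern -> chosen filter name (the stored "set of mappings")
training_image = [[10, 200], [30, 220]]
mappings[hash_pattern(training_image)] = best_filter(training_image, filters, score)
```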
TEMPORAL-BASED VISUALIZED IDENTIFICATION OF COHORTS OF DATA POINTS PRODUCED FROM WEIGHTED DISTANCES AND DENSITY-BASED GROUPING
A user-selected group of data points is received. Weighted distances between further data points and the user-selected group of data points are computed, the weighted distances being computed based on respective weights assigned to dimensions of the data points. Density-based grouping of the further data points is performed based on the computed weighted distances, the density-based grouping producing cohorts of data points. A graphical visualization is generated including pixels representing the user-selected group of data points and the cohorts of data points. The graphical visualization provides a temporal-based visualized identification of the cohorts along with the user-selected group of data points.
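The two computations above can be sketched as a weighted Euclidean distance plus a naive eps-neighborhood grouping. The choice of Euclidean form, the union-find grouping, and the weights and threshold values are assumptions for illustration; the abstract does not fix a specific density-based algorithm.

```python
# Hypothetical sketch: weighted distances, then density-based cohorts
# (points joined when a chain of eps-neighbors connects them).
import math

def weighted_distance(a, b, weights):
    """Euclidean distance with a per-dimension weight."""
    return math.sqrt(sum(w * (x - y) ** 2 for x, y, w in zip(a, b, weights)))

def density_groups(points, weights, eps):
    """Naive union-find grouping of points within eps of each other."""
    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            i = parent[i]
        return i
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if weighted_distance(points[i], points[j], weights) <= eps:
                parent[find(j)] = find(i)
    groups = {}
    for i in range(len(points)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

pts = [(0, 0), (0.5, 0), (5, 5), (5.2, 5.1)]
density_groups(pts, weights=(1.0, 1.0), eps=1.0)  # two cohorts: {0,1} and {2,3}
```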
Contextual local image recognition dataset
A contextual local image recognition module of a device retrieves a primary content dataset from a server and then generates and updates a contextual content dataset based on an image captured with the device. The device stores the primary content dataset and the contextual content dataset. The primary content dataset comprises a first set of images and corresponding virtual object models. The contextual content dataset comprises a second set of images and corresponding virtual object models retrieved from the server.
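The two-dataset arrangement above can be sketched as a fixed primary store plus a capped, capture-driven contextual cache. The class, the eviction policy, and the callback-based server fetch are all assumptions for illustration.

```python
# Hypothetical sketch: primary dataset retrieved once from the server;
# contextual dataset generated and updated from captured images.
from collections import OrderedDict

class LocalRecognitionStore:
    def __init__(self, primary, capacity=3):
        self.primary = dict(primary)       # image key -> virtual object model
        self.contextual = OrderedDict()    # updated from device captures
        self.capacity = capacity           # assumed cap on the contextual set

    def on_capture(self, image_key, fetch_from_server):
        """Serve from the primary dataset, else fetch and cache contextually."""
        if image_key in self.primary:
            return self.primary[image_key]
        model = fetch_from_server(image_key)
        self.contextual[image_key] = model
        self.contextual.move_to_end(image_key)
        while len(self.contextual) > self.capacity:
            self.contextual.popitem(last=False)  # evict least recently added
        return model

store = LocalRecognitionStore({"logo": "logo_model"})
store.on_capture("poster", lambda k: k + "_model")  # fetched, cached contextually
```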
Real time visual validation of digital content using a distributed ledger
A digital asset is represented and verified as a set of related digital asset or other content objects. Related metadata is stored on an immutable distributed ledger separately from the content objects themselves. For example, a transaction object includes metadata such as identifiers for two or more content objects and fingerprints for those content objects. The content objects may be stored in a local or cloud object repository. Validation of a later identified content object may include determining a fingerprint for the later identified content object, mapping that fingerprint to an address within the immutable distributed ledger to retrieve the previously mapped metadata, and comparing the two fingerprints. Visual validation may be provided when the first and second fingerprints match, such as by displaying a positive icon adjacent to the later identified content object.
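The validation flow above can be sketched as follows, with SHA-256 standing in for the fingerprint function and a plain dict standing in for the immutable ledger. Both substitutions, and the transaction layout, are assumptions; the abstract does not name a hash or ledger technology.

```python
# Hypothetical sketch: record a content object's fingerprint on the
# ledger, then validate a later-seen object by recomputing and matching.
import hashlib

def fingerprint(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

ledger = {}  # fingerprint -> transaction metadata (append-only in practice)

def record(content: bytes, object_id: str):
    fp = fingerprint(content)
    ledger[fp] = {"object_id": object_id, "fingerprint": fp}

def validate(content: bytes) -> bool:
    """True (show positive icon) when the recomputed fingerprint matches."""
    entry = ledger.get(fingerprint(content))
    return entry is not None and entry["fingerprint"] == fingerprint(content)

record(b"press-photo-bytes", "asset-1")
validate(b"press-photo-bytes")   # True: fingerprints match
validate(b"tampered-bytes")      # False: no matching ledger entry
```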