Patent classifications
G06K9/68
TEMPORAL-BASED VISUALIZED IDENTIFICATION OF COHORTS OF DATA POINTS PRODUCED FROM WEIGHTED DISTANCES AND DENSITY-BASED GROUPING
A user-selected group of data points is received. Weighted distances between further data points and the user-selected group of data points are computed, the weighted distances computed based on respective weights assigned to dimensions of the data points. Density-based grouping of the further data points is performed based on the computed weighted distances, the density-based grouping producing cohorts of data points. A graphical visualization is generated including pixels representing the user-selected group of data points and the cohorts of data points. The graphical visualization provides a temporal-based visualized identification of the cohorts with the user-selected group of data points.
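The two central steps of the abstract, a per-dimension weighted distance and a density-based grouping over those distances, can be sketched as follows. This is a minimal illustration, not the patented method: the function names, the Euclidean form of the weighted distance, and the DBSCAN-style expansion are all assumptions chosen for clarity.

```python
import math

def weighted_distance(a, b, weights):
    # Weighted Euclidean distance: each dimension contributes to the
    # distance in proportion to its assigned weight (an assumption;
    # the abstract does not fix the distance form).
    return math.sqrt(sum(w * (x - y) ** 2 for x, y, w in zip(a, b, weights)))

def density_groups(points, weights, eps, min_pts):
    # Minimal DBSCAN-style grouping using the weighted distance.
    # Returns one cluster label per point; -1 marks points that fall
    # in no dense cohort.
    labels = [-1] * len(points)
    cluster = 0
    for i in range(len(points)):
        if labels[i] != -1:
            continue
        neighbors = [j for j in range(len(points))
                     if weighted_distance(points[i], points[j], weights) <= eps]
        if len(neighbors) < min_pts:
            continue  # not a dense core point
        # Expand the cohort outward from this dense core point.
        queue = list(neighbors)
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster
                more = [k for k in range(len(points))
                        if weighted_distance(points[j], points[k], weights) <= eps]
                if len(more) >= min_pts:
                    queue.extend(more)
        cluster += 1
    return labels
```

With two tight groups of 2-D points and unit weights, the grouping assigns each tight group its own cohort label; raising the weight of one dimension stretches distances along it and can split or merge cohorts accordingly.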
PERSONALIZED SUMMARY GENERATION OF DATA VISUALIZATIONS
Various embodiments are generally directed to systems for summarizing data visualizations (i.e., images of data visualizations), such as a graph image, for instance. Some embodiments are particularly directed to a personalized graph summarizer that analyzes a data visualization, or image, to detect pre-defined patterns within the data visualization, and produces a textual summary of the data visualization based on the pre-defined patterns detected within the data visualization. In various embodiments, the personalized graph summarizer may include features to adapt to the preferences of a user for generating an automated, personalized computer-generated narrative. For instance, additional pre-defined patterns may be created for detection and/or the textual summary may be tailored based on user preferences. In some such instances, one or more of the user preferences may be automatically determined by the personalized graph summarizer without requiring the user to explicitly indicate them. Embodiments may integrate machine learning and computer vision concepts.
Method and system for assessing similarity of documents
A method for assessing similarity of documents. The method includes extracting a reference document text from a reference document, extracting an archived document text from an archived document, and quantifying the reference document and the archived document. Quantifying the reference and archived documents includes tokenizing sentences of the reference document and archived document, respectively, and vectorizing the tokenized sentences to obtain a reference document text vector and an archived document text vector for each sentence of the reference and archived document, respectively. The method also includes determining a document similarity value of the quantified reference document and the quantified archived document. Determining the document similarity value includes calculating a set of vector similarity values for a set of combinations of a reference document text vector and an archived document text vector, and calculating the document similarity value as a sum of the set of vector similarity values.
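The pipeline described above, sentence tokenization, sentence vectorization, pairwise vector similarity, and a final sum, can be sketched as below. The bag-of-words vectorization and cosine similarity are assumptions for illustration; the abstract does not specify the vectorizer or the similarity measure, and all names are hypothetical.

```python
import math
import re

def sentence_vectors(text, vocab):
    # Tokenize the text into sentences, then represent each sentence as
    # a bag-of-words count vector over a shared vocabulary (assumed).
    sentences = [s for s in re.split(r'[.!?]+', text) if s.strip()]
    vectors = []
    for s in sentences:
        words = s.lower().split()
        vectors.append([words.count(w) for w in vocab])
    return vectors

def cosine(u, v):
    # Cosine similarity between two count vectors; 0.0 for a zero vector.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def document_similarity(reference, archived, vocab):
    # Sum the similarity over every combination of one reference
    # sentence vector with one archived sentence vector.
    ref_vecs = sentence_vectors(reference, vocab)
    arc_vecs = sentence_vectors(archived, vocab)
    return sum(cosine(r, a) for r in ref_vecs for a in arc_vecs)
```

For two identical one-sentence documents the sum is a single cosine of 1.0; longer documents accumulate one term per sentence pair, so in practice the sum is often normalized by the number of pairs.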
AUTOMATED DATA EXTRACTION FROM SCATTER PLOT IMAGES
The invention relates to a computer-implemented method for automatically extracting data from a scatter plot. The method comprises receiving a digital image of a scatter plot; analyzing the received digital image for identifying a plurality of pixel sets, each pixel set being a group of adjacent pixels; analyzing the pixel sets in the received image or in a derivative of the received image for generating a plurality of templates; comparing the templates with pixels of a target image for identifying matching templates; identifying data points for the identified matching templates; assigning each identified data point to a data series; and returning the identified data points.
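The template-matching step at the heart of this method can be sketched over a binary image: slide a small marker template across the plot image and record every position where all template pixels agree. This is a simplified exact-match illustration with hypothetical names; real scatter-plot extraction would tolerate noise and anti-aliasing.

```python
def find_template_matches(image, template):
    # Slide the template over the binary image (lists of 0/1 rows);
    # a position (r, c) matches when every template pixel equals the
    # underlying image pixel. Exact matching is an assumption here.
    H, W = len(image), len(image[0])
    h, w = len(template), len(template[0])
    matches = []
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            if all(image[r + i][c + j] == template[i][j]
                   for i in range(h) for j in range(w)):
                matches.append((r, c))
    return matches
```

Each match position then yields one data point (e.g. the template's center), and points found by the same template can be assigned to the same data series, mirroring the abstract's final steps.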
Information processing device, information processing method and information processing program
An image comparison unit (81) compares a query image with a registered image to detect, in the registered image, a region corresponding to the query image. An action information determining unit (82), on the basis of intermediate information in which sub-region information identifying sub-regions in the registered image and action information representing information processing to be executed by a target device are associated with each other, identifies sub-regions on the basis of the sub-region information, chooses a sub-region having the highest degree of matching with the detected region among the identified sub-regions, and identifies action information corresponding to the chosen sub-region. An action information execution unit (83) causes the target device to execute information processing corresponding to the action information.
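The sub-region selection step, choosing the sub-region with the highest degree of matching against the detected region and returning its associated action, can be sketched with rectangles and intersection-over-union as the matching measure. IoU and all names here are assumptions; the abstract does not define how the degree of matching is computed.

```python
def iou(a, b):
    # Intersection-over-union of two (x, y, w, h) rectangles, used
    # here as the assumed "degree of matching" between regions.
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def choose_action(detected_region, intermediate_info):
    # intermediate_info maps each sub-region rectangle to its action
    # information; pick the sub-region that best matches the region
    # detected by the image comparison step.
    best = max(intermediate_info, key=lambda r: iou(detected_region, r))
    return intermediate_info[best]
```

The returned action information would then be handed to the target device for execution, corresponding to the action information execution unit (83).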
SYSTEM AND METHOD FOR DETECTING OBJECTS IN AN AUTOMOTIVE ENVIRONMENT
Advanced driver assistance systems (ADAS) and methods for detecting objects, such as traffic lights and speed signs, in an automotive environment are disclosed. In an embodiment, the ADAS includes a camera system for capturing image frames of at least a part of the surroundings of a vehicle, a memory comprising image processing instructions, and a processing system for detecting one or more objects in a coarse detection followed by a fine detection. The coarse detection includes detecting the presence of the one or more objects in non-consecutive image frames, where the non-consecutive image frames are determined by skipping one or more of the image frames. Upon detection of the presence of the one or more objects in the coarse detection, fine detection of the one or more objects is performed in a predetermined number of image frames neighboring the frame in which the presence of the objects was detected in the coarse detection.
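The coarse-then-fine scheduling can be sketched as follows: run the detector only on every (skip+1)-th frame, and when a coarse hit occurs, re-run it on a small window of neighboring frames. This is a minimal sketch under assumed names and parameters; the actual detector and window sizes are not specified by the abstract.

```python
def coarse_then_fine(frames, detect, skip=3, window=2):
    # Coarse pass: apply the detector only to non-consecutive frames,
    # obtained by skipping `skip` frames between samples.
    # Fine pass: after a coarse hit at frame i, apply the detector to
    # the `window` neighboring frames on each side of i.
    hits = set()
    for i in range(0, len(frames), skip + 1):
        if detect(frames[i]):
            hits.add(i)
            for j in range(max(0, i - window), min(len(frames), i + window + 1)):
                if detect(frames[j]):
                    hits.add(j)
    return sorted(hits)
```

The design trade-off is the usual one: a larger `skip` cuts detector invocations roughly by a factor of skip+1, while the fine window recovers the frames around each hit that the coarse pass jumped over.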
Information processing apparatus, system, and non-transitory computer readable medium
An information processing apparatus includes a recognition unit. The recognition unit has image recognition methods of plural types. In a case where image data of a first document and image data of a second document are generated by a generation unit that reads a document and generates image data of the document, the recognition unit recognizes the type of the first document from the image data of the first document, and recognizes the type of the second document from the image data of the second document using an image recognition method corresponding to the type of the first document among the image recognition methods of the plural types, the first document and the second document being included in plural documents.
Computer-readable medium storing therein image processing program, image processing device, and image processing method
A non-transitory computer-readable storage medium storing an image processing program that causes a computer to execute a process, the process including: individually detecting, as a detection target candidate, a first portion of each of detection targets that appear in an image captured by an imaging device, by using a first detection area corresponding to the first portion; detecting a detection target for a detection target candidate corresponding to a detection target that is not covered by another detection target and that is included in the detected detection target candidates, by using a second detection area corresponding to a second portion including the first portion; and detecting a detection target for a detection target candidate corresponding to a detection target that is covered by another detection target and that is included in the detected detection target candidates, by using a third detection area corresponding to a third portion including the first portion.
Analyzing font similarity for presentation
A system includes a computing device that includes a memory configured to store instructions. The system also includes a processor to execute the instructions to perform operations that include receiving data representing features of a first font and data representing features of a second font. The first font and the second font are capable of representing one or more glyphs. Operations also include receiving survey-based data representing the similarity between the first and second fonts, and training a machine learning system using the features of the first font, the features of the second font, and the survey-based data that represents the similarity between the first and second fonts.
Image processing method, image processing apparatus, program, storage medium, production apparatus, and method of producing assembly
A tentative local score between a point in a feature image in a template image and a point, in a target object image, at a position corresponding to the point in the feature image is calculated, and a determination is performed as to whether the tentative local score is smaller than 0. In a case where the tentative local score is greater than or equal to 0, the tentative local score is employed as a local score. In a case where the tentative local score is smaller than 0, the tentative local score is multiplied by a coefficient and the result is employed as a degree of local similarity.
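The scoring rule above is simple enough to state directly in code: non-negative tentative scores pass through unchanged, while negative ones are scaled by a coefficient before being used as the degree of local similarity. Function and parameter names are illustrative; the abstract does not specify how the tentative score itself is computed.

```python
def local_similarity(tentative_score, coefficient):
    # Case 1: tentative score >= 0 -> employed as the local score as-is.
    if tentative_score >= 0:
        return tentative_score
    # Case 2: tentative score < 0 -> multiplied by a coefficient and the
    # result employed as the degree of local similarity (e.g. a
    # coefficient below 1 softens the penalty of contradicting points).
    return tentative_score * coefficient
```

With a coefficient between 0 and 1, mismatching points still lower the aggregate similarity between template and target object image, but less sharply than matching points raise it.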