Patent classifications
G06F40/30
System and method for quality assessment of product description
A system for assessing the text content of a product. The system includes a computing device having a processor and a storage device storing computer-executable code. The computer-executable code, when executed at the processor, is configured to: provide text contents and confounding features of products; train a first regression model using the text contents and the confounding features of the products; train a second regression model using the confounding features alone; operate the first regression model on the text contents and the confounding features to obtain a total loss; operate the second regression model on the confounding features to obtain a partial loss; subtract the total loss from the partial loss to obtain a residual loss; use the residual loss to evaluate models and parameters for the regression models; and use the first regression model to obtain log odds of the words, indicating the importance of the words.
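The residual-loss comparison above can be sketched in a few lines: fit one regression on text plus confounders and one on confounders only, then subtract the total loss from the partial loss. This is a minimal illustration on synthetic data using ordinary least squares; the names (`X_text`, `X_conf`, `fit_mse`) and the data are assumptions, not the patent's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical features: text-derived features and confounding features.
X_text = rng.normal(size=(n, 5))
X_conf = rng.normal(size=(n, 3))
y = X_text @ rng.normal(size=5) + X_conf @ rng.normal(size=3) + rng.normal(scale=0.1, size=n)

def fit_mse(X, y):
    """Ordinary least squares; returns mean squared error on the training data."""
    X1 = np.hstack([X, np.ones((len(X), 1))])  # add intercept column
    w, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return np.mean((X1 @ w - y) ** 2)

total_loss = fit_mse(np.hstack([X_text, X_conf]), y)  # first model: text + confounders
partial_loss = fit_mse(X_conf, y)                     # second model: confounders only
residual_loss = partial_loss - total_loss             # loss explained by the text alone
```

A large residual loss indicates the text contributes predictive signal beyond the confounders, which is what makes it a useful criterion for comparing models and parameters.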
SYSTEMS AND METHODS FOR DATA AGGREGATION AND CYCLICAL EVENT PREDICTION
The present invention relates to an artificial intelligence method and system for event prediction, comprising: receiving user messages, user activity data, event data, user identification information, and transaction data; scraping webpages for additional event data; applying a natural language processing module to process the event data; constructing a training data set using the processed event data; constructing user preferences from the user messages, the user activity data, the user identification information, and the transaction data; training a predictive model using the training data set to determine at least one upcoming event prediction; determining whether to display the at least one event prediction based on the user profile; if it is determined to display one of the at least one event predictions, generating a graphical user interface display with a calendar depicting the at least one event prediction; and presenting the graphical user interface display to the user.
Facilitating alerts for predicted conditions
Operational machine components of an information technology (IT) or other microprocessor- or microcontroller-permeated environment generate disparate forms of machine data. Network connections are established between these components and processors of an automatic data intake and query system (DIQS). The DIQS conducts network transactions on a periodic and/or continuous basis with the machine components to receive the disparate data and ingest certain of the data as measurement entries of a DIQS metrics datastore that is searchable for DIQS query processing. The DIQS may receive search queries to process against the received and ingested data via an exposed network interface. In one example embodiment, a query building component conducts a user interface using a network attached client device. The query building component may elicit search criteria via the user interface using a natural language interface, construct a proper query therefrom, and present new information based on results returned from the DIQS.
Conversation history within conversational machine reading comprehension
Aspects described herein include a method of conversational machine reading comprehension, as well as an associated system and computer program product. The method comprises receiving a plurality of questions relating to a context, and generating a sequence of context graphs. Each of the context graphs includes encoded representations of: (i) the context, (ii) a respective question of the plurality of questions, and (iii) a respective conversation history reflecting: (a) one or more previous questions relative to the respective question, and (b) one or more previous answers to the one or more previous questions. The method further comprises identifying, using at least one graph neural network, one or more temporal dependencies between adjacent context graphs of the sequence. The method further comprises predicting, based at least on the one or more temporal dependencies, an answer for a first question of the plurality of questions.
Dynamic data relationships in whiteboard regions
A whiteboard template can include multiple regions that are associated with different data sources. Each region can be associated with a different data source and can present objects based upon logical representations stored in the associated data source. Logical representations of objects in a region can include links to other objects in other regions associated with other data sources. When an object is moved between regions, transformations can be applied to the logical representation associated with the object. If the object is linked to other objects, the transformation can be propagated to the logical representations of the linked objects. In this manner, a single movement of an object between regions in a template can result in the updating of multiple objects and associated data sources, the updating of the visual properties of objects in multiple regions, and the updating of the visual properties of the regions themselves.
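The move-and-propagate behavior described above can be sketched with plain dictionaries standing in for logical representations. Everything here is illustrative: the object schema, the `move` function, and the "done" transformation rule are assumptions, not the patent's data model.

```python
# Hypothetical logical representations: each object records its region,
# a status attribute, and links to objects in other regions.
objects = {
    "task-1": {"region": "backlog", "status": "todo", "links": ["task-2"]},
    "task-2": {"region": "notes", "status": "todo", "links": []},
}

def move(obj_id, target_region):
    """Move an object to another region, apply the region's transformation to
    its logical representation, and propagate it to linked objects."""
    obj = objects[obj_id]
    obj["region"] = target_region
    if target_region == "done":        # example region transformation
        obj["status"] = "done"
        for linked_id in obj["links"]:  # propagate to linked representations
            objects[linked_id]["status"] = "done"

move("task-1", "done")
```

After the single move, both the moved object and its linked object carry the updated status, mirroring how one gesture can update multiple objects and data sources.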
System and method for language processing using adaptive regularization
A system and method incorporate prior knowledge into the optimization and regularization of a classification and regression model. The optimization may be a regularization process, and the prior knowledge may be incorporated through adjustment of a cost function. A method, implemented by at least one processor that provides classification and regression model functionality, may include receiving training data and adjusting the model according to the training data; testing the classification and regression model; and employing prior knowledge during an optimization of the classification and regression model. The regularizing can include adjusting feature weights according to prior knowledge. In various embodiments, such systems and methods can be used in the processing of language inputs, e.g., speech and/or text inputs, to achieve greater interpretation accuracy.
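One concrete way to adjust a cost function with prior knowledge is a per-feature penalty: features believed to be noise are shrunk harder than trusted ones. The sketch below uses ridge regression with a penalty vector in place of a single scalar; the penalty values, data, and function name are illustrative assumptions, not the patented method.

```python
import numpy as np

def weighted_ridge(X, y, penalties):
    """Ridge regression with a per-feature penalty vector, so prior knowledge
    can shrink some feature weights more strongly than others."""
    A = X.T @ X + np.diag(penalties)
    return np.linalg.solve(A, X.T @ y)

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))
y = X[:, 0] * 2.0 + rng.normal(scale=0.1, size=100)  # only feature 0 matters

# Assumed prior knowledge: features 1-3 are suspected noise, penalize harder.
penalties = np.array([0.1, 100.0, 100.0, 100.0])
w = weighted_ridge(X, y, penalties)
```

The trusted feature retains a weight near its true coefficient while the distrusted features are driven toward zero, which is the effect the adaptive regularization aims for.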
Methods and systems for generating domain-specific text summarizations
Embodiments provide methods and systems for generating a domain-specific text summary. A method performed by a processor includes receiving a request to generate a text summary of textual content from a user device of a user, and applying a pre-trained language generation model over the textual content to encode it into word embedding vectors. The method includes predicting the current word of the text summary by iteratively performing: generating a first probability distribution over a first set of words using a first decoder based on the word embedding vectors, generating a second probability distribution over a second set of words using a second decoder based on the word embedding vectors, and ensembling the first and second probability distributions using a configurable weight parameter to determine the current word. The first probability distribution indicates the selection probability of each word being selected as the current word. The method includes providing a custom reward score as feedback to the second decoder based on a custom reward model, and modifying the second probability distribution of words for the text summary based on the feedback.
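The ensembling step can be sketched as a weighted mixture of the two decoders' word distributions. This is a minimal illustration with a toy vocabulary and made-up probabilities; the greedy argmax selection and the names (`ensemble_step`, `p_first`, `p_second`) are assumptions, not details from the patent.

```python
import numpy as np

def ensemble_step(p_first, p_second, weight):
    """Mix two decoder word distributions with a configurable weight parameter
    and pick the current word greedily."""
    mixed = weight * p_first + (1.0 - weight) * p_second
    mixed = mixed / mixed.sum()  # renormalize for numerical safety
    return int(np.argmax(mixed)), mixed

vocab = ["the", "payment", "failed", "succeeded"]
p_first = np.array([0.1, 0.2, 0.6, 0.1])   # base language-generation decoder
p_second = np.array([0.1, 0.1, 0.1, 0.7])  # reward-tuned domain decoder

idx, mixed = ensemble_step(p_first, p_second, weight=0.3)
```

With `weight=0.3` the domain decoder dominates and the selected word follows its preference; raising the weight toward 1.0 defers to the base decoder instead.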
Conversational relevance modeling using convolutional neural network
Non-limiting examples of the present disclosure describe a convolutional neural network (CNN) architecture configured to evaluate conversational relevance of query-response pairs. A CNN model is provided that can include a first branch, a second branch, and multilayer perceptron (MLP) layers. The first branch includes convolutional layers with dynamic pooling to process a query. The second branch includes convolutional layers with dynamic pooling to process candidate responses for the query. The query and the candidate responses are processed in parallel using the CNN model. The MLP layers are configured to rank query-response pairs based on conversational relevance.
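Dynamic pooling is the piece of this architecture that maps variable-length queries and responses to a fixed-size representation for the MLP layers. Below is a minimal sketch of dynamic max-pooling over a sequence of feature vectors, assuming the convolutional layers have already produced the features; the function name and chunking scheme are illustrative, not the disclosed design.

```python
import numpy as np

def dynamic_max_pool(features, k):
    """Max-pool a variable-length (n, d) feature sequence into exactly k chunks,
    so inputs of any length yield a fixed (k, d) output."""
    n = len(features)
    bounds = np.linspace(0, n, k + 1).astype(int)  # chunk boundaries scale with n
    return np.stack([features[s:e].max(axis=0)
                     for s, e in zip(bounds[:-1], bounds[1:])])

# Two "responses" of different lengths pool to the same fixed shape.
short = np.arange(12, dtype=float).reshape(6, 2)
long_ = np.arange(30, dtype=float).reshape(15, 2)
pooled_short = dynamic_max_pool(short, 3)
pooled_long = dynamic_max_pool(long_, 3)
```

Because both branches emit the same fixed shape regardless of input length, their outputs can be concatenated and fed to the MLP layers that rank query-response pairs.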