G06F18/295

CUSTOMER JOURNEY MANAGEMENT ENGINE

Provided is a process, including: obtaining a first training dataset, training a first machine-learning model on the first training dataset, obtaining a set of candidate question sequences, forming virtual subject-entity records, forming a second training dataset, training a second machine-learning model, and storing the adjusted parameters of the second machine-learning model in memory.
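The two-stage pipeline above can be sketched as follows. This is a toy illustration only: the model class, dataset shapes, and the way virtual subject-entity records are formed are all assumptions, not details from the abstract.

```python
# Hypothetical sketch of the two-stage training pipeline: train a first model,
# use it with candidate question sequences to form virtual records, then train
# a second model on the resulting dataset and store its parameters.

class MeanModel:
    """Toy 'machine-learning model': predicts the mean of its training labels."""
    def __init__(self):
        self.param = None

    def fit(self, labels):
        self.param = sum(labels) / len(labels)

    def predict(self):
        return self.param

# Stage 1: train the first model on the first training dataset.
first_dataset = [1.0, 2.0, 3.0]
first_model = MeanModel()
first_model.fit(first_dataset)

# Stage 2: use candidate question sequences to form virtual subject-entity
# records, turn those into a second training dataset, and train the second model.
candidate_sequences = [["q1", "q2"], ["q2", "q3"]]
virtual_records = [
    {"questions": seq, "score": first_model.predict() + len(seq)}
    for seq in candidate_sequences
]
second_dataset = [rec["score"] for rec in virtual_records]
second_model = MeanModel()
second_model.fit(second_dataset)

# Store the adjusted parameters of the second model in memory.
stored_params = {"second_model_param": second_model.param}
```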

Computerized device for driving assistance

A computerized device for driving assistance comprises a memory (4) designed to receive point cloud data (8), in which a point cloud associates, for a given instant, points each having coordinates in a plane associated with the point cloud and a value denoting a height. The device furthermore comprises a calculator (6) designed to access the memory (4) and, for a given point cloud, to calculate, on the one hand, data on the probability of belonging to a reference surface, associated with each point of the point cloud, and, on the other hand, node data associating a value denoting a height (hi) and two values indicating a slope in the plane associated with the given point cloud. It does so by determining a conditional Gaussian random field from the point cloud data (8) corresponding to the given point cloud; this field is represented by a mesh of nodes in said associated plane, the nodes being defined by the node data. The calculator returns the data on the probability of belonging to a reference surface and/or at least some of the node data and values denoting a height.
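A minimal sketch of the node/probability idea, under assumptions not in the abstract: each ground node carries a height and two slopes defining a local plane, and a point's probability of belonging to the reference surface falls off as a Gaussian of its height above that plane.

```python
import math

# Illustrative only (assumed, not the patented method): one ground node with
# height h and slopes (sx, sy); ground-membership probability decays as a
# Gaussian of the point's height above the node's local ground plane.

nodes = {(0, 0): {"h": 0.0, "sx": 0.1, "sy": 0.0}}   # one ground node

def ground_probability(point, node, sigma=0.2):
    x, y, z = point
    # Local ground height predicted by the node's plane.
    ground_z = node["h"] + node["sx"] * x + node["sy"] * y
    return math.exp(-((z - ground_z) ** 2) / (2 * sigma ** 2))

flat_point = (1.0, 0.0, 0.1)    # lies on the 10% slope -> high probability
high_point = (1.0, 0.0, 1.5)    # well above ground -> near-zero probability
p_flat = ground_probability(flat_point, nodes[(0, 0)])
p_high = ground_probability(high_point, nodes[(0, 0)])
```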

States simulator for reinforcement learning models

A method, apparatus and a product for generating a dataset for a reinforcement learning model. The method comprises obtaining a plurality of different subsets of the set of features; for each subset of features, determining a policy using a Markov Decision Process; obtaining a state comprising a valuation of each feature of the set of features; applying the plurality of policies on the state, thereby obtaining a plurality of suggested actions for the state, based on different projections of the state onto different subsets of features; determining, for the state, one or more actions and corresponding scores thereof based on the plurality of suggested actions; and training a reinforcement learning model using the state and the one or more actions and corresponding scores thereof.
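The projection-and-voting step can be sketched as below. This is a hedged toy: each "policy" is a plain lookup table from a projected state to an action, whereas the real policies would come from solving an MDP per feature subset.

```python
from collections import Counter

# Assumed sketch: project the full state onto each feature subset, apply that
# subset's policy, then score actions by how many policies suggested them.

features = ["speed", "load", "temp"]
subsets = [("speed",), ("load", "temp")]

def project(state, subset):
    return tuple(state[f] for f in subset)

# Toy lookup-table policies, one per feature subset.
policies = {
    ("speed",): {(1,): "accelerate", (0,): "brake"},
    ("load", "temp"): {(0, 1): "cool", (1, 1): "brake"},
}

state = {"speed": 1, "load": 1, "temp": 1}

# Apply every policy to its projection of the state.
suggested = [policies[s][project(state, s)] for s in subsets]

# Score each action by how many subset policies suggested it; the (state,
# scored-actions) pair becomes one training example for the RL model.
scores = Counter(suggested)
training_example = (state, dict(scores))
```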

DIRECTED CONTROL TRANSFER WITH AUTONOMOUS VEHICLES

Techniques for cognitive analysis for directed control transfer with autonomous vehicles are described. In-vehicle sensors are used to collect cognitive state data for an individual within a vehicle which has an autonomous mode of operation. The cognitive state data includes infrared, facial, audio, or biosensor data. One or more processors analyze the cognitive state data collected from the individual to produce cognitive state information. The cognitive state information includes a subset or summary of cognitive state data, or an analysis of the cognitive state data. The individual is scored based on the cognitive state information to produce a cognitive scoring metric. A state of operation is determined for the vehicle. A condition of the individual is evaluated based on the cognitive scoring metric. Control is transferred between the vehicle and the individual based on the state of operation of the vehicle and the condition of the individual.
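A simplified sketch of the scoring and transfer decision, with assumptions throughout: the cognitive scoring metric is taken to be a weighted sum of sensor channels, and the threshold and weights are invented for illustration.

```python
# Hedged sketch: score the individual from cognitive state channels, then
# decide whether control transfers, based on the vehicle's state of operation
# and the individual's condition.

def cognitive_score(cognitive_state):
    # Assumed weights over sensor channels; not from the source.
    weights = {"facial": 0.4, "audio": 0.2, "biosensor": 0.4}
    return sum(weights[k] * cognitive_state.get(k, 0.0) for k in weights)

def transfer_control(vehicle_state, cognitive_state, threshold=0.6):
    score = cognitive_score(cognitive_state)
    if vehicle_state == "autonomous" and score >= threshold:
        return "individual"          # driver is fit: hand control over
    return "vehicle"                 # otherwise the vehicle keeps control

alert = {"facial": 0.9, "audio": 0.8, "biosensor": 0.7}
drowsy = {"facial": 0.2, "audio": 0.3, "biosensor": 0.1}
alert_result = transfer_control("autonomous", alert)
drowsy_result = transfer_control("autonomous", drowsy)
```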

Adaptive, self-tuning virtual sensing system for cyber-attack neutralization

An industrial asset may have a plurality of monitoring nodes, each monitoring node generating a series of monitoring node values over time representing current operation of the industrial asset. An abnormality detection computer may determine that an abnormal monitoring node is currently being attacked or experiencing a fault. An autonomous, resilient estimator may continuously execute an adaptive learning process to create or update virtual sensor models for that monitoring node. Responsive to an indication that a monitoring node is currently being attacked or experiencing a fault, a level of neutralization may be automatically determined. The autonomous, resilient estimator may then be dynamically reconfigured to estimate a series of virtual node values based on information from normal monitoring nodes, appropriate virtual sensor models, and the determined level of neutralization. The series of monitoring node values from the abnormal monitoring node or nodes may then be replaced with the virtual node values.
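The replacement step can be sketched as follows, under stated assumptions: the "virtual sensor model" here is just an average over the healthy nodes, and the level of neutralization linearly blends the raw reading with the virtual estimate.

```python
# Assumed sketch: estimate an attacked node's value from the normal monitoring
# nodes, then blend the raw (possibly spoofed) reading with that estimate
# according to the determined level of neutralization.

def virtual_estimate(normal_values):
    # Toy "virtual sensor model": average of the healthy monitoring nodes.
    return sum(normal_values) / len(normal_values)

def neutralize(raw_value, normal_values, level):
    """level = 0.0 keeps the raw reading; level = 1.0 fully replaces it."""
    est = virtual_estimate(normal_values)
    return (1.0 - level) * raw_value + level * est

normal_nodes = [10.0, 11.0, 9.0]   # healthy sensor readings
attacked_reading = 50.0            # anomalous value on the attacked node
replaced = neutralize(attacked_reading, normal_nodes, level=1.0)
blended = neutralize(attacked_reading, normal_nodes, level=0.5)
```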

INFORMATION PROCESSING METHOD AND RELATED DEVICE
20230087821 · 2023-03-23

This application discloses an information processing method and a related device. This application provides a first AI entity in an access network, and defines a plurality of basic interaction modes between the first AI entity and a terminal device. In an interaction mode, the first AI entity may receive second AI model information sent by the terminal device. The second AI model information does not include user data of the terminal device. The first AI entity may update first AI model information of the first AI entity based on the second AI model information, and then send updated first AI model information to the terminal device, so that the terminal device trains and updates the second AI model information.
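The exchange resembles federated averaging, which can be sketched as below. This is an analogy, not the specified protocol: the weight vectors, mixing factor, and update rule are all assumptions.

```python
# Hedged illustration: the terminal sends model weights (never raw user data),
# and the network-side AI entity folds them into its own weights before
# sending the updated weights back to the terminal.

def update_first_model(first_weights, second_weights, mix=0.5):
    # Blend network-side and terminal-side weights; no user data involved.
    return [(1 - mix) * a + mix * b
            for a, b in zip(first_weights, second_weights)]

first_ai_weights = [0.0, 2.0]      # first AI entity (access-network side)
terminal_weights = [1.0, 0.0]      # second AI model information from terminal
updated = update_first_model(first_ai_weights, terminal_weights)
# `updated` is what gets sent back to the terminal device for further training.
```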

SYSTEM AND METHOD OF MONITORING MENTAL HEALTH CONDITIONS
20230091240 · 2023-03-23

Systems and methods for assessing the mental health of a user, including: recording digital activity data of the user on a mobile device, wherein the digital activity data does not include personal content of the user; applying the digital activity data to a plurality of machine learning models, wherein each of the machine learning models represents a digital phenotype for a different mental health condition; determining a similarity score for each mental health condition corresponding to each of the plurality of machine learning models by comparing an output of the respective machine learning model to the digital phenotype for the corresponding mental health condition; outputting, at the user's display, at least one mental health similarity score; and predicting, by a processor, at least one mental health condition of the user based on the calculated similarity scores.
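The per-condition scoring step can be sketched as below. This is an assumed realization: each condition's "model output" is a feature vector, and cosine similarity against the condition's phenotype vector stands in for whatever comparison the system actually uses. Condition names and vectors are invented.

```python
import math

# Assumed sketch: compare each model's output vector to that condition's
# digital phenotype by cosine similarity, then predict the best-scoring one.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Per-condition phenotypes and toy model outputs for the same activity data.
phenotypes = {"condition_a": [1.0, 0.0], "condition_b": [0.0, 1.0]}
model_outputs = {"condition_a": [0.9, 0.1], "condition_b": [0.8, 0.2]}

scores = {c: cosine(model_outputs[c], phenotypes[c]) for c in phenotypes}
predicted = max(scores, key=scores.get)
```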

Apparatus and methods for improved subsurface data processing systems

A method and apparatus for subsurface data processing includes determining a set of clusters based at least in part on measurement vectors associated with different depths or times in subsurface data, defining clusters in the subsurface data by classes associated with a state model, reducing a quantity of the subsurface data based at least in part on the classes, and storing the reduced quantity of the subsurface data and classes with the state model in a training database for a machine learning process.
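The reduction step can be sketched as a nearest-center classification. This is a hedged toy: the class names, cluster centers, and the choice of squared-distance assignment are assumptions, not the patented state model.

```python
# Assumed sketch: assign each measurement vector to the nearest cluster
# center (class), then store only the class labels plus one center per class
# instead of every raw measurement vector.

def nearest_class(vector, centers):
    return min(centers,
               key=lambda c: sum((a - b) ** 2
                                 for a, b in zip(vector, centers[c])))

centers = {"shale": [0.0, 0.0], "sand": [10.0, 10.0]}
measurements = [[0.1, 0.2], [9.8, 10.1], [0.3, 0.1], [10.2, 9.9]]

labels = [nearest_class(m, centers) for m in measurements]

# Reduced representation destined for the training database.
reduced = {"labels": labels, "centers": centers}
```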

Apparatuses, methods, and systems for 3-channel dynamic contextual script recognition using neural network image analytics and 4-tuple machine learning with enhanced templates and context data

In some embodiments, a method includes training a first machine learning model based on multiple documents and multiple templates associated with the multiple documents. The method further includes executing the first machine learning model to generate multiple relevancy masks, the multiple relevancy masks to remove a visual structure of the multiple templates from a visual structure of the multiple documents. The method further includes generating multiple multichannel field images to include the multiple relevancy masks and at least one of the multiple documents or the multiple templates. The method further includes training a second machine learning model based on the multiple multichannel field images and multiple non-native texts associated with the multiple documents. The method further includes executing the second machine learning model to generate multiple non-native texts from the multiple multichannel field images.
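The relevancy-mask idea can be sketched on tiny binary "images". This is illustrative only: real documents and templates are raster images processed by trained models, not 2x2 bit grids, and the mask rule here is an assumption.

```python
# Hedged sketch: the relevancy mask removes the template's visual structure
# from the document, keeping only the filled-in (non-template) content; the
# mask is then stacked with the document as a multichannel field image.

def relevancy_mask(document, template):
    # Keep document pixels that are NOT part of the template structure.
    return [[d & (1 - t) for d, t in zip(drow, trow)]
            for drow, trow in zip(document, template)]

document = [[1, 1], [0, 1]]   # template structure plus filled-in field
template = [[1, 0], [0, 0]]   # the template's own visual structure

mask = relevancy_mask(document, template)
# Multichannel field image: (mask, document) stacked as channels.
multichannel = [mask, document]
```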

REINFORCEMENT LEARNING SIMULATION OF SUPPLY CHAIN GRAPH

A computing system including a processor configured to receive training data including, for each of a plurality of training timesteps, training forecast states associated with respective training-phase agents included in a training supply chain graph. The processor may train a reinforcement learning simulation of the training supply chain graph using the training data via policy gradient reinforcement learning. At each training timestep, the training forecast states may be shared between simulations of the training-phase agents during training. The processor may receive runtime forecast states associated with respective runtime agents included in a runtime supply chain graph. For a runtime agent, at the trained reinforcement learning simulation, the processor may generate a respective runtime action output associated with a corresponding runtime forecast state of the runtime agent based at least in part on the runtime forecast states. The processor may output the runtime action output.
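A crude sketch of training with shared forecast states is below. This is a stand-in, not the patented method: the agent's policy is a linear function of all agents' forecasts, and the update is plain gradient descent on a squared error rather than a true policy-gradient estimator; the reward target is invented.

```python
# Hedged sketch: an agent's order quantity is a linear policy over ALL agents'
# shared forecast states, nudged by a gradient-style update toward matching
# total forecast demand.

def action(weights, forecasts):
    return sum(w * f for w, f in zip(weights, forecasts))

def train_step(weights, forecasts, lr=0.01):
    # Reward is higher when the order matches the shared total demand;
    # here we descend the squared error directly.
    target = sum(forecasts)
    error = target - action(weights, forecasts)
    return [w + lr * 2 * error * f for w, f in zip(weights, forecasts)]

forecasts = [1.0, 2.0]          # forecast states from two agents, shared
weights = [0.0, 0.0]
for _ in range(200):
    weights = train_step(weights, forecasts)

# At runtime, the trained policy maps the shared forecasts to an action output.
runtime_action = action(weights, forecasts)
```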