Data model generation using generative adversarial networks

Methods for generating data models using a generative adversarial network can begin by receiving a data model generation request by a model optimizer from an interface. The model optimizer can provision computing resources with a data model. As a further step, a synthetic dataset for training the data model can be generated using a generative network of a generative adversarial network, the generative network trained to generate output data differing by at least a predetermined amount from a reference dataset according to a similarity metric. The computing resources can train the data model using the synthetic dataset. The model optimizer can evaluate performance criteria of the data model and, based on the evaluation of the performance criteria of the data model, store the data model and metadata of the data model in a model storage. The data model can then be used to process production data.
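The dissimilarity constraint above can be sketched in outline. The sampler below is a hypothetical stand-in for a trained generative network, and `similarity`, `generate_synthetic`, and `min_distance` are illustrative names, not the patent's actual metric or API:

```python
import random

def similarity(a, b):
    # Mean absolute difference between two equal-length vectors;
    # smaller values mean the vectors are more alike.
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def generate_synthetic(reference, min_distance, n_samples, seed=0):
    # Stand-in for a trained generative network: draw noisy candidates
    # and keep only those differing from the reference dataset by at
    # least min_distance under the similarity metric.
    rng = random.Random(seed)
    samples = []
    while len(samples) < n_samples:
        candidate = [r + rng.gauss(0, 1.0) for r in reference]
        if similarity(candidate, reference) >= min_distance:
            samples.append(candidate)
    return samples

reference = [0.0, 1.0, 2.0, 3.0]
synthetic = generate_synthetic(reference, min_distance=0.5, n_samples=10)
```

A real generative network would learn the reference distribution; the rejection test here only illustrates enforcing the "differs by at least a predetermined amount" property.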

Machine learning for computing enabled systems and/or devices
11699295 · 2023-07-11

Aspects of the disclosure generally relate to computing enabled systems and/or devices and may be generally directed to machine learning for computing enabled systems and/or devices. In some aspects, the system captures one or more digital pictures, receives one or more instruction sets, and learns correlations between the captured pictures and the received instruction sets.
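The picture-to-instruction correlation learning described above can be sketched as a minimal nearest-neighbour memory; `CorrelationLearner`, `observe`, and `recall` are hypothetical names, and picture feature vectors stand in for the captured digital pictures:

```python
class CorrelationLearner:
    # Minimal sketch: correlate captured pictures (as feature vectors)
    # with received instruction sets, then recall by similarity.
    def __init__(self):
        self.memory = []  # (picture_features, instruction_set) pairs

    def observe(self, picture_features, instruction_set):
        # Store one correlated (picture, instructions) observation.
        self.memory.append((list(picture_features), instruction_set))

    def recall(self, picture_features):
        # Return the instruction set paired with the stored picture
        # closest (squared Euclidean distance) to the new picture.
        def dist(pair):
            return sum((a - b) ** 2 for a, b in zip(pair[0], picture_features))
        return min(self.memory, key=dist)[1]

learner = CorrelationLearner()
learner.observe([1.0, 0.0], ["turn_left"])
learner.observe([0.0, 1.0], ["turn_right"])
```

A production system would learn a parametric mapping rather than memorize pairs, but the observe/recall split mirrors the capture-and-learn loop the abstract describes.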
Emoji Understanding in Online Experiences

Understanding emojis in the context of online experiences is described. In at least some embodiments, text input is received and a vector representation of the text input is computed. Based on the vector representation, one or more emojis that correspond to the vector representation of the text input are ascertained and a response is formulated that includes at least one of the one or more emojis. In other embodiments, input from a client machine is received. The input includes at least one emoji. A computed vector representation of the emoji is used to look for vector representations of words or phrases that are close to the computed vector representation of the emoji. At least one of the words or phrases is selected and at least one task is performed using the selected word(s) or phrase(s).
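The emoji-to-word lookup in the second set of embodiments can be sketched with toy vectors; `WORD_VECS`, `EMOJI_VECS`, and `nearest_words` are illustrative names, and a deployed system would use learned embeddings placing emojis and words in a shared space:

```python
import math

# Toy two-dimensional vectors standing in for learned embeddings.
WORD_VECS = {"happy": [0.9, 0.1], "sad": [-0.8, 0.2]}
EMOJI_VECS = {"😀": [0.85, 0.15], "😢": [-0.75, 0.25]}

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def nearest_words(emoji, k=1):
    # Rank word vectors by closeness to the emoji's vector and return
    # the top k, which can then be used to perform a task.
    vec = EMOJI_VECS[emoji]
    ranked = sorted(WORD_VECS, key=lambda w: -cosine(vec, WORD_VECS[w]))
    return ranked[:k]
```

The reverse direction (text in, emoji out) is the same lookup with the roles of the two tables swapped.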

Determining an item that has confirmed characteristics

In various example embodiments, a system and method for determining an item that has confirmed characteristics are described herein. An image that depicts an object is received from a client device. Structured data that corresponds to characteristics of one or more items is retrieved. A set of characteristics is determined, the set of characteristics being predicted to match with the object. An interface that includes a request for confirmation of the set of characteristics is generated. The interface is displayed on the client device. Confirmation that at least one characteristic from the set of characteristics matches with the object depicted in the image is received from the client device.
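The predict-then-confirm flow above can be sketched as follows; `CATALOG`, `predict_characteristics`, and `apply_confirmation` are hypothetical names, and the dictionaries stand in for the structured data and the confirmation interface:

```python
# Hypothetical structured data: item ids mapped to characteristics.
CATALOG = {
    "mug-01": {"color": "blue", "material": "ceramic"},
    "mug-02": {"color": "red", "material": "glass"},
}

def predict_characteristics(detected):
    # Pick the catalog item sharing the most characteristics with the
    # object detected in the image, then propose its characteristics
    # back to the client for confirmation.
    def score(item):
        return sum(1 for k, v in CATALOG[item].items() if detected.get(k) == v)
    best = max(CATALOG, key=score)
    return {"item": best, "confirm": dict(CATALOG[best])}

def apply_confirmation(request, confirmed_keys):
    # Keep only the characteristics the client device confirmed.
    return {k: v for k, v in request["confirm"].items() if k in confirmed_keys}

request = predict_characteristics({"color": "blue", "material": "ceramic"})
confirmed = apply_confirmation(request, {"color"})
```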

Method and system for integrated global and distributed learning in autonomous driving vehicles

The present teaching relates to a system, method, and medium for in-situ perception in an autonomous driving vehicle. A plurality of types of sensor data are received, acquired by a plurality of types of sensors deployed on the vehicle to provide information about the surroundings of the vehicle. Based on at least one model, one or more surrounding items are tracked from a first of the plurality of types of sensor data acquired by a first type of sensor. At least some of the tracked items are automatically labeled via cross validation and are used to locally adapt, on-the-fly, the at least one model. Model update information, derived based on the labeled items, is received from a model update center. The at least one model is updated using the model update information.
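The cross-validation labeling step can be sketched with two sensor streams; `auto_label`, the box format, and the camera/lidar pairing are illustrative assumptions, not the patent's actual sensors or thresholds:

```python
def iou(a, b):
    # Intersection-over-union of axis-aligned boxes (x1, y1, x2, y2).
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def auto_label(camera_tracks, lidar_tracks, threshold=0.5):
    # Cross-validate: a camera detection confirmed by an overlapping
    # detection from a second sensor type becomes an automatically
    # labeled item usable for on-the-fly local model adaptation.
    labeled = []
    for cam in camera_tracks:
        if any(iou(cam["box"], lid) >= threshold for lid in lidar_tracks):
            labeled.append(cam)
    return labeled

camera = [{"box": (0, 0, 2, 2), "label": "car"},
          {"box": (8, 8, 9, 9), "label": "car"}]
lidar = [(0.1, 0.1, 2.1, 2.1)]
confirmed = auto_label(camera, lidar)
```

Only the first camera detection is confirmed by the lidar box; the unconfirmed one is excluded from the auto-labeled training set.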

Wearable respiratory monitoring system based on resonant microphone array

A method for continuous acoustic signature recognition and classification includes a step of obtaining an audio input signal from a resonant microphone array positioned proximate to a target, the audio input signal having a plurality of channels. The target produces characterizing audio signals depending on a state or condition of the target. A plurality of features is extracted from the audio input signal with a signal processor. The plurality of features is classified to determine the state of the target. An acoustic monitoring system implementing the method is also provided.
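The extract-then-classify pipeline above can be sketched with a simple per-channel feature and a nearest-centroid classifier; `channel_energies`, `classify`, and the class names are illustrative assumptions, not the patent's actual features or states:

```python
def channel_energies(channels):
    # channels: per-channel sample lists from the resonant array; each
    # resonant channel emphasizes a different frequency band, so the
    # per-channel mean energy acts as a crude spectral feature vector.
    return [sum(s * s for s in ch) / len(ch) for ch in channels]

def classify(features, centroids):
    # Nearest-centroid classifier over the extracted feature vector,
    # mapping features to a state or condition of the target.
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(centroids[label], features))
    return min(centroids, key=dist)

centroids = {"breathing": [1.0, 0.1], "coughing": [0.2, 2.0]}
features = channel_energies([[1.0, -1.0], [0.3, -0.3]])
state = classify(features, centroids)
```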

Identifying image aesthetics using region composition graphs

The disclosed computer-implemented method may include generating a three-dimensional (3D) feature map for a digital image using a fully convolutional network (FCN). The 3D feature map may be configured to identify features of the digital image and identify an image region for each identified feature. The method may also include generating a region composition graph that includes the identified features and image regions. The region composition graph may be configured to model mutual dependencies between features of the 3D feature map. The method may further include performing a graph convolution on the region composition graph to determine a feature aesthetic value for each node according to the weightings in the node's weighted connecting segments, and calculating a weighted average for each node's feature aesthetic value to provide a combined level of aesthetic appeal for the digital image. Various other methods, systems, and computer-readable media are also disclosed.
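The graph-convolution and weighted-average steps can be sketched on a tiny region graph; `graph_convolution`, `image_aesthetic`, and the edge-weight format are illustrative names, and a real system would run this on FCN-derived region features:

```python
def graph_convolution(node_features, edges):
    # edges: {(i, j): weight}. One propagation step mixes each node's
    # feature with its weighted neighbors' features, so a node's
    # aesthetic value reflects the regions it depends on.
    out = []
    for i, f in enumerate(node_features):
        total, weight_sum = f, 1.0
        for (a, b), w in edges.items():
            if a == i:
                total += w * node_features[b]
                weight_sum += w
        out.append(total / weight_sum)
    return out

def image_aesthetic(node_scores, node_weights):
    # Weighted average of per-node aesthetic values gives the image's
    # combined level of aesthetic appeal.
    return sum(s * w for s, w in zip(node_scores, node_weights)) / sum(node_weights)

features = [1.0, 0.0, 0.5]
edges = {(0, 1): 0.5, (1, 0): 0.5, (2, 0): 1.0}
scores = graph_convolution(features, edges)
overall = image_aesthetic(scores, [1.0, 1.0, 2.0])
```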