G06N3/082

Automated personalized classification of journey data captured by one or more movement-sensing devices

A technique is described herein for automatically logging journeys taken by a user, and then automatically classifying the purposes of the journeys. In one implementation, the technique obtains journey data from one or more movement-sensing devices as a user travels from a starting location to an ending location in a vehicle. The technique generates a set of features based on the journey data, and then uses a machine-trainable model (such as a neural network) to make its classification based on the features. The machine-trainable model accepts at least one feature that is based on statistical information regarding at least one aspect of prior journeys that the user has taken. Overall, the technique provides a resource-efficient solution that rapidly delivers personalized results to individual users. In some implementations, the technique performs its personalization without sharing journey data with a remote server.
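The abstract's central idea is a feature vector that mixes properties of the current journey with statistics computed over the user's prior journeys. The sketch below illustrates one plausible such feature builder; the feature names, the journey dictionary layout, and the choice of per-purpose mean distance as the statistic are all illustrative assumptions, not details from the patent.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical feature builder: combines features of the current journey
# with statistical features over the user's prior journeys. Computed
# locally, so no journey data needs to be shared with a remote server.
def build_features(journey, prior_journeys):
    # Raw features from the current journey.
    features = {
        "distance_km": journey["distance_km"],
        "start_hour": journey["start_hour"],
    }
    # Statistical features: mean distance of prior journeys, per purpose.
    by_purpose = defaultdict(list)
    for j in prior_journeys:
        by_purpose[j["purpose"]].append(j["distance_km"])
    for purpose, dists in by_purpose.items():
        features[f"mean_distance_{purpose}"] = mean(dists)
    return features

prior = [
    {"purpose": "commute", "distance_km": 12.0},
    {"purpose": "commute", "distance_km": 14.0},
    {"purpose": "errand", "distance_km": 3.0},
]
feats = build_features({"distance_km": 13.0, "start_hour": 8}, prior)
print(feats["mean_distance_commute"])  # → 13.0
```

The resulting dictionary would then be vectorized and passed to the machine-trainable model; because the history statistics differ per user, the same model produces personalized classifications.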

Complex-valued neural network with learnable non-linearities in medical imaging

For machine training and application of a trained complex-valued machine learning model, an activation function of the machine learning model, such as a neural network, includes a learnable parameter that is complex or defined in a complex domain with two dimensions, such as real and imaginary or magnitude and phase dimensions. The complex learnable parameter is trained for any of various applications, such as MR fingerprinting, other medical imaging, or non-medical uses.
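As a concrete illustration of an activation function with a learnable complex parameter, the sketch below uses a complex-valued leaky slope `alpha`; this particular functional form is an assumption for illustration (the patent covers learnable parameters expressed in real/imaginary or magnitude/phase dimensions generally).

```python
import numpy as np

# Sketch of an activation with a learnable complex parameter: a
# "complex leaky" unit whose slope alpha is itself complex-valued and
# would be updated by gradient descent during training.
def complex_leaky(z, alpha):
    # Pass z through where its real part is positive; otherwise scale
    # by the learnable complex parameter alpha.
    return np.where(z.real > 0, z, alpha * z)

z = np.array([1 + 1j, -1 + 0.5j])
alpha = 0.1 + 0.2j   # learnable parameter, shown here at a fixed value
out = complex_leaky(z, alpha)
print(out)
```

Because `alpha` lives in the complex domain, training can adjust both the magnitude and the phase of the scaling applied to negative-real inputs.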

Learning apparatus, generation apparatus, classification apparatus, learning method, and non-transitory computer readable storage medium
11580362 · 2023-02-14

According to one aspect of an embodiment, a learning apparatus includes a first acquiring unit that acquires first output information that is output by an output layer when predetermined input information is input to a model that includes an input layer, a plurality of intermediate layers, and the output layer. The learning apparatus includes a second acquiring unit that acquires intermediate output information that is based on pieces of intermediate information output by the plurality of intermediate layers when the input information is input to the model. The learning apparatus includes a learning unit that learns the model based on the first output information and the intermediate output information.
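Training on both the output layer's result and information from the intermediate layers amounts to a combined objective. The sketch below shows one plausible form; the loss terms, the intermediate-information summary (mean squared activation), and the weighting are illustrative assumptions rather than details from the patent.

```python
import numpy as np

# Sketch of a combined objective: the model is learned from the first
# output information (output-layer result vs. target) plus intermediate
# output information derived from the intermediate layers.
def combined_loss(first_output, target, intermediate_outputs, weight=0.1):
    # Loss on the output layer's prediction.
    out_loss = np.mean((first_output - target) ** 2)
    # Intermediate output information: here, the mean squared activation
    # of each intermediate layer, used as a regularizing term.
    mid_loss = sum(np.mean(h ** 2) for h in intermediate_outputs)
    return out_loss + weight * mid_loss

loss = combined_loss(np.array([1.0, 0.0]), np.array([1.0, 1.0]),
                     [np.array([0.5, -0.5]), np.array([2.0, 0.0])])
print(loss)
```

In a real training loop the two acquiring units would supply `first_output` and `intermediate_outputs` on each batch, and the learning unit would backpropagate through this combined loss.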

Diagnostic systems and methods for deep learning models configured for semiconductor applications

Methods and systems for performing diagnostic functions for a deep learning model are provided. One system includes one or more components executed by one or more computer subsystems. The one or more components include a deep learning model configured for determining information from an image generated for a specimen by an imaging tool. The one or more components also include a diagnostic component configured for determining one or more causal portions of the image that resulted in the information being determined and for performing one or more functions based on the determined one or more causal portions of the image.
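One common way a diagnostic component can determine the causal portions of an image is occlusion analysis: zero out each patch and measure how much the model's output changes. The sketch below uses that approach with a stand-in scoring function; the patch size and the toy model are assumptions for illustration.

```python
import numpy as np

# Occlusion sketch of a diagnostic component: find which image patches
# are causal for the model's output by zeroing each patch and measuring
# the resulting drop in the model's score.
def causal_patches(image, model, patch=2):
    base = model(image)
    h, w = image.shape
    drops = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i+patch, j:j+patch] = 0.0
            drops[i // patch, j // patch] = base - model(occluded)
    return drops

# Stand-in "model": scores the mean of the top-left quadrant, so only
# that patch should register as causal.
model = lambda img: float(img[:2, :2].mean())
img = np.ones((4, 4))
drops = causal_patches(img, model)
print(drops)
```

Patches whose occlusion produces a large score drop are the causal portions; downstream functions (e.g., flagging suspect regions of a semiconductor specimen image) can then operate on those patches.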

Distance metrics and clustering in recurrent neural networks

A method includes determining whether topological patterns of activity in a collection of topological patterns occur in a recurrent artificial neural network in response to input of first data into the recurrent artificial neural network, and determining a distance between the first data and either second data or a reference based on the topological patterns of activity that are determined to occur in response to that input.
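If each input datum is mapped to the set of topological activity patterns it evokes in the recurrent network, a distance between two data items can be taken over those pattern sets. The sketch below uses Jaccard distance on pattern identifiers; both the set representation and the Jaccard choice are illustrative assumptions, not the patent's specific metric.

```python
# Sketch: distance between two data items based on which topological
# activity patterns occur in the recurrent network for each input.
# Jaccard distance over the evoked pattern sets is one simple choice.
def pattern_distance(patterns_a, patterns_b):
    a, b = set(patterns_a), set(patterns_b)
    if not a and not b:
        return 0.0  # neither input evokes any pattern
    return 1.0 - len(a & b) / len(a | b)

d = pattern_distance({"p1", "p2", "p3"}, {"p2", "p3", "p4"})
print(d)  # → 0.5
```

With such a distance in hand, standard clustering algorithms can group inputs by the similarity of the network activity they evoke rather than by raw input similarity.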

Encoding and decoding image data

Certain aspects of the present disclosure provide techniques for encoding image data for one or more images. In one embodiment, a method includes the steps of downscaling the one or more images, and encoding the one or more downscaled images using an image codec. Another embodiment concerns a computer-implemented method of decoding encoded image data, and a computer-implemented method of encoding and decoding image data.
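The encoding path described is: downscale the image, then hand the smaller image to an image codec. The sketch below shows the downscaling step as 2x average pooling; the pooling method is an illustrative assumption, and the codec call itself is left as a placeholder.

```python
import numpy as np

# Sketch of the encoding path's first step: downscale an image by 2x
# average pooling before passing it to an image codec (the codec is a
# placeholder here; any standard codec could follow).
def downscale2x(img):
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

img = np.arange(16, dtype=float).reshape(4, 4)
small = downscale2x(img)
print(small.shape)  # → (2, 2)
```

The decoding path would invert this: decode with the codec, then upscale (typically with a learned or interpolating upsampler) to recover the original resolution.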

Tensor dropout using a mask having a different ordering than the tensor

A method for selectively dropping out feature elements from a tensor in a neural network is disclosed. The method includes receiving a first tensor from a first layer of a neural network. The first tensor includes multiple feature elements arranged in a first order. A compressed mask for the first tensor is obtained. The compressed mask includes single-bit mask elements respectively corresponding to the multiple feature elements of the first tensor, and has a second order that is different from the first order of the corresponding feature elements in the first tensor. Feature elements from the first tensor are selectively dropped out based on the compressed mask to form a second tensor, which is propagated to a second layer of the neural network.
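The key mechanism is that the single-bit mask is stored in a different element ordering than the tensor, so it must be reordered before being applied. The sketch below uses a transposed (column-major) mask ordering as the "second order"; that particular ordering is an illustrative assumption.

```python
import numpy as np

# Sketch: a single-bit mask stored in a different ordering than the
# tensor (here, transposed/column-major) is expanded, reordered to
# match the tensor's own element order, and used to drop elements.
def apply_compressed_mask(tensor, mask_bits, mask_shape):
    # Unpack single-bit mask elements laid out in the mask's ordering.
    mask_t = np.array(mask_bits, dtype=bool).reshape(mask_shape)
    mask = mask_t.T  # reorder to the tensor's element order
    return np.where(mask, tensor, 0.0)

t = np.array([[1.0, 2.0], [3.0, 4.0]])
bits = [1, 0, 1, 1]          # mask bits in transposed order
out = apply_compressed_mask(t, bits, (2, 2))
print(out)
```

The second tensor `out` would then be propagated to the next layer; storing the mask one bit per element, in whatever order is cheapest to generate, keeps its memory footprint small.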

Computing apparatus using convolutional neural network and method of operating the same

An apparatus and a method use a convolutional neural network (CNN) including a plurality of convolution layers in the field of artificial intelligence (AI) systems and applications thereof. A computing apparatus using a CNN including a plurality of convolution layers includes a memory storing one or more instructions; and one or more processors configured to execute the one or more instructions stored in the memory to obtain input data; identify a filter for performing a convolution operation with respect to the input data, on one of the plurality of convolution layers; identify a plurality of sub-filters corresponding to different filtering regions within the filter; provide a plurality of feature maps based on the plurality of sub-filters; and obtain output data, based on the plurality of feature maps.
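The sketch below illustrates the sub-filter idea: one filter is split into sub-filters over different regions (here the top row and the bottom two rows of a 3x3 filter, an illustrative split), each sub-filter produces its own feature map, and the maps are combined into the output. The alignment offsets make the combined result equal the full-filter convolution.

```python
import numpy as np

# Minimal valid-mode 2-D convolution (really cross-correlation, which
# is what CNN layers compute) used to apply each sub-filter.
def conv2d_valid(img, k):
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * k)
    return out

filt = np.ones((3, 3))
sub_top, sub_bot = filt[:1, :], filt[1:, :]   # two region sub-filters
img = np.ones((5, 5))
# One feature map per sub-filter; rows are offset so that summing the
# maps reproduces the full 3x3 convolution.
fm_top = conv2d_valid(img, sub_top)[:3, :]
fm_bot = conv2d_valid(img, sub_bot)[1:, :]
out = fm_top + fm_bot
print(out.shape)  # → (3, 3)
```

Computing per-region feature maps this way lets intermediate maps be reused or computed on smaller hardware tiles while still yielding the same output as the undivided filter.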