Patent classifications
G06N3/008
Robot and controlling method thereof
Disclosed herein is a robot including an output interface having at least one of a display or a speaker, a camera, and a processor configured to control the output interface to output content, acquire an image including a user through the camera while the content is output, detect an over-immersion state of the user based on the acquired image, and perform an operation of releasing the over-immersion when the over-immersion state is detected.
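The control flow claimed above can be sketched as follows. This is a minimal illustration, not the patented method: the gaze-ratio feature, the 0.9 threshold, and the release actions are all assumptions for the example.

```python
# Hypothetical sketch: detect an over-immersion state from per-frame
# user observations and trigger a release operation via the output
# interface. Thresholds and feature names are assumptions.

from dataclasses import dataclass

@dataclass
class Frame:
    gaze_on_content: bool   # did the user look at the content this frame?

def detect_over_immersion(frames, threshold=0.9):
    """Flag over-immersion when the user's gaze stays on the content
    for more than `threshold` of the observed frames."""
    if not frames:
        return False
    ratio = sum(f.gaze_on_content for f in frames) / len(frames)
    return ratio > threshold

def release_over_immersion(output_actions):
    # e.g., pause the content and emit an audio prompt via the speaker
    output_actions.append("pause_content")
    output_actions.append("speak: time for a break")

frames = [Frame(True)] * 19 + [Frame(False)]
actions = []
if detect_over_immersion(frames):
    release_over_immersion(actions)
```

A real robot would derive the per-frame gaze signal from the camera image via face and gaze estimation; here it is given directly to keep the sketch self-contained.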
AI-based laundry treatment apparatus and operation method thereof
Disclosed is an artificial intelligence (AI)-based laundry treatment apparatus comprising: a washing module; a camera to acquire an image of laundry; a memory storing a laundry recognition model that is trained using a machine learning or deep learning algorithm and configured to recognize laundry information about the laundry; and a processor configured to apply image data from the acquired image to the laundry recognition model to acquire the laundry information.
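A minimal sketch of the claimed pipeline, image data to laundry recognition model to laundry information to wash settings, might look like this. The "model" is a stand-in lookup on an assumed brightness feature; a real apparatus would load a trained network.

```python
# Sketch: image features -> laundry recognition "model" -> laundry
# information -> wash course selection. Feature names, fabric classes,
# and course settings are assumptions for illustration.

def laundry_recognition_model(image_features):
    """Stand-in for a trained classifier mapping simple image
    features to laundry information (fabric type)."""
    if image_features["mean_brightness"] > 0.7:
        return {"fabric": "whites"}
    return {"fabric": "colors"}

def select_course(laundry_info):
    courses = {"whites": ("hot", 60), "colors": ("cold", 30)}
    temp, minutes = courses[laundry_info["fabric"]]
    return {"temperature": temp, "duration_min": minutes}

info = laundry_recognition_model({"mean_brightness": 0.8})
course = select_course(info)
```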
Generation of training data to train a classifier to identify distinct physical user devices in a cross-device context
Techniques are disclosed for accurately identifying distinct physical user devices in a cross-device context. An example embodiment applies a multi-phase approach to generate labeled training datasets from a corpus of unlabeled device records. Such labeled training datasets can be used for training machine learning systems to predict the occurrence of device records that have been wrongly (or correctly, as the case may be) attributed to different physical user devices. Such identification of improper attribution can be particularly helpful in web-based analytics. The labeled training datasets include labeled pairs of device records generated using multiple strategies for inferring whether the two device records of a pair of device records represent the same physical user device (or different physical user devices). The labeled pairs of device records can then be used to train classifiers to predict with confidence whether two device records represent or do not represent the same physical user device.
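The pair-labeling idea can be sketched with two simple inference strategies. This is a hedged illustration: treating a shared stable login as "same device" and differing operating systems as "different devices" are simplifying assumptions, and ambiguous pairs are left unlabeled for later phases.

```python
# Sketch of multi-phase label generation: infer pair labels from
# unlabeled device records using simple strategies, leaving
# ambiguous pairs unlabeled. Record fields are assumptions.

from itertools import combinations

records = [
    {"id": "r1", "login": "alice", "os": "ios"},
    {"id": "r2", "login": "alice", "os": "ios"},
    {"id": "r3", "login": None, "os": "android"},
]

def label_pair(a, b):
    """Return 1 (same physical device), 0 (different), or None (unlabeled)."""
    if a["login"] and a["login"] == b["login"] and a["os"] == b["os"]:
        return 1                      # strategy 1: stable identifier match
    if a["os"] != b["os"]:
        return 0                      # strategy 2: incompatible platforms
    return None                       # ambiguous -> deferred to other phases

pairs = [((a["id"], b["id"]), label_pair(a, b))
         for a, b in combinations(records, 2)]
labeled = [p for p in pairs if p[1] is not None]
```

The resulting `labeled` pairs would then feed a classifier that predicts whether two device records represent the same physical device.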
Neural network configuration for wireless communication system assistance
Methods, systems, and devices for wireless communications are described. Generally, the described techniques provide for communicating capability information (e.g., regarding neural network blocks supported by a user equipment (UE) and a base station). A base station may configure one or more neural network block parameters, and may transmit the neural network block parameters to the UE. The UE may configure or reconfigure a neural network block according to the neural network block parameters, and may process one or more signals, e.g., baseband signals, generated by the UE using the neural network block and the neural network block parameters.
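The configure-then-process exchange can be sketched as below. This is not the actual 3GPP signaling: the parameter names and the toy "neural block" (a per-tap scaling of baseband samples) are assumptions chosen to keep the example self-contained.

```python
# Sketch: a base station chooses neural-network block parameters,
# the UE (re)configures its block accordingly and runs a signal
# through it. Parameter names are assumptions for illustration.

def configure_block(params):
    """UE side: build a block from the received parameters."""
    return {"taps": params["taps"], "scale": params["scale"]}

def process_signal(block, samples):
    # Toy stand-in for a neural block: scale a window of samples.
    window = samples[: block["taps"]]
    return [s * block["scale"] for s in window]

bs_params = {"taps": 3, "scale": 0.5}        # transmitted by the base station
block = configure_block(bs_params)           # UE configures/reconfigures
out = process_signal(block, [2.0, 4.0, 6.0, 8.0])
```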
Mobility surrogates
A mobility surrogate includes a humanoid form supporting at least one camera that captures image data from a first physical location in which the mobility surrogate is disposed to produce an image signal, and a mobility base. The mobility base includes a support mechanism, with the humanoid form affixed to the support mechanism, and a transport module that includes a mechanical drive mechanism and a transport control module having a processor and memory configured to receive control messages from a network and process those messages to control the transport module.
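The transport control module's receive-and-apply loop can be sketched as follows. The JSON wire format with `cmd` and `value` fields is an assumption; the abstract does not specify the message encoding.

```python
# Sketch of the transport control module: receive control messages
# from the network and apply them to the drive mechanism. Message
# format and command names are assumptions.

import json

class TransportModule:
    def __init__(self):
        self.speed = 0.0      # m/s
        self.heading = 0.0    # degrees

    def apply(self, cmd, value):
        if cmd == "set_speed":
            self.speed = value
        elif cmd == "turn":
            self.heading = (self.heading + value) % 360

def handle_messages(transport, raw_messages):
    for raw in raw_messages:
        msg = json.loads(raw)
        transport.apply(msg["cmd"], msg["value"])

t = TransportModule()
handle_messages(t, ['{"cmd": "set_speed", "value": 1.5}',
                    '{"cmd": "turn", "value": 90}'])
```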
CONTROLLING AGENTS INTERACTING WITH AN ENVIRONMENT USING BRAIN EMULATION NEURAL NETWORKS
In one aspect, there is provided a method performed by one or more data processing apparatus for selecting actions to be performed by an agent interacting with an environment, the method including, at each of multiple time steps, receiving an observation characterizing a current state of the environment at the time step, providing an input including the observation to an action selection neural network having a brain emulation sub-network with an architecture that is based on synaptic connectivity between biological neurons in a brain of a biological organism, processing the input including the observation characterizing the current state of the environment at the time step using the action selection neural network having the brain emulation sub-network to generate an action selection output, and selecting an action to be performed by the agent at the time step based on the action selection output.
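The per-time-step loop can be sketched with a toy network. Here the "brain emulation sub-network" is just a fixed connectivity matrix between three artificial neurons, standing in for an architecture derived from biological synaptic connectivity; the matrices and the argmax readout are assumptions for illustration.

```python
# Toy sketch of the action-selection step: observation -> brain
# emulation sub-network (fixed connectivity matrix) -> readout that
# scores actions -> selected action. All weights are assumptions.

def matvec(m, v):
    return [sum(w * x for w, x in zip(row, v)) for row in m]

# Three "neurons" wired by an assumed connectivity matrix.
connectivity = [[0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0],
                [1.0, 0.0, 0.0]]
readout = [[1.0, 0.0, 0.0],   # scores for action 0
           [0.0, 1.0, 1.0]]   # scores for action 1

def select_action(observation):
    hidden = matvec(connectivity, observation)   # brain emulation sub-network
    scores = matvec(readout, hidden)             # action selection output
    return max(range(len(scores)), key=scores.__getitem__)

action = select_action([1.0, 0.0, 0.0])
```

In the described method this step repeats at every time step, with each new observation characterizing the environment's current state.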
Artificial intelligence device mounted on vehicle to perform self-diagnosis, and method for the same
An artificial intelligence device mounted on a vehicle is provided. A sensing unit acquires a gyroscope sensor value, an acceleration sensor value, a GPS sensor value, and a proximity sensor value. If the acquired data satisfies a predetermined reference value, a processor inputs the acquired sensor values to an artificial intelligence model, obtains as a result whether an impact requiring self-diagnosis has occurred along with impact direction information, selects an ECU module on which to perform self-diagnosis according to the result, and performs the self-diagnosis.
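The described pipeline, reference check, model inference, ECU selection, can be sketched like this. The threshold value, the sign-based direction rule, and the ECU mapping are all assumptions standing in for the trained model and the vehicle's actual ECU layout.

```python
# Sketch: sensor values -> reference-value check -> stand-in "AI
# model" reporting (impact?, direction) -> ECU module selection for
# self-diagnosis. Threshold and ECU names are assumptions.

REFERENCE = 2.0  # assumed trigger level for lateral acceleration

def impact_model(sensors):
    """Stand-in for the trained model: flag an impact and infer its
    direction from the sign of the lateral acceleration."""
    impact = abs(sensors["accel_x"]) > REFERENCE
    direction = "left" if sensors["accel_x"] < 0 else "right"
    return impact, direction

ECU_BY_DIRECTION = {"left": "ECU_left_door", "right": "ECU_right_door"}

def self_diagnose(sensors):
    if abs(sensors["accel_x"]) < REFERENCE:      # reference-value check
        return None                              # no diagnosis needed
    impact, direction = impact_model(sensors)
    return ECU_BY_DIRECTION[direction] if impact else None

module = self_diagnose({"accel_x": -3.2})
```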
INTENT DRIVEN VOICE INTERFACE
An audio stream is received from an audio transceiver. The audio stream is captured in an environment and includes audio of a first user. An acoustic communication of the first user is detected from the audio stream. An audio intent trigger of the first user is identified from the audio stream based on the acoustic communication. An assistance action for the first user is initiated by a voice-based interface in response to the audio intent trigger.
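The detect-trigger-act flow can be sketched as below. The trigger phrases, action names, and the assumption that the stream already carries a transcript (rather than raw audio needing speech recognition) are all simplifications for illustration.

```python
# Sketch: detect an acoustic communication in the stream, match an
# audio intent trigger, and initiate an assistance action via the
# voice-based interface. Phrases and actions are assumptions.

TRIGGERS = {
    "play some music": "start_music_playback",
    "what's the weather": "report_weather",
}

def detect_communication(audio_stream):
    """Stand-in for speech detection/recognition: assume the stream
    already carries a transcript of the user's utterance."""
    return audio_stream.get("transcript", "").lower()

def handle_stream(audio_stream):
    utterance = detect_communication(audio_stream)
    for phrase, action in TRIGGERS.items():
        if phrase in utterance:        # audio intent trigger identified
            return action              # assistance action initiated
    return None

action = handle_stream({"transcript": "Hey, play some music please"})
```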