Patent classification: G06F18/2135
AUTOMATICALLY CLASSIFYING ANIMAL BEHAVIOR
Systems and methods are disclosed to objectively identify sub-second behavioral modules in three-dimensional (3D) video data that represents the motion of a subject. Defining behavioral modules based upon structure in the 3D video data itself—rather than using a priori definitions for what should constitute a measurable unit of action—identifies a previously-unexplored sub-second regularity that defines a timescale upon which behavior is organized, yields important information about the components and structure of behavior, offers insight into the nature of behavioral change in the subject, and enables objective discovery of subtle alterations in patterned action. The systems and methods of the invention can be applied to drug or gene therapy classification, drug or gene therapy screening, disease study including early detection of the onset of a disease, toxicology research, side-effect study, learning and memory process study, anxiety study, and analysis of consumer behavior.
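The segmentation idea above could be sketched as follows. This is only an illustrative stand-in: a simple motion-energy changepoint rule replaces the patent's data-driven discovery of structure in the 3D video itself, and the function names and threshold are hypothetical.

```python
import numpy as np

def segment_modules(poses, threshold=1.0):
    """Split a pose time-series into candidate behavioral modules at large
    frame-to-frame changes (a toy proxy for sub-second module boundaries).

    poses: (T, D) array of pose features per frame.
    Returns a list of (start, end) frame index pairs, end exclusive.
    """
    # Frame-to-frame motion energy: Euclidean step size in pose space.
    energy = np.linalg.norm(np.diff(poses, axis=0), axis=1)
    # Cut wherever the motion energy jumps above the threshold.
    cuts = [0] + [t + 1 for t in range(len(energy)) if energy[t] > threshold] + [len(poses)]
    return [(a, b) for a, b in zip(cuts, cuts[1:]) if b > a]

# Toy sequence: the subject is still, then moves abruptly, then is still again.
poses = np.vstack([np.zeros((5, 3)), np.full((5, 3), 10.0)])
print(segment_modules(poses))  # [(0, 5), (5, 10)]
```

A real implementation would replace the threshold rule with a model fitted to the statistics of the video data, so that module boundaries and the sub-second timescale emerge from the data rather than from a fixed cutoff.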
Learning to search user experience designs based on structural similarity
Embodiments are disclosed for learning structural similarity of user experience (UX) designs using machine learning. In particular, in one or more embodiments, the disclosed systems and methods comprise generating a representation of a layout of a graphical user interface (GUI), the layout including a plurality of control components, each control component including a control type, geometric features, and relationship features to at least one other control component, generating a search embedding for the representation of the layout using a neural network, and querying a repository of layouts in embedding space using the search embedding to obtain a plurality of layouts based on similarity to the layout of the GUI in the embedding space.
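The query flow described above could be sketched as follows. The mean-pooled feature vector here is a hypothetical stand-in for the neural-network search embedding, and the control tuples, repository, and function names are all illustrative assumptions.

```python
import numpy as np

def embed_layout(components):
    """Toy stand-in for the learned search embedding: mean-pool per-control
    features. components: list of (type_id, x, y, w, h) tuples per GUI control."""
    feats = np.array([[t, x, y, w, h] for t, x, y, w, h in components], dtype=float)
    return feats.mean(axis=0)

def query(repo, layout, k=2):
    """Return the k repository layouts most similar in embedding space."""
    q = embed_layout(layout)

    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    scored = sorted(repo.items(), key=lambda kv: -cos(embed_layout(kv[1]), q))
    return [name for name, _ in scored[:k]]

repo = {
    "login_form": [(0, 10, 10, 100, 20), (1, 10, 40, 100, 20), (2, 10, 70, 60, 25)],
    "dashboard":  [(3, 0, 0, 400, 50), (4, 0, 60, 200, 300)],
}
# A probe layout structurally close to the login form.
probe = [(0, 12, 12, 98, 18), (1, 12, 42, 98, 18), (2, 12, 72, 58, 24)]
print(query(repo, probe, k=1))  # ['login_form']
```

In the disclosed embodiments the embedding also encodes relationship features between controls, which a flat mean-pool cannot capture; the nearest-neighbor query over embedding vectors is the part this sketch illustrates.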
Method, server and computer-readable medium for recommending nodes of interactive content
Disclosed are a method, a server, and a computer-readable medium for recommending nodes of interactive content. When recommendation request information requesting a recommendation node for a specific node of an interactive content is received from the user creating that content, a first embedding value is calculated for a first set that includes the specific node, and a second embedding value is calculated for each second set that includes one of the nodes of the other interactive contents stored on the service server. A similarity between the first embedding value and each second embedding value is then calculated, and the next node after the node corresponding to the second embedding value selected on the basis of that similarity is provided to the user as the recommendation node.
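The recommendation step above could be sketched as follows. The bag-of-labels set embedding is a hypothetical stand-in for the learned embedding values, and the node labels and data layout are illustrative assumptions.

```python
def embed(node_set, vocab):
    """Toy set embedding: bag-of-labels counts (stand-in for a learned embedding)."""
    return [sum(1 for n in node_set if n == w) for w in vocab]

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    den = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return num / den if den else 0.0

def recommend_next(query_set, others, vocab):
    """Find the most similar node set among other contents and return the
    node that follows it, as the recommendation node."""
    q = embed(query_set, vocab)
    best = max(others, key=lambda o: cosine(embed(o["nodes"], vocab), q))
    return best["next"]

vocab = ["greet", "ask_name", "quiz", "farewell"]
others = [
    {"nodes": ["greet", "ask_name"], "next": "quiz"},
    {"nodes": ["quiz", "farewell"], "next": None},
]
print(recommend_next(["greet", "ask_name"], others, vocab))  # quiz
```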
ARRANGEMENT FOR PRODUCING HEAD RELATED TRANSFER FUNCTION FILTERS
When three-dimensional audio is produced by using headphones, particular HRTF-filters are used to modify sound for the left and right channels of the headphones. As the morphology of every ear is different, it is beneficial to have HRTF-filters particularly designed for the user of the headphones. Such filters may be produced by deriving ear geometry from a plurality of images taken with an ordinary camera, detecting the necessary features in the images, and fitting said features to a model that has been produced from accurately scanned ears and comprises representative values for different ear sizes and shapes. The captured images are sent to a server (52) that performs the necessary computations and submits the data further or produces the requested filter.
PREDICTIVE MODELING FOR CHAMBER CONDITION MONITORING
The subject matter of this specification can be implemented in, among other things, methods, systems, and computer-readable storage media. A method can include a processing device receiving training data. The training data may include first sensor data indicating a first state of an environment of a first processing chamber processing a first substrate. The training data may further include first process tool data indicating a state of first processing tools processing the first substrate. The training data may further include first process result data corresponding to the first substrate processed by the first processing tools. The processing device may further train a first model using the training data. The trained first model receives new input having second sensor data and second process tool data to produce second output based on the new input. The second output indicates second process result data corresponding to a second substrate.
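The train-then-predict loop above could be sketched as follows. The patent does not specify a model class, so ordinary least squares stands in for the "first model", and the sensor readings, tool settings, and target values are all invented for illustration.

```python
import numpy as np

# Hypothetical training data: each row holds [sensor reading, tool setting]
# for one substrate; the target is a process result (e.g., film thickness).
X_train = np.array([[1.0, 0.5], [2.0, 1.0], [3.0, 1.5], [4.0, 2.0]])
y_train = np.array([2.0, 4.0, 6.0, 8.0])

# "Train a first model": least-squares fit as a stand-in for the patent's
# unspecified model class.
coef, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

# "New input" of second sensor data and second process tool data yields the
# second output: a predicted process result for a second substrate.
x_new = np.array([2.5, 1.25])
prediction = float(x_new @ coef)
print(round(prediction, 2))  # 5.0
```

In practice the model would be retrained or monitored as chamber conditions drift, which is the monitoring use case the title refers to.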
3-D ULTRASOUND IMAGING DEVICE AND METHODS
The present disclosure includes a computer-implemented method of diagnosing a condition of bodily tissue, the method comprising comparing, using a computer, a 3D tissue model derived from an ultrasound scan of the bodily tissue with at least one 3D tissue model having tissue in common with the bodily tissue, and diagnosing a condition of the bodily tissue responsive to comparing the 3D tissue models.
HYPERSPACE-BASED PROCESSING OF DATASETS FOR ELECTRONIC DESIGN AUTOMATION (EDA) APPLICATIONS
A computing system may include a hyperspace generation engine and a hyperspace processing engine. The hyperspace generation engine may be configured to access a feature vector set, and feature vectors in the feature vector set may represent values for multiple parameters of data points in a dataset. The hyperspace generation engine may further be configured to perform a principal component analysis on the feature vector set and quantize the principal component space into a hyperspace comprised of hyperboxes. The hyperspace processing engine may be configured to process the dataset according to a mapping of the feature vector set into the hyperboxes of the hyperspace.
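The two engines described above could be sketched as one function: PCA on the feature vector set, followed by quantization of the principal component space into a grid of hyperboxes. The component count, bin count, and equal-width binning rule are illustrative assumptions.

```python
import numpy as np

def pca_hyperboxes(X, n_components=2, bins=4):
    """Project feature vectors with PCA, then quantize the principal
    component space into a grid of hyperboxes.

    Returns, for each data point, its integer hyperbox coordinates
    (one index per retained principal axis)."""
    # PCA via SVD of the mean-centered feature matrix.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T
    # Quantize each principal axis into `bins` equal-width intervals.
    lo, hi = scores.min(axis=0), scores.max(axis=0)
    idx = np.floor((scores - lo) / (hi - lo + 1e-12) * bins).astype(int)
    return np.clip(idx, 0, bins - 1)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))  # 100 data points, 5 parameters each
boxes = pca_hyperboxes(X)
print(boxes.shape)  # (100, 2)
```

Downstream processing can then operate per hyperbox (e.g., sampling one representative data point from each occupied box), which is the mapping the hyperspace processing engine consumes.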
METHOD AND SYSTEM FOR COMPRESSING APPLICATION DATA FOR OPERATIONS ON MULTI-CORE SYSTEMS
A system and method to compress application control data, such as weights for a layer of a convolutional neural network, is disclosed. A multi-core system for executing at least one layer of the convolutional neural network includes a storage device storing a compressed weight matrix of a set of weights of the at least one layer of the convolutional neural network and a decompression matrix. The compressed weight matrix is formed by matrix factorization and quantization of the floating-point value of each weight to a compact floating-point format. A decompression module is operable to obtain an approximation of the weight values by decompressing the compressed weight matrix through the decompression matrix. A plurality of cores executes the at least one layer of the convolutional neural network with the approximation of the weight values to produce an inference output.
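The factorize-quantize-decompress scheme above could be sketched as follows. Truncated SVD stands in for the patent's matrix factorization and float16 for its compact floating-point format; the rank, matrix sizes, and error bound are illustrative assumptions.

```python
import numpy as np

def compress(W, rank):
    """Compress weights W by truncated-SVD factorization, then quantize the
    factor entries to float16 (a stand-in for the compact float format)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = (U[:, :rank] * s[:rank]).astype(np.float16)  # compressed weight matrix
    B = Vt[:rank].astype(np.float16)                 # decompression matrix
    return A, B

def decompress(A, B):
    """Recover an approximation of the original weights for execution on-core."""
    return A.astype(np.float32) @ B.astype(np.float32)

rng = np.random.default_rng(1)
# A nearly rank-2 weight matrix standing in for one layer's weights.
W = rng.normal(size=(64, 2)) @ rng.normal(size=(2, 64)) + 0.01 * rng.normal(size=(64, 64))
A, B = compress(W, rank=2)
W_hat = decompress(A, B)
print(A.size + B.size, "vs", W.size)          # 256 vs 4096 stored values
print(float(np.abs(W - W_hat).max()) < 0.2)   # True
```

Storing the two factors instead of the full matrix is what shrinks the storage footprint; the cores pay one extra matrix multiply at load time to reconstruct the approximate weights.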
Systems and methods for real-time complex character animations and interactivity
Systems, methods, and non-transitory computer-readable media can identify a virtual character being presented to a user within a real-time immersive environment. A first animation to be applied to the virtual character is determined. A nonverbal communication animation to be applied to the virtual character simultaneously with the first animation is determined. The virtual character is animated in real-time based on the first animation and the nonverbal communication animation.
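The simultaneous application of the two animations could be sketched as additive blending of per-joint offsets. The patent does not state a blending method, so the additive rule, joint names, and weights here are all hypothetical.

```python
def animate(first, nonverbal, weights=(1.0, 1.0)):
    """Apply a primary animation and a nonverbal-communication animation
    simultaneously by additively blending their per-joint offsets."""
    w1, w2 = weights
    joints = set(first) | set(nonverbal)
    return {j: w1 * first.get(j, 0.0) + w2 * nonverbal.get(j, 0.0) for j in joints}

# Per-joint rotation offsets (degrees) for one frame of each animation.
walk = {"hip": 5.0, "knee": 12.0}
nod = {"neck": 8.0}
pose = animate(walk, nod)
print(pose["neck"], pose["knee"])  # 8.0 12.0
```

Because the two animations drive mostly disjoint joints, blending them per frame lets the character walk and nod at once, which is the real-time simultaneity the claim describes.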