Patent classifications
G06F9/54
Extensible platform for orchestration of data with built-in scalability and clustering
In a computer system, an orchestration platform includes extensible components that interact with external systems and technology. The platform scales by way of a plurality of application servers using a clustering architecture.
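The extensibility-plus-clustering idea can be sketched as follows. This is a minimal illustration under invented names (the connector registry, the round-robin server cycle, and all identifiers are assumptions for this sketch, not the patented design):

```python
# Illustrative sketch: an orchestration platform that accepts pluggable
# connectors (extensible components) and spreads work across a cluster of
# application servers via simple round-robin.
from itertools import cycle

class OrchestrationPlatform:
    def __init__(self, servers):
        self._connectors = {}           # extensible components, keyed by name
        self._servers = cycle(servers)  # simplistic clustering: round-robin

    def register_connector(self, name, handler):
        """Plug in a component that interacts with an external system."""
        self._connectors[name] = handler

    def dispatch(self, connector_name, payload):
        """Route a unit of work to the next server in the cluster."""
        server = next(self._servers)
        handler = self._connectors[connector_name]
        return server, handler(payload)

platform = OrchestrationPlatform(servers=["app-1", "app-2"])
platform.register_connector("echo", lambda p: p.upper())
print(platform.dispatch("echo", "hello"))  # ('app-1', 'HELLO')
```

A production system would replace the round-robin cycle with real cluster membership and failover, but the registry-plus-dispatch split captures the abstract's two claims: extensible components and scaling across a plurality of application servers.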
SYSTEMS AND METHODS FOR AUTOMATED NOTIFICATION AND RESOLUTION OF TRIGGER EVENTS
Systems and methods for automated notification and resolution of trigger events are disclosed. According to one embodiment, a method for automated notification and resolution of trigger events may include: (1) monitoring, by a backend computer program, an account for a trigger condition; (2) generating, by the backend computer program, a voice notification for the trigger condition; (3) communicating, by the backend computer program, to an electronic device associated with the account, the voice notification and a link to a network location to resolve the trigger condition; (4) presenting, by the backend computer program and at the network location to resolve the trigger condition, one or more resolution options; (5) receiving, by the backend computer program and at the network location to resolve the trigger condition, a selection of one of the one or more resolution options; and (6) executing, by the backend computer program, the selected resolution option.
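The six enumerated steps can be sketched as a single flow. Everything here is a hypothetical stand-in (the low-balance trigger, the resolution URL, and all class and field names are assumptions, not the patent's actual system):

```python
# Hedged sketch of the claimed flow: monitor -> voice notification + link ->
# present options -> receive selection -> execute.
class TriggerEventProgram:
    def __init__(self, threshold):
        self.threshold = threshold
        self.log = []

    def monitor(self, account):
        # (1) watch the account for a trigger condition (low balance here)
        return account["balance"] < self.threshold

    def notify(self, account):
        # (2)-(3) generate a voice notification and a resolution link
        link = f"https://resolve.example.com/{account['id']}"  # hypothetical URL
        self.log.append(("voice", account["device"], link))
        return link

    def present_options(self):
        # (4) resolution options shown at the network location
        return ["transfer_funds", "dismiss"]

    def execute(self, selection):
        # (5)-(6) act on the user's selected resolution option
        self.log.append(("executed", selection))
        return selection

program = TriggerEventProgram(threshold=100)
account = {"id": "acct-42", "balance": 25, "device": "phone-1"}
if program.monitor(account):
    program.notify(account)
    choice = program.present_options()[0]
    program.execute(choice)
```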
Local controller for local API authorization method and apparatus
Some embodiments provide a local controller on a set of host computers that reduces the volume of data communicated between the server set and the set of host computers. In some embodiments, the local controller executing on a particular host computer receives a portion of the namespace that includes only the policies (e.g., policy opcodes) relevant to API-authorization processing for the applications executing on that host computer; a local agent executing on the computer uses these policies and parameters to authorize API requests. The local controller analyzes the received policies (e.g., policy opcodes) and identifies the parameters (e.g., operands), or parameter types, needed by the local agent for API-authorization processing (e.g., evaluating the policy opcode upon receiving a particular API request). In some embodiments, the local controller performs this analysis for each updated set of policies (e.g., policy opcodes).
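The controller's analysis step — scanning policy opcodes to find which operands they reference, so only those parameters need be fetched — can be sketched like this. The dict-based policy format is a made-up stand-in, not the patent's actual opcode encoding:

```python
# Illustrative sketch: extract the parameter (operand) names referenced by
# any received policy, so the host only requests those parameters.
def parameters_needed(policies):
    """Return the set of parameter names referenced by any policy."""
    needed = set()
    for policy in policies:
        needed.update(policy.get("operands", []))
    return needed

policies = [
    {"opcode": "allow_if_role", "operands": ["user_role"]},
    {"opcode": "allow_if_hour", "operands": ["request_hour", "user_role"]},
]
print(sorted(parameters_needed(policies)))  # ['request_hour', 'user_role']
```

Re-running this analysis whenever the policy set is updated, as the abstract describes, keeps the parameter set minimal as policies change.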
Phased deployment of deep-learning models to customer facing APIs
Techniques for phased deployment of machine learning models are described. Customers can call a training API to initiate model training, but must then wait for training to complete before the model can be used to perform inference. Depending on the type of model, the machine learning algorithm used for training, the size of the training dataset, and other factors, this training process may take hours or days to complete, leading to significant downtime during which inference requests cannot be served. Embodiments improve upon existing systems by providing phased deployment of custom models. For example, a simple, less accurate model can be provided synchronously in response to a request for a custom model. At the same time, one or more machine learning models can be trained asynchronously in the background. When a trained model is ready for use, the customers' traffic and jobs can be transferred over to the better model.
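A minimal sketch of the phased cut-over, under stated assumptions: "training" is a short background sleep, the two models are plain functions, and a lock guards the swap. None of these names come from the patent:

```python
# Sketch: a quick, less accurate model answers immediately while a better
# model "trains" in a background thread; traffic switches over once ready.
import threading
import time

class PhasedModelEndpoint:
    def __init__(self):
        self._model = lambda x: round(x)        # simple synchronous model
        self._lock = threading.Lock()
        threading.Thread(target=self._train_better_model, daemon=True).start()

    def _train_better_model(self):
        time.sleep(0.05)                        # stand-in for hours of training
        better = lambda x: round(x, 2)          # "more accurate" model
        with self._lock:
            self._model = better                # transparent cut-over

    def predict(self, x):
        with self._lock:
            return self._model(x)

endpoint = PhasedModelEndpoint()
first = endpoint.predict(3.14159)   # served by the simple model: 3
time.sleep(0.2)
second = endpoint.predict(3.14159)  # served by the trained model: 3.14
```

The key design point matching the abstract: callers use one `predict` entry point throughout, so the upgrade from the synchronous stand-in to the asynchronously trained model is invisible to them.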
Event producer system of a messaging platform for delivering real-time messages
This disclosure relates to streaming real-time messages to client applications according to query subscriptions that match content from a large stream of messages exchanged on a messaging platform. The disclosed techniques increase the speed of message delivery, effectively control the management of computer resources to handle fluctuation in the number of active query subscriptions, and/or increase the security of matching: query subscriptions are matched against messages as they are generated, from the perspective of the authors, while the matching messages are delivered in real time from the perspective of the users who initiated the query subscriptions.
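The subscription-matching idea can be sketched as a tiny in-memory fan-out: matching happens at publish time (the author's side), and each subscriber's matches accumulate for real-time delivery. The data structures and the substring-match rule are invented for illustration:

```python
# Sketch: query subscriptions matched against a message stream at publish
# time; matches are delivered per-subscription.
from collections import defaultdict

class MessageStream:
    def __init__(self):
        self._subs = {}                      # subscription id -> query term
        self._inbox = defaultdict(list)      # subscription id -> matches

    def subscribe(self, sub_id, term):
        self._subs[sub_id] = term.lower()

    def unsubscribe(self, sub_id):
        # dropping idle subscriptions keeps resource use proportional
        # to the number of active query subscriptions
        self._subs.pop(sub_id, None)

    def publish(self, text):
        # author-side matching: evaluate every active subscription
        for sub_id, term in self._subs.items():
            if term in text.lower():
                self._inbox[sub_id].append(text)

    def deliver(self, sub_id):
        return self._inbox[sub_id]

stream = MessageStream()
stream.subscribe("s1", "launch")
stream.publish("Rocket launch today!")
stream.publish("Weather is fine.")
print(stream.deliver("s1"))  # ['Rocket launch today!']
```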
Mobile computing device notification mode determination
Systems and methods are described for providing a notification mode determination service. A notification mode determination service may apply various criteria to determine a mode for displaying a notification on a mobile computing device, and may analyze responses to previously displayed notifications in order to determine the criteria to apply, prioritize the application of the criteria, and identify preferred notification modes. Notifications may be displayed using audio feedback, visual feedback, haptic feedback, or combinations thereof, and may be deferred until a particular time or condition is reached. Notification modes may be determined based on factors such as a foreground software application, a type or category of the foreground software application, calendar events, holidays, geolocations, and the like.
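The criteria-driven mode selection can be sketched as a prioritized rule list: each rule inspects the device context and, if it matches, determines the notification mode. The rules, context keys, and the first-match-wins policy are illustrative assumptions only:

```python
# Sketch: prioritized criteria inspect context (foreground app, calendar,
# etc.) and pick a mode: audio, haptic, visual, or deferral.
def determine_mode(context, rules):
    """Apply prioritized criteria; the first matching rule wins."""
    for predicate, mode in rules:
        if predicate(context):
            return mode
    return "visual"  # default fallback mode

rules = [
    (lambda c: c.get("in_meeting"), "defer"),                    # calendar event
    (lambda c: c.get("foreground_app_type") == "game", "haptic"),
    (lambda c: c.get("driving"), "audio"),
]

print(determine_mode({"in_meeting": True}, rules))             # defer
print(determine_mode({"foreground_app_type": "game"}, rules))  # haptic
print(determine_mode({}, rules))                               # visual
```

In the abstract's terms, analyzing responses to past notifications would correspond to reordering or rewriting the `rules` list over time, rather than hard-coding it as done here.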
Convolutional layer acceleration unit, embedded system having the same, and method for operating the embedded system
Disclosed herein are a convolutional layer acceleration unit, an embedded system having the convolutional layer acceleration unit, and a method for operating the embedded system. In the method, the embedded system provides an accelerated processing capability programmed using a Lightweight Intelligent Software Framework (LISF). The method includes initializing and configuring, by a parallelization managing function entity (FE), the entities present in resources for performing mathematical operations in parallel, and processing the mathematical operations in parallel, by an acceleration managing FE, using the configured entities.
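The two-FE split — one entity configures the parallel resources, another runs the mathematical operations on them — can be sketched in Python as a stand-in (the real unit is hardware; the thread pool, the 1-D dot-product kernel, and all names here are assumptions for illustration):

```python
# Illustrative stand-in for the LISF idea: a "parallelization managing" step
# configures worker entities, then an "acceleration managing" step runs
# convolution-style multiply-accumulate work across them in parallel.
from concurrent.futures import ThreadPoolExecutor

def configure_entities(num_workers):
    # parallelization managing FE: initialize the parallel resources
    return ThreadPoolExecutor(max_workers=num_workers)

def dot(row, kernel):
    # the per-entity mathematical operation (one convolution tap)
    return sum(a * b for a, b in zip(row, kernel))

def accelerate(pool, rows, kernel):
    # acceleration managing FE: process the operations using the
    # configured entities, in parallel
    return list(pool.map(lambda r: dot(r, kernel), rows))

pool = configure_entities(num_workers=4)
rows = [[1, 2, 3], [4, 5, 6]]
kernel = [1, 0, -1]
print(accelerate(pool, rows, kernel))  # [-2, -2]
```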
Methods and systems for continuous asynchronous code deployment
Systems, methods, and computer program products are presented for the automated deployment of a code update to a device. One or more clusters of devices may be connected to a development environment for deployment of one or more code updates through respective development pipelines to the respective clusters of devices. A first cluster of devices receives a module that implements an agent for the first cluster of devices and a central queue local to a centralized controller of the development environment. The agent reports at least one status of a respective device to the centralized controller of the development environment, where the status may correspond to a code-update image pulled onto the respective device. The agent retrieves one or more instruction messages from the centralized controller in response to the reported status of the respective device.
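The agent/controller handshake — report a status, then pull the next instruction from the central queue — can be sketched as below. Queue contents, status strings, and all identifiers are invented for illustration, not the patent's actual protocol:

```python
# Hedged sketch: the agent reports the code-update image a device has
# pulled, and the controller's central queue hands back the next
# instruction message for that device.
from collections import deque

class CentralController:
    def __init__(self):
        self.statuses = {}                  # device id -> reported status
        self.queue = deque()                # central queue of instructions

    def report_status(self, device_id, image):
        self.statuses[device_id] = image    # e.g. code-update image pulled

    def next_instruction(self, device_id):
        # hand the agent an instruction only once a status was reported
        if self.queue and self.statuses.get(device_id):
            return self.queue.popleft()
        return None

controller = CentralController()
controller.queue.append("apply-update-v2")
controller.report_status("dev-1", "image:v2")
print(controller.next_instruction("dev-1"))  # apply-update-v2
```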
Method and apparatus for processing data
A user device has a plurality of modules which support an application, such as a gaming application. The user device has a stream processing module which is able to stream-process events that are generated, for example, when the application is run. Events generated by the modules are passed to an event module, which distributes the events to the other modules.
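The event-distribution pattern reads like a small in-process event bus: modules emit events to a central event module, which fans each event out to every module other than its producer. The module interface sketched here is an assumption, not the patent's design:

```python
# Sketch: an event module that distributes events from one module to the
# other registered modules on the device.
class EventModule:
    def __init__(self):
        self._modules = []

    def register(self, module):
        self._modules.append(module)

    def emit(self, source, event):
        # distribute the event to every module except its producer
        for module in self._modules:
            if module is not source:
                module.handle(event)

class Module:
    def __init__(self, name):
        self.name = name
        self.received = []

    def handle(self, event):
        self.received.append(event)

bus = EventModule()
game, audio = Module("game"), Module("audio")
bus.register(game)
bus.register(audio)
bus.emit(game, "level_complete")
print(audio.received)  # ['level_complete']
print(game.received)   # []
```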