Patent classifications
G06F2209/548
DISTRIBUTED COMMAND EXECUTION IN MULTI-LOCATION STUDIO ENVIRONMENTS
A content production management system within a distributed studio environment includes a command interface module and a command queue management module. The command interface module is configured to render a user interface for a set of content production entities associated with a set of content production volumes within the distributed studio environment. The command queue management module, upon execution of software instructions, is configured to perform the operations of receiving, from the command interface module, a command targeting a target content production entity, assigning a synchronized execution time to the command, enqueueing the command into a command queue associated with the target content production entity according to the synchronized execution time, and enabling the target content production entity to execute the command from the command queue according to the synchronized execution time.
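The claimed queue-management flow (assign a synchronized execution time, enqueue per target entity, execute when that time arrives) can be sketched as follows. The entity names, the fixed scheduling delay, and the injectable clock are illustrative assumptions, not details from the patent:

```python
import heapq
import itertools

class CommandQueueManager:
    """Sketch of a per-entity command queue keyed by a synchronized
    execution time. A shared clock stands in for the synchronized
    time source across the distributed studio environment."""

    def __init__(self, clock, delay=5.0):
        self._clock = clock          # synchronized time source (assumed)
        self._delay = delay          # lead time added to each command (assumed)
        self._queues = {}            # entity -> min-heap of (time, seq, command)
        self._seq = itertools.count()

    def submit(self, entity, command):
        """Assign a synchronized execution time and enqueue the command."""
        exec_time = self._clock() + self._delay
        heapq.heappush(self._queues.setdefault(entity, []),
                       (exec_time, next(self._seq), command))
        return exec_time

    def due_commands(self, entity):
        """Pop every command whose synchronized execution time has arrived."""
        queue, now, ready = self._queues.get(entity, []), self._clock(), []
        while queue and queue[0][0] <= now:
            ready.append(heapq.heappop(queue)[2])
        return ready
```

A target entity would call `due_commands` against the same synchronized clock, so all entities act on a command at the same instant regardless of location.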
SYSTEMS AND METHODS FOR AI META-CONSTELLATION
System and method for device constellation according to certain embodiments. For example, a method for device constellation, the method includes the steps of: receiving a request, the request including a plurality of request parameters; decomposing the request into one or more tasks; selecting one or more edge devices based at least in part on the plurality of request parameters; assigning the one or more tasks to the one or more selected edge devices to cause the one or more selected edge devices to perform the one or more tasks; and receiving one or more task results from the one or more selected edge devices.
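The recited steps (receive a request with parameters, decompose into tasks, select edge devices, assign tasks, collect results) can be sketched in a few lines. The request/device shapes, the capability-matching rule, and the round-robin assignment are all illustrative assumptions:

```python
def run_constellation(request, devices):
    """Sketch of the claimed flow: decompose a request into tasks, select
    edge devices matching the request parameters, fan the tasks out, and
    gather the results."""
    tasks = request["payload"]          # decompose: one task per payload item
    params = request["params"]
    # select devices whose capabilities cover the requested parameter
    selected = [d for d in devices if params["capability"] in d["capabilities"]]
    if not selected:
        raise RuntimeError("no edge device satisfies the request parameters")
    # assign tasks to the selected devices round-robin (an assumed policy)
    assignments = [(selected[i % len(selected)], t) for i, t in enumerate(tasks)]
    # cause each device to perform its task and collect the task results
    return [dev["run"](task) for dev, task in assignments]
```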
EVENT QUEUING AND DISTRIBUTION SYSTEM
A REST-based event distribution system is described, with particular applicability to the distribution of distributed filesystem notifications over a high-latency best-effort network such as the Internet. In one embodiment, event channels are mapped to URL spaces and created and distributed through the use of HTTP POST and GET requests. The system is optimized for short polling by clients; an event history is maintained to buffer messages and keep requests idempotent. In another embodiment, the events are registered as a SIP event package, allowing for the distribution of filesystem events.
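The event-history mechanism that keeps short polling idempotent can be sketched as below. The bounded history and the `since` cursor parameter are illustrative assumptions; in the patent the channel is addressed through a URL space via HTTP POST and GET:

```python
class EventChannel:
    """Sketch of a short-polling event channel: POSTed events are
    appended to a bounded history, and a GET with a cursor is
    idempotent -- repeating the same cursor returns the same events."""

    def __init__(self, history_limit=100):
        self._events = []            # buffered (sequence_number, event) pairs
        self._next_seq = 0
        self._limit = history_limit

    def post(self, event):
        """Append an event, trimming the oldest beyond the history limit."""
        self._events.append((self._next_seq, event))
        self._next_seq += 1
        if len(self._events) > self._limit:
            self._events.pop(0)

    def get(self, since=0):
        """Return all buffered events at or after `since` (idempotent)."""
        return [(seq, ev) for seq, ev in self._events if seq >= since]
```

Because `get` only reads the buffered history, a client that retries a lost poll receives exactly the same response, which is what makes short polling safe over a best-effort network.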
Message stream processor microbatching
Embodiments provide a batching system that conforms message batches to publication constraints and also to message ordering requirements. An output array of messages is formed from messages received from a plurality of input streams, in which the messages are ordered. The output array preserves the ordering of the messages found in the source input streams. Messages are added from a head of the output array to a batch until addition of a next message to the batch would violate a particular batch processing constraint imposed on the batch. According to embodiments, one or more additional messages are included in the current batch when addition of the one or more additional messages to the batch (a) does not violate the particular batch processing constraint, and (b) continues to preserve the ordering of the messages, in the batch, with respect to the respective ordering of each of the plurality of input streams.
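The interesting rule here is the last one: after the head of the output array stops fitting, later messages may still join the batch if they satisfy the constraint and preserve each input stream's ordering. A minimal sketch, using a total-bytes budget as a stand-in for the patent's abstract "batch processing constraint" (messages are `(stream_id, payload)` pairs, an assumed shape):

```python
def next_batch(output, byte_budget):
    """Cut one batch from the head of the output array. A message is
    skipped when it would exceed the byte budget; once a stream has a
    skipped message, its later messages must also wait, so per-stream
    ordering is preserved in the batch."""
    batch, used = [], 0
    skipped_streams = set()
    remaining = []
    for stream_id, payload in output:
        fits = used + len(payload) <= byte_budget
        if fits and stream_id not in skipped_streams:
            batch.append((stream_id, payload))
            used += len(payload)
        else:
            skipped_streams.add(stream_id)  # hold this stream's later messages
            remaining.append((stream_id, payload))
    return batch, remaining
```

In the usage below, the oversized message from stream 1 is skipped, the later small message from stream 0 is still included, and the small message from stream 1 is held back even though it fits, because taking it would reorder stream 1.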
BATCH SCHEDULING FUNCTION CALLS OF A TRANSACTIONAL APPLICATION PROGRAMMING INTERFACE (API) PROTOCOL
Embodiments described herein are generally directed to improving performance of a transactional API protocol by batch scheduling data dependent functions. In an example, a prescribed sequence of function calls associated with a transactional application programming interface (API) is received that is to be carried out by an executer (e.g., a compute service or a second processing resource remote from a first processing resource with which an application is associated) to perform an atomic unit of work on behalf of the application. Transport latency over an interconnect between the application and the executer is reduced by: (i) creating a batch representing the prescribed sequence of function calls in a form of a list of function descriptors in which variable arguments of the prescribed sequence of function calls are replaced with corresponding global memory references; and (ii) transmitting the batch via the interconnect as a single message.
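The batching step (i)–(ii) can be sketched as follows: variable arguments are moved into a global memory region and replaced with references, and the resulting descriptor list crosses the interconnect as one message. The descriptor layout and index-based references are illustrative assumptions:

```python
def build_batch(calls, memory):
    """Application side: turn a prescribed sequence of (name, args) calls
    into a list of function descriptors whose variable arguments are
    replaced with global memory references (indices here)."""
    descriptors = []
    for name, args in calls:
        refs = []
        for arg in args:
            refs.append(len(memory))   # reference = index into global memory
            memory.append(arg)
        descriptors.append({"fn": name, "arg_refs": refs})
    return descriptors                 # transmitted as a single message

def execute_batch(descriptors, memory, table):
    """Executer side: resolve the references and run the calls in the
    prescribed order as one atomic unit of work."""
    return [table[d["fn"]](*(memory[r] for r in d["arg_refs"]))
            for d in descriptors]
```

One round trip now carries the whole sequence, which is how the scheme reduces transport latency relative to one message per data-dependent call.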
Methods and systems of scheduling computer processes or tasks in a distributed system
A cloud computer system is provided that includes a plurality of computer devices and a database. The plurality of computer devices execute a plurality of virtual machines, with one of the virtual machines serving as a controller node and the remainder serving as worker instances. The controller node is programmed to accept a request to initiate a distributed process that includes a plurality of data jobs, determine a number of worker instances to create across the plurality of computer devices, and cause that number of worker instances to be created on the plurality of computer devices. Each worker instance is programmed to create its own unique message queue and to store a reference to that queue in the database. The controller node retrieves the references to the unique message queues and posts jobs to the message queues for execution by the worker instances.
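The registration pattern (worker creates its own queue, publishes a reference to it, controller posts jobs through the references) can be sketched as follows. A plain dict stands in for the database, and the round-robin posting policy is an assumption:

```python
import queue
import uuid

class WorkerInstance:
    """Each worker creates a unique message queue and stores a
    reference to it in the shared 'database' (a dict here)."""

    def __init__(self, database):
        self.worker_id = str(uuid.uuid4())
        self.inbox = queue.Queue()              # the unique message queue
        database[self.worker_id] = self.inbox   # store the reference

    def drain(self):
        jobs = []
        while not self.inbox.empty():
            jobs.append(self.inbox.get())
        return jobs

class ControllerNode:
    def __init__(self, database):
        self.database = database

    def start_workers(self, n):
        return [WorkerInstance(self.database) for _ in range(n)]

    def post_jobs(self, jobs):
        """Retrieve the queue references from the database and
        distribute the jobs across them round-robin."""
        inboxes = list(self.database.values())
        for i, job in enumerate(jobs):
            inboxes[i % len(inboxes)].put(job)
```

The controller never holds direct handles to the workers; the database of queue references is the only coupling, which is what lets workers be created on any of the computer devices.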
Task Processing Method and Device, and Electronic Device
A task processing method, a task processing device and an electronic device are provided, which relate to the field of cloud computing technology and big data technology, in particular to the field of task processing technology. The task processing method includes: obtaining a task processing request for a to-be-processed task, the task processing request including processing time information of the to-be-processed task and a service type of the to-be-processed task; when the processing time information of the to-be-processed task meets a triggering condition, writing the to-be-processed task into a corresponding message queue in accordance with the service type of the to-be-processed task, with each message queue corresponding to one service type; and processing the to-be-processed task in the message queue, to obtain a task processing result of the to-be-processed task.
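The method's routing rule (one queue per service type, writes gated by a time-based trigger) can be sketched as below. Treating the triggering condition as simply `due_time <= now` is an illustrative assumption; the patent leaves the condition abstract:

```python
from collections import deque

class TaskProcessor:
    """Sketch: a task whose processing time has arrived is written to
    the message queue for its service type; one queue per service type."""

    def __init__(self):
        self.queues = {}   # service_type -> deque of tasks

    def submit(self, task, now):
        """Write the task to its service type's queue if triggered."""
        if task["due_time"] > now:
            return False                  # triggering condition not met
        q = self.queues.setdefault(task["service_type"], deque())
        q.append(task)
        return True

    def process(self, service_type, handler):
        """Drain one service type's queue, returning the task results."""
        q = self.queues.get(service_type, deque())
        results = []
        while q:
            results.append(handler(q.popleft()))
        return results
```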
METHODS AND SYSTEMS FOR EXCHANGING NETWORK PACKETS BETWEEN HOST AND MEMORY MODULE USING MULTIPLE QUEUES
A method and system for exchanging network packets in a memory system are provided. A size of each network packet to be transmitted is determined. Each network packet is segregated into one of plural queues based on its size. Each network packet is then transmitted over a shared memory, according to the queue into which it was segregated.
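The size-based segregation step can be sketched in a few lines. The two thresholds (giving small/medium/large queues) are illustrative assumptions; the abstract only says segregation is based on packet size:

```python
def segregate(packets, thresholds=(64, 512)):
    """Assign each packet to one of len(thresholds) + 1 queues by size:
    a packet goes to the first queue whose threshold it fits under,
    and oversized packets go to the last queue."""
    queues = [[] for _ in range(len(thresholds) + 1)]
    for pkt in packets:
        for i, limit in enumerate(thresholds):
            if len(pkt) <= limit:
                queues[i].append(pkt)
                break
        else:
            queues[-1].append(pkt)   # larger than every threshold
    return queues
```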
DECENTRALIZED DATA CENTERS
Example methods and systems are directed to a decentralized computing arrangement including a management system connected to a wide area network. The management system has a publish/subscribe messaging platform and a platform manager to provide an application for installation on edge devices. Each edge device has a wide area network interface to connect to the wide area network thereby to receive configuration data from the management system to install the application on the edge device. The edge device further includes a messaging interface to receive messages from the publish/subscribe messaging platform. The messages control installation of the application and allow communications between the edge device and the management system based on topics.
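The topic-based messaging interface described above can be sketched minimally: edge devices subscribe to topics, and a message published to a topic (for example, an install instruction from the platform manager) reaches every subscriber. The topic naming scheme is an illustrative assumption:

```python
class PubSubPlatform:
    """Minimal topic-based publish/subscribe sketch standing in for the
    management system's messaging platform."""

    def __init__(self):
        self._subs = {}    # topic -> list of subscriber callbacks

    def subscribe(self, topic, callback):
        self._subs.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        for cb in self._subs.get(topic, []):
            cb(message)

class EdgeDevice:
    """An edge device's messaging interface: it subscribes to its own
    install topic, so messages on that topic control installation."""

    def __init__(self, platform, device_id):
        self.installed = []
        platform.subscribe(f"devices/{device_id}/install",
                           self.installed.append)
```

Scoping each device to its own topic is what lets the single platform address devices individually while still using one shared messaging fabric.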
System and Method for Browser Based Polling
A system, method, and computer-readable medium are disclosed for browser-based polling of jobs used to build a web page of a web application. A web browser builds the web page and connects with one or more web services for the jobs used to build it. A reusable service from a library is downloaded and used at the web browser to poll the jobs as they are received from the web services. Polling continues until the download is complete, at which point the web page of the web application is updated.
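The reusable polling service's core loop (poll each job until every download completes, then update the page) can be sketched generically. The `fetch_status` callable, the status strings, and the completion callback are illustrative assumptions; in the patent the loop runs in the browser against web services, and a real client would sleep between polls:

```python
def poll_jobs(jobs, fetch_status, on_complete):
    """Repeatedly poll each pending job until every one reports
    complete, then fire the page-update callback. Returns the number
    of polling rounds performed."""
    pending = set(jobs)
    polls = 0
    while pending:
        polls += 1
        # keep only the jobs whose download has not finished yet
        pending = {j for j in pending if fetch_status(j) != "complete"}
    on_complete()   # update the web page once the download is complete
    return polls
```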