Patent classifications
G06F9/5038
TECHNIQUES FOR IMPLEMENTING ROLLBACK OF INFRASTRUCTURE CHANGES IN A CLOUD INFRASTRUCTURE ORCHESTRATION SERVICE
Techniques for implementing rollback of infrastructure changes in an infrastructure orchestration service are described. In certain examples, an infrastructure orchestration service is disclosed that manages both the provisioning and deployment of infrastructure assets within a cloud environment. The service receives a plan comprising a set of instructions associated with a set of infrastructure assets of an execution target and identifies a first state of the set of infrastructure assets. The service executes the set of instructions in the plan to achieve a second state for the set of infrastructure assets. Based in part on the executing, the service receives a trigger for rolling back the plan to restore the set of infrastructure assets in the plan to the first state and executes a rollback plan for the plan. The service then transmits a result associated with the execution of the rollback plan.
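The abstract above describes capturing a first state, executing a plan to reach a second state, and restoring the first state when a rollback trigger is received. A minimal Python sketch of that flow (all names are illustrative, not from the patent) might look like:

```python
# Hypothetical sketch: execute a plan over a set of assets, then roll back
# to the captured first state when a trigger fires. Not the patented method.

def execute_with_rollback(assets, plan, should_rollback):
    """Apply `plan` (a dict of asset -> new value) to `assets`; if the
    rollback trigger fires after executing, restore the first state."""
    first_state = dict(assets)       # identify and capture the first state
    assets.update(plan)              # execute the plan -> second state
    if should_rollback(assets):      # trigger received based on the executing
        assets.clear()
        assets.update(first_state)   # rollback plan restores the first state
        return "rolled_back"         # result of the rollback execution
    return "applied"
```

A real orchestration service would persist the first state durably and derive the rollback plan from it, rather than holding it in memory.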
METHODS AND APPARATUS TO HANDLE DEPENDENCIES ASSOCIATED WITH RESOURCE DEPLOYMENT REQUESTS
An example apparatus includes a dependency graph generator to generate a dependency graph based on a resource request file specifying a first resource and a second resource to deploy to a resource-based service, the dependency graph representative of the first resource being dependent on the second resource; a verification controller to generate a status indicator after a determination that a time-based ordering of a first request relative to a second request satisfies the dependency graph; and a resource controller to cause transmission of the first request and the second request to the resource-based service based on the dependency graph and, after determining that the time-based ordering of the first request relative to the second request satisfies the dependency graph, cause transmission of the status indicator to a user device.
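The two checks above, building a dependency graph from a request file and verifying that the time-based ordering of requests satisfies it, can be sketched as follows (a simplification with hypothetical names, not the patented apparatus):

```python
# Illustrative sketch: dependency graph from a request file, plus a check
# that request send times respect the graph. Names are hypothetical.

def build_dependency_graph(request_file):
    """request_file maps each resource to the resources it depends on."""
    return dict(request_file)

def ordering_satisfies(graph, send_times):
    """True when every dependency was requested strictly before its
    dependent, i.e. the time-based ordering satisfies the graph."""
    return all(send_times[dep] < send_times[res]
               for res, deps in graph.items() for dep in deps)
```

The verification controller in the abstract would emit its status indicator only when `ordering_satisfies` holds.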
Systems and methods for identifying a set of characters in a media file
The illustrative embodiments described herein provide systems and methods for notifying a user when a set of characters is identified in a media file. In one embodiment, a method includes receiving a set of characters input by the user of a computing device, playing the media file, transcribing the media file to form a transcription, and determining whether the transcription of the media file includes the set of characters. The method also includes initiating a notification prompt on a graphical user interface of the computing device in response to determining that the transcription includes the set of characters.
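The core determination step, matching the user's characters against a transcription and deciding whether to notify, reduces to a substring check. A minimal sketch (illustrative only; the notification text is invented):

```python
# Hypothetical sketch: decide whether a transcription triggers a
# notification for the user's set of characters.

def check_transcription(transcription, target):
    """Return a notification message when `target` appears in the
    transcription (case-insensitive), or None otherwise."""
    if target.lower() in transcription.lower():
        return f"Found '{target}' in media"
    return None
```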
METHOD AND SYSTEM FOR PERFORMING DISTRIBUTED COMPUTER VISION WORKLOADS IN A COMPUTER VISION ENVIRONMENT
Techniques described herein relate to a method for managing a computer vision (CV) environment. The method includes identifying a CV alert; generating a CV alert case associated with the CV alert; identifying nearby CV nodes among a plurality of CV nodes; and transmitting the CV alert to the nearby CV nodes. Each of the nearby CV nodes then receives the CV alert and determines, based on its CV environment configuration information and the CV alert, whether to perform a distributed CV workload. When the determination is to perform the distributed CV workload, the nearby CV node initiates performance of the distributed CV workload to generate CV data, updates the CV alert case using the CV data generated during the performance of the distributed CV workload to obtain an updated CV alert case, and transmits the updated CV alert case to the VMS.
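The dispatch loop described above, where each nearby node consults its own configuration to decide whether to run the workload and contributes CV data to the shared alert case, can be sketched like this (node and field names are hypothetical):

```python
# Illustrative sketch: fan a CV alert out to nearby nodes; each node
# decides from its own configuration whether to run the distributed
# workload, and results are folded into the alert case.

def handle_cv_alert(alert, nearby_nodes):
    """Build a CV alert case, dispatch to nearby nodes, and collect
    the CV data each accepting node generates."""
    case = {"alert": alert, "cv_data": []}
    for node in nearby_nodes:
        # per-node determination based on CV environment configuration
        if node["config"].get("accepts", False):
            cv_data = f"{node['id']}:{alert}"   # stand-in for workload output
            case["cv_data"].append(cv_data)     # update the alert case
    return case                                  # sent back to the VMS
```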
AUTOMATIC BACKUP DISTRIBUTION FOR CLUSTERED DATABASES
A data management platform may receive, from a user of a data management platform, a first job request to perform a backup of data from a data source to a database managed by the user. In some examples, the database may be configured as a set of database instances running on a set of computing nodes of a computing cluster. The data management platform may store a backup load indication that indicates which computing node is assigned to perform the backup of the data based on receiving the first job request. The data management platform may receive one or more second job requests subsequent to receiving the first job request and may determine a backup load for one or more computing nodes of the set of computing nodes. The data management platform may then assign one or more target computing nodes for performing the one or more second job requests.
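The assignment logic above, tracking a backup load per computing node and picking target nodes for subsequent job requests, amounts to least-loaded selection. A minimal sketch under that assumption (names are illustrative, not from the patent):

```python
# Hypothetical sketch: assign an incoming backup job to the computing
# node in the cluster with the smallest recorded backup load.

def assign_backup_node(loads):
    """loads maps node -> current backup load. Pick the least-loaded
    node, record the new load indication, and return the node."""
    node = min(loads, key=loads.get)   # determine per-node backup load
    loads[node] += 1                   # store the backup load indication
    return node
```

Repeated calls naturally spread successive job requests across the cluster's database instances.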
SAFE CRITICAL SECTION OPERATIONS FOR VIRTUAL MACHINES WITH VIRTUAL CENTRAL PROCESSING UNIT OVERCOMMIT
Safe critical section operations for virtual machines with virtual central processing unit overcommit are provided by: in response to identifying a preempting task to run on a first physical central processing unit (PCPU) from a second PCPU, setting a status of a flag in a virtual memory used by a first virtual central processing unit (VCPU) running on the first PCPU to indicate that the preempting task will interrupt the first VCPU; in response to initiating execution of a read-side critical section operation scheduled by the first VCPU to run on the first PCPU, checking the status of the flag in the virtual memory; and in response to the status of the flag being positive: exiting the first VCPU to a hypervisor; executing, by the hypervisor, the preempting task on the first PCPU; and after completing the preempting task, continuing execution of the read-side critical section operation.
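The control flow above, checking the preemption flag before the read-side critical section and letting the preempting task run first when the flag is set, can be sketched as follows (a user-space simplification with invented names; the real mechanism involves a VCPU exit to the hypervisor):

```python
# Illustrative sketch: check a preemption flag before entering a
# read-side critical section, running the preempting task first if set.

def run_critical_section(flag, preempting_task, critical_section):
    """If the flag indicates a pending preempting task, run it to
    completion first, then run the critical section uninterrupted."""
    if flag["preempt_pending"]:        # flag set when a task targets this PCPU
        preempting_task()              # stand-in for the hypervisor step
        flag["preempt_pending"] = False
    return critical_section()          # continue the read-side operation
```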
LOCKING AND SYNCHRONIZATION FOR HIERARCHICAL RESOURCE RESERVATION IN A DATA CENTER
An example method of reserving a resource of virtualized infrastructure in a data center on behalf of a client includes: obtaining, by a resource lock manager from a topology manager, a sub-topology for the resource from a resource topology of the virtualized infrastructure; setting, by the resource lock manager, an exclusive lock on the resource and on each of at least one descendant in the sub-topology for the resource, each exclusive lock disallowing any other lock on its respective resource; setting, by the resource lock manager, a limited lock on each ancestor in the sub-topology for the resource, each limited lock allowing any other limited lock on its respective resource; and notifying the client that a reservation of the resource is granted.
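The locking rule above, an exclusive lock on the resource and its descendants, and a limited lock on each ancestor that tolerates other limited locks, can be sketched over a simple parent map (all names are illustrative, not from the patent):

```python
# Hypothetical sketch: hierarchical reservation with exclusive locks on
# a subtree and limited locks on ancestors. parent_of maps child -> parent.

def ancestors(parent_of, resource):
    """Walk up the topology from `resource` to the root."""
    chain = []
    while resource in parent_of:
        resource = parent_of[resource]
        chain.append(resource)
    return chain

def reserve(parent_of, resource, locks):
    """Grant the reservation if no conflicting locks exist; exclusive
    disallows any other lock, limited allows other limited locks."""
    subtree = [resource] + [r for r in parent_of
                            if resource in ancestors(parent_of, r)]
    for r in subtree:
        if r in locks:                       # exclusive region must be lock-free
            return False
    for a in ancestors(parent_of, resource):
        if locks.get(a) == "exclusive":      # limited coexists only with limited
            return False
    for r in subtree:
        locks[r] = "exclusive"
    for a in ancestors(parent_of, resource):
        locks[a] = "limited"
    return True
```

A production implementation would also count outstanding limited locks per ancestor so they can be released individually.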
Method and apparatus for stateless parallel processing of tasks and workflows
In a method for parallel processing of a data stream, a processing task is received to process the data stream, which includes a plurality of segments. A split operation is performed on the data stream to split the plurality of segments into N sub-streams, where N is a positive integer and each of the N sub-streams includes one or more segments of the plurality of segments. N sub-processing tasks are performed on the N sub-streams to generate N processed sub-streams. A merge operation is performed on the N processed sub-streams based on a merge buffer to generate a merged output data stream. The merge buffer includes an output iFIFO buffer and N sub-output iFIFO buffers coupled to the output iFIFO buffer. The merged output data stream is identical to the output data stream that would be generated if the processing task were applied directly to the data stream without the split operation.
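The split/process/merge invariant above, that the merged output equals direct application of the task, can be demonstrated with a small sketch (a round-robin split standing in for the iFIFO buffers; names are illustrative):

```python
# Illustrative sketch: split segments into N sub-streams, process each
# sub-stream independently, then merge back into original segment order.

def process_stream(segments, n, task):
    """Split, process, and merge; output order matches the input order,
    as if `task` had been applied directly to the stream."""
    subs = [segments[i::n] for i in range(n)]            # round-robin split
    processed = [[task(s) for s in sub] for sub in subs]  # N sub-tasks
    # merge buffer: interleave sub-outputs back into segment order
    return [processed[i % n][i // n] for i in range(len(segments))]
```

In the patented scheme the sub-tasks run in parallel and the iFIFO buffers enforce this ordering at merge time; here the ordering is reconstructed arithmetically.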
SYSTEM FOR HIGH PERFORMANCE ON-DEMAND VIDEO TRANSCODING
The Cloud-based Video Streaming Service (CVSS) architecture is disclosed to transcode video streams in an on-demand manner. The architecture provides a platform for streaming service providers to utilize cloud resources in a cost-efficient manner and with respect to the Quality of Service (QoS) demands of video streams. In particular, the architecture includes a QoS-aware scheduling method to efficiently map video streams to cloud resources, and a cost-aware dynamic (i.e., elastic) resource provisioning policy that adapts the resource acquisition with respect to the video streaming QoS demands. Simulation results based on realistic cloud traces and under various workload conditions demonstrate that the CVSS architecture can satisfy video streaming QoS demands and reduce the incurred cost of stream providers by up to 70%.
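One plausible reading of the QoS-aware scheduling and elastic provisioning described above is earliest-deadline-first placement with on-demand VM acquisition when a deadline would be missed. The sketch below assumes that model and invents all names and the load/deadline accounting; it is not the CVSS method itself:

```python
# Hypothetical sketch: map streams to the least-loaded VM in deadline
# order, acquiring a new VM when the queued load would violate QoS.

def schedule(streams, vms):
    """streams: dicts with 'deadline' and 'cost' (both in the same time
    units); vms: dicts with accumulated 'load'. Mutates and returns vms."""
    for stream in sorted(streams, key=lambda s: s["deadline"]):  # QoS-aware order
        vm = min(vms, key=lambda v: v["load"])
        if vm["load"] + stream["cost"] > stream["deadline"]:
            vm = {"load": 0}          # elastic provisioning: acquire a VM
            vms.append(vm)
        vm["load"] += stream["cost"]
    return vms
```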
DYNAMIC DISTRIBUTION OF A COMPUTATIONAL GRAPH
Dynamic distribution of a computational graph that defines a set of operations comprising a first subset of one or more operations and a second subset of one or more operations. In one aspect, there is a method for generating output data based on the computational graph. The method includes a first device storing information related to the computational graph, the information related to the computational graph comprising information representing the first subset of operations. The method also includes the first device receiving input data and the first device performing the first subset of operations using the received input data, thereby producing first output data corresponding to the first subset of operations. The method further includes the first device exposing the first output data as a discoverable resource so that the first output data is discoverable by other devices.