Patent classifications
H04L67/1095
Scaling host policy via distribution
Techniques are disclosed for processing data packets and implementing policies in a software defined network (SDN) of a virtual computing environment. At least two SDN appliances are configured to disaggregate enforcement of policies of the SDN from hosts of the virtual computing environment. The hosts are implemented on servers communicatively coupled to network interfaces of the SDN appliances. The servers host a plurality of virtual machines. The servers are communicatively coupled to network interfaces of at least two top-of-rack switches (ToRs). Each SDN appliance comprises a plurality of smart network interface cards (sNICs) configured to implement functionality of the appliance. The sNICs have a floating network interface configured to provide a virtual port connection to an endpoint within a virtual network of the virtual computing environment.
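The disaggregation described above can be sketched as a toy model in Python. All class and field names here are hypothetical illustrations of the idea (an appliance enforcing per-endpoint policies on behalf of hosts, with a floating interface mapping a virtual port to a VM endpoint), not the patented implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Policy:
    """A simple allow/deny rule keyed on destination address (hypothetical)."""
    dst: str
    action: str  # "allow" or "deny"

@dataclass
class FloatingNic:
    """Floating network interface: a virtual port mapped to a VM endpoint."""
    virtual_port: str
    endpoint: str

@dataclass
class SdnAppliance:
    """Enforces SDN policies on behalf of hosts, so hosts need not do so."""
    snics: list = field(default_factory=list)
    policies: dict = field(default_factory=dict)  # endpoint -> [Policy]

    def attach(self, nic, policies):
        self.snics.append(nic)
        self.policies[nic.endpoint] = policies

    def process(self, endpoint, packet):
        """Apply the endpoint's policies before traffic reaches the host."""
        for rule in self.policies.get(endpoint, []):
            if rule.dst == packet["dst"]:
                return rule.action
        return "deny"  # default-deny when no rule matches

appliance = SdnAppliance()
appliance.attach(FloatingNic("vport-1", "vm-a"),
                 [Policy(dst="10.0.0.5", action="allow")])

print(appliance.process("vm-a", {"dst": "10.0.0.5"}))  # allow
print(appliance.process("vm-a", {"dst": "10.0.0.9"}))  # deny
```

Because policy evaluation lives in the appliance rather than on each server, the same rule set can serve many hosts, which is the scaling point of the claim.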
System and method for cloud deployment optimization
Systems and methods of cloud deployment optimization are disclosed. In some example embodiments, a method comprises running original instances of an application concurrently on original servers to implement an online service; receiving, by the original instances of the application, original requests for one or more functions of the online service; receiving a command to deploy a number of additional instances of the application; transmitting synthetic requests for the function(s) of the online service to one of the original servers according to a predetermined optimization criterion; deploying the number of additional instances of the application on additional servers using a copy of the original instance of the application; and running the deployed additional instances of the application on their corresponding additional servers concurrently with the original instances of the application being run on their corresponding original servers.
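A minimal sketch of the flow above: synthetic requests warm one original server so the copy taken from it reflects the service's functions, and additional instances are then deployed from that copy. The function names and the round-robin coverage criterion are assumptions for illustration, not the patent's actual optimization criterion:

```python
def send_synthetic_requests(server, functions, n):
    """Warm one original server with synthetic requests for the service's
    functions (hypothetical criterion: round-robin coverage of each function)."""
    for i in range(n):
        server["handled"].append(("synthetic", functions[i % len(functions)]))

def deploy_additional_instances(original, count):
    """Deploy additional instances on additional servers, each started
    from a copy of the warmed original instance."""
    return [{"id": f"copy-{i}", "handled": list(original["handled"])}
            for i in range(count)]

original = {"id": "orig-0", "handled": []}
send_synthetic_requests(original, ["search", "checkout"], n=4)
clones = deploy_additional_instances(original, count=3)
# The clones then run concurrently with the original instances.
```

The design choice sketched here is that warming happens once, before copying, so every additional instance inherits the warmed state instead of paying the warm-up cost itself.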
Synchronizing media content streams for live broadcasts and listener interactivity
A creator establishes a media program including a plurality of media files or content in accordance with a broadcast plan. Mixing systems establish connections with a mobile device of the creator, and with sources of the audio files or content, either manually or automatically in response to instructions of the creator. Broadcast systems establish connections with computer devices of listeners and a mixing system. Conference systems establish connections with the mobile device of the creator and a mixing system. The connections established may be one-way or two-way channels. The media program may include live or previously recorded audio content including words or sounds of the creator, as well as advertisements, music, news, sports, weather, or other programming. The media program may also include live or previously recorded interviews or other conversations with guests, including but not limited to one or more listeners.
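The connection topology described above (mixing systems joined to creator devices and content sources over one-way or two-way channels, producing a single program stream) can be sketched as follows; all names and the timestamp-ordering merge are illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class Channel:
    """A connection between systems; may be one-way or two-way."""
    src: str
    dst: str
    two_way: bool = False

@dataclass
class MixingSystem:
    channels: list = field(default_factory=list)

    def connect(self, src, dst, two_way=False):
        ch = Channel(src, dst, two_way)
        self.channels.append(ch)
        return ch

    def mix(self, sources):
        """Merge media items from all connected sources in timestamp order,
        a stand-in for synchronizing streams per the broadcast plan."""
        return sorted((item for s in sources for item in s),
                      key=lambda item: item["ts"])

mixer = MixingSystem()
mixer.connect("creator-phone", "mixer", two_way=True)   # creator can talk back
mixer.connect("music-source", "mixer")                  # one-way content feed

program = mixer.mix([
    [{"ts": 0, "content": "intro"}, {"ts": 8, "content": "interview"}],
    [{"ts": 4, "content": "music"}],
])
# program order: intro, music, interview
```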
Space-efficient techniques for generating unique instances of data objects
A set of data units associated with a data object is obtained, such that respective instances of the data object can be reconstructed from respective subsets of the set. Corresponding to a request for the data object, a first subset of the set is identified. The first subset meets a uniqueness criterion with respect to other subsets of the set that are used to respond to other requests for the data object. An instance of the data object is reconstructed from the first subset.
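A toy model of the subset scheme above, under a simplifying assumption not stated in the abstract: the object is split into chunks, each stored as two tagged replicas, and a valid subset picks one replica per chunk. Every such subset reconstructs the object, while the combination of replica tags makes each subset unique with respect to the others:

```python
import itertools

def make_units(data, replicas=2):
    """Split `data` into 2-byte chunks and store `replicas` tagged copies of
    each chunk (a toy stand-in for the patented encoding of data units)."""
    chunks = [data[i:i + 2] for i in range(0, len(data), 2)]
    return [[(idx, rid, c) for rid in range(replicas)]
            for idx, c in enumerate(chunks)]

def unique_subsets(units):
    """Yield subsets that each reconstruct the object; no two subsets share
    the same combination of replica ids (the uniqueness criterion)."""
    yield from itertools.product(*units)

def reconstruct(subset):
    """Rebuild an instance of the object from one subset of units."""
    return "".join(c for _idx, _rid, c in sorted(subset))

units = make_units("abcdef")
seen = set()
for subset in unique_subsets(units):
    assert reconstruct(subset) == "abcdef"   # every subset yields the object
    ids = tuple(rid for _i, rid, _c in subset)
    assert ids not in seen                   # yet each subset is distinct
    seen.add(ids)
print(len(seen))  # 8 distinct subsets for 3 chunks x 2 replicas
```

The space efficiency in the title comes from the combinatorics: with n chunks and r replicas, r**n distinguishable instances exist while storing only r copies of the data, rather than one full copy per instance.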
Remote shared content experience systems
A system provides a “virtual room” for remotely sharing content experiences via electronic devices at different locations. The system may enable synchronization of the content at the different locations, control access, allow people to provide and/or experience interaction feedback regarding the content, control the interaction feedback that is provided and/or experienced, enhance the ability of people to distinguish the content from the interaction feedback, and so on. As such, people may be able to share content experiences more as if they were present in a single location while remote from each other.
Intelligently pre-positioning and migrating compute capacity in an overlay network, with compute handoff and data consistency
Edge server compute capacity demand in an overlay network is predicted and used to pre-position compute capacity in advance of application-specific demands. Preferably, machine learning is used to proactively predict anticipated compute capacity needs for an edge server region (e.g., a set of co-located edge servers). In advance, compute capacity (application instances) is made available in-region, and data associated with an application instance is migrated to be close to the instance. The approach facilitates compute-at-the-edge services, which require data (state) to be close to a pre-positioned latency-sensitive application instance. Overlay network mapping (globally) may be used for more long-term positioning, with short-duration scheduling then being done in-region as needed. Compute instances and associated state are migrated intelligently based on predicted (e.g., machine-learned) demand, and with full data consistency enforced.
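The predict-then-pre-position loop above can be sketched as follows. An exponentially weighted moving average stands in for the machine-learned predictor, and the region names, capacity figure, and plan shape are all illustrative assumptions:

```python
def predict_demand(history, alpha=0.5):
    """Forecast next-interval demand per region from recent samples.
    An EWMA is a hypothetical stand-in for the machine-learned predictor."""
    forecast = {}
    for region, samples in history.items():
        est = samples[0]
        for s in samples[1:]:
            est = alpha * s + (1 - alpha) * est
        forecast[region] = est
    return forecast

def pre_position(forecast, capacity_per_instance=100):
    """Decide how many application instances to stand up in each region in
    advance of demand, migrating state alongside each pre-positioned instance."""
    plan = {}
    for region, demand in forecast.items():
        instances = -(-int(demand) // capacity_per_instance)  # ceiling division
        plan[region] = {"instances": instances,
                        "migrate_state": instances > 0}
    return plan

history = {"us-east": [120, 180, 260], "eu-west": [40, 30, 20]}
plan = pre_position(predict_demand(history))
# us-east's rising demand yields more pre-positioned instances than eu-west's
```

Short-duration, in-region scheduling would then run against this longer-term plan, consistent with the global-mapping/in-region split described above.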