H04L12/5602

System, Apparatus And Method For Traffic Shaping Of Data Communication Via An Interconnect
20190081900 · 2019-03-14

In one embodiment, an apparatus includes: a transmitter to send first data to a device coupled to the apparatus via a physical link; a receiver to receive second data from the device via the physical link; and a control circuit to control the transmitter to send the first data at a first effective rate during a link activation interval of a data transfer interval and to control the receiver to receive the second data at a second effective rate during the link activation interval, the second effective rate different than the first effective rate. Other embodiments are described and claimed.
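The asymmetric effective rates described above can be viewed as duty-cycling: the link runs at its raw rate only during the activation portion of the data transfer interval, and the transmit and receive directions use different activation fractions. A minimal Python sketch of that arithmetic (the rates and fractions below are illustrative, not taken from the claims):

```python
# Hypothetical sketch: the effective rate over a data transfer interval is the
# link's raw rate scaled by the fraction of the interval the link is active.
def effective_rate(raw_rate_bps, active_time, interval):
    """Effective data rate when the link is active for only part of the interval."""
    return raw_rate_bps * (active_time / interval)

# Asymmetric shaping: TX and RX use different activation fractions of the same
# link activation interval, so their effective rates differ.
tx_rate = effective_rate(16e9, active_time=0.5, interval=1.0)   # transmit side
rx_rate = effective_rate(16e9, active_time=0.25, interval=1.0)  # receive side
```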

Virtual quantized congestion notification

Congestion management for data traffic in a virtual domain identifies a congestion source and sends a message to the source to adjust data traffic rates. The source may be a virtual machine hosted by a physical server that incorporates one or more virtual servers. A congestion manager may identify the source and send the message to it without affecting other data sources hosted by the physical server or the virtual servers. In some embodiments, information about the congestion source may be encapsulated in a packet payload readable only by the congestion source, so that only the congestion source receives the instruction to adjust its transmission rate.
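The targeting behavior can be sketched in Python: the notification carries the identity of the congestion source in its payload, and only the matching virtual source acts on it, leaving co-hosted sources untouched. Class and field names here are hypothetical:

```python
class VirtualSource:
    """Hypothetical virtual machine acting as a data traffic source."""
    def __init__(self, vm_id, rate):
        self.vm_id = vm_id
        self.rate = rate

    def handle(self, packet):
        # Only the congestion source named in the payload adjusts its rate;
        # other sources on the same physical server ignore the message.
        if packet["target_vm"] == self.vm_id:
            self.rate *= packet["rate_factor"]

def notify_congestion(sources, congested_vm, factor=0.5):
    """Congestion manager sends one targeted rate-adjustment message."""
    msg = {"target_vm": congested_vm, "rate_factor": factor}
    for source in sources:
        source.handle(msg)
```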

EFFICIENTLY EXECUTING CONCURRENT OPERATIONS THAT DEPEND ON EACH OTHER IN A STREAMING DATA ENVIRONMENT
20180332087 · 2018-11-15

Implementations are provided herein for accepting operations asynchronously in a particular order and efficiently committing them into an append-only log while preserving relative order. Operations that depend on one or more earlier operations in the log are guaranteed to fail, and not be accepted, if any of those prior operations failed. If an operation succeeds, it is guaranteed that all operations it depended on also succeeded.
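The dependency guarantee can be sketched as a commit rule on the log: an operation is accepted only if every operation it depends on has already committed successfully. A minimal Python sketch (the class and its status values are assumptions for illustration):

```python
class AppendOnlyLog:
    """Hypothetical append-only log enforcing dependency-failure propagation."""
    def __init__(self):
        self.status = {}    # op_id -> "ok" or "failed"
        self.entries = []   # committed operations, in order

    def commit(self, op_id, depends_on=()):
        # An operation fails if any operation it depends on is not "ok";
        # thus a successful operation implies all its dependencies succeeded.
        if any(self.status.get(dep) != "ok" for dep in depends_on):
            self.status[op_id] = "failed"
            return False
        self.entries.append(op_id)
        self.status[op_id] = "ok"
        return True
```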

ATOMICALLY COMMITTING RELATED STREAMING DATA ACROSS MULTIPLE DISTRIBUTED RESOURCES

Implementations are provided herein for atomically committing related stream data across multiple, distributed resources. Transactions can be established that are distributed across multiple hosts, and their data can be made to appear atomic to an observing process. A master status for the transaction can be used to flag to other processes that the transaction is being committed. The stream to which the transaction is being appended can be locked until the transaction data is committed in full. It can be appreciated that once an event appended to a transaction is acknowledged, and the transaction is merged, its commitment is guaranteed.
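The master status and stream lock described above can be sketched as follows: the transaction buffers events, flips its status to signal other processes, and holds the stream's lock while merging so observers see all of the data or none of it. Names are hypothetical:

```python
import threading

class Stream:
    """Hypothetical stream whose lock makes a merged transaction appear atomic."""
    def __init__(self):
        self.events = []
        self._lock = threading.Lock()   # held while a transaction commits

class Transaction:
    def __init__(self):
        self.pending = []
        self.status = "open"            # master status visible to other processes

    def append(self, event):
        self.pending.append(event)

    def commit(self, stream):
        self.status = "committing"      # flag that the commit is in progress
        with stream._lock:              # stream locked until committed in full
            stream.events.extend(self.pending)
        self.status = "committed"       # commitment is now guaranteed
```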

DE-MULTIPLEXING STREAMING DATA FROM A FAST WRITE AHEAD DURABLE LOG INTO SEGMENTS

Implementations are provided herein for managing streaming data that is appended and multiplexed into a durable, replicated write-ahead log. By writing data to the durable log, large numbers of small writes can be processed quickly. Data in the durable log can be de-multiplexed and packaged into segment containers. Segment containers can serialize stream-segment-specific data and can be stored in long-term storage. Data that has been stored in long-term storage can be truncated from the durable write-ahead log, making room for new data.
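The de-multiplexing step can be sketched as grouping the interleaved log records by segment, after which the flushed prefix of the log can be truncated. A minimal Python sketch (record layout and function names are assumptions):

```python
from collections import defaultdict

def demultiplex(write_ahead_log):
    """Group multiplexed (segment_id, payload) records into per-segment containers."""
    containers = defaultdict(list)
    for segment_id, payload in write_ahead_log:
        containers[segment_id].append(payload)
    return dict(containers)

def truncate(log, flushed_count):
    # Once the containers are persisted to long-term storage, the flushed
    # prefix of the durable log can be dropped to make room for new data.
    return log[flushed_count:]
```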

ENCODING AND TRANSMITTING STREAM DATA WITHOUT PRIOR KNOWLEDGE OF DATA SIZE
20180332096 · 2018-11-15 · ·

Implementations are provided herein for encoding and transmitting streaming data from a client application to a server for storage without prior knowledge of the size of the streaming data. A header can be sent that includes a batch size and a chunk size. Raw streaming data can be packaged into a chunk. Chunks can be packaged and sent at any time prior to filling up with streaming data, by padding the chunk and including a footer that delineates the amount of raw stream data in the chunk. Chunks that are full can have a footer that delineates that the entire chunk is raw stream data. It can be appreciated that data need not be buffered on the client side, as chunks do not need to be full to send. Latency in processing streaming data can also be reduced by limiting or eliminating buffering.
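A round-trip sketch of the padding-plus-footer idea in Python: a partial chunk is padded to the fixed chunk size and a footer records how many bytes are raw stream data, so the receiver can discard the padding. The 4-byte big-endian footer is an assumption for illustration, not the patented wire format:

```python
def encode_chunk(data: bytes, chunk_size: int) -> bytes:
    """Pad a (possibly partial) chunk and append a footer with the raw length."""
    assert len(data) <= chunk_size
    padding = b"\x00" * (chunk_size - len(data))
    footer = len(data).to_bytes(4, "big")   # hypothetical 4-byte length footer
    return data + padding + footer

def decode_chunk(chunk: bytes, chunk_size: int) -> bytes:
    """Read the footer to recover only the raw stream data, dropping padding."""
    length = int.from_bytes(chunk[chunk_size:], "big")
    return chunk[:length]
```

Because a chunk can be closed out early, the client can ship whatever it has on hand instead of buffering until the chunk fills.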

CONCURRENT READ-WRITE ACCESS TO A STREAM BY A PLURALITY OF CLIENTS USING A STATE SYNCHRONIZER
20180332325 · 2018-11-15

Implementations are provided herein for a set of clients to concurrently have read-write access to a stream where all other clients are aware of all changes being made to the stream. A pravega node can track the data that has been written to the stream. The set of clients can dynamically read the stream. A client among the set of clients can update the stream by sending a request to the pravega node that includes the update and the total length of the stream as of the client's last read update. If the total length of the stream received from the client matches the actual length of the stream maintained by the pravega node, the pravega node will update the stream. If not, a failure message can be sent to the client, and the client can process more reads of the stream before making another attempt to update the stream.
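This is an optimistic concurrency check keyed on stream length: an update is accepted only when the client's view of the length matches the server's. A minimal Python sketch of the node-side check (class and method names are hypothetical, not the Pravega API):

```python
class StateSynchronizerNode:
    """Hypothetical node accepting updates via a compare-on-length check."""
    def __init__(self):
        self.stream = b""

    def conditional_append(self, update: bytes, expected_length: int) -> bool:
        # Reject the update if the client read a stale version of the stream;
        # the client must read the newer data and retry.
        if expected_length != len(self.stream):
            return False
        self.stream += update
        return True
```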

EXECUTING STREAMING DATA WRITES WITHOUT DUPLICATION OR LOSS

Implementations are provided herein for executing streaming data writes without duplication or loss. A client application and a pravega node can work together to track where write data is, how much data has been written, and what specific data has been acknowledged by the pravega node as successfully written. In the event of an error or connection disruption, the client application can reconnect, determine how much data has been written, and resend what data still needs to be written. The data can be written exactly once, and once written and acknowledged, will no longer be subject to data loss.
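The reconnect-and-resend step can be sketched in Python: after a disruption, the writer asks the node how many bytes it holds durably, advances its acknowledged offset to match, and resends only the unacknowledged suffix, yielding exactly-once semantics. Names are hypothetical:

```python
class ExactlyOnceWriter:
    """Hypothetical client-side writer tracking acknowledged bytes."""
    def __init__(self, data: bytes):
        self.data = data
        self.acked = 0   # bytes the node has acknowledged as durably written

    def resume(self, server_length: int) -> bytes:
        # On reconnect the node reports how much data it already has; resend
        # only the remainder, so nothing is duplicated and nothing is lost.
        self.acked = server_length
        return self.data[self.acked:]
```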

ORGANIZING PRESENT AND FUTURE READS FROM A TIERED STREAMING DATA STORAGE LAYER
20180332366 · 2018-11-15

Implementations are provided herein for organizing present and future reads from a tiered streaming data storage layer. Implementations allow for access to multi-tiered streaming data organized in different append-only segments, some of which may be related to each other. Streaming data can be read from fast local tier 1 storage, streaming data can be retrieved from cold tier 2 storage, and registrations can be made to read streaming data that has not yet been written to the storage layer.
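The three read paths can be sketched as a single dispatcher: serve from the fast tier if present, fall back to the cold tier, and otherwise register a future read that fires when the data arrives. A minimal Python sketch with hypothetical names:

```python
class TieredReader:
    """Hypothetical reader over tier 1 (fast), tier 2 (cold), and future reads."""
    def __init__(self, tier1, tier2):
        self.tier1 = tier1      # fast local storage: offset -> data
        self.tier2 = tier2      # cold long-term storage: offset -> data
        self.waiters = {}       # offset -> callbacks awaiting future data

    def read(self, offset, on_ready):
        if offset in self.tier1:
            on_ready(self.tier1[offset])            # present read, tier 1
        elif offset in self.tier2:
            on_ready(self.tier2[offset])            # present read, tier 2
        else:
            # Future read: register for data not yet written to the layer.
            self.waiters.setdefault(offset, []).append(on_ready)

    def on_write(self, offset, data):
        self.tier1[offset] = data
        for callback in self.waiters.pop(offset, []):
            callback(data)                          # satisfy registered reads
```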

DYNAMICALLY SCALING A NUMBER OF STREAM SEGMENTS THAT DYNAMICALLY STORE STREAMING DATA WHILE PRESERVING THE ORDER OF WRITES

Implementations are provided herein for auto-scaling a set of stream segments belonging to a stream. In one implementation, the number of stream segments can be scaled up and down depending on the amount of data ingested from writers. As the number of stream segments changes, writers can have their streaming data transition to a newly merged stream segment or a newly split stream segment. The defined ordering of data as written by the writer is preserved as stream segments are scaled. It can be appreciated that a dynamically scaled stream can offer more capacity than any individual host can provide, while still preserving data order.
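One common way to realize this (used here as an illustrative assumption, not necessarily the claimed mechanism) is to hash each writer's routing key into the current set of segments, so all events with the same key land on a single live segment and keep their write order; when the segment count changes, keys deterministically remap to the merged or split segments:

```python
import hashlib

def segment_for(routing_key: str, num_segments: int) -> int:
    """Map a routing key to one of the stream's current segments.

    Events sharing a routing key always target a single live segment, so the
    writer's ordering for that key is preserved; scaling changes num_segments
    and deterministically remaps keys to the new segment set.
    """
    digest = hashlib.sha256(routing_key.encode()).hexdigest()
    return int(digest, 16) % num_segments
```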