Patent classifications
H04L12/879
Systems and methods for extending internal endpoints of a network device
An integrated circuit (IC) device includes a network device. The network device includes first and second network ports each configured to connect to a network, and an internal endpoint port configured to connect to a first endpoint having a first processing unit and a second endpoint having a second processing unit. A lookup circuit is configured to provide a first forwarding decision for a first frame to be forwarded to the first endpoint. An endpoint extension circuit is configured to determine a first memory channel based on the first forwarding decision for forwarding the first frame, and forward the first frame to the first endpoint using the determined first memory channel.
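The flow the abstract describes — a forwarding decision selecting a memory channel toward an internal endpoint — can be modeled in a minimal Python sketch. All names (`EndpointExtension`, `channel_map`, `forward`) are illustrative assumptions, not terms from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class EndpointExtension:
    # Maps a forwarding decision (e.g. a destination endpoint id) to a
    # memory-channel index; the mapping is hypothetical, for illustration.
    channel_map: dict
    channels: dict = field(default_factory=dict)

    def forward(self, decision: int, frame: bytes) -> int:
        channel = self.channel_map[decision]           # pick the memory channel
        self.channels.setdefault(channel, []).append(frame)
        return channel

# decision 0 -> channel 2, decision 1 -> channel 3 (arbitrary example values)
ext = EndpointExtension(channel_map={0: 2, 1: 3})
assert ext.forward(0, b"frame-A") == 2
```

The key idea is the indirection: the lookup circuit decides *where* a frame goes, and the endpoint extension circuit separately decides *over which memory channel* it gets there.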
CONFIGURABLE RECEIVE BUFFER SIZE
Examples described herein relate to a network interface device comprising circuitry to: allocate a first number of buffers to store received packets associated with a first descriptor ring; allocate a second number of buffers to store received packets associated with a second descriptor ring; and based on receipt of a packet, copy the received packet into a number of buffers based on whether the received packet is associated with the first descriptor ring or the second descriptor ring. In some examples, the circuitry is to copy the received packet starting at an offset from a start of a starting buffer in a number of buffers, wherein the offset is based on whether the received packet is associated with the first descriptor ring or the second descriptor ring and wherein the number of buffers is based on whether the received packet is associated with the first descriptor ring or the second descriptor ring.
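The per-ring buffer-count and offset policy can be sketched as follows, assuming a fixed buffer size and a simple policy table; `RING_POLICY`, `place_packet`, and the specific sizes are illustrative, not from the source:

```python
BUFFER_SIZE = 8  # illustrative per-buffer size in bytes

# Per-descriptor-ring policy: how many buffers a packet may span and the
# offset at which the copy starts in the first buffer (assumed names).
RING_POLICY = {
    "ring0": {"num_buffers": 2, "offset": 0},
    "ring1": {"num_buffers": 4, "offset": 2},
}

def place_packet(ring, packet):
    """Copy `packet` into the ring's buffers, starting at the ring's
    offset within the first buffer, spilling into subsequent buffers."""
    policy = RING_POLICY[ring]
    nbuf, off = policy["num_buffers"], policy["offset"]
    if len(packet) > nbuf * BUFFER_SIZE - off:
        raise ValueError("packet exceeds ring's buffer allocation")
    buffers = [bytearray(BUFFER_SIZE) for _ in range(nbuf)]
    i = 0
    for b, buf in enumerate(buffers):
        start = off if b == 0 else 0       # offset applies to first buffer only
        chunk = packet[i:i + (BUFFER_SIZE - start)]
        buf[start:start + len(chunk)] = chunk
        i += len(chunk)
        if i >= len(packet):
            break
    return buffers
```

The offset leaves headroom in the first buffer (e.g. for prepending metadata), and both the headroom and the buffer count differ per ring, which is the configurability the abstract claims.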
Network packet receiving apparatus and method
A network packet receiving device that includes packet queues, a credit allocation circuit and a credit management circuit is provided. Each of the packet queues corresponds to a packet transmission channel and receives packets. The credit allocation circuit calculates the packet amount of each of the packet queues to control the descriptor credit of each of the packet queues within a credit range. The credit management circuit points each of the public entries of a public link list to one of the descriptors in a single descriptor buffer. The credit management circuit further receives a credit requesting command from the packet queues to assign the descriptors to the packet queues through the public entries under the condition that the descriptor credit is within the credit range, such that a DMA circuit performs a DMA operation on the packets according to the descriptors.
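The shared-descriptor-pool-with-credits scheme can be sketched in a few lines of Python. The class and method names (`CreditManager`, `request_credit`) and the credit policy details are assumptions for illustration; the abstract only fixes the overall shape:

```python
class CreditManager:
    """Sketch: descriptors live in one shared buffer and are handed to
    queues through public entries, with each queue's descriptor credit
    capped so it stays within the credit range."""

    def __init__(self, num_descriptors, credit_max):
        self.free = list(range(num_descriptors))  # indices into the buffer
        self.credit_max = credit_max
        self.credits = {}                         # queue -> descriptors held

    def request_credit(self, queue):
        held = self.credits.setdefault(queue, [])
        if not self.free or len(held) >= self.credit_max:
            return None                           # outside credit range: deny
        d = self.free.pop()
        held.append(d)
        return d                                  # descriptor for DMA use

    def release(self, queue, d):
        self.credits[queue].remove(d)
        self.free.append(d)
```

Pooling descriptors this way lets busy queues draw from one buffer instead of statically partitioning it, while the per-queue cap prevents a single queue from starving the others.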
SYSTEMS AND METHODS FOR ENHANCING OR AFFECTING NEURAL STIMULATION EFFICIENCY AND/OR EFFICACY
Systems and methods for enhancing or affecting neural stimulation efficiency and/or efficacy are disclosed. In one embodiment, a system and/or method may apply electromagnetic stimulation to a patient's nervous system over a first time domain according to a first set of stimulation parameters, and over a second time domain according to a second set of stimulation parameters. The first and second time domains may be sequential, simultaneous, or nested. Stimulation parameters may vary in accordance with one or more types of duty cycle, amplitude, pulse repetition frequency, pulse width, spatiotemporal, and/or polarity variations. Stimulation may be applied at subthreshold, threshold, and/or suprathreshold levels in one or more periodic, aperiodic (e.g., chaotic), and/or pseudo-random manners. In some embodiments stimulation may comprise a burst pattern having an interburst frequency corresponding to an intrinsic brainwave frequency, and regular and/or varying intraburst stimulation parameters. Stimulation signals providing reduced power consumption with at least adequate symptomatic relief may be applied prior to moderate or significant power source depletion.
RESOURCE SHARING IN A TELECOMMUNICATIONS ENVIRONMENT
A transceiver is designed to share memory and processing power amongst a plurality of transmitter and/or receiver latency paths, in a communications transceiver that carries or supports multiple applications. For example, the transmitter and/or receiver latency paths of the transceiver can share an interleaver/deinterleaver memory. This allocation can be done based on the data rate, latency, BER, impulse noise protection requirements of the application, data or information being transported over each latency path, or in general any parameter associated with the communications system.
Computer remote indirect memory access system
A remote indirect memory access system and method for networked computer servers. The system comprises a network interface card having a network interface memory and a system memory operatively connected to the network interface card. The system memory has a plurality of electronic memory queues, wherein each of the memory queues corresponds to one of a plurality of receive processes in the computer server, with each of the memory queues having a corresponding head pointer and tail pointer. Each of the memory queues is assigned to receive electronic messages from a plurality of sender computers. The NIC comprises a tail pointer table, with the tail pointer table comprising initial memory location data of the tail pointers for the memory queues. The memory location data is referenced by corresponding queue identifiers.
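The head/tail-pointer queues with a NIC-side tail-pointer table can be sketched as below. This is a software model under assumed names (`MemoryQueue`, `Nic`, `deliver`), not the patent's hardware design:

```python
class MemoryQueue:
    """Ring buffer with head/tail pointers, one per receive process."""
    def __init__(self, capacity):
        self.slots = [None] * capacity
        self.head = 0                 # receive process consumes here
        self.tail = 0                 # NIC produces here

    def read(self):
        msg = self.slots[self.head % len(self.slots)]
        self.head += 1
        return msg

class Nic:
    """NIC-side tail-pointer table keyed by queue identifier (sketch)."""
    def __init__(self, queues):
        self.queues = queues
        self.tail_table = {qid: q.tail for qid, q in queues.items()}

    def deliver(self, qid, msg):
        q = self.queues[qid]
        t = self.tail_table[qid]              # look up tail by queue id
        q.slots[t % len(q.slots)] = msg
        self.tail_table[qid] = q.tail = t + 1
```

The indirection is the point: senders target a queue identifier, the NIC resolves it to a tail location, and the receive process never exposes raw memory addresses to remote peers.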
Streaming platform reader
A streaming platform reader includes: a plurality of reader threads configured to retrieve messages from a plurality of partitions of a streaming platform, wherein each message in the plurality of partitions is associated with a unique identifier; a plurality of queues coupled to the plurality of reader threads configured to store messages or an end of partition signal from the reader threads, wherein each queue includes a first position that stores the earliest message stored by that queue; a writer thread controlled by gate control logic that: compares the identifiers of all of the messages in the first positions of the queues of the plurality of queues, and forwards, to a memory, the message associated with the earliest identifier; and wherein the gate control logic blocks the writer thread unless each of the queues contains a message or an end of partition signal.
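One step of the gated writer thread can be sketched as a minimum-identifier merge over the queue heads. Queues are modeled as plain lists of `(identifier, payload)` tuples; the sentinel and function names are illustrative assumptions:

```python
END_OF_PARTITION = object()   # sentinel a reader thread enqueues at partition end

def writer_step(queues, out):
    """Forward the message with the earliest identifier across all queue
    heads; return False (gate blocks) unless every queue holds a message
    or an end-of-partition signal."""
    if any(not q for q in queues):
        return False                               # gate: some queue is empty
    heads = [(q[0], q) for q in queues if q[0] is not END_OF_PARTITION]
    if not heads:
        return False                               # every partition exhausted
    _, q = min(heads, key=lambda h: h[0][0])       # earliest identifier wins
    out.append(q.pop(0))
    return True
```

Blocking until every queue is non-empty (or finished) is what guarantees a globally ordered output: the writer never forwards an identifier while a lagging partition might still produce an earlier one.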
Methods and systems for streaming data packets on peripheral component interconnect (PCI) and on-chip bus interconnects
A method and architecture to write data between a source and destination by memory mapped writes or streaming packets between any of a host, a peripheral or a sub-peripheral device. A stream address is used to write the data to a memory of the destination without the source being aware of physical addresses of destination memory, i.e., memory descriptors or pointers are not used, allowing the destination to manage its own memory. The stream address may enable streaming data packets over interconnects that may not allow packet streaming by dividing a data packet into data chunks and including a stream address for each chunk. The stream address for a given packet includes a repeated first portion indicating the destination and a varied second portion indicating variable information for each data chunk such as start of packet (SoP) and end of packet (EoP) identifiers.
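The chunking with a fixed destination portion and a varied SoP/EoP portion can be sketched as follows; the flag encoding and names (`stream_chunks`, bit values) are assumptions, since the abstract specifies only that the varied portion carries per-chunk information such as SoP and EoP:

```python
SOP, EOP = 0x1, 0x2   # illustrative flag bits in the varied address portion

def stream_chunks(dest_id, packet, chunk_size):
    """Split `packet` into chunks, pairing each with a stream address whose
    repeated first portion names the destination and whose varied second
    portion flags start-of-packet and end-of-packet chunks."""
    chunks = [packet[i:i + chunk_size]
              for i in range(0, len(packet), chunk_size)]
    out = []
    for i, data in enumerate(chunks):
        flags = (SOP if i == 0 else 0) | (EOP if i == len(chunks) - 1 else 0)
        out.append(((dest_id, flags), data))       # (stream address, chunk)
    return out
```

Because every chunk carries the destination in its address, the interconnect can treat each chunk as an ordinary memory-mapped write, while the SoP/EoP flags let the destination reassemble the packet without ever exporting descriptors or pointers to the source.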
Method and apparatus for using multiple linked memory lists
An apparatus and method for queuing data to a memory buffer. The method includes selecting a queue from a plurality of queues; receiving a token of data from the selected queue; and requesting, by a queue module, addresses and pointers from a buffer manager for addresses allocated by the buffer manager for storing the token of data. Subsequently, a memory list is accessed by the buffer manager, and addresses and pointers are generated for allocated addresses in the memory list, which comprises a plurality of linked memory lists for additional address allocation. The method further includes writing into the accessed memory list the pointers for the allocated addresses, where the pointers link together the allocated addresses; migrating to other memory lists for additional address allocations upon receipt of subsequent tokens of data from the queue; and generating additional pointers linking together the allocated addresses in the other memory lists.
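The linked allocation across multiple memory lists can be sketched as a buffer manager that chains allocated addresses with next-pointers and migrates to the next list when one is drained. Class and method names are illustrative, not from the patent:

```python
class BufferManager:
    """Sketch: each allocated address is linked by a pointer to the next,
    so a token's storage forms a chain that can span memory lists."""

    def __init__(self, list_sizes):
        # Several linked memory lists with globally unique addresses;
        # allocation migrates to the next list when one empties.
        self.free_lists, base = [], 0
        for size in list_sizes:
            self.free_lists.append(list(range(base, base + size)))
            base += size

    def allocate_chain(self, count):
        addrs, pointers = [], {}
        for _ in range(count):
            lst = next((l for l in self.free_lists if l), None)
            if lst is None:
                raise MemoryError("no free addresses in any memory list")
            a = lst.pop()
            if addrs:
                pointers[addrs[-1]] = a   # link previous address to this one
            addrs.append(a)
        return addrs, pointers
```

Linking the lists lets a single logical allocation grow beyond any one list's capacity: the pointer chain, not list boundaries, defines the token's storage.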