Patent classifications
G06F13/161
Method to Handle Host, Device, and Link's Latency Tolerant Requirements over USB Type-C Power Delivery Using Vendor Defined Messaging for all Alternate Modes
A system and method for performing a latency tolerance operation, comprising: determining whether a host and a device coupled to a cable are both capable of communicating information regarding latency tolerance; identifying a host latency tolerance and a device latency tolerance; configuring the host and the device to communicate based upon the host latency tolerance and the device latency tolerance; and communicating between the host and the device, the communicating conforming to the host latency tolerance and the device latency tolerance.
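The claimed negotiation can be sketched as follows; the function name, the capability/tolerance fields, and the min() resolution policy are illustrative assumptions, not language from the claim:

```python
def negotiate_latency_tolerance(host, device):
    """Return the effective latency tolerance (in microseconds) for the
    link, or None if either side cannot communicate tolerance info."""
    if not (host.get("ltr_capable") and device.get("ltr_capable")):
        return None  # fall back to default (non-LTR) behavior
    # Subsequent communication must honor the tighter of the two
    # requirements, so resolve to the smaller tolerance.
    return min(host["tolerance_us"], device["tolerance_us"])
```

For example, a host tolerating 300 us paired with a device tolerating 100 us would be configured for 100 us under this policy.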
Power management message bus system
A message bus is utilized for energy management/control. The publish/subscribe message bus present between site gateways, a central server farm, and other entities, facilitates exchange of messages pertaining to management and control of power generation and/or storage. On-site publishers/subscribers can include, e.g., PV inverters, battery devices, energy meters, etc. Non-site specific publishers/subscribers can include, e.g., web clients, database servers (for logging), and various server components of the message bus. Messages exchanged between publishers and subscribers can include control messages (e.g., begin charging battery X) and measurement messages (e.g., the current charge of battery X is Y). Embodiments may implement logic at a site gateway prioritizing transmission of messages to local site devices. Thus where a gateway cannot simultaneously transmit device control messages and device data acquisition messages (e.g., due to processing burden or congestion), site gateway logic can prioritize transmission of the control messages over the locally-generated data acquisition requests.
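The gateway-side prioritization described above can be sketched with a priority queue; the class name, priority constants, and message strings are illustrative assumptions:

```python
import heapq
from itertools import count

CONTROL, DATA_ACQ = 0, 1  # lower value = higher transmission priority

class GatewayQueue:
    """Sketch of site-gateway prioritization: device control messages
    are transmitted before locally generated data-acquisition requests
    when both cannot be sent simultaneously."""
    def __init__(self):
        self._heap = []
        self._seq = count()  # tie-breaker preserving FIFO order per priority

    def publish(self, priority, msg):
        heapq.heappush(self._heap, (priority, next(self._seq), msg))

    def next_message(self):
        """Pop the highest-priority (then oldest) pending message."""
        return heapq.heappop(self._heap)[2]
```

Under congestion, a control message such as "begin charging battery X" would thus be dequeued before an earlier-queued measurement poll.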
METHOD, APPARATUS, SYSTEM FOR EARLY PAGE GRANULAR HINTS FROM A PCIE DEVICE
Aspects of the embodiments are directed to systems and methods for providing and using hints in data packets to perform memory transaction optimization processes prior to receiving one or more data packets that rely on memory transactions. The systems and methods can include receiving, from a device connected to the root complex across a PCIe-compliant link, a data packet; identifying, from the received data packet, a memory transaction hint bit; determining a memory transaction from the memory transaction hint bit; and performing an optimization process based, at least in part, on the determined memory transaction.
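The hint-driven flow can be sketched as follows; the bit position, field layout, and handler names are assumptions for illustration, not taken from the patent or the PCIe specification:

```python
HINT_BIT = 1 << 5  # assumed position of the page-granular hint bit

def extract_hint(header_byte):
    """Return True when the (hypothetical) memory-transaction hint bit
    is set in the received packet's header byte."""
    return bool(header_byte & HINT_BIT)

def handle_packet(header_byte, start_optimization):
    # Kick off the optimization (e.g., page prefetch or address-
    # translation warm-up) before the dependent packets arrive.
    if extract_hint(header_byte):
        start_optimization()
```

The point of the early hint is that the root complex can overlap the optimization with the remaining link transfer time instead of serializing the two.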
MEMORY SYSTEM AND OPERATION METHOD OF THE SAME
A memory system includes: a plurality of memory devices, one of which includes an unrepaired defective memory cell; a control bus shared by the plurality of memory devices; a plurality of data buses, each assigned to one of the plurality of memory devices; and a memory controller that communicates with the plurality of memory devices through the control bus and the plurality of data buses. A control latency of the memory device including the unrepaired defective memory cell is set differently from a control latency of the other memory devices, where the control latency is used for recognizing control signals on the control bus.
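The per-device latency setting can be sketched as a simple lookup; the cycle counts and the choice of a longer latency for the defective device are illustrative assumptions, not values from the abstract:

```python
def control_latency(device_id, defective_ids, base_cycles=2, extra_cycles=1):
    """Sketch: a device carrying an unrepaired defective cell is given
    a different (here, longer) control latency than the others, so the
    controller times its control-bus signaling per device."""
    if device_id in defective_ids:
        return base_cycles + extra_cycles
    return base_cycles
```

Because the control bus is shared, keying the latency off the device identity lets one marginal device be accommodated without slowing the remaining devices.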
Nucleic acid based data storage
Provided herein are compositions, devices, systems and methods for the generation and use of biomolecule-based information for storage. Additionally, devices described herein for de novo synthesis of nucleic acids encoding information related to the original source information may be rigid or flexible material. Further described herein are highly efficient methods for long term data storage with 100% accuracy in the retention of information. Also provided herein are methods and systems for efficient transfer of preselected polynucleotides from a storage structure for reading stored information.
DRAINING A WRITE QUEUE BASED ON INFORMATION FROM A READ QUEUE
A method to access a memory chip having memory banks includes processing read requests in a read queue, and when a write queue is filled beyond a high watermark, stopping the processing of the read requests in the read queue and draining the write queue until the write queue is under a low watermark. Draining the write queue includes issuing write requests in an order based on information in the read queue. When the write queue is under the low watermark, the method includes stopping the draining of the write queue and again processing the read requests in the read queue.
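The watermark policy can be sketched as below; the watermark values are illustrative, and the bank-aware drain order is one plausible reading of "an order based on information in the read queue", not necessarily the patent's ordering:

```python
from collections import deque

HIGH_WM, LOW_WM = 8, 2  # illustrative watermark values

def drain_order(write_q, read_q):
    # Issue writes to banks with no pending reads first, deferring
    # writes that would occupy banks the queued reads are waiting on.
    read_banks = {r["bank"] for r in read_q}
    return sorted(write_q, key=lambda w: w["bank"] in read_banks)

def step(read_q, write_q, issue):
    """One scheduling step following the watermark policy above."""
    if len(write_q) > HIGH_WM:
        # Stop serving reads; drain writes until under the low watermark.
        for w in drain_order(write_q, read_q):
            if len(write_q) <= LOW_WM:
                break
            write_q.remove(w)
            issue(w)
    elif read_q:
        issue(read_q.popleft())
```

The hysteresis between the two watermarks avoids flip-flopping between read and write turnaround on every request.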
SYSTEM, DEVICE, AND METHOD FOR MEMORY INTERFACE INCLUDING RECONFIGURABLE CHANNEL
A method of communicating with a memory device through a plurality of sub-channels and a control sub-channel includes: setting a first mode or a second mode; in the first mode, writing or reading first data corresponding to a command synchronized to the control sub-channel through the plurality of sub-channels; and in the second mode, independently writing or reading second data and third data respectively corresponding to different commands synchronized to the control sub-channel through the plurality of sub-channels.
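The two modes can be sketched as a routing function over two sub-channels; the names, the two-sub-channel count, and the even data split are illustrative assumptions:

```python
def route(mode, command_data):
    """Sketch of the reconfigurable channel: in mode 1 one command's
    data is striped across both sub-channels; in mode 2 each
    sub-channel carries an independent command's data."""
    if mode == 1:
        (cmd, data), = command_data  # a single command
        half = len(data) // 2
        return {"sub0": (cmd, data[:half]), "sub1": (cmd, data[half:])}
    (c1, d1), (c2, d2) = command_data  # two independent commands
    return {"sub0": (c1, d1), "sub1": (c2, d2)}
```

Mode 1 favors wide transfers for one access; mode 2 trades per-access width for concurrency between two accesses, with both kept synchronized to the shared control sub-channel.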
HIGH THROUGHPUT, LOW POWER, HIGH PARITY ARCHITECTURE FOR DATABASE SSD
A method and apparatus for increasing internal data throughput and processing capability of SSDs, to enable processing of database commands on an SSD. A front-end ASIC is provided with 256 to 512 RISC processing cores to enable decomposition and parallelization of host commands to front-end module (FM) ASICs that each in turn are coupled to multiple NVM dies, as well as processing of host database operations such as insert, select, update, and delete. Each FM ASIC is architected to increase parity bits to 33.3% of NVM data, and to process parity data with 14 LDPCs. By increasing the parity bits to 33.3%, BER is reduced, power consumption is reduced, and data throughput within the SSD is increased.
Memory controller, information processing apparatus, and method of controlling memory controller
A memory controller has a request holding unit holding a write request and a read request; a transmission unit transmitting any one of the write request and the read request to a memory through a transmission bus; a reception unit receiving read data corresponding to the read request through a reception bus; and a request arbitration unit performing: a first processing of transmitting the write request before the read request, when a first reception time is not later than a second reception time, and a second processing of transmitting the read request before the write request, when the first reception time is later than the second reception time. The first reception time is the time at which reception of the read data would start if the write request were transmitted first, and the second reception time is the time at which reception of the read data would start if the read request were transmitted first.
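The arbitration rule reduces to a comparison of the two predicted reception times; the function and return values below are illustrative, and how the controller predicts each time (bus occupancy, device latency) is left abstract:

```python
def arbitrate(first_reception_time, second_reception_time):
    """first_reception_time: when read-data reception would start if
    the write request were transmitted first; second_reception_time:
    the same if the read request were transmitted first.
    Returns which request to transmit first."""
    if first_reception_time <= second_reception_time:
        # Sending the write first delays the read data not at all
        # (e.g., the reception bus is busy until then anyway), so the
        # write is slotted in ahead of the read for free.
        return "write_first"
    return "read_first"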
Efficient suspend-resume operation in memory devices
A method includes executing a first memory access operation in a memory. A progress indication, which is indicative of a progress of execution of the first memory access operation, is obtained from the memory. Based on the progress indication, a decision is made whether to suspend the execution of the first memory access operation in order to execute a second memory access operation.
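The suspend decision can be sketched as below; the progress representation (a fraction) and the threshold policy are illustrative assumptions, not values from the abstract:

```python
def should_suspend(progress, pending_read, threshold=0.8):
    """Sketch of the progress-based decision: suspend an in-flight
    operation (e.g., a program or erase) for a pending second
    operation only if the first is not close to completion."""
    return bool(pending_read) and progress < threshold
```

The rationale is that suspending a nearly finished operation wastes more time on save/restore overhead than simply letting it complete before serving the second operation.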