G06F13/1605

PERFORMING SAVE STATE SWITCHING IN SELECTIVE LANES BETWEEN ELECTRONIC DEVICES IN UFS SYSTEM

Disclosed are a method and a Universal Flash Storage (UFS) system for performing save state switching using selective lanes between a first electronic device and a second electronic device. The method includes: determining, by the first electronic device, whether a data request is received from an application layer of the first electronic device; and performing, by the first electronic device, at least one of: setting a first lane from among a plurality of lanes between the first electronic device and the second electronic device to an active state and the other lanes from among the plurality of lanes to a power save state based on determining that the data request is not received from the application layer of the first electronic device; and setting the plurality of lanes between the first electronic device and the second electronic device to the active state based on determining that the data request is received from the application layer of the first electronic device.
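The lane-state decision above can be sketched as a small helper. This is an illustrative model only, not the patented method itself; the function and enum names are assumptions.

```python
from enum import Enum

class LaneState(Enum):
    ACTIVE = "active"
    POWER_SAVE = "power_save"

def set_lane_states(num_lanes, data_request_pending):
    """Return per-lane states for the link between the two devices.

    Per the abstract: with no pending data request from the application
    layer, keep a single lane active and put the rest into the power
    save state; with a pending request, set all lanes active.
    """
    if data_request_pending:
        return [LaneState.ACTIVE] * num_lanes
    # First lane stays active; the remaining lanes enter power save.
    return [LaneState.ACTIVE] + [LaneState.POWER_SAVE] * (num_lanes - 1)
```

Keeping one lane active while idling the others preserves a low-latency path for control traffic while reducing link power.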

BUS ARBITRATION CIRCUIT AND DATA TRANSFER SYSTEM INCLUDING THE SAME
20220164295 · 2022-05-26

A bus arbitration circuit includes a first bus port, a second bus port, a first output circuit connected to the first bus port, a second output circuit connected to the second bus port, a control circuit, and a switch circuit. The control circuit includes a first input port, a second input port, a control signal output port, and an output port. The first input port receives data of the first bus port, the second input port receives data of the second bus port, and data is outputted from the output port to an input port of the first output circuit. The switch circuit has an input port connected to the first bus port, a control port connected to the control signal output port of the control circuit, and an output port from which data of a host bus is outputted to an input port of the second output circuit.
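A minimal behavioral sketch of the data paths described in the abstract (signal and parameter names are assumptions, and clock/timing behavior of the real circuit is omitted):

```python
def arbitration_step(first_bus, second_bus, control_select, switch_closed):
    """One combinational step of the described data paths.

    The control circuit samples both bus ports and drives the first
    output circuit with one of them; the switch circuit, when enabled
    by the control signal, passes host-bus data from the first bus
    port through to the second output circuit.
    """
    # Control circuit: choose which bus port feeds the first output circuit.
    first_output = first_bus if control_select == 0 else second_bus
    # Switch circuit: gate host-bus data onto the second output circuit.
    second_output = first_bus if switch_closed else None
    return first_output, second_output
```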

Clock crossing FIFO status converged synchronizer

A synchronizer that can generate pipeline (e.g., FIFO, LIFO) status in a single step without intermediate synchronization. The status can be an indicator of whether a pipeline is full, empty, almost full, or almost empty. The synchronizer (also referred to as a double-sync or ripple-based pipeline status synchronizer) can be used with any kind of clock crossing pipeline and all kinds of pointer encodings. The double-sync and ripple-based pipeline status synchronizers eliminate costly validation and semi-manual timing closure, offer better performance and testability, and have lower area and power.
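The status classification itself can be sketched as follows. This omits the clock-domain-crossing and pointer-encoding machinery that the patent addresses; names and the wrap-at-2×depth pointer convention are assumptions.

```python
def fifo_status(wr_ptr, rd_ptr, depth, almost_margin=1):
    """Classify FIFO occupancy as empty / full / almost empty / almost full.

    Pointers are free-running indices that wrap at 2*depth so that a
    full FIFO (occupancy == depth) is distinguishable from an empty one
    (occupancy == 0).
    """
    occupancy = (wr_ptr - rd_ptr) % (2 * depth)
    if occupancy == 0:
        return "empty"
    if occupancy == depth:
        return "full"
    if occupancy <= almost_margin:
        return "almost_empty"
    if occupancy >= depth - almost_margin:
        return "almost_full"
    return "partial"
```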

Storage system and method for providing a dual-priority credit system

A storage system and method for providing a dual-priority credit system are disclosed. In one embodiment, a storage system is provided comprising a memory and a controller. The controller is configured to receive, from a host, a plurality of credits for sending messages to the host; allocate a first portion of the plurality of credits for non-urgent messages; and allocate a second portion of the plurality of credits for urgent messages. Other embodiments are provided.
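The credit split described in the abstract can be sketched like this; the class name, method names, and the 25% urgent fraction are assumptions, not from the patent.

```python
class DualPriorityCredits:
    """Sketch of a dual-priority credit pool.

    Credits received from the host are split into a portion reserved
    for urgent messages and a portion for non-urgent messages.
    """

    def __init__(self, total_credits, urgent_fraction=0.25):
        self.urgent = int(total_credits * urgent_fraction)
        self.non_urgent = total_credits - self.urgent

    def consume(self, urgent):
        """Spend one credit from the matching pool.

        Returns False when that pool is exhausted, meaning the message
        must wait for the host to grant more credits.
        """
        if urgent:
            if self.urgent == 0:
                return False
            self.urgent -= 1
        else:
            if self.non_urgent == 0:
                return False
            self.non_urgent -= 1
        return True
```

Reserving a separate pool guarantees that a burst of non-urgent traffic cannot starve urgent messages of credits.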

PROCESSING AND STORAGE CIRCUIT
20220156210 · 2022-05-19 ·

A processing and storage circuit includes an internal bus, one or more first-level internal memory units, a central processing unit (CPU), one or more hardware acceleration engines, and an arbiter. The first-level internal memory unit is coupled to the internal bus. The CPU includes a second-level internal memory unit, and is configured to access the first-level internal memory unit via the internal bus; when the CPU accesses data, the first-level internal memory unit is accessed preferentially. The hardware acceleration engine is configured to access the first-level internal memory unit via the internal bus. The arbiter is coupled to the internal bus and configured to decide whether the CPU or the hardware acceleration engine is allowed to access the first-level internal memory unit. The arbiter gives the CPU priority over the hardware acceleration engine for access to the first-level internal memory unit.
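The fixed-priority decision the arbiter makes can be sketched in a few lines (function and return-value names are assumptions):

```python
def arbitrate(cpu_requesting, engine_requesting):
    """Fixed-priority arbiter for the first-level internal memory.

    When the CPU and a hardware acceleration engine both request
    access in the same cycle, the CPU wins; the engine is granted
    access only when the CPU is idle.
    """
    if cpu_requesting:
        return "cpu"
    if engine_requesting:
        return "engine"
    return None  # no requester this cycle
```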

Write merging on stores with different tags

Techniques for caching data are provided that include receiving, by a caching system, a first write memory command for a memory address, the first write memory command associated with a first color tag, determining, by a first sub-cache of the caching system, that the memory address is not cached in the first sub-cache, determining, by a second sub-cache of the caching system, that the memory address is not cached in the second sub-cache, storing first data associated with the first write memory command in a cache line of the second sub-cache, storing the first color tag in the second sub-cache, receiving a second write memory command for the cache line, the second write memory command associated with a second color tag, merging the second color tag with the first color tag, storing the merged color tag, and evicting the cache line based on the merged color tag.
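The tag-merging step can be sketched as below. The representation is an assumption: color tags are modeled as bit masks that are ORed together when a second write hits the same cache line; the patent does not specify the merge operation.

```python
def merge_write(cache, address, data, color_tag):
    """Store a write and merge its color tag with any existing tag.

    `cache` maps a cache-line address to a (data, color_tag) pair.
    On a second write to the same line, the new data replaces the old
    and the color tags are merged (here: bitwise OR).  Returns the
    tag now stored for the line, which eviction policy can consult.
    """
    if address in cache:
        _, stored_tag = cache[address]
        cache[address] = (data, stored_tag | color_tag)
    else:
        cache[address] = (data, color_tag)
    return cache[address][1]
```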

Information processing device
11734206 · 2023-08-22

Provided is a unit that holds transmission of smallest payload data to a communication interface in standby during a period that starts when, during a control cycle, the transmission time of the smallest payload data is determined to exceed a reference value, and ends when the communication interface transmits the smallest payload data that is to be sent next after the most recently transmitted smallest payload data.

MULTIPLE PRECISION MEMORY SYSTEM
20220147468 · 2022-05-12 ·

Space in a memory is allocated based on the highest precision in use. When the maximum precision is not being used, only the bits required for the precision level actually in use (e.g., a particular floating point format) are transferred between the processor and the memory, while the rest are not. A given floating point number is distributed over non-contiguous addresses. Each portion of the given floating point number is located at the same offset within the access units, groups, and/or memory arrays. This allows a sequencer in the memory device to successively access a precision dependent number of access units, groups, and/or memory arrays without receiving additional requests over the memory channel.
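The layout can be sketched as follows: a value's bits are sliced across several memory arrays at the same offset, so a lower-precision read touches only the first few arrays. All names, the slice width, and the dict-based "array" model are assumptions for illustration.

```python
def distribute(value_bits, num_arrays, bits_per_array, offset):
    """Split a value into equal bit slices, one per memory array.

    Slice i is stored at the *same* offset in array i, matching the
    abstract's same-offset placement across arrays.
    """
    mask = (1 << bits_per_array) - 1
    arrays = [{} for _ in range(num_arrays)]
    for i in range(num_arrays):
        arrays[i][offset] = (value_bits >> (i * bits_per_array)) & mask
    return arrays

def read_precision(arrays, offset, precision_arrays, bits_per_array):
    """Reassemble a value, touching only as many arrays as the
    requested precision needs (fewer arrays => fewer bits moved)."""
    value = 0
    for i in range(precision_arrays):
        value |= arrays[i][offset] << (i * bits_per_array)
    return value
```

A full-precision read visits all arrays; a half-precision read of the same value visits only the low-order arrays, with no extra requests needed.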

Multiple-requestor memory access pipeline and arbiter

In described examples, a coherent memory system includes a central processing unit (CPU) and first and second level caches. The memory system can include a pipeline for accessing data stored in one of the caches. Requestors can access the data stored in one of the caches by sending requests at the same time; such simultaneous requests can be arbitrated by the pipeline.

Systems and methods for an arbitration controller to arbitrate multiple automation requests on a material handling vehicle

An automation arbitration controller system is provided. The automation arbitration controller is configured to arbitrate multiple automation requests on a material handling vehicle, which includes a material handling vehicle controller. The automation arbitration controller can be connected to at least one automation controller via an automation communication bus. The automation controller can be configured to send one or more automation requests. The automation arbitration controller can also be connected to the material handling vehicle controller via a vehicle communication bus. The material handling vehicle controller can be configured to send one or more automation requests.