G06F9/3555

Vector generating instruction for generating a vector comprising a sequence of elements that wraps as required

An apparatus and method are provided for performing vector processing operations. In particular, the apparatus has processing circuitry to perform the vector processing operations and an instruction decoder to decode vector instructions to control the processing circuitry to perform the vector processing operations specified by the vector instructions. The instruction decoder is responsive to a vector generating instruction identifying a scalar start value and wrapping control information, to control the processing circuitry to generate a vector comprising a plurality of elements. In particular, the processing circuitry is arranged to generate the vector such that the first element in the plurality is dependent on the scalar start value, and the values of the plurality of elements follow a regularly progressing sequence that is constrained to wrap as required to ensure that each value is within bounds determined from the wrapping control information. The vector generating instruction can be useful in a variety of situations, a particular use case being to implement a circular addressing mode within memory, where the vector generating instruction can be coupled with an associated vector memory access instruction. Such an approach can remove the need to provide additional logic within the memory access path to support such circular addressing.
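
A minimal sketch of the described semantics, assuming a fixed stride and a wrap bound of the buffer size (the function name and parameters are illustrative, not from the patent):

```python
def generate_wrapping_vector(start, stride, num_elements, buffer_size):
    """Generate a vector whose first element depends on the scalar start
    value and whose values progress regularly, wrapping to stay within
    [0, buffer_size) as in a circular addressing mode."""
    return [(start + i * stride) % buffer_size for i in range(num_elements)]

# 8-element vector of offsets into a 16-byte circular buffer
offsets = generate_wrapping_vector(start=12, stride=2, num_elements=8, buffer_size=16)
# offsets could then feed an associated vector memory access (gather/scatter)
```

The wrap happens in the address-generating instruction itself, which is why no extra circular-addressing logic is needed in the memory access path.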

Physical quantity detection device

A physical quantity detection device that can improve arithmetic resolution while preventing an increase in memory capacity is obtained. A physical quantity detection device 100 according to the present invention includes: a physical quantity detection sensor that detects a physical quantity of a measurement target gas; a storage unit that records a correction amount corresponding to a detection value of the physical quantity detection sensor; and an arithmetic unit 110 that performs output adjustment of the detection value using the detection value and the correction amount. Resolution of the storage unit 120 is lower than arithmetic resolution of the arithmetic unit 110.
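
One way to realize a storage resolution lower than the arithmetic resolution is to store correction amounts at coarse intervals and interpolate between them in the arithmetic unit; the following sketch assumes linear interpolation (function and parameter names are hypothetical):

```python
def corrected_output(detection_value, correction_table, step):
    """Interpolate a correction stored at coarse resolution, then apply it.
    correction_table[i] is the correction at detection value i * step."""
    idx = min(int(detection_value // step), len(correction_table) - 2)
    frac = (detection_value - idx * step) / step
    correction = correction_table[idx] + frac * (
        correction_table[idx + 1] - correction_table[idx])
    return detection_value + correction

# Corrections stored only at detection values 0, 10, 20 (three points),
# yet the corrected output varies continuously between them.
table = [0.0, 1.0, 2.0]
adjusted = corrected_output(5.0, table, 10.0)
```

Interpolation is what lets a small correction table serve a fine-grained output, preventing the memory capacity from growing with arithmetic resolution.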

System, apparatus and method for symbolic store address generation for data-parallel processor

In one embodiment, an apparatus includes: a plurality of execution lanes to perform parallel execution of instructions; and a unified symbolic store address buffer coupled to the plurality of execution lanes, the unified symbolic store address buffer comprising a plurality of entries each to store a symbolic store address for a store instruction to be executed by at least some of the plurality of execution lanes. Other embodiments are described and claimed.
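
The idea can be sketched as a single buffer entry holding a symbolic address (base register, per-lane scale, offset) that is shared by all lanes, rather than one concrete address per lane; the class and field names below are illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class SymbolicStoreEntry:
    base_reg: str  # symbolic base: a register name, not a concrete address
    scale: int     # per-lane scaling factor (e.g., element size)
    offset: int    # constant displacement

@dataclass
class UnifiedSymbolicStoreBuffer:
    entries: list = field(default_factory=list)

    def allocate(self, base_reg, scale, offset):
        """One entry serves a store executed across all execution lanes."""
        self.entries.append(SymbolicStoreEntry(base_reg, scale, offset))

    def resolve(self, entry_idx, reg_values, num_lanes):
        """Materialize per-lane addresses only when needed."""
        e = self.entries[entry_idx]
        base = reg_values[e.base_reg]
        return [base + e.scale * lane + e.offset for lane in range(num_lanes)]

buf = UnifiedSymbolicStoreBuffer()
buf.allocate(base_reg="r1", scale=4, offset=0)
addrs = buf.resolve(0, {"r1": 1000}, num_lanes=4)
```

Storing one symbolic entry instead of per-lane concrete addresses is what makes the buffer "unified" across the parallel lanes.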

System and method for vector communication

There is disclosed in an example, an endpoint apparatus for an interconnect, comprising: a mechanical and electrical interface to the interconnect; and one or more logic elements comprising an interface vector engine to: receive a first scalar transaction for the interface; determine that the first scalar transaction meets a criterion for vectorization; receive a second scalar transaction for the interface; determine that the second transaction meets the criterion for vectorization; vectorize the first scalar transaction and second scalar transaction into a vector transaction; and send the vector transaction via the electrical interface.
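
The vectorization decision can be sketched as follows, with an assumed criterion (small transaction size) standing in for whatever criterion the endpoint applies; all names are hypothetical:

```python
def meets_criterion(txn):
    """Assumed vectorization criterion: small transactions qualify."""
    return txn["size"] <= 8

def vectorize(txn_a, txn_b):
    """Merge two qualifying scalar transactions into one vector transaction."""
    if meets_criterion(txn_a) and meets_criterion(txn_b):
        return {"kind": "vector",
                "payloads": [txn_a["payload"], txn_b["payload"]],
                "size": txn_a["size"] + txn_b["size"]}
    return None  # fall back to sending the scalar transactions individually

a = {"payload": 0x11, "size": 4}
b = {"payload": 0x22, "size": 4}
v = vectorize(a, b)
```

Batching two scalar transactions into one vector transaction amortizes per-transaction overhead on the interconnect.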

SYSTEM AND METHOD FOR PIPELINED TIME-DOMAIN COMPUTING USING TIME-DOMAIN FLIP-FLOPS AND ITS APPLICATION IN TIME-SERIES ANALYSIS
20220019434 · 2022-01-20 ·

Systems and/or methods can include a ring-based inverter chain that constructs multi-bit flip-flops that store time. The time flip-flops serve as storage units and enable pipeline operations. Single cells used in time-series analysis, such as dynamic time warping, are rendered by the time-domain circuits. The circuits include time flip-flops, Min, and ABS circuits. The dynamic time warping cost matrix can be constructed from the single cells.
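
For reference, each cell of a classic dynamic time warping cost matrix combines exactly an ABS (local distance) and a MIN over three predecessor cells, mirroring the single cell described above; this is the standard algorithm, not the patent's circuit:

```python
def dtw_cost(a, b):
    """Dynamic time warping distance between two sequences.
    Each matrix cell = ABS of the element difference + MIN of the
    three predecessor cells (left, above, diagonal)."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            local = abs(a[i - 1] - b[j - 1])          # ABS circuit
            D[i][j] = local + min(D[i - 1][j],         # MIN circuit
                                  D[i][j - 1],
                                  D[i - 1][j - 1])
    return D[n][m]
```

In the time-domain version, the matrix values would be held as delays in the time flip-flops rather than as binary numbers.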

DATA PROCESSING APPARATUS AND RELATED PRODUCTS

The present disclosure provides a data processing apparatus and related products. The products include a control module including an instruction caching unit, an instruction processing unit, and a storage queue unit. The instruction caching unit is configured to store computation instructions associated with an artificial neural network operation; the instruction processing unit is configured to parse the computation instructions to obtain a plurality of operation instructions; and the storage queue unit is configured to store an instruction queue, where the instruction queue includes a plurality of operation instructions or computation instructions to be executed in the sequence of the queue. By adopting the above-mentioned method, the present disclosure can improve the operation efficiency of related products when performing operations of a neural network model.
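
The control-module flow can be sketched as parsing each computation instruction into operation instructions and placing them in an in-order queue; the decomposition below is purely hypothetical for illustration:

```python
from collections import deque

def parse_computation_instruction(instr):
    """Hypothetical parse: split one computation instruction associated
    with a neural network operation into micro operation instructions."""
    return [f"{instr}.op{i}" for i in range(2)]

storage_queue = deque()  # operation instructions execute in queue order
for instr in ["matmul", "relu"]:           # from the instruction caching unit
    storage_queue.extend(parse_computation_instruction(instr))
```

Queuing the parsed operation instructions decouples instruction parsing from execution, which is the efficiency lever the abstract points at.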

HIGHLY SCALABLE, PEER-BASED, REAL-TIME AGENT ARCHITECTURE

A system architecture that synthesizes agent environments and virtualization environments to provide for a highly scalable, peer-to-peer real-time agent architecture. The agent environment enables agents participating in a shared experience to be peers of one another. Agents can choose their peers by meeting in the agent environment and using published information to determine which other agents to peer with for communicating stream source data therebetween. Virtualization environments are a mechanism for executing applications (“stream sources”). Any one or more of available virtualization environments (e.g., cloud infrastructure) may be selected in accordance with predetermined criteria to execute stream sources. In addition, non-virtualized environments (e.g., physical devices) may be utilized to run the stream sources in accordance with deployment criteria. As such, processes may be run over a large number of possibly different environments to provide novel end-user solutions and greater scaling of resources.

Service load independent resource usage detection and scaling for container-based system

A computer implemented method and related system determine a current load result of a software container executing on a compute node in a container system. In response to determining that the current load result exceeds a predetermined scale-up threshold for the software container, the method adds a first plurality of replicas of the software container to the compute node, where a quantity of the first plurality of replicas is related to the current load result. In response to determining that the current load result is less than a predetermined scale-down threshold for the software container, the method deletes a second plurality of replicas of the software container from the compute node, where a quantity of the second plurality of replicas is related to the current load result.
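
A sketch of the threshold logic, with a replica quantity tied to how far the load result exceeds or undershoots the threshold (the exact relation is an assumption; the patent only states the quantity is related to the load result):

```python
def replica_delta(load, scale_up, scale_down):
    """Return the replica change for a software container on a compute node:
    positive = replicas to add, negative = replicas to delete, 0 = no action."""
    if load > scale_up:
        # quantity of added replicas grows with the load result
        return max(1, round((load - scale_up) / scale_up))
    if load < scale_down:
        # quantity of deleted replicas grows as load falls below threshold
        return -max(1, round((scale_down - load) / scale_down))
    return 0
```

Because both branches scale the replica count with the load result rather than stepping by one, the node converges in fewer scaling iterations.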

Thermal state inference based frequency scaling
11811846 · 2023-11-07 ·

The systems and methods monitor thermal states associated with a device. The systems and methods set thermal thresholds associated with the device. The systems and methods infer the thermal thresholds from information gathered by a client application running on the device. The systems and methods implement a stored policy associated with a violation of one of the thermal thresholds by one of the monitored thermal states.
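
The monitor-threshold-policy loop can be sketched as follows; the sensor names and the "throttle" action are illustrative assumptions, not from the patent:

```python
def check_thermal(states, thresholds, policies):
    """Return the stored policy actions for every thermal threshold
    violated by a monitored thermal state."""
    actions = []
    for sensor, temperature in states.items():
        limit = thresholds.get(sensor)  # threshold may be inferred per device
        if limit is not None and temperature > limit:
            actions.append(policies[sensor])  # e.g. scale frequency down
    return actions

# Thresholds inferred from client-application data; policy fires on violation
actions = check_thermal(states={"cpu": 95},
                        thresholds={"cpu": 90},
                        policies={"cpu": "throttle"})
```

Inferring the thresholds from client-side data lets the policy adapt to the device's actual thermal behavior rather than a fixed datasheet limit.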

COMPUTING ENVIRONMENT PROVISIONING

Text is received from a user describing item(s) for migration to a computing environment with cloud feature(s), resulting in item description(s), the text including unstructured text and listing(s) that are processed separately. Text mining is performed on the unstructured text to extract item feature(s). For each listing, a portion of the unstructured text is extracted, resulting in an extracted text portion for each listing, from which an entity is identified. Each entity or item feature is mapped to cloud feature(s) available from solution(s) with cloud feature(s). Based on the cloud feature(s), recommendation(s) are made to the user regarding cloud feature(s) of the solution(s) for optional consideration by the user. Explanation(s) for the recommended cloud feature(s), from explainability model(s), may be provided to the user.