Patent classifications
G06F9/3891
Transaction-enabled systems and methods for resource acquisition for a fleet of machines
The present disclosure describes transaction-enabling systems and methods. A system can include a controller and a fleet of machines, each having at least one of a compute task requirement, a networking task requirement, and an energy consumption task requirement. The controller may include a resource requirement circuit to determine an amount of a resource for each of the machines to service the task requirement for each machine, a forward resource market circuit to access a forward resource market, and a resource distribution circuit to execute an aggregated transaction of the resource on the forward resource market.
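The aggregation step can be illustrated with a minimal sketch: each machine's task requirement is expressed as a resource amount, and the resource distribution circuit sums them into one forward-market transaction. The `Machine` class and `aggregate_order` function below are illustrative names, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class Machine:
    # Each machine carries one task requirement expressed as a resource
    # amount (e.g., kWh of energy or GB of network transfer).
    name: str
    resource_amount: float

def aggregate_order(machines):
    """Sum per-machine resource requirements into a single
    aggregated transaction for the whole fleet."""
    return sum(m.resource_amount for m in machines)

fleet = [Machine("m1", 10.0), Machine("m2", 7.5), Machine("m3", 2.5)]
total = aggregate_order(fleet)  # one aggregated forward-market purchase
```

Executing one aggregated order rather than three separate ones is the point of the aggregation circuit: the fleet trades as a single counterparty on the forward market.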
Look-up table initialize
A digital data processor includes an instruction memory storing instructions that specify a data processing operation and a data operand field, an instruction decoder coupled to the instruction memory for recalling instructions from the instruction memory and determining the operation and the data operand, and an operational unit coupled to a data register file and to the instruction decoder to perform a data processing operation upon an operand corresponding to an instruction decoded by the instruction decoder and to store results of the data processing operation. The operational unit is configured to perform a table write in response to a look-up table initialization instruction by duplicating at least one data element from a source data register to create duplicated data elements, and writing the duplicated data elements to a specified location in a specified number of at least one table and a corresponding location in at least one other table.
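The duplicate-and-write behaviour can be sketched in software as follows; the function name, table representation, and parameters are hypothetical stand-ins for the hardware operation:

```python
def lut_init(source_element, num_tables, table_size, location):
    """Duplicate one source data element and write the duplicates to
    the same (corresponding) location in each of `num_tables` look-up
    tables, as a look-up table initialization instruction would."""
    tables = [[0] * table_size for _ in range(num_tables)]
    for table in tables:
        # The duplicated element lands at the specified location in
        # every table in parallel.
        table[location] = source_element
    return tables

tables = lut_init(source_element=42, num_tables=4, table_size=8, location=3)
```

In hardware the duplication happens in a single table-write rather than a loop; the sketch only shows the end state.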
Method and apparatus for optimizing scan data and method and apparatus for correcting trajectory
A method and an apparatus optimize scan data obtained by sensors on a vehicle, and correct the trajectory of a vehicle/robot based on the optimized scan data. The method for optimizing scan data obtained by scanning environment elements includes: a step of obtaining the scan data, including obtaining at least two frames of scan data respectively corresponding to different timings; a step of cluster processing, based on the characteristics of the data points, including classifying the plurality of data points in each frame of the scan data into one or more clusters; a step of establishing correspondence among the at least two frames of scan data, including searching for and obtaining at least one set of clusters having correspondence; a step of optimizing clusters among the at least two frames of scan data, including performing a calculation on each set of the at least one set of clusters having correspondence, to obtain optimized clusters respectively corresponding to each set; and a step of optimizing the scan data, including accumulating all optimized clusters to obtain optimized scan data for the at least two frames of scan data.
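The clustering and correspondence steps can be sketched in one dimension, assuming a simple gap-based clustering rule and nearest-centroid matching; the patent does not fix either choice, so `cluster_points` and `match_clusters` below are illustrative only:

```python
def cluster_points(points, gap=1.0):
    """Group sorted 1-D scan points into clusters wherever the gap
    between neighbouring points exceeds a threshold."""
    clusters, current = [], [points[0]]
    for p in points[1:]:
        if p - current[-1] <= gap:
            current.append(p)
        else:
            clusters.append(current)
            current = [p]
    clusters.append(current)
    return clusters

def centroid(cluster):
    return sum(cluster) / len(cluster)

def match_clusters(frame_a, frame_b):
    """Pair each cluster in frame_a with the frame_b cluster whose
    centroid is nearest (a stand-in for the correspondence search
    between two frames taken at different timings)."""
    pairs = []
    for ca in frame_a:
        cb = min(frame_b, key=lambda c: abs(centroid(c) - centroid(ca)))
        pairs.append((ca, cb))
    return pairs

frame_a = cluster_points([0.0, 0.1, 5.0, 5.2])
frame_b = cluster_points([0.2, 0.3, 5.1, 5.3])
pairs = match_clusters(frame_a, frame_b)
```

The optimization step would then operate on each matched pair (e.g., by averaging or jointly fitting the paired clusters) before accumulating the results.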
Per-instruction energy debugging using instruction sampling hardware
A processor utilizes instruction based sampling to generate sampling data sampled on a per instruction basis during execution of an instruction. The sampling data indicates what processor hardware was used due to the execution of the instruction. Software receives the sampling data and generates an estimate of energy used by the instruction based on the sampling data. The sampling data may include microarchitectural events and the energy estimate utilizes a base energy amount corresponding to the instruction executed along with energy amounts corresponding to the microarchitectural events in the sampling data. The sampling data may include switching events associated with hardware blocks that switched due to execution of the instruction and the energy estimate for the instruction is based on the switching events and capacitance estimates associated with the hardware blocks.
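The estimation formula described above is additive: a base cost per opcode, plus a cost per sampled microarchitectural event, plus capacitance-proportional terms for switched hardware blocks. A hedged sketch, with purely illustrative energy numbers:

```python
# Assumed per-opcode base energies and per-event energies, in
# arbitrary units (illustrative values, not from the patent).
BASE_ENERGY = {"add": 1.0, "load": 5.0}
EVENT_ENERGY = {"cache_miss": 20.0, "tlb_miss": 15.0}

def estimate_energy(opcode, events, switching_events=None, capacitances=None):
    """Base energy for the sampled instruction, plus energy for each
    microarchitectural event, plus optional switching contributions
    modelled as capacitance-proportional terms (E ~ C*V^2, with V^2
    folded into the capacitance estimate here)."""
    energy = BASE_ENERGY[opcode]
    energy += sum(EVENT_ENERGY[e] for e in events)
    if switching_events and capacitances:
        energy += sum(capacitances[block] for block in switching_events)
    return energy

e = estimate_energy("load", ["cache_miss"], ["alu"], {"alu": 2.0})
```

Because the sampling hardware tags events to a single instruction, this per-instruction attribution is what makes energy debugging at instruction granularity possible.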
EDGE CLOUD BUILDING SYSTEM AND METHOD FOR PARALLEL INSTALLATION OF EDGE CLOUD
The present invention relates to an edge cloud infrastructure building technology and, particularly, to a system and a method for building an edge cloud, which can simultaneously install a large-scale edge cloud in parallel. To this end, the edge cloud building system according to the present invention, as an edge cloud building system for parallel installation of an edge cloud, includes: when a cloud infrastructure provisioning automation platform on a central cloud transmits a multiple-cluster installation request to each of a plurality of edge clouds scheduled to be built, generating, by a cluster controller on the plurality of edge clouds, a custom resource (CR) based on a custom resource definition (CRD) for cluster provisioning included in the multiple-cluster installation request, to generate a cluster-specific worker controller; and building, by each cluster-specific worker controller, a cluster constituted by a master node and a worker node, such that multiple clusters are simultaneously generated on each of the plurality of edge clouds.
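The CR-generation step can be sketched as deriving one custom resource per requested cluster from the CRD template carried in the installation request; the field names below are illustrative, not the patent's actual schema:

```python
def make_cluster_crs(install_request):
    """For each requested cluster, derive a custom resource (CR) from
    the CRD template in the multiple-cluster installation request.
    Each CR would then drive one cluster-specific worker controller."""
    crd = install_request["crd_template"]
    crs = []
    for spec in install_request["clusters"]:
        cr = {
            "kind": crd["kind"],
            "metadata": {"name": spec["name"]},
            "spec": {"masters": spec["masters"], "workers": spec["workers"]},
        }
        crs.append(cr)
    return crs

request = {
    "crd_template": {"kind": "ClusterProvision"},
    "clusters": [{"name": "edge-a", "masters": 1, "workers": 3},
                 {"name": "edge-b", "masters": 1, "workers": 5}],
}
crs = make_cluster_crs(request)  # one CR per cluster, built in parallel
```

Because each worker controller acts on its own CR independently, the clusters can be built concurrently across all edge clouds.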
Automated discovery of databases
A networked computing system comprises a backup node cluster of a backup service in communication with a host database node cluster of a host, a host database at least initially undiscovered by the backup node cluster, one or more processors coupled with memory storing instructions that, when executed, perform operations comprising at least installing a backup agent on at least one node of the host database node cluster, registering the host at the backup service, based on the host registration, triggering a host database discovery process to discover the undiscovered database automatically, the discovery process including a discovery call, in response to the discovery call, receiving metadata relating to the discovered database, and communicating with the discovered database.
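The discovery flow (install agent, register host, trigger discovery, receive metadata) can be sketched as follows; the class and method names are hypothetical, and real agents would probe running database processes rather than hold a fixed list:

```python
class Agent:
    """Backup agent installed on a node of the host database cluster.
    The fixed database list stands in for actual process discovery."""
    def __init__(self, databases):
        self.databases = databases

    def discovery_call(self):
        # In response to the discovery call, metadata for each
        # discovered database is returned (port is illustrative).
        return [{"name": n, "port": 5432} for n in self.databases]

class BackupService:
    def __init__(self):
        self.hosts = {}
        self.known_databases = {}

    def register_host(self, host, agent):
        """Registering the host triggers the discovery process; the
        returned metadata makes previously undiscovered databases
        known to the backup node cluster."""
        self.hosts[host] = agent
        for db in agent.discovery_call():
            self.known_databases[db["name"]] = db
        return list(self.known_databases)

service = BackupService()
found = service.register_host("db-cluster-1", Agent(["sales", "hr"]))
```

After discovery, the backup service can communicate with each discovered database directly using the received metadata.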
REGISTER FILE FOR SYSTOLIC ARRAY
A processing apparatus includes a general-purpose parallel processing engine including a set of multiple processing elements including a single precision floating-point unit, a double precision floating point unit, and an integer unit; a matrix accelerator including one or more systolic arrays; a first register file coupled with a first read control circuit, wherein the first read control circuit couples with the set of multiple processing elements and the matrix accelerator to arbitrate read requests to the first register file from the set of multiple processing elements and the matrix accelerator; and a second register file coupled with a second read control circuit, wherein the second read control circuit couples with the matrix accelerator to arbitrate read requests to the second register file from the matrix accelerator and limit access to the second register file by the set of multiple processing elements.
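The asymmetry between the two read control circuits can be sketched as an arbitration policy: the first register file accepts requests from both the processing elements and the matrix accelerator, while the second filters processing-element requests out. The requester names and fixed-priority policy below are illustrative assumptions:

```python
def arbitrate(requests, matrix_only=False):
    """Grant one read request per cycle. With matrix_only=True (the
    second register file), processing-element requests are filtered
    out so only the matrix accelerator can read."""
    if matrix_only:
        requests = [r for r in requests if r["source"] == "matrix_accelerator"]
    # Assumed fixed-priority policy: matrix accelerator first, then PEs.
    requests.sort(key=lambda r: 0 if r["source"] == "matrix_accelerator" else 1)
    return requests[0] if requests else None

reqs = [{"source": "processing_element", "reg": 4},
        {"source": "matrix_accelerator", "reg": 7}]
grant1 = arbitrate(list(reqs))                    # first register file
grant2 = arbitrate(list(reqs), matrix_only=True)  # second register file
```

Reserving the second register file for the systolic arrays keeps matrix operands out of contention with general-purpose reads.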
Executing multiple programs simultaneously on a processor core
Systems and methods are disclosed for allocating resources to contexts in block-based processor architectures. In one example of the disclosed technology, a processor is configured to spatially allocate resources between multiple contexts being executed by the processor, including caches, functional units, and register files. In a second example of the disclosed technology, a processor is configured to temporally allocate resources between multiple contexts, for example, on a clock cycle basis, including caches, register files, and branch predictors. Each context is guaranteed access to its allocated resources to avoid starvation from contexts competing for resources of the processor. A results buffer can be used for folding larger instruction blocks into portions that can be mapped to smaller-sized instruction windows. The results buffer stores operand results that can be passed to subsequent portions of an instruction block.
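The temporal allocation scheme can be sketched as a round-robin over clock cycles, which gives each context a guaranteed share of the shared resources and so avoids starvation. This is a hedged illustration; the patent's actual per-cycle policy may differ:

```python
def temporal_schedule(contexts, num_cycles):
    """Round-robin temporal allocation: on each clock cycle, one
    context gets exclusive use of the shared resources (caches,
    register files, branch predictors)."""
    return [contexts[cycle % len(contexts)] for cycle in range(num_cycles)]

schedule = temporal_schedule(["ctx0", "ctx1"], num_cycles=4)
# over any full window, each context receives the same number of cycles
```

Spatial allocation is the complementary approach: instead of time-slicing, each context is statically assigned a disjoint partition of the caches, functional units, and register files.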
Method and a system for capacity planning
A capacity planning method for Always On Availability Group (AG) cluster renewal includes selecting a source AG cluster to be replaced with a target AG cluster, selecting at least one performance monitor, and monitoring performance of instances and databases to obtain time series. Trends of the time series are determined, at least one benchmark value is obtained for the source and target nodes, and at least one benchmark ratio is calculated. The time series are adjusted based on the determined trends and the at least one benchmark ratio. A logical grouping of instances and databases is constituted, and workloads of the logical groups are calculated for each node on the basis of the adjusted time series. A required capacity of the target AG cluster nodes is predicted. Finally, the required capacity of the target AG cluster nodes is compared against the available capacity to verify whether the target node has sufficient capacity.
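The adjustment step can be sketched as scaling the source-node time series by the benchmark ratio and extrapolating its trend; the linear trend model and the specific ratio direction are assumptions for illustration:

```python
def adjust_series(series, source_benchmark, target_benchmark, trend_per_step=0.0):
    """Scale a source-node utilisation series by the benchmark ratio
    and apply a linear trend, to project the load the same workload
    would place on the target node."""
    ratio = source_benchmark / target_benchmark  # < 1 means target is faster
    return [(v + i * trend_per_step) * ratio for i, v in enumerate(series)]

# Target node benchmarks twice as fast as the source node.
projected = adjust_series([50.0, 55.0, 60.0],
                          source_benchmark=100.0,
                          target_benchmark=200.0)
peak = max(projected)
sufficient = peak <= 80.0  # compare against the target's capacity threshold
```

The final comparison step is then a check of the predicted peak (or aggregate) workload of each logical group against the target node's capacity.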
SHARING INSTRUCTION CACHE LINES BETWEEN MULTIPLE THREADS
Aspects are provided for sharing instruction cache footprint between multiple threads using instruction cache set/way pointers and a tracking table. The tracking table is built up over time for shared pages, even when the instruction cache has no access to real addresses or translation information. A set/way pointer to an instruction cache line is derived from the system memory address associated with a first thread's instruction fetch. The set/way pointer is stored as a surrogate for the system memory address in both an instruction cache directory (IDIR) and a tracking table. Another set/way pointer to an instruction cache line is derived from the system memory address associated with a second thread's instruction fetch. A match is detected between the set/way pointer and the other set/way pointer. The instruction cache directory is updated to indicate that the instruction cache line is shared between multiple threads.
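A rough sketch of the pointer-matching idea follows. The direct derivation of the set index from the address and the fixed way are both simplifications (real hardware selects the way at line fill), and the table shapes are illustrative:

```python
def set_way_pointer(address, num_sets=64, line_size=64):
    """Derive a (set, way) pointer from a system memory address.
    The way is fixed at 0 in this sketch; hardware would use the
    way chosen when the line was filled."""
    set_index = (address // line_size) % num_sets
    return (set_index, 0)

tracking_table = {}  # set/way pointer -> last thread to fetch that line

def record_fetch(thread_id, address, idir):
    """Store the set/way pointer as a surrogate for the address; a
    match against another thread's earlier fetch marks the cache
    line as shared in the instruction cache directory (IDIR)."""
    ptr = set_way_pointer(address)
    if ptr in tracking_table and tracking_table[ptr] != thread_id:
        idir[ptr] = "shared"
    tracking_table[ptr] = thread_id
    return ptr

idir = {}
p1 = record_fetch(0, 0x1000, idir)
p2 = record_fetch(1, 0x1000, idir)  # second thread fetches the same line
```

Because the pointers are derived without consulting real addresses or translation data, the tracking table can be built up over time even when that information is unavailable to the instruction cache.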