Patent classifications
G06F9/355
VNF LIFECYCLE MANAGEMENT METHOD AND APPARATUS
In a method of managing the lifecycle of a virtual network function (VNF), a VNF manager sends a pre-notification informing the VNF in advance that it is to enter a target stage of the lifecycle. After receiving the VNF's response to the pre-notification, the VNF manager controls the VNF to enter the target stage.
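The two-step handshake in this abstract (pre-notify, await acknowledgment, then transition) can be sketched as follows. This is an illustrative model only; the class and method names (`Vnf`, `VnfManager`, `handle_pre_notification`) are invented for the sketch and do not come from the patent.

```python
class Vnf:
    """Illustrative stand-in for a virtual network function."""

    def __init__(self):
        self.stage = "running"

    def handle_pre_notification(self, target_stage):
        # The VNF can use the advance notice to finish in-flight work
        # before acknowledging the coming transition.
        return {"ack": True, "target_stage": target_stage}


class VnfManager:
    def transition(self, vnf, target_stage):
        # Step 1: pre-notify the VNF of the target lifecycle stage.
        response = vnf.handle_pre_notification(target_stage)
        # Step 2: only after a positive response, drive the transition.
        if response.get("ack"):
            vnf.stage = target_stage
            return True
        return False


vnf = Vnf()
ok = VnfManager().transition(vnf, "scaling")
```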
HARDWARE ENFORCEMENT OF BOUNDARIES ON THE CONTROL, SPACE, TIME, MODULARITY, REFERENCE, INITIALIZATION, AND MUTABILITY ASPECTS OF SOFTWARE
The invention comprises modifications to existing computer hardware; compiler changes or source-to-source transforms performed during the software build process; and a collection of libraries and modifications to existing standard system software and libraries. It allows a program author to enforce various kinds of locality of causality in software, providing enforcement of boundaries for the following aspects of a computer program: control, space, time, modularity, reference, initialization, and mutability. Where these properties do not suffice to guarantee a property statically, dynamic checks may be added, and the constraints on control flow prevent such dynamic checks from being bypassed by the program.
DYNAMIC WORKFLOW SELECTION USING STRUCTURE AND CONTEXT FOR SCALABLE OPTIMIZATION
A system and a method are disclosed for recommending a change that improves the performance of a target workflow. A workflow management system receives the target workflow, which is intended to be used in a particular context to achieve a target result. The target workflow has a structure with a plurality of steps performed in a predefined order, but there may be options for modifying the workflow to improve performance (e.g., changing the type of action performed in a step, changing the order of steps, or adding a new step). The workflow management system identifies candidate workflows that are similar to the target workflow and identifies historical changes that have been made to those candidate workflows. Using a machine learning model, the workflow management system determines which of the historical changes made to the candidate workflows has the highest expected impact when applied to the target workflow.
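The pipeline described here (find similar workflows, collect their historical changes, rank by expected impact) can be sketched as below. As a simplifying assumption, the "machine learning model" is replaced by the mean observed impact of each change, and similarity is plain Jaccard overlap of step names; none of these choices are specified by the patent.

```python
def similarity(a, b):
    # Jaccard similarity over step names (illustrative metric).
    return len(set(a) & set(b)) / len(set(a) | set(b))


def recommend_change(target, candidates, history, threshold=0.5):
    """target: list of steps; candidates: {name: steps};
    history: list of (workflow_name, change, observed_impact)."""
    # 1. Keep only workflows sufficiently similar to the target.
    similar = {name for name, steps in candidates.items()
               if similarity(target, steps) >= threshold}
    # 2. Gather impacts of historical changes on those workflows.
    scores = {}
    for name, change, impact in history:
        if name in similar:
            scores.setdefault(change, []).append(impact)
    if not scores:
        return None
    # 3. Recommend the change with the highest mean impact
    #    (stand-in for the learned expected-impact model).
    return max(scores, key=lambda c: sum(scores[c]) / len(scores[c]))


target = ["fetch", "validate", "store"]
candidates = {"wf1": ["fetch", "validate", "store", "notify"],
              "wf2": ["scan", "archive"]}
history = [("wf1", "reorder_steps", 0.2),
           ("wf1", "add_cache_step", 0.6),
           ("wf2", "drop_step", 0.9)]
best = recommend_change(target, candidates, history)
```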
Memory device supporting skip calculation mode and method of operating the same
A memory device includes a memory cell array formed in a semiconductor die, the memory cell array including a plurality of memory cells to store data, and a calculation circuit formed in the semiconductor die. The calculation circuit performs calculations based on broadcast data and internal data. In a skip calculation mode, it uses index data to omit the calculations for invalid data and perform them only for valid data, where the broadcast data are provided from outside the semiconductor die, the internal data are read from the memory cell array, and the index data indicate whether the internal data are valid or invalid. Power consumption is reduced by omitting both the calculations and the read operation for the invalid data.
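The skip calculation mode can be modeled as a multiply-accumulate loop gated by the index data: invalid positions are neither read nor multiplied. This is a behavioral sketch, not the hardware; the function name and data layout are assumptions for illustration.

```python
def skip_mac(broadcast, internal, index):
    """Multiply-accumulate that honors a skip-calculation mode.

    broadcast: data provided from outside the die.
    internal:  data that would be read from the memory cell array.
    index:     1 marks valid internal data, 0 marks invalid data.
    """
    acc = 0
    for i, valid in enumerate(index):
        if not valid:
            # Skip both the read and the calculation for invalid data,
            # which is where the power saving comes from.
            continue
        acc += broadcast[i] * internal[i]
    return acc
```

With `index = [1, 0, 1, 0]`, only positions 0 and 2 contribute to the accumulation.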
METHOD FOR PATCHING CHIP AND CHIP
An embodiment of the present application discloses a method for patching a chip, and a chip. The chip includes a first program. When a function that needs to be replaced in the first program is run, an interrupt service routine is executed according to a pre-stored correspondence between the address of the function to be replaced and an interrupt instruction. The interrupt service routine is the service routine scheduled by the interrupt instruction corresponding to the function to be replaced, and its return address is the address of a patch function for that function. The patch function is then run according to its address, thereby applying the patch to the first program.
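The redirection mechanism (a pre-stored table mapping a function's address to its patch, consulted when the function is invoked) can be simulated as below. The interrupt machinery is abstracted into a dispatch lookup, and all names (`patch_table`, `call`) are invented for the sketch.

```python
# Pre-stored correspondence: address of function-to-replace -> patch.
# Python object ids stand in for ROM addresses in this simulation.
patch_table = {}


def original_func(x):
    # Function in the "first program" that contains a bug.
    return x + 1


def patched_func(x):
    # Patch function with the corrected behavior.
    return x + 2


def call(func, *args):
    # Stands in for the interrupt service routine: if the invoked
    # function's address is in the table, control "returns" into the
    # patch function instead of the original.
    target = patch_table.get(id(func), func)
    return target(*args)


result_before = call(original_func, 1)   # no entry yet: original runs
patch_table[id(original_func)] = patched_func
result_after = call(original_func, 1)    # entry present: patch runs
```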
Thermal state inference based frequency scaling
The systems and methods monitor thermal states associated with a device and set thermal thresholds for the device. The thresholds are inferred from information gathered by a client application running on the device. When a monitored thermal state violates one of the thresholds, a stored policy associated with that violation is implemented.
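The monitor/threshold/policy loop can be sketched as a simple check that returns the policy actions to apply. Sensor names, the policy representation, and the function name are all illustrative assumptions; the actual thresholds would be inferred from client-application data rather than hard-coded.

```python
def check_thermal(states, thresholds, policies):
    """Return the stored policy actions triggered by threshold violations.

    states:     {sensor: current temperature}  (monitored thermal states)
    thresholds: {sensor: limit}                (inferred thresholds)
    policies:   {sensor: action}               (stored per-violation policy)
    """
    actions = []
    for sensor, temp in states.items():
        if temp > thresholds.get(sensor, float("inf")):
            # Violation: implement the stored policy, e.g. scale
            # the corresponding clock frequency down.
            actions.append(policies[sensor])
    return actions


actions = check_thermal(
    states={"cpu": 85, "gpu": 60},
    thresholds={"cpu": 80, "gpu": 75},
    policies={"cpu": "reduce_cpu_freq", "gpu": "reduce_gpu_freq"},
)
```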
Hierarchical workload allocation in a storage system
A method for hierarchical workload allocation in a storage system. The method may include determining to reallocate a compute workload of a current compute core of the storage system, where the current compute core is responsible for executing a workload allocation unit that comprises one or more first type shards; and reallocating the compute workload by (a) maintaining the current compute core's responsibility for executing the workload allocation unit, and (b) reallocating at least one of the first type shards to a new workload allocation unit that is allocated to a new compute core.
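The key point of the reallocation, that the current core keeps its unit while only some shards move to a new unit on a new core, can be sketched with a minimal data model. Representing a workload allocation unit as a set of shards keyed by core is a deliberate simplification of the hierarchy described in the abstract.

```python
def reallocate_shard(units, shard, new_core):
    """Move one shard into a new unit on new_core.

    units: {core: set of shards in that core's workload allocation unit}.
    The donating core keeps its unit (and its remaining shards); only
    the moved shard changes owner.
    """
    for core, shards in units.items():
        if shard in shards:
            shards.remove(shard)
            break
    # The new compute core gets a new workload allocation unit
    # (or an existing one) containing the moved shard.
    units.setdefault(new_core, set()).add(shard)
    return units


units = {"core0": {"shard_a", "shard_b", "shard_c"}}
reallocate_shard(units, "shard_b", "core1")
```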
Secure control flow prediction
Systems and methods are disclosed for secure control flow prediction. Some implementations may be used to eliminate or mitigate the Spectre class of attacks in a processor. For example, an integrated circuit (e.g., a processor) for executing instructions includes a control flow predictor whose entries include respective indications of whether the entry has been activated for use in a current process. The integrated circuit is configured to access the indication in the entry associated with a control flow instruction that is scheduled for execution; determine, based on the indication, whether that entry is activated for use in the current process; and, responsive to a determination that the entry is not activated for use in the current process, apply a constraint on speculative execution based on control flow prediction for the control flow instruction.
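The per-entry activation check can be modeled with a tiny predictor whose entries record which process activated them; a lookup by a different process constrains speculation, modeled here as refusing to return a prediction. This is a behavioral sketch of the described check, not the circuit, and all class and field names are invented.

```python
class PredictorEntry:
    def __init__(self):
        self.activated_pid = None  # process that activated this entry
        self.target = None         # predicted control flow target


class Predictor:
    def __init__(self, size=4):
        self.entries = [PredictorEntry() for _ in range(size)]

    def train(self, index, pid, target):
        # Activating an entry ties it to the training process.
        entry = self.entries[index]
        entry.activated_pid = pid
        entry.target = target

    def predict(self, index, pid):
        entry = self.entries[index]
        if entry.activated_pid != pid:
            # Entry not activated for the current process: apply the
            # constraint on speculative execution (no prediction),
            # so one process cannot steer another's speculation.
            return None
        return entry.target


p = Predictor()
miss = p.predict(0, pid=1)       # never activated: constrained
p.train(0, pid=1, target=0x400)
hit = p.predict(0, pid=1)        # activated by this process
cross = p.predict(0, pid=2)      # different process: constrained
```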
PARALLEL PROCESSOR, ADDRESS GENERATOR OF PARALLEL PROCESSOR, AND ELECTRONIC DEVICE INCLUDING PARALLEL PROCESSOR
Disclosed is a parallel processor. The parallel processor includes a processing element array including a plurality of processing elements arranged in rows and columns, a row memory group including row memories corresponding to the rows of the processing elements, a column memory group including column memories corresponding to the columns of the processing elements, and a controller that generates a first address and a second address, sends the first address to the row memory group, and sends the second address to the column memory group. The controller supports convolution operations of mutually different forms by changing the scheme used to generate the first address.
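The controller's trick of supporting different convolution forms by switching the row-address generation scheme can be sketched as an address generator with selectable modes. The two schemes shown (sequential and strided) are assumptions chosen to illustrate the idea; the patent does not enumerate its schemes here.

```python
def row_addresses(mode, base, count, stride=1):
    """Generate first (row-memory) addresses under a selectable scheme.

    "sequential": consecutive rows, as for a dense convolution window.
    "strided":    every stride-th row, as for a strided/dilated form.
    Switching modes changes the convolution form the array computes
    without changing the processing element array itself.
    """
    if mode == "sequential":
        return [base + i for i in range(count)]
    if mode == "strided":
        return [base + i * stride for i in range(count)]
    raise ValueError(f"unknown address scheme: {mode}")
```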