Patent classifications
G06F9/44578
Virtual function driver loading method and server using global and local identifiers corresponding to locations of the virtual functions
A driver loading method and a server, where, when receiving a service request, the server determines a first global index and a first global virtual function (VF) identifier corresponding to a first function description of a designated function included in the service request, determines a virtual machine (VM) corresponding to the service request, associates the first global VF identifier with the VM, allocates a first local index on the VM to the designated function, creates a correspondence between the first local index and the first function description, and sends the correspondence to the VM. The VM loads, according to the correspondence, a driver of the designated function for a first VF corresponding to the first global VF identifier. According to the foregoing method, different drivers can be loaded for VFs that have different functions and that are virtualized by a Peripheral Component Interconnect Express (PCIe) device.
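The bookkeeping described in the abstract can be sketched as follows: the server resolves a function description to a global VF identifier, binds that VF to a VM, allocates a local index on the VM, and sends the (local index, function description) correspondence so the VM can load the matching driver. All class, function, and driver names here are illustrative, not taken from the patent.

```python
# Sketch of the described server/VM split. Hypothetical names throughout.

class Server:
    def __init__(self, global_table):
        # global_table: function description -> (global index, global VF id)
        self.global_table = global_table
        self.vm_bindings = {}    # vm id -> list of global VF ids bound to it
        self.local_indexes = {}  # vm id -> next free local index

    def handle_service_request(self, vm_id, function_desc):
        global_index, global_vf_id = self.global_table[function_desc]
        self.vm_bindings.setdefault(vm_id, []).append(global_vf_id)
        local_index = self.local_indexes.get(vm_id, 0)
        self.local_indexes[vm_id] = local_index + 1
        # the correspondence sent down to the VM
        return {local_index: function_desc}, global_vf_id

class VM:
    # illustrative mapping from function description to driver module
    DRIVERS = {"crypto-offload": "crypto_vf.ko", "packet-filter": "filter_vf.ko"}

    def load_driver(self, correspondence, global_vf_id):
        # pick the driver that matches the described function of this VF
        (local_index, desc), = correspondence.items()
        return (global_vf_id, local_index, self.DRIVERS[desc])

server = Server({"crypto-offload": (0, "vf-07"), "packet-filter": (1, "vf-12")})
corr, vf = server.handle_service_request("vm-1", "crypto-offload")
print(VM().load_driver(corr, vf))  # ('vf-07', 0, 'crypto_vf.ko')
```

The point of the indirection is that the VM never needs the global index; its local index plus the function description is enough to pick a per-function driver.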
Optimizing image reconstruction for container registries
A computer-implemented method includes receiving characteristic information of a container registry. The container registry includes a plurality of container images. The method includes selecting at least two container images in the container registry and selecting parameters for optimization based on the characteristic information. The method also includes generating a cost function based on the parameters for optimization and optimizing the at least two container images in the container registry based on the cost function. A computer-implemented method includes receiving a composition of each of at least two layers in a container image. The composition of each of the at least two layers includes at least one file. The method includes mapping overlap between the composition of the at least two layers and estimating a redundancy in the container image based on the overlap. The method also includes calculating new layers which reduce the redundancy in the container image.
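The overlap-mapping and redundancy-estimation steps can be sketched by modeling each layer as a set of (file path, size) entries; shared files across layers are duplicated bytes, and rebuilding puts files common to all layers into one base layer. The data and the exact redundancy metric are illustrative assumptions, not the patent's cost function.

```python
# Sketch of layer-overlap redundancy estimation and layer rebuilding.
# Layers are modeled as lists of (file path, size in bytes) pairs.

def estimate_redundancy(layers):
    """Fraction of total stored bytes that are duplicates across layers."""
    total = sum(size for layer in layers for _, size in layer)
    unique = {}
    for layer in layers:
        for path, size in layer:
            unique[path] = size
    return (total - sum(unique.values())) / total

def rebuild_layers(layers):
    """Move files present in every layer into one shared base layer."""
    shared = set.intersection(*(set(layer) for layer in layers))
    base = sorted(shared)
    rest = [sorted(set(layer) - shared) for layer in layers]
    return [base] + rest

layer_a = [("/lib/libc.so", 2000), ("/app/a.bin", 500)]
layer_b = [("/lib/libc.so", 2000), ("/app/b.bin", 700)]
print(round(estimate_redundancy([layer_a, layer_b]), 3))  # 0.385
print(rebuild_layers([layer_a, layer_b]))
```

A real cost function would also weigh pull frequency and layer cache hit rates, per the characteristic information the method receives.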
Resource loading at application startup using attributes of historical data groups
An electronic device includes a memory and processing circuitry. The memory is to be loaded with resources for applications to be executed at the electronic device. The processing circuitry obtains a current data group having attributes for a current running scene. Further, the processing circuitry obtains historical data groups respectively corresponding to a plurality of historical scenes. Each historical data group includes attributes, for a historical running scene, that correspond to the attributes of the current data group. Then, the processing circuitry calculates similarities respectively for the historical data groups to the current data group, and determines a historical scene from the plurality of historical scenes based on the similarities. In addition, the processing circuitry determines a potential application for the current running scene. The potential application was executed in the determined historical scene. Then, the processing circuitry loads a resource for the potential application into the memory.
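The scene-matching step can be sketched by representing each running scene as an attribute vector (for example, hour of day, location id, battery level), scoring historical scenes by cosine similarity to the current one, and preloading the app that ran in the best-matching scene. The attribute choice and similarity measure are illustrative assumptions; the patent does not specify them.

```python
# Sketch of similarity-based scene matching for resource preloading.
import math

def cosine(a, b):
    """Cosine similarity between two non-zero attribute vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def pick_app_to_preload(current, history):
    # history: list of (attribute vector, app executed in that scene)
    best_scene = max(history, key=lambda h: cosine(current, h[0]))
    return best_scene[1]

history = [
    ([8, 1, 90], "mail"),    # morning, at home, full battery
    ([19, 2, 40], "maps"),   # evening, commuting, half battery
]
print(pick_app_to_preload([18, 2, 35], history))  # 'maps'
```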
Method and device for the accelerated execution of applications
An aim of the invention is to accelerate the execution, in particular the startup, of an application. The invention relates to a method, performed by at least one device, for executing an application. The method involves providing, from a data memory, data parts that are required for the execution of the application. The data parts are stored in the data memory in an order that is based, at least in some areas, on the order in which they are expected to be required.
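The ordering idea can be sketched as follows: given the order in which an application's data parts are expected to be needed at startup (for example, from a profiled previous run), lay the parts out in the data memory in that order so that reads become sequential. The part names and the profiling source are illustrative assumptions.

```python
# Sketch of laying out data parts by expected first-use order.

def layout_parts(parts, expected_order):
    """Return parts reordered by expected first-use; unprofiled parts go last."""
    rank = {name: i for i, name in enumerate(expected_order)}
    return sorted(parts, key=lambda p: rank.get(p, len(rank)))

parts = ["textures", "loader", "config", "audio"]
profiled_order = ["loader", "config", "textures"]  # from a previous start
print(layout_parts(parts, profiled_order))
# ['loader', 'config', 'textures', 'audio']
```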
Parameter configuration system of electronic device
An operation parameter configuration method includes configuring at least two groups of operation parameters of an application, detecting a startup signal of the application in real time, confirming one of the at least two groups of operation parameters according to the startup signal, and starting the application in a foreground of an electronic device according to the one confirmed group of operation parameters. The at least two groups of operation parameters include a group of default operation parameters and a group of optimal operation parameters. The group of optimal operation parameters is calculated according to a history of execution of the application in the foreground of the electronic device.
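The two-group selection can be sketched as follows: keep a default parameter group and an "optimal" group derived from past foreground runs, and let the startup signal decide which group to apply. The parameter fields, the averaging rule, and the signal values are illustrative assumptions.

```python
# Sketch of default-vs-optimal parameter group selection at startup.

DEFAULT = {"threads": 2, "cache_mb": 64}

def optimal_from_history(runs):
    """Illustrative rule: average the settings of past foreground runs."""
    n = len(runs)
    return {key: sum(run[key] for run in runs) // n for key in runs[0]}

def choose_parameters(startup_signal, history):
    if startup_signal == "foreground" and history:
        return optimal_from_history(history)
    return DEFAULT

history = [{"threads": 4, "cache_mb": 128}, {"threads": 4, "cache_mb": 192}]
print(choose_parameters("foreground", history))  # {'threads': 4, 'cache_mb': 160}
print(choose_parameters("background", history))  # {'threads': 2, 'cache_mb': 64}
```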
System and method for accelerating processing in event-driven server-less computing
Disclosed are systems and methods for execution of applications in a virtual execution environment. An exemplary method comprises: receiving, from a client, a request for execution of an application in at least one virtual execution environment on at least one hardware node; determining whether there is a state snapshot of the application in the virtual execution environment; restoring the state of the application from the state snapshot when the snapshot is found; starting the application without restoring its state, and creating a new state snapshot of the application, when the snapshot is not found; continuing execution of the application in the virtual execution environment; and returning a response of the application to the client.
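The snapshot fast path can be sketched as: restore application state from a snapshot when one exists, otherwise cold-start the application and save a new snapshot for the next request. The in-memory snapshot store and the warm-up step are stand-ins for the virtual execution environment's mechanisms.

```python
# Sketch of snapshot-based acceleration for serverless request handling.

snapshots = {}  # app name -> saved state (stand-in for a snapshot store)

def cold_start(app):
    """Stand-in for the slow path of initializing the application."""
    return {"app": app, "warmed_up": True}

def handle_request(app):
    if app in snapshots:
        state, path = snapshots[app], "restored"
    else:
        state = cold_start(app)    # slow path on first request
        snapshots[app] = state     # snapshot the state for next time
        path = "cold-start"
    return path, state

print(handle_request("resize-image")[0])  # cold-start
print(handle_request("resize-image")[0])  # restored
```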
Method for preloading application based on history and condition and electronic device supporting same
An embodiment discloses an electronic device including: a first memory in which multiple applications are stored; a second memory; and at least one processor operatively connected to the first memory and the second memory. The processor(s) is configured to determine, based on a history of usage of the multiple applications for a first period of time, a priority of the multiple applications over multiple time intervals included in a second period of time. The processor(s) is further configured to preload a predetermined first number of applications into the second memory based on the priority if a designated condition is satisfied: a first list of applications is preloaded into the second memory if the designated condition is satisfied in a first time interval, and a second list of applications is preloaded into the second memory if the designated condition is satisfied in a second time interval.
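The interval-based preload policy can be sketched as: build per-interval app priorities from a usage history, then, when the designated condition holds, preload the top-N apps for the current interval. The interval labels and launch counts are illustrative.

```python
# Sketch of per-time-interval app priorities for preloading.
from collections import Counter

def build_priorities(usage):
    """usage: list of (interval label, app launched) events."""
    per_interval = {}
    for interval, app in usage:
        per_interval.setdefault(interval, Counter())[app] += 1
    return per_interval

def preload_list(priorities, interval, n):
    """Top-n apps for the given interval, most frequently launched first."""
    return [app for app, _ in priorities[interval].most_common(n)]

usage = [("morning", "mail"), ("morning", "news"), ("morning", "mail"),
         ("evening", "video"), ("evening", "video"), ("evening", "games")]
prio = build_priorities(usage)
print(preload_list(prio, "morning", 1))  # ['mail']
print(preload_list(prio, "evening", 2))  # ['video', 'games']
```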
Method of generating a representation of a program logic, decompilation apparatus, recompilation system and computer program products
A method of generating a representation of a program logic includes: capturing first program code in a low-level programming language, the program code having been generated by compiling program logic defined in a high-level language; dividing the captured first program code into a sequence of code sections based on a predetermined set of at least partially parameterized code patterns, wherein specific parameter values are captured for each code section and a terminal symbol of an intermediate language is assigned to each code section; assigning the assigned terminal symbols to non-terminal symbols of the intermediate language based on a context-free grammar, wherein a totality of the assigned non-terminal symbols describes the program logic of the first program code in the intermediate language; and generating a representation of the program logic independent of the first processor architecture based on the assigned non-terminal symbols of the intermediate language and the captured parameter values.
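The two passes can be illustrated with a toy sketch: pattern-match low-level instructions into terminal symbols while capturing parameter values, then fold terminal sequences into non-terminals with grammar-like rules. The instruction syntax, patterns, and grammar are invented for illustration and the reduction loop is a deliberately naive shift-reduce stand-in.

```python
# Toy sketch of terminal assignment plus grammar-based folding.
import re

PATTERNS = [  # (regex over one instruction, terminal symbol)
    (re.compile(r"load r(\d+), #(\d+)"), "CONST"),
    (re.compile(r"add r(\d+), r(\d+)"), "ADD"),
    (re.compile(r"ret"), "RET"),
]

GRAMMAR = {("CONST", "CONST", "ADD"): "EXPR", ("EXPR", "RET"): "FUNC"}

def to_terminals(code):
    """Assign a terminal symbol and captured parameters to each instruction."""
    out = []
    for line in code:
        for pattern, symbol in PATTERNS:
            m = pattern.fullmatch(line)
            if m:
                out.append((symbol, m.groups()))
                break
    return out

def reduce_terminals(terms):
    """Repeatedly replace any matching right-hand side with its non-terminal."""
    symbols = [s for s, _ in terms]
    changed = True
    while changed:
        changed = False
        for rhs, lhs in GRAMMAR.items():
            for i in range(len(symbols) - len(rhs) + 1):
                if tuple(symbols[i:i + len(rhs)]) == rhs:
                    symbols[i:i + len(rhs)] = [lhs]
                    changed = True
                    break
    return symbols

terms = to_terminals(["load r1, #2", "load r2, #3", "add r1, r2", "ret"])
print([s for s, _ in terms])    # ['CONST', 'CONST', 'ADD', 'RET']
print(reduce_terminals(terms))  # ['FUNC']
```

Because the captured parameter values travel alongside the terminals, a later stage could re-emit the same logic for a different processor architecture.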
Flexible accelerator for sparse tensors (FAST) in machine learning
An apparatus includes a first tensor compute cluster configured to receive first input feature tensors, a second tensor compute cluster configured to receive second input feature tensors more sparse than the first input feature tensors, and a vector accelerator. The apparatus also includes circuitry configured to partition an input feature map into a plurality of input feature tensors based on a compression criterion and assign each of the plurality of input feature tensors to one of the first tensor compute cluster, the second tensor compute cluster, or the vector accelerator based upon at least one of a plurality of parameters including a sparsity and an optimization parameter.
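The sparsity-based routing can be sketched as: split an input feature map into tiles, measure each tile's sparsity (fraction of zeros), and assign it to the dense cluster, the sparse cluster, or the vector accelerator. The thresholds and tile shapes are illustrative choices, not values from the patent.

```python
# Sketch of routing feature-map tiles by sparsity.

def sparsity(tile):
    """Fraction of zero elements in a 2-D tile."""
    flat = [x for row in tile for x in row]
    return flat.count(0) / len(flat)

def route(tiles, sparse_threshold=0.5, vector_threshold=0.9):
    assignment = {"dense": [], "sparse": [], "vector": []}
    for i, tile in enumerate(tiles):
        s = sparsity(tile)
        if s >= vector_threshold:
            assignment["vector"].append(i)   # almost all zeros
        elif s >= sparse_threshold:
            assignment["sparse"].append(i)
        else:
            assignment["dense"].append(i)
    return assignment

tiles = [
    [[1, 2], [3, 4]],   # dense: no zeros
    [[0, 5], [0, 0]],   # sparse: 75% zeros
    [[0, 0], [0, 0]],   # near-empty: route to the vector accelerator
]
print(route(tiles))  # {'dense': [0], 'sparse': [1], 'vector': [2]}
```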
System and method for automatic generation and management of feature level application directory
Embodiments of the present invention provide a system for automatically generating and managing application directories of one or more applications associated with an entity. The system is configured for identifying initiation of packaging of one or more program codes associated with at least one application, scanning the one or more program codes to identify one or more parameters associated with the one or more program codes, and automatically generating an application directory associated with the at least one application based at least on the one or more parameters identified by scanning the one or more program codes, wherein the one or more parameters comprise one or more dependencies, one or more screens, one or more permissions, one or more services, one or more navigational parameters, one or more base classes, one or more logging frameworks, and one or more static analyzers.
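The scanning step can be sketched as: walk program sources with simple patterns and collect a few of the listed parameters (dependencies, permissions, base classes) into a directory structure. The patterns and source snippets are illustrative; a real scanner would use proper parsers per language and cover all of the listed parameter kinds.

```python
# Sketch of scanning program code into a feature-level application directory.
import re

RULES = {
    "dependencies": re.compile(r"^import\s+([\w.]+)", re.M),
    "permissions":  re.compile(r'uses-permission name="([\w.]+)"'),
    "base_classes": re.compile(r"class\s+\w+\s*:\s*public\s+(\w+)"),
}

def build_directory(sources):
    """Aggregate matches from all sources into one sorted directory."""
    directory = {key: set() for key in RULES}
    for text in sources:
        for key, pattern in RULES.items():
            directory[key].update(pattern.findall(text))
    return {key: sorted(values) for key, values in directory.items()}

sources = [
    "import os\nimport json\n",
    '<uses-permission name="android.permission.CAMERA"/>',
    "class LoginScreen : public BaseScreen {}",
]
print(build_directory(sources))
```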