Patent classifications
G06F9/485
DEVICE SUSPEND METHOD AND COMPUTING DEVICE
A device suspension method and a computing device are provided. In the method, before a device enters a suspended state, the memory space occupied by background processes that are unrelated to the foreground process is released, so those background processes are not retained in the memory of the device. This reduces the amount of data stored in the memory while the device is suspended. When the device needs to be woken up, only a relatively small amount of data therefore needs to be read from the memory, and the working state can be rapidly restored. This reduces the delay of reading data from the memory when the device is woken up, thereby accelerating the wakeup speed of the device. The data that is retained remains stored in the memory while the device is suspended.
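The release step described above can be sketched as follows. This is an illustrative Python sketch only; the relatedness model (a mapping from each process to the set of processes related to it) and all process names are hypothetical, not taken from the patent:

```python
def release_for_suspend(processes, foreground):
    """Decide which processes stay resident across a suspend.

    `processes` maps a process name to the set of process names related
    to it. Everything unrelated to the foreground process is released
    before the device suspends, so less data remains in memory.
    Returns (kept, released).
    """
    kept = {foreground} | processes.get(foreground, set())
    released = [name for name in processes if name not in kept]
    return kept, released
```

On wakeup, only the `kept` set has to be restored, which is the source of the reduced read delay the abstract describes.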
METHOD FOR MANAGING FUNCTION BASED ON ENGINE, ELECTRONIC DEVICE AND MEDIUM
Disclosed are a method for managing a function based on an engine, an electronic device and a medium, which relate to the field of computer technologies, and particularly to the field of artificial intelligence (AI) technologies such as cloud computing, big data and deep learning. The technical solution includes: generating a function creating request, where the function creating request comprises Java Archive (JAR) package path information; sending the function creating request to a coordinator node of the engine; obtaining, by the coordinator node, a JAR package based on the JAR package path information; copying the JAR package to the plug-in directory corresponding to each worker node of at least one worker node of the engine; and performing, by a daemon thread, registration and loading of a function corresponding to the JAR package in the plug-in directory.
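The distribution flow above can be sketched in Python. This is a minimal simulation, not the patented implementation: plug-in directories are modeled as in-memory dictionaries, and the daemon-thread registration step is collapsed into a direct call:

```python
class Engine:
    """Hypothetical engine with one coordinator and several worker nodes."""

    def __init__(self, workers):
        # each worker node has its own plug-in directory (simulated as a dict)
        self.plugin_dirs = {w: {} for w in workers}
        self.registered = {}  # function name -> workers where it is loaded

    def create_function(self, name, jar_path):
        # coordinator node: obtain the JAR package via its path information
        jar = ("bytes-of", jar_path)  # stand-in for reading the archive
        # copy the JAR package into every worker's plug-in directory
        for plugin_dir in self.plugin_dirs.values():
            plugin_dir[name] = jar
        # a daemon thread would then register and load the function per worker
        self.registered[name] = list(self.plugin_dirs)
        return self.registered[name]
```

The key property is that the function becomes available on every worker after a single create request to the coordinator.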
EDGE FUNCTION BURSTING
One example method includes: determining that local resources at an edge site are inadequate to support performance of a function needed by software running on the edge site; invoking a client agent and, in response, receiving an execution manifest; determining, by the client agent, where to execute the function, where the determining comprises identifying a target execution environment for the function and is based in part on information contained in the execution manifest; and transmitting, by the client agent, the execution manifest to a server agent of the target execution environment. The execution manifest facilitates execution of the function in the target execution environment.
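The client agent's placement decision can be sketched as follows. This is an illustrative sketch under simplifying assumptions: the manifest is modeled as a dictionary, and resource adequacy is reduced to a single memory dimension (the field names are hypothetical):

```python
def choose_target(manifest, local_free_mb, required_mb):
    """Decide where the function should execute.

    Run locally if the edge site has enough free memory; otherwise pick
    the first environment in the execution manifest with enough capacity.
    """
    if local_free_mb >= required_mb:
        return "local"
    for env in manifest.get("allowed_environments", []):
        if env["capacity_mb"] >= required_mb:
            return env["name"]  # manifest would then be sent to this server agent
    raise RuntimeError("no environment can host the function")
```

Transmitting the manifest to the chosen environment's server agent would be the next step, omitted here.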
CONFIGURING A RESOURCE FOR EXECUTING A COMPUTATIONAL OPERATION
A computing node is disclosed. The computing node comprises processing circuitry configured to cause the computing node to receive a message (102) comprising configuration information for a resource of a data object that is hosted at the computing node and is associated with a computational operation, which computational operation is executable by the computing node. The processing circuitry is further configured to cause the computing node to configure (104) the resource of the data object on the computing node in accordance with the received configuration information, and to execute (106) the computational operation in accordance with the configured resource. Also disclosed are a corresponding server node and methods of operating a computing node and a server node. The computing node may comprise a Lightweight Machine to Machine (LwM2M) client and the server node may comprise an LwM2M server.
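The receive/configure/execute sequence (steps 102, 104, 106) can be sketched in Python. This is a toy model only: the LwM2M object and resource structure is simulated with dictionaries, and the message fields are hypothetical:

```python
class ComputingNode:
    """Hypothetical node hosting data objects with configurable resources."""

    def __init__(self):
        self.objects = {}  # object id -> {resource id: configured value}

    def receive(self, message):
        # step 102: message carries configuration information for a resource
        obj, res, value = message["object"], message["resource"], message["value"]
        # step 104: configure the resource of the data object accordingly
        self.objects.setdefault(obj, {})[res] = value

    def execute(self, obj, operation):
        # step 106: execute the computational operation in accordance with
        # the configured resource(s) of the object
        return operation(self.objects[obj])
```

In a real LwM2M deployment the server node would send the configuration message and trigger the Execute operation; both are reduced to direct method calls here.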
DATA PROCESSING SYSTEM, OPERATION METHOD THEREOF, AND STORAGE DEVICE THEREFOR
A data processing system may include a storage device configured to: transmit, to an external device, prediction information, for each power mode, that indicates a predicted time for performing a background operation for managing a memory device; and perform the background operation in an idle state of the storage device by switching to a corresponding power mode in response to a power mode control signal that is received in the idle state; and a control device configured to: determine, based on the prediction information, a power mode of the storage device and an idle time for the idle state during which the background operation is performed; transmit the power mode control signal to the storage device; and suspend, during the idle time, execution of a command processing request transmitted to the storage device.
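The control device's planning step can be sketched as follows. This is an illustrative sketch, assuming (not stated in the abstract) that lower-power modes take longer to complete the background operation, so the controller picks the slowest mode that still fits in the available idle window:

```python
def plan_idle(prediction, idle_window_ms):
    """Choose a power mode and idle time from the device's predictions.

    `prediction` maps power mode -> predicted background-operation time (ms).
    Returns (mode, idle_time_ms), or None if nothing fits in the window.
    Assumption: longer predicted time corresponds to a lower-power mode.
    """
    fitting = {m: t for m, t in prediction.items() if t <= idle_window_ms}
    if not fitting:
        return None
    mode = max(fitting, key=fitting.get)  # slowest fit = lowest power (assumed)
    return mode, fitting[mode]
```

During the returned idle time the controller would hold back command processing requests, as the abstract describes.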
Live migration of clusters in containerized environments
The technology provides for live migration from a first cluster to a second cluster. For instance, when requests to one or more cluster control planes are received, a predetermined fraction of the received requests may be allocated to a control plane of the second cluster, while a remaining fraction of the received requests may be allocated to a control plane of the first cluster. The predetermined fraction of requests are handled using the control plane of the second cluster. While handling the predetermined fraction of requests, it is detected whether there are failures in the second cluster. Based on not detecting failures in the second cluster, the predetermined fraction of requests allocated to the control plane of the second cluster may be increased in predetermined stages until all requests are allocated to the control plane of the second cluster.
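The staged traffic shift can be sketched in Python. This is a simplified model: control planes are plain callables, the stage fractions and the health check are hypothetical, and rollback is reduced to halting the shift at the last healthy fraction:

```python
def migrate_requests(requests, old_cp, new_cp, stages, new_cluster_healthy):
    """Route a growing fraction of requests to the new control plane.

    Advances through `stages` (e.g. [0.1, 0.5, 1.0]) only while the new
    cluster shows no failures; returns the final committed fraction.
    """
    fraction = 0.0
    for stage in stages:
        # trial-route this stage's fraction to the new control plane
        for i, req in enumerate(requests):
            handler = new_cp if i < stage * len(requests) else old_cp
            handler(req)
        if not new_cluster_healthy():
            return fraction  # failures detected: keep the last safe fraction
        fraction = stage
    return fraction  # 1.0 means all requests now go to the new control plane
```

A production system would also drain in-flight requests and deallocate the old control plane once the fraction reaches 1.0; both are omitted here.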
DYNAMIC ALLOCATION OF EXECUTABLE CODE FOR MULTI-ARCHITECTURE HETEROGENEOUS COMPUTING
An apparatus for executing a software program, comprising processing units and a hardware processor adapted for: in an intermediate representation of the software program, where the intermediate representation comprises blocks, each associated with an execution block of the software program and comprising intermediate instructions, identifying a calling block and a target block, where the calling block comprises a control-flow intermediate instruction to execute a target intermediate instruction of the target block; generating target instructions using the target block; generating calling instructions using the calling block and a computer control instruction for invoking the target instructions, when the calling instructions are executed by a calling processing unit and the target instructions are executed by a target processing unit; configuring the calling processing unit for executing the calling instructions; and configuring the target processing unit for executing the target instructions.
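The block-splitting step can be sketched as follows. This is a toy model of the idea only: intermediate-representation blocks are lists of tuples, and the rewriting of the control-flow instruction into a cross-unit invocation is shown for a single hypothetical `branch` form:

```python
def compile_blocks(ir_blocks, calling_id, target_id):
    """Split an IR into per-unit instruction streams.

    `ir_blocks` maps block id -> list of intermediate instructions. The
    calling block's control-flow transfer to the target block is replaced
    by an instruction that invokes the target on another processing unit.
    Returns (calling_instructions, target_instructions).
    """
    target_instrs = list(ir_blocks[target_id])
    calling_instrs = []
    for instr in ir_blocks[calling_id]:
        if instr == ("branch", target_id):
            # replace the control-flow instruction with a cross-unit call
            calling_instrs.append(("invoke_on", "target_unit", target_id))
        else:
            calling_instrs.append(instr)
    return calling_instrs, target_instrs
```

The two streams would then be installed on the calling and target processing units respectively, per the last two limitations of the abstract.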
Long running workflows for robotic process automation
Systems and methods for executing a robotic process automation (RPA) workflow are provided. The RPA workflow is executed by a first robot. The execution of the RPA workflow is suspended by the first robot. A current context of the RPA workflow is serialized at a time of the suspension and the current context of the RPA workflow is stored. The execution of the RPA workflow is resumed by a second robot based on a triggering condition by retrieving the current context of the RPA workflow. The first robot and the second robot may be the same robot or different robots.
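The suspend/resume handoff can be sketched in Python. This is an illustrative sketch under stated assumptions: the workflow context is a JSON-serializable dictionary, the shared store is a plain dict, and the triggering condition is represented by simply calling `resume_workflow`:

```python
import json

def suspend_workflow(context, store):
    """First robot: serialize the current workflow context at suspension."""
    store["wf_context"] = json.dumps(context)

def resume_workflow(store, robot_id):
    """Second robot (possibly the same one): retrieve the stored context
    on the triggering condition and continue from the recorded step."""
    context = json.loads(store["wf_context"])
    context["resumed_by"] = robot_id
    return context
```

Because the context is serialized to a robot-independent form, it does not matter whether the resuming robot is the original one or a different one.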
Method and system for calling/executing an action from an outside application within an existing open application
Systems and methods for executing a second application within a primary application window are provided, thereby improving the usability of graphical user interfaces (GUI). An exemplary method comprises executing a first application on the primary application window. The primary application window displays a plurality of GUI elements associated with the first application. The first application is configured to execute a second application upon processing an event invoked on the primary application window. Thereafter, the first application and the primary application window are suspended and a secondary application window is displayed within the primary application window. The secondary application window displays a plurality of GUI elements associated with the second application. The first application and the primary application window automatically resume after the secondary application window is closed.
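The suspend-on-event, resume-on-close lifecycle can be sketched as follows. This is a minimal sketch with no real windowing toolkit; the secondary application is modeled as a callable whose return marks the closing of its window:

```python
class PrimaryApp:
    """Hypothetical first application owning the primary window."""

    def __init__(self):
        self.state = "running"

    def on_event(self, run_secondary):
        self.state = "suspended"   # primary app and window are suspended
        result = run_secondary()   # secondary window shown within the primary
        self.state = "running"     # automatic resume after the secondary closes
        return result
```

The automatic resume falls out of the control flow: the primary application's handler regains control exactly when the secondary application returns.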
Service processing method and apparatus, electronic device, and storage medium
Disclosed are a service processing method and apparatus, an electronic device, and a computer-readable storage medium. The method includes: when receiving a User Interface (UI) request, creating a process instance corresponding to the UI request, and storing instance information of the process instance in a storage module (S101); determining a target process instance from the storage module, and determining a step to be executed for the target process instance based on target instance information of the target process instance (S102); and searching a register for, and executing, a target method corresponding to the step to be executed, where the register includes all methods compiled according to a preset development specification (S103).
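The S101–S103 flow can be sketched in Python. This is an illustrative sketch only: the storage module is a list, the register is a dictionary, and the "preset development specification" is approximated by registering methods through a single decorator (all names are hypothetical):

```python
class Service:
    """Hypothetical service tying UI requests to registered step methods."""

    def __init__(self):
        self.register = {}  # step name -> method (all follow one specification)
        self.storage = []   # storage module holding process instance info

    def method(self, step):
        """Register a method for a step, per the development specification."""
        def decorator(fn):
            self.register[step] = fn
            return fn
        return decorator

    def handle_ui_request(self, request):
        # S101: create a process instance for the UI request and store it
        instance = {"id": len(self.storage), "next_step": request["step"]}
        self.storage.append(instance)
        # S102: determine the target instance and its step to be executed
        step = instance["next_step"]
        # S103: search the register for, and execute, the target method
        return self.register[step](instance)
```

Keeping every step method in one register, compiled to one specification, is what lets S103 be a plain lookup rather than per-step dispatch logic.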