G06F9/44563

Preloading enhanced application startup
10445126 · 2019-10-15

Preloading enhanced application startup is disclosed. For example, a first local socket associated with a first copy of an executable program loaded in a memory receives a first instruction to launch a second copy of the executable program. The executable program executes in one of two modes, a server mode and an active mode, and the first copy of the executable program executes in the server mode. The first copy of the executable program is cloned to launch the second copy of the executable program, which is launched in the active mode. A third copy of the executable program associated with a second local socket is launched in the server mode. The third copy of the executable program is determined to be actively running, after which the first copy of the executable program is terminated.
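The clone step described above can be modeled with `os.fork`: a server-mode process does the expensive startup work once, then each clone inherits that state and runs in active mode. This is a minimal sketch of the cloning idea only (the third-copy handoff and socket protocol are omitted), and all names in it are illustrative.

```python
import os

PRELOADED = None

def preload():
    """Expensive startup work, done once by the server-mode copy."""
    global PRELOADED
    PRELOADED = {"config": "parsed", "libs": ["libfoo", "libbar"]}

def launch_active_copy():
    """Clone the server-mode copy; the child runs in active mode."""
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:                      # child: active mode
        os.close(r)
        # preloaded state is inherited through the fork, no re-init needed
        os.write(w, PRELOADED["config"].encode())
        os._exit(0)
    os.close(w)                       # parent: remains in server mode
    data = os.read(r, 64)
    os.close(r)
    os.waitpid(pid, 0)
    return data.decode()

preload()
print(launch_active_copy())          # the active copy sees the preloaded state
```

The key property is that the active copy never repeats `preload()`; it starts from the server copy's already-initialized memory image.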

Returning a runtime type loaded from an archive in a module system

Returning a runtime type loaded from an archive in a module system is disclosed. Operations include (a) identifying, by a class loader implemented in a runtime environment, an archived runtime type loaded into an archive from a module source; (b) identifying a particular package associated with the archived runtime type; (c) determining that the particular package is defined to a runtime module that is defined to (i) the class loader or (ii) any class loader in the class loader hierarchy to which the class loader delegates; and (d) returning directly or indirectly, by the class loader, a runtime type loaded based on the archived runtime type from the archive.
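Steps (a)-(d) can be sketched as a visibility check over a class-loader delegation chain. The sketch below is a simplified model, not the claimed implementation; the class names and the dict-based archive are invented for illustration.

```python
class ClassLoader:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.modules = {}            # runtime module name -> set of packages

    def define_module(self, module, packages):
        self.modules[module] = set(packages)

    def _sees_package(self, package):
        loader = self
        while loader is not None:    # walk the delegation hierarchy
            if any(package in pkgs for pkgs in loader.modules.values()):
                return True
            loader = loader.parent
        return False

    def load_from_archive(self, archive, type_name):
        entry = archive.get(type_name)           # (a) archived runtime type
        if entry is None:
            return None
        package = type_name.rsplit(".", 1)[0]    # (b) its package
        # (c) only use the archived type if its package is defined to a
        # runtime module visible from this loader's delegation chain
        if self._sees_package(package):
            return entry                         # (d) return the archived type
        return None

boot = ClassLoader("boot")
boot.define_module("java.base", {"java.lang"})
app = ClassLoader("app", parent=boot)
archive = {"java.lang.String": "<archived String>"}
print(app.load_from_archive(archive, "java.lang.String"))
```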

Techniques for capturing state information and performing actions for threads in a multi-threaded computing environment
10387161 · 2019-08-20

Techniques are disclosed for implementing an extensible, light-weight, flexible (ELF) processing platform that can efficiently capture state information from multiple threads during execution of instructions (e.g., an instance of a game). The ELF processing platform supports execution of multiple threads in a single process for parallel execution of multiple instances of the same or different program code or games. Upon capturing the state information, one or more threads may be executed in the ELF platform to compute one or more actions to perform at any state of execution by each of those threads. The threads can easily access the state information from a shared memory space and use the state information to implement rule-based and/or learning-based techniques for determining subsequent actions for execution for the threads.
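The shared-state pattern in the abstract can be illustrated with standard threads: workers in one process capture state into a shared space, and a rule-based policy reads that state to pick each thread's next action. This is an illustrative sketch, not the patented platform; the scoring rule and names are invented.

```python
import threading

shared_state = {}
lock = threading.Lock()

def run_instance(tid, score):
    """Worker thread captures its state into the shared memory space."""
    with lock:
        shared_state[tid] = {"score": score}

def next_action(tid):
    """Trivial rule-based policy over the captured state."""
    return "advance" if shared_state[tid]["score"] > 0 else "retry"

threads = [threading.Thread(target=run_instance, args=(i, i - 1)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print([next_action(i) for i in range(3)])
```

A learning-based policy would slot in the same way: it would read `shared_state` instead of the hard-coded rule.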

Application data sharing and decision service platform

Systems, methods, and software are disclosed herein for facilitating dynamic sharing of application data among multiple isolated applications executing on one or more application platforms. In an implementation, a decision service monitors event configuration information corresponding to an event, monitors application data feeds provided by one or more producer applications associated with the event, detects an event reconfiguration trigger based on the one or more application data feeds, and responsive to the event reconfiguration trigger, automatically modifies the event configuration information. The decision service then directs at least one application platform to invoke at least one data consumer application for execution of at least one action based, at least in part, on the modified event configuration information.
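The monitor/trigger/reconfigure/invoke loop can be sketched in a few lines. All function names, the threshold, and the `"degraded"` mode are hypothetical stand-ins for whatever trigger and reconfiguration logic an implementation would use.

```python
def detect_trigger(feeds, threshold=100):
    """Detect a reconfiguration trigger from producer data feeds."""
    return any(sample > threshold for feed in feeds for sample in feed)

def reconfigure(config):
    """Automatically modify the event configuration information."""
    return dict(config, mode="degraded")

def decision_service(config, feeds, consumers):
    if detect_trigger(feeds):
        config = reconfigure(config)
        for consume in consumers:     # direct platforms to invoke consumers
            consume(config)
    return config

actions = []
consumers = [lambda cfg: actions.append(("scale", cfg["mode"]))]
cfg = decision_service({"mode": "normal"}, [[10, 250], [5]], consumers)
print(cfg["mode"], actions)
```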

Build-time memory management for multi-core embedded system

Methods are disclosed for generating executable files for two or more independent programs to be run on separate processor cores of an embedded system, wherein the programs share data/code via shared memory by symbolically referring to data/code generated by another program. The methods implement a two-stage link process. In the first link stage, addresses in shared memory are allocated to the shared code and data of the independent programs, and the allocated memory addresses are stored in a library. In the second link stage, executable code and initialized data are generated for the non-shared code and initialized data of each independent program, which are linked to the shared data/code by the addresses in the library.
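The two link stages can be modeled directly: stage 1 lays out shared symbols at fixed shared-memory addresses and records them in a library; stage 2 resolves each core's references through that library, so both programs agree on every shared address. The base address, alignment, and symbol names below are invented for illustration.

```python
SHARED_BASE = 0x2000_0000

def link_stage1(shared_symbols, base=SHARED_BASE, align=8):
    """Stage 1: allocate shared-memory addresses, store them in a library."""
    library, addr = {}, base
    for name, size in shared_symbols:
        library[name] = addr
        addr += (size + align - 1) // align * align   # aligned allocation
    return library

def link_stage2(program_refs, library):
    """Stage 2: resolve one program's shared references via the library."""
    return {ref: library[ref] for ref in program_refs}

lib = link_stage1([("shared_buf", 12), ("shared_counter", 4)])
core0 = link_stage2(["shared_buf"], lib)                    # program on core 0
core1 = link_stage2(["shared_buf", "shared_counter"], lib)  # program on core 1
print(hex(core0["shared_buf"]), core0["shared_buf"] == core1["shared_buf"])
```

Because both programs link against the same library, a symbol such as `shared_buf` resolves to one address on every core, which is what lets the independently linked executables share data at build time.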

Using containers to clean runtime resources when unloading a shared library

Embodiments of the invention include a computer-implemented method that includes accessing, using a processor, a loader library; using the processor to generate a mock library comprising a mock version of the loader library; using the processor to containerize the loader library; and using the processor to unload the loader library.
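The mock/containerize/unload sequence can be sketched with plain objects standing in for loaded libraries. This is only a toy model under invented names: the mock exposes the same symbols as the real loader library so callers keep resolving, while the real library is isolated and then released.

```python
class LoaderLibrary:
    """Stand-in for a loaded library: a table of resolvable symbols."""
    def __init__(self, symbols):
        self.symbols = dict(symbols)

    def resolve(self, name):
        return self.symbols[name]

def make_mock(library):
    """Generate a mock library with the same symbol names but inert stubs."""
    return LoaderLibrary({name: (lambda *a, **k: None) for name in library.symbols})

registry = {"libloader": LoaderLibrary({"open_plugin": lambda p: f"opened {p}"})}
mock = make_mock(registry["libloader"])
container = {"libloader": registry.pop("libloader")}   # containerize the real one
registry["libloader"] = mock                           # callers now hit the mock
del container["libloader"]                             # unload the real library
print(registry["libloader"].resolve("open_plugin")("p") is None)
```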

HYPERVISOR-BASED JUST-IN-TIME COMPILATION

Systems and methods improve performance and resource-efficiency of Just-in-Time (JIT) compilation in a hypervisor-based virtualized computing environment. A user attempts to launch an application that has been previously compiled by a JIT compiler into an intermediate, platform-independent format. A JIT accelerator selects a unique function signature that identifies the application and the user's target platform. If the signature cannot be found in a repository, indicating that the application has never been run on the target platform, the accelerator generates and stores the requested executable program in shared memory and saves the signature in the repository. The system then returns to the user a pointer to the stored platform-specific executable. If multiple users of the same platform request the same application, the system recognizes an affinity among those requests identified by their shared signature, and provides each user a pointer to the same previously stored, shared executable.
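The signature-keyed cache at the heart of this flow can be sketched as follows. The hash construction, the "pointer" representation, and the stand-in `compile_for` are all assumptions for illustration; only the cache-miss/cache-hit shape comes from the abstract.

```python
import hashlib

repository = {}       # signature -> "pointer" into shared memory
shared_memory = {}    # "pointer" -> stored platform-specific executable

def signature(intermediate_code, platform):
    """Unique signature identifying the application and target platform."""
    return hashlib.sha256(f"{platform}:{intermediate_code}".encode()).hexdigest()

def compile_for(intermediate_code, platform):
    return f"native[{platform}]({intermediate_code})"   # stand-in for JIT output

def jit_accelerator(intermediate_code, platform):
    sig = signature(intermediate_code, platform)
    if sig not in repository:                 # first run on this platform
        shared_memory[sig] = compile_for(intermediate_code, platform)
        repository[sig] = sig                 # the "pointer" is the key here
    return shared_memory[repository[sig]]     # hit: shared stored executable

a = jit_accelerator("add x, y", "x86_64")
b = jit_accelerator("add x, y", "x86_64")     # affinity: same signature
print(a is b, len(shared_memory))
```

The second request never recompiles: both callers receive the same stored executable, which is the affinity behavior the abstract describes.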

Re-Playable Execution Optimized for Page Sharing in a Managed Runtime Environment
20190087211 · 2019-03-21

Embodiments of this disclosure allow non-position-independent-code to be shared between a closed application and a subsequent application without converting the non-position-independent-code into position-independent-code. In particular, embodiment techniques store live data of a closed application during runtime of the closed application, and thereafter page a portion of the live data that is common to both the closed application and a subsequent application back into volatile memory at the same virtual memory address in which the portion of live data was stored during runtime of the closed application, so that the paged live data may be re-used to execute the subsequent application in the managed runtime environment. Because the paged live data is stored at the same virtual memory address during the runtimes of both applications, non-position-independent-code can be shared between the applications.
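Why the same virtual address matters can be shown with a simplified model in which a dict stands in for virtual memory: the common live data of the closed application is saved keyed by virtual address and paged back at that exact address, so code that embeds absolute addresses still resolves. Addresses and page contents below are invented.

```python
def snapshot_live_data(virtual_memory, common_addrs):
    """Persist live pages of the closing app, keyed by virtual address."""
    return {addr: virtual_memory[addr] for addr in common_addrs}

def restore_into(virtual_memory, snapshot):
    """Page the common data back in at the SAME virtual addresses."""
    virtual_memory.update(snapshot)
    return virtual_memory

# closed application's memory: non-PIC stub hard-codes the address 0x7008
closed_app = {0x7000: "jit_stub -> call 0x7008", 0x7008: "helper", 0x9000: "private"}
saved = snapshot_live_data(closed_app, [0x7000, 0x7008])

# subsequent application reuses the pages without relocation
next_app = restore_into({0xA000: "next-app data"}, saved)
print(next_app[0x7000])      # the embedded address 0x7008 still resolves
```

Had the pages been restored at a different address, the hard-coded `call 0x7008` would point at the wrong location, which is exactly the relocation problem the same-address restore avoids.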

TECHNIQUES FOR CAPTURING STATE INFORMATION AND PERFORMING ACTIONS FOR THREADS IN A MULTI-THREADED COMPUTING ENVIRONMENT
20190073224 · 2019-03-07

Techniques are disclosed for implementing an extensible, light-weight, flexible (ELF) processing platform that can efficiently capture state information from multiple threads during execution of instructions (e.g., an instance of a game). The ELF processing platform supports execution of multiple threads in a single process for parallel execution of multiple instances of the same or different program code or games. Upon capturing the state information, one or more threads may be executed in the ELF platform to compute one or more actions to perform at any state of execution by each of those threads. The threads can easily access the state information from a shared memory space and use the state information to implement rule-based and/or learning-based techniques for determining subsequent actions for execution for the threads.

Method and system for sharing driver pages

A method for sharing driver pages includes, on a computer system having a first driver and performing system services, instantiating Containers, where code and data of the first driver are loaded sequentially into pages arranged in a virtual memory, and instantiating a next driver upon a request from one of the Containers for system services by: loading physical pages of the next driver into physical memory and allocating virtual memory pages for the next driver; associating the next driver with the first driver and comparing pages of the next driver and the first driver to generate a set of identical physical pages; mapping virtual pages of the next driver to the identical physical pages of the first driver while atomically protecting the identical physical pages, with non-identical pages of the next driver remaining mapped to their own virtual pages; and releasing the physical memory occupied by the identical physical pages of the next driver.
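The compare-and-remap step can be modeled in miniature, with page contents standing in for byte-level comparison and dict keys standing in for physical frames; everything here is illustrative. Identical pages of the next driver are remapped to the first driver's frames and their duplicate frames released, while non-identical pages keep their own frames.

```python
def share_pages(first_phys, next_phys):
    """Return (frame mapping for the next driver, duplicate frames released)."""
    by_content = {content: frame for frame, content in first_phys.items()}
    mapping, released = {}, []
    for frame, content in next_phys.items():
        if content in by_content:            # identical page found
            mapping[frame] = by_content[content]   # remap to first driver's frame
            released.append(frame)           # free the duplicate physical frame
        else:
            mapping[frame] = frame           # non-identical: keep private frame
    return mapping, released

first = {0x1000: "driver .text", 0x2000: "driver .rodata"}
nxt = {0x5000: "driver .text", 0x6000: "per-device state"}
mapping, released = share_pages(first, nxt)
print(mapping, released)
```

A real implementation would compare page contents (or hashes) and mark the shared frames read-only, the "atomic protection" the abstract mentions, so a later write triggers copy-on-write instead of corrupting the first driver.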