Patent classifications
G06F9/44563
Drive control method and apparatus, and display device
Disclosed are a drive control method and apparatus and a display device. The drive control method may be applied to a controller, and include: adding at least one configuration instruction into a target region of one row of data to obtain a target row of data, wherein the configuration instruction is intended for self-configuration of a drive parameter by a first driver chip, and the target region includes at least one of a blank region and a region where display data is located; and sending the target row of data to the first driver chip.
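The abstract's core operation — packing a configuration instruction into the blank region of one row of data so the driver chip can self-configure — can be sketched as follows. This is an illustrative assumption of the layout; the blank-region size, instruction format, and padding scheme are invented for the example, not taken from the patent.

```python
# Hypothetical sketch: embed a driver-chip configuration instruction
# into the blank region appended to one row of display data.
# BLANK_REGION_SIZE and the zero-padding convention are assumptions.

BLANK_REGION_SIZE = 8  # bytes of blanking after the display data (assumed)

def build_target_row(display_data: bytes, config_instruction: bytes) -> bytes:
    """Controller side: return a target row of data consisting of the
    display data followed by a blank region carrying the instruction."""
    if len(config_instruction) > BLANK_REGION_SIZE:
        raise ValueError("instruction does not fit in the blank region")
    blank = config_instruction.ljust(BLANK_REGION_SIZE, b"\x00")
    return display_data + blank

def extract_config(target_row: bytes) -> bytes:
    """First-driver-chip side: recover the instruction from the blank region."""
    return target_row[-BLANK_REGION_SIZE:].rstrip(b"\x00")
```

In this sketch the instruction rides along with ordinary row traffic, so no separate configuration channel is needed — which appears to be the point of the claim.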
INTEGRATING OVERLAID DIGITAL CONTENT INTO DISPLAYED DATA VIA PROCESSING CIRCUITRY USING A COMPUTING MEMORY AND AN OPERATING SYSTEM MEMORY
An apparatus, method, and computer readable medium that include accessing a memory of an apparatus, the memory including a computing memory space and an operating system (OS) memory space, the computing memory space allocated to a software application; copying data corresponding to the OS memory space and data corresponding to the computing memory space into an array; determining a rank for the software application of the computing memory space in the array to determine whether the software application is a top ranked application; and in response to determining the software application is the top ranked application, identifying, in the memory, a reference patch; retrieving secondary digital content from a remote device based on the reference patch; and after retrieving the secondary digital content from the remote device, overlaying the secondary digital content into the displayed data.
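The ranking step above can be sketched minimally: entries from both memory spaces are copied into one array and ranked, and the overlay proceeds only for the top-ranked application. The entry shape and the ranking criterion (most recent use) are assumptions; the abstract does not fix either.

```python
# Hedged sketch of the ranking step. Each entry is a
# (application_name, last_used_timestamp) pair — an assumed shape.

def is_top_ranked(app: str, os_entries: list, computing_entries: list) -> bool:
    """Copy OS-space and computing-space entries into one array, rank
    them, and report whether `app` holds the top rank."""
    combined = list(os_entries) + list(computing_entries)
    combined.sort(key=lambda entry: entry[1], reverse=True)
    return bool(combined) and combined[0][0] == app
```

Only when this returns true would the reference patch be located and the secondary content retrieved and overlaid.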
USING CONTAINERS TO CLEAN RUNTIME RESOURCES WHEN UNLOADING A SHARED LIBRARY
Embodiments of the invention include a computer-implemented method that includes accessing, using a processor, a loader library; using the processor to generate a mock library comprising a mock version of the loader library; using the processor to containerize the loader library; and using the processor to unload the loader library.
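The mock-and-unload flow can be sketched abstractly: before the loader library is unloaded, a mock version exposing the same symbols is swapped in, so any late caller fails cleanly instead of dereferencing freed code. The registry dict below stands in for the process's loaded-library table — an assumption; the patent operates on real shared libraries and containers.

```python
# Minimal sketch of generating a mock library and unloading the real one.
# `loaded_libraries` is a stand-in for the runtime's library table.

loaded_libraries = {}

def load_library(name, impl):
    """impl: mapping of exported symbol name -> callable."""
    loaded_libraries[name] = impl

def make_mock(impl):
    """Generate a mock exposing the same symbols, each raising cleanly."""
    def stub(*_args, **_kwargs):
        raise RuntimeError("library unloaded")
    return {symbol: stub for symbol in impl}

def unload_library(name):
    """Swap in the mock, then release the real implementation."""
    real = loaded_libraries[name]
    loaded_libraries[name] = make_mock(real)
    return real  # caller may now free/containerize the real library
```

The design point is that unloading never leaves dangling symbol references: every symbol resolves to a stub after the swap.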
Securing an application framework from shared library sideload vulnerabilities
There is disclosed in one example a computing apparatus, including: a processor and a memory; an operating system; an application framework including instructions to search a target directory for one or more shared libraries and to attempt to load the one or more shared libraries if found; and an application including: a library file including a primary feature module to provide a primary feature of the application, the primary feature module structured to operate within the application framework, wherein the library file is not independently executable by the operating system; and an unmanaged executable binary to host the library file, wherein the unmanaged executable binary is not managed by the application framework, and includes hooks to intercept the application framework's attempt to load the one or more shared libraries, and to provide security services to the one or more shared libraries before permitting the application framework to attempt to load the one or more shared libraries.
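The interception pattern in this claim — hooking the framework's attempt to load shared libraries so security checks run first — has a close, testable analogue in Python's import machinery: a finder placed on `sys.meta_path` sees every load attempt before the normal loaders do. This is a sketch of the pattern, not the patent's unmanaged-binary implementation.

```python
# Sketch of load-attempt interception using Python's meta-path hook as
# a stand-in for an application framework's shared-library search.

import sys

class VettingFinder:
    """Intercepts import attempts; blocks names outside an allowlist."""

    def __init__(self, allowed):
        self.allowed = set(allowed)
        self.intercepted = []  # record of every load attempt seen

    def find_spec(self, fullname, path=None, target=None):
        self.intercepted.append(fullname)
        if fullname not in self.allowed:
            raise ImportError(f"{fullname} failed security vetting")
        return None  # vetting passed: defer to the remaining finders
```

Installed via `sys.meta_path.insert(0, VettingFinder([...]))`, the hook runs its security service before permitting the normal load to proceed — the same before-load choke point the claim describes for shared libraries.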
Automated Generation of Deployment Workflows for Cloud Platforms Based on Logical Stacks
A method implemented in a data center management node including obtaining, from memory, a physical stack describing a configuration of platform components across multiple operating platforms on a data center infrastructure, generating, by a processor, a graph describing correlations between the operating platforms and the data center infrastructure based on a platform library, wherein the platform library describes configurations of the platform components for each of the operating platforms separately, generating, by the processor, one or more logical stacks based on the graph, wherein the one or more logical stacks indicate deployable configurations of the operating platforms without depicting the platform components, and presenting the one or more logical stacks to a user.
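The derivation of logical stacks can be sketched as a containment check: a platform is deployable on the infrastructure when every component the platform library says it needs appears in the physical stack, and the logical view exposes only the platform names. The platform and component names below are invented examples.

```python
# Hedged sketch: derive logical stacks (deployable platforms) from a
# physical stack and a platform library. Data shapes are assumptions.

def logical_stacks(physical_stack, platform_library):
    """Return platforms whose required components the infrastructure
    provides, without depicting the components (the 'logical' view)."""
    available = set(physical_stack)
    return sorted(
        platform
        for platform, components in platform_library.items()
        if set(components) <= available
    )
```

A real implementation would walk the correlation graph rather than a flat dict, but the deployability condition is the same subset test.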
DYNAMICALLY SIZED LOCALS WITH PRECISE GARBAGE COLLECTION REPORTING
An instance of universally shared generic code is generated. A runtime parameter enables the size of a stack frame on which local data can be stored to be determined. Dynamically sized locals can be stored on a stack, enabling precise garbage collection reporting. One frame of the stack is allocated for each code segment to simplify GC reporting. A reporting region in the frame memory region comprises a count of locals and the location at which each local is found in the stack.
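The reporting-region idea can be sketched as follows: slot offsets for the dynamically sized locals are computed from a runtime parameter (the per-local sizes), and the frame exposes exactly a count of locals plus their locations, which is what a precise collector needs to enumerate. Layout and slot numbering here are illustrative assumptions.

```python
# Sketch of a frame whose locals are sized at runtime, with a reporting
# region of (count, locations) for precise GC. Sizes are in bytes.

class Frame:
    def __init__(self, local_sizes):
        # Dynamically sized locals: offsets derive from a runtime
        # parameter rather than being fixed at compile time.
        self.offsets = []
        offset = 0
        for size in local_sizes:
            self.offsets.append(offset)
            offset += size
        self.frame_size = offset

    def reporting_region(self):
        """(count of locals, their stack locations) as the GC sees it."""
        return len(self.offsets), tuple(self.offsets)
```

Because each code segment gets its own frame, the collector can walk frames and read each reporting region without guessing which slots hold references.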
RETURNING A RUNTIME TYPE LOADED FROM AN ARCHIVE IN A MODULE SYSTEM
Returning a runtime type loaded from an archive in a module system is disclosed. Operations include (a) identifying, by a class loader implemented in a runtime environment, an archived runtime type loaded into an archive from a module source; (b) identifying a particular package associated with the archived runtime type; (c) determining that the particular package is defined to a runtime module that is defined to (i) the class loader or (ii) any class loader in the class loader hierarchy to which the class loader delegates; and (d) returning directly or indirectly, by the class loader, a runtime type loaded based on the archived runtime type from the archive.
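The gating condition in steps (b)-(c) — the archived type is returned only if its package is defined to a runtime module owned by this class loader or by a loader it delegates to — can be sketched as a walk up the delegation chain. The loader and module shapes below are assumptions, not the JVM's actual types.

```python
# Hedged sketch of the delegation check for an archived runtime type.
# parents: loader -> parent loader in the delegation hierarchy.
# module_packages: loader -> set of packages its runtime modules define.

def may_return_archived_type(package, loader, parents, module_packages):
    """Walk the delegation chain; succeed if any loader on it owns a
    runtime module defining `package`."""
    current = loader
    while current is not None:
        if package in module_packages.get(current, set()):
            return True
        current = parents.get(current)  # delegate upward
    return False
```

Only when this check passes would step (d) return a runtime type loaded from the archive; otherwise the loader must fall back to loading from the module source.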
Application data sharing and decision service platform
Systems, methods, and software are disclosed herein for facilitating dynamic sharing of application data among multiple isolated applications executing on one or more application platforms. In an implementation, a decision service monitors event configuration information corresponding to an event, monitors application data feeds provided by one or more producer applications associated with the event, detects an event reconfiguration trigger based on the one or more application data feeds, and responsive to the event reconfiguration trigger, automatically modifies the event configuration information. The decision service then directs at least one application platform to invoke at least one data consumer application for execution of at least one action based, at least in part, on the modified event configuration information.
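The decision-service loop can be sketched compactly: apply producer feed updates, fire a reconfiguration trigger on a condition, rewrite the event configuration, and return the consumer action to invoke. The trigger condition (a score threshold) and data shapes are invented for illustration; the abstract leaves them open.

```python
# Hedged sketch of one pass of the decision service. The threshold
# trigger and the config/feed shapes are assumptions.

def process_feed(event_config, feed_updates, threshold=100):
    """Apply feed updates; if any value crosses the threshold, modify
    the event configuration and return the consumer action to invoke."""
    event_config["feeds"].update(feed_updates)
    if any(v >= threshold for v in event_config["feeds"].values()):
        event_config["state"] = "reconfigured"
        return "notify_consumers"
    return None
```

In the claimed system this step runs inside a monitoring loop, and the returned action is dispatched to an application platform rather than to the caller.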
Re-playable execution optimized for page sharing in a managed runtime environment
Embodiments of this disclosure allow non-position-independent-code to be shared between a closed application and a subsequent application without converting the non-position-independent-code into position-independent-code. In particular, embodiment techniques store live data of a closed application during runtime of the closed application, and thereafter page a portion of the live data that is common to both the closed application and a subsequent application back into volatile memory at the same virtual memory address in which the portion of live data was stored during runtime of the closed application so that the paged live data may be re-used to execute the subsequent application in the managed runtime environment. Because the paged live data is stored at the same virtual memory address during the runtimes of both applications, non-position-independent-code can be shared between the applications.
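The mechanism can be sketched as snapshot-and-replay keyed by virtual address: pages of the closed application's live data are stored with their addresses, and only the pages the next application also needs are mapped back — at the same addresses, so any absolute (non-position-independent) references inside them remain valid. The dict stands in for real virtual-memory mappings, an assumption of this sketch.

```python
# Hedged sketch of page-sharing for non-PIC code in a managed runtime.
# {virtual_address: page_bytes} models the live-data pages.

def snapshot(live_pages):
    """On application close: persist live pages keyed by their
    virtual addresses."""
    return dict(live_pages)

def replay(stored, needed_addresses):
    """On subsequent-application start: page back only the stored
    pages it also needs, at their original virtual addresses."""
    return {addr: stored[addr] for addr in needed_addresses if addr in stored}
```

Keeping the address as the key is the crux: if a shared page moved, its embedded absolute references would dangle, which is exactly what PIC conversion normally exists to avoid.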
Dynamic-link library usage based on memory size
Aspects of the present disclosure are directed to methods, systems, and computer program products for using a dynamic-link library based on memory size. In the method, a request for calling a first function in a dynamic-link library (DLL) at runtime is first received. A size of a memory allocated to the DLL is then determined. Then, the call relationships of functions in the DLL are obtained. Finally, functions related to the first function in the DLL are loaded into the memory allocated to the DLL, based on the size of the memory allocated to the DLL and the call relationships of functions in the DLL.