Patent classifications
G06F9/544
Data read-write method and apparatus and circular queue
The present invention provides a data read-write method and apparatus and a circular queue. The method includes: obtaining an offset position of a write pointer from a queue head of a circular queue; determining an offset position of a read pointer according to the offset position of the write pointer; and reading data from the circular queue according to the offset position of the read pointer. Single-input multiple-output access to shared memory is implemented, and therefore a plurality of read threads may read data from the circular queue in parallel, thereby effectively improving read-write efficiency of data and reducing memory consumption.
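The single-writer, multi-reader pattern in this abstract can be sketched as a minimal Python analogue (class and method names are illustrative, not from the patent): the write-pointer offset lives at the "queue head", and each reader determines its own read offset from it, so readers proceed in parallel without coordinating with each other.

```python
import threading


class CircularQueue:
    """Sketch of a single-writer, multi-reader circular queue: readers
    derive their read offsets from the shared write-pointer offset."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = [None] * capacity
        self.write_offset = 0          # write-pointer offset at the queue head
        self.lock = threading.Lock()   # guards only the write pointer

    def write(self, item):
        with self.lock:
            self.buffer[self.write_offset % self.capacity] = item
            self.write_offset += 1

    def read_from(self, read_offset):
        # Each reader keeps its own offset, determined relative to the
        # write offset, so many read threads can run concurrently.
        with self.lock:
            write_offset = self.write_offset
        items = []
        while read_offset < write_offset:
            items.append(self.buffer[read_offset % self.capacity])
            read_offset += 1
        return items, read_offset
```

Two readers holding different `read_offset` values would each see the data written since their respective offsets, without blocking one another.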
Communication method and apparatus
A communication method includes monitoring, by a shared agent, shared memory, wherein the shared memory is used by a first application, wherein the first application runs on a virtual device, wherein the virtual device is located on a host, wherein the shared memory belongs to a part of memory of the host and does not belong to memory specified by the host for the virtual device, and wherein the shared agent is disposed on the host independent of the virtual device; determining, by the shared agent, whether data of the first application is written to the shared memory; and reading, by the shared agent, the data from the shared memory and sending the data to a second application in response to determining that the data of the first application is written to the shared memory, wherein the second application is a data sharing party specified by the first application.
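The shared-agent flow can be illustrated with Python's `multiprocessing.shared_memory` (a minimal sketch; the flag-byte layout and function names are assumptions, not from the patent): the first application writes a payload into the shared region, and the agent, running independently, detects the write and delivers the data to the second application.

```python
from multiprocessing import shared_memory


def make_shared_region(size):
    # Host-side region, outside the memory assigned to the virtual device.
    return shared_memory.SharedMemory(create=True, size=size)


def app_write(shm, payload: bytes):
    # First application: write the payload, then set the length byte,
    # which doubles as the "data written" flag (illustrative layout).
    shm.buf[1:1 + len(payload)] = payload
    shm.buf[0] = len(payload)


def agent_poll(shm, deliver):
    # Shared agent: monitor the region; when data is present, read it
    # from shared memory and send it to the second application.
    n = shm.buf[0]
    if n:
        data = bytes(shm.buf[1:1 + n])
        shm.buf[0] = 0  # clear the flag
        deliver(data)
        return True
    return False
```

In the patented arrangement the agent would run in a loop on the host; `deliver` stands in for whatever channel reaches the second application.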
Panel self-refresh (PSR) transmission of bulk data
The present disclosure is directed to systems and methods of transferring bulk data, such as OLED compensation mask data, generated by a source device to a sink device using a high-bandwidth embedded DisplayPort (eDP) connection contemporaneous with an ENABLED Panel Self-Refresh (PSR) mode. Upon ENABLING the PSR mode, the source control circuitry causes the source transmitter circuitry, the sink receiver circuitry, and the eDP high-bandwidth communication link to remain active rather than inactive. The source control circuitry generates one or more data transport units (DTUs) having a header portion that contains data indicative of the presence of a bulk data payload and the non-display status of the bulk data payload carried by the DTUs.
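The header described for the data transport units can be pictured with a toy framing sketch in Python (the field widths, flag bits, and names below are purely illustrative assumptions, not the eDP specification): a flags field marks both the presence of a bulk payload and its non-display status.

```python
import struct

# Hypothetical DTU header layout: version byte, flags byte, 16-bit length.
DTU_HDR = struct.Struct(">BBH")
FLAG_BULK_PRESENT = 0x01   # bulk data payload carried by this DTU
FLAG_NON_DISPLAY = 0x02    # payload is not display data


def pack_dtu(payload: bytes, bulk=True, non_display=True) -> bytes:
    flags = (FLAG_BULK_PRESENT if bulk else 0) | (FLAG_NON_DISPLAY if non_display else 0)
    return DTU_HDR.pack(1, flags, len(payload)) + payload


def unpack_dtu(frame: bytes):
    _version, flags, n = DTU_HDR.unpack_from(frame)
    return flags, frame[DTU_HDR.size:DTU_HDR.size + n]
```

The sink would inspect the flags to route bulk data (such as compensation mask data) away from the display pipeline while PSR keeps the panel refreshing from its own buffer.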
Coherent capturing of shared-buffer status
A network element includes multiple ports configured to communicate over a network, a buffer memory, a snapshot memory, and circuitry. The circuitry is configured to forward packets between the ports, to temporarily store information associated with the packets in the buffer memory, to continuously write at least part of the information to the snapshot memory concurrently with storage of the information in the buffer memory, and, in response to at least one predefined diagnostic event, to stop writing of the information to the snapshot memory, so as to create in the snapshot memory a coherent snapshot corresponding to a time of the diagnostic event.
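The mirroring-and-freeze behavior can be sketched as a simplified software analogue in Python (the hardware circuitry is abstracted into a class; all names are illustrative): writes to the buffer memory are mirrored into the snapshot memory until a diagnostic event stops the mirroring, leaving a coherent image.

```python
class SnapshotBuffer:
    """Software sketch of a buffer whose writes are continuously mirrored
    into a snapshot area until a diagnostic event freezes the snapshot."""

    def __init__(self, size):
        self.buffer = bytearray(size)
        self.snapshot = bytearray(size)
        self.frozen = False

    def store(self, offset, data: bytes):
        # Store packet-associated information in the buffer memory...
        self.buffer[offset:offset + len(data)] = data
        # ...and concurrently write it to the snapshot memory, unless a
        # diagnostic event has already stopped the mirroring.
        if not self.frozen:
            self.snapshot[offset:offset + len(data)] = data

    def diagnostic_event(self):
        # Stop writing to the snapshot: it now holds a coherent image of
        # the buffer state at the time of the event.
        self.frozen = True
```

After the event, diagnostics can read the snapshot at leisure while the live buffer keeps changing.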
MULTIPLATFORM MICROSERVICE CONNECTION TECHNIQUES
Inter-microservice communications are managed through in-memory connection routing. A sending microservice writes a message over a port associated with the connection. The message is routed directly to one or more receiving microservices associated with the connection over their ports associated with the connection. The message may be converted to a different format or multiple different formats through plugins processed when the message is received over the sending microservice's port and before the converted messages are routed over the receiving microservices' ports. The inter-microservice communications are hardware and platform independent or agnostic, such that the microservices associated with the connection can be processed on different hardware and different platforms from one another.
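The routing-with-plugins idea can be sketched in a few lines of Python (a minimal in-process model; the `Connection` class and its methods are illustrative assumptions): each receiver attaches a delivery port and, optionally, a format-converting plugin that runs before the message reaches that receiver.

```python
class Connection:
    """Sketch of in-memory connection routing: a sender writes a message
    on the connection; plugins convert it per receiver, then the
    converted messages are delivered over the receivers' ports."""

    def __init__(self):
        self.receivers = []  # list of (deliver_fn, optional plugin)

    def attach(self, deliver, plugin=None):
        self.receivers.append((deliver, plugin))

    def send(self, message):
        for deliver, plugin in self.receivers:
            deliver(plugin(message) if plugin else message)
```

Because routing happens entirely in memory through callables, nothing here depends on the hardware or platform hosting each microservice, which mirrors the platform-agnostic claim.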
Digital signal processing plug-in implementation
In some examples, digital signal processing plug-in implementation may include obtaining attributes of a user interface for a digital signal processing plug-in, and obtaining attributes of digital signal processing logic for the digital signal processing plug-in. The digital signal processing plug-in implementation may include generating, based on the attributes of the user interface and the attributes of the digital signal processing logic, a plug-in process to control operation of the user interface and the digital signal processing logic. Further, the digital signal processing plug-in implementation may include establishing, based on the generated plug-in process, a two-way communication link between a host and the plug-in process to implement the digital signal processing plug-in.
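The host/plug-in-process split with a two-way communication link can be sketched with `multiprocessing.Pipe` in Python (a toy model; the message protocol and the gain-stage DSP logic are illustrative assumptions, not from the disclosure):

```python
from multiprocessing import Pipe, Process


def plugin_process(conn):
    # Plug-in process: controls the UI and DSP logic; here the "DSP" is
    # just a hypothetical 2x gain applied to samples from the host.
    while True:
        msg = conn.recv()
        if msg is None:          # shutdown sentinel
            break
        kind, data = msg
        if kind == "process":
            conn.send([2.0 * x for x in data])


def run_host():
    host_end, plugin_end = Pipe()  # two-way link between host and plug-in
    p = Process(target=plugin_process, args=(plugin_end,))
    p.start()
    host_end.send(("process", [0.5, 1.0]))
    out = host_end.recv()
    host_end.send(None)
    p.join()
    return out
```

The same link could carry UI-attribute updates in one direction and parameter changes in the other, which is the point of making it two-way.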
COMPUTER-BASED SYSTEMS CONFIGURED FOR AUTOMATED COMPUTER SCRIPT ANALYSIS AND MALWARE DETECTION AND METHODS THEREOF
Systems and methods enable automated and scalable obfuscation detection in programming scripts, including processing devices that receive software programming scripts and a symbol set. The processing devices determine a frequency of each symbol and an average frequency of the symbols in the script text. The processing devices determine a normal score of each symbol based on the frequency of each symbol and the average frequency to create a symbol feature for each symbol including the normal score. The processing devices utilize an obfuscation machine learning model including a classifier for binary obfuscation classification to detect obfuscation in the script based on the symbol features. The processing devices cause to display an alert indicating an obfuscated software programming script on a screen of a computing device associated with an administrative user to recommend security analysis of the software programming script based on the binary obfuscation classification.
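The feature-extraction step can be sketched in Python (the exact scoring formula is an assumption; the abstract only says the normal score is based on each symbol's frequency and the average frequency): compute per-symbol frequencies over the script text, then normalize each by the average to get one feature per symbol.

```python
from collections import Counter


def symbol_features(script: str, symbols: str):
    """Sketch: frequency of each symbol in the script, normalized by the
    average frequency across the symbol set (illustrative formula)."""
    counts = Counter(c for c in script if c in symbols)
    freqs = {s: counts.get(s, 0) for s in symbols}
    avg = sum(freqs.values()) / len(symbols)
    return {s: (f / avg if avg else 0.0) for s, f in freqs.items()}
```

The resulting feature vector (one normal score per symbol) would then be fed to the binary obfuscation classifier; heavily obfuscated scripts tend to over-represent symbols such as `$`, `+`, or backticks relative to ordinary code.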
METHOD AND COMPUTER FOR THE MANAGEMENT OF DATA EXCHANGES BETWEEN A PLURALITY OF TASKS
Disclosed is a method of managing data exchanges between a plurality of tasks by a computer of a vehicle, in particular a motor vehicle, the method including a phase of grouping functions into sets, each set including data-producing functions and consuming functions. For each set of functions, a first phase includes the steps of executing the producing functions in order to produce what are referred to as “produced” data, and of storing a copy of each produced datum, and a second phase includes the steps of restoring the data to be consumed by the consuming functions, on the basis of the stored copies, and of executing the consuming functions on the basis of the restored data to be consumed.
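The two phases for one set of functions can be sketched in Python (a minimal model; the producer/consumer signatures are illustrative assumptions): producers run and a copy of each produced datum is stored, then consumers run against data restored from those copies rather than against live state.

```python
import copy


def run_set(producers, consumers):
    """Sketch of one set: Phase 1 executes producing functions and stores
    a copy of each produced datum; Phase 2 restores the data to be
    consumed from the copies and executes the consuming functions."""
    # Phase 1: produce and store copies.
    stored = {name: copy.deepcopy(fn()) for name, fn in producers.items()}
    # Phase 2: restore from the stored copies, then consume.
    restored = copy.deepcopy(stored)
    return [fn(restored) for fn in consumers]
```

Consuming from restored copies rather than shared live data is what keeps the tasks' exchanges deterministic even if producers are re-run or preempted between the phases.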
METHOD FOR ALLOCATING DATA PROCESSING TASKS, ELECTRONIC DEVICE, AND STORAGE MEDIUM
A method for allocating data processing tasks, an electronic device, and a readable storage medium are provided, which relate to the fields of computer vision and artificial intelligence. The method includes: determining a plurality of data processing tasks of a target application for a graphics processor; and allocating, by using a load balancing strategy, the plurality of data processing tasks to a plurality of worker processes created for the target application, wherein the plurality of worker processes are pre-configured with a corresponding graphics processor resource.
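The allocation step can be sketched with a least-loaded strategy in Python (the abstract only says "a load balancing strategy"; least-loaded is one common choice, used here as an illustrative assumption):

```python
def allocate(tasks, workers):
    """Sketch: assign each data processing task to the currently
    least-loaded worker process; each worker is assumed to be
    pre-configured with its own graphics processor resource."""
    loads = {w: 0 for w in workers}
    assignment = {w: [] for w in workers}
    for task in tasks:
        w = min(workers, key=lambda x: loads[x])
        assignment[w].append(task)
        loads[w] += 1
    return assignment
```

With per-task costs available, `loads` could be weighted by estimated GPU time instead of task count; the round-robin effect below falls out when all tasks cost the same.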
Multi-tile memory management mechanism
Graphics processors for implementing multi-tile memory management are disclosed. In one embodiment, a graphics processor includes a first graphics device having a local memory, a second graphics device having a local memory, and a graphics driver to provide a single virtual allocation with a common virtual address range to mirror a resource to each local memory of the first and second graphics devices.
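The mirroring semantics of the single virtual allocation can be pictured with a small Python model (a software analogue only; the class, the address arithmetic, and the two-tile setup are illustrative assumptions, not the driver's implementation): one common virtual address range backs a copy of the resource in each device's local memory.

```python
class MirroredAllocation:
    """Sketch: a single virtual allocation with a common virtual address
    range whose stores are mirrored into each graphics device's local
    memory, so either device reads the resource locally."""

    def __init__(self, base, size, devices):
        self.base, self.size = base, size                 # common virtual range
        self.local = {d: bytearray(size) for d in devices}

    def write(self, vaddr, data: bytes):
        off = vaddr - self.base
        # A store through the virtual range reaches every tile's copy.
        for mem in self.local.values():
            mem[off:off + len(data)] = data

    def read(self, device, vaddr, n):
        # Each device reads from its own local memory at the same
        # virtual address.
        off = vaddr - self.base
        return bytes(self.local[device][off:off + n])
```

The payoff mirrored here is that neither tile has to cross to the other's memory for reads, at the cost of duplicating the resource.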