G06F9/3871

Connected car big data acquisition device, system and method for storing data gathered in a single platform

The present disclosure provides a connected car big data acquisition device, and a connected car big data collection system and method using the device. A connected car big data acquisition device includes: an interface configured to communicate with a device associated with a data production layer and a device associated with a service using layer; a storage configured to store data; and a controller configured to generate preprocessed data based on raw data related to connected cars received from the device associated with the data production layer through a preprocessing process using an integrated platform, and to control the preprocessed data to be stored in the storage based on a memory availability ratio of the storage.
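The control flow described above can be sketched in a few lines. This is a minimal illustration under assumed names (the class, threshold, and size accounting are not from the patent): raw records are preprocessed, then stored only while the storage's availability ratio stays above a policy threshold.

```python
class AcquisitionDevice:
    """Sketch of a device that preprocesses raw connected-car data and
    stores it subject to a memory availability ratio (assumed policy)."""

    def __init__(self, capacity_bytes, min_free_ratio=0.1):
        self.capacity = capacity_bytes
        self.min_free_ratio = min_free_ratio  # assumed threshold, not from the patent
        self.storage = []
        self.used = 0

    def preprocess(self, raw_record):
        # Placeholder preprocessing: normalize keys, drop empty fields.
        return {k.lower(): v for k, v in raw_record.items() if v is not None}

    def free_ratio(self):
        return (self.capacity - self.used) / self.capacity

    def ingest(self, raw_record):
        record = self.preprocess(raw_record)
        size = len(str(record).encode())
        # Refuse storage if the availability ratio would drop below the threshold.
        if (self.capacity - (self.used + size)) / self.capacity < self.min_free_ratio:
            return False
        self.storage.append(record)
        self.used += size
        return True

dev = AcquisitionDevice(capacity_bytes=1024)
ok = dev.ingest({"VIN": "ABC123", "Speed": 42, "Note": None})
```

The threshold-based refusal stands in for whatever storage-control policy the controller actually applies; the abstract only specifies that storage decisions depend on the memory availability ratio.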

Method and Apparatus for Desynchronizing Execution in a Vector Processor
20220342844 · 2022-10-27

In one implementation, a vector processor unit has preload registers for at least some of vector length, vector constant, vector address, and vector stride. Each preload register has an input and an output. All the preload register inputs are coupled to receive new vector parameters. Each preload register's output is coupled to a first input of a respective multiplexor, and the second input of each respective multiplexor is coupled to the new vector parameters.

Execution control of a multi-threaded, self-scheduling reconfigurable computing fabric
11635959 · 2023-04-25

Representative apparatus, method, and system embodiments are disclosed for configurable computing. A representative system includes an interconnection network; a processor; and a plurality of configurable circuit clusters. Each configurable circuit cluster includes a plurality of configurable circuits arranged in an array; a synchronous network coupled to each configurable circuit of the array; and an asynchronous packet network coupled to each configurable circuit of the array. A representative configurable circuit includes a configurable computation circuit and a configuration memory having a first, instruction memory storing a plurality of data path configuration instructions to configure a data path of the configurable computation circuit; and a second, instruction and instruction index memory storing a plurality of spoke instructions and data path configuration instruction indices for selection of a master synchronous input, a current data path configuration instruction, and a next data path configuration instruction for a next configurable computation circuit.

Methods and systems for managing asynchronous function calls
11474861 · 2022-10-18

This disclosure generally relates to operating systems and methods of computing devices for managing system and function calls. An example method includes determining that a fiber is requesting to wait for one or more results of an asynchronous function call, pausing execution of the fiber until the one or more results are completed, enqueuing the paused fiber in a local queue of the one or more results, determining that the one or more results are completed, accessing one or more queued fibers in the local queue of the one or more results, where the one or more queued fibers comprise the fiber, and resuming execution of the one or more queued fibers. The asynchronous function call is called by a thread to execute a task without the thread being blocked while the task is being completed.
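The pause/enqueue/resume cycle can be modeled with generators standing in for fibers. This is a hypothetical sketch (the `Result`, `Scheduler`, and queue names are assumptions, not the patent's API): waiting on an incomplete result pauses the fiber and enqueues it in that result's local queue, and completing the result moves every queued fiber back to the ready list.

```python
from collections import deque

class Result:
    """An asynchronous result with its own local queue of paused fibers."""
    def __init__(self):
        self.done = False
        self.value = None
        self.local_queue = deque()  # fibers paused waiting on this result

class Scheduler:
    def __init__(self):
        self.ready = deque()
        self.log = []

    def spawn(self, fiber):
        self.ready.append(fiber)

    def complete(self, result, value):
        result.done = True
        result.value = value
        # Resume execution of the fibers queued on this result.
        while result.local_queue:
            self.ready.append(result.local_queue.popleft())

    def run(self):
        while self.ready:
            fiber = self.ready.popleft()
            try:
                waited_on = next(fiber)  # fiber yields the Result it awaits
                if waited_on.done:
                    self.ready.append(fiber)
                else:
                    waited_on.local_queue.append(fiber)  # pause the fiber
            except StopIteration:
                pass  # fiber finished

sched = Scheduler()
res = Result()

def worker():
    yield res  # request to wait; pauses until `res` completes
    sched.log.append(res.value)

sched.spawn(worker())
sched.run()                    # worker pauses in res.local_queue
sched.complete(res, "payload")
sched.run()                    # worker resumes with the completed result
```

The key property mirrored here is that the waiting fiber sits in the result's own local queue rather than blocking the thread that called the asynchronous function.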

Error handling during asynchronous processing of sequential data blocks

A data analytics system stores a data file that includes an ordered set of data blocks. The data blocks can be parsed out of order. An error management module of the data analytics system detects a parse error occurring during parsing of a data block and generates an error message for the parse error. The error message includes unresolved location information indicating a location of the detected parse error in the data block. The error management module resolves the unresolved location information after determining that one or more additional data blocks preceding the data block in the ordered set have been parsed. The error management module generates resolved location information that indicates a location of the parse error in the data file. The error management module updates the error message with the resolved location information and outputs the updated error message.
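The two-phase resolution described above can be sketched directly: an error's location within its block is recorded immediately, but the file-level offset can only be finalized once every preceding block's parsed length is known. The class and method names below are assumptions for illustration.

```python
class ErrorManager:
    """Sketch: resolve block-relative parse-error locations into
    file-relative ones once preceding blocks have been parsed."""

    def __init__(self, num_blocks):
        self.block_lengths = [None] * num_blocks  # filled in as blocks finish
        self.pending = []    # (block_index, offset_in_block, message)
        self.resolved = []   # finalized error messages

    def block_parsed(self, index, length):
        self.block_lengths[index] = length
        self._try_resolve()

    def report_error(self, block_index, offset_in_block, message):
        # Unresolved location info: offset is relative to the block only.
        self.pending.append((block_index, offset_in_block, message))
        self._try_resolve()

    def _try_resolve(self):
        still_pending = []
        for block_index, offset, message in self.pending:
            preceding = self.block_lengths[:block_index]
            if all(n is not None for n in preceding):
                # All preceding blocks parsed: file offset is now computable.
                file_offset = sum(preceding) + offset
                self.resolved.append(f"{message} at file offset {file_offset}")
            else:
                still_pending.append((block_index, offset, message))
        self.pending = still_pending

mgr = ErrorManager(num_blocks=3)
mgr.report_error(2, 7, "parse error")  # blocks 0 and 1 not yet parsed
mgr.block_parsed(0, 100)
mgr.block_parsed(1, 50)                # now resolvable: 100 + 50 + 7
```

Because blocks may be parsed out of order, the error message is emitted immediately with unresolved location information and updated later, exactly as the abstract describes.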

REDUCING CALL STACK USAGE FOR FUTURE OBJECT COMPLETIONS
20230064547 · 2023-03-02

Reducing call stack usage for future object completions is disclosed herein. In one example, a processor device of a computing device employs a completion queue when managing completions of future objects. When a future object is determined to have a status of complete, the processor device determines whether the current thread of the future object is associated with a completion queue. If so, a completion operation of the future object is enqueued in the completion queue. If the current thread is not associated with a completion queue, one is created and associated with the current thread, and the completion operation of the future object is performed. After completion, if the completion queue is not empty, any enqueued completion operations are dequeued and performed. Once the completion queue is empty, the completion queue is removed.

Tracking asynchronous event processing

A messaging system receives a registration from a first microservice for one or more event types to publish, and the registration includes an event report policy. The messaging system receives a first event, and the first event is described by the event report policy. The first event is monitored as it is processed by a second microservice. An event report describing the results of the monitoring is delivered to the first microservice.
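The registration/monitor/report loop can be sketched with a toy in-process bus. All names below (`MessagingSystem`, `register`, `publish`, the policy callable) are illustrative assumptions; the abstract does not specify an API.

```python
class MessagingSystem:
    """Sketch: a first microservice registers an event type with a report
    policy; events are monitored as a second microservice processes them,
    and matching results are delivered back as event reports."""

    def __init__(self):
        self.policies = {}   # event_type -> (publisher, policy)
        self.reports = {}    # publisher -> delivered event reports

    def register(self, publisher, event_type, report_policy):
        self.policies[event_type] = (publisher, report_policy)
        self.reports.setdefault(publisher, [])

    def publish(self, event_type, payload, consumer):
        result = consumer(payload)  # second microservice processes the event
        if event_type in self.policies:
            publisher, policy = self.policies[event_type]
            if policy(payload, result):  # policy selects what gets reported
                self.reports[publisher].append(
                    {"event_type": event_type, "result": result}
                )

bus = MessagingSystem()
# First microservice registers; its policy asks only for failed outcomes.
bus.register("svc-a", "order.created",
             report_policy=lambda payload, result: result == "failed")

def handle_order(payload):  # stand-in for the second microservice
    return "failed" if payload.get("amount", 0) < 0 else "ok"

bus.publish("order.created", {"amount": -5}, handle_order)
bus.publish("order.created", {"amount": 10}, handle_order)
```

The policy callable models the "event report policy" from the abstract: the publisher, not the consumer, decides which processing outcomes it wants reported back.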

Techniques for efficiently transferring data to a processor

A technique for block data transfer is disclosed that reduces data transfer and memory access overheads and significantly reduces multiprocessor activity and energy consumption. Threads executing on a multiprocessor that need data stored in global memory can request the needed data and store it in on-chip shared memory, which the threads can then access multiple times. The data can be loaded from global memory and stored in shared memory using an instruction that directs the data into the shared memory without storing the data in registers and/or cache memory of the multiprocessor during the transfer.

Programming language trigger mechanisms for parallel asynchronous enumerations

Embodiments described herein are directed to a programming language trigger mechanism. The trigger mechanism is a small piece of code that a software developer utilizes in a computer program. The trigger mechanism enables computing operations or tasks to be performed asynchronously and in a parallel fashion. In particular, logic (e.g., operations or tasks) associated with the trigger mechanism is provided to a plurality of resources for processing in parallel. Each resource asynchronously processes the task provided to it and asynchronously provides the result. The results are asynchronously returned as an enumeration, which enables the software developer to enumerate through the returned elements as a simple stream of results as they are calculated.
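A comparable effect can be seen with Python's `asyncio`, used here as an assumed stand-in for the patented mechanism (it is not the patent's implementation): tasks are dispatched to run concurrently, and the caller enumerates results as a stream in whatever order they complete.

```python
import asyncio

async def work(item, delay):
    # Simulate a resource asynchronously processing one task.
    await asyncio.sleep(delay)
    return item * item

async def triggered_enumeration(items):
    # Hand each item to a resource (task) for parallel processing.
    tasks = [asyncio.create_task(work(i, delay)) for i, delay in items]
    results = []
    # Enumerate the results as a stream, in completion order.
    for finished in asyncio.as_completed(tasks):
        results.append(await finished)
    return results

results = asyncio.run(triggered_enumeration([(3, 0.02), (2, 0.0), (4, 0.01)]))
```

The enumeration order reflects completion order rather than submission order, which is the "stream of results as they are calculated" behavior the abstract describes.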

SYSTEM AND METHOD FOR ASYNCHRONOUS DISTRIBUTION OF OPERATIONS THAT REQUIRE SYNCHRONOUS EXECUTION
20230133744 · 2023-05-04 ·

A system and method may manage traffic to software applications that ingest operations into an asynchronous queue when those operations are required to execute in a synchronous manner. An identifier may be retrieved from data corresponding to each client operation. A process distribution module may be placed in front of two incompatible systems/applications to inspect each data payload and intelligently distribute the transactions to each instance based on a well-defined algorithm (e.g., even/odd, last digit, etc.). Synchronous execution may then occur according to a timestamp for each operation.
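The distribution step above can be sketched concisely. The function and field names are assumptions for illustration: each operation carries an identifier and a timestamp, a well-defined algorithm (here, even/odd on the identifier's last digit) selects the instance, and each instance then executes its operations synchronously in timestamp order.

```python
def distribute(operations, num_instances=2):
    """Sketch of a process distribution module: route each operation to an
    instance by its identifier's last digit, then order by timestamp."""
    instances = [[] for _ in range(num_instances)]
    for op in operations:
        last_digit = int(str(op["id"])[-1])  # identifier from the payload
        instances[last_digit % num_instances].append(op)
    # Synchronous execution order within each instance: by timestamp.
    for queue in instances:
        queue.sort(key=lambda op: op["ts"])
    return instances

ops = [
    {"id": 101, "ts": 3},
    {"id": 204, "ts": 1},
    {"id": 317, "ts": 2},
    {"id": 422, "ts": 0},
]
even_instance, odd_instance = distribute(ops)
```

Because the routing key is derived deterministically from the identifier, all operations for a given client land on the same instance, which is what lets each instance serialize them by timestamp.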