G06F9/4484

Microcontroller and semiconductor device
11526598 · 2022-12-13 ·

A microcontroller includes a CPU and a cryptographic circuit. When a first program uses the cryptographic circuit, a second program transmits installation information of the first program and encrypted program installation information to the cryptographic circuit. The cryptographic circuit decrypts the encrypted program installation information and compares it with the installation information of the first program. If they match, use of the cryptographic circuit by the first program is permitted.
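
The match-then-permit gate described above can be sketched as follows. This is a minimal software model, not the patented hardware: the XOR keystream stands in for whatever real decryption the circuit performs, and all names are illustrative.

```python
import hashlib

class CryptoCircuit:
    """Toy model of the access gate: the circuit permits a program to use it
    only when the decrypted installation info matches the claimed info."""

    def __init__(self, key: bytes):
        # Derive a fixed keystream from the device key (illustrative only).
        self._stream = hashlib.sha256(key).digest()
        self.permitted = set()

    def _xor(self, blob: bytes) -> bytes:
        s = self._stream
        return bytes(b ^ s[i % len(s)] for i, b in enumerate(blob))

    def encrypt_info(self, info: bytes) -> bytes:
        # Symmetric toy cipher: encryption and decryption are the same XOR.
        return self._xor(info)

    def request_use(self, program_id: str, claimed_info: bytes,
                    encrypted_info: bytes) -> bool:
        if self._xor(encrypted_info) == claimed_info:
            self.permitted.add(program_id)   # match: permit use
            return True
        return False                         # mismatch: deny use
```

A tampered claim (or stale encrypted blob) fails the comparison, so only a program whose installation information matches the pre-provisioned ciphertext is unlocked.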

Special device and system to disable a driver from texting and being distracted during driving
20220385756 · 2022-12-01 ·

A device and system to prevent a driver from texting and being distracted while driving a vehicle. The system combines a blocking-signal component, from either the engine's ECM or a discrete electronic blocking device, that notifies a cellphone application that the engine is running and the vehicle is in motion, with at least one blocking application that signals the cellphone to block all incoming and outgoing communication signals for a set of temporarily unwanted cellphone applications and functions, while still leaving a set of emergency applications working and available for use. The device also communicates which applications are temporarily disabled and provides a way to clear and re-enable them when the driving period is over.
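
The blocking behavior above amounts to a small state machine: an engine-running signal blocks everything outside an emergency whitelist, and the block clears when the drive ends. A minimal sketch, with illustrative app names:

```python
class PhoneBlocker:
    """Sketch of the blocking application. App names are illustrative."""

    EMERGENCY = {"emergency_dialer", "sos"}   # always left available

    def __init__(self, installed_apps):
        self.apps = set(installed_apps)
        self.blocked = set()

    def on_engine_signal(self, engine_running: bool):
        # Signal from the ECM or a discrete electronic blocking device.
        # Engine running: block everything except the emergency set.
        # Engine off: clear and re-enable all applications.
        self.blocked = (self.apps - self.EMERGENCY) if engine_running else set()

    def is_allowed(self, app: str) -> bool:
        return app not in self.blocked

    def disabled_apps(self):
        # Communicate which applications are temporarily disabled.
        return sorted(self.blocked)
```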

ON THE FLY CONFIGURATION OF A PROCESSING CIRCUIT
20220374246 · 2022-11-24 ·

A method for on-the-fly updating of a processing circuit. The method includes monitoring, by multiple coroutines and during a monitoring period, the progress of multiple suspend-update-resume sequences executed by the processing circuit, wherein at least some of the multiple suspend-update-resume sequences partially overlap and are not mutually synchronized, and wherein each suspend-update-resume sequence comprises on-the-fly updates; and determining, by a merged coroutine, timings of the multiple suspend-update-resume sequences, wherein the determining comprises performing multiple calculation iterations, wherein a calculation iteration comprises calculating, in an iterative manner, the timing of the next suspend-update-resume sequence to be executed out of the multiple suspend-update-resume sequences, and wherein the calculating is responsive to timing offsets between different suspend-update-resume sequences.
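
The merged coroutine's iterative calculation — always deriving the timing of the *next* sequence from the per-sequence offsets — can be modeled with a priority queue. This is a toy scheduler under assumed simplifications (a single common period, integer times), not the patent's actual algorithm:

```python
import heapq

def merged_timings(offsets, period, n):
    """Iteratively compute the next suspend-update-resume firing, given one
    timing offset per sequence and a common repeat period (an assumption).
    Returns the first n (time, sequence_id) firings in order."""
    heap = [(off, i) for i, off in enumerate(offsets)]
    heapq.heapify(heap)
    out = []
    for _ in range(n):
        t, i = heapq.heappop(heap)       # next sequence to execute
        out.append((t, i))
        heapq.heappush(heap, (t + period, i))  # its following firing
    return out
```

Each iteration pops exactly one "next sequence to be executed", mirroring the one-at-a-time calculation iterations in the abstract, even though the underlying sequences overlap and are not mutually synchronized.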

Application programming interface as a service

An application programming interface (API) as a service is disclosed. In embodiments, a client provides code to be executed along with a configuration file for that code. Based on that, virtual machine(s) and load balancer(s) may be selected, a domain name service configured, and throttling and scaling configured. Through this, an API as a service may be provided on behalf of a client with minimal configuration required by the client or an administrator of a web service platform that provides the API as a service.
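
The provisioning flow above — client code plus a configuration file in, VMs, load balancer, DNS, throttling, and scaling out — can be sketched as a single derivation step. All field names, the one-VM-per-500-rps rule, and the domain suffix are assumptions for illustration, not from the patent:

```python
import json

def provision(config_json: str) -> dict:
    """Hypothetical sketch: derive platform resources from a minimal
    client-supplied configuration file."""
    cfg = json.loads(config_json)
    rps = cfg.get("expected_requests_per_second", 10)
    vm_count = max(1, -(-rps // 500))        # assume one VM per 500 rps
    return {
        "vms": vm_count,
        "load_balancer": vm_count > 1,        # only needed for multiple VMs
        "dns_name": f"{cfg['api_name']}.api.example.com",
        "throttle_rps": rps * 2,              # assumed throttling headroom
        "autoscale_max": vm_count * 4,        # assumed scaling limit
    }
```

The point of the sketch is the shape of the interface: the client states only its API name and expected load, and everything else is configured on its behalf.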

Shadow stack enforcement range for dynamic code

Enforcing shadow stack violations for dynamic code. A thread is executed at a processor, which includes generating a portion of dynamic code for execution by the thread, identifying a range of memory addresses where the portion of dynamic code is loaded in memory, and initiating execution of the portion of dynamic code. Based at least on execution of the thread, an exception triggered by a mismatch between a first return address popped from a call stack corresponding to the thread and a second return address popped from a shadow stack corresponding to the thread is processed. Processing the exception includes (i) determining whether the second return address popped from the shadow stack is within the identified range of addresses, and (ii) based on having determined that the second return address is within the range of addresses, initiating a shadow stack enforcement action.
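
The exception-processing decision above reduces to a range check: on a call-stack/shadow-stack mismatch, enforce only when the shadow-stack return address falls inside a registered dynamic-code range. A minimal sketch (plain integers stand in for memory addresses):

```python
def check_return(call_ret: int, shadow_ret: int, dyn_ranges) -> str:
    """Toy model of the enforcement-range check. dyn_ranges is a list of
    (lo, hi) half-open address ranges where dynamic code was loaded."""
    if call_ret == shadow_ret:
        return "ok"                   # no mismatch, no exception
    for lo, hi in dyn_ranges:
        if lo <= shadow_ret < hi:
            return "enforce"          # mismatch inside a tracked range:
                                      # initiate the enforcement action
    return "allow"                    # mismatch outside tracked ranges
```

Scoping enforcement to the identified ranges is what lets the mechanism apply shadow-stack protection to JIT-style dynamic code without penalizing code outside those ranges.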

Signal handling between programs associated with different addressing modes

Techniques for signal handling between programs associated with different addressing modes in a computer system are described herein. An aspect includes, based on a signal occurring during execution of a first program in a first runtime environment, wherein the first program and the first runtime environment are associated with a first addressing mode, invoking a first signal exit routine associated with the first addressing mode. Another aspect includes allocating a signal information area (SIA) by the first signal exit routine. Another aspect includes calling a second signal exit routine associated with a second addressing mode that is different from the first addressing mode with an address of the SIA. Another aspect includes allocating a mirror SIA by the second signal exit routine. Another aspect includes handling the signal, and resuming execution based on the handling of the signal.
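
The hand-off above — first-mode exit routine allocates an SIA, second-mode exit routine mirrors it and handles the signal — can be sketched with two functions. The "64-bit"/"31-bit" mode labels are illustrative (z/OS-style), not named in the abstract, and a dict stands in for the SIA's address:

```python
def first_signal_exit(signal_info: dict) -> dict:
    """First-mode exit routine: allocate the SIA and pass its 'address'
    (modeled here as the dict itself) to the second-mode routine."""
    sia = {"signal": signal_info, "addressing_mode": "64-bit"}
    return second_signal_exit(sia)

def second_signal_exit(sia: dict) -> dict:
    # Second-mode routine allocates a mirror SIA in its own addressing
    # mode and handles the signal; execution then resumes based on it.
    mirror = dict(sia, addressing_mode="31-bit", handled=True)
    return mirror
```

The mirror copy is the key idea: each addressing mode works with a signal information area laid out for that mode, while the signal details themselves are preserved across the boundary.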

System of Multiple Stacks in a Processor Devoid of an Effective Address Generator
20220342668 · 2022-10-27 ·

In one implementation, devoid of an effective address generator, a method of call operation comprises: pushing one or more parameters onto a first stack; pushing the contents of one or more registers onto a second stack; popping one or more of the parameters off the first stack into one or more of the registers whose contents were pushed onto the second stack; performing register-to-register operations on those registers, with the result stored in a result register that is itself one of the registers whose contents were pushed onto the second stack; popping the contents of all the saved registers off the second stack back into the respective registers from which they came; and returning control to an instruction following the call.
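
The two-stack call sequence above can be simulated step for step. This is a toy interpreter, assuming a three-register machine and a two-parameter callee for concreteness; the result is captured before the register restore so the caller still receives it:

```python
class TwoStackMachine:
    """Sketch of the call operation: stack 1 carries parameters,
    stack 2 carries saved register contents."""

    def __init__(self):
        self.regs = {"r0": 0, "r1": 0, "r2": 0}
        self.param_stack = []   # first stack: call parameters
        self.save_stack = []    # second stack: saved register contents

    def call(self, params, op):
        for p in params:                         # push parameters (stack 1)
            self.param_stack.append(p)
        self.save_stack.append(dict(self.regs))  # push registers (stack 2)
        self.regs["r1"] = self.param_stack.pop() # pop params into registers
        self.regs["r0"] = self.param_stack.pop()
        # Register-to-register operation; r2 is the result register and is
        # one of the registers whose contents were saved on stack 2.
        self.regs["r2"] = op(self.regs["r0"], self.regs["r1"])
        result = self.regs["r2"]
        self.regs = self.save_stack.pop()        # restore all saved registers
        return result                            # return control after call
```

Note that no effective addresses are computed anywhere: every data movement is a stack push/pop or a register-to-register operation, which is the point of the implementation.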

METHODS AND APPARATUS FOR CONTEXT SWITCHING

Aspects of the present disclosure relate to apparatus comprising execution circuitry comprising at least one execution unit to execute program instructions, and control circuitry. The control circuitry receives a stream of processing instructions, and issues each received instruction to one of said at least one execution unit. Responsive to determining that a first type of context switch is to be performed from an initial context to a new context, issuing continues until a pre-emption point in the stream of processing instructions is reached. Responsive to reaching the pre-emption point, state information is stored, and the new context is switched to. Responsive to determining that a context switch is to be performed to return from the new context to the initial context, the processing status is restored from the state information, and the stream of processing instructions is continued.
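
The first type of context switch described above — keep issuing instructions after the switch request until a pre-emption point, then save state and switch — can be modeled with a small control loop. Instruction counts stand in for state information; everything here is an illustrative simplification:

```python
class ControlCircuit:
    """Toy control circuitry for the pre-emption-point context switch."""

    def __init__(self):
        self.issued = []
        self.saved_state = None   # state information stored at the switch

    def execute(self, stream, request_switch_at, preemption_points):
        i = 0
        while i < len(stream):
            self.issued.append(stream[i])    # issue to an execution unit
            i += 1
            # After the switch request, continue issuing until the next
            # pre-emption point in the stream is reached.
            if i >= request_switch_at and i in preemption_points:
                self.saved_state = i         # store state information
                return "switched"            # switch to the new context
        return "done"

    def resume(self, stream):
        # Returning from the new context: restore the processing status
        # from the state information and continue the original stream.
        i = self.saved_state
        self.issued.extend(stream[i:])
        return "done"
```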

VISUALIZING API INVOCATION FLOWS IN CONTAINERIZED ENVIRONMENTS

An approach to generating end-to-end visualizations of invocations from coarse-granular application programming interface (API) requests within a containerized environment is presented. A coarse-granular API request may be intercepted and assigned a unique identifier, which is applied to all invocations associated with that request. Any invocations associated with the coarse-granular API request within the containerized environment may be monitored. Detected invocations resulting from the coarse-granular API request may be annotated with a sequence number and the unique ID of the associated request. An invocation flow for the coarse-granular API request may be generated based on the unique ID, the relationships between the invocations and microservices, and the sequence numbers of the invocations.
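
The tagging-and-reassembly scheme above can be sketched directly: intercept a request, stamp every downstream invocation with the request's unique ID plus a sequence number, then sort by sequence to recover the flow. Service names are illustrative:

```python
import itertools
import uuid

class FlowTracer:
    """Sketch of invocation-flow tracing for coarse-granular API requests."""

    def __init__(self):
        self.records = []

    def trace_request(self, api_name, invocations):
        """Intercept one coarse-granular API request and annotate each
        downstream (caller, callee) invocation with the request's unique
        ID and a sequence number."""
        uid = str(uuid.uuid4())
        seq = itertools.count(1)
        for src, dst in invocations:
            self.records.append({"id": uid, "api": api_name,
                                 "seq": next(seq), "from": src, "to": dst})
        return uid

    def flow(self, uid):
        # Reassemble the end-to-end flow for one request, ordered by
        # sequence number, from whatever order records were collected in.
        hops = sorted((r for r in self.records if r["id"] == uid),
                      key=lambda r: r["seq"])
        return [(h["from"], h["to"]) for h in hops]
```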

Performance threshold

Example systems relate to system call acceleration. A system may include a processor and a non-transitory computer readable medium. The non-transitory computer readable medium may include instructions to cause the processor to run a plurality of benchmarks for a hardware configuration. The non-transitory computer readable medium may further include instructions to determine a benchmark matrix based on the plurality of benchmarks. The non-transitory computer readable medium may include instructions to determine an input/output (I/O) bandwidth ceiling for the hardware configuration based on the benchmark matrix. Additionally, the non-transitory computer readable medium may include instructions to determine a performance threshold of an I/O access parameter for the hardware configuration based on the bandwidth ceiling.
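
The chain above — benchmarks, benchmark matrix, I/O bandwidth ceiling, performance threshold — can be sketched in one function. How the real system derives the threshold from the ceiling is not specified in the abstract; the 0.8 safety margin and the row/column layout are assumptions for illustration:

```python
def io_performance_threshold(benchmark_matrix, margin=0.8):
    """Sketch: each row holds the I/O bandwidth (MB/s) measured by one
    benchmark across configurations (e.g., block sizes). The ceiling is
    the best observed bandwidth; the threshold applies an assumed margin."""
    ceiling = max(max(row) for row in benchmark_matrix)
    return ceiling, ceiling * margin
```

A caller would compare an I/O access parameter's measured bandwidth against the returned threshold to decide, for instance, whether a system call path is underperforming on this hardware configuration.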