G06F9/4494

SEAMLESS SYNCHRONIZATION ACROSS DIFFERENT APPLICATIONS

A computing system and methods are provided for executing an application framework to provision a pipeline or workspace that may include applications that collaborate, such as in performing a physical actuation of a physical component. The provisioning may include determining or receiving an indication of the applications to be installed and contextual information of each of the applications. The contextual information includes data objects, data types, or data formats supported by each of the applications, as well as relationships among the applications. The computing system may provision one or more of the applications based on the indication of the applications to be installed and the contextual information. The computing system may receive an indication of an update at a first application of the applications and propagate the update to a subset of the applications based on the contextual information.
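The propagation step described above can be sketched as a small registry that records each application's contextual information (here reduced to a set of supported data types) and forwards an update only to the subset of applications whose context covers the update. All class and method names below are illustrative assumptions, not the patent's implementation.

```python
# Minimal sketch (assumed names): propagate an update only to applications
# whose registered contextual information covers the update's data type.
class Workspace:
    def __init__(self):
        self.apps = {}  # app name -> set of supported data types

    def provision(self, name, supported_types):
        self.apps[name] = set(supported_types)

    def propagate(self, source, update_type):
        # Forward to the subset of other apps that support this data type.
        return [name for name, types in self.apps.items()
                if name != source and update_type in types]

ws = Workspace()
ws.provision("designer", {"geometry", "schedule"})
ws.provision("simulator", {"geometry"})
ws.provision("billing", {"invoice"})
targets = ws.propagate("designer", "geometry")
```

A richer model would also consult the declared relationships among applications, not just supported data types.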

Dynamically-imposed field and method type restrictions for managed execution environments

A data structure (e.g., field, method parameter, or method return value) is defined by a descriptor to be of a particular type, which imposes a first set of restrictions on values assumable by the data structure. Separately, the data structure is associated with a type restriction that defines a second set of restrictions that further restricts the values assumable by the data structure. The descriptor and type restriction are encoded separately in a program binary. Responsive to identifying a value for the data structure that (a) is not forbidden by the first set of restrictions defined by the descriptor and (b) is forbidden by the second set of restrictions defined by the type restriction, a runtime environment may perform a restrictive operation, such as: blocking storage of the value to a field; blocking passing of the value to a method parameter; or blocking return of the value from a method.
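The two-layer check can be illustrated with a field whose descriptor type admits any integer, while a separately attached type restriction narrows the allowed values further; the runtime blocks a store that passes the first check but fails the second. This is a behavioral sketch with assumed names, not the managed runtime's actual mechanism.

```python
# Illustrative sketch: a field whose descriptor type (int) permits any int,
# with a separately encoded type restriction that further narrows the values.
class RestrictedField:
    def __init__(self, descriptor_type, restriction=None):
        self.descriptor_type = descriptor_type   # first set of restrictions
        self.restriction = restriction           # second, narrower set
        self.value = None

    def store(self, value):
        if not isinstance(value, self.descriptor_type):
            raise TypeError("forbidden by descriptor")
        if self.restriction is not None and not self.restriction(value):
            # Restrictive operation: block storage of the value to the field.
            raise ValueError("forbidden by type restriction")
        self.value = value

f = RestrictedField(int, restriction=lambda v: 0 <= v <= 255)
f.store(42)            # allowed by both sets of restrictions
blocked = False
try:
    f.store(300)       # an int, so allowed by the descriptor, but forbidden
except ValueError:     # by the narrower type restriction
    blocked = True
```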

Visual Macro Language for Colorimetric Software
20230205497 · 2023-06-29 ·

Described herein is a computer-implemented method for providing color quality control applications to a user. The computer-implemented method includes presenting one or more applications in a graphical user interface (GUI), where the one or more applications include (i) one or more software-as-a-service (SaaS) applications and/or (ii) one or more on-premises applications for providing one or more color quality control functionalities. The GUI includes one or more macro language widgets including a plurality of macro building blocks, with each macro building block representing a dedicated functionality. The computer-implemented method further includes selecting a number of macro building blocks of the plurality of macro building blocks and connecting the selected macro building blocks with each other via socket connections to provide a desired script, which is executable to create a customized protocol that provides a sequence of functionalities.
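The block-and-socket composition can be sketched as chainable blocks, each wrapping one dedicated functionality, connected in sequence to form an executable protocol. The block names and functionalities below are hypothetical stand-ins for the colorimetric operations a real macro language would offer.

```python
# Hypothetical sketch: macro building blocks chained via "socket" connections
# into an executable script that runs a sequence of functionalities.
class Block:
    def __init__(self, name, func):
        self.name, self.func, self.next = name, func, None

    def connect(self, other):
        # Socket connection to the next building block; returns it for chaining.
        self.next = other
        return other

    def run(self, value):
        value = self.func(value)
        return self.next.run(value) if self.next else value

measure = Block("measure", lambda steps: steps + ["measure"])
compare = Block("compare", lambda steps: steps + ["compare"])
report  = Block("report",  lambda steps: steps + ["report"])
measure.connect(compare).connect(report)
protocol = measure.run([])   # executes the customized protocol in order
```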

Implementing optional specialization when executing code

A compiler is capable of compiling instructions that do or do not supply specialization information for a generic type. The generic type is compiled into an unspecialized type. If specialization information was supplied, the unspecialized type is adorned with information indicating type restrictions for application programming interface (API) points associated with the unspecialized type, which becomes a specialized type. A runtime environment is capable of executing calls to a same API point that do or do not indicate a specialized type, and is capable of executing calls to a same API point of objects of an unspecialized type or of objects of a specialized type. When the call to an API point indicates a specialized type, and the specialized type matches that of the object (if the API point belongs to an object), then a runtime environment may perform optimized accesses based on type restrictions derived from the specialized type.
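One way to picture the optional-specialization idea is a container that works generically when no specialization information is supplied, but, when "adorned" with a specialization, both enforces the derived type restriction and takes an optimized access path. This is a loose analogy in Python under assumed semantics, not the compiler or runtime described in the abstract.

```python
# Sketch (assumed semantics): an unspecialized container accepts any type;
# adorning it with specialization information enables an optimized path.
class Box:
    def __init__(self, specialization=None):
        self.specialization = specialization  # e.g. int, or None if generic
        self.items = []

    def add(self, item):
        if self.specialization and not isinstance(item, self.specialization):
            raise TypeError("violates specialized type restriction")
        self.items.append(item)

    def total(self):
        if self.specialization is int:
            # Optimized access: items are known to be ints, no per-item checks.
            return sum(self.items)
        # Generic path: each item must be converted/checked individually.
        return sum(int(x) for x in self.items)

generic = Box()       # call site supplies no specialization information
generic.add("3")
special = Box(int)    # call site indicates a specialized type
special.add(3)
```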

Systems and Methods for Using Error Correction and Pipelining Techniques for an Access Triggered Computer Architecture

A method for improving performance of an access triggered architecture for a computer implemented application is provided. The method first executes typical operations of the access triggered architecture according to an execution time, wherein the typical operations comprise: obtaining a dataset and an instruction set; and using the instruction set to transmit the dataset to a functional block associated with an operation, wherein the functional block performs the operation using the dataset to generate a revised dataset. The method further creates a pipeline of the typical operations to reduce the execution time of the typical operations, to create a reduced execution time; and executes the typical operations according to the reduced execution time, using the pipeline.
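The access-triggered routing step can be sketched as an instruction set that names functional blocks in order; each access hands the current dataset to a block, which returns a revised dataset. The sketch shows only the sequential routing, not the pipelining the method adds to reduce execution time, and the block names are assumptions.

```python
# Assumed sketch: an instruction set routes a dataset to functional blocks;
# each block performs its operation and yields a revised dataset.
def run(instructions, dataset, blocks):
    for block_name in instructions:
        dataset = blocks[block_name](dataset)  # access triggers the operation
    return dataset

blocks = {
    "scale":  lambda d: [x * 2 for x in d],
    "offset": lambda d: [x + 1 for x in d],
}
result = run(["scale", "offset"], [1, 2, 3], blocks)
```

Pipelining would overlap these block executions across successive datasets rather than running them strictly one after another.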

SWITCH STATE DETERMINING DEVICE
20170357517 · 2017-12-14 ·

A switch state determining device, includes: an external terminal to which a switch is externally attached; a constant current generating part configured to generate a constant current and flow the constant current through the external terminal; a voltage comparing part configured to compare a terminal voltage of the external terminal with a threshold voltage to generate a comparison signal; an ON/OFF determining part configured to output an ON/OFF determination signal of the switch depending on the comparison signal; a threshold voltage control part configured to adjust the threshold voltage depending on a threshold voltage set value; and a level determining part configured to output a level determination signal of the terminal voltage or a terminal voltage value.
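The comparison logic of the device can be modeled behaviorally: the terminal voltage is compared against an adjustable threshold to produce a comparison signal, from which the ON/OFF determination follows. This is a software model, not a circuit, and it assumes active-low sensing (a closed switch pulls the terminal voltage below the threshold), which the abstract does not specify.

```python
# Behavioral sketch (not a circuit): compare the terminal voltage against an
# adjustable threshold to produce a comparison signal and ON/OFF determination.
def determine_switch_state(terminal_voltage, threshold_set_value):
    threshold = threshold_set_value            # threshold voltage control part
    comparison = terminal_voltage < threshold  # voltage comparing part
    # ON/OFF determining part (assumed active-low: closed switch pulls low).
    return "ON" if comparison else "OFF"
```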

Application program management method and apparatus, and storage medium
11681601 · 2023-06-20 ·

An application program management method and apparatus, and a non-transitory computer-readable storage medium are disclosed. The application program management method may include: determining a current extra inspection policy for a target application program according to a current running type of the target application program in response to a determination that a freezing detection of the target application program is required; determining a current inspection policy corresponding to the target application program based on a basic inspection policy corresponding to the target application program and the current extra inspection policy; and freezing the target application program in response to a determination that a running state of the target application program satisfies the current inspection policy.
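The policy-merging logic can be sketched as follows: an extra inspection policy is selected by the application's current running type, merged with the basic policy, and the app is frozen only if its running state satisfies the combined policy. The policy fields and thresholds below are invented for illustration.

```python
# Hypothetical sketch: merge a basic inspection policy with an extra policy
# selected by the app's current running type, then freeze the app only if
# its running state satisfies the combined (current) inspection policy.
BASIC_POLICY = {"idle_seconds": 60}
EXTRA_BY_RUNNING_TYPE = {
    "background": {"cpu_percent_below": 1.0},
    "foreground": {},   # foreground apps get no extra checks
}

def should_freeze(running_type, state):
    policy = {**BASIC_POLICY, **EXTRA_BY_RUNNING_TYPE.get(running_type, {})}
    if state["idle_seconds"] < policy["idle_seconds"]:
        return False
    if "cpu_percent_below" in policy and \
            state["cpu_percent"] >= policy["cpu_percent_below"]:
        return False
    return True

frozen = should_freeze("background", {"idle_seconds": 120, "cpu_percent": 0.2})
```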

MANAGING A SET OF COMPUTE NODES WHICH HAVE DIFFERENT CONFIGURATIONS IN A STREAM COMPUTING ENVIRONMENT
20170344387 · 2017-11-30 ·

Disclosed aspects relate to managing a set of compute nodes for processing a stream of tuples using a set of processing elements. The set of compute nodes is structured to include both a first compute node having a first configuration and a second compute node having a second configuration. The first configuration differs from the second configuration. Based on the first configuration and the set of processing elements which includes a first processing element, a determination is made to establish the first processing element on the first compute node and the first processing element is established on the first compute node. In embodiments, based on the second configuration and the set of processing elements which includes a second processing element, a determination is made to establish the second processing element on the second compute node and the second processing element is established on the second compute node.
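The configuration-based placement decision can be sketched as matching each processing element's requirements against the differing node configurations and establishing the element on the first node that satisfies them. The requirement sets and node names are illustrative assumptions.

```python
# Illustrative sketch: place each processing element on a compute node whose
# configuration satisfies the element's requirements; nodes differ in config.
def place(elements, nodes):
    placement = {}
    for elem, required in elements.items():
        for node, config in nodes.items():
            if required.issubset(config):   # configuration-based determination
                placement[elem] = node      # establish the element on the node
                break
    return placement

nodes = {"node1": {"gpu", "ssd"}, "node2": {"highmem"}}
elements = {"pe1": {"gpu"}, "pe2": {"highmem"}}
placement = place(elements, nodes)
```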

Stream computing application shutdown and restart without data loss
20170344382 · 2017-11-30 ·

In a stream computing application shutdown, a shutdown message is received by a source operator of the stream computing application. In response, the source operator stops acquiring data from external sources, sends any cached data to an output queue of the source operator, sends the shutdown message to the output queue of the source operator, and sends the cached data and shutdown message to an input queue of another operator in the stream computing application. The source operator then terminates. In response to receiving the shutdown message, the other operator completes the processing of data in its input queue and sends any outputs from the processing of the data in its input queue to one or more output destinations. The other operator then terminates. In this manner, a stream computing application may be shut down while ensuring that any already inputted data is processed to completion, thus avoiding data loss.
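The shutdown protocol described above can be sketched with explicit queues: the source drains its cache into the downstream queue, appends a shutdown marker, and a downstream operator processes everything ahead of the marker before terminating, so no already-inputted data is lost. The queue-based model and operator logic are assumptions for illustration.

```python
# Sketch of the shutdown protocol (assumed queue-based model): the source
# drains its cache, enqueues a shutdown marker, and a downstream operator
# finishes all queued data before terminating, avoiding data loss.
from collections import deque

SHUTDOWN = object()  # stands in for the shutdown message

def shutdown_source(cache, downstream_queue):
    while cache:                       # send any cached data downstream first
        downstream_queue.append(cache.popleft())
    downstream_queue.append(SHUTDOWN)  # then forward the shutdown message

def run_operator(input_queue, outputs):
    while input_queue:
        item = input_queue.popleft()
        if item is SHUTDOWN:
            return "terminated"        # only after all queued data is processed
        outputs.append(item * 10)      # stand-in for the operator's processing

queue, outputs = deque(), []
shutdown_source(deque([1, 2, 3]), queue)
status = run_operator(queue, outputs)
```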

Reachability-based coordination for cyclic dataflow

Various embodiments provide techniques for working with large-scale collections of data pertaining to real world systems, such as a social network, a roadmap/GPS system, etc. The techniques perform incremental, iterative, and interactive parallel computation using a coordination clock protocol, which applies to scheduling computations and managing resources such as memory and network resources, etc., in cyclic graphs including those resulting from a differential dataflow model that performs computations on differences in the collections of data.