Patent classifications
G06F9/548
Implementing optional specialization when executing code
A compiler is capable of compiling instructions that do or do not supply specialization information for a generic type. The generic type is compiled into an unspecialized type. If specialization information was supplied, the unspecialized type is adorned with information indicating type restrictions for application programming interface (API) points associated with the unspecialized type, thereby producing a specialized type. A runtime environment is capable of executing calls to a same API point that do or do not indicate a specialized type, and is capable of executing calls to a same API point of objects of an unspecialized type or of objects of a specialized type. When the call to an API point indicates a specialized type, and the specialized type matches that of the object (if the API point belongs to an object), the runtime environment may perform optimized accesses based on type restrictions derived from the specialized type.
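The mechanism can be illustrated with a minimal Python sketch (all names here are hypothetical, not from the patent): an unspecialized generic container, plus an optional specialization step that adorns it with a type restriction so the same API points can be backed by an optimized, type-specific storage layout.

```python
from array import array

class Box:
    """Unspecialized generic type: elements are stored as generic objects."""
    def __init__(self):
        self._items = []

    def put(self, value):          # same API point on all variants
        self._items.append(value)

    def get(self, index):          # same API point on all variants
        return self._items[index]

def specialize(element_type):
    """If specialization information is supplied and usable, return a Box
    variant 'adorned' with a type restriction, letting the runtime back the
    same put/get API points with a packed, type-specific layout."""
    if element_type is int:
        class IntBox(Box):
            restriction = int
            def __init__(self):
                self._items = array("q")   # packed 64-bit ints: optimized access
        return IntBox
    return Box  # no usable specialization info: fall back to the generic layout
```

Callers that never supply specialization information keep working against `Box`, while specialized callers get the restricted variant through the identical `put`/`get` surface.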
REVIEW AND TICKET MANAGEMENT SYSTEM AND METHOD
The present disclosure provides for an apparatus for accessing a management system comprising a memory operable to store a plurality of customer reviews received from a plurality of external sources and a processor operably coupled to the memory. The processor is configured to receive the plurality of customer reviews, wherein receiving the plurality of customer reviews comprises transmitting a function call at a pre-determined frequency with property identifiers to an application programming interface (API). The processor is further configured to categorize each of the received plurality of customer reviews into a designated category and to generate a response to one of the received plurality of customer reviews. The processor is further configured to transmit the generated response to one of the plurality of external sources and to display each of the received plurality of customer reviews through one or more widgets on a user interface of the management system.
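A rough sketch of the two core steps, assuming an illustrative URL shape, parameter names, and keyword rules not specified in the abstract: building the periodic API call that carries the property identifiers, and routing each fetched review into a designated category.

```python
def poll_url(api_url, source, property_ids):
    """Build the function call made at the pre-determined frequency, passing
    property identifiers to one external source's API (URL shape and
    parameter names are assumptions)."""
    query = "&".join(f"property={p}" for p in property_ids)
    return f"{api_url}?source={source}&{query}"

def categorize(review_text):
    """Toy keyword rule: place a review into a designated category."""
    text = review_text.lower()
    if any(w in text for w in ("broken", "dirty", "rude")):
        return "complaint"
    if any(w in text for w in ("great", "clean", "friendly")):
        return "praise"
    return "neutral"
```

A scheduler (e.g. a timer or cron job) would invoke `poll_url` per source at the configured frequency; the responses would then flow through `categorize` before display in the widgets.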
Generic data exchange method using hierarchical routing
A process including retrieving a list of one or more candidate objects with which an origin object can communicate using a standard command language, wherein at least one of the one or more candidate objects uses a command language different than the standard command language. The process queries the schemas of one or more target objects selected from among the one or more candidate objects and uses the standard command language to transmit to the one or more target objects commands and/or data consistent with the schemas of the target objects.
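The discovery-then-query flow can be sketched as follows; the candidate registry, schema format, and command shape are all illustrative assumptions, not details from the patent.

```python
# Hypothetical candidate list: each entry records the object's native command
# language and the schema its commands must conform to.
CANDIDATES = {
    "thermostat":   {"language": "standard", "schema": {"set_temp": "int"}},
    "legacy_valve": {"language": "vendor-x", "schema": {"open_pct": "int"}},
}

def build_command(target, field, value):
    """Form a standard-command-language message consistent with the target's
    schema, regardless of the target's own native language."""
    schema = CANDIDATES[target]["schema"]
    if field not in schema:
        raise KeyError(f"{field!r} is not in the schema of {target}")
    return {"target": target, "cmd": field, "value": value}
```

The origin object would query each selected target's schema (here just a dictionary lookup) and emit only commands that schema admits, so even the `vendor-x` device receives well-formed standard-language traffic.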
On-premise data collection and ingestion using industrial cloud agents
A cloud agent facilitates collection of industrial data from one or more data sources on the plant floor and migration of the collected data to a cloud platform for storage and processing. Collection services associated with the cloud agent perform on-premise data collection of historical, live, and/or alarm data directly from industrial devices networked to the agent or from intermediate data concentrators that gather the data from the devices. Queue processing services executed by the cloud agent package the data into a data packet comprising header information that identifies a customer associated with the industrial enterprise, processing priority information, and other information that informs data processing services on the cloud platform how to process and/or direct the incoming data. The cloud agent then establishes a communication channel to the cloud platform and sends the data via the channel.
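The queue-processing step, packaging collected samples into a packet whose header steers cloud-side processing, might look like the following sketch (field names are assumptions):

```python
import time

def package_data(samples, customer_id, priority, data_type):
    """Wrap collected plant-floor samples in a packet whose header tells the
    cloud data-processing services how to process and direct it."""
    return {
        "header": {
            "customer": customer_id,   # identifies the industrial enterprise
            "priority": priority,      # processing priority information
            "type": data_type,         # e.g. "historical", "live", or "alarm"
            "timestamp": time.time(),
        },
        "payload": samples,
    }
```

The cloud agent would serialize such packets and push them over the communication channel it establishes to the cloud platform; cloud services then read only the header to decide routing, without inspecting the payload.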
EMBEDDED CAPACITY-COMPUTER MODULE FOR MICROSERVICE LOAD BALANCING AND DISTRIBUTION
Disclosed herein are system, method, and computer program product embodiments for microservice load balancing and distribution using an embedded computer capacity module. An embodiment operates by retrieving an application programming interface (API) request from a client. The embodiment stores objects representing the API request in a job detail database containing details related to the API request. The embodiment determines an available bandwidth of a service instance. The embodiment transmits the determined available bandwidth of the service instance to a job processor. The embodiment selects tasks from the objects representing the API request stored in the job detail database based on the determined available bandwidth of the service instance. The embodiment executes the selected tasks.
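The task-selection step can be sketched with a simple greedy pass, assuming each stored job object carries a per-task cost (the cost field and greedy policy are illustrative, not from the patent):

```python
def select_tasks(job_details, available_bandwidth):
    """Greedy sketch: pick queued tasks (objects representing the API
    request) whose combined cost fits the service instance's reported
    available bandwidth."""
    selected, used = [], 0
    for task in job_details:
        if used + task["cost"] <= available_bandwidth:
            selected.append(task["id"])
            used += task["cost"]
    return selected
```

The job processor would run this each time a service instance reports capacity, executing only the returned task IDs and leaving the rest queued in the job detail database.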
External function invocation by a data system
A query referencing a function associated with a remote software component is received by a network-based data warehouse system. Temporary security credentials corresponding to a role at a cloud computing service platform are obtained. The role has permission to send calls to a web endpoint corresponding to the remote software component. A request comprising input data and electronically signed using the temporary security credentials is sent to a web Application Programming Interface (API) management system of the cloud computing service platform. The request, when received by the web API management system, causes the web API management system to invoke external functionality provided by the remote software component at the web endpoint with respect to the input data. A response comprising a result of invoking the external functionality is received from the web API management system, and the result data is processed according to the query.
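The "electronically signed using the temporary security credentials" step could be sketched as an HMAC over the request body; the header name, serialization, and HMAC-SHA256 choice are assumptions for illustration, not the platform's actual signing scheme.

```python
import hashlib
import hmac
import json

def sign_request(payload, temp_secret):
    """Sketch: sign the input data with a temporary credential before
    sending it to the web API management system, which verifies the
    signature and then invokes the external function at the web endpoint."""
    body = json.dumps(payload, sort_keys=True).encode()
    signature = hmac.new(temp_secret.encode(), body, hashlib.sha256).hexdigest()
    return {"body": body, "headers": {"X-Signature": signature}}
```

On the receiving side, the management system would recompute the HMAC with the same credential and compare digests before forwarding the input data to the remote software component.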
Object-oriented memory client
A hardware client and corresponding method employ an object-oriented memory device. The hardware client generates an object-oriented message associated with an object of an object class. The object class includes at least one data member and at least one method. The hardware client transmits the object-oriented message generated to the object-oriented memory device via a hardware communications interface. The hardware communications interface couples the hardware client to the object-oriented memory device. The object is instantiated or to-be instantiated in at least one physical memory of the object-oriented memory device according to the object class. The at least one method enables the object-oriented memory device to access the at least one data member for the hardware client.
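The shape of an object-oriented message the hardware client might transmit can be sketched as a small record; the field names and wire encoding below are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class ObjectMessage:
    """Sketch of a message sent over the hardware communications interface:
    it names an object class, an instance in the device's physical memory,
    and a method for the device to run on the client's behalf."""
    object_class: str              # class per which the object is instantiated
    object_id: int                 # which instance in device memory
    method: str                    # method that will access the data members
    args: tuple = field(default_factory=tuple)

    def encode(self):
        """Toy wire format for transmission to the memory device."""
        return f"{self.object_class}:{self.object_id}:{self.method}{self.args}"
```

The key inversion versus a conventional memory bus is visible here: the client ships a method name rather than raw addresses, and the device's own logic touches the data members.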
Digital processing systems and methods for digital workflow system dispensing physical reward in collaborative work systems
Systems, methods, and computer-readable media for providing physical rewards from disbursed networked dispensers are disclosed. The systems and methods may involve at least one processor configured to: maintain and cause to be displayed a workflow table having rows, columns and cells; track a workflow milestone via a designated cell configured to maintain data indicating that the workflow milestone is reached; access a data structure storing a rule containing a condition associated with the designated cell and a conditional trigger associated with at least one dispenser; receive an input via the designated cell; compare the input with the condition to determine a match; and activate the conditional trigger to cause at least one dispensing signal to be transmitted over a network to the at least one dispenser to cause the at least one dispenser to dispense a physical item as a result of the milestone being reached.
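The rule-evaluation core (compare the designated cell's input with the stored condition, and on a match emit the dispensing signal) can be sketched as follows; the rule structure and signal fields are assumptions for illustration.

```python
def check_cell(rules, cell_id, value):
    """Compare an input received via a designated cell against that cell's
    stored condition; on a match, return the dispensing signal to transmit
    over the network to the associated dispenser."""
    rule = rules.get(cell_id)
    if rule and value == rule["condition"]:
        return {"dispenser": rule["dispenser"], "action": "dispense"}
    return None  # milestone not reached: no signal

# Hypothetical rule: when cell C7 reads "Done", trigger the floor-3 dispenser.
rules = {"C7": {"condition": "Done", "dispenser": "floor-3"}}
```

In the full system, the non-`None` return would be serialized and sent over the network, causing the targeted dispenser to release the physical reward.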
REDUCING THE STARTUP LATENCY OF FUNCTIONS IN A FAAS INFRASTRUCTURE
Techniques for reducing the startup latency of functions in a Functions-as-a-Service (FaaS) infrastructure are provided. In one set of embodiments, a function manager of the FaaS infrastructure can receive a request to invoke a function uploaded to the infrastructure and can retrieve information associated with the function. The retrieved information can include an indicator of whether instances of the function may be sticky (i.e., kept in host system primary memory after function execution is complete), and a list of zero or more host systems in the FaaS infrastructure that currently have an unused sticky instance of the function in their respective primary memories. If the indicator indicates that instances of the function may be sticky and if the list identifies at least one host system with an unused sticky instance of the function in its primary memory, the function manager can select the at least one host system for executing the function.
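The function manager's decision reduces to a small lookup, sketched below with assumed field names: if the function may be sticky and some host holds an unused sticky instance, route the invocation there instead of cold-starting.

```python
def pick_host(function_info):
    """Sketch of the function manager's placement decision for an invocation.
    function_info carries the retrieved indicator and host list described
    in the abstract (field names are assumptions)."""
    if function_info["sticky"] and function_info["hosts_with_sticky"]:
        # Warm start: reuse an unused sticky instance already resident in
        # a host's primary memory, avoiding startup latency.
        return function_info["hosts_with_sticky"][0]
    return None  # no reusable instance: provision a fresh (cold) one
```

Picking the first listed host is an arbitrary tie-break for the sketch; a real manager could weigh load or locality when several hosts qualify.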
Digital processing systems and methods for data visualization extrapolation engine for item extraction and mapping in collaborative work systems
Systems, methods, and computer-readable media for extrapolating information display visualizations are disclosed. The systems and methods may involve maintaining a board with a plurality of items, each item defined by a row of cells, and wherein each cell is configured to contain data and is associated with a column heading; linking at least a first column to at least a second column so that a change in data in a cell of the at least first column causes a change in data of a cell in the at least second column; receiving a first selection of a particular item from the board, wherein the particular item includes a plurality of cells with data in each cell, and wherein data in a first cell of the plurality of cells is linked to data in a second cell of the plurality of cells; upon receipt of the first selection.
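The column-linking behavior (a change in a cell of the first column causes a change in a cell of the second column) can be sketched with an explicit propagation rule; the board layout and the example rate rule are illustrative assumptions.

```python
def propagate(board, links, item, column, value):
    """Write a cell on the board; if the written column is the source of a
    link, update the linked cell of the same item per the link's rule."""
    board[item][column] = value
    for src, dst, rule in links:
        if src == column:
            board[item][dst] = rule(value)
    return board[item]

# Hypothetical board: each item is a row of cells keyed by column heading.
board = {"Task A": {"hours": 0, "cost": 0}}
# Assumed link rule: cost column tracks hours column at $50/hour.
links = [("hours", "cost", lambda h: h * 50)]
```

With this in place, selecting an item and editing its `hours` cell automatically refreshes the linked `cost` cell, which is the data relationship the extrapolated visualization would draw from.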