Patent classifications
G06F9/541
Methods, systems, and computer readable media for data translation using a representational state transfer (REST) application programming interface (API)
According to one method, the method comprises: receiving, from a client via a REST API, input in a first format; converting, using predetermined metadata, the input in the first format into input in a second format; sending the input in the second format to a legacy system for performing an operation using the input in the second format; receiving, from the legacy system, output in the second format, wherein the output is based at least in part on the operation performed using the input in the second format; converting, using the predetermined metadata, the output in the second format into output in the first format; and sending, to the client via the REST API, the output in the first format.
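The round trip described above can be illustrated with a small sketch. The field names, the JSON-style first format, and the fixed-width "legacy" second format here are hypothetical stand-ins, not taken from the patent; the point is the metadata table driving both conversions.

```python
# Hypothetical sketch: translate a dict-style request into a fixed-width
# "legacy" record and back, driven by a predetermined metadata table.

# Metadata: field name -> (offset, width) in the legacy record.
METADATA = {"account": (0, 8), "amount": (8, 6)}
RECORD_LEN = 14

def to_legacy(request: dict) -> str:
    """Convert first-format (dict) input into a fixed-width legacy record."""
    record = [" "] * RECORD_LEN
    for field, (offset, width) in METADATA.items():
        value = str(request[field]).rjust(width)[:width]
        record[offset:offset + width] = value
    return "".join(record)

def from_legacy(record: str) -> dict:
    """Convert legacy-format output back into the first format."""
    return {field: record[off:off + width].strip()
            for field, (off, width) in METADATA.items()}

request = {"account": "A123", "amount": 42}
legacy_input = to_legacy(request)       # sent to the legacy system
assert from_legacy(legacy_input) == {"account": "A123", "amount": "42"}
```

Because the same `METADATA` table is used in both directions, the conversion of the legacy system's output mirrors the conversion of the client's input, as the claim requires.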
DYNAMIC ALLOCATION OF EXECUTABLE CODE FOR MULTI-ARCHITECTURE HETEROGENEOUS COMPUTING
An apparatus for executing a software program, comprising processing units and a hardware processor adapted for: in an intermediate representation of the software program, where the intermediate representation comprises blocks, each associated with an execution block of the software program and comprising intermediate instructions, identifying a calling block and a target block, where the calling block comprises a control-flow intermediate instruction to execute a target intermediate instruction of the target block; generating target instructions using the target block; generating calling instructions using the calling block and a computer control instruction for invoking the target instructions, when the calling instructions are executed by a calling processing unit and the target instructions are executed by a target processing unit; configuring the calling processing unit for executing the calling instructions; and configuring the target processing unit for executing the target instructions.
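The identification step can be sketched with a toy intermediate representation; the block names and instruction tuples below are hypothetical, not the patent's actual IR. A block containing a control-flow "call" instruction that names another block is a calling block, and the named block is its target block.

```python
# Hypothetical IR sketch: each block is a list of intermediate instructions.
def find_call_pairs(blocks):
    """Return (calling_block, target_block) pairs for cross-block calls."""
    pairs = []
    for name, instrs in blocks.items():
        for instr in instrs:
            # A "call" naming another block is a control-flow instruction
            # whose target intermediate instruction lives in the target block.
            if instr[0] == "call" and instr[1] in blocks:
                pairs.append((name, instr[1]))
    return pairs

blocks = {
    "main": [("load", "x"), ("call", "kernel"), ("store", "x")],
    "kernel": [("mul", "x", 2), ("ret",)],
}
assert find_call_pairs(blocks) == [("main", "kernel")]
```

From such a pair, the calling block would be compiled for one processing unit with a computer control instruction (e.g., a device launch or remote invocation) that invokes the target block's instructions compiled for the other processing unit.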
Orchestration for automated performance testing
Methods, systems, and devices supporting orchestration for automated performance testing are described. A server may orchestrate performance testing for software applications across multiple different test environments. The server may receive a performance test indicating an application to test and a set of test parameters. The server may determine a local or a non-local test environment for running the performance test. The server may deploy the application to the test environment, where the deploying involves deploying a first component of the performance test to a first test artifact in the test environment and deploying a second component of the performance test different from the first component to a second test artifact in the test environment. The server may execute the performance test to obtain a result set, where the executing involves executing multiple performance test components as well as orchestrating results across multiple test artifacts to obtain the result set.
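A minimal sketch of that flow, with hypothetical component and artifact names (the patent does not specify them): two test components are deployed to two different test artifacts, executed, and the per-artifact results are orchestrated into one result set.

```python
# Hypothetical orchestration sketch: deploy, execute, merge results.

def deploy(component, artifact, environment):
    """Stand-in for deploying one test component to one test artifact."""
    return {"artifact": artifact, "component": component, "env": environment}

def run(deployment):
    """Stand-in for executing one performance-test component."""
    return {deployment["artifact"]: f"ran {deployment['component']}"}

def orchestrate(app, params, environment="local"):
    deployments = [
        deploy(params["component_a"], "artifact-1", environment),
        deploy(params["component_b"], "artifact-2", environment),
    ]
    result_set = {}
    for d in deployments:       # execute each component, merge its results
        result_set.update(run(d))
    return result_set

results = orchestrate("my-app", {"component_a": "load-gen", "component_b": "probe"})
assert results == {"artifact-1": "ran load-gen", "artifact-2": "ran probe"}
```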
Updating a Digital Object Representing a Real-World Object
A method, computer program and computer program product for allowing update of digital objects, as well as an edge node and a process control system including an edge node. The edge node obtains a copy of an original digital object from a process control server, the original object having a number of aspects and being provided according to a first process control data format; provides the copy as a modified object in a second data format that is open to applications external to the process control system, in which second data format the modified object comprises a number of data models; and receives an update of the modified object from the application, where the update includes a new data model.
Automated orchestration of containers by assessing microservices
Performing container scaling and migration for container-based microservices is provided. A first set of features is extracted from each respective microservice of a plurality of different microservices. A number of containers required at a future point in time for each respective microservice of the plurality of different microservices is predicted using a trained forecasting model and the first set of features extracted from each respective microservice. A scaling label and a scaling value are assigned to each respective microservice of the plurality of different microservices based on a predicted change in a current number of containers corresponding to each respective microservice according to the number of containers required at the future point in time for each respective microservice. The current number of containers corresponding to each respective microservice of the plurality of different microservices is adjusted based on the scaling label and the scaling value assigned to each respective microservice.
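The labeling step can be sketched as follows; the service names and counts are hypothetical, and the `predicted` dict stands in for the trained forecasting model's output. Each microservice gets a scaling label and value from the difference between its predicted and current container counts, and the current counts are then adjusted accordingly.

```python
# Hypothetical sketch: assign scaling labels/values and adjust containers.

current = {"auth": 3, "search": 5, "billing": 2}
predicted = {"auth": 5, "search": 5, "billing": 1}   # forecaster stand-in

def assign_scaling(current, predicted):
    """Label each microservice from its predicted change in containers."""
    plan = {}
    for svc, now in current.items():
        delta = predicted[svc] - now
        label = ("scale-up" if delta > 0
                 else "scale-down" if delta < 0
                 else "no-change")
        plan[svc] = (label, abs(delta))
    return plan

def apply_plan(current, plan):
    """Adjust current container counts using the labels and values."""
    adjusted = {}
    for svc, now in current.items():
        label, value = plan[svc]
        step = value if label == "scale-up" else -value if label == "scale-down" else 0
        adjusted[svc] = now + step
    return adjusted

plan = assign_scaling(current, predicted)
assert plan == {"auth": ("scale-up", 2),
                "search": ("no-change", 0),
                "billing": ("scale-down", 1)}
assert apply_plan(current, plan) == predicted
```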
Datapath load distribution for a RIC
To provide a low latency near RT RIC, some embodiments separate the RIC's functions into several different components that operate on different machines (e.g., execute on VMs or Pods) operating on the same host computer or different host computers. Some embodiments also provide high speed interfaces between these machines. Some or all of these interfaces operate in a non-blocking, lockless manner in order to ensure that critical near RT RIC operations (e.g., datapath processes) are not delayed due to multiple requests causing one or more components to stall. In addition, each of these RIC components also has an internal architecture that is designed to operate in a non-blocking manner so that no one process of a component can block the operation of another process of the component. All of these low latency features allow the near RT RIC to serve as a high speed IO between the E2 nodes and the xApps.
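The non-blocking interface idea can be sketched with a bounded channel between two components; this is an illustrative analogy using Python's standard `queue` module, not the patent's actual inter-machine transport. A full producer rejects the request immediately instead of stalling the datapath.

```python
# Hypothetical sketch: a non-blocking send between two RIC components.
import queue

channel = queue.Queue(maxsize=2)   # bounded channel between components

def send_nonblocking(msg):
    """Enqueue without ever blocking; report failure instead of stalling."""
    try:
        channel.put_nowait(msg)
        return True
    except queue.Full:
        return False   # caller can retry or drop; the datapath never waits

assert send_nonblocking("a") and send_nonblocking("b")
assert not send_nonblocking("c")   # channel full: rejected, not blocked
```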
Multi-active electronic subscriber identity module profiles for multi-service user equipment
A wireless communication device for establishing two different user equipment (UE) radio access network (RAN) attachments. The wireless communication device comprises an application processor; a baseband processor; a non-transitory memory; and a virtual user equipment (UE) application stored in the non-transitory memory that, when executed by the application processor as a first virtual UE instance, accesses a first eSIM profile stored in the non-transitory memory, establishes a first UE attachment to a radio access network based on credentials accessed from the first eSIM profile, and conducts a first wireless communication session via the first UE attachment, and, when executed by the application processor as a second virtual UE application instance, accesses a second eSIM profile stored in the non-transitory memory, establishes a second UE attachment to a radio access network based on credentials accessed from the second eSIM profile, and conducts a second wireless communication session via the second UE attachment.
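The structure can be sketched as one application class instantiated twice, each instance bound to its own eSIM profile; the class, attribute names, and IMSI values are illustrative assumptions, and real attachment would involve the baseband processor and RAN signaling.

```python
# Hypothetical sketch: two instances of one virtual UE application, each
# attaching with credentials from its own eSIM profile.

class VirtualUE:
    """One instance of the virtual UE application, bound to one eSIM profile."""
    def __init__(self, esim_profile):
        self.profile = esim_profile
        self.attachment = None

    def attach(self):
        # Credentials for this instance's RAN attachment come from its profile.
        self.attachment = f"attached-with-{self.profile['imsi']}"
        return self.attachment

ue1 = VirtualUE({"imsi": "001010000000001"})   # first eSIM profile
ue2 = VirtualUE({"imsi": "001010000000002"})   # second eSIM profile
assert ue1.attach() != ue2.attach()            # two independent attachments
```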
Templates for mapping data events to API calls
Templates for mapping data events to API calls are leveraged in a digital medium environment. For instance, to enable communication between an event-driven architecture (EDA) system and an application programming interface (API) system, the described techniques utilize templates that enable EDA events to be mapped to API communications and API communications to be mapped to EDA events.
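One direction of that mapping can be sketched as follows; the event type, endpoint, and field paths are hypothetical, not the patent's template format. The template names the API endpoint and tells the mapper which event fields fill each API parameter.

```python
# Hypothetical sketch: a template maps an EDA event to an API call.

TEMPLATES = {
    "user.created": {
        "endpoint": "POST /users",
        "params": {"name": "payload.name", "email": "payload.email"},
    },
}

def lookup(event, path):
    """Resolve a dotted path like 'payload.name' against the event dict."""
    value = event
    for key in path.split("."):
        value = value[key]
    return value

def event_to_api_call(event):
    """Build an API communication from an EDA event via its template."""
    template = TEMPLATES[event["type"]]
    return {
        "endpoint": template["endpoint"],
        "params": {p: lookup(event, path)
                   for p, path in template["params"].items()},
    }

event = {"type": "user.created",
         "payload": {"name": "Ada", "email": "ada@example.com"}}
assert event_to_api_call(event) == {
    "endpoint": "POST /users",
    "params": {"name": "Ada", "email": "ada@example.com"},
}
```

The reverse direction (API response back to EDA event) would use the same template read the other way around.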
System and method for processing telephony sessions
In one embodiment, the method of processing telephony sessions includes: communicating with an application server using an application layer protocol; processing telephony instructions with a call router; and creating call router resources accessible through a call router Application Programming Interface (API). In another embodiment, the system for processing telephony sessions includes: a call router, a URI for an application server, a telephony instruction executed by the call router, and a call router API resource.
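A minimal sketch of the processing loop, with an invented two-verb instruction set (the patent does not specify one): the call router fetches telephony instructions from the application server's URI over an application-layer protocol and executes them in order.

```python
# Hypothetical sketch: a call router executing fetched telephony instructions.

def fetch_instructions(uri):
    """Stand-in for the application-layer request to the application server."""
    return [("say", "hello"), ("dial", "+15550100")]

def process_session(uri):
    """Execute each telephony instruction in order, logging the actions."""
    log = []
    for verb, arg in fetch_instructions(uri):
        if verb == "say":
            log.append(f"say: {arg}")
        elif verb == "dial":
            log.append(f"dial: {arg}")
    return log

assert process_session("https://app.example.com/voice") == \
    ["say: hello", "dial: +15550100"]
```

Session state like this log is the kind of thing the claimed call router API could expose as a resource.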
A METHOD AND AN APPARATUS FOR ENABLING ACCESS TO PROCESS DATA OF A FOOD PRODUCTION PLANT
A method for enabling access to process data for a food production plant can include receiving, by a central server, properties of the food production plant from the control system, generating a data model based on the properties, said data model comprising data model properties, wherein the data model is control system type independent, transmitting the data model properties to an application programming interface (API), receiving adapted data model properties from the API, updating the data model based on the adapted data model properties, receiving, by the central server, plant design data from a plant design tool, generating an API based on the data model and the plant design data, and transmitting the API to a monitoring device, thereby enabling the monitoring device to receive the process data from the food production plant.
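The model-generation and adaptation steps can be sketched as below; the property names and the normalization rule are hypothetical, and `adapt` stands in for the API round trip that returns adapted data model properties.

```python
# Hypothetical sketch: build a control-system-independent data model from
# plant properties, adapt its properties, and update the model.

def generate_data_model(properties):
    """Normalize vendor-specific properties into neutral model properties."""
    return {"properties": {k.lower(): v for k, v in properties.items()}}

def adapt(model_properties):
    """Stand-in for the API round trip that adapts the model properties."""
    adapted = dict(model_properties)
    adapted["unit"] = "SI"
    return adapted

plant_props = {"FillerSpeed": 1200, "Temp": 72}   # from the control system
model = generate_data_model(plant_props)
model["properties"] = adapt(model["properties"])  # update from adapted props
assert model["properties"] == {"fillerspeed": 1200, "temp": 72, "unit": "SI"}
```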