G06F16/245

Efficient semantic analysis of program code
11550556 · 2023-01-10

Provided are systems and methods of a compiler that performs semantic analysis efficiently. For example, the compiler may perform semantic analysis on as much of the source code as possible during compile time. For any instructions, such as dynamic expressions, that are not known at compile time, the compiler may encode semantic bytecode for performing the semantic checks on such dynamic expressions, and their dependent expressions, during execution/runtime of the program. In one example, the method may include compiling source code of a program into bytecode, identifying, during the compiling, a dynamic expression that includes one or more dependent static expressions within the source code, generating semantic bytecode for semantic analysis of the one or more dependent static expressions of the dynamic expression, and adding the semantic bytecode to the bytecode of the program.
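
The deferral described above can be sketched as a tiny recursive compiler pass; the `Expr` class, the `SEM_CHECK`/`EVAL` opcodes, and the traversal order are all illustrative assumptions, not details from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class Expr:
    name: str
    dynamic: bool = False                    # unknown until runtime?
    deps: list = field(default_factory=list)

def compile_expr(expr, bytecode):
    """Emit evaluation bytecode; for a dynamic expression, also emit
    semantic bytecode that checks the expression and its dependent
    expressions at runtime instead of at compile time."""
    for dep in expr.deps:
        compile_expr(dep, bytecode)
    if expr.dynamic:
        # semantic analysis deferred: encode runtime checks for the
        # dependents and for the dynamic expression itself
        for dep in expr.deps:
            bytecode.append(("SEM_CHECK", dep.name))
        bytecode.append(("SEM_CHECK", expr.name))
    # (fully static expressions would be checked here, at compile time)
    bytecode.append(("EVAL", expr.name))

bc = []
compile_expr(Expr("x + y", dynamic=True,
                  deps=[Expr("x"), Expr("y")]), bc)
```

The semantic bytecode is interleaved with the ordinary bytecode, so the runtime performs the deferred checks just before evaluating the dynamic expression.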

SYSTEM AND METHOD FOR DATA WAREHOUSE STORAGE CAPACITY OPTIMIZATION BASED ON USAGE FREQUENCY OF DATA OBJECTS

A system for optimizing memory utilization receives a query statement indicating data objects to retrieve. For a first data object, the system determines whether the first data object is stored in a high-grade data repository or a low-grade data repository. The system determines whether a recent usage frequency of the first data object exceeds a usage frequency threshold. If the system determines that the first data object is stored in the high-grade data repository and that the recent usage frequency is less than the usage frequency threshold, the system moves the first data object to the low-grade data repository. If the system determines that the first data object is stored in the low-grade data repository and that the recent usage frequency is more than the usage frequency threshold, the system moves the first data object to the high-grade data repository. The system outputs the data objects.
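
The migration rule above reduces to a small placement function; the tier names and the strict comparisons against the threshold are illustrative assumptions.

```python
def place(current_tier, usage_freq, threshold):
    """Return which repository a data object should occupy after the
    usage-frequency check (tier labels are illustrative)."""
    if current_tier == "high" and usage_freq < threshold:
        return "low"         # demote: stored high-grade but rarely used
    if current_tier == "low" and usage_freq > threshold:
        return "high"        # promote: stored low-grade but heavily used
    return current_tier      # otherwise the object stays where it is
```

Note the rule is symmetric: an object exactly at the threshold is left in place in either direction.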

Systems and methods for processing natural language queries for healthcare data

In some embodiments of the present disclosure, techniques are utilized that allow answers to be provided to end users such as health care consumers, based on benefit book documents. The benefit book documents, which do not initially contain machine-readable structural or semantic information, are processed in order to detect structure and create semantic content based on the structure. This semantic content may then be added to a graph that represents the information contained in the benefit book document. A computing device may then use the nodes of this graph to answer questions received from consumers, where templates that provide answers to the questions reference the nodes of the graph.
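
The answer path described above can be sketched as a template lookup over graph nodes; the node keys, fields, and template strings here are invented for illustration and do not come from the patent.

```python
# Toy graph built from the detected document structure: each node holds
# the semantic content for one benefit (keys/fields are illustrative).
graph = {
    "copay": {"service": "office visit", "amount": "$20"},
    "deductible": {"plan": "individual", "amount": "$1,500"},
}

# Answer templates reference the fields of a graph node.
templates = {
    "copay": "Your copay for an {service} is {amount}.",
    "deductible": "Your {plan} deductible is {amount}.",
}

def answer(question):
    """Answer a consumer question by filling the matching node's
    template. Node lookup here is a naive keyword match; a real
    system would use natural-language query understanding."""
    key = next(k for k in graph if k in question.lower())
    return templates[key].format(**graph[key])
```

The point of the structure is the separation: the graph carries the semantic content extracted from the benefit book, while the templates carry only the phrasing of answers.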

Expediting processing of selected events on a time-limited basis

Techniques are described that enable an IT and security operations application to prioritize the processing of selected events for a defined period of time. Data is obtained reflecting activity within an IT environment, wherein the data includes a plurality of events each representing an occurrence of activity within the IT environment. A severity level is assigned to each event of the plurality of events, where the events are processed by the IT and security operations application in an order that is based at least in part on the severity level assigned to each event. Input is received identifying at least one event of the plurality of events for expedited processing to obtain a set of expedited events, and the identified events are processed by the IT and security operations application before processing events that are not in the set of expedited events.
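
The ordering above can be sketched as a two-level sort key: expedited events whose time window is still open come first, and within each group events are ordered by descending severity. The field names (`id`, `severity`, `expires`) are illustrative assumptions.

```python
import time

def processing_order(events, expedited_ids, now=None):
    """Return events in processing order: expedited events (while
    their expedited window is open) first, then by severity."""
    now = time.time() if now is None else now

    def key(event):
        window_open = event.get("expires", float("inf")) > now
        rushed = event["id"] in expedited_ids and window_open
        # lower tuple sorts first: (expedited?, -severity)
        return (0 if rushed else 1, -event["severity"])

    return sorted(events, key=key)
```

Once an event's expedited window lapses, it silently falls back to ordinary severity-based ordering, which captures the time-limited aspect.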

Systems and methods for providing automated integration and error resolution of records in complex data systems

A claim editing engine for automated integration and error resolution of claim records is provided. The processor of the engine is configured to extract a set of claim components of a plurality of claim components. The processor is further configured to transform the set of claim components to conform to a standardized data format. The processor is also configured to integrate the set of transformed claim components into a set of unified claims by unifying each of the set of transformed claim components having matching claim identifiers into a unified claim. The processor is configured to apply a rule set to the set of unified claims to generate a simulation of execution of the set of claims and identify errors in the simulated execution. The processor is configured to transmit an instruction to resolve each identified error. The processor is configured to cause each resolved unified claim to be processed.
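
The unify-then-simulate steps above can be sketched in two small functions; the field names and the example rule are illustrative, and rules here are plain callables rather than whatever rule representation the engine actually uses.

```python
from collections import defaultdict

def unify(components):
    """Group transformed claim components that share a claim
    identifier into unified claims."""
    claims = defaultdict(list)
    for component in components:
        claims[component["claim_id"]].append(component)
    return dict(claims)

def simulate(claims, rules):
    """Dry-run each unified claim through the rule set, collecting
    errors instead of failing, so each error can be resolved before
    the claim is actually processed."""
    errors = []
    for claim_id, parts in claims.items():
        for rule in rules:
            problem = rule(parts)
            if problem:
                errors.append((claim_id, problem))
    return errors
```

Running the rule set against a simulation rather than live processing is what lets the engine surface every error up front and resolve each one before any claim executes for real.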

Selecting between hydration-based scanning and stateless scale-out scanning to improve query performance

When a query is received by a stateful data processing service, the service determines, for each table scan (and associated operations) of the query, whether to select the table scan for execution by a stateless data processing service. The selected table scans are sent to the stateless data processing service for execution, and results are received by the stateful data processing service. The stateful data processing service may also execute other table scans of the query locally, against a local data cache. If the data is not present in the local data cache, then the stateful data processing service will copy the table data into the local data cache before executing the table scan. A query result based on the remote and/or local table scans may then be returned to the client.
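
The per-scan routing decision can be sketched as follows; the size-based criterion for choosing between hydrating the cache and scaling out is an assumption, since the abstract does not state how scans are selected.

```python
def route_scans(scans, cache, hydration_limit):
    """Split a query's table scans: tables already cached run locally;
    tables small enough to hydrate are copied into the local cache and
    then scanned locally; the rest go to the stateless service."""
    remote, local = [], []
    for scan in scans:
        if scan["table"] in cache:
            local.append(scan)            # hot in cache: scan locally
        elif scan["size"] <= hydration_limit:
            cache.add(scan["table"])      # hydrate, then scan locally
            local.append(scan)
        else:
            remote.append(scan)           # scale out statelessly
    return remote, local
```

The stateful service would then merge the locally executed scans with the results returned by the stateless service to form the final query result.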