H04L67/1089

Data Caching, Distribution and Request Consolidation in a Local Network
20180007160 · 2018-01-04 ·

A data caching and distribution method, performed by a plurality of computational machines in a linear communication orbit, includes generating a data request by a first machine to request specific data, and passing the data request along a data request path that tracks the linear communication orbit until the request is received at a second machine, in the linear communication orbit, that returns the specific data in response to the data request. The method includes, at a third machine between the second machine and the first machine in the linear communication orbit, conditionally storing the specific data in a local cache of the third machine according to a data caching method.
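The abstract above describes a request traveling along the orbit until a holding machine answers, with intermediate machines conditionally caching the returned data. A minimal sketch of that flow, assuming a hypothetical model where the orbit is a list of per-machine key/value caches and the caching policy is simply "cache if not already present":

```python
def request_data(orbit, first_idx, key):
    """Pass a request along the orbit from the requesting machine until some
    machine holds `key`; machines on the return path may cache the value.
    `orbit` is a list of dicts, one cache per machine (hypothetical model)."""
    # Forward phase: walk the orbit until a second machine can answer.
    holder_idx = None
    for i in range(first_idx + 1, len(orbit)):
        if key in orbit[i]:
            holder_idx = i
            break
    if holder_idx is None:
        return None
    value = orbit[holder_idx][key]
    # Return phase: each third machine between the responder and the
    # requester conditionally stores the data in its local cache.
    for i in range(holder_idx - 1, first_idx, -1):
        if key not in orbit[i]:
            orbit[i][key] = value
    return value
```

The names and the list-of-dicts representation are illustrative only; the patent's caching method may apply a more selective condition than "not already cached".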

Platform for data sharing of patient-generated real-world data from clinical trials

Methods, systems, and apparatus, including computer-readable media, for a hierarchical multi-tenant data access platform. In some implementations, a server system stores data collected through a multi-tenant data access platform configured to collect data for each of multiple tenant organizations and to selectively make the collected data available according to policies associated with the respective tenant organizations. The server system receives a request associated with a user, then generates and provides a response according to the organization hierarchy data and the policy data for the unit of the organization whose data would be used in generating the response.
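One way to read "policy data for the unit of the organization" is a policy lookup that walks up the tenant's organization hierarchy until a unit that defines a policy is found. A sketch under that assumption, with hypothetical unit names and policy values:

```python
# Hypothetical hierarchy: each unit names its parent and, optionally, a policy.
HIERARCHY = {
    "cardiology": {"parent": "hospital_a", "policy": None},
    "hospital_a": {"parent": "tenant_org", "policy": None},
    "tenant_org": {"parent": None, "policy": "share_deidentified"},
}

def effective_policy(unit, hierarchy):
    """Walk up the organization hierarchy until some unit defines a policy;
    deny access if no unit on the path does."""
    while unit is not None:
        node = hierarchy[unit]
        if node["policy"] is not None:
            return node["policy"]
        unit = node["parent"]
    return "deny"
```

Inheriting policy from ancestor units is a common design for hierarchical multi-tenant systems, though the patented platform may resolve policies differently.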

Mechanism to enforce consistent next hops in a multi-tier network

In general, the disclosure relates to a method for forwarding a packet through a multi-tier network by establishing a routing protocol session with network devices in the multi-tier network, obtaining routing protocol information from those network devices, determining a group using the routing protocol information, generating an ordered group listing using network device identifiers (NDIs) for the network devices in the group, and programming the hardware of a network device in the set of network devices of the multi-tier network using the ordered group listing. The group includes a set of the network devices of the multi-tier network.
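The point of an ordered group listing is that every device programs its next-hop group in the same deterministic order, so a hash-based selection lands on the same next hop everywhere. A minimal sketch, assuming sorting NDIs as the ordering rule (the patent does not specify the exact ordering function):

```python
def ordered_group_listing(ndis):
    """Produce a deterministic ordering of network device identifiers (NDIs)
    so every device programs its next-hop group identically."""
    return sorted(set(ndis))

def next_hop(ordered_group, flow_hash):
    """Select a next hop by hashing the flow onto the ordered group; because
    the ordering is identical on every device, the choice is consistent."""
    return ordered_group[flow_hash % len(ordered_group)]
```

Without a shared ordering, two devices holding the same ECMP members in different hardware slots could hash the same flow to different next hops.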

Method and apparatus for recovering missing data in multi-source hybrid overlay network

In a multi-source hybrid overlay network including a plurality of peers, an apparatus is provided that recovers missing data occurring in a tree recovery process. When the data recovery policy is a pull method, the apparatus acquires data that a peer does not have by exchanging buffer maps with a first counterpart peer connected to a primary path recovered in the tree recovery process and with at least one second counterpart peer connected to at least one candidate path, and provides data that the first counterpart peer does not have to the first counterpart peer in a push method.
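A buffer-map exchange reduces to set differences: chunks the counterpart has that we lack are pull candidates, and chunks we have that it lacks are push candidates. A sketch under the assumption that a buffer map is a dict from chunk index to a have/have-not flag:

```python
def missing_chunks(my_map, peer_map):
    """Chunks the peer has that we lack (pull candidates)."""
    return {c for c, have in peer_map.items() if have and not my_map.get(c, False)}

def exchange(my_map, peer_map):
    """One buffer-map exchange: what to pull from the peer, and what to
    push to the peer, per the pull/push recovery policy above."""
    pull = missing_chunks(my_map, peer_map)
    push = missing_chunks(peer_map, my_map)
    return pull, push
```

In practice the same exchange would run against both the primary-path peer and each candidate-path peer, with the scheduler deciding which source serves each pulled chunk.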

SYSTEMS AND METHODS FOR MEMORY TRACING IN ASSET MANAGING SYSTEMS

The present embodiments relate to implementing change data on no-master NoSQL data stores. An optimized node can be identified from a plurality of NoSQL data storage nodes and a specialized node can be connected (e.g., collocated) to the optimized node. The specialized node can maintain change data capture (CDC) data provided by client nodes in a hash map that can be used as a point of truth for coordinating CDC data across the plurality of NoSQL data storage nodes. The plurality of NoSQL data storage nodes can identify and coordinate all read/write data obtained from multiple client devices in a geographically separated large-scale (e.g., planet scale) system to identify change data in a distributed data store. The specialized node can provide read data to devices in the large-scale system to reconcile inconsistencies in change data across nodes in the large-scale system.
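The specialized node's hash map acting as a point of truth can be sketched as a coordinator that keeps the newest version of each key and serves it during reconciliation. The class, version scheme, and method names below are illustrative assumptions, not the patent's API:

```python
class CDCCoordinator:
    """Hypothetical specialized node: keeps the latest change per key in a
    hash map and serves it as the point of truth for reconciliation."""

    def __init__(self):
        self.changes = {}  # key -> (version, value)

    def record(self, key, version, value):
        # Keep only the newest version seen from any client node.
        current = self.changes.get(key)
        if current is None or version > current[0]:
            self.changes[key] = (version, value)

    def reconcile(self, key, local):
        """Return the authoritative (version, value), preferring the hash
        map over a storage node's possibly stale local copy."""
        truth = self.changes.get(key)
        if truth is None:
            return local
        return truth if local is None or truth[0] >= local[0] else local
```

A real deployment would need durable storage and a versioning scheme robust to clock skew (e.g., per-key sequence numbers rather than wall-clock timestamps).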

Hyperscale artificial intelligence and machine learning infrastructure

A hyperscale artificial intelligence and machine learning infrastructure includes a plurality of racks, where: at least one or more of the racks include one or more GPU servers; at least one or more of the racks include one or more storage systems; each of the racks includes one or more switches coupled to at least one switch in another rack; and the one or more GPU servers are configured to execute one or more artificial intelligence or machine learning applications, wherein data stored within the one or more storage systems is used as input to the one or more artificial intelligence or machine learning applications.

MECHANISM TO ENFORCE CONSISTENT NEXT HOPS IN A MULTI-TIER NETWORK
20230035984 · 2023-02-02 ·

In general, the disclosure relates to a method for forwarding a packet through a multi-tier network by establishing a routing protocol session with network devices in the multi-tier network, obtaining routing protocol information from those network devices, determining a group using the routing protocol information, generating an ordered group listing using network device identifiers (NDIs) for the network devices in the group, and programming the hardware of a network device in the set of network devices of the multi-tier network using the ordered group listing. The group includes a set of the network devices of the multi-tier network.

PATH MANAGEMENT

Embodiments relate to methods, systems, and computer program products for path management in a processing system. In a method, in response to receiving a request for adding a target controlling unit into a processing system, a plurality of network nodes in the processing system are divided into a group of subnets based on a topology of the plurality of network nodes, the plurality of network nodes being connected to at least one controlling unit in the processing system. A workload estimation is determined, the workload estimation representing a workload to be caused by the target controlling unit to the processing system. A target subnet is selected from the group of subnets for connecting the target controlling unit into the processing system based on the workload estimation. With these embodiments, the target subnet may be selected automatically, such that the performance of the processing system may be improved.
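Selecting a target subnet based on a workload estimation can be sketched as a capacity check: admit the new controlling unit only into a subnet whose headroom covers the estimate, preferring the subnet with the most headroom. The greedy preference and the data shapes below are assumptions; the patent leaves the selection criterion abstract:

```python
def select_target_subnet(subnet_load, estimated_workload, capacity):
    """Pick the subnet with the most headroom that can absorb the estimated
    workload of the target controlling unit; return None if none fits.
    `subnet_load` and `capacity` map subnet name -> current load / capacity."""
    candidates = [
        (capacity[s] - load, s)
        for s, load in subnet_load.items()
        if capacity[s] - load >= estimated_workload
    ]
    if not candidates:
        return None
    # max() on (headroom, name) tuples picks the largest headroom.
    return max(candidates)[1]
```

Balancing toward the least-loaded subnet is one plausible policy; a real implementation might also weigh topology distance or failure domains.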

Cascading payload replication to target compute nodes

Cascading payload replication to target compute nodes is disclosed. Cascading payload replication can be accomplished using a two-stage operation for a replication operation. In the first stage, a plan is generated and distributed for the replication operation. The plan includes an assignment of compute nodes to tree nodes in a tree hierarchy. In the second stage, the payload is distributed according to the plan. The plan is different for at least two replication operations. Thus, the cascading payload replication is adaptable to changing target compute nodes and provides for load balancing.
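The two stages can be sketched as (1) assigning compute nodes to positions in a fanout-ary tree and (2) walking the tree so each node forwards the payload to its children. The array-backed tree layout (child of position i at fanout*i + 1 .. fanout*i + fanout) is an illustrative assumption:

```python
def build_plan(nodes, fanout=2):
    """Stage 1 (hypothetical): assign target compute nodes to tree positions.
    Regenerating the plan per operation lets assignments shift for balancing."""
    return {i: node for i, node in enumerate(nodes)}

def distribute(plan, payload, fanout=2):
    """Stage 2: breadth-first cascade from the root; each tree node forwards
    the payload to its children. Returns (node, payload) in receipt order."""
    received = []
    queue = [0]
    while queue:
        i = queue.pop(0)
        if i not in plan:
            continue
        received.append((plan[i], payload))
        queue.extend(fanout * i + c for c in range(1, fanout + 1))
    return received
```

Because no node forwards to more than `fanout` children, the source's upload cost stays constant as the number of targets grows, which is the usual motivation for cascading over direct fan-out.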

Artificial Intelligence And Machine Learning Hyperscale Infrastructure

A hyperscale artificial intelligence and machine learning infrastructure includes a plurality of racks, where: at least one or more of the racks include one or more GPU servers; at least one or more of the racks include one or more storage systems; each of the racks includes one or more switches coupled to at least one switch in another rack; and the one or more GPU servers are configured to execute one or more artificial intelligence or machine learning applications, wherein data stored within the one or more storage systems is used as input to the one or more artificial intelligence or machine learning applications.