H04L47/12

NETWORK CONTROL METHOD AND DATA PROCESSING SYSTEM
20230239245 · 2023-07-27

The present disclosure relates to a network control method and a data processing system for reducing traffic in a network and reducing a processing load of an application that performs data processing.

A network connection device determines, on the basis of a manifest, an optimal location for execution of an application that processes sensor data generated by a sensor device, from among the sensor device and a device on a path in a network connected to the sensor device. The present technology can be applied to, for example, a network control method of cloud computing.
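As a rough illustration of the placement decision described above, the sketch below walks candidate devices outward from the sensor and picks the first that satisfies the manifest's resource requirements. The device names, manifest fields, and nearest-first policy are all illustrative assumptions, not the patented method.

```python
# Hypothetical sketch: choosing where to run a sensor-data application
# based on a manifest of resource requirements. All names are illustrative.

def select_execution_location(manifest, candidates):
    """Pick the candidate closest to the sensor that satisfies the manifest.

    manifest: required resources, e.g. {"cpu": 2, "memory_mb": 512}
    candidates: devices ordered from the sensor outward along the network
    path, each with available "cpu" and "memory_mb".
    """
    for device in candidates:  # nearest-to-sensor first
        if (device["cpu"] >= manifest["cpu"]
                and device["memory_mb"] >= manifest["memory_mb"]):
            return device["name"]
    return None  # no device on the path can host the application

candidates = [
    {"name": "sensor", "cpu": 1, "memory_mb": 128},
    {"name": "edge-gateway", "cpu": 4, "memory_mb": 2048},
    {"name": "cloud", "cpu": 64, "memory_mb": 262144},
]
print(select_execution_location({"cpu": 2, "memory_mb": 512}, candidates))
```

Running the application at the nearest capable device keeps sensor traffic off the wider network, which is the stated goal of reducing both traffic and application load.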

VOICE COMMUNICATION BETWEEN A SPEAKER AND A RECIPIENT OVER A COMMUNICATION NETWORK

Voice communication between a speaker and a recipient, either or both of whom may be in a motor vehicle, is provided via a communication network. In a first step, an input speech utterance is received from the speaker. Optionally, the bandwidth of the connection to the communication network is evaluated on the speaker's side. The input speech utterance is then converted to text. At least the text is transmitted over the communication network. If the bandwidth is sufficiently large, the input speech utterance may be transmitted as voice in addition to text. The transmitted text is converted into an output speech utterance that simulates the speaker's voice. Finally, the output speech utterance is provided to the recipient.
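The flow above can be sketched as follows. The speech-recognition and text-to-speech engines are stubbed as plain callables, and the bandwidth threshold is an assumed value, not one specified in the abstract.

```python
# Illustrative sketch of the voice -> text -> voice pipeline.
# The 64 kbps cutoff and the "speaker-profile" voice id are assumptions.

BANDWIDTH_THRESHOLD_KBPS = 64  # assumed cutoff for also sending raw voice

def transmit_utterance(audio, bandwidth_kbps, speech_to_text, text_to_speech):
    text = speech_to_text(audio)       # convert the input speech to text
    payload = {"text": text}           # at least the text is transmitted
    if bandwidth_kbps >= BANDWIDTH_THRESHOLD_KBPS:
        payload["voice"] = audio       # enough bandwidth: send voice too
    # Recipient side: regenerate speech simulating the speaker's voice.
    payload["output_speech"] = text_to_speech(text, voice="speaker-profile")
    return payload
```

Sending text instead of (or alongside) audio keeps the channel usable when the in-vehicle connection degrades, while the voice-simulating synthesis preserves the conversational experience for the recipient.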

Transmission latency leveling apparatuses, methods and systems

Embodiments of the transmission latency leveling apparatuses, methods and systems provide an electronic bidding order management infrastructure, such as a "point-of-presence," which receives electronic trading orders from different trading entities at a server and routes them via a transmission medium that introduces a certain amount of transmission latency before the orders can arrive at and be executed on electronic exchanges. This leveling reduces the latency arbitrage and/or order book arbitrage that may otherwise be exploited by high-frequency trading participants. A similar transmission latency may be applied to the egress transmission of market data updates issued by an electronic exchange. Other techniques for facilitating electronic trading are also disclosed.
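A minimal sketch of the leveling idea: the point-of-presence delays each order so that the total transit time to the exchange is the same for every participant, regardless of how fast their own path is. The target latency value and the timing fields are illustrative assumptions.

```python
# Sketch: a "point-of-presence" pads each order's transmission delay so
# fast and slow participants' orders reach the exchange at comparable
# times. TARGET_LATENCY_MS is an assumed, configurable value.

TARGET_LATENCY_MS = 350.0  # uniform end-to-end latency the PoP enforces

def schedule_release(order_arrival_ms, remaining_path_latency_ms):
    """Return when the PoP should forward an order to the exchange so that
    its total transit time equals TARGET_LATENCY_MS."""
    added_delay = max(0.0, TARGET_LATENCY_MS - remaining_path_latency_ms)
    return order_arrival_ms + added_delay

# A co-located order (5 ms remaining path) and a distant order (300 ms)
# arriving at the same instant are released so both land at ~350 ms.
print(schedule_release(0.0, 5.0))    # 345.0
print(schedule_release(0.0, 300.0))  # 50.0
```

Because every order experiences the same effective latency, being physically closer to the exchange no longer confers a speed advantage, which is the mechanism by which latency arbitrage is reduced.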

OPTIMIZING CONTAINER EXECUTIONS WITH NETWORK-ATTACHED HARDWARE COMPONENTS OF A COMPOSABLE DISAGGREGATED INFRASTRUCTURE

The invention is notably directed to a method, computer program product, and computer system for running software inside containers. The method relies on a computerized system that includes a composable disaggregated infrastructure in addition to general-purpose hardware. The computerized system is configured to dynamically allocate computerized resources, which include both general resources and specialized resources: the former are enabled by the general-purpose hardware, while the latter are enabled by specialized network-attached hardware components of the composable disaggregated infrastructure. The method maintains a table capturing the specializations of the specialized network-attached hardware components. At runtime, software is run inside each container by executing corresponding functions. A first subset of the functions is executed using the general resources, whereas a second subset is executed using the specialized resources, by offloading the second subset of functions to the respective specialized network-attached hardware components in accordance with the specializations.
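The dispatch decision described above can be sketched as a table lookup: functions with a registered specialization are offloaded to the matching network-attached component, and all others run on the general-purpose hardware. The table contents, component names, and function tags are illustrative assumptions.

```python
# Sketch: routing container functions to general-purpose hardware or to
# specialized network-attached components, driven by a specialization
# table. Entries and component names are illustrative.

SPECIALIZATION_TABLE = {
    "matrix_multiply": "gpu-node-1",   # network-attached GPU
    "encrypt_block": "crypto-asic-2",  # network-attached crypto engine
}

def dispatch(function_name, args, run_local, offload):
    """Execute a container function on the appropriate resource.

    run_local(name, args): runs on general-purpose hardware.
    offload(component, name, args): runs on a specialized component.
    """
    component = SPECIALIZATION_TABLE.get(function_name)
    if component is None:
        return run_local(function_name, args)       # general resources
    return offload(component, function_name, args)  # specialized resources
```

Keeping the table separate from the container images lets the infrastructure re-point offloaded functions when components are recomposed, without rebuilding the software.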

DATA FLOW MODELING
20230239246 · 2023-07-27

Various aspects of the present disclosure generally relate to wireless communication. In some aspects, a first network entity may generate a first data flow model for a first set of paths that traverse the first network entity. The first network entity may receive an indication of a second data flow model for a second set of paths that traverse a second network entity, the first set including at least one path that is within the second set. The first network entity may selectively update the first data flow model based at least in part on whether the indication of the second data flow model indicates an error in the first data flow model. Numerous other aspects are described.
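One way to read the selective update is as reconciliation over shared paths: the first entity adopts the peer's estimate only where the two models disagree, and leaves its model untouched otherwise. The dict-based model representation below is an illustrative assumption, not the disclosed encoding.

```python
# Sketch: a network entity updates its local data-flow model only where a
# neighbouring entity's model signals an error on a shared path.
# Representing a model as {path_id: expected_rate} is an assumption.

def selectively_update(first_model, second_model):
    """Return the first model, revised only on shared paths that mismatch."""
    shared = set(first_model) & set(second_model)
    updated = dict(first_model)
    changed = False
    for path in shared:
        if first_model[path] != second_model[path]:  # mismatch = error signal
            updated[path] = second_model[path]       # adopt peer's estimate
            changed = True
    return updated if changed else first_model
```

Updating only on a signaled error keeps model-synchronization traffic between entities low, which matters in a wireless setting.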

Rate update engine for reliable transport protocol

A system includes a first processor configured to analyze data packets received over a communication protocol system and to determine one or more congestion indicators from that analysis, the one or more congestion indicators being indicative of network congestion for data packets transmitted over a reliable transport protocol layer of the communication protocol system. The system also includes a rate update engine, separate from the packet datapath, configured to operate a second processor to receive the determined one or more congestion indicators, determine one or more congestion control parameters for controlling transmission of data packets based on the received congestion indicators, and output a congestion control result based on the determined congestion control parameters.
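The split described above can be sketched as two stages: the datapath side reduces packets to compact indicators, and the off-datapath rate update engine turns those indicators into a control result. The specific indicators (ECN fraction, maximum RTT), thresholds, and the AIMD-style adjustment are illustrative assumptions, not the claimed algorithm.

```python
# Sketch: congestion indicators extracted on the datapath (first
# processor) feed a separate rate update engine (second processor).
# Thresholds and the AIMD-style rule are assumed for illustration.

def extract_indicators(packets):
    """Datapath side: reduce received packets to congestion indicators."""
    ecn_marks = sum(1 for p in packets if p.get("ecn_ce"))
    max_rtt_ms = max(p["rtt_ms"] for p in packets)
    return {"ecn_fraction": ecn_marks / len(packets),
            "max_rtt_ms": max_rtt_ms}

def rate_update_engine(indicators, current_rate_mbps):
    """Off-datapath side: derive a congestion control result."""
    congested = (indicators["ecn_fraction"] > 0.1
                 or indicators["max_rtt_ms"] > 50)
    if congested:
        new_rate = current_rate_mbps * 0.8  # multiplicative decrease
    else:
        new_rate = current_rate_mbps + 1.0  # additive increase
    return {"rate_mbps": new_rate}
```

Moving the rate computation off the datapath keeps per-packet processing cheap while allowing the second processor to run a more elaborate congestion-control policy at its own pace.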
