H04Q2213/13164

Method and apparatus for intelligent routing of instant messaging presence protocol (IMPP) events among a group of customer service representatives

A routing system is provided for intelligent routing of instant messages between clients connected to a data network and customer service representatives connected to the network. The system comprises at least one instant message server and at least one intermediate server, each connected to and addressable on the network, the intermediate server being capable of routing and accessible to the instant message server. Clients connecting to the instant message server through instant message software assert a connection link advertised by the instant message server to establish bi-directional communication between the client machine and the intermediate server. In a preferred application, the intermediate server interacts with the client to identify the client and the client software. The client request is then routed, according to enterprise rules, to an appropriate customer service representative running compatible software, establishing an active instant message connection between the client and the selected customer service representative.
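The routing step described above might be sketched as follows. The concrete rule shown (route to the first available representative whose software matches the client's) is an illustrative assumption; the abstract only says that enterprise rules select a representative running compatible software, and all names and data shapes here are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Representative:
    name: str
    software: str      # IM client the representative runs
    available: bool

# Hypothetical enterprise rule: route to the first available
# representative whose software is compatible with the client's.
def route(client_software: str, reps: list) -> Optional[Representative]:
    for rep in reps:
        if rep.available and rep.software == client_software:
            return rep
    return None  # no compatible representative free; client stays queued

reps = [Representative("alice", "xmpp", False),
        Representative("bob", "xmpp", True)]
assert route("xmpp", reps).name == "bob"
```

A real deployment would replace the equality test with richer compatibility and skill rules, but the control flow (identify client software, then match against representative state) follows the abstract.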

System and method for transmitting data in a network

A system and method for transmitting data in a network, comprising the steps of: determining a traffic congestion variable of a data transmission node arranged to receive data from one or more source nodes of the network; using the traffic congestion variable to select a preferred transmission mode for use by the one or more source nodes to transmit data to the data transmission node; and switching an operating transmission mode of each of the one or more source nodes to the preferred transmission mode, such that the one or more source nodes transmit data to the data transmission node using the preferred transmission mode.
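The three steps can be illustrated with a minimal sketch. The thresholds and mode names below are invented for illustration; the abstract does not specify how the congestion variable maps to a mode:

```python
# Illustrative mapping (an assumption, not from the abstract) from a
# receiving node's congestion variable to a preferred transmission mode.
def preferred_mode(congestion: float) -> str:
    if congestion < 0.3:
        return "burst"       # low congestion: send at full rate
    if congestion < 0.7:
        return "paced"       # moderate congestion: space packets out
    return "backoff"         # high congestion: throttle the sources

class SourceNode:
    def __init__(self):
        self.mode = "burst"

def switch_modes(sources, congestion):
    mode = preferred_mode(congestion)  # step 2: select preferred mode
    for s in sources:
        s.mode = mode                  # step 3: switch every source to it

nodes = [SourceNode(), SourceNode()]
switch_modes(nodes, 0.8)
assert all(n.mode == "backoff" for n in nodes)
```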

OPTIMIZED RESOURCE MANAGEMENT IN CORE NETWORK ELEMENTS

The present invention addresses a method, an apparatus, and a computer program product for resource management within a distributed system in a core network element, comprising: organizing computing resources of the core network element into sets, wherein a first set is always active; setting an upper threshold and a lower threshold for the load of the sets, wherein the sets in operation are loaded until the average load of the sets in operation reaches the upper threshold, whereupon a new set is activated, whereas, when the load falls below the lower threshold, the last-activated set is deactivated; assigning a priority number to each set; segmenting an interval of random numbers, used for randomizing request distribution, into subintervals allocated to the computing resources of the active sets, wherein the length of each subinterval is determined by the priority number of the set of the respective computing resource; and allotting a random number from the interval to each incoming request and forwarding the request to the computing resource whose subinterval contains the allotted random number.
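The subinterval-based dispatch can be sketched as follows (the threshold-driven set activation is omitted). The data shape for active sets and the choice of the unit interval [0, 1) are assumptions for illustration:

```python
import random

# Each computing resource in an active set owns a subinterval of [0, 1)
# whose length is proportional to its set's priority number. A request
# draws a random number and goes to the owner of the containing
# subinterval, so higher-priority sets receive proportionally more work.
def build_subintervals(active_sets):
    # active_sets: list of (priority, [resource names]) — hypothetical shape
    weights = [(res, prio) for prio, resources in active_sets for res in resources]
    total = sum(w for _, w in weights)
    intervals, start = [], 0.0
    for res, w in weights:
        end = start + w / total
        intervals.append((start, end, res))
        start = end
    return intervals

def dispatch(intervals, r):
    for start, end, res in intervals:
        if r < end:
            return res
    return intervals[-1][2]   # guard against floating-point edge at 1.0

sets = [(3, ["a1", "a2"]), (1, ["b1"])]   # set A: priority 3, set B: priority 1
iv = build_subintervals(sets)
assert dispatch(iv, random.random()) in {"a1", "a2", "b1"}
```

Here resources a1 and a2 each own 3/7 of the interval and b1 owns 1/7, so roughly six of every seven requests land in the higher-priority set.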

Determination of the latency of an optical transmission link
10958995 · 2021-03-23

Disclosed is a method for determining the link latency of an optical transmission link which includes an end node at each end and one or more pass-through nodes. Each pair of neighboring nodes is connected, at a connection port of each node, by an optical connecting path. Each pass-through node includes an optical pass-through path between its connection ports. The optical connecting paths and optical pass-through paths form an optical link path. A delimiter device includes a delimiter element provided at each connection port of each node. The delimiter element forms a demarcation within the optical link path. According to the method, the following steps are carried out: measuring, for each pair of neighboring nodes, a section latency by transmitting a section probe signal from a first one of the pair of nodes to the second one, measuring, at the first node, a first time delay of a first reflection signal, created by the delimiter element of the first node reflecting a power portion of the section probe signal, and a second time delay of a second reflection signal, created by the delimiter element of the second node reflecting a power portion of the section probe signal received from the first node, and calculating the section latency as half the difference between the second time delay and the first time delay; determining, for each pass-through node, either theoretically or by measurement, a pass-through latency of the internal optical pass-through path between the delimiter elements of that pass-through node; and adding all section latencies and pass-through latencies to obtain the link latency of the optical link path.
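The arithmetic in these steps is simple and can be shown directly. This is a sketch of the calculation only (the optical measurements are assumed as inputs, and the units are an arbitrary illustration):

```python
# For each section, the probe's reflections off the two delimiter
# elements return after round-trip delays t1 (near-end element) and t2
# (far-end element); the one-way section latency is half the difference.
def section_latency(t1: float, t2: float) -> float:
    return (t2 - t1) / 2.0

# Link latency = sum of all section latencies plus all pass-through
# latencies of the intermediate nodes.
def link_latency(reflection_delays, pass_through_latencies) -> float:
    # reflection_delays: list of (t1, t2), one pair per neighboring-node pair
    sections = sum(section_latency(t1, t2) for t1, t2 in reflection_delays)
    return sections + sum(pass_through_latencies)

# Two sections around one pass-through node (delays in microseconds):
# sections contribute (11-1)/2 + (8-2)/2 = 8.0, pass-through adds 0.5.
assert link_latency([(1.0, 11.0), (2.0, 8.0)], [0.5]) == 8.5
```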

Method and apparatus for anticipating and planning communication-center resources based on evaluation of events waiting in a communication center master queue

A software application for recommending workforce resource allocation in a communication center, based on the requirements of events represented in a communication-center queue, has a first interface for accessing information from the queue; a second interface for accessing information from a data source about workforce availability and state; a processing component for processing the queue information and workforce information; and a message generation and delivery component for generating a workforce allocation recommendation based on the processing results and sending the recommendation to a target entity. In a preferred embodiment, the application periodically accesses the queue and the data source to obtain the most recent information for processing and generates periodic recommendations based on real-time requirements of events and availability states of resources, the recommendations being sent before the resources are required.
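One plausible form of the processing component is a shortfall calculation per skill. The event and agent record shapes, the skill/state fields, and the "shortfall" recommendation format below are all illustrative assumptions; the abstract only specifies queue and workforce inputs and a recommendation output:

```python
from collections import Counter

# Minimal sketch: demand = skills required by queued events; supply =
# skills of currently available agents; the recommendation lists each
# skill whose demand exceeds supply, with the size of the shortfall.
def recommend(queue_events, workforce):
    demand = Counter(e["skill"] for e in queue_events)
    supply = Counter(a["skill"] for a in workforce if a["state"] == "available")
    return {skill: need - supply.get(skill, 0)
            for skill, need in demand.items()
            if need > supply.get(skill, 0)}

queue = [{"skill": "billing"}, {"skill": "billing"}, {"skill": "tech"}]
staff = [{"skill": "billing", "state": "available"},
         {"skill": "tech", "state": "busy"}]
assert recommend(queue, staff) == {"billing": 1, "tech": 1}
```

Run periodically against fresh queue and workforce snapshots, the output of `recommend` is the kind of advance allocation recommendation the abstract describes.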

Method and apparatus for optimizing response time to events in queue

A system for optimizing response time to events, or representations thereof, waiting in a queue has a first server having access to the queue; a software application running on the first server; and a second server, accessible from the first server, containing rules governing the optimization. In a preferred embodiment, the software application at least periodically accesses the queue, parses certain events or tokens in the queue, and compares the parsed results against rules accessed from the second server to determine a measure of disposal time for each parsed event. If the determined measure is sufficiently low for one or more of the parsed events, those events are modified to reflect a higher priority state than originally assigned, enabling faster treatment of those events and relieving the queue system load.
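The escalation logic can be sketched briefly. Representing the rules as a flat table of disposal-time estimates keyed by event type, and priority as a number where lower means more urgent, are illustrative assumptions not stated in the abstract:

```python
# Hedged sketch: look up each queued event's estimated disposal time in
# a rules table and raise the priority of events that can be disposed
# of quickly, so they are cleared fast and relieve queue load.
def escalate_quick_events(queue, rules, threshold=30):
    for event in queue:
        disposal = rules.get(event["type"], float("inf"))
        if disposal <= threshold and event["priority"] > 1:
            event["priority"] = 1   # 1 = highest priority (assumed scale)

rules = {"password_reset": 10, "refund": 120}   # disposal times in seconds
queue = [{"type": "password_reset", "priority": 5},
         {"type": "refund", "priority": 5}]
escalate_quick_events(queue, rules)
assert queue[0]["priority"] == 1 and queue[1]["priority"] == 5
```

In the patented arrangement the rules would live on the second server and be fetched over the network; here the table is passed in directly to keep the sketch self-contained.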