METHOD FOR SCALING UP MICROSERVICES BASED ON API CALL TRACING HISTORY
20230222012 · 2023-07-13
Abstract
A disclosed microservice scaling operation obtains information indicating dependencies between a function associated with an external API call and microservices spanned by the external API call. Functions invoked by managed resources are monitored and, responsive to detecting the function being invoked, a scaling service is launched to access the dependency information, identify the applicable microservices, and perform a scale up operation instantiating each of the microservices. The dependency information may be obtained by recording and analyzing traces for instances of the external API call to determine a dependency tree that indicates branches spanned by the external API call and a sequence of microservices corresponding to each branch. The microservices may be scaled up in parallel or in a prioritized parallel manner wherein early-span microservices are launched before late-span microservices. The API may be a RESTful API and each microservice may correspond to an internal API call.
Claims
1. A microservice scale up method, comprising: obtaining dependency information indicative of a dependency between a particular function associated with a particular external API call and a plurality of microservices spanned by the particular external API call; monitoring functions invoked by one or more managed information handling resources; and responsive to detecting an invocation of the particular function, launching a scaling service configured to: access the dependency information to identify the plurality of microservices; and perform a scale up operation to instantiate one or more instances of the plurality of microservices.
2. The method of claim 1, wherein obtaining the dependency information comprises: recording traces for each of one or more instances of the particular external API call; and analyzing the traces to determine a dependency tree corresponding to the external API call, wherein the dependency tree is indicative of the branches the external API may span and a sequence of microservices corresponding to each branch.
3. The method of claim 1, wherein the scale up operation instantiates each of the one or more microservices in parallel.
4. The method of claim 1, wherein the scale up operation instantiates the one or more microservices based on a sequencing of the one or more microservices.
5. The method of claim 1, wherein each of the plurality of microservices corresponds to an internal API call.
6. The method of claim 5, wherein the API comprises a representational state transfer (REST) compliant API.
7. An information handling system, comprising: a central processing unit (CPU); and a non-transitory memory resource accessible to the CPU and including one or more processor-executable instructions for performing coordinated microservice scaling operations comprising: obtaining dependency information indicative of a dependency between a particular function associated with a particular external API call and a plurality of microservices spanned by the particular external API call; monitoring functions invoked by one or more managed information handling resources; and responsive to detecting an invocation of the particular function, launching a scaling service configured to: access the dependency information to identify the plurality of microservices; and perform a scale up operation to instantiate one or more instances of the plurality of microservices.
8. The information handling system of claim 7, wherein obtaining the dependency information comprises: recording traces for each of one or more instances of the particular external API call; and analyzing the traces to determine a dependency tree corresponding to the external API call, wherein the dependency tree is indicative of the branches the external API may span and a sequence of microservices corresponding to each branch.
9. The information handling system of claim 7, wherein the scale up operation instantiates each of the one or more microservices in parallel.
10. The information handling system of claim 7, wherein the scale up operation instantiates the one or more microservices based on a sequencing of the one or more microservices.
11. The information handling system of claim 7, wherein each of the plurality of microservices corresponds to an internal API call.
12. The information handling system of claim 11, wherein the API comprises a representational state transfer (REST) compliant API.
13. A non-transitory computer readable medium including processor-executable instructions that, when executed by a processor, cause the processor to perform coordinated microservice scaling operations, wherein the coordinated microservice scaling operations include: obtaining dependency information indicative of a dependency between a particular function associated with a particular external API call and a plurality of microservices spanned by the particular external API call; monitoring functions invoked by users of one or more managed information handling resources; and responsive to detecting an invocation of the particular function, launching a scaling service configured to: access the dependency information to identify the plurality of microservices; and perform a scale up operation to instantiate one or more instances of the plurality of microservices.
14. The non-transitory computer readable medium of claim 13, wherein obtaining the dependency information comprises: recording traces for each of one or more instances of the particular external API call; and analyzing the traces to determine a dependency tree corresponding to the external API call, wherein the dependency tree is indicative of the branches the external API may span and a sequence of microservices corresponding to each branch.
15. The non-transitory computer readable medium of claim 13, wherein the scale up operation instantiates each of the one or more microservices in parallel.
16. The non-transitory computer readable medium of claim 13, wherein the scale up operation instantiates the one or more microservices based on a sequencing of the one or more microservices.
17. The non-transitory computer readable medium of claim 13, wherein each of the plurality of microservices corresponds to an internal API call.
18. The non-transitory computer readable medium of claim 17, wherein the API comprises a representational state transfer (REST) compliant API.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0008] A more complete understanding of the present embodiments and advantages thereof may be acquired by referring to the following description taken in conjunction with the accompanying drawings, in which like reference numbers indicate like features, and wherein:
DETAILED DESCRIPTION
[0016] Exemplary embodiments and their advantages are best understood by reference to
[0017] For the purposes of this disclosure, an information handling system may include any instrumentality or aggregate of instrumentalities operable to compute, classify, process, transmit, receive, retrieve, originate, switch, store, display, manifest, detect, record, reproduce, handle, or utilize any form of information, intelligence, or data for business, scientific, control, entertainment, or other purposes. For example, an information handling system may be a personal computer, a personal digital assistant (PDA), a consumer electronic device, a network storage device, or any other suitable device and may vary in size, shape, performance, functionality, and price. The information handling system may include memory, one or more processing resources such as a central processing unit (“CPU”), microcontroller, or hardware or software control logic. Additional components of the information handling system may include one or more storage devices, one or more communications ports for communicating with external devices as well as various input/output (“I/O”) devices, such as a keyboard, a mouse, and a video display. The information handling system may also include one or more buses operable to transmit communication between the various hardware components.
[0018] Additionally, an information handling system may include firmware for controlling and/or communicating with, for example, hard drives, network circuitry, memory devices, I/O devices, and other peripheral devices. For example, the hypervisor and/or other components may comprise firmware. As used in this disclosure, firmware includes software embedded in an information handling system component used to perform predefined tasks. Firmware is commonly stored in non-volatile memory, or memory that does not lose stored data upon the loss of power. In certain embodiments, firmware associated with an information handling system component is stored in non-volatile memory that is accessible to one or more information handling system components. In the same or alternative embodiments, firmware associated with an information handling system component is stored in non-volatile memory that is dedicated to and comprises part of that component.
[0019] For the purposes of this disclosure, computer-readable media may include any instrumentality or aggregation of instrumentalities that may retain data and/or instructions for a period of time. Computer-readable media may include, without limitation, storage media such as a direct access storage device (e.g., a hard disk drive or floppy disk), a sequential access storage device (e.g., a tape disk drive), compact disk, CD-ROM, DVD, random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), and/or flash memory; as well as communications media such as wires, optical fibers, microwaves, radio waves, and other electromagnetic and/or optical carriers; and/or any combination of the foregoing.
[0020] For the purposes of this disclosure, information handling resources may broadly refer to any component system, device or apparatus of an information handling system, including without limitation processors, service processors, basic input/output systems (BIOSs), buses, memories, I/O devices and/or interfaces, storage resources, network interfaces, motherboards, and/or any other components and/or elements of an information handling system.
[0021] In the following description, details are set forth by way of example to facilitate discussion of the disclosed subject matter. It should be apparent to a person of ordinary skill in the field, however, that the disclosed embodiments are exemplary and not exhaustive of all possible embodiments.
[0022] Throughout this disclosure, a hyphenated form of a reference numeral refers to a specific instance of an element and the un-hyphenated form of the reference numeral refers to the element generically. Thus, for example, “device 12-1” refers to an instance of a device class, which may be referred to collectively as “devices 12” and any one of which may be referred to generically as “a device 12”.
[0023] As used herein, when two or more elements are referred to as “coupled” to one another, such term indicates that such two or more elements are in electronic, mechanical, thermal, or fluidic communication, as applicable, whether connected directly or indirectly, with or without intervening elements.
[0024] Referring now to the drawings,
[0025] The illustrated method 100 includes a learning or acquisition phase, described below in reference to
[0026] The method 100 of
[0027] In at least one embodiment, efficiency is achieved by scaling up at least one instance of all of the applicable microservices in parallel to reduce the overall scale up delay associated with a conventional configuration, in which microservices are activated sequentially, one at a time, as the internal API call corresponding to each span of the function is made. Other embodiments may achieve a potentially lesser, but still significant, degree of efficiency by scaling up sub-groups of the microservices in parallel. For example, if a user function spans a sequence of four microservices, the scale up operation may, as an alternative to scaling up all four microservices in parallel, scale up a first subgroup, e.g., the first two microservices, in parallel and then, while the first and second microservices are executing, scale up a second subgroup, i.e., the third and fourth microservices, in parallel. In this example, the use of subgroups to scale up the required microservices in two, rather than one, parallel operations may result in little or no additional scale up delay if the time required to execute the first two microservices is longer than the time required to scale up the third and fourth microservices. The management resource may, in at least some embodiments, be configured to define one or more microservice subgroups and to perform a parallel scale up operation for each subgroup.
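The fully parallel and subgroup scale-up strategies described above can be sketched as follows. This is an illustrative sketch only, not the disclosed implementation: the `scale_up` placeholder, microservice names, and subgroup partition are all assumptions standing in for whatever orchestration mechanism an embodiment would use.

```python
from concurrent.futures import ThreadPoolExecutor

def scale_up(name):
    # Placeholder for the real instantiation call (e.g., a request to a
    # container orchestrator); here it simply returns the running instance name.
    return f"{name}:running"

def scale_up_parallel(microservices):
    # Scale up every microservice spanned by the function at once, rather
    # than sequentially as each internal API call arrives.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(scale_up, microservices))

def scale_up_subgroups(subgroups):
    # Scale up each subgroup in parallel, one subgroup after another;
    # later subgroups can launch while earlier microservices execute.
    results = []
    for group in subgroups:
        results.extend(scale_up_parallel(group))
    return results

# Four-microservice span from the example: all at once, or two subgroups.
span = ["auth", "catalog", "pricing", "checkout"]
all_at_once = scale_up_parallel(span)
in_two_groups = scale_up_subgroups([span[:2], span[2:]])
```

In the subgroup variant, the second parallel operation adds little or no delay whenever the first subgroup's execution time exceeds the second subgroup's scale-up time, matching the trade-off described above.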
[0028] Referring now to
[0029] In at least some embodiments, the external and internal APIs associated with the external and internal API calls illustrated in
[0030] At least some embodiments that employ RESTful APIs may leverage RESTful API tracing tools including, as an illustrative and non-limiting example, VMware Tanzu Observability software, to develop a database 210 of API tracing data.
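The learning phase can be modeled as aggregating recorded traces into per-function dependency information, with each distinct sequence of microservices observed for a function constituting one branch of its dependency tree. The sketch below is a hedged illustration: the trace record fields (`function`, `spans`) and the sample function name are assumptions, not the schema of any particular tracing tool.

```python
from collections import defaultdict

def build_dependency_info(traces):
    # Each trace records the external API call's function name and the
    # ordered sequence of microservices (internal API calls) it spanned.
    deps = defaultdict(set)
    for trace in traces:
        deps[trace["function"]].add(tuple(trace["spans"]))
    # Every distinct observed sequence is one branch of the function's
    # dependency tree.
    return {fn: sorted(branches) for fn, branches in deps.items()}

# Hypothetical traces for one external API call observed three times.
traces = [
    {"function": "placeOrder", "spans": ["auth", "catalog", "checkout"]},
    {"function": "placeOrder", "spans": ["auth", "pricing", "checkout"]},
    {"function": "placeOrder", "spans": ["auth", "catalog", "checkout"]},
]
info = build_dependency_info(traces)
```

A monitoring component could then consult `info` when the function is invoked to identify every microservice the call may span.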
[0031] The dependency tree information may include information indicative of one or more branches 252 that a user function might follow as well as the sequence of microservices 204 executed within each branch. In some embodiments, branch information may include probability information indicating the likelihood that any particular branch is followed. In these embodiments, the branch probability information may be used to define one or more microservice subgroups wherein, as discussed previously, parallel scale up operations are performed for each of two or more microservice subgroups. As an example, if the particular sequence of microservices, represented in
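One way the branch probability information described above might drive subgroup definition is sketched below: microservices common to every branch are always needed and form the first subgroup, while branch-specific microservices are pre-scaled only when their branch's estimated likelihood clears a threshold. The threshold value, function names, and two-subgroup partition are illustrative assumptions rather than limitations of the disclosure.

```python
from collections import Counter

def branch_probabilities(observed_branches):
    # observed_branches: one entry per recorded trace, each the tuple of
    # microservices that trace spanned. Relative frequency estimates the
    # likelihood that a future invocation follows each branch.
    counts = Counter(observed_branches)
    total = sum(counts.values())
    return {branch: n / total for branch, n in counts.items()}

def define_subgroups(probabilities, threshold=0.5):
    # First subgroup: microservices on every branch (always required).
    branches = list(probabilities)
    common = set(branches[0]).intersection(*map(set, branches[1:]))
    # Second subgroup: branch-specific microservices on branches whose
    # estimated probability justifies pre-scaling.
    likely = set()
    for branch, probability in probabilities.items():
        if probability >= threshold:
            likely.update(set(branch) - common)
    return [sorted(common), sorted(likely)]

probs = branch_probabilities([
    ("auth", "catalog", "checkout"),
    ("auth", "catalog", "checkout"),
    ("auth", "pricing", "checkout"),
])
groups = define_subgroups(probs)
```

Under these assumptions, the common microservices would be scaled up in a first parallel operation and the likely branch-specific microservices in a second, with unlikely branches left to scale on demand.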
[0032] Turning now to
[0033]
[0034] Referring now to
[0035] This disclosure encompasses all changes, substitutions, variations, alterations, and modifications to the example embodiments herein that a person having ordinary skill in the art would comprehend. Similarly, where appropriate, the appended claims encompass all changes, substitutions, variations, alterations, and modifications to the example embodiments herein that a person having ordinary skill in the art would comprehend. Moreover, reference in the appended claims to an apparatus or system or a component of an apparatus or system being adapted to, arranged to, capable of, configured to, enabled to, operable to, or operative to perform a particular function encompasses that apparatus, system, or component, whether or not it or that particular function is activated, turned on, or unlocked, as long as that apparatus, system, or component is so adapted, arranged, capable, configured, enabled, operable, or operative.
[0036] All examples and conditional language recited herein are intended for pedagogical objects to aid the reader in understanding the disclosure and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions. Although embodiments of the present disclosure have been described in detail, it should be understood that various changes, substitutions, and alterations could be made hereto without departing from the spirit and scope of the disclosure.