
Math Behind Software and Queueing Theory


In this part of the Math Behind Software series, I will be taking a closer look at a branch of mathematics known as queueing theory and its potential applications in the world of software engineering. I will describe some basics of queueing theory, explain its fundamental laws, and show you the equations that can be used to estimate queue throughput. I will end with a short example of calculating service capacity using everything explained earlier. Let’s start with a quick recap of what was described in the previous part, which can be found here.
In the previous part of the series, I explained contention and coherency delay and their impact on the performance of our system. I brought up concepts like Amdahl’s Law, which focuses on measuring the impact of contention, and the Universal Scalability Law (Gunther’s Law), which additionally accounts for coherency delay. I described them and presented their equations. I also plotted graphs to show the difference in numbers between both laws.
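As a quick refresher, both laws can be sketched in a few lines of Python. The contention coefficient (sigma) and coherency coefficient (kappa) used below are assumed, illustrative values, not measurements from any real system:

```python
# A minimal sketch of both laws recapped above.
# sigma models contention, kappa models coherency delay; both are assumed here.

def amdahl_speedup(n: int, sigma: float) -> float:
    """Amdahl's Law: speedup with n workers under contention sigma."""
    return n / (1 + sigma * (n - 1))

def usl_capacity(n: int, sigma: float, kappa: float) -> float:
    """Universal Scalability Law: adds the coherency penalty kappa."""
    return n / (1 + sigma * (n - 1) + kappa * n * (n - 1))

if __name__ == "__main__":
    sigma, kappa = 0.05, 0.001  # assumed, illustrative coefficients
    for n in (1, 2, 4, 8, 16, 32, 64):
        print(f"n={n:3d}  Amdahl={amdahl_speedup(n, sigma):6.2f}  "
              f"USL={usl_capacity(n, sigma, kappa):6.2f}")
```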
As you may have noticed while reading through the previous part, getting real numbers from either Amdahl’s Law or the Universal Scalability Law is quite a task and may require lots of checks, tests, and preparation. Fortunately, queueing theory can help us solve this problem. It contains laws, equations, and relations that can be used to calculate the same metrics as the Universal Scalability Law, and their inputs are much easier to obtain than those of the USL. As a side note, the Universal Scalability Law can be derived from one of the models of queueing theory, namely the machine repairman model — according to Neil Gunther himself.
Unsurprisingly, queueing theory is a branch of mathematics focused on studying and describing queues (or, in everyday terms, lines). The whole theory is about how lines form, how they behave, and, most importantly, how and why they malfunction.
It is one of those branches of mathematics that is useful in real life and can be applied across many industries: from general logistics (estimating and calculating the flow of goods between stores), through telecoms (quality-of-service assurance and estimating required capacity), to our own world of software (all the message queues around, like Kafka or RabbitMQ).
Kendall’s notation (or Kendall notation) is a system for describing and classifying queueing systems, proposed in 1953 by D.G. Kendall. Originally it was based on three parameters, A/S/c, but later another three parameters were added and the notation ended up as A/S/c/K/N/D. Of course, each one of the parameters has its unique meaning and represents a different aspect of a particular system.
Below you can see the descriptions of all the parameters and their meanings:
• A is the distribution of time between the arrivals of customers to the queue
• S is the service time distribution
• c is the number of servers
• K is the capacity of the queue
• N is the overall maximum number of customers
• D is the queuing discipline
Parameters A, S and D have a finite set of possible values; you can find more information about the most common values of these parameters here. On the other hand, parameters c, K and N have an infinite set of possible values: they can be any positive whole number.
Examples of queueing system descriptions in Kendall’s notation:
• M/M/c
• M/M/2
• G/M/c/5
• D/D/3/5/PS
If any of the last three elements is missing, for example in the M/M/c queue, then one should assume that K and N are equal to infinity and D is equal to FIFO.
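To make those defaults concrete, here is a minimal sketch in Python; the KendallQueue class and parse_kendall helper are hypothetical names used only for illustration:

```python
# A minimal, hypothetical sketch of reading Kendall's notation and filling
# in the defaults described above (K and N default to infinity, D to FIFO).
from dataclasses import dataclass

@dataclass
class KendallQueue:
    arrival: str               # A: inter-arrival time distribution (e.g. "M", "D", "G")
    service: str               # S: service time distribution
    servers: str               # c: number of servers
    capacity: str = "inf"      # K: queue capacity, infinite when omitted
    population: str = "inf"    # N: maximum number of customers, infinite when omitted
    discipline: str = "FIFO"   # D: queueing discipline, FIFO when omitted

def parse_kendall(notation: str) -> KendallQueue:
    parts = notation.split("/")
    if len(parts) < 3:
        raise ValueError("Kendall's notation needs at least A/S/c")
    return KendallQueue(*parts)

# Usage: omitted parameters fall back to their defaults.
print(parse_kendall("M/M/c"))     # capacity=inf, population=inf, discipline=FIFO
print(parse_kendall("G/M/c/5"))   # capacity limited to 5
```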
