The Multilevel Feedback Queue (MLFQ) scheduling algorithm is conceptually simple at a high level but becomes fairly complex once its operation is analyzed in detail. The algorithm is commonly used in operating systems to help ensure that all processes eventually get executed.
This algorithm should not be confused with the Multilevel Queue scheduling algorithm, as the two are different. The latter is less flexible because processes are permanently assigned to a queue and cannot move between queues, even in scenarios where both high-priority and low-priority processes exist.
The MLFQ scheduling algorithm offers much more flexibility because it allows a process to move between queues, which helps avoid starvation. Essentially, if a process in a high-priority queue exceeds its allotted CPU time, the algorithm pushes it down to a lower-priority queue, and vice versa (Silberschatz, Galvin, & Gagne, 2012, p. 198).
One item to consider when reviewing the MLFQ scheduling algorithm is that the scheduler typically services the queues in order from 0 to X, where X is the number of the last queue. Queue 0 has the highest priority and queue X the lowest. However, if a process arrives in a higher-priority queue, execution moves back to that queue before resuming with the lower-priority queues (Silberschatz et al., 2012, p. 198).
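The queue-ordering rule described above can be sketched in a few lines of Python. This is a hypothetical illustration, not code from Silberschatz et al.: the scheduler simply services the first (lowest-numbered, highest-priority) non-empty queue.

```python
from collections import deque

# Hypothetical sketch: the scheduler always services the highest-priority
# (lowest-numbered) non-empty queue. Queue 0 has the highest priority.
def pick_next_queue(queues):
    """Return the index of the first non-empty queue, or None if all are empty."""
    for level, q in enumerate(queues):
        if q:
            return level
    return None

# Example: queue 0 is empty, queue 1 holds one process, queue 2 holds two.
queues = [deque(), deque(["P1"]), deque(["P2", "P3"])]
print(pick_next_queue(queues))  # 1 -- queue 1 runs before queue 2
```

Whenever a process arrives in a higher-priority queue, the next call to this selection step naturally moves execution back up before the lower-priority queues are resumed.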
Silberschatz et al. provided a helpful example and diagram illustrating some of the complexities of how this particular algorithm behaves using a three-queue model. In their explanation, they indicated:
- Each queue contains one or more processes.
- If a process in the first queue (queue 0) exceeds its time allocation (8 milliseconds), it is pushed to the end of queue 1.
- In queue 1 the time quantum doubles to 16 milliseconds, and processes there run only when queue 0 is empty.
- If a process in queue 1 does not finish within its quantum, it is pushed to the next queue, queue 2, which is serviced by a First-Come, First-Served (FCFS) scheduling algorithm. Again, queue 2 runs only when queues 0 and 1 have no processes waiting (Silberschatz et al., 2012, p. 198).
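The three-queue example above can be simulated with a short sketch. This is an illustrative toy model, not the authors' implementation; it ignores process arrival times and I/O, and simply demotes a job to the next queue whenever it exhausts its quantum.

```python
from collections import deque

# Hypothetical simulation of the three-queue example: queue 0 has an
# 8 ms quantum, queue 1 a 16 ms quantum, and queue 2 runs FCFS
# (i.e., a job there runs to completion).
QUANTA = [8, 16, None]  # None = FCFS, run until done

def run_mlfq(jobs):
    """jobs: list of (name, burst_ms) tuples. Returns names in completion order."""
    queues = [deque(jobs), deque(), deque()]
    finished = []
    while any(queues):
        # Service the highest-priority non-empty queue first.
        level = next(i for i, q in enumerate(queues) if q)
        name, remaining = queues[level].popleft()
        quantum = QUANTA[level]
        if quantum is None or remaining <= quantum:
            finished.append(name)  # job completes at this level
        else:
            # Quantum expired: demote to the tail of the next-lower queue.
            queues[level + 1].append((name, remaining - quantum))
    return finished

print(run_mlfq([("A", 5), ("B", 30), ("C", 20)]))  # ['A', 'C', 'B']
```

In the example run, job A (5 ms) finishes within queue 0's 8 ms quantum; B and C are demoted to queue 1, where C (12 ms left) finishes within 16 ms while B is demoted again and completes last under FCFS in queue 2.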
The MLFQ scheduling algorithm is defined by a number of parameters, namely: a) the number of queues involved, b) the scheduling algorithm assigned to each queue so it can effectively serve its particular class of task, and c) the criteria that dictate when a process should be moved to a foreground or background queue, or somewhere in between (Silberschatz et al., 2012, p. 199).
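These design parameters are often easiest to see as a configuration object. The field names and default values below are purely illustrative assumptions, chosen to mirror the three-queue example; they are not taken from Silberschatz et al.

```python
from dataclasses import dataclass

# Hypothetical sketch of the MLFQ design parameters; names and defaults
# are illustrative only.
@dataclass
class MLFQConfig:
    num_queues: int = 3                    # (a) how many queues
    quanta_ms: tuple = (8, 16, None)       # (b) per-queue algorithm: RR quanta, None = FCFS
    demote_on_quantum_expiry: bool = True  # (c) when to move a process down
    promote_after_wait_ms: int = 1000      # (c) when to move a waiting process back up

cfg = MLFQConfig()
print(cfg.num_queues, cfg.quanta_ms[0])  # 3 8
```

A promotion rule like `promote_after_wait_ms` (a form of aging) is one common way designers address the starvation concern discussed next.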
While MLFQ effectively reduces starvation, it is not perfect, and starvation can still occur, especially during prolonged spikes in which new incoming processes keep entering the high-priority queues. When this happens, demands for context switching and memory usage increase, leading to a reduction in CPU utilization (Hoganson, 2009).
In his paper, Reducing MLFQ Scheduling Starvation with Feedback and Exponential Averaging, Hoganson noted that others have tried to alleviate starvation by applying "real-time processing" principles, improving context switching, introducing a degree of artificial intelligence, and experimenting with "neural network processing" (Hoganson, 2009). He went on to propose another solution: shifting processing time between the high- and low-priority queues and moving select processes in that region to the low-priority queue, so long as critical processes were not impacted. Hoganson added that his strategy should not be applied during temporary processing spikes, which his enhanced MLFQ algorithm was able to measure and account for. His proposal seems promising, but it would not completely remove the risk of starvation during peak CPU usage periods. However, as mentioned before, no algorithm or enhancement is perfect; if the positives of an alternative implementation outweigh its negatives and the proposed solution has been fully tested and accepted by other researchers in the field, it would be a logical step forward. Perhaps Hoganson's solution could lead to the virtual elimination of starvation risk in the future, resulting in faster, more efficient systems.
Hoganson, K. (2009, December). Reducing MLFQ scheduling starvation with feedback and exponential averaging. Journal of Computing Sciences in Colleges, 25(2), 196-202. Retrieved from ACM
Silberschatz, A., Galvin, P.B., & Gagne, G. (2012). Operating System Concepts Update (8th ed.).
Hoboken, NJ: John Wiley & Sons, Inc.