In this section, we review six academic papers, each attempting to identify the root cause of unfairness in BBR through experimental or theoretical study. At the end, we analyse a Google update that identifies the root cause and proposes a refined version of BBR.
M. Hock et al. published a paper in 2017 [2] which pointed out that the BBR protocol is better suited to a single flow than to multiple flows, because multiple flows overwhelm the bottleneck and build up queues in the buffer. As new flows enter the network, the queue grows rapidly, and BBR struggles to control it because it probes the round-trip time only every 10 seconds; during their experiments, the system suffered massive packet losses. Since BBR aims for high throughput, it focuses on draining the large buffers before the small ones, which leads to unfairness. The authors conclude by stressing the need for an explicit mechanism to detect congestion and maintain fairness in the system.
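To illustrate the 10-second probing behaviour referred to above, the following is a minimal sketch of BBR's minimum-RTT refresh logic. The class and method names are hypothetical, but the constants (a 10-second min-RTT filter window and a probe of roughly 200 ms at a drastically reduced inflight) follow the published BBR design; between probes, a standing queue can persist unchecked.

```python
# Minimal sketch of BBR's ProbeRTT trigger (illustrative; names are hypothetical,
# constants follow the published BBR design: 10 s min-RTT window, ~200 ms probe).

PROBE_RTT_INTERVAL = 10.0   # seconds a min-RTT sample is trusted
PROBE_RTT_DURATION = 0.2    # seconds spent with inflight cut to ~4 packets

class BbrMinRttFilter:
    def __init__(self):
        self.min_rtt = float("inf")
        self.min_rtt_stamp = 0.0

    def on_rtt_sample(self, rtt, now):
        # Keep the smallest RTT seen inside the 10-second window.
        if rtt < self.min_rtt or now - self.min_rtt_stamp > PROBE_RTT_INTERVAL:
            self.min_rtt = rtt
            self.min_rtt_stamp = now

    def should_probe_rtt(self, now):
        # Only when the window expires does the flow back off (ProbeRTT) and
        # drain its share of the queue; until then the queue keeps growing.
        return now - self.min_rtt_stamp > PROBE_RTT_INTERVAL
```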
The results of this paper paved the way for future research, as almost all academic work on BBR published afterwards builds on it. However, given that networks carrying a single flow are far less common than shared networks, it would have been more appropriate if the paper had not stressed this point or used it as a comparative baseline. The paper argued for an explicit congestion-detection mechanism, but BBR already evaluates a series of round-trip times and stops ramping up early to prevent packet losses. It would also have been valuable if the paper had experimented with BBR coexisting with other congestion control protocols, which would have opened an interesting research direction.
In a present-day practical network that uses both BBR and CUBIC as congestion control protocols, it is important to examine how BBR performs when running in parallel with CUBIC. Such an experimental study was conducted by K. Miyazawa et al. [3]. The paper identified several performance imbalances when BBR coexists with CUBIC. The imbalance most relevant to our focus, and unexplored by the previous paper, is the inter-protocol imbalance that results in unfairness. The paper identifies the bottleneck bandwidth and positive feedback as the primary sources of this imbalance. However, it assumes that performance and fairness can be guaranteed simply by reducing the buffer size at the bottleneck.
The authors suspected positive feedback as a cause only because CUBIC outperformed BBR when the bottleneck bandwidth began to decrease; it remains unclear whether positive feedback is the sole root cause of the performance degradation. Although the paper offers a possible solution based on this assumption, its practicality is not demonstrated, which would have been needed to substantiate the solution and to cross-check the claimed root cause.
The authors of the previous paper published another paper focusing on achieving fairness by efficiently dropping packets [4]. They attempted to replicate, for BBR, the fairness achieved for loss-based congestion control protocols by applying the packet-dropping (active queue management) scheme known as CoDel. The attempt failed: the scheme did not perform as expected when BBR had to handle large volumes of flows, and it achieved fairness only when throughput was compromised.
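For context, CoDel drops packets based on how long they have waited in the queue rather than on queue length. Below is a simplified sketch of that control law (helper names are hypothetical; the 5 ms target and 100 ms interval are CoDel's published defaults), which makes the fairness-versus-throughput trade-off visible: drops escalate whenever queueing delay stays above the target.

```python
# Simplified sketch of CoDel's drop law (illustrative only; the real algorithm
# tracks additional dequeue state omitted here).
import math

TARGET = 0.005     # 5 ms acceptable standing-queue delay
INTERVAL = 0.100   # 100 ms grace period before dropping starts

class CoDelSketch:
    def __init__(self):
        self.next_drop_time = None
        self.drop_count = 0

    def should_drop(self, sojourn_time, now):
        # sojourn_time: how long the dequeued packet sat in the queue.
        if sojourn_time < TARGET:
            self.next_drop_time = None
            self.drop_count = 0
            return False
        if self.next_drop_time is None:
            self.next_drop_time = now + INTERVAL
            return False
        if now >= self.next_drop_time:
            # Delay stayed above target for a full interval: drop, and drop
            # more often the longer the condition persists.
            self.drop_count += 1
            self.next_drop_time = now + INTERVAL / math.sqrt(self.drop_count)
            return True
        return False
```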
BBR was designed as an alternative to loss-based congestion control. Applying the traditional method of dropping packets to BBR merely to achieve fairness, without a concrete rationale, therefore seems inappropriate. Based on the outcomes of these papers, it appears that there is a flaw in the algorithm's design that must be found in order to achieve fairness.
D. Scholz et al. [5] investigated the BBR algorithm and concluded that a potential flaw exists in the probing phase. To pin down the flaw, the authors conducted a simulation-based study. The results showed that the unfairness stems from the probing phase, which causes new flows to overestimate the bandwidth-delay product. The paper also emphasized the need to shorten the probing interval, during which BBR keeps pushing additional traffic into the already existing queue.
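To make the overestimation concrete: BBR bounds its data in flight by a multiple of the bandwidth-delay product, estimated as BDP = BtlBw x RTprop. The sketch below uses hypothetical variable names, but the 1.25 probing gain and the inflight bound of roughly 2 x BDP are BBR's published parameters; it shows how a bandwidth sample inflated while a new flow probes into an already full queue enlarges the inflight cap at the expense of existing flows.

```python
# Sketch of how an inflated bandwidth sample enlarges BBR's inflight cap
# (illustrative; variable names are hypothetical, gains are BBR's published ones).

PROBE_GAIN = 1.25   # pacing gain in ProbeBW's "probe up" phase
CWND_GAIN = 2.0     # BBRv1 bounds inflight at roughly 2 * BDP

def bdp_bytes(btl_bw_bps, rt_prop_s):
    """Bandwidth-delay product in bytes: estimated bottleneck bw * min RTT."""
    return btl_bw_bps / 8.0 * rt_prop_s

def inflight_cap(btl_bw_bps, rt_prop_s):
    return CWND_GAIN * bdp_bytes(btl_bw_bps, rt_prop_s)

# Example with assumed numbers: a 100 Mbit/s bottleneck, 40 ms base RTT.
true_bw, rt_prop = 100e6, 0.040
print(inflight_cap(true_bw, rt_prop))            # cap based on the true BDP

# Because BBR's bandwidth filter keeps the maximum recent sample, a sample
# taken while pacing at 1.25x into a shared queue can overstate the flow's
# fair share, and the overstated value (and hence the cap) persists.
overestimated_bw = true_bw * PROBE_GAIN
print(inflight_cap(overestimated_bw, rt_prop))   # ~25% larger inflight cap
```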
Unlike the previous papers, which ran experiments in an inter-protocol environment and could not pinpoint the exact cause of unfairness, this paper not only identified one of the key issues but also supported it with concrete evidence. Although the identification of the issue is convincing on paper, the proposed solution still needs verification, since shortening the probing interval below the current 10 seconds might introduce further changes in system behaviour. Google, however, did change the probing time, which will be discussed at the end of the next section.
So far, we have reviewed papers that deployed BBR on wired computer networks. We now turn to papers that conducted experiments in 4G LTE networks. A. Parichehreh et al. [6] analysed BBR deployed on the LTE uplink. The authors observed BBR behaving as expected, similarly to CUBIC and Reno, on the LTE uplink, but the fairness issue persisted when flows were transmitted in parallel. Variation was observed in system throughput, and the channel was underutilized. Further analysis showed that unfairness was the reason for the poor throughput, and severe packet losses were observed until the probing phase began its operation.
T. Dai et al. [7] conducted a similar experiment on an emulation testbed with Ethernet and 4G-based networks. This paper likewise noted BBR's weaknesses in practical Ethernet and LTE networks, and added that the unfairness observed during their study made BBR less competitive than CUBIC. These results are understandable, since BBR was initially built by Google for its backbone and YouTube networks. From all these papers, it is clear that academia has made a substantial contribution and helped Google find the root causes of the flaws.
Based on the flaws detected by Google's research team and by academia, an update on BBR was presented at the 100th IETF meeting by Google's make-TCP-fast project group [8]. Google stated that fairness improvement was one of its current areas of focus and acknowledged that the initial version had issues with buffers. Google also noted that bandwidth probing and the estimation dynamics were the root causes of this flaw. Since the initial version of BBR followed a 'one size fits all' approach, Google proposed a second version of BBR that would follow a dynamic, adaptive approach. With this approach, Google promised to design a more specific long-run bandwidth and buffer estimator that would prevent the severe fairness issues and packet losses.
This update was considered one of the major updates since the initial release of BBR. Google has made its best effort to find the root cause of the issue and proposed a solution intended to mitigate or completely remove the existing flaws. However, the fairness issues related to the inter-protocol environment have not been addressed properly in this update.