How can the Open Internet coexist with specialised services?

by Frode Sorensen


The net neutrality debate in Europe is stalled by serious confusion about the distinction between the open Internet and other IP-based services provided outside the Internet, so-called specialised services. This has evolved into a word game which obstructs constructive discourse about one of today’s most important questions for modern society. On 5th June this topic is up for discussion at the net neutrality panel at the EuroDIG 2015 conference in Sofia, Bulgaria.

Contribution to the EuroDIG 2015 net neutrality panel by Frode Sorensen, Norwegian Communications Authority (Nkom).

What made the Internet a success?

Why do we strive for net neutrality? The main objective is to preserve the advantages of the open Internet. The success factors of the Internet have been explained many times before, and they are still worth repeating:

  • The Internet allows innovation without permission, and thereby provides low entry barriers
  • Internet users control their own access, a new dimension for freedom of expression
  • Internet applications are decoupled from the network, so-called application-agnosticism
  • Internet communication provides global connectivity, interconnecting all end-users

Note the phrase “user control”, rather than “user choice”, since the latter is sometimes twisted by the industry to say: “Yes, we support net neutrality, and end-users can choose a neutral Internet access subscription if they want”. But this fragments the Internet, and that is not what we want to achieve with net neutrality. We want to preserve the Internet as an open and non-discriminatory platform where all connected users can control how to use their own access.

More about this word game at the end of the article.

How does the Internet work?

Internet technology has proven flexible enough to accommodate all types of applications. This is due to the well-known end-to-end argument, which implies that functionality should, as far as possible, be implemented in the endpoints connected to the network, and not inside the network itself. This is the opposite of traditional telecoms networks, and we have seen which architecture has become the winner!

But how is it possible that a best-effort network without quality guarantees can achieve this? Traffic handling on the Internet is based on congestion control, a mechanism where endpoints adjust their traffic load to the available capacity in the network. If the traffic load increases too much, endpoints are supposed to back off. Thereby, equilibrium among the endpoints connected to the Internet is maintained.
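The classic form of this back-off behaviour is additive-increase/multiplicative-decrease (AIMD), as used by TCP congestion control. The following is a minimal sketch of the idea; the constants and the fixed congestion threshold are illustrative, not taken from any specific TCP variant:

```python
def aimd_step(cwnd, congestion_detected, increase=1.0, decrease_factor=0.5):
    """Return the next congestion window after one round trip."""
    if congestion_detected:
        # Back off sharply when the network signals congestion (e.g. packet loss).
        return max(1.0, cwnd * decrease_factor)
    # Otherwise probe gently for more capacity.
    return cwnd + increase

# Simulate one endpoint adapting to a link that congests above 10 segments.
cwnd = 1.0
history = []
for _ in range(20):
    cwnd = aimd_step(cwnd, congestion_detected=(cwnd > 10))
    history.append(cwnd)
```

The window grows linearly until the link is overloaded, halves, and grows again, producing the familiar sawtooth pattern that keeps competing endpoints in rough equilibrium.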

A major question in the net neutrality debate is what kinds of traffic management can be considered reasonable. There is a lot of traffic management on the open Internet which nobody questions in this regard: endpoint-based congestion control (mentioned above) and application-agnostic network-internal congestion management. These types of traffic management are fundamental to the well-being of the Internet.
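Application-agnostic congestion management can be illustrated with a simple round-robin scheduler over per-flow queues: the scheduler looks only at which flow a packet belongs to, never at which application generated it. This is a toy sketch of the principle, not a real router implementation:

```python
from collections import deque

def round_robin(flows):
    """Interleave packets from per-flow queues, one packet per flow per turn."""
    queues = [deque(packets) for packets in flows]
    out = []
    while any(queues):
        for q in queues:
            if q:
                out.append(q.popleft())
    return out

# Two flows share the link: a bulk transfer and a short interactive flow
# are treated identically, regardless of which application produced them.
print(round_robin([["a1", "a2", "a3", "a4"], ["b1", "b2"]]))
# → ['a1', 'b1', 'a2', 'b2', 'a3', 'a4']
```

The short flow is not starved by the bulk flow, yet no packet was inspected beyond its flow membership, which is what makes the mechanism application-agnostic.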

Internet technology is adaptive, and new features are introduced over time. The congestion control mechanism elaborated above is evolving with new schemes. And regarding content distribution over the Internet, we have seen increasing use of CDNs and the introduction of adaptive media codecs, which significantly reduce the amount of traffic sent through the Internet.

Finally, congestion handling on the Internet includes short-term aspects such as those described above, but also long-term aspects. The more bandwidth an ISP sells, the more capacity will be needed in its network and interconnections. Luckily, the unit cost of network equipment is declining, and mobile networks benefit from increased spectrum efficiency, additional spectrum allocations, and traffic offload to Wi-Fi networks.

Traffic growth is the success of the Internet, not its problem! Provisioning of Internet access is a business opportunity for ISPs.

Three principal approaches to QoS

ISPs repeatedly argue that we need something more than best effort, that we need Quality of Service (QoS). This can be questioned, since over-provisioning of capacity in traditional IP networks may be cheaper than a tightly managed QoS-based architecture. But leaving that discussion aside: if we want to provide something better than best effort, how can it be done? There are mainly three options, which are discussed below.

1) Specialised services outside the Internet

In order to acknowledge ISPs’ wish to provide QoS-based services, NRAs have developed the regulatory concept of “specialised services” for IP-based services provided outside the Internet. BEREC has provided guidance on how this kind of service should be defined. In a statement on the European Parliament’s TSM resolution, BEREC said: “BEREC considers that specialised services should be clearly separated (physically or virtually) from internet access services at the network layer, to ensure that sufficient safeguards prevent degradation of the internet access services.”

As long as specialised services are clearly isolated from the Internet and do not degrade the performance of the Internet access service, they are exempted from net neutrality considerations. Specialised services are not new; they exist already (e.g. facilities-based IPTV), and new ones will be deployed (e.g. VoLTE in mobile networks). The question for the future is: how will they evolve compared to applications provided over the Internet?

2) Application-specific, provider-controlled QoS on the Internet

When it comes to provisioning of QoS on the Internet, current implementations of Internet access services allow few options. First of all, there is no widely deployed Internet interconnection mechanism supporting QoS in the market, even though technical standards exist. Second, the practices currently applied by some ISPs are typically based on Deep Packet Inspection (DPI), a particularly intrusive technique.

The result is application-specific, provider-controlled traffic management, and this is considered the prototype of unreasonable traffic management by most net neutrality advocates. Net neutrality opponents often argue that IP technology never abandoned QoS. And this is correct. However, the standardised QoS architectures are totally different from today’s DPI-based solutions used by some ISPs to degrade Internet access services.

3) Application-agnostic, user-controlled QoS on the Internet

Is it possible at all to combine QoS on the Internet with net neutrality? The answer is yes – if it is done the right way! We can call such a practice a user-controlled, application-agnostic QoS architecture. This method has been developed through a deep analysis by Barbara van Schewick in her paper Network Neutrality and Quality of Service: What a Non-Discrimination Rule Should Look Like. This kind of architecture is also described in the BEREC Net Neutrality QoS Guidelines.

The method goes like this: ISPs could implement a QoS architecture standardised by the IETF. Applications running on Internet-connected devices could then be controlled by users, e.g. by configuring the traffic class of each application via a user interface. Traffic sent from the application to the network would then be marked with the selected traffic class, and the network would handle the traffic based on a contract agreed between the end-user and the ISP.
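The marking step already exists in today’s stacks: an application (or a user-facing control tool acting on its behalf) can set the DiffServ code point (DSCP) on its outgoing packets via an ordinary socket option. The sketch below assumes a Linux host; whether the network actually honours the marking depends entirely on the contract with the ISP:

```python
import socket

EF_DSCP = 46          # "Expedited Forwarding", the standard DiffServ class
                      # for latency-sensitive traffic (RFC 3246)
tos = EF_DSCP << 2    # DSCP occupies the upper six bits of the IP TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
# All datagrams sent on this socket now carry the EF code point,
# which DiffServ-capable routers can map to a priority queue.
```

Since the application only selects a class and never identifies itself to the network, the network’s treatment remains application-agnostic while the choice stays with the user, which is exactly the combination this third approach aims for.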

But would ISPs be interested in providing such a QoS architecture? They should be, given the need for QoS which they are advocating.

Public discourse or Word Game?

A major problem with the net neutrality discourse today seems to be that the terminology has been infiltrated with counter-definitions. In this way, a sensible discussion about this important topic often becomes a masquerade.

ISPs may say “consumers need QoS to fulfil their needs”, while they actually mean “we want to use DPI to control subscribers’ traffic”. In the case of such double talk, lawmakers may be misled into compromising on net neutrality.

And specialised services may be used as a disguise for prioritised services on the Internet, but this would be the opposite of the regulatory concept, where traffic from specialised services is clearly separated from Internet traffic.

And finally, the innovation argument used by net neutrality advocates has now become a favourite among the opponents. But reverse-engineering telecoms into the Internet could hardly be called innovation!