At the recent SIGCOMM ICN workshop, a paper by Perino and Varvello evaluated the feasibility of realizing a CCN content router, judged against a roadmap of memory technologies (given that the authors are from Alcatel-Lucent, we can assume some truth in the outlined memory roadmaps). The paper concluded: “We find that today’s technology is not ready to support an Internet scale CCN deployment, whereas a CDN or ISP scale can be easily afforded.”
In other words, the authors assert that the feasibility of CCN content routers just about ‘rides the wave of state-of-the-art memory technologies’. It is worth looking more closely at the authors’ assumptions in arriving at this conclusion, i.e., what was the input to the model used to derive the expected sizes of the FIB (Forwarding Information Base), content store and PIT (Pending Interest Table)? As outlined on page 47 of the proceedings, the authors assume a starting point, as well as a progression, similar to that of today’s reachable hostnames (derived from Google statistics), i.e., starting at today’s roughly 280 million indexed routable hostnames. Given that starting point and progression, the required sizes for FIBs, PITs and so on just about track the expected availability of different memory types.
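To get a feel for why the starting point matters so much, here is a back-of-the-envelope FIB sizing in Python. The per-entry size is purely our illustrative assumption, not a figure from the paper; the point is only that the total scales linearly with the number of routable names, so the assumed name count drives everything.

```python
# Back-of-the-envelope FIB sizing. The bytes-per-entry figure is an
# assumption for illustration only (name digest + next hop + bookkeeping),
# not a number taken from the Perino/Varvello paper.
entries = 280e6          # ~280 million indexed routable hostnames (Google statistics)
bytes_per_entry = 40     # assumed, for illustration
fib_bytes = entries * bytes_per_entry

print(f"FIB size: {fib_bytes / 1e9:.1f} GB")
```

Multiply the name count by a few orders of magnitude, as we do below, and the same arithmetic lands far beyond any plausible memory roadmap.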
Why is this conclusion so interesting for PURSUIT?
It is interesting because PURSUIT has a significantly different starting point from the one the authors present in their paper. This different starting point comes from a very different assumption about what an information-centric Internet really is. The paper treats a CCN-like Internet as a progression of the same model that we know today. In other words, it assumes a hosting model with globally routable servers that host the information to be disseminated (think of sending an interest request to fp7-pursuit.eu/query, an information space for the PURSUIT project that is hosted by a commercial ISP!).
But can we assume that any future (information-centric) Internet really has the same model?
In our predecessor project PSIRP, we went through a lengthy discussion on this at the very beginning of the project. What were our assumptions about how information would be pushed around in this brave new world that we envision? While Pekka Nikander’s numbers of items and scopes of information were certainly VERY high for some in the project (I recall numbers like 10²¹ individual items per user with some 10¹⁵ scopes), they certainly showed a direction of thought that has been commonly shared since: an information-centric Internet has a vast number of publishers of data; publishers that we assume need to be reachable in some form or another in order to disseminate information to those interested in it. For instance, if my local pub wanted to publish the availability of spaces in its facilities, we assume a simple device stuck on its wall, publishing the availability through an embedded system directly to the (future) Internet. No need to upload to a hosting provider that would then become the globally routable hostname!
This is a VERY different starting point. If we apply it to the aforementioned paper, the conclusion is obvious: an in-router state approach such as CCN (in its current form) cannot work, given the memory technologies known to be coming.
But apart from this difference in conclusions in a paper published some four years after the start of PSIRP, there was a more immediate conclusion that we arrived at very early in the project: we have to exploit a spectrum of forwarding technologies that ranges from truly stateless (albeit limited in capabilities) to fully stateful in-router approaches!
It is this insight that we have exploited ever since in our work on forwarding in information-centric networks. The very first step, the point at the ‘far left’ of this spectrum, was developed in the PSIRP project. Known as LIPSIN forwarding and published at SIGCOMM 2009, it is also the scheme currently implemented in our Blackadder prototype. True, its ability to implement native multicast with constant-length headers is limited by the inevitable false positives that appear as the tree size grows. But it still covers a remarkable range of scenarios with local reach of information dissemination. And we are working on a number of extensions to the basic scheme, either through a relay architecture (see our deliverable D2.2 for a first version) or by moving towards a variable-length scheme (more on this in the upcoming D2.3).
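For readers unfamiliar with LIPSIN, a toy sketch in Python conveys the core zFilter idea: each link gets a sparse Link ID in a Bloom-filter bit space, the packet header carries the OR of the Link IDs on the delivery tree, and each node forwards on any link whose bits are all present in the filter. The parameter choices and helper names here are ours, purely for illustration, not the actual LIPSIN or Blackadder implementation.

```python
import random

def make_link_id(m=256, k=5, rng=None):
    """Draw a sparse Link ID: k bits set in an m-bit Bloom-filter space.
    (m and k are illustrative values, not LIPSIN's actual parameters.)"""
    rng = rng or random
    lid = 0
    for b in rng.sample(range(m), k):
        lid |= 1 << b
    return lid

def build_zfilter(link_ids):
    """The in-packet zFilter is simply the bitwise OR of the tree's Link IDs."""
    z = 0
    for lid in link_ids:
        z |= lid
    return z

def forward(zfilter, link_id):
    """Forward on a link iff all of the link's bits are present in the zFilter."""
    return zfilter & link_id == link_id
```

The false-positive problem mentioned above falls straight out of this construction: the more Link IDs are OR-ed into a fixed-length filter, the denser it becomes, and the likelier that some link outside the tree happens to have all of its bits covered, so the packet leaks onto links it was never meant to traverse.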
What we know well is the ‘right hand side’ of the spectrum, namely full in-router state approaches such as CCN or, come to that, IP multicast. But we truly believe that only a range of forwarding solutions along this spectrum of state tradeoffs can give us the set of solutions needed to fully accommodate the requirements of a future (information-centric) Internet where publishers exist in abundance!