Feb 13, 2013
 

I had an interesting exchange of thoughts at an event I’m participating in. I asked a colleague (who will remain unnamed) how his particular focus on NDN over something like, say, PURSUIT came about.

His response was astounding to me, certainly unexpected. The central point that emerged is that the highly stateful operation of NDN is a challenge that is not yet known to actually work. The Internet so far has operated statelessly in terms of flow operation (oddly enough, the forwarding state in IP routing tables seems to be discounted). This point took me by surprise, to say the least, but I can follow the line of thinking: such a massive change from the ‘rather’ stateless IP to a much more stateful mode of operation is interesting as a challenge, although proposals to move complexity into the network are not new, with past research areas such as Active Networking coming to mind.

Why is massively stateful operation interesting, though? The argument is seemingly that it enables a number of interesting network-internal operations, such as hop-by-hop congestion control, multi-path delivery, etc. But is it really true that one needs increasing state in the forwarders; in other words, is it a necessary condition? My answer to this is clearly no. State in such problems does not disappear. The point is where your solution places the state, and placing it in the forwarders (by attaching increasingly complex control logic to the forwarding tables) is only ONE way of doing things. In other words, a choice of stateless forwarder, such as enabled by our LIPSIN solution in PURSUIT, does not prohibit such network-internal control, and our own work in PURSUIT on these topics is daily proof of this point.

Is stateless forwarding really less interesting? Let us first reiterate the opportunities that solutions such as LIPSIN or our recently developed multi-stage extension (which scales to any tree size without any false positives) provide. Forwarders in this world only hold information about their own outgoing links. There is NO forwarding information required beyond this, no forwarding information that carries any names, nothing like the forwarding information bases (FIBs) required in solutions like NDN. The forwarding operation is very simple (essentially comparing the bit patterns of your own links with those in the packet header), which allows for efficient implementation in software, hardware, even directly in optics!
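To make the simplicity concrete, here is a minimal sketch of that bit comparison in Python. The link IDs and the tiny bit width are toy values for illustration only; this is not the Blackadder code, and real zFilters are of course much wider.

```python
# Toy sketch of a LIPSIN-style forwarding decision: a link is used if its
# link ID is fully contained in the packet's zFilter. Link IDs and widths
# here are made up for illustration.

def forward_links(zfilter: int, outgoing_links: dict) -> list:
    """Return the names of links whose link ID bits are all set in the zFilter."""
    return [name for name, link_id in outgoing_links.items()
            if link_id & zfilter == link_id]

links = {"eth0": 0b1010, "eth1": 0b0101}      # this node's outgoing link IDs
packet_zfilter = 0b0111                       # OR of the link IDs on the delivery tree
print(forward_links(packet_zfilter, links))   # -> ['eth1']
```

There is no lookup table to consult and nothing to update when the topology changes elsewhere; the forwarder only needs to know its own link IDs.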

Where’s the challenge now? Similar to the congestion control argument above, state does not disappear in a problem like forwarding. It is placed somewhere else, and that is where the challenge lies. In the PURSUIT architecture, the function that potentially holds this state is the topology management and formation core architectural function, which essentially performs path computation and header construction if LIPSIN is your choice of forwarding. The challenge here lies in performing these operations well, fast, and possibly in a distributed fashion. The work reported at ICNC 2013 shows the feasibility of performing such path computation at speed, while also supporting Steiner tree heuristics. Our work in the IEEE Communications Magazine July 2013 issue reports on our early results on performing the slow path operations that form the basis for the stateless fast path. It clearly is a challenge, but one that is interesting and promising in early results.
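As an aside, the header construction part of that slow path is conceptually tiny; a sketch (with invented link IDs, not the Blackadder data structures) could look like this:

```python
# Once the topology function has computed the delivery tree, the stateless
# forwarding identifier is simply the OR of the link IDs along that tree.
# The IDs below are invented for illustration.

from functools import reduce

def build_zfilter(tree_link_ids):
    return reduce(lambda acc, lid: acc | lid, tree_link_ids, 0)

tree = [0b0001_0000, 0b0000_0100, 0b0100_0000]   # link IDs on the computed tree
print(bin(build_zfilter(tree)))                  # -> 0b1010100
```

The hard part, as argued above, is computing good trees quickly and possibly in a distributed fashion, not assembling the header.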

Is such a stateless data plane desirable? I do envision that a world of simple forwarders holds tremendous potential in terms of energy efficiency and optimisation. The first point comes from the simple operation as well as the lack of need for memory. This might lead to extremely simple forwarders, even in the core network, that operate at extremely high speed. Secondly, the chosen approach leaves many options as to where the state (which, again, will not disappear) finally resides. One such choice, albeit negating the advantages mentioned above, could be the forwarder itself. This would turn the forwarders into NDN-like forwarders of similar complexity, performing topology formation in a highly distributed way. More likely, however, is to hold the state in a regionally distributed topology formation function, also for mobility support reasons, optimising these network nodes in terms of computational efficiency while keeping the actual forwarders simple. The point, though, is that this degree of freedom is interesting and a challenge in itself to better understand.

Stateless forwarding – boring or not? I would personally say no, for maybe obvious reasons: stateless forwarders hold many promises and are an equally radical departure from the still rather complex world of forwarding that we have today. This challenge has kept many of us busy over the years, so check out our papers and deliverables to see how far we have come down that road.

Nov 03, 2012
 

A question that often pops up when the validity of ICN research is discussed is how its goals differ from what is already available in CDN solutions. There are many answers to this question, but I would like to focus on one: in my opinion, ICN should not include data storage. Regardless of their implementation, CDNs attempt to push data to where it will be needed next, therefore they incorporate both distribution and storage capabilities: a content provider makes data available and the CDN intelligently replicates these data in its servers. Current ICN architectures, on the other hand, only offer distribution capabilities, leaving storage to separate entities. There is an argument for incorporating at least some storage at ICN nodes, for example, storing data made available by applications inside the ICN core, so that it remains available even if applications terminate. But delegating storage to another entity requires trust relationships to avoid Denial of Service attacks; do we want to embed these in the network? Even further, do we want to embed CDN-like replication strategies in the network in order to make CDNs obsolete? I would answer no to both: ICN should be a better substrate for CDNs than IP (and the DNS), but it should not attempt to become a CDN. Naming data and decoupling names from locations are powerful tools that can be used in many ways to facilitate diverse CDNs – and not only CDNs.

Oct 17, 2012
 

ICN is (among other things) about scattering information objects across many (network) locations: the same object can exist simultaneously in a web server, in a CDN server, in a cache, or even in the RAM of an ADSL gateway. While this feature is expected to facilitate content distribution, it raises a significant security concern: how can access control be imposed on data stored outside the data owner’s administrative boundaries?

A straightforward approach for answering this question is to examine the current situation. A common solution to this problem is to not impose any access control on the data stored by 3rd parties! Surprisingly, this solution is widely adopted! Take for example Facebook, which stores users’ pictures in Akamai’s CDN. If you have a Facebook account, perform this very simple experiment: upload a picture to your Facebook profile, secure this picture by applying a strict access control policy (e.g., “only my friends can view the picture”), view the picture and copy its URL (it should be something like http://fbcdn-sphotos-g-a.akamaihd.net/, i.e., a URL pointing to an Akamai server), and give this URL to somebody who does not even have a Facebook account. You will be surprised to see that your picture can be accessed, bypassing the Facebook access control policy. In fact, anybody can see this picture, provided that she has the pointer to it (the Akamai URL). Why this happens is obvious: Akamai knows nothing about Facebook’s policies.

An improvement to this approach is to allow 3rd parties to access the data owner’s user directory and retrieve only the information that is required in order to evaluate an access control policy. A common approach for achieving that is OAuth. OAuth is a protocol that enables 3rd parties to request access to specific user attributes. OAuth has been widely adopted (e.g., by Google, Facebook, and many others). Moreover, OAuth v2.0 is aiming to become an IETF standard. Nevertheless, OAuth comes with some drawbacks: (i) a 3rd party learns something about the user (the user attributes required in order to evaluate a policy), (ii) a 3rd party has to be able to understand the meaning of the various user attributes (e.g., in the case of Facebook it has to understand what a ‘friend’ means), and (iii) it is not easy to implement correctly, resulting in broken implementations.

With all these in mind, we created a solution that tries to overcome these problems, and we adapted it to our ICN architecture. Our solution was presented at the second SIGCOMM workshop on Information-Centric Networking (and received the best paper award). In a nutshell, our system works as follows: a data owner creates an access control policy and stores it in an Access Control Provider, and attaches a pointer to that policy (e.g., a URL) to every item the policy protects; any 3rd party can then request users to authenticate themselves against a policy; the authentication takes place at the location indicated by the pointer (i.e., at the Access Control Provider that holds the actual definition of the policy); and upon successful authentication the 3rd party is notified by the Access Control Provider.
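To give a flavour of the flow, here is a compressed sketch in Python; class and method names are hypothetical, not the API of our actual implementation.

```python
# Hypothetical sketch of the flow described above: the data owner registers a
# policy at the Access Control Provider, items carry only a pointer to it, and
# the ACP reports a verdict to the 3rd party after authenticating the user.

import secrets

class AccessControlProvider:
    def __init__(self):
        self.policies = {}    # policy pointer -> predicate over user attributes
        self.verdicts = {}    # token -> bool, reported to the 3rd party

    def store_policy(self, predicate):
        pointer = "acp://policy/" + secrets.token_hex(8)
        self.policies[pointer] = predicate
        return pointer                       # the data owner attaches this to each item

    def authenticate(self, pointer, user_attributes):
        # The user authenticates directly at the ACP; the 3rd party never
        # sees the attributes, it is only notified of the verdict.
        ok = self.policies[pointer](user_attributes)
        token = secrets.token_hex(8)
        self.verdicts[token] = ok
        return token, ok

acp = AccessControlProvider()
pointer = acp.store_policy(lambda attrs: "friends-of-alice" in attrs.get("groups", []))
token, allowed = acp.authenticate(pointer, {"groups": ["friends-of-alice"]})
print(allowed)   # -> True
```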

This approach has many advantages compared to OAuth. First, 3rd parties remain oblivious to the data owner’s personal information. Second, 3rd parties do not have to understand user attributes in order to evaluate an access control policy; this task is performed by the Access Control Provider. Last, but not least, the access control policy is stored and managed in a separate (functionally centralized) module – the Access Control Provider – and can therefore be easily modified and re-used.

We believe that our solution can be easily adapted to other ICN architectures, as well as to similar systems (e.g., cloud storage, CDN services).

Aug 20, 2012
 

I have heard it too often now (as recently as during the AsiaFI summer school) and it needs to be addressed here: the mobility problem in ICN is NOT solved by simply re-subscribing (or re-sending interests, depending on which ‘language’ you are speaking) from the subscriber side!

With recent discussions, e.g., in the IRTF, that focus on publisher mobility, there is an assumption that I’m hearing too often lately in presentations, namely that subscriber mobility is a done deal - just re-subscribe!

Yes, re-subscribing is indeed possible, but it is a brute-force solution to a problem that is significantly more challenging than the simple solution suggests, and one has to keep its implications in mind. Per device, there are likely many pieces of information that the device’s software is interested in. Hence, what looks like a straightforward solution quickly turns into a nightmare on your wireless link.

Let’s expand on this by assuming a mobile device that holds 1000 active subscriptions to individual information items – not an unreasonable number, considering mashed-up (and highly disaggregated) content on, e.g., a webpage, or a distributed filesystem app running on your device (think Google Drive), or many other possible apps running while you are using your mobile device. Changing your access point now leads to re-subscriptions to these 1000 items – in a basic re-subscription solution, this leads to an upstream traffic burst of approximately 1000*(average_length_of_ID + Ethernet header) bytes (in case you use Blackadder, which operates directly on Ethernet – you need to add any overlay overhead for other deployments). With an average length of, say, 30 bytes per ID, that’s 30k+ per handover. Make that 10000 active objects, and you reach 300k+. At the same time, let us consider mobility in wireless environments where more than one device is handing over at any time – so you need to multiply any figure by the number of mobile handsets that are doing so. A rough calculation of this burst is sketched below.
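The arithmetic is trivial but worth writing down; the Ethernet header size below is an assumption, and any overlay overhead is ignored.

```python
# Back-of-envelope estimate of the re-subscription burst per handover,
# following the numbers in the paragraph above (illustrative, not measured).

ETHERNET_HEADER = 14     # bytes, assumed (no VLAN tag or FCS)
AVG_ID_LENGTH = 30       # bytes per information item identifier

def handover_burst(active_subscriptions, handing_over_devices=1):
    per_device = active_subscriptions * (AVG_ID_LENGTH + ETHERNET_HEADER)
    return per_device * handing_over_devices

print(handover_burst(1000))        # ~44 kB upstream for one device with 1000 items
print(handover_burst(10000, 50))   # ~22 MB if 50 devices with 10000 items each hand over
```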

You possibly see where this is going. Re-subscribing is, clearly, a SIMPLE solution to a problem that can be very hard to solve EFFICIENTLY!

The considerations above leave out the computational overhead that these messages create (either at the topology manager in the PURSUIT case, or through the distribution within the network and the necessary PIT updates in CCN). I would even assert that this computational overhead is something we will need to deal with in any case. The main point here, though, is that the wireless access network is a scarce resource by any measure, even with increasing link speeds in recent technologies. Filling this bandwidth with control traffic initiated by the mobile handset seems simply wrong, while seemingly ignoring the possibility of network-assisted handovers.

It is the latter that is one of the reasons for the explicit topology management realised in the PURSUIT architecture. While it allows for simple solutions such as re-subscription for mobility management, it also allows for network-assisted solutions that can avoid the signalling avalanche outlined above.

I wish that in future presentations I will see fewer claims that subscriber mobility is a done deal – it’s hard, and let’s realise that!

Nov 16, 2011
 

In the recent SIGCOMM ICN workshop, one particular paper by Perino and Varvello evaluated the feasibility of realizing a CCN content router; this feasibility being drawn against a roadmap of memory technologies (given that the authors were from Alcatel-Lucent, we can assume some truth in the outlined memory roadmaps). The paper concluded that “We find that today’s technology is not ready to support an Internet scale CCN deployment, whereas a CDN or ISP scale can be easily afforded.”

In other words, the authors assert that the feasibility of CCN content routers just about ‘rides the wave of state-of-the-art memory technologies’. One needs to look closer at the authors’ assumptions when arriving at these conclusions, i.e., what was the input to the model used to derive the expected FIB (forwarding information base), content store and PIT (Pending Interest Table) sizes? As outlined on page 47 (of the proceedings), the authors assume a starting point as well as a progression similar to that of today’s reachable hostnames (derived from Google statistics), i.e., starting at today’s roughly 280 million indexed routable hostnames. Given that starting point and progression, the required sizes for FIBs, PITs, etc. just about progress along the expected availability of different memory types (a rough illustration of that scale follows below).
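For a feel of the numbers, here is my own back-of-envelope with an assumed average entry size; this is NOT the model from the Perino and Varvello paper, just an illustration of the order of magnitude involved.

```python
# Rough, illustrative estimate of name-based FIB state at hostname scale.
# The average entry size is an assumption for illustration only.

ROUTABLE_NAMES = 280_000_000       # order of magnitude cited above
AVG_FIB_ENTRY_BYTES = 40           # assumed: name prefix plus next-hop state

fib_bytes = ROUTABLE_NAMES * AVG_FIB_ENTRY_BYTES
print(f"{fib_bytes / 1e9:.1f} GB of FIB state")   # ~11.2 GB
```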

Why is this conclusion so interesting for PURSUIT?

It is interesting because PURSUIT has a significantly different starting point from the one the authors present in their paper. This different starting point comes from a very different assumption of what an information-centric Internet really is. The assumption in the paper shows that the authors see a CCN-like Internet as a progression of the same model that we know today. In other words, it assumes a hosting model with globally routable servers which host information that needs to be disseminated (think of sending an interest request to fp7-pursuit.eu/query, an information space for the PURSUIT project that is hosted by a commercial ISP!).

But can we assume that any future (information-centric) Internet really has the same model?

In our predecessor project PSIRP, we went through a lengthy discussion on this at the very beginning of the project. What were our assumptions about how information is being pushed around in this brave new world that we envision? While Pekka Nikander’s numbers of items and scopes of information were certainly VERY high for some in the project (I recall numbers like 10^21 individual items per user with some 10^15 scopes), they certainly showed a direction of thought that has been commonly shared since: an information-centric Internet has a vast number of publishers of data; publishers that we assume need to be reachable in some form or another in order to disseminate the information to those interested in it. For instance, if my local pub would like to publish the availability of spaces in their facilities, we assume a simple device stuck on their wall, publishing the availability through an embedded system on the device directly to the (future) Internet. No need for uploading to a hosting provider that would then become the globally routable hostname!

This is a VERY different starting point. If we apply this starting point to the aforementioned paper, the conclusions are obvious: an in-router state approach such as CCN (in its current form) cannot work, given the memory technologies known to come.

But apart from this difference in conclusions in a paper published some four years after the start of PSIRP, there was a more immediate conclusion that we arrived at very early within the project: we have to exploit a spectrum of forwarding technologies that ranges from truly stateless (albeit limited in capabilities) to fully in-router state!

It is this insight that we have exploited since in our work on forwarding in information-centric networks. The very first step, the point at the ‘far left’ of the aforementioned spectrum, was developed in the PSIRP project. It is known as LIPSIN forwarding and was published at SIGCOMM 2009. It is this scheme that is also currently implemented in our Blackadder prototype. Sure, its ability to implement native multicast with constant-length headers is limited due to the inevitable false positives when increasing the tree size (a sketch of the effect is given below). But it still allows us to cover a remarkable range of scenarios with local reach of information dissemination. And we are working on a number of extensions to the basic scheme, either through a relay architecture (see our deliverable D2.2 for a first version of that) or through moving towards a variable-length scheme (more on this in the upcoming D2.3).
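To illustrate why the tree size matters, here is the standard Bloom filter approximation of the false positive probability; the filter width and the number of bits set per link ID are assumptions for illustration, not the exact LIPSIN parameters.

```python
# Approximate per-link false-positive probability of a Bloom-filter-style
# zFilter as the delivery tree grows (m-bit filter, k bits set per link ID).

from math import exp

def false_positive_rate(tree_links, m=256, k=5):
    """Standard Bloom filter approximation: p = (1 - e^(-k*n/m))^k."""
    return (1 - exp(-k * tree_links / m)) ** k

for n in (10, 20, 40, 80):
    print(n, f"{false_positive_rate(n):.4f}")
# The rate climbs quickly past a few tens of links, which is why
# constant-length zFilters suit delivery trees of limited size.
```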

What we know well is the ‘right-hand side’ of the spectrum, namely full in-router state approaches such as CCN or IP multicast, come to that. But we truly believe that only a range of forwarding solutions along this outlined spectrum of state tradeoffs can bring us the required set of solutions for fully accommodating the requirements of a future (information-centric) Internet where publishers exist in abundance!

Oct 24, 2011
 

From the beginning of the PURSUIT project, we have looked at the problem of caching in Information-Centric Networks from two angles. The first follows a management approach to caching – what we call information replication, similar to what is done in CDNs today – using dedicated storage devices attached to ICN nodes. The second approach is that of in-network (packet-level) caching, where every network node opportunistically caches every packet it forwards and is able to reply to requests for info items/chunks it holds. The latter approach has also been studied by most of the proposed ICN approaches.

In the information replication problem, the placement/replacement decision for info items is made by a management component that takes subscriptions into account (popularity of objects etc.) and proactively configures the storage/caching devices in terms of what items to store or replace. We have investigated, and continue to look at, centralised (offline) and distributed (online) algorithms that decide the network location of the caches and the assignment of information object replicas to the chosen caching points (a toy placement step is sketched below). This decision is also related to the topology manager function of the PURSUIT model, which can also perform some traffic engineering tasks by deciding the ‘best’ available replica for every subscriber.
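As a toy illustration of the offline flavour (the actual PURSUIT algorithms are more elaborate and topology-aware), a popularity-driven placement step might look like this:

```python
# Hypothetical sketch of a greedy, popularity-driven replica placement step:
# assign the most popular items first, round-robin over the available caches.

def place_replicas(item_popularity, cache_nodes, slots_per_node):
    placement = {node: [] for node in cache_nodes}
    ranked = sorted(item_popularity, key=item_popularity.get, reverse=True)
    for i, item in enumerate(ranked[:len(cache_nodes) * slots_per_node]):
        placement[cache_nodes[i % len(cache_nodes)]].append(item)
    return placement

popularity = {"itemA": 900, "itemB": 450, "itemC": 120, "itemD": 30}
print(place_replicas(popularity, ["cache1", "cache2"], slots_per_node=1))
# -> {'cache1': ['itemA'], 'cache2': ['itemB']}
```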

Moreover, for the in-network caching approach, we have devised a taxonomy of the relevant policies that can be applied there, such as replacement policies (e.g. LRU/LFU), request dissemination strategies (e.g. en route, flooding), replication strategies, analytical models of the caching mechanisms, and their interaction with possible transport protocol functionality.
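For the replacement-policy end of that taxonomy, an LRU packet cache takes only a few lines; this is a generic sketch, not the Blackadder implementation.

```python
# Minimal LRU replacement policy of the kind an opportunistic per-node
# packet cache might use (illustrative only).

from collections import OrderedDict

class LruPacketCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()          # (SId, RId) -> packet payload

    def insert(self, name, payload):
        self.store[name] = payload
        self.store.move_to_end(name)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict the least recently used entry

    def lookup(self, name):
        if name in self.store:
            self.store.move_to_end(name)    # a hit refreshes recency
            return self.store[name]
        return None
```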

While packet-level caching is considered one of the salient characteristics of information-centric network architectures, one can argue that with that mechanism in place, storage placement and information replica assignment are not necessary any more. We still believe that information replication has an important role to play. Unlike in-network caching, which attempts to store the most commonly accessed objects/packets as close to the clients as possible, replication distributes contents across multiple mirror servers. Replication is used to increase availability and fault tolerance, while it has load balancing and enhanced client-server or publisher-subscriber proximity as side effects. In conclusion, our belief is that replication and in-network caching complement each other.

We are currently finalising the implementation of in-network and managed caching in the PURSUIT prototype (Blackadder) and plan to evaluate the above in scenarios as realistic as possible.

Oct 10, 2011
 

Some time ago, when the original PSIRP prototype came out, we convinced an undergraduate student to get acquainted with the (then current) BlackHawk prototype and write a socket emulator on top of it, so that we could demonstrate backward compatibility of pub/sub networking with the Internet. The plan was to write a simple socket library that would translate at least the datagram socket calls to pub/sub calls so that we could run TFTP on top (conveniently, the student had written a short TFTP program for a networking class). What could be simpler than translating sendto() to publish() and recvfrom() to subscribe(), making each packet one publication? Or, maybe, it should be the other way around. Anyway, it seemed pretty straightforward, as long as we could understand how the prototype worked and could keep it stable for the duration of a small TFTP transfer.

Having sorted out this part, we moved to the issue of mapping the socket addresses to content identifiers. In PURSUIT (and PSIRP) we assume that each publication would be identified by a scope identifier (SId) and a rendezvous identifier (RId), with the SId indicating a set of related information (with a common access control policy) and the RId indicating a specific publication in this information set. The problem is that originally we also assumed that each such pair would uniquely identify a piece of content, essentially making publications immutable (i.e. the content identified by SId/RId pair would always be the same). This is very handy for caching (no need to check if the content is valid, as it never changes), but a lot of trouble when you need to emulate sockets!

If you think about it, a socket is an endpoint of an information stream; that is, it identifies a sequence of data items sent to (or transmitted from) that endpoint. In a way, sockets by definition correspond to mutable data. What the emulator needs to do, then, is to map a socket address indicating mutable data to SId/RId pairs indicating immutable data. How on earth can you do that? Short answer: no way! Long answer: read below (but it is not pretty).

Our solution was to exploit the versioning scheme implemented by the BlackHawk prototype: a publication could be modified, leading to a new version of it becoming available. We therefore translated sendto() calls towards a remote socket into a set of versions of a single publication, using the remote socket address to produce the SId/RId pair (by hashing). We even found a way to do this for stream sockets, where the endpoint changes on the server side after a new connection is established: we hashed the socket addresses of both endpoints to identify the connected socket. But this solution is not good: it cannot work with multiple clients talking to a single server at a well-known socket endpoint, as synchronizing version numbers between them would be too expensive.

To maintain the same basic model, i.e. that the SId/RId pair can be computed by knowing only the remote address, we tried different ways to avoid collisions on this well-known SId/RId pair: we could also add the current time into the hash to avoid using the same SId/RId for too long (this requires loose synchronization, but NTP can handle it) and, within each such period, produce multiple RIds, with the sender choosing one randomly (a sketch of this mapping follows below). This reduced the chance of collisions but did not eliminate them.
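A condensed sketch of this mapping is given below; the constants, digest sizes and identifier layout are illustrative, not those of the actual emulator.

```python
# Sketch of mapping a remote socket address to an SId/RId pair, salted with a
# coarse time bucket and a random index to spread concurrent senders over
# several RIds (reduces, but does not eliminate, collisions).

import hashlib, random, time

def socket_to_sid_rid(remote_addr, period_seconds=60, rid_fanout=8):
    host, port = remote_addr
    bucket = int(time.time() // period_seconds)   # loose synchronisation (NTP suffices)
    index = random.randrange(rid_fanout)          # sender picks one of several RIds
    sid = hashlib.sha1(f"{host}:{port}".encode()).digest()[:8]
    rid = hashlib.sha1(f"{host}:{port}/{bucket}/{index}".encode()).digest()[:8]
    return sid, rid

sid, rid = socket_to_sid_rid(("198.51.100.7", 69))   # e.g. a TFTP server endpoint
print(sid.hex(), rid.hex())
```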

In the end we gave up and wrote a paper arguing that even in an Information-Centric Network (ICN) one may also need the ability to subscribe/publish to channels, i.e. streams of data that change over time. An example of that would be switching on your TV to see what’s on, as opposed to asking for a specific program: in the former case, you subscribe to a channel where the SId/RId pair does not denote specific content (it depends on the time of day), in the latter you subscribe to a document which is uniquely denoted by the SId/RId pair. The paper appears in the ICN workshop at SIGCOMM 2011, and the current PURSUIT prototype (Blackadder) supports both channels and documents. It is an open question to what extent channels and documents should be differentiated in an ICN.

May 16, 2011
 

In the functional model that our PURSUIT work is based upon, we see three major functions being implemented, namely the finding of information, the building of an appropriate delivery graph, and the forwarding along this graph. Let us now address considerations for separating these functions at some levels of the architecture, while performance and optimization demand merging them at others.

We start with the issue of separating finding of information from the forwarding along a possibly appropriate graph. The problem we try to outline is best captured by: If you don’t know its location, you might end up looking everywhere!

Our starting point is that of an interest in information at a consumer’s end, with the information being available somewhere in the network. First, let us assume that the finding of information is merged with the forwarding of the actual information from a potential provider to the consumer of the information. For this, let us consider the following example:

Think of a social networking service very much akin to today’s solutions, like Facebook. A large-scale social network à la Facebook is likely to be distributed all over the world in terms of publishers and subscribers. In other words, one can think of it as a social construct in which information pertaining to this social construct is unlikely to be locally constrained. For now, we assume that there is at least one scope that represents the information space of the social network.

There are now two options for implementation. First, take one that is similar to today’s Facebook, in which (the scope) ‘facebook.com’ points to a (set of) server(s) that ‘host’ the Facebook service. Hence, all publications to the Facebook scope are in fact uploads, i.e., any subscription to a named data piece is in fact routed to the Facebook server farm. In this case, all information has a location by virtue of the upload operation to a set of dedicated servers, whether one wanted it or not. Merging the finding of information with the forwarding along a desirable graph is now possible, since any local egress router (called, e.g., a content router in NDN) can simply forward the interest request to the (limited number of) domain(s) hosting the Facebook servers.

Let us consider another approach to Facebook that builds on the power of storing the data at the publisher or at any other node. In other words, we do not assume that the information is uploaded to a server. Instead, we merely assume that the publisher (of Facebook information) signals the availability of the data within the Facebook scope. It is now the task of the network to provide an appropriate route to this publisher for any future interest request. This model is appealing to a company like Facebook since it still allows control over the data by virtue of possible access control and profiling of usage patterns, but it relieves Facebook of the burden of hosting the actual data, i.e., it removes the need for operating upload servers and therefore reduces the overall cost of their operations. Any entity that happens to have a particular information item (such as a status update or photo) can provide the information to the interested subscriber.

In this form of a social network, what would happen if the functions of finding and delivery were not separated? Let’s assume that the item is not available within the domain where it is requested (leaving out caching, since we are concerned with an original request for an item that has not been cached yet). An interest in a particular (social network) information item now needs to be forwarded to ANY domain that might host the information. If we assume a BGP-like table at the egress router of each domain, the Facebook entry is likely to point to a number of domains that might host Facebook content (which can be any of them, given the scenario). Slowly, the interest request will propagate over many domains, although it is likely that only one is hosting the actual information item at hand. As a result, ANY status update of ANY social network member is likely to be spread over many, if not all, domains in the Internet! Depending on the intra-domain approach to determining whether or not an interest request can be fulfilled (NDN, for instance, uses local broadcast), this could easily amount to a global flooding of status updates in any network that might hold viable information about this social network (which is, in the case of Facebook, a LARGE number!).

A similar problem arises when bringing information mobility into play, i.e., information that is exclusively available at a moving target (e.g., my personal laptop). If it is to be reachable by potentially interested parties, the interest requests need to be forwarded to a larger number of ISPs (surely, movement patterns could be used to limit the region of discovery, but that would require not only the disclosure of this information but also additional logic in the network to do so – information and logic that one does not want to associate with the fast-path forwarding function).

What is the problem here? Returning to our statement from the beginning, we can conclude that if you don’t know its location, you might end up looking everywhere! What is the lesson learned here? It is that, if information is location-less (which is often the case), finding the information needs to be separated from the construction of an appropriate delivery graph in order to optimize the operation of each of the functions. While we appreciate the cases where information has a clearly limited location, e.g., in content distribution where dedicated content networks serve the interests of their customers, we consider it a strong assumption that (application-level) information in general is location-less, either due to the nature of its information space or due to mobility.

However, if information HAS a location, merging these functions not only does not lead to problems but even allows for optimization of the operation. Such cases are more likely, however, at lower levels of implementation. Take as an example the segmentation of a larger information item along a dedicated (transfer-byte-limited) forwarding link from one node to another. Separating finding from delivery is futile here since the location of the information is obvious (the sending node) from the receiving node’s perspective. Hence, the rendezvous signal of interest can be interpreted as the very send operation from one physical node to the other.

It is this separation of functions that is the powerful notion of the PURSUIT functional model, and we need to do more work to better understand this power.

Feb 22, 2011
 

Tailored content means any content which has been created with a specific purpose – be it to educate, inform, question or entertain. So, how will an information-centric network (ICN) change the way in which users receive content via the Internet, and the way data moves across networks? The aim of this blog post is to explore how these changes will benefit businesses and everyday users of the Internet.

What does ICN mean for the way in which content is promoted or advertised?

In the current Internet, marketing and advertising is difficult. At best, it can be “targeted” using a fairly manual process, linking individuals in a database to adverts or email campaigns according to perceived interest levels. At worst, a scattergun approach is used, much like offline advertising using posters and flyers. Facebook are trying to tackle this within their network, but what if users could set preferences which allow their viewing habits to be tracked according to the kinds of information they regularly search for and want to find? This implicit and specific data could open up many possibilities for all kinds of companies to develop new marketing strategies and business models. For example, if advertisements are targeted to users’ interests, they may cease to be a nuisance and could instead become useful revenue generators for viewers and hosts alike.

Jan 21, 2011
 

For the ReArch 2010 workshop at ACM CoNEXT in Philadelphia, we (that’s me and my Hungarian colleague Gergely Biczok) presented first thoughts on the issue of price differentiation in a Future Internet.

We argue that the current Internet experience (from a pricing perspective) is that of separately paying the truck driver when you buy goods in a supermarket. Price differentiation, however, is quite commonplace in other industries. You pay differently for flights, depending on the time of booking and the class, obviously. The same goes for train tickets, or discounts in supermarkets (buy one, get one free). But why is it not commonplace in the Internet?

In the paper, we argue that the resource sharing paradigm of the current Internet, with its opaque bit transfer service, largely stands in the way of differentiating product offerings. Where wanted, differentiation requires rather expensive out-of-band signalling frameworks, which are often only implemented for premium services (if at all).

How does this relate to the information-centric work we are doing in PURSUIT? We argue in the paper that an architecture similar to that of PURSUIT would allow for an in-band solution for price differentiation – the labels for information literally become price labels. The scoping concept of PURSUIT provides an additional powerful tool to aggregate pricing and to tie the information concepts (reflected in the scoping and labelling structure being used for a particular application or area) to appropriate pricing concepts associated with the products being offered.

This is a bold argument, and we openly admit (in the paper and here) that much remains to be done in this space. Scalability and performance are obvious issues. How to build an efficient clearinghouse is certainly an interesting question. But the socio-economic issues are equally important and interesting. The obvious one is how this relates to the infamous network neutrality debate. What is the role of regulation in providing a common basis of openness while not standing in the way of differentiating offerings through more flexible pricing schemes?

All of these are interesting directions of research, some of which we will address in PURSUIT, while for others we will seek collaboration with external experts.

In case you are interested in the paper itself, you can find it here.
