Ebook: Towards the Future Internet
This book contains papers describing the major scientific achievements supported by European funding in the area of the Future Internet. It is published in the context of FIA, the Future Internet Assembly [1], which is structured to permit interactions across technical domains among researchers and other stakeholders active in Future Internet research. FIA holds two meetings per year and calls on those involved in relevant funded research projects to participate actively and to steer its work. On 31 March 2008 the Bled declaration “Towards a European approach to the Future Internet” [2] was officially presented at the FIA Conference in Bled, Slovenia: “In the future, even more users, objects, services and critical information infrastructures will be networked through the Future Internet, which will underpin an ever larger share of our modern and global economies. It is therefore time to strengthen and focus European activities on the Future Internet to maintain Europe's competitiveness in the global marketplace. A significant change is required, and the European Internet scientific and economic actors, researchers, industrialists, SMEs, users, service and content providers, now assert the urgent necessity to redesign the Internet, taking a broad multidisciplinary approach, to meet Europe's societal and commercial ambitions.”
[1] For more information see http://www.future-internet.eu
[2] For more information see http://www.future-internet.eu/publications/bled-declaration.html
Nobody could have foreseen 20 years ago the important role the Internet plays today as one of the cornerstones of our society, supporting social and economic interactions at a global scale. The Internet's critical role will become even more evident in the future, as more and more of the activities that permeate our daily life come to rely on it, driven by emerging novel Information and Communication Technologies.
Trying to foresee how the Internet will evolve over the next 20 years is therefore a great challenge. An even greater challenge is to guide research and innovation in this area so that our society benefits from them. It is not surprising that research on the future of the Internet has become a strategic priority all over the world, with important national initiatives in the USA, Japan, Korea and China as well as in many European Member States. The European Commission, through its Seventh Framework Research Programme, dedicates about 20% of its budget to research related to the Future Internet, corresponding to more than a billion euros of funding.
The adoption pace of Internet technologies has accelerated. At the same time, as a society we face three emerging challenges: the need to recover from a period of economic crisis, the need to better manage energy resources and the need to mitigate the effects of climate change; these three interrelated needs call for sustainable solutions. The anticipated technological developments of the Future Internet and the trends towards smart infrastructures (in energy, mobility, health, work, environment, etc.) provide Europe with an opportunity to progress towards a sustainable economy and society. The foundations for these technological developments lie in the efforts of researchers in Europe and worldwide.
This book, the second of a series, tries to capture the emerging trends in Future Internet research, as they are presented through European funded research activities.
Directorate General for Information Society and Media
Mário Campolargo, Emerging Technologies & Infrastructures
Luis Rodríguez-Roselló, Converged Networks & Services
The concept of Economic Traffic Management (ETM) encompasses various techniques for optimizing overlay networks, considering both underlay and overlay network performance requirements as well as the resulting economic implications for ISPs. This work presents several mechanisms through an overall ETM System (ETMS), identifying the possibilities for synergies between mechanisms, both in the sense of complementary decision grounds and in the sense of shared functionality and components. The paper describes the core ETMS architecture and how the various mechanisms are instantiated within it. It continues with a discussion of the flexibility and modularity of this architecture, which allow for the accommodation of synergies. Finally, it presents selected results from test-bed trials of the ETMS and a dedicated discussion of the incentives behind these ETM mechanisms.
The network research community is actively working on the design of the Future Internet, aiming to solve the scalability issues that the current Internet is facing. It is widely recognized that the Locator/ID Split paradigm fits well the requirements for a new, more scalable Future Internet architecture. Both academia and industry have already produced several technical proposals, each with its own peculiarities and merits. However, despite this effort, it is still unclear how the Locator/ID Split will be deployed, what the drivers of its adoption are, who will adopt it, and how the adoption process will most likely evolve over time. In this paper, we answer these questions by presenting an adoption model for the Locator/ID Split paradigm.
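The core of the Locator/ID Split idea can be sketched in a few lines: a host keeps a stable identifier while its topological locator may change, and a mapping system resolves one to the other. The class and names below are purely illustrative, not taken from any specific proposal:

```python
class MappingSystem:
    """Toy identifier-to-locator mapping (all names are illustrative)."""
    def __init__(self):
        self._map = {}

    def register(self, host_id, locator):
        # A host (re)registers its current locator, e.g. after moving
        # to a different attachment point.
        self._map[host_id] = locator

    def resolve(self, host_id):
        # Packets are addressed to identifiers; border elements resolve
        # them to locators for topological forwarding.
        return self._map.get(host_id)

ms = MappingSystem()
ms.register("host-A", "locator-ISP1")
assert ms.resolve("host-A") == "locator-ISP1"
# Mobility: the identifier stays stable while the locator changes.
ms.register("host-A", "locator-ISP2")
assert ms.resolve("host-A") == "locator-ISP2"
```

Real proposals differ precisely in how this mapping is stored, distributed and cached, which is one reason deployment paths are hard to predict.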
Multipath TCP (MPTCP) is a resource pooling mechanism which splits data across multiple subflows (paths) and ensures reliable data delivery. Although MPTCP is a relatively small technical change to the TCP protocol, providing improved resilience, throughput and session continuity, it will have considerable impact on the value networks and business models of Internet access provisioning. In this paper, we evaluate the viability of different MPTCP deployment scenarios and present the new ISP business models that the flexibility of MPTCP might enable. This allows the research community to focus on the most promising deployment scenarios.
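The resource pooling idea behind MPTCP can be illustrated with a toy scheduler: the sender assigns successive segments of one byte stream to different subflows, and the receiver reassembles the original order. This is a minimal sketch, not the actual MPTCP scheduler (which is adaptive and congestion-aware):

```python
def split_across_subflows(data: bytes, n_subflows: int, chunk: int = 4):
    """Toy round-robin scheduler: assign successive chunks to subflows."""
    subflows = [[] for _ in range(n_subflows)]
    for i in range(0, len(data), chunk):
        subflows[(i // chunk) % n_subflows].append(data[i:i + chunk])
    return subflows

def reassemble(subflows):
    """Receiver interleaves the chunks back into one ordered stream."""
    out, queues = [], [list(s) for s in subflows]
    i = 0
    while any(queues):
        q = queues[i % len(queues)]
        if q:
            out.append(q.pop(0))
        i += 1
    return b"".join(out)

data = b"hello multipath tcp!"
assert reassemble(split_across_subflows(data, 3)) == data
```

The application still sees a single reliable byte stream; only the path usage underneath changes, which is why the protocol change is small while the business implications are not.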
“Utility” is by no means a new concept in ICT. At the time of writing, the European Union is putting forward the argument to extend “universal service” to broadband communication services. The universal service argument may in the future be extended beyond broadband to other Internet services, or to services running on top of the communications infrastructure of the Internet. This paper considers the economic underpinnings of utility services in the Future Internet context. It examines a number of relevant economic models and basic economic assumptions in an attempt to answer the question: are utility services economically viable? It identifies several key issues and spells out some implications for Future Internet service models on the basis of that analysis. The paper concludes that market forces alone are unlikely to be sufficient for utility service provision in ICT, since utility service markets have the intrinsic characteristics of a monopoly. We need new and bold economic models for the emerging and future Internet scenarios.
The complexity and interdependencies of deployed software systems have grown to the point where we can no longer make confident predictions about the security properties of those systems from first principles alone. It is also very difficult to correctly state all the relevant assumptions that underlie proofs of security, and attackers constantly seek to undermine these assumptions. Complexity metrics generally do not correlate with vulnerabilities, and security best practices are usually founded on anecdote rather than empirical data. In this paper, we argue that we will therefore need to embrace empirical methods from other sciences that face the same problem, such as physics, meteorology, or medicine. Building on previous promising work, we suggest a system that can deliver security forecasts much like climate forecasts.
Projects of the Future Internet Research & Experimentation (FIRE) initiative are building an experimental facility that shall serve the needs of Future Internet research and development. The main design principles are virtualization of resources and federation. Federation is a means to meet requirements from Future Internet research that cannot be met by individual testbeds. In particular, to support large scale experiments utilizing heterogeneous resources, a federation of experimental facilities is needed. While several initiatives are currently establishing large scale testbeds, the mechanisms for federating such environments across the boundaries of administrative domains are unclear. This is due to the lack of established and agreed federation models, methods, and operational procedures. In this article we propose a federation model that defines high level conceptual entities for federating resources across administrative domains. A first prototype implementation of the functional components derived from the model has been realized and evaluated. This is demonstrated by the discussion of use cases that depict the flexibility of the proposed approach. The model can guide future testbed developments and harmonize the currently scattered efforts across several FIRE projects in order to establish an agreed resource federation framework. This framework shall be the basis for Future Internet research and experimentation in Europe and provide experimental facility services to academia and industry.
Virtualization in both computer systems and network elements is increasingly becoming part of the Internet. Apart from posing new challenges, virtualization enables new functionalities and hints at a future Internet architecture that contains a physical, but polymorphic, substrate on which various parallel infrastructures can be created on demand. This article explores such a new “architecture of virtual infrastructures”. The FEDERICA project and recent evolutionary trends in the National Research and Education Networks are taken as examples of this new type of network infrastructure, which is an evolution of the classic network.
Most of the Internet's traffic is data-oriented, yet the Internet is based on sending messages to end points. As a result, efficient multicast is difficult to implement at Internet scale, and various attacks such as DoS and spam are easy to launch. In this paper, we describe a clean-slate approach to publish/subscribe based networking: the Publish/Subscribe Internet Routing Paradigm (PSIRP). PSIRP aims to implement publish/subscribe networking without relying on existing networking protocols such as IP. Preliminary results suggest that a clean-slate publish/subscribe approach is flexible and scales to the Internet.
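The inversion at the heart of publish/subscribe networking is that receivers declare interest in named publications and data flows only to them, rather than senders pushing messages to endpoint addresses. The following minimal sketch (not PSIRP's actual rendezvous protocol; names are invented for illustration) shows why unsolicited traffic such as spam has no natural delivery path in this model:

```python
class PubSubFabric:
    """Toy rendezvous fabric: data is named by a publication id,
    not addressed to hosts."""
    def __init__(self):
        self.subscribers = {}   # pub_id -> list of callbacks
        self.store = {}         # pub_id -> latest published data

    def subscribe(self, pub_id, callback):
        self.subscribers.setdefault(pub_id, []).append(callback)
        if pub_id in self.store:          # a late joiner still gets the data
            callback(self.store[pub_id])

    def publish(self, pub_id, data):
        # Delivery happens only towards declared interest; data with no
        # subscribers is simply not forwarded anywhere.
        self.store[pub_id] = data
        for cb in self.subscribers.get(pub_id, []):
            cb(data)

fabric = PubSubFabric()
received = []
fabric.subscribe("sensor/42", received.append)
fabric.publish("sensor/42", "21.5C")
fabric.publish("spam/1", "unwanted")      # nobody subscribed: goes nowhere
assert received == ["21.5C"]
```

Multicast also becomes natural here: many subscriptions to the same publication id are served by the same dissemination tree rather than by per-receiver unicast copies.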
This paper introduces an identity engineered approach for the Future Internet that puts usability and privacy concerns at the core. The goal is to achieve a solution that scales to billions of users and entities that could want to connect to the Internet. Digital Identities that conceptualize the properties of users under their control play a key role in making sure the needs of the user become part of the target architecture. The concept of intentity makes the initiation of communication and services depend on the intention of the user or, more generally, the entity.
The current challenge for network systems is the reduction of human intervention in the fundamental management functions and the development of mechanisms that render the network capable of autonomously configuring, optimizing, protecting and healing itself, while handling the emerging complexity. The in-network cognitive cycle will allow the continuous improvement of the management functions of individual network elements and, collectively, of a whole network system. This paper proposes the software components for the engineering of an innovative self-managed future Internet system that will support visionary research through experimentation.
The Autonomic Internet project approach relies on abstractions and distributed systems of a five-plane solution for the provision of Future Internet services (OSKMV): the Orchestration, Service Enablers, Knowledge, Management and Virtualisation planes. This paper presents a practical viewpoint on the manageability of virtual networks, exercising the components and systems that integrate this approach and that are being validated. The paper positions the distributed systems and networking services that make up this solution, focusing on the provision of Future Internet services for self-configuration and self-performance management scenarios.
Service clouds are a key emerging feature of the Future Internet, providing a platform on which to execute virtualized services. To operate a service cloud effectively, there needs to be a monitoring system that provides data on the actual usage of, and changes in, the resources of the cloud and of the services running in it. We present the main aspects of Lattice, a new monitoring framework that has been specially designed for monitoring resources and services in virtualized environments. Finally, we discuss the issues related to the federation of service clouds, and how this affects monitoring in particular.
Distributed denial of service (DDoS) is considered one of the most serious threats to emerging cloud computing infrastructures. It aims at denying access to the cloud infrastructure by making it unavailable to its users. This can cause significant economic and organizational damage, depending on the type of applications running on the cloud that become unavailable. This paper proposes an extension to a federated cloud architecture that uses the scalability and migration of virtual machines to build scalable defenses against cloud DDoS attacks. The architecture is validated by showing how three DDoS attack scenarios are handled by the DDoS countermeasures.
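The underlying defense principle, absorbing an attack by elastically adding capacity, can be sketched as a simple controller. This is a hypothetical decision rule for illustration only, not the paper's architecture; thresholds, action names and the federation fallback are all assumptions:

```python
def plan_mitigation(load_per_node, capacity, spare_nodes):
    """Toy elastic-defense controller: when a node is overloaded, spawn
    replica VMs locally (scale out); if local spares run out, fall back
    to migrating/replicating into a federated partner cloud."""
    actions = []
    for node, load in load_per_node.items():
        if load > capacity:
            # Extra replicas needed so each instance stays within capacity.
            needed = -(-load // capacity) - 1   # ceil(load/capacity) - 1
            if spare_nodes >= needed:
                actions.append((node, "scale_out", needed))
                spare_nodes -= needed
            else:
                actions.append((node, "migrate_to_federated_cloud", needed))
    return actions

# A node at 250% of capacity needs two extra local replicas:
assert plan_mitigation({"web1": 250}, 100, 5) == [("web1", "scale_out", 2)]
```

The point of federating clouds is visible in the fallback branch: a single provider's spare capacity bounds what a volumetric attack can be absorbed locally, while a federation pools spare capacity across administrative domains.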
The integration of different networks enables sustainable applications, but specific vulnerabilities must also be faced. New network components are often introduced to efficiently adapt different technologies to one another, but in some cases they can be exploited as sources of anomalous events in the interconnected networks, attributable to security attacks. This paper deals with security for heterogeneous and interoperable communication networks including a satellite segment, focusing on the application of an Intrusion Detection System (IDS) to the target scenario. The baseline envisages the presence of Performance Enhancing Proxies (PEPs) at the edges of satellite links, which has the twofold effect of improving TCP performance while also introducing a PEP-related vulnerability due to the violation of the end-to-end semantics of TCP. The paper addresses the above-mentioned issue through the realization of a test bed including a real geostationary satellite link operated by Telespazio. The outcomes of the experiments show the effectiveness of the proposed IDS.
The variety of technologies and standards in the domain of service-based systems makes it difficult to build architectures that fit specific project contexts. A reference architecture accompanied by guidelines for deriving context-specific architectures for service-based systems can ease this problem. The NEXOF-RA project is defining a reference architecture for service-based systems that serves as a construction kit for deriving architectures for a particular project context. Experience in developing the reference architecture over the last two years has shown that the service-oriented context results in different and sometimes contradictory demands on the reference architecture. Therefore, the development of a single, integrated reference architecture is not feasible. Instead, for constructing the reference architecture, the project has chosen a pattern-based approach that allows the consideration of different types and demands of service-based systems. Thus it can deal with the contradictory demands of different types of service-based systems and is extensible to include future trends in service-based systems. This paper presents the structure of the pattern-based reference architecture and explains how it addresses the needs of a reference architecture for service-based systems.
Establishing Web services as resources on the Web opens up productive, but challenging new possibilities for open, highly dynamic and loosely-coupled service economies. In addition, lifting services to the semantic level provides a sophisticated means for automating the main service-related management processes and the composition of arbitrary functionalities into new services and businesses. In this article we present the SOA4All approach to a global service delivery platform. By means of semantic technologies, SOA4All facilitates the creation of service infrastructures and increases the interoperability between large numbers of distributed and heterogeneous functionalities on the Web.
Human behavior, both individual and social, is aimed at maximizing certain objective functions, and this is directly reflected in energy dynamics. New issues are now emerging, such as the unpredictability of generation from some renewable sources and the new technologies enabling real-time, energy-optimized use in smart cities. Here the role of the Future Internet in smart grids is addressed, in particular highlighting how anticipatory knowledge of future energy consumption dynamics may be promptly and effectively exchanged between competing actors.
Despite all the uncertainties regarding the architectures, protocols and technologies to be used in an Internet of the future, it is clear that it will be shaped for humans, carrying with it major social and economic impact. In this sense, aiming to improve the user-perceived Quality of Experience in the Internet of the Future, our paper presents common groundwork for designing a unified generic human profile structure and a corresponding architecture capable of seamlessly interacting with a myriad of things and services, independently of their associated technologies. Moreover, supported by its reality-, social- and context-awareness design principles, it will enable human behavior to be leveraged by any entity present in any next-generation ecosystem.
Current research in pervasive computing, as well as in the fields of mobile telecommunications and device manufacturing, is opening the way for convergence between mobile telecommunications and the traditional Internet towards the Future Internet. The ubiquitous computing paradigm integrates information processing into the objects that surround us in our environment and permits both global control of these objects and global access to the information they can provide. One aspect of this is the notion of smart spaces, where the integration of communication and computational devices is being used to create intelligent homes, offices and public areas that provide intelligent support for the user. The Persist project (PERsonal Self-Improving SmarT spaces) is investigating a novel approach to this convergence through the introduction of the concept of self-improving Personal Smart Spaces.
A flexible wavelet-based scalable video coding framework (W-SVC) is proposed to support the future media Internet, specifically content delivery to different display terminals through heterogeneous networks such as the Future Internet. A scalable video bit-stream can easily be adapted to the required spatio-temporal resolution and quality, according to the transmission and user context requirements. This enables content adaptation and interoperability in the Internet networking environment. Adaptation of the bit-stream is performed in the compressed domain, by discarding the bit-stream portions that represent higher spatio-temporal resolution and/or quality than desired. Thus, the adaptation is of very low complexity. Furthermore, the embedded structure of a scalable bit-stream provides a natural solution for protecting the video against the transmission errors inherent in content transmission over the Internet. The practical capabilities of W-SVC are demonstrated using error-resilient transmission and surveillance applications. The experimental results show that the W-SVC framework provides a highly flexible architecture with respect to different applications in the future media Internet.
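Compressed-domain adaptation is essentially a filtering operation: enhancement-layer portions above the target resolution or quality are dropped from the packetized bit-stream, with no decoding or re-encoding. The sketch below illustrates that idea with an invented packet layout; it is not the W-SVC bit-stream format:

```python
def adapt_bitstream(packets, max_spatial, max_temporal, max_quality):
    """Toy compressed-domain adaptation: keep only the layers at or below
    the target spatio-temporal resolution and quality."""
    return [p for p in packets
            if p["spatial"] <= max_spatial
            and p["temporal"] <= max_temporal
            and p["quality"] <= max_quality]

# Hypothetical layered stream: a base layer plus three enhancements.
stream = [
    {"spatial": 0, "temporal": 0, "quality": 0, "data": "base layer"},
    {"spatial": 1, "temporal": 0, "quality": 0, "data": "HD enhancement"},
    {"spatial": 0, "temporal": 1, "quality": 0, "data": "high-fps enhancement"},
    {"spatial": 0, "temporal": 0, "quality": 1, "data": "SNR enhancement"},
]
# A small-screen terminal drops the spatial enhancement but keeps the rest:
mobile = adapt_bitstream(stream, 0, 1, 1)
assert len(mobile) == 3 and all(p["spatial"] == 0 for p in mobile)
```

Because the base layer always survives adaptation, the same embedded structure also supports unequal error protection: the base layer is protected most strongly, and losing enhancement packets only degrades, rather than breaks, the decoded video.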
This document provides an overview of the MPEG Extensible Middleware (MXM), one of ISO/IEC MPEG's latest achievements, defining an architecture and corresponding application programming interfaces (APIs) which enable accelerated media business developments. The paper describes the vision behind MXM, its architecture, and a high level overview of the API. Additionally, example MXM applications are given.
In most cases, the current Internet architecture treats content and services simply as bits of data transported between end-systems. While this relatively simple model of operation had clear benefits when users interacted with well-known servers, the recent evolution of the way the Internet is used makes it necessary to create a new model of interaction among the entities representing content. In this paper we study the limitations of the current Internet and propose a new model, in which the smallest addressable unit is a content object, regardless of its location.
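One common way to make content addressable independently of location, sketched here as an assumption rather than as the paper's specific proposal, is to name an object by a digest of its bytes, so any replica anywhere yields the same name and can be verified on arrival:

```python
import hashlib

class ContentStore:
    """Toy content-centric store: objects are addressed by the hash of
    their bytes, independent of which host holds a copy."""
    def __init__(self):
        self._objects = {}

    def put(self, data: bytes) -> str:
        # The object's name is derived from its content, not its location.
        name = hashlib.sha256(data).hexdigest()
        self._objects[name] = data
        return name

    def get(self, name: str) -> bytes:
        return self._objects[name]

store = ContentStore()
name = store.put(b"a content object")
assert store.get(name) == b"a content object"
# Any replica storing the same bytes produces the same name, so the
# network can serve the nearest copy and the receiver can verify it.
assert name == hashlib.sha256(b"a content object").hexdigest()
```

Contrast this with the current model, where the address (a host and path) says nothing about what the bytes are, and two identical copies on different servers are unrelated as far as the network is concerned.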
The Future Internet is not envisaged to be simply a faster way to go online. What is expected to fundamentally change the way people use the Internet is the ability to produce, and seamlessly deliver and share, their own multimedia content. In this paper, we introduce and analyse innovative architecture components that offer scalable media content delivery, increasing robustness, enriching the PQoS and protecting the content from unauthorized access over heterogeneous physical architectures and P2P logical overlay network topologies. The technology pillars on which the system is based are described: multi-layered/multi-viewed content coding, multi-source/multi-network streaming and adaptation, content protection, and lightweight asset management.
The integration of the physical world into the digital world is an important requirement for a Future Internet, as an increasing number of services and applications rely on real-world information and interaction capabilities. Sensor and actuator networks (SAN) are the current means of interacting with the real world, although most current deployments represent closed, vertically integrated solutions. In this paper we present an architecture that enables efficient integration of these heterogeneous and distributed SAN islands into a homogeneous framework for real-world information and interactions, contributing to a horizontal reuse of the deployed infrastructure across a variety of application domains. We present the main concepts, their relationships and the proposed real-world resource-based architecture. Finally, we outline an initial implementation of the architecture based on current Internet and web technologies.