Ebook: Towards the Future Internet
The Internet is a remarkable catalyst for creativity, collaboration and innovation, providing us today with amazing possibilities that just two decades ago would have been impossible to imagine; and yet we are not amazed! It was only 20 years ago that Tim Berners-Lee invented the Web, and two years later CERN publicized the new World Wide Web project. If one could take a trip back in time and tell people that today even a child can access for free a satellite image of any place on Earth, interact with other people from anywhere and query trillions of data items from all over the globe with a simple click on a computer, they would have called it science fiction!
Our challenge today is to prepare a similar trip into the future: what will the Internet be in ten or twenty years from now, and what more amazing things will it offer to people? But before trying to see what the future will look like, we need to consider some important challenges that the Internet faces today.
If we consider the Internet as one big machine, we should note that it has been working all these years without a major overall failure, showing remarkable resilience for a human-made technology. However, the Internet provides its services on the basis of "best effort" (i.e. there is no guarantee that those services will be delivered) and "over-provisioning" (i.e. to be sure of a certain quality of service, a significant amount of resources must be kept available at all times). The Internet was never designed to serve massive-scale applications with guaranteed quality of service and security. Emerging applications such as streaming high-quality video and running 3D applications face severe constraints to run seamlessly anytime, everywhere, with good quality of service. Thus, if we want to continue the growth, improve the quality and provide affordable basic access, new business models have to be put in place to make the Internet sustainable.
European scientists have proved that they have been at the forefront of Internet research since the invention of the Web. But the challenges are huge and complex and cannot be dealt with in isolation. The European Future Internet Assembly is the vehicle for a fruitful scientific dialogue, bringing together the different scientific disciplines that contribute to Future Internet development, with scientists from more than 90 research projects funded to date with about 300 million euros under the 7th Framework Programme. Another 400 million euros will be made available in the near future. These amounts, coupled with private investments, bring the total investment to more than a billion euros, an important investment showing Europe's commitment to addressing the challenges of the future Internet.
This book is a peer-reviewed collection of scientific papers addressing some of the challenges ahead that will shape the Internet of the Future. The selected papers are representative of the research carried out by EU-funded projects in the field. European scientists are working hard to make the journey to the Future Internet as exciting and as fruitful as was the trip that brought us the amazing achievements of today. We invite you to read their visions and join them in their effort so Europe can fully benefit from the exciting opportunities in front of us.
Mário Campolargo, Director F – Emerging Technologies and Infrastructures
João Da Silva, Director D – Converged Networks and Services
Socio-economics aims to understand the interplay between society, the economy, markets, institutions, self-interest and moral commitments. It is a multidisciplinary field using methods from economics, psychology, sociology, history and even anthropology. The socio-economics of networks has been studied for over 30 years, but mostly in the context of social networks rather than the underlying communication networks. The aim of this paper is to present and discuss challenges and perspectives related to "socio-economic" issues in the Future Internet. It is hoped that this will lead to new insights on how to structure the architecture and services of the Internet of the future.
This paper analyzes two challenges of Internet evolution by evaluating the dynamics of change against the outstanding aspects of the current situation. To understand these challenges, a model that exposes the factors driving the dynamics of Internet evolution is discussed first. The conclusion drawn is that attitude and technology are the two vectors that lead the evolution of the Internet. Secondly, the contemporary context with regard to technology development and information-society attitudes is analyzed. Finally, it is concluded that the necessity of a paradigm shift in the attitude of organizations and the threat of unsustainable technology development are two of the main challenges on the way towards the future Internet.
Socio-economic aspects play an increasingly important role in the Future Internet. To enable a TripleWin situation for the involved players, i.e. the end users, the ISPs and telecommunication operators, and the service providers, a new, incentive-based concept referred to as Economic Traffic Management (ETM) is proposed. It aims at reducing costs within the network while improving the Quality-of-Experience (QoE) for end users. In particular, peer-to-peer (P2P) overlay applications generate large costs due to inter-domain traffic. ETM solution approaches have to take into account (a) the traffic patterns stemming from the overlay application, (b) the charging models for transit traffic, and (c) the applicability and efficiency of the proposed solution. The complex interaction between these three components, and its consequences, is demonstrated with selected examples. As a result, it is shown that different ETM approaches have to be combined for an overall solution. To this end, the paper derives functional and non-functional requirements for designing ETM and provides a suitable architecture enabling the implementation of a TripleWin solution.
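The interplay between overlay traffic patterns and transit charging can be illustrated with a toy model. The sketch below compares the transit cost of random versus locality-aware peer selection under a simple volume-based tariff; the traffic volumes, tariff and function names are invented for the example and are not taken from the ETM proposal itself:

```python
# Toy model of the interplay between overlay traffic patterns and transit
# charging: compare the transit cost of random vs locality-aware peer
# selection under a simple volume-based tariff. All numbers, names and
# the tariff itself are illustrative assumptions, not from the paper.

def transit_cost(chunk_sources, local_domain, price_per_gb):
    """Only chunks fetched from peers outside the local domain are charged."""
    inter_domain_gb = sum(size_gb for domain, size_gb in chunk_sources
                          if domain != local_domain)
    return inter_domain_gb * price_per_gb

# 10 chunks of 1 GB each: random selection fetches 8 from remote domains ...
random_selection = [("remote", 1.0)] * 8 + [("local", 1.0)] * 2
# ... while a locality-aware ETM policy keeps 7 of the 10 fetches local.
etm_selection = [("remote", 1.0)] * 3 + [("local", 1.0)] * 7

cost_random = transit_cost(random_selection, "local", price_per_gb=0.05)
cost_etm = transit_cost(etm_selection, "local", price_per_gb=0.05)
```

Keeping traffic inside the domain lowers the operator's transit bill without reducing the number of chunks the peer downloads, which is the incentive-compatibility the TripleWin idea relies on.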
The problem of supporting the secure execution of potentially malicious third-party applications has received a considerable amount of attention in the past decade. In this paper we describe a security architecture for Web 2.0 applications that supports the flexible integration of a variety of advanced technologies for such secure execution of applications, including run-time monitoring, static verification and proof-carrying code. The architecture also supports the execution of legacy applications that have not been developed to take advantage of our architecture, though it can provide better performance and additional services for applications that are architecture-aware. A prototype of the proposed architecture has been built that offers substantial security benefits compared to standard (state-of-practice) security architectures, even for legacy applications.
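To give a flavour of the run-time monitoring technique mentioned above, the following minimal sketch shows an inline reference monitor that consults a declarative policy before every sensitive operation of an untrusted component. All names here (`Policy`, `guarded`, the operation labels) are illustrative assumptions, not part of the architecture described in the paper:

```python
# Minimal sketch of the run-time monitoring branch of such an architecture:
# an inline reference monitor consults a declarative policy before every
# sensitive operation of an untrusted component. All names are
# illustrative, not taken from the paper.

class PolicyViolation(Exception):
    pass

class Policy:
    def __init__(self, allowed_ops):
        self.allowed_ops = set(allowed_ops)

    def check(self, op):
        if op not in self.allowed_ops:
            raise PolicyViolation("operation %r denied by policy" % op)

def guarded(policy, op):
    """Wrap a sensitive operation so the policy is consulted on each call."""
    def wrap(fn):
        def inner(*args, **kwargs):
            policy.check(op)               # the run-time monitoring step
            return fn(*args, **kwargs)
        return inner
    return wrap

policy = Policy(allowed_ops={"read"})      # this application may only read

@guarded(policy, "read")
def read_resource(name):
    return "contents of " + name

@guarded(policy, "write")
def write_resource(name, data):
    return "written"
```

Static verification and proof-carrying code, also supported by the architecture, would complement this purely dynamic check.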
Wireless Sensor Networks (WSNs) are quickly gaining popularity because they are potentially low-cost solutions that can be used in a variety of application areas. However, they are also highly susceptible to attacks, due to both the open and distributed nature of the network and the limited resources of the nodes. In this paper, we propose a modular, scalable, secure and trusted networking protocol stack, able to offer self-configuration and secure roaming of data and services over multiple administrative domains and across insecure infrastructures of heterogeneous WSNs. The focus is on trusted route selection, secure service discovery and intrusion detection, while critical parts of the security functionality may be implemented in low-cost reconfigurable hardware modules, as a defense measure against side-channel attacks.
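Trusted route selection can be sketched as a shortest-path problem: if each link carries a trust value in (0, 1], the most trusted route maximises the product of link trusts, which Dijkstra's algorithm finds on `-log(trust)` edge weights. The topology and trust values below are invented for illustration and do not come from the proposed protocol stack:

```python
import heapq
import math

# Sketch of trusted route selection: the most trusted route maximises the
# product of link trust values, found via Dijkstra on -log(trust) weights.
# The topology and trust values are made up for the example.

def most_trusted_route(links, src, dst):
    graph = {}
    for a, b, trust in links:
        graph.setdefault(a, []).append((b, trust))
        graph.setdefault(b, []).append((a, trust))
    best = {src: 0.0}          # cost = -log(product of trusts so far)
    prev = {}
    heap = [(0.0, src)]
    while heap:
        cost, node = heapq.heappop(heap)
        if node == dst:
            break
        if cost > best.get(node, math.inf):
            continue           # stale heap entry
        for nxt, trust in graph.get(node, []):
            c = cost - math.log(trust)
            if c < best.get(nxt, math.inf):
                best[nxt] = c
                prev[nxt] = node
                heapq.heappush(heap, (c, nxt))
    if dst not in best:
        return None, 0.0
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    path.reverse()
    return path, math.exp(-best[dst])

links = [("A", "B", 0.9), ("B", "D", 0.9),   # solid two-hop route
         ("A", "C", 0.95), ("C", "D", 0.5)]  # one very weak link
route, trust = most_trusted_route(links, "A", "D")
```

The two-hop route via B wins (trust 0.9 x 0.9 = 0.81) over the route via C, whose single weak link caps its trust at 0.475.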
Identity management has the potential to play a major role in the Future Internet as an enabling technology that integrates services with transport infrastructures. The use of partial identities can improve users' control over their personal data and enhance privacy, but data, once revealed, can be used by service providers beyond what is strictly needed, for example for user profiling. Without adequate precautions, service and identity providers could pick up profile and service usage data during the authentication process. Such privacy concerns have gained attention of late, and legal constraints are expected to follow to further limit data transfer from identity providers to service providers. The paper discusses legal constraints from a European perspective, taking into account the possibility of network operators acting as identity providers, and approaches to enhancing privacy. We show how European legislation and user needs for privacy affect the design of Identity Management Systems and outline consequences as well as opportunities and directions for future research.
The Pan-European laboratory, Panlab, is based on a federation of distributed testbeds that are interconnected, providing access to required platforms, networks and services for broad interoperability testing and enabling the trial and evaluation of service concepts, technologies, system solutions and business models. In this context a testbed federation is the interconnection of two or more independent testbeds for the temporary creation of a richer environment for testing and experimentation, and for the increased multilateral benefit of the users of the individual independent testbeds. The technical infrastructure that supports the federation is based on a web service interface through which available testing resources can be queried, provisioned and controlled. Descriptions of the available resources are stored in a repository, and a processing engine identifies, locates and provisions the requested testing infrastructure, based on the testing users' requirements, in order to dynamically create the required testing environment. The concept is implemented using a gateway approach at the border of each federated testbed. Each testbed is an independent administrative domain and implements a reference point specification in its gateway.
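The repository-plus-processing-engine idea can be sketched in a few lines: resources are described as attribute sets, and the engine selects, for each requirement, one matching resource, possibly drawn from different federated testbeds. The resource names and attributes below are invented for illustration; the real Panlab interface is a web service, not a Python function:

```python
# Sketch of the repository / processing-engine idea: testbed resources
# are described as attribute dictionaries, and the engine picks one
# matching resource per requirement, possibly from different federated
# testbeds. Resource names and attributes are invented for illustration.

repository = [
    {"id": "tb1/umts-core", "testbed": "tb1", "type": "network", "tech": "umts"},
    {"id": "tb2/ims-server", "testbed": "tb2", "type": "service", "tech": "ims"},
    {"id": "tb3/wifi-mesh", "testbed": "tb3", "type": "network", "tech": "wifi"},
]

def provision(requirements):
    """Return one matching resource id per requirement, or None if unmet."""
    plan = []
    for req in requirements:
        match = next((r for r in repository
                      if all(r.get(k) == v for k, v in req.items())), None)
        if match is None:
            return None       # the requested environment cannot be built
        plan.append(match["id"])
    return plan

# A trial needing a UMTS access network plus an IMS service platform
plan = provision([{"type": "network", "tech": "umts"},
                  {"type": "service", "tech": "ims"}])
```

Note that the resulting plan spans two testbeds, which is exactly the "richer environment" that federation is meant to create.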
Socio-economic aspects are not intrinsic to the current Internet architecture and so they are handled extrinsically. This has led to increasing distortions and stresses; two examples are inter-domain scaling problems (a symptom of the way multihoming and traffic engineering are handled) and deep packet inspection (a symptom of the lack of resource accountability). The Trilogy architecture jointly integrates both the technical and socio-economic aspects into a single solution: it is thus designed for tussle. A Future Internet that follows the Trilogy vision should automatically be able to adapt to the changes in society's demands on the Internet as they occur without requiring permanent redesign.
In this paper we describe several approaches to address the challenges of the network of the future, especially from a mobile and wireless perspective. Our main hypothesis is that the Future Internet must be designed for the applications and transport media of the 21st century, an environment vastly different from the one the initial Internet was designed for. One major requirement is inherent support for mobile and wireless usage. A Future Internet should allow for the fast creation of diverse network designs and paradigms, and must also support their co-existence at run-time. We observe that a pure evolutionary path from the current Internet design is unlikely to be the fastest way, if possible at all, to satisfactorily address major issues that hamper network performance already today: the handling of mobile users, information access and delivery, wide-area sensor network applications, high management complexity and malicious traffic. We detail the scenarios and business use cases that guide the development in the FP7 4WARD project towards a framework for the Future Internet.
Despite its success, the Internet is suffering from several key design limitations, most notably the unification of endpoint locators and identifiers, and an imbalance of power in favor of the sender of information. The unfavourable consequences are that the full range of possibilities offered by the Internet may not be fully realized and trust in its proper operation has been significantly weakened. In this paper, we introduce the Publish/Subscribe Internet Routing Paradigm (PSIRP) and present an architectural redesign of the global Internet based on an information-centric publish/subscribe (pub/sub) communication model. Through its application of pub/sub communications and efficient network design emphasizing end-to-end trust, we believe that the PSIRP-reengineered Internet may resolve many of the problems plaguing the current Internet and provide a powerful and flexible network infrastructure with a high degree of resiliency.
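The receiver-driven nature of the pub/sub model can be sketched as follows: data is named by an information identifier and flows only to nodes that have subscribed to that identifier, so unsolicited traffic is simply dropped. Class and method names below are illustrative and not taken from the PSIRP specification:

```python
# Minimal sketch of an information-centric pub/sub service: data is named
# by an information identifier and is delivered only to subscribers of
# that identifier, so the receiver controls what it gets. Class and
# method names are illustrative, not taken from PSIRP.

class PubSubBroker:
    def __init__(self):
        self.subscribers = {}              # info_id -> list of callbacks

    def subscribe(self, info_id, callback):
        self.subscribers.setdefault(info_id, []).append(callback)

    def publish(self, info_id, data):
        # No subscriber, no delivery: unsolicited data is simply dropped.
        delivered = 0
        for callback in self.subscribers.get(info_id, []):
            callback(data)
            delivered += 1
        return delivered

broker = PubSubBroker()
inbox = []
broker.subscribe("doc:42", inbox.append)

n_wanted = broker.publish("doc:42", "wanted update")   # reaches 1 subscriber
n_spam = broker.publish("spam:1", "unwanted data")     # no interest, dropped
```

Inverting the sender-driven model in this way is what rebalances power towards the receiver: a sender can emit whatever it likes, but nothing is delivered without prior expressed interest.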
This paper presents a new autonomic management architectural model consisting of a number of distributed management systems running within the network, which are described with the help of five abstractions and distributed systems: Virtualisation, Management, Knowledge, Service Enablers and Orchestration Planes. The envisaged solution is applicable to the management design of Future Internet as a service and self-aware network, which guarantees built-in orchestrated reliability, robustness, context, access, security, service support and self-management of the communication resources and services.
The Internet today is a complex agglomerate of protocols that inherits the grown legacies of decades of patchwork solutions. Network management costs are exploding. Security problems are more pressing than ever, as organized crime discovers the Internet's value. Application and user demands are increasing, with mobile technologies and media content on the rise, while the number of participating nodes is growing just as fast. As a direct consequence, the recently triggered research on concepts for the future Internet has to cope with high complexity at the network layer and with the Internet's significance in society's mission-critical service infrastructures. As part of this effort, the research field of autonomic communication (AC) aims at network self-management and self-protection, following the autonomic computing paradigm introduced by IBM. We argue that the collaboration of network nodes provides a valuable way to address the corresponding challenges. After an in-depth analysis of the problem space, we outline in this paper the advantages and challenges of deploying collaboration strategies. We present the Node Collaboration System (NCS) developed at Fraunhofer FOKUS for the experimental investigation of collaboration strategies and show how the system can be used in a simple setting for network self-protection.
Whether revolutionary (clean-slate) or evolutionary approaches are followed when designing Future Multi-Service Self-Managing Networks, some holistic Reference Models for designing autonomic/self-managing features within node and network architectures are clearly required. Why Reference Models? First, to guide both approaches towards further architectural refinements and implementations; second, to establish common understanding and allow for standardizable specifications of architectural functional entities and interfaces. Now is the time for harmonization and consolidation of the ideas emerging (or achieved so far) from both approaches to Future Internet design, through the development of a common, unified and "standardizable" Reference Model for autonomic networking. This paper presents this vision. We also present the design principles of an emerging Generic Autonomic Network Architecture (GANA), a holistic Reference Model for autonomic networking that calls for contributions. We describe different "instantiations" of GANA that demonstrate its use for the management of a wide range of both basic and advanced functions and services, in various networking environments.
Future Internet services require access to large volumes of dynamically changing data records that are spread across different locations. With thousands or millions of distributed nodes storing the data, node crashes or temporary network failures are the norm rather than the exception, and it is therefore important to hide failures from the application.
We suggest using peer-to-peer (P2P) protocols to provide self-management among peers. However, today's P2P protocols are mostly limited to write-once/read-many data sharing. To extend them beyond typical file sharing, support for consistent replication and fast transactions is an important yet missing feature.
We present Scalaris, a scalable, distributed key-value store. Scalaris is built on a structured overlay network and uses a distributed transaction protocol. As a proof of concept, we implemented a simple Wikipedia clone with Scalaris which outperforms the public Wikipedia with just a few servers.
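The key placement underlying such a store can be sketched with a toy hash ring: each key lives on the node whose ring position follows the key's hash, plus the next replicas, so a single crash leaves a majority of replicas intact. This is a simplified illustration of DHT-style replication, not of Scalaris's actual transaction protocol:

```python
import hashlib

# Toy hash ring in the spirit of DHT-based key-value stores: each key is
# stored on the node whose position follows the key's hash, plus the next
# R-1 successors as replicas, and reads take the majority value. This
# illustrates key placement only, not Scalaris's transaction protocol.

R = 3  # replication degree, assumed for the example

def ring_hash(name):
    return int(hashlib.sha1(name.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, node_names):
        self.nodes = sorted(node_names, key=ring_hash)
        self.store = {n: {} for n in self.nodes}

    def replicas(self, key):
        """The R successive nodes responsible for key, clockwise."""
        kh = ring_hash(key)
        i = next((i for i, n in enumerate(self.nodes)
                  if ring_hash(n) >= kh), 0)
        return [self.nodes[(i + j) % len(self.nodes)] for j in range(R)]

    def put(self, key, value):
        for n in self.replicas(key):
            self.store[n][key] = value

    def get(self, key):
        # Majority read: return the value held by most replicas.
        values = [self.store[n].get(key) for n in self.replicas(key)]
        return max(set(values), key=values.count)

ring = Ring(["node-%d" % i for i in range(5)])
ring.put("wiki:Main_Page", "rev-1")

crashed = ring.replicas("wiki:Main_Page")[0]
del ring.store[crashed]["wiki:Main_Page"]   # one replica crashes
still_there = ring.get("wiki:Main_Page")    # majority still agrees
```

Because every key is replicated on R successive ring positions, losing one node hides the failure from the application, which is the property the abstract emphasises.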
Future Internet access technologies are expected to bring a high-performance connection to the main door of our homes. At the same time, new services and devices, for example digital audio-video (AV) terminals such as HDTV sets, and their increased use will require data transfers at speeds exceeding 1 Gbps inside the home by around 2012. Both drivers lead to the deployment of a high-quality, future-proof network inside homes, to avoid the somewhat ironic, but indeed possible, situation in which the Home Area Network (HAN) becomes the actual bottleneck of the full system. In this paper we review the requirements for next-generation HANs, showing that this environment may end up taking advantage of optical cabling solutions as an alternative to more traditional copper or pure wireless approaches.
Transparent optical networks are widely seen as the prime candidates for the core network technology of the future. These networks provide ultra-high-speed end-to-end connectivity with high quality of service and failure resiliency. A downside of transparency is the accumulation of physical impairments over long distances, which are difficult to mitigate using physical-layer techniques alone, and the novel challenges it raises in fault detection and localization. We present here the DICONET project: a set of techniques and algorithms implemented at multiple layers, culminating in the physical implementation of a transparent optical network on a testbed. DICONET consists of a set of impairment-aware network management algorithms, such as routing and wavelength assignment, monitoring, failure localization and rerouting, all integrated within a unified control plane, which extends known solutions to include impairment-awareness of the underlying layers.
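Impairment-aware routing and wavelength assignment (RWA) can be sketched as ordinary path search with an extra feasibility filter: candidate routes whose accumulated physical impairment exceeds a budget are discarded before a first-fit wavelength is assigned. The topology, impairment units and budget below are invented for the sketch and are far simpler than the DICONET algorithms:

```python
# Illustrative impairment-aware routing and wavelength assignment (RWA):
# routes whose accumulated physical impairment exceeds a budget are
# discarded, then the first wavelength free on every remaining link is
# assigned. Topology, impairment units and budget are invented here.

def simple_paths(links, src, dst, path=None):
    path = path or [src]
    if path[-1] == dst:
        yield path
        return
    for a, b in links:
        for u, v in ((a, b), (b, a)):       # links are bidirectional
            if u == path[-1] and v not in path:
                yield from simple_paths(links, src, dst, path + [v])

def rwa(links, impairment, in_use, src, dst, budget, n_lambdas=4):
    feasible = [p for p in simple_paths(links, src, dst)
                if sum(impairment[frozenset((p[i], p[i + 1]))]
                       for i in range(len(p) - 1)) <= budget]
    for p in sorted(feasible, key=len):      # prefer the shortest route
        hops = [frozenset((p[i], p[i + 1])) for i in range(len(p) - 1)]
        for lam in range(n_lambdas):         # first-fit wavelength
            if all(lam not in in_use.get(h, set()) for h in hops):
                return p, lam
    return None                              # blocked: no feasible route

links = [("A", "B"), ("B", "C"), ("A", "C")]
impairment = {frozenset(("A", "B")): 1.0,
              frozenset(("B", "C")): 1.0,
              frozenset(("A", "C")): 5.0}    # direct link too impaired
in_use = {frozenset(("A", "B")): {0}}        # lambda 0 busy on A-B
route, wavelength = rwa(links, impairment, in_use, "A", "C", budget=3.0)
```

The shorter direct route is rejected on physical grounds, so the two-hop route is chosen with the first wavelength free on both of its links, which is the essence of making routing impairment-aware.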
Research in Service-Oriented Computing has been based on the idea that software applications can be constructed by composing and configuring "software services", i.e. software utilities that can be used but that are not necessarily owned by consumers. A key aspect has, however, been dramatically underestimated in this research, namely the fact that, in most cases, software services are software components that provide electronic access to "real services" (e.g., a software service for travel booking gives access to the actual service behind it, namely the possibility of traveling). Our claim is that the "Internet of Services" should focus on real services, rather than software services. In particular, we investigate the new role of the Internet, which is a supporting infrastructure in the case of software services but becomes a key enabler for real services, offering a unique capability to communicate changes in real services in real time and allowing for immediate reactions by service consumers. In the paper, we illustrate the project we are undertaking to demonstrate that the Internet can become the service delivery platform of the future. We illustrate, in particular, the research challenges this vision produces in the areas of service usage, representation, engineering and delivery, as well as the results we have already achieved.
The Future Internet is about to fundamentally change social and economic interactions at a global scale. The integrated access to people, media, services, and things will enable new styles of interaction at unprecedented scale, flexibility, and quality. However, this also calls for a well-defined and sound approach for management and governance that allows for clear harmonization and translation of issues across domains and layers. This paper presents a proposal that aims to blend management and governance issues at business, software, infrastructure, and network level, and introduces a multi-level SLA management approach to bridge these issues across different layers. It also sketches some insights on management and governance practice and requirements in various industrial domains.
SOA4All, a collaborative European research and development project, is pioneering advanced web technology that will allow billions of parties to expose and consume IT services online. Four complementary technical advances are being integrated to create a coherent and domain-independent service delivery platform. Service-oriented architectures and service-orientation principles are used to support the development of complex services based on distributed and reusable components. Web principles and technology provide an underlying infrastructure that allows the integration of services at a worldwide scale. Web 2.0 is used to structure human-machine cooperation in an efficient, user-adapted and cost-effective manner. And semantic technology is used to enhance service discovery, composition and execution.
This paper analyses the current service creation trends in the telco and Web worlds, showing how they are converging towards a future Internet of user-centric services embracing typical telco capabilities. The OPUCE platform is presented as the next step towards this integrated, user-centric future: a platform offering intuitive tools for graphical service creation, aimed at individuals with no specific skills in computer science or programming, and a service-oriented execution environment capable of seamless interoperation of Web services and telco applications based on operator-owned infrastructure. The OPUCE platform is compared to existing mashup creation tools to show its advantages and how it could be used to implement a converged and open service marketplace for the Future Internet.
This paper presents current research in the design and integration of advanced system, service and management technologies into a new generation of service infrastructure for the Future Internet of Services, which includes Service Cloud computing. These developments are part of the FP7 RESERVOIR project and represent a creative mixture of service and network virtualisation, service computing, and network and service management techniques.
Over recent years, resource provisioning over the Internet has moved from Grid to Cloud computing. Whilst the capabilities and the ease of use have increased, uptake is still comparatively slow, in particular in the commercial context. This paper discusses a novel resource provisioning concept called Service-oriented Operating Systems and how it differs from existing Grid and Cloud approaches. The proposed approach aims to make applications and computers more independent of the underlying hardware and to increase mobility and performance. The base architecture and functionality are detailed in this paper, as well as how such operating systems could be deployed in future workspaces.
The Future Internet is about combinations of communication, content and context services. Whereas the former two have already achieved a reasonable state of maturity, context information services are still in their infancy: at most stand-alone applications with limited online-or-offline presence information. The critical success factor is still missing, namely context federation: the exchange of context information between different applications, services and providers.
This article investigates how context services could be successfully federated by following the same pattern that many other information and communication services have successfully followed in the past. First Context Information Aggregators, and later Context Information Brokers, will play a critical role in addressing the market need for federated context information services.
This article highlights challenges that have to be overcome to make this vision come true. If Europe takes the lead in overcoming these challenges, Europe can become a flourishing ground for a new context-brokering industry.
The Service-Oriented Architecture (SOA) is increasingly adopted by industry as a paradigm for building distributed software applications. Yet SOA currently has several serious limitations, and many crucial service issues are not addressed, including, for example, how to establish, monitor and enforce quality in an end-to-end fashion, and how to build service-based applications that proactively adapt to dynamically changing requirements and context conditions. This paper provides an overview of the service research challenges identified in S-Cube, the European Network of Excellence on Software Services and Systems. S-Cube strives to address those challenges by bringing together researchers from leading research institutions across diverse disciplines. The S-Cube researchers are joining their competences to develop foundations and theories, as well as novel mechanisms, techniques and methods for service-based applications, thereby enabling the future Internet of Services.