The most powerful computers available, and those planned for the near future, harness the combined compute power of millions of processors. To utilise the potential of such large-scale parallel systems, major efforts in algorithm design and software development are required. With each new generation of parallel computers combining ever more processors in one system, developing software that can efficiently and effectively exploit the full potential of such massive systems becomes more difficult. Alternative architectures and compute paradigms are thus increasingly being investigated in attempts to alleviate these difficulties.
The pervasive presence of heterogeneous and parallel devices in consumer products such as mobile phones, tablets, personal computers and servers also demands efficient programming environments and applications targeting small-scale parallel systems, in contrast to large-scale supercomputers.
In response to such demands, the Parallel Computing (ParCo2017) conference, held in Bologna, Italy in September 2017, also included discussions on alternative approaches to achieving High Performance Computing (HPC) capabilities that could potentially surpass Exa- and Zettascale performance. Talks on the application of quantum computers and FPGAs to solve particular compute-intensive problems exemplified future possibilities. These developments are mainly aimed at making available more capable systems for solving compute-intensive scientific and engineering problems, such as climate models, security applications and classic NP problems that currently cannot be managed even with the most powerful supercomputers available.
The fast-expanding and widespread use of parallel computers to solve problems emerging from new application fields is as important for future developments as the expansion of systems to achieve higher processing speeds. New application areas such as Robotics, AI and Learning Systems, Data Science, the Internet of Things (IoT), In-Car Systems and Autonomous Vehicles were discussed. Such applications often do not require extreme processing speeds, but rather a high degree of heterogeneous parallelism. They pose particular challenges for the software engineering aspects of parallel software, in particular efficiency, reliability, quality assurance and maintainability. Often, these very same systems also pose extreme challenges in terms of power/performance trade-offs, mainly related to the limited amount of power available from batteries and/or to problems of heat dissipation.
A further aspect is that applications such as Data Science, IoT and large-scale scientific/engineering computations are highly dependent on high-speed, broadband communication to transfer huge quantities of data. High-throughput systems combined with high-performance capabilities are thus increasingly required in practical situations.
As was the case with all previous events in the Parallel Computing series of conferences, ParCo2017 attracted a large number of notable contributions depicting present and future developments in the parallel computing field. During this event the various trends and research areas mentioned above were discussed in keynotes, contributed papers and specialised symposia.
This volume represents a selection of the papers presented at the conference. Not all authors could submit their contributions in time, and some contributions could not be accepted. As a result, some papers covering the areas mentioned above are not included in these proceedings.
The organisers wish to thank all organisations and individuals who contributed to the success of this event. A particular word of thanks is due to Paola Alberigo, Silvia Monfardini and Claudia Truini for their indispensable role in organising the conference.
16 November 2017