I
In the early 1990s, I started to feel somewhat uncomfortable with the methodologies used in science. Various kinds of technology were approaching maturity, and semiconductors and software seemed to be driving technological innovation. Some of those technologies became available to everyone; the personal computer came into wide use and the Internet culture was about to dawn. The focus of research shifted from fundamentals to applications in many technological areas, and research and development (R&D) investments often produced unnecessary functions beyond their marginal utility. Technological advancement subsequently pursued horizontal specialization and oligopoly, shifting the foundation of the world economy from industrial capitalism to financial capitalism, in conjunction with a globalization that accelerated the bipolarization of the world. The value of an enterprise came to be measured by its stock price, which made it difficult to invest in long-term research focused on the next innovations. The Internet bubble burst in Japan and around the world, and we entered the days of what would later be called the “Lost Decade.”
II
Yet it was not true that the advancement of technology had solved all the issues that needed to be resolved. New needs kept emerging, and expectations for science and technology remained high. In fact, the problems that could be solved with the “methods” and by the “experts” of that time had all been solved; what remained were the problems that no one knew how to solve.
During that period, I directed the Sony Computer Science Laboratories (Sony CSL), which pursued important research themes including life, intelligence and ultra-distributed systems. I often asked myself whether life and intelligence can be broken down the way a product can be reduced to parts, a part to materials, a material to molecules and a molecule to atoms; whether the decomposition of life and intelligence into parts, if it were possible, would actually help to solve problems; and, as networks spread and the applications running on them became interconnected into ultra-distributed systems, how one could determine whether the failure of one application had an impact on other applications. In other words, is there “something” that cannot be solved by element reduction and abstraction followed by the synthesis of elements, and does this “something” bear on the problems to be solved from then on?
At that time Sony CSL employed the researchers Hiroaki Kitano and Jun Rekimoto, with whom I frequently discussed their research topics. Gradually, the discussions began to circle around the keyword “open systems”, often reaching the conclusion that the cooped-up feeling we had back then might come from the assumption of “closed systems”. Nevertheless, at that time no one knew how to define “open systems” precisely, or how to solve the problems of open systems. Individual research programs advanced on their own tracks, but there was no unified concept encompassing them, which made it virtually impossible to put the real change in research methodology into words.
One of the popular concepts at that time was “Complex Systems”. Life and brains are very complex. Large distributed systems built on the Internet are also very complex. Having studied these complex subjects and admiring Prigogine and Varela, we sought a vision in the theory of complex systems. The theory of complex systems provided a new viewpoint for understanding phenomena as “time development systems”. As it turned out, however, a classic methodology underlies this theory: “complex phenomena can be expressed by simple non-linear equations.” This could help to comprehend one aspect of a complex problem but could not by itself solve a real problem. When Luc Steels joined Sony CSL in 1996, and Hideki Takayasu and Ken Mogi in 1997, the discussions went further in depth and breadth, but without reaching a conclusion.
III
As we moved into the 21st century, global environment and energy issues stood out, and globalism developed rapidly; both became substantially influential in our daily lives. Many researchers addressed those issues and drew conclusions in their own ways. In most cases, however, the solutions were rather one-sided, and the solution of one issue sometimes caused another. In other cases, safety issues arose involving nuclear equipment, traffic systems, online transaction systems and food, making clear that there were problems in previous system design and operation. Around that time, research at Sony CSL began to establish new fields that included the word “systems” in their titles, such as “Systems Biology” and “Systems Brain Science”.
At the beginning of 2006, I was assigned to supervise a project on “dependable systems” sponsored by the Japan Science and Technology Agency (JST). Right after the start of this project, the definition of “dependability” became a major issue; the discussion continued for nearly a year without reaching any conclusion, and finally I had to propose a basic concept myself. In the fall of 2007, dependability was defined not as the technology to reduce (foreseeable) failures, but as the ability to address unforeseeable incidents, to continue operation and to keep improving performance.
IV
At that time, this concept of “dependability” was not directly linked to open systems, but it was clear that the discussions of open systems at Sony CSL underlay the definition, and it did not take much time to build a basic concept of open systems science on top of it. Open systems interact with their outer world, whereas closed systems are closed off from it. The critical difference is that for open systems we can only take the internal observer's view, whereas for closed systems we can take the external observer's view; we therefore need to deal with open systems “as they are alive” or “as they are operating”. Instead of the methodology of reductionism, which tries to understand basic principles by reducing systems through division and abstraction, we need a new methodology to solve the problems of complex “living” or “operating” systems in the real world. I believe that in order to solve the problems of open systems, we need to add a new perspective to analysis and synthesis: the perspective of “management”. Management means making the best possible effort to continue operation and to keep improving.
At first, we believed that the notion of management inhabits a totally different sphere from science and technology, but when you think about it, what is life science if not the management of life? Creatures are always pursuing next-best solutions when total solutions seem unobtainable, somehow doing what is necessary to extend life and to ensure the survival of their descendants. This involves a timeline and events to be managed. Problems involving the brain and mind are essentially the same. Similarly, in the case of social, economic, global environmental, energy and food related issues, it is necessary to continue chipping away at these problems without letup, and in this sense a management perspective is critically important.
Efforts are also needed to counter service outages and deliberate attacks on the immense Internet-connected social infrastructure, and future upgrades and modifications must be taken into consideration at the initial design stage. Here again, a management perspective is essential. Pursuing solutions to the enormous and complex issues we face, even while these issues are alive and active, therefore requires a three-perspective approach in which a management perspective is incorporated from the outset, in addition to the analytic and synthetic perspectives. This three-perspective approach is the essence of the open systems science methodology.
This new methodology of science was first announced at the 20th Anniversary Symposium of Sony CSL in June 2008. In this book, scholars from various disciplines illustrate how open systems science can be applied to their respective fields.
V
This book consists of 9 chapters. In Chapter 1, “What is Open Systems Science?”, Mario Tokoro describes the basic concept of open systems science. Tokoro engaged in studies of object-oriented computing, parallel programming and the Internet at Keio University and at Sony CSL. He also led software development and was responsible for technological development at Sony headquarters. Tokoro has always paid special attention to how research should be conducted and became interested in the methodology of science.
This chapter first recalls the history of science and R&D to identify the issues that need to be solved in the 21st century and their characteristics. It then shows that those problems cannot be solved successfully with the past methodology of science, based on reductionism, and proposes the definition of “open systems” and the methodology of “open systems science”. Open systems, in contrast to closed systems, continuously interact with their outer world. An open system itself usually consists of multiple subsystems whose numbers, relations and functions vary over time. Such a system falls outside the direct scope of traditional science, but Tokoro strongly urges scientists to tackle it. The methodology of open systems science adds the new perspective of management to the perspectives of analysis and synthesis. Next, Tokoro describes the consequent linkage from open systems science to open collaboration, and finally he touches on scientific attitudes and educational issues.
VI
The subsequent chapters describe the practice of the methodology of open systems science in actual important problem areas. As mentioned before, however, the individual research efforts and the elaboration of the open systems science methodology were carried out concurrently and evolved together through discussions between the researchers and me, and among the researchers themselves. Rather than thinking in terms of open systems science from the start, we noticed one day that everybody was conducting their research using this common methodology. The degree to which the methodology of open systems science is adopted varies with the respective themes. I hope that readers will notice that the results of these studies have led to the creation of new areas of science and technology.
In Chapter 2, Hiroaki Kitano reports on “Biological Robustness – A Principle in Systems Biology”. Kitano originally studied particle physics, then moved into the fields of artificial intelligence, natural language processing and genetic algorithms. Starting in 1994, he began to realize that real problems could not be solved unless actual biological systems were investigated. Kitano initiated the study of aging and of the development of the nematode C. elegans and the fruit fly, aiming to understand their mechanisms computationally and, whenever possible, to reproduce them. He gradually expanded this work and proposed “Systems Biology” as an emerging field. He then stepped into research on the mechanisms and therapeutic methods of diabetes and cancer. Systems biology is, in contrast to molecular biology, the discipline that treats living organisms as systems, expresses their structure and control as networked interactions, and then tries to solve the problems of life and health.
More recently, Kitano became interested in the evolution of species and the adaptive ability of individuals, and established the concept of “biological robustness”. Robustness is the ability of a system to maintain its functions despite external and internal perturbations. Normally, creatures regulate themselves to maintain homeostasis, that is, the stability of internal state needed to sustain life. Some species such as tardigrades, however, secure the sustainability of the functions of life by abandoning homeostasis without losing robustness. Kitano clarifies the robustness inherent in the molecular interaction network structures of living organisms and develops a theory of robustness. In addition, he proposes a long-tail medication method as a new treatment in cancer care that addresses the robustness of cancer.
VII
In Chapter 3, Kazuhiro Sakurada explains “Life as Inheritance of History – Systems Biology from Epigenetics”. Sakurada has studied a broad range of subjects within molecular biology, such as the central nervous system, and has taken on the challenge of applying open systems methods to the development of new drugs. In 1997, he noticed that the proliferative-homeostasis mechanism maintained in human tissue by somatic stem cells plays an important role in the cause and care of human diseases, and he started the study of adult neural stem cells at the Salk Institute for Biological Studies in the U.S. When he returned to Japan, he pioneered stem-cell-based drug discovery and regenerative medicine in an industrial setting. For drug target identification, he applied the approach of systems biology gained from his interactions with Hiroaki Kitano. Sakurada, who has been carrying out his research consistently under the theme of the scientific understanding of human life systems, joined Sony CSL in 2008 in order to further develop his studies.
Epigenetics concerns heritable functional changes in genes that occur without accompanying changes in the DNA sequence. The human organism is a complex system consisting of 60 trillion cells, repeating cell division some 10^16 times during a whole lifetime. Sakurada holds that one of the important functions of life is a dynamic and irreversible process that, at each cell division, memorizes the inputs from the internal and external environments in cells as epigenetic marks; this process generates the environmental adaptability of creatures, and with it their irreversibility. Furthermore, he considers that an understanding of life might be achieved by combining genetically deterministic phenomena with the epigenetic phenomena of individual creatures, and he proposes a new approach of systems biology to unify them.
VIII
In Chapter 4, Ken Mogi describes “Contingent Brains – Pursuing Dynamic Adaptability and Perceptual Stability”. With majors in physics and jurisprudence, Mogi is a researcher with both humanities and science backgrounds. After studying neurophysiology at RIKEN in Japan and at Cambridge University in the UK, he joined Sony CSL in search of a new paradigm for his qualia research.
Qualia are the textures felt by the brain. They are the comprehensive feelings generated by memory built into the body and engraved in the brain by the signals arriving from the sensory organs at a given moment. This research is part of an effort to build a scientific paradigm that stands in direct opposition to past reductionist approaches in brain research. Mogi is trying to clarify qualia comprehensively from the perspectives of sensory systems, neural systems and emotional systems. This chapter discusses “contingency” as a concept that is important for understanding the operation of the brain. Contingency denotes a situation in which regularity and irregularity coexist. The concept of contingency is useful for understanding the open-ended learning processes of brains. It also has an important meaning when considering the connections between the brain and its environment, and the relationships among people in social networks. The ultimate goal of brain science is to show the way from contingency to qualia abounding in consciousness, and to understand how consciousness is brought about by the activity of neurons. Together with the results of his experiments, Mogi indicates the future direction of brain research.
IX
In Chapter 5, Luc Steels states that “Evolutionary Linguistics is Revolutionizing the Study of Language and Meaning”. Steels is a linguist and an artificial intelligence researcher. In the second half of the 1980s, he helped to initiate the field of behavior-based robotics, but in the middle of the 1990s he became aware of its limitations and started to search for a new paradigm for the study of intelligence. In 1995, he stayed for three months at Sony CSL, where he developed a bold new research theme tackling the question of the origin and evolution of language, and began to study it within a framework of language games. While the approach of Chomsky was dominant at that time and forced linguistics into the framework of classical reductionism, Steels adopted the hypothesis that language is a complex, adaptive, open system. He thus developed the paradigm of evolutionary linguistics, and named the mathematical and computational study of evolving languages Semiotic Dynamics.
The basic framework of this approach is to simulate language games among multiple agents. The speaker uses some element of language (for example a word) to achieve a particular cooperative goal (for example drawing attention to an object), and the game succeeds if the goal is achieved. If so, the linguistic conventions are reinforced; otherwise the lexicon and grammar are updated to be better adapted to subsequent games. Initial research focused on perceptually grounded categories and lexicons, but in recent years the experiments have moved to physical robots that integrate vision and embodiment, as well as to more complex grammatical languages. Language is here seen as an open system, a complex adaptive structure that is adjusted by speakers and hearers so as to remain maximally adapted to their communicative needs.
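To make the mechanism concrete, here is a minimal naming-game sketch in Python. It is an illustrative toy under simplifying assumptions (a fixed set of objects, word invention via random identifiers, winner-take-all alignment), not the actual experimental software of Steels and his colleagues; all names in it are hypothetical.

```python
import random

OBJECTS = ["red_ball", "blue_box", "green_cone"]

class Agent:
    def __init__(self):
        # lexicon: object -> list of candidate words for that object
        self.lexicon = {obj: [] for obj in OBJECTS}

    def speak(self, obj):
        # Pick a known word for the object, inventing one if none exists.
        if not self.lexicon[obj]:
            self.lexicon[obj].append("w%04d" % random.randrange(10000))
        return random.choice(self.lexicon[obj])

    def knows(self, obj, word):
        return word in self.lexicon[obj]

    def align(self, obj, word):
        # Success: drop competing synonyms (winner-take-all alignment).
        self.lexicon[obj] = [word]

    def adopt(self, obj, word):
        # Failure: the hearer adds the unknown word as a new candidate.
        self.lexicon[obj].append(word)

def play_round(agents):
    speaker, hearer = random.sample(agents, 2)
    obj = random.choice(OBJECTS)
    word = speaker.speak(obj)
    if hearer.knows(obj, word):   # the communicative goal is achieved
        speaker.align(obj, word)
        hearer.align(obj, word)
    else:                         # repair: update the hearer's lexicon
        hearer.adopt(obj, word)

agents = [Agent() for _ in range(10)]
for _ in range(5000):
    play_round(agents)

# After many games the population converges on one word per object.
for obj in OBJECTS:
    print(obj, sorted({w for a in agents for w in a.lexicon[obj]}))
```

Even this toy exhibits the key open-systems property of the real experiments: no agent holds a global view of the lexicon, yet a shared convention self-organizes purely through local reinforcement and repair.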
X
In Chapter 6, François Pachet deals with “The Future of Content is in Ourselves”. Pachet is a computer scientist and a musician. Having studied logic programming and object-oriented programming in graduate school, he developed a new constraint programming system. He then joined the Paris laboratory of Sony CSL, where he has engaged in the development of real-time constraint programming and of feature-extraction systems using genetic algorithms, and in their application to music from various angles.
Pachet argues that in the coming digital era content will be regarded as works of art, created not only by trained experts but by anyone. Creating content can be thought of as interacting with images of oneself. He claims that “asking” oneself how and what to express is essential to that end, and he places the concept of reflexive interaction at the center of content creation. Reflexive means introspective or reflective. He develops various technologies to achieve it, and performs a variety of experiments and concerts together with experts and children to demonstrate its validity, showing how this paradigm differs from past visions of content creation.
XI
In Chapter 7, Jun Rekimoto presents “Towards the Cybernetic Earth – Cyborging Earth and its Possibility”. Rekimoto is a visionary who talks about the future, an engineer who materializes his concepts, and an artist who expresses them beautifully. He has consistently viewed user interfaces as the merger between the real world in which we live and the virtual world created by computers and networks. While developing real user interfaces, he has been envisioning the living environment and intellectual activities of human beings in the future.
The influence of computers and the Internet is immeasurable, and the accumulation of intelligence accelerated by the World Wide Web (WWW) continues boundlessly. Collective intelligence, such as Wikipedia and the folksonomies that accumulate symbols created by users, has expanded to include sound and images, and has further evolved into sensonomies that incorporate sensor data. When all of these become part of a collective intelligence operating as an online system, it becomes global intelligence. At the same time, individual intelligence assists individual memory, records activities and reinforces thoughts throughout a lifetime. The merger of human bodies and machinery, which started with embedded sensors and actuators for medical use, will gradually gain wide acceptance and advance to the next level. Rekimoto considers that the interaction between global intelligence and reinforced individual intelligence will bring about a new stage in the history of the earth, which may reasonably be called the “Cybernetic Earth”, and he shows its direction through new technological developments.
XII
In Chapter 8, Hideki Takayasu talks about “Observation and Control of Global Social Information – Econophysics Challenges Infinite Complexity”. Takayasu is a physicist specializing in statistical physics and fractals. Physics is one of the fields that has developed in tight interaction with the development of scientific methodology. Takayasu has been interested not only in natural phenomena but also in social phenomena. His basic standpoint is observability. Until recently, it was almost impossible to obtain detailed data on economic phenomena. With the support of advanced computers and networks, however, it has become possible to carry out transactions online and to obtain a variety of relevant high-precision data in real time.
Takayasu demonstrates that various phenomena can be described by processing these kinds of data with modern statistical physics, and he has established a new paradigm called “Econophysics”. He argues that viewing various social phenomena through the paradigm of econophysics can reveal contradictions in present social and economic systems so that we can improve them. As an example, he refers to the possibility and adequacy of a new interest system in finance. Integrating various kinds of data, such as human movements and weather, with transaction data would make it possible to understand an even wider range of social and economic phenomena in the future.
XIII
In Chapter 9, Frank Nielsen reports on “Computational Information Geometry – Pursuing the Meaning of Distances”. Nielsen is a young researcher specializing in the information-theoretic approach to computational geometry, who spent part of his graduate days at the University of Tokyo. He later joined Sony CSL Tokyo and has been engaged in leading-edge studies driven both by mathematical interest and by applications to real systems.
In this chapter, Nielsen first describes the interdisciplinary field of visual computing, which would not have emerged without the progress of computers. He then looks back on the historical development of geometry to show how critical advances were brought to geometry by the viewpoints of “abstraction” and “computing”. Finally, he describes the emerging field of computational information geometry, a new research area that Nielsen has pioneered. Computational information geometry treats atomic pieces of information as points of a geometric space, defines distances between pieces of information as geometric distances, categorizes information by grouping it into clusters, and interprets its characteristics from geometric properties. As an example of its application, he introduces the personalization of information retrieval systems for next-generation Internet search engines.
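As a simple illustration of this point-and-distance view, the sketch below clusters word-frequency histograms (points on the probability simplex) with a k-means-style loop driven by the Kullback-Leibler divergence. It is a toy under stated assumptions, not Nielsen's actual algorithms; it relies on the standard fact that, for the right-sided KL divergence, the cluster centroid minimizing the average divergence is the arithmetic mean.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """Kullback-Leibler divergence KL(p || q) between two histograms."""
    p, q = p + eps, q + eps   # smoothing to avoid log(0)
    return float(np.sum(p * np.log(p / q)))

def kl_kmeans(points, k, iters=50, seed=0):
    """k-means-style clustering where distance = KL(point || centroid)."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Assignment step: nearest centroid under the KL divergence.
        labels = np.array([np.argmin([kl(p, c) for c in centroids])
                           for p in points])
        # Update step: the right-sided KL centroid is the arithmetic mean.
        centroids = np.array([points[labels == j].mean(axis=0)
                              if np.any(labels == j) else centroids[j]
                              for j in range(k)])
    return labels, centroids

# Toy data: histograms drawn from two distinct Dirichlet distributions,
# standing in for the word-frequency profiles of two topics.
rng = np.random.default_rng(1)
pts = np.vstack([rng.dirichlet([8, 1, 1], 20),
                 rng.dirichlet([1, 1, 8], 20)])
labels, _ = kl_kmeans(pts, k=2)
print(labels)   # the two topic groups separate into two clusters
```

Measuring dissimilarity with an information divergence rather than the Euclidean distance is what makes the geometry “informational”: the space of histograms is equipped with a notion of distance adapted to probability distributions rather than to raw coordinates.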
In the preceding paragraphs, I have described the background and an overview of this book. Open systems science is the methodology for solving problems in open systems, which are “living” or “operating”, by adding the new perspective of management to the conventional perspectives of analysis and synthesis. I would be more than happy if this book could trigger discussions on the directions of science and technology in the 21st century, and could contribute, if only a little, to future innovative changes and progress.