Ebook: Formal Ontology in Information Systems
FOIS is the flagship conference of the International Association for Ontology and its Applications, a non-profit organization which promotes interdisciplinary research and international collaboration at the intersection of philosophical ontology, linguistics, logic, cognitive science, and computer science.
This book presents the papers delivered at FOIS 2023, the 13th edition of the Formal Ontology in Information Systems conference. The event was held as a sequentially-hybrid event, face-to-face in Sherbrooke, Canada, from 17 to 20 July 2023, and online from 18 to 20 September 2023. In total, 62 articles from 19 different countries were submitted, of which 25 were accepted for inclusion in the conference and for publication, corresponding to an acceptance rate of 40 percent.
The contributions are separated into the book’s three sections: (1) Foundational ontological issues; (2) Methodological issues around the development, alignment, verification and use of ontologies; and (3) Domain ontologies and ontology-based applications. In these sections, ontological aspects from a wide variety of fields are covered, primarily from various engineering domains including cybersecurity, manufacturing, petroleum engineering, and robotics, but also extending to the humanities, social sciences, medicine, and dentistry. A noticeable trend among the contributions in this edition of the conference is the recognition that improving the tools to analyze, align, and improve ontologies is of paramount importance in continuing to advance the field of formal ontology.
The book will be of interest to all formal and applied ontology researchers, and to those who use formal ontologies and information systems as part of their work.
This volume contains all papers accepted for and presented at the 13th edition of the Formal Ontology in Information Systems conference (FOIS 2023). This edition of the conference used a novel, sequentially-hybrid approach, which leveraged the best of an in-person conference while providing the additional opportunities a virtual conference can offer.
FOIS 2023 built on the positive experience of the previous edition and solicited a diversity of papers in three broad categories: (1) Foundational ontological issues; (2) Methodological issues around the development, alignment, verification and use of ontologies; and (3) Domain ontologies and ontology-based applications. In total, we received 62 submissions and accepted 25 of them (an acceptance rate of 40%) after a thorough and deliberate review process. Each paper was reviewed by at least three reviewers (four on average). A rebuttal phase, allowing authors to make corrections and respond to reviews, was followed by a lively discussion phase between reviewers and PC chairs. Of the accepted papers, 15 were presented at the in-person conference in Sherbrooke, Quebec, Canada, which took place from 17 to 20 July 2023. The remaining ten papers were presented in the virtual part of the conference, held from 18 to 20 September 2023.
The presented papers include seven foundational papers (from 18 submissions), eight methods papers (from 11 submissions), and ten domain ontology and application papers (from 17 submissions in the domain ontology category and 16 in the applications category). A number of high-quality papers that did not make it into these proceedings were recommended for presentation as part of the ontology showcase and demonstration track at the conference, or as part of the eight workshops co-located with the in-person conference in Sherbrooke. Those papers will be published in a separate volume of proceedings to appear in the CEUR proceedings series.
The authors of the submitted papers come from 19 countries and the programme committee members from 20 countries, with representatives from all continents except Antarctica. The authors' countries, in order of frequency, are Germany, Italy, France, Brazil, the Netherlands, Canada, the United States, Sweden, Australia, India, South Africa, Tunisia, the United Kingdom, Spain, Israel, Norway, Poland, Portugal and Turkey. Submissions to satellite events, such as the Early Career Symposium, the Ontology Showcase, and the eight co-located workshops, attracted authors from an additional 12 countries (in alphabetical order): Austria, Belgium, China, Colombia, Czechia, Finland, Saudi Arabia, Slovenia, Switzerland, Syria, Taiwan, and Uruguay.
Three particular trends are noticeable among the accepted papers. Firstly, fewer new foundational areas (like the representation of trust, properties or roles) are covered as compared to previous editions of FOIS, and almost no paper deals with the relation of ontologies to linguistics, which used to be a recurring topic. Secondly, there is a noticeable trend towards more methodological papers covering methods for re-using and aligning existing ontologies to help develop better domain ontologies. Lastly, the ontologies reported on in this volume cover a very broad set of domains, primarily from various engineering domains including cybersecurity, manufacturing, petroleum engineering, and robotics, but also extending to the humanities, social sciences, medicine, and dentistry.
We have organized the papers in this volume by paper type, starting with papers that focus on foundational issues, followed by methodological papers and, finally, by domain ontologies and application papers.
Among all accepted papers, the PC chairs and a selection committee, consisting of senior PC members, chose three papers to be awarded prizes in recognition of their outstanding contribution, and the exceptionally high quality of both the paper and the presentation. The overall high quality of accepted papers made this selection very difficult, but after thorough deliberation by the selection committee, the FOIS best paper award, which comes with a prize of 500 Euro graciously sponsored by IOS Press, was awarded to the paper “Inferring Ontological Categories of OWL Classes Using Foundational Rules” by Pedro Paulo F. Barcelos, Tiago Prince Sales, Elena Romanenko, João Paulo A. Almeida, Gal Engelberg, Dan Klein and Giancarlo Guizzardi. The paper proposes and evaluates a bootstrapping approach to infer the foundational categories into which the classes from domain ontologies fit.
In addition, we awarded two distinguished paper awards, each carrying prize money of 250 Euro sponsored by IAOA. The recipients were “A method to improve alignments between domain and foundational ontologies” by Cesar Bernabe, C. Maria Keet, Zubeida Khan and Zola Mahlaza, and “A quest for identity criteria in computational ontologies” by Pawel Garbacz. The former paper identifies frequent alignment issues between domain ontologies and foundational categories, and suggests an improved decision diagram as a basis for better alignment. The latter paper explores the use of first-order theorem provers to systematically test the logical predicates denoting the classes and relations of an ontology against a set of predefined identity criteria.
What is noticeable among all three prize-winning papers is that they focus on novel methods to improve ontology engineering, which is indicative of the recognition within the wider formal-ontology community that improving our tools to analyse, align, and improve ontologies is of paramount importance in advancing the field.
The conference would not have been possible without the work of all the authors who submitted their papers, so we would like to thank all authors, regardless of whether or not their paper was accepted, for their contributions to building and sustaining a community for applied ontology research. Equally important were the contributions of the programme committee, whose over 90 members carefully reviewed and discussed all submissions.
As general chair, Antony Galton (Exeter University, UK) set the main direction for this FOIS edition and played a major role in its smooth coordination. The success of the conference also owes a lot to the online chair, Cassia Trojahn (IRIT Université Toulouse 2, France), and to all the other chairs who helped with publicity, workshops, the ontology showcase, and the early career symposium. The complete list of those who helped to organise FOIS 2023 is included after this preface. Perhaps most importantly, the success of the conference also relied on the leadership and time commitment of the local organisers, led by Jean-Francois Ethier and Anne-Marie Cloutier (both University of Sherbrooke), and the entire team at GRIIS and the University of Sherbrooke, who made the conference run smoothly and organised the great social events and delicious food.
We would also like to thank our partners and sponsors. First of all, we thank the International Association for Ontology and its Applications (IAOA), the association that provides the funding and governance structure for the organisation and guidance of the FOIS conference series, and which also sponsored the two distinguished paper awards. We likewise take this opportunity to thank IOS Press for their continued support in the publication of the FOIS proceedings and their sponsorship of the best paper award, and we express our appreciation of the financial support for student travel and attendance provided by SECAI (the School of Embedded Composite Artificial Intelligence, TU Dresden and Leipzig University), and of the financial and in-kind support from Destination Sherbrooke, the University of Sherbrooke, and Coopérative Université de Sherbrooke.
We are delighted to end this preface with the announcement that, from now on, the FOIS conference will happen annually instead of biennially (as previously) and that the next edition will be organized in Twente, in The Netherlands, in July 2024.
Space, time, objects, and events are fundamental concepts in ontologies. For substantivalists, who believe entities are located at regions (of space or spacetime), a key issue to address is the relationship between entities and the regions at which they are located. There is a rich literature on the ontologies of location and the mereology of material objects, as well as their philosophical foundations, but relatively little on events and their location. Most existing event ontologies provide little beyond the signatures and simply associate an event entity with space and time. In this study, we propose a new location ontology for events that formalizes the relationship between events and spacetime, where events and spacetime maintain their own mereologies. We also make ontological commitments to support the mereological harmony of events and spacetime, and provide the rationale and axiomatization of these commitments.
Trust is an attitude that an agent (the trustor) has toward an entity (the trustee), such that the trustor counts upon the trustee to act in a way that is beneficial with respect to the trustor’s goals. The notion of trust is discussed at length both in information science and in philosophy. Unfortunately, we still lack a satisfying account of this concept. The goal of this article is to contribute to filling this gap. First, we take issue with some central tenets shared by the main philosophical accounts, such as that there is just one relation of trust, that this relation has three argument places, and that trust is reliance plus some extra factor. Second, we provide a novel account of trust, also discussing different levels of trust. According to the account we put forth here, the logical form of trust sentences is expressed by a four-place relation. Further, we distinguish and characterize four kinds of trust relations and their connections. We also argue that trust and reliance are different phenomena. Third, on the basis of the proposed account, we extend the Reference Ontology of Trust (ROT). We call the new version of ROT that includes this extension “ROT 3.0”. Finally, we discuss the implications of the new ontological definitions for the applications of the concept of trust we have made in other works, also pointing out future applications made possible by these novel accounts of trust.
Standard first-order logic interprets reference, predication and quantification in terms of fixed denotations with respect to a domain of precise objects. We explore ways to generalise this semantics to account for variability of meaning due to factors such as vagueness, context and diversity of definitions or opinions. We present Variable Reference Logic (VRL), an elaboration of Standpoint Logic, which is a multi-modal logic based on a variety of Supervaluation Semantics. VRL can accommodate several modes of variability in relation to both predicates and objects. Its principal novelty is that its semantics incorporates a domain of indefinite individuals, whose precise properties (such as spatial extension) are not fully determinate. Each indefinite individual is associated with a set of precise entities corresponding to possible precise versions of the individual.
The notion of trust has traditionally been investigated within many disciplines, ranging from sociology to economics, as well as politics, psychology, and philosophy. More recently, it is especially in the fields of AI, ICT, and engineering (e.g., critical systems) that the need for a discussion of the concept of trust, problematized in relation to the massive employment of technical artefacts in modern society, is becoming urgent. Yet, trust being a characteristic trait of human relationships, it is not clear whether the attitude of trust can also be directed towards artefacts. Moreover, with respect to the study of systems’ failures, the engineering sciences provide notions cognate to trust, e.g. reliability or dependability, which highlight our dependence on complex systems to fulfil certain tasks in a context of risk, uncertainty and vulnerability. In order to understand how far we can rely on technology, we should first be able to understand which kinds of dependencies are at stake. To this aim, in this paper, we briefly review and discuss the main theoretical points related to trust and the technical notions mentioned, looking at both the humanities and engineering literature. We then propose a preliminary ontological analysis aimed at comparing the specificities of the concepts concerned, all of which share a form of instrumental dependence.
In a multilingual domain ontology developed using the labels approach, where each ontological entity is labelled with a language-tagged string, two scenarios result: (1) the ontology is ‘language-independent’, with an equal number of labels per natural language, or (2) the ontology is a ‘primary-language’ ontology, where one natural language takes precedence over the other languages used. In a multilingual ontology, it is assumed that there is full equivalence between the different languages; however, each natural language, as an embodiment of a culture, differs in how it interprets and organises the world. The result is that although the viewpoint expressed by the multilingual domain ontology is thought to be universal, one natural language is very often privileged, typically English.
Using the culture-bound concepts of ‘dowry’ and ‘bride price’, we demonstrate the differences in perspective when considered for different languages and sub-domains. We propose an ontology, Model of Multiple Viewpoints (MULTI), where both language and culture are considered together, and language is classified as a social norm of a community. MULTI is formalised in OWL and aligned to DOLCE+DnS Ultralite, a foundational ontology suitable for modelling contexts. The evaluation of MULTI is done against the identified use cases. The expected result is that an ontology can be annotated with its viewpoint, thus making the viewpoint of the ontology explicit.
Roles as argument places in a positionalist account of relations are pervasive in conceptual data modelling and linguistics, where they are known as components of a relationship and as semantic, thematic, or deep roles that are parts of a verb or verb class, respectively. They are also planned to be used in Abstract Wikipedia, which seeks to combine them. There is, however, no insight into the systematic or ontologically sound usage of such roles, in contradistinction to the ample attention given to aligning classes with nouns and relationships with verbs. Roles, as identifiable argument places, may benefit from similar efforts toward an ontology of roles. We aim to take a first step in that direction with a two-pronged approach. First, we conducted an analysis of a set of 101 conceptual data models and their use of roles. Second, we analysed VerbNet, an authoritative linguistic knowledge base of thematic roles. The results show promise for improving the naming of roles in conceptual data models. VerbNet’s roles are challenging to align with an ontology due to its mixing of the ontological and linguistic layers and the flexibility of natural language. The insights obtained also indicate ample avenues for further research.
We investigate the construction of time in EMMO, a foundational ontology developed to improve the strictness in the representation of applied sciences’ knowledge. We show how temporal individuals and temporal relations can be defined from the primitives of causation and parthood, at the core of EMMO; we then prove that our construction satisfies van Benthem’s requirements for temporal structures. Our analysis contributes to clarifying the overall landscape of causal relational theories of time, and to the ongoing effort of aligning foundational ontologies. We conclude by sketching how our results can be generalised, employing a strategy to simulate relations’ transitive closure in FOL. This generalisation makes the described construction of time exploitable in ontology engineering with minimal preconditions and sets up the groundwork for a systematic analysis of the connections between (discrete) causal and temporal structures.
Several efforts that leverage the tools of formal ontology (such as OntoClean, OntoUML, and UFO) have demonstrated the fruitfulness of considering key metaproperties of classes in ontology engineering. These metaproperties include sortality, rigidity, and external dependence, and give rise to many fine-grained ontological categories for classes, including, among others, kinds, phases, roles, mixins, etc. Despite that, it is still common practice to apply representation schemes and approaches—such as OWL—that do not benefit from identifying these ontological categories, and simplistically treat all classes in the same manner. In this paper, we propose an approach to support the automated classification of classes into the ontological categories underlying the (g)UFO foundational ontology. We propose a set of inference rules derived from (g)UFO’s axiomatization that, given an initial classification of the classes in an OWL ontology, can support the inference of the classification for the remaining classes in the ontology. We formalize these rules, implement them in a computational tool and assess them against a catalog of ontologies designed by a variety of users for a number of domains.
Foundational ontologies can be used to enable semantic interoperability in modern information systems. Aligning a domain ontology with a foundational ontology is perceived as difficult, however. Reasons include confusing underlying concepts, the difficulty of understanding the philosophical ideologies of foundational ontologies, and a lack of alignment guidance. For BFO, the BFO Classifier tool exists to support alignment, but users still face challenges. To uncover some of these user challenges, an experiment was performed using 10 BFO-aligned domain ontologies. The alignment of domain entities was analysed, revealing seven different types of mistakes in the alignments. To avoid these, the BFO Classifier tool was altered to improve the questions and explanations of the core principles of BFO. Thereafter, the tool was evaluated with a use-case-based approach, using the GORO and AWO ontologies, to measure the effect on alignment. The evaluation revealed that the alterations facilitated alignment, as users felt more confident in their results given the improved understanding of the questions and possible answers.
SNOMED CT is a large concept-based terminology designed according to epistemic, semantic and pragmatic principles relevant to clinicians. Its goal is structured clinical reporting in electronic healthcare records (EHRs). The Basic Formal Ontology (BFO) is an ontology designed on the basis of types claimed to exist in reality according to a domain-independent ontological theory. Its goal is faithful representation of reality within that theory. The Ontology for General Medical Science (OGMS) extends BFO by providing definitions for types relevant within the clinical domain. Combining SNOMED CT with the ontological rigor of BFO and OGMS might improve clinical reporting by, for instance, preventing data entry mistakes and inconsistencies, and make EHRs more comparable. To that end, we are developing a logical framework capable of exploiting what SNOMED CT offers terminologically and what realism-based ontologies such as BFO and OGMS offer ontologically, by means of bridging axioms compatible with BFO and expressed in the same CLIF dialect as used in BFO’s first-order-logic axiomatization. In this paper, we report on our attempts to detect, in the combinations of binary relations used in SNOMED CT’s definitions of disorder concepts, patterns which might at least partially automate the construction of such axioms. Our findings suggest that this partial automation is indeed possible, but to a smaller extent than we had hoped for. We compare our approach with a recent proposal that seeks to bring SNOMED CT and BFO closer together by reinterpreting SNOMED CT disorders as clinical occurrents. The proposal has its merit in providing a realist underpinning for that part of SNOMED CT’s concept model in terms of BFO, but is not discriminatory enough for an automatic translation into OGMS. The key problem is the lack of face validity of SNOMED CT disorder terms as compared to the formal definitions they are given, in the absence of textual definitions.
Building taxonomies is often a significant part of building an ontology, and many attempts have been made to automate the creation of such taxonomies from relevant data. The idea in such approaches is either that relevant definitions of the intension of concepts can be extracted as patterns in the data (e.g. in formal concept analysis) or that their extension can be built by grouping data objects based on similarity (clustering). In both cases, the process leads to an automatically constructed structure, which can either be too coarse and lacking in definition, or too fine-grained and detailed, and therefore requires refinement into the desired taxonomy. In this paper, we explore a method that takes inspiration from both approaches in an iterative and interactive process, so that refinement and definition of the concepts in the taxonomy occur at the time of identifying those concepts in the data. We show that this method is applicable to a variety of data sources and leads to taxonomies that can be more directly integrated into ontologies.
The notion of identity criteria marked the dawn of contemporary formal ontology. Despite a number of issues this notion has raised, the quest for such criteria still seems worthwhile, in particular in the case of a formal ontology built in the context of information systems. In the current paper I investigate the benefits and costs of using automatic theorem provers in the task of identifying such criteria for formal ontologies that are expressed in a prover-processable language. To this end, two detailed case studies were performed, each concerned with an upper-level ontology presented in a recent volume of the Applied Ontology journal. The identity criteria found by the process described in this paper turned out to be not particularly illuminating. The respective theorems that define them are rather direct consequences of the axioms, so the proofs and models provided by the prover do not provide any new insights into the actual conceptual contents of the formal ontologies.
Commonsense ontology often conflicts with the ontology of our best scientific and philosophical theories. However, commonsense ontology, and commonsense belief systems in general, seem to be remarkably efficient and cognitively fundamental. Where the two conflict, it is better to find a way to reconcile commonsense and “theoretical” ontologies. Given that commonsense ontologies are typically expressed within natural language, a classical procedure of reconciliation is semantical. The strategy is that of individuating the “ontologically problematic” expressions of natural language and paraphrasing the sentences in which they appear into a (formal) language whose commitments are compatible with those of our best theories. We believe that this strategy of reconciliation, though quite standard, especially in the philosophical literature, is problematic: for a start, it forces us to conclude that the “real content” of our commonsense expressions and beliefs is different from what it appears to be. Commonsense ontology becomes just an illusion. We will thus propose an alternative approach: according to our view, a commonsense ontology is reconciled with a theoretical ontology in case it is shown that the explanation of why we believe in the existence of a problematic entity is compatible with our best theories. We call this kind of reconciliation “epistemic”. The advantage of an epistemic reconciliation is that commonsense ontology is treated in its own right and can be taken prima facie. Another advantage of the view is that epistemic reconciliation can be analysed through the notion of explaining away: a commonsense ontology is epistemically reconciled with a theoretical ontology if and only if the problematic entities of the commonsense ontology are explained away by “respectable” entities of the theoretical ontology. In the final part of the paper, we sketch a formal analysis of explaining away.
The mainstream approach to the development of ontologies is to merge ontologies encoding different information; a major difficulty is that the very heterogeneity which motivates ontology merging also limits the quality of the merge. The entity type (etype) recognition task has thus been proposed to deal with such heterogeneity, aiming to infer the classes of entities and etypes by exploiting the information encoded in ontologies. In this paper, we introduce a property-based approach that recognizes etypes on the basis of the properties used to define them. From an epistemological point of view, it is in fact properties that characterize entities and etypes, and this characterization is independent of the specific labels and hierarchical schemas used to define them. The main contribution consists of a set of property-based metrics for measuring the contextual similarity between etypes and entities, and a machine-learning-based etype recognition algorithm exploiting the proposed similarity metrics. The experimental results show the validity of the similarity metrics and the superiority of the proposed etype recognition algorithm compared with the state of the art.
The term homeomerosity refers to a whole and its parts being the same kind of thing. For instance, a computer and its processor can both be classified as machines. Homeomerosity is a prerequisite for meaningful addition and subtraction. For example, adding the area sizes of two independent regions gives another area size, but adding an area size and a number of hours yields a number with a peculiar unit. In earlier work, homeomerosity has been formalized with respect to mereological parthood, but not in concurrence with a notion of class subsumption. Both are essential to homeomerosity, as a part can only be observed to be of the same kind as the whole if both are observed to be of some kinds in the first place. In this work, we use formal concept analysis to organize conceptual representations of parts and wholes in a shared contextual model. In doing so, we show that wholes and parts can be represented by sub-concepts of a concept with respect to which they are homeomerous.
The FAIR principles define a number of expected behaviours for the data and services ecosystem with the goal of improving the findability, accessibility, interoperability, and reusability of digital objects. A key aspiration of the principles is that they would lead to a scenario where autonomous computational agents are capable of performing a “self-guided exploration of the global data ecosystem,” acting properly on the variety of types, formats, access mechanisms and protocols they encounter. The lack of consistent support for some of these expected behaviours in current information infrastructures such as the internet and the World Wide Web has motivated the emergence, in recent years, of initiatives such as the FAIR Digital Object (FDO) movement, which aims at defining an infrastructure where digital objects can be exposed and explored according to the FAIR principles. In this paper, we report the current status of the work towards an ontology-driven conceptual model for FAIR Digital Objects. The conceptual model covers aspects of digital objects that are relevant to the FAIR principles, such as the distinction between metadata and the digital object it describes, the classification of digital objects in terms of both their informational value and their computational representation format, and the relation between different types of FAIR Digital Objects.
This paper presents an improvement proposal for an ontology-driven multi-level conceptual model for the data catalogue domain. Data catalogues gather metadata that describe resources in different and heterogeneous digital platforms (repositories). They are supported by Information Systems (IS) that use these descriptors to provide visibility and support resource exploration and analysis. Domain ontologies are essential to promote quality ISs, as they are developed to reflect the intended reality. The proposed conceptual model is well-founded on the Unified Foundational Ontology and the Multi-Level Theory, and is based on the widely used DCAT vocabulary, a standardized metadata schema for describing datasets and data services. The resulting model addresses ambiguities and contemplates high-level types, contributing to the conformance of domain concepts and relationships. In addition, these types provide knowledge about the different kinds of resource descriptors and relationships contained in a specific catalogue, favoring its management. The paper enhances the previous model by extending it to handle descriptors representing a dataset according to the data equivalence across multiple distributions. We also demonstrate the model by describing a dataset with no data equivalence in its distributions, taken from a real-world scenario, thus providing a structured representation to manage metadata sets in the data catalogue domain.
The notion of territory plays a major role in the human and social sciences, as it anchors the facts studied in the humanities in a spatio-temporal context. The representation of territories as spatio-temporal entities has been tackled in various ways. However, approaches for a historical context are scarce, as most approaches are designed for either contemporaneous or case-specific use. Notably, most ontologies used to represent territories focus on spatial representation and do not intend to encompass the impact of actors over said space, which happens to be a defining dimension of territories in the humanities. In order to represent historical territories, we propose a new version of the previously conceived HHT (Hierarchical Historical Territory) ontology, which represents hierarchical historical territories and includes the representation of actors. The resulting ontology encompasses the description of evolving territories, territorial divisions, explicit change representation, and the will of actors to change established characteristics of territories, and it makes it possible to represent territories without knowing their geometry, by relying on a notion of building blocks in place of polygonal geometry.
The representation of manufacturing resources plays a fundamental role in engineering modeling scenarios. Resources are characterized in a number of ways, taking into account their physical properties, capabilities, roles, etc. In this context, notions like capability, process, and functionality are used in different ways, so it is not clear how different approaches can interoperate. The aim of this work is to propose an ontology for manufacturing that represents the assets involved in manufacturing operations, together with their characteristics and relations, as defined in a list of requirements. This contribution stems from the existing literature and, in particular, integrates recent works on the modeling of manufacturing resources, engineering functions, and capabilities. The ontology takes advantage of DOLCE as its foundational ontological framework. The relevant axioms are presented and commented on with respect to the identified requirements.
The DrMO ontology is a domain ontology that represents knowledge underlying the composition, characterization, and standardization of the different materials involved in dental restoration procedures. It will assist dentists in selecting appropriate materials based on up-to-date scientific knowledge to satisfy a patient’s specific requirements, without jeopardizing their clinical time. It reuses several ontologies from the OBO Foundry, especially the Oral Health and Disease (OHD) ontology. However, the dental restoration domain is complex and also requires concepts from materials science and engineering. Thus, DrMO also incorporates knowledge from the Devices, Experimental scaffolds, and Biomaterials (DEB) and Functionally Graded Materials (FGM) ontologies to provide more comprehensive knowledge of this area of dental materials than previous ontologies. However, much of the terminology from FGM differs from that used in clinical dentistry, so DrMO has renamed the appropriate classes to make them consistent with terminology common in dentistry. DrMO also follows ontology design best practices by reusing metadata properties from the Dublin Core vocabulary. It captures knowledge from a set of the most recent and influential papers in dental materials and related fields; links to these papers are included in the ontology as metadata defined with Dublin Core. The ontology is implemented in OWL2 and was developed with the Protégé 5.6 ontology editor, following the Ontology Development 101 methodology by Noy et al. Several domain experts in addition to Dr. Dutta also provided their expertise. The ontology is available on GitHub under an open-source license. The GitHub project includes a corresponding file of SPARQL queries that answer the competency questions defined as part of the ontology development methodology.
Digitalization is a priority for innovation in the engineering sciences. The digital transformation requires making the knowledge claims from scientific research data machine-actionable, so that they can be integrated and analysed with minimal human intervention. To date, digitalization is often too shallow, with annotations that are only of use to a human reader. In addition, digital infrastructures and their metadata standards are tedious to use: they demand too much effort from researchers, much of which goes into metadata that contribute nothing to an improved reuse of knowledge. These shortcomings are related: data documentation and annotation are complicated and of little use whenever the metadata that make knowledge reusable are not prioritized. Addressing this gap, we discuss metadata standardization efforts targeted at documenting the knowledge status of data; we refer to such annotations as epistemic metadata. We propose a schema for epistemic metadata, with a focus on knowledge and reproducibility claims, that is designed to be user-friendly and flexible enough to apply to a spectrum of circumstances and validity assessments. These developments are implemented as part of the PIMS-II ontology and were conducted in line with requirements elicited through a case study on papers and claims from molecular modelling and simulation.
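To make the idea of epistemic metadata concrete, the shape of such a record can be sketched as a plain data structure. The field names and example values below are illustrative assumptions and do not reproduce the actual PIMS-II schema:

```python
from dataclasses import dataclass, field

@dataclass
class EpistemicRecord:
    """Hypothetical epistemic-metadata record; all field names are assumptions."""
    statement: str                 # the claim about the data, in plain language
    claim_type: str                # e.g. "knowledge claim" or "reproducibility claim"
    validity_assessment: str       # e.g. "asserted", "reproduced", "contested"
    evidence: list[str] = field(default_factory=list)  # pointers to supporting artifacts

claim = EpistemicRecord(
    statement="The simulation workflow reproduces the reported viscosity values.",
    claim_type="reproducibility claim",
    validity_assessment="reproduced",
)
print(claim.claim_type)  # reproducibility claim
```

A schema of this shape annotates the knowledge status of a dataset rather than its technical provenance, which is what distinguishes epistemic metadata from conventional descriptive metadata.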