Ontolog Forum
Ontology Summit 2019
The Ontology Summit is an annual series of events that involves the ontology community and the communities related to each year's chosen theme. The Ontology Summit was started by Ontolog and NIST, and the program has been co-organized by Ontolog, NIST, NCOR, NCBO, IAOA, and NCO_NITRD, with the co-sponsorship of other organizations supportive of the Summit's goals and objectives.
Communiqué
Description
An explanation is the answer to the question "Why?" as well as the answers to follow-up questions such as "Where do I go from here?" Accordingly, explanations generally occur within the context of a process, which could be a dialog between a person and a system or an agent-to-agent communication process between two systems. Explanations also occur in social interactions when clarifying a point, expounding a view, or interpreting behavior. In all such circumstances, in common parlance, one is giving or offering an explanation.
Some Views from Philosophy and Science
A brief history of explanation provides some context, and it includes the observation that among the first known attempts to understand the why of explanations were those documented by Greek intellectuals and philosophers. For example, to understand and explain why there was a Peloponnesian War, Thucydides treated explanation as a process in which observed facts (indisputable data) are evaluated in light of common knowledge of human nature and compared in order to reach generalized principles for why some events occur. In the writings of Plato (e.g., the Phaedo and the Theaetetus), we see explanation as an expression using logos: knowledge composed of Universal Forms, which are abstractions of the world's entities that we come to experience and know. Facts, in this view, are occurrences or states of affairs and may be a descriptive part of an explanation, but not the deep why. Aristotle's view, as presented in the Posterior Analytics, is more familiar: explanation is part of a logical, deductive process that uses reason to reach conclusions. Aristotle proposed four types of causes (aitiai) to explain things, deriving from a thing's:
- matter,
- form,
- end, or
- change-initiator (efficient cause).
Following Descartes, Leibniz, and especially Newton, modern deterministic causality grounded in natural mechanisms became central to causal explanation. To know what causes an event is to employ natural laws as the central means of understanding and explaining why it happened. As this makes clear, notions of the nature of knowledge, of how we come to know, and of the nature of reality are all part of explanation.
John Stuart Mill provides a deductivist account of explanation, as evidenced by these two quotes:
“An individual fact is said to be explained, by pointing out its cause, that is by stating the law or laws of causation, of which its production is an instance,”
and
“a law or uniformity of nature is said to be explained, when another law or laws are pointed out, of which that law is but a case, and from which it could be deduced.”
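Schematically, this deductivist picture anticipates what later came to be called the covering-law form of explanation; a standard modern rendering (not Mill's own notation) is:

```latex
\underbrace{L_1,\dots,L_m}_{\text{laws of causation}},\quad
\underbrace{C_1,\dots,C_k}_{\text{particular circumstances}}
\;\vdash\; E \quad \text{(the individual fact to be explained)}
```

And, per the second quote, a law $L$ is itself explained when it can be deduced from more general laws: $\{L_1,\dots,L_m\} \vdash L$.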
The Ontology Summit 2019 is concerned with the role of ontologies for explaining the reasoning of a system. More specifically, the summit will focus on critical explanation gaps and the role of ontologies for dealing with these gaps. The sessions will examine current technologies and real needs driven by risks and requirements to meet legal or other standards.
Inspired by the current DARPA Explainable AI (XAI) Project (see https://www.darpa.mil/program/explainable-artificial-intelligence), the Ontology Summit theme considers the general problem of explanation. We are interested not only in AI systems that can explain their actions and what they believe, but also in other smart engineering systems which may cooperate with and aid humans. With the increasing amount of software devoted to industrial automation and process control, it is becoming more important than ever for systems to be able to explain their behavior to humans. Explanations include expressing rationales, characterizing strengths and weaknesses, and projecting behavior into the future.
Ontologies could play a significant role in explanation, since an explaining agent must represent the conceptual framework that supports its explanations. Such a framework would include terms for domain and natural-world concepts, relations, and activities. Some version of natural language may be used to describe states and actions in terms that people easily understand, as well as the conceptual structures within which dialog, plans, and actions take place.
A benefit of using ontologies in support of explanations is the potential for improving interoperability between systems that otherwise would not have a common framework for interoperation. The danger is that current efforts toward explainability will be brittle as well as siloed, producing a large variety of incompatible explanation techniques that individually satisfy the requirement of providing explanations but are of little use when explainable subsystems are integrated into large-scale systems.
Ontologies, in the usual sense, are designed knowledge artifacts, but they exist in computational (operational) environments that allow reasoning, and so they should also include the ability to reason about and explain what they know and how they have reasoned with that knowledge. In particular, they should be able to express the rationale for selecting and using the relevant parts of an ontology or suite of ontologies; explain the strengths and weaknesses of the ontology; and, when the ontology is in use, explain data that conforms to it.
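As a minimal sketch of that last point (a hypothetical illustration; the facts, rules, and code below are invented, not drawn from any particular ontology or tool), a reasoner can record a justification for every fact it derives, so that the inference trace itself becomes an explanation of why the data conforms:

```python
# Toy forward-chaining reasoner that keeps a justification for every
# fact it derives, so the trace can be replayed as an explanation.
# All facts and rules here are hypothetical.

facts = {("Fido", "is_a", "Dog")}  # asserted data

# Each rule: (name, premise pattern, conclusion template).
# "?x" marks the subject variable, bound when the premise matches.
rules = [
    ("dogs_are_mammals", ("?x", "is_a", "Dog"), ("?x", "is_a", "Mammal")),
    ("mammals_are_animals", ("?x", "is_a", "Mammal"), ("?x", "is_a", "Animal")),
]

justification = {}  # inferred fact -> (rule name, premise fact)

changed = True
while changed:
    changed = False
    for name, (_, pp, po), (_, cp, co) in rules:
        for (s, p, o) in list(facts):
            if p == pp and o == po:          # premise matches this fact
                conclusion = (s, cp, co)     # substitute ?x with the subject
                if conclusion not in facts:
                    facts.add(conclusion)
                    justification[conclusion] = (name, (s, p, o))
                    changed = True

def explain(fact):
    """Unwind the justification chain for a fact, down to asserted data."""
    if fact not in justification:
        return f"{fact} was asserted directly."
    rule, premise = justification[fact]
    return f"{fact} follows by rule '{rule}' from {premise}.\n" + explain(premise)

print(explain(("Fido", "is_a", "Animal")))
```

Running the sketch prints the chain of rules and premises leading from the asserted fact to the inferred one, which is the kind of rationale an explanation-capable system could surface to a user.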
Introductory Sessions
Explanations and help facilities designed for people
Champion: John Sowa
Basic principle: It's irrelevant how much you know about computer hardware, software, and systems design. When you are faced with some system designed by somebody else, you are as much a novice as anybody who walks in off the street.
Help facilities for any kind of system are abysmal, and they are getting worse -- primarily because the complexity of hardware/software systems is growing much faster than any help facilities that explain how they work.
People who buy a new car, for example, never learn how to use all its so-called "features" before they buy its replacement. Anybody who rents a car will be driving on a highway before discovering features that may be critical for safety.
Issues: How do current help and explanation facilities work? What are the underlying mechanisms for implementing them, invoking them, and using them? How can they be improved? What kinds of AI facilities could improve them? What are the R & D directions and possibilities for improved design?
Goal: Make help and explanation facilities as useful and usable as a guru or geek sitting next to you. Even better, make them as usable as a kind and forgiving friend who knows what you want.
Overview of Explainable AI
Co-Champions: Ram D. Sriram and Ravi Sharma
Main Tracks and Sessions
Kickoff Meeting
All co-champions introduce their tracks.
Financial Explanations
Co-Champions: Mark Underwood and Mike Bennett
Medical Explanations
Co-Champions: Ram D. Sriram and David Whitten
Explainable AI in Medicine has several facets.
- One facet is explaining, to the medical provider who is diagnosing a malady, a decision suggested by a computer system and the justification that supports it.
- Another is explaining a medical process to the patient, who might be a participant, and to their support team.
- Similarly, there is the explanation and justification of such a process to the billed party, including why it is coded the way it is, whether that billed party is the patient, an insurance company, or some other financier.
- A final facet is explaining the medical record to a future reader tracing the process of care.
Some discussion of this can be found at https://medcitynews.com/2018/10/how-do-you-make-doctors-trust-machines-in-an-ai-driven-clinical-world/ and a useful set of papers by Peter Szolovits is at http://groups.csail.mit.edu/medg/people/psz/home/Pete_MEDG_site/Publications.html
Explainable AI
Co-Champions: Ram D. Sriram and Ravi Sharma
Narrative
Co-Champions: Donna Fritzsche and Janet Singer with Mark Underwood as consultant
Commonsense
Co-Champions: Gary Berg-Cross and Torsten Hahmann (U of Maine)

An early goal of AI was to teach or program computers with enough factual knowledge about the world that they could reason about it the way people do. The starting observation is that every ordinary person has "commonsense": basic knowledge about the real world that is common to all humans. Spatial and physical reasoning are good examples. This is the kind of knowledge we want to endow our machines with, for several reasons, including its role in conversation and understanding. An understanding of human perceptual and memory limitations might, for example, be an important thing for a dialog system to have.
Early on, this was described as giving machines a capacity for "commonsense". However, early AI research demonstrated that the problem was difficult in both nature and scale: people seem to need a vast store of everyday knowledge for common tasks, and a wide variety of knowledge is needed to understand even the simplest children's story, a feat that children master through what seems a natural process. One resulting approach was an effort like Cyc to encode a broad range of human commonsense knowledge as a step toward understanding text, which would bootstrap further learning. Some believe that today this problem of scale can be addressed in new ways, including via modern machine learning. But these methods do not, in any obvious way, provide machine-generated explanations of what they "know." Since fruitful explanations appeal to people's understanding of the world, commonsense reasoning would be a significant portion of any computer-generated explanation. How hard is this to build into smart systems? One difficult aspect is making sure explanations can be presented at multiple levels of abstraction, ranging from a not-too-detailed summary to a trace of the exact justification for each inference step.
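As a hypothetical sketch of the multiple-levels-of-abstraction point (the derivation content below is invented for illustration), a system can keep its full derivation as a tree and render it to a chosen depth, so the same reasoning supports both a brief commonsense summary and an exact step-by-step justification:

```python
# Hypothetical sketch: a derivation kept as a tree and rendered as an
# explanation at a chosen level of detail.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Step:
    claim: str                                            # what this step concludes
    because: List["Step"] = field(default_factory=list)   # supporting steps

def render(step: Step, max_depth: int, depth: int = 0) -> str:
    """Render the derivation down to max_depth; deeper support is elided."""
    line = "  " * depth + ("- " if depth else "") + step.claim
    if depth >= max_depth or not step.because:
        return line
    return "\n".join([line] + [render(s, max_depth, depth + 1) for s in step.because])

# A toy commonsense derivation about spilling a cup of coffee.
proof = Step("The floor will get wet",
             [Step("The cup was knocked over",
                   [Step("The cup was near the table's edge"),
                    Step("An elbow bumped the cup")]),
              Step("Liquid flows out of an open, tipped container"),
              Step("Unsupported liquid falls to the floor")])

print(render(proof, max_depth=1))   # high-level summary for a casual user
print(render(proof, max_depth=3))   # full trace for an auditor or developer
```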
This track will explore these and other issues in light of current ML efforts and best practices for AI explanations.
Purpose
As part of Ontolog’s general advocacy to bring ontology science and engineering into the mainstream, we endeavor to abstract a conversational toolkit from the sessions that may facilitate discussion and knowledge sharing amongst stakeholders relevant to the topic. Our findings will be supported with examples from the various domains of interest. The results will be captured in the form of a 2019 Summit Communiqué, with expanded supporting material provided on the web.
Process and Deliverables
Similar to our last thirteen summits, this Ontology Summit will consist of virtual discourse (over our archived mailing lists), virtual panel sessions (over video conference calls), and a Virtual Symposium. The main deliverable is a collaboratively developed Communiqué and more specialized articles within which we, among other things, present our distilled thoughts about the theme.
Structure and Discourse
- 14 November 2018 John Sowa “Explanations and help facilities designed for people”
- 28 November 2018 Derek Doran Overview of Explainable AI
- 5 December 2018 Overview of Commonsense Knowledge and Explanation
- 16 January 2019 Introduction to the Tracks
- 23 January 2019 Commonsense Session 1
- 30 January 2019 Narrative Session 1
- 6 February 2019 Financial Explanations Session 1
- 13 February 2019 Medical Explanations Session 1
- 20 February 2019 Explainable AI Session 1
- 27 February 2019 First Synthesis Session
- 6 March 2019 Commonsense Session 2
- 13 March 2019 Narrative Session 2
- 20 March 2019 Financial Explanations Session 2
- 27 March 2019 Medical Explanations Session 2
- 3 April 2019 Explainable AI Session 2
- 10 April 2019 Explainable AI Session 3
- 17 April 2019 Medical Explanations Session 3
- 24 April 2019 Communiqué Development Session 1
- 1 May 2019 Communiqué Development Session 2
- 6 May 2019 Symposium Preparation
- 7 May 2019 Symposium Session 1
- 8 May 2019 Symposium Session 2
- 29 May 2019 Post-Mortem Session
There were 16 regular sessions, 3 synthesis development sessions, and 2 symposium sessions, with 18 invited speakers. Each session has proceedings (from the chat room) and a recording; one session has an audio recording, while the rest have video recordings. The following are the regular sessions, showing the speakers with links to their presentation slides (when provided) and the recordings.
Date | Speaker | Topic | Presentation | Recording |
---|---|---|---|---|
11/14 | John Sowa | Explanations and help facilities designed for people | Slides | Video |
11/28 | Ram D. Sriram and Ravi Sharma | Introductory Remarks on XAI | Slides | Video |
 | Derek Doran | Okay but Really... What is Explainable AI? Notions and Conceptualizations of the Field | Slides | |
12/05 | Gary Berg-Cross and Torsten Hahmann | Introduction to Commonsense Knowledge and Reasoning | Slides | Video |
1/16 | Ken Baclawski | Introductory Remarks | Slides | Video |
 | Gary Berg-Cross and Torsten Hahmann | Commonsense | Slides | |
 | Donna Fritzsche and Mark Underwood | Narrative | Slides | |
 | Mark Underwood and Mike Bennett | Financial Explanation | | |
 | Ram D. Sriram and David Whitten | Medical Explanation | | |
 | Ram D. Sriram and Ravi Sharma | Explainable AI | Slides | |
1/23 | Michael Grüninger | Ontologies for the Physical Turing Test | Slides | Video |
 | Benjamin Grosof | An Overview of Explanation: Concepts, Uses, and Issues | Slides | |
1/30 | Donna Fritzsche | Introduction to Narrative | | Audio only |
 | Ken Baclawski | Proof as Explanation and Narrative | Slides | |
 | Mark Underwood | Bag of Verses: Frameworks for Narration from Cognitive Psychology | Slides | |
2/6 | Mike Bennett | Financial Explanations Introduction | Slides | Video |
 | Mark Underwood | Explanation Use Cases from Regulatory and Service Quality Drivers in Retail Credit Card Finance | Slides | |
 | Mike Bennett | Financial Industry Explanations | Slides | |
2/13 | David Whitten | Introduction to Medical Explanation Systems | | Video |
 | Augie Turano | Review and Recommendations from past Experience with Medical Explanation Systems | Slides | |
 | Ram D. Sriram | XAI for Biomedicine | Slides | |
2/20 | William Clancey | Explainable AI Past, Present, and Future–A Scientific Modeling Approach | Slides | Video |
3/6 | Niket Tandon | Commonsense for Deep Learning | Slides | Video |
3/13 | Ilaria Tiddi | Building Intelligent Systems (That Can Explain) | Slides | Video |
 | Dennis Wuthrich | Arches: Using Ontologies to Protect Cultural Heritage | Slides | |
3/27 | Ugur Kursuncu and Manas Gaur | Explainability of Medical AI through Domain Knowledge | | Video |
4/3 | Giedrius Buračas | Deep Attentional Representations for Explanations - DARE | | Video |
4/10 | Sargur (Hari) Srihari | Explainable Artificial Intelligence: The Probabilistic Approach | Slides | Video |
4/17 | Arash Shaban-Nejad | Semantic Analytics for Global Health Surveillance | Slides | Video |
05/07 | Ken Baclawski | Welcome to the Symposium | Slides | Video |
 | Gary Berg-Cross | Commonsense Track | Slides | |
 | Janet Singer | Narrative Track | Slides | |
05/07 | Ken Baclawski | Panel Discussion | | Video |
Resource Pages
Suggested Themes
Potential themes are listed at OntologySummit2019/Theme.
Synthesis Page
The Communiqué will be based on OntologySummit2019/Synthesis. Individual tracks may have their own working synthesis, e.g., OntologySummit2019/CommonSenseTrackSynthesis.
Meeting Call and Connection Info
- Dates: Wednesdays
- Start Time: 9:00am PDT / 12:00pm EDT / 6:00pm CEST / 5:00pm BST / 1600 UTC
- Expected Call Duration: 1 hour
- The Video Conference URL is https://zoom.us/j/689971575
- iPhone one-tap:
  - US: +16699006833,,689971575# or +16465588665,,689971575#
- Telephone:
  - Dial (for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 558 8665
  - Meeting ID: 689 971 575
  - International numbers available: https://zoom.us/u/Iuuiouo
- Chat Room
Meetings
- ConferenceCall 2018 06 20
- ConferenceCall 2018 09 26
- ConferenceCall 2018 10 03
- ConferenceCall 2018 10 10
- ConferenceCall 2018 10 17
- ConferenceCall 2018 10 24
- ConferenceCall 2018 10 31
- ConferenceCall 2018 11 14
- ConferenceCall 2018 11 28
- ConferenceCall 2018 12 05
- ConferenceCall 2019 01 16
- ConferenceCall 2019 01 23
- ConferenceCall 2019 01 30
- ConferenceCall 2019 02 06
- ConferenceCall 2019 02 13
- ConferenceCall 2019 02 20
- ConferenceCall 2019 02 27
- ConferenceCall 2019 03 06
- ConferenceCall 2019 03 13
- ConferenceCall 2019 03 20
- ConferenceCall 2019 03 27
- ConferenceCall 2019 04 03
- ConferenceCall 2019 04 10
- ConferenceCall 2019 04 17
- ConferenceCall 2019 04 24
- ConferenceCall 2019 05 01
- ConferenceCall 2019 05 06
- ConferenceCall 2019 05 07
- ConferenceCall 2019 05 08
- ConferenceCall 2019 05 29
- ConferenceCall 2019 06 12
- ConferenceCall 2019 06 19