OntologySummit2013: (Track-D) "Software Environments for Evaluating Ontologies" - Synthesis
Track Co-champions: Michael Denny, Ken Baclawski & Peter P. Yim
Mission Statement:
Through this track, we aim to coordinate the following:
- provide a venue that brings together individuals and communities who can help define and advance the state of the art in software and systems for evaluating ontologies
- collect and enumerate software environments and tools for evaluating ontologies (with emphasis on those that are open efforts and those that are publicly available)
- pursue investigations and development work (software prototyping and implementation) focused on the ontology evaluation theme, leading to interim presentations at the symposium, and possibly continuing after this Ontology Summit ... (this bullet, which was on our original mission statement, is now handled by the Hackathon-Clinics Activities champions - see: OntologySummit2013_Hackathon_Clinics)
see also: OntologySummit2013_Software_Environments_For_Evaluating_Ontologies_CommunityInput
Work-products from this Track:
- Our approach:
- Introduction of our track mission and approach - slides from the Ontology Summit 2013 Launch Event on 2013.01.17
- Introduction of our approach to the survey on software support to ontology quality and fitness - slides from the Ontology Summit 2013 Synthesis-I session on 2013.02.21
- The two panel discussion sessions in which we invited stewards of some exemplary ontology software tools and environments to share their work, experience, and insights ...
- 2013_02_14 - Thursday: Ontology Summit 2013 session-05: "Software Environments for Evaluating Ontologies - I" - Co-chairs: Peter P. Yim & Michael Denny - Panelists: Michael Grüninger, Jeanne Holm, Gavin Matthews - ConferenceCall_2013_02_14
- 2013_03_21 - Thursday: Ontology Summit 2013 session-10: "Software Environments for Evaluating Ontologies - II" - Co-chairs: Michael Denny & Peter P. Yim - Panelists: Adam Pease, Till Mossakowski, Tania Tudorache, Michel Dumontier, Kingsley Idehen - ConferenceCall_2013_03_21
- The Survey on "Software Support for Ontology Quality and Fitness" - see: http://ontolog-02.cim3.net/wiki/Category:OntologySummit2013_Survey
- we also provided support to the Hackathon-Clinics program team ...
- at their introduction - slides (also from the Ontology Summit 2013 Synthesis-I session on 2013.02.21)
- and launch - slides from the Ontology Summit 2013 Hackathon-Clinics Program Launch session on 2013.02.28
- Lastly, some thoughts and insights gathered through the course of the Track-D discourse - slides from the Ontology Summit 2013 Synthesis-II session on 2013.04.04
The following is an initial input from Track D for the Summit Communique. Not all of these points will necessarily be addressed and included; they are provided for comment.
Track D, "Software Environments for Evaluating Ontologies", falls within the current Communique outline under:
C. The State of the Art of Ontology Evaluation (4) What tool-support is currently available to support the evaluation of the characteristics (identified in C-2) and the best practices (identified in C-3)?
Within this vein, some preliminary Track D concepts that may be developed for inclusion in the Summit communique are, in no special order:
- The notion of tool support for quality is broader than the track's title suggests and should include "guidance" as well as "evaluation" of those ontology characteristics that determine an ontology's quality and fitness. Ontology tools and software environments may intentionally constrain the user to, or recommend, proper ontology structure and content.
- Tools may contribute this "evaluation" or "guidance" function at different points along the ontology life cycle, and for a given characteristic, some tools may perform better in one life cycle phase while a different tool is better suited to another. Generally, appreciation of the full life cycle of an ontology is not well established within the ontology community.
- There are central aspects of ontology that may not be amenable to software control or assessment. For example, the need for clear, complete, and consistent lexical definitions of ontology terms is not presently subject to software consideration beyond identifying where lexical definitions are missing entirely (see the first sketch following this list). Another area of quality difficult for software to determine is the semantic fitness of an ontology to its world domain (reality) or to its application domain. Software guidance may be available for the fitness of candidate ontologies for import and reuse, but not for the novel content of a new ontology.
- The design, implementation, and use requirements of an ontology may affect how quality and fitness on a particular ontology characteristic are determined, as well as how they are interpreted and valued. Perhaps all quality and fitness assessments by software should be traceable to stated ontology requirements.
- Significant new ontology evaluation tools are currently becoming available to users. Forging a link between such tools and existing IT architecture and design tools (e.g., EA and SA) remains a future possibility for integrating ontology into mainstream application software development within enterprise or more focused IT environments. This capability could offer a definitive means of connecting ontology quality/fitness characteristics and measures to use case and application software requirements.
- Approximate lexical and structural matching of a new ontology or ontology component against the content of a repository of known ontologies may offer an effective means of identifying comparable ontology content (see the second sketch following this list) for:
- 1) demonstrable coding patterns;
- 2) confirmation of authoring approach; and
- 3) identification of reuse candidates.
- Given sufficient results from the Ontology Quality Software Survey, an assessment of the degree to which current tool capabilities align with the ontology quality priorities expressed by Tracks A-C.
- Discoveries about the state of ontology evaluation stemming from the Hackathon and Clinic experiences.
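To make the missing-definition point above concrete, here is a minimal sketch of the kind of check software can already perform. It is written against the rdflib Python library; the input file name and the choice of rdfs:comment and skos:definition as the definition-bearing properties are illustrative assumptions, not a prescription.

    # A minimal sketch, assuming rdflib is installed and that lexical
    # definitions are recorded as rdfs:comment or skos:definition
    # (the file name and both property choices are assumptions).
    from rdflib import Graph
    from rdflib.namespace import RDF, RDFS, OWL, SKOS

    g = Graph()
    g.parse("example-ontology.ttl")  # hypothetical input file (Turtle)

    for cls in g.subjects(RDF.type, OWL.Class):
        # Graph.value() returns None when no matching triple exists
        if all(g.value(cls, p) is None for p in (RDFS.comment, SKOS.definition)):
            print(f"No lexical definition found for: {cls}")

Note that a check like this can only report that a definition is absent; judging whether a present definition is clear, complete, and consistent remains outside current software consideration, as the bullet states.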
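Similarly, the lexical half of the approximate-matching idea can be illustrated with a small comparison of term labels. The sketch below uses Python's standard difflib as a stand-in for whatever string-similarity measure a real matching tool would apply; the label lists and the 0.8 threshold are illustrative assumptions.

    # A minimal sketch of approximate lexical matching between the labels
    # of a new ontology and labels harvested from a repository of known
    # ontologies. difflib is a stand-in similarity measure (an assumption).
    from difflib import SequenceMatcher

    def best_matches(new_labels, repository_labels, threshold=0.8):
        """Return (new_label, known_label, score) triples above the threshold."""
        matches = []
        for new in new_labels:
            for known in repository_labels:
                score = SequenceMatcher(None, new.lower(), known.lower()).ratio()
                if score >= threshold:
                    matches.append((new, known, score))
        return sorted(matches, key=lambda m: -m[2])

    # Hypothetical labels: a draft ontology vs. a known repository
    new_terms = ["HydroelectricPlant", "PowerStation"]
    known_terms = ["HydroElectricPlant", "PowerPlant", "WindFarm"]
    for new, known, score in best_matches(new_terms, known_terms):
        print(f"{new} ~ {known} (similarity {score:.2f})")

Structural matching (comparing how terms are axiomatized, not just how they are named) would require graph- or logic-based comparison beyond this lexical sketch.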
-- maintained by the Track-D co-champions: Mike Denny, Ken Baclawski & Peter P. Yim ... please do not edit