Session: Broader thoughts
Duration: 1 hour
Date/Time: 8 Nov 2023 17:00 GMT
- 9:00am PST / 12:00pm EST
- 5:00pm GMT / 6:00pm CET
Convener: Andrea Westerinen and Mike Bennett
Agenda
- Anatoly Levenchuk, strategist and blogger at Laboratory Log
- Knowledge graphs and large language models in cognitive architectures
- This talk discusses styles of definitions for knowledge graphs (KGs) combined with large language models (LLMs). KG architectures and systems are reviewed, drawing on the Ontolog Forum's 2020 Communiqué. A framework is proposed for a cognitive architecture that uses both LLMs and KGs for the evolution of knowledge during 4E (embodied, extended, embedded, enacted) cognition. In this framework, ontologies are understood as answers to the question "What is in the world?" and can be found in representations that vary across a spectrum of formality and rigor. An example is given from the use of ontology engineering training in management, where upper-level ontologies are presented to students as informal course texts (with the goal of obtaining a fine-tuned LLM within the "neural networks" of students' brains), coupled with lower-level ontologies that are more formal (such as data schemas for databases and knowledge graphs).
- Anatoly Levenchuk has worked as a strategy consultant for more than 30 years, helping many government agencies and large companies define their vision and strategy. He is now the science head of Aisystant, a school of engineering and management. His first machine learning project was in 1977, and his first ontology engineering project in 1980. He is the author of several textbooks on systems thinking, methodology, systems engineering, systems management, natural and artificial intelligence, and education as "person engineering". His blog "Laboratory Log" (http://ailev.lievjournal.ru, in Russian) has more than 3,000 subscribers.
- Slides
- Arun Majumdar and John Sowa, Permion AI
- Trustworthy Computation: Diagrammatic Reasoning With and About LLMs
- Large language models (LLMs) were designed for machine translation (MT). Although LLM methods cannot do any reasoning by themselves, they can often find and apply reasoning patterns drawn from the vast resources of the WWW. For common problems, they frequently find a correct solution. For more complex problems, they may construct a solution that is partially correct for some applications but disastrously wrong, or even hallucinatory, for others. Systems developed by Permion use LLMs for what they do best, but combine them with precise and trusted methods of diagrammatic reasoning based on conceptual graphs (CGs). They take advantage of the full range of technology developed over 60+ years of AI, computer science, and computational linguistics. For any application, Permion's methods derive an ontology tailored to the policies, rules, and specifications of the project or business. All programs and results they produce are guaranteed to be consistent with that ontology.
- John's Slides
- Arun's Slides
Conference Call Information
- Date: Wednesday, 8 November 2023
- Start Time: 9:00am PST / 12:00pm EST / 6:00pm CET / 5:00pm GMT / 1700 UTC
- Note that Daylight Saving Time has ended in Europe, Canada, and the US.
- ref: World Clock
- Expected Call Duration: 1 hour
- Video Conference URL: https://bit.ly/48lM0Ik
- Conference ID: 876 3045 3240
- Passcode: 464312
The unabbreviated URL is:
https://us02web.zoom.us/j/87630453240?pwd=YVYvZHRpelVqSkM5QlJ4aGJrbmZzQT09
Participants
Discussion
Resources
Previous Meetings
Next Meetings