AI agents, i.e. computer programs that solve tasks on their own, have changed dramatically over recent decades. In the past they were built mainly on fixed expert knowledge encoded in so-called “ontologies”; I built one of my first (multi-)agent systems exactly this way, with ontologies and predefined rules. Today, agents increasingly learn directly from data. This article explains what that shift looks like and what it means for us [3][4][5].
What are “ontologies”?
Imagine you have a large reference book containing all the important terms and relationships on a specific topic, for example animals. This reference book does not just contain definitions; it also holds rules about how these animals are related to each other, what characteristics they have, and so on [3][4].
This is exactly what ontologies do in AI: they define how a computer program “understands” terms and the relationships between them. The advantage is that the program can “look up” this knowledge and draw logical conclusions, much like a person consulting a textbook. The disadvantage is that building such a reference work is laborious and time-consuming, and whenever something changes, the entries have to be updated by hand [4].
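To make this less abstract, here is a minimal, purely illustrative Python sketch of what “look up knowledge and draw logical conclusions” can mean in practice: a tiny hand-built animal ontology with an is-a hierarchy and inherited properties. All class names and properties below are made up for illustration; real ontologies are written in dedicated languages and tooled ecosystems such as OWL.

```python
# Minimal, hand-built "ontology" about animals: an is-a hierarchy plus
# properties asserted at each level. All names are illustrative only.

IS_A = {                      # subclass -> superclass
    "dog": "mammal",
    "cat": "mammal",
    "mammal": "animal",
    "sparrow": "bird",
    "bird": "animal",
}

PROPERTIES = {                # class -> facts asserted at that level
    "animal": {"alive": True},
    "mammal": {"has_fur": True, "lays_eggs": False},
    "bird": {"has_feathers": True, "lays_eggs": True},
    "dog": {"barks": True},
}

def lookup(cls, prop):
    """Walk up the is-a chain until some class asserts the property (simple inheritance)."""
    while cls is not None:
        if prop in PROPERTIES.get(cls, {}):
            return PROPERTIES[cls][prop]
        cls = IS_A.get(cls)
    return None  # not derivable from this ontology

print(lookup("dog", "lays_eggs"))   # False, inherited from "mammal"
print(lookup("sparrow", "alive"))   # True, inherited via "bird" -> "animal"
```

The “reasoning” here is nothing more than following the hand-written hierarchy, which is exactly why such systems are explainable but also why every new fact has to be entered by a human.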
How do today’s AI agents work?
Modern AI agents, for example chatbots or image-recognition programs, work differently: they are given huge amounts of data (“big data”) and automatically learn to recognize patterns in it [1][2][6]. A language model like ChatGPT is trained on billions of words of text and, in the process, learns how language is typically structured. That lets it answer questions or even write poetry without a single rule or word having been programmed in by hand [2][7].
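As a toy illustration of what “learning patterns from data” means, the sketch below builds the simplest possible language model: it counts which word follows which in a tiny example corpus and then generates text from those counts. Real systems like ChatGPT are vast neural networks trained on billions of words, but the basic idea of deriving statistics from text instead of hand-writing rules is the same; everything here is a deliberately simplified stand-in.

```python
import random
from collections import defaultdict, Counter

# Tiny corpus standing in for "big data"; real models train on billions of words.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# "Training": count which word tends to follow which (a bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

# "Generation": repeatedly sample a likely continuation of the current word.
word, output = "the", ["the"]
for _ in range(6):
    candidates = following[word]
    if not candidates:
        break
    word = random.choices(list(candidates), weights=list(candidates.values()))[0]
    output.append(word)

print(" ".join(output))   # e.g. "the dog sat on the mat ."
```

Nobody told this program what a “cat” or a “mat” is; it only reproduces statistical regularities it found in the data, which is both the strength and the weakness of the approach.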
What are the pros and cons of these new approaches?
- Pros:
- Flexible: Data-driven AI can often be used in many areas without having to program everything from scratch [2][6].
- Adaptable: When the world changes (e.g. new facts or events), you can feed the AI new data and it will adapt [2].
- Cons:
- Opaque: We often cannot say exactly how the AI arrives at its results, because its answers emerge from statistical patterns in enormous amounts of data rather than from explicit, inspectable rules [9].
- Susceptible to bad data: If the training data is not good – for example, incomplete or biased – the system can deliver false or even discriminatory results (see the small sketch after this list) [10][14].
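The following deliberately simplified sketch shows how a data-driven system can inherit bias straight from its training data: a naive “model” that predicts the most common historical outcome per group simply reproduces whatever skew was in the history. The groups and numbers are entirely synthetic and serve only to illustrate the point.

```python
from collections import Counter

# Skewed "historical" training data: group A was approved far more often than
# group B, for reasons unrelated to merit. Purely synthetic, for illustration.
history = [("A", "approved")] * 90 + [("A", "rejected")] * 10 \
        + [("B", "approved")] * 30 + [("B", "rejected")] * 70

# A naive data-driven "model": predict the historically most common outcome per group.
outcomes = {}
for group, outcome in history:
    outcomes.setdefault(group, Counter())[outcome] += 1

def predict(group):
    return outcomes[group].most_common(1)[0][0]

print(predict("A"))  # "approved"
print(predict("B"))  # "rejected" -- the bias in the data is simply reproduced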
Is it possible to combine the best of both worlds?
Many researchers and companies are currently trying to combine the old and the new: they want to use structured expert knowledge for better explainability while keeping the flexibility of modern, data-driven AI. The result is so-called “hybrid” systems, which draw on both ontologies and large amounts of data [11][12].
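To give a feel for the idea, here is a miniature, purely hypothetical hybrid agent: it first consults a curated knowledge base (explainable, but limited) and only falls back to a statistical guess (flexible, but opaque) when the knowledge base has no answer. Both components are trivial stand-ins; real hybrid systems couple ontologies or knowledge graphs with large learned models [11][12].

```python
# Illustrative hybrid agent: answer from curated knowledge when possible,
# otherwise fall back to a learned, statistical guess. Both parts are stand-ins.

KNOWLEDGE_BASE = {            # structured expert knowledge (the "ontology" side)
    ("penguin", "can_fly"): False,
    ("penguin", "is_a"): "bird",
}

def statistical_guess(entity, prop):
    # Stand-in for a learned model: birds usually fly in the training data,
    # so a purely data-driven component would likely answer True here.
    return True if prop == "can_fly" else None

def hybrid_answer(entity, prop):
    if (entity, prop) in KNOWLEDGE_BASE:
        return KNOWLEDGE_BASE[(entity, prop)], "from knowledge base (explainable)"
    return statistical_guess(entity, prop), "from learned model (statistical guess)"

print(hybrid_answer("penguin", "can_fly"))   # (False, 'from knowledge base (explainable)')
print(hybrid_answer("sparrow", "can_fly"))   # (True, 'from learned model (statistical guess)')
```

The curated answer overrides the statistical one where expert knowledge exists, which is exactly the explainability benefit such hybrid designs aim for.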
What does this mean for our future?
- Intelligent assistants: An increasing number of systems will help us in our daily lives, whether it’s shopping, learning or controlling machines [13].
- More transparency: The combination of clearly defined knowledge and learning systems makes it possible to better understand and control AI [9][12].
- Novel applications: From medicine to space travel – AI agents could offer new, previously impossible solutions [6][13].
AI agents have evolved from rigid, rule-based systems into flexible, data-driven models. Current developments combine the best of both worlds to make future AI systems safer, more understandable and technically more powerful [9][12][13].
References
- Goodfellow, I., Bengio, Y. & Courville, A. (2016). Deep Learning. MIT Press.
- Brown, T. B. et al. (2020). Language Models are Few-Shot Learners. In: Advances in Neural Information Processing Systems.
- Brachman, R. J. & Levesque, H. J. (2004). Knowledge Representation and Reasoning. Morgan Kaufmann.
- Gruber, T. R. (1993). A translation approach to portable ontology specifications. Knowledge Acquisition, 5(2), 199–220.
- Berners-Lee, T., Hendler, J. & Lassila, O. (2001). The Semantic Web. Scientific American, 284(5), 28–37.
- LeCun, Y., Bengio, Y. & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436–444.
- Devlin, J. et al. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In: NAACL-HLT 2019.
- Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5), 206–215.
- Domingos, P. (2015). The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World. Basic Books.
- Various conference papers on retrieval-augmented generation (e.g. Facebook AI Research, Microsoft Research, Google DeepMind).
- Garcez, A. d., Lamb, L. C. & Gabbay, D. M. (2008). Neural-Symbolic Cognitive Reasoning. Springer.
- Russell, S. & Norvig, P. (2010). Artificial Intelligence: A Modern Approach. 3rd ed., Pearson.
- Floridi, L. (2019). Translating principles into practices of digital ethics: AI as a case in point. Philosophy & Technology, 32(2), 185–209.