Since the release of ChatGPT, generative artificial intelligence (AI) has been on everyone's lips. Behind it stand numerous large language models (LLMs), all of which are based on the principles of natural language processing and on the so-called transformer architecture, first presented by Google Brain and Google Research in 2017.
This transformer architecture has not only significantly improved natural language processing; it also enables computers to translate languages better than before, to understand existing texts in greater detail, and even to generate new texts.
In this lecture, we will therefore embark on a kind of search for clues: starting from the text generated by an LLM, we will follow the path back into the deep neural networks of generative AI via a look at the transformer architecture, in order to uncover false information and even hallucinations of LLMs concerning people and societies.
Look forward to a lecture by Prof. Dr. Dennis Klinkhammer of FOM University of Applied Sciences, including a Q&A session and hands-on AI: you can get your hands on a neural network, the foundation of AI, and train it yourself.
The event is part of the "Science in Cologne's Houses" series.
Registration at: https://koelner-wissenschaftsrunde.de/kwr_termine/wie-generative-ki-lernt/