Description

A recurring view is that AI and Deep Learning will make humans redundant
in a wide range of tasks, even including science. In 2008, Chris Anderson
proclaimed the end of theory, arguing for the possibility of automating science
and eliminating human experts. However, the redundancy of human expertise in
scientific inquiry has been questioned by many philosophers of science. For ex-
ample, Leonelli (2016) showed in detail how elaborate human efforts are involved
in the creation of essential databases in the biological sciences, and Hansen &
Quinon (2023) showed how expert knowledge is involved in several places in
classical scientific applications of Deep Learning, such as protein folding.
With the emergence of chatbots based on large language models (LLMs),
such as ChatGPT (OpenAI, 2022), one may again question whether humans
are necessary for the advancement of science. After all, chatbots seem to be
largely independent in their search for knowledge and creative responses to
prompts. In this talk, however, we argue that expert knowledge remains an
essential part of any scientific discovery supported by an LLM. We provide
a detailed analysis of how human contributions are incorporated into modern
chatbots based on LLMs. For example, LLMs are trained on large human-
generated datasets and require considerable expertise in machine learning to
train. Furthermore, an essential part of training a chatbot such as ChatGPT is
Reinforcement Learning from Human Feedback (RLHF; OpenAI, 2022). Based
on human ratings of various sample prompts and responses, RLHF was used to
train a reward model, which in turn was used to adapt ChatGPT. Based on this
analysis, we provide a classification of the different types of expert knowledge
involved in the application of LLMs in scientific contexts, and we address the
ramifications of a human-driven expert knowledge approach to AI for scientific
practice.
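The RLHF step described above can be illustrated in miniature. The following is a toy sketch, not OpenAI's actual pipeline: responses are reduced to small feature vectors, simulated human raters express pairwise preferences, and a linear reward model is fit with the Bradley-Terry pairwise loss commonly used in RLHF reward modeling. All names and dimensions here are illustrative assumptions.

```python
import math
import random

# Toy RLHF reward-model sketch (illustrative assumptions throughout):
# each "response" is a small feature vector, and a hidden preference
# direction plays the role of the human raters.
random.seed(0)

DIM = 4
true_w = [1.0, -2.0, 0.5, 3.0]  # hidden "human preference" direction

def score(w, x):
    # Linear reward: dot product of weights and response features.
    return sum(wi * xi for wi, xi in zip(w, x))

def make_pair():
    # Simulate a human comparison: the response with the higher
    # true score is the one the rater prefers.
    a = [random.gauss(0, 1) for _ in range(DIM)]
    b = [random.gauss(0, 1) for _ in range(DIM)]
    return (a, b) if score(true_w, a) > score(true_w, b) else (b, a)

pairs = [make_pair() for _ in range(500)]  # (preferred, rejected)

# Fit the reward model by gradient descent on the Bradley-Terry loss
# -log sigmoid(r(preferred) - r(rejected)).
w = [0.0] * DIM
lr = 0.1
for _ in range(200):
    for pref, rej in pairs:
        margin = score(w, pref) - score(w, rej)
        g = 1.0 / (1.0 + math.exp(-margin)) - 1.0  # d(loss)/d(margin)
        for i in range(DIM):
            w[i] -= lr * g * (pref[i] - rej[i])

# Fraction of human comparisons the learned reward model reproduces.
agree = sum(score(w, p) > score(w, r) for p, r in pairs) / len(pairs)
print(f"agreement with human preferences: {agree:.2f}")
```

In the full pipeline, such a reward model is then used to fine-tune the chatbot's policy; the point relevant to the argument is that the reward signal itself is distilled entirely from human expert judgments.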
In particular, we focus on the role of expertise after RLHF has been
used. By doing so, we aim to counter the argument that expert knowledge may
have been important in the beginning, but now that we have well-trained LLMs,
human expertise is no longer needed. We argue that such optimistic claims, just
like those made earlier in the context of AI and Deep Learning, will turn out
not to be true. We extend our argument to suggest that it is unlikely that we
will ever escape the need for human expert knowledge in science.
Period: 12 Apr 2024
Event title: ML, Explain Yourself!
Event type: Conference
Location: Utrecht, Netherlands
Degree of recognition: International

Keywords

  • Artificial Intelligence
  • Large Language Model
  • Data Science
  • Philosophy of Science
  • Expert Knowledge