Henry Shevlin (Cambridge): ChatGPT, anthropomorphism, and the gap between folk psychology and cognitive science
Tue, 13 Feb | Room G37
Time & Location
13 Feb 2024, 17:00 – 18:30
Room G37, Senate House, University of London, Malet St, London WC1E 7HU, UK
Abstract
Even as the general public is marvelling at the capabilities of ChatGPT and businesses are scrambling to incorporate it into their workflows, the academic commentariat is deeply divided over how to characterise the cognitive capabilities of Large Language Models. In this talk, I'll suggest that, far from being mere stochastic parrots, LLMs can display sophisticated and creative reasoning across a wide range of domains. I'll go on to consider whether this warrants ascribing to them psychological capacities such as thoughts, beliefs, understanding, or even consciousness. While there is, of course, vast disagreement across philosophy and cognitive science about how to analyse these terms, I argue that shallow theories of mental attitudes, such as those of Ryle, Dennett, and Schwitzgebel, naturally tend towards endorsing literal ascriptions of at least some mental states to large language models. By contrast, I suggest that deep theories of mental attitudes, such as those of Fodor, Dretske, and Mandelbaum, face a potent challenge from the folk's growing tendency to readily and constructively interpret LLMs in psychological terms. In particular, I argue that if social forms of AI such as Replika become commonplace and are widely treated as psychological agents by the general public, deep theorists of the mind risk growing irrelevance.