Trent N. Cash

Incoming Postdoctoral Scholar at the University of Waterloo

Quantifying Uncert-AI-nty: Testing the Accuracy of LLMs’ Confidence Judgments


Accepted Pending Minor Revisions


Trent N. Cash, Daniel M. Oppenheimer, Sara Christie, Mira Devgan


Cite

APA
Cash, T. N., Oppenheimer, D. M., Christie, S., & Devgan, M. (n.d.). Quantifying Uncert-AI-nty: Testing the accuracy of LLMs’ confidence judgments. https://doi.org/10.31234/osf.io/47df5_v3


Chicago/Turabian
Cash, Trent N., Daniel M. Oppenheimer, Sara Christie, and Mira Devgan. “Quantifying Uncert-AI-nty: Testing the Accuracy of LLMs’ Confidence Judgments” (n.d.). https://doi.org/10.31234/osf.io/47df5_v3.


MLA
Cash, Trent N., et al. Quantifying Uncert-AI-nty: Testing the Accuracy of LLMs’ Confidence Judgments. doi:10.31234/osf.io/47df5_v3.


BibTeX

@article{trent-a,
  title = {Quantifying Uncert-AI-nty: Testing the Accuracy of LLMs’ Confidence Judgments},
  doi = {10.31234/osf.io/47df5_v3},
  author = {Cash, Trent N. and Oppenheimer, Daniel M. and Christie, Sara and Devgan, Mira}
}

Abstract

The rise of Large Language Model (LLM) chatbots, such as ChatGPT and Gemini, has revolutionized how we access information. These LLMs can answer a wide array of questions on nearly any topic. When humans answer questions, especially difficult or uncertain ones, they often accompany their responses with metacognitive confidence judgments indicating their belief in their own accuracy. LLMs are certainly capable of providing confidence judgments, but it is currently unclear how accurate those judgments are. To fill this gap in the literature, the present studies investigate the capability of LLMs to quantify uncertainty through confidence judgments. We compare the absolute and relative accuracy of confidence judgments made by four LLMs (ChatGPT, Bard/Gemini, Sonnet, Haiku) and human participants in domains of aleatory uncertainty (NFL predictions, Study 1, n = 502; Oscar predictions, Study 2, n = 109) and domains of epistemic uncertainty (Pictionary performance, Study 3, n = 164; trivia questions, Study 4, n = 110; questions about life at a university, Study 5, n = 110). We find several commonalities between LLMs and humans, such as similar levels of absolute and relative metacognitive accuracy (although LLMs tend to be slightly more accurate on both dimensions). We also find that, like humans, LLMs tend to be overconfident. However, unlike humans, LLMs, especially ChatGPT and Gemini, often fail to adjust their confidence judgments based on past performance, highlighting a key metacognitive limitation.