Workshop: Explanation and Understanding

May 19-20, 2016

Aarhus University, Denmark 

In both science and philosophy, good theories provide good explanations. Views on explanation diverge strongly. Explanation has been held to be brought about by subsumption under laws of nature, by unification, or by description of causes. Regardless, explanation is generally agreed to be an epistemic concern. Not so understanding. It was long ignored or thought to simply follow from explanation. More recently, some have defended understanding as a more independent phenomenon worthy of investigation. This workshop will discuss some of the latest developments in the debate and possible ways of advancing it.

*** See below for the programme and abstracts ***

Keynote speakers:

  • Michael Strevens (NYU)
  • Henk de Regt (VU Amsterdam)
  • Kareem Khalifa (Middlebury College)
  • Richard Dawid (Stockholm)

Commentators:

  • Asbjoern Steglich-Petersen (AU)
  • Jan Faye (KU)
  • Eline Busck Gundersen (AU)
  • Samuel Schindler (AU)

Contributed speakers:

  • Insa Lawler (University of Duisburg-Essen)
  • Finnur Dellsen (UC Dublin)
  • Mark Newman (Rhodes College)
  • Benjamin Rancourt (University of Massachusetts)

The event is hosted by the Centre for Science Studies, Department of Mathematics, Aarhus University, and funded by the Danish Network in Philosophy of Science (PI: Hanne Andersen), the DFF Sapere Aude project “Intuitions in Science and Philosophy” (PI: Samuel Schindler), and the research programme for philosophy and intellectual history at Aarhus University.

For questions please contact the organiser at


Final schedule

Thursday, May 19th, room D4 in the Mathematics Department (1531-219) 

From 8:45
Coffee and registration

Welcome and introduction

9:30 – 11:00
Keynote by Michael Strevens (NYU): Causal Understanding versus Predictive Know-How (45 min)
Commentary by Eline Busck Gundersen (CSMN, Oslo) (15 min)
Discussion (30 min)
Chair: Samuel Schindler

11:00 – 11:30
Coffee break

11:30 – 12:30
Contributed paper by Insa Lawler (University of Duisburg-Essen): How do model-based explanations provide understanding-why? (max. 45 min + 15 min discussion)
Chair: Samuel Schindler

12:30 – 14:00
Lunch at the math cafeteria

14:00 – 15:30
Keynote by Kareem Khalifa (Middlebury College): Understanding, Idealization, and (Just Enough) Truth (45 min)
Commentary by Asbjoern Steglich-Petersen (AU) (15 min)
Discussion (30 min)
Chair: Eline Busck Gundersen

15:30 – 16:00
Coffee break

16:00 – 17:00
Contributed paper by Benjamin Rancourt (University of Massachusetts): Understanding: The Art of Cognitive Management (max. 45 min + 15 min discussion)
Chair: Eline Busck Gundersen

Dinner at tapas bar CANblau


Friday, May 20th, room D2 in the Mathematics Department (1531-119)

9:30 – 11:00
Keynote by Richard Dawid (University of Stockholm): Understanding, String Dualities, and Empirical Equivalence (45 min)
Commentary by Samuel Schindler (AU) (15 min)
Discussion (30 min)
Chair: Jan Faye

11:00 – 11:30
Coffee break

11:30 – 12:30
Contributed paper by Finnur Dellsen (University College Dublin): Understanding Beyond Explanation (max. 45 min + 15 min discussion)
Chair: Jan Faye

12:30 – 14:00
Lunch at the math cafeteria

14:00 – 15:00
Contributed paper by Mark Newman (Rhodes College): Factivity and Degrees of Understanding on the Inferential Theory (max. 45 min + 15 min discussion)
Chair: Asbjoern Steglich-Petersen

15:00 – 15:30
Coffee break

15:30 – 17:00
Keynote by Henk de Regt (VU Amsterdam): How false theories can yield genuine understanding (45 min)
Commentary by Jan Faye (KU) (15 min)
Discussion (30 min)
Chair: Asbjoern Steglich-Petersen

Closing remarks



Keynotes

> Michael Strevens: Causal Understanding versus Predictive Know-How

What distinguishes mere knowledge of causes from genuine causal understanding? To answer this question, some writers have developed an idea from the literature on causal explanation, proposing that deep understanding (likewise deep explanatory knowledge) consists in an ability to answer “what if things had been different” questions about affairs in the vicinity of the explanandum. This paper argues that, although there is a connection, causal understanding and the ability to answer “what if things had been different” questions frequently come apart. Thus, understanding is neither constituted by the question-answering ability nor even typically proportional to that ability. A sketch of an alternative conception of causal understanding is given.     

> Kareem Khalifa: Understanding, Idealization, and (Just Enough) Truth

The goals of this paper are twofold. First, I will present my own model of understanding. The core idea is that understanding is a function of scientific knowledge of an explanation. Second, I will consider, and then rebut, an objection to this model. The objection holds that because idealizations are false, yet still provide understanding, understanding and explanatory knowledge cannot be as tightly linked as I propose. I shall argue that the objection is unsound because of its overly restrictive notion of explanatory knowledge.

> Richard Dawid: Understanding, String Dualities, and Empirical Equivalence

String dualities constitute a specific form of empirical equivalence in physics. One may argue that, after a century when empirical equivalence was primarily of interest to philosophy of science, the rise of duality in string physics marks the first time that empirical equivalence takes centre stage in physics itself. The paper will make the case that the philosophical repercussions of string dualities are in fact directly opposed to the way the significance of empirical equivalence was understood throughout most of the 20th century in philosophy of science. One way to view that change is in terms of understanding: considering empirically equivalent theories becomes essential for understanding.

> Henk de Regt: How false theories can yield genuine understanding

We attack the traditionally accepted view that a criterion of representational veridicality is a necessary condition for scientific understanding. To replace this ‘veridicality condition’, we propose an effectiveness condition on understanding: understanding requires representational devices that are scientifically effective; where scientific effectiveness is the tendency to produce useful scientific outcomes such as correct predictions, successful practical applications and fruitful ideas for further research. We illustrate our claims using three case studies: phlogiston theory versus oxygen theory for understanding of chemical phenomena; Newton’s theory of gravitation versus Einstein's general theory of relativity; and fluid models of energy and electricity in science education.


Contributed papers

> Insa Lawler: How do model-based explanations provide understanding-why?

Intuitively, understanding why p requires believing a correct explanation for why p. Yet this thought is shaken by the fact that scientists employ idealizing models, like the ideal gas model, for understanding some phenomena. As Elgin (2004, 2007), Reiss (2012), and de Regt (2015) highlight, model-based explanations seem to foster understanding-why even though idealizations are false. To deal with this puzzle, one could deny that understanding-why is gained, or one could relax the truth requirement on understanding-why (Elgin 2004, de Regt 2015). Alternatively, one could resolve it by showing that successful model-based explanations are veridical. I evaluate three proposals. First, I argue that the attempt to analyze model-based explanations as how-possibly explanations (e.g., Grüne-Yanoff 2009, 2013; Psillos 2011; Rohwer & Rice 2013) does not succeed, because how-possibly explanations are not proper explanations for why p. Second, I argue that the attempt to analyze model-based explanations as according-to-the-model explanations (e.g., van Riel 2015) does not resolve the puzzle either. Such explanations do not provide understanding-why, but understanding of what an explanation for p is according to some model. Third, I provide a solution by arguing that idealizations are explanatorily decisive neither for explanations of phenomena with a model-based law nor for explanations (i.e., derivations) of such laws. In the first case, only simplifying idealizations play a role; these are rendered true by employing an ‘approximately’ operator. In the second case, the idealizations are not explanatorily decisive: the veridical counterparts of idealizations would yield a derivation of something close to the model-based law with the same explanatory power, as Strevens shows (2008, 2016). Lastly, I argue that this proposal does not render idealizations epistemically idle.
As Elgin points out, they facilitate epistemic access to crucial features of the phenomenon to be explained (2004, 2007). Moreover, they tell us which factors are explanatorily negligible, as Strevens highlights (2008, 2016). Altogether, the outlined account provides an answer to the puzzle from idealization without diminishing the epistemic value of idealizations.

> Benjamin Rancourt: Understanding: The Art of Cognitive Management

There is a difference between someone who knows facts about physics based on memorizing Wikipedia articles and someone who genuinely understands physics. I argue that the explanation for this difference is found in the requirements of cognitive success in inquiry. Inquiry is the process of raising and answering questions. To succeed in inquiry, a cognitive system requires the ability to find information relevant to the questions it needs to answer and exploit that information in relevant ways to answer those questions. I argue in favor of a novel account of understanding: one understands a subject matter just in case one has the ability to extract and exploit relevant information to answer questions in that subject matter. A person who understands physics has the ability to recognize which facts are relevant to answering questions in physics, and has the ability to recognize what to do with that information to reach the answers to those questions. Mere knowledge of facts cannot provide this skill. This account explains the close connection between understanding and explanation: Explanations cite features relevant to answering questions in the related subject matter. It explains the role that understanding plays in education: we aim to get students to understand because we want them to have the skills to determine what is relevant to answer questions on their own. It explains the role of understanding in expertise: we value an expert's understanding because it allows them to identify relevant issues regarding questions asked. Further, it explains the appeal of previously offered accounts of understanding, including both accounts based on explanation (e.g. the accounts of Kitcher, Salmon, Woodward, Khalifa, and Strevens) and accounts that treat understanding as independent of explanation (e.g. the accounts of Kvanvig, Grimm, de Regt, Dieks, Elgin, Boylu, and Wilkenfeld).

> Finnur Dellsen: Prediction, Explanation, and Non-Intellectual Understanding

Understanding is standardly associated with explanation: To understand some phenomenon, it is argued, is to have an explanation of it. Given the current consensus that explanation and prediction are distinct achievements, it follows from the standard view that prediction as such plays no role in scientific understanding. Specifically, the implication is that the scientist who understands a phenomenon need not be able to predict its behavior, nor does her ability to reliably predict something imply that she possesses any understanding of it. This is a corollary of most current accounts of understanding, which I will collectively refer to as *intellectualist* accounts. This paper argues against this conception of understanding and presents a *non-intellectualist* alternative on which both explanation and prediction are important elements of understanding. On this view, an ability to predict may constitute partial understanding of a phenomenon, and a full understanding of something involves being able to predict its behavior. I argue for this by showing that partial scientific understanding can be correctly attributed in cases where (i) it is an open question whether a correct explanation has been identified, (ii) the alleged explanation is not in fact correct, and (iii) there is arguably no correct explanation for the phenomenon at all. In all cases, the degree of understanding that I argue is present is most plausibly attributed to an ability to reliably predict the behavior of the phenomenon in question. I conclude by briefly considering a somewhat unexpected benefit of the view, viz. that understanding emerges as a plausible explicans of scientific progress. Scientific progress is commonly taken to involve both an intellectual achievement such as explanation, and a more practical achievement such as prediction. Thus, if understanding involves both explanation and prediction, a simple understanding-based account of scientific progress looks plausible.

> Mark Newman: Factivity and Degrees of Understanding on the Inferential Theory

A traditionally accepted view is that in order for science to provide understanding of a phenomenon, the phenomenon must be explained with at least approximately true descriptions. This is the requirement of factivity. Not all philosophers accept factivity, arguing instead that it is clear from both the history of science and its contemporary theories that we often acquire understanding even when our explanations and models include falsehoods. Some philosophers defend factivity, but a clear and convincing account has yet to emerge. Henk de Regt rejects factivity, arguing that understanding is contextual and pragmatic; it amounts only to successful predictions, practical applications, and fruitful ideas for further research. As long as these are achieved using an intelligible theory, a subject can be said to understand, even if the explanation is radically false. In this paper I defend moderate factivity, arguing that de Regt’s non-factive approach is incorrect on the grounds that it ignores the resources we have available to account for degrees of understanding. Using my Inferential Theory of Scientific Understanding, I show how degrees of understanding can be measured and, importantly, how mostly false explanations can lead to degrees of understanding so long as they contain some truths. The overarching idea is this: to understand P minimally requires that we can follow an explanation for P and make minimal inferences on its basis; to understand P maximally requires that we be able to construct correct explanations for P using a true theory T. Between the two extremes lies a continuum of understanding that can be graded entirely according to the number and kind of inferences being correctly drawn about P. Specifying the number of inferences is a practically impossible task. However, I introduce a taxonomy of inference that enables us at least to see how to measure the kinds of inference involved. This brings us some way towards a full account of degrees of understanding.