
Conferences

Knowing and Understanding Through Computer Simulations

From Thursday 16 June 2011, 08:00, to Saturday 18 June 2011, 18:00

Organisers: Anouk Barberousse, Marion Vorms
Research axis: Philosophy of physics, scientific knowledge, unity of science
Research programme: The computational turn in physics
Presentation: Computer simulations are widely used in contemporary science. They lie at the heart of climate science, for the atmosphere is an especially complex system whose evolution is impossible to understand without computers. Neither empirical inquiry nor mathematical reasoning through theoretical models is sufficient to study the highly non-linear behavior of our planet's climate.

The aim of this conference is to address the epistemological problems arising from the massive use of computers in today's science and to inquire into the nature and status of simulation-based knowledge. According to the traditional picture, the two legitimate sources of scientific knowledge are experimentation and reasoning. One is justified in believing a proposition only if one has either an empirical or an a priori warrant for this belief. Computer simulations, prima facie, provide neither. The algorithms used in implementing theoretical models are more often than not black boxes for their users, who have no choice but to trust those who have written the programs. The relationship between inputs and outputs of simulations is usually epistemically “opaque”, as “most steps in the process are not open to direct inspection and verification” (Humphreys, 2004). This results in a loss of understanding. Moreover, climate science relies on simulations run on powerful workstations and on mainframe computers, involving large, international teams of researchers, so that understanding is distributed among many scientists.

Acknowledging the centrality of computer simulations in contemporary science raises questions about central epistemological notions such as rationality, knowledge, justification, and understanding. In order to answer these questions, the conference will bring together scholars from different disciplines: philosophers of science, epistemologists, sociologists of science, computer scientists, as well as climate scientists. They will address the following problems.

The first set of problems concerns epistemic opacity broadly conceived: to what extent is a scientist justified in believing the output of a simulation when she is not able to follow all the steps of the computation? The nature of computer-based mathematical proofs has already been investigated by epistemologists (Kripke, Tymoczko, Burge); their results may be extended to cases where the computer's output bears on empirical knowledge. What is the nature of the knowledge so obtained? To what extent is it empirical? What role do a priori elements play?

The second set of problems deals with the collaborative aspects of computer-based science. To be sure, science in general is a collaborative enterprise, and no individual scientist could master the state of the art in any domain by her own means. However, computer simulations require a novel form of scientific collaboration. In order to design and launch a computer simulation, collaboration is required not only among scientists in the same domain, or peers, but between physicists (sometimes from different sub-domains), algorithm writers, and computer scientists. No single scientist has control over all aspects of the simulation. We need a detailed study of the various functions of trust in such large-scale collaborative knowledge construction. What is the status of the knowledge so obtained? What kinds of control set-ups can be devised?

The third set of problems concerns the nature of expertise. The nature of computer-based knowledge in climate science is not only an epistemological problem; it also has important sociological and political dimensions. While scientific experts play an increasingly important role in our society, the value and use of experts and expertise are being challenged. Are scientific experts who rely on computer simulations a special case, as climate skepticism may suggest? Does this have any impact on the trust laypeople should, or should not, confer on these scientists' expertise? How does experts' judgement guide decision-making in such highly uncertain situations? And what are the implications of such deep uncertainty for decision rules such as the precautionary principle?

Programme

Scientific committee: Jacques Dubucs (IHPST), Paul Egré (Institut Jean Nicod), and Cyrille Imbert (Archives Poincaré)

Thursday 16 June

École Normale Supérieure, 45 rue d’Ulm, 75005 Paris, Salle Dussane

9h-9h30 Opening talk
9h30-10h45 Paul Humphreys (University of Virginia)
The Simulation Game
10h45-12h00 Martin Kusch (University of Vienna)
Computational Science: Non-Human or Social Epistemology

14h-15h15 Tyler Burge (University of California, Los Angeles)
Epistemic Warrant: Humans and Computers
15h15-16h30 Stéphanie Ruphy (Université de Provence)
Do computer simulations constitute a new style of scientific reasoning?
16h45-18h Gloria Origgi (Institut Jean Nicod)
Simulation and Collective Intelligence

Friday 17 June

École Normale Supérieure, 45 rue d’Ulm, 75005 Paris, Salle Dussane

9h30-10h45 Margaret Morrison (University of Toronto)
What Can We Learn From Computer Simulations?
10h45-12h Eric Winsberg (University of South Florida)
Reflections on the Verification and Validation of Computer Simulations

14h-15h15 Hervé Le Treut (Institut Pierre-Simon Laplace, Laboratoire de Météorologie Dynamique)
Multiplicity of climatic models and collective expertise
15h15-16h30 Seth Bullock (University of Southampton)
In Praise of Epistemic Insecurity: Could ‘Worse’ Simulations Fuel a ‘Better’ Science-Policy Interface?
16h45-18h Roman Frigg (London School of Economics)
Decision-Making with Climate Models

Saturday 18 June

Institut d’Histoire et de Philosophie des Sciences et des Techniques, main room, 13 rue du Four, 75006 Paris

9h-10h15 Amy Dahan & Hélène Guillemot (Centre Alexandre Koyré)
Holism and Globalism of climate models, through the study of some climatologist practices
10h30-11h45 Julian Reiss (Erasmus University, Rotterdam)
Computer Simulations, Adequacy-for-Purpose, External Validity and All That
11h45-13h00 Wendy Parker (Ohio University)
Simulation results as evidence

Abstracts

Seth Bullock (University of Southampton)
In Praise of Epistemic Insecurity: Could “Worse” Simulations Fuel a “Better” Science-Policy Interface?
The 21st century brings a raft of significant systemic challenges: global finance, global climate, global technology, global security, global governance, global sustainability, etc. Meeting these challenges will involve understanding and managing complex systems that comprise many interacting parts. Computer simulation is emerging as the key scientific tool for dealing with such systems. As a consequence, simulation science is increasingly political and has the potential to be critical to our future well-being and quality of life. To what extent do our current simulation modelling practices measure up to the responsibility that they must now shoulder?
The assumed gold standard for simulation modelling is that our models should achieve a (pseudo)-empirical status that allows their results to be understood as secure forecasts, or valid predictions about their real-world target systems. Where simulations have real-world political significance, it is expected that their predictions and forecasts can inform the decisions of policy makers, stakeholders, etc. Such models are analogous to the simulated wind tunnels within which new designs of cars or planes are trialled before deciding whether they should be constructed and sold in the real world. One might usefully discuss the extent to which we can expect simulations of markets, ecosystems, etc., to reach this gold standard. Here, though, the problem is approached obliquely by asking why we would want a science-policy interface structured in this way: i.e., one in which the flow from policy to science defines “challenges” and “impact” while the flow in the other direction takes the form of predictions and forecasts. The merits of an alternative science-policy interface at which simulation models with *no* empirical validity are built and explored in order to generate *insights* into our understanding of target systems will be considered. In distinguishing between the two styles of interface, I will draw on an analysis of Charles Babbage's simulation model of miracles and its subsequent impact on attempts to build machines capable of automatic economic reasoning.
Issues to be addressed will include: How might a science-policy interface be restructured to allow insights to pass across it? To what extent are accountability and democracy impacted by the structure of the science-policy interface? Could willfully insecure complex systems simulation offer a paradigm within which an effective reassessment of impact-driven science takes place?

Tyler Burge (University of California, Los Angeles)
Epistemic Warrant: Humans and Computers
I compare warrants for believing the results of computer runs with warrants for believing what other people tell us and with warrants for believing in the results of some of our own unconscious inferences. I distinguish two types of warrant–warrant by reason (justification) and warrant without reason (entitlement). Warrant without reason depends on exercising a naturally reliable cognitive competence, such as inference or perceptual belief formation. Using this distinction, I consider warrant for relying on computers in pure mathematics and empirical science. In the case of empirical science, I center on genetic algorithms, commenting both on the “non-rational”, random elements in the way their results are produced and on similarities and differences with inductive inference in human beings. I emphasize that our understanding of our own inferences is often partial, and that warranted belief–indeed, knowledge–is compatible with partial understanding. So the partial understanding that derives from relying on computers is an extension of a phenomenon that has always been part of empirical science, and even the more complex parts of mathematics.
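
As a purely illustrative aside (not part of Burge's abstract), the random elements he mentions can be made concrete with a toy genetic algorithm: chance enters both when candidate solutions are mutated and when parents are selected. The task (maximising the number of 1s in a bitstring) and all parameters below are arbitrary choices for the sketch.

import random

# Illustrative toy genetic algorithm (not from the talk): evolve bitstrings
# towards all ones. Chance enters in two places, marked below: random mutation
# and the random choice of competitors during selection.

GENOME_LEN = 20
POP_SIZE = 30
GENERATIONS = 60
MUTATION_RATE = 0.02

def fitness(bits):
    return sum(bits)  # count of 1s; higher is fitter

def mutate(bits):
    # Random element 1: each bit flips with a small probability.
    return [b ^ 1 if random.random() < MUTATION_RATE else b for b in bits]

def select(population):
    # Random element 2: tournament selection between two randomly drawn individuals.
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population = [mutate(select(population)) for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print(f"best fitness after {GENERATIONS} generations: {fitness(best)}/{GENOME_LEN}")

Run twice, this sketch typically follows different evolutionary paths yet reports similar best fitness, which gives a small-scale sense of the partly random way such results are produced.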

Amy Dahan & Hélène Guillemot (Centre Alexandre Koyré)
Holism and Globalism of climate models, through the study of some climatologist practices
In recent years, complex numerical models have given rise to work emphasizing their “epistemic opacity” (Humphreys 2009) and their “holism” (modularity and entrenchment making “analytical understanding hard or even impossible to achieve”, Lenhard and Winsberg, 2010).
Starting from research practices and following a bottom-up epistemology, we will try to shed light on how climatologists approach, solve, sidestep, and even exploit the difficulties linked with the holism of their models.
We will distinguish different aspects of the holism of climate models, related either to physical constraints on the global atmospheric system (conservation laws), to interactions between different processes within the atmosphere, or to heterogeneous couplings and parameterizations. We will study how scientists faced with this complexity deploy several strategies to understand their models, evaluate their outputs, explain the spread of results, or estimate the role of a single process: using hierarchies of models of various complexities, inter-comparison of models, comparison with observational data, and combinations of all these tools. Finally, we will suggest a comparison with holism in other scientific domains (e.g. economics) to underline the specificities of climate models.

Roman Frigg (London School of Economics)
Decision-Making with Climate Models
Climate models are widely used to make forecasts, which provide the basis for far-reaching policy decisions. However, upon closer examination it turns out that climate models do not actually warrant the probabilistic forecasts that are commonly derived from them: due to their intrinsic imperfection and nonlinearity, they cannot be used to calculate decision-relevant probabilities. Although the IPCC has recognised this fact, no research into other methods of prediction has been carried out. It is the aim of an ongoing project to address this issue by first investigating how and why exactly probabilistic predictions break down in climate models, and then developing alternative methods to get around the problem. The proposal is that probabilistic reasoning should be given up altogether. Models should be used to calculate non-probabilistic odds for certain events, and these should be used to guide decision making. We introduce both the problem and the proposal and illustrate it with a simple example.
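
As a rough illustration only (not drawn from the talk), the sensitivity appealed to here can be seen with the logistic map standing in for a climate model: tiny errors in either the initial state or the model's parameter grow under nonlinear dynamics until forecasts from the imperfect runs no longer track the "true" trajectory. The equation and all numbers below are arbitrary stand-ins.

# Illustrative sketch (not from the talk): in a nonlinear model, tiny errors in
# either the initial state or the model itself grow until forecasts diverge.
# The logistic map stands in for a climate model; all numbers are arbitrary.

def simulate(x0, r, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

STEPS = 40
truth = simulate(0.3, r=3.9, steps=STEPS)                  # the "true" system
shifted_state = simulate(0.3 + 1e-6, r=3.9, steps=STEPS)   # tiny initial-condition error
shifted_model = simulate(0.3, r=3.9001, steps=STEPS)       # tiny structural (parameter) error

for t in (10, 20, 30, 40):
    print(f"step {t:2d}: truth={truth[t]:.4f}  "
          f"ic-error run={shifted_state[t]:.4f}  "
          f"model-error run={shifted_model[t]:.4f}")

By the later time steps the three runs typically bear no resemblance to one another, which is the intuitive reason why probabilities derived from an imperfect nonlinear model deserve scrutiny.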

Paul Humphreys (University of Virginia)
The Simulation Game
Computer simulations of cognitive processes have been the vehicle for much of the work in artificial intelligence and one of the earliest morals drawn in that area was that how something is done can affect what has been done. Secondly, the focus on representations in that area has led to an attitude that the kind of epistemic opacity involved in other sorts of scientific simulations is not a problem for AI if the appropriate level of representation remains unaffected. Thirdly, while simulated intelligence is intelligence, and a simulated market is a market, a simulated lithium atom is not a lithium atom. I shall try to connect these three issues as a way of illuminating the role of epistemic opacity and of materiality in scientific simulations and also of determining what kind of knowledge we can lay claim to as a result of running a simulation.

Martin Kusch (University of Vienna)
Computational Science: Non-Human or Social Epistemology
In a number of places Paul Humphreys has emphasised that computational science calls for a new kind of thinking about knowledge, that is, the recognition that "scientific epistemology is no longer human epistemology". This paper will distinguish between different readings of this idea and seek to evaluate them in light of earlier discussions about computers in the sociology of scientific knowledge and social epistemology.

Hervé Le Treut (Institut Pierre-Simon Laplace, Laboratoire de Météorologie Dynamique)
Multiplicity of climatic models and collective expertise
Climate projections for the coming decades or centuries rely on a multiplicity of models, developed in different laboratories and based on various approaches to the same underlying physical, chemical or biochemical principles and processes. International programmes have fostered a common approach to model validation, at different time scales. These programmes have created strong emulation between climate modelling groups and strong links between observation-based analysis and modelling. It is possible to identify, at one end, areas where models have a true capacity to describe complex features of the real world, and at the other end, a persisting divergence between models for many aspects of future climate projections. Whereas the development of these model intercomparisons reflects a pragmatic approach to the problems, further progress will also require a deeper understanding of their meaning.

Margaret Morrison (University of Toronto)
What Can We Learn From Computer Simulations?
As a philosophical exercise, not much turns on whether we classify computer simulations (CS) as experiments, a form of modeling, or some type of hybrid activity that straddles both camps. Since they typically exhibit features from all three categories, it might seem a waste of time debating the distinctions required for a fine-grained classification. What is important, however, is how we characterise the outputs of computer simulation, since their epistemic status is the crucial feature in deciding what kind of knowledge computer simulations deliver. In this latter context the relationship to experiment and modelling takes on a new dimension, one that shows just how the results generated by some CS can qualify as a new type of experimental knowledge, rather than a hybrid of traditional forms of modelling and experiment. In order to document the relationship to models and experiments and illustrate both the novelty and epistemic dimension of CS results, I want to focus on particular methodological techniques, specifically the features of verification and validation used to assess the status of CS outputs. Verification and validation are typically described as addressing two separate issues: in the case of verification the question is whether the equations have been solved correctly, while validation is concerned with whether the correct equations have been solved. In verification the relationship between the simulation and the real world or target system is not an issue, while in validation it is the issue. The question that naturally arises is how validation can be carried out if there is insufficient experimental data about the target system. A careful examination of the various processes involved in validation, especially the role of validation metrics, shows us how to answer this question, but also illustrates how CS can deliver epistemically significant results in an experimentally novel way, and what those results tell us about the target system.
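
A schematic sketch, not taken from the talk, may help fix the distinction at issue: verification compares a solver's output against the exact solution of the very equations being solved, whereas validation compares the simulation against data about the target system (here a synthetic stand-in value, since the example is made up for illustration). The decay model, rates and step counts are arbitrary.

import math

# Schematic illustration (not from the talk) of verification vs validation.
# Model: exponential decay dy/dt = -k*y, solved with the explicit Euler method.

def euler_decay(y0, k, dt, steps):
    y = y0
    for _ in range(steps):
        y = y + dt * (-k * y)
    return y

Y0, K_MODEL, T = 1.0, 0.5, 2.0

# Verification: is the solver solving the chosen equations correctly?
# Compare the numerical result with the exact solution of the *same* equation.
exact = Y0 * math.exp(-K_MODEL * T)
for steps in (10, 100, 1000):
    num = euler_decay(Y0, K_MODEL, dt=T / steps, steps=steps)
    print(f"verification, {steps:4d} steps: |numerical - exact| = {abs(num - exact):.2e}")

# Validation: are these the right equations for the target system?
# A synthetic "observation" stands in for measured data; in this toy setup the
# target actually decays at a different rate, so even a well-verified solver
# disagrees with the data.
K_TARGET = 0.6
observation = Y0 * math.exp(-K_TARGET * T)  # synthetic stand-in for a measurement
num = euler_decay(Y0, K_MODEL, dt=T / 1000, steps=1000)
print(f"validation: |simulation - observation| = {abs(num - observation):.2e}")

Refining the numerical resolution shrinks the verification error but leaves the validation error untouched, since the latter reflects the choice of equations rather than how well they are solved.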

Gloria Origgi (Institut Jean Nicod)
Simulation and Collective Intelligence
In recent years, there has been a lot of discussion revolving around a phenomenon called The Wisdom of Crowds (Surowiecki, 2004), that is, the aggregation of lay judgements that yields an epistemically superior result when compared with expert judgement. Processes of collective intelligence are at work in search engines, futures markets, focus groups, etc. In this paper I would like to address the question of the epistemic status of collective intelligence processes. Are collective intelligence decision processes a case of simulation? And what do they simulate? What are the mechanisms of trust and division of cognitive labour that make these processes reliable? Is the aggregation of vast amounts of data or opinion based on a “method”, or are we facing the challenge of a new science without method?
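
Purely as an illustration (not material from the talk), the aggregation effect behind the "wisdom of crowds" can be reproduced in a few lines: the average of many independent, unbiased but noisy guesses lands much closer to the true value than a typical individual guess does. The numbers below are arbitrary.

import random
import statistics

# Illustrative sketch: averaging many independent noisy guesses ("the crowd")
# tends to beat a typical individual guess. All numbers are arbitrary.

random.seed(0)
TRUE_VALUE = 100.0
N_GUESSERS = 1000

# Each guesser is unbiased but noisy.
guesses = [random.gauss(TRUE_VALUE, 25.0) for _ in range(N_GUESSERS)]

crowd_estimate = statistics.mean(guesses)
typical_individual_error = statistics.mean(abs(g - TRUE_VALUE) for g in guesses)

print(f"crowd error:              {abs(crowd_estimate - TRUE_VALUE):.2f}")
print(f"typical individual error: {typical_individual_error:.2f}")

The effect depends on the guesses being independent and roughly unbiased; correlated or systematically biased guessers would undercut it, which is one reason the mechanisms of trust and division of cognitive labour matter.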

Wendy Parker (Ohio University)
Simulation results as evidence
I will discuss three uses of computer simulation models in the study of weather and climate: for predicting near-term weather conditions, for projecting long-term climatic conditions, and for estimating the state of the atmosphere at a given (past) time. When, if ever, do results from simulation studies undertaken for these purposes provide good empirical evidence for (relatively precise) claims about the real atmosphere and/or climate system? To what extent is the epistemic opacity of computer simulations a problem here? In addressing these questions, I will explore an analogy between computer simulation models and traditional scientific instruments; of particular interest will be how we come to believe that results obtained from traditional scientific instruments provide good empirical evidence for claims about real-world target systems. I will also illustrate how the boundary between observational data and modeling results is blurred in the study of weather and climate.

Julian Reiss (Erasmus University, Rotterdam)
Computer Simulations, Adequacy-for-Purpose, External Validity and All That

To the extent that computer simulations are performed with the goal of learning about certain physical or social systems of interest, the epistemology of computer simulations is that of ordinary experimentation. Like ordinary experiments, computer simulations achieve their goal through the controlled variation of (sets of) factors of interest; and like ordinary experiments, computer simulations are almost always performed on one system, called a ‘model’, in order to make inferences about (a property of) a related system, called a ‘target’.
One area where thinking about simulations in analogy to ordinary experiments is fruitful is that of (simulation) model validation. Much of the literature on validation treats the issue as though models as such could be validated. In fact, models can only be said to be more or less valid for a given purpose (cf. Parker 2009). This point is well understood in the experimentalist literature: evaluation of the validity of a model (e.g., a model organism) is always relative to some specific hypothesis (e.g., about the toxicity of a substance). If the model is valid in this respect, the hypothesis established on the model is said to be ‘externally valid’.
This suggests that strategies to ascertain external validity that have been developed in the experimentalist literature may be applicable to simulations. By examining three major contributions to this literature – Guala’s analogy-based account, Cartwright’s capacities-based account and Steel’s mechanism-based account – I argue that the proposed strategies indeed apply, but most fruitfully only relative to the purpose of explanation. If the relevant purpose is prediction – which is the norm for many simulations, for example in climate science – these strategies are likely to fail and alternatives should be sought.

Stéphanie Ruphy (Université de Provence)
Do computer simulations constitute a new style of scientific reasoning?
Philosophers of science readily acknowledge today the pervasive and central role played by computer simulations in many disciplines. Less consensual is the claim that computer simulations constitute a distinctively new set of scientific practices raising new philosophical issues. Stöckler (2000) for instance contends that they do not amount to a revolution in methodology, and Frigg and Reiss (2009) argue that the problems they raise are only variants of already-discussed problems pertaining to models, experiments and thought-experiments. Humphreys (2009), on the other hand, defends the novelty of issues raised by distinctive features of computer simulations such as their epistemic opacity, and Winsberg (2001) emphasizes the specificity of the ways simulations get justified.
My aim is to develop a different perspective on this question of novelty, by investigating whether computer simulations constitute a new style of scientific reasoning, in Hacking’s sense of the notion. To count as a style of scientific reasoning, a set of modes of scientific inquiry must accomplish three things: i/ it must introduce new types of entities (such as objects of study, propositions or explanations); ii/ it must be “self-authenticating”, that is, it must define its own criteria of validity and objectivity; iii/ it must develop its own techniques of stabilization.
The focus of my inquiry will be computer simulations of complex physical systems. Asking whether they constitute an emerging, new style of scientific reasoning will necessitate i/ investigating the ontological status of the “parallel worlds” they create and how these parallel worlds articulate with the real-world systems they simulate; ii/ analyzing the conditions of possibility for truth (or falsehood) of statements about the world derived from a simulation, in order to see whether they are dependent on a specific procedure of reasoning; iii/ investigating the sources of the stability of computer simulations when new data come in.
For each of these three lines of inquiry, I will determine to what extent the answers are specific to computer simulations (in contrast, in particular, with models and experiments), and I will illustrate my claims with case studies in the astrophysical sciences.

Eric Winsberg (University of South Florida)
Reflections on the Verification and Validation of Computer Simulations
It is common practice in the simulation community to distinguish between verification and validation. Verification is said to be the process of determining the extent to which the solutions generated by the computer simulation model approximate the solutions to the original mathematical model equations. And validation is said to be the process of determining the extent to which a computer simulation model is an adequate representation of a target system. In this paper I argue that this distinction is not as clean as it is often thought to be. I also argue that this has important implications for how we think about the nature of inductive evidence when it comes to simulations.
