Tuesday, November 27, 2007

A new edited collection


Review: 'Why Do They Look Like That? Three-dimensional Models in Science', Anna Maerker, Social Studies of Science 2007; 37: 961-965. http://sss.sagepub.com/cgi/reprint/37/6/961?etoc

Monday, September 24, 2007

A sociological model by which to orientate ethics, science and society

The BBC has published a report, Safeguarding impartiality in the 21st century, in which it sets out to ‘define a set of principles of impartiality in a forward-looking way’ and to ‘identify a list of broad implications for the BBC’. The report makes mention of global warming.

"… the growth of inter-party agreement at Westminster and unofficial cross-party alliances – whether on the invasion of Iraq, the funding of higher education, the detention of terrorist suspects, or global warming – complicates the impartiality equation. There are many issues where to hear ‘both sides of the case’ is not enough: there are many more shades of opinion to consider. Indeed, the principal linkage of impartiality to ‘matters of party political or industrial controversy’ has a very dated feel to it: there are many other areas where controversy is now much fiercer" (page 34)


The relation between the science of global warming and the idea of impartiality, the latter at the very heart of public broadcasting, has been vexing not only the BBC but also commentators in other media outlets.


A lack of general knowledge in science is frequently bemoaned in the news. In this case, however, no-one appears to be pointing out that a lack of social scientific knowledge is hampering understanding and progress towards a more fruitful discussion about the role of the BBC (and others) in the debate about climate change.


Sociology offers us the following framework by which to orientate ethics, science and society (for a good introduction see Runciman 1999, The Social Animal).

1) A ‘value-neutral explanation’ is an account of why the world is as it is, one that can be assessed for its validity.
2) ‘Value judgements’ are evaluative statements about whether a given state of the world is a good or bad thing.
3) The decision about what to explain (‘value-relevant choices’) is a function of the interests, values and circumstances of the commentator (journalist or otherwise).

The BBC clearly has to consider all three points as it goes about its business. Its own reports, and those of climate scientists, must be assessed as to (1). The BBC has a set of values, as we all do (2). And the interests and values of the viewers, the corporation, the members of the BBC Trust, etc., along with other factors such as the amount of resources available, will clearly feed into (3).


In the storm of rhetoric and comment about impartiality, a key to clarity is not to deny having values, but to strive to be as clear as possible about what they are when articulating an explanation. For the sake of integrity, this should be done alongside a consideration of how value judgements and value-relevant choices shape the account offered.

Friday, August 31, 2007

Announcement of Events

Modelling Matter
St Chad's College, Durham University, 26-28 March 2008

Sponsored by Durham University's Institute of Advanced Study (UK)

This interdisciplinary two-day symposium will investigate the different strategies and methods of modelling and representing across the physical and life sciences, and the ways in which models relate to their subject matter and to the empirical evidence. Speakers include: Prof Nancy Cartwright (London School of Economics and University of California, San Diego); Dr Matthew D Eddy (Durham University & Caltech); Prof Ronald Giere (University of Minnesota).

For further information see: http://www.dur.ac.uk/m.d.eddy/Modelling_Matter.html

Also:

Professor Brigitte Nerlich (University of Nottingham) in dialogue with Professor Andreas Musolff (MLAC): Metaphors as models of mediation between science and the public: newspaper reporting of the 2001 foot and mouth outbreak
11 February, 17:15-19:15, Institute of Advanced Study, Cosin's Hall, Durham University
This is part of the Metaphors as Models Interdisciplinary Dialogues series.

See: http://www.dur.ac.uk/ias/events/thematic/

Friday, August 03, 2007

Analytical and FEA Models in the Interstate Highway Bridge Collapse

Was there sufficient discussion of uncertainty in the 2001 University of Minnesota report analyzing the bridge? The general conclusion of the report was that failure was not likely, and an analysis on pages 11-14 argued that the idealized models used for evaluating bridges are "inherently conservative." Given that the report identified existing fatigue cracks, should there have been more discussion of possible unknown causes of failure? Was the report overconfident?

At first pass, I don't see a way in which the report could have offered a more nuanced discussion of uncertainty. The historical evidence cited in the report seems to lend considerable credibility to the assumption that the model calculations were conservative. The report noted the existing fatigue cracks, but drew on a good deal of empirical testing in its analysis of whether failure was imminent. If there are faults in the analysis, or great inadequacies in the discussion of uncertainty, I don't yet see them.
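For readers unfamiliar with the engineering shorthand, here is a minimal sketch in Python of what "inherently conservative" means in this context. All numbers are hypothetical illustrations, not figures from the 2001 report.

# An idealized bridge model typically ignores secondary load paths that help
# carry load in the real structure, so it overestimates the stress on a member.
measured_stress_mpa = 80.0    # hypothetical stress measured in a field load test
model_stress_mpa = 110.0      # hypothetical stress predicted by the idealized model
allowable_stress_mpa = 150.0  # hypothetical allowable (design) stress

# Ratio > 1 means the model predicts more stress than is actually observed,
# i.e. a member that passes the model check has extra, unquantified margin.
conservatism = model_stress_mpa / measured_stress_mpa
demand_capacity = model_stress_mpa / allowable_stress_mpa
print(f"model/measured stress ratio: {conservatism:.2f}")  # 1.38: conservative
print(f"demand/capacity ratio: {demand_capacity:.2f}")     # 0.73: member passes

The catch, of course, is that this logic says nothing about failure modes the model leaves out entirely, such as growth of the fatigue cracks the report itself identified.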

Tuesday, February 20, 2007

Discussion: Simulations and Computational Nanotechnology

Ann Johnson’s article ‘Institutions for Simulations: The Case of Computational Nanotechnology’ discusses computational nanotechnology as a simulation science. Johnson challenges two assumptions, or ‘myths’, about simulations: first, that simulations are a cheap alternative to experiments, and second, that simulations have a close connection to theory. Through an empirical analysis of US-based computational nanotechnology sites, she shows how human, financial and computational resources are needed to perform the technology. She argues that “science makes itself” in the intertwined process of constructing simulations and manufacturing empirical data. Her study thus offers an interesting perspective on the relationship between technological opportunities and scientific practice.

The article was published in a special issue on Models and Simulations in Scientific Practice, Science Studies 1/2006, eds. T. Knuuttila, M. Merz and E. Mattila. http://www.sciencestudies.fi/v19n1

Wednesday, January 31, 2007

MacKenzie's Certainty Trough, Nuclear Missiles and "Science Abuse"


The MRG once again outclassed its rival UPERG at the second meeting of the semester. On deck was Donald MacKenzie’s “Nuclear Missile Testing and the Social Construction of Accuracy.”

The article is an interesting case study of how uncertainty calculations for missiles became a chess piece in a political game, in which decidedly value-laden policy positions were justified in terms of technical arguments. For example, 1964 presidential candidate Barry Goldwater was a long-time skeptic of missiles, which had never been rigorously tested, and made arguments against their supposed accuracy an issue in his campaign. MacKenzie offers a variety of examples of how political factors drove efforts to establish the accuracy of the missiles.

Many know MacKenzie’s article for its “certainty trough.” MacKenzie’s point is that people who are intimately connected with knowledge production are often less sure of their knowledge claims than those who rely on that knowledge indirectly. Further, in the case of missiles, those who were alienated from ICBMs (i.e. supporters of increased spending on bombers as the primary US deterrent, like Goldwater) felt that the uncertainty was much greater still.

The continuum of the certainty trough could be more coherent. Certainty is not being measured against one single independent variable (such as familiarity or intimacy with the knowledge-generation process). Though a certain type of proximity is surely still in the mix, advocacy positions for or against missiles on grounds of uncertainty are shaped by political configurations (e.g. which branch of the military you work in, whether or not you’re running for president, your views on how to “win” the Cold War, etc.). So the observation that uncertainty has as much to do with politics as with scientific results is still valid, but the simple graph is a bit misleading, as the sketch below suggests.
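To make that point concrete, here is a minimal Python sketch of the one-variable graph at issue. The curve shape and labels are my own illustration, not MacKenzie's data; the point is simply that the diagram forces very different actors onto a single axis.

import numpy as np
import matplotlib.pyplot as plt

# One axis: social distance from knowledge production.
x = np.linspace(0.0, 1.0, 200)          # 0 = knowledge producers, 1 = alienated critics
uncertainty = 0.2 + 2.4 * (x - 0.5)**2  # parabola: high at both ends, low in the middle

plt.plot(x, uncertainty)
plt.xticks([0.0, 0.5, 1.0],
           ["producers of\nknowledge", "committed users\n(the trough)", "alienated\ncritics"])
plt.xlabel("social distance from knowledge production (a single variable)")
plt.ylabel("perceived uncertainty")
plt.title("The certainty trough, reduced to one axis (illustrative)")
plt.show()

On this picture a bomber-spending advocate and a disarmament campaigner land on the same point of the x-axis for entirely different political reasons, which is exactly the incoherence complained of above.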

I personally felt very unsympathetic to some of the later critics of missile accuracy, which made me question some of my sympathies with critics of general circulation models (GCMs). As I see it, there can be a variety of skeptical positions on climate models.

A) There are the extremists who view climate models as senseless garbage, perhaps constructed at the whim of conspiracy theorists.
B) There are more moderate skeptics of various flavors. Some think the predictive accuracy claimed for long-term projections is unattainable; others think we can't properly understand the complex atmospheric interactions that could cause warming; and so on.
C) Some, including myself, are skeptical that science can directly determine what policies should be adopted. (I might also be skeptical in some B-type ways; I don't believe in a sort of Laplace's demon, but then neither do all but the most extreme modelers.)

With nuclear missiles, there seem to have been two types of skeptics who believed:
D) The missiles wouldn't effectively serve as "counter-force" to knock out enemy silos.
E) The missiles’ accuracy was overstated or difficult to clarify.

The contexts differ significantly: with the models of nuclear missiles, there were explicit political goals. Policymakers wanted missiles that would be able to knock out enemy nukes in their bunkers. Given MacKenzie’s history, it seems as though post-1960 there was a reasonable level of confidence that the missiles would be able to fulfill their task. Because the military leadership had specific goals, the science could ‘found’ the policy. E-critics may have been right in certain respects, but D wasn’t effectively challenged (I could be convinced otherwise on this thesis, but I do think that eventually D became effectively true in an epistemic sense). Many of the critics of the missiles were intelligent political actors, but their actions were exclusively political and “an abuse of science.”

With climate models, there is no comparably explicit goal for their use. Even if climate models could make perfect (or at least consensus) predictions about climate change, there is no clear sense of what should be done in consequence. Thus C-type criticism of GCMs is likely justified. I also think that climate models deal with much more complex systems than ICBMs (although ICBMs are themselves highly complex), so I don't feel inclined to say that B-type skepticism is necessarily an abuse of science.

Complaining about the abuse of science is often pointless and sometimes stupid (as is perhaps the case with the Republican War on Science thesis). Many science and technology studies folk buy into the idea that science can't found policy.

But perhaps the more refined idea is that mature science can found policy in politically uncontroversial areas. So if the missile calculations were based on mature science and addressed an issue that eventually became uncontroversial, then the later critics become ‘science abusers.’ GCMs are not based on as mature a science and are politically controversial, so it's hard to understand what abuse could really mean here.

Friday, January 26, 2007

Computers are never wrong?

Here's an interesting discussion that started on the blog Real Climate (comment #193), was then picked up by Dan Hughes on Prometheus, and was then summarized and critiqued on Truth or Truthiness. The controversial statement is:
a computer model is nothing more than an embodiment of 200 years of independently tested pieces of the physical theory. If you're going to dismiss any result that requires a computer to help with the calculations, you're going to have to dismiss most of 20th/21st century science and technology.

The originator of this nonsense is Raymond T. Pierrehumbert, a lead author on the IPCC Third Assessment Report, among other things. Margo's post on Truth or Truthiness picks up on the recently trendy discussion of overselling science that has reverberated through the blogosphere in the weeks since the American Geophysical Union annual meeting in December. She takes issue with the first half of the statement, but it is also worth pointing out that, even if the first half were true, the second half of the statement would still be absolutely laughable.

Let's add a couple of critiques to what others have already pointed out.

1. Pierrehumbert's remark is in response to statements about the IPCC Third Assessment findings. He suggests that they are not "viewpoints", as was asserted in the original comment, but instead the result of "a computer model." Well, no. Actually it's a whole bunch of computer models that don't always agree (thus the assertion that the results are more of a "viewpoint"); see the toy sketch after this list.

2. Even if this hypothetical "single computer model" existed and were absolutely correct in its representation of the causal mechanisms underlying atmospheric processes, we would have no reason to accept its result as "right" in the sense Pierrehumbert is using. The result is only as "real" or "right" as the data (observations) that the model is based on. And these data do not necessarily have anything to do with "200 years of independently tested pieces of the physical theory."

3. And finally, Pierrehumbert seems to be invoking some mysterious additive property of facts, in which we can simply add a bunch of "facts" together, and what comes out the other end? More facts, of course!
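As promised under point 1, here is a toy Python sketch of why a multi-model result is better read as a summary of a spread than as the output of "a computer model." The numbers are invented for illustration and are not IPCC output.

# Hypothetical per-model estimates of some quantity of interest,
# standing in for an ensemble of climate models that disagree.
model_estimates = [2.1, 2.6, 3.0, 3.4, 4.1, 4.5]

mean = sum(model_estimates) / len(model_estimates)
spread = max(model_estimates) - min(model_estimates)
print(f"ensemble mean: {mean:.1f}")      # the headline figure
print(f"ensemble spread: {spread:.1f}")  # the disagreement the headline hides

# When the spread is this wide relative to the mean, calling the headline
# figure a "viewpoint" rather than a model output is not unreasonable.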

It is this kind of foolishness that makes climate change advocates look less like scientists and more like fanatics.