Does Integrative Pluralism solve the problems associated with the scientific explanation of nature? — UCL Philosophy of Natural Science (2019)

Dylan Kawende FRSA
Jul 2, 2021

Introduction

A hurdle for the scientific explanation of nature is its complexity. My central claim is that Integrative Pluralism (IP) resolves some of the problems associated with the scientific explanation of nature in two ways — the first being parasitic on the second. First, IP integrates different theories, models and explanations that are historically contingent and partial solutions to complex scientific questions. Second, IP makes a warranted distinction between (1) the theoretical integration of phenomena that is unlikely to be a unified model and (2) the application of a pluralistic model to explain local phenomena. Unless we accept hierarchical classification as a required mode of epistemological inquiry, which I grant is implausible, IP is a credible position.

Sandra Mitchell

However, reductionists would view Mitchell’s emphasis on contingency and her focus on biological phenomena as two flaws, since they typically propose that scientists — physicists especially — aim to make universal and ‘fundamental’ claims about nature that hold independent of contingency. In Sections 2 and 3, I argue that Mitchell’s account of IP overcomes these objections.

1. Critical exposition

The following is a critical exposition of Mitchell’s account and related literature.

Complexity of nature

Both Mitchell and Herbert Simon (1991) agree that nature is complex. Mitchell understands complexity as possessing a diverse set of ‘contingent’ and ‘evolved properties’ (2002, p67). By contingent, Mitchell means ‘propositions that are neither true under every possible valuation (i.e. tautologies) nor false under every possible valuation (i.e. contradictions)’ (Gaonkar, 2001). Typically, contingency is contrasted with the ‘necessary’ (or necessarily true) and is synonymous with the ‘probable’ (Sloane, 2010). Simon understands complexity as comprising ‘a large number of parts that interact in a non-simple way’ (1991, p468).

The complexity of nature presents two challenges: one ontic and one epistemic. The ontic challenge is that the phenomena we are trying to investigate are complex: nature has many parts and interactions that are difficult to access or represent, hence the need for idealised models. The epistemic challenge is that the systems of knowledge we use to investigate these phenomena are also complex. Even idealised models are difficult to construct given the tension between veracity and explanatory power: idealised models focus on a limited set of properties in isolation by either ‘stripping away’ from or ‘distorting’ the target system, which brings their epistemic utility into question (Cartwright, 1989, Ch. 5; Mitchell, 2004, p81). Unifying or integrating different partial accounts to explain phenomena (Mitchell treats these two processes independently) is difficult, if not impossible, to do universally. Mitchell (2002) argues that this is because the different explanations are influenced by their historical contingencies and require different levels (p63) or ‘scales’ of abstraction that are themselves constrained by our pragmatic and epistemic aims (Levin, 1992, p1950).

Pluralism vs reductionism

There have been other attempts to explain nature with all of its complexity, e.g. Nancy Cartwright’s ‘patchwork pluralism’, Paul Sherman’s ‘levels of analysis’, Paul Feyerabend’s ‘epistemological anarchism’, and reductionism (Mitchell, 2002, pp 64–68). Some of these attempts serve as direct foils to Mitchell’s IP, reductionism in particular. As with any school of thought, reductionism operates on a spectrum. Nevertheless, all reductionists classify nature hierarchically and advocate intertranslatability, i.e. that biology can be explained in terms of physics. On this view, there is one fundamental level — somewhere in the realm of physics — that carries all of the explanatory weight and from which all other levels can be derived (2004, p83).

Reductionism is not to be conflated with monism. Not all monists are reductionists, since one could hold a view that permits multiple non-hierarchical levels (and is therefore anti-reductionist) but maintains that there is just one structure to nature. That is, monists commonly submit that we inhabit one world. IP does not seem to dispute this, since Mitchell accepts that however complex a system and however many contributing causes participate in emergent phenomena, there is only ‘one causal history that, in fact, has generated a phenomenon to be explained’ (2002, p62). She also argues that there are limits to the different explanations that we can plausibly entertain (p59) and hence rejects Feyerabend’s epistemological anarchism (2004, p82). The contention, in my view, lies in (1) whether this one structure is intrinsically complex in a non-hierarchic fashion or intrinsically ordered in a necessarily hierarchical fashion, and (2) whether the privileged explanation of this one structure is an integrated one. Part of this essay will show how scientific practice problematizes both metaphysical pictures and will justify why IP succeeds.

Mitchell argues that the idealised and partial character of our representations suggests that there will never be a hierarchic account that can do the work of explaining complex phenomena. This is because, in the spirit of Cartwright’s work, IP takes seriously the partial character of ceteris paribus explanations. Consequently, Mitchell accommodates William C. Wimsatt’s asymptotic approach to generate less idealised representations as a means to truer theories (2002, p65).

Division of labour in social insects

In the case of the division of labour in honeybees, Wimsatt’s asymptotic approach requires us to recognise that the three self-organisation models (genetic diversity for threshold response, the foraging-for-work algorithm, and the learning algorithm) are compatible since they describe ‘non-overlapping ideal worlds’ (Mitchell, 2002, p64). By combining all three, we can better explain the complex phenomenon of the division of labour. But Mitchell wants to go further, first, by arguing that this coupling strategy is but one way to integrate different models (2002, p66). Second, she argues that the desire for a ‘true’ theoretical unification that resolves all scholarly dispute, as advocated by reductionists, is not feasible given the contingent nature of complex biological phenomena (p67).
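To make the first of these models concrete, here is a minimal, hypothetical sketch of a response-threshold model of task allocation. It is my illustration under simplified assumptions, not Mitchell’s formalism or any published model: each bee inherits a threshold for a task stimulus, genetic diversity spreads those thresholds out, and workers are recruited gradually as colony-level demand rises.

```python
import random

def simulate(num_bees=100, steps=20, demand_per_step=5.0, work_per_bee=0.1):
    # Genetic diversity for threshold response: each bee inherits a different
    # threshold, so the colony recruits workers gradually as demand rises.
    thresholds = [random.uniform(1.0, 10.0) for _ in range(num_bees)]
    stimulus = 0.0
    for t in range(steps):
        stimulus += demand_per_step  # unmet demand raises the task stimulus
        workers = sum(1 for th in thresholds if stimulus >= th)
        stimulus = max(0.0, stimulus - work_per_bee * workers)
        print(f"step {t:2d}: stimulus={stimulus:6.2f}, workers={workers}")

if __name__ == "__main__":
    simulate()
```

On this toy picture, a colony of genetically identical bees (identical thresholds) would respond in an all-or-nothing fashion, whereas varied thresholds yield the graded division of labour the model is meant to capture.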

Regarding the integrated explanation of the division of labour in honeybees, Mitchell explains that this type of integration is still local. This is because the same type of division of labour observed in ants would require a different explanation, since ant colonies contain ‘negligible genetic diversity compared to bee colonies’ (ibid). Third, like Sherman (1988), she argues that different scientists require different levels of abstraction and analysis to explain phenomena in their domain. But she argues that Sherman’s account errs in compartmentalising the different levels, since the discoveries made on one level will often have an impact on our understanding of the other levels (p63). In other words, stratification is important but only up to a point. We want to avoid isolationism, since the levels interact with each other in dynamic ways. This explains (1) the need for integration to explain concrete phenomena, since they will contain causal features from all levels, and (2) her rejection of the reductionist impulse to confer primacy on the ‘fundamental’ level of physics, since there are reasons to doubt its legitimacy.

Is there a fundamental level?

There are many reasons to be pessimistic about the existence of a fundamental level, let alone a fundamental level that explains all other levels. First, take an argument from the history of science. Max Born declared in 1928 (after seeing the Dirac equation) that ‘physics as we know it will be over in 6 months’ (Schaffer, 2003, p503). Also, Gottfried Wilhelm Leibniz, in a letter to Simon Foucher, quoted in Schaffer (2003, p498), wrote: ‘Thus, I believe that there is no part of matter which is not — I do not say divisible — but actually divided; and consequently, the least particle ought to be considered as a world full of an infinity of different creatures.’ Both quotes illustrate the evolving complexity of physical phenomena and challenge the primacy of physics: the first shows how confident pronouncements of a final physics have been overturned, the second that matter may be divided without end, and both suggest that the theoretical entities physicists posit to explain phenomena are contingent. For example, atoms, which were once considered elementary particles, are now treated as complex systems by nuclear physicists. As Simon (1991) explains, ‘whole stars, or even galaxies, can be regarded as elementary subsystems’ depending on the epistemic aims of an astrophysicist (p468).

Second, in quantum mechanics, the uncertainty principle states that you cannot measure a quantum system without disturbing it: its position and velocity cannot both be determined with arbitrary precision (Hazen et al., 2009, p84). For example, the Large Hadron Collider tests the predictions of various theories in particle physics by colliding hadrons with other subatomic particles, collisions that in turn affect the original position and velocity of the particles under investigation (Humphries, 1986, p4). If our biological theories were reducible to the level of physics, then we would expect greater certainty from the laws of physics. That is, if quantum theory is meant to be, in principle, a universal theory that gives an exhaustive account of physical reality, it should be applicable, in principle, to all physical systems, including those in biology. However, the probabilistic nature of quantum phenomena shows otherwise. Instead, quantum phenomena are described and measured in terms of probabilities and wave functions (p85), not universal laws that hold independent of contingency.
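A minimal formal statement of the trade-off this paragraph appeals to is the standard textbook uncertainty relation (my gloss, not a formula drawn from Hazen et al.):

$$\Delta x \, \Delta p \;\geq\; \frac{\hbar}{2},$$

where $\Delta x$ and $\Delta p$ are the standard deviations of position and momentum for a given quantum state: shrinking one bound forces the other to grow.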

But how does historical contingency fit in? As Cartwright et al (1984) point out, ad verum theoretical laws in quantum mechanics have to be ‘fixed’ (p475) — or ‘fudged’ in Lipton’s terms (1990, p54) — by, for example, eliminating infinite self-energies or by retaining only certain terms in expansions. In the process of describing concrete phenomena, there is the possibility that any model of quantum phenomena may only represent one subsystem in a more complex set of quantum interactions. Therefore any explanation of quantum phenomena is contingent on what aspects of the phenomena are being represented along with the probability and temporal priority conferred on the putative causes.

This is further challenged by the fact that measuring the position of a subatomic particle with increasing precision, so that the margin of error decreases, means the uncertainty in velocity must become greater to compensate (Hazen et al., 2009, p84). Recall the trade-off between veracity and explanatory power. We can see this in the case of applying Schrödinger’s equation to concrete problem-situations like exponential decay. Various ad hoc moves are made to get the equation to yield correct predictions (Cartwright, 1984, p475), but these predictions only describe what happens other things being equal. They do not describe what would happen were the situation different, and since probabilities are in constant flux, this raises doubts about the literality of these laws. Ultimately, none of these examples supports derivability or causal completeness in the reductionist sense; each instead requires an appeal to contingency.
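One textbook shortcut of this kind, offered here only as an illustration of the sort of move in question rather than as Cartwright’s own example, is to recover exponential decay from Schrödinger’s equation by assigning the decaying state a complex energy:

$$i\hbar \, \frac{\partial \psi}{\partial t} = \hat{H}\psi, \qquad E \to E_0 - \tfrac{i}{2}\Gamma \;\Rightarrow\; |\psi(t)|^2 \propto e^{-\Gamma t/\hbar}.$$

A complex energy is not something a Hermitian Hamiltonian can supply, so the ‘law’ is bent to fit the phenomenon it is meant to explain.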

Other issues associated with hierarchical classification include crosscutting and information overload. However, I will focus on the issue of reductionism, as this has served as the greatest foil to pluralistic explanations of nature. Although this paper is theory-led, I will compare the division of labour among social insects and measurement in quantum mechanics to support the view that IP resolves some of the problems associated with the scientific explanation of nature and that it is warranted in physics as well as in biology. Giving a full defence of how IP applies in all branches of quantum mechanics falls beyond the scope of this paper and my expertise. But there are important parallels to be made.

2. My Critique

My first argument is that IP integrates different explanations that are historically contingent and partial solutions to complex scientific questions. How would integration occur in quantum mechanics? One way Mitchell might propose to do this is by appealing to ‘local theoretical unification’. On this view, the best way to explain quantum phenomena is by developing models in which ‘a number of features of a complex process are jointly modelled’. Additionally, the ‘scope of unity and corresponding degree of abstraction will be settled by a combination of pragmatic and ontological constraints’ (Mitchell, 2004, p88).

Support for this claim comes from wave-particle duality. Quantum mechanics holds that everything — particles, energy, the rate of electron spin — comes in discrete units (Hazen et al, 2009, p81). However, wave-particle duality problematizes this axiom, since light behaves like a wave (a non-discrete unit) under some circumstances and like a particle under others (p87). The behaviour of electrons and light indicates that in the quantum realm, our familiar categories of ‘wave’ and ‘particle’ do not hold. Unlike the reductionist, who would force your hand to choose between the two models on the basis that we require one single and stable explanation, IP offers a way of joining the compatible features of the two models of light that respects light’s complexity and contingency.
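The standard relations that bind the two pictures together, stated here as textbook facts rather than anything in Mitchell’s account, make the point compactly:

$$E = h\nu, \qquad p = \frac{h}{\lambda},$$

where the particle quantities (energy $E$, momentum $p$) are fixed by the wave quantities (frequency $\nu$, wavelength $\lambda$). Neither vocabulary is dispensable, which is precisely the opening IP exploits.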

Bas van Fraassen

Just as Mitchell argues that there is a multitude of ways to explain nature’s solutions to the problems of survival and reproduction (2002, p58), IP would view the distinction in classical physics between wave and particle as empirically inadequate. The natural response to the wave-particle question is simply ‘none of the above’ according to IP. While this implies that we cannot visualize a photon or represent it in a non-idealised model, it forces us to create new and integrated categories and models that empirically ‘save the phenomena’ (Van Fraassen, 1977, p629). This would partly explain the move from wave-particle duality to complementarity and Bohr’s insistence that elements from both models were ‘equally valid and equally needed’ for a better explanation of the phenomena (Hilgevoord et al., 2016).

My second argument is that IP makes a warranted distinction between (1) the theoretical integration of phenomena that is unlikely to be a unified model and (2) the application of a pluralistic model to explain local phenomena. Let me start by unpacking what the two components mean in Mitchell’s account. The distinction hinges on a need to decide between different explanations and the difference between integration and unification (Mitchell, 2002, p66). (1) refers to the fact that, at the theoretical level, different idealised models (e.g. the three self-organisation models) that track different causes do not directly refer to the same ideal systems (p64). That is, they can be integrated because they are partial and not competing. They are unlikely to be unified because of the diversity of contingencies and presently unknown contributing causes that may influence the ‘variable paths’ phenomena can take (p67). Unification possesses a sense of finality. Integration does not.

(2) refers to the fact that while unification is unlikely, explanation of concrete local phenomena is still possible by virtue of the varying degrees of integration of different (sometimes mutually exclusive) idealised models. For example, Mitchell accepts that the genetic self-organisation model of division of labour would not always apply in ants (p67). But this is not a problem for IP, because IP is not in the business of making universal claims about nature that remain stable across time. It accepts the evolving complexity of nature and anticipates presently unknown causes. In quantum mechanics, further support for Mitchell’s distinction comes from the fact that part of Bohr’s insistence on retaining the merits of the wave and particle models rested on the premise that neither picture refers in a literal one-to-one correspondence to physical reality (Hilgevoord and Uffink, 2016), a correspondence that even moderate scientific realist and reductionist accounts typically assume.

Niels Bohr

Instead, Bohr granted that the applicability of these models was contingent on the ‘experimental context’ (ibid). This supports Mitchell’s account in two ways. First, it demonstrates that in scientific practice the application of a pluralistic model does not require a narrow account of so-called ‘fundamentals’ to explain phenomena; that would be to endorse a naive form of empiricism and scientific realism. Like Mitchell, Bohr recognised that the characteristics of theoretical entities are best explained when we forgo isomorphic representations predicated on one-to-one relations, as these overestimate our ability to access the inner nature of the corresponding phenomena. Integration avoids this trap by appealing to contingency, with the understanding that atomic architecture can be arranged in (virtually) an infinite number of ways (Hazen et al, 2009, p116). Second, Bohr’s example shows that to get a more complete picture of the causal history of the phenomena, we must open the lid on the context of discovery by interrogating the circumstances in which the experiment was performed. We do not observe or explain patterns straightforwardly, and this is a product of nature’s evolving complexity and our evolving knowledge systems.

3. Objections and replies

First, reductionists might claim that there is no real motive for integration in physics, since quantum phenomena are less influenced by historical contingency than biological phenomena. Second, Mitchell’s use of contingency might seem like a catch-all that does not clearly prescribe how scientific explanation ought to occur. I agree that it is not clear in Mitchell’s account how far back in history an explanation must traverse to capture the salient causal features that give rise to phenomena before descending into infinite regress. Perhaps it takes only one non-integrated and non-contingent explanation to adequately explain complex phenomena. At first glance, this argument is appealing for critics seeking epistemic confidence in simple and elegant theories that unify a vast range of phenomena.

However, the main weakness of this objection is that while there may be instances in which non-integration and non-contingency are prima facie acceptable, the problem of induction will continually warrant the need to go back, redescribe phenomena, and challenge where the explanatory buck is assumed to stop. Not only is nature complex, but it is seldom uniform, which makes explanation an iterative process. In the Humean tradition, I argue that one would have to be omniscient to establish necessary causes. Second, Mitchell does not purport to have the final say on how integration will take place in all instances; that likely falls within the remit of those immersed in the pursuit. But Mitchell makes a strong case for integration as a gateway to better explanations of nature.

Conclusion

In conclusion, I have argued that IP explains nature through its appeals to historical contingency, pessimism about a fundamental level, non-hierarchy, and the coupling of idealised models across all levels of analysis. Contrary to reductionism, I have shown that since biological and quantum phenomena are not stable, any attempt to explain them will rely on ceteris paribus models that require integration. More research is required to determine the applicability of IP to versions of monism that view the cosmos as possessing a fundamental level, with metaphysical explanations deriving from the One (Schaffer, 2010).

Word count: 3000

References

  1. Cartwright, N. (1989). Nature’s capacities and their measurement. Oxford: Clarendon.
  2. Cartwright, N., & McMullin, E. (1984). How the laws of physics lie. American Journal of Physics. Retrieved from: https://aapt.scitation.org/doi/pdf/10.1119/1.13641
  3. Gaonkar, D. (2001). Contingency and Probability. Encyclopedia of Rhetoric. Ed. Thomas O. Sloane.
  4. Hazen, R. M., & Trefil, J. (2009). Science matters: Achieving scientific literacy. Anchor.
  5. Hilgevoord, J., & Uffink, J. (2016). “The Uncertainty Principle”, The Stanford Encyclopedia of Philosophy. Retrieved from https://plato.stanford.edu/entries/qt-uncertainty/
  6. Humphries, S. (1986). Principles of charged particle acceleration. New York: John Wiley and Sons.
  7. Levin, S. A. (1992). The problem of pattern and scale in ecology: the Robert H. MacArthur award lecture. Ecology, 73(6), 1943–1967.
  8. Lipton, P. (1990). Prediction and prejudice. International Studies in the Philosophy of Science, 4(1), 51–65.
  9. Mitchell, S. D., & Dietrich, M. R. (2006). Integration without unification: An argument for pluralism in the biological sciences. The American Naturalist, 168(S6), S73-S79.
  10. Mitchell, S. D. (2004). Why integrative pluralism? Emergence: Complexity and Organization, 6(1/2), 81.
  11. Mitchell, S. D. (2002). Integrative pluralism. Biology and Philosophy, 17(1), 55–70.
  12. Schaffer, J. (2010). Monism: The priority of the whole. The Philosophical Review, 119(1), 31–76. Retrieved from https://www.jstor.org/stable/pdf/41684359.pdf
  13. Schaffer, J. (2003). Is there a fundamental level?. Noûs, 37(3), 498–517. Retrieved from https://onlinelibrary.wiley.com/doi/pdf/10.1111/1468-0068.00448
  14. Sloane, T. (2010). Encyclopedia of rhetoric. Oxford: Oxford University Press. Retrieved from http://www.oxfordreference.com/view/10.1093/acref/9780195125955.001.0001/acref-9780195125955-e-50?rskey=LNMMob&result=1
  15. Simon, H. A. (1991). The architecture of complexity. In Facets of systems science (pp. 457–476). Springer, Boston, MA.
  16. Sherman, P. W. (1988). The levels of analysis. Animal Behaviour.
  17. Van Fraassen, B. C. (1977). To save the phenomena. The Journal of Philosophy, 73(18), 623–632.
