Complexity in Science

The following is a draft section from the introduction chapter of my dissertation proposal, which addresses the problem of reductionism and complexity in science.

It is agreed that all scientific endeavors begin with assumptions; perhaps that statement's placement at the head of this section is itself evidence of the claim. Kuhn (2012) used the term paradigm to describe a collection of such assumptions, or rather the "club" (p. 19) or network of researchers who agree on those assumptions and use them to guide and communicate their research. From Descartes to Popper, the philosophical debate over the paradigmatic assumptions that guide the study of nature has been boiling for centuries. Guba and Lincoln (1994), writing in the Handbook of Qualitative Research, insist that "Paradigm issues are crucial; no inquirer, we maintain, ought to go about the business of inquiry without being clear about just what paradigm informs and guides his or her approach" (p. 116). Yet in the minute paradigmatic slice of time in which this writing occurs, the only common ground is that science is defined by the assumptions of whatever paradigm the scientist builds their research upon (Kuhn, 2012).

One important and popular assumption in science today is the one that defines the reductionist paradigm (Laszlo, 1971; M'Pherson, 1974; Tuan, 2012; Wilson, 1998). The reductionist paradigm assumes that problems are best solved by dividing and dissecting the pertinent variables down to their essential components in order to determine the cause of the phenomenon in question. The strength of this view is that it makes simple what was once complex, makes clear what was once mysterious, and produces straightforward solutions to important problems. The treatment of physical trauma in an emergency room, the choice of the correct nail for framing a house, and the determination of wire gauge to compensate for DC voltage drop over long distances are all techniques made possible by reductionistic thinking. Each addresses a problem that has been solved through laborious and delicate analysis, and the resulting formulae produce predictable results within the problem domains from which they were derived.
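
The wire-gauge case illustrates the kind of deterministic calculation at which reductionism excels: a handful of discrete variables, a known formula, a single correct answer. The sketch below, a minimal illustration and not an electrician's reference, estimates the voltage drop over a two-conductor copper run using Ohm's law; the resistivity constant, the example circuit values, and the 3% drop tolerance are assumptions made for the example.

```python
# Minimal sketch (assumed values): estimate DC voltage drop over a copper run
# and check it against an assumed 3% tolerance. Illustrative only.

COPPER_RESISTIVITY = 1.68e-8  # ohm * metre, approximate value for copper at 20 C

def voltage_drop(current_a: float, length_m: float, wire_area_mm2: float) -> float:
    """Voltage drop over a two-conductor DC run (out and back, hence 2 * length)."""
    area_m2 = wire_area_mm2 * 1e-6  # convert mm^2 to m^2
    resistance_ohm = COPPER_RESISTIVITY * (2 * length_m) / area_m2
    return current_a * resistance_ohm

supply_v, current_a, run_m, area_mm2 = 12.0, 10.0, 30.0, 4.0
drop = voltage_drop(current_a, run_m, area_mm2)
print(f"Drop: {drop:.2f} V ({100 * drop / supply_v:.1f}% of a {supply_v} V supply)")
print("Within assumed 3% tolerance" if drop / supply_v <= 0.03 else "Choose a larger gauge")
```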

Reductionism has flourished as the dominant mode of science since the industrial revolution and is responsible for the explosive growth in technical engineering that so visibly impacts the life of Homo sapiens today (Harari, 2015). As of this writing, an estimated 62.9% of people in the world own a mobile phone (Statista, 2017), a device whose complexity exceeds, by untold orders of magnitude, what a single human mind can contain. This remarkable feat of engineering was accomplished through the innovative combination of small solutions to countless isolated problems such as battery storage capacity, computational power versus heat generation, sound quality over digital lines, wireless digital networking, GPS, camera quality, and, of course, transistor-based computational horsepower measured in billions of floating-point operations per second, or gigaflops, with supercomputing clusters now reaching a thousand million million operations per second, or petaflops (Eicker & Lippert, 2017). Each of these engineering problems was reduced to a manageable size in order to produce a deterministic solution accurate to 99.9% or better, and the combined result is an inanimate object so dynamic that its very creators cannot seem to find the limit of its novel uses.

But amidst the success there remain problems that cannot be solved, or even posed, within the reductionist paradigm. These are problems of complexity. Reductionism assumes, or at least strives for, a quantitative world view in which discrete variables are used to create predictable solutions to problems (Tuan, 2012; Terra & Passador, 2015; Khisty, 2006). As any meteorologist who has tried to predict the weather, or digital animator who has tried to model the movement of a human hair in the wind, knows: predictability is not always mother nature's strong point. There is evidence that problems of complexity behave more like an ocean (pun intended) of variable-particles flowing into one another simultaneously, producing results that cannot be easily quantified, let alone predicted (Gleick, 2011). A reductionist thinker faced with a problem of complexity may assert that all that is required is more computational power applied to more comprehensive analytical data. But it is possible that problems of complexity cannot be accurately modeled at all, because the period-buffer, the computational equivalent of a frame in a motion picture, is a conceptual device imposed on nature by reductionistic thinking; it did not arise from nature itself. The trajectory and impact location of the proverbial cue ball determining the resulting vector and acceleration of a second billiard ball may be a fine example for teaching the basic laws of physics to primary school students, but in nature countless particles interact with other particles simultaneously and without interval. Material and information flow through nature constantly and with an order of interdependence that no deterministic discrete model can capture, no matter how sophisticated.
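
To make the idea of the period-buffer concrete, the sketch below advances a toy physical system one fixed "frame" at a time, the way a step-wise simulation imposes discrete intervals on continuous nature. The falling-ball system, the 0.1-second step, and the function names are invented for illustration, not drawn from any cited source.

```python
# Minimal sketch of step-wise ("period-buffer") simulation: time is sliced into
# discrete frames, and each frame is computed from the previous one.
# The falling-ball system and the 0.1 s step size are illustrative assumptions.

GRAVITY = -9.81  # m/s^2
STEP = 0.1       # seconds per "frame" imposed on continuous time

def next_frame(height_m, velocity_ms):
    """Compute the next frame from the current one (a simple Euler step)."""
    velocity_ms += GRAVITY * STEP
    height_m += velocity_ms * STEP
    return height_m, velocity_ms

height, velocity = 100.0, 0.0
for frame in range(60):
    height, velocity = next_frame(height, velocity)
    if height <= 0:
        print(f"Ball reaches the ground at about frame {frame} (t = {(frame + 1) * STEP:.1f} s)")
        break
```

The point is not the physics but the framing: the program only ever knows the world one frame at a time, an interval chosen by the programmer rather than by nature.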

While reductionism works wonders on problems of engineering that can be reduced to simple causal rules, like the elementary billiard-table example, it can create problems of massive scale when it alone provides the dominant world view (Bell & Morse, 2005). It is generally agreed that clinical practice of any kind relies on adequate assessment of the problem before action can be taken to remedy it. When assessment approaches to health care are founded on the assumptions of reductionism, they have the potential to ignore complexity. Michael Kerr writes:

A physician can repeatedly prescribe a diuretic for a patient with leg edema, but fail to recognize that the patient is in chronic heart failure. As a consequence, the edema keeps recurring. A psychiatrist can hospitalize a schizophrenic patient, but not appreciate how the problematic relationship between the patient and his parents has contributed to the hospitalization. The patient may improve and be discharged, but be rehospitalized a few months later. A family therapist may treat two parents and their schizophrenic son, but not attach importance to the fact that the parents are emotionally cut off from their families of origin. The parents' cut off from the past undermines their ability to stop focusing on their son's problems; once again, the therapy will be ineffective. (Kerr & Bowen, 1988, p. 5)

The type of thinking that assumes a single, essential solution to every problem will miss variables beyond the scope of the assessment framework used. This assumption also carries a second and more fundamental implicit assumption, that each single effect has a single cause, a paradigmatic marker of 17th-century mechanistic thinking (Godfrey-Smith, 2013; Hamdani, Jetha, & Norman, 2011; Puhakka, 2015). In Russell's view (as cited in Tuan, 2012), a mechanistic "causal law is employed to infer the existence of one thing or event from the existence of another or a number of others. . . we can plausibly claim that when some earlier events are given, only one act or acts within some well-marked character are related to these earlier events" (p. 200). A facet of reductionism, mechanistic thinking is at the heart of the randomized controlled trial (RCT), a methodology for sifting through non-essential variables in order to isolate the essential one. The RCT is now the gold-standard research methodology in clinical practice (Puhakka, 2015) and relies on the statistical strength of a reliably direct, moderated, or mediated correlation between two variables.
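
As a toy illustration of the two-variable inference that an RCT formalizes, and not a description of any cited study, the sketch below simulates a two-arm trial in which randomization is meant to wash out the "non-essential" individual variation so that the treatment effect can be read off as a difference in group means. The sample size, true effect, and noise level are invented.

```python
# Minimal sketch of the RCT logic: randomize units into two arms, compare outcomes,
# and attribute the difference to the single manipulated variable.
# The sample size, true effect, and noise level are invented for illustration.
import random

random.seed(42)
N = 500            # participants per arm (assumed)
TRUE_EFFECT = 2.0  # assumed benefit of treatment on some outcome scale

def outcome(treated):
    baseline = random.gauss(10.0, 3.0)  # individual variation: the "non-essential" variables
    return baseline + (TRUE_EFFECT if treated else 0.0)

control = [outcome(False) for _ in range(N)]
treatment = [outcome(True) for _ in range(N)]

estimate = sum(treatment) / N - sum(control) / N
print(f"Estimated treatment effect: {estimate:.2f} (true effect used in the simulation: {TRUE_EFFECT})")
```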

However, models based on mechanistic thinking can fail quickly when applied to problems of complexity (Gibson & Wilson, 2013; Gleick, 2011). For example, researchers in the 1950s were tasked with predicting the weather. At first glance, the weather appears to have somewhat patterned phases of sunshine, rainfall, and wind. Once reliable relationships were found between meteorological variables (e.g., air pressure and precipitation), it was assumed that accurate prediction would require the analysis of these and other variables observed across an array of weather stations. These data could then be analyzed over time to tease out patterns in the relationships among the variables across the geographical area. A theory of the weather would be devised, a model constructed to forecast the next state from the current one, and the model refined until it reached an acceptable degree of accuracy. However, while the resulting theories were capable of accounting for past weather data, they were not capable of predicting future weather. Eventually it was decided that the problem was unsolvable, and weather prediction remained an intuitive art. In fact, "virtually all serious meteorologists," and indeed most "serious" scientists of the 1960s, rejected the prospect of predictive models and altogether mistrusted the computers that ran them (Gleick, 2011, p. 22).

But while precise prediction was not possible, it was eventually observed that there was some regularity in the way the weather changed. That is, a weather system possessed a pattern of ordered disorder at higher levels of analysis and over longer periods of time. This observation came when one researcher stumbled across the fact that small changes in the inputs of a simulation produced increasingly erratic changes in its outputs over time. The more times the outputs of one simulation run were fed back as the inputs of the next, the less the outputs resembled what would be expected from such a small change in the original inputs.¹ This imbalance between inputs and outputs points to the dramatic failing of pure mechanistic thinking: variables derived from reductionistic analysis may accurately account for past data and yet fail to account for future results. The relationships between the variables are simple, the computer algorithms are deterministic, and still the outputs become less predictable the longer the simulation is left to run (Gleick, 2011).

The researcher was Edward Lorenz, and his discovery led to the study of chaos. He observed that his computer simulations of the weather showed islands of coherence amidst the turbulence, and his discovery was that the two could exist together. This ordered disorder is most simply conceptualized by noting how a simple non-linear equation (one with a squared term, for instance) can produce predictable results when run once, yet grows increasingly unpredictable when its output is repeatedly fed back in as input, reflecting the requirement that the state of a weather system in one moment determines its state in the next. Complex problems like these seem creative in their unpredictability; they appear to be alive (Fleischman, 2012).
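
A standard way to see this feedback sensitivity, offered here as an assumed illustration rather than Lorenz's actual weather model, is the discrete logistic map: iterated on itself, it lets two runs that begin almost identically agree for a while and then diverge completely.

```python
# Minimal sketch of sensitive dependence on initial conditions using the discrete
# logistic map x_next = r * x * (1 - x). The growth parameter r = 4.0 and the
# tiny perturbation are illustrative choices, not Lorenz's actual equations.

def logistic_run(x0, r=4.0, steps=40):
    """Feed each output back in as the next input, one step at a time."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_run(0.200000)  # original inputs
b = logistic_run(0.200001)  # the same inputs, perturbed in the sixth decimal place

for step in (1, 10, 20, 30, 40):
    print(f"step {step:2d}: {a[step]:.4f} vs {b[step]:.4f} (difference {abs(a[step] - b[step]):.4f})")
```

With r = 4 the perturbation roughly doubles at each iteration, so after a few dozen steps the two trajectories share nothing but the deterministic rule that generated them.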

Lorenz had unwittingly made a discovery that would lead to the study of complexity and to terms like complex systems, dynamical systems, and chaos theory. The concept of complexity challenged the paradigmatic assumptions of the time and opened the door to new ways of looking at extremely complex problems like variations in population levels, financial economies, global climate, and, most recently, the functioning of the human body and mind (Siegel, 2012). Yet the assumptions of complexity have not permeated mainstream medical care in the United States, which remains mostly fixed in reductionistic thinking (Diez Roux, 2011; Kapp, Simones, DeBiasi, & Kravet, 2016; Peters, 2014; Trochim, Cabrera, Milstein, Gallagher, & Leischow, 2006). The reigning reductionistic assumptions drive, for example, research into essentially genetic causes of disease and pharmacological remedies for those causes, but they do not provide the flexibility to tackle problems of great complexity or reciprocal causation, such as the influence of interpersonal relationship anxiety on autoimmune inflammation, or the epigenetic relationships between genes and environment.

Reductionistic thinking of this kind looks for a direct, linear relationship between two variables, such as gene X and disease Y. Sometimes X is said to cause Y as mediated through Z, or X correlates with Y as moderated by Z. In any case, the causal relationship is assumed to run in one direction, from X to Y. This type of thinking, which this study will term linear thinking after Macy's (1991) term linear causality, works well for problems of engineering but fails for problems of complexity, where there are sometimes uncountable variables bound in incomprehensibly complex relationships.
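
To make the one-directional pattern explicit, the sketch below generates data in which a hypothetical X influences Y only through a mediator Z and then recovers each one-way path with ordinary least squares. The variable names, coefficients, and noise levels are invented for illustration.

```python
# Minimal sketch of linear, one-directional thinking: X -> Z -> Y.
# The path coefficients (0.8, 1.5) and noise levels are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=n)                        # e.g., a hypothetical "gene X" score
z = 0.8 * x + rng.normal(scale=0.3, size=n)   # mediator Z depends on X
y = 1.5 * z + rng.normal(scale=0.3, size=n)   # outcome Y depends only on Z

path_xz = np.polyfit(x, z, 1)[0]   # slope of Z on X
path_zy = np.polyfit(z, y, 1)[0]   # slope of Y on Z
total_xy = np.polyfit(x, y, 1)[0]  # slope of Y on X (the "X causes Y" summary)

print(f"X -> Z: {path_xz:.2f}, Z -> Y: {path_zy:.2f}, implied X -> Y: {path_xz * path_zy:.2f}")
print(f"Directly estimated X -> Y: {total_xy:.2f}")
# Every arrow points one way; nothing in these equations lets Y feed back into Z or X.
```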

Linear thinking cannot solve every problem, but it has certainly solved many, as the span of the industrial revolution attests (Frodeman, 2013). Therefore, in the search for a type of thinking better suited to problems of complexity, it may be useful to speculate about the function of linear thinking in order to understand what limitations must be surpassed. After all, linear thinking is so prevalent in sapiens and our "lower" animal cousins that we seem to be hard-wired for it.

Drawing on the descriptions above, we can assume that the function of linear thinking may be to execute a single, precisely defined goal. Complexity is managed in linear thinking by executing many precisely defined goals, as in the engineering of the mobile phone. Linear thinking often, and perhaps always, produces solutions conceptually organized in hierarchy, and hierarchy most visibly functions to optimize the execution of a precisely defined goal. Today hierarchy remains the most intuitive and popular way to organize a commercial kitchen, a military, or a government. Though there are attempts to organize government with less hierarchy or without it, such as the system of "checks and balances" in the United States, pure democracy, or pure socialism, any group will fall back on hierarchy given a crisis that is intense enough. Goal-directed focus becomes quite clear in the opera attendant's transition from the multi-dimensional heights of the imagination to the singular need for a toilet when nature calls, or from happy reflection after the show to a focused search for food when the stomach growls in protest. Seen in this way, simple linear solutions to important problems prove vital to survival.

Linear thinking is prevalent in modern medicine, where singular solutions for all sorts of ailments can at times appear magical, and where hospitals and their governing agencies are organized in hierarchy to administer these solutions via specialized providers working in their appropriately divided departments. But this type of thinking and the resulting style of organization have their limitations. The complex series of events set in motion by the 1964 US Surgeon General's report on smoking included wide-scale positive societal changes, such as increased taxation of tobacco at the state level and the Tobacco Master Settlement Agreement of 1998, but also many unforeseen negative effects, such as shifts in the marketing and covert lobbying strategies of the tobacco companies (Trochim, Cabrera, Milstein, Gallagher, & Leischow, 2006). The Affordable Care Act in the United States includes a focus on "population health" as the result of "collective impact" efforts across government agencies, yet those agencies lack the coordination needed to accomplish their goals (Kapp, Simones, DeBiasi, & Kravet, 2016). The British National Health Service (NHS) is charged with the enormously complex task of managing more than a million employees, which includes "a wider range of professions (in this case clinical, allied health and managerial) than any other sector of activity in the UK" (Cramp & Carson, 2009, p. 71). The current model of NHS management views each professional sector of the system as a tool used to engineer the organization, as if the professions and components were related like bricks in a wall. The reality, of course, is that a change in one area can greatly affect the others, and the result is the famously ineffective NHS model. Similarly, existing research on the transition to adulthood for youth with disabilities focuses on identifying the variables that influence the problem of healthcare transition, including "health, personal and environmental factors," but does not consider the complexity of the relationships between those variables, which limits and can even harm transition outcomes (Hamdani, Jetha, & Norman, 2011). Much like the weather simulations, these variables can account for past data, but the complexity of the problem of healthcare transition for this population makes coordinating them thoughtfully for future development a separate task altogether. Ignoring the relationships between the variables through a mechanistic, atheoretical perspective can lead to unexpected consequences.

A 2014 survey of Eastern Mediterranean health care officials (El-Jardali, Adam, Ataya, Jamal, & Jaafar, 2014) revealed that costliness, political inertia, and a lack of basic conceptual capacity in the individuals involved pose significant barriers to coordinating larger-scale, complex analysis of health care systems. A related problem was the lack of health care information systems capable of producing the amount of data required for more comprehensive systemic evaluation. The authors concluded that change within individual agencies was not sufficient to create the large-scale effects that government health care agencies are tasked with creating, and that political endorsement would be critical to coordinating the agencies into a cohesive whole. The consensus was that current ways of problem solving are more reactive than proactive, as explained by a policy-maker from Iraq: "The current thinking depends on reactively finding solutions to health systems problems and usually the mechanisms set are unclear and imprecise" (p. 402). Another policy-maker from Jordan reported that "Although steps [related to] evaluation are undertaken, they are mostly superficial and non-scientific" (p. 403). One effect of this kind of superficial strategy is that it often does not think the problem all the way through, from interventions to the effects of those interventions beyond the effects that are desired. A researcher from Palestine reported that,

At the national level, the health plan utilized steps 1 to 4 [on the design of interventions] but not in a systematic way. Stakeholders were convened and they brainstormed; however, they did not map and conceptualize effects of the intervention in the health system [in Palestine]. They also did not apply the ST [systems thinking] approach systematically to examine relationships across components of the health system (p. 403).

This Band-Aid style of thinking fails to examine the assumptions used in the decision-making process, and so it neither considers nor prepares for the wider ramifications of the interventions chosen.

Linear thinking in health care is not so different from the type of thinking that fuels heated political debates on industry and the environment. The World Commission on Environment and Development (WCED) defines sustainability as "Development that meets the needs of current generations without compromising the ability of future generations to meet their needs and aspirations" (WCED, 1987, as cited in Bell & Morse, 2005, p. 409). By this definition, the idea of "sustainability" may be linked to overcoming linear thinking: looking beyond the desired immediate effects of any particular strategy to a wider, or the widest possible, range of effects. That is to suggest that an "unsustainable" practice is one that insufficiently accounts for its effects. Whether one is pro-industry or pro-environment, the debate will stagnate until at least one side can escape the lure of linear thinking enough to produce evidence from multiple levels of analysis of the problems common to all sides. Industrialists and conservationists alike may benefit from a wider-scoped and longer-term perspective on their chosen context, one that includes an understanding of the relationships between their most valued resources. If a business is organized for high output but depends on high employee turnover to accommodate the associated grueling working conditions, then management may benefit from reorganizing to retain more employees and thereby eliminate the overhead of firing staff and resolving personnel conflicts. If conservationists argue for the importance of protecting a forest that is also critical to maintaining the ecological stability of the region, they may strengthen their argument by developing a more comprehensive understanding of the complex ecological relationships between the environment and the resources on which the imposing industry, and they themselves, may depend.

If the function of linear thinking is to solve specific problems, is visible in the pursuit of our most basic needs like hunger and safety, and is prevalent in the organization of human groups, then we may assume that we are primarily wired for linear thinking. It has been found that people tend to evaluate situations in terms of unidirectional cause and effect (i.e., linear thinking) even when exposed to evidence that the situations involve variables in complex relationship with one another (White, 2008). This tendency is attributed in part to a limit on the number of variables and relationships that the human mind can hold at once when analyzing a problem. The limitation may shape the assumptions of experimental psychologists as well as laypeople by supporting quasi-experimental causal judgements, for example about the factors related to forest ecosystems and climate change (White, 2015, 2017). In fact, people can typically hold only two or three relationships between variables in mind at once, which contributes to a sort of "naïve ecology" (White, 2008, p. 560) based on linear thinking.

If linear thinking functions to solve problems critical to survival, if it is limited in its capacity to handle problems of complexity, if better problem-solvers are those who can move beyond linear thinking (Ying, Kang, Hiong, & Lim, 2014), and if organisms tend overall to evolve toward greater complexity and adaptability (Kerr & Bowen, 1988), then it is possible to assume that moving beyond linear thinking is a basic evolutionary challenge. That is to say, linear thinking is necessary for survival, but it may also be an important barrier to overcome if we are to progress toward a way of life more in line with the natural laws and environments to which we, and all organisms, are subject.

References

Bell, S., & Morse, S. (2005, August). Holism and Understanding Sustainability. Systemic Practice and Action Research, 18(4), 409-426.

Cramp, D. G., & Carson, E. R. (2009). Systems thinking, complexity and managerial decision-making: an analytical review. Health Services Management Research, 22(2), 71-80.

Diez Roux, A. V. (2011, September). Complex Systems Thinking and Current Impasses in Health Disparities Research. American Journal of Public Health, 101(9), 1627-1634.

Eicker, N., & Lippert, T. (2017). Dynamic Self-assembling Petaflop Scale Clusters. Proceedings of the International Conference on High Performance Compilation, Computing and Communications (pp. 1-5). New York: Association for Computing Machinery.

El-Jardali, F., Adam, T., Ataya, N., Jamal, D., & Jaafar, M. (2014, November). Constraints to applying systems thinking concepts in health systems: A regional perspective from surveying stakeholders in Eastern Mediterranean countries. International Journal of Health Policy and Management, 3(7), 399-407.

Fleischman, P. R. (2012). Karma and Chaos. Onalaska, WA: Pariyatti Publishing.

Frodeman, R. (2013). Philosophy dedisciplined. Synthese, 190, 1917–1936.

Gibson, W. T., & Wilson, W. G. (2013). Individual-based chaos: Extensions of the discrete logistic model. Journal of Theoretical Biology, 339(21), 84-92.

Gleick, J. (2011). Chaos: making a new science. New York, NY: Open Road Integrated Media.

Godfrey-Smith, P. (2013). Philosophy of Biology. Princeton, NJ: Princeton University Press.

Hamdani, Y., Jetha, A., & Norman, C. (2011). Systems thinking perspectives applied to healthcare transition for youth with disabilities: a paradigm shift for practice, policy and research. Child: care, health and development, 37(6), 806-814.

Harari, Y. N. (2015). Sapiens: A brief history of humankind. New York, NY: HarperCollins Publishers.

Kapp, J. M., Simones, E. J., DeBiasi, A., & Kravet, S. J. (2016). A Conceptual Framework for a Systems Thinking Approach to US Population Health. Systems Research and Behavioral Science.

Kerr, M., & Bowen, M. (1988). Family Evaluation: The Role of the Family as an Emotional Unit that Governs Individual Behavior and Development. New York, NY: W. W. Norton & Company.

Khisty, C. J. (2006). Meditations on Systems Thinking, Spiritual Systems, and Deep Ecology. Systemic Practice and Action Research, 19, 295-307.

Kuhn, T. S. (2012). The Structure of Scientific Revolutions. Chicago, IL: University of Chicago Press.

Macy, J. (1991). Mutual Causality in Buddhism and General Systems Theory: The Dharma of Natural Systems. Albany, NY: State University of New York Press.

Peters, D. H. (2014). The application of systems thinking in health: why use systems thinking? Health Research Policy and Systems, 12(51), 1-6.

Puhakka, K. (2015). Encountering the psychological research paradigm: How Buddhist practice has fared in the most recent phase of its Western migration. (W. V. E. Shonin, Ed.) Springer.

Siegel, D. J. (2012). The Developing Mind: How relationships and the brain shape who we are. New York, NY: The Guilford Press.

Statista. (2017). Number of mobile phone users worldwide from 2013 to 2019 (in billions). Retrieved June 11, 2017, from https://www.statista.com/statistics/274774/forecast-of-mobile-phone-users-worldwide/

Terra, L. A., & Passador, J. L. (2015). A Phenomenological Approach to the Study of Social Systems. Systemic Practice and Action Research, 28, 613-627.

Trochim, W. M., Cabrera, D. A., Milstein, B., Gallagher, R. S., & Leischow, S. J. (2006, March). Practical Challenges of Systems Thinking and Modeling in Public Health. American Journal of Public Health, 96(3), 538-546.

Tuan, N.-T. (2012). Rethinking the Scope of Science in Dealing with Complexity. Systems Practice and Action Research, 25, 195-207.

White, P. A. (2008). Beliefs About Interactions between Factors in the Natural Environment: A Causal Network Study. Applied Cognitive Psychology, 22, 559-572.

Ying, L. C., Kang, F. K., Hiong, K. G., & Lim, E. L. (2014). Is Problem Solving and Systems Thinking Related? A Case Study in a Malaysian University. Social Sciences and Humanities, 22(3), 345-363.

 

1. This is the above-mentioned step-wise method of computation used in simulations whenever something needs to be modeled fairly smoothly through time, similar to how a cartoon animator draws one frame at a time on a fresh piece of paper. The simulation program is written to run just once per step, just as the animator draws the character on only one frame at a time. Weather prediction also relies on this sort of step-wise simulation, since the simulation of one moment depends on knowing the result of the simulation of the previous moment.
