Intersection of Philosophy and Computer Modeling


Gottfried Wilhelm Leibniz once said in his work on the Art of Discovery that the only way to rectify our reasonings is to make them as tangible as those of the Mathematicians, so that we can find our errors at a glance, and, when there are disputes among persons, we can simply say: let us calculate, without further ado, to see who is right. Despite Leibniz’s hopes for a single computational method that would serve as a universal key to discovery, computational philosophy today is characterized by several distinct computational approaches to a variety of philosophical questions. Particular questions and particular areas have simply seemed ripe for various models, methodologies, or techniques. Both attempts and results are therefore scattered across a range of different areas. In what follows I offer a review of several philosophers who have explored the intersection of philosophy and computer modeling. Through their innovative use of computational methods, these philosophers have expanded our understanding of philosophy and its role in complex systems. Overall, this chapter highlights the potential of computational modeling as a tool for philosophers to gain new insights and understanding in their investigations.

2.1. Patrick Grim

In his work Modeling Epistemology: Examples and Analysis in Computational Philosophy of Science (2019), Grim opined that philosophical investigation is standardly marked by four characteristics: it is abstract, rather than concrete; it operates in terms of logical argument, rather than empirical data; its goal is understanding at the most general level, rather than specific prediction or retrodiction; and it is often normative, rather than descriptive. According to Grim, analytic philosophers in particular can be expected to emphasize these characteristics. He suggests that the primary change that can be accommodated, and indeed is necessary, is a modification of the second characteristic. He argues that there is no reason why the questions we ask and the answers we seek should not be approached using both empirical data and logical argument. He highlights that what many of us would like to see are models in the philosophy of science expanded to incorporate empirical data, which would admittedly make them less purely philosophical. He contends that if they ended up being more informative, in whatever way they proved informative, that would seem a small cost. He asserts that we have for too long been hampered by the borders of our disciplines. He proposes that new techniques and methodologies, including computational techniques, need not come with any proprietary investment in established disciplinary boundaries.

2.2. Rory Smead and Patrick Forber

In their work Computer Modeling in Philosophy: Signals and Spites in Fluctuating Population (2019), Smead and Forber were of the view that philosophy, as a discipline, has a rich recent history of using formal methods: mathematical logic to analyze language and ontology, or probability theory to formalize the relationship between theory and evidence. They opined that computational methods are a recent addition to the philosopher’s formal toolkit. They believed that these methods harness algorithms to explore systems in ways that outstrip current analytical approaches, such as equilibrium analysis or mathematical proof. In particular, they believed that computational approaches have become essential for analyzing evolutionary systems. They highlight that taking a dynamic approach to these systems reveals novel behavior and permits the analysis of more complex systems. It is their perspective that, with evolution, computational methods permit the investigation of more complex arrays of strategies competing in a population, and the introduction of stochastic elements (e.g., drift, varying rates of mutation) into the systems. They believed that any formal model of evolution must make idealizations, and these face interminable tradeoffs, yet computational methods allow for the exploration of models of increasing complexity. They acknowledge that computational methods, of course, carry risks. They note that increasing the complexity of the model and relying on algorithms for analysis raises the risk that formal artifacts might drive otherwise interesting dynamical behavior. Thus, they proposed that the construction of these complex models, and the deployment of computational methods, must be handled with sufficient care to guard against this risk.
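The stochastic evolutionary dynamics they describe can be illustrated with a minimal sketch. This is not Smead and Forber's model; it is a generic Wright–Fisher-style simulation, with all names and parameters (`wright_fisher`, `fitness`, `mutation_rate`) invented for illustration, showing how fitness-proportional sampling in a finite population introduces drift and how mutation enters as a per-offspring coin flip.

```python
import random

def wright_fisher(pop_size=100, generations=200, fitness=(1.0, 1.1),
                  mutation_rate=0.01, seed=0):
    """Two-strategy evolutionary simulation with drift and mutation.

    Strategies 0 and 1 reproduce in proportion to their fitness; the
    finite population introduces drift, and each offspring flips
    strategy with probability `mutation_rate`. Returns the frequency
    of strategy 1 in each generation.
    """
    rng = random.Random(seed)
    pop = [0] * (pop_size // 2) + [1] * (pop_size - pop_size // 2)
    history = []
    for _ in range(generations):
        # Fitness-proportional sampling of the next generation.
        weights = [fitness[s] for s in pop]
        offspring = rng.choices(pop, weights=weights, k=pop_size)
        # Each offspring mutates (flips strategy) with small probability.
        pop = [1 - s if rng.random() < mutation_rate else s
               for s in offspring]
        history.append(sum(pop) / pop_size)
    return history

freqs = wright_fisher()
```

Re-running with different seeds exhibits the drift they mention, and setting `mutation_rate=0` recovers a simpler model that can fix permanently at either strategy, one of the idealization tradeoffs the authors flag.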

2.3. F. LeRon Shults

In his work Computational Modeling in Philosophy: Computational Modeling in Philosophy of Religion (2019), Shults asked how philosophy of religion might be impacted by developments in computational modeling and social simulation. He went on to describe some of the content and context biases that have shaped the traditional philosophy of religion, providing examples of computational models that illustrate the explanatory power of conceptually clear and empirically validated causal architectures informed by the bio-cultural sciences. He also outlined some of the material implications of these developments for broader metaphysical and meta-ethical discussions in philosophy. According to him, computer modeling and simulation can contribute to the reformation of the philosophy of religion in at least three ways: by facilitating conceptual clarity about the role of biases in the emergence and maintenance of phenomena commonly deemed “religious,” by supplying tools that enhance our capacity to link philosophical analysis and synthesis to empirical data in the psychological and social sciences, and by providing material insights for metaphysical hypotheses and meta-ethical proposals that rely solely on immanent resources.

2.4. Patrick Grim, Joshua Kavner, Lloyd Shatkin, & Manjari Trivedi

In their work Philosophy of Science, Network Theory and Conceptual Change: Paradigm Shifts as Information Cascades (2021), they postulated that the attempt to understand science, its dynamics, and how it changes might call for any of various levels of analysis and must ultimately include them all. According to them, a psychologist might approach the issue with an eye to creativity and conformity, while an economist might approach the topic in terms of incentives for research, innovation, or exploitation of existing resources. Sociologists, meanwhile, might think of the task as primarily a social study, concentrating on research communities, structures of journal communication, and academic procedures for advancement and funding. Philosophers, they continue, have typically thought of particular areas of scientific research as characterized by ‘theoretical frameworks,’ ‘disciplinary matrices,’ ‘scientific paradigms,’ or ‘conceptual schemes’ at a particular time, with scientific change to be understood as changes in those conceptual structures. They point out that such schemes are presumed to exist psychologically in individual heads, that they are shared and changed through the social dynamics of science, and that change may follow both individual and social incentives. They posit that the philosopher’s level of analysis, however, is the ‘theoretical framework,’ ‘disciplinary matrix,’ ‘scientific paradigm,’ or ‘conceptual scheme’ itself, envisaged as something like an abstract object. They highlight that the philosopher presumes that at least major aspects of scientific change will be understood as broadly logical and boundedly rational changes in the scheme itself, though those changes play out in epistemic economics, through psychological mechanisms, and in social dynamics. In their view, under the pressure of new evidence, theoretical frameworks and scientific paradigms can be expected to change.
They contend that the philosopher’s goal, at that level of analysis, is to better understand how. They highlight that despite this long philosophical tradition of thinking in terms of paradigms, theoretical frameworks, and conceptual schemes, no one in the philosophical tradition has attempted to model a conceptual scheme or track its dynamics. They probed: how precisely might one model a paradigm? How might one track, even theoretically, the dynamics of a paradigm shift? They attempted to take some first steps by putting the tools of complex systems and network theory, in particular, to use in the philosophy of science. The science of complexity, they argue, carries important philosophical lessons regarding the complexity of science.
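As a hedged illustration of the network-theoretic idea, and not the authors' actual model, a paradigm shift can be caricatured as a threshold cascade: each scientist adopts the new framework once a sufficient fraction of their neighbors has. The function below (`threshold_cascade`, a hypothetical name) implements the standard Watts-style update.

```python
def threshold_cascade(neighbors, seeds, threshold=0.5):
    """Watts-style threshold cascade on a network.

    `neighbors` maps each node to a list of its neighbors; `seeds`
    are the initial adopters of the new paradigm. A node adopts once
    the fraction of its adopting neighbors reaches `threshold`.
    Returns the final set of adopters.
    """
    adopted = set(seeds)
    changed = True
    while changed:
        changed = False
        for node, nbrs in neighbors.items():
            if node in adopted or not nbrs:
                continue
            if sum(n in adopted for n in nbrs) / len(nbrs) >= threshold:
                adopted.add(node)
                changed = True
    return adopted

# On a six-node ring, a single seed triggers a global cascade:
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
final = threshold_cascade(ring, seeds={0})
```

Raising the threshold slightly (say to 0.6) stops the same seed from spreading at all, which is the kind of qualitative sensitivity that makes cascades an attractive lens on sudden conceptual change.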

2.5. Aaron Bramson, Daniel J. Singer, Steven Fisher, Carissa Flocken et al.

In their work Philosophical Analysis in Modeling Polarization: Notes from a Work in Progress (2012), they opined that computational modeling and computer simulation have quickly established themselves not merely as useful add-ons but as core tools across the range of the sciences. They considered computational modeling to be a promising approach to a range of philosophical questions as well, and to questions that sit on the border between philosophy and other disciplines. According to them, questions regarding the transference of belief, social networks, and opinion polarization fall in the latter category, bridging epistemology, social philosophy, sociology, political science, network studies, and complex systems. They declared that their purpose is not to sing the praises of computational modeling as a new philosophical technique, but rather to emphasize the continuity of computational model-building with the long philosophical tradition of conceptual analysis. With reflections from the process of building a specific model, they emphasized two things: (1) the work of constructing a computational model can serve the philosophical ends of conceptual understanding, in part because (2) attempts at computational modeling often require clarification of the core concepts at issue.
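One standard way to make such polarization questions computationally concrete, offered here as a generic sketch rather than the authors' own model, is a bounded-confidence update in the style of Hegselmann and Krause: each agent averages only with opinions within a confidence radius, and a small radius leaves distinct, persistent opinion clusters.

```python
def bounded_confidence(opinions, epsilon=0.2, steps=50):
    """Hegselmann–Krause-style update: each agent repeatedly moves to
    the mean of all opinions within `epsilon` of its own; opinions
    further away are simply ignored."""
    ops = list(opinions)
    for _ in range(steps):
        # All agents update simultaneously from the previous round.
        ops = [sum(o for o in ops if abs(o - x) <= epsilon)
               / sum(1 for o in ops if abs(o - x) <= epsilon)
               for x in ops]
    return ops

# With a small radius, two camps converge internally but never merge:
final = bounded_confidence([0.1, 0.15, 0.2, 0.8, 0.85, 0.9], epsilon=0.2)
```

Building even this toy model forces exactly the kind of conceptual clarification the authors describe: one must decide what "polarization" means here, whether it is cluster count, cluster distance, or something else.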

2.6. Evan Selinger, William Braynen, Robert Rosenberger, Randy Au et al.

In their work Modeling Prejudice Reduction: Spatialized Game Theory and the Contact Hypothesis (2005), they opined that philosophers have done significant work on concepts of ‘race’ and ‘racism’, on the ethics of a spectrum of race-conscious policies proposed to address a history of discrimination, and on identity and the experience of race. That work consists primarily of conceptual and normative analyses of prejudice and of the social policies designed to address it. According to them, philosophers have also considered internal questions of racism within the canonical history of Western philosophy, for example in Aristotle, Kant, and Hegel. They submit that what has been lacking, however, is sustained philosophical analysis regarding issues raised in the extensive social-psychological literature: questions regarding the nature and formation of prejudice, questions regarding the social dynamics of prejudice, and questions regarding prospects for prejudice reduction. Explaining these was, for them, crucial: adequate public policy requires an accurate account of how prejudice occurs and how it can be reduced. For them, the lack of philosophical attention in this area is thus particularly conspicuous and unfortunate. As a first step toward remedying this situation, and with an eye toward public policy, they applied spatialized game theory and multi-agent computational modeling as philosophical tools: (1) for assessing the primary social-psychological hypothesis regarding prejudice reduction, and (2) for pursuing a deeper understanding of the basic mechanisms of prejudice reduction. They assert that social modeling in general has a philosophical pedigree that extends at least as far back as Hobbes and Locke. The particular techniques of social simulation employed here are relatively new, however, and raise important questions for the philosophy of science.
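A toy version of the kind of tool they describe, purely illustrative and not their actual model, places agents of two groups on a grid and lets each cross-group encounter lower a numeric "prejudice" score, so one can vary contact frequency and watch the aggregate respond. All names and parameters here (`contact_simulation`, `learning_rate`) are invented for the sketch.

```python
import random

def contact_simulation(width=10, height=10, steps=500,
                       learning_rate=0.05, seed=0):
    """Toy spatial contact model: agents of two groups fill a torus
    grid; each step one agent meets a random grid neighbor, and a
    cross-group encounter shrinks that agent's prejudice toward 0."""
    rng = random.Random(seed)
    group = {(x, y): rng.choice((0, 1))
             for x in range(width) for y in range(height)}
    prejudice = {cell: 1.0 for cell in group}
    for _ in range(steps):
        # Pick a random agent and one of its four grid neighbors.
        x, y = rng.randrange(width), rng.randrange(height)
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        partner = ((x + dx) % width, (y + dy) % height)
        if group[(x, y)] != group[partner]:
            prejudice[(x, y)] *= 1 - learning_rate
    return sum(prejudice.values()) / len(prejudice)

mean_prejudice = contact_simulation()
```

Even this caricature exposes the modeling questions the authors raise: whether contact alone suffices, how spatial segregation limits contact in the first place, and which payoff structure a genuinely game-theoretic version should use.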

2.7. Jeremiah A. Lasquety-Reyes

In his work Computer Modeling in Philosophy: Towards Computer Simulations of Virtue Ethics (2019) Jeremiah proffered that though social scientists have used agent-based modeling for many topics with ethical dimensions or concerns, there are only a few agent-based models that explicitly refer to an ethical theory and only a handful of philosophers who have created computer simulations for ethics. He believes that among the few cases, there is currently no agent-based model for virtue ethics. Jeremiah suggested that agent-based modeling is a promising technological instrument to use for virtue ethics research since there is a functional parallelism between the two. According to him, in virtue ethics, a person can possess qualities (virtues or vices) that lead to similar repeated behaviors of an ethical nature; in agent-based modeling, an agent can possess properties and variables that result in more or less predictable behavior as the computer simulation is run.

Person + Qualities → Behavior

Agent + Properties → Behavior

He opined that, as an ethical theory, virtue ethics is essentially “agent-based”: it focuses on the person’s overall qualities, the character that leads to similar and repeated moral actions. He submits that a just person, i.e. someone who possesses the virtue of justice, is inclined to do just acts such as telling the truth, returning things borrowed, or paying a fair price, while an unjust person, i.e. someone who lacks the virtue of justice and instead possesses the vice of injustice, is inclined towards unjust acts such as stealing, cheating, and lying. On the other side of the comparison, agent-based modeling can give virtual agents different variables, rules, or strategies that dictate their behavior. Jeremiah believes that this can be a single simple rule, such as the one found in Schelling’s segregation model, or a complex set of properties, such as those found in Epstein and Axtell’s Sugarscape simulation. He opines that this functional parallelism between virtue ethics and agent-based modeling suggests the possibility of using agent-based modeling as a tool to simulate and observe virtue ethics in action inside a controlled environment, which could lead to a greater appreciation and understanding of the ethical theory. He highlights that one of the things that attracts social scientists and philosophers to agent-based modeling is the possibility of conducting social and ethical “experiments” that are impossible to do in real life. He argues that ethical theories often assume and make generalizations about human behavior that cannot be tested or replicated as one can do with physical experiments in a laboratory.
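Schelling's segregation model, the "single simple rule" example above, can be sketched in a few lines. This is a minimal, assumption-laden version (torus grid, random relocation, invented names): an agent moves to a vacant cell whenever fewer than a threshold fraction of its neighbors share its group.

```python
import random

def schelling_step(grid, empty, threshold=0.3, rng=random):
    """One sweep of a minimal Schelling segregation model on a torus.

    `grid` maps (x, y) -> group (0 or 1); `empty` is a set of vacant
    cells. An agent with fewer than `threshold` like-typed neighbors
    relocates to a random vacant cell. Returns the number of moves.
    """
    cells = list(grid) + list(empty)
    width = 1 + max(x for x, _ in cells)
    height = 1 + max(y for _, y in cells)
    moves = 0
    for (x, y), g in list(grid.items()):
        nbrs = [grid.get(((x + dx) % width, (y + dy) % height))
                for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if (dx, dy) != (0, 0)]
        occupied = [n for n in nbrs if n is not None]
        if (occupied and empty
                and sum(n == g for n in occupied) / len(occupied) < threshold):
            dest = rng.choice(sorted(empty))
            empty.remove(dest)
            empty.add((x, y))
            del grid[(x, y)]
            grid[dest] = g
            moves += 1
    return moves

# Hypothetical setup: an 8x8 torus, two groups, ten vacancies.
rng = random.Random(1)
cells = [(x, y) for x in range(8) for y in range(8)]
rng.shuffle(cells)
empty = set(cells[:10])
grid = {cell: i % 2 for i, cell in enumerate(cells[10:])}
for _ in range(20):
    if schelling_step(grid, empty, rng=rng) == 0:
        break
```

The point of the example is the one Lasquety-Reyes draws: a single local preference rule, iterated, produces striking global patterns, which is exactly the behavior-from-properties structure he sees in virtue ethics.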

However, he believes that a computer simulation representing certain behaviors of human beings can be run an indefinite number of times with different conditions and variables, and that the results can then be compared with each other and quantitatively analyzed. In this respect, he contends that computer simulations could be seen as more complex, robust, and precise counterparts to the thought experiments that philosophers sometimes employ. For this reason, Mascaro et al. called their project a “new experimental ethics” and Wiegel called his project “experimental computational philosophy.” According to them, computer simulation adds a new opening for experimentation and understanding previously unavailable to philosophical ethics. But how does one simulate virtue ethics? Jeremiah suggested that there is a simple way and a complex way to tackle it. The simple way conceives of virtue as a numeric variable invoked during specific events or situations in a simulation; the complex way conceives of virtue as the result of the interaction between physical, cognitive, emotional, and social components in an agent.
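The "simple way" can be sketched as follows; this is an illustrative guess at such a design, not Lasquety-Reyes's actual code, and the class and parameter names (`VirtuousAgent`, `justice`, `habituation`) are invented. A numeric virtue level serves as the probability of acting well when a morally charged situation arises, and each act feeds back into the disposition, loosely echoing the Aristotelian idea of habituation.

```python
import random

class VirtuousAgent:
    """Agent whose numeric `justice` in [0, 1] is the probability of
    acting justly when a situation arises; each act slightly
    reinforces the corresponding disposition (habituation)."""

    def __init__(self, justice=0.5, habituation=0.02, rng=None):
        self.justice = justice
        self.habituation = habituation
        self.rng = rng or random.Random()

    def face_situation(self):
        """Resolve one morally charged situation; return True if the
        agent acted justly, nudging the virtue up or down."""
        acted_justly = self.rng.random() < self.justice
        if acted_justly:
            self.justice = min(1.0, self.justice + self.habituation)
        else:
            self.justice = max(0.0, self.justice - self.habituation)
        return acted_justly

# A habitually just agent faces 100 situations.
agent = VirtuousAgent(justice=0.9, rng=random.Random(3))
acts = [agent.face_situation() for _ in range(100)]
```

Running many such agents under different starting dispositions and tallying their acts is precisely the kind of repeatable, quantitative "experiment" on an ethical theory that the passage describes; the complex way would replace the single number with interacting physical, cognitive, emotional, and social components.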