Computational Modeling in the Process of Scientific Inquiry

Computational models of scientific inquiry account for the methods employed in solving problems, developing hypotheses, evaluating theories, and providing explanations. They also examine how problems and experiments are generated, and they consider the social aspect of science: the fact that inquiry is carried out by individuals working together. Computational models have been used in the philosophy of science in two main ways: as models of scientific theories themselves, and as models of the social interactions that occur in collaborative scientific research. This chapter focuses on the latter, evaluating the use of parallel computation to characterize group rationality in science, as proposed by Paul Thagard. The goal is to illustrate how computational modeling can aid scientific communities in understanding and executing scientific inquiry, and thereby to demonstrate the philosophical superiority and reliability of parallel computational accounts of theories in the social process of scientific inquiry.

4.1. Science as a Social Activity

The philosophy of science is the subfield of philosophy that examines the fundamental principles, techniques, and consequences of scientific research. It revolves around questions such as how to define science, how reliable and flexible scientific theories are, and what purpose science serves. An ideal scientific methodology would provide a set of guidelines that could lead a researcher from a state of not knowing to a state of knowledge. The conventional philosophy of science centers primarily on individuals and the criteria that must be met to justify their beliefs. This narrow focus is peculiar, however, because scientific endeavors, both historically and in the present, are inherently social. Scientists depend on one another for results, samples, and techniques. Their engagements are frequently collaborative, though occasionally competitive. Moreover, in most societies scientific research is conducted within a network of social connections that links laboratories, government bodies, educational establishments, and communities of people. Can the philosophy of science afford to overlook the social context in which scientific research takes place?

Many philosophers, Paul Thagard among them, believe that it cannot. Thagard observes that “Science today is performed by large communities of scientists working sometimes in collaboration, sometimes in conflict”. For Thagard, problem solving, hypothesis formation, and experimentation in a given field may be distributed across individuals, and different groups may be involved in advocating and exploring conflicting hypotheses. From a computational viewpoint, he proposes that scientific communities can be thought of as highly “parallel processors of information”. Philosophical models of rationality have in the past focused mainly on individual rationality, but there is growing recognition that a group's rationality may surpass the combined rationality of its members. This is especially evident in scientific communities, where different groups pursue competing hypotheses and process information simultaneously, in parallel.

The social aspect of science therefore requires a broader approach to rationality than is typically discussed in philosophy. One way to determine whether particular methods or principles are rational is to ask whether they are likely to lead to true beliefs. Similarly, if institutions or ways of organizing inquiry are likely to improve the chances that members of a community believe the truth, then they too can be considered rational. Computational modeling can show that such institutional structures lead to collectively rational outcomes in situations that occur frequently in the history of science. Surprisingly, individual biases and social incentives, which might seem to hinder the rational pursuit of truth, can have a positive effect on the collective pursuit of knowledge.

4.2. Importance of Parallel Computation

One might assume that the only significant advantage of parallelism in computing is speed: a group of parallel processors working together can accomplish tasks faster, but cannot do anything beyond what a single processor can do. Likewise, the rationality of a scientific group might be seen as merely the sum of the rationality of its individual scientists. Thagard argues, however, that the parallel design of computers can offer more than increased speed of operation: it can lead to qualitatively different means of information processing. Similarly, he maintains that group rationality may require different overall standards than individual rationality.

Thagard highlights the view of Wirth and other theorists that a program should be understood as consisting of data structures together with algorithms for manipulating those structures. The two are interdependent: the algorithms must work with the data in the form given to them. In languages like Pascal, data structures are conceptually distinct from the procedures that use them, whereas in LISP procedures are themselves data structures, namely lists. In both cases, however, it is impossible to specify algorithms without noting the kinds of structures on which they operate. Philosophers, Thagard observes, tend to assume the ubiquity of only one kind of data structure, the proposition, and only one kind of algorithm, logical reasoning. But computer science offers a wealth of structures in which data can be stored: arrays, tables, records, frames, and so on.
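To make the contrast concrete, here is a minimal sketch in Python (my illustration, not Thagard's; the Hypothesis class and its slot names are hypothetical) of the same scientific claim stored first as a bare proposition and then as a structured, frame-like record, together with an algorithm that only the structured form supports.

```python
# A minimal sketch (not from Thagard) contrasting two ways of storing the
# same scientific claim, and how the algorithm must match the structure.
from dataclasses import dataclass, field

# 1. Proposition-style: the claim is an opaque sentence; the only
#    processing available is logical manipulation of whole sentences.
proposition = "Sound is a wave"

# 2. Frame/record-style: the claim is a structured object whose slots
#    (hypothetical names) support retrieval, matching, and analogy.
@dataclass
class Hypothesis:
    subject: str
    predicate: str
    evidence: list = field(default_factory=list)
    analogues: list = field(default_factory=list)

sound_wave = Hypothesis(
    subject="sound",
    predicate="is a wave",
    evidence=["echoes", "interference"],
    analogues=["water waves"],
)

# An algorithm written for frames can exploit slot structure directly,
# e.g. retrieving analogous hypotheses, which the bare string cannot support.
def find_analogues(h: Hypothesis) -> list:
    return h.analogues

print(find_analogues(sound_wave))  # ['water waves']
```

The point of the sketch is Thagard's point: the retrieval algorithm is only specifiable relative to the structure it operates on.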

Chapter 3 showed that our understanding of thinking can be expanded by considering non-propositional data structures and non-logical processing mechanisms. The choice of programming language is partly a matter of personal preference, but experienced programmers understand that certain tasks are more easily accomplished in certain languages. While it is possible to write AI programs in languages such as Pascal, it is more efficient to use languages like LISP, which offer better support for the necessary data structures and algorithms. Qualitatively, in other words, it is easier to create programs in languages that provide the appropriate tools for the task at hand. As Thagard rightly points out, some programming theorists even urge a kind of computational Whorf hypothesis, claiming that the choice of programming language can have a substantial effect on how problems are conceived. These features of programming point to a general argument for the qualitative importance of parallel processing: some programming tasks are much more naturally done using the particular data structures and algorithms of particular languages, and great gains in efficiency and ease of use can be achieved by tailoring hardware to particular programming functions. Hence, in contrast to the in-principle compatibility of any program with any hardware, in practice a good fit between software and hardware is indispensable, which opens the door to the potential usefulness of hardware that employs parallel processors.

I will highlight how Thagard uses examples of parallel computation to demonstrate that group rationality can outperform individual rationality. He argues that parallel architectures not only provide faster processing, but also enable programs that are more reliable, more flexible, and more easily produced than those designed for serial computers.

4.2.1. Reliability

Thagard highlights the advantage of parallelism over serial computing when it comes to system reliability. The human brain is an example of a parallel system in which memory and processing capacity are distributed over large areas, allowing the remaining parts to compensate for any loss. In contrast, removing the storage component of a serial computer program eventually leads to a total breakdown of the program. Parallel machines can continue to function even with a few faulty cells, since their algorithms do not depend on a cell existing at a specific address: the neighbors of a defective cell can identify it and ignore it, and performance continues with only slight degradation. Thagard notes that reliability can also be achieved with serial computers, for example by running two computers in tandem so that each checks and backs up the other. That approach, however, duplicates resources, which is inefficient compared to building reliability into the system itself. Parallelism thus provides a more natural route to reliability than serial computing.
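The following toy simulation in Python (my illustration, not Thagard's; the replication scheme and all parameters are assumptions) shows the kind of graceful degradation he describes: items are replicated across many cells, a handful of cells are marked faulty and simply skipped during lookup, and nearly all items remain retrievable.

```python
# A toy sketch of graceful degradation in a parallel store: data is
# replicated across many cells; defective cells are detected and skipped,
# so retrieval degrades slightly instead of failing outright.
import random

random.seed(0)

N_CELLS = 100   # number of independent storage cells (assumed)
REPLICAS = 5    # copies kept of each item (assumed)

# Store each item in several randomly chosen cells.
cells = [dict() for _ in range(N_CELLS)]
items = {f"result_{i}": f"value_{i}" for i in range(50)}
for key, value in items.items():
    for idx in random.sample(range(N_CELLS), REPLICAS):
        cells[idx][key] = value

# Mark 10% of cells as faulty; their neighbors simply ignore them.
faulty = set(random.sample(range(N_CELLS), 10))

def lookup(key):
    for idx, cell in enumerate(cells):
        if idx in faulty:
            continue  # defective cell identified and skipped
        if key in cell:
            return cell[key]
    return None  # lost only if every replica happened to be faulty

recovered = sum(lookup(k) == v for k, v in items.items())
print(f"{recovered}/{len(items)} items still retrievable")
```

A serial analogue with a single store would lose everything when that store failed; here the redundancy is built into the architecture rather than bolted on as a duplicate machine.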

4.2.2. Flexibility

Thagard then discusses the roles of consistency and flexibility in intelligent processing systems. While consistency is often seen as a paramount virtue in such systems, there are arguments that it is not always necessary and that flexibility can matter more. Minsky, for example, has argued that “a sufficiently flexible system can function despite contradictions”. Parallelism can encourage flexibility and audacity, allowing a system to consider multiple hypotheses simultaneously and potentially discover unexpected solutions. Thagard illustrates these benefits with his program PI (“processes of induction”), which simulates parallelism: PI allows any number of production rules to fire at a single time step, so no strict priority among rules needs to be maintained. The system can thereby work on different tasks simultaneously and discover multiple solutions to a problem. Parallelism can also encourage the emergence of important structures, such as schemas, through the parallel activity of simpler structures.
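The firing scheme just described can be sketched as follows. This is a minimal Python reconstruction of the mechanism as characterized above, not PI's actual code, and the rules and facts are hypothetical: at each cycle, every rule whose conditions are satisfied fires, and all conclusions enter working memory together, so rival hypotheses are pursued side by side.

```python
# Minimal parallel-firing production system: on each cycle, ALL matched
# rules fire and their conclusions are added to working memory at once,
# rather than one rule being selected by a strict priority ordering.

# Hypothetical rules: (set of required facts, fact to conclude).
rules = [
    ({"sound propagates"}, "sound is wavelike"),
    ({"sound propagates"}, "sound is particle-like"),   # rival hypothesis
    ({"sound is wavelike", "waves reflect"}, "sound echoes"),
]

memory = {"sound propagates", "waves reflect"}

changed = True
while changed:
    # Match phase: collect every rule whose conditions currently hold.
    firing = [concl for conds, concl in rules
              if conds <= memory and concl not in memory]
    # Act phase: all matched rules fire in the same time step, so the
    # wave and particle hypotheses are entertained simultaneously.
    memory |= set(firing)
    changed = bool(firing)

print(memory)
```

Note that the two incompatible hypotheses coexist in memory without halting the system, which is exactly the tolerance of contradiction that Minsky and Thagard point to.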

4.2.3. Producibility

Thagard argues that no processing system, whether biological or artificial, can be created from scratch; each must build on existing systems and ideas. The human mind is a complex information-processing system that evolved over millions of years of mammalian development, and the design of modern computers must likewise draw on existing ideas. Thagard suggests that parallel computation may be more “producible” than serial computation in some contexts: it may be easier to design and create intelligent machines using parallel computation, because parallelism allows a greater subdivision of design tasks and reduces the need to anticipate all the interactions that might occur in a complex system. He notes that early attempts at artificial intelligence involved programmers directly entering enough information into computers to make them intelligent, but this kind of “spoon-feeding” has limitations. Expert systems have proliferated, yet they remain restricted to narrow domains. To be truly intelligent, computers must have flexibility and learning capacity comparable to that of humans.

4.3. Parallelism in Scientific Communities

Thagard describes the advantages of scientific inquiry as a collective enterprise by comparing it to a parallel computing system. While individuals are expected to maintain consistency and coherence in their beliefs, a scientific community can be expected to contain sharply competing views, leading to healthy competition and the emergence of new ideas. This flexibility and diversity also gives scientific communities a kind of reliability, overcoming the shortcomings of individual researchers who may be eccentric, incompetent, or even immoral. Thagard further notes that scientific knowledge is so complex that no single scientist can possess or contribute more than a tiny fraction of it, so producibility is another gain from the social nature of science. Here a central executive plays a role, collecting and communicating information from separate researchers. In actual science, professional journals serve much of this function, with journal editors and referees screening the results of research to determine what is worth the attention of other researchers. Finally, Thagard acknowledges that scientists often work in teams rather than individually, which adds a further complication to the collective enterprise of scientific inquiry.
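A toy sketch of the journal-as-central-executive idea follows (my construction; the quality scores and acceptance threshold are stand-ins for peer review, not anything in Thagard's text): many researchers produce results in parallel, and an editor screens the submissions before broadcasting them to the community.

```python
# Toy model of a scientific community as parallel processors with a
# central executive: independent researchers generate results, and an
# editor filters what enters the shared pool of knowledge.
import random

random.seed(1)

def research(worker_id):
    # Hypothetical stand-in for a lab's output: a result with some quality.
    return {"author": worker_id, "quality": random.random()}

def editor(submissions, threshold=0.7):
    # Screening by editors and referees: only sufficiently good work
    # is circulated to the rest of the community.
    return [s for s in submissions if s["quality"] >= threshold]

submissions = [research(i) for i in range(20)]   # parallel, independent work
published = editor(submissions)
shared_knowledge = {p["author"]: p["quality"] for p in published}
print(f"{len(published)} of {len(submissions)} results published")
```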

4.4. Group Rationality

Thagard discusses several methodological issues that arise in scientific research and affect how science is conducted. He asks what standard of rationality is appropriate when individuals work in parallel on scientific research, and he suggests that divisions of labor and method may be more effective in achieving scientific advances than a uniform approach in which everyone works the same way. One common division of labor in physics is between theoreticians and experimenters, though this division is less prevalent in psychology; it may, he suggests, simply reflect individual differences in talent and inclination rather than normative standards.

Thagard then addresses scientists' attachment to their own hypotheses. The standard philosophical account holds that scientists should treat their own hypotheses with the same critical attitude they apply to those of other researchers, but Thagard acknowledges that scientists are often passionate about their hypotheses. This personal attachment can motivate more thorough and intense research. It can sometimes lead to excessive conservatism or even fraud, yet it can also be necessary for incipient research programs, where a degree of conviction is needed to generate interest and momentum.

Thagard concludes that a division of labor between audacious but reckless thinkers and careful but less original critics may be best for scientific research. Critics may make the best journal editors, but they are not necessarily the developers of the most interesting research. These methodological issues are complex and may not have easy answers, but they are important to consider for the advancement of scientific research.

4.5. The Need for Experiments

Thagard discusses the challenge of determining the best methodologies for investigating group rationality in science. While it may not be possible to conduct a controlled experiment on human scientists, he suggests that computer simulations of group operations could be a feasible alternative. Instead of modeling the problem-solving processes of a single individual, we could computationally model the operation of a group in which different individuals play different roles. If different methodologies were developed explicitly enough to be programmed, we could compare the performance of groups with different methodological styles, such as conservative Kuhnian scientists and more critical Popperian scientists. Thagard conjectures that, in the long run, the Kuhnian group would accomplish more.

Furthermore, Thagard suggests that a mixed group with different methodological styles, encompassing the audacious, the critical, and the conservative, could be considered. The results of the simulation could provide important contributions to the theory of group rationality in science. Even the exercise of working out the nature of such methodologies in sufficient detail to be implemented in a computational model would be highly illuminating.
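As a hint of what such a simulation might look like, here is a deliberately simplified Python sketch (my construction, not a precise rendering of Thagard's proposal; the quality function and search moves are stand-in assumptions): “Kuhnian” agents make small refinements to the received view, “Popperian” agents test bold new conjectures, and groups of different compositions can be compared on the same task.

```python
# A crude agent-based comparison of methodological styles: agents search
# for a good hypothesis; Kuhnians refine the group's current best guess,
# Popperians abandon it and test fresh conjectures.
import random

random.seed(2)

def quality(h):
    # Hypothetical stand-in for empirical adequacy of hypothesis h.
    return -abs(h - 0.42)

def simulate(styles, steps=200):
    best = random.random()          # the community's received view
    for _ in range(steps):
        for style in styles:
            if style == "kuhnian":  # small refinement of the received view
                candidate = best + random.gauss(0, 0.01)
            else:                   # popperian: bold new conjecture
                candidate = random.random()
            if quality(candidate) > quality(best):
                best = candidate
    return quality(best)

for group in (["kuhnian"] * 10, ["popperian"] * 10,
              ["kuhnian"] * 5 + ["popperian"] * 5):
    print(group.count("kuhnian"), "kuhnian:", round(simulate(group), 4))
```

Even this crude setup makes vivid how the comparison Thagard envisions could be operationalized: methodological styles become search policies, and group rationality becomes measurable performance on a shared problem.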

In summary, Thagard proposes that computer simulations could be used to investigate group rationality in science and compare the performance of different methodological styles. The results could contribute to the theory of group rationality in science and provide insight into the nature of scientific methodologies.