# The Mathematical Approach To Programs And Computer Science As A Discipline Of A Mathematical Nature

## 1. The mathematical approach to computation

The mathematical approach to computation in the 30s and the 40s generated a number of equivalent definitions: of these, the notion of effective computability by mechanical means, given by the machine model defined by Turing, was the one with the longest-lasting impact and the one that proved most intuitive. We shall see in the next part of this volume how, in parallel to the mathematical foundation, an engineering discipline was taking shape, concentrating on how to realise the same principles of computation in a physical system. As computing machinery developed rapidly during the 50s and 60s, some of those involved looked back at its mathematical foundation and recognised it as the most fundamental model for interpreting the processes embedded in physical computing. In those and the subsequent decades, a major task was to define the relationship between computing and mathematics.

One of the early moments at which the relation between Computer Science and Mathematics was put under investigation is the Automatic Programming for Digital Computers Conference held in Washington D.C. in 1954. The first element that needs highlighting in this context is the property of universality and machine-independence sought for languages by some of the contributors [10, 3]. This obviously appealed to the mathematical nature of computing, in particular to those aspects concerning the construction and identification of properties of systems. In [11], the problem of standards for programming languages starts from the Church-Turing thesis, i.e. the assertion that a sequence of sentences in an instruction code can be seen either as the recursive definition of its output or as the set of specifications of a special-purpose machine designed for that output, and that the Universal Turing Machine can produce anything producible by any special machine (i.e., again, its universality). Across the 60s, the main objective of Computer Science was seen by many in logical terms as symbol manipulation, and in terms of how properties essential to computing (e.g. finiteness) would impact on its logical and mathematical representation.

In 1967, Newell, Perlis and Simon [16], in offering their definition of Computer Science as “computer science is the study of computers [t]he phenomena surrounding computers are varied, complex, rich”, considered various objections, including that computer science should be considered a science of algorithms or programs, not of computers, to which they respond with an understanding of computers as machinery, i.e. “computers plus algorithms”. Forsythe [8] stressed that the algorithmic aspect of the discipline is strongly linked to a pragmatic aspect defined by the dynamics of computation, while Computer Science shares with mathematics the goal of creating basic structures whose concepts are independent of applications. He further discussed to what extent Computer Science can be understood as applied mathematics, while sharing at the same time aspects with engineering. Computer Science and Mathematics share, according to Forsythe, the need for “rigour and exposition (in mathematicians’ language), or performance and documentation (in computer science terminology), and place a higher premium on quality than on promptness”. Finally, he likens digital computers to experimental tools for applied mathematics. Towards the end of the 60s, the idea that the interpretation of programs in appropriately defined programming languages, and eventually the description of their properties, could be provided mathematically was well established. In his seminal paper, Floyd [7] started by associating steps in the control flow of a program with logically holding propositions: such associations are prevented from being arbitrary by an inductively defined relation of logical validity of one proposition from another at each step at which a command in the program is executed. The semantic definition is syntactically based: it specifies which linguistic constructs of the language express valid commands and which conditions hold for each given command.
Starting from a flowchart model of the program in which each step is a command, an interpretation corresponds to a mapping of logical sentences onto each transition from step to step (called the entry and exit of the command). A verification of the flowchart is a proof that, for every command expressed in the flowchart, if there is an entrance at which a given sentence is true, then there must be an exit at which another sentence is true.
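Floyd's verification conditions are established deductively; as a rough illustration only, the condition for a single assignment command can be checked by brute force over a small finite domain. All names below (`execute`, `entry_assertion`, `exit_assertion`) are our own illustrative choices, not Floyd's notation:

```python
# The command under scrutiny: the assignment  x := x + 1.
def execute(state):
    new = dict(state)
    new["x"] += 1
    return new

# Entry sentence attached to the transition into the command ...
def entry_assertion(state):
    return state["x"] >= 0

# ... and exit sentence attached to the transition out of it.
def exit_assertion(state):
    return state["x"] >= 1

# Verification condition: every state satisfying the entry sentence is
# mapped by the command to a state satisfying the exit sentence.
states = [{"x": n} for n in range(-10, 11)]
vc_holds = all(exit_assertion(execute(s))
               for s in states if entry_assertion(s))
print(vc_holds)  # prints True
```

Exhaustive testing over a finite domain is of course weaker than Floyd's deductive method, which establishes the condition for all states at once; the sketch only makes the shape of a verification condition concrete.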

## 2. Computer Science as a Discipline of a Mathematical Nature

A semantic definition of a particular set of command types corresponds to formulating a verification condition for any command of one of these types on the antecedents and consequents of the command. The holding of given relations at the beginning and end of program execution is interpreted so as to prove the correctness and termination of the program in a purely mathematical way, through translation to a particular set of axioms and inference rules. A verified program entered at some step that makes a corresponding sentence true must be followed at each subsequent time by a step which makes a corresponding sentence true. If the program halts, the sentence for the corresponding output state must become true at some point. Floyd stresses that “we have not proved that an exit will ever be reached; the methods so far described offer no security against nonterminating loops.

To some extent, this is intrinsic; such a program as, for example, a mechanical proof procedure, designed to recognize the elements of a recursively enumerable but not recursive set, cannot be guaranteed to terminate without a fundamental loss of power. Most correct programs, however, can be proved to terminate”. The basic idea underlying the approach is thus that properties of such programs in a given language are independent of their physical instantiation and of processors for that language, and can be reduced to “establishing standards of rigor for proofs about programs in the language”. In the early 70s, the duality of Computer Science and Mathematics was still unresolved, but two major figures in Theoretical Computer Science were approaching the problem explicitly. Donald E. Knuth, in his [13], starts by describing computer science as “the study of algorithms […] a precisely defined sequence of rules telling how to produce specified output information from given input information in a finite number of steps. A particular representation of an algorithm is called a program”.
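Termination proofs of the kind Floyd alludes to typically attach to a loop a non-negative quantity that strictly decreases at each iteration. As a hedged sketch only (the names `invariant` and `variant` are ours, and runtime assertions test particular inputs rather than prove the general claim), a loop computing a product by repeated addition can carry its invariant and variant as executable checks:

```python
def product_by_addition(n, m):
    """Compute n * m by repeated addition, asserting a loop invariant
    and a strictly decreasing, non-negative variant along the way."""
    assert n >= 0
    acc, i = 0, 0
    variant_prev = None
    while i < n:
        # Invariant: acc == i * m holds at every entry to the loop body.
        assert acc == i * m
        # Variant: n - i is non-negative and strictly decreases, which
        # rules out a non-terminating loop on these inputs.
        variant = n - i
        assert variant >= 0
        assert variant_prev is None or variant < variant_prev
        variant_prev = variant
        acc += m
        i += 1
    assert acc == n * m  # the exit assertion
    return acc

print(product_by_addition(4, 7))  # prints 28
```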

He then strictly links the need for computers (i.e. the mechanical implementation of programs) to the establishment of general properties of algorithms, as humans are neither precise nor fast enough. Algorithms are singled out as the core of the discipline, common to its many branches. He does however reject the equivalence of the two disciplines, as essentially different viewpoints belong to them: Computer Science was still deemed too young to aim at the depth of results available in mathematics; nonetheless it is presented as a discipline which in the future would be considered basic to general education and which, like mathematics, would be distinctive: “computer science will be somewhat different from the other sciences, in that it deals with man-made laws which can be proved, instead of natural laws which are never known with certainty”. But the two differ in their methods: mathematics allows for infinite processes and static relationships to prove theorems, while computer science allows for finitary constructions and dynamic relationships to formulate algorithms.

## 3. The Debate on Program Correctness

In the same journal issue, Dijkstra approaches the problem of the mathematical nature of programming [5]. Dijkstra starts by characterising mathematical work by precision, generality and rigour, which induce confidence; and programming by performance capacity and reliability of obedience. In view of the latter, the programmer is required to be accurate and precise, thus matching the first of the characteristics of work of a ‘mathematical nature’. In view of the very nature of the algorithms written, the programmer’s work is also always general, as a program can be supplied with different inputs and is always supposed to produce an appropriate result; it thus matches the second of the above listed characteristics of mathematical work. Finally, a program should be composed of well-designed components and, if so, the resulting predictability guarantees the same confidence induced by mathematical work. Note how Dijkstra does not shy away from commenting on the weakest of the above claims, namely the actual unreliability of much programming: while at first it could have been ascribed to the fact that programming was still a young discipline, it then turned out to have become an essential question about the reliability of programs. The software crisis, which surfaced around the end of the 60s, was proof of this point.

The reference by Dijkstra to the software crisis reflects the approach that many theoretically oriented computer scientists took from the 1960s onwards: determining through mathematical proofs whether a program satisfies an agreed specification for all possible inputs. The earliest steps in program verification can be traced back to the works of [Goldstine, von Neumann (1947)], [Turing(1949)] and [Curry(1949)]. From the theoretical viewpoint, their research focused on the identification of tractable notations. Later on, after Floyd’s work, the strongest support for a mathematical view of proving program correctness came from Hoare. From the very start of his famous [12], Hoare is quite explicit in his assumption: “Computer programming is an exact science in that all the properties of a program and all the consequences of executing it in any given environment can, in principle, be found out from the text of the program itself by means of purely deductive reasoning.

” This assumption justifies the requirement of developing a logic to reason about programs, i.e. to prove properties of programs. The system developed includes axioms for arithmetic, and the expressions of the language refer to the values of relevant variables before and after the execution of the program; this is interpreted in the now famous triple consisting of the precondition of execution P, the program Q and the postcondition R:

P{Q}R

Following the work of Floyd a few years earlier [7], Hoare provides an axiom (assignment) and rules (consequence, composition, iteration) for such expressions of program execution and aims at using them to prove program correctness, “provided that the implementation of the programming language conforms to the axioms and rules which have been used in the proof”: a very important abstraction from the actual running and execution of a physical implementation of a written program. For Hoare, proving program correctness aims at solving, at least in the first instance, three of the main problems of programming:

1. specification satisfaction;
2. program documentation;
3. machine independence.
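As a small worked illustration (our own example, written in Hoare's original notation P{Q}R), the assignment axiom and the rule of consequence already suffice to derive a simple triple:

```latex
% Goal: (x = n)\{x := x + 1\}(x = n + 1).
% Assignment axiom (Hoare's D0): \vdash P_0\{x := f\}P,
% where P_0 is P with f substituted for x.
\begin{align*}
&\text{D0 (assignment):}  && \vdash\; (x + 1 = n + 1)\,\{x := x + 1\}\,(x = n + 1)\\
&\text{Arithmetic:}       && \vdash\; (x = n) \supset (x + 1 = n + 1)\\
&\text{D1 (consequence):} && \vdash\; (x = n)\,\{x := x + 1\}\,(x = n + 1)
\end{align*}
```

The derivation is purely syntactic, which is precisely the point of the abstraction noted above: nothing in it depends on how an implementation actually executes the assignment.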

## 4. Correctness between the Physical and the Formal

The axiomatic method is deemed of high value to these aims because it is general, so that implementations should be required to satisfy the axioms, and it is universally applicable, so that aspects of the program like data types can be left underspecified. These properties are the same as those highlighted for activities of a mathematical nature by Dijkstra in his [5]. Reconsidering the task of proving mathematically the correctness of programs, Dijkstra suggests that the structure of the correctness proof should guide program design, in this reinforcing the attitude of programmers to treat their artefacts as mathematical objects: “given the problem, the programmer has to develop (and formulate!) the theory necessary to justify his algorithm. In the course of this work he will often be forced to invent his own formalism”. The complication proper to programming is due to its inner physical nature, which requires the programmer to combine precision and explicitness with the capacity to deal with other properties, such as size, and thus to master complexity.

The discussion that would ensue in the following decades concerning the design of provably correct programs would necessarily alternate between these two essential aspects of programs, physical artefacts and formal structures. And it would guide not only the technical work in Verification, but also a much larger debate in the Philosophy of Computer Science. The purely mathematical approach of the likes of Floyd, Hoare and Dijkstra was strongly criticised in subsequent philosophical debates. [De Millo et al.(1979)] rejects the thesis that a proof of correctness fully establishes confidence in the correctness of a program: their argument relies on the assumption that it is the social acceptance of correctness by the community of mathematicians that establishes such confidence, rather than a proof in itself; this is especially so because a proof can in the first place be opaque and incorrect. A proof, which includes both formal and informal aspects, is only partially equivalent to deductive reasoning, and it can often only be taken as a plausible attempt at establishing the truth of a theorem.

It is instead the presentation of the proof to the community that consolidates its veracity, in terms of its attractiveness and its ability to convince. Confidence in the truth of mathematical statements is obtained through proofs only when these are subject to the social mechanisms of the mathematical community. But formal verification of program correctness is bound to fail because no such social process is in place: verifications cannot be surveyed and are impossible to share among specialists; moreover, the passage from the informal specification of the program to the formal verification cannot but be informal, and as such it undermines the very purpose of the verificationist position; finally, proving through formal verification that a program is consistent with its specification requires that the two elements (program and specification) be independently derived, but the real design process of programs is often a feedback cycle between specification elicitation and implementation. On these bases, the authors dismiss the idea of full formal verification of complex, real software systems, as these do not scale continuously from smaller programs (which might actually be verified). This all reduces to the belief that working with software systems means working with systems one does not understand, nor create perfectly: the only principle that can be invoked, in place of verifiability, is reliability, i.e. that systems can be believed to work sufficiently well. [Cantwell Smith(1985)] also supports the thesis that “there are inherent limitations to what can be proven about computers and computer programs. […] Just because a program is ‘proven correct’, in other words, you cannot be sure that it will do what you intend”.

The main argument by Cantwell Smith is devoted to the use of models in program construction: these models are defined at some level of abstraction and as such are partial. On the contrary, the results of actions (including the results of a program running: landing airplanes, administering drugs and similar) are not partial. The action (and the computer running a program) can therefore be used to provide feedback to complete the model, which we know is not correct (because partial). Hence a program is only as correct as the model is, and this is always some step away from the correctness of the real world. The program developed will differ from the specification in that it has to say how the intended behaviour is to be achieved, while the specification only has to declare what the proper behaviour will be.

Then a proof that a given program satisfies a given specification is, in the first instance, just a way to show that two characterizations of a given behaviour are compatible; and secondly, such compatibility does not guarantee that the specification is in itself a correct representation of your intention. The value of verification lies therefore in making your program design a lot clearer and in discovering bugs that would otherwise go undetected. As such, Cantwell Smith proposes to call this relative consistency and to drop the word correctness entirely. [Fetzer(1988)], while criticising the view in [De Millo et al.(1979)], radically opposes the idea that programs can be treated as logical structures which are valid subjects of deductive verification. Relying on the principle of analyticity of deductive consequence (the truth of the conclusion is contained in the truth of the premises if the inference is to be correct), Fetzer qualifies the correctness of a program p in two forms:

- Absolute verifiability: there exists some program interpreting certain rules of inference, or some program expressing primitive axioms, which permits drawing inferences concerning the performance of a machine executing p;
- Relative verifiability: there exists a set of premises concerning p from which a conclusion concerning the performance of a machine executing p can be derived.

In developing this dual conception of verifiability, Fetzer suggests that a model of proof that can account for errors needs to be inherently probabilistic. Verification, according to Fetzer, can then work for small programs (in the vein [De Millo et al.(1979)] argued for) and for large programs when they consist of relatively straightforward arrangements of smaller programs (exhibiting what he calls cumulative complexity); it is on the other hand more difficult for larger programs consisting of complicated arrangements of smaller programs (exhibiting what he calls patch-work complexity). While Fetzer is not dismissive of the aim of formal verification, he requires that algorithms be distinguished from programs: algorithms and their encodings are subject to absolute verification by means of deductive procedures, as they sit at the level of abstract machines; on the other hand, encodings of algorithms that can be compiled and executed are only subject to relative verifiability, because these concern properties of physical machines, which can be established only inductively. The equivocation of formal verification therefore rests on the idea that its theoretical possibility holds both for algorithms and their encodings (whose behaviour can be proven with certainty) and for compiled and executed code (whose behaviour cannot be proven with certainty). Programming as a mathematical activity holds only for formal expressions, while it no longer holds for programs understood as objects of an applied mathematical activity.

## References

[Ashenhurst(1989)] Ashenhurst, R. L. (1989). Letters in the ACM Forum. Communications of the ACM, 32(3):287.

[1] Baier, C., Katoen, J.-P. (2008). Principles of Model Checking. MIT Press.

[2] Boyer, R., Yu, Y. (1996). Automated proofs of object code for a widely used microprocessor. Journal of the ACM, 43(1):166-192.

[Bringsjord(2015)] Bringsjord, S. (2015). A Vindication of Program Verification. History & Philosophy of Logic, 36(3):262-277.

[Arkoudas, Bringsjord(2007)] Arkoudas, K., Bringsjord, S. (2007). Computers, Justification, and Mathematical Knowledge. Minds and Machines, 17(2):185-202.

[3] Brown, J., Carr, J.W. (1954). Automatic Programming and its Development on the MIDAC. Symposium on Automatic Programming for Digital Computers, Office of Naval Research, Department of the Navy, Washington, D.C., 13-14 May 1954, pp. 84-97.

[Burge(1998)] Burge, T. (1998). Computer Proof, Apriori Knowledge, and Other Minds. Noûs, 32(S12):1-37.

[Cantwell Smith(1985)] Cantwell Smith, B. (1985). The Limits of Correctness. ACM SIGCAS Computers and Society, 14,15(1,2,3,4):18-26.

[Clarke, Wing(1996)] Clarke, E. M., Wing, J. M. (1996). Formal methods: state of the art and future directions. ACM Computing Surveys, 28(4):626-643. DOI: https://doi.org/10.1145/242223.242257

[4] The Coq Development Team (2017). The Coq Proof Assistant Reference Manual, v.8.7.0. https://coq.inria.fr/distrib/current/files/Reference-Manual.pdf

[De Millo et al.(1979)] De Millo, R. L., Lipton, R. J., Perlis, A. J. (1979). Social Processes and Proofs of Theorems and Programs. Communications of the ACM, 22(5):271-281.

[5] Dijkstra, E.W. (1974). Programming as a discipline of mathematical nature. American Mathematical Monthly, 81:608-612.

[Curry(1949)] Curry, H. B. (1949). On the composition of programs for automatic computing. Naval Ordnance Laboratory Memorandum, 9806(52):19-8.

[6] Daylight, E.G. (2012). The Dawn of Software Engineering. Lonely Scholar.

[Fetzer(1988)] Fetzer, J. H. (1988). Program Verification: The Very Idea. Communications of the ACM, 31(9):1048-1063.

[7] Floyd, R.W. (1967). Assigning Meanings to Programs. Proceedings of Symposia in Applied Mathematics, 19:19-32. AMS, Providence, RI, USA.

[8] Forsythe, G. (1968). What to do till the computer scientist comes. American Mathematical Monthly, 75:454-461.

[Goldstine, von Neumann (1947)] Goldstine, H. H., von Neumann, J. (1947). Planning and coding of problems for an electronic computing instrument. Technical Report, Institute for Advanced Study, Princeton.

[9] Gordon, M. (1987). HOL: A proof generating system for higher-order logic. In VLSI Specification, Verification and Synthesis. Kluwer.

[10] Gorn, S. (1954). Planning Universal Semi-Automatic Coding. Symposium on Automatic Programming for Digital Computers, Office of Naval Research, Department of the Navy, Washington, D.C., 13-14 May 1954, pp. 74-83.

[11] Gorn, S. (1957). Standardized Programming Methods and Universal Coding. Journal of the ACM.

[12] Hoare, C.A.R. (1969). An axiomatic basis for computer programming. Communications of the ACM, 12(10):576-580.

[13] Knuth, D.E. (1974). Computer Science and its Relation to Mathematics. American Mathematical Monthly, 81:323-343.

[14] Kuehlmann, A., Srinivasan, A., LaPotin, D. P. (1995). Verity – a formal verification program for custom CMOS circuits. IBM Journal of Research and Development, 39(1/2):149-165.

[Jones(1986)] Jones, C. B. (1986). Systematic Software Development Using VDM. Prentice-Hall International, New York.

[15] Luo, Z., Pollack, R. (1992). LEGO Proof Development System: User's Manual. Tech. Rep. ECS-LFCS-92-211, Computer Science Dept., Univ. of Edinburgh.

[16] Newell, A., Perlis, A. J., Simon, H. A. (1967). What is Computer Science? Science, 157:1373-1374.

[Piccinini(2015)] Piccinini, G. (2015). Physical Computation: A Mechanistic Account. Oxford University Press, Oxford.

[Primiero(2015)] Primiero, G. (2015). Realist Consequence, Epistemic Inference, Computational Correctness. In A. Koslow, A. Buchsbaum (eds.), The Road to Universal Logic, vol. II, Studies in Universal Logic, pp. 573-588. Springer International Publishing, Switzerland.

[Spivey(1988)] Spivey, J. M. (1988). Introducing Z: a Specification Language and its Formal Semantics. Cambridge University Press, New York.

[17] Tedre, M. (2015). The Science of Computing – Shaping a Discipline. CRC Press.

[Turing(1949)] Turing, A. M. (1949). Checking a large routine. Report of a Conference on High Speed Automatic Calculating Machines, pp. 67-69. Republished in F. L. Morris and C. B. Jones, Annals of the History of Computing, 6(2):139-143, 1984. https://www.turingarchive.org/browse.php/B/8

[Turner(2012)] Turner, R. (2012). Machines. In H. Zenil (ed.), A Computable Universe: Understanding and Exploring Nature as Computation, pp. 63-76. World Scientific Publishing Company/Imperial College Press, London.

[Turner, Angius(2017)] Turner, R., Angius, N. (2017). The Philosophy of Computer Science. The Stanford Encyclopedia of Philosophy (Spring 2017 Edition), E. N. Zalta (ed.). https://plato.stanford.edu/archives/spr2017/entries/computer-science/

[Woodcock et al.(2009)] Woodcock, J., Larsen, P. G., Bicarregui, J., Fitzgerald, J. (2009). Formal methods: Practice and experience. ACM Computing Surveys, 41(4), Article 19. DOI: https://doi.org/10.1145/1592434.1592436