The Philosophy of Computation and Computers

Presentation to The Philosophy Forum, Sunday August 2nd, 2015

1.0 Definition and Types of Computation
1.1 Computation is defined here as any type of calculation; it is the foundation of the discipline of computer science, which includes the theory of computation, data structures, programming languages, computer architecture, networks, databases, and so on. The application of computer science technologies has utterly changed our social life and access to knowledge in an extraordinary manner, leading to a global computerised society, along with hypothetical explorations of the computational aspects of mind, the possibility of transformation of the human species, and even speculation that the universe itself is a computational model.

1.2 Pure computation also has approaches which can be distinguished as (a) digital versus analogue, (b) sequential versus parallel versus concurrent, and (c) batch versus interactive (providing feedback as the program completes instances of computation). These can be combined in practice (e.g., analogue parallel interactive computation, such as the MONIAC (Monetary National Income Analogue Computer) created in 1949).

1.3 A function is a set of input-output pairs that performs a specific task; it can be embodied as a named section of a program, as a procedure or routine, encapsulating a task with input and output parameters. A function can be described as a formula, and is considered computable if there is an algorithm (a term derived from al-Khwarizmi, a Persian mathematician, ca. 780-850 CE) that computes the function. Once a function is written, programmers practise "procedural forgetfulness": they can use it by name without recalling its internal details.
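A minimal sketch in Python of these ideas, using Euclid's greatest-common-divisor algorithm as the worked example (the choice of example is mine, not the text's):

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm for the greatest common divisor.

    The function, abstractly, is a set of input-output pairs,
    e.g. (48, 18) -> 6; the loop below is one effective
    procedure (algorithm) that computes it.
    """
    while b != 0:
        a, b = b, a % b  # replace (a, b) with (b, a mod b) until b is 0
    return a

# "Procedural forgetfulness": the caller uses the name and the
# input/output parameters, never the internals.
print(gcd(48, 18))  # 6
```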

1.4 Bill Rapaport, who holds the rare position of a "philosopher of computing", noted four Great Insights of Computer Science (2013).
1.4.1 From Bacon, Leibniz, Boole, Turing, Shannon, and Morse: all the information about any computable problem can be represented using only two nouns: 0 and 1 (or any other bistable pair; I prefer Y/N from Pierre Abélard's "Sic et Non") for logical operators. More values can be applied, e.g., ternary computation and three-valued logic (true, false, indeterminate), etc.
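A small Python illustration of this first insight: numbers and text alike reduce to a bistable pair (the example values and the ASCII encoding choice are mine):

```python
# An integer is already a bit pattern.
n = 42
print(bin(n))  # 0b101010

# Text reduces to the same two nouns: each character becomes eight bits.
word = "No"
bits = " ".join(f"{byte:08b}" for byte in word.encode("ascii"))
print(bits)  # 01001110 01101111
```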
1.4.2 From Turing: every algorithm can be expressed in a language for a computer ("a Turing machine") consisting of: (a) an arbitrarily long recording device with notational areas, (b) a read/write head, (c) whose only nouns are "0" and "1", and (d) whose only 5 verbs (or basic instructions) are: (i) move-left-1-square, (ii) move-right-1-square, (iii) print-0-at-current-square, (iv) print-1-at-current-square, (v) erase-current-square.
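A minimal sketch of such a machine in Python: an unbounded tape, a head, and exactly the five verbs above. The short "program" at the end, which prints alternating 1s and 0s, is my own toy illustration:

```python
from collections import defaultdict

# The tape: arbitrarily long in both directions; blank squares are None.
tape = defaultdict(lambda: None)
head = 0

def move_left():            # verb (i)
    global head
    head -= 1

def move_right():           # verb (ii)
    global head
    head += 1

def print_0():              # verb (iii)
    tape[head] = 0

def print_1():              # verb (iv)
    tape[head] = 1

def erase():                # verb (v)
    tape[head] = None

# A tiny program built only from the five verbs:
for _ in range(3):
    print_1(); move_right(); print_0(); move_right()

print([tape[i] for i in range(6)])  # [1, 0, 1, 0, 1, 0]
```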
1.4.3 Böhm and Jacopini's insight of structured programming. Only three grammar rules are needed to combine any set of basic instructions into more complex ones:
(a) sequence: first do this; then do that
(b) conditional selection: IF 'a' is true, THEN do 'b' ELSE do 'c'
(c) repetition (or looping): WHILE 'x' is true, DO 'y'
Optional grammatical rules for simplicity and elegance include: exit, named procedures, and recursion (as an elegant replacement for repetition).
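A Python sketch combining all three rules in one small function (the Collatz-style example is my own choice, not from the text):

```python
def collatz_steps(n: int) -> int:
    """Count the steps for n to reach 1, using only the three rules."""
    steps = 0
    while n > 1:               # (c) repetition: WHILE n > 1, DO ...
        if n % 2 == 0:         # (b) conditional selection: IF even THEN ... ELSE ...
            n = n // 2
        else:
            n = 3 * n + 1
        steps += 1             # (a) sequence: first update n; then count the step
    return steps

print(collatz_steps(6))  # 6 -> 3 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1: 8 steps
```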
1.4.4 The Church-Turing Thesis.
A function on the natural numbers is computable ("effectively computable") if and only if it is computable by a Turing machine. All algorithms so far translate to Turing machines, but some functions (quite a few, in fact) are non-computable; for example, the Halting Problem: no algorithm can exist which will always correctly decide whether, for a given arbitrary program and its input, the program halts when run with that input.
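The classic diagonal argument behind the Halting Problem can be sketched in Python: given any claimed total decider `halts(f, x)`, we can construct a program that refutes it on its own diagonal (this is a proof sketch with a deliberately naive decider of my own, not real decision logic):

```python
def paradox(halts):
    """Given a claimed halting decider, build its counterexample.

    halts(f, x) is supposed to return True iff f(x) halts.
    """
    def g(f):
        if halts(f, f):     # if the decider says g(g) halts...
            while True:     # ...loop forever instead,
                pass
        return              # ...otherwise halt immediately.
    return g

# A naive "everything halts" decider: g(g) would loop forever,
# refuting its True answer. We therefore do not call it here.
g_true = paradox(lambda f, x: True)

# A "nothing halts" decider is refuted the other way: g(g) halts.
g_false = paradox(lambda f, x: False)
print(g_false(g_false))  # None — it halted, contradicting the decider
```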

2.0 Computer Technology and Architecture

"Remove the lid of the Apple by prying the back edge until it 'pops', then pull straight back on the lid and lift it off."
- chapter 1 of the Apple II Users Manual, Christopher Espinosa

2.1 The contemporary computer system is a digital general-purpose device that can be programmed to carry out logical (and therefore arithmetic) operations. The general form consists of a system unit (the case/chassis, which contains the system board, power supply, memory, hard disk, optical media, peripheral controllers, and network controllers), input and output devices (keyboard, mouse, screen) for users, and the network connection (typically to a switch, router, network-attached storage, etc.). The system board typically holds one or more CPUs (central processing units), each of one or more cores, random access memory, non-volatile memory (ROM) containing firmware such as the BIOS, a clock generator, etc. The central processing unit performs the basic arithmetical, logical, and input/output operations, usually including an arithmetic logic unit (ALU) and a control unit (CU), which manages memory access, instruction decoding, and execution. A processor's specification can be evaluated by speed (clock cycles per second) and width (internal registers, data and memory bus size).

2.2 When a computer is turned on, its firmware (e.g., BIOS, UEFI) engages in a series of power-on self-tests (POST) on hardware components, then invokes a boot loader, which brings up a kernel. The hardware interacts with the software at the level of the kernel, which takes input/output requests from other software and translates them into machine instructions. The operating system (e.g., MS-Windows, MacOS, Linux, Android) manages computer hardware and software resources and provides common services for computer programs. Operating systems may be distinguished by tasking (single- or multi-), users (single- or multi-), distributed (e.g., Plan 9) versus centralised, embedded, and real-time versus scheduled. Application software typically depends on an operating system and is designed for user tasks (e.g., word processors, spreadsheets, games, drawing and graphics, etc.).

2.3 High-performance computing (HPC) is the use of supercomputers and clusters to solve advanced computation problems. "Supercomputer" is a nebulous term for a computer at the frontline of current processing capacity, particularly speed of calculation. One common supercomputer architecture is the computer cluster: simply put, a collection of smaller computers strapped together with a high-speed local network, across which applications can be parallelised through programming. Clustered computing is when two or more computers serve a single resource; this improves performance and provides redundancy in case of system failure. Parallel computing refers to the submission of jobs or processes over one or more processors, splitting up the task between them. The world's most powerful computer at present, Tianhe-2, operates at 33.86 petaflops.

2.4 It is possible to illustrate the degree of parallelisation by using Flynn's Taxonomy of Computer Systems (1966), where each process is considered as the execution of a pool of instructions (instruction stream) on a pool of data (data stream). This yields four basic possibilities: Single Instruction Stream, Single Data Stream (SISD); Single Instruction Stream, Multiple Data Streams (SIMD); Multiple Instruction Streams, Single Data Stream (MISD); and Multiple Instruction Streams, Multiple Data Streams (MIMD).
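A rough conceptual illustration in Python of the SISD/SIMD distinction (real SIMD is a hardware feature, e.g. vector registers; here the list comprehension merely stands in for a single vectorised instruction, as libraries such as NumPy provide):

```python
data = [1, 2, 3, 4]

# SISD: one instruction stream applied to one datum at a time.
sisd = []
for x in data:
    sisd.append(x * 2)

# SIMD (conceptually): one instruction — "multiply by 2" —
# applied across the whole data stream at once.
simd = [x * 2 for x in data]

print(sisd, simd)  # both [2, 4, 6, 8]
```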

2.5 A system may consist of one or more physical processors, each processor may have one or more cores (experimental devices are looking at more than one thousand cores), and a process may be single-threaded or multithreaded. The gain from parallel processing can be expressed as a speedup, which is limited by the program's serial portion and locking requirements and so cannot be proportional to the additional degree of parallelisation (Amdahl's Law); however, this limit can be by-passed by scaling the problem size with the resources (Gustafson's Law).
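Amdahl's Law can be written as speedup = 1 / ((1 - p) + p/n), where p is the parallelisable fraction of the work and n the number of processors. A small Python check (the 95% figure is illustrative, not from the text):

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Maximum speedup for parallelisable fraction p on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

# With 95% of the work parallelisable, even unlimited processors
# cannot exceed a 20x speedup: the serial 5% dominates.
for n in (2, 8, 64, 1024):
    print(n, round(amdahl_speedup(0.95, n), 2))
print("limit:", 1 / (1 - 0.95))  # 20.0
```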

3.0 Trajectories of Computer Technology

3.1 The history of the various types of data storage can be expressed in terms of the typical technologies used, applying the relevant technical metrics along with financial cost. The changes that occur can be presented as an elaboration of the empirical observation made by Gordon Moore, co-founder of Intel, in 1965, that the number, and therefore density, of transistors on a minimum-cost integrated circuit doubles every twenty-four months.
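The doubling rule compounds dramatically; a small Python projection (the starting figure of 2,300 transistors is roughly the Intel 4004 of 1971, used here only as an illustration):

```python
def transistors(start_count: int, years: float, doubling_years: float = 2.0) -> float:
    """Project a transistor count under a fixed doubling period."""
    return start_count * 2 ** (years / doubling_years)

# 2,300 transistors in 1971, doubling every two years,
# projects into the billions within four decades.
print(f"{transistors(2300, 2011 - 1971):,.0f}")  # 2,411,724,800
```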

3.2 The specific application of Moore's Law has since been expanded to refer to a range of computing units. One particular application, known as Kryder's Law, refers to advances in hard disk storage density per square millimetre (originally phrased as per square inch). A further application is the development of primary storage; again, similar results are evident, with increasing size and decreasing price per unit and per megabyte. Computer technologies have developed, and continue to develop, at significantly different rates. RAM speed and hard drive seek speeds, for example, improve particularly slowly. Local area network cabling capacity in an average installation has increased from 2 megabits per second in the 1980s and 10 megabits per second in the 1990s to 100 megabits per second in this decade, with gigabit installations increasingly common. A general rule for Ethernet technology is a tenfold increase in bandwidth per decade.

3.3 Kurzweil ("The Age of Intelligent Machines", 1990; "The Age of Spiritual Machines", 1999; "The Singularity Is Near", 2005) follows Moore's Law to suggest that within our lifetimes computational machinery will surpass human computational power. Several software projects are already capable of producing human-type results (e.g., JAPE, the Joke Analysis and Production Engine; the BRUTUS short story generator; Kurzweil's own Cybernetic Poet; the AARON program that produces impressionist art).

4.0 Computational Speculations

"By reasoning, I understand computation. And to compute is to collect the sum of many things added together at the same time, or to know the remainder when one thing has been taken from another. To reason therefore is the same as to add or to subtract."
- Thomas Hobbes, De Corpore

4.1 The computational theory of mind names a view that the human mind or the human brain (or both) is an information processing system and that thinking is a form of computing. The theory was proposed in its modern form by Hilary Putnam in 1961, and developed by the MIT philosopher and cognitive scientist (and Putnam's PhD student) Jerry Fodor. Computational theories of mind are often said to require mental representation, because the "input" into a computation comes in the form of symbolic representations (language) or sensory representations (touch, images, smells, etc.).

4.2 John Searle has criticised the computational theory of mind with the "Chinese Room" thought experiment, which deals very closely with concepts of meaning and understanding and their relationship with consciousness. Putnam himself later became a critic, arguing that whether the human mind can implement computational states is not relevant to the question of the nature of mind.

4.3 There is a long philosophical tradition of speculating that reality is an illusion (an idea particularly popular in popular-culture products). The hypothesis that reality is a computer simulation comes from Bostrom (2003), who presents a trilemma: assuming that sufficiently advanced species would have the computing power to run such simulations, then either (a) no species survives to become that advanced, (b) advanced species have no interest in producing simulations, or (c) we are almost certainly in a simulation produced by an advanced species.

4.4 A smaller-scale speculation (similar to Descartes' demon) is the brain-in-a-vat thought experiment (Putnam), which asks: if a computer provided sufficiently convincing artificial sensory input to a "brain in a vat", how would the brain know that it is actually in a vat (rather than being embodied and taking a walk in the sunshine)?
