It is not uncommon for advocates to argue that the production of a sufficiently large expert rules-based system will provide the necessary and sufficient conditions for 'strong' artificial intelligence. Recently, attempts in this direction have been made not with inorganic computational capacity, but rather with the development of a neural network built from DNA. Research by Lulu Qian, Erik Winfree and Jehoshua Bruck, published in the July 2011 edition of Nature, uses "a simple DNA gate architecture" to implement "an artificial neural network model" as biochemical systems that "function as small neural networks".
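To give a sense of the computational model such a DNA architecture realises, here is a minimal sketch (in ordinary Python, not DNA chemistry) of the linear threshold gates that the paper describes composing into small networks; the particular weights, thresholds and examples are my own illustrative assumptions, not figures from the study.

```python
# Toy sketch of the linear threshold gates that DNA gate networks can realise.
# Weights, thresholds and examples are illustrative assumptions; the actual
# network in the Nature paper was built from DNA strand-displacement cascades,
# not silicon arithmetic.

def threshold_gate(inputs, weights, threshold):
    """Fire (output 1) when the weighted sum of inputs reaches the threshold."""
    return 1 if sum(x * w for x, w in zip(inputs, weights)) >= threshold else 0

# Simple logical functions as single threshold gates, the kind of building
# block that a DNA architecture can compose into a small network.
def majority(a, b, c):
    return threshold_gate([a, b, c], [1, 1, 1], 2)

def and_gate(a, b):
    return threshold_gate([a, b], [1, 1], 2)

print(majority(1, 0, 1))  # -> 1
print(and_gate(1, 0))     # -> 0
```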
Last month I had the opportunity to present at the Humanity Plus Conference in Melbourne. In part the presentation was a mapping of the extraordinary changes in computational capacity over the past fifty years, of which Moore's Law is the best known, an issue I have previously illustrated, along with some modest and fairly immediate predictions. What excites artificial intelligence enthusiasts are the longer-term projections, such as those by Ray Kurzweil (c.f., The Age of Intelligent Machines (1990), The Age of Spiritual Machines (1999), The Singularity is Near (2005)), who has argued that by 2019 a standard personal computer will have as much raw power as the human brain and that by 2029 such a machine will be 1,000 times more powerful.
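As a rough sanity check on how such projections compound, the sketch below simply doubles capacity over a fixed period; the doubling periods are assumptions chosen for illustration rather than Kurzweil's own figures, but they show that a 1,000-fold gain in a decade implies roughly ten doublings, i.e. a doubling period of about a year.

```python
# Back-of-the-envelope projection of raw computing capacity under an assumed
# fixed doubling period (a Moore's-Law-style extrapolation, not Kurzweil's
# actual model or figures).

def projected_capacity(base_capacity, years, doubling_period_years):
    """Capacity after `years`, doubling every `doubling_period_years`."""
    return base_capacity * 2 ** (years / doubling_period_years)

# A 1,000-fold increase between 2019 and 2029 is about ten doublings,
# which corresponds to a doubling period of roughly one year.
print(f"{projected_capacity(1.0, 10, 1.0):.0f}x")  # -> 1024x
print(f"{projected_capacity(1.0, 10, 1.5):.0f}x")  # roughly 100x at 18 months
```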
Whilst many will intuitively (and correctly) argue that sheer hardware does not equate with intelligence, for many strong AI advocates this becomes merely a problem of developing sufficiently complex rules-based software. The computational theory of mind, supported by cognitive scientists and philosophers such as Jerry Fodor, holds that thought is a process of computation, defined as rules operating over symbolic representations whose meaning is truth-functional, a correlation between the mental state and an objective condition. In Daniel Dennett's Multiple Drafts Model of the computational theory (Consciousness Explained, 1991), a given event produces a variety of sensory inputs and, in turn, a variety of interpretations of those inputs. In other words, a sufficiently complex robot, with both the hardware capacity and extensive software, would be an intelligent agent. It is the sort of being that Australian philosopher David Chalmers has accurately described as a "zombie".
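To make "rules operating over symbolic representations" concrete, the following is a minimal sketch of truth-functional symbol manipulation of the kind the computational theory has in mind; the facts and rules are invented purely for illustration and stand in for no particular cognitive architecture.

```python
# Minimal truth-functional symbol manipulation: "representations" as symbols,
# "thought" as rule application. Facts and rules are invented for illustration.

facts = {"raining": True, "umbrella_at_hand": False}

# Each rule maps a tuple of premise symbols to a derived symbol.
rules = [
    (("raining",), "ground_is_wet"),
    (("raining", "umbrella_at_hand"), "stay_dry"),
]

def infer(facts, rules):
    """Forward-chain: add a conclusion whenever all its premises hold."""
    derived = dict(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if all(derived.get(p, False) for p in premises) and not derived.get(conclusion, False):
                derived[conclusion] = True
                changed = True
    return derived

print(infer(facts, rules))
# {'raining': True, 'umbrella_at_hand': False, 'ground_is_wet': True}
```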
There is no question that such a robot would be capable, indeed very capable, of following rules-based intelligence. Such machines have already demonstrated their capacity, as famously (if controversially) illustrated by chess-playing machines such as Deep Blue, Rybka and Fritz, or sentry guns like the Super aEgis II. Critics of artificial intelligence, however, argue that rules-based intelligence is not sufficient for consciousness, which also requires understanding, as Searle famously argued in his "Chinese room" thought experiment (it is also worth mentioning, as the APA pointed out in a different context, that "intelligence" is hardly a trivial concept and has several significantly competing definitions). The typical reply to the Chinese Room is that understanding is a system-wide property, although that is a matter Searle dealt with in the original paper. Dennett, taking a more elaborate version of the same reply, complains that Searle "just can't imagine how understanding could be a property that emerges from lots of distributed quasi-understanding in a large system".
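For a flavour of what purely rules-based play looks like, the sketch below is a bare minimax search over a toy game tree; real engines such as Deep Blue add evaluation heuristics, alpha-beta pruning and enormous search depth, so this illustrates only the principle, not any actual engine.

```python
# Bare-bones minimax over a toy game tree: entirely rules-based "intelligence"
# with no understanding of what the positions mean. Real chess engines add
# evaluation heuristics, alpha-beta pruning and vast search depth.

def minimax(node, maximising=True):
    """Return the best achievable score from `node` with optimal play."""
    if isinstance(node, (int, float)):      # leaf: a terminal evaluation
        return node
    scores = [minimax(child, not maximising) for child in node]
    return max(scores) if maximising else min(scores)

# A toy tree: each list is a position, each number a terminal evaluation.
game_tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(game_tree))   # -> 3: the maximiser's best guaranteed outcome
```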
The great weakness of this line of thinking is that it has not engaged with some fairly basic empirical research. The typical starting point can be found in Wittgenstein's Philosophical Investigations (1953), which argued against the possibility of a language understandable only by a single individual, although one can see some interesting precursors in sociology with Emile Durkheim's notion of the collective consciousness and the symbolic interactionism of George Herbert Mead. The core argument is that individual consciousness is simply not possible without social interaction, that symbolic values must be mutually shared in order to have meaning. Such a proposition received tragic confirmation through the feral child Genie, an "experiment" which has unfortunately been repeated.
Intelligence, whether artificial or natural, as humans understand and experience it, is not isolated from consciousness. Whilst it is common to think of consciousness in terms of subjective phenomenological intentionality (such as the medical definition), the historical definition was very much bound up with the co-knowledge of others: "conscius" (con- "together" + scire "to know"), as in "obligentur...communi inter se conscientia" ("they are bound by common co-knowledge between themselves", Cicero, Ver. 2.177). Knowledge of facts thus becomes a moral question of what one does with such knowledge relative to one's subjective desires; sensuality, intelligence and consciousness become intertwined into the entire mental state.
Thus the challenge for strong 'artificial intelligence' advocates is not so much to develop an expert rules-based system, but rather one which can make moral decisions based on mutual understanding with others. Until that is achieved, whatever the level of computational power reached, the much vaunted intelligence of machines is merely zombie-like behaviour.