It just needs to observe certain shared principles. But can such software then become foolproof?
3. A fundamental premise is that much, perhaps even all, software is rooted in how the human brain operates. Software is in this sense the externalization of the brain's own behavior. Software capability, and complexity, have evolved as designers understand more, and better, about their own thought processes.
5. Suppose software could in some sense "step outside" the human framework. Can such a "mind of its own" be simultaneously a CAS and foolproof? Gödel's theorems suggest that this is not universally possible: in the ever-increasing complexity required to produce a CAS, insoluble problems will always arise, and at some point the CAS will be forced to "guess", since it will not be able to rationally compute an answer. One solution may be to build every CAS to an order of complexity greater than that of the task for which it is destined. Gödel allows for the extension of a system to solve previously insoluble problems. New insoluble problems will simply arise, but these may be made to lie outside the domain concerned, so that the CAS becomes foolproof within a defined domain.
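The appeal to Gödel above can be made more precise. The following is a hedged sketch of the first incompleteness theorem and of the extension step the argument relies on; the mapping onto a CAS is the essay's analogy, not part of the theorem itself:

```latex
% First incompleteness theorem (informal statement): if $F$ is a
% consistent, effectively axiomatized formal system capable of
% expressing elementary arithmetic, then there is a sentence $G_F$ with
\[
  F \nvdash G_F \quad\text{and}\quad F \nvdash \lnot G_F .
\]
% Extending the system to $F' = F + G_F$ decides $G_F$, but yields a
% fresh undecidable sentence $G_{F'}$: each extension resolves old
% insoluble problems while creating new ones, which point 5 proposes
% to push outside the domain of interest.
```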
6. As an extension of these notions, empirical evidence suggests that very complex systems remain inherently "buggy" and that software bugs will appear no matter what the design methodology. Designers, and perhaps the CAS itself, can repair bugs in the CAS. As in point 5, this becomes an iterative process for reaching a stage where the CAS is foolproof within a defined domain, although no guarantee is given for the universal case.
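The iterative repair process of points 5 and 6 can be sketched as a toy model. All names here (the specification oracle, the exhaustive test loop, the patching helper) are illustrative assumptions, not from the source: over a finite, defined domain, each pass finds one failing input and patches it, terminating in a system that is "foolproof" on that domain while guaranteeing nothing outside it.

```python
# Toy model: iterative bug repair within a defined (finite) domain.
DOMAIN = range(-10, 11)  # the "defined domain" of point 5

def spec(x):
    """The intended behavior (the oracle the repairs aim at)."""
    return abs(x)

def buggy_abs(x):
    """Initial implementation: wrong for negative inputs."""
    return x

def find_bug(impl):
    """Exhaustively test impl over the domain; return a failing input, or None."""
    for x in DOMAIN:
        if impl(x) != spec(x):
            return x
    return None

def repaired(impl, bad_input):
    """Patch impl on one failing input, leaving the rest unchanged."""
    def fixed(x):
        return spec(x) if x == bad_input else impl(x)
    return fixed

impl = buggy_abs
while (bug := find_bug(impl)) is not None:  # the iterative process of point 6
    impl = repaired(impl, bug)

# impl now agrees with spec on every input in DOMAIN: foolproof within
# the defined domain, with no claim made for inputs outside it.
```

Because the domain is finite, the loop must terminate; the essay's point is that for a universal (unbounded) domain no such guarantee exists.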
7. Given that software is a manifestation of the human mind, it is likely to evolve under the same possibilities and constraints. It will therefore never be 100% foolproof, only foolproof within a defined domain, and the time needed to realize such a system will be a function of the complexity and the breadth of that domain.
8. As a final remark, a CAS may not be foolproof per se, but it may well be able to fool a human being. Recall Turing's observation about the situation in which a human being can no longer tell whether the interaction with an entity behind a computer screen is with another human being or with a machine. In this restricted sense, the foolproof software CAS is already with us.