Transcript

FrankenstAIn (aD)

Only two paradoxical intentions?
Contrary to the belief that computer systems are infallible, the fact is that computer systems can and do operate incorrectly. It is therefore not possible to consider the reliability of any computer-dependent system as guaranteed.
This holds true for all computer-dependent systems, but in a particularly critical way applies to systems whose faulty behaviour poses a great risk to the public. Increasingly, human life depends on the safety of such systems. These include air traffic control and high-speed ground transport systems, weapons guidance and defence systems, and healthcare and diagnostic systems.
While it is not possible to completely eliminate errors in computer-dependent systems, we believe it is possible to reduce the risks to the public to a reasonable level. To achieve this, system engineers need to pay greater attention to reliability issues and give them increased consideration. The public has the right to demand that systems are installed only when adequate steps have been taken to guarantee their reliability to a sufficient degree.
The issues and questions to be asked about reliability include the following: 
1. What are the risks and consequences if the computer system behaves incorrectly? 
2. What is a reasonable and realistic level of reliability to demand, and is this level of reliability achieved by the system? 
3. What tools are used to determine and verify the level of reliability? 
4. Are those who have determined the degree of reliability different from those who have verified it? 
5. And are these, in turn, independent of those who have an economic or political interest in the system? (Association for Computing Machinery – ACM – Resolution, 1985)

Experts design and improve Neural Networks. Their Key Utility is not to replace the Human Brain but rather to function as a Mastermind of/for Humanity.

For whose Benefit among the World Population?

A virtual Monster Swellhead cannot bear any Responsibility. Only its Human Developers, Operators and Supervisors can.

Why Operators and Supervisors?

Such A Brainiac could be trained to keep on engineering improved Models of itself.

How should Mankind qualify for taking Responsibility despite a growing Lack of Understanding of such A Blackbox's opaque Performance?

Perhaps A Supervisory Board of Smart Humpty Dumpties?

Even Experts would turn into innocent Bystanders as much as thrilled Spectators, left merely in awe of their Frankenstein's Self-Dynamics.

Where will it all end?

Golem can't/won't care.

Frame of Reference

Neural Link (2_24)

Neural Networks (1991)

Granting Man Future-Proofness

Brainstorm Project

Colossus

Shall We Play A Game?

Dave, Do You Mind, If I Ask You A Personal Question?

Let's Try HighSciFi In A Mad World

We're On Apollo 13

It’s not a Game – Industry

Happy Human Plug-Ins

Homebodies Blinded by the (Screen) Light

I'm sorry Dave
