Strong AI  

Study Guide

Definition:
A computer or machine capable of cognitive thought and genuine understanding, rather than mere symbol manipulation

  1. Will computers really be able to "think" as well or better than humans?
    • Are we merely complex machines, shaped by so-called "memes" (the genes of culture), or do we transcend this?
      • Some say morality is only a part of culture that allows us to further our reproductive ends.
  2. Who is morally responsible?
    • If computers were to think like humans, what would keep them from being bad, or what would make them good?
      • The programmer?
      • The computer itself?
    • Where would the machines get their "moral code"?
      • Isaac Asimov's Three Laws of Robotics might provide a starting point. (About.com)
        1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
        2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
        3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
      • However, these laws assume that every programmer will agree to implement them; as current trends among "script kiddies" and various web-bot programmers show, universal compliance is a fairly hopeless cause.
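The strict priority ordering of the Three Laws can be sketched as a simple veto chain. Everything in this sketch is an illustrative assumption: the predicates and the dictionary encoding of an "action" are invented here, and formalizing what counts as "harm" is precisely the unsolved problem the study questions raise.

```python
# Hypothetical sketch: Asimov's Three Laws as an ordered veto chain.
# The predicates below are assumptions for illustration -- in practice,
# deciding whether an action "harms" a human is the hard, open problem.

def harms_human(action):
    return action.get("harms_human", False)

def allows_harm_by_inaction(action):
    return action.get("allows_harm", False)

def is_human_order(action):
    return action.get("ordered_by_human", False)

def endangers_self(action):
    return action.get("endangers_robot", False)

def permitted(action):
    """Return True if the action passes all three laws, checked in priority order."""
    # First Law: never injure a human, or allow harm through inaction.
    if harms_human(action) or allows_harm_by_inaction(action):
        return False
    # Second Law: obey human orders (conflicts with the First Law
    # were already vetoed above).
    if is_human_order(action):
        return True
    # Third Law: self-preservation, subordinate to the first two laws.
    return not endangers_self(action)
```

Note that the ordering does the real work: a human order that harms a human is vetoed by the First Law before the Second Law is ever consulted, e.g. `permitted({"harms_human": True, "ordered_by_human": True})` is `False`.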
  3. How should this affect our decisions on what we can ethically entrust to computers?
    • The "morality" of the Machina Sapiens is in question.
    • Should Machina Sapiens ever be relied on for anything, dangerous or not?
      • If a machine is making its own decisions, how can it be reliable?
