Risks - Artificial Intelligence

Study Guide

  • Playing God
    • Should God alone be responsible for creating intelligent beings?
      • Some say yes, but others point out that by such reasoning even building a calculator would violate "God's will." Much of the argument centers on what exactly defines intelligence, and for what reasons humans should not create machines that possess it. See the link Playing God?. See also Brian Cantwell Smith's abstract on Theology and AI.
    • Were "human-like" AI achieved, should the resulting machines have rights?
      • Another tough question. It is widely held (as stated in the U.S. Declaration of Independence) that humans have rights derived from God, our Creator. It makes sense, then, that we should be able to decide what rights machines of our own making would have, were that ever necessary. In this LINK Susan Stuart discusses several views.
  • Responsibilities
    • If an artificially intelligent system designed to diagnose disease were to work improperly, who would be responsible: the doctor who used it, the programmer who programmed it, or the expert who designed it?
      • Certainly if the failure were due to the designer's misunderstanding of current knowledge, then he or she should be responsible. Likewise, the company making the product should be liable for programmer error. There are other issues too. This situation can apply to any advisory role of an artificially intelligent system (e.g., legal advice, financial advice). See the link Who is responsible (scroll to the section on "The Future of A.I."). See also Artificial Intelligence: Should we, and if we should then how? (scroll to the section on "The Responsibility of AI Research and Development").
  • Future Issues
    • What could some of the consequences of A.I. be in the future?
      • How should we go about building intelligent machines, and, particularly interesting to us, ethical machines? Some think an ethical machine is impossible. Others see it just around the corner; mathematician Ben Goertzel certainly does. I.J. Good discusses these issues and more in this LINK.
    • What responsibilities do AI researchers have?
      • Artificial intelligence is already used in life-and-death decision making. How accurate should an AI system be before being used in such settings? Various opinions are held, but certainly the AI device should save more lives than a human doing the same job. See the link A.I. -- good or bad? (scroll to the section titled "The Responsibility of AI Research and Development") to get acquainted with some of the issues. Also see the link "It's the Computer's Fault" (scroll to the section on "Responsibility for Computer-Error").