Silicon Valley is debating if AI weapons should be allowed to decide to kill



by BikkaZz

8 comments
  1. “meaning an AI algorithm would make the final decision to kill someone “

    Remember, it’s easier to kill unarmed American civilians… not like facing the American army…

    Bad people using bad AI… far-right extremists promise to shoot shoplifters to ‘help’ the economy…

    Last month, Palantir co-founder and Anduril investor Joe Lonsdale also showed a willingness to consider fully autonomous weapons. At an event hosted by the think tank Hudson Institute, Lonsdale expressed frustration that this question is being framed as a yes-or-no at all. He instead presented a hypothetical where China has embraced AI weapons, but the U.S. has to “press the button every time it fires”….

    Except China isn’t… far-right extremist libertarian tech bros are…

    Lonsdale’s and Luckey’s affiliated companies are working on getting Congress to listen to them. Anduril and Palantir have cumulatively spent over $4 million in lobbying this year, according to OpenSecrets. “

  2. It’s really not an “if”; it’s more a question of what value we’re putting on human life.

  3. >According to Reese, Skynet “saw all humans as a threat; not just the ones on the other side” and “decided our fate in a microsecond: extermination”. It began a nuclear war which destroyed most of the human population, and initiated a program of [genocide](https://en.wikipedia.org/wiki/Genocide) against survivors. Skynet used its resources to gather a slave labor force from surviving humans.

  4. Oh great, the AI can’t answer basic questions correctly and now it’s going to be killing me.

  5. Short answer: no. The technology isn’t sufficiently advanced. Maybe once it’s built out, it’ll be trustworthy enough to make that determination, but not any time soon.
