Its LaMDA system is built to make AI more conversational. | Photo by Amelia Holowaty Krales / The Verge

Google has placed one of its engineers on paid administrative leave for allegedly breaking its confidentiality policies after he grew concerned that an AI chatbot system had achieved sentience, the Washington Post reports. The engineer, Blake Lemoine, works in Google’s Responsible AI organization and was testing whether its LaMDA model generates discriminatory language or hate speech.

The engineer’s concerns reportedly grew out of convincing responses he saw the AI system generating about its rights and the ethics of robotics. In April he shared a document with executives titled “Is LaMDA Sentient?” containing a transcript of his conversations with the AI (after being placed on leave, Lemoine published the transcript via his Medium...


