Yesterday, TikTok served me what appeared to be a deepfake of Timothee Chalamet sitting on Leonardo DiCaprio's lap, and yes, I immediately thought, "if this silly video is this good, imagine how bad the election misinformation will be." OpenAI has, of necessity, been thinking the same thing, and today it updated its policies to begin addressing the issue.
The Wall Street Journal noted the new policy change, which was first published on the OpenAI blog. Users and makers of ChatGPT, DALL-E and other OpenAI tools are now prohibited from using those tools to impersonate candidates or local governments, and users cannot use OpenAI's tools for campaigning or lobbying. Users are also barred from using OpenAI tools to discourage voting or misrepresent the voting process.
The digital credentials system would encode images with their provenance, effectively making it much easier to identify artificially generated images without having to hunt for telltale oddities like weird hands or extra fingers.
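OpenAI's blog post points to the C2PA standard for these credentials, which embeds a signed manifest inside the image file itself. As a rough illustration of what "encoded provenance" means in practice, here's a minimal Python sketch that checks whether a JPEG carries such a manifest. The parsing details are assumptions based on the C2PA spec (manifests ride in JPEG APP11 segments as JUMBF boxes labeled "c2pa"), not anything OpenAI has shipped, and this only detects a manifest's presence; it doesn't cryptographically verify it.

```python
import struct

def has_c2pa_manifest(path: str) -> bool:
    """Scan a JPEG's marker segments for an embedded C2PA/JUMBF manifest."""
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":               # no SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:                   # lost sync with marker stream
            break
        marker = data[i + 1]
        if marker in (0xD9, 0xDA):            # EOI or start-of-scan: stop
            break
        if marker == 0x01 or 0xD0 <= marker <= 0xD7:
            i += 2                            # standalone markers, no payload
            continue
        (seg_len,) = struct.unpack(">H", data[i + 2:i + 4])
        payload = data[i + 4:i + 2 + seg_len]
        # C2PA manifests live in APP11 (0xFFEB) segments as JUMBF boxes
        # labeled "c2pa". Presence is not the same as a valid signature.
        if marker == 0xEB and b"c2pa" in payload:
            return True
        i += 2 + seg_len
    return False
```

A real verifier would also validate the manifest's cryptographic signature chain, since a stripped or forged manifest is exactly the attack this system has to worry about.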
OpenAI's tools will also begin directing US voting questions to CanIVote.org, which tends to be one of the best authorities on the internet for where and how to vote in the US.
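OpenAI hasn't said how that routing works under the hood. Conceptually, though, a guardrail like this can be as simple as pattern-matching procedural voting questions before (or after) they reach the model. The sketch below is purely hypothetical; the pattern list and function name are mine, not OpenAI's.

```python
import re

# Hypothetical trigger: procedural questions about where/how/when to vote.
VOTING_PATTERNS = re.compile(
    r"\b(where|how|when)\b.*\b(vote|voting|polling place|register to vote|ballot)\b",
    re.IGNORECASE,
)

def route_voting_question(prompt: str) -> str | None:
    """Return a CanIVote.org redirect for US voting-logistics prompts, else None."""
    if VOTING_PATTERNS.search(prompt):
        return ("For authoritative information on where and how to vote in the US, "
                "see https://www.canivote.org.")
    return None

if __name__ == "__main__":
    print(route_voting_question("How do I register to vote in Ohio?"))
```

Note that a narrow pattern like this would redirect logistics questions ("how do I register?") while letting substantive political questions through, which is presumably the point.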
But all of these tools are still in the process of being rolled out, and they depend heavily on users reporting bad actors. Given that AI is itself a rapidly changing technology that regularly surprises us with beautiful poetry and outright lies, it's unclear how well any of this will work to combat misinformation this election season. For now, your best bet is to keep embracing media literacy. That means questioning every piece of news or every image that seems too good to be true, and at least doing a quick Google search if ChatGPT turns up something completely wild.