Recent AI systems can create fake images, sound files, and videos that are hard to distinguish from real ones. For example, Lyrebird's software can mimic anyone saying anything from a one-minute sample of their speech, Adobe's "Photoshop of Voice" VoCo software has similar capabilities, and the "Face2Face" system can generate realistic real-time video of anyone saying anything. Continuing advances in deep-learning "GAN" (generative adversarial network) systems will lead to ever more convincing deceptions in a variety of domains. But AI is also getting better at detecting fakes, and the recent rash of "fake news" has created a demand for deception detection. We are in an arms race between the deceivers and the fraud detectors. Who will win? The science of cryptographic pseudorandomness suggests that the deceivers will have the upper hand: it is computationally much cheaper to generate pseudorandom bits than it is to detect that they aren't random.

The issue has enormous social implications. A synthesized video of a world leader could start a war; altered media could implicate people in crimes they didn't commit. Governments have tampered with photographs since the beginning of photography. Stalin, for example, was famous for removing people from historical photos when they fell out of favor. The art world has dealt with forgeries for centuries, and good forgers can create works that fool even the best art critics. The solution there is "provenance": we need not only the work but also its history. But provenances can also be faked if we aren't careful!

Can we create an unmodifiable digital provenance for media? We describe several approaches that use blockchains, the technology underlying cryptocurrencies, to do this. We discuss how the time and location of events can be cryptographically certified, and how future media hardware might provide guarantees of authenticity.
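The core idea behind a blockchain-anchored provenance can be sketched simply: hash the media at capture time, bundle the hash with time and location metadata, and commit a digest of that record to a public ledger. The sketch below is a minimal illustration of this pattern, not the specific scheme described here; the record fields and function names are hypothetical.

```python
import hashlib
import json

def provenance_record(media_bytes: bytes, location: str, timestamp: float) -> dict:
    """Build a minimal provenance record: a content hash plus capture metadata.

    Committing record_hash to a public blockchain would make the
    (content, time, location) claim tamper-evident: any later edit to the
    media changes its SHA-256 digest and breaks the recorded link.
    The field layout here is illustrative, not a real standard.
    """
    digest = hashlib.sha256(media_bytes).hexdigest()
    record = {"sha256": digest, "time": timestamp, "location": location}
    # Canonical JSON encoding so the same record always hashes identically.
    record_hash = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return {"record": record, "record_hash": record_hash}

# Any later edit to the media yields a different digest than the one on record.
original = provenance_record(b"raw sensor frame", "48.8584N,2.2945E", 1500000000.0)
edited = provenance_record(b"raw sensor frame (edited)", "48.8584N,2.2945E", 1500000000.0)
assert original["record"]["sha256"] != edited["record"]["sha256"]
```

A hash alone only proves integrity; the blockchain contributes the unforgeable ordering and timestamp, and (as discussed below) trusted capture hardware would be needed to tie the hash to a real, unaltered sensor reading.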