Content identification technology.
The most recent wave of AI tools and gimmicks has proven the technology can produce some pretty convincing creative content, whether it's a Van Gogh-style painting of Paddington Bear and Mickey Mouse making out generated in DALL·E or a full Intro to Economics essay on consumer surplus generated in ChatGPT. We're not yet at a point where AI tools can match the ingenuity and creativity of the human brain in all circumstances, but it's foreseeable that we'll close that gap soon.
As we close that gap, you'll see AI-generated content like this more and more frequently. And as that happens, the internet will quickly devolve from Silicon Valley into Dodge City.
Because we don't yet have a good system for logging exactly what was created where and when. Sure, some files (a smartphone photo, for example) carry this sort of information in their metadata, but it's easy enough for that information to be scrubbed or lost in translation as the image floats from site to site. How will we always know who owns that particular photo? As deepfake videos keep getting more realistic, how will we know when a video has been doctored versus when it came straight out of the camera? And as for that debilitatingly lazy Marketing major who used ChatGPT for every single homework assignment and essay last semester, how will we know when a piece of writing was created by a human rather than a neural network?
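To make the metadata point concrete, here's a minimal sketch using Python's Pillow library (the filenames are hypothetical). Reading a photo's EXIF data is trivial, and simply re-saving the image, which is effectively what happens when many sites re-encode an upload, drops that data entirely:

```python
from PIL import Image

# Open the original smartphone photo and inspect its EXIF metadata
# (camera model, timestamp, sometimes GPS coordinates).
img = Image.open("photo.jpg")
print(dict(img.getexif()))  # tag-id -> value mapping

# Re-saving without explicitly passing exif= discards the metadata,
# which is roughly what happens as an image bounces between sites.
img.save("reuploaded.jpg")
print(dict(Image.open("reuploaded.jpg").getexif()))  # typically {}
```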
There's a desperate need for a system that tracks every piece of content created on the internet and enters it into a ledger for future verification of its origins, perhaps on a blockchain, much like NFTs. I'm not sure how this would be implemented or how any startup would realistically make money on it, but the crystal ball of AI is starting to get a bit less murky, and all I'm seeing is a barrage of copyright and plagiarism lawsuits until we sort this problem out.
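As a rough illustration of what such a ledger could look like, here's a minimal sketch in plain Python, assuming a simple content-hash registry; the `register`/`verify` functions and the example data are hypothetical, and a real system would anchor these records somewhere tamper-resistant (a blockchain, a trusted timestamping service) rather than in memory:

```python
import hashlib
import json
import time

# Hypothetical in-memory ledger mapping content fingerprints to origin records.
ledger = {}

def register(content: bytes, creator: str) -> str:
    """Fingerprint a piece of content and record who created it and when."""
    digest = hashlib.sha256(content).hexdigest()
    ledger.setdefault(digest, {"creator": creator, "created_at": time.time()})
    return digest

def verify(content: bytes):
    """Look up a piece of content by its fingerprint; None if never registered."""
    return ledger.get(hashlib.sha256(content).hexdigest())

if __name__ == "__main__":
    essay = "Consumer surplus is the difference between...".encode()
    register(essay, creator="jane@example.com")
    print(json.dumps(verify(essay), indent=2))      # origin record
    print(verify(b"an unregistered deepfake"))      # None
```

Even this toy version shows the hard part: an exact hash only matches byte-identical content, so a one-pixel edit or a re-encode produces a different fingerprint, which is exactly the kind of gap a serious provenance system would have to close.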