How Are AI Companies Fighting the Spread of Disinformation in 2024 Political Ads?
Many worry that unregulated deepfakes will enable the spread of disinformation in the 2024 US elections. At Instreamatic, we recognize this risk — which is why we have measures in place to prevent it.
In a recent interview, Instreamatic CEO Stas Tushinskiy sat down with VentureBeat to discuss the growing role AI is expected to play in the upcoming elections.
Instreamatic’s Contextual Ads allow brands to create unlimited personalized ad versions in minutes, mentioning the audience’s location, a calendar date, or even recent events. This is a game-changer for political campaigns: it automates the tedious manual process of producing ad variations and makes each one more relevant to voters.
Some AI platforms allow users to generate images, videos, and audio using someone’s likeness — which raises concerns about the integrity of campaign media. Instreamatic AI, by contrast, requires clients to obtain permission before using a person’s voice — so not just anyone can clone the voice of a political candidate.
“You can’t just sign up,” Tushinskiy explained. “We will be engaged in campaign creation.” Instreamatic, he said, does not “want to get caught in the middle of something we didn’t intend the platform to be used for.” If problems arose with political ads, he added, they would be “deleted immediately,” and if necessary “we’ll make a public safety statement.”
Additionally, the political advertising offering will not be open to all users — access will be limited.