Meta Believes Social Media Can Safeguard Us Against Deep Fakes

Deep fakes represent one of the most alarming threats posed by AI technology today. Producing realistic photos, audio, and videos has become astonishingly easy. For instance, you can find deep fakes of celebrities like Morgan Freeman and Tom Cruise below.

Interestingly, while social media has often served as a platform for spreading these deep fakes, Adam Mosseri, head of Instagram, believes it could be instrumental in debunking them as well …

Understanding How Deep Fakes Are Created

Traditionally, deep fake videos have been produced using a technique called generative adversarial networks (GANs).

A generator model produces fake video, while a second model, the discriminator, is shown a mix of real and generated footage and tries to tell the two apart. Each round of this contest trains the generator to produce increasingly realistic fakes.
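The adversarial loop can be sketched in a few lines. This is a hypothetical toy example, not any real deepfake system: the "footage" is just numbers drawn from a Gaussian, the generator and discriminator are one-parameter-pair models with hand-derived gradients, and a small weight decay is added to the discriminator to keep the game stable.

```python
import numpy as np

# Toy 1D GAN sketch (illustrative only). "Real footage" is samples from
# N(3, 1); the generator maps noise z ~ N(0, 1) through g(z) = a*z + b,
# and the discriminator is a logistic classifier d(x) = sigmoid(w*x + c).

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

a, b = 1.0, 0.0        # generator parameters
w, c = 0.1, 0.0        # discriminator parameters
lr, batch, decay = 0.05, 64, 0.1

for step in range(5000):
    real = rng.normal(3.0, 1.0, batch)   # samples of "real footage"
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b                     # generator output

    # Discriminator step: push d(real) toward 1 and d(fake) toward 0.
    d_r, d_f = sigmoid(w * real + c), sigmoid(w * fake + c)
    grad_w = -np.mean((1 - d_r) * real - d_f * fake)
    grad_c = -np.mean((1 - d_r) - d_f)
    w -= lr * (grad_w + decay * w)       # weight decay damps oscillation
    c -= lr * (grad_c + decay * c)

    # Generator step: push d(fake) toward 1, i.e. fool the discriminator.
    d_f = sigmoid(w * fake + c)
    grad_a = -np.mean((1 - d_f) * w * z)
    grad_b = -np.mean((1 - d_f) * w)
    a -= lr * grad_a
    b -= lr * grad_b

samples = a * rng.normal(0.0, 1.0, 10000) + b
print(f"generated mean ~ {samples.mean():.2f} (real data mean 3.0)")
```

After training, the generator's output distribution drifts toward the real data's: the same tug-of-war, scaled up to millions of parameters and video frames, is what produces convincing deepfakes.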

More recently, diffusion models, the technique behind image generators such as DALL-E 2, have begun to gain prominence. A diffusion model starts from random noise and removes it step by step, guided by a text prompt, until realistic imagery emerges; it can also start from real footage and produce endless variations of it. The more data these models are trained on, the more convincing their output becomes.
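The "remove noise step by step" idea can be demonstrated with a toy denoising loop. Real systems train a neural network to predict the noise at each step; in this hypothetical sketch the "data" is just a 1D Gaussian, for which the ideal noise prediction is known in closed form, so the reverse loop runs with no training at all.

```python
import numpy as np

# Toy 1D diffusion sketch (illustrative only): recover samples of N(3, 1)
# starting from pure noise, via a DDPM-style reverse (denoising) loop.

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)   # forward noise schedule
alphas = 1.0 - betas
abar = np.cumprod(alphas)            # cumulative signal retained by step t

mu = 3.0                             # mean of the "real" data N(3, 1)
x = rng.normal(0.0, 1.0, 5000)       # start from pure noise

for t in range(T - 1, -1, -1):
    # For data ~ N(mu, 1) the noisy marginal at step t is N(sqrt(abar)*mu, 1),
    # so the optimal noise prediction is available analytically. A trained
    # model would output eps_hat here instead.
    eps_hat = np.sqrt(1.0 - abar[t]) * (x - np.sqrt(abar[t]) * mu)
    # Subtract the predicted noise, then rescale (standard DDPM update).
    x = (x - betas[t] / np.sqrt(1.0 - abar[t]) * eps_hat) / np.sqrt(alphas[t])
    if t > 0:
        x += np.sqrt(betas[t]) * rng.normal(0.0, 1.0, x.shape)

print(f"denoised mean ~ {x.mean():.2f}, std ~ {x.std():.2f} (target 3.0, 1.0)")
```

Starting from featureless noise, the loop ends with samples matching the data distribution, which is exactly how diffusion models conjure images or frames "out of nothing", steered by a prompt instead of a closed-form formula.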

Notable Deep Fake Video Examples

One notable example features a deep fake of Morgan Freeman, crafted three years ago when the technology was not as advanced as it is today.

Another famous instance depicts Tom Cruise in the role of Iron Man.

Additionally, British viewers might recognize Martin Lewis, a popular financial advisor, in a deep fake created to promote a crypto scam.

Adam Mosseri from Meta suggests that social media could potentially improve the situation by helping to highlight fake content. However, he acknowledges that it isn’t a flawless system, and individuals need to evaluate sources carefully.

As we’ve progressed, our ability to create realistic imagery, both still and moving, has improved dramatically. Jurassic Park amazed me at ten years old; however, that was a $63 million production. GoldenEye on N64 astonished me four years later because it was rendered in real time. In retrospect, those productions appear rather rudimentary. Regardless of one’s viewpoint on technology, generative AI is clearly producing content that is difficult to distinguish from actual recordings, and its capabilities are advancing swiftly.

A friend of mine, @lessin, challenged me around ten years back with the idea that any statement should be evaluated not only on its content but also on the credibility of the person or organization making it. While this collective realization may have taken time, it now seems more important to consider who is saying something than what is being said when judging a claim’s validity.

As platforms on the internet, our responsibility is to label AI-generated content as accurately as we can. Nevertheless, some material will inevitably evade detection, and not all misleading information will originate from AI, thus we should also furnish context about the source to help users gauge the trustworthiness of the content.

It is vital for consumers—be they viewers or readers—to maintain a critical perspective when engaging with content that claims to be a record of reality. My advice is to *always* reflect on the identity of the speaker.

Image: Shamook
