Last week, I started getting a slew of texts and DMs alerting me that a semi-viral video was rocketing across the internet in which a ‘version’ of me that isn’t quite me is saying things I definitely wouldn’t. The context was a podcast I never hosted with a guest I have never met, in which a ‘version’ of a woman named Arielle Lorre said things I would later discover she wouldn’t and definitely didn’t.
I got deepfaked.
This isn’t my first adventure down the uncanny valley. Six months ago, or maybe nine, a similar video made the rounds in which a clip of me from this podcast was lifted into an advertisement for some low-rent longevity snake oil supplement, wherein I was complimenting a woman I also had never met on her youthful appearance.
It was mildly annoying. I reported both the video and the account. Nothing happened. But the video was so schlocky, so poorly executed, and so thoroughly transparent that I let it go, didn’t think much more about it, and it eventually went away.
Update 4.14.25: That video has now suddenly resurfaced. You can see it here (at least for now). It was sent to me last night and appears on the @antiagingmission Instagram account, which has over 750K followers. I’ve reported it so we’ll see if it gets taken down.
This video is different. Although the clips of me that appear are (similarly and ironically) lifted from that very same podcast I did indeed host (with the wonderful Chef Babette), this time the manipulation was far more severe. This time, words were put in my mouth—AI words uttered by an AI mouth that distorted my face for the purpose of defrauding the credulous for ill-gotten gains.
It’s weird. Creepy for sure—a setup to sell shitty skin care products by an even shittier company out of Korea called Skaind.
Despite all those texts and DMs, I was nonetheless inclined to simply ignore it. Like the video that preceded it, it’s terrible. It takes almost no discernment to see it for what it actually is: clearly fake.
But the messages kept coming, some of them from people I was surprised to discover were genuinely confused, wondering how I could have turned from someone they had come to respect into someone they were now disappointed to realize was, in fact, deranged and completely cringe.
Feeling the need to act, I attempted to report the video and the account (@Skaind.Official) to the powers that be at Instagram, but for reasons that escape me (something about being ‘soft blocked’?) the platform wouldn’t let me. Next I attempted to DM the account and leave a comment below the video. Both efforts failed. I was also denied permission to share the video to my IG Stories. This culminated in my soft block quickly turning hard, and my access to even seeing the Skaind account disappearing altogether.
The following day, Arielle posted her side of the story, along with the details of her defense (which I admit has been more strident and strategic than mine). Despite dispatching her lawyers on the case with a cease and desist, she was rebuffed by Skaind with an offensive attempt to sidestep any responsibility. It was an honest mistake, they claim, as they were unaware that she is “a recognized person with image rights.”
Let’s set aside whether one’s level of recognizability is even salient to the matter, or the fact that it is only because of our recognizability that Arielle and I were deepfaked in the first place (forgive me, I was a lawyer for 15 years). Instead, let’s pull focus to Meta, which proved to be no help either, refusing Arielle’s request to even pull the video down, let alone suspend the account.
Beyond being personally appalled, what I find most upsetting is the intent: a transparent attempt to leverage creators to defraud unsuspecting consumers into buying the bullshit they are selling.
However, this is not a story about personal grievance. It’s a cautionary tale.
The implications of advancing AI technology for the creator economy are obvious. When the trust between creator and audience is eroded, the reputational damage is anything but fake—it’s very much real, with career-ending consequences.
There can be no doubt that the very near future of our information landscape portends a dumpster fire ecosystem of malevolent artifice—a scenario in which what occurred to me and Arielle is hardly an exception and very much the norm. The tech will improve. It is improving, more rapidly than we suspect, and at a rate we’re ill-equipped to protect ourselves against. Soon, low-rent schlock like that shilled by Skaind will disappear, replaced with easy-to-create, cost-effective versions that are shockingly authentic in every discernible way.
What we are experiencing is the dawn of a new era in which nearly everything that is fake will be utterly indistinguishable from whatever remains that is legitimately real.
What is legitimately real is this threat, a threat of existential proportions whose downstream implications are seismic and dire.
When AI can deepfake everything, nobody is safe—and everything is at risk.
I’m hopeful there’s still time. But one thing is clear: amidst the celebration of these remarkable new tools, and the many ways in which they promise to improve lives and already do, understand that there is an arms race afoot—one in which considered efforts to guard against AI’s obvious harms cannot compete with the momentum already underway that propels its innovation forward unabated.
In other words, there is an asymmetry of incentives we must address, one in which at present the incentive to protect the present is no match for the incentive to produce the future at breakneck pace.
Those who sound the alarm are dismissed as irritating scolds who simply don’t get it. Those who build, meanwhile, are celebrated as modern-day heroes who do get it and therefore deserve trillions to pursue their innovations without interference.
If we have the technology to create the problem, it follows that we must also have the technology to prevent the problem from occurring, to diagnose it when it arises, and to inoculate ourselves against whatever malevolence it unleashes. In the short term, I have to believe better and easier ways already exist for platforms like Meta to not only detect and delete deepfakes as they arise, but also to hold perpetrators accountable.
Is it too tall an ask to build a system that creates a chilling effect on bad-faith deepfake behavior, one in which the repercussions are sufficiently consequential to deter bad actors before they do harm?
It’s not an overstatement to say that if we fail to solve this problem, we will fail to survive as a society—because a society in which everything and nothing are real and fake simultaneously isn't a society at all and simply cannot cohere.
Perhaps we’re already there. If not, we’re on the cusp.
We know this.
What we don’t know yet is whether we’re up to the task of actually dealing with it in a truly meaningful way.
While I choose hope, that hope is tempered by something else we know: if the history of humankind tells us anything, it’s that we are a species with a primordial urge to plow forward undeterred, and a species resistant to appreciating the negative consequences of this instinct until it’s too late—preferring to delay the reckoning until after unsavory outcomes that could otherwise have been avoided.
While this may come across as dour, an extreme extrapolation from my flirtation with being deepfaked in the lamest way possible, what it is not is hyperbolic.
The future, our future, depends on what we choose to do or not do about this—and that’s just a fact.
For more of my thoughts on the AI revolution, I suggest you watch my 2nd podcast conversation with Yuval Noah Harari.
For a more hopeful (and fun) exploration of these ideas, please check out ’s podcast and Substack newsletter — a fantastic and highly entertaining exploration of how we maintain our humanity in relationship with this emergent technology.

Thank you for reading. I hope you found it instructive. If so, this is the part where I suppose I’m supposed to ask you to subscribe to this Substack.
Rich
PS - If you’re new to me, I’m the host of The Rich Roll Podcast, which you can find on YouTube, Apple, Spotify, or wherever you listen to pods. I’m also the author of Finding Ultra. For more, visit my website and/or Instagram.
PPS - I’m brand new to Substack. I honestly don’t really even know how this world works. But I am curious, hence this post, which is my very first on this platform. Am I doing this right? What kind of content would you be interested in me sharing here? Any feedback would be appreciated.