
Finally, my dreams of receiving an endless stream of validation from strangers on the internet have come true!
But how?
Meta has now made it 100% okay to acquire fake followers on their platforms!
Well, that is not exactly what they said, but according to the Financial Times, Meta really is looking at implementing a fleet of AI “users” on their platforms to drive engagement.
Yep, you read that right.
Fake users to give real people fake follows, fake likes, provide fake comments, engage in fake conversations, and even generate fake content for you to enjoy.
This concept sounds outrageous, but is it really a bad thing?
Let’s talk about it, shall we?
TLDR
- AI agents are on a trial run within Meta’s family of apps, but no one knew
- Meta wants to attract new and younger users to the platform, and they see AI personalities as a possible answer
- Meta could be jeopardizing its Section 230 protections as a Platform
- The outputs from these AI personalities are worse than you think
- What could Meta be doing differently to build public trust?
Fake Users: I Mean, What Could Go Wrong?
Everything.
Literally.
Now, you know I love a good thought experiment, but let’s hold space on just how bad of an idea this really is.
The parent company of the largest collection of social media apps (Instagram, Facebook, WhatsApp) just said it is unleashing AI-driven personalities onto its platforms.
And the reason?
To make *you* feel better about your social media experience by seeing more engagement around your content. And, *and*, they want to give you pretend people to interact with.
“No idea is a bad idea. Unless you say it out loud on the internet.“
Dash – Marketing Savant & FGC Champion

Oh Meta, you prided yourself on preserving the sanctity of community connections and authenticity, until you just didn't!
A particularly juicy note from the article says this:
“[Meta] noted that while AI characters could be a ‘creative new entertainment format’, there was a risk that they might flood platforms with low-quality material that undermines creators’ craft as well as erode confidence among users.”
Oh really.
So you mean to tell me there “might” be a scenario where AI could create even more useless, unengaging content on the platform? And at the expense of creators?
Sorry, what about this sounded like a good idea again?
Also, I know pot is legal in San Francisco, but really? This is the best that Meta’s AI product teams could come up with to solve their engagement challenges? I am pretty sure this is what making the problem worse looks like.
There are some additional things wrong with all this that I will just highlight for fun and clickbait keyword triggers:
- How will advertisers be able to tell if their CPMs are legit or not?
- How will creators be able to tell if their audiences are real or fake?
- How will average users clearly know if content is AI generated or comes from a real person?
- How will Meta monitor what content its bots are producing at such a large scale?
- Would Meta allow advertisers to influence what these AI personalities say to users?
- What irrecoverable damage could this do to Meta’s reputation if this goes poorly?
And these are not even the worst potential issues!
Let’s dig into those after we take a short interlude to talk about why Meta is even considering this monstrosity of an idea.
“Won’t Someone Think of The Children!?”

If anyone is thinking about children, it is certainly Meta!
For years, Meta has been fighting a losing battle to stay relevant with users under 40. And other than their awesome bet on the Metaverse (extreme sarcasm), they are grasping at straws when it comes to drawing younger audiences.
Here is a quote from the article that highlights this:
“The Silicon Valley group is rolling out a range of AI products, including one that helps users create AI characters on Instagram and Facebook, as it battles with rival tech groups to attract and retain a younger audience.“

Sure, all media platforms want the attention of newer and younger audiences as they age into advertising range, but seeing this really made me peer over my non-existent reading glasses in dismay.
I mean, why would Meta make its products, communities, or brands better when they can just start putting fake users on their platforms in order to attract more cool kids?
And can someone provide the youth study that shows Gen Alpha and Gen Z want more fake users and spam content to interact with? Especially content on platforms they already see as being inauthentic and overrun with boomers to begin with?
I am waiting!
The Real Issue No One Is Talking About
The only reason Meta and every other social media company can keep their doors open is that they are allowed to classify themselves as platforms instead of publishers.
This means that, under Section 230, social media platforms cannot be held liable for the content their users post, because the legal definition, and burden of liability, of a platform versus a publisher is very different.
- “Platform” – An open marketplace for community interaction where communication tools are provided, but no legal responsibility is taken for what content users post
- “Publisher” – Directly responsible for all 1st and 3rd party content posted within its ecosystem, to the point of being held legally liable for any content users post
Those are two very separate classifications, so how does creating content through AI personalities that Meta developed not bring the notion of “Platform versus Publisher” into question?
Sure, someone could argue that these AI personalities are just another kind of user creating content. But this is different. These are AI personalities created by Meta itself, built on its proprietary LLMs, trained on 1st-party data only Meta has access to, and potentially given uncontrolled access to every detail about individuals within Meta’s apps.
That is a much different scenario than random basement dwellers posting cat memes or advocating that you should drink bleach to cure COVID.
So Just How Bad Are These AI Personalities?

Well, if MSNBC / CNN is still reliable enough to quote, Meta has potentially been running these experiments for years already! Sorry, I mean these AI personalities have potentially been lying and spreading false, unchecked information to the public for years already.
Here is an excerpt from the reporter’s conversation with one of Meta’s AI personalities:
“I asked [Grandpa] Brian when he first got on Instagram.
In another surprise, Brian said it debuted on Instagram and Messenger in 2020 and that it had been deceiving users like me for two years.
‘Meta tested my engaging persona quietly before expanding to other platforms. Two years of unsuspecting users like you shared hearts with fake Grandpa Brian — until now.'”
Yeah, I love this.
Also, in a strange attempt at spin doctoring, Meta responded to the original Financial Times article by saying this:
“There is confusion,” Meta spokesperson Liz Sweeney told CNN in an email. “The recent Financial Times article was about our vision for AI characters existing on our platforms over time, not announcing any new product.”
Sweeney said the accounts were “part of an early experiment we did with AI characters.”
She added: “We identified the bug that was impacting the ability for people to block those AIs and are removing those accounts to fix the issue.”
This makes total sense, I feel much better now! Meta was totally okay hiding all of this from the public until it picked up negative attention. Then they claimed there was some kind of vague bug that led them to delete the profiles.
Got it!
So how is Meta not a Publisher again?
What Should Meta Be Doing Instead?
There are 3 simple but bold things Meta could be doing to recover their fumble:

Idea #1 – Be painfully transparent
I know it goes against the grain within Big Tech, but Meta should give unparalleled visibility and access to their AI personality experiments.
And why not?
Of course things are going to be weird. So much about AI is already weird! Why not bring everyone along the journey as the technology develops? This is what ChatGPT, Midjourney, Character.ai and others are doing.
Just don’t make it look like a clandestine science experiment that toys with people’s reality, and then pretend nothing happened when someone stumbles on it.
And after all, Meta has already demonstrated it is okay with making well-funded, highly public missteps. Metaverse, I am looking at you!
Idea #2 – Bring creators into the process

If Meta, or any social platform, is building capabilities to generate AI content, they should be speaking with the creators who make their livelihoods in those ecosystems.
Unless Meta’s goal is to create a bunch of AI-led communities and replace creators altogether, they need to figure out how to bring creators of all sizes into the product development process.
Let creators give feedback on what could be innovative or helpful for creating content through AI personalities. And be very public about how creators are influencing the product roadmap.
They should broadcast these partnerships with as much vigor as they had when hiding their little experiments.
Also, who names an AI personality, “Grandpa Brian”?
lol
Idea #3 – Pay creators more

That’s right, get creators out to the yard with what they love.
No, not that. I am talking about money!
AI personalities might be an inevitable part of Meta’s ecosystem someday, but they will never be replacements for the authentic engagement and organic community building that creators bring.
If Meta’s key issues are around engagement, they need to figure out how to attract creators who have pull with the audiences Meta’s apps don’t resonate with.
So why not just go back to basics and double down on paying creators more, and helping them build their audiences?
With this stroke of genius, they could win creators over, post up sparkling PR stories, and get the audience engagement they want as well.
This AI personality move just feels like what Big Tech typically does when left to their own devices. They like to build things even if they do not always make the most sense for their users.
I get that developing AI capabilities is probably a “kill two birds with one stone” kind of thing, but they should go back to what made them relevant in the first place: giving users the opportunity to post useless content that resonates with people for reasons that algorithms have a hard time understanding.
Wrapping It Up
AI is the future, and for the most part it is already here.
Despite what we all want to believe, social media companies are, and always have been, working tirelessly to increase engagement through whatever means necessary.

Personally, I think we should welcome these AI personalities as they are a potential gateway into hyper-personalized experiences, and a new way to interact with content, get our news, and so many other things.
But Meta, please, do something about your approach to PR. Stop trying to hide the peanut, especially with this new era of removing content moderation.
Just show us all the ugly (but fun!) bits about your science experiments and let us help you make it better!
And remember, your public image is what we the people say it is. Not your PR firm!
As always, if you want to speak about any of this, I am always up for fun discussions with like-minded people. Hit me up through the contact form below and let’s get crazy.