Twitter appears to be testing a new verification process for Twitter Blue subscribers that would involve submitting a government ID. Code-level findings reveal a flow for submitting photos of the front and back of the user’s ID, along with a selfie, to verify their Twitter account. The feature is listed alongside others available only to Twitter Blue subscribers, like support for editing tweets, uploading longer videos, organizing bookmarks with folders, and other paid subscription perks.
The ID upload feature was uncovered in Twitter’s code last week by product intelligence firm Watchful.ai, but it’s unclear for now whether it’s being tested externally. The firm told TechCrunch it believes the feature is in testing in the U.S., where it was found in the Android version of the Twitter app. However, it doesn’t know how many (or if any) Twitter users are actually seeing the feature yet.
Twitter, as you may recall, controversially revised its verification process under Elon Musk’s ownership by moving away from an older system where users were verified if they were notable people of some sort — like celebrities, politicians, or other public figures — to one where users could simply pay for the verification checkmark.
That system hit some snags upon first launch, as users verified themselves and then began to impersonate other high-profile individuals or even companies, leading to chaos. Twitter then had to pause the system, retool and relaunch it with increased protections. It also carved out a way for businesses to verify themselves and receive a gold checkmark and said it would label some high-profile accounts with an “Official” badge.
Still, even though the revamped system requires a phone number to become verified, it has been shown to be vulnerable to impersonation. As The Washington Post reported earlier this year, Twitter’s system didn’t ask for a photo ID upon verification, which allowed a reporter to add the verified blue badge to a fake account claiming to be that of a U.S. senator.
Adding a photo ID and selfie requirement to Twitter Blue’s verification process could help fight impersonation if the feature were rolled out more broadly.
In the screenshots provided to TechCrunch by Watchful.ai, Twitter informs users that the new verification process will take about 3 minutes to complete and that their information and images will be shared with a third party for the purpose of confirming their identity. That indicates Twitter itself isn’t handling the verification process but is instead working with a provider to do the heavy lifting.
Though many people continue to believe that verification should be a service provided to the community, rather than a paid offering, Twitter’s move to turn clout-chasing into a paid feature was later adopted by Meta as it chases new revenue streams outside of advertising. Last week, Meta launched paid verification on Facebook and Instagram in the U.S., after earlier rollouts in Australia and New Zealand. Its system allows users to buy its blue checkmark for a monthly fee. However, in Meta’s case, verification also provides impersonation protection and direct access to customer support, which creators and businesses may find to be worth the cost.
If publicly launched, government ID-based verification would be a notable change for Twitter’s verification system, which today is focused more on giving Twitter Blue subscribers increased visibility on the platform, where their tweets are prioritized in the Notifications timeline. And while Twitter may now verify that someone is a human with a real phone number, that doesn’t necessarily mean they are who they say they are, as The Washington Post’s tests indicated.
Twitter doesn’t reply to press inquiries (beyond sometimes now emailing back a poop emoji), so don’t expect a comment.