What I learned building a fact-checking startup

In the aftermath of the 2016 U.S. election, I set out to build a product that could tackle the scourge of fake news online. My initial hypothesis was simple: build a semi-automated fact-checking algorithm that could highlight any false or dubious claim and suggest the best-quality contextual facts for it. Our thesis was clear, if perhaps utopian: if technology could drive people to seek truth, facts, statistics and data to make their decisions, we could build an online discourse of reason and rationality instead of hyperbole.

After five years of hard work, Factmata has had some successes. But for this space to truly thrive, a great many barriers, from economic to technological, must still be overcome.

Key challenges

We quickly realized that automated fact-checking represents an extremely hard research problem. The first challenge was defining just what facts we were checking. Next came the question of how to build and maintain up-to-date databases of facts against which we could assess the accuracy of given claims. For example, the commonly used Wikidata knowledge base was an obvious option, but it updates too slowly to check claims about rapidly changing events.
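
To make the knowledge-base problem concrete, here is a minimal sketch of how a checker might compare a numeric claim against Wikidata's public SPARQL endpoint. The endpoint and the Q183/P1082 identifiers (Germany and its population) are real Wikidata conventions, but the claim, tolerance and matching logic are illustrative assumptions, not a description of Factmata's pipeline.

```python
# Minimal sketch: verify a claimed number against Wikidata's SPARQL endpoint.
# The tolerance and claim value are illustrative assumptions.
import requests

WIKIDATA_SPARQL = "https://query.wikidata.org/sparql"

def fetch_population(entity_id: str = "Q183") -> float:
    """Fetch a population statement for an entity (Q183 = Germany, P1082 = population)."""
    query = f"""
    SELECT ?population WHERE {{
      wd:{entity_id} wdt:P1082 ?population .
    }}
    """
    resp = requests.get(
        WIKIDATA_SPARQL,
        params={"query": query, "format": "json"},
        headers={"User-Agent": "claim-check-demo/0.1"},
        timeout=30,
    )
    resp.raise_for_status()
    bindings = resp.json()["results"]["bindings"]
    return float(bindings[0]["population"]["value"])

def check_claimed_value(claimed: float, reference: float, tolerance: float = 0.05) -> str:
    """Label a claimed number by how far it strays from the reference value."""
    if abs(claimed - reference) / reference <= tolerance:
        return "consistent with Wikidata"
    return "inconsistent with Wikidata (or Wikidata is stale)"

if __name__ == "__main__":
    reference = fetch_population()
    print(check_claimed_value(claimed=83_000_000, reference=reference))
```

Even in this toy form, the weakness described above is visible: the check is only as good as the reference database, and a slow-updating source cannot arbitrate claims about fast-moving events.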

We also discovered that being a for-profit fact-checking company was an obstacle. Most journalism and fact-checking networks are nonprofit, and social media platforms prefer working with nonprofits in order to avoid accusations of bias.

Beyond these factors, building a business that can rate what is “good” is inherently complex and nuanced. Definitions are endlessly debatable. For example, what people called “fake news” often turned out to be extreme hyperpartisanship, and what people proclaimed “misinformation” was often really contrarian opinion.

Thus, we concluded that detecting what was “bad” (toxic, obscene, threatening or hateful) was a much easier route from a business standpoint. Specifically, we decided to detect “gray area” harmful text — content that a platform is not sure should be removed but needs additional context. To achieve this, we built an API that scores the harmfulness of comments, posts and news articles for their level of hyperpartisanship, controversiality, objectivity, hatefulness and 15 other signals.
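
To give a sense of what such an API looks like from a customer's side, here is a hypothetical client call. The endpoint URL, authentication scheme, field names and score threshold are invented for illustration; Factmata's real API may differ in every detail.

```python
# Hypothetical client for a harmfulness-scoring API of the kind described above.
# Endpoint, auth and response fields are placeholders, not the real API.
import requests

API_URL = "https://api.example.com/v1/score"   # placeholder endpoint
API_KEY = "YOUR_API_KEY"

def score_text(text: str) -> dict:
    """Submit a comment, post or article and get back per-signal scores in [0, 1]."""
    resp = requests.post(
        API_URL,
        json={"text": text},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

scores = score_text("Everyone knows the election was stolen by lizard people.")
# Example response shape: {"hyperpartisanship": 0.91, "controversiality": 0.84,
#                          "objectivity": 0.07, "hatefulness": 0.12, ...}
flagged = {signal: value for signal, value in scores.items() if value >= 0.8}
print(flagged)  # signals that push the item into the "gray area" review queue
```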

We realized that there was value in tracking all the claims evolving online about relevant corporate issues. Thus, beyond our API we built a SaaS platform that tracks rumors and “narratives” evolving in any topic, whether it is about a brand’s products, a government policy or COVID-19 vaccines.
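
As a rough illustration of the idea, one common way to group individual claims into evolving “narratives” is to embed them and cluster similar ones. The model name, similarity threshold and greedy clustering below are assumptions made for the sketch, not a description of our production system.

```python
# Sketch: group claims into "narratives" via sentence embeddings and greedy clustering.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

claims = [
    "The vaccine alters your DNA.",
    "COVID-19 vaccines change human genes.",
    "Brand X batteries explode when charging overnight.",
    "Charging Brand X phones at night causes fires.",
]

# Normalised embeddings so that a dot product approximates cosine similarity.
embeddings = model.encode(claims, normalize_embeddings=True)

def cluster_claims(vectors: np.ndarray, threshold: float = 0.6) -> list[list[int]]:
    """Greedily assign each claim to the first existing narrative it resembles."""
    narratives: list[list[int]] = []
    for i, vec in enumerate(vectors):
        for group in narratives:
            centroid = vectors[group].mean(axis=0)
            if float(np.dot(vec, centroid)) >= threshold:
                group.append(i)
                break
        else:
            narratives.append([i])
    return narratives

for group in cluster_claims(embeddings):
    print([claims[i] for i in group])
```

Tracking how these clusters grow or shrink over time is what turns a pile of individual posts into a picture of a rumor spreading.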

If this sounds complicated, that’s because it is. One of the biggest lessons we learned was just how little $1 million in seed funding buys in this space. Training data around validated hate speech and false claims is no ordinary labeling task — it requires subject-matter expertise and precise deliberation, neither of which comes cheaply.

In fact, building the tools we needed — including multiple browser extensions, website demos, a data labeling platform, a social news commenting platform and live real-time dashboards of our AI’s output — was akin to building several new startups all at the same time.

Complicating things further, finding product-market fit was a very hard journey. After many years of building, Factmata has shifted to brand safety and brand reputation. We sell our technology to online advertising platforms looking to clean up their ad inventory, brands looking for reputation management and optimization, and smaller-scale platforms looking for content moderation. It took us a long time to reach this business model, but in the last year we have finally seen multiple customers sign up for trials and contracts every month, and we are on target for $1 million in recurring revenue by mid-2022.

What needs to be done

Our journey demonstrates just how many barriers there are to building a socially impactful business in the media space. As long as virality and eyeballs are the metrics that drive online advertising, search engines and newsfeeds, change will be hard. And small firms can’t do it on their own; they will need both regulatory and financial support.

Regulators need to step up and start enacting strong laws. Facebook and Twitter have taken massive strides, but online advertising systems are far behind, and emerging platforms have no incentive to evolve differently. Right now, companies have no incentive to moderate speech on their platforms that isn’t outright illegal; reputational damage and fear of user churn are not enough. Even the most ardent supporters of free speech, and I count myself among them, recognize the need to create financial incentives and bans so that platforms really take action and start spending money to reduce harmful content and promote ecosystem health.

What would an alternative look like? Bad content will always exist, but we can create a system that promotes better content.

As flawed as they may be, algorithms have a big role to play; they have the potential to automatically assess online content for its “goodness,” or quality. These “quality scores” could be the basis for new social media platforms that aren’t ad based at all but instead promote (and pay for) content that is beneficial to society.
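
As a toy illustration of what a quality score changes in practice, the sketch below re-ranks a feed by blending a hypothetical quality score with engagement rather than optimizing for engagement alone. The weights and scores are made up for the example.

```python
# Toy example: rank a feed by a blend of predicted quality and engagement.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    engagement: float  # normalised click/share signal in [0, 1]
    quality: float     # output of a quality-scoring model in [0, 1]

def rank_feed(posts: list[Post], quality_weight: float = 0.7) -> list[Post]:
    """Order posts by a weighted blend of quality and engagement."""
    blend = lambda p: quality_weight * p.quality + (1 - quality_weight) * p.engagement
    return sorted(posts, key=blend, reverse=True)

feed = [
    Post("Outrage bait about a rival politician", engagement=0.95, quality=0.10),
    Post("Explainer with cited statistics", engagement=0.40, quality=0.90),
]
for post in rank_feed(feed):
    print(post.title)
```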

Given the scope of the problem, it will take immense resources to build these new scoring algorithms — even the most innovative startups will struggle without tens, if not hundreds, of millions of dollars in funding. It will require multiple companies and nonprofits, all providing different versions that can be embedded in people’s newsfeeds.

Government can help in several ways. First, it should define the rules around “quality”; firms trying to solve this problem shouldn’t be expected to make up their own policies.

Government should also provide funding. Government funding would allow these companies to avoid watering down their goals. It would also encourage firms to make their technologies open to public scrutiny and create transparency around flaws and biases. Firms could even be encouraged to release their technologies to the public for free use, ultimately providing them for public benefit.

Finally, we need to embrace emerging technologies. The platforms have made positive strides, investing seriously in the deep technology required to do content moderation effectively and sustainably. The ad industry, four years on, has also made progress adopting new brand safety algorithms such as those of Factmata, the Global Disinformation Index and Newsguard.

Although initially a skeptic, I am also optimistic about the potential of cryptocurrency and token economics to offer a new way of funding and encouraging good-quality, fact-checked media to prevail and spread at scale. For example, “experts” in tokenized systems can be rewarded for fact-checking claims, efficiently scaling data labeling for AI content moderation systems without firms needing large upfront investments to pay for labeling.
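
One simple way to picture such an incentive scheme: labelers stake tokens on a claim’s verdict, and the majority side splits the pot. The stakes and payout rule below are toy assumptions to illustrate the mechanism, not any real protocol.

```python
# Toy sketch of a token-incentive round for claim labeling: majority verdict wins the pot.
from collections import defaultdict

def settle_round(votes: dict[str, tuple[str, float]]) -> dict[str, float]:
    """votes maps labeler -> (verdict, staked tokens); majority-staked verdict wins."""
    pools: dict[str, float] = defaultdict(float)
    for verdict, stake in votes.values():
        pools[verdict] += stake
    winning = max(pools, key=pools.get)
    pot = sum(pools.values())
    payouts = {}
    for labeler, (verdict, stake) in votes.items():
        # Winners share the whole pot pro rata; losers forfeit their stake.
        payouts[labeler] = pot * stake / pools[winning] if verdict == winning else 0.0
    return payouts

print(settle_round({
    "alice": ("false", 10.0),
    "bob": ("false", 5.0),
    "carol": ("true", 4.0),
}))
# alice and bob split the 19-token pot in proportion to their stakes.
```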

I don’t know if the original vision I set out for Factmata, as the technological component of a fact-based world, will ever be realized. But I am proud that we gave it a shot and am hopeful that our experiences can help others chart a healthier direction in the ongoing battle against misinformation and disinformation.
