Next steps for Mozilla and Trustworthy AI

(In short: Mozilla has updated its take on the state of AI — and what we need to do to make AI more trustworthy. Read the paper and share your feedback: [email protected].)

In 2020, when Mozilla first focused its philanthropy and advocacy on trustworthy AI, we published a paper outlining our vision. We mapped the barriers to a better AI ecosystem — barriers like centralization, algorithmic bias, and poor data privacy norms. We also mapped paths forward, like shifting industry norms and introducing new regulations and incentives. 

The upshot of that report? We learned AI has a lot in common with the early web. So much promise, but also peril — with harms spanning privacy, security, centralization, and competition. Mozilla’s expertise in open source and holding incumbent tech players accountable put us in a good place to unpack this dynamic and take action. 

A lot has changed since 2020. AI technology has grown more centralized, powerful, and pervasive; its risks and opportunities are no longer abstractions. Conversations about AI have grown louder and more urgent. Meanwhile, within Mozilla, we’ve made progress on our vision, from research and investments to products and grantmaking.

Today, we’re publishing an update to our 2020 report — the progress we’ve made so far, and the work that is left to do.

[Read: Accelerating Progress Toward Trustworthy AI]

Our original paper focused on four strategic areas: 

  • Changing AI development norms,
  • Building new tech and products,
  • Raising consumer awareness,
  • Strengthening AI regulations and incentives. 

This update revisits those areas, outlining what’s changed for the better, what’s changed for the worse, and what’s stayed the same. At a very high level, our takeaways are:

  • Norms: The people who broke the internet are the ones building AI. 
  • Products: More trustworthy AI products need to be mainstream. 
  • Consumers: A more engaged public still needs better choices on AI. 
  • Policy: Governments are making progress while grappling with conflicting influences. 

A consistent theme across these areas is the importance and potential of openness for the development of more trustworthy AI — something Mozilla hasn’t been quiet about.

Our first trustworthy AI paper was both a guidepost and a map, and this one will be, too. Within it are Mozilla’s plans for engaging with AI issues and trends. The paper outlines five key steps Mozilla will take in the years ahead (like making open-source generative AI more trustworthy and mainstream), and five steps the broader movement can take (like pushing back on regulations that would make AI even less open). 

Our first paper was also “open source,” and this one is, too. We are seeking input on the report and on the state of the AI ecosystem more broadly. Through your comments and a series of public events, we will take feedback from the AI community and use it to strengthen our understanding and vision for the future. Please contact us at [email protected] and send us your feedback on the report, as well as examples of trustworthy AI approaches and applications.

The movement for trustworthy AI has made meaningful progress since 2020, but there’s still much more work to be done. It’s time to redouble our efforts and recommit to our core principles, and this report is Mozilla’s next step in doing that. It will take all of us, working together, to turn this vision into reality. There’s no time to waste — let’s get to work.
