JStories — As generative AI spreads rapidly, large volumes of disinformation and misinformation — along with fake images and videos — are circulating worldwide, fueling concerns that they could undermine social trust and stability. In response, a new international collaboration is taking shape. Driven by a sense of urgency that cross-border cooperation is essential, an initiative aiming to bring together more than 100 companies and organizations worldwide is preparing joint work across a range of fields.
The initiative is being coordinated by Fujitsu, a major Japanese IT company. In 2025, Fujitsu launched an international consortium called “Frontria” to address challenges such as countering AI-generated disinformation and misinformation and developing legal and regulatory frameworks.
Fujitsu has already built a track record of advancing R&D in cutting-edge technologies, including disinformation countermeasures, AI, and cybersecurity. In 2024, the company was selected as the prime contractor for a project publicly solicited by the New Energy and Industrial Technology Development Organization (NEDO) titled “Development of Technologies Related to Disinformation Analysis.” In collaboration with domestic industry-academia organizations, Fujitsu is developing a platform to counter disinformation.
However, efforts confined to Japan alone are not enough to curb the risks posed by disinformation and misinformation. Such content can travel across the globe instantly, triggering major impacts including social division and economic losses.
“Our strong conviction that solving this problem requires international coordination and joint technology development is what led us to launch Frontria,” says Izumi Nitta, senior research manager at Fujitsu Research.
“By bringing together diverse knowledge and technologies from around the world, we want to elevate our technologies beyond mere tools into foundational technologies that support the information infrastructure of the international community as a whole and help build a more robust information environment. We’re also planning hackathons and workshops within this fiscal year that will make use of our disinformation detection technologies and more,” Nitta says.
With international cooperation becoming increasingly indispensable, the significance of a Japanese company taking the lead is considerable.
“Perspectives on the risks posed by advances in AI technology differ by region. For example, Europe places emphasis on human rights and legal regulation, North America is seeking a balance between innovation and national security, and Asian countries take into account rapid population growth. Japan’s emphasis on AI ethics and international cooperation can play an important role in connecting these different regional perspectives and encouraging global dialogue,” Nitta says.
The initiative was launched with participation from more than 50 organizations worldwide, including groups from Japan, Europe, North America, India, and Australia. That number has since grown to around 70 in response to strong interest both in Japan and overseas. Member organizations have already begun activities such as meeting in person to discuss ideas for practical applications. Looking ahead, the consortium plans to deepen discussions with a focus on sectors such as insurance, finance, news media, and entertainment. By fiscal year 2026, it aims to expand to more than 100 participating organizations and to generate concrete, IP-based business case studies.
“Frontria is an open platform tackling the global challenge of disinformation through cross-border collaboration. To build a trustworthy information society for the future, we hope many more organizations will resonate with this initiative and join us as partners,” Nitta says, expressing expectations for further expansion of the collaboration.
Translated by Mizuki Nagakawa | JStories
Edited by Mark Goldsmith
Top photo: Envato
For inquiries regarding this article, please contact jstories@pacificbridge.jp
***
Click here for the Japanese version of the article