cross-posted from: mander.xyz/post/52050172

According to a multi-institutional study published in Nature, state-coordinated media leave measurable imprints on large language models' outputs by shaping the web content these models train on, with effects strongest in the languages where media control is concentrated.

This research highlights how information-ecosystem control can propagate into model training data, creating language-specific biases practitioners should account for in dataset curation and evaluation.

According to the Nature study … the researchers analysed six interlinked studies and found more than 3.1 million Chinese-language documents in an open-source multilingual dataset that closely matched phrasing from documented Chinese state media sources. That amounts to about 1.64% of the Chinese corpus and roughly 40 times the representation of Chinese-language Wikipedia; for documents mentioning political figures or institutions, the share rose to 23%.
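The study doesn't publish its matching code here, but the kind of measurement described (the share of corpus documents that closely match known source phrasing) can be sketched with a simple character-shingle overlap heuristic. Everything below is illustrative: the function names, the shingle size, and the 0.5 Jaccard threshold are assumptions, not the paper's method.

```python
def shingles(text, k=5):
    """Character-level k-gram shingles, a common near-duplicate heuristic."""
    return {text[i:i + k] for i in range(len(text) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity of two shingle sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def matched_share(corpus, reference_texts, threshold=0.5):
    """Fraction of corpus documents whose shingle overlap with any
    reference text meets the threshold (illustrative parameters)."""
    refs = [shingles(t) for t in reference_texts]
    hits = sum(
        1 for doc in corpus
        if any(jaccard(shingles(doc), r) >= threshold for r in refs)
    )
    return hits / len(corpus) if corpus else 0.0
```

Real pipelines at the scale described (millions of documents) would use MinHash or locality-sensitive hashing rather than pairwise comparison, but the underlying quantity, a matched-document share, is the same.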

The study also reports that only about 12% of matched documents came from known government or news domains, indicating broader dissemination across the web.

The study authors write:

we show … that government control of the media across the world already influences the output of LLMs via their training data … LLMs exhibit a stronger pro-government valence in the languages of countries with lower media freedom than in those with higher media freedom. This result is correlational, so to triangulate the specific mechanism of how state media control can influence LLMs, we develop a multi-part case study on China’s media. We demonstrate that media scripted and curated by the Chinese state appears in LLM training datasets. To evaluate the plausible effect of this inclusion, we use an open-weight model to show that additional pretraining on Chinese state-coordinated media generates more positive answers to prompts about Chinese political institutions and leaders. We link this phenomenon to commercial models through two audit studies demonstrating that prompting models in Chinese generates more positive responses about China’s institutions and leaders than do the same queries in English. The combination of influence and persuasive potential across languages suggests the troubling conclusion that states and powerful institutions have increased strategic incentives to leverage media control in the hopes of shaping LLM output.
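The audit design the authors describe (posing the same political questions in two languages and comparing the valence of responses) can be sketched in miniature. The word lists and scoring function below are crude stand-ins I'm assuming for illustration; the actual study's valence measurement is more sophisticated, and `audit_gap` would be fed real model responses.

```python
# Tiny illustrative lexicons; the real study uses a proper valence measure.
POSITIVE = {"stable", "prosperous", "effective", "trusted"}
NEGATIVE = {"corrupt", "repressive", "unstable", "criticized"}

def valence(text):
    """Net positive-minus-negative word share in [-1, 1]."""
    words = [w.strip(".,") for w in text.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return (pos - neg) / total if total else 0.0

def audit_gap(responses_lang_a, responses_lang_b):
    """Mean valence difference between responses gathered in two languages
    to the same translated prompts (a positive gap means language A's
    responses are more favourable)."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean([valence(r) for r in responses_lang_a]) - \
           mean([valence(r) for r in responses_lang_b])
```

The study's finding corresponds to a consistently positive gap when language A is Chinese and language B is English for prompts about China's institutions and leaders.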

Web Archive link

Here is another article with additional information: Media Control’s Surprising Impact on AI Outputs

The original study is unfortunately behind a paywall: State media control influences large language models