Researchers say AI “slop” is distorting science, push for mandatory disclosure

Source: Cryptopolitan

Scientists working inside the AI research world are facing a credibility problem they can no longer ignore.

Major conferences focused on AI research reacted after review systems became clogged with weak submissions.

Organizers saw a sharp rise in papers and peer reviews produced with little human effort. The concern is not style. The concern is accuracy. Errors are slipping into places where precision used to matter.

Conferences crack down as low-quality papers overwhelm reviewers

Researchers warned early that unchecked use of automated writing tools could damage the field. Inioluwa Deborah Raji, an AI researcher at the University of California, Berkeley, said the situation turned chaotic fast.

“There is a little bit of irony to the fact that there’s so much enthusiasm for AI shaping other fields when, in reality, our field has gone through this chaotic experience because of the widespread use of AI,” she said.

Hard data shows how widespread the problem became. A Stanford University study published in August found that up to 22 percent of computer science papers showed signs of large language model use.

Pangram, a text analysis start-up, reviewed submissions and peer reviews at the International Conference on Learning Representations in 2025. It estimated that 21 percent of reviews were fully generated by AI, while more than half used it for tasks like editing. Pangram also found that 9 percent of submitted papers had more than half their content produced this way.

The issue reached a tipping point in November. Reviewers at ICLR flagged a paper suspected of being generated by AI that still ranked in the top 17 percent based on reviewer scores. In January, detection firm GPTZero reported more than 100 automated errors across 50 papers presented at NeurIPS, widely seen as the top venue for advanced research in the field.

As concerns grew, ICLR updated its usage rules before the conference. Papers that fail to disclose extensive use of language models now face rejection. Reviewers who submit low-quality evaluations created with automation risk penalties, including having their own papers declined.

Hany Farid, a computer science professor at the University of California, Berkeley, said, “If you’re publishing really low-quality papers that are just wrong, why should society trust us as scientists?”

Paper volumes surge while detection struggles to keep up

According to the report, NeurIPS received 21,575 papers in 2025, up from 17,491 in 2024 and 9,467 in 2020. One author submitted more than 100 papers in a single year, far beyond what is typical for a single researcher.

Thomas G. Dietterich, emeritus professor at Oregon State University and chair of the computer science section of arXiv, said uploads to the open repository also rose sharply.

Still, researchers say the cause is not simple. Some argue the increase comes from more people entering the field. Others say heavy use of AI tools plays a major role. Detection remains difficult because there is no shared standard for identifying automated text. Dietterich said common warning signs include made-up references and incorrect figures. Authors caught doing this can be temporarily banned from arXiv.

Commercial pressure also sits in the background. High-profile demos, soaring salaries, and aggressive competition have pushed parts of the field to focus on quantity. Raji said moments of hype attract outsiders looking for fast results.

At the same time, researchers say some uses are legitimate. Dietterich noted that writing quality in papers from China has improved, likely because language tools help rewrite English more clearly.

The issue now stretches beyond publishing. Companies like Google, Anthropic, and OpenAI promote their models as research partners that can speed up discovery in areas like the life sciences. These systems are trained on academic text, which ties their quality directly to the quality of the literature.

Farid warned that if training data includes too much synthetic material, model performance can degrade. Past studies show large language models can collapse into nonsense when fed uncurated automated data.

Farid said companies scraping research have strong incentives to know which papers are human-written. Kevin Weil, head of science at OpenAI, said tools still require human checks. “It can be a massive accelerator,” he said. “But you have to check it. It doesn’t absolve you from rigour.”
