Daily Shaarli

All links of one day in a single page.

April 8, 2025

Calling All Mad Scientists: Reject "AI" as a Framing of Your Work • Buttondown

By Emily M. Bender. tl;dr: Every time you describe your work that involves statistical modeling as "AI", you are lending your power and credibility to Musk and DOGE.

Calling it "AI" is fast becoming yet another kind of anticipatory obedience.

If what you are doing is sensible and grounded science, there is undoubtedly a more precise way to describe it that bolsters rather than undermines the interest and value of your research. Statistical modeling of protein folding, weather patterns, hearing aid settings, etc. really has nothing in common with the large language models that are the primary focus of "AI".

AI’s impact on elections is being overblown | MIT Technology Review

The focus on AI's impact on elections is distracting us from some deeper and longer-lasting threats to democracy.

Opening remarks on “AI in the Workplace: New Crisis or Longstanding Challenge” | by Emily M. Bender | Medium

Transcript of: https://www.youtube.com/watch?v=eK0md9tQ1KY

It’s a way to make certain kinds of automation sound sophisticated, powerful, or magical and as such it’s a way to dodge accountability by making the machines sound like autonomous thinking entities rather than tools that are created and used by people and companies.

With 7 key questions to ask of an automation technology.

Game of clones: Colossal's new wolves are cute but are they dire? | MIT Technology Review

Colossal Biosciences claims it created [extinct] dire wolves, but scientists outside the company are skeptical.

IMO, whether it's the question of whether the cloned animals really are dire wolves, the VC funding, or the impressive tech, these debates risk distracting us from the underlying problems in biodiversity conservation.

See also:
https://assemblag.es/@theluddite/114302525871514316

The Anti-Labor Propaganda Masquerading as Science

Summarising several "scientific" studies where:

[they] reduce a human task into an oversimplified game that, at its core, involves producing some plausible-looking text, only to conclude that LLMs can, in fact, generate some plausible text. They have committed one of science's cardinal sins: They designed an experiment specifically to validate their preexisting belief.

The pandemic showed us that undermining the public's trust in science can cost human lives, but the harm here goes further. These so-called studies are purposefully, almost explicitly designed to reach the result that workers are dispensable.

How AI shaped global elections in 2024 - Rest of World

Rest of World’s global tracker found that AI was used more for memes and campaign content than mass deception in the 2024 elections...
...Global elections saw artificial intelligence used for playful memes and serious misinformation, revealing a complex landscape where tech’s impact is nuanced, not catastrophic.