...MEPs, led by Birgit Sippel from the S&D and Markéta Gregorová from the Greens, single out Facebook owner Meta for its not-so-open "open source AI."
"Meta prohibits the use of its Llama models for the purpose of training other AI systems, and forces anyone who develops a highly successful AI system based on Llama to negotiate a special licence with them," reads the letter.
Meta also doesn't share the code for how it trains its models, but very publicly champions its "open" approach.
"Their AI is only free and open until a business wants to compete with them," the MEPs write. "We urge the Commission and the AI office to clarify that such systems cannot be considered Open Source for the purposes of the AI Act."
More details about Meta's Llama 4 release: https://www.404media.co/facebook-pushes-its-llama-4-ai-model-to-the-right-wants-to-present-both-sides/
...computer vision papers often refer to human beings as “objects,” a convention that both obfuscates how common surveillance of humans is in the field and objectifies humans by definition.
“The studies presented in this paper ultimately reveal that the field of computer vision is not merely a neutral pursuit of knowledge; it is a foundational layer for a paradigm of surveillance...”
By Emily

tl;dr: Every time you describe your work that involves statistical modeling as "AI," you are lending your power and credibility to Musk and DOGE.
Calling it "AI" is fast becoming yet another kind of anticipatory obedience.
If what you are doing is sensible and grounded science, there is undoubtedly a more precise way to describe it that bolsters rather than undermines the interest and value of your research. Statistical modeling of protein folding, weather patterns, hearing aid settings, etc. really has nothing in common with the large language models that are the primary focus of "AI".
Transcript of: https://www.youtube.com/watch?v=eK0md9tQ1KY
It’s a way to make certain kinds of automation sound sophisticated, powerful, or magical, and as such it’s a way to dodge accountability by making the machines sound like autonomous thinking entities rather than tools that are created and used by people and companies.
With 7 key questions to ask of an automation technology.
@cstross@wandering.shop Do we need a different word from #enshittification to describe forcible insertion of unwanted #AI features into products or services? As I understand it, @pluralistic’s term describes a quite specific multi-step process, not simply “making things worse.”
I propose “encruftening” for “adding unrequested and undesirable features to an existing product,” which covers "AI", blockchain & whatever other horrors they have in store for us. Other suggestions?