In Taiwan’s AI-fueled chip boom, brokers control everything from paychecks to dorm beds, leaving workers feeling trapped and exploited.
Startups are deploying employee tracking tools in low-regulation markets with help from Silicon Valley venture capital.
Technologies that promise to track, manage, and supervise workers, increasingly using artificial intelligence, are getting entrenched in the developing world, according to a new report by Coworker.org, a labor rights nonprofit based in New York.
Audits of more than 150 startups and regional companies based in Kenya, Nigeria, Colombia, Brazil, Mexico, and India showed workplace surveillance is expanding in scale and sophistication, the researchers said.
"Homo economicus" is the hypothetical "perfectly economically rational" person that economic models often assume us all to be, despite the fact that we are demonstrably not perfectly rational.
...we do live in the shadow of such modern demons: we call them "limited liability corporations." These are (potentially) immortal colony organisms that treat us fleshy humans as mere inconvenient gut flora. These artificial persons are not merely recognized as people under the law – they are given more rights than mere flesh-and-blood people. They seek to expand without limit, absorbing one another, covering the globe, acting in ways that are "economically rational" and utterly wicked. As Charlie Stross says, a corporation is a "slow AI."
Ted Chiang has proposed that when a corporate executive like Elon Musk claims to be terrified of AIs taking over, they're really talking about the repressed constant terror they feel because they are nominally in charge of a powerful artificial life-form (a corporation) that acts as though it has a mind of its own, in ways that are devastating to human beings.
...relied heavily on teams of human workers—primarily located overseas—to manually process transactions in secret, mimicking what users believed was being done by automation.
Example of "AI" hype when it's neither artificial nor intelligent.
misled investors by exploiting the promise and allure of AI technology to build a false narrative about innovation that never existed. This type of deception not only victimizes innocent investors...
Note that the crime is misleading investors, not anyone else, which is very telling. It's only a crime when you rip off other rich people.
Discussed here:
https://old.reddit.com/r/nottheonion/comments/1jygobw/ceo_of_ai_shopping_app_faces_40_years_for_using/
A research-backed AI scenario forecast.
We predict that the impact of superhuman AI over the next decade will be enormous, exceeding that of the Industrial Revolution.
We wrote a scenario that represents our best guess about what that might look like. It’s informed by trend extrapolations, wargames, expert feedback, experience at OpenAI, and previous forecasting successes.
So much hype in this one, coming from people who (likely) have very narrow domain knowledge yet make hype-y predictions for all of humanity. For example, a footnote reduces human brains to an equivalent "compute" of X FLOPS, and measures "superintelligence" in multiples of human brains...
As OpenAI and Meta introduce LLM-driven searchbots, I'd like to once again remind people that neither LLMs nor chatbots are good technology for information access.
Linked talk - ChatGP-Why: When, if ever, is synthetic text safe, appropriate, and desirable?
https://www.youtube.com/watch?v=qpE40jwMilU
...MEPs, led by Birgit Sippel from the S&D and Markéta Gregorová from the Greens, single out Facebook owner Meta for its not-so-open "open source AI."
"Meta prohibits the use of its Llama models for the purpose of training other AI systems, and forces anyone who develops a highly successful AI system based on Llama to negotiate a special licence with them," reads the letter.
Meta also doesn't share the code for how it trains its models, but very publicly champions its "open" approach.
"Their AI is only free and open until a business wants to compete with them," the MEPs write. "We urge the Commission and the AI office to clarify that such systems cannot be considered Open Source for the purposes of the AI Act."
More details about Meta's Llama 4 release: https://www.404media.co/facebook-pushes-its-llama-4-ai-model-to-the-right-wants-to-present-both-sides/
...computer vision papers often refer to human beings as “objects,” a convention that both obfuscates how common surveillance of humans is in the field, and objectifies humans by definition.
“The studies presented in this paper ultimately reveal that the field of computer vision is not merely a neutral pursuit of knowledge; it is a foundational layer for a paradigm of surveillance...”
By Emily Bender. tl;dr: Every time you describe your work that involves statistical modeling as "AI", you are lending your power and credibility to Musk and DOGE.
Calling it "AI" is fast becoming yet another kind of anticipatory obedience.
If what you are doing is sensible and grounded science, there is undoubtedly a more precise way to describe it that bolsters rather than undermines the interest and value of your research. Statistical modeling of protein folding, weather patterns, hearing aid settings, etc. really has nothing in common with the large language models that are the primary focus of "AI".
Transcript of: https://www.youtube.com/watch?v=eK0md9tQ1KY
It’s a way to make certain kinds of automation sound sophisticated, powerful, or magical, and as such it’s a way to dodge accountability by making the machines sound like autonomous thinking entities rather than tools that are created and used by people and companies.
With 7 key questions to ask of an automation technology.
Summarising several "scientific" studies where:
[they] reduce a human task into an oversimplified game that, at its core, involves producing some plausible-looking text, only to conclude that LLMs can, in fact, generate some plausible text. They have committed one of science's cardinal sins: They designed an experiment specifically to validate their preexisting belief.
The pandemic showed us that undermining the public's trust in science can cost human lives, but the harm here goes further. These so-called studies are purposefully, almost explicitly designed to reach the result that workers are dispensable.
Rest of World’s global tracker found that AI was used more for memes and campaign content than mass deception in the 2024 elections...
...Global elections saw artificial intelligence used for playful memes and serious misinformation, revealing a complex landscape where tech’s impact is nuanced, not catastrophic.
The focus on AI's impact on elections is distracting us from some deeper and longer-lasting threats to democracy.
This gambit is called "predatory inclusion." Think of Spike Lee shilling cryptocurrency scams as a way to "build Black wealth" or Mary Kay promising to "empower women" by embroiling them in a bank-account-draining, multi-level marketing cult. Having your personal, intimate secrets sold, leaked, published or otherwise exploited is worse for your mental health than not getting therapy in the first place, in the same way that having your money stolen by a Bitcoin grifter or Mary Kay is worse than not being able to access investment opportunities in the first place.
But it's not just people struggling with their mental health who shouldn't be sharing sensitive data with chatbots – it's everyone. All those business applications that AI companies are pushing, the kind where you entrust an AI with your firm's most commercially sensitive data? Are you crazy? These companies will not only leak that data, they'll sell it to your competition. Hell, Microsoft already does this with Office365 analytics:
https://pluralistic.net/2021/02/24/gwb-rumsfeld-monsters/#bossware
These companies lie all the time about everything, but the thing they lie most about is how they handle sensitive data. It's wild that anyone has to be reminded of this. Letting AI companies handle your sensitive data is like turning arsonists loose in your library with a can of gasoline, a book of matches, and a pinky-promise that this time, they won't set anything on fire.
@cstross@wandering.shop Do we need a different word from #enshittification to describe forcible insertion of unwanted #AI features into products or services? As I understand it, @pluralistic’s term describes a quite specific multi-step process, not simply “making things worse”.
I propose “encruftening” for “adding unrequested and undesirable features to an existing product”, which covers “AI”, blockchain & whatever other horrors they have in store for us. Other suggestions?
...The present findings suggest that the current state of AI language models demonstrate higher creative potential than human respondents.
https://doi.org/10.1038/s41598-024-53303-w
Article about this paper:
https://mobinetai.com/ai-more-creative-than-99-people/
Data is powerful because it’s universal. The cost is context. Policymakers want to make decisions based on clear data, but important factors are lost when we rely solely on data.
Nguyen, C. Thi. “The Limits of Data.” Issues in Science and Technology 40, no. 2 (Winter 2024): 94–101. https://doi.org/10.58875/LUXD6515
Nature article about the Replika AI companion:
https://www.nature.com/articles/s44184-023-00047-6