Daily Shaarli
April 5, 2025
This gambit is called "predatory inclusion." Think of Spike Lee shilling cryptocurrency scams as a way to "build Black wealth" or Mary Kay promising to "empower women" by embroiling them in a bank-account-draining, multi-level marketing cult. Having your personal, intimate secrets sold, leaked, published or otherwise exploited is worse for your mental health than not getting therapy in the first place, in the same way that having your money stolen by a Bitcoin grifter or Mary Kay is worse than not being able to access investment opportunities in the first place.
But it's not just people struggling with their mental health who shouldn't be sharing sensitive data with chatbots – it's everyone. All those business applications that AI companies are pushing, the kind where you entrust an AI with your firm's most commercially sensitive data? Are you crazy? These companies will not only leak that data, they'll sell it to your competition. Hell, Microsoft already does this with Office365 analytics:
https://pluralistic.net/2021/02/24/gwb-rumsfeld-monsters/#bossware

These companies lie all the time about everything, but the thing they lie most about is how they handle sensitive data. It's wild that anyone has to be reminded of this. Letting AI companies handle your sensitive data is like turning arsonists loose in your library with a can of gasoline, a book of matches, and a pinky-promise that this time, they won't set anything on fire.
@cstross@wandering.shop Do we need a different word from #enshittification to describe forcible insertion of unwanted #AI features into products or services? As I understand it, @pluralistic’s term describes a quite specific multi-step process, not simply “making things worse”.
I propose “encruftening” for “adding unrequested and undesirable features to an existing product”, which covers “AI”, blockchain & whatever other horrors they have in store for us. Other suggestions?
Data is powerful because it’s universal. The cost is context. Policymakers want to make decisions based on clear data, but important factors are lost when we rely solely on data.
Nguyen, C. Thi. “The Limits of Data.” Issues in Science and Technology 40, no. 2 (Winter 2024): 94–101. https://doi.org/10.58875/LUXD6515
...The present findings suggest that the current state of AI language models demonstrates higher creative potential than human respondents.
https://doi.org/10.1038/s41598-024-53303-w
Article about this paper:
https://mobinetai.com/ai-more-creative-than-99-people/
Using quantitative metrics to assess researchers is often seen as a poor choice compared with qualitative assessment. In this Perspective, the authors argue in favor of rigorous, field-adjusted, centralized quantitative metrics in a bid to improve research practices as a low-cost public good.
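The "field-adjusted" part is the load-bearing idea: raw citation counts can't be compared across disciplines with different citation cultures. As a minimal sketch of one common normalization (citations divided by the field mean, in the spirit of mean-normalized citation scores) — the toy data and function names below are invented for illustration and are not taken from the Perspective itself:

```python
# Hypothetical sketch of a field-adjusted citation metric. The paper data
# here is made up; the Perspective's actual metrics may be defined differently.
from collections import defaultdict
from statistics import mean

# Each paper: (field, citation count). Toy data, not from the paper.
papers = [
    ("mathematics", 4), ("mathematics", 7), ("mathematics", 2),
    ("cell biology", 60), ("cell biology", 95), ("cell biology", 40),
]

# Mean citations per field: raw counts are incomparable across fields
# because citation cultures differ, which is why field adjustment exists.
by_field = defaultdict(list)
for field, cites in papers:
    by_field[field].append(cites)
field_mean = {field: mean(counts) for field, counts in by_field.items()}

def field_normalized_score(field: str, citations: int) -> float:
    """Citations divided by the field's mean; 1.0 means field-average impact."""
    return citations / field_mean[field]

# A 7-citation math paper and a 95-citation cell-biology paper both land
# above 1.0 within their own fields, so they become roughly comparable.
print(field_normalized_score("mathematics", 7))    # ~1.62
print(field_normalized_score("cell biology", 95))  # ~1.46
```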
Nature article about Replika AI companion:
https://www.nature.com/articles/s44184-023-00047-6