Transcript of: https://www.youtube.com/watch?v=eK0md9tQ1KY
It’s a way to make certain kinds of automation sound sophisticated, powerful, or magical, and as such it’s a way to dodge accountability by making the machines sound like autonomous thinking entities rather than tools that are created and used by people and companies.
With 7 key questions to ask of any automation technology.
Summarising several "scientific" studies where:
[they] reduce a human task into an oversimplified game that, at its core, involves producing some plausible-looking text, only to conclude that LLMs can, in fact, generate some plausible text. They have committed one of science's cardinal sins: They designed an experiment specifically to validate their preexisting belief.
The pandemic showed us that undermining the public's trust in science can cost human lives, but the harm here goes further. These so-called studies are purposefully, almost explicitly designed to reach the result that workers are dispensable.
Colossal Biosciences claims it created [extinct] dire wolves, but scientists outside the company are skeptical.
IMO, whether it's the debate over whether the cloned animals really are dire wolves, the VC funding, or the impressive tech, all of these risk distracting us from the underlying problems in biodiversity conservation.
See also:
https://assemblag.es/@theluddite/114302525871514316
Rest of World’s global tracker found that AI was used more for memes and campaign content than mass deception in the 2024 elections...
...Global elections saw artificial intelligence used for playful memes and serious misinformation, revealing a complex landscape where tech’s impact is nuanced, not catastrophic.
The focus on AI's impact on elections is distracting us from some deeper and longer-lasting threats to democracy.
I don’t know who needs to hear this, but it seems to come up in my life weekly:
- NASA images are not in the public domain;
- They are generally free to use for most non-commercial, educational, & informational purposes without permission, but acknowledgement of NASA is needed (which true PD would not require);
- For commercial use, no endorsement by NASA can be implied;
- Images featuring people, e.g. astronauts, need explicit permission for commercial use.
This gambit is called "predatory inclusion." Think of Spike Lee shilling cryptocurrency scams as a way to "build Black wealth" or Mary Kay promising to "empower women" by embroiling them in a bank-account-draining, multi-level marketing cult. Having your personal, intimate secrets sold, leaked, published or otherwise exploited is worse for your mental health than not getting therapy in the first place, in the same way that having your money stolen by a Bitcoin grifter or Mary Kay is worse than not being able to access investment opportunities in the first place.
But it's not just people struggling with their mental health who shouldn't be sharing sensitive data with chatbots – it's everyone. All those business applications that AI companies are pushing, the kind where you entrust an AI with your firm's most commercially sensitive data? Are you crazy? These companies will not only leak that data, they'll sell it to your competition. Hell, Microsoft already does this with Office365 analytics:
https://pluralistic.net/2021/02/24/gwb-rumsfeld-monsters/#bossware
These companies lie all the time about everything, but the thing they lie most about is how they handle sensitive data. It's wild that anyone has to be reminded of this. Letting AI companies handle your sensitive data is like turning arsonists loose in your library with a can of gasoline, a book of matches, and a pinky-promise that this time, they won't set anything on fire.
@cstross@wandering.shop Do we need a different word from #enshittification to describe forcible insertion of unwanted #AI features into products or services? As I understand it, @pluralistic’s term describes a quite specific multi-step process, not simply “making things worse”.
I propose “encruftening” for “adding unrequested and undesirable features to an existing product”, which covers “AI”, blockchain & whatever other horrors they have in store for us. Other suggestions?
...The present findings suggest that the current state of AI language models demonstrate higher creative potential than human respondents.
https://doi.org/10.1038/s41598-024-53303-w
Article about this paper:
https://mobinetai.com/ai-more-creative-than-99-people/
Using quantitative metrics to assess researchers is often seen as a poor choice compared with using qualitative assessments. In this Perspective, the authors argue in favor of using rigorous, field-adjusted, centralized, quantitative metrics in a bid to help improve research practices as a low-cost public good.
Data is powerful because it’s universal. The cost is context. Policymakers want to make decisions based on clear data, but important factors are lost when we rely solely on data.
Nguyen, C. Thi. “The Limits of Data.” Issues in Science and Technology 40, no. 2 (Winter 2024): 94–101. https://doi.org/10.58875/LUXD6515
Nature article about Replika AI companion:
https://www.nature.com/articles/s44184-023-00047-6
Review of initiatives and uptake of open research practices in psychology.
https://doi.org/10.1098/rsos.241726
Welcome to Shaarli!
Shaarli allows you to bookmark your favorite pages, and share them with others or store them privately.
You can add a description to your bookmarks, such as this one, and tag them.
Create a new shaare by clicking the +Shaare button, or using any of the recommended tools (browser extension, mobile app, bookmarklet, REST API, etc.).
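Since the REST API comes up as one of those tools: below is a minimal sketch of creating a shaare programmatically. It assumes a hypothetical instance URL and API secret, JWT auth as described in the Shaarli API documentation (HS512 with an "iat" claim), and the requests and PyJWT packages; treat it as an illustration rather than a verified client.

```python
# A minimal sketch of creating a shaare via Shaarli's REST API.
# Assumptions: a Shaarli instance at BASE_URL with the v1 REST API
# enabled, and API_SECRET matching the secret configured on that
# instance. Requires the third-party `requests` and `PyJWT` packages.
import time

import jwt  # PyJWT
import requests

BASE_URL = "https://links.example.com"  # placeholder instance URL
API_SECRET = "change-me"                # placeholder API secret

# Shaarli's API expects a short-lived JWT signed with HS512 whose
# payload carries an "iat" (issued-at) claim.
token = jwt.encode({"iat": int(time.time())}, API_SECRET, algorithm="HS512")
headers = {"Authorization": f"Bearer {token}"}

# POST /api/v1/links creates a new shaare with a description and tags.
resp = requests.post(
    f"{BASE_URL}/api/v1/links",
    headers=headers,
    json={
        "url": "https://shaarli.readthedocs.io/",
        "title": "Shaarli documentation",
        "description": "Reference docs, including the REST API.",
        "tags": ["shaarli", "help"],
        "private": False,
    },
)
resp.raise_for_status()
print("Created shaare with id:", resp.json()["id"])
```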
You can easily retrieve your links, even with thousands of them, using the internal search engine, or search through tags (e.g. this Shaare is tagged with shaarli and help).
Hashtags such as #shaarli #help are also supported.
You can also filter the available RSS feed and picture wall by tag or plaintext search.
We hope that you will enjoy using Shaarli, maintained with ❤️ by the community!
Feel free to open an issue if you have a suggestion or run into a problem.