Some unpolished thoughts from a recent conversation with other software engineers.
Responding to claims in an interview with Meta’s Yann LeCun about LLMs being on the outs, we discussed current trends:
anon1: is this what a bubble feels like?
me:
- I can’t help but feel that many who have LLM-based technology foisted upon them are not yet well-suited to judge their output
- I also can’t help but see the economics of AI hype as similar to crypto grift or the 2008 housing bubble, in the way that all the money is interconnected and could go boom with just a little tip
We wondered about “language” and “low dimensionality”:
anon2:
It has come as a surprise to nearly everyone—even experts in linguistics—that human language fits this requirement for such a discrete, low-dimensional space
This seems badly wrong. word2vec represents individual words as vectors in a high-dimensional space (a few hundred dimensions is a common choice), so it stands to reason that anything capable of producing reasonable text must be very high-dimensional.
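[ed: for a concrete sense of the dimensionality, here’s a minimal sketch using the gensim library. The toy corpus and the 300-dimension setting are illustrative choices of mine, not anything from the conversation or the article.]

```python
# Toy illustration: train a tiny word2vec model with gensim and
# check the shape of the vector it assigns to a single word.
from gensim.models import Word2Vec

# Hypothetical two-sentence corpus, just enough to build a vocabulary.
sentences = [
    ["language", "is", "high", "dimensional"],
    ["words", "live", "in", "vector", "space"],
]

# vector_size=300 mirrors a common choice for published embeddings.
model = Word2Vec(sentences, vector_size=300, min_count=1)

print(model.wv["language"].shape)  # (300,) -- one 300-dim vector per word
```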
And we pondered important questions about values and ethics in technology:
anon1: I’ve been slowly reading a collection of essays by anthropologist Diana Forsythe called “Studying Those Who Study Us,” about her fieldwork in the research labs designing “expert systems” AI back in the 80s-90s, which feels relevant to that claim. The researchers believed that if you just captured the rules of an expert-driven activity, like medical diagnosis, you could reproduce it with a logical system. Her research critiques this strongly by pointing out all of the unexamined cultural context, assumptions, background knowledge, and expectations which are just as much a part of an expert’s work as the textbook knowledge the experts themselves believed they used to make their decisions. One of her go-to examples was a diagnosis system suggesting amniocentesis, a prenatal diagnostic procedure, for a biological male, because assumptions about biological sex were considered obvious enough (in the 80s at least) to go unmentioned.
The larger point is, we may not actually know what information is required to take correct action, and the act of attempting to compile that information (even the successful reproduction of the action by a non-expert using that compiled information) does not prove we’ve actually captured the whole picture, because the author and the reader share a massive collection of conscious and unconscious context that may be difficult or impossible to efficiently and accurately identify within the task itself. LeCun’s assertion seems to agree with her research. Although the mechanics of AI have changed dramatically, the nature of its failures seems to remain in the same category.
And kudos to the article, and LeCun, for being careful in delineating the use of the word ‘reason’ – honestly it’s much better reporting than I was expecting. It mainly frustrates me how many writers just casually anthropomorphize LLMs without any basis, seemingly unaware that the question of whether an LLM is reasoning (and what reasoning even is) is the entire philosophical debate here.
Having finally read the whole article, I think he’s correct about LLMs and the current situation, but I feel there are some blind spots in the predictions (like I have any standing to say so, but anyways). I find it particularly interesting how close the writer comes to identifying the importance of physical embodiment to the mind, intelligence, and personhood… then seems to miss the idea. For a moment I was sure the conclusion would be that our AIs will need sensory bodies before they can function as “System 2” minds (which I personally find at least plausible).
Additionally, I find the end goal kind of horrific, even if it were plausible. Is it a good goal for humanity to function as slavedrivers for nearly personalized automatons which we constrain and “take down” when they “misbehave”? Regardless of the actual ethics with respect to the computers themselves, how does that morally affect us?
I worry that the folks most excited about this future are unconcerned with that question, or are even trying to somehow sanitize and normalize behavior that would normally be considered inhumane if applied to human labor, precisely because they would be comfortable as slavedrivers if it were acceptable. I know that’s a strong accusation, but it’s concerning when they use the same language so uncritically.
me: Do you have a link to (details about) this collection? I’d like to add it to my backlog
anon1: I could only find it in hardcopy. It was just a random recommendation from the algo on Bluesky and I just pulled the trigger. So far it’s been pretty interesting. https://www.sup.org/books/anthropology/studying-those-who-study-us
[ed: in response to a point about how agents should not require boundless context to accomplish a task given a technical manual, because the manual is “more or less 1 to 1 with reality.”]
me: I think we all should suspect that the manual is not 1–1 with reality. Further, one concern is that designing systems this way changes reality to fit the manual, rather than the manual to fit reality. Which of course raises the most important question: who writes the manual? What values do they imbue explicitly, implicitly, intentionally, unintentionally? If you don’t think this matters, look at automated policing and the populations it affects, or the many other examples cited by Ruha Benjamin in Race After Technology.
anon1: You might find this lecture series interesting, too, related to “prescriptive technology” – https://www.cbc.ca/radio/ideas/the-1989-cbc-massey-lectures-the-real-world-of-technology-1.2946845
Ursula Franklin’s analysis concerned how technology constrains the choices of workers, but it also applies to nonhuman agents
Between those two works I’ve been appreciating feminist analysis of technology a lot lately
(I don’t think you need to be excited about feminism to appreciate them, dunno your values, but worth noting it’s a linking factor in both)
me: I was surprised that the article pointed out the question:
Without a lived experience of their own, they will need to be imbued with human goals and objectives—but which ones, and whose variants?
But not surprised (still frustrated) that it stopped short of actually engaging the question.
anon1: There’s a wealth of questions they did not engage with lol (though to be fair to the author, it’s a short piece and evidently part of a series I haven’t read)
me: Fair!