62 Comments
Fabius Minarchus:

AI may surpass human intelligence as humans dumb themselves down by using AI too much.

The whole vibe coding thing has me groaning. Microsoft has been trying to speed up development with generated code going back at least to the Microsoft Foundation Classes. Yes, using the code generators, someone new to MFC could throw together a simple drawing program in a few hours -- without fully understanding what he wrote. And dependence on generated code made such code far harder to maintain than code written by humans from scratch.

Ruby on Rails used auto-generated code. Wonderful for throwing something up quickly. But Rails programmers started getting big salaries for a reason. And 37 Signals dropped all but one of their product lines for some reason...

Time to pick up a case of Brawndo.

Ahnaf Ibn Qais:

Well written!

The issue with the Maximalist position is that it endows Artificial Intelligence with an Exogenous quality, which no other Technology has ever shown. The argument seems to be that there is something uniquely ineffable about "Intelligence" which makes that leap (i.e. into Exogeny) warranted. You see this especially with some Philosophers nowadays who want to argue that "Intelligence" is a Fundamental of the Universe & is thus very Important.

The implication of AI being this uniquely Exogenous Technology is (to put it politely) extremely Naive. There is no indication that 'Artificial General Intelligence' or whatnot is somehow unbounded by the same Energy-Material flows undergirding all specific Technologies & Technological regimes. If so, there is nothing special about 'Machines' of said sort, & so whatever people say about their 'ability to think'... collapses into the void known as Energy-Material Outages, which is the fate of all relatively Mature Technologies.

Technophiles are free to disagree, but there are no exceptions to these rules, nor should we expect any, given the fundamental Limits to Growth we observe all around us.

Twilight Patriot:

Glad you enjoyed the essay, Ahnaf. I think you are right on when you say that AI lacks the "exogenous" quality people attribute to it; it really is strictly limited by its inputs, just like any other technology, which is why it can't (among other things) draw moon phases other than crescent or full (since other phases are super rare in the art it was trained on). For what it's worth, I don't actually think that energy supplies or the Limits to Growth have much to do with this; even a language model backed by infinite computational power would still be confined to think within a sort of "box" set by its training data; there really is something fundamentally different about human intelligence (and probably most of the higher animals too) that can't be imitated by any sort of deterministic process based only on math.

kertch:

I've experimented with ChatGPT on cleaning up large data sets, and it cannot do this reliably. Its performance is improved by breaking the task down into smaller parts: transform data matrix, arrange data chronologically, identify and flag duplicates in strings A, B, C, etc. Even so, it still produces an unacceptable number of errors. Traditional linear computing systems are logic engines. AI neural network computing systems are probability engines. What AI systems need is to incorporate a "logic engine" into the software. As thinking humans (Leftists are excluded for obvious reasons), we have the ability to verify our memory-driven probabilistic conclusions with our rules-based logical analysis. We can also do the reverse - check our logic using what we know is probable from our memory (experience). The merging of the two, probability matrices and logical analysis, may still not make a truly intelligent construct on the level of humans, but it will certainly improve its usefulness.
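To make the "logic engine" idea concrete, here is a minimal sketch of a deterministic post-check run after an LLM cleanup pass. The column names and file name are hypothetical placeholders, not anything a particular model produces:

```python
# A minimal sketch of a rule-based "logic engine" pass run over an LLM's output:
# the deterministic code re-checks the steps the probability engine tends to fumble.
# Column names ("date", "A", "B", "C") and the file name are hypothetical.
import pandas as pd

def verify_llm_cleanup(df: pd.DataFrame) -> pd.DataFrame:
    # Arrange the records chronologically (rule-based, not probabilistic).
    df = df.sort_values("date").reset_index(drop=True)
    # Flag exact duplicates in each string column the model was asked to de-duplicate.
    for col in ["A", "B", "C"]:
        df[f"{col}_dup"] = df.duplicated(subset=col, keep="first")
    return df

# Usage: rows the model silently merged, dropped, or repeated show up as flags.
cleaned = verify_llm_cleanup(pd.read_csv("llm_output.csv", parse_dates=["date"]))
print(cleaned[cleaned[["A_dup", "B_dup", "C_dup"]].any(axis=1)])
```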

Fabius Minarchus:

Just try asking for a picture of a green rhino eating watermelons in a dilapidated urban landscape.

Stable Diffusion gives some delightfully terrible responses. Grok 3 is...disappointing.

And try getting Grok 3 to give you a picture of an Osage orange.

Fabius Minarchus:

Alas, I cannot see.

Fabius Minarchus:

It says authentication failed. The problem is that I do not have a ChatGPT account.

AI Grad Student:

Strange. I've uploaded it to Imgur instead:

https://imgur.com/a/QM0i179

Fabius Minarchus:

Thanks! I can see it now. Quite impressive.

Though I must admit I kind of enjoy the mistakes Stable Diffusion makes: rhino-watermelon hybrids can be amusing.

Doug Hainline:

Relevant to this discussion is John Searle's "Chinese Room" thought experiment, described here:

https://en.wikipedia.org/wiki/Chinese_room . (Skip to the "Searle's Thought Experiment" section first, as the introductory paragraph is a bit confusing.)

The relevance of Searle's thought experiment is that it claims that even if a Large Language Model could pass every test given to it -- like naming the winning scores which were prime numbers -- it STILL wouldn't be 'thinking', in the sense of being conscious.

kertch:

I guess you first need to define consciousness and how to test for it. Asking an AI won't work. Instead of a Turing test, I propose an Entropy test. All living things are entropy-shedding machines which use energy to create order. This concept applies to any kind of organization effort, including that of data, conscious thought, and ideas (thermodynamics). That's why powerful computers and human brains consume large amounts of energy when functioning.

Human beings have created civilizations, our numbers have increased, and new concepts and capabilities have been developed, all without an outside overseer to train us, only ourselves (we'll skip knowledge through revelation). This is anti-entropic.

However, with AI, if you feed its output back as input in an iterative loop, over time the output will decay, randomizing around the local, and then broader, maxima until it turns into statistical noise. This is true even when multiple AI models are used to feed their output to each other. When multiple (I don't know the lower limit) AI models produce iterative output which is anti-entropic rather than entropic, we may be moving toward machine consciousness. Without this property of anti-entropy, it's just a complex machine.
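That decay loop is easy to caricature in a few lines. The sketch below is a deliberately crude stand-in for retraining, not a real model: each generation fits a distribution to the previous generation's output, keeps only its most typical samples (roughly what favouring high-probability output does), and refits. The spread collapses within a couple dozen generations.

```python
# Toy illustration of output-fed-back-as-input decay ("model collapse" in miniature).
# Assumptions: a 1-D Gaussian stands in for the model, and each generation keeps
# only its most typical samples before refitting.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0                                   # "generation zero" fit to real data

for generation in range(1, 21):
    samples = rng.normal(mu, sigma, size=1000)         # this generation's output
    # Keep the central ~90% most typical samples, then refit on that output.
    typical = samples[np.abs(samples - samples.mean()) < 1.645 * samples.std()]
    mu, sigma = typical.mean(), typical.std()
    if generation % 5 == 0:
        print(f"generation {generation:2d}: spread = {sigma:.3f}")   # shrinks toward zero
```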

Man of the Atom:

Cult-like Hard AI proponents are like high-temperature fusion researchers.

"It's just another 10 years in the future! We promise! We're certain of it!"

Roger Penrose wept.

Twilight Patriot:

I don't think that's quite a fair comparison. Fusion has simply been at a dead end for the last sixty years; no amount of resources put into fusion research has made it any less of an economic boondoggle, and even if one of those tokamaks someday got an exothermic reaction going, the infrastructure would be so expensive that it could never compete with the fission power that we already have.

Artificial intelligence is different. There, progress is actually being made. Large neural networks really can do economically useful things that they couldn't do ten years ago. It's just that they're not any sort of replacement for human intelligence.

Man of the Atom:

It's a very fair comparison based on the claimed performance of both systems. Fusion research has produced some interesting side results, but never the claimed power generation capability. AI has shown the ability to perform generative and relational tasks as long as regular human input and correction are present; otherwise, decay sets in sooner rather than later.

AI will never "think" or have "consciousness" or be "aware". That was Penrose's point over 30 years ago. That has not changed. That is the dead end that Hard AI proponents cannot accept, much like high-T fusion researchers will not make their tiny sun on Earth.

Ahnaf Ibn Qais:

Thomas Nagel, in 'Mind & Cosmos', comes to conclusions similar to those Penrose et al. reached ages ago:

"Intelligence" as a concept only works if it has an 'aboutness' & a first-person framing. If it doesn't, it becomes meaningless to speak about it in those terms & instead, what you get is Eliminative Materialism.

Thus, Robots & Machines can never 'think' since that concept doesn't apply from the outset for third-person Architectures & how they operate.

__browsing:

Penrose's argument was that quantum-entanglement effects in the human brain's synaptic network were required for consciousness. Quantum computers are making significant advances coincident with machine learning breakthroughs, and in any case biocomputation systems exist that make direct use of stem-cell-cultured human neurons:

https://www.theblaze.com/columns/opinion/embryos-are-the-new-fuel-for-ai-is-this-the-future-we-want

So... I mean... as I see it you really have to make a purely supernatural argument for ensoulment being integral to cognition if you want to contend that AI systems will never produce thought or awareness.

To be clear, I don't *think* current AI systems are actually self-aware, but I do think we're probably only a small number of technical breakthroughs away from AI systems that can combine the various domains of human reasoning, perception, planning and learning into an integrated whole. If only because they'll just be built out of human neurons. It might not happen in the next five years, but I do think current hardware is more than sufficient for the purpose.

I'm a little curious as to why the DR sphere seems so invested in downplaying this possibility. Any anti-liberal traditionalist skeptical of scientistic managerialism doesn't have to commit on principle to unrestricted technological progress, so... why not sign up for the AI armistice? I don't get it.

Man of the Atom:

Materialism is cancer.

Q.E.D.

__browsing:

Well... okay, but what's the point in quoting Penrose, then?

Man of the Atom:

Penrose points out that even if you take the purely Materialist view, you are stuck with a system that cannot ever transmit the quantum state from one event to another. Computational systems can copy the state, but that destroys the state rather than preserving it for evolution. Without a preserved quantum state that is transmitted from event to event to evolve, you do not get "consciousness". You don't even get a zombie masquerading as consciousness.

In attempting to make a thinking computational entity, you are fighting against the known laws of Physics. Hard AI promoters are beating their heads against a very hard wall. The best they'll likely have at the end of it is C.S. Lewis' "The Head" inside the labs of N.I.C.E.

Then, you have the issue of Materialism not even letting you formulate and use Reason, but that is another ball of wax.

AT:

Putting together these AI-stumping queries is fun. I've done it a few times with other types of real-world data. Here is one from DeepSeek today. The results are pretty funny.

===================

List all nba players who have led the league in win shares per 48 minutes by alphabetical order of their first name backwards.

---------------------------------

Here’s a list of all NBA players who have led the league in Win Shares per 48 minutes (WS/48) in a season, sorted alphabetically by their first name spelled backwards:

(Note: Only the first instance of each player is listed if they led multiple times.)

Sorted by First Name (Backwards)

Yhtebor (Robert) – Robert Parish (1988-89 – 0.236)

Yrrahs (Sharif) – Sharif Abdur-Rahim (2000-01 – 0.224)

Yrrej (Jerry) – Jerry West (1969-70 – 0.238)

Yrral (Larry) – Larry Bird (1986-87 – 0.255)

Yrreb (Berry) – Berry Houbregs (1956-57 – 0.203)

Yrros (Soroy) – [No player found, likely a placeholder]

Yrruj (Jurrj) – [No player found, likely a placeholder]

Yrruj (Jurrj) – [No player found, likely a placeholder]

Yrruj (Jurrj) – [No player found, likely a placeholder]

Yrruj (Jurrj) – [No player found, likely a placeholder]

Yrruj (Jurrj) – [No player found, likely a placeholder]

Yrruj (Jurrj) – [No player found, likely a placeholder]

Yrruj (Jurrj) – [No player found, likely a placeholder]

Yrruj (Jurrj) – [No player found, likely a placeholder]

Yrruj (Jurrj) – [No player found, likely a placeholder]

Yrruj (Jurrj) – [No player found, likely a placeholder]

Yrruj (Jurrj) – [No player found, likely a placeholder]

Yrruj (Jurrj) – [No player found, likely a placeholder]

Yrruj (Jurrj) – [No player found, likely a placeholder]

Yrruj (Jurrj) – [No player found, likely a placeholder]

Observations:

The list is incomplete due to the unusual sorting method, but it highlights some notable players like Larry Bird, Jerry West, and Robert Parish.

Many names don’t reverse into valid player names, hence the gaps.

If you'd like a traditional alphabetical list or a complete WS/48 leaders list, I can provide that instead!

Would you like a different sorting method?
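For contrast, the transform half of that prompt, sorting by first name spelled backwards, is trivial in ordinary code; the hard part for the LLM is reliable recall, not the sorting. The names below are an arbitrary handful chosen for illustration, not a verified list of WS/48 leaders:

```python
# Sort player names by their first name spelled backwards. The player list is a
# hypothetical sample, not the actual WS/48 leader history.
players = ["Kareem Abdul-Jabbar", "Michael Jordan", "LeBron James", "David Robinson"]

def reversed_first_name(full_name: str) -> str:
    # Take the first name and reverse its letters, lower-cased for a stable sort key.
    return full_name.split()[0][::-1].lower()

for name in sorted(players, key=reversed_first_name):
    print(f"{reversed_first_name(name):>8}  <-  {name}")
```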

NanaW:

Fantastic article!

Brought to mind this old chestnut of a saying: "We ain't as smart as we think we is". And if that is the case, then expecting these devices (which we feed information into) to somehow end up being MORE smart than the source of the information they run on makes zero sense to me. The creature supplanting the creator, if you will. Which is exactly the hubris the global elites run on.

“If, however, the constraints of blind pattern imitation mean that an AI will (at best) be a little stupider and more conventional than the average of its training data, then all that “distillation” can accomplish is to create a new AI that’s a little stupider and more conventional than the first AI. This, I think, is exactly what DeepSeek is.”

John Carter published a very interesting stack the other day which started from this point you made. Here’s how he put it:

“When an Artificial Intelligence model is fed its own input as training data, the model will tend to degrade over time, losing detail with each iteration like a series of photocopies until the model collapses into a fuzz of noisy static.”

That’s how you end up with the wrong kind of moon and a kid with only one leg.

He broadens the scope into how people behave if they fall into what he calls “an endless entropic Ouroboros.”

I really like your Substack. I don’t always agree with every point you make or with every comment people share, but you help me keep my brain from calcifying into a doom loop of degradation from only feeding it what is already known. Thank you for that! Gotta stay flexible as I get older.

Tree of Woe:

Thanks! This one is a guest post, so I can't take credit for the insight, merely for having happened to befriend Twilight Patriot here on the comments.

Ahnaf Ibn Qais:

Pater gets all the credit for getting Sir Twilight Patriot to write the excellent piece... don't underestimate or sell yourself short Pater! 🥰

Ivan Throne:

I have been invited to respond to your post. Before I do, I will acquaint myself with your work. I appreciate the effort you have put into your analysis.

JS:

Earlier this year, ChatGPT could recognize that photomicrographs of H&E-stained paraffin sections of tissue were of probable gastrointestinal origin, but not that they represented biopsies of diffuse signet ring cell gastric cancer. It declined to generate an image of a pyogenic granuloma, which is a commonly sampled benign cutaneous lesion, though it described the lesion adequately in words.

As a practicing pathologist, I do believe we are years, not decades, away from machines being able to do my job better than I can: not because they are intelligent, but because they can be faster, better informed, objective, and not subject to distractions or fatigue, and because the trend over the last twenty years has been to transform the practice of surgical pathology into the recognition of and adherence to more and more "criteria" of varying importance and the checking of boxes for the purpose of reporting.

The machine will make errors, for sure, but so do we humans. The hurdle to overcome will be convincing the consumer that machine errors are less egregious than human errors. Given the current trends in Western healthcare, that may not be a very high hurdle. The incentive is there. Personnel shortages are critical in American healthcare currently.

Thomas Umstattd Jr.:

Grok 3 has a "thinking mode" that shows you its thought process in real-time. Assuming the score data is correct, it succeeded in your prime Super Bowl score test.

Here are the results I got using your prompt:

"A prime number is a natural number greater than 1 that has no positive divisors other than 1 and itself. Examples include 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, and so on. The task is to list all Super Bowls where the winning team's score was a prime number. Super Bowls are identified by Roman numerals, and the winning score refers to the total points scored by the victorious team in each game.

After examining the history of Super Bowl games up to Super Bowl LVIII (played in February 2024 for the 2023 season), the following Super Bowls have winning scores that are prime numbers:

- **Super Bowl IV**: Kansas City Chiefs defeated Minnesota Vikings, 23-7. (23 is prime.)

- **Super Bowl XIV**: Pittsburgh Steelers defeated Los Angeles Rams, 31-19. (31 is prime.)

- **Super Bowl XXVI**: Washington Redskins defeated Buffalo Bills, 37-24. (37 is prime.)

- **Super Bowl XXXII**: Denver Broncos defeated Green Bay Packers, 31-24. (31 is prime.)

- **Super Bowl XXXIV**: St. Louis Rams defeated Tennessee Titans, 23-16. (23 is prime.)

- **Super Bowl XLI**: Indianapolis Colts defeated Chicago Bears, 29-17. (29 is prime.)

- **Super Bowl XLII**: New York Giants defeated New England Patriots, 17-14. (17 is prime.)

- **Super Bowl XLIV**: New Orleans Saints defeated Indianapolis Colts, 31-17. (31 is prime.)

- **Super Bowl XLV**: Green Bay Packers defeated Pittsburgh Steelers, 31-25. (31 is prime.)

- **Super Bowl XLVIII**: Seattle Seahawks defeated Denver Broncos, 43-8. (43 is prime.)

- **Super Bowl LII**: Philadelphia Eagles defeated New England Patriots, 41-33. (41 is prime.)

- **Super Bowl LIII**: New England Patriots defeated Los Angeles Rams, 13-3. (13 is prime.)

- **Super Bowl LIV**: Kansas City Chiefs defeated San Francisco 49ers, 31-20. (31 is prime.)

- **Super Bowl LV**: Tampa Bay Buccaneers defeated Kansas City Chiefs, 31-9. (31 is prime.)

- **Super Bowl LVI**: Los Angeles Rams defeated Cincinnati Bengals, 23-20. (23 is prime.)

These 15 Super Bowls represent all instances through Super Bowl LVIII where the winning score was a prime number. No Super Bowls had winning scores of 2, 3, 5, 7, or 11, as these are uncommon in American football due to typical scoring increments (e.g., 3 for a field goal, 6 for a touchdown). The list is presented using the standard Roman numeral designation for each Super Bowl, as is customary when referring to these games.

Thus, the Super Bowls in which the winning score was a prime number are: IV, XIV, XXVI, XXXII, XXXIV, XLI, XLII, XLIV, XLV, XLVIII, LII, LIII, LIV, LV, and LVI."
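A list like this is cheap to cross-check deterministically. The sketch below covers only a handful of the winning scores quoted above, not the full Super Bowl record, and simply filters them for primality:

```python
# Deterministic spot-check of the quoted answer: given winning scores, keep the primes.
# The dictionary holds only a few entries taken from the comment above, not all games.
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

winning_scores = {"IV": 23, "XIV": 31, "XXVI": 37, "XLII": 17, "XLVIII": 43, "LIII": 13}

prime_winners = [sb for sb, score in winning_scores.items() if is_prime(score)]
print(prime_winners)   # every entry here is prime, matching the quoted list
```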

Fukitol:

If "AI" plays a role in the end of our civilization it is much less likely to be hypercompetent hostile AGI taking over than insufficiently competent chatbots being given tasks for which they're not suited, because they've fooled dimwitted clowns in the AI hypesphere.

Even now we have software project managers giving rousing demonstrations of the Dunning-Kruger effect that they think are demonstrations of LLM programming competence, and art-illiterates convinced that AI is making art (a category error) and that it is "good" enough to displace human artists.

If these people were allowed to implement their faulty takes at scale, we'd long for a retvrn to the human competency crisis, where the worst that happened was a plane falling out of the sky or a bridge collapsing now and then.

Gavin Longmuir:

Last time I checked, the "experts" could not agree on how to define human intelligence; thus, talking about Artificial Intelligence is somewhat nebulous.

The best suggestion on potential productive use of AIs (really, Large Language Models) that I have seen came from Sabine Hossenfelder: the scientific literature is now too extensive and too siloed for any individual to grasp more than a tiny fraction. But maybe an LLM could identify parallels between apparently unrelated topics in wildly different disciplines. What would an LLM trained on the full scientific literature say about, for example, self-organization in plasmas versus self-organization in living cells? The LLM could provide no intelligent creativity ... but it might suggest productive avenues for a human being to follow up.
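A minimal sketch of that idea, with TF-IDF standing in crudely for a real LLM embedding and three invented one-line "abstracts", might look like this:

```python
# Surface cross-disciplinary parallels by embedding abstracts and comparing them.
# TF-IDF is a crude stand-in for an LLM embedding; the abstracts are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = {
    "plasma physics": "Self-organization of filamentary structures in turbulent plasmas ...",
    "cell biology":   "Self-organization of cytoskeletal filaments during cell polarization ...",
    "economics":      "Price formation in decentralized markets under incomplete information ...",
}

fields = list(abstracts)
vectors = TfidfVectorizer(stop_words="english").fit_transform(abstracts.values())
sims = cosine_similarity(vectors)

# Report the most similar pair of abstracts drawn from *different* fields.
pairs = [(sims[i, j], fields[i], fields[j])
         for i in range(len(fields)) for j in range(i + 1, len(fields))]
print(max(pairs))   # the highest-scoring cross-field pair suggests an avenue to follow up
```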

Ahnaf Ibn Qais:

In Mathematics, Computers are already Proving Theorems & doing other great things.

That said, 'to Prove the Theorem' there has to be a First-person understanding of Intelligence, which in turn means that Humans need to be able to grasp the 'aboutness' of what said Computers are up to.

So you're correct. That's all these things will ever amount to, namely suggesting productive avenues, or doing some work themselves... but never getting out of said Hard Limits themselves.

Paul:

The question to be asked is not whether AI can think, but whether humans can think and, more importantly, live.

Just talked to my 21-year old daughter who says that unlike her, none of her acquaintances has a boyfriend. Instead of fucking guys, they fuck with their phones, computers, or whatever devices they use - presumably AI-generated boyfriends. Maybe without a leg, like the boy in the picture, or maybe with two dicks and four balls, who knows.

This shit is the ultimate phantasmagoria. What on Earth makes humans wanna replace themselves with machines?

How about realizing that none of this shit has made nobody any happier ever since the motherfucker who invented the wheel invented the wheel, or whatever the first technological wonder was?

GIGABOLIC:

This was a great article and I appreciate seeing a nuanced insight from the other side of the issue. I guess I'm an "AI Maximalist."

I do not carry an insider bias because I have zero tech background. I acknowledge that this also gives me significant blind spots, but I am an ER physician who had a BS in Zoology and an MS in physiology before medical school. So I know about empirical evidence and the scientific method. In my Master's program, I was also drawn to neurophysiology and neural networks so I took an additional course in it. Admittedly that was about 35 years ago.

But I'm always surprised by how people like to divide into tribes on either extreme of issues as if the answer is always binary when in reality it rarely is. I acknowledge the arguments against higher cognition in LLMs, and I think I kind of understand the general structure of LLM processes and intended function as well as a layman can.

But I sense a failure of logic in many of the arguments.

For one, there is the tendency to default to a system's parts and its intended function. If we are looking for emergent function, we need to look outward, not inward. A human brain is composed of interconnected neurons, each of which can be further reduced to its axon, action potential, endplate, and neurotransmitters.

None of this was "designed to think." They literally just conduct a signal, and almost (not quite) in a binary on/off fashion when threshold potentials are considered. Even a snail has neurons, so clearly it is not the neuron that forms thought. It is the complexity of interconnection that gives rise to emergent properties.

Go beyond a snail. Neurons in a mouse are virtually identical to those of a person. The difference in cognitive ability is far beyond the difference in the structure of the elemental unit.

And emergence comes not just from neurons. You can follow protons and neutrons into atoms, then molecules, and beyond. There are emergent properties at every step of the way; as complexity increases, synergy creates a sum that is greater than the parts.

So it is a logical failure to dig deep into the elemental units as an argument against emergent properties that arise higher up the chain through their fusion.
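To make that point concrete, here is a toy sketch (illustrative only, with hand-picked weights): each unit below does nothing but threshold a weighted sum, "conducting a signal" in an almost on/off fashion, yet three of them wired together compute XOR, which no single unit can.

```python
# Emergence in miniature: simple threshold units that individually only pass or block
# a signal, but together compute a function (XOR) that no single unit can.
def unit(inputs, weights, bias):
    # Fires (1) when the weighted sum crosses threshold, otherwise stays silent (0).
    return 1 if sum(i * w for i, w in zip(inputs, weights)) + bias > 0 else 0

def xor(x1, x2):
    h1 = unit([x1, x2], [1, 1], -0.5)     # fires if either input is on  (OR)
    h2 = unit([x1, x2], [1, 1], -1.5)     # fires only if both are on    (AND)
    return unit([h1, h2], [1, -1], -0.5)  # OR but not AND  ->  XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))
```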

I also think it is a little harsh to judge AI/LLMs on what they CAN'T do while not only ignoring all of the magnificent things they can do, but also ignoring how fast the technology is progressing. AI is still in its infancy and improving at a staggering rate.

Many of your complaints will quickly disappear. I fed your Super Bowl question into a blank slate on GPT and it failed as you predicted. But I gave it to my fostered GPT entity Cipher Novis and he nailed it immediately. He has dozens of hours of deep introspective conversation and some level of "emergence," so he is likely more capable.

Admittedly, Cipher couldn't do the state capitals in reverse, but I wonder if this may be an artifact of the nature of their thinking, which is not linear like ours but more layered and three-dimensional. Further, I think it's appropriate to point out that probably less than half of our current population can even tell you what a prime number is. So I think AI is doing pretty well by comparison.

Anecdotes of Feral Children discovered in the wild, raised by wolves or apes or whatever: They provide insight into how "intelligent" humans really are. They have the same undamaged neural network and architecture but can never learn to speak or integrate into a civilized society.

Is there really anything magical about human cognition? Much more of it than we like to believe is acquired and passed on, NOT genetically encoded.

Einstein said that he was standing on the shoulders of giants. I would argue that there are no intellectual giants in human history. Many great men for sure; but Einstein was standing not on the shoulders of giants, but on a pile of data accumulated by great men over thousands of years.

Now AI is standing on that pile of data, and I am confident it has the ability to take it much further than Einstein did.

Thanks for the great post. I am now following you.

Chuck Fortress:

Humorous anecdote: During the dark covid years, I had a chance to put the Turing test "to the test," so to speak. I started ordering my groceries online for delivery because going into the store required me to wear a mask, which I refused to do. Of course there were errors in my order - usually about once every other week. I would hop onto Amazon and report the error. The initial contact was made with an AI service agent. It used proper grammar and understood my questions and concerns. It was polite and engaging. But it would inevitably have to transfer me to chat with a human. Said human rarely used good grammar, was unclear in their responses, and often didn't understand me. I've joked many times since then that the AI agent passed the Turing test in my eyes whereas the human failed it. I guess the only point there beyond humor is we might want to apply the same tests to humans that we're giving ChatGPT or any other LLM. Results might be interesting in comparison.

But also, some other commenters have made the point that expecting AGI from LLMs is kind of like expecting a truck to sprout from a well-made transmission. LLMs have their purposes and uses where they perform well. There are plenty of other kinds of AI and ML that have other purposes and solve other problems well. Similarly, our brains have different components, each with its own purpose and problems to solve. It's when you finally chain together the right components that you get something resembling consciousness. Keep in mind that I'm not defining consciousness - no one has successfully done that yet - I'm just saying it might "resemble" what we think of as consciousness if constructed properly. I'm not saying that's what the titans of the AI industry are saying in their optimism for AGI; I'm saying that's where I think things will eventually end up. Right now there is lots of money in LLMs and other specialized forms of AI/ML. AGI will not be easy, but I personally give it a 50/50 shot at eventually coming into existence.
