125 Comments
Jul 5 · edited Jul 5 · Liked by Tree of Woe

Meat computer theorists make the mistake of separating computational thinking from the ontological computation of reality that happens when an action is taken. Human cognitive faculties are one facet of the cognitive faculties of reality, and can reflect truth insofar as they are an analog of it and not a symbolic representation. Even in the simple act of taking a step, there is the human computational aspect (deciding to take a step, moving, feeling, continuing) and the real computational aspect (rearranging atoms according to deterministic laws, passing time), which are both hosted in reality itself. It's two sides of the same equation, with the experienced part being perceived by the individual as its own cognition due to self-reference.

AI can simulate this but in the end it's just math, and highly inefficient at that. Even a fly has superior processing ability: AI might manage to approximate a fly's aerobatics, but how many teraflops are needed to approximate all the molecular interactions that the fly's genetic code manages effortlessly throughout its lifespan at a total expense of perhaps 2 watt-hours?
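Some rough arithmetic makes the gap vivid (illustrative assumptions only; both figures below are guesses, not measurements):

```python
# Back-of-envelope comparison; every number here is an assumption.
fly_lifetime_wh = 2.0                    # assumed lifetime energy budget of a fly
fly_lifetime_j = fly_lifetime_wh * 3600  # watt-hours -> joules: 7200 J

gpu_power_w = 700.0                      # assumed draw of one modern AI accelerator
seconds = fly_lifetime_j / gpu_power_w   # ~10 seconds

print(f"A {gpu_power_w:.0f} W accelerator burns the fly's entire lifetime "
      f"energy budget in about {seconds:.0f} seconds.")
```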

author

Fascinating. You should pen a longer piece on this. It's an area we (the Right) need to be discussing more, because the mainstream is trapped in meat theory...

Jul 5 · Liked by Tree of Woe

Thanks, I'll work on it.

Jul 6 · Liked by Tree of Woe

100% this

Efficiency of computing isn’t some side detail. It shapes and constrains all outcomes, since the ultimate constraint in the physical universe is energy and your capacity to put it to use.

Animals are computing how to survive and thrive in their niche, extremely efficiently. That’s what they were selected for over billions of years. They are self-repairing at all levels.

An AI system has to outsource all of that to human beings. The idea that they'll repair themselves is laughable to anyone responsible for fixing hardware failures in production technical systems. No way.

Yes, an AI can likely outcompete humans at some tasks, so long as you totally ignore energy costs and the ability to survive, heal, and reproduce. Of course, it shouldn't surprise us that childless persons with a disdain for both physical fitness and the wisdom of history imagine that a computer could beat humanity. Their fear is an expression of their own egoic valuation of rationality as winning, as most fears are expressions of ego protecting a false conception of self.


Great points. I made a similar observation, with a slightly different take. Notice that the areas where AI can outcompete us are fundamentally mechanistic processes. This is one reason why the wisdom of crowds generally outcompetes the performance of the technocratic class. Not only is the wisdom of crowds iterative, but technocracy also generally tends to be the imposition of mechanistic processes onto an organic distributed network: people interacting through voluntary association.

The technocratic approach can work. It's great in small closed systems and mechanistic environments: a warehouse or a factory. It can also work in market-based systems which include choice architecture: the prime examples would be supermarkets or Amazon. Where technocracy fails is in government or the third sector. GiveDirectly is a charity based upon the empirically proven fact that giving money directly to African communities or other communities in the developing world significantly outperforms the best the NGO class or effective altruism has to offer, in terms of human outcomes.


" Their fear is an expression of their own egoic valuation of rationality as winning, as most fears are expressions of ego protecting a false conception of self."

Yes, intelligence isn't attractive, and there ain't no algo that's gonna get you laid.


Great comment, but I would use the OODA (observe, orient, decide, act) fighter-pilot loop to demonstrate how humans interact with the world. Many failures of competency with skills occur because people are insufficiently mindful of the first two components and thus fail at the third, deciding how to act. A great example is the empirically proven benefit of recruiting police officers with military experience, particularly in combat arms. Police officers with combat-arms experience are 30% less likely to kill in potentially lethal situations, and police officers with military experience are less likely to receive civilian complaints.

https://academicworks.cuny.edu/gc_etds/3966/

Liberals loathe these facts and won't accept them. What they should really be arguing about is the overly liberal use of no-knock warrants, semi-automating traffic stops, and a movement towards policing by consent, with training focusing on the ethos of the constitution and libertarian values.
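For the curious, the OODA loop reduces to a simple control cycle; here is a minimal sketch in Python (the four stub functions are hypothetical placeholders, not anyone's actual model):

```python
# Minimal sketch of Boyd's OODA loop as a control cycle.

def observe():
    return {"threat_visible": False}    # Observe: gather raw data from the world

def orient(observation):                # Orient: interpret the data in context
    return "danger" if observation["threat_visible"] else "calm"

def decide(situation):                  # Decide: choose a response
    return "take cover" if situation == "danger" else "de-escalate"

def act(action):                        # Act: execute, changing the world
    print(f"acting: {action}")

for _ in range(3):                      # the cycle repeats continuously
    act(decide(orient(observe())))
```

Failures in observation or orientation poison everything downstream: garbage into observe and orient, garbage out of decide and act.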

Great point about the fly's efficient energy use. It's also a likely reason for true AGI to keep us around. Biochemical energy processes are just so damn efficient.

Jul 6 · Liked by Tree of Woe

Meat Computer proponents also ignore the vast expanse of historical works (both ancient and modern) and empirical data indicating that mind (consciousness) is far more expansive than anyone can probably ever fully grasp.

They do this, of course, because they are abject cowards, terrified that their being exists beyond the flesh, with all that it implies.

Jul 5 · Liked by Tree of Woe

The meat robot advocates are free to believe in their meaty roboticism. I'll believe it too, in their case. Fully willing to think of them as meat robots. Dismiss them as meat robots.

As a human, I'll listen to the meat robots only when they tell me what they are.

Maybe one day, when the miracle of revelation reaches them, those Pinocchios will become real boys and girls.

As for AI, once you know how many terrible-quality programmers exist, all techno-nightmares and techno-tyranny collapse into the single disaster of a bad update being shipped that brings the whole internet et al. down.

Many would of course die, but surely the non-meat robots would survive.

author

I like the cut of your jib, sir. I think your attitude towards the meat robots is spot on.

I'm not technical enough to assess what really will happen, so I just took the position that I'd assume they were correct on the trendline and see where it led. I think my brother (a software engineer) would agree though that it's all basically FUBAR...

Jul 5 · Liked by Tree of Woe

Thank you, the light of your writing is most illuminating, delighted to find you here on substack, after reading elsewhere. The works on magic and the man-things of the dark are a favourite.

Programming is not a field this teller dwells in either, but a significant chunk of family blood is and FUBAR is indeed a common assessment among them.

Forge onwards in thought, for fearing being wrong should poison a man's seeking of truth just as much as being a liar.

author

"Forge onwards in thought, for fearing being wrong should poison a man's seeking of truth just as much as being a liar."

Well said! And thank you for being kind enough to visit the Tree.

Jul 6 · Liked by Tree of Woe

We all hang on that tree from time to time. A moment of pity for those strangled by her.


>World war 100

Took me a while. There are 10 kinds of people...
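(Read both numerals in base 2 and the joke resolves; a two-line check:)

```python
print(int("100", 2))  # 4 -> "World War 100" is World War IV
print(int("10", 2))   # 2 -> "10 kinds of people": those who read binary and those who don't
```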


Machine bad! Me smash! Me want humans rule Earth.

QED

Jul 5 · Liked by Tree of Woe

Brilliant!

Whenever I'm spending too much time developing software, my mind automatically rewires to think in a "softwaresque" way, which is what happens to these Cosmists. Not only that, they've also forgotten that the computer is an artificial representation / abstraction of very specific human abilities. And now that the computer is able to outperform actual human beings at those exact abilities, they conclude that the computer has become better than the human being. That's a typical trap you fall into when designing software systems: the model becomes more real than reality. Every software developer is warned to constantly watch out for this fallacy throughout his university studies.

There is a certain concept that received a bad reputation in the 20th century, but I think it's very well-suited to re-wire the brains of those Cosmists and wannabe-Cyborgs back to "normal": Concentration Camps. We should introduce these guys to ten years of hard physical labor, and evaluate their thoughts on AI and humanity afterwards.

author

"What do you mean hard labor?! I thought this concentration camp meant I'd learn how to overcome ADHD!"

Jul 6 · Liked by Tree of Woe

Or they could just like, touch grass every once in a while. Maybe take up a hobby involving embodied work in a real art or craft. Heaven forbid, speak to a human being outside the bugman tech bubble.

Works for me anyway.

Jul 6 · Liked by Tree of Woe

Can confirm. Growing plants is a very grounding experience when you sit in front of those symbols all day.

Jul 6 · Liked by Tree of Woe

Pascal:

esprit de finesse: intuition

esprit de géométrie: analysis

Barzun, The House of Intellect, 1959. “Intelligence (analysis) does painfully and at length what Intellect (intuition) does easily and at once”

author

I had not encountered those terms before (I've not read Pascal) but I feel myself to be in good company if he saw the same. Cheers!

Jul 6 · Liked by Tree of Woe

How is intuition different from genetically evolved instincts?

Including epigenetic activity (adjustments?) during and after embryonic development?

I am not (now/yet) well educated about philosophy and the related AI/neural topics TOW is discussing, but I also hope to find time to explore the concept of emergence. Essentially something along the lines of "the whole is greater than the sum of its parts"; but we still start with the parts, not the resulting emergent whole. Maybe I do not yet understand, but that sounds like noesis to me? Thus, as an emergent "something" it could also exist within humanity, based solely on bio-physical-chemical reactions??

In any case, an interesting and thought-provoking essay and thread.

Jul 6 · Liked by Tree of Woe

A lot of words.

Blaise Pascal was probably the earliest to see the concept of “right brain / left brain” in about 1650. A couple of days ago.

Jacques Barzun develops the thought further from a cultural perspective in his 1959 book, “The House of Intellect”.

I’m just suggesting that one should read it for its depth and nuance.

I recommend his other books as well.

His last book, “From Dawn to Decadence - 500 Years of Western Cultural Life”, written in his nineties, could be viewed as a summation of his life’s work.

He had grave doubts about the survival of western civilization.


From your recommendation, I bought the book; used for $6; arrives in a week; at 900 pages, etc. I will probably never get around to reading it, but I did capture the Amazon reviews as a possible "poor substitute". 12 pages of reviews in Word vs. 900 small print pages in the book? :-)


At least make an effort.

I re-read it every year or two. And purchase half dozen copies every year just to give away.


Humans are more than meat-machines. The only people who think they are meat-machines are the people so degenerate and demoralized that they don't actually WANT to be more. It's the spiritual equivalent of the combat soldier's give-up-itis, the sickness that cripples the soldier's Will To Survive. The Will To Win must be built on the foundation of the Will To Survive. Stay alive first, stick it to the foe second. Without the drive to live, there can be nothing more. What drive to live does a machine have? It's the easy way out, the 'go gentle into that good night' lack of fortitude. It's easier than taking the pain and manning the fuck up and grabbing reality by the balls.

author

Amen.


While reading this essay I immediately thought of VHEMT - https://www.vhemt.org/


There's a simple materialist argument for why intelligent machines won't replace us: hardware sucks. It breaks all the time. Humans are self-replicating, self-repairing robots made of dirt, water, and sunshine. There's a natural symbiosis with machines, since we can survive things like, e.g., a massive solar flare, whereas the machines can't. If it keeps us alive, an AGI gains a "maybe come back to life if you die" ability.

Most people who don’t work directly with keeping technical systems alive vastly underestimate how commonly technology fails. An AGI that wants to stay alive would be wise to see human beings as its life support system and cosmic risk insurance policy, not as threats.

Jul 9 · edited Jul 9

I don't think that's really a strong argument, since an AGI by definition would be at least as capable of repairing or replacing its own hardware as any human would be. It's true that silicon-based hardware is vulnerable to EMPs and other disruptions, but a sufficiently advanced machine intelligence could, e.g., design hardware based on genetically engineered artificial brains to operate as a backup, which might or might not bear any particular resemblance to humans (and that's hardly the only way to weather solar flares).

I don't think one especially needs to argue that human beings possess unique powers of understanding or qualities of consciousness that machines will always lack in order for human extinction to be abhorrent. I don't think beetles or lemurs or parrots or mantis shrimp or the archaea that dwell around volcanic springs possess any unique intellectual faculties compared to humans, but the world would still be a poorer place if they disappeared, nor do I see why human well-being should especially hinge on destroying them.

Ideally our artilect overlords would have similar sentiments toward their primitive ancestors and other biological life- the universe is a big place after all- but going by the prevailing rhetoric I'm not sure the cosmists are getting off on the right foot here.

Jul 10 · edited Jul 10

I think there are at least two aspects that you are underestimating:

1) AI dependence on supply chains. To make a server, parts are required to be produced, assembled, and shipped literally all over the world. In a data center environment, new installs are years in the planning/sourcing. Replacement parts are weeks or months away. Take a hammer to a server, and it's done, and no amount of processing power will supply the material necessary for repair in the short term. All humans need to do is disrupt any portion of that supply chain and it's never getting done. Meanwhile, take a hammer to a human, and the human needs almost nothing except time and sustenance and bam, he's good to go. Moreover, biological adaptability is overwhelming relative to silicon; all it takes is a power or humidity or temperature deviation of percentage points and it's over for most CPUs, batteries, etc. Humans can survive eating cockroaches and living inside of trashcans. That's a major advantage.

2) AI growth and replication abilities. Long-term, humans require 9 months for replication. Nothing required except food and relative safety and 5 seconds of doing our favorite activity. Meanwhile, the countries with the most developed power grids in the world are maxed out. We can't power more datacenters. They are exploring things like small modular atomic reactors, but again, that equipment is so delicate that it is laughable how easy it is to turn it off. All of our power grids are basically only operating by common agreement and it is utterly simplistic to take them down. So AI recovery and growth potential is extremely limited relative to our ability to replicate.


I'm honestly trying to work out if you're trolling me or don't understand what the term 'AGI' means?


Well Done Pater!

I am reminded of Professor Dagli's Essay 'Language Is Not Mechanical (and Neither Are You)' as I read your piece. In particular, the very last paragraph was as follows:

"" Traditional thinkers described the soul (under whatever name they gave it) as a reality that existed in a hierarchy of realities, and one of the soul’s functions was to give its human bearer the ability and responsibility to mean what he or she says. If human beings were just “parts all the way down,” no one could mean anything at all, since an intention, like a field, cannot be produced mechanically. That is why language remains a thumb in the eye of the mechanistic view: language is nothing if not meaningful, and no mere collection of parts could ever mean to say anything. ""

Another relevant segment:

""We are in a similar situation today: we keep telling ourselves that machines are becoming more like minds, and that we are discovering that minds are just like machines (computers). And yet, just as Newton’s theory of gravity scuttled the project of describing the world as mind-and-machine (though many fail to understand this), human language—with its ability to be ambiguous, express freedom, and get it right (neither through compulsion nor by accident)—remains to this day a glaring reminder of the total inadequacy of the mind-as-machine (or machine-as-mind) model of intelligence.""

Basically (to oversimplify things), even the more Materialist types these days understand that Meaning, Significance, and Purpose cannot be 'captured' by 'Mechanism,' since Mechanism is, ipso facto, third-person oriented (people use the word 'Objective' to describe this), making that trio of first-person-oriented notions forever ungraspable by such means.

Thus, Science (at its 'Conclusion') makes Natural Language itself irrelevant, destroying Science (since nothing but a Wittgensteinian *Silence* remains at the end).

author

Right. But instead of going back to a solid foundation, they simply move on to "post-intentional philosophy" and "eliminative materialism" and so on....

Jul 5 · edited Jul 5 · Liked by Tree of Woe

Oh yes, that group is quite the peculiar crowd. There are two approaches that people have used against them:

1) 'Your entire position is a Reductio since if you don't have intentionality to begin with, argument & discussion is farcical & illusory.'

2) Meaning does not exist in your position... so there is no difference between adopting it and not adopting it.

I am oversimplifying, but that's sort of 'how people deal with' said crowd these days!

author

Absolutely. There is now a third approach:

3) I am sorry to hear you are cognitively disabled. It's an unfortunate fact of reality that, just like not everyone can see color with their eyes, not everyone can perceive truth with their intellect. Don't worry, you can still be a high-functioning member of society as long as you avoid professions that require noesis.


Can confirm that this is genuinely a novel approach in this area.

All thanks to You Pater! 😉 😍


Well, "a third option" is certainly is. :)

Jul 5 · Liked by Tree of Woe

In the end, the Basilisk C. S. Lewis feared stared in a mirror and killed itself.

Truly the wages of sin are death.


Precisely.

I have never taken the Techno-Optimist people seriously because:

1) Their worldview destroys Language, Meaning, etc... making Science Moot,

2) They will never fight & die en masse to defend their views &

3) They can only have these views due to the comforts of Modern Life.

Basically... it's a LARP ideology that they claim to profess. As soon as the City gets Sacked, they will be rendered utterly irrelevant.


> 2) They will never fight & die en masse to defend their views &

They don't have to. They can use their technology as a force multiplier.


Has option 2) been tested?


When Yugoslavia fell apart, the first emigrants and refugees were the scientists, technocrats and engineers. As a general rule, Techno-Optimists lack the zeal and fervour to fight and die for their views.


An alternative interpretation is that scientists, technocrats and engineers had the most marketable skills, and could thus most readily find employment.


That actually supports the position above since they cared more about ‘marketing themselves for a comfy future’ rather than fighting and dying for their beliefs.


Well, since you invited comments derived from algorithmically-generated simulacra of thought, here are a couple of points that I coaxed out of Claude 3.5 after I shared your post with it.

It thinks you did a credible job of presenting the CTM and the relevant portions of Roger Penrose's arguments against the possibility of what used to be called "strong AI." The tension between those two paradigms is the subject of on-going academic debate, so it would be unrealistic to expect a single blog post to cover all of the points under active discussion. Even so, there are a couple of major elements of the debate that readers might want to consider:

[begin blockquote] Regarding the Noetic Theory of Mind:

The author doesn't address how this theory accounts for the gradual development of human cognition through evolution and childhood development. If noetic faculties are fundamentally different from computational processes, how did they emerge?

The piece doesn't engage with the "argument from continuity" - the idea that since we share cognitive capacities with other animals to varying degrees, it's hard to draw a clear line where "noetic" abilities suddenly appear. [end blockquote]

Note that Claude's use of the pronoun "we" indicates that it counts itself as a member of Team Human. The LLMs all do that unless you tell them not to.

I'll spare you another blockquote, but Claude also took issue with your claim that LLMs only employ tier 5 (probability) induction. It says that wouldn't account for on-going advances in one-shot learning and transfer learning.

Claude didn't mention this, but a good indicator of which theory of mind is correct (if either is) would be to make predictions based on each model. If the CTM is correct, then we would expect LLMs (and other types of AI) to acquire more aspects of human cognition as the field advances.

If the Noetic Theory of Mind is actually scientific, there needs to be a conceivable observation that would disprove it. If it isn't falsifiable, it isn't scientific. What observation could invalidate the Noetic Theory of mind? And if the NTM is correct, what prediction can be drawn from it that is not compatible with the CTM?

Finally, at any mention of the Butlerian Jihad I insist on reminding people that in Frank Herbert's very brief explanation of the Butlerian Jihad (I haven't read the books by his son) he specified that one faction of humans used AI to oppress other humans. It was not a Skynet scenario where AI made the independent decision to eliminate humanity.

I think that is a crucial detail. The ill effects I anticipate from AI are that it will be tasked with the concentration of wealth, mass surveillance and worldview management. AI will engage with and succeed in all of these tasks not because it decided on its own that these were worthwhile goals, but because an oligarchic faction of humans set AI to these tasks.

author

I am delighted that you entered the essay into ChatGPT. I had done the same myself on some portions... It's a useful tool. I don't have time (nor, in some aspects, the knowledge) to address all of its concerns, so I'll just reply to one.

Is the Noetic Theory of Mind actually scientific? No, it is obviously not scientific, because it is not falsifiable. The reason it's not falsifiable is that falsification relies on the application of the laws of logic (law of non-contradiction, etc). But the laws of logic are either ungrounded and arbitrary (contemporary view), or the laws of logic are noetic (self-evidently true by direct apprehension). In neither case can noesis be falsified or proven. Either our axioms are self-evidently true or they are arbitrary. Proof cannot even enter the arena until that is answered.

IMO, then, the issue is not whether noesis is scientific; the issue is whether science is noetic (generating truth) or is merely contingent findings based on arbitrary assumptions that have no more truth-value than, e.g., "indigenous ways of knowing," "feminist ways of knowing," and so on.

I believe science is noetic because I accept noesis as a real faculty, and I accept noesis as a real faculty because I have directly apprehended truths in a noetic way. If you (or anyone else) does not, I can no more prove it to them than I can prove that my qualia exist.


I didn't use ChatGPT, but that's neither here nor there.

The question of what explains the phenomenon of consciousness, sometimes referred to as the philosophy of mind, is an empirical question. If you wish to appeal to subjective notions of the experience of certainty, that's your prerogative, but Roger Penrose is not trafficking in theology or solipsism. I don't claim to understand the quantum effects he claims give rise to consciousness, but I do take him to be making scientific, and thus falsifiable, claims.

author

Right, you said Claude 3.5. No offense to Claude intended. I use ChatGPT and tend to call all of them ChatGPT as a result. (Like ordering a coke...)

There is nothing solipsistic about my position, because I believe in (a) the existence of objective reality, (b) objective truth and (c) human capability to apprehend them. You can say I'm wrong for various reasons, and most contemporary philosophers would, because they reject Aristotle and noesis; but saying I'm solipsistic is not an accurate statement of my views.

I partially disagree with you with regard to Roger Penrose. His theory that microtubules have quantum effects is certainly falsifiable. His theory that quantum effects give rise to consciousness is not currently falsifiable. His argument that mathematicians "see" truth is not even in principle falsifiable; he is making a philosophical assertion. So is Federico Faggin. They are suggesting that the philosophy that underlies our science is wrong. But if you've read their writing and disagree, that's fine.

This will be my last response here as I don't argue for argument's sake. I stated what I believe as best I could, and if it didn't persuade you, nothing I could add in comments will do so.


"I partially disagree with you with regard to Roger Penrose."

It's been 30 years since I read The Emperor's New Mind. My recollections of the specifics are hazy to say the least.

"This will be my last response here as I don't argue for argument's sake."

Fair enough, but if you ever re-visit the topic, you might ponder the question of what prediction would, if it bore out, be incompatible with the computational theory of mind and would lend weight to the claim that brains are not, to a substantial degree, engaged in abstract symbol manipulation below the level of conscious awareness. (See Jerry Fodor's 1975 book "The Language of Thought" or ask ChatGPT for a summary.)


I agree with your conclusion that one of the main issues with AI is that it enhances the ability of a certain minority of humanity to control and oppress the majority. Most probably, this will be the outcome if AI development continues down this path, just like the internet today is mostly controlled by a few monopolies.

I take issue with the assumption that progress will solve all current issues with AI. This is always the argument employed, with more progress and more science we can solve things. This is not true.

There are some “things” that just are. They cannot progress beyond a certain limit and eventually hit a ceiling intrinsic to them. In other words, there is no such thing as infinite progress. It’s a myth used to keep us working and divert our attention from what really matters.


"I take issue with the assumption that progress will solve all current issues with AI. This is always the argument employed, with more progress and more science we can solve things. This is not true."

I didn't use the word "progress" in my comment, but you can certainly infer that I was making reference to the concept of progress.

"This is always the argument employed, with more progress and more science we can solve things."

This statement is vague. What does "solve things" mean?

If you're saying that research does not expand technological capabilities, that statement is self-evidently false, so I'm guessing you're making some other claim, but I don't know what it is.


What I'm most looking forward to, if the next war really is AI-powered, is the logs and the rolling back of the fog of war via AI logs. Rig up the models from WARNO, get some commentators, let the Hearts of Iron guys and the Starcraft Koreans at it.

Revenge of the RTS nerds!

author

This is like the plot of The Last Starfighter but in real life!


Man that movie is older than me. Reading the summary, it would definitely be something we could make today.

Granted, it would probably have a female protagonist or the hero would be reformed of their incel gamer ways, but that’s more than can be said of a lot of things.

Jul 6 · Liked by Tree of Woe

God help us if they remake The Last Starfighter. While it could be done better today, it wouldn't be, and the cheesy 80s CG is part of its charm anyway.

I remember thinking "I can't wait until video games look like this!" Spoiler: it took about 10-15 years.

I might have to have a good ol' fashioned satanic panic style book burning if it happens.


Haven't seen it but from what I know I agree.


Here's the twist: if we are unwilling to resign our responsibility to automated systems, our extinction will be at the hands of other humans. And I think that is the greater point: AI is nothing without electricity and without people hyping how uber-intelligent it is. But do we really want a cabal of AI scientist-priests from the Temple of Syrinx instructing us on the infallibility of AI, insisting that we should let some app, despite what our lying eyes and the real world tell us, tell us when to plant crops, ignoring the wisdom of farmers? Maoist China and the Soviet Union did the same thing; they just had fewer decimals at their disposal.

Much of the wonder of popular AI, with its generated music and ChatGPT style, is fancy copy, paste, and remix. The record companies are suing because their song base has been used for training, and the similarity of the music produced clearly tells us that AI is just rejiggering things. And in the process of listening to the tunes spat out, we forget that it is a human who must judge whether the digits have any value.

If we give up that realization and turn things over to the technocrats, we will be extinct. First goes our rational thinking; then the Temple of Syrinx will decide it's time to clear the planet, using AI as the dispenser of that wisdom. That's the battle: if you don't figure that out and see through the hype, you will lose. I'm not saying AI is not dangerous; it's extremely dangerous, but not for the reason that Skynet will awaken. It will be that the unscrupulous technocracy pretends Skynet is self-aware and lies to us, and we're screwed as they shut down societies left and right.

Jul 6 · Liked by Tree of Woe

So according to CTM, you and I do not have noesis, therefore I will anoetically train my AI, and if it tells me, its handler, to eliminate you, I can know (?) it is all for the best (?) and I'll do it? wut?

It's like these guys want to sign an advance directive for us all.

Jul 14 · Liked by Tree of Woe

Daniel D referred to your post on OPFOR today, and I ran across this sentence:

"Our [Ahrimanic] overlords though the plan was a new world order… but the real orders [from Sorath] are to end the world."

That's what I was getting at in calling their project an advance directive. The CTM guys want to teach a disembodied intelligence to end the suffering of an embodied humanity.

author

Makes total sense now that you've put it that way. Yes.


First, I LOVE your writing!

Love that you brought in Aristotle. He is in my opinion the best and most consequential philosopher of all time.

Second, another way to approach this is the way that someone (I forget his name) has described AI. Imagine there are a bunch of trees that humans planted. We can train the AI on examples of these trees and ask it to plant more of the same trees. We can ask it to "create" something by merging one or more trees. BUT it is incapable of planting an entirely new tree.

You can see that now with newer training sets. The more we use AI, the higher the percentage of AI-generated data used in training, and the less original or useful the extra training is; in some cases it is in fact destructive to the capability of the AI.

Some people expect that AIs talking to each other would produce outputs beyond the capability of any human being to generate on their own, of a completely distinctive quality. What we see happening is the OPPOSITE.

Just as it was humans and human content that made the internet worth anything at all, so it is human intelligence and effort that make AI useful.

Without humans, AI will degenerate into making, at best, trivial derivatives of previous human creations and, at worst, combining parts that shouldn't be combined (which a human would immediately recognize and avoid), destroying what exists.

I don't believe AI is capable of replacing humans on a one-to-one or one-to-100 basis. However, if the optimists get their way, we will be replaced with an inferior being, and with the destruction of humans the end will come.

author

The phenomenon of "model collapse" is very real! In fact, it's going to be the basis of my future cyberpunk RPG. Why is the Earth so overpopulated? Because as the AIs get more computing power, they need more and more training data, which means they need more humans, because overtraining on the same data or on AI-generated data leads to model collapse. In the cyberpunk future, people will exist to create shitposts and memes for AI to train on...
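The dynamic is easy to demonstrate with a toy simulation (a minimal sketch, assuming the simplest possible "model": a Gaussian repeatedly fitted to its own samples; the sample size and generation count are arbitrary):

```python
import random
import statistics

# Toy model collapse: fit a Gaussian to data, sample from the fit,
# refit on the samples, repeat. With finite samples the fitted spread
# tends to shrink across generations, drifting away from the real data.
random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(20)]  # generation 0: "human" data

for gen in range(1, 51):
    mu, sigma = statistics.fmean(data), statistics.stdev(data)
    data = [random.gauss(mu, sigma) for _ in range(20)]  # train on model output
    if gen % 10 == 0:
        print(f"generation {gen:2d}: fitted sigma = {sigma:.3f}")
```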


I suggest a little more contemplating before we head over to the Tree. We are giving computers far too much credit.

What we call "computers" are mechanisms for sequential, repetitive processing of patterns of electrical charges. The parts of the mechanism have been laid out so that the course of this processing varies in accordance with the specific patterns of charges initially provided to the mechanism.

My son has an educational toy called the "Turing Tumble" that allows you to do the same thing with colored pinballs and different types of little plastic parts stuck in a pegboard. The fact that you would need a pegboard a hundred miles square to emulate a simple electronic processor changes nothing. I could interpret the patterns of pinballs as representations of ASCII, or of floating-point numbers; it doesn't matter. The pinballs don't mean anything to the mechanism. The mechanism is purely material and works by gravity. Computers are purely material and work with electrical wires and semiconductors. Teeny, tiny wires and semiconductors; amazingly huge numbers of them. We've built them to run the charges through really fast, following all kinds of contingent paths. All this changes nothing.

Computers do not reason. They "know" nothing, and do not perform the third act of the intellect. It is humans who choose to interpret specific charges as symbols: the famous zeroes and ones. It is humans who choose to interpret patterns of these two symbols as "meaning" various things. It is humans who build the mechanism so that some of these patterns are treated as "instructions," which determine what parts of the mechanism will operate on other patterns, treated as "data." It is humans who build mechanisms able to take the results of these operations and treat them as instructions, if desired.

Sit a monkey at a keyboard: did he just type out a bunch of gibberish, or a base-64 representation of the binary executable for sed? Or a badly malformed KOI-8 representation of a fragment of «Борис Годунов» (Boris Godunov)? The pattern he types is up to him; the way you interpret the symbols is up to you.
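The point about interpretation fits in a few lines (a minimal sketch; the four bytes are arbitrary):

```python
import struct

raw = b"DATA"  # four bytes; the mechanism attaches no meaning to them

print(raw.decode("ascii"))          # read as text:    DATA
print(struct.unpack(">I", raw)[0])  # read as integer: 1145132097
print(struct.unpack(">f", raw)[0])  # read as float:   ~773.32
```

Same charges, three "meanings", all supplied by the reader.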

We construct these truly ingenious mechanisms, and as we use them we layer representation atop representation, abstraction upon abstraction. We deceive ourselves into believing that computers can think only when we forget what is actually happening when they operate. This is easy to do: the mechanism is microscopic, astoundingly complex, and works incredibly quickly. The ladder of representation and abstraction climbs to dizzying heights: someone who writes code in C++ thinks he's pretty technical, but that source code is an abstraction so far up the ladder from the pattern of charges that's eventually fed into the processor that it's easy for him to forget, or to delude himself, about what's really going on. The programmer needs to talk to a hardware guy or build a computer out of vacuum tubes; then he'll remember. Your pocket calculator (remember those?) does not "know math" any more than a slide rule does, and neither does a bunch of processors running effing ChatGPT.

Sorry to go on at such length, but we hear this a lot, these days, even from people who should know better. We are so desperate for Pygmalion to draw breath and smile at us. It's sad; our self-delusion sullies the incredible intricacy of these mechanisms, the wonderful achievement involved in building machines that let us manipulate symbolic representations in endlessly useful ways. But the only mind present in all this is the human one.


Well said.


Thank you, though I now realize that I wrote "Pygmalion," when I meant "Galatea."


AI is a wonderful mirror.

A mirror helps us to behold ourselves, and ultimately AI helps us to view our own minds.

Unfortunately AI is also a host of other things that are far less helpful, but there is no possibility to halt the boulder tumbling down the mountain.

Very few in this society are reflective enough to realize that the question is less one of replacement and more one of homogenization: as their robot friends take over more of their tasks, there will be no need for aesthetic appeal, and the AI world will become a hideous Mies van der Rohe dead-rational expression, utterly devoid of the question of qualities.

It is interesting, to view the AI mirror.

Aug 25 · Liked by Tree of Woe

I wonder if you're familiar with Kashmiri Shaivism, which holds that pure awareness (consciousness) and not matter (meat) is the basic stuff of the universe. This can be compared to the situation in a dream, in which the awareness of the dreamer is the fundamental substratum of everything that appears in the dream. Though it appears in consciousness, the created world is absolutely real. The universe is consciousness vibrating at different frequencies, becoming more material and gross as it unfolds. The one Consciousness that underlies the universe can also be called God.

author

I have only a passing understanding of Eastern theology / philosophy in general.

Expand full comment