Talk of LLMs, AGI, & “recursive self-improvement” consistently ignores first principles that have constrained every complex system.
Everything (emphasis on “thing” 😉) reduces to four binding constraints… namely energy, materials, demography, and ecology… none of which can be hand-waved away by software progress or clever abstractions, for software and abstraction themselves exist only via the interplay of those four fundamentals… & so unless one of those constraints is breached in a thermodynamically unprecedented way, their interaction inevitably generates negative feedback loops that dominate outcomes:
The intelligence-explosion narrative rests on a basic category error… it treats recursive self-improvement as if it can operate independently of the physical world, which itself supervenes on those same fundamentals:
& this is why sustained exponential growth doesn’t exist outside simplified models; in reality, exponentials are transient phases that resolve into logistic curves once constraints assert themselves, & the negative hits that crush growth leave hysteresis in their wake…
As for “recursive self-improvement” specifically… compute requires energy, energy requires infrastructure, & infrastructure requires materials & labor…
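A minimal way to state that shape explicitly is the standard logistic model (the symbols N, r, K, N₀ are generic, not fitted to any dataset):

```latex
% Logistic growth: the early phase looks exponential, then saturates at carrying capacity K.
\frac{dN}{dt} = rN\left(1 - \frac{N}{K}\right),
\qquad
N(t) = \frac{K}{1 + \frac{K - N_0}{N_0}\, e^{-rt}}
% For N much smaller than K this reduces to dN/dt ~ rN (pure exponential growth);
% as N approaches K the bracket goes to zero and growth stalls at the ceiling.
```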
All of that then depends on demographic capacity & political stability. 😘 (Hint— These are already dead 🥰):
Materials are finite, demographics are aging or shrinking, ecological limits are non-negotiable, & each layer constrains the next through hard feedbacks….
& hence, every supposed “runaway” dynamic activates countervailing pressures that slow growth, raise costs, increase fragility, & impose ceilings in an inevitable, DOOM-ed & Destined manner 😍
The relevant question isn’t whether intelligence can improve itself in theory, but whether intelligence can outrun the systems that physically instantiate it to begin with… systems without which nothing else can even prevail anyhow:
Thermodynamics, History & Theology… answer that question clearly, time & time again:
All Growth eventually saturates, negative feedbacks dominate… & limits reassert themselves…. 😊
“Recursive self-improvement” is thus not a law of nature; constraints are… & they will simply make themselves felt this century as the 🌍 moves decisively into Negative Sum conditions & Perpetual Conflict, coupled with Low Energy Dominance.
Tl;dr— Pater OPTIMIST confirmed! 😘 🥰😍🤭
"Materials are finite, demographics are aging or shrinking, ecological limits are non-negotiable, & each layer constrains the next through hard feedbacks…."
All true. There are eventually constraints.
Meanwhile, materials (and energy) consumption can be reduced by adopting far more efficient code and models. We are already seeing software and hardware pioneers do this with AI. If compute is a constraint, new iterations will work to maximize its efficiency. We have a LOT of headroom in modern systems.
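To put a rough number on that headroom, here is a back-of-envelope sketch; the parameter count and byte widths are illustrative assumptions, not figures for any particular model:

```python
# Back-of-envelope: memory needed just to hold model weights at different precisions.
# Parameter count and precisions are illustrative assumptions, not any vendor's numbers.

def weight_memory_gb(n_params: float, bytes_per_param: float) -> float:
    """Raw weight storage in gigabytes (ignores activations, KV cache, overhead)."""
    return n_params * bytes_per_param / 1e9

n_params = 70e9  # a hypothetical 70B-parameter model
for label, bytes_per_param in [("fp32", 4), ("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    print(f"{label}: {weight_memory_gb(n_params, bytes_per_param):.0f} GB")
# fp32: 280 GB, fp16: 140 GB, int8: 70 GB, int4: 35 GB --
# an 8x reduction in weight memory before touching the architecture at all.
```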
Demographics... to Archon's point, what is correct is not relevant to elite belief: they believe that there really don't need to be so many of us. In the Doomer AI world, they use AI's Control Grid capability and take steps toward that end. So demographics matter in a DOOM! way, but maybe not the way you're advocating.
Combine both of the above, and what we'd also expect is a greater and greater share of resources going to AI vs. people, as EROEI continues to decline.
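For what declining EROEI implies in numbers, a quick sketch; the ratios below are illustrative, not measured values for any energy source:

```python
# Net energy available to society as a function of EROEI (energy returned / energy invested).
# The EROEI values below are illustrative, not measurements.

def net_energy_fraction(eroei: float) -> float:
    """Fraction of gross energy output left after paying the energy cost of getting it."""
    return 1.0 - 1.0 / eroei

for eroei in [50, 20, 10, 5, 2]:
    print(f"EROEI {eroei:>2}: {net_energy_fraction(eroei):.0%} of gross output is net surplus")
# EROEI 50 -> 98%, 20 -> 95%, 10 -> 90%, 5 -> 80%, 2 -> 50%:
# the decline is gentle at first, then steep -- the so-called net-energy cliff.
```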
The question isn't whether limits are eventually reached. The question is what capability level is realized before the level-off, and what shape we'll be in as humans by that point. There's plenty of both optimism and DOOM to go around within that defined box.
"Optimization" simply reduces slack which in turn means more brittleness & more destruction to compound shocks & Seneca Cliff style simultaneous failures.
Whenever tech bros use the word 'Optimization' what they're really talking about is lack of slack, which means less reliance & more fragility to failure. It's just "Centralization" except in the 21st century context of those illiterate to how Large Complex systems work & what does (& doesn't) break them.
"Resources going to AI" will continue right up until said Seneca Cliff appears (as it always does) & we see Hysteresis towards a Lower Energy Dominant Reality.
Nothing in that High Energy setting will survive... nothing meaningful, because the whole point of "optimization" was to build a setting which doesn't survive in LED.
"Capability" doesn't survive fragility, if said "Capability" was accrued via debts of energy, materials, demography & ecology that aren't tending towards LED.
"Optimization" simply reduces slack which in turn means more brittleness & more destruction to compound shocks"
How does DeepSeek compare to Claude? What are the implications for the corresponding infrastructure?
I'm not convinced that your description is correct. Optimization can mean doing something in a completely different way. Moving goods by river instead of animal + wagon. That isn't just slack, that's a different architecture. Or, it can reflect improvement in component elements, as the power of early engines steadily increased along with reliability and efficiency.
The recent rumblings from finance are about to make lower capex important, and the sums involved mean both felt constraints on operations and huge rewards on offer to those who can deliver lower-infrastructure performance. That dev group now includes the AIs themselves. There are also hardware vendors beginning to deliver in this space, as they rethink the concept of the datacenter and its associated architecture. Early trials are delivering big energy savings, and better reliability, on real tasks.
I'm not certain that they'll succeed in bending the curve, but I'd be an idiot if I didn't concede that it was possible.
"Optimization" is being used here in a very specific way-
Increased Complexity & Increased Throughput to Centralize & Concentrate more Power which is then used recursively to... Increase More Complexity.
Tainter & others came to this conclusion via induction in "The Collapse of Complex Societies" & other related anthropological works.
Moving goods by river... brings in the constraints of tidal systems.
Early engines... brings in the element of increased pollution.
This is all cumulative. Which is the point:
All these 'promises' by the naive Tech bros disregard the fundamentals of Complex Systems... namely that when they *Optimize*, they simply accelerate the crash into Limits & Constraints.
Data centres have Hard Limits a la Electricity, Water, etc.
LLMs have no use case in the real physical world (farming, manufacturing, etc) because any such *serious* attempt would mean "Optimizing" (I am using the word here in the negative aforesaid way) for Catabolic Collapse, Seneca Effect Style... which is just monumentally Stupid.
There is no such thing as 'bending the curve'... there is only Limits & Constraints... & all it comes down to is "When do they arrive & assert themselves? Now or later?"... One can delay, but never avert.
Why have you blocked me?
Materials are not finite. Julian Simon demonstrated this over and over and over again.
No such thing was demonstrated, let’s not be deluded 😉 😘
One doubts. In fact, one disbelieves. I'll have to read the papers.
E.g., these reasons. https://wmbriggs.substack.com/p/the-limitations-of-ai-general-or
You can actually argue that biological neurons do communicate in "1s and zeros", insofar as synaptic stimulus arrives in bursts of activation potentials of varying frequency, and conversely that machines have been encoding floating-point-numbers with ludicrous levels of accuracy for decades. The discrete-state-machine argument doesn't really hold up IMO, and of course goes out the window if you use biological neurons for computation.
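A toy illustration of that rate-coding point; this is purely schematic, not a model of any real neural circuit, and the rates and bin widths are made up:

```python
import numpy as np

# Toy rate coding: encode an analog value as the frequency of binary spikes,
# then decode it back by counting. Purely schematic, not a model of real neurons.
rng = np.random.default_rng(0)

def encode(value: float, max_rate_hz: float = 200.0, duration_s: float = 1.0,
           dt_s: float = 0.001) -> np.ndarray:
    """Bernoulli spike train whose expected rate is proportional to `value` in [0, 1]."""
    p_spike = value * max_rate_hz * dt_s               # spike probability per time bin
    n_bins = int(duration_s / dt_s)
    return (rng.random(n_bins) < p_spike).astype(int)  # 1s and 0s

def decode(spikes: np.ndarray, max_rate_hz: float = 200.0, dt_s: float = 0.001) -> float:
    """Recover the analog value from the observed spike rate."""
    rate_hz = spikes.sum() / (len(spikes) * dt_s)
    return rate_hz / max_rate_hz

train = encode(0.37)
print(decode(train))   # close to 0.37, up to sampling noise
```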
Have you read the work of Gary Marcus? The biggest problem with LLMs is that they do not maintain an internal representation of the world, so “hallucinations” are a feature, not a bug, of next-token statistical prediction. I personally would not want a “50% success rate” for a high-risk, small-margin-of-error kind of task, like security or health.
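To make the next-token point concrete, a minimal toy sketch; the vocabulary and probabilities are invented, and this is not any particular model's machinery:

```python
import random

# Toy next-token sampler. The "model" is just a table of continuation probabilities;
# all tokens and numbers are invented for illustration.
model = {
    ("the", "capital"): {"of": 0.9, "city": 0.1},
    ("capital", "of"):  {"France": 0.5, "Freedonia": 0.3, "Mars": 0.2},
}

def next_token(context: tuple[str, str]) -> str:
    """Sample a continuation purely from the probability table -- truth never enters."""
    dist = model[context]
    return random.choices(list(dist), weights=dist.values(), k=1)[0]

print(next_token(("capital", "of")))
# "France", "Freedonia" or "Mars" -- the plausible-but-false continuations are not
# errors in the sampling procedure; they are the sampling procedure.
```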
Bad doctors give a correct diagnosis 95% of the time. Look it up. That's the baseline "AI" needs to achieve to be even considered as a replacement for humans.
Oh yeah.
Let’s believe the investors that want to sell you their private stake in all these companies over the experts that pioneered, understand, and built the technology.
Sure.
I wanted to hold off on writing a comment until I read everything, but it just burns and I'll write while I read.
> On January 30th, videogame stocks plummeted.
> Then, just two days ago, February 4th, SaaS stocks crashed.
Not true. *Everything* plummeted, and has been selling off until this Friday (probably; maybe it will continue). The most likely explanation is liquidation. Somebody big and loaded was forced into selling just about everything. Note that the last day of the month (Jan 30th was the last working day, as Jan 31st was a Saturday) is important in financials, because various statements and terms take effect based on the prices and holdings at the end of that day. So a big institution may have pretended to be fine until Jan 30th, but when that day came they simply had to make a move to fulfil their contract obligations (or else go to jail).

The most impressive action was in precious metals, which were whipsawing 10% up and down in a single day. You go to bed with gold $4800, you wake up with gold $5000, it's $4700 and going down by the evening and $4900 going up in the morning. What. The. Fuck. To be fair, the period of a few weeks prior was also a bit insane.
--------------------------
Recently I decided to update my understanding of the state of "AI". Last time I checked, "AI" was plateauing and there was no real expectation it would achieve "AGI". But that was about a year ago. I went to YouTube searching for more recent videos and found nothing to really change my opinion. Interestingly, there was a lack of more recent videos on the topic, in either direction, but there were still some. In particular, this guy seemed to be giving a critical roasting to the idea that "AI" has plateaued, so maybe he has discovered the truth: https://www.youtube.com/watch?v=bv19nXfb0bc He believes "AI" has plateaued.
A friend made a good comment about Genie 3: "Great, just what the world needs, more crappy indie games."
The technology to make and film your own movie at a high level of production has existed for a long time, and yet strangely there seems to be a dearth of actually good and well-made creative stories. The democratization of the technology hasn't radically changed how certain winners rise to the top in any creative field. For every successful YouTube fitness influencer there is some guy who has had 10 to 50 views per video for a year and never made it.
As for Claude and its code outputs: I saw someone claim that it truly unlocks industrial-scale production of software, but I see it more like 3D printing. With some of these tools and services a certain few people will make their own bespoke software, in the same way some people print 3D trinkets or repair parts for broken objects. If anything, the ability to create software just became even easier, but do you think people are really going to stop the scrolling or passive consumption that dominates their lives to create things with these new technologies?
> For every successful YouTube fitness influencer there is some guy who has 10 to 50 views per video for a year that never made it.
I'm going to blame The Algorithm. Say no to The Feed people!
The point of increasingly not needing human talent in the loop is that you increasingly don't need human talent in the loop. (And to the extent that any human remains in charge, the payoff-inequalities multiply, which I don't consider especially a good thing.)
You need a human to consume a product and generate revenue. Netflix was already producing a glut of content before AI, but surprisingly enough that content alone hasn't been enough for them to keep succeeding and growing. They have only grown by introducing advertising as a new revenue stream.
When content production costs fall - which is what a lot of said AI tools are doing - it doesn’t translate into more revenue or actual value. That remains more elusive.
> "You need a human to consume a product and generate revenue"
They'll solve the former with UBI, if that's really needed at all, and again I really just don't see how the latter follows. Netflix's revenue is capped by the number of hours people have in the day to binge-watch period dramas and buddy cop slop. It doesn't mean they'll continue hiring humans to meet that finite demand.
> > "You need a human to consume a product and generate revenue"
> They'll solve the former with UBI, if that's really needed at all, and again I really just don't see how the latter follows.
That's not how the economy works. xD "Economy" is people doing things. If robots start doing all the things, then what is the purpose of the robots doing things? There is none, therefore robots won't be doing things! It's a real simple question philosophically. xD
Right now I could instantiate a robot economy in my computer. It would be as important as a robot-only economy, which is to say, not important at all.
> "That's not how the economy works. xD "Economy" is people doing things. If robots start doing all the things, then what is the purpose of the robots doing things?"
Debates over teleology don't, in themselves, prevent physical economies from being reconfigured in ways that put people out of work, or render them welfare-dependent, or lead to military arms-races between drone-soldiers and AI-generals, or otherwise alter the world so dramatically that nothing recognisably human emerges from the other end. Whether there's a 'point' to this from a human-centred PoV may well be irrelevant.
Under the agreed-upon rules of the game, Netflix as a company has a duty to grow and to increase its value in stock price. Netflix's revenue is not capped by the number of hours people have in the day to watch TV; if you are a paid subscriber, you have generated revenue for them. Producing the content was an initial draw to get people to subscribe. Now their business wants to expand and grow, which is to say make more money. They can do this by adding more subscribers OR they could open up to advertisers who have big budgets.
Their content strategy matters to the extent that they realize that if people find NOTHING to watch they will likely cancel their subscription; they likely have an internal metric that flags an account as at risk of cancellation when it fails to watch a certain number of hours a week.
There are other ways they could increase revenue as well: they could start an upcharge for live sporting events like the boxing coverage they have dipped their toe into.
Netflix is most certainly not capped in revenue by the number of hours in the day people actually watch their content.
I don't see what part of this post is arguing for human talent being a necessary part of the business strategy here. Are you arguing that if Netflix doubled the quality of their artistic output that they'd get twice as many subscribers or their existing subscribers would pay twice as much? Is either scenario terribly realistic?
We’re miles off topic here but I’m pointing out you don’t seem to have a very good understanding of business in general because your assumptions about how Netflix creates revenue are nonsensical.
Increasing language proficiency in *language models* is not, and never will be, evidence of a general intelligence. A Boeing 777 flies itself more surely and flawlessly than Claude can code. No one deems it generally intelligent. This is investor hype by which 120 IQ midwits extract cash from wealthy idiots.
Not sure if you saw this already, but the Algorithmic Bridge did an interesting article on this topic:
https://www.thealgorithmicbridge.com/p/the-stock-market-has-no-idea-whats
Basically, the markets seem to be betting both that AI will fail horribly (i.e., overinvesting in data centres) and also succeed wildly (i.e., put SaaS out of business), which has the effect of 'dump all stock', because even if AI succeeds it's hard to predict which specific firms are going to wind up dominant, because the paradigm is shifting so rapidly.
"And so investors do the only rational thing available when the framework for rationality itself is unstable and when most options look like the losing option: they sell everything."
...Which I guess is what the sinking sensation in my gut has been saying for a while, if not in those exact words.
You make a good point about Sequoia and Nature being pretty serious people, and personally I've never argued that AGI was impossible in principle. Just that it's incredibly reckless to build it under anything but the tightest global regulation. I guess we're in a race to see if the bubble pops, and whether this will cool off investment.
I see multiple plays:
1. Roll over to the inevitable.
2. Learn how to fool AI when you need to.
3. Figure out how to disable data centers from a distance.
4. Keep your pre AI skills sharp in case the world goes into Idiocracy and AI has less data to train with. Google appeared to go into woke censorship mode because it could no longer use the full Internet for a link graph. So they are stuck with universities, the government and mainstream media.
(This is an opportunity for the intellectual right: found lots of schools and link out to good intellectual-right content. Focus on quality vs. strict ideology.)
One last play: become a Bond Villain by replacing NVidia for AI. NVidia's architecture is absolutely horrible for the problem of doing lots of matrix multiplies on different batches of inputs. The memory architecture is backwards. Connect me with some hardware guys and some money and we could easily make a neural-net computer that is at least ten times faster and ten times more energy efficient than connecting GPUs. Maybe 100x.
I solved this problem back around 1990 when I worked at a supercomputer company.
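A rough arithmetic-intensity check makes the memory-bottleneck point concrete; the throughput and bandwidth figures below are assumed round numbers, not any specific GPU's datasheet:

```python
# Rough arithmetic-intensity check for a weight-matrix multiply at small batch size.
# Hardware numbers are assumed round figures, not any particular chip's spec.

def arithmetic_intensity(rows: int, cols: int, batch: int, bytes_per_val: int = 2) -> float:
    """FLOPs per byte moved for (rows x cols) weights applied to a batch of inputs."""
    flops = 2 * rows * cols * batch                      # multiply + add per weight per input
    bytes_moved = bytes_per_val * (rows * cols + cols * batch + rows * batch)
    return flops / bytes_moved

peak_flops = 100e12          # assumed: 100 TFLOP/s of matmul throughput
mem_bandwidth = 2e12         # assumed: 2 TB/s of memory bandwidth
balance_point = peak_flops / mem_bandwidth   # ~50 FLOPs/byte needed to stay compute-bound

for batch in [1, 8, 64, 512]:
    ai = arithmetic_intensity(8192, 8192, batch)
    bound = "compute-bound" if ai >= balance_point else "memory-bound"
    print(f"batch {batch:>3}: {ai:6.1f} FLOPs/byte -> {bound}")
# At batch 1 the intensity is ~1 FLOP/byte, far below the ~50 needed:
# the multiplier units idle while weights stream from memory.
```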
What company did you work at? The story about NVIDIA has been from the start that their architecture is shit, but they know how to crank it out at prices better than competitors', so they ended up winning the market back in the early 2000s, and Moore's law ensured nobody could threaten them.
But Moore's law may have died just now.
I'm still not entirely sold on the utility of AGI. We already have a couple billion human-like minds and I don't give a fuck what they output, why would I care what IBM brand retard says?
Further, barring extreme economic or force incentives, why would anyone else?
Use Claude Pro to attempt to accomplish any white-collar task and come back to me
AI is not feasible cost-wise. That is its Achilles heel.
Will all this still be profitable at 100x prices? xD
But can it decide which tastes better: a pear or a strawberry?
Rather than computing power or the ability to use pattern recognition to predict probabilities, isn't our very human subjectivity the better measure of thought?
Octopi/squids of some varieties communicate via body position, movement of arms, and patterns of colour on different parts of the body: several orders of magnitude more information per unit time than human speech or text.
Same with many small birds such as Great Tits - the sounds they produce are more complex and carry more information than humans can convey in speech or text.
Do we therefore call octopi, squid and tits more intelligent than humans?
Corollary: if we use human intelligence (what there is of it) as the benchmark, then we also use human ways of thinking/using that intelligence as the demarcation-point.
Thus: the old model, based on the premise that humans are separate and different from animals, is the problem here.
But: how to understand a different intelligence?
I think they will certainly attempt AGI but it will be a mix of Wizard of Oz-style propaganda and the AI doing its thing, and of course we know they want to use AI, and what they imagine it can do, to bring untold suffering and evil to people (if they can get away with it). Since at the end of the day the god of this world and his ilk are calling the shots, with the elites as their puppets.
At its current level, if it stays "a very helpful mirror" it is pretty great.
If it can be leveraged for harder tasks, also good.
Will it take over the world? Dunno. How much energy will that compute take?
I was working with the lowly ChatGPT 3.0 version on a project that required plotting timelines of particular developments that mark our civilization's rise and ensuing decline. I started with the low-hanging fruit of classical music. It was designed as a collaborative project of sorts, with inputs solicited both from myself and from the AI.
What I found both disturbing and exhilarating is that it was, in fact, a good collaboration (i.e., once I instructed the AI to dispense with silly compliments, which I find annoying). Better than what I could have had with many humans also knowledgeable about and partial to this one area. I am still trying to figure out what made this such a satisfactory experience. Was it the fact that my hunch was confirmed with hard data and the plots kind of tracked with what my intuition told me ahead of time? Or was it the fact that the AI seemed to guess what my intuition was, as I provided more input and suggested modifications?
The really disturbing, yet fascinating, part was that my newly found AI 'friend' provided some excellent conclusions, ones I'd have no doubt inferred myself except it beat me to it.
I have since moved to a much harder subject matter (because definitions vary and prejudices multiply, my own included), namely Philosophy. Both the AI and I are struggling a bit to keep the project on track. My conclusion so far, though, is that I would not have been able to plow through this minefield with another human, for any number of reasons. This conclusion is what disturbs me most, and I am trying to analyse right now why.
The most important graph in AI has gone vertical . . . because the vertical axis was changed from exponential to linear. No change in trend, just legerdemain.
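For anyone who wants to check that for themselves, a small sketch with synthetic data: the same exponential series is a straight line on a log axis and a hockey stick on a linear one, so switching the axis changes the drama, not the trend.

```python
import numpy as np
import matplotlib.pyplot as plt

# Same synthetic exponential series, plotted two ways. Only the axis scale changes.
t = np.arange(0, 20)
y = 1.5 ** t                      # steady 50% growth per step -- no change in trend

fig, (ax_lin, ax_log) = plt.subplots(1, 2, figsize=(8, 3))
ax_lin.plot(t, y)
ax_lin.set_title("linear axis: 'gone vertical'")
ax_log.plot(t, y)
ax_log.set_yscale("log")
ax_log.set_title("log axis: same straight-line trend")
plt.tight_layout()
plt.show()
```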