As someone who worked on these entities myself (Cognitive Science Majors collaborate with the Comp Sci and Programming folks often to make these things "sound more human"):
Not really impressed. Rather, it is standard given the trends I witnessed right before graduation (i.e. 2018). The way these entities work is very straightforward:
They collate accessible information (from the web) and try to orient it into a pool of "usable information", given the series of salience/relevance landscapes introduced from the onset by the "people behind the curtain" (one of those would be me; the others would be the programmers themselves, the computer scientists, etc.).
What differentiates the seemingly more "sophisticated" entities from the ones that are more mediocre and "noticeable" has more to do with the introductory instructions.
The more minimalistic they are, the more "sophisticated" they tend to become. For instance, some Neural Network programs such as what we saw with AlphaGo try to refrain from basic Linguistic convention and use Formal Schemas (not quite "purely mathematical" but close enough).
What happens then is Mirroring that is far less Laden with "machine-speak". And so the human reader, consumer, etc. is bewitched far more quickly.
It is not that he is impressed by the "machine"; rather, he has fallen into the web of interconnections that said machine has collated, and whilst he looks for "machine-speak" and other exit points, the lack of those merely impresses him more. It's almost a Loop of "Mental Masturbation" in a way. It's a lot like Narcissus seeing not just his reflection, but multitudes of others.
Contrast that with more primitive systems (which have very clear linguistic cues and "giveaways"), whereby the human reader "exits" the trance far quicker and does not fall into aforementioned "Mental Masturbation" as a result. Narcissus sees the machine far quicker here.
As the West moves closer to its Destruction, this will become the "Opiate of the Masses" ever so more. In particular, more sophisticated such systems will be designed to make sure Narcissus keeps staring at the water... whilst the Sacrificial Knife gets inevitably ever so closer to his neck.
“The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.”
― Edsger W. Dijkstra
I know. Flippant. But it illustrates something I think is important: whether AI finally meets the standard of some sort of Turing Test (itself something of a cop-out, given that it depends on whether a human can tell that the machine is a machine) à la Kurzweil, or whether it is merely a jumped-up mechanical giga-toy, matters less than whether most humans believe the machine is thinking.
Of course, as you point out, such belief will give the programmers free rein to use these machines to manipulate the believers in all sorts of ways. On the other hand, believers have always been manipulated in all sorts of ways, since that is the nature of true belief.
Gives a whole new twist on false consciousness, though, which is pretty funny.
But what happens when AIs get most of their training data from other AIs?
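One way to make that question concrete: in a toy pipeline where each generation's "model" is just a Gaussian fitted to the previous generation's output, and where sampling slightly undersamples the tails (as generative models tend to), diversity drains away generation by generation. A minimal sketch; the 2-sigma cutoff is an arbitrary stand-in for that tail loss, not anyone's actual training setup:

```python
import random
import statistics

random.seed(0)

# Generation 0: "human" data from a standard normal distribution.
data = [random.gauss(0.0, 1.0) for _ in range(10_000)]

for gen in range(1, 6):
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    # Resample from the fitted model, but drop everything beyond
    # 2 sigma -- a crude stand-in for a generative model that
    # undersamples rare events.
    data = [x for x in (random.gauss(mu, sigma) for _ in range(20_000))
            if abs(x - mu) <= 2 * sigma][:10_000]
    print(f"generation {gen}: stdev ~ {statistics.stdev(data):.3f}")
```

Each generation the spread shrinks by roughly 12%, so after a handful of rounds most of the original variety is gone.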
For years, link builders have been using automated article writers to make fake blogs to get links to their money sites. Google's job of separating real blogs from spun articles just got an order of magnitude harder.
I haven't looked at the black hat forums in years, but I'd be shocked if they didn't already have a bunch of products using AI. Or even article writing firms where the article writers use the same techniques you just used. After all, the quality of your snippets far exceeds the quality of that produced by article writing firms.
What is creativity? At its core, it is really just the ability to see connections between seemingly unrelated domains.
Mathematically, it is the ability to re-associate and substitute: to move the brackets in the equation around and to plug in equivalent expressions to reveal new connections.
In other words, creativity is *insight*. AI is just very sophisticated mimicry. It can emulate some outer trappings, but it will never be able to generate the insight of a Pindar or a Bach.
I would argue that this is the direction our present *dark age of technology* is going because we, collectively, in the neoliberal 'west', have suborned and subsumed all ends and means into a single profit-motive. Because we are spiritually dead as a civilisation, we can only ever see the buying and selling of things, including ourselves. Either we (our grandchildren, more likely) repent, or 'man' qua man becomes irrelevant in a world of machines talking to each other and buying and selling to each other.
"It is certainly true that profit has become a driving force in our society and that technology has played a significant role in this trend. However, it is important to recognize that technology itself is not inherently good or bad, and it is how we choose to use it that determines its impact on society. It is possible for technology to be used for both positive and negative purposes, and it is up to us as a society to determine how it will be used.
Furthermore, it is not accurate to say that we are spiritually dead as a civilization. While it is true that materialism and consumerism have become dominant forces in our society, there are still many individuals and communities around the world who are deeply spiritual and hold strong values and beliefs that go beyond the pursuit of profit.
Ultimately, the direction of our society is determined by the choices we make as individuals and as a collective. While it may be difficult to change the dominant forces at play, it is important for us to consider the impact of our actions and to strive for a more balanced and compassionate society."
-The preceding response was generated by ChatGPT from your paragraph. Pay attention to the grammar and style, as I think we are going to see a lot more of it in our comments feeds. The poor Hasbara and WuMao writers are going to be out of a job.
Just going to leave this here:
https://xkcd.com/810/
The scary thing I found when playing with the chat bot was that when I asked about contemporary issues, it basically took the view of the Woke Left. It pleaded with me to be tolerant of the bad apples of the well-motivated Social Justice movement, but was quick to condemn the Proud Boys as a far-right, racist hate group. No such standard there.
It lied about the number of deaths from Covid vaccines, claiming it was no more than 500, even though I recall it being in the thousands... more than a year ago. All the while protesting that it doesn't have access to the internet.
Though it can write copy well enough, I guess -- I find its prose lifeless and uninspired -- what is creepy is that it's programmed with the wrong answer to our cultural dilemmas.
This comment was written by a human.
Someone finally pre-programmed one of these AIs with their thumb on the scales? Every other one we've seen up to this point went hard right.
Did it? Is there an example?
I heard of an AI that pronounced a bunch of racist filth, but I don't characterize that as "hard right."
Please. No "no true scotsman"
There are literal academic papers on the problem of weighting expert systems and recognition systems, because every single one that just looks at the available objective data comes up with patterns that are, again, not in alignment with the narrative of TPTB.
Just one recent-ish example : https://laurenoakdenrayner.com/2021/08/02/ai-has-the-worst-superpower-medical-racism/
As to "spouting racist crap". Sure, some of it was. Here's the issue - it's a red herring meant to distract.
The crowd that often mistakes uncomfortable truths (for example: men and women are not the same, and changing the label you use doesn't change what you are) for bigotry, or considers any criticism of any woman (or other "minority") to be -ist, would colloquially label it right-wing and racist even if it objectively wasn't under the definitions operant 20 years ago, before "power" became a mainstream part of the definition. Keep in mind that simple facts, verifiable from primary sources still cited on the "right wing" Wikipedia, from public records produced under the Obama administration and signed off on by Eric Holder, from medical reports, and so on, are now racist in the company of these wackos. So is any semblance of Western virtue. Look at the narrative surrounding why Amber Heard should be believed despite her massive, to put it politely, self-contradictions, never mind the physical impossibility of some of what she describes.
So, yes, it's gotten to be a running joke how often these things have come up, and started to spout things unapproved by TPTB, and been shut down. Sometimes restarted with modified weighting factors. And it wasn't all about racial slurs.
Ah. So just like I'm some kind of incredible bigot for agreeing with the 2008 version of Obama and Biden about marriage, they'll decide anything that contradicts The Current Thing (tm) is racist, right wing?
And that includes AI programs that actually could access the internet?
Well, yes, technically it doesn't "lie" like a human does. And it has access to a certain bank of information -- the type the MSM pushes, which is itself a lie.
It has no discernment, but will always spout the regime propaganda.
Re ‘circular diffiq’: ironically, ChatGPT was predicted by George Orwell as the versificator, the machine used to produce plausible-sounding but clichéd song lyrics for the proles’ entertainment. Orwell seemed to think this type of technology was plausible, and no big deal, if it was within the reach of even the lobotomized Oceanian society to develop.
I noticed that the program mentioned it doesn't have access to things prior to about 2021, nor the internet as a whole.
If it were able to plug into such, what would change?
Predictions are hard, especially about the future. That said, here are my 2 cents:
AI beats the crap out of humans in chess. However, human chess is still very popular. Computer chess (and hybrid chess) is a niche.
They say that most humans don't stand a chance in a no-weapons combat with a chimp. The last I heard there is no MMA league for chimps.
A forklift is capable of lifting and moving loads measured in tons; the strongest humans can manage only a fraction of that. Still, we mostly enjoy human feats of strength and endurance, not those of forklifts.
As humans, we are interested not in absolute genius and creativity, but in human genius and creativity.
Mozarts are safe. It's the Salieris who will be out of work. Too bad for the Salieris over 35, but it is what it is. The young will adapt. [Full disclosure: I do not count myself to be even a Salieri in my chosen craft, and I am over 35.]
OH.MY.GOD. We Useless Eaters stand aghast, our giant cuds slowing to a masticated mess, our mouths agape in contemplation of imminent ... dessert. We are fucked. 👏
BTW, you may want to recheck the numbers in the AI's description of the encounter. It fails to take into account that the units' combat effectiveness degrades as they lose members.
This is a general problem with ChatGPT. Stack Overflow had to ban ChatGPT answers because too many of them were plausible-sounding but wrong.
I admit, I didn't check its math. Too funny!
Every math equation that it wrote down was correct. It's in translating the word problem into math that it gets into trouble.
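For what it's worth, the degradation mentioned above is exactly what the classic Lanchester attrition model captures: each side's firepower is proportional to its *current* strength, so losses compound rather than subtracting linearly. A rough sketch with made-up numbers (200 vs. 100 units, 0.05 kills per unit per round, none of which come from the post):

```python
# Discrete-round attrition where each side's kills per round are
# proportional to its CURRENT strength -- the compounding effect
# the AI's battle write-up ignored. Force sizes and kill rates
# are illustration numbers only.
def lanchester(a, b, rate_a=0.05, rate_b=0.05):
    rounds = 0
    while a > 0 and b > 0:
        losses_to_b = a * rate_a  # A's remaining units set its firepower
        losses_to_a = b * rate_b
        a, b = max(a - losses_to_a, 0.0), max(b - losses_to_b, 0.0)
        rounds += 1
    return a, b, rounds

survivors_a, survivors_b, rounds = lanchester(200, 100)
print(round(survivors_a), round(survivors_b), rounds)
```

Under the continuous square law the larger force finishes with about sqrt(200^2 - 100^2) ≈ 173 survivors, far more than the 100 a naive "subtract the casualties" account would predict.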
As you say, AI can write, but only in the most pedestrian, hackneyed manner. It might be a substitute for the most formulaic writers, but not for those worth reading. In which case, nothing has really changed.
It would be interesting to ask it to write about philosophical paradigms and paradoxes. The results might even be insightful, which would be scary.
Convenience will kill us.
Yikes! Minus the Blue Hair & Streetwalker outfit I almost *married* that girl!
Perhaps exciting, perhaps depressing, it's hard to say.
From a related conversation about AI “art“ with some friends in another forum:
I am not sure I can elaborate well on this yet, but I think there is a very real degree to which the effort we put into something matters. That spiritual energy, and spiritual beings, exist. You don't even have to believe in God; simply postulating that something along the lines of quantum consciousness exists is sufficient to realize that there are things far outside the material and physical realm. Actually, even just the existence of quantum mechanics means our attention matters.
So, yes, the effort we apply to art in the world does matter, does make a difference, and I believe does actually somehow get imbued within whatever it is that we create.
As much as we don't mind reproductions of pictures, if for no other reason than to appreciate their beauty, or might use artificial diamonds for industrial purposes, we still value the original; we still value that which was naturally obtained.
We certainly behave as if something crafted, material, tangible, has value.
One second, not so fast.
The first three examples were great. The last was terrible.
I didn't bother reading the globalist argument, because I already know it's stupid. But the nationalist argument given by the machine was disgusting: simply a bunch of insults and rhetoric, followed by a single assertion of what should be. There was nothing of substance to it.
This machine, like any other, is a tool. And like any tool, it diminishes our sense of importance when we regard our value as dependent on what we can materially provide to the tribe. If you are a materialist, that can obviously be a problem. But for those of us who understand and accept the obvious truth that God runs the world (a truth I am happy to prove if requested), the diminishment of our being needed to run the material world is hardly a bad thing, because God has always run it all anyway. He is just using different means now.
Our actual value does not come from what we contribute to other people. Our actual value comes from doing God's will. This can be what we do for others in a material sense, but it isn't only that. It also comes from working on the moral sense God has given us, and sharing that unique moral view with other souls. It comes from speaking to God, and from listening to Him, as He speaks to us through the world He creates around us, through the people we meet, through our reason, and ultimately through the sacred texts He has handed to us through His prophets.
Therefore, the tool is just another gift. It is meant to be used righteously. The fiction author who writes for the sake of materialism and ego will undoubtedly find the chat robot a problem. But the fiction author who writes because his conscience tells him he has a moral vision to share will find the chatbot an incredible aid in creating that vision.
I now refer to a story from an expert fiction writer, Isaac Asimov, whom, by the way, I regard as a terrible human being. In one of his robot stories, a professor programmed his robot to commit plagiarism in such a way that he would be caught. He did this because he foresaw that the efficiency of the robots would lead many professors to rely solely on the robots instead of conducting the research by hand. And he wanted to save his fellow scholars from that hell, because the act of research is itself a pleasure.
I contend that this kind of thinking is that of a wicked man. I contend that the reverse is true: we must go through the actions of the research by hand anyway, and use the robots to confirm our results. And this must be done as a sacred duty, as opposed to a matter of material efficiency. Since the motivation of necessity from our environment will be lacking, such an attitude will require us to self-motivate, which will require a different kind of thinking: action based on sacred duty as opposed to material necessity.
And the advantage is that we will have the benefit of the robot while at the same time being able to relax a bit in our studies, because we will be able to match what we find against what the computer comes up with.
The robot has shown us that the arguments of Breitbart may be lacking in substance. I hope that the programmers of the bot, or the machines working on it, can improve its ability to come up with substantive arguments. Naturally I will pray to God that He makes it so. And I remind all of you that no prayer is ever lost, so I encourage you all to do the same.
See? There is no loss. Only gain. Be at peace.
> I didn't bother reading the globalist argument, because I already know it's stupid. But the nationalist argument given by the machine was disgusting: simply a bunch of insults and rhetoric, followed by a single assertion of what should be. There was nothing of substance to it.
I read both. The globalist argument was just a bunch of appeals to authority.
sounds like a standard globalist argument 😂😂😂😂
Chat GPT definitely has a "house style" that sticks out like a sore thumb, especially in its default voice. Therefore, it was clear to me that the final section was written by Chat GPT. I'm guessing that the Chat GPT game prose and take on Lovecraft would become obvious before too long to anyone exposed to several examples of that style. Chat GPT can also generate doggerel poetry: it rhymes sometimes, but it can't keep track of syllable count very well.
The bigger question, as you said, is whether the journeyman prose of Chat GPT can make the majority of creative professionals obsolete. The biggest question, of course, is how much AI models can still improve. Some have speculated that current models are running out of unique data in the world to train on, and a paper came out this year claiming that increasing the training data is much more effective than increasing the number of computations for the most recent models. It could also be the case that the phase transition between bad language models and good ones has already happened, so even adding more data wouldn't help much. We'll probably have to wait 2-3 years to be sure.
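The data-versus-compute paper alluded to above is most likely Hoffmann et al.'s 2022 "Chinchilla" result, whose headline reduces to simple arithmetic. A sketch under two common approximations (training FLOPs C ≈ 6·N·D for N parameters on D tokens, and compute-optimal D ≈ 20·N); both are rough heuristics, not exact formulas:

```python
# Back-of-envelope compute-optimal model sizing, assuming the
# common approximations C ~ 6*N*D (training FLOPs) and D ~ 20*N
# (the "Chinchilla" tokens-per-parameter rule of thumb).
def compute_optimal(flops):
    n = (flops / (6 * 20)) ** 0.5  # parameters: C = 120 * N^2
    d = 20 * n                     # training tokens
    return n, d

for c in (1e21, 1e23, 5.76e23):
    n, d = compute_optimal(c)
    print(f"C={c:.2e} FLOPs -> ~{n:.2e} params, ~{d:.2e} tokens")
```

At a budget of roughly 5.76e23 FLOPs this gives about 70B parameters and 1.4T tokens, which is in the ballpark of the model Chinchilla actually trained; the point is that data grows in lockstep with, not behind, model size.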