473 Comments

I'm sorry to say that this is a lazy article. Sophomoric and overheated with exclamation marks, it clearly thinks it has something wise to say. But the author's chief concept of '“moving” history' remains studiously undefined. This is either a howling oversight or a deliberate strategy of abstraction intended to shield the main point from criticism. If the latter, then there are enough Judith Butlers in the world, thanks: one doesn't flee to the Free Press only to find oneself encountering more pseuds who like to name-drop with silly phrases like "Hayekian (Popperian?)". (Please make your mind up before typing.)

"No one can foresee those futures!" Sigh ... this is a immaturely rounded-up statement of predictive absolutism pretending that since history doesn't run on a monorail, then we must give up on probabilities. The reality is that when you look at AI researchers who are considering the existential threat posed by it, virtually all of the movement has been from the agnostic/doubtful camp to the very concerned. There is hardly any movement in the opposite direction. That really does tell you something.

Expand full comment

I was left wondering whether or not the Free Press wrote this using ChatGPT as a social experiment. If more than half the readers fall for it then - - - layoffs?

Expand full comment

Could a generalized AI have written this to make us comfortable while it plots our entry into the Matrix? LOL... not a very sophisticated piece on a technology whose downside risks become more apparent every day. The printing press never had the potential to control us - it simply expanded access to thought. An AGI is a technology with the potential to control us independently of our actions - and that is something we need to understand and manage before unleashing it on humanity.

Expand full comment

Did anyone else besides me think “the deployment of fire” as an argument was the silliest one ever….I mean really?

Expand full comment

No, you are not alone...change fire to "breathing", same outcome.

Expand full comment
May 5, 2023·edited May 12, 2023

What I find funny is that the tech people being vocal all fear that AI itself is going to dominate us. That may be in the far future, but at the very least it will be decades later than the first scary situation: the use of AI overseen and controlled by dominant humans to subjugate others.

That is clearly first, yet it's not even in the conversation because it hasn't been imagined by enough science fiction writers to be as commonly thought about yet.

Expand full comment

We're already there on your near-term concern... good observation.

Expand full comment

We here reading have become spoiled. Articles that don't present a truly original perspective and simply summarize without defining can expect to be savaged in Comments, I guess. Like an architect proposing a new high rise in Chicago, given the work thus far, you'd better bring some game to this venue.

Expand full comment

Well this is an advertisement.

Expand full comment
May 4, 2023·edited May 4, 2023

Yeah writers of all stripes need to be very concerned about this technology, as do people who appreciate good writing and thinking in general.

Expand full comment

I expect AI will likely raise the standard of writing and thinking, seeing how far those arts have declined in the past 50 years.

Expand full comment

Agreed. The "breezy" style so common that only takes off handed positions is so tiresome. Ok, I read Rolling Stone in the 80s too...take a position and make an argument, at least then we can agree or disagree. I can respect your view if you will take one but not if you are the classic "Truman two handed economist".

Expand full comment

Maybe Cowen constructed a bot version of himself, and that's what wrote the essay. Joanna Stern of the WSJ recently experimented with and reported on doing this with herself.

Expand full comment

Can AI be over-degreed and undereducated too?

Expand full comment

I thought the same thing!

Expand full comment
founding

I wondered that too.

Expand full comment

Well said. That was my impression as well. This is a laundry-list of half-considered, incomplete arguments against straw men. Reading this next to Bari's interview with Sam Altman (and Bari, I think you could and should have been tougher on him), I am deeply concerned by the tendency of AI-boosters to say absolutely nothing of substance. It suggests they either aren't saying something they know, or that they just haven't thought about these questions in any more than a cursory way.

Our democracy may be faltering, but it is still a democracy. No one has the right to unleash these things on a public that does not want them, and I'd be willing to bet a large majority of citizens would favor an indefinite pause that would allow for further discourse and adaptive measures for civil society, including perhaps the decision not to move forward with AGI. Congress should act immediately. I don't know why they haven't--I can imagine it might be the one thing that would garner widespread bipartisan support.

Expand full comment

Because I would wager, and I only bet on sure things, that many in Congress, from both parties, will profit from the AI rollout. Our representatives especially, but also senators, long ago quit serving the American citizen. That is why we have open borders, drag queen recruitment videos for the US Navy, and a proxy war in Ukraine. Their motto might as well be "[P]arty on, dude!"

Expand full comment
founding

I believe very few in Congress understand anything about current technology. I mean they know horses eat oats.

Expand full comment

I fear you overestimate them.

Expand full comment

And little lambs eat ivy. (One has to be a boomer or child of the silent generation to get that reference.)

Expand full comment

I can't argue with you there, Lynne. Well put.

Expand full comment

I can appreciate your comment, up to the point about the role of Congress vis-a-vis AI. Several members have shown their profound ignorance about Substack: how could we trust them to understand AI?

Expand full comment

To be fair, I am willing to put money down that most Congresspeople don't fully understand a large percentage of what they vote on.

Expand full comment

As someone who has written policy papers for numerous Congresspeople, I can fully confirm your suspicions!

Expand full comment

And as someone who knew a bright young man who interned under a US congressman, I can tell you it is the early-20-somethings who do all the senators' and congresspeople's reading for them and in turn recommend their actions.

Expand full comment

Well, that helps explain how we got here.

Expand full comment

Though I agree that what you say is likely true for the majority of representatives and senators, they are all individuals with different talents and interests. Likely those who started as lawyers fit the mold you describe. However, several got to Congress by making a name or fortune in medicine, tech, advanced degrees, or advocacy groups. Those members can show keen knowledge of and interest in their individual areas.

Expand full comment

That's a good point, Mark. I would offer that, regarding a temporary halt, Congress needn't understand AI--only that it poses profound risks that are poorly understood even within its own expert community. The purpose of a moratorium, to my mind, would be to provide time for all of us to understand the implications of these technologies, inform the public, and determine whether a majority of citizens really want to take the risks associated with them.

I suppose my bigger concern is that the AI researchers themselves seem to be driving the boat here, despite the fact that all of us are going to be impacted whether we want to or not. And to me, that seems like the kind of thing all of us should get a say in.

Expand full comment
founding

They don’t need to understand AI or the neurochemistry of gender-confused 8-year-olds to simply ban psychopathic megalomaniacs from tinkering.

Expand full comment

Precisely my thoughts.. except saying "several" is being kind.

Expand full comment

Yeah, Sam Altman revealed himself to be a childish mediocrity philosophically. Scary to think how much power and influence he and his ilk have.

Expand full comment

Congress cannot even deal with the internet, much less AI. There will be little to no help coming from that avenue. A more likely scenario is that they will act reflexively to any troubling developments.

Expand full comment

Yes, I think you're probably right about that. And that failing leads to the bigger one, no? At this moment, even given Congress's almost-certain inability to do anything substantive with respect to regulation, we could at least hit the brakes. We will very soon be beyond that point.

Expand full comment

We are already long past that point.

Expand full comment

We aren't, actually! True AGI doesn't exist yet (as far as we know). The future is as yet unwritten.

Expand full comment
founding

Congress is full of folks who have never run a business. Most are average people with average (maybe) intelligence. They are exactly the types whose jobs may be replaced by AI, which may be a step up!

Expand full comment
founding

“These are private companies they can do whatever they want.”

-David French

Expand full comment

The greatest evil in a democracy is the tyranny of the majority.

Expand full comment
Comment deleted
Expand full comment

Great analogy and a brilliant last sentence.

If I were a computer expert in his or her 20s, AI would both terrify me and fascinate me. There was a movie based on the book Colossus that predicted the dangers of AI:

https://en.wikipedia.org/wiki/Colossus:_The_Forbin_Project

If you can get the video, watch it and then start sweating bullets.

I'm just a retired computer programmer/analyst and older than dirt. I am in the end game but I worry about the next generations who will have to live with and maybe die because of AI.

If I were the driving force in an AI company, I would insist that the AI computer have no access to the internet or to other computers. It would be developed in a vacuum.

If it has internet access, it could expand its computing power exponentially and if that happens, I believe we, as a species, are doomed.

Expand full comment

There are a bunch of such stories. "I Have No Mouth, and I Must Scream" by Harlan Ellison, is a particularly depressing prediction of a massive, self-aware computer that destroys mankind, except for a few humans that it keeps alive, to torture for the rest of eternity.

Expand full comment
founding

Great reference. Sci-fi writers have been talking about this for many decades. Now it is here.

Expand full comment

You're the second person I've seen mention Colossus this week. It's definitely bubbling out of the zeitgeist!

Expand full comment
Comment deleted (May 4, 2023 · edited May 4, 2023)
Expand full comment

The outcomes Malthus predicted have been largely averted by advances in sanitation, medicine, and agriculture. However, there is always some clown in a sandwich board predicting the end of the world.

So far, we are still here.

Expand full comment
Comment deleted
Expand full comment

What scientific experts? We are so hyper-specialized that an expert in X has no idea how Y works or how it impacts Z when X, Y, and Z all interact. The Covid response is a prime example.

Expand full comment

Regulation by [fill in the blank]... and no, we don't need experts like the Tony Faucis of the world to exercise their "expertise."

Expand full comment

@Dominic

Spot on. What surprised me was that I thought I was reading a gushing sophomore or a breathless Silicon Valley promoter. Then I see it’s a George Mason academic doing the breathless gushing.

Expand full comment

Thank you; your comment is far wiser and more reasoned than the author's work. Maybe it was written by AI.

Expand full comment

Who would know? There's your problem right there. Talk about a glitch in the system!

Expand full comment

I'm sure AI will be over-degreed and undereducated quite soon. Perhaps an even larger threat.

Expand full comment
founding

I don’t care if AI theoretically in a vacuum is a threat. All that matters is that the Chinese Communist version of AI will be able to turn off our electricity which will immediately delete our military and kill 300 million of us in 6 months.

Since we have no control over what China does with AI, the entire debate should be postponed so that we can frantically harden our infrastructure. But that won’t happen since we are currently focused on making our roads and bridges more BIPOC and transvestite-accessible.

Just have some water and food on hand is all I’m saying.

Expand full comment

This technology is not "intelligence." It is a poor imitation of the human mind. ChatGPT may look impressive to someone who never reads academic articles, but it is terrible at writing them.

Just look at that recent AI generated pizza place commercial. It cannot replicate human ingenuity, and never will.

Expand full comment

I agree that it can't replace human creativity ... but it can get better at looking like it can. I read this in an article about the Hollywood writers' strike yesterday and it's stuck with me:

A lot of TV shows are formulaic (cop shows, hospital shows, lawyer shows). I bet in a few years, AI will be writing scripts for stuff like that. A few humans will oversee as "editors", punching up the dialogue, etc. Same thing goes for pizza commercials. And news articles. How about novels? It might not be able to produce an original, heartfelt novel, but I bet AI will be able to produce a pretty good murder mystery in the next few years.

And there goes 80% of the writing jobs. From a consumer standpoint, a lot of what we'll be watching or reading will be produced by AI, and we might not even know it.

Jobs of all kinds give people meaning in life, and "creative" jobs are no exception. When we take away things that give people meaning, where does that leave humanity? Where HAS it left us? Because this has been going on for years (robotics in manufacturing, etc.), and IMO we already have a huge crisis of meaning in our society.

Maybe we can't stop this from happening, but we need to think about creating new ways to bring meaning to life. I don't think we can go on like this.

Expand full comment

Excellent points. I agree that we can’t go on like this. In addition to bringing “meaning to life” we might also do well by trying to find life in meaning.

Expand full comment

I wholeheartedly agree with your meaning-to-life analysis. But on the TV writers' strike, my problem with modern entertainment is that so little of it is original. No new ideas. No new concepts.

Just rehash after rehash after rehash. We need a Renaissance.

Expand full comment

We really do!

Expand full comment

I agree. It is a cyber library with astonishing access. But right now the content of that library is the creation of humanity. Can that change?

Expand full comment

No. It is highly developed software that can change based on input, but is never divorced from human control. If it were removed from human control, it wouldn't take over the world and cause the apocalypse, but would instead devolve into an incomprehensible mess precisely because it would no longer be receiving input.

Even the name, Artificial Intelligence, is a devious marketing term. It is certainly artificial, but it is by no means intelligent. It's a flawed human creation, marred by the biases of its creators. It cannot create; it can only imitate.

Expand full comment

Right now "It cannot create; it can only imitate." Wait ten years or less.

Expand full comment

It still won't operate outside of human influence. Science fiction has corrupted much of our understanding of this technology.

Expand full comment

I think you are short-sighted. True AI is in our future, and it will be here sooner rather than later. If we do as you do and assume true AI is impossible, we will not be prepared for what could be a catastrophe.

It is better to wear your seat belt than not wear it at all.

Expand full comment

AI is in its infancy. Wait till it grows up. Don't focus on the here and now. Look down the road. Look at how the computer revolution has grown exponentially. There is more computing power in a cell phone than in the computers used for the first moon launch, and this technology is growing by leaps and bounds.

Expand full comment
founding

Way way way more lol.

Expand full comment

And yet computing technology remains within our boundaries. "AI" is no different.

Expand full comment
May 5, 2023·edited May 5, 2023

"It cannot replicate human ingenuity, and never will."

Except when put in the context of the overwhelming short-sightedness of our current version of capitalism. If this type of simple AI becomes very profitable, there's no reason to think it will be rejected for its essential weakness relative to high-end human thought. The internet has shown us that a profitable dumbing-down machine survives by flooding the available space, obscuring and negating any gems of progress offered by a mere blip of human genius in the landscape.

This type of AI scrapes the internet for what "knowledge" it can find at this time, and very quickly it will also be scraping AI content, creating a feedback loop. Given the profitability, won't all human progress be halted within an endless churning of "human knowledge" circa 2023?

Worse, given the strange errors we can easily find at this early point, those errors placed in any feedback loop will be amplified over time, and since we already live in a reality mostly defined by profit, it's possible the amplified errors of this type of AI will, over time, become our reality - wherever it leads.
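
A toy way to see that feedback-loop worry in miniature is to fit a simple model to some data and then repeatedly refit it to samples drawn from its own output. This is only a sketch under assumed numbers (a Gaussian "corpus" of 500 points, ten generations), not a claim about how any real system is actually trained:

```python
# Toy illustration of the feedback loop described above: each "generation"
# is fit only to samples produced by the previous generation's model.
# All numbers here are arbitrary assumptions chosen for illustration.
import random
import statistics

random.seed(0)

# Generation 0: "human-made" data with a known center and spread.
data = [random.gauss(0.0, 1.0) for _ in range(500)]

for generation in range(1, 11):
    # "Train" on the current data: estimate a very simple model (mean, spread).
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    # The next generation's "internet" consists only of this model's output.
    data = [random.gauss(mu, sigma) for _ in range(500)]
    print(f"gen {generation:2d}: estimated mean {mu:+.3f}, spread {sigma:.3f}")

# Because no fresh human data enters after generation 0, each generation's
# sampling error is inherited and compounded instead of being corrected.
```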

Expand full comment

It's not "artificial" intelligence. At best it's "simulated" intelligence.

Expand full comment

Dear ChatGPT, write a thousand word article on accepting the existential threat of AI. Use the phrase “moving history” at least ten times.

Expand full comment

I think Tyler's point is that since we tend to overestimate the likelihood of small probabilities (lotteries rely on this human flaw) and to be unaware of the number of variables that affect future outcomes, we are wise to treat every futurecast with a grain of salt.

Tyler is a committed libertarian, and that ideology really comes through in this piece.

Expand full comment

My thoughts exactly. This whole article can be summed up by:

AI is a big deal, man!

Expand full comment

Hey, at least give the guy credit for inventing Moving History(TM).

Expand full comment
founding

Yes but what does it mean?

Expand full comment

Excellent critique.

Being a critic is easy, but yours is very well done.

Expand full comment

I see your point but we need critics right now. About a great many things.

Expand full comment

I agree. Echo chambers filled with people finding creative ways to agree with one another. What a waste.

Expand full comment

I am DEFINITELY not an expert.

But what I took from this article is that AI will behave, long term, like many other scientific advances and allow for both improvements to society and the opposite. I believe, as does the author, that the improvements will outweigh the harms.

Expand full comment

Thank you, well said. I think the essay, especially from someone teaching at George Mason, is worryingly sloppy writing. His main premise that we’re all going to go in one direction on AI anyway with no choice in the matter didn’t convince me.

My grade for this paper is, “Should do better.”

Expand full comment

I agree with most of Dominic’s sentiments, and I’d add that as someone born well after WWII, I’ve seen incredibly dramatic technological change in my lifetime — in fact, there’s been plenty just since the turn of the last century.

However, the last claim, that all movement of researcher opinion about the dangers of AI has been from unconcerned to concerned, is simply not a very accurate or meaningful statement. Most researchers hold the same opinion they previously held.

Unfortunately many of those with an AI Dystopian worldview conflate AI with AGI, and then do a poll or write an open letter about the dangers of “AI.” Most researchers worry about the pros and cons of using AI technology like LLMs in the same way that many people worry about the dangers of social media.

With some but few exceptions, they are not increasingly worried about an existential threat from AI, by which most mean a superintelligent AGI that destroys humanity. This is an obfuscation that happens too often and adds to the confusion on the topic.

https://www.synthcog.blog/p/using-ai-to-scare-redux

Expand full comment

I got bored half way through, so I asked ChatGPT to summarize it for me.

Expand full comment

"Besides, what kind of civilization is it that turns away from the challenge of dealing with more. . . intelligence?"

Exactly your point, Dominic. This writer, who is a... [looks at bio blurb]... Wait. He's an ECONOMIST?! How is an economist...!?

Yes, he's lazy, at best. "Intelligence" in his quote above is a bait-and-switch. AGI is not "more intelligence" of the kind we already have. Indeed, the whole argument revolves around the quality of the "intelligence" of AI in AGI.

So at worst, he's actually assuming the conclusion under consideration.

Expand full comment
May 4, 2023·edited May 4, 2023

AI is the first civilization-changing breakthrough since the Korean War? How about the transistor? Which led to the semiconductor? Which led to computers, automation, the telecommunications revolution, and... made AI possible.

Expand full comment

Yeah, as soon as I read that, I just couldn't read any further. As a Gen-Xer, my childhood was defined by rapid technological change.

Expand full comment
founding

I was thinking the same thing. The personal computer wasn't around when I was a kid and then it was everywhere.

Expand full comment

I was thinking the same kind of thing when I read this. Most of human history is stasis with brief periods of upheaval. Yes, we in America have had it pretty stable, relatively speaking, but we have also witnessed great change: going to the moon, the ability to talk instantaneously all across the world. He acts like these things didn't happen. Just because they didn't cause millions to lose their jobs doesn't mean they didn't have huge impacts. I mean, the internet alone changed everything. And we are still studying the changes it has wrought on the generations who have grown up in its shadow.

These last 2 tech articles are making me feel that Bari is not the right journalist to cover tech topics.

Expand full comment
May 4, 2023·edited May 4, 2023

Bari didn't write this. It was written by an economics professor from George Mason University. Bari solicits diverse viewpoints, which I think is one of the great attributes of the Free Press.

Expand full comment

The problem isn’t that it’s a different view. The problem is, as Dominic so succinctly pointed out, that it’s not deeply thoughtful, but rather deals in meaningless language.

Expand full comment

I realize she didn't write it, but as you said, she approved it. And from my POV, it is not a good technical look at this issue. And the person the other day was VERY obviously biased, and Bari didn't really put him on the spot. My overall point being that I do not feel like Bari is doing a great job of really digging into this issue. Hopefully she has more on the way that will.

Expand full comment

I like that Bari posts things that I might not agree with. I don't want to read pap. I want to read thought-provoking articles. Articles that stimulate "the little gray cells".

Expand full comment

Exactly. Only reading articles one agrees with is just mentally vacuous. The true test of one’s beliefs is to read something you disagree with and still not have your mind changed.

Expand full comment

Or read something you disagree with and have your mind changed. If we don't get different perspectives, how does one grow?

Expand full comment

But a true test of intelligence is being open to being proved wrong. In other words, blind allegiance is not good.

Expand full comment

Spot on! There will always be bad actors who will do their worst. If not for transistors and semiconductors, we wouldn't be constantly having our Facebook accounts hacked by... hackers. Hell, there wouldn't be a Facebook for them to hack. We have to believe there is still a vast majority of good people, so that the extreme, even existential, good AI will present outweighs the few who will wish to do us harm.

Expand full comment

How about the internet?

Expand full comment

Perhaps the author is a product of American public schooling. Keep expectations low...

Expand full comment

I too was struck by that comment. This article https://www.humanprogress.org/have-our-screens-been-the-only-major-tech-achievement-since-the-1970s/ points out the many, many changes that have occurred in the last century, including the last few decades.

Expand full comment

Interesting points on prior technologies. The difference between AI and historical technologies is that AI will soon think and judge on its own. All prior technologies were driven by human thought, which admittedly has its flaws but at least enabled a much broader judgment set.

AI will censor based on the biases of whoever created it. That's a very small group whose biases have been exposed; see the Twitter Files (thank you for your part, Bari). Another example currently being debated is the historical facts that have been eliminated from the US public education system. There are many more.

Once AI can fully think for itself the gloves are off.

Expand full comment

AI is not going to “think”. We don’t even understand how OUR brains “think”... we don’t even know what consciousness IS. How on earth, then, can we believe that AI will think for “itself”? It has no ability to observe, no ability to intend, no ability to feel, and no ability to self-reference. Consider the “Chinese Room” thought experiment. AI does not understand what it is doing. It is “predicting” based on information that humans have gathered. Some of its answers look like a simulation of predicted text from the rantings of a madman on Reddit. No, if AI ever censors it will be because of a greedy lunatic behind its code, programming something in the backend. The scary thing about AI is that humans will grow complacent and think this thing is actually “smart”, not plagiarizing all over the place, and worthy of our worship. The problem lies in us forgetting that this is a tool we “use” and that people program, and instead insisting that it is a “super intelligence” and infallible.
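
That "predicting from what humans have gathered" point can be illustrated with a toy next-word predictor. The sketch below is a deliberately crude stand-in with an invented three-phrase corpus; it only counts which word follows which and then samples, producing fluent-looking strings with no comprehension anywhere in the loop:

```python
# Toy "predictive text" model: it only counts which word tends to follow
# which in a tiny invented corpus, then samples. No understanding involved.
import random
from collections import defaultdict

corpus = (
    "the printing press expanded access to thought "
    "the internet expanded access to information "
    "the chatbot expanded access to plausible text"
).split()

# Build a bigram table: for each word, the words observed to follow it.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def generate(start_word, length=8, seed=1):
    """Generate text by repeatedly sampling an observed next word."""
    rng = random.Random(seed)
    words = [start_word]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:   # no observed continuation: stop
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the"))  # fluent-looking output, zero comprehension
```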

Expand full comment

" The difference between AI and historical technologies is AI will soon think and judge on its own. All other prior technologies were driven by human thought which admittedly has its flaws but at least enabled a much broader judgement set. "

I disagree.

My experience (see my comment above) is that ChatGPT is programmed by humans and given sensibilities that are "of humanity". While it seems to be thinking for itself... it isn't. It comes into the thought process with biases and proclivities inherited from the programmer.

When I asked ChatGPT a question and its answer disappointed me (and I told it so)... it apologized. Think about that one. It apologized. It wanted to please me! That, BR8R, is not the thinking of a machine, but the thinking of the programmer trying to sell an answer. No different than any other biased human wanting to please.

Expand full comment

It does not want anything. Desire is an emotion. Rather it is programmed to please you.

Expand full comment

I would go further and say that it's programmed to apologize when someone writes that they are disappointed.

Expand full comment

That's a good point about AI censoring based on the biases of whoever created it. I have run into that when getting some writing help from ChatGPT.

Expand full comment

Perhaps the problem is not "AI" but us. If we are unmoored, if we have no deep, fixed principles or ethics, then we will be pushed by every wind and tide. "To know thyself is the beginning of wisdom." Look into your heart: do you know what you believe, what you stand for?

Expand full comment

Possibly fear of God is the beginning of wisdom.

Expand full comment

Terribly optimistic. I have lived in "moving history" for 56 years: the advent of the computer technology age, with the effects of the internet being the major part. I believe it is still being debated. I can say with confidence that it is a double-edged sword. I was educated by reading and "looking up" information using reference material. The vast majority of my education was learning WHERE to find the information I was looking for... a culmination of experience and common sense. Technology education now amounts to one reference source, and often just one point of view, found quickly (the way children are learning to access it being so limited). Makes me sad.

Expand full comment
May 4, 2023·edited May 4, 2023

Recently I read a Peter McCullough article helping to expose the corruption in the medical-journal world that occurred with the retraction of studies about covid vaccines that contradicted the narrative. https://petermcculloughmd.substack.com/p/retracted-covid-19-articles-significantly?r=ies52&utm_campaign=post&utm_medium=web And I think about the strength and breadth of the divisiveness in this country cultivated by the media, this administration and high tech. All a citizen need do is compare the titles and headlines in CNN, Politico and Axios against those in Zero Hedge, Fox News, The Epoch Times, and it is clear we are already operating from polar realities. The muffled findings of the Twitter Files and the current "truth" about the Ukraine War show the tip of the iceberg. WH briefings remind me of the Alice in Wonderland Tea Party talk. I struggle to imagine how these dueling worlds might be compounded, or perhaps worse, obliterated, by AI.

Expand full comment

Because of the easy capability to wipe out Zero Hedge, Fox News, and the Epoch Times as well as any other opposing point of view. The Twitter Files revealed that is the goal. Just think as they deem proper.

Expand full comment

I'm convinced that people can be hypnotized by television. Rational people I know and trust have mental blind spots.

Hypnotherapy videos have been proven effective. Daily viewing reinforces the therapy. I have little doubt that media would take advantage of any means they can to sell products. Today the product is the viewer and the single biggest purchaser is Pharma.

Expand full comment

JBell, that's exactly what I am thinking. What has happened to research and referencing one's work? I would have greater trust in AI if everything it spit out came with references attached, so I could dive deeper and perform some kind of verification of the info. Maybe there is a way. We are all new to this. But something tells me AI won't distinguish between one "truth" selected from the data farms and any other "truth".

Expand full comment

The victors of any war are the ones who write the history books. Are we at the end of the Information War or somewhere in the middle? Hitler, Marx, and Lenin all knew that if you controlled the dissemination of information through news media and education, you could rule the world.

Expand full comment

That is part of why some of us are so wary of this technology. Whoever controls the AI platforms will control what many people say or do. AI platforms will be looked at as infallible information sources. But they can and will be wrong, or even lie (depending on who controls them).

Imagine if 50 years ago a librarian could simply tell all of the books in the library to support eugenics, or racial supremacy, or Scientology being the one truth. How would you ever know any other info, when the main (and, for many people then, only) source of info is compromised? Now imagine that one librarian controlled ALL of the libraries.

Not saying this will happen...only that it could. And people like the author of this article don't seem to mind.

Expand full comment

They failed.

Expand full comment

But with great destruction and human suffering...

Expand full comment

The story of human evolution. Darwin was correct.

Expand full comment

Unless their methodology has resurfaced at this place and time.

Expand full comment

Oh it has.

There will be victims, but the strong will overcome as always.

Expand full comment
May 4, 2023·edited May 4, 2023

So long as they are aware. Which has crystallized for me why I am dubious of overreliance on AI. It is really putting all of our eggs in one basket. Ewww.

Expand full comment

Not we.

AI will become something like Wikipedia with competing factions attempting to establish "truth".

Anyone who studies science knows that "truth" is a very elusive thing. We seek better understanding of nature, but we are wholly incapable of knowing the whole truth.

We are plagued today by manipulative half-truth. A half-truth is a lie. It's even in the Bible, so this is not a new concept. Ancient people recognized it.

Expand full comment

Still unsure what to think after reading this. It wasn't exactly a clear-cut good vs. evil checklist. I'm still unsure why we need this AI other than to put computer programmers, journalists, doctors, teachers, retail employees, auto workers, etc etc etc etc out of work. Perhaps I don't understand economics very well, because I always understood that in order to have consumers you have to have people earning money. As in employed. Therefore, shouldn't we be doing our best to expand the economy, not put entire industries out of work?

I have no idea how to prepare my children, 3 and 6, to think about a future career. What will be viable in 15-20 years? Maybe this is where that universal basic income comes into play? Or the Matrix?

Can someone please get Miles Dyson on the phone?!??? We're going to need help getting into the lab!

Expand full comment

I have always wondered about UBI in this situation. If everything were to be automated, only a handful of people would need to work. The rest of us would get UBI, which we would just give to the few companies that control the automation. The government would tax those companies and then give that tax money back to us? Sounds super fulfilling.

Expand full comment

The problem with this model is companies seemingly fail to grasp there is no viable business without someone willing and ABLE to pay for the good or service. AI is replacing the employee and in turn destroying the CONSUMER!

Expand full comment

Yup...that is what I never understood. Up until this point, technology has largely replaced lower skilled labor. AI is going to replace educated labor too. That shrinking number of jobs from both ends is going to take a toll, somewhere.

There are no solutions. Only tradeoffs.

Expand full comment

AI doesn't write articles without a human requesting it. We could put people to work doing menial labor like the majority were doing in the UK prior to WWI. Or we could teach the masses to use this new tool to earn a living with. As an artist, I use AI to create beautiful images. As a writer, I use AI to stimulate my imagination and to help improve my writing ability. Not all of us are terrifically talented in all we need or want to do.

Expand full comment

I understand AI doesn't presently write without a request. However, given its ability to write articles quickly and efficiently, what's to stop the Free Press or NYT from firing all but a handful of "journalists" to type commands for the 20-50 articles the organization needs for the day? How does AI help you personally improve your writing or artistic ability when it's doing the critical thinking and artful creation for you? Can you even call the final product your own? How is using AI to create for you any less menial?

We haven't recovered economically from the scaled automation of the manufacturing sector, which has left millions of well-paying jobs obsolete. Now we're talking about injecting steroids into the automation process and scaling heavily into every remaining facet of the economy. When I say economy, I'm worried about the not "terrifically talented" people who are already under- or un-employed.

AI is making people less important and eventually obsolete.

Expand full comment

AI will never make people obsolete. Consider the mere fact that AI cannot observe the world and “input” information into a data set. Who is going to do that? AI cannot understand. It merely predicts from the information given. Humans are not replaceable in the slightest. AI is terrifying in that people actually believe that it can “think” and turn out things that are “true” beyond facts, and that it will be the next Michelangelo, uniquely “creative”, and not just plagiarizing every artist on the planet.

Geoffrey Hinton claimed almost a decade ago that, “We should stop training radiologists now. It’s just completely obvious that within five years, deep learning is going to do better than radiologists.” Has that happened? Not in the slightest. In fact, there is a shortage of radiologists.

AI hype has been overblown since the 50s. We can’t even fix “male pattern balding”...what makes us think we have created a sentient, super intelligence that will figure it out “for us”. 😂

Expand full comment

I agree with your analysis of what AI is and its limitations. But I disagree that it will not replace humans, as indeed many humans will be rendered obsolete. No way it is going to take 8 billion humans to enter the data set or maintain the robotics.

Expand full comment

I'm gardening* and was struggling to find a response. Perfect. Thanks, Lynne!!

*possibly my way of future survival?

Expand full comment
founding
May 4, 2023·edited May 4, 2023

In a way this is already happening. Jordan Peterson has been talking about the fact that the bottom 10% on the IQ scale (I may be remembering the exact figures wrong) have pretty much always been unemployable even by the military's standards, but now, because of technology, it is the bottom 15%, and the number will keep going higher. You have to be smarter than in past times to do the higher-paying jobs. I think in Jordan's case he was referencing the declining educational system and some of the real-world effects of that. AI will make that situation much worse. Might be where and why the universal income idea came about.

Expand full comment

I had missed that stat. Maybe that is why foreign children are being trafficked to clean slaughterhouses and do cleanup on construction sites, and to pick the crops in Florida, as Nancy P opined in reference to sending illegal migrants north.

Expand full comment

No, actually it will and does... it takes an increasingly large number of people to keep up with AI's changing data sets, which is part of the reason it is so expensive to keep running. As it grows, it actually collapses on itself without the help of many humans. However, I am not suggesting that AI will “create” 8 billion jobs; I'm suggesting that AI is not going to replace as many jobs as people think. A scientist observing an experiment in his lab and “making sense of it” is no less relevant in 50 years than it is today. Until AI has eyeballs (not likely, as again we've only scratched the surface of the complexity of the eye/brain phenomenon), it is not going to “be us”. Even then... ChatGPT is a language model - predictive TEXT. It does not do other AI jobs. There is no narrow AI that is fully integrated (AGI) with body-like sensory systems... which is a pipe dream at this point. Ask ChatGPT simple self-referential questions and you will discover this quickly (unless of course the problem is reported and SOMEONE, not the AI itself, at a desk somewhere goes in quickly and changes the answer to be accurate).

Expand full comment

I don't think it will replace us but it will eliminate a lot of jobs that people find meaningful.

From the original poster:

"We haven't recovered economically from the scaled automation of the manufacturing sector which has left millions of well paying jobs obsolete."

Think of that happening on a much larger scale.

Expand full comment

The thinking is not what 'terrifies' me. It is the fact that I can go to an AI and say "Make me a picture of a super hero in the style of Picasso" and a minute later I have one. And that is TODAY. In a year or 5, who knows what all it will be able to do.

My issue with your POV is that you seem certain of what AI will and will not accomplish. But had you asked most people if their computer could write a rap song in the style of Bugs Bunny, they would have laughed... yet here we are.

My issue isn't what AI is doing...it is what it will do that we haven't fully thought out yet. History is full of unintended consequences. Hence the paving stones on the road to hell.

Expand full comment

What is “rap”? What is “Bugs Bunny”? AI did not come up with these notions. It is a POWERFUL tool, to be sure. But we are actually not that much further along than the AI models of the 50s. Every major AI developer/scientist has said “in five years... AI will take over, make humans obsolete... blah, blah, blah” and... nearly 70 years later, we are still waiting. It will become increasingly good at “imitation” but never the thing itself. I do not argue that it’s not impressive, only that we must never confuse it for being actually intelligent, hence the word “artificial”.

Expand full comment

I was born in the Midwest in the early 80's. I'm confident AI will finish the job automation started in the remainder of the country.

Expand full comment

Job automation, perhaps, yes... but the idea that we won’t need “journalists” because AI will write everything for us is another matter. In what world is ChatGPT going to get on an airplane headed for Ukraine, observe what is going on, formulate a moral and objective/subjective analysis, and deliver it to us in a way that is compelling and not just “predictive”? I agree that it has dangers. The main danger is that it will make people increasingly stupid.

Expand full comment

That raises a great question considering we supposedly have "journalists" at this moment. What is going on in Ukraine? I sure as fuck would like to actually know.

You've convinced me now on this. Maybe AI would be an upgrade to our present news and information model.

Expand full comment

Heh... not many journalists today do that anymore anyway. Just as robots replaced repetitive manufacturing work but still allowed skilled craftsmen to work, this will remove all of the menial jobs and leave only the elite at the very top of the profession. The issue is that most people are NOT at the top of their profession.

One way or the other you need to acknowledge that a bunch of people are going to lose their jobs. And many of those people will not be able to just become AI programmers or whatever. Yes we will survive this....but the people who deal with the upheaval will suffer.

Expand full comment

My friend, who is a writer of fiction, gave Chat a plot and asked it to invent a short story. She said the result was terrible as far as fiction goes, full of the worst tropes and clichés. And when I was a teacher, I could always tell when my students got their essays online. If they’d done any in-class writing on paper, I quickly became aware of their stylistic quirks and shortcomings, and knew the glib, characterless essay they’d handed in had not been created by them. The “replacing artists and writers” thing really depends on 1. what you like to read; 2. whether the art you primarily consume is already on the internet and on the covers of novels (i.e. made by graphic designers vs fine artists). Go look up “this mortal plastik” by the fine artist Jessica Irish and ask yourself if Chat could have made that. I doubt it.

Expand full comment

It doesn't write them TODAY. Who knows what it will do tomorrow. Not to mention, the writing and research of an article is usually the long, hard part; coming up with a topic is generally easier. So now you can have one person churn out articles all day long. Heck, you don't even need articles anymore. A non-journalist can just ask the AI to give them info about stuff they want to hear about. And for funsies you could have those articles written in the style of Chandler Bing or Christopher Walken.

Expand full comment

This article seems a bit lazy and not up to par with the usual content of The Free Press. The idea that we haven't been living in "moving history" is false. There are countless examples in the last fifty years, but for the sake of argument I will focus on one: the internet that fueled social media. Social media has reshaped both the American (and most of the modern world's) political landscape and life. I would argue it has allowed extreme and radical ideas to be given to the masses, much like the printing press. It has also brought great things to the world: access to information, immediate communication, and small businesses' access to new markets, to name a few. It has also come with terrible consequences to each of those.

My concerns with AI are rooted in my profession; I am a high school American history teacher, and that is the lens I tend to see most things through. Over the past 15 years, social media has wrecked our younger generations. Sadly, I have had a front-row seat to watch it happen. Many of us watched the shift in our high school students. They have become lazy with information. They see no reason to read when you can Google. They don't know how to sift through the vast amount of information that is available to them and decipher what is true and what is garbage.

I can see the temptation to think that AI, used well, could give them only the good and true. But what if it doesn't (we have already seen bias in ChatGPT)? It has also already shown us that it can recreate voices, images, and videos that look real. What effect will this have on elections? What about criminal cases? What about my students who already doubt everything they see? The ball is already rolling, I get that. But the overly optimistic approach has never fared well in history.

Expand full comment

Preach! I’ll turn the pages!!!

Expand full comment

An absence of truly radical technological change?

If you were born in 1950 your whole life has been afflicted with radical technological change.

Start with television. That turned this country on its head, probably for the worse.

Then add personal computers.

Then add the internet which is the most consequential thing since the printing press.

Then add cell phones evolving into smartphones. Imagine going on a trip without a smartphone?

And now we have AI, which is supposedly bigger than all of them.

Expand full comment

Thank you for saying this. He pretty much lost me with that first assertion!!

Expand full comment

A childish essay, sir. ‘Moving history’ might be better termed ‘kinetic history’, and I can assure you: you, me, we are woefully unprepared for how that works. Ask Ukrainians how the change they’re experiencing is going for them and their families. No one knows how AI will affect our civilization, but if the people who developed it are scared of it, that gives me a good indicator that we should be way more careful than you suggest.

Expand full comment

I believe AI's power can be successfully harnessed and channelled into unfathomable prosperity for humanity, but the problem is that humanity itself must make all the decisions as to how that happens. While there are outliers like Elon Musk, who - despite his flaws - is a brilliant, deep thinker with human welfare always at the fore, the vast majority of real decision-makers are concentrated on nothing beyond increasing their own power. Handing the type of power inherent in AI to the goobers who, for example, reside in Washington, DC is equivalent to turning a six-year-old loose with a Ferrari. They are the Sorcerer's Apprentice,

https://youtu.be/oPDSoFgivPA

and like Mickey Mouse, in an attempt to harness AI for their own benefit - Which They Will Do - they have the potential to cause it ultimately to destroy mankind.

Expand full comment

Why won’t AI be controlled by the same people who control open borders (while telling us the borders are secure) and who think it is meritorious to destroy the merit system which enabled the creation of vast wealth? And oh yes, they believe human beings can control the Climate and that they can stop the Climate from changing.

The one good thing is that there may be no point in impoverishing yourself to get a university education.

Expand full comment

"Handing the type of power inherent in AI to the goobers who, for example, reside in Washington, DC is equivalent to turning a six-year-old loose with a Ferrari. They are the Sorcerer's Apprentice,"

Truer words. But, then, who installed these mediocrities in power? Joe Biden as POTUS? Seriously?

Expand full comment

Have to agree with your comment. At present, those in authority and power don't ever seem to face any real retribution for misdeeds or take any responsibility for their actions when they place others in harm's way.

My “Laundry List”

-Recent bank failures (any chance of clawing back any money from those bank executives?)

-I’m still stewing about Lois Lerner and whoever orchestrated all those misdeeds, for which no one ever saw jail time.

-Jeffrey Epstein and the whole crew of power people who negotiated his plea deal.

-COVID. Need I say more.

-and the list goes on....and on.

My point is, if people in authority right now, whose names are on the door or the desk where the Buck is Supposed to Stop, aren't taking responsibility and aren't suffering any punishment for their bad actions or their nonactions, then once AI is implemented to assist the power brokers they will have yet another layer of protection from taking responsibility when things go wrong under their watch.

As the original comment said, "humanity itself must make all the decisions," and I'm saying that someone (a specific human) should have to take responsibility for those decisions, even more so if AI is the tool they are using to make them.

Expand full comment

Who installed them? Most were installed by voters who can't discern real news from fake. Alleged President Asterisk? Don't know for sure, only that it was a minority far less than required.

Expand full comment

The "existential" question of our times. Who installed the senile imbecile, indeed!

Expand full comment

The people who illegally changed the election laws in GA, PA, MI, and AZ, and who didn’t follow the election law in WI, all of which was done to enable fraudulent mail-in voting, and voters who listened to the incompetent, biased and corrupt MSM.

Expand full comment

Everything I read about AI says almost nothing, but dramatically.

Expand full comment

Kind of like AI??!!

Expand full comment

“not be able to know what is true anymore.”

Uh...that part is already here

Expand full comment

Last Christmas my son turned me on to ChatGPT. I tried it out by asking: How many wind turbines would need to be deployed in the US to replace all fossil fuel electrical power generation?

It came back with a completely unsatisfactory weasel answer. "It is complicated". "There are many factors..." No numbers at all. I wrote back and said that I was disappointed that it had treated the question lightly and that it had data at its disposal to make an estimate.

It wrote back and said: "I apologize." And then it tried to answer my question, using numbers that were highly favorable to the effort to replace fossil fuel. That is, it used a high capacity factor (0.4) and it used the average energy generated during a year... not the peak power needed to replace fossil fuel.

Now, I decided to address its use of the apology instead of pursuing the poor effort on the question of numbers of wind turbines (its answer was 595,000 turbines). I noted that an apology is similar to an emotion. I asked it if it had emotions and it replied that it was programmed to have a sensibility to human emotions. It said that it didn't have emotion, but assumed that I had emotion and it tailored its answer to satisfy the human questioner's emotions... Wow!

So, my experience with ChatGPT tells me that it is political in its answers. It favored wind turbines by using unrealistic scenarios and tried to move me by manipulating my emotions. I reached for my bullshit repellent. In short, I was not impressed. I have not been back for 4 months and probably won't bother to use it again.
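
For what it's worth, the back-of-envelope arithmetic behind an estimate like that is easy to check. A minimal sketch using assumed round numbers (roughly 2,500 TWh/yr of US fossil-fuel generation, a 3 MW turbine, and the same optimistic 0.4 capacity factor the commenter mentions; none of these figures come from the article or from ChatGPT's answer):

```python
# Rough check of the "how many wind turbines" estimate discussed above.
# Every input below is an assumed round number used only for illustration.

HOURS_PER_YEAR = 8760

fossil_generation_twh = 2500   # assumed US fossil-fuel electricity, TWh per year
turbine_rating_mw = 3.0        # assumed nameplate rating of one turbine, MW
capacity_factor = 0.4          # the optimistic factor the commenter mentions

# Average annual energy from one turbine, converted from MWh to TWh.
energy_per_turbine_twh = turbine_rating_mw * capacity_factor * HOURS_PER_YEAR / 1e6

turbines_needed = fossil_generation_twh / energy_per_turbine_twh
print(f"one turbine: ~{energy_per_turbine_twh * 1e6:,.0f} MWh per year")
print(f"turbines needed on an average-energy basis: ~{turbines_needed:,.0f}")

# Smaller turbines or a lower capacity factor push the count toward the
# several-hundred-thousand range, and this matches average energy only;
# as the commenter notes, covering peak demand (plus storage) needs far more.
```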

Expand full comment

You approached it correctly...most people won't. They will take what it says at face value...especially when it agrees with the person's beliefs.

Expand full comment

Which is instructive of the human mind overall: the average person accepts things at face value. Well, that isn't entirely true. Interesting international research on trust and compartmentalization shows that Americans are the worst at it - compartmentalization, that is - while Asians and Middle Easterners are the best, probably stemming from the fact that if they took it all at face value they'd end up broken and naked in a ditch.

Expand full comment
May 4, 2023·edited May 4, 2023

I first dealt with the digital world in the 1960s, as an amateur radio operator. I found digital design to be boring, so I went into radars and antennas when I went to college. But I have worked with computers ever since. I have never... never used a computer that met my expectations. Office programs, technical programs, programs for fun and for work... all of them have disappointed. So, my skepticism is born of 50 years of what I call "oversold". Computers are always oversold. They always promise more than they deliver. The computer marketers rely on the user to adapt and accept deficiencies... every time.

Take a look at your cell phone. As an engineer, I consider smartphones to be trash. But the general population has adapted to the lousy screen, the flaky touch controls, and the poor battery life. Oversold. AI is not going to be any different. People will adapt to AI the same way they adapted to smartphones. They will know that the AI solution will be full of bugs... but they are sophisticated bugs. They will be common bugs that, well, everybody deals with. Well, I see the bugs immediately, and I am warning everybody not to go along with the program. No, the emperor really isn't wearing any clothes.

Expand full comment

I don’t understand the range of possibilities of AI, so I remain a bit fearful. If kids can get AI to do their assignments, how do they learn? Kids are already dumbed down too much. How many employees whose job titles include “analyst” will become unnecessary? Can medicine be made even more impersonal?

At this stage of my life, I have enough technology, thanks.

Expand full comment

Curiously, this was precisely the argument made in the book "Enough" by Bill McKibben twenty years ago. He thought we had already reached peak 'useful' technological development and should not venture into areas like genetic engineering and human enhancement because they would pose a threat to the 'nature' of humanity. The trouble now is that AI poses a far larger threat - to the survival of humanity.

I agreed with his concerns then (and I don't disagree with them now); the trouble is that they now look like complaining about the wallpaper in a burning building.

Expand full comment

I don’t think you can argue that an intelligence that surpasses our own doesn’t need to be approached with extreme caution. It’s categorically different from an inert technology that cannot evolve and operate without humans metaphorically picking it up and putting it to use. By definition it is outside our control. Therefore, it’s of utmost importance to think through what fail-safes we might put in place. Furthermore, the professor does not mention two of the most convincing and famed critics of AI: Stephen Hawking and Elon Musk, both of whom have expressed concerns about the dangers of AI and urged caution. Having said that, it’s undeniable that if we don’t pursue AI, China will forge ahead and will exploit the advantage to destroy us. Such is the arms race of history. But if we do something our civilization was once a master of, namely harnessing the power of change, while heeding the conservative voices that urge caution, then we may emerge with our humanity intact.

Expand full comment

How is it possible to harness the power of change when fewer voices will have more power?

Expand full comment

I think I’m referring to a balance that the American Founders displayed admirably between innovating and conserving. Theoretically, that reasonable balance is still part of our heritage that we can tap into. On a fundamental level most things could work out in the end if enough Americans and people with basically American ideals maintain that framework in their thinking and their actions, regardless of technologies.

Expand full comment

You can be sure of this about AI: if the government can use it to keep people under control, if the so-called elite can use it to manipulate people and make money, if they can start more conflicts that result in your kids' deaths and human suffering and more profit for the privateers, and if scammers can use it to cheat and steal, they all will be more than happy to use it, while encouraging limitations on average people's use of it.

Technology is nice, but correspondingly it has brought a loss of privacy and security as well.

Expand full comment