How Smart Are Those Machines, Really?

Currently there is fear and trembling about the alleged possibility that machines can attain humans’ alleged intelligence. Not much of a feat, you may think. My own idea is that they cannot and will not be able to do so. They can only pile up words that have been injected into them; they cannot make value judgments. Yes, they can find that “democracy” appears 99 billion times in the holdings of the University of Michigan library, and they can discover that in 99% of those instances it’s regarded as a good, better, best thing for “our society” to have. They can write “essays” substantiating that view, with many references to books and quotations of arguments. But they cannot decide which of these arguments, if any, are actually true, in what sense they are true, and to what degree, if any, their trueness is important. (They may do better with deciding whether a drawing of a cat is truly a drawing of a cat.)

Let me put this in another way. Suppose I’m speaking before an audience, and I suddenly have a heart attack, and a doctor is summoned to figure out whether I’m dying. She can do that in a quantitative way. She can count my pulse, determine my temperature, and so on. She can pronounce me dying, still dying, and finally dead, but she cannot with quantitative methods answer the question of whether this is a good thing or not. To do that, she would have to use qualitative methods. She would have to determine for herself what is good, better, best. She would have to cope with the possibility that many people in the audience might think, for various and obvious reasons, that my death is a good thing, but that the one person who thinks otherwise might just be right. A machine would be able to replace some of her doctor functions, but it would not be able to replace her philosopher function. It might be able to fake the issue, by finding in its capacious memory such clichés as “any man’s death diminishes me,” or “a good professor is a dead professor”; it might fill 10 or 10,000 pages with summaries of arguments; but that wouldn’t be “thinking.”

This is my position, and I’m sticking to it. Lately I’ve been both amused and disgusted by the reported ability of a computer to fake a legal brief, complete with references to phony cases, and then to act like a real lawyer and maintain its total innocence. These accomplishments, however, do not demonstrate the legal or literary intelligence of the computer; they demonstrate the lack of intelligence often displayed by people who participate in that genre of writing. If exposition of the law is just citation of cases and reproduction of formulaic arguments, and most readers can’t tell the difference, then why get outraged about a fake? I’m sure that “work” in my own profession, literary history and criticism, would pass for human just as easily. A look at the literature shelves in the library strongly indicates that mechanical reproduction has been going on for at least 50 years.

Back then, word salads were more likely to be served in the dingy cafeterias of the “learned” professions than under the circus tent of politics. It used to be that labor leaders howled their demands, and politicians “lashed out” in fearful bellowings. The sounds were human, all too human. The old school may now be perishing in its final eruption, the recent communications of Donald Trump. People wonder what he means when he excoriates Ron DeSantis for being worse than the Democrats about COVID, being a bully about Disney, being “sanctimonious,” etc. You can try all you want to figure out what he means. To me it’s like wondering what Godzilla meant when he ROARRERRD and smashed another building.

But this is now passé. The well-named salad is now the dish of choice. I don’t need to tell you what it is. You know it when you hear it. It is, for example, the universal language of pressure-group pronouncements, in which all the vegetables from “LGBTQ+ rights” to “healthcare rights of families” to “rights to self-determination of indigenous peoples” are mixed together, in the hope that every diner will find something to fork out and swallow. Or, to be fair, it’s the strange mixture you get at the bar and grill on the other side of the political divide, where they can toss you up almost any kind of mélange, with ingredients varying from “America first” to “we must project American power in the world.”

But the reigning salad chefs, right now, are in business on the left side of the street. One is Karine Jean-Pierre, America’s sweetheart, whose performances in the White House briefing room are tests of even liberal reporters’ powers of endurance. Her sayings could certainly be generated by a machine. I like to think of her as one. Not only does it fit the empirical evidence, but there’s already been a movie made about beings like her. It’s called Westworld.

Jean-Pierre’s boss, Joseph Robinette Biden, Jr., is virtual (pun intended) proof that the White House has become a Westworld set. You can tell by the type of errors he makes. Once a virus gets stuck in his program, it’s there for good. Jean-Pierre would be lost without “let me be clear,” “this president has always,” and “for that I refer you to [insert name of acronymic government agency, which will never reply].” But Biden has oodles more viruses, and they’re really nasty. They’re the ones that generate “come on, man,” “mega maga Republicans,” “the real deal,” “true story!”, “not a joke,” “I’m not kidding,” “no, I really mean it,” “guess what?”, “I shouldn’t say this, I’ll get in trouble,” and other absurdities that his software keeps running in an endless loop.

More interesting are certain sounds he emits that appear to come from a cheapo version of AI that’s programmed to match words from category A to words from category B, without any means of checking for matches with category C, the “Ridiculous-Untrue-Avoid” list. So, if category A is “bad things that happened to my son Beau,” it’s programmed to match with category B, “very bad places for things to happen,” and bingo! The selection is made, and we get, “My son Beau, who died in Iraq.” Or, category A is “my previous occupations,” and category B is “gnarly plebeian stuff,” and we get, “I used to drive an 18-wheeler.” But maybe category A is “good things about me,” which can be matched with something from category B, “accomplishments that have marked my presidency.” The protocol will not allow any choices of N-words: “nothing, null, nada, N/A, non-starter.” But it will allow reversals of bad words, such as “inflation,” “deficit,” and “illegal border crossings.” So we get, “I lowered inflation,” “I lowered illegal border crossings,” and “I lowered the deficit more than any other president in history.” (In every third iteration, the program adds “in history.”)
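In case you doubt how cheap such a program would be, here is a minimal sketch of the protocol as I’ve described it, written in Python. The category names and entries are my own illustrative inventions, of course; any resemblance to an actual White House codebase is purely conjectural.

```python
import itertools
import random

# Category A: subjects the program likes to bring up.
CATEGORY_A = [
    "My son Beau, who died",
    "I used to drive",
    "I lowered",
]

# Category B: impressive-sounding completions.
CATEGORY_B = [
    "in Iraq",
    "an 18-wheeler",
    "inflation",
    "the deficit more than any other president",
]

# Category C, the "Ridiculous-Untrue-Avoid" list. It is defined here
# but never consulted before speaking. That is the whole joke.
CATEGORY_C = {
    "My son Beau, who died in Iraq.",
    "I used to drive an 18-wheeler.",
}

_iteration = itertools.count(1)

def emit() -> str:
    """Match any entry from A with any entry from B; skip the check against C."""
    sentence = f"{random.choice(CATEGORY_A)} {random.choice(CATEGORY_B)}."
    # In every third iteration, the program adds "in history."
    if next(_iteration) % 3 == 0:
        sentence = sentence[:-1] + " in history."
    return sentence

if __name__ == "__main__":
    for _ in range(6):
        print(emit())
```

Run it a few times and judge for yourself whether the output is any less presidential than the genuine article.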

This is not a very good program, but it does create flights of imagination that some people regard as evidence of humanity. Consider what happened on June 8, when Thinking Machine JRB-JR held a press conference with the British prime minister. Cruising its data banks, the computer lit on “railroads” as a subject that’s safe to talk about, and “building” as a good, optimistic, “positive” match. But where should we build a railroad? Where is Joe Biden actually building a railroad? This is where the program got sophisticated. It checked a list of places where railroads already exist, found that they’re almost everywhere, and proceeded to option 2, which is a list of places where railroads have never existed. It came up with this: “We’re talking about building — and I had my team putting together with other countries as well — to build a railroad from the Pacific Ocean — from the Atlantic Ocean all the way to the Indian Ocean.”

Right. Except for that little temporary problem about “Pacific” and the flaw in the data that rendered all of southern Africa as railroad-needy . . . mission accomplished.

But it wasn’t the last time things went all geographic like that. You know how computers are — once they get something into their heads . . . if they try to fix something, they just make it worse. Or haven’t you ever pressed that button after “Do you want Windows to fix this problem?” So here’s JRB-JR one week later, on June 14, performing for a conclave of wealthy “environmental” organizations and announcing: “We have plans to build a railroad from the Pacific all the way across the Indian Ocean.”

Imaginative? Yes. Proof of intelligence? I have grave doubts.

But come on; we should all have fun with this stuff. Last week I found something that’s lotsa fun. It’s about one of my favorite subjects, the Titanic, and it has nothing to do with either Biden or the Titanic submersible disaster.

But before I tell you — let’s have a word from the sponsor. I want you to know that about 20 years ago I wrote a book called The Titanic Story, in which I tried, among other things, to expose some of the persistent Titanic myths and fallacies. You can get the book from Amazon, cheap! Go and do it. A few years later, I wrote an article for Critical Review (vol. 15, 2003, pages 403–434), discussing the ways in which myths were created about the disaster. If you google that item, you can find a number of places where you can get it online. Please go and do that, too.

Thank you. To continue — after writing those things, I got tired of trying to identify untruths about the Titanic; there were too many and they were too boring. But what I found last week was a marvelously amusing discussion of the kind of word salads that we’re now being served whenever we open a book or look at a screen. It’s by a guy named Mike Brady, who runs a YouTube channel called “Oceanliner Designs,” and it’s a review of a purported history of the Titanic. See if you don’t think this history could have been written by “many men, many women, and many children,” as Dr. Johnson said about a poem he didn’t like — or, I might add, by many machines.

But perhaps we shouldn’t give machines too much of the blame. After all, they, like Joe Biden, are just trying to prove they’re smart (and with similar results). And another explanation for verbal garbage has emerged.

A “marketing agency” named Captiv8 (“captivate” — how clever) is reported to have been the PFC that was told by the lieutenant, a vice president of Anheuser-Busch, to fulfill an alleged “super clear mandate” from the top brass, which was to modernize Bud Light’s “brand” — which meant, in practice, introducing obnoxious “social media influencer” Dylan Mulvaney to promote the cheap, innocuous beer. That decision, as you know, resulted in an enormous backlash. Bud sales cratered, going down by one-quarter and staying there, and the company’s stock lost billions in market value. Now, Captiv8, which had been lying low, is being discussed, and one of its claims has been noted — that it has more than 30 million internet “creators” in its “farm system.” At the same time, the company appears to have claimed that it has “more than 1 million influencers on platforms such as YouTube, TikTok, Instagram and Twitter.”

Well, never mind the difference in the numbers. Even a million “influencers” could sling a lot of trash onto the salad bar. Whether the influencers are actually AI or not.

5 Comments

  1. Chris Nelson

    LOL @ ‘cheap’. Some of us still have to deal with more-or-less (or less-and-less every day) money.

I’ll try to find a cheap’er’ version of The Titanic Story. If you haven’t already read it, you might enjoy Charles R. Pellegrino’s “Ghosts of Vesuvius”, which gives more than a nod to Titanic.

  2. “… If exposition of the law is just citation of cases and reproduction of formulaic arguments …”
Great science-fiction author Lloyd Biggle, Jr., wrote a novel, “Monument,” in which a class action lawsuit is fought in court using just such a system: one side presents all the precedents it can find, then a machine, a sort of “judge machine,” analyzes them, accepting some, discarding others, especially those that have been rendered invalid by newer rulings.
    Some lawyers try to slip in previous rulings that seem appropriate, and hope the judge is not up to date. But, in the story, the judge always knows.
    AI might be very good for such a use, although we first need to get rid of the 90 or more percent of laws that violate rights. (In “Monument” it looks bad for the innocent natives because so many laws are written to benefit some capitalist crony. You do want to read “Monument,” by Lloyd Biggle, Jr. Really, you do.)
    There is always the danger of GIGO: Garbage In, Garbage Out. It depends — always — on the inputting. And, I guess, the inputter.
However, I don’t believe anyone will ever be able to program a robot or computer or any other incarnation of AI to write essays or poetry.
As you, Prof. Cox, saw in the “poem” I sent you privately, AI-written poetry is as bad as “country” music: rhymes are off (all that seems to matter is the vowel sound in the middle of words, so that “time” and “wine” will be presented as rhyming), and logic is ignored, as are common sense and reality.
(NOT intended as a general denunciation of country music. In the early ’90s there was a song with this line: “He could look you in the eye and lie … with his white hat on.” Marvelous! And I think of it every time I see some “news” announcer try to tell us, with a straight face, another whopper.)
    Of course AI might get better, and maybe eventually learn how to write poetry, but not any time soon, judging by its current creations.

  3. Scott Robinson

    Dear Stephen,

Good article pointing out the shortcomings of quantitatively composed bodies of oration or literature. I think it is interesting that in your take, the gibberish coming out of the mouths of the talking heads is not the result of their being dumber than a person who doesn’t speak, but of their reading what was composed by some AI speechwriter. As you said, “they cannot make value judgments”; this is what I was thinking is the problem with AI replacing people. While the AI machines have tons of file sorters gathering, organizing, and reorganizing files, they don’t know what’s right or wrong on a moral basis. This is the problem with the popular idea of objective morality. There is a reason it is called “the cold, hard truth,” and that reason is that “cold” and “hard” reflect not having or caring about feelings, souls, and spirits. If you try an objective approach, it is difficult to present right vs. wrong without preexisting definitions of right and wrong, and those definitions are not based on cold, hard, objective facts.

    Best Wishes,
    Scott

  4. Scott Michael Robinson

    Dear Stephen,

In my previous comment, my language may have been deemed offensive, but I did also think about how this article is called “Word Watch”, and so its topic of discussion may have more to do with words and their meanings. The “word” I see is AI, which of course is two words, Artificial Intelligence. Since I’m simple minded, I will separate it into its individual parts. Artificial means fake or not real, a way of obtaining the same end product by non-natural means. Intelligence, according to me, is the ability to take information and facts from many different sources and process them into a single idea, theory, or principle. People do this, in a simplistic example, when they know words to describe many colors and another assemblage of words to describe many objects, and they describe what they see as a yellow ball. I see this as where you get the word salad from. It’s like a salad bar: take many different ingredients and piece them together in a salad. The problem is that you don’t get one integrated whole like a multicellular organism; you just get a pile of separate ingredients. This is why we put dressing on our salads, because dressings are multiple ingredients integrated into one whole. Putting the dressing on the salad unites the separate ingredients’ flavors into one whole flavor. The problem with Artificial Intelligence is that it doesn’t unite the agglomeration of separate facts into one theory, principle, or view.
I like that you said, “they cannot make value judgments”. Value judgments are like salad dressing: they unite the separate facts into one whole theory, principle, or viewpoint. The important question is, why can’t we make value judgments based on an assemblage of facts? Values are necessary to determine how much each fact is worth relative to the others. This relates to subjective vs. objective, or feelings vs. facts. Feelings are not facts; you can’t touch (weigh) them, taste them, hear them, smell them, or see them. Just like the atheist rationale for why there is no God. Is there a way to get machines with feelings in order to make value judgments? I think the problem is that you can’t generate artificial feelings and therefore can’t get artificial values.

    Best Wishes,
    Scott Robinson
