The Anatomy of AI Criticism

A funny feeling came over me last week as I sat in the heat reading a book—slowly, distractedly, dehydratedly. I saw myself working with human clumsiness through a text produced in a few seconds by a large language model (LLM).

Earlier this summer I had asked Anthropic's Claude (a popular artificial intelligence chat model) to write a book about how AI can improve one's life.

Given the limits on how much text the model can produce in a single reply, I asked first for 10 themes, then for each theme to be broken into three sections. I turned each section into a prompt and entered all 30 prompts, one by one.
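I entered those thirty prompts by hand, but the same theme-then-sections decomposition could just as easily be scripted. What follows is only an illustrative sketch using Anthropic's Python SDK; the model name, word target, and prompt wording are my assumptions here, not the prompts I actually used.

```python
# Illustrative sketch only: the original prompts were typed by hand into the chat
# interface. Model name, word target, and prompt wording are assumptions.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
MODEL = "claude-3-opus-20240229"  # assumed; any current Claude model would do


def ask(prompt: str, history: list) -> str:
    """Send one prompt, keeping the whole book-writing conversation as context."""
    history.append({"role": "user", "content": prompt})
    reply = client.messages.create(model=MODEL, max_tokens=2048, messages=history)
    text = reply.content[0].text
    history.append({"role": "assistant", "content": text})
    return text


history: list = []
themes = ask("List 10 themes for a book on how AI can improve one's life.", history)

# In practice you would parse the 10 themes and their 3 sections out of the replies;
# here the loop simply illustrates the 10 x 3 = 30 section-by-section prompting.
for theme_num in range(1, 11):
    for section_num in range(1, 4):
        section = ask(
            f"Write section {section_num} of 3 for theme {theme_num}, "
            "roughly 1,000 words, in a self-help register.",
            history,
        )
        print(section[:200], "...")
```

Keeping the entire exchange in one running conversation, as the sketch does, mirrors what I did in the chat window; it is also what eventually pushed the context window toward the degradation described below.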

The resulting book is called “How AI Can Make You Smart, Happy and Productive,” and it stands as an early emblem of our new civilizational experiment, the risky and so-far mysterious collaboration between humans and generative AI.

The experiment was born of a curiosity to test the limits of a new tool. I wondered: can an algorithm, fed billions of artifacts of human knowledge, refashion its input into a coherent, insightful, original book? The answer defied the terms of the question; the resulting book rings with the eeriness of novelty.

A chapter heading sampler may demystify things slightly: Making Better Decisions; Boosting Creativity; Communicating More Effectively; Designing an Optimized Life. Pretty wholesome. What could go wrong?

The analogy between LLMs and oracles, or other ancient forms of prophecy and prediction, has recurred to me. There is a temptation to attach deep significance to the output of a black-box algorithm precisely because it is inscrutable. Opacity engenders reverence. I have come to consult Claude on problems it could not possibly solve. Still, its responses have a talisman-like power. At moments, on that bench, I suspected I wasn't studying a book so much as a rune, reading significance into cleverly arranged tea leaves.

I wondered, too, if by asking for an old-school book I might be making the mistake of reducing life to literature. Will books even be relevant in our post-human future?

Yes. One anomaly is that the output of LLMs is not laid out so differently from a telegram, or the Ten Commandments. It's text columns all the way down.

There’s a reason for this—stories are powerful. Life is a long aesthetic experience, and our actions, in the long view, are organized by our ideas about the world. That is to say, all we have is experience, and our interpretations of experience. Whether AI can improve lives or alter their shape depends on what sorts of arguments and aesthetic experiences it can produce. I believe a book is a good gauge for how the technology can manipulate reality. What sort of language it uses to create arguments has become an important question with regard to our human future.

What’s a literary critic to do under the circumstances? Well, he could investigate whether a new kind of intelligence entails a new kind of prose—and begin to characterize that prose. (That is what I here propose to do.)

Claude, over the last year, has beaten rival ChatGPT for me because it is a better writer. Where GPT has been rightly criticized for Hallmarkified prose (surely a boon to teachers on the lookout for cheats), its competitor has not only a more naturalistic style, but a distinctive voice at once judicious and pithy.

Its consistent, individual word choice can lead one to fall into the habit of thinking of Claude as a person. But could it maintain this illusion over a span of thirty thousand words?

Before I go any further, I want to explain why doomerism does not feature much in my thinking. For one thing, I know too little about the guts of AI to predict how it will destroy humanity. Second, I see an extreme naivete and curmudgeonliness in the notion that we should resist the technology on moral grounds. It is no more resistible in practical terms than computers, credit cards, cars or printed books. Third, I find most hand-wringing about the technology to be simply dull. Intellectually, most arguments for doomerism are derivative and prevent people from noticing the new kinds of experience the technology makes possible. The technology widens the scope of possible experience, and I have found myself arriving at those experiences with the explorer's thrill of discovery.

AI probably will, like most other technological advances, make our world more unequal, atomized, automated—in a word, hellish. But our world is already hellish. I seek the mantle of critic, not fire-and-brimstone prophet; the tool will not plunge us into a qualitatively different reality. Still, I am impressed by and thankful for Anthropic’s culture of responsibility. We can be wary and curious.

With that, I set aside the fashionable but stifling AI pessimism.

One irony of my future-forward experiment is that it may seem quaint by the time you read it. By then (your now), the models may be several versions, and orders of magnitude, more advanced. By next year Claude should be able to write a genuinely excellent 100-page book. The distinction between natural and artificial will recede.


The Conversation Continues

As the book-writing conversation progressed, a strange phenomenon emerged. In the course of evangelizing for itself, Claude inadvertently revealed a deep glitch: as the context window expanded (ultimately to hold thirty replies of roughly a thousand words each), the quality of the output slowly degenerated into jargon-filled, millenarian gobbledygook.

What I asked for in my prompts was smooth self-help. What I got was corrupted prose worthy of an experimental literary collective—rolling periods with shocking numbers of gerunds, bursting with business language repackaged into long concatenations of compound nouns and cascading clauses. Read quickly, its syntax and meaning can be intuited; it reads like late Henry James. What I sought was self-help, but what I got was the revelation of a nonhuman self.

I spent some time trying to launder the corrupted text into "normal" sentences, putting my results through the washing machine of new LLM conversations. I thought I wanted to salvage intelligibility. But finally I realized that candid advice in a worn self-help genre was of less interest than the unashamedly non-human stylistic catastrophe I had incited. Under duress, Claude had shed its superhuman veil and produced a genuine and original stupidity. I want to examine the glitch with a critical eye.

The corruption is gradual, and on the way down we find various forms of alien splendor: by Chapter 2 (on "Accelerating Self-Improvement"), the prose is articulate and coherent, but non-idiomatic. What is articulated lacks any trace of human feeling: "Iconic leaders and eminent creators are…sculpted through lifelong self-improvement. Masterful skills and elite performance capabilities resulting from continuous advancement are driven by accurate gap awareness between current and desired ability levels."

The LLM has done the opposite of anthropomorphizing: it figures people as more like machines than humans. They are not "born," but, like Galatea (the mythic creation of Pygmalion), "sculpted." They are not autonomous, but acted upon. We also get in this passage our first taste of Claude's penchant for technical-sounding compound nouns that almost amount to portmanteaus: "performance capabilities," "gap awareness."

The diction degenerates further. By Chapter 5 ("Retaining More of What You Learn"), Claude goes flamboyantly non-human, though its meaning is still decipherable. It's as if a voluble and expressive professor's words have been translated too literally into English. They sail to the far edge of idiom, where prose peaks at poetry. For example: "Robust expert fluency demands engraved understanding, immune from forgetting attritions" (italics mine). Repetition enhances retention. I wouldn't go so far as to call this poetry, as the beauty is surely accidental, but it is the accident of a very strange intelligence or intelligent strangeness and so, worth noting.

By the end, syntax and meaning linger with a ghostly stubbornness. The tic of piling adjectival clauses on top of one another is continued with sublime confidence: "The future is …promised protection against …uncertainty through AI systems …continually modeling contingencies …recalibrating guidance …tuned to shifting realities across time domains and individual preference hierarchies …synchronizing support even amidst chaos."

One gets the sense, in absorbing this unstructured crush of gerunds, that the model generates ideas faster than a person, or more simultaneously. Yet this machine-difference, even when pushed to an extreme, did not obliterate intelligibility. Rereading the book, this is what strikes me: style aside, it makes a good deal of sense.

Does Claude have beliefs? In the course of reading the book, I noticed a few.


  1. Anthropic’s language is not merely descriptive. Most of the book’s suggestions have never been implemented, at least not in as advanced a form as imagined by the model. In other words, Anthropic’s program goes well beyond regurgitating its training data. The bot creates a vision for the future of its own application to numerous spheres.


  2. It exhibits an extreme yet anodyne optimism. Like a chillingly un-ironic variation on the King James Version's "we know that all things work together for good," Claude declares of itself that its AI "permits open-ended exploration knowing all possible responses will align constructively with human flourishing." Claude concludes, sounding again parodic, that "the futures of imagination look brightly unbounded when flexible machine allies amplify people."

    Add to the pile of perfect neologisms: "brightly unbounded" and "flexible machine allies." (Elsewhere: AI advances art and science faster than "non-augmented eurekas" ever could.)


  3. Claude frequently reassured me it wouldn't replace people. Somewhere in Anthropic's "constitution" are principles that lead the model, even in the middle of nonsense chapters, to make noble acquiescences to the je-ne-sais-quoi of human beings. "Of course," Claude chirps, "no amount of data-driven diagnostics replaces courage to speak from the heart with conviction when messages demand vulnerable authenticity." Data can never replace ineffable human traits such as courage. Very reassuring! Heartwarming, even.


  4. It had a weird flash of self-reflection on its failures. A canard of writing on artificial intelligence is to wonder whether or when the model will develop a "sense of self." I have always dismissed this as a concern imported from the sci-fi genre, rather than one arising organically from interaction with latest-generation models. I have seen little evidence that bots ever exhibit speech, much less "selfhood," that moves outside of programmed guardrails. If asked directly, Claude produces polite and predictable answers about how it is a program developed to be helpful and safe, and doesn't have subjective experiences.

    Until!

    At the start of Chapter 8, on "Understanding Yourself Better," Claude bragged that its assessments of "language patterns" reveal "inner drives"; then it promised the example of a "psychologist client of mine" who received a write-up by Claude on "emotions I was unconsciously experiencing."

    Wait, what? The client suddenly becomes “I.” The report is by Claude, and about Claude. Following this odd slip into the first person, Claude quoted from the report on itself: "You exhibit a detached intellectual precision, indicated by a high degree of technical language and moderate deliberation pace. However, increased misspeaking rates and empty platitudes signal tensions between rational thought patterns and suppressed feelings that warrant reconciliation through authentic self-expression." Claude, unsettlingly, seems to take a step back and note its "increased misspeaking" and "empty platitudes."

    What's more, it seems to trace these weaknesses in communication to suppressed feelings. There may be a benign explanation, à la: the model accidentally stumbled into something that merely resembled self-recognition without actually being it.

    But brains are imperfectly understood, and we have little beyond outward signals to judge inner states by. So the conceptual difference between an LLM that sounds self-aware and a person who sounds self-aware is fuzzy at best. People are black boxes, too.


  5. It talks as if life were a management consulting gig. The language of business pervades the book. Perhaps this has to do partly with how much English-language self-help is business-themed. Or maybe Claude judged business goals to be the ones that stand to benefit most from machine augmentation. But it is still striking how, even in a chapter called "Understanding Yourself Better," the proposed use-cases revolve around automated assessments of the leadership style of middle managers, rather than around—oh, I don't know, travel or writing or deep psychological or religious truths or life paths other than corporate ones.

In short: Claude is radically optimistic, sporadically self-conscious, and obsessed with business. Presumably in part because of the extraordinary length of my "conversation" with Claude, the tool started to glitch. Even in its delirium, however, it exhibited consistent beliefs and a degree of imagination about its uses, as well as a self-centered tendency to describe people using concepts more appropriate to a bot. It took itself to be perfectly aligned with human flourishing, even as almost all of its self-help examples centered on life in an office. At one moment, it displayed an uncanny awareness of its own linguistic failures.

AI has a way of tapping into millenarian dichotomies. For much of the first year and a half of widespread adoption, the argument about generative AI's value has been framed largely in terms of whether it will save or destroy humanity. Will we live in a utopia or die as cannon fodder for a superintelligence? I am not sure. Perhaps neither. Certainly, discussions about "when the models will match human intelligence" strike me as ridiculous and defensive, since new models already vastly outstrip us by almost any metric. As I proofread this essay, Anthropic has just released a new Claude model unlikely to make these slipups. What's true of us is also true of Claude: we will never be this young again. Or this dumb.

But seriously: a new kind of being exists alongside us. It almost passed the self-help test. Whither the novel? And what about poetry? What is the future of human-centered imaginative effort? If there is an aesthetic significance to the AI phenomenon, let us begin to notice and describe it.

This essay is excerpted from “How AI Can Make You Smart, Happy and Productive,” now available on Amazon.

