
AI and me

This post was written without the assistance of AI. Spelling and grammatical errors are my own. Failures of understanding or nuance may be my own as well, or they may result from a joint effort between you and me. In that case, neither of us can be entirely sure where the blame falls.

Social media has lately been preoccupied with artificial intelligence (AI) and variations on that theme, such as Large Language Models (LLMs), Small Language Models (SLMs), agentic AI, Artificial General Intelligence (AGI), and Artificial Superintelligence (ASI). I’m not an expert in that field, so I’m going to follow a longstanding Internet tradition and express opinions about it anyway.

It seems to me several distinct lines of thought about AI are evident in the social media discussions.

One focuses on using AI-based tools to accomplish mundane tasks or, if I may be so rude, to convert creative tasks into mundane ones so that “anyone” can do them. This runs the gamut from replacing all human software developers with AI agents to replacing all human artists, writers, and musicians with AI agents.

Essentially, the goal appears to be to replace everyone with AI agents, and to reduce the quality of literally everything to a common baseline achievable by an agent that lacks genuine creativity. As people’s aesthetic sensibilities deteriorate due to their dependency on AI, they will lose the ability to tell the difference between “mediocre” and “good” quality.

Everyone will then be an Einstein, a Turing, a Beethoven, or a Van Gogh. Well, assuming they remember those names…or they retain the ability to remember anything at all, since they can ask AI tools to generate a “good enough” result at any time.

Another line of thought focuses on the ongoing effort to improve the technology behind AI, trying to reach the goals of AGI and ASI. The people working in this area are developing the technology, as opposed to learning how to craft effective “prompts” for every little thing they do. They are driving innovation to create something that hasn’t existed before.

In case it wasn’t already evident, I respect the latter more than the former.

There are also discussions about the energy consumed by the massive data centers needed to power the AI world. This topic appears to sort itself into three points of view:

  • Let’s grind up the planet to the point of physical destruction in order to power the data centers, and anyone who objects is a Luddite;
  • Let’s look for new ways to generate energy, to get the power we want without the environmental damage; and
  • Let’s look for new ways to design AI solutions so that they have smaller energy demands.

There’s another aspect of the topic that concerns me: Where do humans fit into the picture? I’d like to consider that aspect for a few minutes.

AI is a tool, not a lifestyle

Some people seem to want to force AI into every situation just because it’s AI, and not because AI is a good fit for the task at hand.

One person on a social media site made the argument that anyone can use an AI tool to generate the hexadecimal codes for HTML colors without having to learn the codes. He thought this was a really compelling case for AI.

My response in that discussion was that I use a color chart. When I need a hex code for an HTML color, I look at the chart.

He replied at some length that I wouldn’t have to memorize the hex codes if I used an AI helper. He constructed a hypothetical exchange between a user and an AI helper in which he gradually narrowed down the color by saying things like, “a little less blue” and “a little lighter.” Now people who haven’t memorized the hex codes can do the same work as those who have. He invested several minutes in typing all that, so I think he felt it important to make the point.

It’s true that you could do it that way.

A catchphrase of mine for many years has been, “That’s true, but is it usefully true?”

HTML color codes are only one example, but I think it’s a pretty good example because so many other tasks we perform are at the same level of difficulty, more or less.

It takes far less time to look at a color chart than to go back and forth with an AI helper, but that’s not the only issue with his argument.

I’ve been working with HTML since it was invented. I’ve never memorized the color codes. What I have memorized is this: It goes red, green, blue from left to right. All zeroes is black. The more bits you set, the lighter the color gets. If you set all the bits, you get white.
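That mental model is all you need. Here’s a minimal sketch in Python (the function is mine, purely for illustration) showing that a hex color code is just the three channel values, two hex digits each, printed side by side:

    # Hex color codes are red, green, blue from left to right,
    # two hexadecimal digits (one byte, 0-255) per channel.
    def rgb_to_hex(red, green, blue):
        return f"#{red:02X}{green:02X}{blue:02X}"

    print(rgb_to_hex(0, 0, 0))        # #000000 -- all zeroes is black
    print(rgb_to_hex(255, 0, 0))      # #FF0000 -- pure red
    print(rgb_to_hex(255, 255, 255))  # #FFFFFF -- all bits set is white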

There’s no need to memorize any codes. You can pick them off the color chart. Depending on how the chart is implemented, this may involve a mouse click (and possibly a slider), or typing as few as four characters or as many as nine, based on the color you want and whether you need to enclose the value in quotation marks.

So, the thing about avoiding having to memorize the codes is not a “thing” at all. No one memorizes them unless they just want to, for fun. (Not my idea of fun, but different strokes, I guess.)

Experimenting with the colors until you get them just right is fine if you’re playing around or if you’re trying to come up with a look and feel for your own site. If you’re working on a project for a client, they will tell you which codes to use to match their branding standard. You aren’t going to spend any time saying “a little less blue” and “a little lighter” and so forth. You must use their colors. Period.

Finally, there’s the question of choosing the right tool for the job. This person was suggesting, rather insistently, that it’s a good idea to use a tool the size of a planet to perform a task the size of a peanut.

When I have a gig that involves Web front-end work, I print a cheat sheet for the client’s standard color codes and tape it up near my monitor. I won’t need the whole palette; just the colors the client specifies. After the project, I file it with other client paperwork in case I work with the same client again. Low effort, low impact, effective.

There must be a million (well, okay, a hundred) little things people do every day that don’t really demand the resources of AI, or justify the tedium of having a “conversation” with an AI helper.

One contributor on a social media site mentioned that in his company, people were using AI assistants for tasks such as extracting email addresses from text, changing lower-case text to upper-case, scanning text for obscene words, and other very small tasks that can easily be handled with simpler, lower-tech tools. In his case, they were spending $1,200 a month for a subscription to an AI service. The obsession with AI was costing them more than it was worth. I wonder how many other companies are in the same boat.
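To make the point concrete, here’s roughly what those very small tasks look like with a lower-tech tool. This is a sketch in Python; the sample text and word list are invented for illustration:

    import re

    text = "Contact alice@example.com or bob@example.org about the order."

    # Extract email addresses (a deliberately simple pattern)
    emails = re.findall(r"[\w.+-]+@[\w-]+\.[\w.-]+", text)

    # Change lower-case text to upper-case
    upper = text.upper()

    # Scan text for words on a blocklist
    blocklist = {"darn", "heck"}
    flagged = [w for w in re.findall(r"[a-z']+", text.lower()) if w in blocklist]

    print(emails)   # ['alice@example.com', 'bob@example.org']
    print(upper)
    print(flagged)  # [] -- clean, in this case

Nothing in that list calls for a $1,200-a-month subscription.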

Why not use AI where it really helps, and otherwise use whichever other tools actually fit the task?

Our brains on AI

Many claim that using AI increases our productivity, shortens our time to market, and yields other benefits related to sheer, blind speed of helter-skelter output generation, and that the extensive use of AI tools by everyone for everything will advance civilization rapidly. Those who fall behind will be left behind! Bwa-ha-haaa!

Others have a different notion of the effects of heavy AI use by humans. Kosmyna et al., in an MIT study entitled “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task,” made some interesting observations. The paper is here: https://arxiv.org/abs/2506.08872. There’s a nice summary here: https://futurism.com/neoscope/brain-eeg-chatgpt.

Some key observations:

  • Brain connectivity declines with AI use
  • LLM users forget what they just wrote
  • AI use disrupts memory and learning pathways
  • LLM users felt detached from their work
  • Switching from LLM to brain use doesn’t fully restore function
  • Search engine users showed healthier brain engagement
  • AI dependency leads to cognitive offloading
  • Short-term gains, long-term cognitive debt
  • Over time, participants reported a decline in engagement, performance, and satisfaction
  • When participants stopped using AI, they exhibited symptoms similar to those of drug addicts going through withdrawal

EEGs taken while participants were engaged in the task showed very little activity in the brains of people who were using AI to write their essays. The brains of people who didn’t use AI were by far the most active. The group that used Google search as their primary source of help showed activity between that of the other two groups.

I’m not a scientist, so all I can do is report my own “anecdotal evidence.”

For software development, when using Claude in VS Code with Copilot, I experience the sort of detachment and lack of satisfaction the participants in the MIT study reported. Even after reviewing the code Claude produces, it doesn’t feel like “my” code, and I don’t recall exactly what the code does. I can read it and understand it, but I don’t retain it in any detail.

When it comes to “memory and learning pathways,” I can’t speak to the physical pathways in the brain. My personal experience is that when I’m exploring an unfamiliar area of software development, I can get something to run pretty quickly using an AI assistant, but at the end of the day I haven’t learned much, if anything, about the topic.

Subsequently, when I need to apply the same “new” skill again, I find I must use the AI assistant again…every time. I never get past the basic learning curve. I don’t feel as if I, personally, have accomplished anything, even if I can “ship code.”

Learning how to refine the prompts doesn’t feel like learning anything real. It’s trivial; almost mindless. If the EEG readings from the MIT study are accurate, then maybe the use of AI really does become mindless over time, quite literally.

Working through mistakes and directly seeing the results of small changes made “by hand,” I learn much more, even if it takes longer to produce the initial results. After the initial results, subsequent applications of the “new” skill go smoothly; I’m over the basic learning curve. This never happens when I “learn” something via an AI tool.

The concept of the “beginner’s mind” is powerful, but to take it to the level that one physically degrades one’s brain until one can never be anything but a beginner? That seems excessive. But maybe we don’t remain beginners forever. Maybe we cross a threshold at some point beyond which the AI is the user and we’re the tool. We become less than a beginner, because a beginner at least is going somewhere.

Another aspect of learning new things is the visceral pleasure it produces. This comes from the process of learning by doing, not from the speed with which a tool can spit out a result. This pleasure is completely absent when I build something with AI assistance. I think this is because there is no learning process. I’m not keen to give this up in the interest of delivering mediocre results faster.

AI assistants for software development

Colleagues who have taken the trouble to develop detailed, extensive workflows involving AI assistants for software development have reported good results, provided the developer already has significant professional experience that enables him/her to assess (and either reject or correct) the AI-generated results in a knowledgeable way.

Although their reports are positive, I can’t help noticing they spend considerable time refining the “context” information they provide to their AI tools. This makes me wonder if they’re really saving time, when we consider the overall development process and not just the code-production activity.

When I’ve used AI to help create common types of applications, like a Webapp based on React that performs CRUD operations against a relational database, I’ve gotten mixed results out of the box. On one occasion, the tool generated a working app that met all the specifications I gave it. It produced code that was well-structured and easy to understand, too. A win!

On another occasion, with similar context provided, the tool fell down. It didn’t seem to know what to do at all. Copying and pasting a sample app as a starting point would have been faster and less frustrating.

Using AI to help generate a typical kind of CRUD application in COBOL for z/OS, CICS, and DB2, I found that the tool did respond to context information and sample files that I provided, but still never came close to generating usable code.

One colleague suggested referencing the IBM documentation as part of the tool’s context, but this didn’t help. The tool “hallucinated” things that were not documented, and that don’t exist – CICS commands, JCL DD statement parameters, and in one case an assembler instruction.

Another issue with mainframe technology is that there is no single commonly-accepted style or “way” of doing things. Back in the day, mainframe applications were proprietary code. People didn’t share code, partly because there wasn’t a Web yet to share it on, and partly because the source code was closed.

Each company developed its own standards and conventions, and there are many ways to write the same source statements. Even with more and more context, in my experience the AI tools tended to slide away from the standards and conventions set up in that context. I found I had to provide “context” so detailed that it amounted to the code itself.

This is in contrast with languages like C# or Java or Python, for which widely-accepted standards and conventions exist, and for which many common functions are provided by well-known libraries. It’s easier for the AI tool to make reasonable guesses when details are omitted from the prompt and context.

In the end it was much faster and much more reliable to code my own boilerplate programs and use those as the context for the AI tool, telling it to generate code to support business rules and place that code only in specified locations in the program source file.

But I must say this approach still took longer than copying one of the boilerplate programs and typing in the business logic myself. Also, when I write the code myself, I know exactly what it is doing.

You may remember a tool called TELON, originally developed in the 1970s and still around today after many updates and enhancements over the years. The original version, for mainframe applications, consisted of a set of IBM System/360 assembler macros that could spit out COBOL, PL/I, or Assembler code for CRUD applications. The generated applications could target batch execution as well as the most popular interactive environments of the era (CICS, IMS/DC, ISPF, or DATACOM/DC), operating against any of VSAM KSDS, IMS/DB, IDMS/R, DATACOM/DB, or DB2 as the back-end data store. The generated source programs had well-defined places where we could drop in our application-specific logic.

The “feeling” of using TELON was quite similar to writing your own boilerplate programs and then copying/pasting them. In fact, you could tweak the macros so they generated code aligned with your company’s internal standards and conventions. Early versions of the tool shipped with the assembler source. Convenient. Much more effective and efficient than using an AI assistant to do the same work today.

So, it seems to me using an AI assistant actually takes longer than writing my own code. That has been true in the cases when I’ve tried using an AI assistant. I can’t speak to the whole universe of use cases, of course.

I can see using an AI assistant to generate a starter app, but how is this better or faster than using one of the numerous starter-app generation scripts, like ‘create-react-app’, Rails’ ‘scaffold’ generator, or ‘dotnet new’ – modern-day equivalents of TELON?

A script generates consistent results, while an AI tool is non-deterministic. We don’t need to review the output of a standard script line by line to make sure it hasn’t hallucinated something. After reviewing the output once, we know exactly what the tool will produce every time.

I know of at least one specific situation where an AI coding assistant adds value. It’s quite possible there are other good use cases for it, too. I’m thinking of Event Modeling, using the tooling Adam Dymitruk developed and uses for client projects. With an AI assistant integrated into the modeling tool, it’s quick and easy to produce the underlying code for a slice.

By the time we get to that point in development, the functionality of the slice has been defined at a pretty fine-grained level in the model, so there isn’t much wiggle room for the AI assistant to go wrong. It generates correct code around 80% of the time, and otherwise requires very little correction.

AI can produce source code for a very well-defined and very small module. The smaller the better, because there’s less to review and less to fix.

There’s a brief overview of Event Modeling here: https://eventmodeling.org/posts/event-modeling-cheatsheet/, if you’re interested. It’s a bit different from conventional coding practices.

You might point out that most of our work involves addressing existing code bases rather than creating wholly-new solutions. I’ve found AI assistants to be useful for generating a summary of the functionality of unfamiliar code, as well as helpful in refactoring existing code.

Besides those, I’ve found a couple of common situations when an AI assistant is useful. One is creating a regular expression. The other is developing a complicated SQL query. I wouldn’t blindly accept whatever the AI tool generated without checking it first in either case, but I have found such tools helpful.
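Even then, checking the result can be lightweight. If an AI assistant hands me a regular expression, I can run it against inputs whose answers I already know before trusting it. A minimal sketch in Python, with a hypothetical pattern and test cases:

    import re

    # Suppose an AI assistant produced this pattern for ISO 8601 dates (YYYY-MM-DD).
    # Before trusting it, run it against inputs whose answers are already known.
    pattern = re.compile(r"^\d{4}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])$")

    should_match = ["2024-01-31", "1999-12-01"]
    should_reject = ["2024-13-01", "2024-00-10", "24-01-31", "2024-1-3"]

    assert all(pattern.match(s) for s in should_match)
    assert not any(pattern.match(s) for s in should_reject)
    print("The pattern behaves as expected on these cases.")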

So, AI does have some practical uses in this field. The problem is that people are becoming married to AI for too many trivial tasks. They’re forgetting how to do basic things.

Can a robot write a symphony?

In the 2004 movie version of Isaac Asimov’s “I, Robot,” detective Del Spooner asks a robot, “Can a robot write a symphony?” The robot replies, “Can you?”

Proponents of the idea that AI has reached human levels of creativity are fond of citing that dialogue from the movie triumphantly, as if it ends the argument about whether AI can “create.”

We’re circling back to something I mentioned early on, that some people want to convert creative tasks into mundane ones so that “anyone” can do them by using AI. People are saying, quite seriously, that given the state of AI today, we’re already at the point that an AI could write a movie score that is “just as good” as a human composer’s work.

When pressed, some of them take a step back and say that it’s unfair to compare an AI with a top-tier composer (or artist or musician or writer), and that we should be comparing them with “average” creative output. What’s that, exactly? Apparently, it’s the level of quality an untrained person could achieve after two or three tries.

The problem with the “I, Robot” quote is that detective Del Spooner is not a composer. If the robot asked me, I could say “Yes. I’ve written 13 symphonies so far. Stay tuned.”

Maybe the robot would try again. “Okay, meatbag. Can you write a book?” I could say “Yes. I’ve written a half dozen or so. Still typing away.”

“I’m not giving up,” says the robot. “Can you paint a picture?” I could say, “I can almost draw a crude stick figure, if I try three or four times, and if you don’t care too much about how good it looks.”

“Aha! Gotcha!” proclaims the robot. It has discovered the degree of quality at which it can claim human-like creativity.

Whether we’re talking about music, painting, sculpting, poetry, prose, or any other form of art, there is no way an AI can match the quality a trained and experienced artist can achieve. Can it spew out something an untrained person might be satisfied with? Probably, usually, maybe, sometimes, I guess, sort of, if you say so.

AI can draw a stick figure at a level of quality that matches or exceeds my own. Is that the benchmark you really want? Is that what you mean by “just as good as a human can do?”

You can always find a human, somewhere, who can’t do the thing you’re trying to prove the AI can do. But you can’t find an AI, anywhere, who can do anything whatsoever without being prompted and guided by a human. Any creativity in play is that of the human who is using the AI, and not of the AI itself.

If we’re talking about art, then what’s the value of the thing the AI produced? I’m not asking whether it has a sale price. What’s the true value of it, as art? Do proponents of “AI everything” even understand the question?

Using AI to generate “art” has the same issues as using it to generate software. You’ll feel disengaged from the work. You won’t really understand how it was put together. You won’t be able to produce anything similar to it without depending on the AI again and again. You won’t get the visceral pleasure of learning by doing. And the result will never rise to the level of the work of a skilled human.

So what?

Am I arguing against the use of AI? No. I use it myself. I just avoid becoming dependent on it. If I want to disable 75% of my brain, I’ll use a spoon. Eye sockets are about the right size, I think.

I use LLMs for the things they’re really good at. I often can’t find the right search terms to dig up information I’m looking for online. ChatGPT can take a vague description of what I’m looking for and scour the Internet in seconds. It usually finds what I need, or at least finds clues I can use to find what I need. But it’s not my primary search tool.

As the name “language model” implies, LLMs are great for translation between human languages. I use ChatGPT as one of several tools for learning foreign languages. But it’s not my primary language-learning tool.

Now, some AI proponents will say I don’t need to learn foreign languages. I can download a phone app that can translate conversations in real time.

Yes, I know. I have one of those. I’ve used it when necessary. But there’s still the visceral pleasure of the learning process as such. There’s a sense of accomplishment when I can read something in another language and not have to look up many words, or when I can follow the gist of the dialogue in a foreign movie or TV show, or when I can have a conversation with someone beyond the level of pointing and grunting.

And if my phone battery dies, I can still get along in the foreign country. More importantly, I can connect with people more effectively than I can when using the app.

If you’re working in another country, rather than just visiting tourist sites, you can’t use the phone app all day long with your colleagues. They would get pretty tired of that.

As a technical coach, consultant, and trainer, I incorporate AI tools into my day job, too. I don’t present them as the ultimate solution to every problem, because they aren’t. They have their place. They have their strengths. They have their limitations. They can be learned and used in conjunction with learning and using other skills.

I’m not opposed to AI. What I’m opposed to is subordinating our humanity to AI.