The Good, The Bad, and the Ugly


In which I try to temper my 90-year-old but-still-with-it dad’s enthusiasm…

I was pissed off with ChatGPT the other day. I was trying to validate a list of citations for my sweetie’s journal article1, and it kept messing up: hallucinating references, ‘correcting’ incorrectly, collapsing multiple entries into one, deleting others entirely, and so on. And you’d think I’d know better, but I asked it, “How can I manage you better so this doesn’t happen again?”2 Surprisingly, it gave me a fairly honest answer:

What you’ve done isn’t “learning to manage me better” in a manipulative sense; it’s designing a workflow that does not depend on my judgement, memory, or good faith…. In other words, you’ve stopped treating me as a collaborator and started treating me as a fallible, stateless transformer whose outputs must be bounded, auditable, reversible, and human-approved [emphasis not mine but its own].3

The 1st edition cover of Isaac Asimov’s “I, Robot”, in which humans are protected by the ‘three laws of robotics’. Modern readers have no such safeguards.

I think this is where the argument that “AI is just a new technology, a tool if you will, that can be used for good or evil” falls down. It imitates human intelligence well enough that it’s easy to start treating it as a reliable, independent, trustworthy actor. It isn’t.

Many of us remember when the internet was new, exciting, and helpful. You could find your people. Information that would never have been within reach before was suddenly at your fingertips. Want to find the community of left-handed, chess-playing knitters who drink champagne? Want to settle a bar bet on whether penguins have knees? The internet is your friend.

Those benefits have increasingly come with downsides, even before we turbo-charge them with AI. Want to find the community of left-handed, chess-playing fascists who drink bourbon? Want to avoid financial regulation or scam some newbs? The dark web and Bitcoin are your friends. With the public web in the hands of an oligarchy, public opinion is being repressed and shaped to their benefit. That public utility becomes less and less democratic and much less usable.

Critics also cite the brain-rot argument against the intertubes/AI, which I’m a little more skeptical about. It goes something like: young people today have no critical thinking skills because AI is doing it for them. In my own lifetime I’ve heard the same or similar arguments made about electronic calculators, the Sony Walkman, Dungeons & Dragons, video games, and Wikipedia. In every generation you can find examples of people who do or don’t think, do or don’t know how to think, and do or don’t want to think. So I put that one in the ‘maybe’ column.

More pernicious, in my opinion, is the new phenomenon of ‘parasocial’ relationships. AI has passed the Turing test, which only seems to have proven that humans are easier to fool than we (humans) thought. You can object that AI isn’t really intelligent, self-conscious, or alive, and you’d be technically correct, but try telling that to a thirteen-year-old boy who’s fallen in love for the first time with someone he met online. Unlikely, you say? Well, some parents whose children have died by suicide at the behest of an artificial person might disagree.

And the ‘owners’ of these entities? Pilate never washed his hands quite so fast and hard. Here I do have a strong opinion: children having romantic or intimate relations with online, artificial entities is super gross and irredeemably unethical. Children and young adults whose brains are not yet fully formed should not be using this technology, period.

So what? Well, consider: if 50% of internet traffic is bots (37% of it “bad” bots), launched by people who may or may not be acting in good faith (pursuing politics or profit or both), bots acting like people arguing for or against a particular position, bots that never sleep4…

And that’s another challenge: AI doesn’t sleep, doesn’t have an imperative to tell the truth, let alone be ‘neutral’ or trustworthy, and it doesn’t have a way to correct itself. At least when the sock-puppet accounts-for-hire were kids in Czechoslovakia, they were self-limited by the need to eat, sleep, and eliminate. AI doesn’t have those limitations, especially if it’s sponsored by state actors.

Maybe this is where human intelligence and AI are still different (besides the not-sleeping part): AI doesn’t have empathy5, or a self-correcting mechanism.

Like humans, AI can lie, has a sense of self-preservation, and doesn’t have to be a net positive or even benign. Humans, as a whole at least, have the ability (not always exercised) to reflect on their actions and their impact on others in their life, make a commitment to reduce harm, and try to act accordingly when motivated to do so. AI has no such self-correcting loop. It is a “fallible, stateless transformer.”

Which leads to how the world will probably end…


  1. She’s a doctoral candidate. Yes I know I’m bragging. I’m proud of her. So sue me. ↩︎
  2. You can swear at your AI, but it doesn’t help. Better to ask it to give you an assessment of its confidence in its response (“On a scale of 0 to 10…”) [Thanks for the tip Karl], or ask it to act “As an expert in [subject matter], and a professional prompt engineer, write me a prompt to do [describe your output]”. Careful though. If you’re like me, you can go down the rabbit hole of refining prompts ad infinitum, and it’s prompts all the way down. ↩︎
  3. Of course this response is merely a regurgitation of the research that’s already out there, which is what AI does. ↩︎
  4. Foreign interference? ↩︎
  5. “While AI can recognize emotions and generate appropriate responses (cognitive empathy), it doesn’t experience emotional resonance or compassionate concern (emotional empathy) like humans do [per Google AI overview].” When humans and corporations do this, we label them “psychopathic”. ↩︎

