Curated Resource

Why I'm Not Worried About My AI Dependency

my notes

Hello Timo,
I have been thinking a lot about AI lately, and specifically about whether we should be worried about our over-reliance on it. Because if I am being completely honest with myself, I use AI for absolutely everything now. Every email that comes in gets pasted into Claude for analysis. Every project brief gets discussed with it. Every piece of writing gets shaped by it. When Claude goes down, my entire workflow grinds to a halt.

So should I be worried about this dependency? Should you?

After spending the last few weeks working through this question, I have landed somewhere that might be useful to share. Because I think the conversation about AI is happening right now in organizations everywhere, and the dividing line between those who embrace it and those who resist it matters more than most people realize.

The dependency question

When I first noticed how reliant I had become on AI, my immediate reaction was concern. I started thinking about all the things that could go wrong. What if Claude disappeared tomorrow? What if I was outsourcing too much of my thinking? What if I was losing critical skills?

But then I started looking at all the other dependencies in my working life:

  • If the internet goes down, work stops
  • If the power goes off, my life stops
  • If AWS servers fail (which seems to happen every other week), half the tools I rely on become useless
  • If Figma stops working, design work halts

Just one more dependency

We have built our entire professional lives on top of dependencies we barely think about anymore. AI is just one more in that stack.

The question is not really whether we should be dependent on it, because that ship has already sailed for most of us. The question is what kind of dependency we are building.

The thinking question

The more interesting concern for me is whether AI makes us stop thinking. I have heard this worry from a lot of people, and I understand where it comes from. Because when you watch someone paste a problem into ChatGPT and blindly implement whatever comes back, it does look like they have outsourced their brain.

But I think this misunderstands what most of us are actually doing with AI.

Three layers of thinking

There are three layers of thinking that happen on any given day:

  • Strategic thinking about project direction, what problems need solving, what approach makes sense
  • Analytical thinking about whether an idea is sound, whether evidence supports a conclusion, whether a design solves the actual problem
  • Mundane thinking about how to word an email, how to structure a document, how to format a proposal

AI as a thinking partner

What I have found is that AI handles that bottom layer beautifully. When a client sends me a long rambling email with five different questions buried in three paragraphs of context, I no longer spend mental energy untangling it. I paste it into Claude and say, “Summarize the key questions here.” Then I think about my answers. I tell Claude what I think about each point. Sometimes I ask for its perspective on one or two where I am genuinely uncertain, not because I cannot think through it myself, but because having a sounding board helps me think better.

When I worked in an agency, I had colleagues for this. I would turn to Marcus or Chris and say, “What do you think about this?” I do not have that anymore. AI fills that gap. It does not replace my thinking. It helps me think more clearly by taking away the low-level cognitive load and giving me something to bounce ideas against.

The value question

Where this gets really interesting is in what it lets me deliver to clients.

The landing page playbook example

I worked on a project recently where a client wanted to improve the conversion rate of their landing pages. They had a budget that, in the past, would have stretched to maybe three or four sample landing pages and a conversation about why I built them that way. That would have been useful, but limited. They would have had some examples to work from, but not much guidance on how to replicate the approach themselves.

With AI, I was able to create an entire playbook. Detailed guidelines for every component. Design principles explained with examples. A system they could use again and again. I delivered probably four times the value in about a third of the time it would have taken me before. The strategic thinking was all mine. The understanding of what makes landing pages convert came from 30 years of doing this work. But the documentation, the articulation, the packaging of that knowledge into something comprehensive and usable came from working with AI.

Why clients still need expertise

Most of my clients will not do this work themselves, even with AI:

  • They do not know what questions to ask
  • They do not have the pattern recognition that comes from seeing hundreds of projects
  • They cannot evaluate whether the output is actually good or just sounds convincing
  • They do not have the time to review and iterate on the output to improve it

That is what they are paying me for. AI does not replace that expertise. It amplifies what I can do with it.

The real conversation

I think what bothers me most about the anti-AI sentiment I see is that it misses the point. People post about “AI slop” and declare they are “AI-free” as if that is some kind of badge of honor.

The conversation should not be about whether to use AI. That question has already been answered by the market. The conversation should be about how to use it well. How to maintain the strategic thinking while leveraging the tool. How to keep the human insight while letting the machine handle the grunt work. How to deliver more value in less time without sacrificing quality.

Because in my experience, the people who need UX professionals are not suddenly going to do it themselves just because AI exists. They still do not have the time. They still do not know what questions to ask. They still cannot evaluate quality. What changes is that the UX professionals who embrace AI can deliver significantly more value than those who resist it.

The symbiosis advantage

I am not threatened by AI. I am empowered by it:

  • It lets me hold far more complexity in my head than I could before
  • It lets me process larger amounts of information
  • It lets me deliver more refined, more thorough, more valuable work

All the things AI does badly (high-level strategy, judging quality, understanding human needs, driving projects forward) are exactly the things clients need me for.

So I am leaning into this dependency. Deliberately. Because it allows me to deliver more value in less time. My clients get better work, delivered faster, for the same investment. That is why I am in business. AI has become another tool in my arsenal, like Figma or analytics platforms or any of the other things I rely on to do my job well.

Read the Full Post

The above notes were curated from the full post at www.linkedin.com/pulse/why-im-worried-my-ai-dependency-paul-boag-lba6e/.

