Are we engineering the AI, or is the AI Engineering us?

Assumption of Control

I was recently watching a TED Talk about how we have found ourselves at a time when our usage of AI has begun to influence our language. We talk about AI as something we are building, through the engineering of models, architectures, and objectives. But lately I’ve been wondering whether that framing is incomplete. Not because AI is conscious or malicious, but because something quieter is happening. The systems we design to respond to us are beginning to shape how we think, what we notice, and what feels worth pursuing. And I’m not sure where engineering ends and influence begins.

It’s subtler than a dramatic loss of agency. Instead of being controlled, we are being gently guided and nudged towards certain kinds of curiosity and mental shortcuts. And it raises an uncomfortable question: are we still engineering these systems, or are they slowly engineering us in return?

Tools That Replace vs Tools That Extend

Humans have always built cognitive tools. Writing extended memory; telescopes extended perception beyond human limits. But there is an important distinction we often overlook: some tools replace effort, while others extend it. A calculator spares us from doing arithmetic, but a telescope doesn’t replace seeing; it extends what can be seen. Both kinds of tools are valuable, but they change the user in different ways. If you were asked to climb a hill, you would put on some gear and climb it step by step; you wouldn’t bring in an industrial machine and flatten the hill.

Most current AI applications promise to help us learn, think, and create, but in the name of reducing friction they often remove the very processes by which learning occurs. When we studied without LLMs, it was just a book, perhaps a notebook for writing things down. We would read a chapter whilst noting what we understood, then do a second, third, or even nth pass depending on how confused we felt at the end. Finally, we would solve some past papers or practice questions to reinforce that logic in our brains.

Unfortunately, reading now collapses into summarization, writing into prompting, and practice questions into explanations on demand.

Real learning comes from engaging with a difficult concept in your head until it finally “clicks” and you gain clarity. The journey to that “click” is an intellectual exercise that also lets you recall the concept for much longer. AI tools have reduced that journey to a single prompt. I notice that whenever I try to learn using AI tools, I forget the concept within a matter of days.

Outsourcing the Click

Using AI tools feels efficient; you think that you are learning faster. Knowledge becomes something we access rather than something we build over time. But here is the problem: the conclusions we arrive at are not really our own. We reach them without fully understanding the knowledge behind them, which would have given those conclusions meaning.

Struggle is not an obstacle to understanding; it is the very thing that enables it. Modern AI tools have optimized that part away under the excuse of saving time. By removing friction, they have led users to believe that friction was an unnecessary part of learning.

Cognitive Ascension

Nobody told us to think less. The environments created by these apps do that for us. When answers are readily available, our minds begin to treat “trying” as optional. Why struggle with hard concepts or wrestle with difficult questions when a model can do that for us? Over time, this mindset narrows our curiosity.

This is not the only way to use AI-assisted thinking. It is possible to imagine applications that do not outsource thinking, but instead elevate it. Systems that help you articulate your own thinking before offering theirs. What is missing is a sense of time. We need tools that connect our ideas across time, allow us to reflect beyond what we are thinking now, and show us how our thinking has evolved. In this vision, AI doesn’t eliminate effort but redistributes it. A tool like this won’t optimize for speed, but for extrapolation. Such tools should not feel like shortcuts, but more like climbing aids that allow you to access higher ground without flattening the mountain.

Who is engineering whom?

The process of thinking is more like a muscle than a service. It grows through resistance, not convenience. Tools that remove this resistance may feel helpful, but in the long term, they risk weakening our thinking capacity, the very thing they were meant to support. Tools that reshape resistance by making it more focused or intentional can help strengthen it instead. The concern isn’t our dependency. It’s brain atrophy disguised as efficiency.

In the question of who is engineering whom, I think the answer is not either-or, but rather a feedback loop. We shape the systems we build, and those systems end up shaping us by altering our habits, expectations, and the thoughts we have. If intelligence is shaped by its environment, then the most important question isn’t what AI can do for us, but how it should be designed such that it accompanies our thinking instead of making things easier for us.

Acknowledgements

Thanks to Arnav Jhajharia for proofreading the blog.