We live in a world of frameworks, APIs and out-of-the-box solutions.
Want a website? There’s Next.js.
Training a neural network? Just use a Hugging Face pipeline.
Deploying an app? Oh, DigitalOcean must have a template for that.
Don’t get me wrong, these tools are great and make our lives so much easier. They allow us to abstract away complexity and increase our productivity. But recently I have started to ask myself: do I TRULY UNDERSTAND what I am building?
This thought has driven me to start building well-known models like RNNs and neural nets from scratch.
How it started
Recently, I decided to follow Karpathy’s tutorial on a small autograd engine. I copied his code and then started adding my own components (ReLU, Adam, etc.). Just NumPy and a blank file. A single weekend project taught me more about neural networks than any course ever had!
I spent most of the time struggling with errors, bugs, and a bunch of silly mistakes. But that slow, painful struggle was worth it: it concretized the ideas in my mind in a way that makes them hard to forget.
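To give a flavour of what that weekend looked like, here is a rough sketch of the kind of scalar autograd value the whole project revolves around. This is illustrative code written in the spirit of micrograd, not Karpathy’s actual implementation: just enough to show that addition, multiplication, a ReLU, and backpropagation fit in a few dozen lines of plain Python.

```python
class Value:
    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None   # how this node pushes its gradient to its parents
        self._prev = set(_children)

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad        # d(a+b)/da = 1
            other.grad += out.grad       # d(a+b)/db = 1
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad   # d(a*b)/da = b
            other.grad += self.data * out.grad   # d(a*b)/db = a
        out._backward = _backward
        return out

    def relu(self):
        out = Value(max(0.0, self.data), (self,))
        def _backward():
            self.grad += (out.data > 0) * out.grad   # gradient only flows where the unit was active
        out._backward = _backward
        return out

    def backward(self):
        # topological order guarantees a node's grad is complete before its parents run
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for node in reversed(topo):
            node._backward()


# a single neuron: y = relu(w*x + b)
x, w, b = Value(2.0), Value(-3.0), Value(7.0)
y = (w * x + b).relu()
y.backward()
print(y.data, x.grad, w.grad, b.grad)   # 1.0 -3.0 2.0 1.0
```

Nothing here is clever; that is the point. Once you have typed something like this yourself, the word “autograd” stops being magic.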
Why I want you to start building from scratch
You learn how things work
When you build things on your own, there are no black boxes. Every decision you make and every tradeoff you accept is your own, which lets you understand the system more deeply.
When you’re just using a framework like PyTorch or TensorFlow, many design choices feel arbitrary. But when you build a neural network from scratch, you begin to see the hidden reasons behind those decisions, often rooted in stability, performance, or simplicity.
Take activation functions, for example. At first, they all seem interchangeable. But once you implement them yourself and observe how gradients flow during backpropagation, you quickly learn the difference. Sigmoid and tanh may look smooth and elegant, but they can squash gradients to near zero, especially in deep networks. Suddenly, your model stops learning and you understand why ReLU became the default in modern architectures.
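To make that concrete, here is a tiny NumPy experiment (a hypothetical toy calculation, not a real network): compare the local derivative of sigmoid against ReLU and see what repeatedly multiplying by it, as the chain rule does during backpropagation, would do to a gradient over twenty layers.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    s = sigmoid(x)
    return s * (1.0 - s)            # never larger than 0.25

def relu_grad(x):
    return (x > 0).astype(float)    # exactly 1 for active units, 0 for dead ones

x = np.linspace(-2.0, 2.0, 5)       # a few pre-activation values to probe
depth = 20                          # imagine chaining 20 such activations

# Backprop multiplies the local derivative of every layer along the path.
# Even in sigmoid's best case (derivative 0.25, at input 0) the signal shrinks
# geometrically; an active ReLU path multiplies by 1 and leaves it intact.
print("sigmoid local grads:", np.round(sigmoid_grad(x), 3))
print("ReLU local grads:   ", relu_grad(x))
print("best case after", depth, "sigmoid layers:", 0.25 ** depth)
print("active path after", depth, "ReLU layers: ", 1.0 ** depth)
```

Even in the best case, twenty sigmoid layers shrink the signal by roughly twelve orders of magnitude, while an active ReLU path passes it through untouched.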
These are the kinds of insights you only gain when you push yourself to build things from the ground up.
Memorization vs Actual Understanding
Using frameworks makes it easy to fall into the trap of memorizing steps instead of understanding them. Most AI engineers can write a PyTorch training loop from memory: define the model, specify the optimizer, loop over the data, call backward, and step. But a great AI engineer knows what those steps do under the hood. That understanding is not an abstraction; it becomes essential when you are debugging tricky behavior, implementing something novel, or optimizing for performance.
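For reference, here is that loop, with comments on what each line is quietly doing for you (the model and data are made-up placeholders, just to keep the sketch runnable):

```python
import torch
import torch.nn as nn

# Placeholder model and data, purely illustrative.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
x, y = torch.randn(64, 10), torch.randn(64, 1)

for epoch in range(10):
    optimizer.zero_grad()     # clear the grads accumulated by the previous backward pass
    pred = model(x)           # forward pass: builds the computation graph as it runs
    loss = loss_fn(pred, y)   # a scalar node at the root of that graph
    loss.backward()           # reverse-mode autodiff walks the graph and fills each p.grad
    optimizer.step()          # Adam nudges each parameter using p.grad and its moment estimates
```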
Knowing how systems work makes you a better user of them
You don’t need to reinvent everything. But when you have built the underlying systems yourself, you start using tools with far greater intention.
When you’ve implemented backpropagation by hand, you’re not just calling .backward() anymore. You’re thinking about how gradients flow, where they might vanish or explode, and how each layer contributes to the final loss. That awareness changes how you design models, choose activations, and debug training failures.
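As a small illustration, this is the kind of sanity check that mindset encourages: derive the gradient of a tiny model by hand and compare it with what .backward() reports (a toy sketch with made-up numbers, PyTorch assumed).

```python
import torch

# Toy model: loss = mean((w*x + b - y)^2) for a scalar weight and bias.
x = torch.tensor([1.0, 2.0, 3.0])
y = torch.tensor([2.0, 4.0, 6.0])
w = torch.tensor(0.5, requires_grad=True)
b = torch.tensor(0.0, requires_grad=True)

pred = w * x + b
loss = ((pred - y) ** 2).mean()
loss.backward()               # let autograd do its thing

# The same gradients, derived by hand with the chain rule:
#   dL/dw = mean(2 * (pred - y) * x)
#   dL/db = mean(2 * (pred - y))
with torch.no_grad():
    dw = (2 * (pred - y) * x).mean()
    db = (2 * (pred - y)).mean()

print(w.grad.item(), dw.item())   # both -14.0
print(b.grad.item(), db.item())   # both -6.0
```

If the two disagree, either your derivation or your mental model of the graph is wrong, and both are worth finding out.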
This kind of knowledge allows you to become a more active, critical user. All that magic becomes your intelligence.
Why reinvent when the wheel already exists?
Yes, rebuilding things blindly is a waste. But if you want to master something, building it from scratch is the ultimate stepping stone towards earning that mastery. The goal is to unbox the black box and think like its inventor, which in turn unlocks every decision and every little detail of that system.
Ending note
You start with nothing but an idea and your own understanding. It’s challenging, honestly quite hard, but once you have implemented it, it’s an incredibly fulfilling experience. I always feel like I have climbed a mountain whenever I implement something from scratch.
If it helps, Richard Feynman agrees with me.
“What I cannot create, I do not understand.”
— Richard P. Feynman
Thanks to Pradyu for proofreading!