Preface
As the saying goes, "learn, unlearn, and relearn": pick up new knowledge and skills, discard outdated or inefficient processes, and revisit your understanding of familiar workflows, updating it accordingly.
A cycle to keep us adaptable to the rapid changes in and around us.
With AI agents, I keep seeing perspectives along the lines of: "structure, constraints, and harnesses that induce as much determinism as possible are necessary to make these generation machines work at scale and be reliable to a respectable degree."
It's almost as if we could invent some reduced instruction sets to unambiguously convey consistent behavior to computers!
Anyways, I digress.
Looking back
Along this trajectory, we are in a way rediscovering old and tested design and development principles.
TDD, spec-driven development, intent-based APIs, Extreme Programming, and others are being revisited and relearned to build things up from first principles.
Some are looking at what it takes to build AI infrastructure, others at tooling for design and formal verification, some at context management, and some, more recently, at cybersecurity (such as enterprise solutions that may emerge centered around Mythos).
And in many cases, we are noticing best practices and approaches as emergent properties not very different from what we already had over decades of learning and unlearning.
The traction of https://github.com/juliusbrussee/caveman, which minimizes token consumption by dropping filler words from prompts; of formats like https://github.com/toon-format/toon;
of intent specs like https://platform.claude.com/docs/en/agents-and-tools/agent-skills/overview; and of programmatic interaction in https://github.com/stanfordnlp/dspy shows we are revisiting concepts that already exist.
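The filler-dropping idea is easy to picture in a few lines. This toy sketch is my own illustration, not caveman's actual word list or algorithm; it simply strips common stopwords from a prompt, trading grammar for fewer tokens:

```python
# Illustrative stopword list; the real tool's list will differ.
FILLERS = {"the", "a", "an", "please", "could", "you", "that", "of", "to", "is"}

def compress(prompt: str) -> str:
    """Drop filler words from a prompt to reduce token count."""
    words = prompt.split()
    # Normalize case and trailing punctuation before the lookup,
    # but keep the original word form in the output.
    kept = [w for w in words if w.lower().strip(".,!?") not in FILLERS]
    return " ".join(kept)

print(compress("Could you please summarize the main points of the report?"))
# → summarize main points report?
```

The compressed prompt is cruder but still conveys the intent, which is the whole bet: modern models recover meaning from terse input well enough that the saved tokens are nearly free.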
Even if we consider coding commoditized (assume it is), it's still just a fraction of what software is about, and we need to build and learn systems that can manage the rest reliably with AI in the loop.
With the advent of LLMs, a feeling of "all that can be explored should be explored" permeates current discourse. Basically, look at what's possible.
But I think eventually we will reach a stage where we again focus on problems that are worth solving "the right way" and that make sense economically.
So what to unlearn?
The paths to identifying problem spaces aren't different from what they used to be (Silicon Valley and VCs have manufactured demand in many cases, but for finding problems let's treat that as the exception).
For example, personal experiences have driven entrepreneurs practically forever.
Pierre Omidyar felt there wasn't an auction space to sell stuff, and built eBay.
Forgot to bring your USB drive but still want access to your files? Relatable?
Drew Houston built Dropbox to solve exactly this.
Now that a lot more people have the power to create software, more niches will be explored. Some creations will survive and gain traction; most will perish.
Personal experiences and background will direct us to various domains in this exploration.
Exposure to functional programming and category theory in my earlier years may not have translated into tangible artifacts, since my day job never needed them. But it certainly moulded my thinking in ways that help me reason differently.
When deciding what to build, I may start by looking for formalisms and models first, as opposed to just getting code generated.
Someone from humanities or legal may think about social problems in vastly different and creative ways compared to programmers.
And in this journey, all experiences and perspectives matter.
The same problems of distributed consensus, storage engines, schema evolution, and so on were and are being tackled in distinct ways; there is no one true way.
Mobility is not solved (no, the Loop doesn't work), and AGI isn't around the corner.
The pursuits will continue. Learn and relearn to see what gives you your unique insight, and use all your past experiences to arrive at the what, why, and how. There's nothing to unlearn.