
Saturday, March 14, 2026

The Real AI Impact on Software Engineering

In this post I try to separate both the hype and the naysaying from what I am actually experiencing and seeing around me.

There is a well-worn observation that we tend to overestimate the short-term impact of new technology and underestimate the long-term impact. AI fits this pattern almost perfectly.

On one end, you have tech CEOs claiming all white-collar work is about to disappear. On the other, you have experienced engineers who tried ChatGPT once, many months ago, and concluded it is a toy. Both are wrong, but for different reasons.

Why the Hype Exists

The bold claims are not accidental. When you need to raise tens of billions of dollars to build GPU datacenters, nuanced messages do not get the job done. "AI will meaningfully improve the productivity of senior engineers over the next few years" does not unlock capital markets. "AI will replace all software engineers" does.

Similarly, the news cycle swings between "vibe coding can build anything" and "vibe coding caused a massive outage." Neither framing is particularly useful if you are trying to make real decisions about how to staff and run your teams.

Why the Skepticism Exists

When I talk to friends and former colleagues outside the large tech companies, I notice a consistent gap. Many of them tried AI tools months ago, maybe with GPT-3.5 or an early Copilot version. Their company does not provide access to frontier models like Claude Opus or GPT-5.4, let alone unlimited tokens. They tried it, it felt mediocre, and they moved on. 

The pace of improvement has been so fast that the tool they evaluated six months ago bears little resemblance to what is available today. And if keeping up with AI is not your primary job, you simply cannot track all of it. Most have never experienced agentic workflows, SRE agents, SWE agents, or team-shared skills that give these models custom context for your specific codebase and organization.

What Is Actually Happening



Here is what I see when I talk to principal engineers and senior leaders who have been using these tools seriously: their day-to-day engineering work has shifted completely over the last few months. 

In the earlier model, a senior engineer would write a design, sometimes a very detailed specification, hand it to an early-career engineer, wait for the code, review it, iterate, and eventually accept it. That loop could take days or weeks.

Now, those same senior engineers are directing AI agents to do much of that execution work. They provide the design intent, the constraints, the tradeoffs. The agent produces the code. The senior engineer reviews, adjusts, and ships. The cycle compresses dramatically. 

The engineers who understand systems deeply, who know the principles of distributed computing, who can reason about tradeoffs across reliability, performance, cost, and security: those engineers are becoming genuinely 10x productive. Not because the AI is magic, but because it removes the bottleneck of translating their knowledge into code line by line. And in this scenario the AI is not going off the rails doing things on its own; these are agents with humans in the loop. Think of the agents as early-career engineers, even interns, who do specific work within set bounds, and whose output is reviewed before it goes into production.

Where the Impact Starts

The uncomfortable truth is that this shift starts at the bottom of the experience ladder. Entry-level engineering roles are the first to feel the pressure. Not because those engineers are not talented, but because the work they typically do, taking a well-defined spec and implementing it, is exactly what AI agents are getting good at.

For the foreseeable future, we will need experienced engineers to direct these agents. You still need someone who knows what good looks like. But instead of that person supervising three or four junior engineers, they are now supervising a set of AI agents. The volume of output goes up, and over time, the agents will mature and move up the complexity ladder as well.

Beyond Software Engineering

This is not limited to code. If someone's job involves looking at data on a screen, making a decision based on patterns, and taking an action on a computer, that role is already under pressure. It may be held in place temporarily by legal constraints, organizational inertia, or government regulation. But if the generic frontier models do not disrupt it today, a startup training a specialized model for that exact problem will.

The framing I find most useful is not "will this role disappear entirely?" but rather "can one experienced person with AI agents do the work of a small team?" In many cases, the answer is already yes.

The Cost Objection

I hear this frequently: "Sure, this works at Microsoft scale, but we cannot afford unlimited tokens and frontier models." That is a fair point today. But the history of compute costs in this industry is unambiguous. Prices drop exponentially. What is expensive today becomes commodity tomorrow. The organizations that wait for costs to drop before even experimenting will find themselves behind the ones that built the muscle early.

The Honest Takeaway

AI is not replacing all engineers. It is not replacing all white-collar workers. Those are fundraising narratives, not operational realities.

But it is genuinely changing the ratio. Teams that once needed eight engineers to deliver a project can do it with three experienced ones and a set of well-directed agents. That is not hype. That is what I am seeing, month after month.

The engineers who will thrive are the ones who understand systems, who can articulate what good looks like, and who learn to work with these tools as a force multiplier. The ones most at risk are those whose primary value was the mechanical translation of someone else's design into working code.

That is the real shift, and it is already here.

Monday, February 16, 2026

Aspiration ≠ Plan



The hidden execution killer: treating aspirations like plans

While this might seem like common sense, I have seen in many large organizations that people do not distinguish between the two, and it results in one of the most expensive execution mistakes.

We talk about aspirations as if they’re plans.

It happens in good faith. A team wants the right thing:

  • improve reliability
  • reduce latency
  • simplify operations
  • close security gaps
  • pay down tech debt
  • left shift quality issues

And in conversation, it’s easy to use plan-shaped language:
“Yeah, we’re planning to do that.”
“We have it in our plans.”
“It’s on the roadmap.”

The problem is that partner teams hear that and make downstream decisions. They assume dependencies will be met. They set expectations with their stakeholders. They schedule their own work around it.

Then nothing ships.

And the failure isn’t usually competence or effort. The failure is that what existed was an aspiration, not a plan.

Why this happens more in complex systems

If you’re building and operating complex systems (distributed services, production workloads, multi-team platforms), there is always more work than time:

  • incidents pull attention
  • customer asks arrive mid-flight
  • urgent operational gaps appear
  • priorities shift because risk shifts

In that environment, any work that isn’t made concrete tends to lose the fight for attention.

That’s why aspirations often “feel real” in a meeting, and then evaporate in the day-to-day.

How to tell if something is a plan

A plan isn’t a slide. It isn’t a statement of intent. It’s something you can execute.

At minimum, the idea has been expanded into work items/tasks, and each of them has the W’s:

  • What exactly will be delivered (not a theme, a measurable outcome)
  • Who owns it (one accountable owner)
  • When it will land (a date, sprint, or milestone)

And just as importantly: it’s written down, assigned, and tracked in whatever system the team actually uses to run its work. It doesn’t matter whether that is ADO, Jira, or an Excel sheet.

If those pieces don’t exist, then even if everyone agrees it’s important, it will struggle to land in a large team. Not because people don’t care, but because the system of work will always prioritize what is explicit over what is implied.

The “phantom plan” problem

A common pattern in cross-team work goes like this:

  1. Team A says: “We’re planning to do X.”
  2. Team B walks away believing X is coming, and plans accordingly.
  3. Later, Team B asks for an update.
  4. Team A replies: “We haven’t started yet, but we still want to.”
  5. Team B realizes X was never actually planned work.

This is the worst kind of misalignment: everyone thinks progress is happening, until reality catches up.

The simple habit that helps

So when someone asks, “Do you have a plan for X?”, the most helpful thing isn’t more confidence. It’s more precision.

Either:

  • “Yes: who owns it, what is the next deliverable, when is the milestone,” or
  • “Not yet: we agree on the aspiration, but it’s not planned work until we assign and track it.”

It’s a small distinction, but in large complex engineering environments, it’s the difference between:

  • “we all agree this is important” and
  • “this will actually land.”

Sunday, July 06, 2025

Home Backup


I just returned home from a trip and had one of those dreaded moments: one of the disks on my home desktop had failed! Thankfully, I have a backup strategy in place that works for me, and this is the second time I have had to use it.

How I Choose What to Back Up

This experience reinforced my belief in keeping things simple. I categorize my data as:

  • Important: Family photos, essential documents, critical projects.

  • Not Important: Old files, random downloads, stuff that doesn’t need my attention.

Less clutter means lower backup cost, easier backups, and faster restores. I really care about keeping multiple backups, including remote backups, of a few critical folders; everything else I can delete at will and never worry about.

Staying Cost-Effective

Finding affordable yet effective backup options has always been my goal. No one wants to spend money unnecessarily on backups that provide minimal additional benefits.

My Backup Philosophy: The 3-2-1 Rule

I follow the popular 3-2-1 backup rule:

  • 3 copies of all important data.

  • 2 different storage types (local NAS and cloud).

  • 1 off-site backup for worst-case scenarios (think fires or theft).

My Actual Backup Setup

Here's the practical side of my setup:

  1. Local NAS: Quick, reliable backups. All my devices constantly back up their entire contents here, which allowed me to quickly recover my desktop's data this time. I treat it as a local cache. I have been using a Synology DS2xx-series NAS for some time now.

  2. Cloud Storage (OneDrive): Essential files from laptops, desktop, and my phone (including camera) sync automatically. This ensures easy access from anywhere. OneDrive is an easy choice for me. The family tier is 1TB per person, and this alone covers everything critical for me except the huge cache of raw images (see below).

  3. Cloud Backup (Backblaze): If not for my huge collection of RAW images from my DSLR, I'd likely have just stuck to the above two. However, my desktop has a few terabytes of images that I simply cannot fit in OneDrive, so my desktop backs up fully (all drives) to Backblaze against serious emergencies. I have been using Backblaze for years. I use their 30-day versioning and have in the past had them do a restore, where they sent me an SSD with the folders I wanted to restore. Backblaze allows you to supply an encryption key that is used in the client and never kept with Backblaze. So if you lose that key, all data is lost.

Backup is Easy, but Restore is Crucial

Backing up data is easy—restoring it reliably when something goes wrong is where the real test lies. Every six months, I run a mini disaster recovery drill:

  • Checking that backups are up-to-date.

  • Restoring a few random files to verify everything works.

This practice made my recent restore straightforward and stress-free.

This disk failure ended up being a positive reminder of how valuable a good backup strategy is. I’d love to hear your backup strategies or any tips you've learned through your experiences. Drop your suggestions and ideas in the comments below!

Resume Tips

As I am filtering through the applications for an open Principal Manager position in my organization, I thought I'd share some quick tips. Also, if you have been impacted by the recent layoffs and need a fresh pair of eyes to go through yours, ping me; happy to help in any way I can.

1️⃣ Keep it lean
Two pages max. Hiring managers, recruiters are pressed for time, and your résumé is your first filter.

2️⃣ Tailor for the target
Start with a slightly longer master résumé, then tighten it for each application. Highlight the experiences that mirror the job requirements and trim the rest. If the role asks for deep technical chops, surface your biggest engineering wins. If it values people leadership, spell out the size, geography and diversity of teams you have guided.

3️⃣ Know every line
Interviewers love to probe the details. Even that “obscure” project from years ago can surface. Be ready to dive deep on anything you list.

4️⃣ Proof. Proof again. Then proof once more.
Ask a friend to review for clarity and errors. Yes, I just reviewed the resume of “Software Engineering Leafer”.

5️⃣ Expand acronyms on first use
Clarity beats cleverness. Write “Time-to-mitigate (TTM)” before you rely on TTM alone.

6️⃣ Show impact, not activity
Microsoft and other top firms care about results. Replace “led migrations” with specifics: “migrated 120M users to new platform, improving sign-in latency by 35 percent.” Include metrics on revenue, cost, uptime, or customer satisfaction whenever possible, and be prepared to defend them in interviews.

7️⃣ Cut the buzzwords
“Visionary, results-oriented ninja” is outdated. Run your résumé through a GenAI tool and ask it to flag fluff and repetition.

💡 Bottom line: Precision, relevance and quantified impact separate strong leaders from a sea of applicants. Invest the extra hour to refine your résumé and you will earn the next-step conversation.

Monday, November 28, 2022

Wordament Solver

 


Many many years back, in an interview, I was asked to design a solver for the game Wordament. At that time I had no idea what the game was, and the interviewer patiently explained it to me. I later learnt that a couple of engineers in Microsoft came up with the game for the Windows Phone platform, and it was such a success that they went and bootstrapped a team and made the game their full-time job.

I was able to give a solution in the interview, but it always remained at the back of my mind. I wanted to go further than the theoretical solution and really build the solver. I began tinkering with the idea a couple of weeks back, and over the Thanksgiving long weekend I got enough time to sit down and complete the solution.

You can see it in action at bonggeek.com/wordament/



Basic Idea

We begin by loading a dictionary into a Trie data structure. Obviously there are fantastic Trie implementations out there, including ones that are highly optimized in memory by collapsing multiple nodes into one; however, the whole idea of this exercise was to write some code, so I rolled out a basic Trie.

If a particular Trie node is an end of word, then that node is marked as such. As an example, a Trie created with the words cat, car, men, man, mad will look as below. The green checks denote valid end-of-word nodes.


Now, starting from each cell of the Wordament board, we start at the Trie node for that cell's character. We look at the 8 adjacent cells (neighbors), and if the Trie node has a child with the same character as a neighbor, that neighbor is a candidate to explore, so we recursively move to that node. At any point, if we arrive at a valid word node, we check whether that word was previously found; if not, we add the word and the list of cells that formed it to the result.

Finally since Wordament gives higher score for longer words, we sort the list of words by their length.

The logic of this solution is implemented in wordament.go.

I built the solver into a web service that runs in a Docker container inside an Azure VM. The service exposes an API. Then I built a single-page web application that calls this web service and renders the solution.

You can hit the API directly with something like
curl -s commonvm1.westus2.cloudapp.azure.com:8090/?input=SPAVURNYGERSMSBE | jq .

The input is all the 16 characters of the Wordament to be solved.

Wednesday, March 02, 2022

CAYL - Code as you like day


Building an enterprise grade distributed service is like trying to fix and improve a car while driving it at high speed down the free-way. Engineering debt accumulates fast and engineers in the team yearn for the time to get to them. A common complaint is also that we need more time to tinker with cool features and tech to learn and experiment.

An approach many companies take is the big hackathon event. Even though those have their place, I think they are mostly for PR and eye candy. Which exec doesn’t want to show the world their company created an AI-powered blockchain running on a quantum computer in just a 3-day hackathon?

This is where CAYL comes in. CAYL, or “Code As You Like”, is named loosely after the “go as you like” events I experienced as a student in India. In a lot of uniform-based schools in Kolkata, it is common to have a go-as-you-like day, where kids dress up however they want.

Even though we call it code as you like, it has evolved beyond coding. One of our extended Program Management teams has also picked this up and calls it WAYL (Work As You Like day). This is what we have set aside in our group calendar for this event.


“code as you like day” is a reserved date every month (first Monday of the month) where we get to code/document or learn something on our own.
There will be no scheduled work items and no standups.
We simply do stuff we want to do. Examples include but not limited to
  1. Solve a pet peeve (e.g. fix a bug that is not scheduled but you really want to get done)
  2. A cool feature
  3. Learn something related to the project that you always wanted to figure out (how do we use fluentd to process events, what is helm)
  4. Learn something technical (how does go channels work, go assembly code)
  5. Shadow someone from a sibling team and learn what they are working on
We can stay late and get things done (you totally do not have to do that) and there will be pizza or ice-cream.
One requirement is that you *have* to present the next day, whatever you did.  5 minutes each

I would say we have had great success with it. We have had CAYL projects all over the spectrum
  1. Speed up build system and just make building easier
  2. ML Vision device that can tell you which bin trash needs to go in (e.g. if it is compostable)
  3. Better BVT system and cross porting it to work on our Macs
  4. Pet peeves like make function naming more uniform, remove TODO from code, spelling/grammar  etc.
  5. Better logging and error handling
  6. Fix SQL resiliency issues
  7. Move some of our older custom management VMs to AKS
  8. Bring in gomock, go vet, static checking
  9. 3D game where mommy penguin gets fish for her babies and learns to be optimal using machine learning
  10. Experiment with Prometheus
  11. A dev spent a day shadowing dev from another team to learn the cool tech they are using etc.
We just finished our CAYL yesterday, and one of my CAYL items was to write a blog post about it. So it’s fitting that I am hitting publish as I sit in the CAYL presentation eating kale chips.

Monday, February 07, 2022

Go Generics


Every month in our team we do a Code as You Like Day, which is basically a day of taking time off regular work and hacking something up, learning something new or even fixing some pet-peeves in the system. This month I chose to learn about go-lang generics.

I started Go many years back, coming mainly from C++ and C#. Also, at Adobe almost 20 years back, I got a week-long class on generic programming from Alexander Stepanov himself. I missed generics terribly and hated all the code I had to hand-roll for custom container types. So I was looking forward to generics in Go.

This was also the first time I was trying a non-stable version of Go, as generics is currently available in Go 1.18 Beta 2. Installing it was a bit confusing for me.

I just attempted go install, which seemed to work, but it turned out it had not; I had to do an additional download step. That wasn't very intuitive.

For my quick test, I decided to port my quick-and-dirty stack implementation from relying on interface{} to a generic type.

I created a Stack with generic type T which is implemented over a slice of T.

var Full = errors.New("Full")
var Empty = errors.New("Empty")

type Stack[T any] struct {
    arr  []T
    curr int
    max  int
}

Creating two functions to create a fixed size stack or growable was a breeze. Using the generic types was intuitive.

func NewSizedStack[T any](size int) *Stack[T] {
    s := &Stack[T]{max: size}

    s.arr = make([]T, size)
    return s
}

func NewStack[T any]() *Stack[T] {
    return &Stack[T]{
        max: math.MaxInt32,
    }
}


However, I did fumble when creating methods on that type, because I somehow felt I needed to write them as func (s *Stack[T]) Length[T any]() int {}. But methods cannot declare their own type parameters, so the extra [T any] is not required.

func (s *Stack[T]) Length() int {
    return s.curr
}

func (s *Stack[T]) IsEmpty() bool {
    return s.Length() == 0
}


Push and Pop worked out as well

func (s *Stack[T]) Push(v T) error {
    if s.curr == len(s.arr) {
        if s.curr == s.max {
            return Full
        } else {
            s.arr = append(s.arr, v)
        }
    } else {
        s.arr[s.curr] = v
    }

    s.curr++

    return nil
}

func (s *Stack[T]) Pop() (T, error) {
    var noop T // 0 value
    if s.Length() == 0 {
        return noop, Empty
    }

    v := s.arr[s.curr-1]
    s.arr[s.curr-1] = noop // release the reference
    s.curr--

    return v, nil
}
However, for Pop I needed to return a nil/zero value of the generic type. It did seem odd that Go does not provide something specific for it. I had to declare a variable noop and return that.

Using the generic type is a breeze too, no more type casting!
s := NewStack[int]()

s.Push(5)
if v, e := s.Pop(); e != nil || v != 5 {
    t.Errorf("Should get popped value 5")
}

Tuesday, June 16, 2020

Raspberry Pi Photo frame


This small project brings together a bunch of my hobbies: carpentry, photography, and software/technology, including face detection.

I have run out of places in the home to hang photo frames, and as a way around that I was planning to get a digital photo frame. When I upgraded my home desktop to 2 x 4K monitors, my old Dell 28" 1080p monitor was left lying around. I used it and a raspberry pi to create a photo frame. It boasts the following features
  1. A real handmade frame
  2. 1080p display
  3. Auto sync from OneDrive
  4. Remotely managed
  5. Face detection based image crop
  6. Low cost (uses raspberry pi)
This is how it looks.


Construction

In my previous project, the smart-mirror, I focused way too much on the framing of the monitor, with the result that the raspberry-pi and the monitor are so well contained inside the frame that I have a hard time accessing and replacing parts. So this time my plan was to build a simple lightweight frame that is put on the monitor using velcro fasteners so that I can easily remove it. The monitor is actually on its own base, so the frame is just cosmetic and doesn't bear the load of the monitor. Rather, the monitor and its base hold the frame in place.

I bought a 2" trim from Homedepot and cut out 4 pieces using a saw and then joined them using just wood glue. To let the glue cure, I held the corners using corner clamp for 12 hours. The glue is actually stronger than the trim itself, so once it dries there is no chance of things falling apart.



On the back of the frame I attached a small piece of wood, on which I added velcro. I also glued velcro to the top of the monitor. These two strips of velcro keep the frame on the monitor.



Now the frame can be attached loosely to the monitor just by placing it on top.

After that I got a raspberry-pi, connected it to the monitor using an HDMI cable, and attached the raspberry pi to the frame with zip ties. All low tech till this point.
On powering up, it boots into Raspbian.

Software

Base Setup

I always start with my base setup:

sudo apt-get update
sudo apt-get upgrade
sudo apt-get install xrdp # install remote desktop
sudo apt-get install vim  # my editor of choice
sudo apt-get install git

git clone https://github.com/abhinababasu/share # get my shell
cp share/.vimrc .
cp share/.bash_aliases .
cp share/.bashrc .

sudo apt-get install unclutter # hide mouse pointer in slide show

To keep things fresh, reboot at midnight every day by adding the following to /etc/crontab:
0  0    * * *   root    reboot

Enable ssh
sudo raspi-config

Portrait mode
1. sudo vim /boot/config.txt
2. Add the line: display_rotate=3

Push Pics

I use Lightroom for managing the photos I take. My workflow for this case is as follows:

  1. All images are tagged with the keyword "frame" in Lightroom.
  2. I use a smart folder to see all these images and then publish them to a folder named Frame in OneDrive

Sync OneDrive to Raspberry Pi

I used the steps in https://jarrodstech.net/how-to-raspberry-pi-onedrive-sync/ 
  1. curl -L https://raw.github.com/pageauc/rclone4pi/master/rclone-install.sh | bash
  2. rclone config
    1. Enter n (for a new connection) and then press enter
    2. Enter a name for the connection (I’ll enter onedrive) and press enter
    3. Enter the number for One Drive
    4. Press Enter for client ID
    5. Press Enter for Client Secret
    6. Press n and enter for edit advanced config
    7. Enter y for auto config
    8. A browser window will now open, log in with your Microsoft Account and select yes to allow OneDrive
    9. Choose right option for OneDrive personal
    10. Now select the OneDrive you would like to use, you will probably only have one OneDrive linked to your account. This will be 0
    11. Y for subsequent questions
  3. To Sync once: rclone sync -v onedrive:Frame /home/pi/frame
  4. Setup automatic sync every one hour
    1. echo "rclone sync -v onedrive:Frame /home/pi/frame" > ~/sync.sh
    2. chmod +x ~/sync.sh
    3. crontab -e
    4. Add the line: 1 * * * * /home/pi/sync.sh

Setup Screensaver

There are many options that I could find online to show the photos. But I chose to go with the easiest one, use the xscreensaver. However, there are some issues and most likely this is something I will revisit.

  1. Disable screen blanking after some time of no use
    1. vi /etc/lightdm/lightdm.conf
    2. Add the lines:
      [SeatDefaults]
      xserver-command=X -s 0 -dpms

  2. Enable auto-login, so that on restart you directly get logged in and then into screensaver
    1. sudo raspi-config
    2. Select 'Boot Options' then 'Desktop / CLI' then 'Desktop Autologin'. Then right arrow twice and Finish and reboot.

  3.  Setup screen saver
    1. sudo apt-get -y install xscreensaver
    2. sudo apt-get -y install xscreensaver-gl-extra

These are my screen saver settings to show the photos in /home/pi/frame as slideshow





Problems and solving with Face Detection

My photos are rarely 9:16 portraits, which means ugly black bars at the top and bottom of the images.


Obvious approach is to crop using some batch tool. But that would mean the crop could arbitrarily cut images out. Consider the following image 
Cropping in a batch tool that picks up arbitrary area of the image generated something like below, which is obviously not acceptable.
To solve this I build a tool at https://github.com/abhinababasu/img. It takes my other project on detecting faces in images and then ensures that in the cropped image the face is retained. E.g. the tool above generates the following image.