Working Effectively with AI Is a Soft Skill

This simple fact becomes more evident each day, as people with no connection to technical domains pick up AI tools and workflows and excel at them.

But many people, especially individuals with a technical background, are still stuck in the mindset that working with AI is a hard technical skill, similar to learning a new programming language or framework.

This is, in my humble opinion, a fundamentally flawed approach that limits the scope of what can be achieved. Working with AI should be thought of more like the jump from waterfall to agile, or from a monolith to microservices. Done correctly, it’s not just a new tool but a fundamentally different way of working.

Like teamwork or leadership, this is not something you can learn in a 4-hour workshop or a masterclass. You can read about tools, models, and workflows all day, but it will never prepare you for the real hurdles and nuances of daily development. You need to hack your way through; there is no way around it.

If you are replacing your coding with prompting an agent, that’s ok, but that’s only the start. Working effectively with AI requires thinking about your whole development process differently. What does this mean?

“Decently” defined requirements

Defining requirements properly has always been important, but we have all dealt with a 1-line ticket that says something like “Implement account endpoint”. This was manageable pre-AI: we could be resourceful, get the missing information from stakeholders, and lean on tons of implicit knowledge about similar past requirements, what the company expects, and so on, to at least implement a workable solution. All of this breaks down completely if we expect to tackle these issues with AI in any semi-autonomous way.

This doesn’t mean creating a 2-page document beforehand for each requirement, but it does mean that each ticket should contain at least the bare minimum:

  • A minimal description that captures the scope of the story
  • Well-defined and verifiable acceptance criteria
  • Implementation details that capture, in a condensed way, important facts or decisions relevant to the implementation which are not self-evident
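As a concrete illustration, a ticket covering that bare minimum could look like the sketch below. The endpoint, criteria, and details are all made up for illustration; the point is the shape, not the content:

```markdown
## Implement account endpoint

**Description:** Expose a read-only `GET /accounts/{id}` endpoint returning
the account’s basic profile data for the mobile app.

**Acceptance criteria:**
- Returns 200 with id, name, and status for an existing account
- Returns 404 for an unknown id
- Requires an authenticated user; returns 401 otherwise

**Implementation details:**
- Reuse the existing `AccountService`; do not query the database directly
- Follow the response-envelope convention used by the other `/accounts` routes
```

Ten lines like these are often the difference between an agent that guesses and an agent that can actually verify its own work.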

Making the implicit explicit

This point is closely related to the last one, but it is more general, and maybe even encapsulates a big part of the whole mindset shift. We humans are especially well equipped to work in messy environments with fragmented and contradictory information; LLMs are not. Anything that is not in their training data or in the context you feed them simply doesn’t exist. And since LLMs love to answer every request anyway, they will hallucinate to fill the gaps. The more niche your technology or domain, the truer this becomes.

Acknowledging this mechanistic fact prepares us to give LLM tools what they need, but it also means thinking about the whole coding process differently. You need to translate your internal thinking, assumptions, and workflows into documentation. It doesn’t need to be extensive, perfect, or even polished, but everything required for the task at hand should be more or less there.

Here is where your mileage may vary. Depending on how well your project is documented and tested, the delta you need to provide to the LLM to work effectively could be minimal or gigantic. Without context, LLMs start fresh each time. So you need to think from first principles.

This is roughly the mental workflow I follow to start this discovery process:

Imagine your laptop is brand new and you just cloned your project’s repo. What do you do next? Do you need Java? Do you need Node? Which version? Do you need a version manager? Multiple versions?

How do you start the project? How do you run the tests? Do you need to do something extra to set up your local environment? Do you need a database? Does it use Docker? If so, how do you start it? Do you need test credentials? Where do you get those?

Write all of that down or copy a reference from where the information can be located. Hopefully you already have most of this documented somewhere; if not, just put it condensed and in simple words.
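For example, the condensed setup notes for a hypothetical Node project might be as short as this (every command, version, and path here is invented for illustration):

```markdown
## Local setup
- Node 20, managed with nvm (see `.nvmrc`)
- Install dependencies: `npm install`
- Start the app: `npm run dev` (requires the database, see below)
- Run the tests: `npm test`
- Database: Postgres via Docker, start with `docker compose up -d db`
- Test credentials: see `docs/local-dev.md` in this repo
```

Condensed and in simple words really does mean this little; a pointer to existing docs is just as good as restating them.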

Then imagine you start implementing a feature. What do you need to know to implement a new endpoint? Do you have any special naming conventions? Do you have any special folder structure? How do you write tests? Do you have multiple ways of writing tests? Do you have testing conventions? Do you write mostly unit tests or mostly integration tests? Both? Do you have e2e tests? Do you need to mock things in any special way? Do you have multiple testing libraries?

Write all that down, literally however it comes out of your mind; the exact words don’t matter. You can polish the language and style later.

Once you have a rough idea of everything you need to implement a simple feature written down, put all of that into a context file for your LLM tool (AGENTS.md, CLAUDE.md, etc.).
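The context file doesn’t need to be fancy. A minimal AGENTS.md sketch might look like the following; the section names and every convention listed are made-up examples, to be replaced with whatever your discovery process surfaced:

```markdown
# AGENTS.md

## What this project is
One short paragraph describing the service and its domain.

## Setup and commands
How to install, run, and test the project, or a pointer to the
docs that explain it (e.g. `docs/local-dev.md`).

## Conventions
- Endpoints: one folder per resource under `src/api/`, named after the resource
- Tests: mostly integration tests, colocated as `*.spec.ts` next to the code
- Mocking: external services are mocked via the shared fixtures in `test/fixtures/`

## Domain notes
Facts an outsider wouldn’t guess, e.g. “an ‘account’ here means a
billing account, not a user account”.
```

Most coding tools read a file like this from the repo root automatically, so whatever you write here is fed to the agent on every task.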

Then you just need to start using the tools. It doesn’t matter if it’s a real feature or a made-up one; this is an exercise in exploration and discovery. It’s even better if it’s a feature you know exactly how you would implement yourself, so you can easily spot the errors.

Have it draft a plan for a simple endpoint/feature using only high-level language, then implement it. Does the plan make sense? If not, tell the AI why in plain English and keep iterating until it does. The plan doesn’t need to be perfect or match exactly how you would implement it yourself; it just needs to make sense in general. Resist the temptation to overcorrect or to go too deep into the technical details.

When it more or less makes sense, tell it to implement the plan and see what it does. First, does it work? No? Is it completely bonkers, or does it make sense? If it works, fine, you are more or less there. If it’s completely outrageous, go back to the planning stage, iterate further, and repeat until you get a “workable” solution.

Any output at this point is valuable: it tells you what your documentation and context are lacking. From here, keep tweaking and trying again until the solution makes sense. Are the tests wrong? Add more information about testing. Is the domain knowledge wrong? Add references to documentation, or even plain-English notes on the domain knowledge required to build things there.

Once you have implemented a full simple solution end-to-end, you are ready to incorporate it into your daily workflow. The next time you need to implement something, do this same process and keep making small corrections to the whole AI setup each time.

Trusting the process

If you did the mental exercise from the last point, you already have a starting workflow you can use to iterate with LLMs for coding. Now the hardest part is to keep iterating in the high-level language I mentioned and to resist the temptation to jump in and fix the code yourself. Your goal is to steer the LLM. If you just fix the code manually, you are not making the process better for next time. Provide enough guidance that the agent can correct itself. Once you have successfully made it do what you want, you can even ask it directly: “What do we need to add to the context so that next time you don’t make the same mistakes?” Most of the time it will produce something you can add straight to the context files.

In the beginning it will feel stupidly slow and cumbersome, and it is. But bear with me; it’s part of the learning process, and if you skip it, it will never get better. After a couple of iterations the thing starts picking up speed, and then you only need small tweaks here and there. The agent gets to know the project better, and you learn how to work better with the agent. You start developing an instinct for how to prompt it efficiently and for where it tends to have trouble.

You don’t need to be a fundamentalist about this. From time to time, things will just not work and you will have to jump in and fix the code yourself. The important thing is that this should be the exception, not the rule. Your first instinct should be “What do I need to provide the LLM so it can make this work?” not “This sucks, I will just do it myself.”

Be open minded

These tools are improving surprisingly fast. Just because something didn’t work today doesn’t mean it won’t work in a month or two, even if you don’t change a single thing yourself. Don’t fall into the trap of pre-judging what can and can’t be done with AI. When in doubt, give it a try: make the AI attempt it and see what happens. Even if it fails, the AI may suggest things that unblock your own thinking about the problem.

I have faced this myself multiple times. Three months ago I was trying to get an agent to help me with a medium-sized refactor of a legacy Python app: poorly documented, old, with terrible business code mixed with frontend code all over the place. The goal was ambitious: move all the business logic into a service layer, so we could then get rid of the frontend library.

So I started as always: I provided the LLM with some quite decent context I had reverse-engineered from the code myself. We made a plan and started the refactor. It sounded great, but nothing worked. The LLM worked on the plan for 20 minutes and got stuck in the middle of the refactor, with duplicated snippets of code all over the place and nothing running. We tried again: rolled back the repo, defined additional structure and procedure, made a new plan. It happened again. Then again. Four hours and many tokens later, there was no working app; we were still stuck at step 1. The refactor was not important, so I shelved it.

Some months passed, and a couple of weeks ago, bored, I remembered it and decided to give it another try. I had already wasted a lot of time on it, so I had zero expectations and wasn’t going to spend 4 hours again. I jumped into the code, paraphrased what I wanted to do, and told the AI to create a plan from it. I skimmed through the plan and told it to implement it. Ten minutes later it was finished, and the app was, surprisingly, working.

It wasn’t perfect; we still had to do some further fixing and tweaking, but it was there. Almost the same setup, but this time it worked almost flawlessly the first time. 30 minutes later the whole refactor was ready and working.

And these kinds of things keep happening to me more and more. So nowadays I try never to think “there is no way the AI can solve this”; I just give it a quick try and see what happens. You might be surprised by the result.

Think in workflows, not in tools

There is no trick here; we are still super early. We are in the “hacking” phase, and no one has any idea what they are doing. There are some 200 coding tools available, with new ones coming out every week: Ralph loops, multi-agent orchestration, memory layers, MCPs, UI tools, CLI tools, voice tools, and so on.

It makes no sense to lock yourself into any ecosystem this early. This month Claude Code is amazing; next month it can suck. This month the Chinese models are great; next month Gemini releases a model that destroys everything built so far. This is the natural state of any frontier development. The important thing is to identify the general trends the industry is converging toward and to follow them in the most “provider-agnostic” way possible.

Don’t fall in love with your current AI setup, because most likely in 6 months it will be completely obsolete. What matters are the workflows you build and the knowledge you gather about your own software development process through the kind of discovery exercise described above.

Consider open standards first. AGENTS.md is a good candidate: it is now quite “mature” and accepted by most coding tools out there, and it captures the essence of the patterns you should be looking for. The idea behind it was: “We need some kind of semi-structured way of storing facts about our codebase that agents can use to work more effectively on it.” In the beginning each tool did this in its own way: .cursorrules, CLAUDE.md, etc. But once you correctly identify the idea, the implementation becomes irrelevant and you can easily migrate between tools.

Another good example is planning workflows. Not so long ago, not everyone was convinced that making a plan before coding was a good idea. But many tools did it, each in its own way: some went the UI route, like Kiro; some took a more barebones approach with plain .md files. They were all doing the same thing. Now nearly every tool has some kind of “planning mode”, but again, once you identify the idea behind it, the implementation becomes irrelevant.


Closing thoughts

The hype is big: everyone is selling some tool or solution, promising to change the way you do stuff forever. As with any tech trend, the space quickly gets crowded with noise. In times like these, it’s more important than ever to fall back on the basics and routinely think about problems from first principles. Only you know how your development process works; only you know your strengths and weaknesses, and only you have the right domain knowledge about the business and the technology. No generic coding tool will provide any of that for you.

The stronger your technical and business foundations are, the more leverage you will be able to extract from AI. The more you know what you want to use AI for, the more you will be able to see through the 80% useless noise and buzzwords.
