Beyond the Boilerplate: How to Partner with Your LLM for Deeper Coding Challenges

Introduction


With the rise of “vibe-coding,” it’s easy to forget that LLMs offer much more than just code generation. Their real strength lies in natural language understanding, making them invaluable for brainstorming, debugging, and documenting. In this article, we’ll explore some of the best uses of LLMs beyond generating code.

Pair Designing

When you don’t have a human partner to bounce ideas off for an implementation, an LLM can fill that role effectively. My typical workflow involves preparing one or two designs that I believe could solve my issue, then asking ChatGPT or Claude how they would approach the problem. This process typically yields one of two equally valuable outcomes:

  • The LLM suggests a solution similar to one I devised (Yay! A great sign I’m on the right track.).
  • The LLM proposes a completely different approach I hadn’t considered, which triggers further research.

From there, you can iterate: use the LLM to weigh the pros and cons of each solution, or even consult multiple LLMs for a broader perspective. As with any LLM interaction, the more implementation details and constraints you provide, the more useful and precise the responses will be. For complex problems, it helps to document the details in a markdown file, which can easily be shared with multiple LLMs for comparison.

Tools like Google’s NotebookLM are particularly useful for gathering initial research and organizing information to kick off any design.

Pair Debugging

Debugging often starts with a cryptic error message and a familiar routine: pasting the error into ChatGPT, Googling in hopes of finding a seven-year-old Stack Overflow post with a similar problem, pinging coworkers on Slack, and scouring internal docs. While this workflow is still valid, some specific AI tools can significantly streamline the process.

  • Quick Error Resolution: Tools like PerplexityAI efficiently surface solutions for errors and bugs with minimal effort. Simply paste in raw logs (be mindful of sensitive information!) and provide some context about the error. For common issues, you’ll often get a working solution within minutes.
  • Collaborative Debugging (The most powerful variation): You can treat the LLM as a debugging partner. Share logs, error messages, and your own theories; the LLM can validate your ideas or point out gaps. By iteratively adding information and running tests, you can zero in on the root cause. Once identified, you can use the LLM to help plan and validate your fix. From zero to a working solution, with the help of an LLM at every step.
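Since raw logs tend to leak e-mail addresses, IPs, and credentials, it helps to scrub the obvious secrets before pasting anything into these tools. Here’s a minimal sketch of such a redaction pass; the patterns and replacement tokens are my own illustrative choices, not from any particular tool, so extend them for your own environment:

```python
import re

# Illustrative patterns only -- adapt these to whatever your logs actually leak.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),            # e-mail addresses
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "<IP>"),           # IPv4 addresses
    (re.compile(r"(?i)(api[_-]?key|token|secret)\s*[:=]\s*\S+"), r"\1=<REDACTED>"),
]

def scrub(log_text: str) -> str:
    """Replace common sensitive patterns before sharing logs with an LLM."""
    for pattern, replacement in REDACTIONS:
        log_text = pattern.sub(replacement, log_text)
    return log_text
```

A quick pass like this is no substitute for your company’s data policy, but it catches the most embarrassing copy-paste accidents.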

Creating and Maintaining Documentation

Documentation is both essential and often tedious. LLMs can help make this task more manageable:

  • Updating Internal Docs: LLMs can review your README.md or other documentation to check for outdated content. As always, the more context you provide, the better the results.
  • Generating Commit Messages: Agentic AIs in particular, which can directly inspect all the files you’ve changed in your repo and access internal documentation, can generate excellent commit messages with no effort. Bonus points if you can provide the LLM with a template. The same applies to merge request descriptions.
  • Generating Changelogs and Release Notes: LLMs can draft these based on your commit history or merge requests.
  • Code Documentation: Ask the LLM to generate Javadocs, Python docstrings, or API documentation. While you still need to review for hallucinations or inaccuracies, it’s usually far easier to edit generated docs than to write them from scratch.
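To make the “provide a template” tip for commit messages concrete, here’s a rough sketch of how you might assemble a prompt from a diff before handing it to whatever LLM you use. The template wording and function name are hypothetical, not from any specific tool:

```python
# A hypothetical commit-message template; swap in your team's own conventions.
COMMIT_TEMPLATE = """\
Write a conventional commit message for the change below.
Format: <type>(<scope>): <summary>, followed by a short body.

Diff:
{diff}
"""

def build_commit_prompt(diff: str, max_chars: int = 8000) -> str:
    """Build an LLM prompt from diff text, truncating very large diffs."""
    if len(diff) > max_chars:
        diff = diff[:max_chars] + "\n... (diff truncated)"
    return COMMIT_TEMPLATE.format(diff=diff)
```

In practice you would feed this the output of `git diff --staged` and send the result to your LLM of choice; agentic tools do all of this plumbing for you.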

As always, be mindful of which content you submit to LLM providers, especially when working with internal documentation, which can contain sensitive information.

Quality-of-Life Improvements

  • Code Review: LLMs can also assist in code reviews. You can ask them to analyze a specific function or class, providing context about its purpose and any known issues. This is particularly useful for catching edge cases or potential bugs that you might have overlooked in simpler code. For complex merge requests involving multiple projects and deep domain knowledge, the LLM will generally struggle due to a lack of context. However, it can still be useful for reviewing smaller parts of these changes.
  • Test Generation: LLMs can help you generate unit or integration tests. By providing context about the function or module, you can ask the LLM to create test cases covering various scenarios, including edge cases you might not have considered. As with all LLM-generated code, the output is only as good as the information you provide: for good, useful tests, explain what, how, and why you are testing. Careful review is still necessary, as generated tests can be too generic or irrelevant to your specific use case.
  • Refactoring: If you have a piece of code that works but could be cleaner or more efficient, you can ask the LLM to suggest refactoring options: simplifying complex logic, improving variable names, or breaking large functions into smaller, more manageable pieces. LLMs excel at these kinds of small-context tasks, especially when you provide clear guidelines on what you’re looking for.
  • Learning New Technologies: LLMs are great for getting started with a new technology quickly. You can ask for a learning roadmap tailored to your existing knowledge and experience, and pointing the LLM at the official documentation drastically reduces the chance of hallucinated information. You can also ask it to generate a simple starter project as a hands-on learning experience; this is particularly useful for technologies with a steep learning curve or a lot of setup.
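To make the test-generation point above concrete, suppose you gave the LLM a small function plus a description of its edge cases. The kind of draft you’d then review might look like this; the function and its cases are my own toy example, not output from any particular model:

```python
def normalize_whitespace(text: str) -> str:
    """Collapse runs of whitespace into single spaces and trim the ends."""
    return " ".join(text.split())

# The kind of tests an LLM might draft from an edge-case description.
# Review them: generated tests are often too generic or miss the real risks.
def test_collapses_internal_runs():
    assert normalize_whitespace("a   b\t\tc") == "a b c"

def test_empty_and_whitespace_only():
    assert normalize_whitespace("") == ""
    assert normalize_whitespace("   \n\t ") == ""

def test_preserves_already_clean_input():
    assert normalize_whitespace("already clean") == "already clean"
```

Notice the tests only cover the cases that were described; anything you didn’t mention (Unicode spaces, very long inputs) the LLM is unlikely to think of for you.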
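And for the refactoring case above, a typical small-context request is “simplify this and improve the names.” A before/after sketch, where both versions are illustrative rather than taken from a real session:

```python
# Before: works, but nested and cryptically named.
def f(xs):
    r = []
    for x in xs:
        if x is not None:
            if x > 0:
                r.append(x * 2)
    return r

# After: the kind of rewrite an LLM might suggest, given clear guidelines.
def double_positive_values(values):
    """Return each positive, non-None value doubled, preserving order."""
    return [v * 2 for v in values if v is not None and v > 0]
```

Because the behavior is unchanged, you can (and should) keep both versions around long enough to check them against each other with your existing tests.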

Conclusion

LLMs are evolving from mere code generators to true partners in the software development process. By leveraging their strengths in natural language, brainstorming, debugging, and documentation, you can tackle deeper coding challenges and improve your daily workflow. If you provide detailed enough context and validate their outputs, you can use these tools as collaborative partners, not just as answer engines.
Like all my content, this is more or less a semi-structured brain-dump of my way of working.

What do you think? Does this ring a bell? Is this already similar to your current workflows?
