#34 How I use LLMs as an engineer
👋 Welcome back to the PE newsletter!
A month ago, I shared on LinkedIn that I took a mid-year break to make more room for new experiences and reading. Writing a weekly blog alongside a full-time job is also very hard to sustain. But I’m planning to get back to writing at a more relaxed pace (once every two weeks) and to share my journey using AI. Today, I’ll share how I use LLMs as an engineer.
Before that, a quick word from Scalekit, who is kindly sponsoring this email.
Scalekit
Today’s fast-growing AI apps are collectively thinking about growth, activation, and expansion. But behind that frictionless product adoption lie some of the most overlooked systems in product design: authentication & authorization. To understand how this new generation approaches these choices, Scalekit did a manual teardown of 50+ modern AI apps, studying how users sign up, join orgs, switch contexts, and scale into enterprise accounts.
Intro
In early 2025, many engineers were skeptical about using AI. Some still are, while others fall somewhere in between. However, leveraging AI in software engineering has become a key skill companies look for when hiring engineers, and those who invest in using AI are now being described as AI-native software engineers. We’ll hear the term “AI-native” a lot in 2026.
Addy Osmani, engineering leader at Google, described it in his newsletter as:
An AI-native software engineer is one who deeply integrates AI into their daily workflow, treating it as a partner to amplify their abilities.
I was one of the skeptical engineers, but I became invested in AI adoption for engineering teams and started researching ways to optimise my workflow. So far, I’ve worked with tools like GitHub Copilot, CodeRabbit, Cursor, and Windsurf. I currently use Windsurf as my main editor with state-of-the-art models like Claude Opus 4.5, GPT-5, and SWE-1.5 (default).
I’ve yet to play around with Claude Code, but I’ve heard engineers at Incident.io love using it and that it weaves nicely into their codebase. They wrote a post about it, which I highly recommend reading: Shipping faster with Claude Code.
So, here’s how I use LLMs at work.
1. Coding (logic replication) and writing tests
I use LLMs for coding, testing, scripting, and documentation. For coding, I lean more on logic replication than on innovation. Replication means things like copying a design pattern from another module, extending test cases, configuring a new Terraform module, and so on. It’s local context I can quickly pass to the LLM so it types faster while I review and accept the changes. I also spend time writing more specific prompts to get better output, just as if I were writing a JIRA ticket for an engineer.
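For example, a replication prompt might look something like this (a made-up illustration; the module and file names are hypothetical):

```
Add retry logic to PaymentsClient in payments/client.py, following the
same exponential backoff pattern used by OrdersClient in orders/client.py.
Reuse the existing RetryConfig dataclass and extend the test cases in
tests/test_payments_client.py to cover the new behaviour.
```

The more the prompt reads like a well-scoped ticket, the less back-and-forth it takes to get a mergeable diff.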
For innovation, I prefer to write the critical parts myself and delegate the repetitive parts to LLMs. I want the core logic to be crystal clear in my head as I write it. It’s a business asset, and I must fully understand it, edge cases included, before shipping.
That said, I still enjoy the tab-tab experience of AI-generated code. It significantly speeds up my development process, and it has certainly made me a better code reviewer.
2. Learning a new programming language
I use LLMs both as a teacher and a study buddy. My common use cases include learning programming languages and onboarding to new repositories.
At my previous company, I had to pick up Rust to write a high-performance simulation library. LLMs were helpful for translating ideas from idiomatic Python to Rust. Overall, they accelerated my learning, especially when figuring out idiomatic patterns like managing memory and handling errors.
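A typical learning prompt from that period might look something like this (a hypothetical reconstruction):

```
Here's a Python function that parses a config file and raises ValueError
on malformed input. Rewrite it in idiomatic Rust using Result and the
? operator instead of exceptions, and explain any ownership or borrowing
decisions you make.
```

Asking for the reasoning, not just the translation, is what turns the LLM from an autocomplete into a teacher.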
Now I always use an LLM when working with new codebases. It’s my AI onboarding buddy. Sometimes I need to jump into an infrastructure monorepo and ask questions like “What are the dependencies of X?”, or go through another team’s repo and ask “Explain this service to me”.
Being able to ask an infinite number of (stupid) questions is wonderful. Of course, hallucinations or missing context can produce wrong answers, but that’s a small risk I’m willing to take. Also, with experience, you develop an engineering sense for spotting incorrect results quickly.
3. Debugging (sometimes)
POV: I find debugging with LLMs harder than debugging solo.
In my experience, LLMs sometimes give you a slightly wrong fix that sends you in a different direction, creating a cascade of misleading answers and fixes. A lot depends on the context you provide.
At the moment, I prefer debugging in focus mode, but if I’m desperate, I’ll try asking an LLM, though without high expectations. However, Anthropic mentioned that their engineers, using the latest Opus 4.5 model, solved a critical bug in two minutes that Sonnet 4.5 couldn’t fix. So it may just be a matter of time before I can rely on LLMs entirely for debugging.
Generally, when working with large log files, JSON blobs, and the like, I will definitely use an LLM for interpretation, since it can process large amounts of information much faster than the human brain.
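The practical constraint is the context window, so a big log usually needs pre-filtering before it gets pasted in. Here’s a minimal sketch of the kind of throwaway script this might involve (the log path and filter pattern are hypothetical):

```python
import re

# Hypothetical filter: keep only the lines that matter before handing
# the log to an LLM, so a multi-megabyte file fits in the context window.
PATTERN = re.compile(r"ERROR|WARN|Traceback")

def extract_relevant_lines(path: str, context: int = 2) -> str:
    """Return matching lines plus a little surrounding context."""
    with open(path) as f:
        lines = f.readlines()
    keep = set()
    for i, line in enumerate(lines):
        if PATTERN.search(line):
            keep.update(range(max(0, i - context), min(len(lines), i + context + 1)))
    return "".join(lines[i] for i in sorted(keep))

if __name__ == "__main__":
    snippet = extract_relevant_lines("service.log")  # hypothetical file
    print(f"Summarise the failures in this log:\n\n{snippet}")
```

The script only keeps the prompt small; the LLM still does the interpretation.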
4. Proofreading engineering docs
I write a lot of documents as part of my tech lead work. Before sharing design proposals or wikis across teams, I use LLMs to proofread them: covering edge cases, adding missing links, and fixing typos.
But I don’t use LLMs for drafting new documents. As an engineer, I’m responsible for the technical decisions and proposals I produce. That means taking the time to think through solutions that align with the product spec, and carrying the project in my head end-to-end (similar to the first point about coding).
Execution can be delegated to LLMs; strategy cannot.
For me, writing documents is a strategic exercise that I don’t delegate to LLMs.
5. Prototyping ideas (faster)
Before LLMs, I used to jot down ideas on paper and allocate some focus time to code up a rough prototype.
With LLMs, I can now branch off a repo, vibe-code a solution, test it, and quickly decide whether it’s worth sharing.
This cycle of ideation → prototyping → testing → validation can be executed much faster, and I can iterate multiple times a day.
On Friday mornings, I like to vibe-code ideas I’ve had throughout the week, experimenting with throwaway solutions in local branches to see if one or two are worth sharing with the team.
That said, I am skeptical about engineers vibe-coding in production repos. To me, it adds a risky layer of abstraction (English → Machine Code) instead of (High-Level Language → Machine Code). Experienced engineers are careful not to blindly accept AI-generated changes that might end up causing an unwanted incident in production. This is especially true for large organisations and enterprises.
But who knows? With the performance improvements I’m seeing, I might change my mind soon. For now, just throwaway prototypes.
Wrap up
I wrote this post knowing there’s still more to leverage with LLMs, but I’m glad I’ve gone from a skeptical engineer to a full-on AI-native one (still a work in progress). To recap, here’s how I use LLMs:
Coding (logic replication), testing, and documentation
Learning new language(s)
Debugging (sometimes)
Proofreading engineering documents
Prototyping ideas (faster)
I don’t use them for:
Coding (non-replication)
Writing documents
Diagramming
How do you use LLMs at work? Let me know in the comments below.