How I Use LLMs

Earth Walker, March 2026


In the last few months, I've started using large language models (LLMs) to help with coding projects and with solving computer-related problems. I want to take a moment to reflect on my use of this technology and how I'd like to engage with it going forward.

What is an LLM?

An LLM is a form of generative artificial intelligence (genAI) designed to generate text, both in human languages and in programming languages. LLMs can be used as a productivity tool, to brainstorm ideas, or just to mess around. They draw on a wide range of information, but they often make mistakes in their output.

LLMs are currently fueling a technological gold rush, which is probably an economic bubble. Startups such as OpenAI and Anthropic, as well as big tech companies such as Google, have brought their own LLM products to market. In the last few years, these products have radically changed the way humans interact with computers, with mixed results.

LLM Tools I’m Using

Most of my use of LLMs has taken place on chatgpt.com and via the chat tool in VSCode (a popular code editor). I've mainly used ChatGPT and Claude models.

I recently set up a local LLM system with Ollama and Open WebUI to run open source LLMs on my desktop. I'm mostly running Ministral-3:14b on my mid-range gaming GPU. I've tried it out in a few scenarios and gotten useful output for some tasks. In particular, I've had good results with code reviews, general discussion of computer science topics, and generating alt text for images. I haven't had much opportunity to test it on troubleshooting and debugging, which, as I'll discuss later, is my most valued use case for LLMs.
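One nice thing about this setup is that the local server can also be queried programmatically. Here's a minimal sketch using Ollama's default local endpoint (port 11434) and its generate API; the model name is whatever `ollama list` shows on your machine, so treat the one below as a placeholder:

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot text generation
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    # stream=False asks Ollama to return a single complete JSON response
    # instead of a stream of partial chunks
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the reply text."""
    payload = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running Ollama server with the model pulled):
# print(ask("ministral-3:14b", "Summarize what a mutex is in one sentence."))
```

Nothing here is specific to one model; swapping the model string is all it takes to compare outputs from different local LLMs.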

I’m not using automation software like Claude Code or Openclaw, because I don’t trust LLMs to take actions on my behalf, other than occasionally editing a specific file.

What I’m Doing With LLMs

My use of LLMs has been almost entirely confined to the domain of computer science. I've tried using LLMs for creative or fun applications, and I found the results weren't worth using.

Within the realm of computer science, there are a few applications I've found LLMs to be really useful for, namely software development, DevOps and system administration. In plain terms: creating apps and setting up the online infrastructure that the apps run on.

My Workflow

There are lots of different ways to use and apply LLMs. I’m developing my own workflow which takes advantage of the strengths of LLMs while avoiding some of the pitfalls I’m concerned about.

First of all, I generally try to avoid using LLMs whenever possible, for a few reasons. I want to use my own mind to solve problems as much as I can, because doing so makes me a smarter, more capable person. Using an LLM is literally offloading cognitive work to a machine, and doing that too much means lost opportunities to learn and grow in whatever discipline you're applying the LLM to. Using cloud-based LLMs also sends your data off to big tech, which is a major privacy concern. I try to censor some of my personal details when using these tools, and I don't have accounts with any cloud LLM providers, but it's still a major issue for me; local LLMs are preferable for this reason. Cloud LLMs also consume a lot of electricity and water, which is driving up electricity prices across the US, so I keep that in mind as well. Finally, LLMs are often imprecise, so I prefer to depend on more trustworthy sources of information first.

Before going to an LLM, I generally go to official documentation, guides and tutorials, and forums to answer questions I have. This often obviates the need for using an LLM.

With all of that said, there are some situations where LLMs are really useful for me. These generally come down to troubleshooting or debugging situations where I'm having trouble finding information about the problem, as well as cases where I'm forced to use a technology I'm not all that interested in learning for some contained aspect of a project.

For example, if my app is producing an error which I don’t understand, and running a web search for the error does not produce any useful results, I will plug the error and the code which I think is relevant into an LLM query, and oftentimes it will quickly tell me what the problem is and how to fix it. I can then use this information to learn about the technology I’m working with and get back on track.

As another example, the process of getting my app online may involve fiddling with some technology that I'm not all that interested in learning, at least not in the middle of my project. In that case, I can explain to the LLM what I want to do, and it will help me get that part of the project working so I can get on with the work I actually want to be doing. This may be an indication that I should spend some time learning the unfamiliar technology later, but at least I can put it off until I feel like diving into it.

I avoid directly using code generated by an LLM in my project. I prefer to write code myself, again because I want to learn and become more capable. Usually I use the LLM in a “chat” context instead of allowing it to directly edit code. I will look at the code it generates and try to understand what it’s doing. If I don’t fully understand it yet, I may ask the LLM how part of the code works, or look it up with a web search. Once I understand what the code does, I’ll use that knowledge to write my own implementation.

Occasionally I will use generated code in a project, mostly to automate the conversion of one language to another (for example, converting a CSV table to SQL queries), or to assist with implementing technologies that I’m not currently interested in learning deeply. I always read through this code and make sure nothing seems off. In these cases, using a tool built into my code editor is really convenient, so I don’t have to copy and paste output into my files.
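The CSV-to-SQL case is a good example of generated code that's small enough to verify by reading it. A minimal sketch of that kind of conversion (the `people` table and its columns are made up for illustration, and the quoting here is deliberately naive; real code should prefer parameterized queries):

```python
import csv
import io

def csv_to_inserts(csv_text: str, table: str) -> list[str]:
    """Convert CSV text into one SQL INSERT statement per row."""
    reader = csv.DictReader(io.StringIO(csv_text))
    statements = []
    for row in reader:
        columns = ", ".join(row.keys())
        # Escape single quotes by doubling them, per standard SQL string literals
        values = ", ".join("'" + v.replace("'", "''") + "'" for v in row.values())
        statements.append(f"INSERT INTO {table} ({columns}) VALUES ({values});")
    return statements

sample = "name,city\nAda,London\nO'Brien,Cork\n"
for stmt in csv_to_inserts(sample, "people"):
    print(stmt)
# INSERT INTO people (name, city) VALUES ('Ada', 'London');
# INSERT INTO people (name, city) VALUES ('O''Brien', 'Cork');
```

Whether an LLM or a script like this does the conversion, the point is the same: the output is mechanical and easy to spot-check, which is what makes it a low-risk place to accept generated code.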

The only time I have really vibe coded part of an app was with my Simon project. I built a fully functional version of the app without generating code, but the audio was slow and laggy. I read about the Web Audio API and decided it made sense to use it in the app, but it was going to be a big lift to implement, and I was ready to be done with that app and move on to another project. So I used ChatGPT through the VSCode Copilot extension to implement the app's audio features with the Web Audio API, and it worked. The code looked fine to me, so I shipped it. It did feel a bit off to use AI to build part of the app when I could have treated it as a learning experience, so I'll try to avoid situations like this in the future, but at this point I don't feel that bad about it either. I feel it was a reasonable use of code generation.

Why I Use LLMs

There are a few reasons why I currently use LLMs, and I'll probably continue to do so in some capacity going forward. LLMs can smooth over some of the roughest parts of software development, including debugging and troubleshooting obscure problems. That lets me spend more time on the creative part of coding, which improves the overall experience and lets me finish projects faster. Another big reason is that I feel a lot of pressure to familiarize myself with this technology in order to get a job as a web developer. A lot of tech employers currently seem bullish on how LLMs can improve productivity. Whether or not that's true, their perceptions matter when they review my job application and during interviews. I learn by making projects, so I'm learning how to use LLMs by using them to help me make projects.

I’m open to changing the way I use LLMs. For example, if I find myself using an LLM to do the same kind of task a lot, I might take that as an indication that I need to learn how to do the task myself or learn how to automate it with simpler, less resource-intensive tools. I’m also open to using LLMs more if I can get useful output from a self-hosted, open source LLM and it helps me build bigger, more complex projects more quickly.

Perception of LLM Use

At this time, different people have radically different ideas about the value and ethics of using LLMs. Some people are absolutely opposed to their use in any context. Some people use them for some tasks and don’t think much about it. Some people are very enthusiastic about the technology and are pushing the limits of what can be done with it.

I would place myself more on the anti-LLM side of the spectrum, but I'm not opposed to the technology itself; rather, I have a lot of problems with the ways it is being designed and delivered under capitalism. I think there could be much better ways to use the technology than the ones we currently have. I believe that the overall hype currently surrounding LLMs and genAI in general is extremely overinflated, but not entirely unfounded. LLMs can be very useful, as I have recently witnessed firsthand, and historically speaking, significant new forms of automation often disrupt culture and economies. I don't see this as entirely bad, but I do think we need to be very skeptical and careful about how we create and apply this technology to our work and our lives. I don't think we should just let the big tech companies and the fascist-leaning US government lead the way on this issue. We need to find better ways forward for ourselves, our communities and our workplaces.

How I Think LLMs Could Improve

I am looking forward to a future where smaller, more efficient LLMs that can run well on local hardware are used for specific tasks. I don’t want to use cloud LLMs that collect user data for profit, and I don’t think it makes sense to use one model or technology for every task.

As with all technologies, I think that more ethical, privacy-respecting and open source solutions will slowly develop behind the corporate solutions which have the most resources. I’ll be keeping an eye on that space in the future.

I also think there should be regulatory guardrails around LLM products to avoid some of the worst mental health outcomes, as well as guarding against environmental destruction.

Finally, I do think there's an element of personal responsibility in all this: people should take a step back and think about the consequences of using new tools like this rather than uncritically applying them to everything in their lives and businesses.