Something’s Changed
In the past couple of months, the implications of advances in AI (specifically the power of Claude Code as an AI model + harness combination) have completely up-ended my job description and my career, and this post is about what I want you to know about it. (Assuming somehow you don’t know already, though I’m guessing you’ve probably already experienced the same thing, and already know.)
Frontmatter
Yeah, I know this is a climbing blog, and I know this is off-topic, but it’s a thing I felt the need to write about and post somewhere publicly, and this happened to be the most convenient avenue I had available.
Also: my day-job is software engineering, and I am primarily thinking of other software engineers as the audience here. (And this article is specific to this point in time; it’ll probably all be completely different a year from now.) Not that you have to be a software engineer to read any of this (there are probably a few interesting nuggets in here no matter what you do to pay the bills); I just wanted to be upfront so you can decide if this is relevant to you.
And finally: this article is probably a bit behind the curve, because I’ll bet something like 60% of software engineers have already been exposed to this and figured it all out, but I’m writing it anyway, just in case it helps anyone in the other 40%. If you (somehow) haven’t jumped on board (or been run over by) this train yet: you need to know. That’s why I’m taking the time to write this.
- Something’s Changed
- Frontmatter
- I’ve been vaguely skeptical / uninvested in AI. Until Now.
- About that hype…
- What Claude Code is, and some structural AI Basics & Terminology
- How to Get Good Results out of Claude Code
- Helping AI simulate Long-Term Memory
- I have a new job-description now
- What does this all imply for the world ahead?
- My Past-Life in Tech
- Footnote
Fundamentally, AI code assistants act as a kind of amplifier. If you’re already doing the right things, AI will amplify the impact of those. If you’re already doing the wrong things, AI will help you to dig a deeper hole faster. Tools amplify capability, they don’t replace it.
– Dave Farley
I’ve been vaguely skeptical / uninvested in AI. Until Now.
I’ve been programming and writing code for a long time. I rarely jump on hype-trains. In my own approach to engineering, the phrase “Choose boring technology” really, really resonates with me. (I learned the phrase from an excellent blog by Rod Hilton that used to be called “No Machete Juggling.”) I know this sounds quaint, but I really do care about the user of the software I’m writing; in fact that’s often all I care about. I want to build a reliable system for that user, one that gets their job done as invisibly as possible, that stays working as long as possible, and that lessens the likelihood & frequency of the inevitable future headaches that all software projects have. I didn’t give a lick about many of the past hype-cycles in Tech, like “Crypto/Blockchain” or “Big Data” or “IoT” or “Social Networking”, unless a clear user-need specifically led to one of them (which, on the projects I landed on, almost none actually did). I’d rather do the software-engineering equivalent of building Volvos than building Lamborghinis. I always valued producing something I was sure would really be useful, not just “shiny.”
So, as LLMs and generative AIs have been on the rise over the past few years, I haven’t had much of an opinion on them. My reaction has been, “Okay, sure, that’s interesting, but what can it do that’s useful?” I am certainly horribly averse to the idea of generating slop; I definitely didn’t want to do that. So, could I find a use for them that was actually useful?
The “Stack Overflow Replacement” era
Somewhere around 2024/2025, ChatGPT through its web-interface did prove to be lightly helpful in a way that was essentially a “Stack Overflow replacement” in my workflow. If we’re being honest, so much of the job of writing code over the last 20 years was a heavy dose of running into a problem, then going out on the internet to search for someone else’s example of how they had fixed something like it, then some degree of copy-and-pasting it, with quite a bit of adapting or re-writing that example code to fit the codebase that needed it. The skills were the searching, and the adapting. With ChatGPT’s browser-based chat window, I could ask it: “Show me an example of code in X language that accomplishes Y with such-and-such customizations”, and often get a chunk of example code that I could copy in that did the thing, with much less time on my part searching for that example, and often less time adapting it too, because the customization I wanted was more likely to be already generated in. Did this make me faster? Yeah, a tiny bit, and it was a nice tool, but it was far from a “revolutionary” impact on my work.
The “Primitive-Paired-Programming-Partner” Era
By late 2025, and into the beginning of 2026, in-IDE integrations existed, such that there was an AI-chat window built into the IDE you’re using to write code (Visual Studio, Xcode, etc.) These were indeed a notable step up from the separate web-browser chat interface, but they sometimes generated code I hated, so I had to use them on small chunks that I very carefully explained. The experience began to feel like a somewhat-distant echo of times I had done paired-programming with a real human, both sitting together at the same keyboard & computer. I could talk to the chat window about the problem I was trying to solve. The interaction was still nowhere near as rich as it would be with a real human, but the AI could do some things for me. One type of interaction would be that I’d stub out the interesting & decision-heavy parts of the code: the interface of a class, or the interface to a complex function, and then I’d ask the AI assistant to fill in the contents of that chunk of code, essentially handing it things that were like well-specified basic programming-interview whiteboard problems, and it would do it. If I asked it to do anything too large or too complicated, it would go off the rails and write some ugly code (like a triple-level nest of anonymous-function closures within each other; horrible for readability), so I had to really carefully modulate the size of the requests I made of it. Another type of interaction that was pretty wonderful was the “What does this error mean?” interaction, where often the AI assistant would be able to spot a thing I wasn’t aware of, or better-interpret some completely obtuse error message (like Xcode giving me the incredibly vague compiler error “Type of expression is ambiguous without a type annotation”), really helping me move past errors that I would otherwise have had to bang my head against a wall on for a while. Did this make me faster? Yeah, in an average working day it would typically save me an hour’s worth of work, maybe two hours if I was lucky. A real productivity improvement, but still not a threat to my job in any way.
The “It’s basically a junior-dev working for me” era
In spring of 2026, something shifted. I wasn’t paying attention to the “bleeding edge” of what was happening with AI, so I didn’t really get the memo until sometime late in March 2026, but when I finally did install and start using Claude Code: Whoa. Things are different now. The results I’m getting from Claude Code in the terminal blew my mind. It’s shockingly good. It’s like having a full junior dev working for me, or heck, even a team of junior devs. (But as with junior devs, managing it & steering it is still a lot of work, and we’ll come back to that in a minute…) I don’t even need the IDE anymore.
Launching Claude Code turns the terminal into a chat window, where I can chat with it like I might chat with a remote coworker over Slack: describe an engineering task that would previously have been a ticket or User Story, discuss a plan for how it’s going to work on it, then let it work for a while, and after a few minutes Claude comes back to me with the work completed, even potentially opening a PR (Pull Request) for me to review, if I want that. It’s no longer the primitive-paired-programming-partner that I had to babysit and spoon-feed during each atomic code change; it really is like having an employee. A very junior one, that does need a lot of onboarding and a lot of training, but it is something I can delegate whole work-items and even small projects to, let it work for a bit (typically 5 to 20 minutes for Claude Code to do something a junior dev would typically do in one day to a week), and then either approve its work, or give it feedback asking it to shift direction. Did this make me faster? Yes indeed, in a huge way that will be, and already has been, enormously impactful; but there’s also a lot we need to unpack here so we don’t get carried away in the hype, which I’ll do in the sections ahead…
At this moment in time, I am ready to use the term “intelligent” for what AI is now, though in a limited sort of way. I mean intelligent in the way I’d say “Oh, that was a really intelligent book I read” or “That was a really intelligent article I read”. The AI tool is still an inanimate object; under the hood it is just a very complicated math calculation that puts words together. But the resulting words it is putting together now are surprisingly well-thought-out, and often contain good ideas beyond what I thought I had originally asked for in the prompt. I can’t deny at this point that it’s often generating outputs that I read, and would describe as, “intelligently written.”
(And just to be specific, it is interactions with Claude Opus 4.something that have made me feel that way. Not all AI models live up to the same standard, and I am sure other models exist that do just as well as or better than Claude Opus; I’m just saying I happened to be using that specific model when my opinion shifted.)
About that hype…
I don’t have to tell you that there is a mind-boggling amount of hype about AI out there right now. My experience has shown me that both of the following are true: There certainly is a lot of empty hype (vapor-ware, wish-listing, jumping-to-conclusions, prognosticating about a future that, to be fair, really could go in a lot of wildly different directions, though obviously no one really knows which direction it ultimately will go, etc.) It is also true that there is a lot of warranted hype, hype there is indeed good reason for. A lot of the general noise exists because there is something real & substantial in what is happening with AI. The trick is figuring out for yourself which is which. To that end, I think the answer is: you need to use it yourself, and connect with a ground-truth of what you actually see it doing. And don’t stop at that; examine hype through an appropriate lens: sometimes ignore it (you have to, there’s way too much to engage with all of it), but also sometimes get curious about it, and see if it leads you to something new that you hadn’t yet tried yourself.
What Claude Code is, and some structural AI Basics & Terminology
These are some super-basics that probably nearly-all of you reading this already know, but just in case, let’s lay out the basics. (And in some cases, I’m going to oversimplify, because I think that’ll land better.)
Claude Code is an “agentic command-line tool”. To break that down:
- Anthropic –> This is the company that has-built and is-continuing-to-build Claude.
- Claude –> The word Claude here is effectively a brand-name, a generalized catch-all that could be referring to any number of more-specific products that Anthropic makes.
- Model –> In the context of “Large Language Model”, or “LLM”, often just referred to as “the model” since that’s easier to say in casual conversation. The model is the text-generating engine at the heart of whatever AI tool you’re using. It is, essentially, a gigantic math equation that takes inputs and produces outputs. The inputs are some amount of text, and the outputs are more text, based on the inputs. It is just a math calculation, but one that feels kind of like magic, because of how “intelligent” the outputted text appears to be. The model is just a collection of numbers and formulas, but it is a mindblowingly large set of numbers. When you think of “math equation”, you might think of simple algebra, like “y = m*x + b”, an equation that describes a simple straight line and involves effectively just 4 numbers. LLMs are a math equation too, but one that involves billions and billions of numbers. There are two distinct phases of life to be aware of for those “billions and billions of numbers”: the training phase, which is the process by which a model is created in the first place, how those numbers get their numeric values, the process of coming up with the math equation that effectively is the model; and then the execution or “inference” phase, which takes all those now-defined numbers and uses them to turn inputs into outputs. Oh, and since “billions and billions of numbers” is too long to keep saying, and just “numbers” is a little too generic and ambiguous, the official term for those numbers is “weights”: a model’s weights are the billions and billions of numbers that make it up. If you want to get way, way more geeky on the math, check out the YouTube channel “3Blue1Brown”; that guy makes some absolutely incredible explainers.
- “Opus”, “Sonnet”, “Haiku” etc. –> These are names of specific models that Anthropic has already trained and makes available for users to use (the execution or “inference” phase, of putting inputs in and getting outputs out.) Some are larger models, where larger tends to mean both more capable and more expensive to run, like “Opus”. And some are smaller models, where smaller tends to mean not-as-top-of-the-line capabilities, but also not-as-top-of-the-line cost to run.
- Sidenote on what “expensive to run” is kind of referring to: Around here, I find myself asking: “What does it mean that a model is expensive to run?” Part of the story is the upfront cost of training: it takes more work and more computational time to train, and therefore create the defined weights for, larger models; that’s a sort-of-one-time cost the company creating the model has already paid, and it’s reasonable to think of that cost as amortized across all later uses of that model. Another part of the story is the execution, or inference, where the now-created model is run. The hardware requirements of running an LLM are a little staggering. Anthropic hasn’t released numbers, but some back-of-the-napkin estimation is that Sonnet, as a computer program, might require somewhere in the ballpark of 200GB of RAM to run, and Opus might require somewhere in the ballpark of 500GB. That’s why we, as end-users, don’t get an installable program we can run on our home computer, but rather have to make requests to that company’s “in the cloud” remote servers, which are massive physical datacenters with some really serious build-outs of computational hardware in them, and why questions of even just the raw electricity draw are non-trivial.
- Tokens –> I know this is a slight oversimplification, but it’s easiest to just think of “tokens” as a synonym for “words”. E.g. if you write a 200-word input text and feed it into a text-generating AI model, it sees that input as 200 tokens. (Okay, more technically, tokens are sub-pieces of words, so 200 words might actually be ~260 tokens, but that’s not an important distinction if you’re just trying to get a general understanding; just assume tokens and words are virtually the same thing, and let your brain focus on some other piece of more-important complexity.)
- Context Window –> This refers to the maximum size of input a given model can handle; how many tokens of input it can accept and then do a calculation (text-generation) on. For example, “Claude Sonnet” has a 200,000-token context window, and “Claude Opus” has a 1-million-token context window. LLMs are effectively just a massive math formula, and that formula only has so many spaces for input variables. Sure, 200,000 tokens of input, or 1 million tokens of input, are indeed massive spaces for input, and on your first message to an AI in a chat conversation you’re not going to be anywhere near that limit. But if you are many, many turns into a conversation, or trying to use mechanisms to simulate the AI having something like a “memory” of a previous conversation, the context window does eventually become a real limit.
- Sidenote on simulating something like “memory” in AI chat conversations: The core model, the core LLM, is stateless; it does not remember anything from one run to another. The core model is just a massive math equation. (Not to be confused with deterministic, because they’re not quite deterministic; there is a fair bit of randomness thrown in, so running again with the same input does not always yield the same outputs.) What I’m getting at here is that the outputs of ‘Run #2’ do not and cannot take into consideration any inputs of an earlier run, ‘Run #1’, unless all of those inputs from ‘Run #1’ are included as additional input automatically concatenated with (i.e. “Scotch-taped together” with) the new input of ‘Run #2’. From a user-experience point of view while having a conversation with these models: under the hood, the core model only knows the inputs it was given during the exact turn it is generating text for. To make the model appear to ‘remember’ the back-and-forth that’s happened in a conversation where both the user and model have taken multiple turns, the entire conversation history is used as one big concatenated-together input on every turn. If you start with a 200-token initial prompt, and the model replies with a 300-token response, then you ask a 100-token follow-up question, then the model’s second turn generating a reply is a stateless calculation based on one big 600-token copy of the entire conversation so far. (There’s a small code sketch just after this list that makes this concrete.) It’s not “remembering” past parts of the conversation the way a normal human brain has memory; rather it’s reprocessing each time, the way a patient with severe amnesia who is somehow capable of reading very quickly might: on each one of its turns, it’s starting from a completely unaware blank slate to answer the question “Given this entire conversation history, as if you’re reading it all for the first time in your life, what’s a sensible next response to generate to contribute to the conversation at this point?”
- “Short-term memory” vs “long-term memory” –> Okay, the bulletpoint above describes how the entirely-stateless core LLM can be re-given the entire conversation history of a chat session on every one of its turns, to make it appear to have “short-term memory” of what’s been discussed earlier within that same conversation. A software wrapper that does that creates a better user experience for you, the human chatting with it, than an LLM’s bare under-the-hood behavior, which really is entirely memoryless. But the next obvious user frustration is: Man, it feels like you are talking to something human-like, and if you were talking to a real & normal human, they’d remember bits and pieces of other conversations you have had together on past days; that real human would have more context to draw on than just whatever conversation you are in right now. It feels frustrating to a user that an AI model cannot “remember” things from other conversations. How do you solve that? Simply: build a mechanism that saves plain-text files containing summarized important details from Conversation A, so that sometime later, when a totally-new Conversation B starts up, that plain-text notes file from Conversation A can be concatenated in at the beginning as an extra under-the-hood input given to the core LLM. In Conversation B, the stateless under-the-hood core LLM that has no inherent memory is asked to generate a response based on the question “Given this entire conversation history, and the contents of the text file named past_conversation_A_summary.txt, as if you’re reading it all for the first time in your life, what’s a sensible next response to generate to contribute to the conversation at this point?”
- “Agentic” –> “Agentic” means “acting as an agent”, and refers to an AI tool that not just generates text, but takes action on your behalf. (The mechanisms under the hood are still based on text generation: it’s generating some text that’s not shown to the user, where that text says something like “I would like to run specific tool XYZ, or push specific button ABC”, etc., and the connector software that you are using to interface with that LLM is capable of reading that generated text and mechanistically performing that automated action according to the text the LLM generated.)
- “Agents” –> I feel like there’s some fuzzy terminology and imprecise language going on here, and I find the difference between “Agentic” and “Agents” a bit confusing and unclear, but I also think it doesn’t matter too much. “Agentic” means the AI tool can perform actions on your behalf. “Agent” tends to refer to sub-agents, or separate running-environments for AI tools. For now, I’m not going to try to untangle this terminology further.
- “Harness” –> Two bullet points ago, when I was trying to define “Agentic”, I mentioned “the connector software that you are using to interface with an LLM”; the official term for that category of connector software is the “harness”. The harness is a computer program that you install and run locally, that talks to an LLM (where that LLM is typically running on a remote server in the cloud), but the harness adds bells & whistles to your user experience. It performs the agentic actions the core LLM has described in its generated text, and it feeds the LLM additional files of input on your behalf: if you mention a file to the LLM implying that you want it read, the harness is the software that actually goes and gets the file and gives a copy of it to the remote LLM so it can read it. The harness also takes care of some infrastructure around the LLM, like creating/managing/re-accessing “memory” files to make the LLM appear to have “long-term memory”, even though those memory files are just plain-text files stored on your local computer.
- “AI primitives” –> As a term, this tends to refer to some of the basic building-block-components that the harness deals with, like those plain text “memory” files I already mentioned.
- Claude Code –> Okay, we finally have the complete vocabulary to revisit the original “What exactly is Claude Code” question. Claude Code is an “agentic command-line tool”. Claude Code is primarily the harness, the connector software that you’ve installed locally on your computer, and happen to open via the command-line, though it then becomes a chat interface like almost any other AI tool. But what makes it special and powerful is its ability to be agentic, and its addition of “AI primitives”: plain-text files that are automatically concatenated into your conversation so that you don’t have to re-explain everything in every prompt of every new conversation. It’s the addition of those “AI primitives” that has completely changed the usability and effectiveness of this tool for me, and I’ll get into them and how I use them in the sections ahead…
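Since “fake memory by concatenating text” is the single most useful mental model in that whole list, here’s a minimal sketch of it in code (Python, with the model call stubbed out; to be clear, this is my mental model of what a chat wrapper does, not Anthropic’s actual API and not Claude Code’s real internals):

```python
# Toy sketch of how a chat wrapper simulates "memory" around a stateless model.
from pathlib import Path

def run_model(full_input: str) -> str:
    """Stateless stand-in for the real LLM: text in, text out.
    It sees ONLY full_input; nothing survives between calls."""
    return f"(a response generated from {len(full_input.split())} words of input)"

def chat_session(notes_file: str = "past_conversation_A_summary.txt") -> None:
    history: list[str] = []

    # "Long-term memory": if a notes file summarizing an earlier conversation
    # exists on disk, concatenate it in before the user's first prompt.
    notes = Path(notes_file)
    if notes.exists():
        history.append(f"[Summary of an earlier conversation]\n{notes.read_text()}")

    while True:
        user_turn = input("You: ").strip()
        if not user_turn:
            break
        history.append(f"User: {user_turn}")

        # "Short-term memory": on EVERY turn, the entire transcript so far is
        # Scotch-taped together and handed to the stateless model as one input.
        # (This is also why the context window eventually becomes a real limit:
        # the concatenated transcript, 200 + 300 + 100 = 600 tokens in the
        # example above, only ever grows.)
        reply = run_model("\n".join(history))
        history.append(f"Assistant: {reply}")
        print("Assistant:", reply)

if __name__ == "__main__":
    chat_session()
```

The whole point of the sketch is that run_model() is a pure text-in/text-out function: the only “memory” anywhere lives in the wrapper’s history list and in a plain-text notes file on disk.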
How to Get Good Results out of Claude Code
Despite what the hype out there seems to imply, we’re certainly not yet in a world where someone can type in a one-liner like “Write me an app that does X” and get a result you’d actually be happy with. Claude Code is astounding, but it still takes a lot of work to get good results out of it.
I find that I get the best results by interacting with Claude Code in the same ways that would get the best results when collaborating with a human. You wouldn’t just order another human around with flat, poorly explained statements, and then get upset when they didn’t understand what you didn’t explain well. Instead, engage with them, ask them questions, ask them what they understand and what they don’t, and ask them what they recommend. Put time into “teaching” them when you feel there is something they don’t know but that you personally value. Tell them the “what” you are trying to achieve, and give them some space to choose the “how” of implementing it. The age-old golden rule of “treat others how you would want to be treated” is important not just for being a decent human with respect to other people’s feelings; it is also important from a self-interested point of view, since following it helps you actually get the more-favorable results you want out of other people. That has always been true, but I’ve found it interesting how much that rule still applies when interacting with a mathematical word-generating model that does not actually have feelings (at least not in the traditional sense) for us to worry about hurting. Treating it collaboratively, as a reasoning partner, as a mentally-contributing co-creator of the idea you are refining, is still the most effective strategy for getting the best version of the results you want.
The key is asking it questions. I strongly recommend working one or more of the following questions, or something like them, into your prompts going forward:
- Do you know what I am getting at here?
- Do you know what I mean?
- Do you know why I’m asking?
- Why might I be feeling that?
- What do you think about this?
- What do you recommend?
- What do you think is the best way to achieve our goals here?
- Are there any larger learnings here?
- How can we improve our process here?
- How can we make that durable?
- What’s a good way to record that?
- Would it make sense to make that into a Claude Memory, Skill, or some other .md file stored with this project?
Do use it as a reasoning partner. Talk to it in plain English. Doing so does two things. First, just taking the time to write at length to it (even if it’s stream-of-consciousness and unstructured) forces YOU to think, and can lead you to solidify your own ideas and/or realize new things in the act. This part was always an option; certainly I could write to myself at length to mature my own ideas long before AI existed at all. I just didn’t always do it. Second: it is an excellent sounding-board. Feel free to type at length to it about your half-formed idea or list of unorganized thoughts, then ask it “What do you think?” or “Why might I be feeling that way?” I often find that it’ll repeat back to me a better-articulated, better-formed version of the same idea, and maybe also raise a concern about something related that I hadn’t even thought of. It will often get something 90% right, but not 100% right; having multiple rounds of talking about a topic helps me notice which part of what Claude is doing doesn’t feel right to me, so that I can constructively explain what I want it to do differently. It takes work to steer it. Do go through “plan mode”, and do read what it says; notice your own feelings about what you think it’s doing right vs what you think it’s doing wrong or could improve on, and have discussions with it based on the merits of the ideas.
Helping AI simulate Long-Term Memory
As I started to mention in the “Terminology” section earlier, an AI tool’s mechanisms for “long-term memory” are either non-existent or, in the case of the “Memories” system Claude Code has built in by default, somewhat weak. I find that, by default, Claude Code is quite stingy about how often it proactively chooses to create a “Memory” markdown file on its own, without you explicitly asking it to. I get much, much better results if I am super-active, almost uncomfortably active, about reminding it to create text files for memories, documentation, etc., building up a whole system of text files that it can reference in future conversations. I have a personal system of many categories of text-file content that my Claude Code sessions now know to load on demand when relevant, and know to edit liberally to pass on learnings to future Claude Code sessions. Lacking a good umbrella name for that “general bunch of text files that guide the AI”, I eventually came up with the single word “Rigging” to refer to them. I arrived at this word partially thinking of old sailing ships, and the rope-work that made them actually work; and partially because it pairs well with the already-official term “harness”: my rigging is a complement to the AI’s existing harness.
Okay, so I’ve come up with the word “Rigging” to refer to my personal, ever-growing set of text files that influences all of my Claude Code sessions going forward. All of these files are effectively just plain-text files, written in plain English (or whatever human language you’d like). I say plain text, but technically they are all markdown files, with the .md file extension. (I know all developers reading this already know what a markdown file is, but I’m throwing this in just in case there are any non-devs in the audience who don’t.) Markdown effectively still is plain text, but with some very light-touch optional formatting rules, explained here: https://www.markdownguide.org/basic-syntax/ (Example of one of the rules: if a word has double-asterisks around it, like **this**, that represents the word “this” as if it had bold formatting applied. In other words, the markdown rules are very, very simple, nothing fancy; still very human-readable and human-editable, even if you’re fairly non-technical.)
Like a coworker, you have to onboard it. Continuously improve the rigging. Remind it to record the things you care about.
Your job is to notice the things you care about, steer the AI, and remind it to “make durable” the things you care about.
Claude Code’s built-in “AI Primitive” file types. (Its default “rigging” types.)
First, there’s the basic files, (a.k.a. the “AI primitives”) that Claude Code has built-in concepts for:
- The CLAUDE.md file –> This is the starting point, the file that is automatically loaded at the beginning of every new Claude Code session going forward. The contents of this file are effectively a “preamble” of instructions or knowledge that is always sent ahead of whatever first prompt you write in every session.
- “Memory” files –> Claude Code has a built-in mechanism that occasionally tries to automatically remember a thing it thinks you care about, by creating an additional markdown file on disk in an invisible directory, at .claude/memory/feedback_something_you_said.md. I’m glad it does this, and I do use this built-in “memory” mechanism, but at the same time I also find it vastly insufficient on its own.
- “Skill” files –> If you ever find yourself having to explain a step-by-step process that you want Claude Code to follow multiple times over, then it makes sense to ask Claude to create a “Skill” for that, which will lead to it creating an additional markdown file on disk at .claude/skills/name-youve-chosen-for-this-skill/SKILL.md. In the future, you can make Claude once again follow all the steps in that process by writing a forward-slash, then the skill name, in a prompt, like “/name-youve-chosen-for-this-skill”
There are some other built-in categories of plain-English markdown files that Claude Code tries to create at times, but the ones above are the ones I use the most. It’s good to know about them so that you can proactively tell Claude “Hey, go add a Memory file about such-and-such” or “Hey, let’s create a skill that does the following repetitive steps…”, so that you don’t have to explain those things again in a future session. And it’s good to know about them because you should go find the raw copies, read them, and tweak the instructions/information in them, to better get what you want out of Claude in the future. For both the reading and the tweaking, you can ask Claude to show them to you, and you can ask Claude to make the tweaks by asking “How do you recommend we tweak such-and-such file so that I’m more likely to get such-and-such result out of you in the future?”
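If it helps to see those three file types working together, here’s a toy sketch of how a harness might assemble them into a session’s first input (Python again, with invented assembly logic; the paths match the ones described above, but this is emphatically not Claude Code’s actual implementation):

```python
# Toy sketch of a harness gathering "AI primitive" files into a preamble.
from pathlib import Path

def build_preamble(root: Path = Path(".")) -> str:
    """Collect the files that get concatenated in ahead of a session's first prompt."""
    parts: list[str] = []

    # CLAUDE.md: the always-loaded preamble for every new session.
    claude_md = root / "CLAUDE.md"
    if claude_md.exists():
        parts.append(claude_md.read_text())

    # "Memory" files: remembered learnings, concatenated in as extra context.
    for memory in sorted((root / ".claude" / "memory").glob("*.md")):
        parts.append(f"[Memory: {memory.name}]\n{memory.read_text()}")

    return "\n\n".join(parts)

def expand_skill(prompt: str, root: Path = Path(".")) -> str:
    """If a prompt starts with /skill-name, splice that skill's steps in ahead of it."""
    if prompt.startswith("/"):
        name = prompt.split()[0].lstrip("/")
        skill = root / ".claude" / "skills" / name / "SKILL.md"
        if skill.exists():
            return f"[Follow this step-by-step process]\n{skill.read_text()}\n\n{prompt}"
    return prompt

# A new session's first model input is effectively: preamble + first prompt.
print(build_preamble() + "\n\n" + expand_skill("/name-youve-chosen-for-this-skill"))
```

A real harness is smarter about when to load what (my rigging files are load-on-demand, not all-loaded-always, precisely to conserve the context window), but the shape is the same: it’s all just plain-text files being concatenated into the model’s input.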
About My Additional “Rigging”
Guide files. I built up my interaction with Claude Code layer by layer. These things take time. I started by asking it: “If I were writing a code-style guide, what are the best practices that should be written down in there?” (If you want, you can also throw in an “answer as if you were a super-senior veteran software engineer & architect” or whatever, but I don’t always.)
I read the guide it generates. I make a lot of changes & refinements of my own, I add pieces that I really want to see in there. I maybe go through a couple rounds-of-refinement with the AI. I make sure Claude’s rigging is aware of the guide, and has an appropriate trigger-question to load it on demand at times when that guide-doc would be relevant.
Then I do some work where the guide would be relevant. Write some code. At this point, I’m still carefully reviewing the code it generates. As I find parts of the resulting code’s style or structure that don’t quite feel right to me, I ask the AI why that happened, in a constructive tone, with an “and how can we do better in the future?” question. After enough rounds of this, I stopped finding anything I didn’t like about the code structure it was writing, and it felt less & less important for me to review the code it generated. So I stopped doing line-by-line code reviews; I started to trust it, and switched to just occasionally skimming the code it generates. I’m no longer its close paired-programming partner; I am now its slightly-distant manager who doesn’t have time to closely inspect everything it writes. I just care about its bigger-picture results: the software experience the user & customer is having, not what’s under the hood.
I did this with a code style guide, a bigger-picture architectural-choices guide, and eventually also a written UI/UX Design Principles guide. Now, I’m at a point where sometimes I can start a fresh Claude Code session with no context other than that significant rigging I’ve built up to be auto-loaded when needed, so from my point of view it’s a fresh blank chat window, and I can say something as pithy as literally “Something feels off on screen Foo to me, why am I feeling that way?”, and the AI will respond with something like “Ah; it’s probably specific-thing XYZ that feels off to you, because it violates principle ABC in our UI/UX guide. We could fix it by doing X, but for the sake of architecture we should refactor Y first, and also consider what to do about Z. Want me to make those changes for you? And what would you prefer to do about that open question Z?”
TODO: I have a bunch more I want to write here, I hope I come back and fill this section out…
Principles vs. Practices
I have a new job-description now
My job description is different now. I used to be a programmer. With the advent of Claude Code, and enough refinement of my personal rigging files for it, I no longer write any code at all. Now my job is to be the “upper-management translator”. Both before and after AI, it’s fairly universally true that upper management is generally bad at communicating what they actually want. I’m not trying to throw shade here; I recognize the reality: folks in upper management have other things on their plate, and they don’t necessarily have time to communicate in detail, or with clarity. Even more significantly, they often don’t yet know what they want; someone needs to do the process of figuring that out, and that is a big part of why they hired you: to figure out what they want, for them. Anyway, terse and poorly-explained asks from upper management have left real humans thinking “Wait, what?? What did they mean by that?” since long before AI, and that will continue to be true. The AI is not going to know what to do with that in order to produce really refined results either. So, for the reasonably foreseeable future, my job is secure as the translator: the one who talks to an AI at length to better explain what upper management is probably trying to ask for, and to explore things upper management maybe doesn’t even know it needs explored yet, but that I know are implied and will eventually need answers. I can oversee the AI “junior engineers” to get them that, and that overseeing itself takes time, often a lot of time and a lot of active thinking, which means a job exists there.
What is the Kindest thing to do for other engineers we care about?
We need to think about how to be kind when bringing experienced software engineers into this new world, this new way of working. That kindness isn’t in the discourse enough. Sit with people. Acknowledge their emotions. There is a grief to be felt for the way of work that was, and now is no more. I liked something about the craftsmanship of carefully authored lines of code, and was proud of the elegantly-readable solutions I would create within the code I wrote. Doing that is no longer necessary in today’s world, and that’s okay; we just have to acknowledge it, and acknowledge the emotional transition that comes with acknowledging it.
If you’re helping someone else with this: Sit with them, together, and help them take the first steps, paired-programming style, with the kindness you would bring to showing someone how to use a brand new IDE that you’re familiar with but they are not. Let them drive, so that they go at their own speed, and so that, by being the one at the keyboard, they are doing rather than watching; honestly, humans only really remember what they have physically done, not what they have just watched.
If you’re helping yourself with this: I don’t want to tell you how to spend your money, and it feels very awkward to do this, but I’m going to recommend that you do indeed try out Claude Code, and if your workplace isn’t already paying for it for you, then pay for it yourself. Pay for the $100-a-month plan, even if it’s just for a month, but do fully try it out. You need to experiment for yourself and see what it can do, and I hate to say it, but you owe it to yourself to learn to use this tool sooner rather than later, unless you are already fully retired and completely financially set. $100 a month is so worth it for what you get here. Compare it to the price & value of having an employee working for you. (Also, it seems pretty likely that it’s only going to get more & more expensive in the future, for many reasons, one of which is that I think the “competing product” to this is the cost of one full-time employee.)
What does this all imply for the world ahead?
I don’t want to get too far off in speculation-land, but it is worth acknowledging some of the future implications here. Yeah; this new tech probably is going to take a buzz-saw to the job market. A lot fewer engineers will be needed. The job market has already been brutally rough, and has been for at least two years (throughout all of 2024 & 2025), and what’s ahead will probably be worse. This gives me a lot of unease, and I don’t think there is any way to feel secure about it. I so badly wish we were heading toward some more-utopian future, where AI ushers in shorter work-weeks (what if “full-time” became a 32-hour work week?? Bah; I can dream), and/or AI ushers in a true Universal Basic Income (reading suggestion: Annie Lowrey’s book.) But I think the American reality is that none of those things will happen. Knowingly or not, our society will implicitly choose a policy of “work more, hustle more, and cut safety nets for any human who does-not or cannot, and blame them for it, call them at-fault for not hustling enough,” rather than take on a more we’re-in-this-together, family-like approach to our fellow humans of: “sure, those who work hard should be the relatively-most-rewarded, to incentivize that, but we also take care of our own, and implement a system where everyone in our country gets housing, food, healthcare, and the basics; the same way a family takes care of its young children, or its elderly, or members who are sick or injured, rather than blaming them & telling them they need to hustle more.” (My personal opinion is that an ideal system still has to involve capitalism, but combined with a progressive tax structure that makes a UBI possible that truly covers the basics. Those who work for it should absolutely still profit from their work, but as a society we don’t totally-fuck-over anyone who isn’t “hustling enough.” I wish we had something more like the Nordic model of democratic socialism. But yeah, this is probably getting too into politics and off the rails… And I know, the American ethos would never accept a system like that; it’s just not politically possible here.)
My Past-Life in Tech
I have been programming and writing code for a long time. I was absolutely the biggest computer nerd in my class growing up, writing my first very-amateur lines of code in 3rd grade, using a hand-me-down Texas Instruments computer that plugged into our home TV and ran code written in BASIC, which I learned by going to our school library and checking out actual physical paper books, as that was effectively before the Internet was a thing at all. Later, I was a Mac geek using Metrowerks CodeWarrior as an IDE, and learning from PDF copies of books that I would sometimes print out (oh, the irony), because our early 33.6kb/s dial-up internet connection was still barely usable. Once I went to college, choosing to major in Software Engineering was such an easy and obvious choice, and for a while, “Design Patterns” by the Gang of Four was legitimately my favorite book. This childhood love of tech coincidentally led to my adulthood love-of-place: the Pacific Northwest, a region of the world previously not on my radar, until a college summer internship at Microsoft brought me out, and now there’s no other place I could imagine living my life. Certainly I was also very into the outdoors as a child (I credit my mom there; thanks Mom!), and I probably would have gotten into climbing in any mirror-life possibility, but moving to Seattle certainly helped! (There, I very slightly tied this back into climbing, for a climbing blog!)
I’ve spent most of the last two decades working full-time in Seattle for one tech company or another, usually small ones, as I discovered I really valued the agility that startups are more likely to have, and often ones that I thought were doing something meaningful rather than the ones that paid the best, as that was more in line with my values; and to be honest, the pay was still plenty good. I also took some years off here and there (I rather like the phrase “taking my early retirement in installments”). Unfortunately, all of that means I don’t have enough saved up to truly retire yet, so I am still in this game, and still need an income for the foreseeable future before I can escape this rat-race entirely, to truly just climb and ski. (Ah, the someday dream!) I had thought I had quite a few earning years still ahead of me to keep raking in that oh-so-helpful tech salary, but now I’m not so sure… I’ve worked in tech a long time, so I’ve been on both sides of layoff rounds more times than I can count. I thought I was inured to them, but a layoff in January of 2024 felt especially impactful (partially because the company had been doing work at least distantly related to climate-change mitigation, which I badly wanted to be a part of). Very much to my surprise, I had the hardest time I’ve ever had finding a new job after that. Two years went by, essentially all of 2024 & 2025, before something finally landed! I started a new job in late January 2026, at first writing code the way I would have considered normal software engineering for the past couple of decades. Two months in, the push to use AI tools suddenly came, seemingly out of the blue, and while the transition was a bit emotionally rocky, I can say now that I’m glad it happened. I did need to be pushed to use Claude Code, and completely change my job description, in order to not get left behind.
Footnote
This post is not AI-written. I don’t want an LLM generating content for me, because they tend to pad and add fluff, which I absolutely do not want. (Rather: I’m sure I’m guilty of overwriting plenty on my own, but it’s my own, gosh darn it 🙂) I would not expect you to take time to read what I didn’t actually write. I definitely did use AI in writing this, but not for text generation; rather, to occasionally agonize over word choice. I’d have a back-and-forth conversation with AI about a small but significant word or phrase, in an effort to get some small but important part really right. Kind of the opposite of text/content generation, considering far more words passed between me and the AI model than ever made it in, and only a tiny fraction of those became the words I chose to manually type into what survived as this blog post. Is this blog post as concise as it could be? Certainly not, but that’s a tendency of my own writing coming through here.