Author: olahungerford_hmhi1b

  • You Don’t Have to Love Code (But You Still Need To Love The Process)

    soundtrack by Dopo Goto

    I’ve been thinking about whether someone can get good at building software using AI if they’re not interested in programming. With tools that can generate substantial amounts of code from natural language descriptions, it’s tempting to think coding passion might become optional. My hypothesis is that while the required skills are shifting, you still need to love something about the process to succeed in building and shipping useful things.

    Let me share a recent example: I spent multiple hours working with an AI assistant to update what should have been a simple note organization component. What started as “just let me edit all the tags” turned into an episode of wrestling with state management, dealing with race conditions, and questioning my sanity. The AI could generate code quickly, but it couldn’t give me the persistence to push through when things got messy.

    This experience highlighted something crucial: one of the hardest parts of software development isn’t necessarily about writing code. It’s about having the stubbornness and curiosity to:

    1. Keep debugging when your “simple” change unexpectedly breaks three other things
    2. Question your assumptions when you’re given plausible-looking code that doesn’t actually solve your problem
    3. Start over when you realize your initial approach won’t scale, even if your code is a dazzling work of art
    4. Admit when you need another human’s perspective, especially if you’re scared of looking dumb or of having wasted your time

    The social dimension is also critical and often overlooked. Even with incredibly capable AI assistants, building valuable software remains a fundamentally collaborative process. The AI doesn’t have perfect information to work with when it helps you make decisions. You will always have blind spots. If you have a good team (whether that’s colleagues at work or generous strangers on the internet), they will help fill in those gaps – parallel efforts, product requirements, tribal knowledge, and so on. In other words, you need to be comfortable with stuff like:

    • Having your assumptions challenged
    • Explaining and defending your decisions
    • Admitting when someone else’s approach might work better

    I’ve noticed that the people who really succeed at building solutions and making them better aren’t necessarily those who love coding for its own sake. I’ve often run up against the stereotype that programmers are people who love solving puzzles. Personally, I was never a puzzle solver. When I got started I just wanted to make the things in my head show up on a screen. Luckily for me, that turned into a fulfilling career.

    From what I’ve observed so far, the people who succeed at working with technology in the long term are the ones who:

    • Get obsessed with hard problems
    • Stay curious about how things work
    • Value adaptability (i.e. they practice some form of continuous improvement as it applies to their systems, and have a “growth mindset” as it applies to themselves)
    • Are OK with being wrong
    • Develop the persistence to push through obstacles (a.k.a. “grit”)

    So while AI might change how we write code, it doesn’t change the need for genuine interest in problem-solving and creating useful things. The focus of this interest might largely shift from loving a language to loving the process of breaking down complex problems, but something needs to fuel you through the inevitable challenges and ego-busting revelations.

    (Side note: I’m self-consciously avoiding the word “passion” here, which I associate with the stereotype of someone spending nights and weekends sleeping under desks and poring over code. While many people have a positive association with that type of passion, I personally think it’s outdated, overrated, and limits our perceptions of what success looks like.)

    Maybe instead of asking whether someone needs to love programming, we should ask: Are you excited enough about what you’re building to push through when AI can’t solve everything? Are you OK with throwing everything away when you realize that someone else already does it better, and then helping that person instead? In short, do you actually enjoy the process? These are all things I keep trying to ask myself, regardless of how (or how often) I create code.

  • Want to get better at prompting (and talking)? Try talking to your computer.

    soundtrack by Deep Sea Current and theycallhimcake

    Lately I’ve found myself talking to my computer more. Not just when I’m mad at it, but also when I’m writing code using AI-assisted tools like GitHub Copilot or Cline. It’s a shift that has happened organically as my role evolves from being the primary “driver” of code to more of a “navigator” who guides the overall direction.

    What’s interesting is how this practice can improve communication in both human and machine interactions. When you have to verbalize your intent clearly enough for speech-to-text to understand, you naturally become more precise in your explanations. It’s like the old rubber duck debugging technique, but now your rubber duck can actually respond and help refactor your code.

    There’s also a physical benefit that I can’t ignore. As someone who has spent countless hours hunched over a keyboard like Gollum with his precious – slowly devolving, watching my hands morph into claws – this alternative feels like discovering a cheat code for programmer ergonomics.

    It’s not perfect. There are still plenty of situations where I need to take the wheel, due to the limitations of both speech-to-text interfaces and the coding tools’ ability to carry out instructions. But I can definitely feel the difference, even if it’s not something I can do all the time.

    Going beyond the physical benefits, the real value might be in how it changes the way we think about programming interfaces. We’re moving from an era where we had to speak the computer’s language precisely, to one where we can express our intent more naturally. Computers are becoming better at understanding us, rather than the other way around. It’s another small step toward that Star Trek future where we can just say “Computer, refactor this method to use the Strategy pattern” and actually get meaningful results.
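    (For anyone who hasn’t run into it, here’s roughly what that kind of refactor looks like – a minimal TypeScript sketch with made-up shipping logic, not the output of any actual voice command:)

    ```typescript
    // Before: branching logic baked into one function (hypothetical example).
    function calculateShipping(order: { weightKg: number }, carrier: string): number {
      if (carrier === "ground") return 5 + order.weightKg * 0.5;
      if (carrier === "air") return 15 + order.weightKg * 1.5;
      throw new Error(`Unknown carrier: ${carrier}`);
    }

    // After: each carrier is a Strategy object; the calling code stays the same
    // no matter how many carriers get added.
    interface ShippingStrategy {
      cost(order: { weightKg: number }): number;
    }

    const strategies: Record<string, ShippingStrategy> = {
      ground: { cost: (o) => 5 + o.weightKg * 0.5 },
      air: { cost: (o) => 15 + o.weightKg * 1.5 },
    };

    function calculateShippingV2(order: { weightKg: number }, carrier: string): number {
      const strategy = strategies[carrier];
      if (!strategy) throw new Error(`Unknown carrier: ${carrier}`);
      return strategy.cost(order);
    }
    ```

    The win is that adding a new behavior means adding a new strategy object instead of editing an ever-growing if/else chain.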

    Of course, your mileage may vary depending on your environment and tolerance for looking like you’re talking to yourself. But in a world where many of us already spend our days talking to screens, maybe that’s less of a concern.

  • Is “Prompt Engineering” just good communication? And why does it matter what we call it?

    soundtrack by Deep Sea Current

    Over the last couple of years, the buzz phrase ‘prompt engineering’ has morphed into a widely accepted term for using language models, suggesting a specialized skill that you might need to pay someone to learn, like juggling fire or cooking soufflés. At first, I filed it away in the “continuing education” department of my brain like I would a new programming language or framework. When I finally took a closer look at what people were calling “prompt engineering,” my first thought was “what am I missing, isn’t this just writing?”

    The more I see it used, the less I think we really need another fancy term for what is essentially clear thinking and effective communication. As we hurtle into a future where these core skills are becoming an endangered species – and the rift between the superpower-havers and the have-nots threatens to widen rather than narrow – I become more convinced that the term does more harm than good.

    In The Short Term

    Granted, the various formalized techniques out there for structuring prompts can be super useful as a cheat sheet. This is especially true if you’re not already in the habit of thinking through problems systematically. But if you inspect these strategies more closely, you’ll notice that they follow a few common patterns. They all focus on clearly articulating what you’re trying to achieve, providing relevant context, and guiding the model’s thought process to make sure it considers the most important details. Sound familiar? That’s because these are fundamental skills we use in any form of communication, whether it’s with humans or machines.
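    To make that concrete, here’s the rough shape most of those techniques boil down to – a minimal TypeScript sketch with a hypothetical helper and made-up task details, not any official framework:

    ```typescript
    // A hypothetical helper capturing the common shape of most prompting advice:
    // state the goal, supply relevant context, and spell out how you want the answer.
    function buildPrompt(opts: { goal: string; context: string[]; askFor: string }): string {
      return [
        `Goal: ${opts.goal}`,
        `Context:\n${opts.context.map((c) => `- ${c}`).join("\n")}`,
        "Think through the important details step by step before answering.",
        `Then ${opts.askFor}`,
      ].join("\n\n");
    }

    // Usage: the same things you'd tell a human teammate.
    const prompt = buildPrompt({
      goal: "Reduce duplicate entries in our nightly import job",
      context: ["The job is a Node script that reads CSV exports", "Duplicates share an email field"],
      askFor: "propose two approaches and their trade-offs.",
    });
    console.log(prompt);
    ```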

    One counterargument is that there are definitely some model-specific quirks and technical limitations that are invisible to a user lacking specialized knowledge. Things like proactively working around a model’s token limits and context windows can lead to better and more consistent results. Understanding certain special parameters like temperature is useful if you have any control over it. And sure, when you’re building production systems that need to squeeze optimal performance out of these models, that specialized knowledge becomes much more relevant.
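    For the curious, here’s roughly where those knobs show up in practice – a minimal sketch assuming the OpenAI Node SDK, with a placeholder model name and values:

    ```typescript
    import OpenAI from "openai";

    const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

    async function summarize(text: string): Promise<string> {
      const response = await client.chat.completions.create({
        model: "gpt-4o-mini",   // placeholder model name
        temperature: 0.2,       // lower values make output more deterministic
        max_tokens: 300,        // cap the response length to stay within budget
        messages: [
          { role: "system", content: "Summarize the user's text in three bullet points." },
          { role: "user", content: text },
        ],
      });
      return response.choices[0].message.content ?? "";
    }
    ```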

    In The Long Term

    But here’s the thing. As these models become more sophisticated, a lot of these limitations are becoming less relevant, even to the power users and enthusiast crowd. Modern models (or rather, ensembles of models a.k.a. ‘agents’) are increasingly good at understanding natural communication and intent. The “engineering” part of prompting is gradually being absorbed into the models themselves. And with the most recent ‘reasoning’ models (like OpenAI’s o1), trying to micro-manage or over-engineer them can actually make them worse.

    Even in cases where we’re working with simpler, more specialized models, we’re likely heading toward a future where more advanced models orchestrate these interactions for us. In other words, the technical details of prompt creation get abstracted away, letting us focus on clearly communicating our goals rather than mastering specialized prompting techniques.

    Why does it matter?

    I worry that by mystifying these skills with fancy terminology, we’re creating real barriers for people who could benefit from this technology. Some engineers I know have been hesitant to try using AI tools because they don’t have time to learn a whole new set of skills. This is exactly the problem – we’re taking fundamental skills people already have and rebranding them in a way that makes them feel inaccessible.

    In software development, we’ve already seen how intimidating terminology can shape behavior. For a minute everyone thought they needed a “DevOps Engineer” to use continuous deployment. “Data Science” still gets branded as a mysterious discipline, artificially separated from analysts and engineers who do similar work. The same pattern is emerging with AI interaction. People who are already excellent communicators and problem solvers are holding back, either because they think they need specialized training first or because they’ve been alienated by the overuse of buzzwords.

    The irony is that the best results often come from clear, straightforward communication rather than technically complex prompts. I’ve watched ‘non-technical’ people get impressive results from these tools simply by explaining their needs clearly and iterating on the responses – the same skills they use when working with human teammates. By treating AI interaction as some specialized discipline, we risk overlooking the value of these fundamental communication skills that everyone already possesses.

    So what do we call it instead?

    Maybe instead of searching for the perfect catchphrase, we should think about who we’re trying to reach. Different groups naturally gravitate toward different metaphors and frameworks that make sense to them.

    For the sci-fi crowd (my people), something like “Robopsychology” might actually be perfect, if a little on the wacky side. It captures the essence of understanding how these artificial entities process information, and helps emphasize the squishier parts vs the purely technical aspects of communicating with them.

    The creative community might connect better with concepts like “AI Collaboration.” This framing acknowledges the partnership aspect of working with thinking machines, rather than treating them as just another system to be engineered (or something that autonomously replaces the artist and should be avoided). Writing this blog has been a fun experience in this area.

    For educators, terms like “Learning Design” already exist to embed these tools into the context of existing teaching methodologies and strategies. In this context, prompting is a skill that blooms naturally from forward-thinking educators like Lilach and Ethan Mollick.

    The point here isn’t to create more buzzwords – we have enough of those already. Instead, it’s about finding ways to make these concepts more approachable and relevant to different communities. Just as good teachers adapt their language to their students’ understanding, we should be flexible in how we talk about AI interaction based on who we’re talking to. And knowing your audience is yet another skill which will never become obsolete.

  • The New Disposable Code Economy

    soundtrack by Deep Sea Current

    I’ve been thinking about how generative AI is changing not just how we write code, but how we value it.

    Here’s a specific example of what I mean: I started building a UI for a personal note organization app months ago. While Todo lists are a popular tutorial project, making an app that matches your exact personal needs can become very complex and time-consuming. I eventually decided it wasn’t the best use of my time. Recently I dusted it off again, since I still don’t have a good off-the-shelf solution that fits my needs. I quickly blew away most of the existing code and used an automated VSCode extension called Cline to recreate a better version within a couple of hours.

    Another, more general example harks back to all those gloriously ugly personal GeoCities websites that popped up in the early days of the Internet. Except instead of HTML pages with tiled backgrounds, everyone is now spinning up entire frameworks and applications. This has been a trend for a while, but what used to be a stream of DIY solutions has become a full-on flood now that anyone can generate working code quickly.

    Why the ‘Why’

    Think of it like the transition from hand-crafted furniture to IKEA. Master carpenters used to spend weeks perfecting joinery techniques. Now, the real value is in the design itself – figuring out how to make something functional, appealing, and mass-producible. The actual assembly became commoditized.

    This isn’t just about AI making coding faster. It’s about a fundamental shift in what we consider valuable. When you can generate and regenerate implementation details almost instantly, the precious commodity becomes the product vision, the architecture, and the problem-solving approach. In other words, the “why” of solving the problem becomes even more important to understand and communicate clearly than the “how.”

    Letting Go

    Luckily, code is less environmentally damaging to dispose of than furniture. That still doesn’t make it easy. Even bad code quickly gets entangled in critical systems if it gets the job done. Creating new things is always more appealing and immediately gratifying than doing thankless surgery on a legacy codebase. This pattern leads to a deep, dark closet full of cruft and the sinking feeling that something in there is important but you don’t have the time to dig it out – until it starts to smell like smoke.

    So just as it’s becoming easier to create code, we need to get better at letting it go. The ability to rapidly generate new code could make technical debt spiral out of control if we’re not careful. Every piece of code we keep around has a maintenance cost, whether it’s actively being worked on or not. It takes up mental space, requires security updates, and adds complexity to our systems. The more easily you can create new things, the more important it becomes to regularly clear out the old.

    Implications

    This shift has interesting implications for how we work, including:

    1. Spending more time on problem definition and system design
    2. Faster experimentation with different approaches
    3. Less emotional attachment to specific implementations
    4. Greater focus on business outcomes over technical perfection
    5. Codebase pruning as a regular practice

    These are all topics that deserve their own posts. But at a high level, what does this mean for engineers and other tech builders? I suspect the most valuable skills going forward won’t be memorizing language features or design patterns, but rather developing strong intuition about system architecture, trade-offs, and knowing when to let go. The ability to quickly evaluate different approaches, communicate their implications, and recognize when code has outlived its usefulness will matter more than ever. Encouraging each other to treat removal as a healthy part of the software lifecycle rather than a failure, and sharing success stories of code retirement can help build positive momentum towards a more future-proof development process.

    The code itself might be disposable, but the thinking behind it – and the discipline to maintain a healthy codebase – certainly isn’t.

  • Botstrapping: A New Approach to Getting Started

    soundtrack by FIREWALKER

    bot·strap·ping /bɒtˌstræpɪŋ/ n.

    1. The practice of using AI tools to rapidly generate initial versions of code, content, or project structures, with the expectation of significant human refinement. “I used botstrapping to get the basic API endpoints in place before customizing the logic.”
    2. A project initialization technique that leverages AI assistance to overcome startup inertia, even when the output requires substantial editing.

    I’ve never used this term out loud, but in my head it perfectly describes a pattern I’ve noticed in my workflow. The term plays on “bootstrapping” – which has evolved from a satirical phrase about an impossible task (pulling yourself up by your own bootstraps) into a metaphor for self-reliant progress. In computing, it describes loading a simple system to launch a more complex one. In business, it means building something without external resources.

    “Botstrapping” intentionally inverts this self-reliance. Instead of pulling yourself up, you’re letting a bot give you a boost – even if you know you’ll need to clean up after it. It’s like having a very eager but sometimes confused intern who can get you 60%-80% of the way there in 10% of the time.

    What I find interesting about botstrapping is how it changes the psychology of starting new projects. That blank page anxiety gets replaced with the more manageable task of editing and refining. The bot’s output might be messy or even completely wrong, but it gives you something concrete to push against. It’s like having a sparring partner for your ideas rather than shadow boxing alone.

    Of course, there’s an art to effective botstrapping. You need to recognize when the foundation the bot has laid is fundamentally flawed versus when it’s just rough around the edges. Sometimes starting fresh is still the better option. But I’ve found that even the process of explaining to the bot what you want to build can help clarify your own thinking.

    The real value isn’t in the code or text the bot generates – it’s in how it changes the activation energy required to start something new. And in a world where getting started is often the hardest part, that’s not nothing.

  • The Artisanal Craft of Text Generation

    There are some words that LLMs really like using at the moment. One of them is “crafting” as a stand-in for “creating” or “writing.” I asked Google for thoughts on why that is:

    This rings true to me. I also suspect this reflects how these models were trained on content that tried to sound more sophisticated than necessary. Think LinkedIn posts, technical documentation, and marketing copy – places where people often reach for fancier words to add gravitas.

    It’s also telling that “craft” implies careful, intentional work. The creators of these models would prefer to present them as thoughtful artisans rather than mass-production text factories. But there’s something amusingly ironic about an artificial intelligence repeatedly choosing this artificially elevated language.

    Maybe we need to craft (sorry) a new word that better captures what AI is actually doing.

  • The AI Megapixel Wars – Benchmarks vs Practical Utility

    soundtrack by Jason Sanders

    The non-stop progression of Generative AI benchmarks over the past year has been both exciting and exhausting to follow. While big leaps in capabilities make for great headlines, I’m finding myself getting more skeptical about how much these improvements actually matter for everyday users. When I see reports about the latest model achieving better performance on some arcane academic test, I can’t help but think of my personal experiences where advanced models struggled with tasks like mastering CSS styling consistency, or ran themselves in circles trying to fix unit tests.

    At times this disconnect between benchmark performance and practical utility feels like a repeat of the Great Digital Camera Megapixel Wars. More megapixels didn’t automatically translate to better photos, and I suspect that higher MMLU scores don’t always mean that a model will be more helpful for common tasks.

    That said, there are cases where cutting-edge models can obviously shine – like complex code refactoring projects or handling nuanced technical discussions that require deep ‘understanding’ across multiple domains. The key is matching the tool to the task: I wouldn’t use a simpler model to help architect a distributed system, but I also wouldn’t pay premium rates to use o1 for basic text summarization.

    Maybe instead of fixating on universal benchmarks, we need more personal metrics that reflect our very specific definitions of real-world usability. For example, how many attempts does it take to write a working Tabletop Simulator script so I can play a custom Magic: the Gathering game format? How well does the model maintain the most relevant context in longer conversations about building out my Pathfinder tabletop RPG character? I doubt that OpenAI researchers are focusing on benchmarks specific to these problems. (Side note: I think it’s interesting that while embellishing this blog post, Claude suggested I should avoid using examples that are ‘too niche.’ ‘Niche’ is real life. We are all a niche of one.)

    I’d also hypothesize that a skilled verbal communicator working with an older model often outperforms an unfocused prompter using the latest frontier model, just like a pro with an old iPhone will still take better pictures than an amateur with the newest professional-grade digital camera. If this hypothesis is true, it suggests we should focus more on developing our own reasoning and communication skills, and choosing the right tool for each specific need, rather than chasing the latest breakthroughs.

    The most practical benchmark for your own everyday use can be as simple as keeping notes about using different models for your real-world tasks. For example, this post was largely written with Claude 3.5 Sonnet v2 using a custom project, because I consistently prefer the style and tone I get from Claude with this method. Then I asked o1 to give technical feedback, because I prefer to use o1 as the ‘critic’ rather than the ‘creator.’ My own unscientific personal testing has revealed that while frontier models do often impress me with their ‘reasoning’ abilities, they’re not always the best fit for every step in every task. And as this technology continues to evolve, finding a balance between capability and practicality will become increasingly important for anyone just trying to get things done.