Introduction
The development tools of the past were meticulously crafted, with stable behavior and restrained interactions; the issues that arose mostly stayed within expected bounds. With the advent of tools like Claude Code and Codex, however, using AI for coding has become the default approach. And while AI has indeed accelerated coding, it has not automatically solved the long-term maintenance of complex software.
The Case of Claude Code
Claude Code is a typical example. Developed by Anthropic, this tool was built almost from scratch, with the team adhering to a radical internal approach: insisting that “100% of the code for Claude Code must be written by Claude Code itself.” Tasks ranging from large code refactoring to various minor coding jobs rely on Claude Code.
The problem is that the underlying model is inherently non-deterministic, and when product code piles up rapidly under this development method, the system can easily fall into a vicious cycle. Over the past couple of years, Claude Code has rapidly expanded its capabilities, leading to increasingly complex interaction logic, which has made the product itself more unstable—resulting in crashes, bizarre error messages, and a growing number of bugs, all while slowing down.
The situation has even reached an absurd point—rather than systematically fixing these performance issues, the team opted to acquire the core dependency, Bun, placing their hopes on the underlying runtime team. In other words, they bought an entire runtime team just to prevent their CLI tool from consuming 2GB of memory at every turn.
The Complex Situation of Cursor
Cursor’s situation is different but equally complex. It did not start from scratch; it forked VS Code directly, inheriting an extremely large and complex codebase. That starting point meant that from day one it was fighting a high-difficulty battle: not building a product on a blank slate but making incremental modifications to a massive existing system. The team had to continuously develop their differentiated capabilities while maintaining this fork and keeping up with necessary upstream updates. Anyone who has worked on large-scale engineering knows this is inherently painful, and over time the fork only diverges further, driving maintenance costs up.
A Clear Trend: A Wave of Major Rewrites
When these phenomena are viewed together, an increasingly clear trend emerges: AI programming tools may undergo a wave of large-scale rewrites. Codebases pushed into an increasingly irreversible state during early rapid iteration will only grow more fragile as new features are added. The real solution often lies in acknowledging that the old framework has spiraled out of control and starting anew from scratch.
However, this does not mean all teams will reach this point.
The Interesting Contrast of OpenCode
OpenCode provides an interesting contrast. Built amid the same wave of AI programming, the team adopted a completely different strategy: emphasizing consistency and constraints in the codebase more than ever before and ensuring that no file deviates from established norms. They also lean heavily on tools and frameworks with stronger constraints and clearer design philosophies, and firmly practice domain-driven design.
They believe that with large models involved in development, once the codebase becomes “dirty,” the consequences will be magnified. Large language models cannot distinguish between “old patterns” and “new patterns,” treating old writing styles as correct examples and continuing to generate code that does not conform to current standards. Thus, the negative impact of a dirty codebase is more severe than in the past.
As a result, they have achieved a somewhat counterintuitive outcome: their codebase is cleaner than ever, “possibly the highest quality batch of code we’ve ever written,” as stated by Dax Raad, one of the founders of OpenCode, in a podcast.
The Role of Handwritten Code
Meanwhile, he has not abandoned the act of “handwriting code” itself. “When I design new features or complex architectures, writing code is part of the thinking process. I’m not good at writing long detailed specifications; instead, I prefer writing type definitions, experimenting with function compositions, and adjusting file structures to understand problems. This has been the working style of most programmers for a long time. I see no reason to abandon this way of working; writing code is how I think.”
He also subtly criticized Claude Code from a code quality perspective: “Claude Code created a prototype with a high product-market fit, and even if the experience isn’t perfect, it will succeed. But that doesn’t mean everyone must sacrifice quality to achieve that speed.”
The Origin of OpenCode
Host: Since December last year, OpenCode has developed rapidly. Can you take us back to how OpenCode was born?
Dax: Our company has been involved in open source for many years, building tools for developers and experiencing the rise and fall of various companies in this field, accumulating a lot of experience in building open source products.
Our previous project, SST, while not as large as OpenCode, was quite popular. It gave us end-to-end practical experience: how to start an open source project, how to make it successful, how to operate it day to day, and what the advantages and disadvantages of open source are. You could say we have been deeply embedded in this field.
Around February 2025, we became profitable. At that time, the company had only three people. After achieving profitability, we began to reflect on what to do next: continue to deepen existing products or explore new directions? AI is clearly an important trend of this era, and ignoring it would seem very unwise.
So we started trying some ideas, exploring what AI could do for developers and what it could do on a broader level. We tried several directions but never really formed a product. Some ideas were helpful to us, but we couldn’t refine them into mature products.
Around that time, we started using Claude Code. Before that, we had seen many AI programming tools, like Cursor, which was already quite popular. But no one on our team really used it extensively. We tried it, but it felt like we were giving up some things we originally liked without gaining enough benefits, so we didn’t stick with it.
However, Claude Code was the first tool that made us feel, “This is the right workflow.” Before that, we had been copying code into ChatGPT and then copying it back, going back and forth. We kept wondering why these things couldn’t connect directly to the file system. Why was it so manual?
Claude Code smartly integrated these processes. We thought, if this is the first tool that truly allows us to “use it,” then this could be significant.
Next, we began to think: what if we applied our experience in open source? There was a clear gap—there was no open source coding agent yet. So we wondered if we could create an open source coding agent that supports multiple models, knowing that competition among these models would persist and intensify.
This entry point was a natural extension for us.
Daily Development Workflow
Host: What does your daily development workflow look like now? How much has it changed? After all, you are both a developer and someone who develops developer tools, which gives you a unique perspective.
Dax: Our team members are all Vim users, and we do almost all our work in the terminal, enjoying the Vim editing experience. Transitioning to Cursor would be costly for us because while we could still edit code, the text editing experience felt worse, and the benefits didn’t compensate for that loss.
Claude Code is usable because we can continue using our original editor while doing AI-related tasks in a separate space, without interference. This is very important for us.
I think Cursor is more like a transitional product; it tries to take you directly from traditional editors to the new paradigm of AI programming. While this has some benefits, for me and many others, it feels a bit awkward—it’s like I just want to use the editor to write code without AI features popping up everywhere.
When using Cursor, I feel overwhelmed by suggestions and new UI panels, which makes me uncomfortable. I prefer to think of the agent as a “dumb colleague sitting next to me”: I occasionally glance at what it’s doing, give it some feedback, and let it continue while I do other tasks. The work can be compartmentalized.
Thus, Claude Code’s biggest advantage is that it provides an independent space outside the editor. When we were developing OpenCode, we continued this direction: making interactions with the agent as rich and effective as possible in this “independent space.”
My workflow remains: I use Neovim to edit code and the agent to handle tasks that require it. We are indeed using the agent more and spending relatively less time in the editor, but I have not completely abandoned handwritten code. I still use the editor extensively to write code manually.
The Debate on Handwriting Code
Host: Many top developers claim they no longer write any code from scratch. Many people interpret this as “programming is dead”; what’s your take?
Dax: I find this statement puzzling. If you ask me what percentage of my coding is handwritten, I would find it hard to answer. I switch between different tools, making it difficult to quantify.
If someone says they hardly use an editor and work entirely within these agent tools, whether OpenCode, Codex, or others, I would be surprised. Because these tools are not really suited for reading code. Do they not conduct code reviews at all? Or do they push code to GitHub and then review it?
Moreover, when I design new features or tackle complex tasks, writing code is part of my thought process. If it’s just adding a button or making a simple change, sure, I can prompt it directly without even looking at the generated code, because it’s likely similar to the surrounding code.
But when I’m working on something entirely new or designing a system, I need to write code to figure out how to proceed. I find it hard to sit there and write a long detailed spec and then let AI implement it. I prefer writing type definitions, experimenting with how different functions combine, and adjusting file structures to understand the problem. This has been the working style of most programmers for a long time.
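The type-first sketching Dax describes can be illustrated with a small, hypothetical TypeScript fragment. The names here are invented for illustration and are not from OpenCode’s codebase; the point is modeling the domain as types first, then probing how small functions compose over it:

```typescript
// Hypothetical illustration of "thinking in types": model the domain first,
// then experiment with how small functions fit together over those shapes.
type Message = { role: "user" | "assistant"; text: string };
type Session = { id: string; model: string; messages: Message[] };

// A pure helper: appending a message returns a new session value.
const addMessage = (s: Session, m: Message): Session => ({
  ...s,
  messages: [...s.messages, m],
});

// Composing a second helper quickly reveals whether the shapes actually fit.
const lastAssistantReply = (s: Session): string | undefined => {
  const replies = s.messages.filter((m) => m.role === "assistant");
  return replies.length > 0 ? replies[replies.length - 1].text : undefined;
};
```

Sketching and discarding fragments like this, rather than writing a long prose spec up front, is the thinking step he is pointing at.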
So I see no reason to stop doing this because it’s how I clarify things.
Therefore, when someone says, “I don’t write code at all,” I tend to be somewhat skeptical. I think there’s a psychological factor at play: people feel a significant change is happening and worry about being obsolete, so they tend to convince themselves, “I’m already at the forefront.”
Additionally, there’s a narrative now that this change will eliminate many people, leaving only a few. So there’s a tendency to exaggerate certain localized successes into a narrative of “everything can be done this way now.” It’s hard to judge the real situation from these statements because they are mixed with many emotions and psychological factors.
Host: I think that’s a good point because I don’t even see it as “intentional marketing.” For example, one of the early authors of Claude Code, Boris, said he hardly writes code from scratch anymore, but he recently mentioned, “Why is Anthropic still hiring developers?” indicating that humans still play a significant role.
Dax: I agree—this isn’t malicious; it’s excitement and anxiety intertwined, which makes it hard for people to describe reality accurately. Something similar happens whenever new technologies or frameworks emerge and people claim they “completely change the way we work.” A useful benchmark is to look directly at the output: in many cases there are no real shipped products, only attempts; and even where products exist, the quality is often no better and sometimes worse. The same holds in current AI programming practice—some claim to “rely entirely on AI for coding,” but the output quality is not ideal, which reflects where things actually stand today.
Competition Between OpenCode and Claude Code
Host: OpenCode and Claude Code seem to be direct competitors; what’s your perspective? Especially after Anthropic limited subscription usage, has your view changed?
Dax: I don’t think the world is zero-sum; most systems allow for multiple parties to win. However, competition does exist in the commercial realm. Business is more like a sports competition, with everyone vying for different visions of the world. One side may not completely win, but competition is real. More importantly, it’s about positioning. Even if products seem similar, their positioning can be entirely different.
OpenCode’s success derives more from its positioning than just product quality. We judge that competition between models will continue, including closed-source and open-source models. Prices will drop, and competition will intensify. Therefore, we choose to create a tool that is not tied to a single model, allowing us to benefit from model competition. Secondly, we aim to occupy the position of “the first open-source coding agent.” Historical experience shows that most development tools ultimately trend towards open source, as seen with databases, compilers, and editors.
Claude Code follows a vertically integrated route, which differs from our positioning. From a positioning perspective, we may not be in direct competition. However, we do have ideological differences and hope to prove that our values will yield better results.
Host: As a user who has used both OpenCode and Claude Code, I can certainly say that OpenCode offers a great experience. I would summarize it as: open source, the ability to switch freely between models, no lock-in, and first-mover advantage.
Dax: These are indeed core directions and not just slogans; they are reflected in many specific details. For example, why insist on open source? Because open source means more people can try it in different environments. OpenCode was designed from the beginning to adapt to a wide variety of environments—even on corporate laptops with strict restrictions, it still runs normally. The benefit of open source is that while we cannot replicate every environment internally, the community can. Others can test in real environments, report issues, and even submit fixes, effectively covering all kinds of long-tail scenarios. If the product were only half as good as it is now, we might still achieve similar success, because success comes more from positioning than from product quality alone.
Host: OpenAI has taken a different approach in its relationship with you. What is the nature of your relationship with OpenAI, and why has OpenAI chosen a different approach?
Dax: This really goes back to our positioning. If we are the open-source option, we have the opportunity to become a “standard,” allowing others to build on us or embed us into their systems. So before collaborating with OpenAI, we were already in discussions with GitHub, GitLab, JetBrains, and others, hoping they would recommend OpenCode as a way to use their large model services, since we have invested more in this area and received better user feedback. After persuading some of those companies, I approached OpenAI, pointed out that the industry already had support lined up, and asked if they would be willing to join.
The reason for choosing OpenAI is that they compete with Anthropic, and Anthropic has higher mindshare in the coding field. Supporting us gives OpenAI a public-relations win and attracts more users to Codex. The timing of my contact with them coincided with Anthropic blocking the use of Claude Max subscriptions with OpenCode, and they saw an opportunity to position themselves in the opposite direction.
As for whether OpenAI truly endorses this model or is motivated by short-term competition, I’m not sure. However, we excel at understanding the incentives of all parties and influencing key points to create a situation beneficial to ourselves, our users, and the open-source community. Essentially, it’s about understanding incentive mechanisms and creating better outcomes in the game.
Host: Recently, there have been many acquisition cases in the industry. Will OpenCode be next?
Dax: We spent many years searching for a truly massive market, and now we have found it. There are 30 to 50 million developers globally, and our product can in theory serve every one of them—a rare opportunity. So it is hard to hand that over lightly. We have indeed received many acquisition offers, but we haven’t seriously pursued any of them, and we wouldn’t unless someone offered a very high price—there are, after all, some eye-watering acquisition deals in the AI field.
Once, I mentioned in our team chat that a certain company wanted to acquire us, and everyone completely ignored it and continued discussing the product. When I reminded them again, someone said, “Let them add another zero before coming back.” The team genuinely wants to see this through rather than cashing out quickly.
Of course, a few years later, if growth stagnates, my attitude might change. The company can grow because the founders maintain motivation over the years. Many acquisitions happen because founders lose motivation or the path ahead seems too long; currently, we want to see this through to the end.
Balancing Speed and Quality
Host: AI allows us to move faster, but does it also accumulate more technical debt? Has this trade-off undergone a fundamental change?
Dax: This trade-off has always existed. Many times, people use “trade-offs for speed” to explain quality issues. However, looking back, most problems are not due to deliberate trade-offs but rather a lack of experience.
When I first do something, 95% of the problems arise from a lack of experience rather than conscious choices. The next time I do it, I can do better in the same amount of time.
AI is similar; it raises everyone’s capability ceiling but should not be an excuse for laziness. We should still reflect and improve, rather than thinking that just because it “runs” there are no issues.
Some people say, “The code is terrible, but we write it quickly.” In reality, more experienced individuals can write better code at the same speed; this is fundamentally still an issue of capability.
Host: From a product and user perspective, positioning and speed may be more important than quality in the short term. For instance, with Claude Code, was it reasonable to release quickly? Should it have been done differently later?
Dax: I believe everyone will try to move forward as quickly as possible and make different trade-offs based on their experiences. Claude Code’s situation is that they created a prototype with a high product-market fit, and even if the experience isn’t perfect, it will succeed. This situation is common, but it doesn’t mean that “everyone must sacrifice quality to achieve that speed.”
We developed OpenCode around the same time—building terminal frameworks, Zig implementations, React and SolidJS bindings, compiling to Bun binaries, and so on. The reason we can deliver higher quality at a similar speed is that this is our familiar domain. Of course, there are certainly people who can do better than us. In this industry there will always be people ten times worse than you and people ten times better.
Developer Choices
Host: When a large amount of code is generated by AI, how do you balance efficiency and quality? For example, in code reviews, do you submit without reading the code?
Dax: A somewhat counterintuitive phenomenon is that I believe our codebase is cleaner than ever before, possibly the highest quality batch of code we’ve ever written. The reason is that the negative impact of a dirty codebase is now more severe than in the past.
In the past, the typical lifecycle of a codebase was: we initially established a set of patterns, and a few months later, we discovered better practices and notified the team to develop according to the new way, but old code wouldn’t be immediately refactored. Over time, multiple layers of legacy styles would form in the codebase. This was acceptable in the past, but not anymore. Because large language models cannot distinguish between “old patterns” and “new patterns,” they treat old writing styles as correct examples and continue generating code that does not conform to current standards.
Therefore, we place even greater emphasis on clearly defining and strictly enforcing a unified pattern, ensuring that no file in the codebase deviates from the norms. In a sense, we care more about code quality now because we “employ” a group of diligent LLMs with excellent memory but little understanding—they cannot judge which patterns are superior. We make extensive use of tools and frameworks with strong constraints and clear design philosophies, and we firmly practice domain-driven design.
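One lightweight way to enforce this kind of repo-wide uniformity is a consistency check that fails CI whenever a file deviates from the agreed patterns. The sketch below is purely illustrative: the specific rules (banning default exports and explicit `any`) are assumptions for the example, not OpenCode’s actual conventions or tooling.

```typescript
// Illustrative repo-wide convention check (not OpenCode's actual tooling).
// Each rule names a pattern the team has agreed to keep out of every file.
const bannedPatterns: { name: string; re: RegExp }[] = [
  { name: "default export", re: /\bexport\s+default\b/ },
  { name: "explicit any", re: /:\s*any\b/ },
];

// Returns one human-readable violation per banned pattern found in a file.
function checkFile(path: string, source: string): string[] {
  return bannedPatterns
    .filter((rule) => rule.re.test(source))
    .map((rule) => `${path}: contains ${rule.name}`);
}
```

Run over every file in CI, a check like this keeps “old patterns” from lingering long enough for a model to pick them up as examples.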
As for whether I read all the code, my approach is based on risk assessment. In mature and stable modules, I have strong expectations for the output and usually do quick checks; whereas in areas where the structure is not yet stable, I tend to be more cautious and review line by line, and the team generally adopts a similar strategy.
Host: Some people might be shocked to hear that you don’t review every line of code. But looking back, even in large tech companies, not every line of code is thoroughly read. Often, as long as trust is established with the developer, the review can be relatively quick. To some extent, you seem to be suggesting that a certain level of “trust” can also be established with LLMs.
Dax: I still consider myself somewhat conservative. Even in large companies, at least one person truly understands the code—that is, the person who wrote it. If AI generates code and no one understands it, that can be unsettling.
I prefer to use a sense of risk to make judgments. For instance, in a recent case where I reviewed less code, I was implementing a new dialog box for a terminal interface. I tested it thoroughly from the user’s perspective to confirm the functionality worked, and since the underlying components for building the dialog box are very mature, I judged the risk to be low. Some implementation details were flawed—I did clean them up later—but there were no significant issues in the short term. I still fix such things as soon as possible, because non-compliant code can contaminate subsequent model generations.
This is fundamentally the same as in the past: you can “cut corners” a bit, but remember to come back and fix it.
Host: Many people now believe that the joy of programming has been diminished, and developers have become “prompt factories.” What’s your take? Has AI made you lose interest in programming?
Dax: For me, the answer is no. But I might belong to a minority because I run my own company and can choose my work direction. AI tools allow me to explore new ideas faster and invest time in more creative aspects rather than repetitive tasks.
However, I understand why many people feel frustrated: if you are just assigned tasks, input prompts, and wait for results without more challenging work, it can indeed feel boring. In fact, there has always been a lot of repetitive work in programming, which is now being taken over by agents.
The truly interesting parts—system design, directional judgment, and problem definition—remain human-led and often do not occur frequently. Perhaps once a month, rather than every day.
Host: Personally, I feel that AI has enhanced the fun. It allows me to focus on higher-level abstractions without getting bogged down in syntax details. But I also worry that if we rely too much on tools, our skills might degrade.
Dax: This concern is real. I remember having strong mental arithmetic skills as a child, but now they have clearly declined. Similarly, if one relies on AI for a long time, certain coding abilities may diminish. While this might have limited real-world impact—like calculators replacing mental math—the gap in ability might become evident when facing complex problems.
The issue is that it’s like “the genie is out of the bottle.” As long as there are tools that make it easier for people, they will continue to use them. The key is whether the saved energy is used for more valuable tasks or just to scroll TikTok.
I have experienced both states, sometimes being very engaged and sometimes letting AI work while I zone out. If this phenomenon occurs simultaneously among millions of programmers, it’s hard to judge how much productivity truly increases in the long term, especially if someone is just doing a job they don’t care much about.
Host: So, does enjoying writing code become a disadvantage? For instance, does someone become overly obsessed with technology while neglecting more important skills?
Dax: This is not a new problem; in the past, some developers became obsessed with technical details while ignoring product and business judgment. Excellent people often know how to balance: when to delve into technology and when to focus on product direction.
In my view, programming skills can lead to good career positions, but true breakthroughs often come from a second specialty. If you are an excellent programmer and deeply understand a certain industry (like finance, healthcare, or energy), you are in an extremely scarce position. Programmers can enter almost any industry, which is a huge advantage. If you can accumulate a deep understanding in your field, you can discover overlooked structural opportunities.
Host: You have refused multiple acquisition and high-salary offers. Why did you not choose a more stable path?
Dax: When I was younger, I thought the founders of Snapchat were crazy for rejecting Facebook’s multi-billion dollar acquisition, but later I understood. As you progress in your career, your “safety net” expands. In earlier years, if you received a good offer, you might not be able to refuse it. But when you have accumulated, have capabilities, and have a fallback, your ambition grows. Accepting an acquisition means giving up your original dream; that feeling of “all visions coming to an end” is much stronger than the short-term ease. Therefore, unless the conditions are extremely favorable, it’s hard for me to make that choice.
Host: Many people consider you an “elite developer.” What do you think is your core advantage?
Dax: Frankly, I have many colleagues who are technically stronger than me; I may not be the best programmer. My advantage lies more in having a holistic perspective—being able to anticipate where things are heading and make reasonable judgments. My two co-founders are similarly skilled in this area, and we push each other forward. We work hard to find the underlying rules in a complex industry environment, distinguishing fundamental logic that has long held true from beliefs that are merely conventions of the moment. The team has invested a lot of time in this, and I often have deep discussions with friends on the topic. This way of thinking is transferable—to programming, business operations, personal decisions, and recruiting—and it may be my true advantage.
Host: Are these abilities innate, or did you cultivate them intentionally? If they were cultivated, can anyone improve through effort?
Dax: When talking with top people in the industry, it can seem as if they have innate talent. But after deeper conversations, it usually turns out they started out ordinary and improved gradually through sustained investment. For me, the core of this ability is genuinely caring whether my understanding is ultimately correct, rather than trying to win the current debate—building an accurate model of how the world works. If that is the goal, the necessary actions follow. The key is to keep your thinking clear and to recognize yourself and your insecurities; insecurity can distort judgment, causing you to selectively trust evidence that matches what you want to be true. This takes long-term growth and accumulation.
When I was younger, I felt less secure and was weaker at solving problems. As my confidence and track record grew, my thinking improved as well. At the same time, it is essential to be careful about the information you take in—avoiding the kind of overload that narrows your thinking and traps you in a single cognitive bubble. Maintaining self-awareness and continuous reflection is a long-term commitment, and in today’s social environment, keeping your thinking clear faces many distractions. Achieving it requires a steadfast pursuit of being ultimately correct.