
Two Months of Claude: How Anthropic's Latest Announcements Are Changing the Way I Build Software

March 23, 2026

I've been building software with Claude Code since it launched. I wrote about why I chose it back in January, and I covered the Opus 4.6 release in February when the new models dropped.

But something happened in the weeks after those posts. The tool I use every day changed more in the last two months than in the entire year before it. And I don't just mean incremental improvements—I mean fundamental shifts in what's possible.

Anthropic shipped four major updates that, taken together, redefined how I work. Here's what each one actually looks like in practice.


The Foundation: Opus 4.6 and Sonnet 4.6

When Opus 4.6 dropped on February 5 and Sonnet 4.6 followed on February 17, I covered the specs: 1 million token context window, 128K output tokens, adaptive thinking. Those numbers sounded impressive on paper.

Two months later, I can tell you what they mean in practice: I stopped managing Claude and started collaborating with it.

What Actually Changed

Before the 1M context window, I spent a surprising amount of time re-explaining things. Every session started with catching Claude up—here's the codebase structure, here's what we did last time, here's the architectural decision we made three conversations ago. It was like working with a brilliant colleague who had amnesia every morning.

Now I load an entire project into a single session and it just stays there. I can say "remember the approach we took on the authentication module" and it does—because it's all in context. That sounds small. It isn't. It's the difference between having a conversation and having a collaboration.

The 128K output tokens changed something else entirely. I used to break every request into pieces. "Write the model." "Now write the controller." "Now the tests." Each response was a fragment I had to stitch together.

Now I describe what I want and Claude delivers the entire feature—model, controller, service layer, tests—in one coherent response. It doesn't lose the thread halfway through because it ran out of output space. The code is internally consistent because it was generated as a single thought, not assembled from disconnected fragments.

Adaptive Thinking

This one's subtle but important. Adaptive thinking means Claude decides for itself how hard to think about a problem. Ask it to rename a variable? It responds instantly. Ask it to design a data migration strategy? It takes a few extra seconds to reason through the implications.

In practice, it just feels smarter. Not because the model is fundamentally different, but because it allocates its intelligence where it matters instead of overthinking simple tasks or underthinking complex ones.


Claude Code Got Serious

I spend most of my day in the terminal. Claude Code is where the real work happens. And in the last two months, it crossed a threshold from "AI-assisted coding tool" to something I'd genuinely call a colleague.

Voice Mode

This one surprised me. When /voice launched, I figured it was a novelty—who talks to their terminal? Turns out: I do, now.

The use case isn't dictating code. It's thinking out loud. I'll be reviewing a complex piece of architecture and instead of typing a multi-paragraph prompt, I just hold the spacebar and talk through what I'm seeing. "This service is handling too many concerns. I think we should split the notification logic into its own module, but I'm worried about the circular dependency with the user service. What do you think?"

It's faster than typing, and something about speaking out loud forces me to organize my thoughts before Claude even responds. I end up with better prompts and better answers.

/loop

This is one of those features that sounds minor until you use it. /loop runs a command or prompt on a recurring interval. I use it to monitor test suites during refactors, check build status, and run periodic code quality checks.

It's like having a junior developer who never forgets to check on things. "Run the tests every five minutes and tell me if anything breaks." That kind of background awareness frees up mental space for the actual creative work.
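Claude Code's actual /loop syntax aside, the underlying pattern is simple to sketch. Here's a minimal Python approximation of "run this check every N seconds and tell me if anything breaks" — the command string, interval, and callback below are illustrative placeholders, not anything from Claude Code itself:

```python
import subprocess
import time

def run_on_interval(command, interval_seconds, iterations, on_failure):
    """Run `command` every `interval_seconds`, `iterations` times.

    Calls `on_failure(output)` whenever the command exits nonzero,
    mimicking the "tell me if anything breaks" behavior.
    Returns a list of pass/fail booleans, one per run.
    """
    results = []
    for i in range(iterations):
        proc = subprocess.run(command, shell=True,
                              capture_output=True, text=True)
        ok = proc.returncode == 0
        results.append(ok)
        if not ok:
            on_failure(proc.stdout + proc.stderr)
        if i < iterations - 1:
            time.sleep(interval_seconds)
    return results

# Hypothetical usage: run the test suite every five minutes for an hour
# and surface any failures:
#   run_on_interval("pytest -q", 300, 12, notify)
```

The value isn't the loop itself — it's that the checking happens in the background while your attention stays on the code under review.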

The Million-Token Terminal

The 1M context window in Claude Code changed the game for real-world projects. I work on applications with dozens of files and complex interdependencies. Before, I had to be strategic about what I loaded into context—which files does Claude need to see for this specific task?

Now I don't think about it. The entire project is in context. Claude can trace a bug from the API endpoint through the service layer to the database query to the test that should have caught it. No more "can you also look at this file?" back-and-forth.

Channels

Channels is still in research preview, but the concept is compelling—hook Claude Code up to Discord or Telegram so it can push updates to you asynchronously. I haven't integrated it into my workflow yet, but I can see the endgame: Claude working on a long-running task and pinging me on my phone when it's done or when it needs a decision.


Cowork: Claude for the Rest of Your Work

I wrote about Cowork when it launched, comparing it to Microsoft's Copilot and Google's Gemini. That was the feature comparison. Here's the practitioner's report after two months of daily use.

Where It Actually Shines

Cowork isn't where I write code—that's Claude Code's job. Cowork is where I do everything around the code.

Research and analysis. When I was writing the AI brain fry post, Cowork pulled the BCG study, cross-referenced it with the Amazon outage reporting, and organized the key findings into a structured brief—all from a single prompt. That research would have taken me an hour of tab-juggling. It took Cowork about three minutes.

Writing and editing. Blog posts, proposals, documentation—I draft ideas and Cowork helps structure them. It reads my existing posts for voice and tone, then helps me maintain consistency. It's not writing for me. It's catching the places where I'm being unclear or repetitive, which is exactly the kind of editing I'm worst at doing for my own work.

Email and communications. With the Gmail and Google Drive connectors, Cowork can draft responses based on the full context of a conversation thread. It reads the back-and-forth, understands what's being asked, and proposes a reply. I still review and send everything myself, but the first draft is usually 80% of the way there.

Projects

The Projects feature deserves a callout. You can group related tasks into a workspace with its own files, context, and memory. I have a project for this blog, a project for client proposals, and a project for ongoing market research. Each one remembers where we left off.

It's the difference between a chatbot and an actual assistant with a filing cabinet.


Computer Use: The One I Haven't Tried Yet

This one launched today. Literally today—March 23. Claude can now control your Mac: point, click, navigate applications, use the browser, fill in spreadsheets. It's available as a research preview for Pro and Max subscribers.

I haven't used it yet. I'd be lying if I wrote about it like I had. But I want to talk about why it matters.

The Trajectory

Look at the progression over the last year:

  1. Chatbot — you ask a question, you get an answer
  2. Code assistant — you describe what you want, it writes the code
  3. Desktop agent — it reads your files, coordinates subtasks, works across applications
  4. Computer operator — it controls your screen, your mouse, your keyboard

Each step removed a layer of friction between "what you want done" and "it getting done." We've gone from an AI that generates text to an AI that does things.

Anthropic built in safety guardrails—Claude asks permission before accessing new applications, and the system scans for prompt injection in real time. That's reassuring. But what excites me is the direction. If computer use works even half as well as the rest of these tools, we're looking at a future where you describe a complex multi-application workflow in plain English and Claude just... does it.

I'll report back once I've put it through its paces.


What This Means for Productivity

The individual features are impressive. But the real story is what happens when they compound.

A typical morning for me now looks like this: I open Claude Code with the full project in context. I talk through the day's architecture decisions using voice mode. I ask Claude to implement a feature and get the entire thing—model, service, controller, tests—in one response. While I review that code, /loop is running the test suite in the background. Meanwhile, Cowork has already pulled together the research I need for an afternoon client proposal.

Six months ago, that morning would have been an entire day. Maybe two.

But here's the thing I keep coming back to: speed without guardrails is its own risk. I wrote about the real cost of moving fast with AI last week—brain fry, broken code, the Amazon outages. All of that is still true. Maybe more true now that the tools are this powerful.

The productivity gains are real. The danger is letting the speed lull you into complacency. I still review every line of AI-generated code. I still take breaks when I feel the cognitive fog setting in. I still run tests, check edge cases, and think critically about architectural decisions rather than rubber-stamping whatever Claude suggests.

The companies and individuals who get the most from these tools won't be the ones who use them fastest. They'll be the ones who use them most deliberately—combining AI speed with human judgment, building guardrails into their workflows, and treating AI as a force multiplier rather than a replacement for thinking.


The Bottom Line

Anthropic shipped a year's worth of progress in a single quarter. The models got smarter and more capable. Claude Code evolved from a coding assistant into a genuine development partner. Cowork brought that same capability to everything outside the terminal. And computer use is pointing toward a future where the boundary between "what AI can help with" and "what AI can do" gets very, very thin.

If you've been on the fence about AI tools, the gap between early adopters and everyone else just got significantly wider. Not because these tools are magic—but because they've reached the point where someone who knows how to use them well has a genuine competitive advantage over someone who doesn't.

This isn't about replacing developers or knowledge workers. It's about amplifying what they can do. And right now, that amplification is accelerating faster than most people realize.

If you're figuring out how AI fits into your workflow—or you're ready to stop experimenting and start building with it seriously—let's talk.


Sources:

  - Anthropic: What's New in Claude 4.6
  - Anthropic: Introducing Claude Sonnet 4.6
  - Anthropic: Dispatch and Computer Use
  - Claude Code Changelog
  - CNBC: Anthropic Updates Claude Cowork
  - 9to5Mac: Claude Can Now Use Your Mac

Joe Baker


Software architect with 30 years of experience helping businesses transform their operations through custom technology solutions.

Connect on LinkedIn

Need Help With Your Project?

Let's discuss how I can help solve your technology challenges.

Schedule a Call
