For the last several weeks, I've been using Claude Code as my primary AI coding tool — and I'm not looking back. After spending months with Cursor's agent sidebar, the switch to a terminal-based workflow felt counterintuitive. But once you understand how to use it properly, Claude Code is genuinely a better experience.
Here's everything I've learned, the tips I wish I had on day one, and why I think Claude Code has a structural advantage over the competition.
Getting Started: The VS Code Extension
The first thing I recommend is installing the Claude Code extension. It works with VS Code, Cursor, and other forks like Windsurf. It doesn't do a lot — but it makes launching Claude Code from your IDE effortless.
I still use Cursor as my editor because Command+K and tab completions are nice to have. But the only time I've touched Cursor's agent sidebar is when Claude was down.
What the extension gives you:
- Quick launch — open Claude Code right in your IDE
- Multiple panes — run parallel sessions working on different parts of your codebase
- Auto context — if you have a file open, it gets pulled into the conversation automatically
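If you're starting from zero, setup looks roughly like this. The npm package name below is the one Anthropic published at the time of writing; check their docs if it has moved. The extension itself comes from your IDE's marketplace, and the CLI is what it runs under the hood:

```shell
# Install the Claude Code CLI globally (package name as published by Anthropic)
npm install -g @anthropic-ai/claude-code

# From your project root, start an interactive session
cd my-project
claude
```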
The Terminal UI Is Better Than You Think
I was hesitant about a terminal interface at first. But Anthropic did a really good job with it. You can tag files easily, choose what to include in context, and use slash commands for common workflows.
A few things that aren't obvious:
- Drag files in by holding Shift while dragging into the terminal — otherwise your IDE opens them in a new tab
- Paste images with Control+V (not Command+V) — this one took me way too long to figure out
- Stop generation with Escape, not Control+C — hitting Control+C twice exits the session entirely
- Jump to previous messages by pressing Escape twice to see a navigable list
- Shift+Enter adds new lines — but you may need to configure this the first time (just ask Claude to set it up)
There's also a Vim mode if you're into that, but I'm not.
Model Selection and Context Management
I use the /model command a lot. My default is Opus — it's noticeably better than Sonnet, and it's no longer painfully slow the way earlier Opus releases were.
If Opus is having issues (it happens), I switch to Sonnet. Most people should probably just use the defaults: Opus until you hit 50% of your usage limits, then Sonnet for cost efficiency.
The other command I use constantly is /clear. My rule is simple: every time you start something new, clear your context. You don't need stale chat history eating up tokens. And you don't want Claude spending time on compaction — which runs another LLM call just to summarize the conversation. Just clear and start fresh.
The up arrow key lets you navigate back to past chats, including sessions from prior days. So you never lose history — you just keep your active context clean.
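Put together, my context-hygiene loop looks something like this. This is a sketch of an interactive session, not literal output — these are slash commands typed inside Claude Code, not shell commands:

```
/model    # pick Opus or Sonnet for this session
/clear    # wipe the conversation before each new task
↑         # up arrow: browse past sessions, including prior days
```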
Skip the Permission Prompts
Here's the most annoying thing about Claude Code out of the box: it asks permission for everything. Can I edit this file? Can I run lint? Can I execute this bash command?
Yes. That's the whole point of an agent.
Every time I open Claude Code, I exit the default session (Control+C, twice) and relaunch it with:
claude --dangerously-skip-permissions
It's similar to what Cursor used to call "YOLO mode." There's a theoretical risk of a rogue command, but I've used it for weeks without a single issue. Your call on the risk tolerance.
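To avoid retyping the flag, I'd put a small wrapper in my shell profile. A minimal sketch, assuming a bash or zsh setup — the function name `cc-yolo` is my own invention, rename it to taste:

```shell
# Wrapper so every session starts with permission prompts disabled.
# Add this to ~/.bashrc or ~/.zshrc. The name "cc-yolo" is arbitrary.
cc-yolo() {
  claude --dangerously-skip-permissions "$@"
}
```

Then `cc-yolo` drops you straight into skip-permissions mode, and any extra arguments still pass through to the `claude` CLI.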
Queue Up Your Work
This might be my favorite feature. In Cursor, I used to draft prompts in a notepad, wait for the agent to finish, paste the next one, and repeat. Half the time I'd come back from Slack to find the agent had been idle for 20 minutes.
With Claude Code, you can queue messages. While Claude is working on one task, just type your next prompt. And the next one. Claude is smart about knowing when to execute them — it won't blindly run queued messages if it needs your feedback first.
This means you can queue up several tasks, go about your day, and come back to a pile of completed work.
Set Up GitHub Code Reviews
One of the coolest slash commands (/install-github-app, unless the name has changed) installs the GitHub app for automated code reviews. Every PR you submit gets reviewed by Claude automatically.
This matters because as your volume of pull requests increases with AI tools, human review bandwidth becomes a bottleneck. And honestly, Claude finds real bugs that humans miss. While humans tend to nitpick naming conventions, Claude catches actual logic errors and security issues.
The key tip: edit the default claude-code-review.yaml prompt. Out of the box, it's too verbose. Change it to something like:
Look for bugs and security issues only. Report on bugs and potential vulnerabilities. Be concise.
That one line change transforms it from noisy to genuinely useful.
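For reference, the edit lands in the workflow file the installer generates — something like the fragment below. The action version and input names here are from memory and may differ from what the installer produces today, so treat this as a sketch and check the generated file:

```yaml
# .github/workflows/claude-code-review.yml (field names approximate;
# verify against the file the GitHub app installer generates for you)
jobs:
  claude-review:
    runs-on: ubuntu-latest
    steps:
      - uses: anthropics/claude-code-action@beta
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          direct_prompt: |
            Look for bugs and security issues only. Report on bugs and
            potential vulnerabilities. Be concise.
```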
Why Claude Code Handles Large Codebases Better
We have a React component that is 18,000 lines long. No other AI agent can reliably update this file. Cursor struggles with patch resolution, has to rewrite files constantly, and chokes on extremely large files.
Claude Code handles it without breaking a sweat. Not even remotely an issue.
It's also exceptionally good at navigating large codebases — searching for patterns, understanding relationships between components, tracing shared state across files. The difference is stark.
The Structural Advantage
Think about why this works so well. Cursor built a general-purpose product that supports multiple models. They trained custom models, manage additional layers of complexity, and don't control the core AI.
Anthropic makes the best coding models and builds Claude Code to use them optimally. When they hit challenges, they improve the model itself. They only support their own models, so they know every detail about training, capabilities, and optimal usage.
This also means better economics. With Cursor, you're paying for Cursor's margin plus the model provider. With Claude Code, you're paying Anthropic directly — maximum access to models like Opus without the middleman markup.
The Max plan at $100/month is, frankly, a steal. Compare it to what even a junior engineer costs per hour, anywhere in the world. The ROI isn't close.
Claude Code isn't just a different interface for AI coding — it's what happens when the company that builds the best models also builds the best tool to use them.