Graphite is joining Cursor (cursor.com)
232 points by fosterfriends 19 hours ago | 233 comments




Imo Cursor did have the first-mover advantage by making the first well-known AI coding agent IDE. But I can't help but think they have no realistic path forward.

As someone who is a huge IDE fan, I vastly prefer the experience of Codex CLI to having that built into my IDE, which I customize for my general purposes. The fact that it's a fork of VSCode (or whatever) means I'll never use it. I wonder if they bet wrong.

But that's just usability and preference. When the SOTA model makers give out tokens for substantially less than public API cost, how in the world is Cursor going to stay competitive? The moat just isn't there (in fact, I would argue it's non-existent).


Yeah, hard disagree on that one. Based on recent surveys, 80-90% of developers globally use IDEs over CLIs for their day-to-day work.

I was pretty worried about Cursor's business until they launched their Composer 1 model, which is fine-tuned to work amazingly well in their IDE. It's significantly faster than using any other model, and it's clearly fine-tuned for the type of work people use Cursor for. They are also clearly charging a premium for it and making a healthy margin on it, but for how fast and good it is, it's totally worth it.

Composer 1, plus now eventually creating an AI-native version of GitHub with Graphite: that's a serious business, with a much clearer picture (to me) of how Cursor gets to serious profitability versus the AI labs.


As the other commenter stated, I don't use CLIs for development. I use VSCode.

I'm very pro-IDE. I've built up an entire collection of VSCode extensions and workflows for programming, building, customizing builds, and debugging embedded systems within VSCode. But I still prefer CLI-based AI (comparing a CLI agent to the IDE version).

> Composer 1

My bet is their model doesn't realistically compare to any of the frontier models. And even if it did, it would become outdated very quickly.

It seems somewhat clear (at least to me) that economies of scale heavily favor AI model development. Spend billions making massive models that are unusable due to cost and speed, then distill their knowledge and fine-tune them for things like tool use. Generalists are better than specialists. You make one big model and produce five models that are SOTA in five different domains. Cursor can't realistically do that.


> My bet is their model doesn't realistically compare to any of the frontier models.

I've been using composer-1 in Cursor for a few weeks and also switching back and forth between it, Gemini Flash 3, Claude Opus 4.5, Claude Sonnet 4.5 and GPT 5.2.

And you're right, it's not comparable. It's about the same quality of code output as the aforementioned models, but about 4x as fast. Which enables a qualitatively different workflow for me: instead of me spending a bunch of time waiting on the model, the model is waiting on me to catch up with its outputs. After using composer-1, it feels painful to switch back to other models.

I work in a larg(ish) enterprise codebase. I spend a lot of time asking it questions about the codebase and then making small incremental changes. So it works very well for my particular workflow.

Other people use CLI and remote agents and that sort of thing and that's not really my workflow so other models might work better for other people.


Does it have some huge context window? Or is it really good at grep?

The Copilot version of this is just fucking terrible at suggesting anything remotely useful about our codebase.

I've had reasonable success just sticking single giant functions into context and asking Sonnet 4.5 targeted questions (is anything in this function modifying X, does this function appear to be doing Y) as a shortcut for reading through the whole thing or scattershot text search.

When I try to give it a whole file I actually hit single-query token limits.

But that's very "opt-in" on my part, and different from how I understand Cursor to work.
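That opt-in workflow (paste one function, ask a narrow question) can be sketched generically. Everything below is a hypothetical illustration: the sample function and `build_targeted_prompt` are made-up names, not any particular tool's API.

```python
# A minimal sketch of the "opt-in context" workflow: embed a single
# function's source in the prompt and ask a targeted question about it,
# rather than letting the tool search the whole codebase.

FUNC_SOURCE = '''\
def apply_discount(order, pct):
    order["total"] = order["total"] * (1 - pct / 100)
    return order
'''

def build_targeted_prompt(source: str, question: str) -> str:
    """Wrap one function's source so the model answers only from it."""
    return (
        "Here is one function from our codebase:\n\n"
        f"```python\n{source}```\n\n"
        f"Answer strictly from the code above: {question}"
    )

prompt = build_targeted_prompt(
    FUNC_SOURCE, "Does anything in this function modify its arguments?"
)
```

The resulting string is what you'd send as the user message to whichever model you're using; keeping the context this small is what avoids the single-query token limits mentioned above.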


It is really good at grep and will make multiple grep calls in parallel.

And when I open it in the parent directory of a bunch of repos in our codebase, it can very quickly trace data flow through a bunch of different services. It will tell me all the files the data goes through.

Its context window is "only" 200k tokens. When it gets near 200k, it compresses the conversation and starts a new one... which mostly works, but sometimes it has a bit of amnesia if you have a really long-running conversation on something.
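That compress-and-restart behavior looks roughly like the sketch below. This is a generic pattern, not Cursor's actual implementation; the token counter and summarizer here are stand-ins (one token per character, a placeholder summary string).

```python
def compact_if_needed(messages, count_tokens, summarize,
                      limit=200_000, keep_recent=4):
    """If the conversation nears the context limit, replace older turns
    with a lossy summary and keep only the most recent turns verbatim.
    The lossiness is where the 'amnesia' on long threads comes from."""
    total = sum(count_tokens(m["content"]) for m in messages)
    if total < limit:
        return messages  # still fits; nothing to do
    older, recent = messages[:-keep_recent], messages[-keep_recent:]
    summary = summarize(older)
    return [{"role": "system",
             "content": f"Summary of earlier conversation: {summary}"}] + recent

# Toy usage: 8 turns of 30k "tokens" each blows past the 200k limit,
# so the first 4 turns get collapsed into a single summary message.
history = [{"role": "user", "content": "x" * 30_000} for _ in range(8)]
compacted = compact_if_needed(
    history,
    count_tokens=len,
    summarize=lambda ms: f"{len(ms)} earlier turns elided",
)
```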


> It is really good at grep and will make multiple grep calls in parallel.

How does that work? Multiple agents grepping simultaneously?


When other models would grep, then read results, then use search, then read results, then read 100 lines from a file, then read results, Composer 1 is trained to grep AND search AND read in one round trip. It may read 15 files, and then make small edits in all 15 files at once.

Presumably if it knows it needs to perform multiple searches in order to gather information (e.g. searching for redundant implementations of an algorithm, plus calls to the codebase's canonical implementation) it should be able to run those searches in parallel grep calls.

I'm trying to figure that one out.

LLMs are inherently single-threaded in how they ingest and produce info. So, as far as I can gather from the description, either it spawns sub-agents, or it has a tool dedicated to the job.
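One common design (a sketch, not necessarily what Cursor does): the model emits a batch of tool calls in a single response, and the agent runtime fans them out concurrently. The model still produces tokens serially; the parallelism lives in the runtime. The in-memory "repo" and `grep` tool below are purely illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

# Fake in-memory "repo", purely for illustration.
FILES = {
    "auth.py":  "def login(user): check_token(user)",
    "api.py":   "def handler(req): login(req.user)",
    "utils.py": "def fmt(x): return str(x)",
}

def grep(pattern: str) -> list[str]:
    """Return the files whose contents match the pattern."""
    return [name for name, text in FILES.items() if pattern in text]

def run_tool_batch(tool_calls, tools):
    """Execute one model turn's worth of tool calls concurrently."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(tools[c["name"]], **c["args"])
                   for c in tool_calls]
        return [f.result() for f in futures]

# The model asks for three greps in one round trip instead of three turns.
batch = [
    {"name": "grep", "args": {"pattern": "login"}},
    {"name": "grep", "args": {"pattern": "check_token"}},
    {"name": "grep", "args": {"pattern": "fmt"}},
]
results = run_tool_batch(batch, {"grep": grep})
```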


Probably something closer to ripgrep, if not actually ripgrep.

composer 1 has been my most used model the past few months. but i only use it to execute plans that i write with the help of larger, more intelligent models like opus 4.5. composer 1 is great at following plan instructions so after some careful time providing the right context and building a plan, it basically never messes up the implementation. sometimes requires a few small tweaks around the edges but overall a fantastic workflow that's so delightfully fast

OP isn't saying to do all of your work in the terminal; they're saying they prefer CLI-based LLM interfaces. You can have your IDE running alongside it just fine, and the CLIs can often present the changes as diffs in the IDEs too.

This is how some folks on my team work. I ran into this when I saved a file manually and the editor ran formatting on it. It turns out the dev who wrote it only codes via CLI, though he reviews the files in an IDE, so he had never manually saved it and run the formatter.

I expect the formatter/linter to be run as part of presubmit and/or committing the code so it doesn't matter how it's edited and saved by the developer. It's strange to hear of a specific IDE being mandated to work around that, and making quick edits with tools like vi unsupported.

This was with Rider which had its own rules for which we hadn't set up an override in .editorconfig.

allow me to introduce you to the lord and savior, https://pre-commit.com/
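For anyone unfamiliar: a minimal `.pre-commit-config.yaml` looks something like the below (the `rev` pin is illustrative; pin whatever version your project actually uses). Once a teammate runs `pre-commit install`, the hooks fire on every commit regardless of which editor made the edits.

```yaml
# .pre-commit-config.yaml -- minimal example
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.8.0        # illustrative; pin your own version
    hooks:
      - id: ruff        # lint
      - id: ruff-format # format
```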


and if your pre-commit runs fast (thank you, ruff), you can set this up as a post-edit hook in claude code

I would not recommend this. A hook that modifies files forces Claude to re-read the file before modifying it again, which can burn through your context window fast.

Part of a healthy codebase is ensuring that anyone can hack on it, regardless of their editor setup. Relying on something in .vscode and just assuming people are using that editor is what leads to this kind of situation.

Bake that into the workflow some other way.


Or just enforce that the team all uses the same tools. What you save by not making things work across different tools outweighs whatever productivity gains individual devs get from using their preferred tools. Many teams I know issue everyone MacBooks and enforce VSCode usage, for example, and that has saved so much time compared to other teams I've seen where devs can choose between macOS or Windows.

> Yeah, hard disagree on that one, based on recent surveys, 80-90% of developers globally use IDEs over CLIs for their day-to-day work.

I have absolutely no horse in this race, but I went from being a 100% Cursor user at the beginning of the year to one that basically uses agents for 90% of my work, and VS Code for the rest of it. The value proposition Cursor gave me couldn't compete with what the basic Max subscription from Anthropic gave me, and VS Code is still a superior experience in the IDE space.

I think, though, that Cursor has all the potential to beat Microsoft at the IDE game if they focus on it. But I would say it's by no means a given that this is the default outcome.


This is me. I was a huge Cursor fan, tried Claude Code, didn't get it, tried it again a year ago, and it finally clicked. A week later I cancelled my Cursor sub, and now I'm using VS Code.

I don't even like using the CLI; in fact I hate it. But I don't use the CLI, Claude does it for me. I use it for everything: my Obsidian vault, working on Home Assistant, editing GSheets, and so much more.


How does company X, dependent on company Y's product, beat company Y at what is essentially just small UI differences? Can Cursor even do anything that VSCode can't right now?

> Can cursor even do anything that vscode can't right now?

Right now VSCode can do things that Cursor cannot, but mostly because of the marketplace. If Cursor invests money into the actual IDE part of the product, I can see them eclipsing Microsoft at the game. They definitely have the momentum. But at least some of the folks I follow on Twitter who were die-hard Cursor users have moved back to VSCode for a variety of reasons over the last few months, so I'm not sure.

Microsoft itself, though, is currently kind of mismanaging the entire product range spanning GitHub, VS Code, and Copilot, so I would not be surprised if Cursor manages to capitalize on this.


Hard disagree.

Composer is extremely dumb compared to sonnet, let alone opus. I see no reason to use it. Yes, it's cheaper, but your time is not free.


> Yeah, hard disagree on that one, based on recent surveys, 80-90% of developers globally use IDEs over CLIs for their day-to-day work.

This is a pretty dumb statistic in a vacuum. It was clearly 100% a few years ago, before CLI-based agents were even possible. The trend is very significant.


CLI-based development predates IDEs by a couple of decades, and we moved away for very good reasons.

Strawman.

Imaginary situation: People are using claude instead of cursor, and you can run claude in a terminal, so this is going back to the days of not using an IDE for the people that do it.

Straw man shake down: Terminal based development like vim and emacs are old and shit, and we moved away from that for a reason, and so (although totally unrelated) this means 'using claude' means going back to using a terminal for everything, which is similarly old and shit.

...but, obviously wrong.

- There's a claude desktop app that isn't done via the terminal.

- Agents use the terminal/powershell to do lots of things, even in cursor because that's the only way to automate some things, eg. running tests.

- Terminal environments like vim and emacs are IDEs. :face-palm:

- It literally makes no difference what interface you copy and paste your text prompt into and then walk off to get a coffee in agent mode.

Anyone who's seriously arguing that IDE integrated LLM chat windows somehow beat command line LLM chat windows is either a) religiously opposed to the terminal window, or b) hasn't actually tried using the tools.

...because, you'll find it makes no difference at all.

Why is Cursor getting involved with Graphite? Because the one place where it makes a difference is reviewing code, where most CLI-based tools (e.g. `git diff`) are just generally inferior to visual, integrated code review tools.

You know what that is?

An acknowledgement that Cursor, in terms of code generation, has nothing that qualifies as the 'special sauce' to use it over any other tool, CLI or not.

So they're investing in another company that actually has a good, meaningful product.


Let's see how that holds in five years' time.

I am betting it won't.

By the way, there are OS APIs; I have yet to write a CLI-driven agent as part of iPaaS deployments, which are basically SaaS IDEs.

vi and Emacs are certainly not IDEs; they are programmers' editors, although with enough effort they may pretend to be one.


Kilocode as an IDE plugin has completely removed Cursor from my toolkit.

Cursor has been both nice and awful. When it works, it has been good. However, for a long time it would freeze on re-focus, and recently an update broke my profile entirely on one machine so it wouldn't even launch anymore.

Kilocode with options of free models has been very nice so far.


It does not matter what 80-90% of developers do. Code development is heavily tail-skewed: focus on the frontier and on the people who are able to output production-level code at a much higher pace than the rest.

No kidding. Arguing "90% of devs do this" makes it that much more likely that it's something that the bottom 90% of devs do.

I use an IDE. It has a command line in it. It also has my keybinds, build flow, editor preferences, and CI integrations. Making something CLI means I can use it from my IDE, and possibly soon with my IDE.

Say more? It's the first time I've seen Composer 1 talked about outside of the Cursor press stuff, with high praise no less.

What are we talking about? Autocomplete or GPT/Claude contender or...? What makes it so great?


GPT contender. There has been talk on the Cursor forums. I think people have largely slept on coding models and stuck with Anthropic, thinking it's the best. Composer fits that niche of extremely fast and smart enough. Sometimes you just want a model that has a near-instant response. The new Gemini preview is overtaking my usage of Composer.

The problem is companies like OpenAI have the upper hand here as they show with the Codex models.

Which is what I was mentioning elsewhere. They build huge models with infinite money and distill them for certain tasks. Cursor doesn't have the funding, nor would it be wise, to try to replicate that.


Why do you think so? Cursor has raised what, north of $3bn? That's enough money to train or tune a model for coding. With their pricing changes, I suspect they are trying to get at least to breakeven as quickly as possible. They have massive incentives, both on the quality of the model for toolchain use and from a cost perspective, to run their own model generation.

I used it extensively for a week and gave it an honest chance. It’s really good for quickly troubleshooting small bugs. It doesn’t come anywhere close to Opus 4.5 though.

Apples and oranges comparison. I don’t think it’s the same and good for you for waiting on Opus to respond. I don’t have the energy.

> Waiting for Opus

Sir, Opus is the fast one of the bunch. Try GPT 5.2 high.


As someone who uses Cursor, I don't understand why anyone would use CLI AI coding tools as opposed to tools integrated into the IDE. There's so much more flexibility and integration; I feel like I would be much less productive otherwise. And I say this as someone who is fluent in vim in the shell.

Now, would I prefer to use VS Code with an extension instead? Yes, in a perfect world. But Cursor makes a better, more cohesive overall product through their vertical integration, and I just made the jump (it's easy to migrate) and can't go back.


I agree. I did most of my work in vim/cli (still often do), but the tight agent integrations in the IDEs are hard to beat. I'm able to see more in cursor (entire diffs), and it shows me all of the terminal output, whereas Claude Code hides things from you by default, by only showing you a few pieces and summaries of what it did. I do prefer to use CC for cli usage though (e.g. using aws cli, Kubernetes, etc). The tab-autocomplete is also excellent.

I also like how Cursor is model-agnostic. I prefer Codex for first drafts (it's more precise and produces less code), Claude when less precision or planning is required, and other, faster models when possible.

Also, one of Cursor's best features is rollback. I know people have some funky ways to do it in CC with git worktrees etc, but it's built into Cursor.


Mobile developer here. I historically am an emacs user so am used to living in a terminal shell. My current setup is a split pane terminal with one half running claude and the other running emacs for light editing and magit. I run one per task, managed by git worktrees, so I have a bunch of these terminals going simultaneously at any given time, with a bunch of fish/tmuxinator automation including custom claude commands. I pop over to Xcode if I need to dig further into something.

I’ve tried picking up VSCode several times over the last 6-7 years but it never sticks for me, probably just preference for the tools I’m already used to.

Xcode’s AI integration has not gone well so far. I like being able to choose the best tool for that, rather than a lower common denominator IDE+LLM combination.


Emacs has a number of packages for AI integration which I haven't tried yet. Have you?

I haven’t, so far I’ve only looked into ones that require terminal emulation and I’ve never loved how that works in emacs.

Now that I can do a lot with 3-6 AI agents running usefully 2-5 min at a time to crank through my plans, the IDE is mostly just taking up valuable space.

For backend/application code, I find it's instead about focusing on the planning experience, managing multiple agents, and reviewing generated artifacts+PRs. File browsers, source viewers, REPLs, etc don't matter here (verbose, too zoomed-in, not reflecting agent activity, etc), or at best, I'll look at occasionally while the agents do their thing.


The Claude Code integration with IntelliJ (or any JetBrains IDE, for that matter) is the perfect combination. That is the perfect world to me. An entire company maintaining a fork of VS Code just doesn't compute to me, but it's how you sell it to shareholders.

It's a rather extreme level of vendor lock-in.

What's an example of that? The only thing I can think of is providing approval per section, but that doesn't really scale well.

And Claude Code run inside VSCode does as well. An extension that gives those extra integration features to a CLI agent is, to me, far better.

Multi-agents.

It is very easy to open multiple terminals, have them side by side, do different things. It is more natural to invoke agents and let them do their things.

I don't understand what you gain by using an "integrated IDE with AI". No snark, really asking. Please share; I'm always eager to learn better workflows.

I use VS Code, open a terminal with VS Code, run `claude` and keep the git diff UI open on the left sidebar, terminal at the bottom.


I think beginner programmers like the fact that they can just open one app and the AI chat box is right next to their editor window. Other than that, I agree that it's pretty silly to maintain a whole IDE just to attach an AI chat box to it.

Now that there's MCP, the community will out-innovate anything a single company can do in terms of bolting on features. It's easy enough to get all the LSP integration and stuff into Claude code.

So it all comes down to model differentiation. Can cursor compete as a foundation model creator? Maybe, but even so, that's going to be a very tough market. Margins will be razor thin at best. It's a commodity.

Anyway, the last thing I would want if I were them is to keep worrying about maintaining this IDE themselves.


One of the biggest values for Cursor is getting all these different models under a single contract. A contract that very importantly covers the necessary data privacy we want as a business. We can be sure that no matter which model a developer chooses to use, we are covered under the clauses that disallow them from retaining and training on our conversations.

I struggle to understand why engineers enjoy using these CLI coding tools so much. I have tried a few times and I simply cannot get into a good workflow. Cursor, Cline, and others feel like the sweet spot for me.

It's really nice that the integrated nature means that, with no extra work on my part, the agent can see exactly what I'm seeing including the active file and linter errors. And all the model interaction is unified. I point them to specific files in the same way, they all have access to the same global rules (including team-global rules), documentation is supplied consistently, and I can seamlessly switch between models in the same conversation.

That has been my experience as well. When I am prompting an agent, it uses my open tabs first. When changes are made, I get green and red lines and can quickly digest the difference. I don't want it going off and building a big feature from start to finish. I want to maybe use an AI to map out a plan, but then go through each logical step of the implementation. I can quickly review changes and, at least for me, have the context of what's happening.

As an older engineer, I prefer CLI experiences to avoid mouse usage. The more I use the mouse, the more I notice repetitive stress injury symptoms

But also, 90% of the time if I'm using an IDE like VSCode, I spend most of my time trying to configure it to behave as much like vim as possible, and so a successful IDE needn't be anything other than vim to me, which already exists on the terminal


I use VS Code mostly without a mouse; same with most of my in-IDE AI usage.

I don't disagree on the workflow; I'm struggling with the same. But CLIs hit an absolute sweet spot of abstraction.

A simple text interface, access to endless tools readily available with a (usually) intuitive syntax, man pages, ...

As a dev in front of it, it's super easy to understand what it's trying to do, and it's as simple as it gets.

I never felt the same in Cursor; it's a lot of new abstractions that don't feel remotely as compounding.


What I don't understand is why people would go all in on one IDE/editor and refuse to make plugins for others. Whether you prefer the CLI or the integrated experience, only offering it on VSCode (and a shitty version of it, as well) is just stupid.

Codeium (now Windsurf) did this, and the plugins all still work with normal Windsurf login. The JetBrains plugin and maybe a few others are even still maintained! They get new models and bugfixes.

(I work at Windsurf but not really intended to be an ad I’m just yapping)


Windsurf is at least 10x better than Cursor in my opinion... I'm honestly still puzzled it doesn't seem to get as much buzz on HN! I had to literally cmd+F to find a reference here and this is the only comment ;-;

Cursor, if I recall, actually started life as a VSCode plugin. But the plugin API didn't allow for the type of integration and experiences they wanted. They hit limits quickly and then decided to make a fork.

Not to mention that VSCode has for years been creating many "experimental" APIs that never get formalized, which become de facto private APIs that only first-party extensions have access to.

Good thing that Copilot is not the dominant tool people use these days, which proves that (in some cases) if your product is good enough, you can still win an unfair competition with Microsoft.


Yeah! Integrate with emacs!

Cursor also has a CLI agent called cursor-agent that is quite good. It can be run in any editor with an integrated terminal.

Also with ACP (https://agentclientprotocol.com) we can all have a native IDE/editor experience integrating agents, kinda like what LSP brought us.

Even Emacs nuts like me can use agents natively from our beloved editor ;) https://xenodium.com/agent-shell-0-25-updates


I think calling OpenAI Codex or Claude Code "CLI" is a bit of a misnomer. It's more of a GUI, just rendered in a terminal. I honestly think a "regular" GUI for OpenAI Codex / Claude Code could be much better.

Did copilot fail so hard that it's now believed that cursor was the first?

Cursor is better suited for enterprise. You get centralized stats and configuration management. Managers are pushed for AI uptake, productivity and quality metrics. Cursor provides them.

> As someone who is a huge IDE fan, I vastly prefer the experience from Codex CLI compared to having that built into my IDE, which I customize for my general purposes

Fascinating.

As a person who *loathes VS Code* and prefers terminal text editors, I find Cursor great!

Maybe because I have zero desire to customize/leverage Cursor/VS Code.

Neat. Cursor can do what it wants with it, and I can just lean into that...


Cursor still has the advantage UX-wise. The biggest reason I avoid using them, though, is their abysmal pricing structure.

I can't randomly throw credits into a pit and say "oh, $2000 spent this month, whatever". For larger businesses I suspect it is even worse.

If they had a $200 subscription with proper unlimited usage (within some limits, obviously), I would have jumped up and down, though.


I don't understand where $2000 comes from.

Relatively heavy Cursor usage in my experience is around 100 USD/month. You can set a limit on on-demand billing.


I work at a company with thousands of engineers and have people hitting $200 limits in just a few days. I think the shortest was around 1.5 work days.

I'm sure out of thousands you can have outliers, either people doing very large refactors or being kind of wasteful.

Something like 90th percentile usage is what I'd call relatively heavy.


I think your definition of heavy is different than lots of other folks I know -- I'm at $1500 / mo and am actively holding back.

I had the same and switched to Claude Code Max, and have kept working on Opus. Now with the lower credit burn of Opus 4.5, I haven't hit a rate limit since. Imo the Claude Code token proposition and the Claude ecosystem far outweigh the benefits of Cursor. This stuff is far too effective to hold back on.

Even $100/mo is a lot when you can get unlimited Sonnet for $20/mo.

> If they had a 200$ subscription with proper unlimited usage (within some limits obviously)

I don't understand the "within some limits" people ask for.

If we use a service to provide value, and it is worth the value it provides, why would we ever accept a limit or cap? We want to stop adding value until next calendar month?

Or if the idea is $200 plus overages, might as well just be usage based.

Imagine a rental car that shut off after 100 km instead of just billing 20 km overage to go 120 km. Would you be thrilled for a day of errands knowing the hard cut off? Or would you want flex? You go 60 km out, 40 km back; now it's not worth paying to drive the last 20? If that's the case, probably should have walked the whole way?

Perhaps not a terrible analogy if some devs think of using these models like hitchhiking. Mostly out for the hike but if I can get an Uber now and then for $200/month, then I can do some errands faster, but still hike most places…

OR, hitchhikers don't think they need that much, they only run an errand a week, in which case, back to usage pricing, don't pay for what you don't use.

- - -

As an example: the primary limiter for our firm's wholesale adoption of Anthropic is their monthly caps. The business accounts have a cap! WTH, Anthropic, firms shouldn't LLM-review code for the last week or two of the month? It can't be relied on.

To be clear, there's no cap on the usage per se; the cap is at the billing. Even if you have it on a corp card that recharges fully constantly, it can tick over at $1500/day for 3 days, then halfway through day 4 it won't recharge again, because you hit the $5k/month limit.

If you write to them and ask (like the error message tells you), they say: move to Enterprise, it's X users. Well, no, we don't have X people? Sure, but Enterprise is X users. What if we buy empty seats? Um...

(The simplest explanation is that $5k/month really burns more than $5k/month of costs so every API call loses them money, and they'd rather shepherd people to occasional subscription usage where they train them to leave it idle most of the time. Fine, offer usage at cost instead of loss, see who bites.)

Meanwhile, we use unlimited plans from their competitors, and have added several other ways to buy Anthropic indirectly. It seems weird that they'd want to earn less per API call, but someone somewhere is meeting their incentives, I guess.


Tab complete is still useful and code review/suggesting changes can be better in a GUI than in a terminal. I think there is still a radically better code review experience that is yet to be found, and it's more likely to come from a new player like Cursor/Graphite than one of the giants.

Also Cursor's dataset of actual user actions in coding and review is pure gold.


God, Cursor's tab complete is woeful in basically all of my usage at work. It's so actively wrong that I turned it off. Its agent flows are far, far more useful to me.

Glad I'm not the only one. I use SQL primarily, and it is awful and distracting.

I pair program literally all day and my coworkers fight against it, haha. It's wild to watch.

I personally use CLI coding agents as well, but many people do prefer tight IDE integration.

I've tried every popular agent IDE, but none of them beat Cursor's UX. Their team thought through many tiny UX details, making the whole experience smooth like butter. I think it's a huge market differentiator.

Also their own composer model is not bad at all.


Which UX details come to mind?

Cursor's cursor-agent can be run interactively from the CLI or headless.

Virtually anybody going all in on AI is exposing themselves to being made redundant.

I don't envy startups in the space; there's no moat, be it Cursor or Lovable, or even larger corps adopting AI. What's the point of Adobe when creating illustrations or editing pics will be embedded (kinda is already) in the behemoths' chat apps?

And please don't tell me that hundreds of founders became millionaires or have great exits or acquihires awaiting them. I'm talking about "build something cool that will last".


I agree. The reason Cursor's "first mover" advantage doesn't matter is that there's fundamentally no business there. I've used 3 IDEs or text editors my whole life, and I've never paid for one. If I wanted, I could use AI to write myself a new text editor. Like you said, there's no moat for any of this shit, and I'm guessing that by 2027 the music will stop.

I also would have thought Cursor would be screwed, but I tried out the Codex VS Code extension and it's still very barebones, while Cursor seems to update like 5 times a day and is constantly coming out with mostly great new features. Plus it is nice to be able to use any model provider.

Hi all! Graphite cofounder Greg here - happy to help answer questions. To preempt one: I’ve been asked a few times so far why we decided to join.

Personally, I work on Graphite for two reasons. 1) I love working with kind, smart, intense teammates. I want to be surrounded by folks who I look up to and who energize me. 2) I want to build bleeding-edge dev tools that move the whole industry forward. I have so much respect for all y’all across the world, and nothing makes me happier than getting to create better tooling for y’all to engineer with. Graphite is very much the combination of these two passions: human collaboration and dev tools.

Joining Cursor accelerates both these goals. I get to work with the same team I love, a new bunch of wonderful people, and get to keep recruiting as fast as possible. I also get to keep shipping amazing code collaboration tooling to the industry - but now with more resourcing and expertise. We get to be more ambitious with our visions and timelines, and pull the future forward.

I wouldn’t do this if I didn’t think the Cursor team weren’t standup people with high character and kindness. I wouldn’t do this if I thought it meant compromising our vision of building a better generation of code collaboration tooling. I wouldn’t do it if I thought it wouldn’t be insanely fun and exciting. But it seems to be all those things, so we’re plunging forward with excitement and open hearts!


Really appreciate the tone of this post. We need more leaders prioritizing words like love, kind, people, heart and character. Good on you.

As someone who loves all the non-AI portions of Graphite (the CLI and the reviewer UI), should I be worried about this acquisition? Or will the CLI and reviewer UI continue to be maintained and improved?

Forgive some ignorance, but we use Graphite at work, and I don't dislike it or anything, but I haven't really been able to see its appeal over just doing a PR within Github, at least if you exclude the AI stuff.

What do you like about the non-AI parts? I mean it's a little convenient to be able to type `gt submit` in order to create the remote branch and the PR in one step, but it doesn't feel like anything that an alias couldn't do.
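For illustration, here's roughly what I mean by the alias version — a hypothetical single-branch approximation using plain git plus the GitHub CLI (`gh`), with none of Graphite's stacking or restacking logic:

```shell
# Hypothetical alias approximating a single-branch `gt submit`:
# push the current branch and open a PR in one step.
git config --global alias.submit '!git push -u origin HEAD && gh pr create --fill'

# Print the alias back to confirm it was stored.
git config --global alias.submit
```

Restacking a chain of dependent branches is the part an alias like this can't replicate, which I gather is where the real value is supposed to be.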


the stacked changes support, for me, was an absolute game changer. the auto rebasing, etc, is -really- nice. i found it especially useful for Gitops type stuff where you have to make lots of little PRs

Is it better than just using jj locally though?

Maintained, improved, and integrated.

With more resources than ever. We're building a whole platform. That's a lot more than just AI.


If my company has an existing Cursor subscription, can we get Graphite for free?

Congrats!! I see this as two great companies joining forces in a crowded space where it is clear the whole is worth more than the sum of their parts. Best of luck on your journey

Makes sense and appreciate the transparency. Have admired what you're building at Graphite and look forward to seeing what you build as part of the Cursor team. Congrats!

congrats.

> I wouldn’t do this if I didn’t think the Cursor team weren’t standup people with high character and kindness

Somebody screenshot this please. We are looking at comedy gold in the next 3 years and there’s no shortage of material.


If these ai companies had 100x dev output, why would you acquire a company? Why not just show screenshots to your agent and get it to implement everything?

Is it market share? Because I don't know who has a bigger user base than Cursor.


The claims are clearly exaggerated, or, as you say, we'd have AI companies pumping out new AI-focused IDEs left and right with crazy features; yet they're all VS Code forks that roughly do the same shit

A VS Code fork with AI, like 10 other competitors doing the same, including Microsoft and Copilot. MCPs, VS Code limitations, IDEs catching up. What do these AI VS Code forks have going for them? Why would I use one?


I am validating and testing these for the company and myself. Each has a personality with quirks and deficiencies. Sometimes the magic sauce is the prompting or at times it is the agentic undercurrent that changes the wave of code.

More specific models with faster tools is the better shovel. We are not there yet.


Heyo, disclosure that I work for graphite, and opinions expressed are my own, etc.

Graphite is a really complicated suite of software with many moving pieces and a couple more levels of abstraction than your typical B2B SaaS.

It would be incredibly challenging for any group of people to build a peer-level Graphite replacement any faster than it took Graphite to build Graphite, no matter what AI assistance you have.


It’s always faster and easier to copy than create (AI or not). There is a lot of thought and effort in doing it first, which the second team (to an extent) can skip.

Much respect for what you have achieved in a short time with Graphite.

A lot of B2B SaaS is about tons of integrations with poorly designed and documented enterprise apps, or security theatre, compliance, fine-grained permissions, a11y, i18n, air-gapped deployments, or useless features to keep the largest customers happy, and so on.

Graphite (as yet) does not have any of these problems - GitHub, Slack and Linear are easy as integrations go, and there are limited enterprise features in Graphite.

Enterprise SaaS is hard to do, just for a different type of complexity


I think trivial GH integrations are easy.

If you've used Graphite as a customer for any reasonable period of time or as part of a bigger enterprise/org and still think our app's particular integration with GH is easy... I think that's more a testament to the work we've done to hide how hard it is :)

Most of the "hard" problems we're solving (which I'm referencing in my original comment) are not visually present in the CLI or web application. It's actually subtle failure-states or unavailability that you would only see if I'm doing my job poorly.

I'm not talking about just our CLI tool or stacking, to clarify. I'm talking about our whole suite, especially the review page and merge queue.

What kind of enterprise SaaS features do you wish you had in Graphite? (We have multiple orgs with 100s-1,000s of engineers using us today!)


The Graphite review UI/UX is at least 3x better than GitHub, and also somehow loads faster. Same with the customizable PR inbox. Love it! Appreciate your work on the platform!

My guess is the purchase captures the 'lessons learned' based upon production use and user feedback.

What I do not understand is that if high-level staff with capacity can produce an 80% replacement, why not assign the required staff to complete the last 20% and bring it to production readiness? Much of that last 20% is unnecessary features and excess outside of the requirements.


> If these ai companies had 100x dev output,

I hate the unrealistic AI claims about 100X output as much as anyone, but to be fair Cursor hasn't been pushing these claims. It's mostly me-too players and LinkedIn superstars pushing the crazy claims because they know triggering people is an easy ticket to more engagement.

The claims I've seen out of the Cursor team have been more subtle and backed by actual research, like their analysis of PR count and acceptance rate: https://cursor.com/blog/productivity

So I don't think Cursor would have ever claimed they could duplicate a SaaS company like Graphite with their tools. I can think of a few other companies who would make that claim while their CEO was on their latest podcast tour, though.


Existing users, distribution, and brand are a big part of acquisition. Graphite is used mainly by larger orgs.

Also, graphite isn't just "screenshots"; it's a pretty complicated product.


Who has claimed to have 100x productivity?

Perhaps the company you are acquiring is the product of 100x dev output?

Why build if you can buy? Money is not a scarce resource in AI economy. Time is.

I'm really used to my Graphite workflow and I can't imagine going without it anymore. An acquisition like this is normally not good news for the product.

I really wanted to give them a try actually, now I definitely won’t.

We currently let Gemini, Cursor Bugbot, and Qodo loose on our PRs, and even Sentry has started reviewing them now.

I usually prefer Gemini, but sometimes other tools catch bugs Gemini doesn't.

As someone who has never heard of Graphite, can anyone share their experience comparing it to any of the tools above?


I've never used Graphite's AI features, so I can't compare!

Graphite predates AI code reviews. Obviously includes it now, but the original selling point was support for stacking PRs.

Their AI review is sub par, but everything else is really good.

Graphite isn’t really about code review IMO, it’s actually incredibly useful even if you just use the GitHub PR UI for the actual review. Graphite, its original product anyway, is about managing stacks of dependent pull requests in a sane way.

Our amazing journey...

Heard on the worry, but I can confirm Graphite isn’t going anywhere. We're doubling down on building the best workflow, now with more resourcing than ever before!

We’ve heard this many times before with other acquisitions so don’t be upset if people are a bit skeptical.

Supermaven said the same thing when they were acquired by Cursor and then EOLed a year later. Honestly, it makes sense to me that Cursor would shut down products it acquires - I just dislike pretending that something else is happening.

we are a 70 person team, bringing in significant revenue through our product, have widespread usage at massive companies like shopify robinhood etc, this is a MUCH MUCH MUCH different story than supermaven (which I used myself and was sad to see go) which was a tiny team with a super-early product when they got acquired.

everyone is staying on to keep making the graphite product great. we're all excited to have these resources behind us!


The biggest challenge is that an acquisition like this makes relying on the acquired product a giant risk for us, so our general policy is to stop relying on something once it gets acquired and try to migrate to something else, because it's just way too disruptive to find out a year later it's getting sunsetted and then have a shorter timeline to migrate off.

It's happened so many times that it's just part of how we do business, unfortunately.


Not your fault at all, but there is a ton of precedent to be skeptical that these pronouncements end up being accurate.

Obviously what you need to say but the reality is that you’re not in control anymore. That’s what an acquisition is.

If Cursor wants to re-allocate resources, or merge Graphite into the editor, or stagnate development and use it as a marketing/lead-gen channel, it will for the business.

Anything said at time of acquisition isn’t trustworthy. Not because people are lying at the time (I don’t think you are!) but because these deals give up leverage and control explicitly. If they only wanted tighter integration, they could fund that via equity investment or staffing engineers (+/- paying Graphite to do the same.) Companies acquire for a reason and it isn’t to let the team + product stay independent


I've seen big companies cleave off tens of millions of profitable products on a whim pretty often....

relax

"trust me bro" (6 months later) "An update about your graphite workspaces"

We're aligning our product catalogue to do what we've found is the best fit for what our customers want. We're also excited to announce a migration plan to our new service, PencilLead, and want to offer existing customers preferential pricing to our Professional Services team to assist with the migration.

We know this isn't what all of you want to hear, and we've spent the last year really evaluating this deeply. At the same time, we're glad you're part of our journey to the future of agentic AI and we think you'll find it's the best alignment and fit for you, too, long-term.


There is literally nothing anyone can say to convince me any product or person is safe during an acquisition. Time and time again it's proven to just not be true. Some manager/product owner/VP/c-suite will eventually have the deciding factor and I trust none of them to actually care about the product they're building or the community that uses it

Doesn't getting acquired mean that you no longer have the authority to confirm that?

> Cursor acquires Supermaven.

> "Will the plugin remain up? Yes!"

> https://supermaven.com/blog/sunsetting-supermaven



> I can confirm Graphite isn’t going anywhere...

sweet summer child.



LOL. Just by bayesian logic this statement makes it more likely that it will go to trash.

How does Graphite compare with other AI code review tools like Qodo?

My team has been using Qodo for a while now and I've found it to be pretty helpful. Every once in a while it finds a serious issue, but the most useful parts in my experience are the features geared towards speeding up my review rather than replacing it. Things like effort labels that are automatically added to the PR, and a generated walkthrough that takes you through all of the changed files.

Would love to see a detailed comparison of the different options. Is there some kind of benchmark for AI code review that compares tools?


I’m working on something in a similar direction and would appreciate feedback from people who’ve built or operated this kind of thing at scale.

The idea is to hook into Bitbucket PR webhooks so that whenever a PR is raised on any repo, Jenkins spins up an isolated job that acts as an automated code reviewer. That job would pull the base branch and the feature branch, compute the diff, and use that as input for an AI-based review step. The prompt would ask the reviewer to behave like a senior engineer or architect, follow common industry review standards, and return structured feedback - explicitly separating must-have issues from nice-to-have improvements.

The output would be generated as markdown and posted back to the PR, either as a comment or some attached artifact, so it’s visible alongside human review. The intent isn’t to replace human reviewers, but to catch obvious issues early and reduce review load.

What I’m unsure about is whether diff-only context is actually sufficient for meaningful reviews, or if this becomes misleading without deeper repo and architectural awareness. I’m also concerned about failure modes - for example, noisy or overconfident comments, review fatigue, or teams starting to trust automated feedback more than they should.

If you’ve tried something like this with Bitbucket/Jenkins, or think this is fundamentally a bad idea, I’d really like to hear why. I’m especially interested in practical lessons.
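To make the design concrete, here's a minimal sketch of the core of that Jenkins step (the function names and prompt wording are mine, and the actual model call and Bitbucket comment-posting are elided):

```python
import subprocess

# Hypothetical review instructions -- the "senior engineer" framing
# described above, asking for must-have vs nice-to-have findings.
REVIEW_INSTRUCTIONS = (
    "You are a senior engineer reviewing a pull request. "
    "Follow common industry review standards and return markdown "
    "feedback with two sections: 'Must fix' and 'Nice to have'."
)

def compute_diff(base: str, head: str) -> str:
    """Diff the feature branch against its merge base with the target."""
    result = subprocess.run(
        ["git", "diff", f"{base}...{head}"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def build_review_prompt(diff: str) -> str:
    """Assemble the diff-only prompt that gets sent to the model."""
    return f"{REVIEW_INSTRUCTIONS}\n\nDiff:\n{diff}"

# The Jenkins job would then POST build_review_prompt(...) to whatever
# model endpoint is configured, and post the markdown reply back to the
# PR via Bitbucket's comments API.
```

This is the diff-only baseline; whether that context is enough is exactly the open question above.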


> What I’m unsure about is whether diff-only context is actually sufficient for meaningful reviews, or if this becomes misleading without deeper repo and architectural awareness.

The results of a diff-only review won't be very good. The good AI reviewers have ways to index your codebase and use tool searches to add more relevant context to the review prompt. Like some of them have definitely flagged legit bugs in review that were not apparent from the diff alone. And that makes a lot of sense because the best human reviewers tend to have a lot of knowledge about the codebase, like "you should use X helper function in Y file that already solves this".
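As a purely illustrative sketch of the difference, a crude version of that "tool search" step might git-grep the repo for other mentions of the touched files, so the prompt includes callers the diff alone would hide (real tools index far more intelligently than this):

```python
import re
import subprocess

def changed_files(diff: str) -> list[str]:
    """Pull the touched paths out of a unified diff."""
    return re.findall(r"^\+\+\+ b/(.+)$", diff, flags=re.MULTILINE)

def related_context(diff: str, max_hits: int = 20) -> str:
    """Crude 'tool search': grep the repo for other places that mention
    each touched file, approximating the extra context a codebase-aware
    reviewer gathers before prompting the model."""
    hits: list[str] = []
    for path in changed_files(diff):
        name = path.rsplit("/", 1)[-1]
        out = subprocess.run(
            ["git", "grep", "-n", name],
            capture_output=True, text=True,
        ).stdout
        hits.extend(out.splitlines()[:max_hits])
    return "\n".join(hits)
```

Even this naive version lets the model see that, say, a changed helper is called from five other files — which is where the "use X helper in Y file" style of comment becomes possible.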


At $DAYJOB, there's an internal version of this, which I think just uses Claude Code (or similar) under the hood on a checked out copy of the PR.

Then it can run `git diff` to get the diff, like you mentioned, but also query surrounding context, build stuff, run random stuff like `bazel query` to identify dependency chains, etc.

They've put a ton of work into tuning it and it shows, the signal-to-noise ratio is excellent. I can't think of a single time it's left a comment on a PR that wasn't a legitimate issue.


Yeah, it’s exceptionally easy to set this up and we have the same thing. Except the team hasn’t had time to fine tune it, and it shows.

I work at Graphite, our reviewer is embedded into a bigger-scope code review workflow that substitutes for the GH PR Page.

You might want to look at existing products in this space (Cursor's Bugbot, Graphite's Reviewer FKA Diamond, Greptile, Coderabbit etc.). If you sign up for graphite and link a test github repo, you can see what the flow feels like for yourself.

There are many 1000s of engineers who already have an AI reviewer in their workflow. It comments as a bot in the same way dependabot would. I can't share practical lessons, but I can share that I find it to be practically pretty useful in my day-to-day experience.


cursor has a reviewer product which works quite well indeed, though I've only used it with github. not sure how they manage context, but it finds issues that the diff causes well outside the diff.

We have coding agents heavily coupled with many aspects of the company's RnD cycle. About 1k devs.

Yes, you definitely need the project's context to get valuable generations. Different teams here have different context and model steering according to their needs. For example, specific aspects of the company's architecture are supplied in the context, while much of the rest (architecture, codebases, internal docs, quarterly goals) is available as RAG.

It can become noisy and create more needless review work. Also, only experts in their field find value in the generations. If a junior relies on it blindly, the result is subpar and doesn't work.


> We’re sunsetting Supermaven after our acquisition one year ago.

> After bringing features of Supermaven to Cursor Tab, we now recommend any existing VS Code users to migrate to Cursor.

Supermaven was acquired by Cursor and sunset after 1 year.


I wonder about this. Graphite is a fantastic tool that I use every day. Cursor was an interesting IDE a year ago that I don't really see much of a use case for anymore. I know they've tried to add other features to diversify their business, and that's where Graphite fits in for them, but is this the best exit for Graphite? It seems like they could have gotten further on their own, instead of becoming a feature that Cursor bought to try to stay in the game.

Startups should check the internet before naming themselves after existing tools, like Graphite for monitoring: https://graphiteapp.org/.

Graphite should check the dictionary before naming itself after a soft, black, lustrous form of carbon.

https://www.merriam-webster.com/dictionary/graphite


Yeah well at least the OG graphite is not software.

Sure, but if the concern is googling "graphite" and finding results that aren't the Graphite you're looking for, it's the same problem. There will always be more results for graphite the mineral than for Graphite the enterprise-ready monitoring tool.

If that's not the concern, then what's the big deal?


Well, time to bite the bullet and learn jujutsu over the holidays

Took me a month to learn jujutsu. Was initially a skeptic but pulled through. Git was always easy to me. Its model somehow just clicks in my brain. So when I first switched to jj, it made a lot of easy things hard due to the lack of staging (which is often part of my workflow). But now I see the value & it really does make hard things easy. My commit history is much cleaner for one.

I was scared to learn but then a coworker taught me the 4 commands I care about (jj new, jj undo, jj edit, jj log) and now I can't imagine going back to plain git.

Obviously the working tree should be a commit like any other! It just makes sense!


It’s not so much biting the bullet as eating the delicious chocolate.

This is the comment that did it for me. I love chocolate. I'm in.

Well, Graphite solves the problem of how to keep your stack of GitHub pull requests in sync while you squash-merge the lowest pull request in the stack, which as far as I know jujutsu does not help with.

jj-spr solves this, although it is still pretty buggy: https://github.com/LucioFranco/jj-spr

There’s also jj-stack. I don’t know how they compare.

This is something GitHub should be investing time in, it’s so frustrating.


And tangled.sh supports JJ stacks out of the box

Woah, that's actually huge. I've been very interested in tangled from an atproto perspective but I had no idea it had that as well. Wonder why that isn't talked about more. Seems like an amazing feature to potentially pull some people away from GitHub/GitLab after they've been asking for years for a better stacking workflow.

I've been going through a lot of different git stacking tools recently and am currently quite liking git-branchless[1] with GitHub and mergify[2] for the merge queue, but it all definitely feels quite rough around the edges without first-party support. Especially when it comes to collaboration.

Jujutsu has also always just seemed a bit daunting to me, but this might be the push I needed to finally give both jj and tangled a proper try and likely move stuff over.

[1] https://github.com/arxanas/git-branchless

[2] https://mergify.com


jj is actually perfectly fit for this and many other problems. In fact, this is actually the default behavior for jj -- if you squash a bunch of jj commits, the bookmarks on top automatically point to the updated rev tree. Then when syncing the dependent branches to git they all rebase automatically.

The problem however lies in who or what does this rebasing in a multi-tenant environment. You sort of need a system that can do it automatically, or one that gives you control over the process. For example, jj can often get tripped up with branch rules in git since you might accidentally move a bookmark that isn't yours to move, so to speak.


Correct (Graphite eng here for context) - we've thought about extending our CLI to allow it to sync jj with GH pull requests to do exactly this. Essentially - similar workflow but use `jj` as the frontend instead of `gt`

Please do this! As a Graphite user, I'd love to be able to switch to jj for my local development, but the disconnect between it and Graphite keeps me away.

Do it. It's absolutely worth it. You can pick it up in 30 minutes and have full proficiency in an afternoon

if my employer has a cursor sub, but not a graphite sub, will this news free me from the demon's shackles from hell of github PRs?

This is my favorite question yet

Love this announcement style. Direct, confident, and not a word longer than it needs to be. Gives major "the work speaks for itself" vibes. OpenAI's comms used to be like this, until it morphed into Apple-like grandiosity that instead comes off as try-hard.

i mentioned a few months ago that it was a shame where graphite was headed re: AI (https://news.ycombinator.com/item?id=44955187). this appears to be the final nail in the original product's coffin

for anyone else looking for a replacement, git spice and jujutsu are both fantastic


Congrats team! Graphite was basically what GitHub should have been but never was

Huge fans of their work @ GitStart!


IMO this is a smart move. A lot of these next-gen dev tools are genuinely great, but the ecosystem is fragmented and the subscriptions add up quickly. If Cursor acquires a few more, like Warp or Linear, they can become a very compelling all-in-one dev platform.

Blacksmith.sh acquisition in 3, 2, 1 ...

Then Cursor takes on GitHub for the control of the repo.


As a Graphite employee, would love this tbh - we love Blacksmith!

Why doesn't Cursor allow selecting a LLM for code completion in the UI anymore and forces "auto" everywhere now? I have a Pro account and noticed this started like a month ago, and the "auto" output was often garbage, not following the instructions.

I don't experience this. I'm still able to choose models with meta+/

> The way developers write code looks different than it did a few years ago.

Looks bad: https://forum.cursor.com/t/font-on-the-website-looks-weird/1...


Confusing. I thought graphite was a TSDB

There are two Graphite companies. The time series DB for metrics (not this) and the stacked diff code review platform (this). Looking at other comments under the post, they seem to have executed a hard AI pivot recently.

Does anyone get actually insightful reviews from these code review tools? From most people I've spoken with, they catch things like code complexity, linting, etc., but nothing that actually relates to business logic, because there's no way they could know about the business logic of the product.

I built an LLM that has access to documentation before doing code reviews and forces devs to update it with each pr.

Needless to say, most see it as an annoyance not a benefit, me included.

It's not like it's useless but... people tend to hate reviewing LLM output, especially on something like docs that requires proper review (nope, an article and a product are different, an order and a delivery note are as well, and those are the most obvious..).

Code can be subpar or even gross but do the job; docs cannot be subpar, as they compound confusion.

I've even built a glossary to make sure the correct terms are used and kinda enforced, but LLMs getting 95% right are less useful than getting 0%, as the remaining 5% tends to be more difficult to spot and compounds inaccuracies over time.

It's difficult, it really is, there's everything involved from behaviour to processes to human psychology to LLM instructing and tuning, those are difficult problems to solve unless your teams have budgets that allow you hiring a functional analyst that could double as a technical and business writer, and these figures are both rare and hard to sell to management. And then an LLM is hardly needed.


I have gotten code reviews from OpenAI's Codex integration that do point out meaningful issues, including across files and using significant context from the rest of the app.

Sometimes they are things I already know but was choosing to ignore for whatever reason. Sometimes it's like "I can see why you think this would be an issue, but actually it's not". But sometimes it's correct and I fix the issue.

I just looked through a couple of PRs to find a concrete example. I found a PR review comment from Codex pointing out a genuine bug where I was not handling a particular code path. I happened to know that no production data would trigger that code path, as we had migrated away from it. It acted as a prompt to remove some dead code.


Graphite is a pull request management interface more than it is an AI code review tool.

I guess this makes sense. GitHub announced they are gonna bring stacked PRs this year, so I think that kinda makes Graphite obsolete.

I've been using git spice (https://abhinav.github.io/git-spice/) for the stacked PRs part of graphite and it's been working pretty well and it's open source and free.

Do you have confidence they can execute?

GitHub have proven the ability to execute very well when they _want_ to. Their product people are top notch.

Given the VP of GitHub recently posted a screenshot of their new stacked-diff concept on X, I'd be amazed if the Graphite folks (whose product adds this function) didn't get wind of it and look for a quick sell.


This seems very very implausible to me as an explanation of what prompted them to get acquired. They wanted to get rich and stop having to fundraise!

This was "announced" in October, and last week they were saying they're shipping to trusted partners to kick the tires before a real release, with posted screenshots.

So, we'll see what it ends up like, but they have apparently already executed.


Wow! Didn’t realize.

These are basically X posts, so it's super easy to miss.

I mean, this is not a new problem to solve. Git itself is perfect for this and Gerrit has been doing it for years. So yeah, I think they can execute.

Woahhhhh I missed this. Got a reference or link? My Googling is failing me. That's my biggest complaint about GitHub, coming from Gerrit for OpenStack.


https://x.com/jaredpalmer/status/1999525369725215106?s=20 There is also concept art (not sure if it is an actual prototype)

Good news. Been using Cursor heavily for over a year now (on the Ultra plan currently). Hope we get access to this as part of our existing subscriptions.

it would be nice if these tools named themselves something other than some random dictionary word, so you could tell what they are

what does graphite have to do with code review?


Just saw a hair cutting salon call itself "Facelook" . Peak marketing if you ask me

Their code reviewer was previously called Diamond

Hi! Another one of the Graphite co-founders here. Alongside Greg, happy to answer any questions :)

Are there thoughts on getting to something more like a "single window dev workflow"? The code editing and reviewing experiences are very disjoint, generally speaking.

My other question is whether stacked PRs are the endpoint of presenting changes or a waypoint to a bigger vision? I can't get past the idea that presenting changes as diffs in filesystem order is suboptimal, rather than as stories of what changed and why. Almost like literate programming.


I really like all these ideas - very similar to what we discuss internally! We need to iterate our way there, but working with Cursor makes some of these visions much more possible

Congrats on the acquisition! I know its early but would existing cursor users get graphite for free/at a discount?

I'm sure we'll do something to simplify pricing and packaging in the future, but not right away

What's the main synergy between Cursor and Graphite that led you to join? Stacked PRs? AI code review? Something else?

Stacked PRs are a really natural fit for vibe coding workflows, it helps turn illegible 10k+ line PRs into manageable chunks that you can review independently. (Not affiliated with Cursor or Graphite)

Congratulations Tomas!

This is annoying, Graphite's core feature of stacked PRs is really good despite all the AI things they've added around their review UI. I doubt we'll want to keep relying on that for very long now.

You can still think of AI as one facet of Graphite's product that you can use or not depending on your work style. Stacked PRs are still a core piece and not going anywhere :)

Except for the undismissable "Pay us more to enable AI reviews" nag that Graphite places above your CI checks and assigned reviewers.

Never heard of graphite before today. Were they built specifically for AI code reviews or it's a pivot / new feature from a company that started with something else?

No, they've been doing "managing stacks of dependent pull requests" for a lot longer than AI code review. I've mostly been a happy user, they simplify a lot of the git pain of continually rebasing and the UI makes stacks much easier to work with than Github's own interface.

They started as a better PR review tool, with the main feature that you can stack PRs that have dependencies on each other. It solves the problem of having PRs merging into other PR branches, or having notes not to merge something until another PR merges. Recently they became an AI code review tool, and just added a bunch of AI tools to the review UI, but you could just ignore it and the core functionality was still great.

stacked prs will only get better from here :) we have an incredible amount of resources to keep improving that part of our product.

check out a range-diff approach using patchsets: https://pr.pico.sh

If my company has an existing Cursor subscription, can we get Graphite for free?

Oh, the code review system. I was worried that my favourite web svg editor got bought up: https://graphite.rs/

congrats Greg, Merrill, and the rest of the folks at Graphite!

- Hunter @ Ellipsis


Thanks Hunter!

wtf is graphite and why do they assume everyone knows

two of my fave products under one roof? ok hell yeah

I thought it was graphite.art and had a figurative heart attack.

Never used the tool being acquired, but just discovered the 2D design tool from your comment, looks super cool!

Such a common name honestly had no idea who it would be

What? They could vibe code this?

[flagged]


@dang Any guidelines on obvious AI slop like this?

Prompt injection?


