Who forked VSCode this time?
Also: OpenAI might become a benefit corporation, and the death of SB 1047.
Who forked VSCode this time?
Pear AI (YC F24) is an “Open Source AI Code Editor for Fast Development.” What does that mean? It means that they forked VSCode1 and added some AI features—an in-editor chat, inline AI-powered edits, and autocomplete. It’s a crowded space: there’s Void, a different YC-backed startup doing the same thing; Cursor, the viral OpenAI-backed IDE that kicked off this trend; and a million other companies2 who (so far) just have VSCode extensions rather than full-on forks.
Why fork VSCode rather than just sticking with an extension? VSCode extensions allow you to do quite a lot: you can download and run binaries, access a thousand different APIs, and embed webviews that render arbitrary HTML. But ultimately Microsoft doesn’t give you the kind of deep UI customization that lets a complicated feature feel truly native.
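For a sense of where that line sits, here’s a minimal sketch of the webview approach (the command ID and HTML below are made up for illustration; `registerCommand` and `createWebviewPanel` are the standard `vscode` extension API). An extension can pop open a panel and render whatever it likes inside it, but it can’t reach into VSCode’s own chrome to, say, restyle the inline completion widget.

```typescript
// extension.ts: a minimal sketch ("demo.openChat" and the HTML are hypothetical)
import * as vscode from "vscode";

export function activate(context: vscode.ExtensionContext) {
  // Extensions can register commands, spawn binaries, call out to APIs...
  const command = vscode.commands.registerCommand("demo.openChat", () => {
    // ...and open a webview that renders arbitrary HTML, which is how
    // most extension-based AI chat panels are built today.
    const panel = vscode.window.createWebviewPanel(
      "demoChat",               // internal view type
      "AI Chat",                // tab title
      vscode.ViewColumn.Beside, // open next to the active editor
      { enableScripts: true }   // allow JS inside the webview
    );
    panel.webview.html = "<html><body><h3>Chat lives here</h3></body></html>";
  });
  context.subscriptions.push(command);
}
```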
Why not just build a new IDE from scratch? Well, some folks are trying to do that (see Zed, for example), but modern IDEs are elaborate pieces of software (VSCode itself is more than 1 million lines of code); it’s difficult to move as fast as you need to in the AI space if you also have to build a text editor and a syntax highlighter and terminal integration and native support for every language server for every language under the sun.
And the source for VSCode is just sitting right there. Even Google—who built their own build system, version control system, container orchestration system, and code search system—threw in the towel a few years ago when they abandoned their internal IDE (“Cider”) and forked VSCode (“Cider-V”).
If you fork, you also get all the network effects of the VSCode extension ecosystem—users can migrate their extensions, settings, and profiles over on day one. Well, sort of: Microsoft open-sourced the editor (technically called Code OSS) but not the VSCode marketplace, and forks are prohibited from connecting to the Microsoft marketplace. So far Microsoft hasn’t taken any enforcement action, but it very well could in the future.
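Concretely, the marketplace endpoint is just configuration: each build of the editor ships a product.json whose extensionsGallery block tells it which registry to talk to. Open-source builds like VSCodium point it at the vendor-neutral Open VSX registry instead of Microsoft’s; a rough sketch of that stanza, following VSCodium’s setup:

```json
{
  "extensionsGallery": {
    "serviceUrl": "https://open-vsx.org/vscode/gallery",
    "itemUrl": "https://open-vsx.org/vscode/item"
  }
}
```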
The whole situation is reminiscent of the current browser ecosystem: a tech giant (Google/Microsoft) maintains a mostly-open-source software project (Chrome/VSCode), which other companies—some open source (Brave/Pear), some not (Arc/Cursor)—re-skin to fill a niche.
OpenAI’s steady march towards normalcy
OpenAI started out as a non-profit research lab. In 2019, they transitioned to a “capped” for-profit—any investment made in the funding round associated with that announcement could return only up to 100x (so a $10 million check could return at most $1 billion). Soon afterwards they announced an investment from Microsoft, which was also capped (supposedly at a lower multiplier).
Then on Wednesday, Reuters reported that in its latest funding round the capped-profit structure might be jettisoned entirely, with the organization transitioning to a public benefit corporation (the structure Anthropic currently uses).
First, did the profit cap actually matter? 100x returns are pretty ludicrous for a company valued as highly as OpenAI, and if they ever came close to the cap, the board—which is now wholly under Altman’s thumb—could just raise it. Still, it did make for some very complicated diagrams, and you can understand why new investors would push for simplification to de-risk their investment.
Second, how should we view this transition? One take is that these are the machinations of a founder/CEO who has steadily worked to consolidate power and remold his company as a hot tech startup rather than a research lab.
But while it’s true that Sam Altman seems determined to put the company on a path towards commercialization over research, I think this take misunderstands the force that empowered him to pursue that path: the employees. Indeed, OpenAI is nothing without its people. When the non-profit board fired Altman last year, it was the threat of a mass employee exodus that turned the tide back in his favor. And while some employees surely want to do nothing more than capability and safety research, most support their charismatic CEO and understand that if their PPUs are ever going to be worth anything, OpenAI needs new investment, and new investment is easier to raise as a normal tech company than as a Russian nesting doll of companies with different levels of capped profitability.
Ultimately, it’s very difficult to hire thousands of workers, compensate them with equity-like instruments, and then expect them to prioritize a nebulous mission over clear shareholder value. The well-understood incentives of equity compensation have corralled OpenAI back towards normalcy.
SB 1047 is dead
We talked last week about SB 1047, California’s AI regulation bill, which would have set cost and compute thresholds for AI models and required organizations to certify that models trained above those thresholds met certain safety standards. On Sunday, Governor Gavin Newsom announced that he was vetoing the bill. (The California legislature tends not to override vetoes, even when it has the votes to do so.)
Newsom’s reasoning is interesting:
SB 1047 magnified the conversation about threats that could emerge from the deployment of AI. Key to the debate is whether the threshold for regulation should be based on the cost and number of computations needed to develop an AI model, or whether we should evaluate the system's actual risks regardless of these factors.
This global discussion is occurring as the capabilities of AI continue to scale at an impressive pace. At the same time, the strategies and solutions for addressing the risk of catastrophic harm are rapidly evolving. By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology. Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 - at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good.
(emphasis added). In other words, Newsom’s main objection was to the decision to rigorously regulate every model above the thresholds while letting any model below them escape regulation altogether; he would have preferred a bill that regulates all models, but in a more narrowly tailored manner. Of course, no such alternative bill exists, and though he makes clear that he would be willing for California to lead the way on this front, the industry seems certain to escape any real restrictions in the near future.
1 Actually they forked both VSCode AND continue.dev, and they’re being roundly mocked for the latter on Twitter.