Monorepos in JavaScript and TypeScript (robinwieruch.de)
106 points by rwieruch on June 2, 2022 | 78 comments


I had an extraordinarily hard time getting a monorepo set up for a proof of concept for a pretty basic dashboard app. I was using React for the frontend, Node for the backend (TypeScript for both), and GraphQL for the API. I tried both npm and yarn for "workspaces" but neither really made things any easier. What I wanted most of all was a repo where both frontend code and backend code were based on the same single-source-of-truth GraphQL schema, so that everything was strongly typed, avoiding any kind of API inconsistencies.

In the end, I never got things working the way I wanted. The hardest thing was getting TypeScript (and worse, VS Code) to recognize code across modules. The second hardest thing was getting GraphQL schema types into the frontend and backend. There's a huge ecosystem around GraphQL development, especially if you're using JS/TS, but it's still all so clunky. I ended up using a handful of tools to 1) generate the schema from the backend code, 2) serve it via introspection on the backend dev server (thank goodness for hot reloading), and 3) watch said backend schema and generate static type files for the frontend.
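
For reference, that pipeline is roughly what GraphQL Code Generator automates. A minimal sketch of its config, assuming a recent version with TypeScript config support; the URL, globs and output path are placeholders:

  // codegen.ts
  import type { CodegenConfig } from '@graphql-codegen/cli';

  const config: CodegenConfig = {
    schema: 'http://localhost:4000/graphql',  // backend dev server exposing introspection
    documents: ['src/**/*.graphql'],          // frontend operations
    generates: {
      'src/generated/graphql.ts': {
        plugins: ['typescript', 'typescript-operations'],
      },
    },
  };

  export default config;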

Did it work? Sure. Is it elegant and straightforward? Definitely not. What a mess!


My experience was very similar: I built an application using a GraphQL schema file that powered AppSync templated VTL/DynamoDB tables, as well as automatically generating GraphQL operations/types. When I cleaned up the application's template for reuse, I erroneously decided to try out Yarn 3/Lerna/PnP, and then lost an embarrassingly long time trying to make it work.

Each [1] tool [2] seemed [3] to break differently, and needed some form of manual massaging to make it work. That manual massaging meant learning a new configuration file syntax, multiple times.

When it worked, it felt magical. Weaving together an entire web app, powered by a small bit of GraphQL schema [4] means building at a high level of abstraction (hence can be very productive). The only issue is the muddy forest of the NPM ecosystem you're surrounded by: any step towards upgrading your external dependencies seems to cost far more time than promised.

[1] Yarn3/PnP seems to assume all packages define their dependencies correctly. Unfortunately, this isn't true in the real world. I spent hours massaging dependencies in https://github.com/ThomasRooney/reamplify/blob/master/.yarnr...

[2] Getting TypeScript to work cleanly both in an IDE (IntelliJ) and when imported across backend/frontend packages was really cumbersome: I ended up just emitting .gitignored JS files next to their associated TS.

[3] Whispering into the IDE to make it understand GraphQL required learning the .graphqlconfig syntax, and fine-tuning it.

[4] https://github.com/ThomasRooney/reamplify/blob/master/packag...


I recommend taking a look at RedwoodJS


+1 I've been using RWJS for the past 2 years and would recommend giving it a try:

> yarn create redwood-app my-redwood-project --typescript


+1, I’m early in testing but loving it for the reasons you describe.


I had a similar realization that setting up a monorepo with lerna or workspaces was going to be challenging. For a Next.js + custom TypeScript / Node.js backend I decided to just add the backend in its own folder with a child tsconfig.json. This way you can have the backend compile to its own dist folder. No need to fiddle with the complex under-the-hood bundling that's going on in Next.js. It gives you a single git repo, and backend and frontend share the same typings. To make things easier you can set up paths in tsconfig.json and use module-alias in package.json for things like

  import {IUser} from '@shared/interfaces/users/IUser'
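
For anyone trying to replicate this, a rough sketch of the two halves; the `@shared` alias and directory names are just illustrative:

  // tsconfig.json -- compile-time resolution
  {
    "compilerOptions": {
      "baseUrl": ".",
      "paths": { "@shared/*": ["src/shared/*"] }
    }
  }

  // package.json -- runtime resolution of the compiled output, via the module-alias package
  {
    "_moduleAliases": { "@shared": "dist/shared" }
  }

(module-alias also needs `require('module-alias/register')` at the entry point.)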


I'm having PTSD thinking back on doing this with WSDL files back in the day.

Thinking on it, not much different than having a protobuf layer generate data objects across apps. Combined with any client generation logic, we are basically there.

Just realize that keeping these separated in the repo is also a benefit, since that mirrors how they are separated at deployment.


Same thing here. I think the tooling (bundlers included) isn't really designed for this, as demonstrated by the hacks you need to apply to aliases and directories. I hope this is fixed over the next few years, because like you, I got it to work but it is fugly.


Surprised you haven't heard of or used lerna (with yarn workspaces), which is the de facto monorepo setup in the JS world. I use it to maintain a bunch of monorepos and it works pretty flawlessly.


When people say stuff like this, I genuinely wonder if they actually used lerna. Lerna is more or less a wrapper around yarn/npm, and still 100% totally sucks. Nor does it solve ANY of the problems GP mentions -- which I've also had -- recognizing code from other modules, live-reloading schemas, don't even get me started on jerry-rigging like 8 different webpack plugins because you need sass, but you also need an SVG loader, and also a JSX loader but oops now you're running into some weird conflict between them, etc. etc.

It's honestly embarrassing that modern web development (an incredibly "dumb" network protocol at its core) doesn't have an easier onboarding/development process.


Lerna is more akin to a loop than it is to a glorified wrapper around yarn/npm, so I question whether it is you who have actually used lerna.

All of the "linking between projects" is exactly what lerna does with bootstrapping/hoisting, so I'm not sure where you're coming from when you say this doesn't address OP's problem.


> glorified wrapper around yarn/npm

This is exactly what it is, and it's not even shy about it. It literally calls npm or yarn under the hood (specified with the `npmClient` setting).
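
For reference, that delegation is spelled out right in lerna.json; a minimal sketch:

  {
    "npmClient": "yarn",
    "useWorkspaces": true,
    "packages": ["packages/*"],
    "version": "independent"
  }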

> All of the "linking between projects" is exactly what lerna does with bootstrapping/hoisting, so I'm not sure where you're coming from when you say this doesn't address OP's problem.

Yeah, here's a "super simple" lerna example: https://github.com/dan-kez/lerna-webpack-example

It only has about 7 config files before even adding any "real" code. Fun stuff.


It only relies on npm to do the actual pulling of packages and, if you so choose, lets yarn do the hoisting (workspaces), but for everything else I fail to see how it's simply a wrapper...

There's a ton of stuff like versioning, linking, running scripts/commands, etc across the entire project that npm and yarn do not inherently support.

I definitely think lerna has warts but it provides a ton of functionality that you'd have to duct-tape together on your own, which it looks like a lot of people are trying to do, as evidenced by this entire thread...


Lerna is one of those "this works great!" or "this is a huge dumpster fire!" solutions. I've seen more broken lerna monorepos than I can count. Recently I've made quite a little side career of fixing lerna monorepos, replacing it wholesale with pnpm's abilities.


I recently discovered pnpm myself and found it to be a significantly better experience. I've been meaning to add something like changesets to it, too.


Was the de facto?

Lerna needed a new maintainer and is now moving to Nrwl, who develop Nx, which seems to be the better choice for future projects?

https://nx.dev/


They mentioned trying to use Yarn workspaces.

I set up several projects in a monorepo recently using Yarn 3, and while the end result was awesome, the path to success was dark and weedy.

It seemed I was constantly 95% of the way there, but some detail or another wasn't quite right. Yarn has no solution for this, because the problem is the integration of tooling. eslint, typescript, prettier, jest, relay, you name it. They each want to work together and integrate with each other in different ways, using various sources of truth, and so on. Then your IDE needs to adhere to the same protocols, but various extensions have different opinions about default configurations.

Along the way you really need to know your tooling deeply, or you're in for some suffering.

Relatively simple projects with full buy-in of Yarn 3's tooling are probably pretty easy. Once you get relatively complex though, I don't know, I don't think any of this is easy.

It does sound like Nx handles a bunch of this stuff for you, which I'd love to try out. I really like that Yarn isn't particularly opinionated though. Once I had things set up, there was very little to change or that could break. The scaffolding was fairly plain to see, easy to work with, and we owned it. With something like Nx where things reportedly "just work", I worry it would be an unsettling blend of black boxes and black magic making my tooling and code work... Until it didn't, at which point I'd wish I used something less opinionated.


You might mean with pnp as the node linker? Migrating to yarn 3 from yarn 1 wasn't an issue at all for me with `nodeLinker: node_modules`.

I just wrapped up switching over to pnp though, and that '95% of the way there' line you mentioned rings a bell for me.


I did try pnp and in some ways it seemed promising, but it wasn’t the right fit for what we wanted at the time.

You’re right, most of it did work using node_modules as the linker, but I found things like having eslint configure itself from typescript inexplicably blew up and resolving those issues wasn’t trivial.

I mean, it was on and off work for a week or so, migrating a few apps and packages (including a react native app, that was a big part of the motivation and struggle in the migration in the first place), so all in all it was acceptable. And the productivity gain was more than worth it. It just wasn’t as trivial as we expected/hoped.


True 3 years ago. NPM and Turborepo is all you need today.


My experience with monorepos is that they are excellent if, and only if, you have a team dedicated full-time to making sure the repo remains sane.

This is true for any programming language. (Also, successful monorepos can be polyglot.)

If you don't have a dedicated team, you will eventually end up with all the downsides of a monorepo and few of the benefits. Builds will break frequently, impacting many teams. Dependency management will become a nightmare.

Open-source tooling like Bazel will only get you so far -- you will need in-house tooling too, but more than that, you will need an in-house culture of behaving well in a monorepo. Unless most of your engineers have done it before, you will need strong leadership to build that culture.

If you can't dedicate a team to that purpose and really follow through with it, then don't even try having a monorepo. Do a repo per team, or a repo per project.


Can you elaborate on the kind of in-house tooling that is essential to keep monorepos sane?


I've tried numerous times to get monorepos working with React Native/React Native Web but it always winds up falling apart eventually. Yarn workspaces, no workspaces, plain symlinking, relative imports...none of it works consistently, which is a real shame (and certainly RN's problem more than anything else). In a couple I've had to resort to shell functions to rsync built files into node_modules after I make changes.


I’ve had the same problem.


The tamagui base starter repo is a monorepo with TypeScript, React Native and web all working together[0], which you can get running by simply doing:

npx create-tamagui-app@latest

[0] https://github.com/tamagui/starters/tree/main/next-expo-soli...


Premade monorepos are fine, I’m sure, but my experience is trying to fix a poor development situation that might benefit from synthesizing several repos; modifying existing setups is extremely fragile, never straightforward, and therefore hard to justify the time expense to your team.


Thank you, I'll give this a try when I get a sec!


The article barely mentioned the other tools like Lerna and Nx, as if the author didn’t try them. For such a deep dive I would expect the author to check out the tools that would have solved many of the problems one could encounter setting up and using monorepos.

I tried https://nx.dev/ in the past and it helped with many things. You should check it out.


The article doesn't go into how to integrate TypeScript in the monorepo for development - what we do on the Yarn repository is that we point all the package.json `main` fields to the TypeScript sources, then use `publishConfig.main` to update it to the JS artifacts right before being published.

This way, we can use babel-node or ts-node to transparently run our TS files, no transpilation needed.
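
A sketch of what that looks like in a workspace package's package.json (the name and paths here are illustrative):

  {
    "name": "@my-scope/my-package",
    "main": "./sources/index.ts",
    "publishConfig": {
      "main": "./lib/index.js"
    }
  }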


By this are you saying your “app” project is the one that actually transpiles the TS from your shared packages?

Wouldn’t that mean the shared packages' tsconfigs aren’t respected if you changed something like strict options? And also that a clean build of the whole monorepo is going to recompile each shared file for every app project rather than just once?


Yeah, I would be interested to hear from others how they accomplish this. I played around with Nx and it uses TypeScript project references. It is a lot of boilerplate to set up every time you want to create a new app or library. Fortunately, their generators do this with one command.
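
For the curious, the per-package project-reference boilerplate looks roughly like this (package names and paths are illustrative):

  // packages/my-lib/tsconfig.json -- referenced projects must be composite
  {
    "compilerOptions": {
      "composite": true,
      "rootDir": "src",
      "outDir": "dist"
    }
  }

  // apps/my-app/tsconfig.json -- points at the library it depends on
  {
    "compilerOptions": { "outDir": "dist" },
    "references": [{ "path": "../../packages/my-lib" }]
  }

Building the whole graph is then `tsc --build` from the root.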


In the past, I'd put a "typescript:main" field in package.json and configure my bundler to prefer that field. I gave up at some point - probably when I migrated to rollup.

Moving forward, I'm going to use wireit for these things. Pure modules get built with tsc. At the highest level (e.g. where it needs to be embedded in a page), make a bundle with rollup.

wireit has two nice properties: incremental building and file-system-level dependencies. Within a repo, you can depend on ../package-b. However, if you have multiple monorepos that often get used together, you can also depend on ../../other-project/packages/package-b. No matter where in your tree you make changes, wireit knows when to make a new bundle.
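
A sketch of what that looks like in a package.json, going by the wireit docs; script names and paths are illustrative:

  {
    "scripts": {
      "build": "wireit"
    },
    "wireit": {
      "build": {
        "command": "tsc",
        "dependencies": ["../package-b:build"],
        "files": ["src/**/*.ts", "tsconfig.json"],
        "output": ["lib/**"]
      }
    }
  }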

I've just started with wireit (it was only launched recently), but it seems to be a nice solution to wrangling dependencies between related JS libraries.

[1] https://github.com/google/wireit


We use pnpm and meta-updater to keep the TS project references in sync. An example of a project setup that way is pnpm's repo itself.

https://github.com/pnpm/pnpm/blob/main/.meta-updater/src/ind...


The premise of the article and the usage of the word and concept of "monorepo" used by many organizations is misguided.

A monorepo is not just a bunch of projects thrown together into one repo. It's the philosophy of having all code of a bigger organization in one repository.

When smaller teams inside of companies start creating "monorepos" for a handful of projects, they end up with many "monorepos". This approach combines the worst of both worlds: you get the tooling complexity and scalability problems of a monorepo combined with the inability to make atomic changes over multiple projects. You get none of the benefits.

If you are thinking about moving to a monorepo, do it in a way that

- has everything required to build a deployable unit in the repo, no dependencies on other repos

- under no circumstances have code in another repo depend on code inside the monorepo

- avoids ending up with dozens of monorepos


I’ve been able to share packages between React and a Node API but have been pulling my hair out trying to figure out how to share TypeScript code between React and React Native! Does anyone have pointers? I’m about ready to give up.


You know, I'm currently using a monorepo concept for the backend of a project, and I think I'll soon split it up into multiple repos, with a shared base Docker container for the generated code they all share (ORM database models).

The problem with my plan is that I know from experience that getting changes to the shared Docker layer to propagate across your other projects is a pain in the ass as you're developing it, at first. Once you learn the incantations to chant, it's quick.

I just don't want to have to teach my team the incantations. It takes time!

If we get another round of funding and/or I find out I'll need to care about this project for more than a few additional months, I will likely make the switch, but at this point I can probably white knuckle my way into whatever exit we end up with.

And honestly, I think this is how the decision should be made; entirely dependent on a) your team and b) your anticipated future state of your work environment. No right answers here, just more or less complex ones with better or worse tradeoffs.


JavaScript, and TypeScript to an even worse degree, are awful monorepo citizens. Beyond requiring an absolutely ludicrous amount of configuration, they also don't fit into existing build tooling well, e.g. Bazel, Gradle, etc. The tools created to work around this (lerna - now defunct, nx - awful) are entirely specific to the JS/TS problem and aren't sufficiently general to handle polyglot repos. On top of all that, the TypeScript compiler (well, type-checker really, it doesn't compile anything) is horrendously slow and has poor incremental support.

If you are writing a sufficiently large application you are just better off switching to a mature tech stack than dealing with how awful the TS ecosystem is.

Ideally something with a good build system, good incremental compilation, proper test framework integration (so tests only run when input classes/objects are changed) etc.


I'm super curious, what would you recommend for a tech stack that "[has] a good build system, good incremental compilation, proper test framework integration, etc"?


Personally, Kotlin/JVM. Gradle gets the job done, incremental (and concurrent) builds are fast, etc. You can also just use plain Java if you really want a very fast compile time.

Arguably Golang is decent here; it's not my cup of tea for other reasons, but you do have to admit it has a very fast compiler, and the way its package system works makes for good incremental build support.

.NET has always been very good in this regard.

Rust is a bit slower on the compiler end but it's still very good, incremental support is good etc.

All of these languages can also either produce code that works on all targets (JVM/.NET) or cross-compile natively (Go, Rust). This matters for packaging and deployment as half of devs use MacOS but deployment target is usually Linux and increasingly containers. Being able to construct Docker containers directly from artifacts without needing Dockerfiles is a huge win and all of the above languages support that via one tool or another (Bazel, Jib, etc).

Literally any of these blow TS out of the water for DX, performance and tooling. Unless you are chained to TS for browser reasons or isomorphic code requirements it's just not worth it on the server.


Thank you very very much for taking the time to write that out!


Sounds like requirements Nix solves


It helps but Nix just like Bazel can't handle Node.js in a cross-platform way. You still end up reverting to running `npm ci` in a Dockerfile and building the image in imperative style, giving up hermetic and reproducible builds in the process.

The reason this is needed is node_modules contains platform specific native code. You can somewhat speed up cross-arch/OS builds by explicitly running the appropriate node-gyp incantation to rebuild the native components and skip entirely re-generating node_modules but that is still ultimately a bit gross.

Compared to what you can achieve otherwise with Nix dockerTools.buildImage, Bazel docker_rules or Google's jib tool for JVM it's not even close to the same league in speed, reproducibility, convenience or maintenance overhead.


Lerna is defunct? Can you link any blog post about it? I thought the JupyterLab team was using that for their monorepo.

I do not like the prospect of having to use a TS/JS-specific build tool just because I want to use a monorepo, but fortunately I have not yet had to do that, as I have only ever developed extensions and did not fork JupyterLab to change anything core. Lerna and the whole setup rigmarole is definitely something that scares me away from even trying to change things in the core.


https://github.com/lerna/lerna/issues/3121. It looks like maintenance of lerna is being passed to the company behind nx. They promise continued support of lerna, but who knows what the future will bring.


what do you dislike about nx?


I have mixed feelings about monorepos, but FWIW my most recent consulting client found success using Nx (combined with pnpm). It's not perfect, but it seemed like an improvement over lerna or yarn workspaces without being as "alien" as something like rush. /$.02


While I haven't read this article yet, this is my favorite coding blog/resource, especially for React. His post on organizing a React project was really helpful when I was first getting started (1). There's also a bunch of other really useful stuff on Docker, Babel, Testing, Web Components, etc.

(1) https://www.robinwieruch.de/react-folder-structure/


Good read. I recently ran into this problem with Yarn Workspaces and TypeScript. There doesn’t seem to be a way to keep NPM packages in a monorepo if their TypeScript “lib” configurations clash, e.g. a package for shared utilities, another for React Native, and another for the browser.

AFAIK this is due to some limit in TypeScript’s project references. It’s not possible to add a typing lib to a particular package without the checker merging all global namespaces.


Okay, but how do I get CI for it to not be slow as molasses? I'm just a backend/infra eng, but while I know some JS/TS, I'm not an expert; `yarn install` (which is all the CI section seems to cover) is slow.

We have a monorepo with about equal gigantic parts Rust & TypeScript. The Rust part builds ~1100 crates in ~8 minutes, and runs all the unit tests after another ~9 min. (~17 min total.) The yarn install part of our CI takes ~39 minutes. (Not really sure how to do a "# of crates" style comparison.) (And at 39 minutes, this is very much on CI's critical path. The Rust stuff … isn't. The irony of a compiled language beating the pants off one that is only sort-of isn't lost on us.)


Sounds like https://yarnpkg.com/features/zero-installs might be something to help out. Especially if those installs are mostly just filling out a giant set of node modules folders.


Take this advice with a grain of salt cause I'm not an expert.

You can save the contents of the yarn cache directory between builds. Set it with the YARN_CACHE_FOLDER environment variable.

Build an intermediate docker image with all packages built. Later images can be based on that and don't need to rebuild.
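
A common shape of the second idea, as a sketch (base image and commands are illustrative): copy only the manifests first, so Docker's layer cache can skip the install when only source files changed.

  FROM node:16 AS deps
  WORKDIR /app
  COPY package.json yarn.lock ./
  RUN yarn install --frozen-lockfile

  FROM deps AS build
  COPY . .
  RUN yarn build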


Been doing this for a couple years.

We use NPM v8 for packages. Yarn v1 is falling behind and v2 is on a different planet. The other package managers have too much trickery.

Turborepo is fast, simple to use, and very active at the moment.

NPM v8 with Turborepo has eliminated all of our custom monorepo tooling.
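
For anyone curious, most of the Turborepo setup is a single turbo.json describing the task graph; a sketch (task names illustrative):

  {
    "pipeline": {
      "build": {
        "dependsOn": ["^build"],
        "outputs": ["dist/**"]
      },
      "test": {
        "dependsOn": ["build"]
      }
    }
  }

`^build` means "build my workspace dependencies first", and the declared outputs get cached.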


I'm currently evaluating/tinkering with an idea:

I think that a well-structured monorepo might make a move away from all-encompassing full-stack frameworks and plugins toward libraries, tools and special-purpose frameworks easier, getting close to a best-of-both-worlds situation.

The background is deploying up to a dozen or fewer new sites and apps per year as a small team, while continuously maintaining the old ones and wanting to merge in new improvements found in newer development.

---

Rationale:

Big web frameworks and similar give you per-project productivity, structure and batteries included, but come with downsides, such as less control, less flexibility, unneeded complexity and abstraction, legacy cruft and gotchas, generally poor performance that you "fix" with caching where you can, etc. You end up patching over things with overrides and workarounds, stripping functionality that gets in the way, and in some cases bypassing the framework entirely. You are generally more dependent on framework-specific solutions instead of general solutions and often cannot do things from first principles without considering the hairball of integration issues that often comes with it.

On the other end of the spectrum you have the possibility to stitch something together with specialized tools, libraries etc. But you can easily get into the danger of inventing your own framework because you want that common structure. Also, once you have found some good ways to do something, you want to enable straightforward reuse and maintenance. Refactorings, regression testing, performance improvements and possibly new features should benefit everything as easily as possible. All of this is _hard_ if your codebase is spread across many repos, primarily because you don't have a holistic workspace that helps with these structural changes.

---

My hope is that my learnings and experiments with monorepos lead to a way out, so we can make incremental, cross-cutting changes with more confidence and faster feedback loops.

Does anything of the above sound familiar to you? Or do you think I might be looking at this the wrong way?


Ruby had a tiny movement going for a while that was similar to this: Rails delivered your app, and the core domain logic should be confined to a gem that the Rails app depends on.

I don’t think it caught much traction; sadly people seem to prefer doing the easy thing over the simple thing.


Polylith does a good job of solving this: https://polylith.gitbook.io/polylith/


I really like the aim of this, but I have a hard time following the terminology, despite being someone who preaches and teaches functional core/imperative shell.


I’m experimenting with the same. Turbo, projen. Please contact me via the email in my profile. I’m super keen on sorting that out.


One thing I don't understand about monorepos: do people just stay on one platform and check in binaries? Or is it assumed that everything must be compiled and correct? I get that a branch can be compiled, tested and integrated, but how does that work with multiple teams? I mean, at what point does it become week-long builds to make sure everything is accurate and correct?

Or is monorepo more of a "place to put all the code" not necessarily correct or working.

I like multiple repos because it's easier to assume that the main branch of each is "correct and tested and excellent quality".


You'd generally have a CI build with some combination of heavy caching, incremental build, and reverse dependency detection, so that it can rebuild and test everything that's changed in a given PR without taking forever. I've worked in places that had a dedicated team of senior people maintaining the build and virtually a full outsourced team contributing to the open-source build tool to support that.


What is the beneficial distinction between this and composing the monorepo with git submodules? I have been doing that in my codebase after suffering all the regrets of attempting to emulate npm package releases of my modules. Utilizing submodules feels more pure and conveys isolation and separation. It feels like I am not grokking something vital about why monorepos > repo w/ git submodules.


The biggest gain I get from monorepos is being able to branch many inter-dependent projects in one command.

This is especially valuable when you have a chain of dependencies (A -> B -> C ->... Z). With package-based dependencies living in individual repos, this is tedious work - branch A, modify package.json, build, branch B, update A version to new build, give B new version number, build,..., branch Z. Submodules don't particularly help.

In contrast, with a monorepo and dependencies based on your local copies, you just create a new branch of the repo, and now everything is automatically pointing to the branched code. The benefits can be major.
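
Concretely, the difference is whether app packages pin a published version or just point at whatever is on the current branch, e.g. via the workspace protocol supported by pnpm and Yarn 2+ (names illustrative):

  {
    "name": "app-a",
    "dependencies": {
      "my-library": "workspace:*"
    }
  }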

If you also integrate something to generate nice branched version numbers, things living outside the monorepo won't even notice - you branch, then you start a new build for each library, and any external user can get the new version, while you still get the simple branching support.


Great points here. We probably have a non-conformist flow where, even though the package deps are defined in the projects that aren't stand-alone, we define and utilize them in the root projects. This has the potential to footgun us.

I have seen several articles addressing the folder/file structure. The author has done an excellent job and taken it a bit further. Now I have questions to research about deployment. What are best practices for isolating code into Docker images which are to become k8s Deployments and Pods? Does it matter if you stuff it all into a mega image? Are mono-images a thing, and what performance susceptibilities do these yield?


When you fix a bug in a library, is it important for all your apps to get that bugfix right away? Do you want to run all tests and fix any apps you broke immediately? If so, you want a monorepo.

If you're fine with apps using an older version of the library then you don't need this. However, you might want to think about what you'd do about a security bug when many apps are using old versions of a library and it would take a lot of work to bring them up to date.

For the hobbyist development I do now, I'm fine with old apps using out of date libraries. I'm mostly not maintaining the apps and I'm not writing shared libraries.


The biggest advantage I've found is being able to change library code AND all usages of the library in the same commit. Say you've got three projects:

- library
- app-a
- app-b

and you need to make a change to `library` to support some new thing in `app-a`. If you can publish a new version of `library` and update `app-a` to use it, it's really easy to make a change that's incompatible with `app-b`. Even with a comprehensive `library` test suite, it's easy for Hyrum's law to make an appearance and now you've got some unexpected corner case that's depended on.

With a monorepo, you can immediately see `app-b`'s tests failing and either fix the usage, or re-think your `library` changes.


This is true. There is definitely a subsequent commit that is required to push those newly created SHAs.


if submodules work for you, there’s no point in changing how you’re working.

that said, monorepos solve a certain set of real problems for organizations by allowing contributions across code bases from one VCS repository, and by reframing the release/deploy pipeline around singular atomic refs. the impact is multiplicative and happens in downstream tooling and processes. they squash some problems in one place in the org (product development) that bubble up elsewhere (devops/release engineering). depending on the org this is a sensible/cost effective approach.

submodules don’t really accomplish either of those things, unless I’m missing something about your workflow.


Our org is a very small team. The biggest boon is the capability to create cohesive experiences and UI in a small handful of applications that are in isolated codebases by sharing Component libraries. I also use it for improved DX with up-to-date scrubbed and sanitized production data to ensure we aren't breaking things utilizing simplified test data.


Is it literally just your codebase or a shared one?

Generally with submodules we run into issues like each dev having to maintain the setup on their own machine, and unique commands to work with module repositories.

For WAY more writeup: https://codingkilledthecat.wordpress.com/2012/04/28/why-your...

Whole tools were created to get around these issues (git-subtree for ex).


This isn't the only place this pops up - commit hooks are another example, since they live in .git they don't travel with the repo when it is cloned (or maybe when it is updated, don't remember the specifics).

I really feel there needs to be like a git --pull-config option or something to pull all project configs (including submodules, commit hooks, etc). Or perhaps move those things into the top-level folder and allow them to be git-add'd.


You can easily make a `.githooks` directory and have your docs suggest running `git config --local core.hooksPath .githooks`.

This could have security concerns in open source though as users should rightfully be skeptical of running arbitrary code if malicious code is in there (similar to the security issue of your shell hooking into git info in directories and an untarred archive containing `.git` with malicious code in it). However, at least if it's in the docs, it's consensual unlike tools like Husky that inject scripts at runtime as well as creating yet another NPM module to audit.


My codebase. I haven't experienced any pains as in this blog post. The biggest issue is to know about `git submodule init` and `git submodule update` for the one-time commands that have to be run. After that, each submodules' branches can be isolated and your traditional `git pull` will incorporate branch changes. When I cut a release to go to prod, each submodule has its SHA reflected in that.


I imagine it can get annoying to create another git repo every single time you want to share code between some projects. It becomes an even bigger pain when you have to update more than one repo at the same time. A monorepo makes it easier to keep versions in sync and encourages code reuse because it's all in one place.


That does sound annoying. Though, in our reality it is only a half dozen evergreen projects and applications which are incorporated this way. I do imagine it will grow over time.


I have tried every monorepo manager and all but one were super annoying and difficult. Lerna was slow and annoying. Turborepo felt immature when I tried it. Yarn workspaces didn’t play nice with windows for some reason.

I swear by pnpm + rush (from Microsoft). Fast installs. Good caching. Keeps every dependency in sync. Handles the local workspace builds well if you buy into their build tool, heft (which I have).


PNPM is a godsend here. Shared deps, local deps, version pinning and overwriting, etc.


Consider removing as many extra tools as possible to make monorepos actually work.

Eg throw out eslint and prettier.


I was able to find the sweet spot for me with pnpm and turborepo.


It's a fantastic introduction, you should be very proud Robin! And thanks for the shoutout!



