├── 0000-template.md ├── README.md ├── accepted ├── .keep ├── 0000-better-why.md ├── 0000-idempotent-install.md ├── 0000-license-check.md ├── 0000-nohoist.md ├── 0000-optional-peer-dependencies.md ├── 0000-plug-an-play.md ├── 0000-publish-config.md ├── 0000-remove-yarn-check.md ├── 0000-switching-registries.md ├── 0000-update-hook-runs.md ├── 0000-workspace-run-commands.md └── 0000-yarn-knit.md ├── implemented ├── 0000-focused-workspaces.md ├── 0000-link-dependency-type.md ├── 0000-offline-mirror-pruning.md ├── 0000-offline-resolution-field.md ├── 0000-rename-yarn-clean.md ├── 0000-selective-versions-resolutions.md ├── 0000-show-updated-packages-only.md ├── 0000-upgrade-command-consistency.md ├── 0000-workspaces-command.md ├── 0000-workspaces-install-phase-1.md ├── 0000-workspaces-link-phase-3.md └── 0000-yarn-create.md └── text └── 0000-upgrade-command-consistency.md /0000-template.md: -------------------------------------------------------------------------------- 1 | - Start Date: (fill me in with today's date, YYYY-MM-DD) 2 | - RFC PR: (leave this empty) 3 | - Yarn Issue: (leave this empty) 4 | 5 | # Summary 6 | 7 | One paragraph explanation of the feature. 8 | 9 | # Motivation 10 | 11 | Why are we doing this? What use cases does it support? What is the expected 12 | outcome? 13 | 14 | Please focus on explaining the motivation so that if this RFC is not accepted, 15 | the motivation could be used to develop alternative solutions. In other words, 16 | enumerate the constraints you are trying to solve without coupling them too 17 | closely to the solution you have in mind. 18 | 19 | # Detailed design 20 | 21 | This is the bulk of the RFC. Explain the design in enough detail for somebody 22 | familiar with Yarn to understand, and for somebody familiar with the 23 | implementation to implement. This should get into specifics and corner-cases, 24 | and include examples of how the feature is used. Any new terminology should be 25 | defined here. 26 | 27 | # How We Teach This 28 | 29 | What names and terminology work best for these concepts and why? How is this 30 | idea best presented? As a continuation of existing npm patterns, existing Yarn 31 | patterns, or as a wholly new one? 32 | 33 | Would the acceptance of this proposal mean the Yarn documentation must be 34 | re-organized or altered? Does it change how Yarn is taught to new users 35 | at any level? 36 | 37 | How should this feature be introduced and taught to existing Yarn users? 38 | 39 | # Drawbacks 40 | 41 | Why should we *not* do this? Please consider the impact on teaching people to 42 | use Yarn, on the integration of this feature with other existing and planned 43 | features, on the impact of churn on existing users. 44 | 45 | There are tradeoffs to choosing any path, please attempt to identify them here. 46 | 47 | # Alternatives 48 | 49 | What other designs have been considered? What is the impact of not doing this? 50 | 51 | # Unresolved questions 52 | 53 | Optional, but suggested for first drafts. What parts of the design are still 54 | TBD? -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Yarn RFCs 2 | 3 | Many changes, including bug fixes and documentation improvements can be 4 | implemented and reviewed via the normal GitHub pull request workflow. 
5 | 6 | Some changes though are "substantial", and we ask that these be put 7 | through a bit of a design process and produce a consensus among the Yarn 8 | core team. 9 | 10 | The "RFC" (request for comments) process is intended to provide a 11 | consistent and controlled path for new features to enter the project. 12 | 13 | [Active RFC List](https://github.com/yarnpkg/rfcs/pulls) 14 | 15 | As a new project, Yarn is still **actively developing** this process, 16 | and it will still change as more features are implemented and the 17 | community settles on specific approaches to feature development. 18 | 19 | ## When to follow this process 20 | 21 | You should consider using this process if you intend to make "substantial" 22 | changes to Yarn or its documentation. Some examples that would benefit 23 | from an RFC are: 24 | 25 | - A new feature that creates new API surface area, and would 26 | require a feature flag if introduced. 27 | - The removal of features that already shipped as part of the release 28 | channel. 29 | - The introduction of new idiomatic usage or conventions, even if they 30 | do not include code changes to Yarn itself. 31 | 32 | The RFC process is a great opportunity to get more eyeballs on your proposal 33 | before it becomes a part of a released version of Yarn. Quite often, even 34 | proposals that seem "obvious" can be significantly improved once a wider 35 | group of interested people have a chance to weigh in. 36 | 37 | The RFC process can also be helpful to encourage discussions about a proposed 38 | feature as it is being designed, and incorporate important constraints into 39 | the design while it's easier to change, before the design has been fully 40 | implemented. 41 | 42 | Some changes do not require an RFC: 43 | 44 | - Rephrasing, reorganizing or refactoring 45 | - Addition or removal of warnings 46 | - Additions that strictly improve objective, numerical quality 47 | criteria (speedup, better browser support) 48 | - Additions only likely to be _noticed by_ other implementors-of-Yarn, 49 | invisible to users-of-Yarn. 50 | 51 | ## What the process is 52 | 53 | In short, to get a major feature added to Yarn, one usually first gets 54 | the RFC merged into the RFC repo as a markdown file. At that point the RFC 55 | is 'active' and may be implemented with the goal of eventual inclusion 56 | into Yarn. 57 | 58 | * Fork the RFC repo http://github.com/yarnpkg/rfcs 59 | * Copy `0000-template.md` to `accepted/0000-my-feature.md` (where 60 | 'my-feature' is descriptive. don't assign an RFC number yet). 61 | * Fill in the RFC. Put care into the details: **RFCs that do not 62 | present convincing motivation, demonstrate understanding of the 63 | impact of the design, or are disingenuous about the drawbacks or 64 | alternatives tend to be poorly-received**. 65 | * Submit a pull request. As a pull request the RFC will receive design 66 | feedback from the larger community, and the author should be prepared 67 | to revise it in response. 68 | * Build consensus and integrate feedback. RFCs that have broad support 69 | are much more likely to make progress than those that don't receive any 70 | comments. 71 | * Eventually, the team will decide whether the RFC is a candidate 72 | for inclusion in Yarn. 73 | * RFCs that are candidates for inclusion in Yarn will enter a "final comment 74 | period" lasting 7 days. The beginning of this period will be signaled with a 75 | comment and tag on the RFC's pull request. 
76 | * An RFC can be modified based upon feedback from the team and community. 77 | Significant modifications may trigger a new final comment period. 78 | * An RFC may be rejected by the team after public discussion has settled 79 | and comments have been made summarizing the rationale for rejection. A member of 80 | the team should then close the RFC's associated pull request. 81 | * An RFC may be accepted at the close of its final comment period. A team 82 | member will merge the RFC's associated pull request, at which point the RFC will 83 | become 'active'. 84 | 85 | ## The RFC life-cycle 86 | 87 | Once an RFC becomes active, then authors may implement it and submit the 88 | feature as a pull request to the Yarn repo. Becoming 'active' is not a rubber 89 | stamp, and in particular still does not mean the feature will ultimately 90 | be merged; it does mean that the core team has agreed to it in principle 91 | and are amenable to merging it. 92 | 93 | Furthermore, the fact that a given RFC has been accepted and is 94 | 'active' implies nothing about what priority is assigned to its 95 | implementation, nor whether anybody is currently working on it. 96 | 97 | Modifications to active RFC's can be done in followup PR's. We strive 98 | to write each RFC in a manner that it will reflect the final design of 99 | the feature; but the nature of the process means that we cannot expect 100 | every merged RFC to actually reflect what the end result will be at 101 | the time of the next major release; therefore we try to keep each RFC 102 | document somewhat in sync with the language feature as planned, 103 | tracking such changes via followup pull requests to the document. 104 | 105 | ## Implementing an RFC 106 | 107 | The author of an RFC is not obligated to implement it. Of course, the 108 | RFC author (like any other developer) is welcome to post an 109 | implementation for review after the RFC has been accepted. 110 | 111 | If you are interested in working on the implementation for an 'active' 112 | RFC, but cannot determine if someone else is already working on it, 113 | feel free to ask (e.g. by leaving a comment on the associated issue). 114 | 115 | ## Reviewing RFC's 116 | 117 | Each week the team will attempt to review some set of open RFC 118 | pull requests. 119 | 120 | We try to make sure that any RFC that we accept is accepted at the 121 | Friday team meeting, and reported in [core team notes]. Every 122 | accepted feature should have a core team champion, who will represent 123 | the feature and its progress. 124 | 125 | **Yarn's RFC process owes its inspiration to the [Rust RFC process] and the [Ember RFC process]** 126 | 127 | [Rust RFC process]: https://github.com/rust-lang/rfcs 128 | [Ember RFC process]: https://github.com/emberjs/rfcs 129 | -------------------------------------------------------------------------------- /accepted/.keep: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/yarnpkg/rfcs/f1d11a23cc87c5e89d909521197e6a5028e05e05/accepted/.keep -------------------------------------------------------------------------------- /accepted/0000-better-why.md: -------------------------------------------------------------------------------- 1 | - Start Date: (fill me in with today's date, 2018-08-01) 2 | - RFC PR: (leave this empty) 3 | - Yarn Issue: (leave this empty) 4 | 5 | # Summary 6 | 7 | The command `yarn why` gives me the direct dependencies of a given package. 
8 | It would be much more useful to show the dependency chains leading to this package.
9 | 
10 | # Motivation
11 | 
12 | Knowing which direct dependencies cause a given package to be installed is
13 | useful in several cases.
14 | 
15 | ## Really understand why a dependency got installed
16 | 
17 | The current version of `yarn why` does not really tell you why a dependency is installed;
18 | you need to run it recursively.
19 | The current version of the command is scoped to the package name. When two versions
20 | of the same package are installed, I would like to understand why a given version is installed.
21 | 
22 | ## Update vulnerable package
23 | 
24 | A vulnerability in a package I use has been discovered, and I need to upgrade this
25 | dependency to a later version. However, this dependency is not a direct dependency listed
26 | in my package.json files. I would like to know ALL the direct dependencies that I
27 | need to update/remove to get rid of this vulnerable package.
28 | 
29 | ## Dedup packages
30 | 
31 | A given package is installed with two different versions. I would like to find out
32 | why and which packages I need to upgrade to remove this duplication. I would like
33 | to do this so I don't ship two versions of the same code to customers.
34 | 
35 | # Detailed design
36 | 
37 | The implementation would be quite simple. We traverse up the dependency chains
38 | leading to the required package. While doing so, we print the packages to the screen.
39 | If the package is a direct dependency (declared in one of our package.json files), then we mention
40 | it. This should work with yarn workspaces (it is even more useful there).
41 | Note that a dependency can be both direct and indirect at the same time; the tool should make this visible.
42 | For each package, both the requested version (e.g. ^1.2.3) and the resolved version (e.g. 1.2.5)
43 | should be displayed. The same resolved version can match several requested versions
44 | (e.g. ^1.2.3 and ^1.2.4 can both resolve to 1.2.5); this fact should be clear when displaying
45 | the dependency chain.
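To make the traversal concrete, here is a rough sketch of the idea in JavaScript (illustrative only, not the actual Yarn implementation: the `lockfile` shape and the `buildReverseEdges`/`printChains` helpers are assumptions of this sketch, it stops at the first direct dependency it reaches, and it does not yet guard against circular dependencies, which is listed as an unresolved question below):

```js
// Rough illustration only, not the actual Yarn code. Assumes the lockfile has
// already been parsed into `lockfile`, a map of "name@range" -> { version, dependencies }.
function buildReverseEdges(lockfile) {
  // For every "parent depends on child" edge, record the reverse: child -> [parents].
  const reverseEdges = new Map();
  for (const [parentKey, entry] of Object.entries(lockfile)) {
    for (const [childName, childRange] of Object.entries(entry.dependencies || {})) {
      const childKey = `${childName}@${childRange}`;
      if (!reverseEdges.has(childKey)) reverseEdges.set(childKey, []);
      reverseEdges.get(childKey).push(parentKey);
    }
  }
  return reverseEdges;
}

// Print every chain from `packageKey` up to a direct dependency declared in one
// of the workspace package.json files (`directDependencies` is a Set of keys).
function printChains(packageKey, reverseEdges, directDependencies, chain = []) {
  const nextChain = [packageKey, ...chain];
  const parents = reverseEdges.get(packageKey) || [];
  if (parents.length === 0 || directDependencies.has(packageKey)) {
    console.log(nextChain.join(' -> '));
    return;
  }
  for (const parent of parents) {
    printChains(parent, reverseEdges, directDependencies, nextChain);
  }
}
```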
46 | 47 | ## Example 48 | 49 | ### Repo structure 50 | 51 | root 52 | package.json 53 | packages 54 | package1 55 | package.json 56 | package2 57 | package.json 58 | 59 | ### command line 60 | 61 | `>>> yarn why pseudomap 1.0.2` 62 | 63 | ### output 64 | 65 | ``` 66 | pseudomap@^1.0.1 (1.0.2) 67 | lru-cache@^3.2.0 (3.2.0) 68 | editorconfig@^0.13.2 (0.13.3) 69 | js-beautify@^1.5.1 (1.7.4) - devDependency of package1 70 | pseudomap@^1.0.2 (1.0.2) 71 | lru-cache@^4.0.1 (4.1.1) 72 | cross-spawn@^3.0.0 (3.0.1) 73 | node-sass@^4.5.3 (4.6.1) - dependency of package1 74 | cross-spawn@^5.0.1 (5.1.0) 75 | execa@^0.7.0 (0.7.0) 76 | os-locale@^2.0.0 (2.1.0) 77 | yargs@11.0.0 (11.0.0) 78 | webpack-dev-server@3.1.4 (3.1.4) - devDependency of package1 and package2 79 | yargs@^10.0.3 (10.0.3) 80 | jest-cli@^22.4.2 (22.4.4) 81 | jest@22.4.2 (22.4.2) - dependency of package1 and devDependency of package2 82 | jest-runtime@^22.4.4 (22.4.4) 83 | jest-cli@^22.4.2 (22.4.4) 84 | jest@22.4.2 (22.4.2) - dependency of package1 and devDependency of package2 85 | jest-runner@^22.4.4 (22.4.4) 86 | jest-cli@^22.4.2 (22.4.4) 87 | jest@22.4.2 (22.4.2) - dependency of package1 and devDependency of package2 88 | execa@^0.8.0 (0.8.0) - dependency of package2 89 | lerna@^2.2.0 (2.5.1) - dependency of the main package.json 90 | lint-staged@^4.2.1 (4.3.0) - dependency of the main package.json 91 | ``` 92 | 93 | # How We Teach This 94 | 95 | The command `yarn why` is already known and I believe this design fits better the 96 | expectations than the current implementation. 97 | 98 | The documentation will of course be updated but will not need a reorganization. 99 | 100 | If the `yarn why` command is used to teach yarn to new users, then the 101 | teaching materials will need to be updated. 102 | 103 | If this design solves some problems better than current tools and the current solution 104 | is documented, then the documentation of this problem solution will need to be updated 105 | to use the new command. 106 | 107 | Advanced users will benefit from being introduced to this feature. 108 | 109 | # Drawbacks 110 | 111 | Some people may have implemented some scripts on top of `yarn why`, these scripts 112 | will be broken. 113 | 114 | The command `yarn why` will become more verbose and it may confuse people that are 115 | used to the current command. 116 | 117 | The tool will need to use both `yarn.lock` and `package.json` files, it will be buggy if 118 | these are out of sync because of manual changes. 119 | 120 | # Alternatives 121 | 122 | This tool could be implemented as a separate package. 123 | 124 | An alternative is to run recursively `yarn why`. 125 | 126 | Every users write their own script for their specific problem. 127 | 128 | # Unresolved questions 129 | 130 | Do we show ALL the dependency chains leading to the desired package? Do we merge certain 131 | paths together when they share most of their packages? If so how will this look like? 132 | 133 | Should we have an option to skip devDependencies? 134 | 135 | What happens if someone manually changes package.json and run the `yarn why` command without 136 | running `yarn` first? 137 | 138 | How does this command works when ran before a merge conflict in yarn.lock is resolved? 139 | 140 | What is the source of truth? The `yarn.lock` and `package.json` files or the actually installed packages? 141 | 142 | How to treat circular dependencies? 
143 | -------------------------------------------------------------------------------- /accepted/0000-idempotent-install.md: -------------------------------------------------------------------------------- 1 | - Start Date: 2016-12-13 2 | - RFC PR: https://github.com/yarnpkg/rfcs/pull/37 3 | - Yarn Issue: https://github.com/yarnpkg/yarn/issues/2241 4 | 5 | # Summary 6 | 7 | `yarn install` should be idempotent. 8 | 9 | Ideally, the result of `yarn install` would not be statefully dependent on the contents of an existing `node_modules`, and would ensure the resulting `node_modules` is identical regardless of whether there is an existing `node_modules` or not. This seems very inline with the spirit of yarn's goal of deterministic builds (same `node_modules` independent of whether `node_modules` exists or the version of node that generated it). 10 | 11 | # Motivation 12 | 13 | Presently, a major headache with `npm` / binary `node_modules` (e.g., `heapdump`) is the need to manually run `npm rebuild` when upgrading node. Communicating this preemptively to developers prior to an upgrade is logistically very manual, leading to "Why is this broken for me?" when errors are not obvious (e.g., `Error: Cannot find module '../build/Debug/addon'`). 14 | 15 | Since `yarn install` is near instant when dependencies are unchanged, having developers run `yarn install` after a `git pull` is no big deal. However, having developers regularly run `yarn install --force` with many dependencies is a non-starter (1s vs 100s). 16 | 17 | # Detailed design 18 | 19 | Assuming both a `package.json` and `yarn.lock` in the project's root... 20 | 21 | *NOTE: A primed / clean yarn cache and/or `yarn-offline-mirror` are not applicable / relevant.* 22 | 23 | **Path A (`node_modules` dne, node@X):** 24 | 25 | - `yarn install` => binaries for node@X 26 | 27 | **Path B: (`node_modules` installed w/ node@X, node@Y)** 28 | 29 | - **Current, non-ideal**: `yarn install` => binaries for node@X 30 | - **Ideal**: `yarn install` => binaries for **node@Y** 31 | 32 | 33 | # How We Teach This 34 | 35 | *What names and terminology work best for these concepts and why?* 36 | 37 | "`node_modules`" for node context, "rebuild" & "build" for npm legacy; "idempotent" for technical accuracy; "install" & "force" for yarn context (i.e., `yarn install`). 38 | 39 | *How is this idea best presented?* 40 | 41 | As a continuation of existing Yarn patterns: "deterministic builds". 42 | 43 | *Would the acceptance of this proposal mean the Yarn documentation must be re-organized or altered?* 44 | 45 | No. 46 | 47 | *Does it change how Yarn is taught to new users at any level?* 48 | 49 | Yes. This will all but eliminate the need to explain why rebuilds are needed after an upgrade. 50 | 51 | *How should this feature be introduced and taught to existing Yarn users?* 52 | 53 | By assuring users: 54 | 55 | > `yarn install` ensures a consistent outcome. 56 | 57 | No need to caveat ^ this with: 58 | 59 | > Unless you upgraded node, then you need rebuild your binary modules with `yarn install --force`, 60 | but don't worry about it reinstalling all your modules, even the non-binary ones. 61 | 62 | # Drawbacks 63 | 64 | Complexity of detection / knowing when to rebuild. 
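As an illustration of what that detection could look like, here is a minimal sketch. It assumes that Yarn records the Node ABI it last built against in an integrity file such as `node_modules/.yarn-integrity`; the `nodeAbi` field and the helper name are assumptions of this sketch, not something Yarn writes today.

```js
// Hypothetical sketch: decide whether native modules need a rebuild by comparing
// the Node ABI recorded at the previous install with the ABI of the current runtime.
const fs = require('fs');
const path = require('path');

function needsRebuild(projectDir) {
  const integrityPath = path.join(projectDir, 'node_modules', '.yarn-integrity');
  if (!fs.existsSync(integrityPath)) {
    // No previous install recorded: a fresh install (and build) is required anyway.
    return true;
  }
  const integrity = JSON.parse(fs.readFileSync(integrityPath, 'utf8'));
  const previousAbi = integrity.nodeAbi;       // assumed field, written at install time
  const currentAbi = process.versions.modules; // Node's ABI version, e.g. "48" for node@6
  return previousAbi !== currentAbi;
}

// `yarn install` could then rebuild the binary modules (the equivalent of `--force`
// scoped to native packages) only when needsRebuild(process.cwd()) returns true.
```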
65 | 
66 | # Alternatives
67 | 
68 | Use a flag: `yarn install --check-rebuild` and/or support it in `.yarnrc` (`install-check-rebuild true`)
69 | 
--------------------------------------------------------------------------------
/accepted/0000-license-check.md:
--------------------------------------------------------------------------------
1 | - Start Date: 2016-10-12
2 | - RFC PR: https://github.com/yarnpkg/rfcs/pull/7
3 | - Yarn Issue: https://github.com/yarnpkg/yarn/issues/904
4 | 
5 | # Summary
6 | 
7 | Original issue: https://github.com/yarnpkg/yarn/issues/904
8 | 
9 | From tweet: https://twitter.com/rstacruz/status/786052262841896960
10 | 
11 | Integrate a license checker as a yarn command.
12 | 
13 | > I have published a standalone package that does this: https://github.com/behance/license-to-fail
14 | 
15 | Yarn has `yarn licenses ls`. It would also be useful to know if certain packages
16 | don't satisfy your license (or other similar) requirements rather than just a list of them.
17 | 
18 | ```bash
19 | $ yarn licenses check
20 | yarn licenses v0.14.0
21 | Disallowed Licenses
22 | ├─ a-pkg@1.0.3
23 | │ ├─ License: not-allowed-license
24 | │ └─ URL: git+https://github.com/pkg/here.git
25 | # error
26 | ```
27 | 
28 | # Motivation
29 | 
30 | Most apps/projects have certain assumptions about the kinds of dependencies they bring in.
31 | Even if you check each new dependency, the dependencies of those dependencies may have issues.
32 | There isn't an easy manual way to do this outside of checking the license of all dependencies.
33 | 
34 | It's most likely that there aren't issues, but having a command to do so would allow running it on CI
35 | just like a linter. Issues can be caught automatically and with confidence.
36 | 
37 | This solves the problem of checking the license of a new dependency brought in through a new PR
38 | or of an existing package updating its license (whether it's a direct or indirect dependency).
39 | 
40 | The outcome is that users could run the command to find out which packages are disallowed.
41 | 
42 | # Detailed design
43 | 
44 | The basic idea is straightforward: given an array of packages and their licenses, match that against an array of
45 | licenses that are disallowed. If any match, error and print them out.
46 | 
47 | It would be useful to have a way to make a list of exceptions for when you want to whitelist a proprietary package.
48 | 
49 | In reality you will probably need to make a lot of exceptions for packages, since not all projects have a license
50 | or the program that checks what license a project uses doesn't always work.
51 | 
52 | ## Exceptions/What to do with packages that have an "unknown" license
53 | 
54 | - the license checker isn't able to figure out the license
55 | - the license is in the readme or some other form (not in package.json)
56 | - the license is correctly updated in master on git but not published (not maintained)
57 | - a future version of the package has a license but it's an indirect dependency
58 | 
59 | The way license-to-fail does it is to let you pass in a config file.
60 | 
61 | ```bash
62 | $ ./node_modules/.bin/license-to-fail ./path-to-config.js
63 | ```
64 | 
65 | The config file is just an object with a list of `allowedPackages` and a list of `allowedLicenses`.
66 | 67 | ```js 68 | module.exports = { 69 | allowedPackages: [ 70 | { 71 | "name": "allowed-package-name-here", 72 | "extraFieldsForDocumentation": "hello!", // optional 73 | "date": "date added", // optional 74 | "reason": "reason for allowing" // optional 75 | } 76 | ], 77 | allowedLicenses: [ 78 | "MIT", 79 | "Apache", 80 | "ISC", 81 | "WTF" 82 | ], 83 | warnOnUnknown: true 84 | }; 85 | ``` 86 | 87 | # Alternatives 88 | 89 | Just use a separate package rather than making it built-in like https://github.com/behance/license-to-fail already is (and others). 90 | 91 | # Unresolved questions 92 | 93 | How do users specify the allowed licenses and exceptions (differences for apps/libraries)? 94 | 95 | - use package.json config 96 | - infer from the package's own license which licenses would be acceptable 97 | - use an yarnrc config 98 | - use cli arguments for options 99 | 100 | Should it warn or error with unknown licenses? 101 | -------------------------------------------------------------------------------- /accepted/0000-nohoist.md: -------------------------------------------------------------------------------- 1 | - Start Date: 2017-11-04 2 | - RFC PR: (leave this empty) 3 | - Yarn Issue: (leave this empty) 4 | 5 | # Summary 6 | 7 | Adding a mechanism "nohoist" to allow monorepo projects, who utilize yarn's [workspaces](https://yarnpkg.com/en/docs/workspaces), to opt-out the default hoisting behavior. 8 | 9 | # Motivation 10 | 11 | See issue [#3882](https://github.com/yarnpkg/yarn/issues/3882) for why developers are asking for `nohoist` features. To summarize: 12 | 13 | ## use cases: 14 | 1. a monorepo project with at least one workspace depends on react-native. 15 | * this is the use case for nohoisting specific direct-dependency from a workspace. 16 | 1. tools like grunt assumes directory structure. 17 | * this could require nohoist both shallow and deep dependency trees 18 | 1. some projects may prefer no-hoisting for whatever reason ;-) 19 | 20 | ## nohoist address a common issue in hoisted monorepo project 21 | 22 | `nohoist` is not a new concept, [lerna](https://github.com/lerna/lerna#--nohoist-glob) has provided a similar feature, an indicator that this is a common issue many monorepo projects have to deal with. 23 | 24 | # Detailed design 25 | 26 | ## Summary: what does nohoist really do 27 | At the end of the day, nohoist is just a mechanism to determine a different 'hoist' point. By default, **all** packages in the workspaces are hoisted to the **root**, except for version conflicts (similar to [npm resolution](https://docs.npmjs.com/how-npm-works/npm3)). Nohoist provided a mechanism to custom this algorithm and determine: 28 | 1. what packages should be the exception of the rule ? 29 | 30 | This is majority of the work for the rest of the RFC, we will describe how we can use the glob pattern to match the package in the dependency graph, like a virtual file system. 31 | 32 | 2. where to place them? 33 | There are general 2 options: 34 | - option-1: under the module that reference it. 35 | - option-2: under the workspace that reference it. 36 | 37 | I have tried both implementations and realized why most package manager moving to the hoisting model... By leaving the packages under the referencing module (option-1), I quickly bumped into the scalability issue (ran out of ram during `yarn install`) for larger project due to high number of (duplicated) modules. Not to mention the complexity in resolving the [modules-in-between](#module-in-between) issue... 
38 | 
39 | After examining how a react-native project worked outside of the workspace environment (both yarn and npm did hoist everything to \$cwd/node\_modules), I realized that developers are already trained to go **down** the node\_modules tree, but not everyone has followed the [node module resolution algorithm](https://nodejs.org/api/modules.html#modules_loading_from_node_modules_folders) to go **up** from \$cwd. It is generally not a problem, unless you are in a monorepo project where modules can reside above the workspace referencing them...
40 | 
41 | This drove me to settle on option-2: simply hoist the nohoist modules to the workspace's node\_modules (instead of the project's root), where they can still share the benefit of hoisting while greatly reducing the complexity in implementation and education. This essentially makes the workspace/node_modules look like a stand-alone project for the given nohoist modules. Therefore, if a package can work in a stand-alone project, it should be able to work under workspaces.
42 | 
43 | ## principle
44 | - hoist as much as we can
45 | - nohoist is an exception/workaround, therefore favor
46 | - explicitness over convenience
47 | - small scale over large scale
48 | 
49 | These principles lead us to the following implementation: _nohoist only applies to the packages explicitly specified; all of their dependencies, unless they also match the nohoist list, will be hoisted by default._
50 | 
51 | ## divide the problem
52 | 
53 | nohoist is meant to address hoisting side-effects, so we break the problem down by scale, from hoisting all to hoisting none:
54 | ```
55 | hoist-all(1) --> nohoist-direct-dep(2) --> nohoist-deep-dep(3) --> hoist-none(4)
56 | ```
57 | 1. hoist-all: hoisting everything, the current behavior of yarn workspaces.
58 | 2. nohoist-direct-dep: opt out of hoisting for direct dependencies only, use-case #1
59 | 3. nohoist-deep-dep: opt out of hoisting even for dependencies' dependencies... maybe use-case #2
60 | 4. hoist-none: a.k.a. nohoist-all, opt out of hoisting altogether: use-case #3.
61 | 
62 | yarn currently supports (1); this RFC proposes a solution to handle (2), (3) and (4).
63 | 
64 | 
65 | ## configuration
66 | We need a place to specify "nohoist" packages for the given workspace(s). We can expand the existing [workspaces](https://yarnpkg.com/en/docs/workspaces) config:
67 | ```
68 | export type WorkspacesConfig = {
69 |   packages?: Array<string>,
70 |   nohoist?: Array<string>,
71 | };
72 | 
73 | export type Manifest = {
74 |   ...
75 |   workspaces?: Array<string> | WorkspacesConfig
76 | }
77 | ```
78 | note: the config will be backward compatible.
79 | 
80 | ### nohoist list inheritance
81 | Considering the dependencies as a virtual tree, the nohoist rule goes down the branch from where it is specified. For example, if nohoist is specified in the root package.json, all workspaces will inherit the nohoist list; likewise, all the dependencies of a workspace will inherit it. However, if nohoist is specified in workspace-1, the neighboring workspace-2 will not be impacted.
82 | 
83 | ### nohoist list matching
84 | Like the current workspaces.packages configuration, we will use the same glob pattern matching to locate nohoist packages.
[minimatch](https://github.com/isaacs/minimatch) (the matching library used in yarn) supports many glob patterns, the most common ones for nohoist are probably the simple '*' and '**' patterns: 85 | 86 | For example the following config will basically turn off nohoist for the whole monorepo project: 87 | ``` 88 | // root packages.json 89 | workspaces = { 90 | packages: ['workspace-a', 'workspace-b'], 91 | nohoist: ['**'] 92 | } 93 | ``` 94 | 95 | But why '\*\*' ([globstar](http://www.linuxjournal.com/content/globstar-new-bash-globbing-option)) and not '\*'? By using globstar we can support both shallow ('\*' matches only 1 level) and deep matches (globstar matches all levels down). 96 | 97 | Another example, the `nohoist: ['workspace-1/**']` will only disable workspace-1's hoisting. We get workspace specific config through glob pattern, nice... 98 | 99 | To make react-native workspace work, I added `nohoist: ['**/react-native/**']` in the root package.json before installing react-native to tell yarn don't hoist react-native and all of its dependency, wherever they are. 100 | 101 | Note: right now I am leaning toward file path like format, this works intuitively with glob pattern matching. However the rest of the system seem to use '#' as the separator, as seen in HoistManifest.key. Please see [here](#dependency-tree-representation) for more discussion. 102 | 103 | ### workspace specific nohoist 104 | Workspace, or any package in that matter, can also specify its own nohoist rule just like in root. This might be useful for 3rd-party package to specify nohoist packages to assist monorepo project adoption. The consumers of these packages will automatically pick up the nohoist rule without adding its own nohoist. 105 | 106 | for example, if react-native's package.json has `nohoist: ['**']`, the workspace that referencing react-native will not need any special config to get the same effect as `**/react-native/**`. 107 | 108 | ### How about links and workspaces dependencies? 109 | by not leaving the packages at where it is referenced, we can now handle links and workspace dependency just like any package. For example, if workspace-2 depends on workspace-1 and would need to have workspace-1 traversable from workspace-2/node_modules: `nohoist: ['workspace-1']`. The tree will be as you expected: 110 | ``` 111 | _project_/node_modules 112 | |- workspace-2/node_modules 113 | |- workspace-2 (symlink) 114 | |- workspace-1 115 | ``` 116 | and if workspace-1 has hoisted everything to the _project_ root, workspace-2 might still not able to access workspace-1's dependencies, it could then 117 | ``` 118 | specify: `nohoist: ['workspace-1/**']`: 119 | _project_/node_modules 120 | |- workspace-2/node_modules 121 | |- workspace-2 (symlink) 122 | |- all workspace-1's dependent packages (copy) 123 | |- workspace-1 124 | ``` 125 | This allow both workspaces to have their own hoisting rules. Same thing should apply to linked modules. 126 | 127 | ## nohoist safe guard 128 | nohoist is still an experimental feature, there are 2 things to limit its scope: 129 | 1. user can turn nohoist off by specifying `workspaces-nohoist-experimental = false`. 130 | 1. nohoist can only be used in private package for now until we learn more about how it is used. yarn will look for explicit `private: true` in the project.json where nohoist is specified. 131 | 132 | ## nohoist logic outline 133 | 134 | all of the nohoist logic (minus the config) is implemented in `src/package-hoist.js`: 135 | 136 | 1. 
added the following property in HoistManifest: 137 | - `nohoistList` to record nohoist pattern. 138 | - `isNohoist` to record nohoist state. 139 | - `originalParentPath` to record the dependent tree before hoisting. Nohoist rule is evaluated against this, not the after-hoisting path (key). 140 | 1. during `PackageHoister._seed()`, populate the new properties above by examining the parent and its own package.json. 141 | 1. during the `PackageHoister.getNewParts()`, the logic only need to be added on deciding what is the highest hoisting point. By default, the highest hoisting point is always root(0), unless the package is marked as nohoist, in which case the highest hoisting point is its workspace (1). 142 | 143 | notes: 144 | - doesn't matter where the packages end up, they will always have access to the dependency tree before hoisting and the nohoist rules. This will come handy when we need to investigate with `yarn why` 145 | 146 | _The actual code change is surprisingly minimal... it took me longer to write this proposal than making the code change_ ;-) 147 | 148 | ## Incompatible assumptions 149 | During the testing, I encountered a few hidden assumptions that need to be changed: 150 | - `add.js`: added package always go in with the root, regardless \$cwd. 151 | This is fine in everything-hoist-to-root world, but not true with nohoist package. When calling `yarn add` from the workspace to add a nohoist package, even with nohoist specified in the package.json, the new package will still be placed under root because it is being 'seeded' explicitly with root. I had to modify the add.js to make it behave like a regular install, i.e no manual seeding, just add the new package into the in-memory Manifest pool and let _install.js_ do its normal thing. 152 | 153 | - `why.js`: only reported the first encountered package, assumed there is only 1 in the project... 154 | This seems like a bug, a package can indeed exist in multiple places today, due to version conflict. One would think _why.js_ should report all of them. With nohoist, this is even more critical. 155 | 156 | Also the current _why.js_ reported dependency based on the "post hoist" structure, which might not tell the full tree at once. Consider a/b/c, all of them has been hoisted to the root. if you ask why 'c', it will say 'b', and you have to ask 'b' again to get 'a'. Giving we have the originalPath now, we can report 'c' is from 'a/b' in one go. I fixed these issues to help me debug and hopefully will help others too. 157 | 158 | As one can see, some assumptions like above, maybe fine before, now will need to be adapted after nohoist. The list is far from complete. I have only addressed those stood in my way of getting the test case 1 (react-native workspace) working. Figure it is better to get the basic features out there earlier than later, so we can be sure the core implementation is sound before piling up more changes... 159 | 160 | # How We Teach This 161 | 162 | as mentioned earlier, `nohoist` is not a new concept, monorepo projects that use and understand how package hoisting work should be easy to pick up the concept of not-hosting. We could introduce a few concrete nohoist examples (such as the following) in the existing [workspaces](https://yarnpkg.com/en/docs/workspaces) document as a starting point, expand as needed. 
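For readers who want the rule stated precisely, a small sketch can complement the examples below. It is illustrative only: it mirrors the hoist-point decision described in the logic outline above, and the function names, data shapes and sample paths are assumptions of this sketch rather than the actual `package-hoister.js` code.

```js
// Illustrative sketch of the nohoist decision, not the real package-hoister.js.
const minimatch = require('minimatch');

// `originalParentPath` is the package's dependency path before hoisting,
// e.g. 'workspace-a/react-native/metro' (hypothetical), and `nohoistList` is the
// inherited list of glob patterns, e.g. ['**/react-native/**'].
function isNohoist(originalParentPath, nohoistList) {
  return nohoistList.some(pattern => minimatch(originalParentPath, pattern));
}

// The highest point a package may be hoisted to: the project root by default,
// or its owning workspace when the package matched a nohoist pattern.
function highestHoistPoint(pkg) {
  return pkg.isNohoist ? 1 /* the workspace */ : 0 /* the root */;
}
```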
163 | 
164 | ## examples
165 | a monorepo project with 2 workspaces: 'workspace-a' and 'workspace-b':
166 | ```
167 | // root package.json
168 | workspaces = {
169 |   packages: ['workspace-a', 'workspace-b']
170 | }
171 | ```
172 | ### example 1: disable hoisting for react-native in workspace-a
173 | ```
174 | // workspace-a package.json
175 | workspaces = {
176 |   nohoist: ['react-native']
177 | }
178 | ```
179 | the file structure will end up like this:
180 | ```
181 | monorepo/node_modules/
182 |   |- (... all workspace-b's dependencies)
183 |   |- (... all workspace-a's dependencies, except react-native)
184 |   |- (... all react-native's dependencies, such as core-js)
185 |   |- workspace-b
186 |   |- workspace-a/node_modules/
187 |     |- react-native/node_modules/ (empty)
188 | 
189 | ```
190 | ### example 2: disable hoisting for react-native and its dependencies in workspace-a:
191 | ```
192 | // workspace-a package.json
193 | workspaces = {
194 |   nohoist: ['react-native', 'react-native/**']
195 | }
196 | ```
197 | the file structure will look like this:
198 | ```
199 | monorepo/node_modules/
200 |   |- (... all workspace-b's dependencies)
201 |   |- (... all workspace-a's dependencies, except react-native and its dependencies)
202 |   |- workspace-b
203 |   |- workspace-a/node_modules/
204 |     |- react-native/node_modules/
205 |       |- (... all react-native's dependencies)
206 | 
207 | ```
208 | 
209 | ### example 3: disable hoisting for workspace-a
210 | ```
211 | // workspace-a package.json
212 | workspaces = {
213 |   nohoist: ['**']
214 | }
215 | ```
216 | the file structure will look like this:
217 | 
218 | ```
219 | monorepo/node_modules/
220 |   |- (... all workspace-b's dependencies)
221 |   |- workspace-b
222 |   |- workspace-a/node_modules/
223 |     |- (... all workspace-a's dependencies...)
224 | 
225 | ```
226 | ### example 4: disable hoisting for the whole monorepo project
227 | ```
228 | // root package.json
229 | workspaces = {
230 |   packages: ['workspace-a', 'workspace-b'],
231 |   nohoist: ['**']
232 | }
233 | ```
234 | the file structure will look like this:
235 | ```
236 | monorepo/node_modules/
237 |   |- workspace-b/node_modules/
238 |     |- (... all workspace-b's dependencies...)
239 |   |- workspace-a/node_modules/
240 |     |- (... all workspace-a's dependencies...)
241 | 
242 | ```
243 | 
244 | # Drawbacks
245 | 
246 | * Why should we *not* do this?
247 | 
248 | - We could argue that the current implementation is simple, consistent and correct, and that we therefore should not introduce nohoist to muddy the water.
249 | - The violating packages, such as react-native, should be corrected to use the package manager properly instead of assuming module locations.
250 | - If developers prefer no-hoist, they could just not use "workspaces" or write their own custom scripts to deal with project-specific needs.
251 | 
252 | * Tradeoffs
253 | 
254 | This is a classic simplicity vs. usability tradeoff. Yarn can be simple and beautiful, but if developers find it hard to use in practice, they will not use it.
255 | 
256 | # Alternatives
257 | 
258 | * What other designs have been considered?
259 | 
260 | - There are a few [high level options](https://github.com/yarnpkg/yarn/issues/3882#issuecomment-338478889) discussed in [#3882](https://github.com/yarnpkg/yarn/issues/3882).
261 | - Some implementation alternatives are discussed inline above.
262 | - The rest is documented in the following section.
263 | 
264 | # Unresolved questions
265 | 
266 | ## dependency tree representation
267 | In order to utilize the glob paradigm, I used a file-path representation for originalPath (= the before-hoist dependency tree). Currently yarn uses '#' as the separator to describe the after-hoist tree, such as in key, parentKeys etc. I am not sure if there is a specific reason for '#'. It is probably better to be consistent, at least for external reporting purposes.
268 | 
269 | Here are the reasons I chose the path-like syntax:
270 | - minimatch applies globstar only to path-like strings: 'a#b#c' doesn't match '**', while 'a/b/c' does.
271 | - visually, I find 'a/b/c' a lot easier to read than 'a#b#c'
272 | - '/' is a pretty common way to specify tree structure in school and publications.
273 | 
274 | None of them is hard to overcome; we just need to decide on a format and then adapt the rest.
275 | 
276 | [update 12/7/17]
277 | per our discussion, standardizing the path separator should be discussed in a separate RFC, so we revert to the current separator '#' during display for consistency.
278 | 
279 | ## is allowing nohoist in public packages bad?
280 | 
281 | As mentioned earlier, nohoist only impacts yarn hoisting; for other package managers it's a no-op and should be safe to ignore... maybe I am missing some problematic use cases, but here is what I've got:
282 | 
283 | Let's say a user project A has a dependency on a public package B, which has a dependency on 'react-native' that needs to be excluded from hoisting, otherwise it won't work.
284 | 
285 | ||B has workspaces.nohoist=react-native|B didn't have workspaces.nohoist|
286 | |--|--|--|
287 | |A doesn't use yarn or workspaces|no impact|no impact|
288 | |A uses yarn workspaces|react-native will be excluded from hoisting without A doing anything|A needs to discover this and then put both B and react-native in its own workspaces.nohoist|
289 | |A uses yarn workspaces but excluded all packages from hoisting|no impact, none will be hoisted|no impact, none will be hoisted|
290 | |A uses yarn workspaces but does not want to exclude react-native from hoisting (why?)|conflict! but the public package B is right and react-native should be excluded.|no conflict, but B will most likely fail|
291 | 
292 | In short, having unnecessary nohoist packages might cause inefficiency, but missing a necessary nohoist will lead to compile/execution errors.
293 | 
294 | [update 11/13/17]
295 | After further discussion, I agree that having an option guarding against nohoist from public packages is a safer approach, so users can turn off nohoist along with the workspaces feature. It is not clear to me whether we need a new flag to handle the nohoist opt-out separately from workspaces. A nohoist without workspaces is meaningless because nohoist is essentially part of the workspace hoisting scheme. If we do have use cases to override a specific public package's nohoist, a generic flag might not be sufficient... Therefore, I suggest we hold off on any complex addition until we see a concrete use case; meanwhile, just use the existing `workspaces-experimental` flag to safeguard workspaces as a whole.
296 | 
297 | [update 12/7/17]
298 | per our discussion, in order to start safely, we will limit the nohoist scope to private packages only. Given that `workspaces-experimental` will be retired soon, we will add a new `workspaces-nohoist-experimental` flag for users to opt out of nohoist if needed.
299 | 
300 | ## unit tests failed on mac
301 | This is not related to the nohoist feature specifically, but it made submitting the PR much more difficult and time consuming.
Many async integration tests failed on my laptop (mac) for even a fresh clone. Is this a known issue? any suggestion or workaround? 302 | 303 | 304 | -------------------------------------------------------------------------------- /accepted/0000-optional-peer-dependencies.md: -------------------------------------------------------------------------------- 1 | - Start Date: 30 Oct 18 2 | - RFC PR: https://github.com/yarnpkg/rfcs/pull/105 3 | - Yarn Issue: n/a 4 | - Champion: Maël Nison (@arcanis - [Twitter](https://twitter.com/arcanis)) 5 | 6 | # Optional Peer Dependencies 7 | 8 | ## 1. Which Problem Does It Solve 9 | 10 | > I'm a library author. I want my package to provide an optional integration with some other library, but I don't want to have to ship it. 11 | 12 | The current solution to the problem described above is peer dependencies. A package will list the optional dependency as peer dependency, meaning that it's to the package consumer to also provide the library that should be used. 13 | 14 | While it currently already works, it causes semantical issues: are peer dependencies meant to be optional or not? Should a missing peer dependency cause a warning? An error? Nothing at all, and expect the runtime to check that they are available? 15 | 16 | ## 2. Detailed Design 17 | 18 | Yarn will introduce a new field, `peerDependenciesMeta`. This field will be a dictionary that will allow adding metadata to the peer dependencies. As a first step, we'll only add the `optional` metadata field. 19 | 20 | ```json 21 | { 22 | "peerDependencies": { 23 | "lodash": "*" 24 | }, 25 | "peerDependenciesMeta": { 26 | "lodash": { 27 | "optional": true 28 | } 29 | } 30 | } 31 | ``` 32 | 33 | ## 3. How We Teach This 34 | 35 | - Being stricly backward compatible it doesn't require us to push changes onto our users, so the teaching should be simple and spread on the long time. 36 | 37 | ## 4. Drawbacks 38 | 39 | - It requires adoption from other package managers, otherwise there's a risk to fracture the ecosystem if they decide to go with (for example) naming the field `peerDependenciesSettings`. 40 | 41 | ## 5. Alternatives 42 | 43 | - We could introduce an `optionalPeerDependencies` key. 44 | 45 | - It would be one more key to teach users. 46 | 47 | - This wouldn't be backward compatible - most package managers would have no idea what to do with such a key and would ignore it entirely, breaking the tree because of the hoisting. 48 | 49 | - Semantic isn't clear if a dependency is both in `peerDependencies` and `optionalPeerDependencies`. 50 | 51 | - It would have a completely different meaning from the already existing `optionalDependencies` field. 52 | 53 | - We could add an `optional:` protocol 54 | 55 | - It's been noted it could potentially cause issues on older package managers. 56 | 57 | - We could overhaul how the dependencies are defined, and fix them once and for all. 58 | 59 | - This sounds a huge undertaking for a problem relatively minor at the moment. 60 | 61 | - It seems unlikely we can reach a consensus in a reasonable timeframe. 62 | -------------------------------------------------------------------------------- /accepted/0000-plug-an-play.md: -------------------------------------------------------------------------------- 1 | - Start Date: 13 Sep 18 2 | - RFC PR: https://github.com/yarnpkg/rfcs/pull/101 3 | - Yarn Issue: https://github.com/yarnpkg/yarn/issues/6382 4 | - Champion: Maël Nison (@arcanis - [Twitter](https://twitter.com/arcanis)) 5 | 6 | # Plug'n'Play Whitepaper 7 | 8 | ## 1. 
Summary 9 | 10 | We propose in this RFC a new alternative and entirely optional way to resolve dependencies installed on the disk, in order to solve issues caused by the incomplete knowledge Node has regarding the dependency tree. We also detail the actual implementation we went with, describing the rational behind the design choice we made. 11 | 12 | I'll keep it short in this summary since the document is already large, but here are some highlights: 13 | 14 | * Installs ran using Plug'n'Play are up to 70% faster than regular ones ([sample app](https://github.com/yarnpkg/pnp-sample-app)) 15 | * Starting from this PR, Yarn will now be on the path to make yarn install a no-op on CI 16 | * Yarn will now be able to tell you precisely when you forgot to list packages in your dependencies 17 | * Your applications will boot faster through a hybrid approach of static resolutions 18 | 19 | This is but a high-level description of some of the benefits unlocked by Plug'n'Play, I encourage you to give a look at the document for more information about the specific design choices - and in case anything is missing, please ask and I'll do my best to explain them more in depth! 20 | 21 | ## 2. Motivation 22 | 23 | When the first Javascript package managers appeared, they had to use the tools available to them. As a consequence, they were implemented on top of the Node resolution algorithm, copying the packages in such a layout that Node would be able to find them without external tool. This allowed the ecosystem to thrive, which interestingly revealed some of the scalability flaws in this approach: 24 | 25 | * The Node resolution algorithm isn't aware of what packages are, and as a result doesn't know anything about dependencies either. When a module makes a require call, Node simply traverses the filesystem until it finds something that matches the request, and uses it. 26 | * Installations take time, and the time required to copy files from the package managers' caches to the `node_modules` folders is one of the main bottleneck as it is heavily I/O bound, which cannot be easily optimized. Copy-on-Write filesystems can alleviate this issue to an extent but come with their own set of drawbacks, such as a lack of global support. 27 | 28 | We at Facebook suffered from these issues, and decided to find a way to solve them cleanly while staying compatible with the current ecosystem. The solution we found has been put to the test inside our infrastructure for a few weeks now, and we now feel confident enough about its merits that we want to share with the community at large, and continue iterating on it in the open. 29 | 30 | ## 3. Detailed Design 31 | 32 | In its current state, running `yarn install` does the following under the hood - assuming a cold setup: 33 | 34 | 1. Dependency ranges are resolved into pinned versions 35 | 2. Packages for each version are fetched from their sources and stored in the offline mirror 36 | 3. The offline mirror is unpacked into the cache 37 | 4. The cache is copied into the `node_modules` folders 38 | 39 | Our solution aims to trim the fourth step from the equation. 
Instead of copying each and every file from the cache to the `node_modules` folder (which can take a lot of time, amplified by the sheer number of files), we will instead generate a single file that will contain static resolutions tables that list: 40 | 41 | * What packages are available in the dependency tree 42 | * How they are linked together 43 | * Where they are located on the disk 44 | 45 | A special resolver is then able to leverage the knowledge extracted from those tables to guide Node and help it figure out the location where each package has been installed (in our case, the Yarn cache). Since all packages from the dependency tree can be statically found inside the tables, the whole filesystem traversal required by the `node_modules` resolution can be skipped - bringing a free performance win at runtime by decreasing the amount of filesystem I/O needed to boot Node applications. 46 | 47 | > **Why a special resolver rather than symlinks/hardlinks?** 48 | > 49 | > While quite useful, we believe both symlinks and hardlinks are solving the wrong problem. Symlinks require tooling support and have ambiguous semantics (Node has no way to know whether a symlink has been created through yarn link, through a workspace, or through a special installation). Hardlinks have surprising behaviors (they unexpectedly corrupt your cache if you change them), don't play well with cross-volumes installs, and also require tooling support (for example when trying to persist the `node_modules` folders). Both of them require heavy I/O at install time, don't decrease the I/O at runtime, and more generally try to workaround the Node resolution rather than change it. 50 | 51 | In the end, while some package managers had some success with these strategies (shoutout to pnpm in particular which was one of the first package managers experimenting to solve those problems!), we envision an alternative approach that's working for us at large scale and that we hope will work for others as well. 52 | 53 | ### Generated api 54 | 55 | A point needs to be made regarding what Plug'n'Play is. While the previous section described the way Yarn would now generate “static resolution tables”, those tables are not what make Plug'n'Play what it is - instead, they are an implementation detail amongst others. Instead, Plug'n'Play is first and foremost an API meant to abstract the resolution process and provide strict behavioral guarantees. In consequence, the resolution tables are useless without the resolver itself, since it's the resolver that is standardized, not the tables. 56 | 57 | For this reason, we decided to generate a single Javascript file (called `.pnp.js`, for “Plug'n'Play”) that would contain both the static resolution tables and the resolver itself. This has multiple advantages: 58 | 59 | * The first one is encapsulation. By keeping the static resolution tables private to the resolver, we also prevent third-party scripts from relying on them, thus making it possible for us to change their underlying implementation as we see fit. A real-life case study lies within the `findPackageLocator` function. Its current implementation is somewhat naive and would be much improved through the use of a [trie](https://en.wikipedia.org/wiki/Trie). Since the static resolution tables are hidden, we can easily replace their format to match this new data structure, which wouldn't necessarily be possible if we had publicized the underlying data structures. 
60 | * Project dependencies are traditionally versioned through the `dependencies` field, and installed using `yarn install`. But assuming that the resolver would be kept within Yarn, how would we make sure that the generated file is always compatible with the current Yarn version? In fact, how would we make sure that Yarn itself was available? As detailed in a later section, one of the future improvements we've planned is to make Yarn entirely optional on CI. In this context, where would the “runtime” required for such a resolver be located? The easiest answer is to keep the static resolution tables tied to the resolver itself, removing the risk of unexpected incompatibilities. 61 | 62 | As an executable Javascript file, the generated `.pnp.js` file presents the following features: 63 | 64 | * It doesn't have any dependency other than the built-in Node modules - it doesn't even depend on Yarn itself. 65 | * It exposes an API that can be used to programmatically query resolutions without having to parse the file or to load the full resolver. As mentioned, the implementation itself is entirely free as long as both the interface specified in Annex B and the contract listed in Annex C are fulfilled. This should leave enough room for experimentation while providing a consistent API that matches the current expectations of most packages. 66 | * It's an executable script that can act as a resolution daemon that other processes can communicate with through standard input / standard output using the protocol listed in Annex D (based on JSON). This makes it suitable for integration with third-party tools not written in Javascript - this allowed us for example [to introduce the `module.resolver` option into Flow](https://github.com/facebook/flow/commit/7b6738bdba7b6a4a5844c079d9dd1ddcf815effb). Flow is written in OCaml and cannot use the Javascript API, but thanks to this small bridge, it was only a matter of parsing a few lines of JSON. 67 | * If loaded as a preloaded module (`node -r ./.pnp.js`), it will inject itself into the Node environment and will transparently cause the `require` function to load files directly from the right location. It will also expose the Plug'n'Play API through the `pnpapi` name, that can be required from any package of the dependency tree. The current implementation overrides `Module._load`, but Node 10 recently released a new API that we plan to use to register into the resolver. 68 | * While not guaranteed strictly speaking (should it?), the `.pnp.js` file implemented by Yarn is stable through the use of relative paths. It means that the file can be moved from a computer to another and will still work as expected (provided that the cache folder on the new environment is both hot and located in the same path relative to the `.pnp.js` file). Through smart uses of the `.yarnrc` argument options, it becomes possible to store both the cache and the `.pnp.js` files together, allowing to skip the installs altogether and making Yarn optional. 69 | 70 | ### Workspaces & links 71 | 72 | Yarn supports adding persistent symlinks to your projects through two means: the first one, which we recommend, is to use the workspaces feature in order to create automatic links between your packages. The second one, which is a bit older, is to use the `link:` protocol and force Yarn to create a symlink to a location, regardless what's located there. 
73 | 74 | Both of those will continue working with Plug'n'Play, but will be slightly repurposed: when operating under a Plug'n'Play environment, the links between the packages will be kept inside the `.pnp.js` file, and only there. It means that we won't be creating actual symlinks anymore - we don't need them, since their main goal was to hook into the require process! In case a user needs to access something “through the symlink”, they just have to use `require.resolve`, which will query the Plug'n'Play resolution and return the right path. 75 | 76 | A third way of adding symlinks also exists in traditional installs: `yarn link`. While it would be possible to implement something similar, we decided not to rush it since it comes with additional issues. In case you need to use yarn link: 77 | 78 | * If needed for debugging a project, we recommend to use `yarn unplug`, which will copy a package from your cache into the `.pnp/unplugged` folder. This “unplugged” copy is entirely free for you to alter as you see fit. Once you're done, just run `yarn unplug --purge-all` and the modifications you've made will be forgotten. Note that **this feature is not meant to be used as a permanent trick.** 79 | * If you were using `yarn link` as part of your actual install process, we recommend you to either use the `link:` protocol (but be aware of the issues caused by split dependency trees, cf Section 5.B) or, much better, to port your project to use workspaces (which don't suffer from the issues described in Section 5.B since Yarn is then aware of the whole dependency tree). 80 | 81 | ### Virtual packages 82 | 83 | > **Packages instantiations** 84 | > 85 | > As a reminder, a module instance is the representation in memory of this module (it's usually the `module.exports` value exported by this value). Node caches each require call so that if a same module is required multiple times, it will only be instantiated once. The way it does this is by comparing the [realpath](https://nodejs.org/api/fs.html#fs_fs_realpath_path_options_callback) of the files. Unfortunately, this heuristic doesn't work when a same package is located in multiple different locations - which can happen when a package cannot be hoisted, for example. In this instance, Node will create one separate cache entry for each time the package has been duplicated, increasing the time needed for the application to start and messing with `instanceof` checks. 86 | 87 | 88 | Plug'n'Play guarantees that each combination of package name / version will only be instantiated once by Node, **except in one documented case**: if a package has any number of peer dependencies, Node will instantiate them exactly once for each time it is found in the dependency tree - and this regardless of whether the packages are strictly identical or not. 89 | 90 | So for example if you have `react` and `react-dom` (which has a peer dependency on `react`), a same version of `react` will be guaranteed to only ever be instantiated once by Node (because it doesn't have any peer dependencies), but `react-dom` will be instantiated exactly once for each package that depends on it. 91 | 92 | 93 | > **Why does it work this way?** 94 | > 95 | > Let's say you have `package-a` and `package-b`. Both of them depending on the same package `child`, which has a peer dependency on `peer`. Now, imagine that `package-a` depends on `peer@1` while `package-b` depends on `peer@2`. 
In this instance, the `child` package will have to be instantiated twice in order for us to satisfy the peer dependency contract for both `package-a` and `package-b`. We do this by computing a “unique identifier” (also called a virtual package) for each package instance. Now, how can we standardize the wording for this behavior? 96 | > 97 | > First solution would be to say something similar to: “a package with peer dependencies must have exactly one unique identifier per each set of inherited dependencies”. So in our example, since `child` has two sets of inherited dependencies, it would get two unique identifiers, would be instantiated twice, and everything would work. But unfortunately it's not so simple. 98 | > 99 | > Problems arise when you consider circular dependencies. Let's imagine a different scenario: `package` depends on `child-a` and `child-b`. The `child-a` package has a peer dependency on `child-b`, and `child-b` has a peer dependency on `child-a`. In this situation, per the wording described above, we would need to generate the unique identifier for `child-a` based on the set of its dependencies, which includes `child-b`. The problem is that the unique identifier for `child-b` has not been generated yet, so we cannot compute the unique identifier for `child-a`, and vice-versa! The loop cannot be broken. 100 | > 101 | > The solution to this issue is to say that the unique identifier for a package with peer dependencies is based on the unique identifier of its direct parent. Since a package unique identifier is always computed before its children, we cannot have a cyclic dependency. 102 | 103 | ### Install config 104 | 105 | Since this proposal is still experimental we decided not to enable Plug'n'Play by default for the time being. As a result, a new key has been added to the `package.json`: `installConfig` (we selected this name to mirror the `publishConfig` settings that already existed). If `installConfig.pnp` contains a truthy value, Plug'n'Play will be used if the current version of Yarn supports it. Otherwise, the regular install will be used: 106 | 107 | ``` 108 | { 109 | "installConfig": { 110 | "pnp": true 111 | } 112 | } 113 | ``` 114 | 115 | > **Why not store this configuration value within the `.yarnrc`?** 116 | > 117 | > While the `.yarnrc` files would have made a fine candidate, we believe that checking whether a project is Plug'n'Play compatible or not can be extremely interesting for various tools. Listing it into the `package.json` means that virtually any tool can quickly decide whether they want to take advantage of the guarantees provided by Plug'n'Play. 118 | 119 | ## 4. Solved Issues 120 | 121 | ### A. INSTALL SPEED NOW REACHES NEW ALL-TIME-HIGHS 122 | 123 | Probably the most obvious win is that by keeping the dependencies files stored into the cache, Yarn doesn't have to copy them around anymore. This allows the link step to be skipped almost entirely - the last remaining actions being done only because of how Yarn is currently architectured and will be removed later on (we're still creating a deep tree before flattening it in a later step, which is unnecessary). 124 | 125 | ### B. Installs can now be efficiently cached even on ci 126 | 127 | Now that files don't have to be copied anymore, efficient caching becomes a breeze. The actual Yarn cache can be persisted into locations shared between all CI instances, making it possible to skip installs altogether provided both the cache and the `.pnp.js` file are made available. 
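Purely as an illustration of that workflow (and not part of the proposal), a CI job could decide on its own whether the install step can be skipped; the script below is hypothetical and assumes the cache has been restored into a `.yarn-cache` folder next to the `.pnp.js` file:

```js
// check-install.js - hypothetical CI preflight: exits with 0 when `yarn install`
// can be skipped because both the cache and the resolution file are present.
const fs = require('fs');

const hasPnpFile = fs.existsSync('.pnp.js');
const hasWarmCache =
  fs.existsSync('.yarn-cache') && fs.readdirSync('.yarn-cache').length > 0;

process.exit(hasPnpFile && hasWarmCache ? 0 : 1);
```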
128 | 129 | Where some projects were spending more than two minutes running (like is the case for react-native, for example), Plug'n'Play now allows them to spend this time actually running their tests, decreasing the load on the CIs and making for a better developer experience - we all hate waiting for the tests to end. 130 | 131 | ### C. Users working on multiple projects across a system won't pay increasing install costs 132 | 133 | A common occurrence in the Javascript world is developers working on multiple disconnected projects sharing similar dependencies (for example all projects created through `create-react-app`). Due to how package managers currently work, the files used by those projects were typically copied from the cache into multiple `node_modules`, multiplying both the total size of the installs on the disk and the time wasted running installs. 134 | 135 | Now that the files are read directly from the cache, no matter your system, you'll only ever pay the cost of having multiple projects once. This multi-megs project is much more bearable now that you know that its dependencies will be reused by all other projects on your machine now. 136 | 137 | ### D. “Perfect” hoisting due to the removal of the filesystem limitations 138 | 139 | An example will be worth a thousand words: let's say you have the following dependency tree: 140 | 141 | * `top-level` 142 | * `package-a` 143 | * `package-c@1.0.0` 144 | * `package-b` 145 | * `package-c@1.0.0` 146 | * `package-c@2.0.0` 147 | 148 | In this situation, even though it is found multiple times inside the dependency tree, `package-c@1.0.0` cannot be hoisted in such a way that it can be installing only once. This is because doing so would conflict with the strict requirement of `package-c@2.0.0` declared by the top-level. 149 | 150 | Since Plug'n'Play flattens the dependency tree while still preserving the links between the nodes, the paths Node will get will be the same for any `package-c@1.0.0` inside the dependency tree, causing the package to be instantiated a single time. 151 | 152 | ### E. Users cannot require dependencies that aren't listed in their dependencies 153 | 154 | A common problem was that it was extremely easy for a library author to start relying on a package and forget listing it inside the dependencies. Because these broken dependencies were being pulled by dev dependencies before being hoisted to the top level, they often happened to work fine in development environments and break in production. 155 | 156 | This problem isn't possible anymore with Plug'n'Play, because Yarn is aware of the whole dependency tree and can instantly decide whether a resolution request is valid or not for a given package. As an added bonus, it is able to know the exact reason why a require cannot succeed, which further improves the developer experience: 157 | 158 | ``` 159 | Error: You cannot require a package ("foobar") that is not declared in your 160 | dependencies (via "/Users/mael/app/some-script.js") 161 | ``` 162 | 163 | ``` 164 | Error: Package "foo@0.0.0" (via "/path/to/foo/0.0.0/node_modules/foo/index.js") is 165 | trying to require the package "foobar" (via "foobar") without it being listed in 166 | its dependencies (react, react-dom, foo) 167 | ``` 168 | 169 | ### F. Package instantiations now obey strict & predictable rules 170 | 171 | As mentioned in the previous point, it happens that the hoisting may not be applied fully. But the thing is, it often can be. 
This makes it impossible for a package to know for sure how many times it will be instantiated by Node, and to plan accordingly.
172 | 
173 | The Plug'n'Play API makes it a goal to provide strong guarantees that library authors can rely on. Anything that would cause the contract to be broken in some circumstances is unacceptable, and as a result cannot be used as a guarantee.
174 | 
175 | 
176 | > **Peer dependencies: The Return**
177 | >
178 | > This is for example why Plug'n'Play guarantees that *ALL* packages with peer dependencies are instantiated exactly once for each time they are found in the dependency tree: while it would be possible to optimize some of the packages with peer dependencies in some specific cases, we wouldn't be able to guarantee it and would have to make it an undefined behavior.
179 | >
180 | > Since it's been proven in the past that such undefined behaviors were still leading some libraries to make incorrect assumptions (as happened with packages crossing the `node_modules` boundaries, for example), we deemed it safer to enforce a stricter but entirely predictable and consistent behavior that library authors can rely on.
181 | 
182 | ### G. Enforcing the boundaries leaves room for different implementations
183 | 
184 | As detailed in section 6, Plug'n'Play is but the beginning of a long-term project. As a result, we worked hard to make sure that the guarantees exposed in section 1 and the APIs detailed in section 3 weren't overly vast and would make it possible for us to change the way the Yarn Plug'n'Play resolver is implemented while still honoring the contract we set up.
185 | 
186 | Moreover, enforcing correctness will also make it easier for third parties to write their own resolvers, because they'll know from the get-go what rules they need to implement. We describe in Section 7.A a generalized testsuite that we wrote to make this work easier.
187 | 
188 | ## 5. Potential New Issues
189 | 
190 | ### A. Post-install scripts considered harmful
191 | 
192 | Post-install scripts are likely the biggest technical issue of Plug'n'Play. Since all packages are kept in the cache, build artifacts need to be stored there as well. While it might work for a single project, modifying files inside the cache would still lead to cache corruptions, and would thus be unacceptable: it would cause issues when working on multiple projects sharing the same package with a post-install script, since they would each overwrite the files the others generated (which might be different since they can depend on a dependency of the package, which might be locked to different versions across multiple projects - think about a project using a Node-4-compiled version of node-sass that would conflict with a project using a Node-10-compiled version of this same package).
193 | 
194 | There are two ways this issue can be solved:
195 | 
196 | * First, we've started to implement a `yarn unplug --persist` command that puts specific packages outside of the cache, inside a specific project directory (`pnp-packages`, which wouldn't be too different from `node_modules` except that it would be entirely flat, even when the package names would usually cause a conflict).
197 | * In the long term, we believe that post-install scripts are a problem by themselves (they still need an install step, and can lead to security issues), and we would suggest relying less on this feature in the future, possibly even deprecating it.
While native modules have their usefulness, WebAssembly is becoming a more and more serious candidate for a portable bytecode as the months pass. 198 | 199 | As a data point, we encountered no problem at Facebook with adding `--ignore-scripts` to all of our Yarn invocations. The two main projects we're aware of that still use postinstall scripts are `fsevents` (which is optional, and whose absence didn't cause us any harm), and node-sass (which is currently working on [a WebAssembly port](https://github.com/sass/node-sass/issues/2011)). 200 | 201 | ### B. Cross-installs break the model 202 | 203 | Plug'n'Play relies on the fact that it knows the whole dependency tree. This causes issues when a package tries to require a file located in another entirely different part of the filesystem. 204 | 205 | The current implementation partially solves this by having a fallback on the regular Node resolution when files located outside of the dependency tree make require calls. Unfortunately, this doesn't work well with other projects that have themselves been installed using Plug'n'Play. We think this shouldn't happen under normal circumstances, and as a result have decided it wasn't blocking the proposal. 206 | 207 | ### C. The package manager becomes the hub to run the project 208 | 209 | This is more a philosophical issue than a technical one. In order to make it easier for users to work with Plug'n'Play, Yarn recommends calling scripts through `yarn run`, and running Javascript files using `yarn node` (both of which are commands that have been available for some time now). They both automatically insert the Plug'n'Play hook if needed, making the whole thing transparent to the user. As a downside, it also means that the package manager also becomes the preferred way to run scripts. 210 | 211 | The easiest solution would be for Node to implement native support for loading Plug'n'Play if detected. This is obviously not something that can be taken lightly, so this will only become viable once Plug'n'Play will have proven its value. 212 | 213 | ### D. Tools relying on crossing package boundaries in order to load their plugins need help 214 | 215 | Some packages try to require packages they don't directly depend on for legit reasons. Most of those are trying to do this in order to reference plugins that their users declared in their configuration. Since they don't list those plugins in their dependencies (nor should they have to), Plug'n'Play is supposed to deny them access. 216 | 217 | In order to solve this, Plug'n'Play details a special case when a package makes a require call to a package it doesn't own but that the top-level has listed as one of its dependencies. In such a case, the require call will succeed, and the path that will be returned will be the exact same one as the one that would be obtained if the top-level package was making the call. 218 | 219 | ### E. Edit-in-place workflows need different tools 220 | 221 | A quite common debug pattern is to manually edit the `node_modules` files in order to alter the behavior of the program and debug it more easily (by adding `console.log` statements, for example). Since the `node_modules` folders don't exist anymore, this isn't directly possible anymore. 222 | 223 | In order to solve this use case, we've implemented the `yarn unplug` command that can temporarily put a package from your cache into a special folder inside your project. 
You're then free to edit this local copy as you see fit, then once you're done just run `yarn install` again and it will be removed. This also provides a safety mechanism that avoids you from having to remove your whole `node_modules` hierarchy when you want to revert your changes. 224 | 225 | ## 6. Future Work 226 | 227 | ### A. `require.resolve` 228 | 229 | The `require.resolve` function is problematic in that it does two things in one: 230 | 231 | * On one hand it returns an identifier that, when passed to `require`, will allow any module to load a file that would typically only be accessible from another module. 232 | * On the other hand it converts an identifier into a path on the filesystem. 233 | 234 | The reason this is a problem is that both of those actions don't have the same meaning and as such interfere with each other. The symlinks used for the virtual packages implementation referenced in Section 3 are a direct consequence of this: while it would be possible to implement this concept by making `require.resolve` create and return special in-memory identifiers that `require` would be able to understand, it wouldn't be possible to use those identifiers as paths (unless we were to patch the `fs` module, which is totally unacceptable). 235 | 236 | A fix would be to split `require.resolve` in two: 237 | 238 | * `require.resolve.virtual`: would convert a request into an implementation-defined object ready for consumption by `require` and `require.resolve.physical` 239 | * `require.resolve.physical`: would convert a request (or one of the values returned by `require.resolve.virtual`) into a filesystem path 240 | 241 | ### B. Tarball unpacking 242 | 243 | The classic Yarn installs copy files from the cache to the `node_modules` folder. The Plug'n'Play Yarn installs instead ensure that the cache becomes the one and single source of truth. But what if we were to go one step further? What if the Yarn offline mirror (this folder that contains the tarballs of each package found in the project, and used to populate the cache without querying the network) was this one and only source of truth? What if we didn't have to unpack them anymore? 244 | 245 | Plug'n'Play makes this easy: since the Plug'n'Play implementation is fluid (as long as the guarantees listed above are met), it becomes possible to implement it from various different way. One of these ways could be to return an opaque object that would contain an combination of an archive path and a file path, which `require` would then be able to interpret in order to load the file from the given archive on demand! 246 | 247 | Deployments would then only have to ship the `.pnp.js` file along with the offline mirror and the project could then be run from anywhere without needing any extra install. 248 | 249 | ## 7. Annexes 250 | 251 | ### A. Generalized testsuite 252 | 253 | In order to make it easier for everyone to experiment with different Plug'n'Play implementations, we've written a generalized testsuite that can be found [on the Yarn repository](https://github.com/yarnpkg/yarn/tree/master/packages/pkg-tests). It contains acceptance tests that validate the high-level behavior of the feature, and aren't tied to any specific implementation detail. While it's still a work-in-progress, we've invested in it a lot and hope it'll prove valuable to the community. 
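To give a rough feel for the level these acceptance tests operate at, here is a sketch of one written against the Annex C guarantees; the helpers (`makeSandboxProject`, `runYarnInstall`, `runNode`) are purely illustrative and are not the suite's actual API:

```js
// Hypothetical acceptance test: a package must not be able to require something
// that is missing from its dependency detail (one of the Annex C guarantees).
test('requiring an undeclared dependency must fail', async () => {
  const project = await makeSandboxProject({
    dependencies: {'no-deps': '1.0.0'}, // fixture package that depends on nothing
  });

  await runYarnInstall(project, {pnp: true}); // expected to generate a .pnp.js file

  // The project never declared `left-pad`, so the resolver must refuse it.
  await expect(
    runNode(project, ['-r', './.pnp.js', '-e', "require('left-pad')"]),
  ).rejects.toThrow(/not declared in your dependencies/);
});
```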
254 | 255 | Adapting the testsuite to a different manager is a matter of implementing an adapter matching the package manager you want to test, then enabling the specs you want to validate. You can take a look at [the Yarn adapter](https://github.com/yarnpkg/yarn/blob/master/packages/pkg-tests/yarn.test.js) for an example on how we implemented this for our package manager. 256 | 257 | ### B. Plug'n'Play api 258 | 259 | ``` 260 | interface { 261 | 262 | VERSIONS: {std: 1, [extension]: number}; 263 | 264 | // Helper variable representing the top-level in the API 265 | topLevel: {name: null, reference: null}; 266 | 267 | findPackageLocator(path: string): {name: null, reference: null}; 268 | findPackageLocator(path: string): {name: string, reference: string}; 269 | 270 | getPackageInformation({name, reference}: {name: null, reference: null}): {packageLocation: string, packageDependencies: Map, [key: string]: any}; 271 | getPackageInformation({name, reference}: {name: string, reference: string}): {packageLocation: string, packageDependencies: Map, [key: string]: any}; 272 | 273 | resolveToUnqualified(request: string, issuer: string, {considerBuiltins?: boolean = true}): string; 274 | resolveUnqualified(unqualified: string, {extensions?: Array}): string; 275 | resolveRequest(request: string, issuer: string): string; 276 | 277 | setup(void): void; 278 | 279 | } 280 | ``` 281 | 282 | ### C. Formal Plug'n'Play guarantees 283 | 284 | * A package **MUST** be able to get the value exported by the main entries of its dependencies using the `require` function, or through an import statement. 285 | * A package **MUST** be able to get the filesystem path to a file stored in one of its dependencies by using the `require.resolve` function. 286 | * A package listing a peer dependency **MUST** obtain the exact same instance of this peer dependency when using `require` than its immediate parent in the dependency tree would. This process is applied recursively. 287 | 288 | * A package **MUST NOT** be able to require a package that isn't listed in its dependency detail. The dependency detail is the sum of `dependencies` and `peerDependencies` for all packages, plus `devDependencies` for the top level package if running in development mode. 289 | * An exception is made if the package being required is listed in the dependency detail of the top-level. In this case, the package making the request will obtain the exact same **instance** than if the top-level package had made the require call (note the emphasis on instance rather than version). 290 | * We however discourage packages from relying on this exception, since it's only been implemented to lower the adoption cost and help plugin systems. Packages should prefer using `peerDependencies` if applicable. 291 | * Two packages depending on the same reference of the same dependency that *doesn't* have any transitive peer dependencies **MUST** get the exact same instance of this dependency, whatever their locations in the dependency tree are. 292 | * Two packages depending on the same reference of the same dependency that itself has a transitive peer dependency **MUST** get different instances of this dependency. 293 | 294 | The comprehensive list of guarantees can also be extracted from the “it should” statements that can be found on the Plug'n'Play test suite. 295 | 296 | ### D. Daemon-mode communications 297 | 298 | The Plug'n'Play daemon communicates with the outside world by the mean of a very simple stdin / stdout loop. It doesn't spawn a server. 
The protocol is quite simple. In its most basic form, it's the following:
299 | 
300 | ```
301 | > JSON [request: string, issuer: string]
302 | < JSON [error: ?{code: string, message: string, data: Object}, resolution: string]
303 | ```
304 | 
305 | Note that any execution is synchronous (multiple requests cannot be handled simultaneously), but the daemon can be spawned multiple times and pooled for a similar effect (there's no lock). Well-behaved applications should watch for the resolver being modified, and act accordingly when they detect changes (such as clearing the resolution cache and restarting the daemon processes).
306 | 
-------------------------------------------------------------------------------- /accepted/0000-publish-config.md: --------------------------------------------------------------------------------
1 | - Start Date: (1-10-17)
2 | - RFC PR: https://github.com/yarnpkg/rfcs/pull/40
3 | 
4 | # Summary
5 | 
6 | Right now yarn does not have a concept of publishConfig. This setting already
7 | exists in package.json for many npm packages. The setting allows you to set the
8 | registry url where you want your package published.
9 | 
10 | # Motivation
11 | 
12 | The addition of picking up publishConfig from package.json will allow developers
13 | creating internal packages to move across to using yarn with greater ease.
14 | 
15 | Developers will be able to use yarn publish to send their package to an internal
16 | registry such as Nexus Repository Manager or Artifactory, while maintaining a
17 | separate setting for the main registries such as npmjs or the yarn mirror. These
18 | are very useful on their own for installing packages, but in many cases a
19 | developer is going to want to publish their package to a different registry.
20 | 
21 | # Detailed design
22 | 
23 | To implement this I propose that publish.js be modified in such a way that it will override
24 | registry settings if a package.json contains a publishConfig url. This will need to
25 | modify steps 2 and 3 of the process, particularly the getToken step (this occurs against a registry url)
26 | and the publish itself. I believe this can be accomplished with an `if (pkg.publishConfig)` check and by overriding
27 | the registry url in config (an illustrative sketch is included at the end of this RFC).
28 | 
29 | # How We Teach This
30 | 
31 | Since this is an existing npm feature, I believe not much will be needed
32 | for users to understand or grasp this. Changes to the documentation for yarn
33 | publish are likely in order, but only small changes to explain the behavior.
34 | 
35 | At least on our side (since this originates with Sonatype), we would likely
36 | blog about this new behavior for our users so that they can adopt yarn
37 | as well.
38 | 
39 | # Drawbacks
40 | 
41 | As with any new code, it's new code. Adding it expands the amount of functionality
42 | that yarn now supports. That's the largest drawback I can think of.
43 | 
44 | # Alternatives
45 | 
46 | I considered a --registry flag for the yarn publish command. This would likely accomplish the
47 | same functionality but is prone to error, as most hand-typed things are.
48 | 
49 | # Unresolved questions
50 | 
51 | I'm a relative newbie to yarn, so my design might be too simplistic or may not account
52 | for things I just don't know about.
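As referenced in the Detailed design above, the override could look roughly like the following; the function and variable names are illustrative only and do not correspond to Yarn's actual internals:

```js
// Hypothetical sketch of the publishConfig override described in this RFC.
// `pkg` is the parsed package.json and `config` holds the resolved registry settings.
function getPublishRegistry(pkg, config) {
  if (pkg.publishConfig && pkg.publishConfig.registry) {
    // publishConfig wins for both the token lookup and the publish request.
    return pkg.publishConfig.registry;
  }
  // Otherwise fall back to whatever registry is configured for installs.
  return config.registry;
}
```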
53 | -------------------------------------------------------------------------------- /accepted/0000-remove-yarn-check.md: -------------------------------------------------------------------------------- 1 | - Start Date: 30 Oct 2018 2 | - RFC PR: https://github.com/yarnpkg/rfcs/pull/106 3 | - Yarn Issue: n/a 4 | 5 | # Remove `yarn check` 6 | 7 | ## 1. Motivation 8 | 9 | Running `yarn install` should work out of the box. There's no reason for `yarn check` to exist from a user perspective - they should never have to run it, because installs must always be right. 10 | 11 | Being not particularly useful, the `yarn check` command receives less attention and as a result it often yields wrong / confusing results. This leads users to believe that `yarn install` is broken when it's in fact `yarn check` (https://twitter.com/betaorbust/status/1055610508533878784). 12 | 13 | ## 2. Detailed Design 14 | 15 | We'll remove the `yarn check` command in Yarn 2.0. 16 | 17 | ## 3. How Can We Teach This 18 | 19 | This command will be marked as deprecated, and running it will exit with an error code and an error message explaining why it got removed. 20 | 21 | ## 4. Drawbacks 22 | 23 | - The one use of `yarn check` is to provide debug information when installs aren't properly made. In practice it's never used this way, even by maintainers. 24 | 25 | ## 5. Alternatives 26 | 27 | - We could fix `yarn check`. This would prove time consuming, of little value because of the reasons detailed above, and resources would be better spent on other worksites. 28 | -------------------------------------------------------------------------------- /accepted/0000-switching-registries.md: -------------------------------------------------------------------------------- 1 | - Start Date: 2017-09-21 2 | - RFC PR: 3 | - Yarn Issue: 4 | 5 | # Summary 6 | 7 | Yarn should always use the registry configured by the current user, falling back to the default registry otherwise. It should not use the registry specified in the `resolved` field of lockfile. 8 | 9 | # Motivation 10 | 11 | The default registry used by Yarn is `registry.yarnpkg.com`, but the use of alternate registries is possible through configuration or command-line flags. Using an alternate registry can be useful for a number of reasons; it can improve performance, facilitate serving private packages, and improve the reliability of build and test servers. However, many of these use cases are environment specific. These benefits cannot be realized without allowing the use of different registries in different environments. 12 | 13 | * Using a registry mirror that is closer to you can dramatically improve performance. Using a single registry mirror might work well for teams that work in the same office, but teams that span great distances will need to use different registry mirrors to see the same benefit. 14 | * Build environments can benefit from using a dedicated registry. In addition to the performance benefits, it can also protect against outages that might affect the public registries. 15 | 16 | Currently, Yarn will only use the configured registry for newly installed packages. For packages that are listed in the lockfile already, Yarn will use the registry saved in the lockfile instead, ignoring the registry configuration of the current user. This effectively prevents switching registries between environments. 
17 | 18 | # Detailed design 19 | 20 | Yarn should adopt the behavior `npm` introduced with `npm@5`, which is to always use the registry configured by the current user. This change was described in [this blog post](http://blog.npmjs.org/post/161081169345/v500): 21 | >If you generated your package lock against registry A, and you switch to registry B, npm will now try to install the packages from registry B, instead of A. If you want to use different registries for different packages, use scope-specific registries (npm config set @myscope:registry=https://myownregist.ry/packages/). Different registries for different unscoped packages are not supported anymore. 22 | 23 | Yarn already supports switching to a different registry and scoped registries. The change would be to use them in all cases rather than just for new packages. 24 | 25 | ### What about the `resolved` field in the lockfile? 26 | 27 | The `resolved` field will not be used to determine which registry is used. Changes to to the `resolved` field, such as removing the registry, are outside the scope of this RFC. 28 | 29 | ### How do scoped registries work? 30 | 31 | Yarn supports configuring a registry for a specific scope (e.g. `yarn config set '@foo:registry' 'https://registry.foo.com'`). If a scoped registry has been configured, then this registry shall be used for **all** packages under that scope. Using different registries within a single scope is not supported. 32 | 33 | ### Can I configure a specific non-scoped package use an alternate registry? 34 | 35 | No, non-scoped packages would all use the same registry. This restriction simplifies the configuration, and keeps us in line with `npm`. 36 | 37 | ### Should the `resolved` field be used as a fallback? 38 | 39 | No, the user's configured registry should be used at all times. Falling back to `resolved` could hide problems (i.e. outdated or misconfigured mirrors) and negatively affect performance. It could also leak information to third parties about which packages are being used, which might be undesirable in certain environments. 40 | 41 | # How We Teach This 42 | 43 | We should improve the documentation for configuring Yarn, with a focus on the registry and scoped registry configuration. This change should also be featured prominently and explained in the release notes, and any blog post or announcement accompanying the version this ships in. This is a significant change, but it does bring it more in line with the behavior of `npm`, which many people are familiar with. 44 | 45 | Overall, the behavior of Yarn should be easier to understand after this change. The behavior will be more transparent and controllable by the user. 46 | 47 | # Drawbacks 48 | 49 | 1. This would be a breaking change. 50 | 2. Alternate registries for non-scoped packages (or within scopes) would be impossible 51 | While this was never supported officially, it was possible to do by directly editing the `yarn.lock` file. With this change, that would no longer be possible. 52 | 53 | # Alternatives 54 | 55 | 1. Add additional configuration for overriding registries 56 | We could preserve the existing behavior, but add additional configuration for overriding the registry. This `override-registry` would behave like `registry` configuration does in the main proposal. This would allow users to opt-in to the new behavior without making a breaking change. 57 | While this would solve the problem, it would also make the configuration more confusing and difficult to teach. 58 | 2. 
Use the `registry` configuration only as a fallback to the `resolved` field 59 | This could allow the install to succeed in cases where the `resolved` field has a registry that is inaccessible to the current user. However it would do nothing to address the use case where an alternate registry is used for performance reasons. 60 | 61 | # Unresolved questions 62 | 63 | * What is the performance impact of not using the cached tarball URL (`resolved`)? 64 | -------------------------------------------------------------------------------- /accepted/0000-update-hook-runs.md: -------------------------------------------------------------------------------- 1 | - Start Date: 2017-06-19 2 | - RFC PR: 3 | - Yarn Issue: 4 | 5 | # Summary 6 | 7 | Bring the scripts that run on certain commands closer to parity with NPM. 8 | 9 | # Motivation 10 | 11 | There are a few changes to the scripts hooks that NPM has made recently, and it 12 | would be good to bring `yarn` to the same standard. 13 | 14 | This means that `yarn` can remove hooks that don't make sense, and stop 15 | supporting issues brought up by people confused about *why* a certain script is 16 | running. 17 | 18 | Additionally, by tracking closely to what NPM is doing with their hooks, `yarn` 19 | remains a drop-in replacement, as most people expect it to act. 20 | 21 | # Detailed design 22 | 23 | ### State of Scripts 24 | 25 | Below are all the scripts that NPM currently auto-runs, and at which points they 26 | run. Currently, there is only one deprecation that motivated this RFC, but it is 27 | anticipated that there will be more in the long run. 28 | 29 | This list is kind of long, because I've enumerated all options. 30 | 31 | - `prepublish` 32 | - before `publish` 33 | - before `install` 34 | - *Note: this is the deprecation. `prepublish` will no longer run on 35 | install* 36 | - This hook should *not* be added, as it is removed as of `npm@5`. 37 | - `prepare` 38 | - before `publish` 39 | - before `install` 40 | - *Note: This is the replacement for the current `prepublish` behaviour.* 41 | - `prepublishOnly` 42 | - before `publish` 43 | - `prepack` 44 | - before `pack` 45 | - before `publish` 46 | - `postpack` 47 | - after `pack` 48 | - after `publish` 49 | - `publish` 50 | - after `publish` 51 | - confusing, but runs *after* the package has been published. 52 | - `postpublish` 53 | - after `publish` 54 | - `preinstall` 55 | - before `install` 56 | - `install` 57 | - after `install` 58 | - confusing, but runs *after* the package has been installed. 59 | - `postinstall` 60 | - after `install` 61 | - `preuninstall` 62 | - before `uninstall` 63 | - `uninstall` 64 | - before `uninstall` 65 | - confusing, but runs *before* the package is uninstalled 66 | - `postuninstall` 67 | - after `uninstall` 68 | - `preversion` 69 | - before `version` 70 | - `version` 71 | - after `version` 72 | - runs *before* the version commit is made 73 | - `postversion` 74 | - after `version` 75 | - runs *after* the version commit is made 76 | - `preshrinkwrap` 77 | - before `shrinkwrap` 78 | - `shrinkwrap` 79 | - before `shrinkwrap` 80 | - confusing, but runs *before* the shrinkwrap is created 81 | - I'm not sure `yarn` needs to include this, given that `shrinkwrap` is not 82 | implemented. 83 | - `postshrinkwrap` 84 | - after `shrinkwrap` 85 | 86 | The following commands are backed by user-written scripts. The `pre` and `post` 87 | commands are run before and after the user-written version, and there is no 88 | built-in run. 
89 | 
90 | *Note: `test` has a built-in default of `echo 'Error: no test specified'`, so
91 | the `pre` and `post` will run regardless*
92 | 
93 | *Note: all other scripts will error, and no hook will run, except `restart`
94 | (explained below).*
95 | 
96 | - `pretest`
97 | - before `test`
98 | - `posttest`
99 | - after `test`
100 | - `prestop`
101 | - before `stop`
102 | - `poststop`
103 | - after `stop`
104 | - `prestart`
105 | - before `start`
106 | - `poststart`
107 | - after `start`
108 | - `prerestart`
109 | - before `restart`
110 | - `postrestart`
111 | - after `restart`
112 | 
113 | `restart` is a special case, because it will run the `stop` and then the `start`
114 | scripts if `restart` doesn't exist. It doesn't throw an error if any of those
115 | scripts are not defined. Interestingly, it will run the `pre`- and `post`-
116 | scripts for `stop` and `start`, even if `stop` and `start` themselves are not
117 | defined.
118 | 
119 | ### Multiple Hooks, One Command
120 | 
121 | There are some commands that have *multiple* hooks attached to them. These hooks
122 | will run in a certain order.
123 | 
124 | #### restart
125 | 
126 | `restart` is arguably the most confusing behavior. First, let's look at the
127 | behaviour when no `restart` script is defined.
128 | 
129 | Regardless of the definition of the actual `start` and `stop` commands,
130 | `restart` will run the lifecycle hooks. If all lifecycle hooks are defined, the
131 | scripts are run in the following order. If a particular script is not defined,
132 | it is simply skipped.
133 | 
134 | `prerestart` -> `prestop` -> `poststop` -> `prestart` -> `poststart` ->
135 | `postrestart`
136 | 
137 | This makes more sense when you look at the fact that `restart` runs `stop`, and
138 | then `start`, if there is no `restart`. It makes less sense that these hooks are
139 | run if there is no `stop` or `start`, even though running `start` will error,
140 | and run no hooks, if `start` is not defined.
141 | 
142 | If `restart` is defined, the hooks are run like:
143 | 
144 | `prerestart` -> `restart` -> `postrestart`
145 | 
146 | #### publish
147 | 
148 | Due to the addition of the extra hooks to try and solve the confusion around the
149 | `publish` hook, the hooks are run in the following order:
150 | 
151 | `prepublish` -> `prepare` -> `prepublishOnly`
152 | 
153 | In practice, the `prepublishOnly` event may be dropped at any time, so this hook
154 | ordering makes little-to-no sense, given that `prepublishOnly` now has the same
155 | behaviour as `publish`, yet they sit on different sides of `prepare`.
156 | 
157 | The current proposal would be to match the existing behaviour of `npm`. That
158 | will also require changing the current `yarn` behaviour, as `prestart` and
159 | `poststart` scripts currently run, even without a `start` defined.
160 | 
161 | # How We Teach This
162 | 
163 | This is a continuation of both `npm` patterns and existing `yarn` patterns. I
164 | think that this could also be cleared up by displaying what hooks will be run at
165 | the commencement, so that people can clearly see what is going to happen.
166 | 
167 | E.g.
168 | 
169 | ```
170 | > yarn start
171 | Running prestart -> start -> poststart
172 | ```
173 | 
174 | The output would show only the hooks that get run, and the order they'll be run in.
175 | 
176 | # Drawbacks
177 | 
178 | - It will cause changes to the current `yarn` behaviour of running hooks even if
179 | no script is defined
180 | - e.g. `prestart` and `poststart` run, even without `start` being present.
181 | `npm` throws an error.
182 | - It may change people's existing workflows if they expect `prepublish` to run on
183 | `install`
184 | - However, this change brings `yarn` into line with `npm`
185 | 
186 | # Alternatives
187 | 
188 | - Split away from current `npm` behaviour
189 | - This gives `yarn` the option to define behaviours in a more modern way, and
190 | the flexibility to change when/how hooks run
191 | - Leave the current behaviour as-is
192 | 
193 | # Unresolved questions
194 | 
195 | - How to display this change to users?
196 | - Where to keep a list of hooks and what order they run in?
197 | 
-------------------------------------------------------------------------------- /accepted/0000-workspace-run-commands.md: --------------------------------------------------------------------------------
1 | * Start Date: 2017-09-21
2 | * RFC PR:
3 | * Yarn Issue:
4 | 
5 | # Summary
6 | 
7 | Allow the Yarn CLI to execute a package script on each workspace package from the workspace root.
8 | 
9 | # Motivation
10 | 
11 | Lerna does a great job of handling monorepos. Yarn's built-in workspaces feature would benefit from borrowing more of the functionality that Lerna exposes, to handle monorepos more easily.
12 | 
13 | Just like installing all dependencies from one place (the workspace root), it'd be useful to execute a specific package script for each of the workspace packages from the workspace root using a single command. Not only would this make for less back-and-forth traveling between the packages to execute scripts, it would also help a lot with CI/CD configuration.
14 | 
15 | # Detailed design
16 | 
17 | As @BYK mentioned [here](https://github.com/yarnpkg/yarn/issues/4467#issuecomment-330873337), to start we'd create two commands for this feature:
18 | 
19 | * `yarn workspaces list`
20 | * `yarn workspaces run <script>`
21 | 
22 | Unifying all of the workspace-specific commands under the `workspaces` namespace allows for a modular approach from the CLI's perspective, making future commands easy to add.
23 | 
24 | ## `yarn workspaces list [flags]`
25 | 
26 | This command lists all of the packages for all of the workspaces alphabetically. If no workspaces are found, it will error.
27 | 
28 | ## `yarn workspaces run <script> [flags] ...`
29 | 
30 | This command runs a specific package script (as defined in the `scripts` property in `package.json`) for all of the packages for all of the workspaces.
31 | 
32 | Just like the `yarn run` command, any arguments after the command name will be passed as arguments to the package script.
33 | 
34 | For a _fail fast_ operation, we could traverse all the workspaces to check whether the specified command exists within each workspace before we actually start executing. If it is not found, we would display an error stating which workspace is missing that command.
35 | 
36 | The ordering of execution is also important. It must be executed _topologically_, so that it won't break the inter-dependent workspaces (a minimal sketch of such an ordering is included at the end of this RFC). From the Lerna documentation:
37 | 
38 | > By default, all tasks execute on packages in topologically sorted order as to respect the dependency relationships of the packages in question. Cycles are broken on a best-effort basis in a way not guaranteed to be consistent across Lerna invocations.
39 | >
40 | > Topological sorting can cause concurrency bottlenecks if there are a small number of packages with many dependents or if some packages take a disproportionately long time to execute. The --no-sort option disables sorting, instead executing tasks in an arbitrary order with maximum concurrency.
41 | > This option can also help if you run multiple "watch" commands. Since lerna run will execute commands in topologically sorted order, it can end up waiting for a command before moving on. This will block execution when you run "watch" commands, since they typically never end. An example of a "watch" command is running babel with the --watch CLI flag.
42 | 
43 | 
44 | The output should be prefixed with the name of the workspace (eg. `packages/package-a:`), so the user has a better idea of what is currently running.
45 | 
46 | ### `--concurrency`
47 | 
48 | The `--concurrency` flag changes the number of child processes that are spawned when commands are run in parallel, defaulting to `4` (like Lerna).
49 | 
50 | ### `--parallel`
51 | 
52 | If the `--parallel` flag is passed, it will run the commands in parallel in separate child processes, instead of running them in series. It will ignore the concurrency flag and the topological sorting requirements. (Just like Lerna.)
53 | 
54 | ## `yarn workspaces exec <command> ...`
55 | 
56 | This command runs an arbitrary shell command in each package. It is similar to `yarn workspaces run`, and respects the same flags, but instead of running a package script defined in `package.json` you can pass it arbitrary shell commands.
57 | 
58 | This is helpful for cases where you want to execute something that isn't worthy of storing in `package.json`, often when debugging or running a one-off.
59 | 
60 | For example:
61 | 
62 | ```
63 | $ yarn workspaces exec babel --out-dir ./lib ./src
64 | ```
65 | 
66 | ## Common Flags
67 | 
68 | ### `--packages`
69 | 
70 | The `--packages` flag takes a package name or glob, and it restricts the command being run to only take effect in those packages.
71 | 
72 | **Note:** this flag operates on the package names, as defined by the `name` field in `package.json` files.
73 | 
74 | For example:
75 | 
76 | ```
77 | $ yarn workspaces list --packages 'babel-*' build
78 | ```
79 | 
80 | 
81 | ### `--workspaces`
82 | 
83 | The `--workspaces` flag takes a workspace name or glob, and it restricts the command being run to only take effect in those workspaces.
84 | 
85 | **Note:** this flag operates on the workspace names, not the package names. For example `packages/my-package` would be a workspace name. This is helpful when working with multiple directories of workspaces.
86 | 
87 | For example, with a `workspaces` setup of:
88 | 
89 | ```json
90 | [
91 | "packages/*",
92 | "services/*",
93 | "utils/*"
94 | ]
95 | ```
96 | ```
97 | packages/
98 | package-a/
99 | package-b/
100 | ...
101 | services/
102 | api/
103 | cdn/
104 | utils/
105 | ...
106 | ```
107 | 
108 | You could restrict the command to only run in `services/api` and `services/cdn` by doing:
109 | 
110 | ```
111 | $ yarn workspaces run --workspaces 'services/*' start
112 | ```
113 | 
114 | # How We Teach This
115 | 
116 | Since this is a new feature and doesn't affect existing functionality, it won't change any existing user behavior. However, documentation should be added to explain the new features to those who want to opt-in to them.
117 | 
118 | # Drawbacks
119 | 
120 | This will increase the complexity of Yarn, instead of letting it live in Lerna and work in tandem, especially while executing the package scripts, because we need to keep track of the package dependencies.
121 | 
122 | # Alternatives
123 | 
124 | There's no current alternative using Yarn. You have to use Lerna and Yarn in concert, which causes confusion as to which one to use when.
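To make the topological ordering requirement from the Detailed design concrete (as referenced there), here is a minimal sketch; it assumes a plain object mapping each workspace package to the sibling packages it depends on, which is not how Yarn actually stores this information:

```js
// Minimal ordering sketch: dependencies come before their dependents.
// `workspaces` maps a package name to its intra-workspace dependencies, e.g.
// {'package-a': ['package-b'], 'package-b': []}. Cycles are simply broken
// wherever the traversal first re-enters them (best effort, like Lerna).
function topologicalOrder(workspaces) {
  const order = [];
  const visited = new Set();

  const visit = name => {
    if (visited.has(name)) return;
    visited.add(name);
    for (const dep of workspaces[name] || []) {
      if (dep in workspaces) visit(dep); // only sibling packages matter for ordering
    }
    order.push(name); // pushed after its dependencies
  };

  Object.keys(workspaces).forEach(visit);
  return order;
}
```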
125 | 126 | # Unresolved questions 127 | 128 | - Should package names and workspace names be handled? or only packages? 129 | - What does Yarn define as a single "workspace"? 130 | - Should parallel execution be handled? 131 | - Should an `exec` command be added as well, similar to Lerna? 132 | -------------------------------------------------------------------------------- /accepted/0000-yarn-knit.md: -------------------------------------------------------------------------------- 1 | - Start Date: 2017-01-13 2 | - RFC PR: https://github.com/yarnpkg/rfcs/pull/41 3 | - Yarn Issue: https://github.com/yarnpkg/yarn/issues/1213 4 | 5 | # Summary 6 | 7 | This is a proposal to improve upon `yarn link` so that developers can more accurately test in-development versions of their libraries from their apps or other libraries. 8 | 9 | # Motivation 10 | 11 | `yarn link` (and `npm link`) before it have several problems when working on code bases of non-trivial sizes, especially with multiple apps. The current `link` command doesn't isolate `node_modules` between apps (especially problematic with the advent of Electron), it doesn't allow for working on multiple versions of a library, and it produces a `node_modules` hierarchy that is not faithful to the one produced after the library is published. 12 | 13 | # Detailed design 14 | 15 | ## Desired behavior 16 | 17 | The `yarn link` workflow should mimic publishing a package (ex: `dep`) to npm and then installing it in a dependent (ex: `app`), and keep this constraint while you're making changes to the first package. Concretely, `yarn link` should make it so that when you save a change to `dep`, the resulting state is as if you: 18 | 1. Ran `npm publish` in `dep` (assume that it can clobber an existing version, and that you're publishing to a local registry on just your computer). 19 | 2. Ran `yarn add dep` in `app`. 20 | 21 | ## Why this behavior is great 22 | 23 | This solves several problems that "yarn link" has today: 24 | 25 | #### Isolating `node_modules` correctly 26 | 27 | You can install `dep` in two different apps without sharing the `node_modules` of `dep`. This is a problem with Electron apps, whose V8 version is different than Node's and uses a different ABI. If you have `node-app` and `electron-app` that both depend on `dep`, the native dependencies of `dep` need to be recompiled separately for each app; `node-app/n_m/dep/n_m` must not be the same as `electron-app/n_m/dep/n_m`. 28 | 29 | #### Working on multiple versions 30 | 31 | You can be developing multiple different versions of `dep`. Say you have two directories, `dep-1` and `dep-2`, which have your v1 and v2 branches checked out, respectively. With "yarn link" it's not possible to make both of these directories linkable at the same time. 32 | 33 | This is a problem when you are developing & testing `dep-1` with `old-app` and `dep-2` with `new-app`. You don't want to be going back and forth between `dep-1` and `dep-2` running "yarn link" each time you switch which app you're testing. 34 | 35 | #### Faithfully representing the `node_modules` hierarchy 36 | 37 | Currently `yarn link` symlinks the entire package directory, which brings along its `node_modules` subdirectory with it. With dependency deduping and flattening, bringing in `dep/node_modules` wholesale usually produces a different `node_modules` hierarchy than running `yarn install` in `app` and installing everything from npm. 
This isn't a problem most of the time but it does go against Yarn's spirit of consistency and the lockfile. 38 | 39 | ## A practical proposal -- knitting 40 | 41 | This is a proposal that solves all of the problems above and isn't too hard to implement or understand. I'm going to call it `yarn knit` to distinguish it from `yarn link`. Conceptually, we find all the files we'd normally publish to npm, pack them up using symlinks instead of copies of the files, publish the pack to a local registry (just a directory), and then when installing we look up packages in the local registry directory instead of npm. 42 | 43 | ### Running "yarn knit" inside of `dep` 44 | 45 | This is the step that simulates publishing `dep`. Running `yarn knit` in `dep` finds all the files that "yarn publish" would pack up and upload to npm. Crucially, this excludes `node_modules`, and would follow the same algorithm as "yarn publish" such as reading package.json's `files` field. 46 | 47 | Then it simulates publishing `dep`: it creates a directory named `dep-X.Y.Z` (where `X.Y.Z` is the version of `dep` in its package.json) inside of a global directory like `~/.yarn-knit`. A symlink is created for each file or directory that `yarn publish` would normally have packed up. This step shares some conceptual similarities with publishing to a registry, except it uses symlinks and it's local on your computer. 48 | 49 | ### Running "yarn knit dep" inside of `app` 50 | 51 | This behaves like `yarn add dep` except that it looks at the versions of `dep` that are in the global `~/.yarn-knit` folder and takes the latest one. (You also could run "yarn link dep@X.Y.Z" if you wanted a more specific version, like "yarn add".) 52 | 53 | `yarn knit dep` then runs most of the same installation steps that `yarn add dep` would. It creates `app/node_modules/dep` and creates symlinks for each of the symlinks under `~/.yarn-knit/dep-X.Y.Z`. Then it installs the dependencies of `dep` as usual by fetching them from npm. Finally it runs postinstall scripts. 54 | 55 | # How We Teach This 56 | 57 | This proposal is mostly additive and affects only how people work on libraries that they are using in their apps. We would want to document the `knit` command in the "CLI Commands" section of the docs and perhaps add a new section to "The Yarn Workflow". 58 | 59 | `yarn link` would stay around, so people migrating from the npm client wouldn't have to learn anything new at first. 60 | 61 | # Drawbacks 62 | 63 | One issue with this proposal is that it's not clear what to put in the lockfile after running `yarn link dep` since we don't have an npm URL for the dep yet -- it hasn't been published to npm. 64 | 65 | Another issue is that if you change package.json in `dep`, namely changing a dependency or modifying the `files` entry, you have to run `cd dep; yarn knit; cd app; yarn knit dep`. 66 | 67 | Also, if you update the code in dep and bump its version, say from 1.0.0 to 1.1.0, the symlinks in ~/.yarn-knit/dep-1.0.0 will still point to the code in your working directory, which now contains 1.1.0 code. 68 | 69 | The symlinks might break but I think that's mostly OK since at that point you're done working on dep and have published it to npm and it's easy to go run yarn add dep in app and not use the symlinks anymore. 70 | 71 | If you want to truly pin the versions of knitted packages then you'd need to have a different working directory for each version. (Git worktrees are great for this use case actually. 
Worktrees let you check out a repo once and then magically create semi-clones of it in separate directories, with the constraint that the worktrees need to be on different branches, which is totally OK in this scenario. The worktrees all share the same Git repo though, so if you commit in one worktree you can cherry pick that commit within another worktree.) 72 | 73 | # Alternatives 74 | 75 | Another similar approach is to run `yarn pack`, which creates a tarball of the library's contents as if it were downloaded from npm, and install the tarball in the app used to test the library. This has the benefits of reducing divergence between development and production -- `library/node_modules` is not shared across apps, the developer can work on multiple versions of the library, and the `node_modules` hierarchy is faithfully represented. The downside is that everytime the developer edits a file, they need to re-pack and reinstall the library. 76 | 77 | # Unresolved questions 78 | -------------------------------------------------------------------------------- /implemented/0000-focused-workspaces.md: -------------------------------------------------------------------------------- 1 | - Start Date: 2018-02-12 2 | - RFC PR: https://github.com/yarnpkg/rfcs/pull/92 3 | - Implementation: https://github.com/yarnpkg/yarn/pull/5663 4 | 5 | # Summary 6 | 7 | Focusing on a single workspace should be as easy as working on all workspaces together. There should be an easy way to install sibling workspaces as regular dependencies rather than being required to build all of them just to work on a single workspace. 8 | 9 | # Motivation 10 | 11 | Currently, yarn workspaces are optimized for people making changes across multiple projects in the 12 | monorepo, but they can make things more difficult if you want to focus on a single workspace. While 13 | the automatic linking and auto-installation of dependencies for all workspaces can be convenient if you are actually working 14 | on all workspaces, having to build and install everything locally for multiple projects when you only work on one 15 | at a time can be time consuming compared to just pulling those packages down from a registry. 16 | 17 | If you could focus on a single workspace as easily as if it were its own repo, the transition to a monorepo 18 | from separate repos would be seamless for people who focus on a single workspace/repo at a time, and one of the 19 | biggest reasons not to use workspaces would be removed. 20 | 21 | # Detailed design 22 | 23 | Add a new flag for `yarn install`, `--focus`. The flag can only be used from an indvidual (target) workspace (not the root of a workspaces project or an isolated project). Instead of a flag, this could also be a new command, `yarn focus`. This has a minimal effect on the design. 24 | 25 | With the focus flag, in addition to normal installation behavior, yarn install "shallowly" installs any sibling workspaces that the target workspace depends on. A shallow installation can be best explained with an example. 26 | 27 | There is a monorepo with packages A and B, where A depends on B, and B depends on Foo, which is external and not part of the monorepo. If you try to focus on A, `yarn install --focus` will install everything that `yarn install` does now. In addition, it will shallowly install B in A/node_modules. Shallow installation means installing B but not installing any of its transitive dependencies (such as Foo). 
A regular install would already guarantee that Foo would be at root/node_modules, so regular node module resolution would already be able to find it and it is not needed under A.
28 | 
29 | The version of B that will be shallowly installed is the version that is in B's package.json (but downloaded from the registry). However, one edge case is when A depends on a different version of B than what is in B's package.json. This may happen in a scenario where A rolls back its version of B due to a bug in B but continues to release new versions of A. In this scenario, the shallowly installed version of B should be the version that A specifies, not the version in B's package.json. (This is the current behavior and should not change.)
30 | 
31 | Another edge case is when A also depends on Foo, but depends on a different version than B does. In this case, B's version of Foo should be installed under A/node_modules/B/node_modules/Foo. This will allow it to use its version of Foo without interfering with the version A uses (whether that is under root/node_modules/Foo or A/node_modules/Foo).
32 | 
33 | The most complicated edge case comes into play when there are local changes to B's package.json that have not been published to the registry. The problem with this is that package name and version are no longer a unique identifier for a copy of a package, which is currently relied on in many places. Using the local version of B is not an option because a difference in dependencies would break the remote version of B. Trying to install both the local and remote versions of B would require major changes to differentiate between two copies of the same package with the same version. Instead, we should use the remote version of B to determine all dependencies that are installed, rather than the local version.
34 | 
35 | 
36 | The focus flag should prevent writing to the lockfile, because the dependencies that differ locally from what is published should not change the lockfile. The integrity file should be marked with a flag that signals that a focused install was done, to allow for quick bailouts on repeated focused installs and to prevent early bailouts if install is later run without the focus flag.
37 | 
38 | This approach should allow for quick switching between focused work and non-focused work, since every dependency stays in the same place and only a few extra ones are added or removed. (A regular `yarn install` should erase any shallow installations and let you go back to normal cross-package work.)
39 | 
40 | # How We Teach This
41 | 
42 | *What names and terminology work best for these concepts and why?*
43 | A new documentation section should be added for the focus flag. When explaining what it does, the term "shallow installation" should paint a clear picture of how sibling workspaces are installed.
44 | 
45 | *How is this idea best presented?*
46 | As a way of enabling focused development in a single workspace without removing the ability to manage the whole
47 | workspace from a subfolder.
48 | 
49 | *How should this feature be introduced and taught to existing Yarn users?*
50 | Add a new section for the focus flag in the install documentation. A blog post would be a good way to announce the feature as well.
51 | 
52 | *Would the acceptance of this proposal mean the Yarn documentation must be
53 | re-organized or altered? Does it change how Yarn is taught to new users
54 | at any level?*
55 | Just adding new documentation for the new flag (see above).
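To restate the shallow-installation behaviour from the Detailed design in code, here is an illustrative sketch rather than the proposed implementation; `workspaceDependencies` and `installFromRegistry` are hypothetical helpers:

```js
// Hypothetical sketch of a focused install for a target workspace (A).
// Only sibling workspaces that A depends on are installed into A/node_modules,
// and they are installed "shallowly": none of their transitive dependencies are
// copied there, since the regular root install already provides those.
async function focusInstall(targetWorkspace, allWorkspaces) {
  for (const [name, range] of workspaceDependencies(targetWorkspace)) {
    if (!allWorkspaces.has(name)) continue; // external deps need nothing special

    // Use the version the target asks for, which may differ from the sibling's
    // local package.json (see the rollback edge case above).
    await installFromRegistry(name, range, {
      into: `${targetWorkspace.path}/node_modules/${name}`,
      transitive: false, // this is what makes the install "shallow"
    });
  }
}
```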
56 |
57 | # Drawbacks
58 |
59 | You have to rerun focus every time you run yarn add/upgrade. (This would also be the case for yarn install if focus was a command instead of a new flag). This can possibly be mitigated with future additions of focus flags for add and upgrade.
60 |
61 | Focused installs still have to install all dependencies for all repos (trying to ignore other repos would greatly increase complexity). If you are working on package A, which depends on B but does not depend on C, you still have to install C's dependencies (at the root, like normal) even though you don't need them. In practice, this should not be a huge problem because workspaces already optimize away the need to reinstall common dependencies between C and A/B, which limits the unnecessary installations. Additionally, if you have already run a regular install, the unnecessary dependencies will already
62 | be in place.
63 |
64 | # Alternatives
65 |
66 | As described above, focus could be a new command rather than a flag for the existing install command. However, its similarity to install makes it a better candidate for a flag rather than a whole new command. Flags are also easier to add automatically in .yarnrc files if you want to always run focused installs without having to think about it.
67 |
68 | Instead of only shallowly installing sibling workspaces, focus could do a full install of a workspace's dependencies inside its node_modules folder, allowing it to ignore the root node_modules. Similar flags could also be added to other commands, such as add and upgrade. However, in addition to being MUCH slower, it greatly increases the complexity of the code for install because the lockfile (which is shared across all workspaces) would need to be maintained correctly even though only a single workspace is being considered.
69 |
70 | An alternative that would require no work would be to just encourage people to disable the workspaces feature
71 | when they want to focus on a single workspace. This is a poor user experience though, and the feature flag for workspaces
72 | will likely be deleted at some point.
73 |
-------------------------------------------------------------------------------- /implemented/0000-link-dependency-type.md: --------------------------------------------------------------------------------
1 | - Start Date: 2016-10-12
2 | - RFC PR:
3 | - Yarn Issue:
4 |
5 | # Summary
6 |
7 | Add a symlink `link:` dependency type to enable complex cross-project development
8 | workflows.
9 |
10 | # Motivation
11 |
12 | This RFC is a spinoff of yarn's [issue #884](https://github.com/yarnpkg/yarn/issues/884).
13 |
14 | We've been using some kind of monorepo approach for our projects for quite some
15 | time, a bit like [lerna](https://github.com/lerna/lerna) but with a more private
16 | and nested approach (our packages aren't expected to be published for now) along
17 | with some specific needs: we need to link some public dependencies (e.g. mongoose)
18 | that must be the exact same instance across our subpackages (otherwise you
19 | encounter a lot of exotic bugs and edge cases), and we also link our devDeps (they are
20 | shared across all our subpackages).
21 |
22 | At first we used [linklocal](https://github.com/timoxley/linklocal) with npm@2,
23 | leveraging a custom use of the `file:` prefix (basically just symlinking them),
24 | but npm@3 broke a lot of things related to their handling (e.g.
25 | [#10343](https://github.com/npm/npm/issues/10343)).
We ended up moving to
26 | [ied](https://github.com/alexanderGugel/ied) where we implemented the `file:`
27 | prefix handling using simple symlinks, which tackled our need.
28 |
29 | I would love to be able to switch these projects to yarn but I would need a way
30 | to create these links.
31 |
32 | npm is also considering adding the same `link:` specifier, see this [recent RFC](https://github.com/npm/npm/pull/15900).
33 |
34 | # Detailed design
35 |
36 | We add a new `link:` specifier that would just create symlinks and nothing more
37 | (regardless of the destination's existence).
38 |
39 | I've already implemented the changes in yarn's [pr#1109](https://github.com/yarnpkg/yarn/pull/1109) and I'm
40 | currently maintaining a fork since my team already relies on this for several
41 | projects.
42 |
43 | # How We Teach This
44 |
45 | I think `link:` is pretty explicit; an update to the docs/CLI help should be
46 | enough.
47 |
48 | # Drawbacks
49 |
50 | Not sure exactly how cross-platform symlinks are today. However, it looks like
51 | [Microsoft might be catching up](https://blogs.windows.com/buildingapps/2016/12/02/symlinks-windows-10/) on this issue.
52 |
53 | # Alternatives
54 |
55 | Besides the two alternatives described above, I can't think of anything else for
56 | now.
57 |
58 | The drawback of not implementing this is that we restrict how creative developers
59 | can be with complex multi-package workflows.
60 |
61 | # Unresolved questions
62 |
63 | Not sure how we should handle actually publishing packages with such
64 | dependencies; maybe the existing behavior for `file:` types?
65 |
-------------------------------------------------------------------------------- /implemented/0000-offline-mirror-pruning.md: --------------------------------------------------------------------------------
1 | - Start Date: 2017-02-11
2 | - RFC PR: https://github.com/yarnpkg/rfcs/pull/49
3 | - Yarn Issue: https://github.com/yarnpkg/yarn/issues/2109
4 | - Implementation: https://github.com/yarnpkg/yarn/pull/2836
5 |
6 | # Summary
7 |
8 | It would be helpful to have a built-in method to remove unneeded tarballs
9 | from an offline mirror.
10 |
11 | # Motivation
12 |
13 | `yarn add` and `yarn remove` keep `package.json`, `node_modules`, and
14 | `yarn.lock` in sync, so when a package is removed, it is removed in all three
15 | places if possible. The same is not true for an offline mirror. When
16 | a package is removed, it does not get deleted from the mirror, even if no other
17 | package depends on it.
18 |
19 | This behavior would be desirable in an environment where many projects share
20 | the same offline mirror. However, when an offline mirror is only used by one
21 | project, it would be reasonable to trim tarballs from the mirror when they are
22 | no longer required. Developers would be able to keep the offline mirror as
23 | small as possible, which is particularly beneficial when the mirror is checked
24 | into source control.
25 |
26 | # Detailed design
27 |
28 | The feature can be controlled through a new configuration setting that turns on
29 | automatic pruning. When `yarn-offline-mirror-pruning` is set to `true`, `yarn`
30 | will check the offline mirror whenever `yarn.lock` is changed. If a package
31 | exists in the mirror but no longer exists in `yarn.lock`, the package will be
32 | deleted from the mirror.
33 |
34 | Setting `yarn-offline-mirror-pruning` to `false` should result in the current
35 | behavior (no pruning), which would also be the default behavior.
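For illustration, the pruning pass described above could look roughly like this. It is a sketch, not the actual implementation; `lockfileEntries` is assumed to be the already-parsed contents of `yarn.lock`:

```js
// Sketch: remove mirror tarballs no longer referenced by any `resolved`
// entry of the lockfile. Illustrative only.
const fs = require('fs');
const path = require('path');

function pruneOfflineMirror(mirrorDir, lockfileEntries) {
  // Collect the tarball file names still referenced by yarn.lock,
  // e.g. "left-pad-1.1.3.tgz" taken from each `resolved` value.
  const referenced = new Set();
  for (const entry of Object.values(lockfileEntries)) {
    if (!entry.resolved) continue;
    referenced.add(path.basename(entry.resolved.split('#')[0]));
  }

  // Delete every tarball in the mirror that is not referenced anymore.
  for (const file of fs.readdirSync(mirrorDir)) {
    const isTarball = file.endsWith('.tgz') || file.endsWith('.tar.gz');
    if (isTarball && !referenced.has(file)) {
      fs.unlinkSync(path.join(mirrorDir, file));
    }
  }
}
```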
36 |
37 | # How We Teach This
38 |
39 | This feature should be presented as an extension to the current workflow for
40 | maintaining an offline mirror. With a proper setup, the developer does not need
41 | to worry about adding a package to the mirror. `yarn add` handles that work.
42 | Similarly, with pruning turned on through the configuration, the developer
43 | would not need to worry about removing packages from the mirror.
44 |
45 | We would want to add an explanation for this feature to the existing
46 | documentation. It would not be a priority to teach this feature to
47 | brand new Yarn users since it is not critical functionality and only applies
48 | to people who want to use an offline mirror.
49 |
50 | # Drawbacks
51 |
52 | Turning on the feature may add non-negligible processing time for certain `yarn`
53 | commands if the offline mirror is large.
54 |
55 | In a monorepo setting, one project that turns on pruning may accidentally
56 | wipe out a significant portion of a shared offline mirror.
57 |
58 | # Alternatives
59 |
60 | Users could just let their offline mirrors grow indefinitely.
61 |
62 | They could also write their own script that parses `yarn.lock` and removes
63 | unneeded packages from the offline mirror.
64 |
65 | # Unresolved questions
66 |
67 | Do we need to worry about a project turning on the feature when using a shared
68 | offline mirror?
69 |
70 | Should we also add a new flag to the CLI commands to do pruning on demand rather
71 | than only being able to rely on automatic pruning? I can't think of a good use
72 | case for only wanting to do pruning sometimes rather than always or never.
73 |
-------------------------------------------------------------------------------- /implemented/0000-offline-resolution-field.md: --------------------------------------------------------------------------------
1 | - Start Date: 28 Feb 2017
2 | - RFC PR: https://github.com/yarnpkg/rfcs/pull/51
3 | - Implementation: https://github.com/yarnpkg/yarn/pull/2970
4 |
5 | # Summary
6 |
7 | When enabling the offline mirror, Yarn updates the lockfile by stripping the registry URL from its `resolved` fields. This RFC aims to simplify this process by making such an update unneeded.
8 |
9 | Related issues: https://github.com/yarnpkg/yarn/issues/393 / https://github.com/yarnpkg/yarn/issues/394
10 |
11 | Tentative implementation: https://github.com/yarnpkg/yarn/pull/2970/
12 |
13 | # Motivation
14 |
15 | Yarn currently has two different types of values for the lockfile `resolved` fields:
16 |
17 | - When online, they're in the form `${source}/${name}-${version}.tar.gz#${hash}`
18 |
19 | - But when offline, they're instead `${name}-${version}.tar.gz#${hash}`
20 |
21 | The current reasoning (or at least side effect) seems to be that it allows the fetch process to refuse installing things from the network when running under the `--offline` switch (and to always fetch things from the network otherwise instead of looking into the offline mirror). Unfortunately, such a separation also makes it harder to switch between working with a remote registry and an offline repository (for example, dev environments might not need the offline repository, but under the current design they can't do without it).
22 |
23 | Because of these reasons, it would be best for the `resolved` field to contain the same information during both online and offline work, *as long as the files we fetch are the expected ones* (i.e. their hashes match).
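To illustrate why keeping the registry URL everywhere is enough, the offline code path only needs the tarball's file name, which can always be derived from the full `resolved` value. This is a sketch under that assumption, not Yarn's actual fetcher:

```js
// Sketch: derive the offline-mirror location and expected hash from a
// `resolved` field that keeps the full registry URL. Illustrative only.
const path = require('path');
const { URL } = require('url');

function mirrorLocationFor(resolved, mirrorDir) {
  // e.g. "https://registry.yarnpkg.com/left-pad/-/left-pad-1.1.3.tar.gz#abc123"
  const [url, expectedHash] = resolved.split('#');
  const fileName = path.posix.basename(new URL(url).pathname); // "left-pad-1.1.3.tar.gz"
  return { file: path.join(mirrorDir, fileName), expectedHash };
}

// Online, a fetcher would download `url` and check the hash; offline, it would
// read `file` from the mirror and check the same hash, so a single lockfile
// serves both workflows.
```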
24 | 25 | # Detailed design 26 | 27 | I suggest the following: 28 | 29 | - Adding the `${source}` part to the `resolved` field even when offline 30 | 31 | # Drawbacks 32 | 33 | Nothing should break. More iterations will be required to address the other issues raised in the previous iterations of this document. 34 | 35 | # How We Teach This 36 | 37 | This change is quite transparent, since it's unlikely the users will ever want to update the yarn.lock file manually. 38 | 39 | # Alternatives 40 | 41 | - Instead of adding the package registry to each `resolved` field, we could remove it instead. 42 | -------------------------------------------------------------------------------- /implemented/0000-rename-yarn-clean.md: -------------------------------------------------------------------------------- 1 | - Start Date: 2017-03-13 2 | - RFC PR: (leave this empty) 3 | - Yarn Issue: (leave this empty) 4 | 5 | # Summary 6 | 7 | Rename `yarn clean` command. 8 | 9 | # Motivation 10 | 11 | 1. People might expect `yarn clean` to do a different thing (to clean build artifacts) 12 | than it actually does 13 | (cleans node modules of some files and sets the system to clean it automatically in the future). 14 | This is a very natural expectation because it is usually possible to do, e.g., 15 | `yarn watch`, `yarn dist`, and of course, `yarn clean` seems like a logical command 16 | (compare with gradle clean, mvn clean, gulp clean). 17 | 18 | Next, when `yarn clean` is executed a user realizes that it is not the correct command 19 | but it does not seem to do anything drastic (probably just cleans some yarn caches) and 20 | forgets about it. 21 | (Of course, the right thing to do would be to look at https://yarnpkg.com/en/docs/cli/clean instead.) 22 | The result often is that the project is broken and the bug is hard to detect. 23 | There are a lot of such bugs in this project and more elsewhere (example: twbs/bootstrap-sass#1097). 24 | 25 | The name is confusing and the best way to stop confusion seems to be renaming the command. 26 | 27 | 2. Also, it is nice to be able to run a user-defined `yarn clean` if you already can run a user-defined `yarn build`. 28 | 29 | # Detailed design 30 | The command is to be renamed to `autoclean`. 31 | 32 | Next, `clean` should be available for redefinition by user scripts. 33 | 34 | It should be more or less safe after that to produce an error in a standard way when `clean` 35 | is executed but the user script is missing. This is because the command is not usually run 36 | during a build but is mostly executed manually and its state persisted in cvs as a `.yarnclean` file. 37 | 38 | 39 | # How We Teach This 40 | The documentation should only reflect the change of the command name. 41 | 42 | Release notes should provide a note about a possible breaking change. 43 | 44 | # Drawbacks 45 | 46 | We should not do this if yarn does not want to promote using `yarn command` 47 | for user defined scripts. Note that then the existing usage of such commands 48 | should also be deprecated. 49 | 50 | # Alternatives 51 | 1. Produce big and nice warnings when the command is used 52 | and on subsequent installation of modules (listing cleaned/ignored files). 53 | (This does not solve point 2. of the Motivation.) 54 | 2. Deprecate `yarn command` for user-defined scripts. (So that only `yarn run command` is supported.) 55 | 3. Deprecate running user-defined commands through yarn altogether (and optionally provide 56 | a different default command for running user-scripts, e.g., `yarun`). 
57 | 4. Some alternative proposed new name candidates are: 58 | `delete-module-bloat`, `delete-package-assets`, `remove-module-files`, 59 | `enable-advanced-auto-disk-space-optimizations`, `prune`, `pruneModules`, 60 | `strip`, `shrink`, `stripModules`, `shrinkModules`, `cleanModules`, 61 | `stripPackages`, `yarn-clean`. 62 | -------------------------------------------------------------------------------- /implemented/0000-selective-versions-resolutions.md: -------------------------------------------------------------------------------- 1 | - Start Date: 2017-05-21 2 | - RFC PR: (leave this empty) 3 | - Yarn Issue: (leave this empty) 4 | 5 | # Summary 6 | 7 | Allow to select a nested dependency version via the `resolutions` field of 8 | the `package.json` file. 9 | 10 | # Motivation 11 | 12 | The motivation was initially discussed in 13 | [yarnpkg/yarn#2763](https://github.com/yarnpkg/yarn/issues/2763). 14 | 15 | Basically, the problem with the current behaviour of yarn is that it is 16 | not possible to force the use of a particular version for a nested dependency. 17 | 18 | ## Example 19 | 20 | For example, given the following content in the `package.json`: 21 | ```json 22 | "devDependencies": { 23 | "@angular/cli": "1.0.3", 24 | "typescript": "2.3.2" 25 | } 26 | ``` 27 | 28 | The `yarn.lock` file will contain: 29 | ``` 30 | "typescript@>=2.0.0 <2.3.0": 31 | version "2.2.2" 32 | resolved "https://registry.yarnpkg.com/typescript/-/typescript-2.2.2.tgz#606022508479b55ffa368b58fee963a03dfd7b0c" 33 | 34 | typescript@2.3.2: 35 | version "2.3.2" 36 | resolved "https://registry.yarnpkg.com/typescript/-/typescript-2.3.2.tgz#f0f045e196f69a72f06b25fd3bd39d01c3ce9984" 37 | ``` 38 | 39 | Also, there will be: 40 | - `typescript@2.3.2` in `node_modules/typescript` 41 | - `typescript@2.2.2` in `node_modules/@angular/cli/node_modules`. 42 | 43 | ## Problem 44 | 45 | In this context, it is impossible to force the use of `typescript@2.3.2` for 46 | the whole project (except by flattening the whole project, which we don't want). 47 | 48 | It makes sense for typescript as the user intent is clearly to use typescript 49 | 2.3.2 for compiling all its project, and with the current behaviour, the angular 50 | CLI (responsible of compiling `.ts` files) will simply use the 2.2.2 version 51 | from its `node_modules`. 52 | 53 | Similarly, even using such a content for `package.json`: 54 | ```json 55 | "devDependencies": { 56 | "@angular/cli": "1.0.3" 57 | } 58 | ``` 59 | 60 | The need could arise for forcing the use of `typescript@2.3.2` (or 61 | `typescript@2.1.0` for that matter). 62 | 63 | ## Why? 64 | 65 | In these example, the need does not seem very important (the user could maybe 66 | use `typescript@2.2.2` or ask the `@angular/cli` dev team to relax its 67 | constraints on typescript), but there could be cases where a nested dependency 68 | introduces a bug and the project developer would want to set a specific 69 | version for it (see for example this 70 | [comment](https://github.com/yarnpkg/yarn/issues/2763#issuecomment-302682844)). 71 | 72 | ## Related scenario (out of scope of this document) 73 | 74 | An extension of this motivation is also the potential need for mapping nested 75 | dependencies to others. For example a project developer could want to map 76 | `typescript@>=2.0.0 <2.3.0` to `my-typescript-fork@2.0.0`. 77 | 78 | See alternatives solutions below also. 
79 | 80 | # Detailed design 81 | 82 | The proposed solution is to make the `resolutions` field of the `package.json` 83 | file to be considered all the time and on a per-package basis (instead of 84 | only when the `--flat` parameter is used). 85 | 86 | When a nested dependency is being resolved by yarn, if the `resolutions` field 87 | contains a specification for this package, then it will be used instead. 88 | 89 | Special attention to the specific nature of package management in the npm 90 | ecosystem is given in this RFC: indeed, it is not unusual to have the same 91 | package being present as a nested dependency of multiple packages with 92 | different versions. It is thus possible within the `resolutions` field to 93 | express versions either for the whole dependency tree or only for a subset of 94 | it, using a syntax relying on glob patterns. 95 | 96 | Most of the examples are given with exact dependencies, but note that using a 97 | non-exact specification in the `resolutions` field should be accepted and 98 | resolved by yarn like it usually does. This subject is discussed below also. 99 | 100 | Any potentially counter-intuitive situation will result in a warning being 101 | issued. This subject is discussed at the end of this section. 102 | 103 | ## Examples 104 | 105 | We have the following packages and their dependencies: 106 | ``` 107 | package-a@1.0.0 108 | |_ package-d1@1.0.0 109 | |_ package-d2@1.0.0 110 | 111 | package-a@2.0.0 112 | |_ package-d1@2.0.0 113 | |_ package-d2@1.0.0 114 | 115 | package-b@1.0.0 116 | |_ package-d1@2.0.0 117 | |_ package-d2@1.0.0 118 | 119 | package-c@1.0.0 120 | |_ package-a@2.0.0 121 | |_ package-d1@2.0.0 122 | |_ package-d2@1.0.0 123 | ``` 124 | 125 | With: 126 | ```json 127 | "dependencies": { 128 | "package-a": "1.0.0", 129 | "package-b": "1.0.0" 130 | }, 131 | "resolutions": { 132 | "**/package-d1": "2.0.0" 133 | } 134 | ``` 135 | 136 | yarn will use `package-d1@2.0.0` for every nested dependency to `package-d1` 137 | and will behave as expected with respect to the `node_modules` folder by not 138 | duplicating the `package-d1` installation. 139 | 140 | With: 141 | ```json 142 | "dependencies": { 143 | "package-a": "1.0.0", 144 | "package-b": "1.0.0" 145 | }, 146 | "resolutions": { 147 | "package-a/package-d1": "3.0.0" 148 | } 149 | ``` 150 | 151 | yarn will use `package-d1@3.0.0` only for `package-a` and `package-b` will 152 | still have `package-d1@2.0.0` in its own `node_modules`. 153 | 154 | With: 155 | ```json 156 | "dependencies": { 157 | "package-a": "1.0.0", 158 | "package-c": "1.0.0" 159 | }, 160 | "resolutions": { 161 | "**/package-a": "3.0.0" 162 | } 163 | ``` 164 | 165 | `package-a` will still be resolved to `1.0.0`, but `package-c` will have 166 | `package-a@3.0.0` in its own `node_modules`. 167 | 168 | With: 169 | ```json 170 | "dependencies": { 171 | "package-a": "1.0.0", 172 | "package-c": "1.0.0" 173 | }, 174 | "resolutions": { 175 | "package-a": "3.0.0" 176 | } 177 | ``` 178 | 179 | yarn will do nothing (see below why). 180 | 181 | With: 182 | ```json 183 | "dependencies": { 184 | "package-a": "1.0.0", 185 | "package-c": "1.0.0" 186 | }, 187 | "resolutions": { 188 | "**/package-a/package-d1": "3.0.0" 189 | } 190 | ``` 191 | 192 | yarn will use `package-d1@3.0.0` both for `package-a` and the nested 193 | dependency `package-a` of `package-c`. 194 | 195 | ## Resolutions 196 | 197 | Each sub-field of the `resolutions` field is called a *resolution*. 
198 | It is a JSON field expressed by two strings: the package designation on the 199 | left and a version specification on the right. 200 | 201 | ### Package designation 202 | 203 | A *resolution* contains on the left-hand side a glob pattern applied to 204 | the dependency tree (and not to the `node_modules` directory tree, since the 205 | latter is the result of yarn resolution being influenced by the *resolution*). 206 | 207 | - `a/b` denotes the directly nested dependency `b` of the project's 208 | dependency `a`. 209 | - `**/a/b` denotes the directly nested dependency `b` 210 | of all the dependencies and nested dependencies `a` of the project. 211 | - `a/**/b` denotes all the nested dependencies `b` of the project's 212 | dependency `a`. 213 | - `**/a` denotes all the nested dependencies `a` of the project. 214 | - `a` is an alias for `**/a` (for retro-compatibility, see below, and because 215 | if it wasn't such an alias, it wouldn't mean anything as it would represent 216 | one of the non-nested project dependencies, which can't be overridden as 217 | explained below). 218 | - `**` denotes all the nested dependencies of the project (a bad idea mostly, 219 | as well as all other designations ending with `**`). 220 | 221 | Note on single star: `*` is not authorized in a package resolution because it 222 | would introduce too much non-determinism. For example, there is the risk of a 223 | referring to `package-*` at one point to match `package-a` and `package-b`, 224 | and later on, this would match a new nested dependency `package-c` that wasn't 225 | intended to be matched. 226 | 227 | ### Version specification 228 | 229 | A *resolution* contains on the right-hand side a version specification 230 | interpreted via the `semver` package as usually done in yarn. 231 | 232 | ## Relation to non-nested dependencies 233 | 234 | The `devDependencies`, `optionalDependencies` and `dependencies` fields always 235 | take precedence over the 236 | `resolutions` field: if the user defines explicitly a dependency there, 237 | it means that he wants that version, even if it's specified with a non-exact 238 | specification. So the `resolutions` field only applies to nested-dependencies. 239 | Nevertheless, in case of incompatibility between the specification of a 240 | non-nested dependency version and a *resolution*, a warning is issued. 241 | 242 | This is coherent with the fact that the package designation `package-a` can be 243 | used safely as an alias of `**/package-a`: if it wasn't the case, `package-a` 244 | would designate one of the non-nested dependencies and would be ignored. 245 | 246 | ## Retro compatibility for the `resolutions` field 247 | 248 | Until now, the`resolutions` field can contain *resolutions* of the following 249 | form (filled by `add --flat` or `install --flat`): 250 | ```json 251 | "resolutions": { 252 | "package-a": "1.0.0" 253 | } 254 | ``` 255 | 256 | With the current proposal, the package designation `package-a` is an alias for 257 | `**/package-a`: this means the behaviour of yarn with a project whose 258 | `resolutions` field contains *resolutions* filed by a pre-RFC yarn will be 259 | as expected: the nested dependencies will have the fixed version specified. 260 | 261 | ## Relation to the `--flat` option 262 | 263 | Before this RFC, `--flat` is both about populating resolutions field AND 264 | taking resolutions field into account when executing the `install` command 265 | (including installation as part of the `add` command). 
266 |
267 | This RFC is about taking the `resolutions` field into account when executing
268 | the `install` command (including installation as part of the `add` command).
269 |
270 | So with this RFC, `--flat` is now only about populating the `resolutions` field.
271 | It does it in the same way as before (using a package designation in the
272 | form of `package-name`).
273 |
274 | The only breaking change is that the `resolutions` field is always considered
275 | by yarn, even when `--flat` is not specified!
276 |
277 | Incidentally, this resolves the strange situation where two developers would be
278 | working on the same project, and one is using `--flat` while the other is not,
279 | and they would get different `node_modules` contents because of that.
280 |
281 | Note that `--flat` being related to the installation mode (it is used via
282 | the `install` command, but also via the `add` command but pertains to the
283 | installation itself, not the adding), it will continue to behave as before
284 | by asking for *resolutions* of all the nested dependencies of the project even
285 | with `add`.
286 |
287 | In the future, `--flat` will need to be rethought but for now we will keep
288 | its behaviour.
289 |
290 | ## `yarn.lock`
291 |
292 | This design implies that it is possible to have for a given version
293 | specification (e.g., `>=2.0.0 <2.3.0`) a resolved version that is incompatible
294 | with it (e.g., `2.3.2`). It is acceptable as long as it is explicitly
295 | asked by the user via a *resolution*.
296 |
297 | It is currently the case that such a situation would make yarn unhappy and
298 | provoke the modification of the `yarn.lock` (see
299 | [yarnpkg/yarn#3420](https://github.com/yarnpkg/yarn/issues/3420)).
300 |
301 | This feature would remove the need for this behaviour of yarn.
302 |
303 | ## Relation to the `check` command
304 |
305 | The default `check` (without specific options) reads `yarn.lock` and makes
306 | sure that all versions in it match what is inside `node_modules`.
307 |
308 | We should thus get this for free without extra changes.
309 |
310 | ### `--verify-tree`
311 |
312 | `--verify-tree` was built to make sure that all packages inside `node_modules`
313 | are consistent with each other independently of yarn's resolution logic.
314 |
315 | If you force a version that does not match semver requirements of a package,
316 | `--verify-tree` would throw an error.
317 |
318 | For now we don't need to make changes to it, but later, we can expand
319 | `--verify-tree` to support the overrides of the `resolutions` field.
320 |
321 | ## Non-exact version specifications
322 |
323 | If there is a non-exact specification in the `resolutions` field, the rule is
324 | the same: the `resolutions` field takes precedence over the specification in a
325 | nested dependency.
326 |
327 | If the `resolutions` field is broader than the nested dependency
328 | specification, a warning can be issued. This happens if the exact
329 | version resolved by yarn based on the `resolutions` specification is
330 | incompatible with the nested dependency specification.
331 |
332 | For example, if `@angular/cli` depends on `typescript@>=2.0.0 <2.3.0` and the
333 | `resolutions` field contains `typescript@>=2.0.0 <2.4.0`, then if the latest
334 | available version for typescript is `2.2.2`, no warning is issued, and if the
335 | latest available version for typescript is `2.3.2` then a warning is issued.
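That warning check boils down to testing the version actually picked through the *resolution* against the range the dependent package declared; a sketch using the `semver` package (which yarn already uses for version specifications):

```js
// Sketch: warn when the version forced by a resolution does not satisfy the
// range declared by the dependent package.
const semver = require('semver');

function warnIfIncompatible(declaredRange, resolvedVersion) {
  if (!semver.satisfies(resolvedVersion, declaredRange)) {
    console.warn(
      `Resolution picks ${resolvedVersion}, which does not satisfy the ` +
      `declared range "${declaredRange}"`
    );
  }
}

warnIfIncompatible('>=2.0.0 <2.3.0', '2.2.2'); // no warning
warnIfIncompatible('>=2.0.0 <2.3.0', '2.3.2'); // warning is issued
```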
336 |
337 | The rationale behind this is that since the `yarn.lock` file is only modified
338 | by the user (via yarn commands), a warning will always be issued before
339 | such a situation happens and is written to the `yarn.lock` file.
340 |
341 | ## Warnings in logs
342 |
343 | yarn should warn about the following situations:
344 | 1. Unused resolutions.
345 |
346 | 2. Incompatible resolutions (see also above the sections about `yarn.lock`
347 | and about broadening non-exact specifications).
348 | Basically, an incompatible resolution is used because a package does not
349 | correctly express its dependencies. In an ideal world, the package should
350 | be fixed at one point or another and the resolution should be removed.
351 | In that sense, incompatible resolutions should always be warned about.
352 | Furthermore, an incompatible resolution is a potential for unwanted behaviour
353 | and should thus never be ignored by the user.
354 |
355 | ## Locality of the *resolutions*
356 |
357 | The `resolutions` field only applies to the local project and not to the projects
358 | that depend on it. It is the same as with lock files in a way.
359 |
360 | # How We Teach This
361 |
362 | This won't have much impact as it extends the current behaviour by adding
363 | functionality.
364 |
365 | The only breaking change is that `resolutions` is being considered all the time,
366 | but that won't surprise people; it simply makes yarn's behaviour more
367 | consistent than before (see the comment on `--flat` above).
368 |
369 | The term "resolution" has the same meaning as before, but it is not under the
370 | sole control of yarn itself anymore, but also under the control of the user
371 | now.
372 |
373 | This is an advanced use of yarn, so new users don't really have to know about
374 | it in the beginning. Still, it is meant to be used on a potentially regular
375 | basis, in particular when some packages a project depends on have problems
376 | in their own dependencies.
377 |
378 | Thus it would make sense to have a bit of the documentation talking about
379 | this use case and underlining the fact that *resolutions* are mostly here
380 | on a temporary basis.
381 |
382 | # Drawbacks
383 |
384 | ## Teaching
385 |
386 | It makes yarn behaviour a bit more complex, although more useful. So it
387 | can be difficult for users to wrap their heads around it. The RFC submitter has
388 | seen it happen many times with maven, which is quite complex but complete in
389 | its dependency management. Users would get confused and it can take time to
390 | understand the implications of manipulating the `resolutions` field.
391 |
392 | # Alternatives
393 |
394 | ## Global nested dependencies resolution
395 |
396 | Starting from an example, this solution would take the following form in the
397 | `package.json` file:
398 | ```json
399 | "devDependencies": {
400 | "@angular/cli": "1.0.3",
401 | "typescript": "2.3.2"
402 | },
403 | "resolutions": {
404 | "typescript": "2.0.2"
405 | }
406 | ```
407 |
408 | yarn would use `typescript@2.0.2` for the whole project and that's all.
409 | The same kind of considerations (outside of the glob pattern aspect) would apply
410 | as with the selected solution of this RFC.
411 |
412 | This is basically too simple according to discussions with yarn maintainers.
413 | 414 | ## Mapping version specifications 415 | 416 | This is a kind of simplified solution to the "out-of-scope scenario" presented 417 | in the Motivations section above (it maps versions but not dependency names). 418 | 419 | It was proposed in this 420 | [comment](https://github.com/yarnpkg/yarn/issues/2763#issuecomment-301896274). 421 | 422 | It is similar to the previous alternative but with a version specification 423 | in the package designation. This would take this form in the `package.json`: 424 | ```json 425 | "devDependencies": { 426 | "@angular/cli": "1.0.3", 427 | "typescript": "2.2.2", 428 | "more dependencies..." 429 | }, 430 | "resolutions": { 431 | "typescript@>=2.0.0 <2.3.0": "typescript@2.3.2" 432 | } 433 | ``` 434 | 435 | yarn would then replace matching version specifications with the user's one. 436 | For example a dependency normally resolved to `typescript@2.2.2` would be 437 | resolved in practice to `typescript@2.3.2`. 438 | 439 | This is too advanced and can be considered a possible extension of this RFC. 440 | 441 | ## Mapping version specifications as well as packages name 442 | 443 | Same as the two above but with a different name on the right-hand side of the 444 | *resolution*: 445 | ```json 446 | "devDependencies": { 447 | "@angular/cli": "1.0.3", 448 | "typescript": "2.2.2" 449 | }, 450 | "resolutions": { 451 | "typescript@>=2.0.0 <2.3.0": "my-typescript-fork@2.3.2" 452 | } 453 | ``` 454 | 455 | or even: 456 | ```json 457 | "devDependencies": { 458 | "@angular/cli": "1.0.3", 459 | "typescript": "2.2.2" 460 | }, 461 | "resolutions": { 462 | "typescript": "my-typescript-fork", 463 | } 464 | ``` 465 | 466 | and the version specification would be conserved. 467 | 468 | This is too advanced and can be considered a possible extension of this RFC. 469 | 470 | # Future extensions 471 | 472 | The two alternatives discussed in the section just above, "Mapping version 473 | specifications" and "Mapping version specifications as well as packages name", 474 | can be adapted to the current proposition to support these uses cases as well. 475 | 476 | ## `flatten` 477 | 478 | Some notes on `--flat` and its future with respect to this RFC. 479 | 480 | The `--flat` option of `install` could be transformed to a `flatten` command 481 | that would: 482 | 1. Fill in the *resolutions* for all nested dependencies. 483 | 2. Set the `flat` field in the `package.json`. 484 | 485 | It makes no real sense to have a flattening mode for `install`: 486 | 1. `install` already follows the `resolutions` field with this RFC. 487 | 2. `install` should be only about building the `node_modules` directory, not 488 | modifying the the `package.json` IMHO. 489 | 490 | Then the `flat` option in the `package.json` (and the `--flat` option of `add`) 491 | would apply not to the installation but to the adding, upgrading, etc 492 | (everything that modify the `package.json`'s dependencies). It will ensure 493 | that the project stays flattened via the populating of the `resolutions` field. 494 | -------------------------------------------------------------------------------- /implemented/0000-show-updated-packages-only.md: -------------------------------------------------------------------------------- 1 | - Start Date: 2017-02-10 2 | - RFC PR: (leave this empty) 3 | - Yarn Issue: (leave this empty) 4 | 5 | # Summary 6 | 7 | Show only updated packages on `yarn upgrade` with the new version. 
8 |
9 | # Motivation
10 |
11 | When updating I want to know which of "my" packages have been updated (not dependencies of my dependencies) so that I can verify a package has been updated to the expected version. This makes it easy to spot incorrect version constraints you define within your package.json.
12 |
13 | # Detailed design
14 |
15 | Instead of just showing all dependencies, the following should be shown.
16 |
17 | ```
18 | success Saved lockfile.
19 | success Saved 774 new dependencies.
20 |
21 | Updated direct dependencies:
22 | ├─ @webcomponents/custom-elements@1.0.0-alpha.3
23 | ├─ @webcomponents/shadycss@0.0.1
24 | ├─ @webcomponents/shadydom@0.0.1
25 |
26 | All updated dependencies:
27 | ├─ @webcomponents/custom-elements@1.0.0-alpha.3
28 | ├─ @webcomponents/shadycss@0.0.1
29 | ├─ @webcomponents/shadydom@0.0.1
30 | ├─ abbrev@1.0.9
31 | ├─ accepts@1.3.3
32 | ├─ acorn-jsx@3.0.1
33 | ├─ acorn@4.0.11
34 | ├─ ajv-keywords@1.5.1
35 | ├─ ajv@4.11.2
36 | ```
37 |
38 | # How We Teach This
39 |
40 | The new section should be shown when running upgrade. Nothing else should be needed.
41 |
42 | # Drawbacks
43 |
44 | Your dependencies will be shown twice, or a flag needs to be introduced, or the current view (showing all packages) is not available / needs a flag.
45 |
46 | # Alternatives
47 |
48 | - add a flag like `--only-deps` to only show my dependencies
49 | - or add a flag like `--all` to show the current version
50 |
51 | # Unresolved questions
52 |
53 | -
54 |
-------------------------------------------------------------------------------- /implemented/0000-upgrade-command-consistency.md: --------------------------------------------------------------------------------
1 | - Start Date: 2017-06-21
2 | - RFC PR: https://github.com/yarnpkg/yarn/pull/3847
3 | - Yarn Issue: https://github.com/yarnpkg/yarn/issues/3603
4 |
5 | # Summary
6 |
7 | Spawned from https://github.com/yarnpkg/yarn/issues/3603
8 |
9 | There is a lot of confusion among new users of Yarn as to how the `upgrade` and `upgrade-interactive` commands work.
10 |
11 | A lot of that confusion is due to those commands not working the same way (nor are they implemented the same).
12 |
13 | The purpose of this RFC is to align those commands, and begin to share implementation between them.
14 |
15 | # Motivation
16 |
17 | ## Currently (yarn <=0.26):
18 |
19 | `upgrade` = upgrade all packages to their latest version, **respecting** the range in `package.json`
20 | `upgrade left-pad` = upgrade only the left-pad package to its `latest` tag, **ignoring** the range in `package.json`
21 | `upgrade-interactive` = upgrade all packages to their `latest` tag, **ignoring** the range in `package.json`
22 |
23 | It is very confusing that `upgrade` vs `upgrade-interactive` choose different versions, and `upgrade` vs `upgrade {package}` choose different versions.
24 |
25 | # Detailed design
26 |
27 | Major design ideas:
28 |
29 | 1. `upgrade-interactive` should just be an "interactive" version of `upgrade`.
30 | 2. Both commands should respect the package.json semver range by default.
31 | 3. PR #3510 added a `--latest` flag to `upgrade` to tell it to ignore the package.json range. Utilize this change across both commands to have them ignore the package.json range and use the `latest` tag instead.
32 |
33 | ## New Logic:
34 |
35 | * Leave upgrade with no additional parameters as it is:
36 | > yarn upgrade
37 | >
38 | > This command updates all dependencies to their latest version based on the version range specified in the package.json file.
39 |
40 | * Change passing a package without an explicit version to respect package.json
41 | > yarn upgrade [package]
42 | >
43 | > This upgrades a single named package to the latest version based on the version range specified in the package.json file.
44 |
45 | * Leave handling an explicit version the same as it is now
46 | > yarn upgrade [package@version]
47 | >
48 | > This will upgrade (or downgrade) an installed package to the specified version. You can use any SemVer version number or range.
49 |
50 | * Utilize the --latest flag from PR #3510 for an upgrade without a specific package, and add it to the docs
51 | > yarn upgrade --latest
52 | >
53 | > This command updates all dependencies to the version specified by the latest tag (potentially upgrading the package across major versions).
54 |
55 | * Utilize the --latest flag from PR #3510 for an upgrade with a specific package, and add it to the docs
56 | > yarn upgrade [package] --latest
57 | >
58 | > This upgrades a single named package to the version specified by the latest tag (potentially upgrading the package across major versions).
59 |
60 | For `upgrade-interactive`, it would internally just call the `upgrade` logic to follow the same rules above, but would then present the list of packages to the user for them to choose which to upgrade. The exception is that `upgrade-interactive` does not have the ability to take specific package names in its parameters (because the user would choose them from the interactive selection list instead of specifying them on the command line).
61 |
62 |
63 | ## Implementation Details
64 |
65 | Currently, `upgrade` reads all packages and ranges from package.json and forwards them to `add`. `upgrade-interactive` is implemented differently; it uses `PackageRequest.getOutdatedPackages()` to determine only the packages that are out of date, and what version they would update to.
66 |
67 | As part of this work, the upgrade-interactive logic to use `getOutdatedPackages` would be moved over to `upgrade`.
68 |
69 | `PackageRequest.getOutdatedPackages()` already reports the "wanted" (latest respecting package.json specified range) and the "latest" (latest specified by registry, ignoring package.json) versions for all outdated packages. `upgrade` would look for the `--latest` flag to decide which of these versions to upgrade each package to.
70 |
71 | The `upgrade-interactive` command's output will include an additional column named "range". This column will show what the current package.json specified range is. If the `--latest` flag is passed, then the word "latest" will be displayed. In other words, this column is showing the range specifier that upgrade is using to determine what to upgrade to.
72 |
73 | Example:
74 |
75 | ```
76 | dependencies
77 | name range from to url
78 | ❯◯ chai ^3.0.0 3.4.0 ❯ 3.5.8 http://chaijs.com
79 | ```
80 |
81 | This indicates "You have chai@^3.0.0 as a dependency. Currently 3.4.0 is installed. Upgrade will move to 3.5.8".
82 |
83 | Or when using the `--latest` flag:
84 |
85 | ```
86 | dependencies
87 | name range from to url
88 | ❯◯ chai latest 3.4.0 ❯ 4.0.2 http://chaijs.com
89 | ```
90 |
91 | The goal here is to better explain to the user why this version was selected to upgrade to.
92 |
93 |
94 | ## Preserve package.json range operator
95 |
96 | Related to #2367 and #3609, there have been requests that `upgrade` and `upgrade-interactive`, when upgrading to a new major version, preserve the `package.json`-specified version range, if it exists.
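Before walking through an example, here is a rough sketch of both pieces: picking the upgrade target from what `getOutdatedPackages()` reports, and preserving the range operator according to the rules listed after the example below. Helper names and fields are illustrative, not Yarn's actual API:

```js
// Sketch: choose the upgrade target and preserve the package.json range
// operator when --latest crosses a major version. Illustrative only.
function targetVersion(outdated, useLatest) {
  // "wanted" = newest version satisfying the package.json range,
  // "latest" = version behind the registry's `latest` tag.
  return useLatest ? outdated.latest : outdated.wanted;
}

function preserveRangeOperator(currentRange, newVersion, flags = {}) {
  if (flags.exact) return newVersion;
  if (flags.tilde) return `~${newVersion}`;
  if (flags.caret) return `^${newVersion}`;

  // Keep simple operators (exact, ^, ~, =, <=, >); anything more complex
  // falls back to ^, the operator `yarn add` would have written initially.
  const match = /^(\^|~|=|<=|>)?\d/.exec(currentRange);
  return match ? `${match[1] || ''}${newVersion}` : `^${newVersion}`;
}

preserveRangeOperator('~0.1.2', '5.0.0'); // "~5.0.0"
preserveRangeOperator('^1.2.3', '6.0.0'); // "^6.0.0"
preserveRangeOperator('2.3.4', '7.0.0');  // "7.0.0"
```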
97 |
98 | So, for example, if package.json specifies the dependencies
99 |
100 | ```
101 | "foo": "~0.1.2",
102 | "bar": "^1.2.3",
103 | "baz": "2.3.4"
104 | ```
105 |
106 | Then if `upgrade --latest` jumps to a new major version, it will preserve the range specifiers, and upgrade to something like:
107 |
108 | ```
109 | "foo": "~5.0.0",
110 | "bar": "^6.0.0",
111 | "baz": "7.0.0"
112 | ```
113 |
114 | (with the current implementation, all 3 packages would be changed to use caret `@^x.x.x`)
115 |
116 | * This only has an effect when `--latest` is specified; otherwise the package.json file would not be modified.
117 |
118 | * This only works for simple range operators (exact, ^, ~, =, <=, >). Complex operators are not handled. When a range operator is not one of these simple cases, `^` will be used as the default, since that is the normal range operator when adding a package the first time.
119 |
120 | * This behavior is overridden by the following flags: `--caret` `--tilde` `--exact`. If any of these are passed, then that range operator will always be used.
121 |
122 |
123 | # How We Teach This
124 |
125 | Docs will need to be updated to reflect these changes.
126 |
127 |
128 | # Drawbacks
129 |
130 | This is a change to the version ranges selected by `upgrade-interactive`, so it could cause additional confusion to those who use it in previous Yarn versions.
131 |
132 | However, I believe that overall it would reduce the confusion between these commands, and make them more versatile.
133 |
134 |
135 | # Alternatives
136 |
137 | Do not change the behavior. Deal with user confusion and issues that arise from it as they come up.
138 |
139 |
140 | # Unresolved questions
141 |
142 | None at this time.
143 |
-------------------------------------------------------------------------------- /implemented/0000-workspaces-command.md: --------------------------------------------------------------------------------
1 | - Start Date: 2017-05-04
2 | - RFC PR:
3 |
4 | # Summary
5 |
6 | This document specifies a new command that can be used to execute subcommands inside a project's workspaces.
7 |
8 | # Motivation
9 |
10 | With the addition of the Workspace feature, it will be handy to be able to execute commands inside workspaces other than the one in the current directory.
11 |
12 | # Detailed design
13 |
14 | This RFC suggests adding the following commands:
15 |
16 | ## `yarn exec ...`
17 |
18 | Execute a shell command inside the same environment as the one used when running scripts. For example, running `yarn exec env` will print something similar to this:
19 |
20 | ```
21 | PWD=/path/to/project
22 | npm_config_user_agent=yarn/0.23.4 npm/? node/v7.10.0 darwin x64
23 | npm_node_execpath=/usr/bin/node
24 | ...
25 | ```
26 |
27 | ## `yarn workspace <name> ...`
28 |
29 | This command will execute the specified sub-command inside the workspace that is being referenced by `<name>`.
30 |
31 | Recursion aside, it's essentially an alias for:
32 |
33 | ```
34 | $> (cd $(yarn workspace <name> exec pwd) && yarn ...)
35 | ```
36 |
37 | # Drawbacks
38 |
39 | - We will still have the issue of requiring the `--` separator to forward any command line option (`yarn workspace test install -- --production`). It's an important issue, something we really should tackle sooner rather than later.
40 |
-------------------------------------------------------------------------------- /implemented/0000-workspaces-install-phase-1.md: --------------------------------------------------------------------------------
1 | - Start Date: 2017-05-04
2 | - RFC PR:
3 | - Yarn Issue: https://github.com/yarnpkg/yarn/issues/3294
4 |
5 | # Summary
6 |
7 | Add a workspaces keyword to package.json to allow multi-project dependency installation and management.
8 | When running the install command, yarn will aggregate all the dependencies of the package.json files listed in the workspaces field, generate a single yarn.lock, and install them in a single node_modules.
9 |
10 | # Motivation
11 |
12 | This RFC is based on a need to manage multiple projects in one repository; [issue #3294 aggregates all related discussions](https://github.com/yarnpkg/yarn/issues/3294) (see also https://github.com/yarnpkg/yarn/issues/884).
13 |
14 | [Image: https://fb.quip.com/-/blob/CTYAAAX6p4Y/3y0pmJnH2NKkg4ae06hVyw]
15 | We are taking an iterative approach to implementing the whole end-to-end experience of workspaces; there will be RFCs and PRs for:
16 |
17 |
18 | 1. Installing dependencies of all workspaces in the root node_modules (this RFC)
19 | 2. Commands to manage (add/remove/upgrade) dependencies in workspaces from the parent directory
20 | 3. Ability for workspaces to refer to each other, e.g. jest-diff → jest-matcher-utils
21 | 4. package-hoister for workspaces, e.g. when dependencies conflict and can't be installed on root level
22 | 5. Command to publish all workspaces in one go
23 |
24 | For a while workspaces will be considered an experimental feature and we should expect breaking changes until it becomes stable.
25 |
26 | # Detailed design
27 |
28 | I'll use [Jest](https://github.com/facebook/jest) for the example implementation.
29 |
30 | Workspaces can be enabled by a flag in .yarnrc:
31 | ```
32 | yarn-offline-mirror "path"
33 | disable-self-update-check true
34 | workspaces-experimental true
35 | ```
36 |
37 | The structure of the source code is the following:
38 |
39 | ```
40 | | jest/
41 | | ---- package.json
42 | | ---- packages/
43 | | -------- babel-jest/
44 | | ------------ package.json
45 | | -------- babel-preset-jest/
46 | | ------------ package.json
47 | ...
48 | ```
49 |
50 | The top-level package.json looks like this:
51 |
52 | ```
53 | {
54 | "private": true,
55 | "name": "jest",
56 | "devDependencies": {
57 | "ansi-regex": "^2.0.0",
58 | "babel-core": "^6.23.1"
59 | },
60 | "workspaces": [
61 | "packages/*"
62 | ]
63 | }
64 | ```
65 | babel-jest
66 | ```
67 | {
68 | "name": "babel-jest",
69 | "description": "Jest plugin to use babel for transformation.",
70 | "version": "19.0.0",
71 | "repository": {
72 | "type": "git",
73 | "url": "https://github.com/facebook/jest.git"
74 | },
75 | "license": "BSD-3-Clause",
76 | "main": "build/index.js",
77 | "dependencies": {
78 | "babel-core": "^6.0.0",
79 | "babel-plugin-istanbul": "^4.0.0",
80 | "babel-preset-jest": "^19.0.0"
81 | }
82 | }
83 | ```
84 |
85 | babel-preset-jest
86 | ```
87 | {
88 | "name": "babel-preset-jest",
89 | "version": "19.0.0",
90 | "repository": {
91 | "type": "git",
92 | "url": "https://github.com/facebook/jest.git"
93 | },
94 | "license": "BSD-3-Clause",
95 | "main": "index.js",
96 | "dependencies": {
97 | "babel-plugin-jest-hoist": "^19.0.0"
98 | }
99 | }
100 | ```
101 |
102 | If workspaces are enabled and yarn install is run at the root level of jest, Yarn would install dependencies as if the package.json contained all the dependencies of all the package.json files combined, i.e.
103 |
104 | ```
105 | {
106 | "devDependencies": {
107 | "ansi-regex": "^2.0.0",
108 | "babel-core": "^6.23.1"
109 | },
110 | "dependencies": {
111 | "babel-core": "^6.0.0",
112 | "babel-plugin-istanbul": "^4.0.0",
113 | "babel-preset-jest": "^19.0.0",
114 | "babel-plugin-jest-hoist": "^19.0.0"
115 | }
116 | }
117 | ```
118 |
119 | ## Resolving conflicts
120 |
121 | The algorithm is the same as in Yarn's [hoisting algorithm](https://github.com/yarnpkg/yarn/blob/master/src/package-hoister.js) during the linking phase.
122 |
123 | In the example above, babel-core is used in both the top-level package.json and one of the workspaces.
124 | Yarn will resolve the highest possible common version and install it.
125 | If versions conflict, Yarn will install the most commonly used one at the root level and install the other versions in each of the workspaces' folders.
126 |
127 | This should be enough for Node.js to resolve required dependencies when running in each of the workspaces.
128 |
129 | ### Note: In the first implementation, workspace-level hoisting won't be implemented and Yarn will throw an error in case of dependency conflicts between packages.
130 |
131 | ### Note: linking, i.e. workspaces referring to each other, is not covered in this RFC; it will come in a later phase
132 |
133 | ## yarn.lock
134 |
135 | After running yarn install at the top level, Yarn will generate a yarn.lock for all the dependencies used across workspaces and save it only at the root level.
136 | Yarn won't save yarn.lock files in workspaces' folders.
137 |
138 |
139 | ## Running yarn install in workspaces folders
140 |
141 | Yarn will automatically run all commands as if running in the root folder, i.e. install won't install node_modules in workspaces' folders individually.
142 |
143 | ## Check command and integrity check
144 |
145 | Changes will be needed to the [check command](https://github.com/yarnpkg/yarn/blob/master/src/cli/commands/check.js) and [integrity-checker](https://github.com/yarnpkg/yarn/blob/master/src/integrity-checker.js) considering the new patterns added during the resolve phase.
146 |
147 |
148 | # Drawbacks
149 |
150 | Not sure exactly how cross-platform symlinks are today. However, it looks like
151 | [Microsoft might be catching up](https://blogs.windows.com/buildingapps/2016/12/02/symlinks-windows-10/) on this issue.
152 |
153 | # Alternatives
154 |
155 | Lerna, as used in [jest](https://github.com/facebook/jest) now.
156 | Having multi-project dependency management natively in Yarn gives us a more cohesive user experience, and because Yarn has access to the dependency resolution graph, the whole solution should provide more features than a wrapper like Lerna.
157 |
158 | # Unresolved questions
159 |
160 | * Running lifecycle scripts may cause unexpected results if they require a specific folder structure in node_modules.
161 |
162 | * How do we prevent people from publishing a package and forgetting to set up correct dependencies for every workspace? E.g. `left-pad` may be absent from a workspace package.json and be present in the workspace root package.json. Testing the workspace code with node_modules installed in the root won't reveal this issue.
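As a reference for the behaviour described in the detailed design, the aggregation of workspace manifests before resolution could be sketched like this. It is illustrative only; it assumes the `glob` package is available and leaves conflicting ranges to the hoisting step:

```js
// Sketch: combine the root package.json with every workspace package.json
// before resolution, as described in the detailed design. Illustrative only.
const fs = require('fs');
const path = require('path');
const glob = require('glob'); // assumed to be available

function aggregateWorkspaceDependencies(rootDir) {
  const read = p => JSON.parse(fs.readFileSync(p, 'utf8'));
  const root = read(path.join(rootDir, 'package.json'));
  const combined = {
    devDependencies: { ...(root.devDependencies || {}) },
    dependencies: { ...(root.dependencies || {}) },
  };

  for (const pattern of root.workspaces || []) {
    for (const dir of glob.sync(pattern, { cwd: rootDir })) {
      const manifestPath = path.join(rootDir, dir, 'package.json');
      if (!fs.existsSync(manifestPath)) continue;
      // Conflicting ranges are left to the hoisting step ("Resolving conflicts").
      Object.assign(combined.dependencies, read(manifestPath).dependencies || {});
    }
  }

  return combined;
}
```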
163 | -------------------------------------------------------------------------------- /implemented/0000-workspaces-link-phase-3.md: -------------------------------------------------------------------------------- 1 | - Start Date: 2017-05-18 2 | - RFC PR: 3 | - Yarn Issue: https://github.com/yarnpkg/yarn/issues/3294 4 | 5 | # Yarn workspaces phase 3: linking workspaces to each other 6 | 7 | ## Summary 8 | 9 | A continuation of https://github.com/yarnpkg/rfcs/pull/60. 10 | Ability for workspaces to refer each other when testing packages in integration. 11 | 12 | ## Motivation 13 | 14 | People tend to split larger projects into self contained packages that are published to npm independently. The workspaces feature is being developed for Yarn to address this workflow. 15 | 16 | In particular, testing packages that refer other packages from the same codebase can be difficult because Node.js and front end bundling tools would look up the referred packages in node_modules folder as it should be installed from npm registry. 17 | 18 | Yarn Workspaces need to be able to refer to other local packages the same way when local packages are in development mode (source of truth is the package source code) and in production mode (source of truth is the package installed from npm). 19 | 20 | ## Detailed design 21 | 22 | The structure of the source code is following 23 | 24 | ``` 25 | | jest/ 26 | | ---- package.json 27 | | ---- packages/ 28 | | -------- jest-matcher-utils/ 29 | | ------------ package.json 30 | | -------- jest-diff/ 31 | | ------------ package.json 32 | ... 33 | ``` 34 | 35 | Top level package.json is like 36 | 37 | ``` 38 | { 39 | "private": true, 40 | "name": "jest", 41 | "devDependencies": { 42 | }, 43 | "workspaces": [ 44 | "packages/*" 45 | ] 46 | } 47 | ``` 48 | jest-matcher-utils (workspace referred by another one) 49 | ``` 50 | { 51 | "name": "jest-matcher-utils", 52 | "description": "...", 53 | "version": "20.0.3", 54 | "repository": { 55 | "type": "git", 56 | "url": "https://github.com/facebook/jest.git" 57 | }, 58 | "license": "...", 59 | "main": "...", 60 | "browser": "...", 61 | "dependencies": { 62 | "chalk": "^1.1.3", 63 | "pretty-format": "^20.0.3" 64 | } 65 | } 66 | ``` 67 | 68 | jest-diff (workspace that refers jest-matcher-utils) 69 | ``` 70 | { 71 | "name": "jest-diff", 72 | "version": "20.0.3", 73 | "repository": { 74 | "type": "git", 75 | "url": "https://github.com/facebook/jest.git" 76 | }, 77 | "license": "...", 78 | "main": "...", 79 | "browser": "...", 80 | "dependencies": { 81 | "chalk": "^1.1.3", 82 | "diff": "^3.2.0", 83 | "**jest-matcher-utils**": "^20.0.3", 84 | "pretty-format": "^20.0.3" 85 | } 86 | } 87 | ``` 88 | 89 | When user runs yarn install, this folder structure of the Workspace gets created 90 | ``` 91 | | jest/ 92 | | ---- node_modules/ 93 | | -------- chalk/ 94 | | -------- diff/ 95 | | -------- pretty-format/ 96 | | ---- package.json 97 | | ---- packages/ 98 | | -------- jest-matcher-utils/ 99 | | ------------ node_modules/ (empty, all dependencies hoisted to the root) 100 | | ------------ package.json 101 | | -------- jest-diff/ 102 | | ------------ node_modules/ 103 | | ---------------- **jest-matcher-utils**/ (symlink) -> ../jest-matcher-utils 104 | | ------------ package.json 105 | ... 
106 | ```
107 | 
108 | `jest/packages/jest-diff/node_modules/**jest-matcher-utils**` is a relative symlink to `jest/packages/jest-matcher-utils`.
109 | 
110 | ### Dependencies and version matching
111 | 
112 | Yarn would only link workspaces to each other if they match the semver conditions.
113 | For example:
114 | 
115 | * the version in the `jest-matcher-utils` package.json is `20.0.3`
116 | * if the `jest-diff` package.json dependencies list `jest-matcher-utils` with a version specifier that matches `20.0.3`, e.g. `"^20.0.3"`, then Yarn will create a link from `jest-diff/node_modules/jest-matcher-utils` to the `jest-matcher-utils` workspace
117 | * if the `jest-diff` package.json dependencies list `jest-matcher-utils` with a version specifier that does not match `20.0.3`, e.g. `"^19.0.0"`, then Yarn will fetch `jest-matcher-utils@^19.0.0` from the npm registry and install it the regular way
118 | 
119 | 
120 | ### Problems with peer dependencies and hoisting
121 | 
122 | There is a common [peer dependency problem](http://codetunnel.io/you-can-finally-npm-link-packages-that-contain-peer-dependencies/) when using **yarn link** on local packages, which people can work around in Node 6+ by setting the **--preserve-symlinks** runtime flag.
123 | In Workspaces this situation won't be a problem, because node_modules are installed in the Workspace root and Node.js `require()` statements will resolve third-party peer dependencies by going up the folder tree and reaching the Workspace root's node_modules.
124 | 
125 | As long as **jest-matcher-utils** does not make relative requires via its parent folder, the **--preserve-symlinks** flag won't be necessary.
126 | 
127 | ### Installing a workspace in the project root
128 | 
129 | The workspace root may also depend on a workspace, and it should be installed the same way as workspaces that refer to each other, e.g. if jest has `jest-matcher-utils` as a dependency it will be installed like this:
130 | 
131 | ```
132 | | jest/
133 | | ---- node_modules/
134 | | -------- chalk/
135 | | -------- diff/
136 | | -------- pretty-format/
137 | | -------- **jest-matcher-utils**/ (symlink) -> ../packages/jest-matcher-utils
138 | | ---- package.json
139 | | ---- packages/
140 | | -------- jest-matcher-utils/
141 | | ------------ node_modules/ (empty, all dependencies hoisted to the root)
142 | | ------------ package.json
143 | ...
144 | ```
145 | 
146 | ### Build scripts run order and cycle detection
147 | 
148 | From the workspace-linking point of view, the installation phases look like this:
149 | 
150 | 1. Resolution - Yarn identifies all workspaces and which workspaces refer to each other
151 | 2. Fetching - Yarn skips this phase for linked workspaces
152 | 3. Linking - Yarn creates symlinks in the node_modules of referring workspaces in the alphanumeric order of the workspaces (starting with the workspace root)
153 | 4. Running scripts - Yarn runs (pre/post)install scripts for each linked workspace the same way it runs them for packages from the registry. Yarn already has a way to identify cycles between packages during this phase; in that case the order of execution is not controlled by the user. To control script execution order for cyclic dependencies there is an RFC gist https://gist.github.com/thejameskyle/abbc146a8cb5c26194c8acc4d14e7c30 by @thejameskyle
154 | 
155 | ## Drawbacks
156 | 
157 | This solution creates a symlink inside the node_modules of a Workspace package, and symlinks have multiple drawbacks:
158 | 
159 | * Symlinks are not supported in all tools (e.g. watchman)
160 | * Symlinks are not supported well in all OSes and environments (Windows before the Windows 10 Creators Update, Docker on SMB storage(?))
161 | * A symlink to **jest-matcher-utils** does not emulate an actual installation of the package; it just links to the package source code - no prepublish or postinstall lifecycle scripts are executed and no files are filtered (as is done during publishing)
162 | * A version change in the package.json of **jest-matcher-utils** requires Yarn to rebuild the links, which may require file watching
163 | 
164 | ## Alternatives
165 | 
166 | * Run **yarn pack** for **jest-matcher-utils** and install it from a .tgz file
167 |   * PROS
168 |     * Works without symlinks
169 |     * Does not leak non-published files from **jest-matcher-utils**, e.g. its node_modules folder
170 |     * Runs the same pack command as real publishing to the registry (the tests folder and dev files won't be included)
171 |   * CONS
172 |     * Every file change during development of **jest-matcher-utils** will require Yarn to repack and install it
173 |     * Packing/unpacking is an excessive use of CPU
174 | * Hardlink the files in **jest-matcher-utils** (only the ones listed for publishing) into jest-diff/node_modules/**jest-matcher-utils**. A similar idea was expressed in the knit RFC https://github.com/yarnpkg/rfcs/pull/41
175 |   * PROS
176 |     * Works without the drawbacks of symlinks
177 |     * Partially emulates a published package by leaving out non-publishable files, e.g. the node_modules folder
178 |     * Changes in the hardlinked files will be reflected in the referring workspace's node_modules
179 |   * CONS
180 |     * Hardlinks have limited support on Windows prior to Windows 10
181 |     * When new files are created or removed in **jest-matcher-utils**, the hardlinks need to be regenerated; that may require file watching to provide a good developer experience, otherwise the developer needs to run yarn install on every significant change
182 |     * This does not simulate an actual installation of the package, as no prepublish or postinstall lifecycle scripts are executed
183 | 
184 | Yarn Workspaces could implement all of the above linking strategies and let developers choose which one to use for their project.
185 | Alternatively, the approaches could be merged into a single solution for isolated e2e testing.
186 | 
187 | ## Unresolved questions
188 | 
189 | * Is there an issue with Node resolving real paths in symlinked folders (https://github.com/nodejs/node/issues/3402) with this solution?
190 | If workspaces don't make relative requires outside of their root (e.g. to a file in a sibling folder of the one containing the workspace's package.json), all requires should resolve the same way.
191 | 
192 | * Does it need to work for other types of packages: git, file, etc.?
193 | 
194 | * As described in the Workspaces phase 1 RFC (https://github.com/yarnpkg/rfcs/pull/60) there is only one lockfile per workspace. Does yarn.lock need to reference that `jest-matcher-utils@^20.0.0` is resolved as a link to a folder?
195 | 
196 | * Combining multiple workspaces is out of scope of this document.
197 | 
198 | * (related to the general Workspaces RFC) How do we prevent people from publishing a package and forgetting to set up the correct dependencies for every workspace? E.g. `left-pad` may be absent from a workspace package.json and be present in the workspace root package.json. Testing the workspace code with node_modules installed in the root won't reveal this issue (a rough sketch of a possible check follows this list).
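To make that last point concrete, below is a minimal sketch of the kind of check that could catch such "phantom" dependencies. The script name, the regex-based `require()` scan, and the CLI shape are illustrative assumptions, not part of this proposal:

```js
// check-phantom-deps.js (hypothetical) — warns about packages that are require()d
// by a workspace but not declared in that workspace's own package.json, i.e. they
// only resolve because hoisting placed them in the workspace root's node_modules.
const fs = require('fs');
const path = require('path');

function collectJsFiles(dir, files = []) {
  for (const entry of fs.readdirSync(dir)) {
    if (entry === 'node_modules') continue;
    const full = path.join(dir, entry);
    if (fs.statSync(full).isDirectory()) collectJsFiles(full, files);
    else if (full.endsWith('.js')) files.push(full);
  }
  return files;
}

function checkWorkspace(workspaceDir) {
  const pkg = JSON.parse(fs.readFileSync(path.join(workspaceDir, 'package.json'), 'utf8'));
  const declared = new Set(Object.keys({
    ...pkg.dependencies, ...pkg.devDependencies, ...pkg.peerDependencies,
  }));
  // Matches require('pkg') / require("pkg") but skips relative requires.
  const requireRe = /require\(['"]([^'"./][^'"]*)['"]\)/g;
  for (const file of collectJsFiles(workspaceDir)) {
    const source = fs.readFileSync(file, 'utf8');
    let match;
    while ((match = requireRe.exec(source)) !== null) {
      const raw = match[1];
      const name = raw.startsWith('@') ? raw.split('/').slice(0, 2).join('/') : raw.split('/')[0];
      if (!declared.has(name)) {
        console.warn(`${pkg.name}: "${name}" is required in ${file} but not declared in package.json`);
      }
    }
  }
}

checkWorkspace(process.argv[2] || '.');
```

A real check would also need to cover `import` statements, bin scripts, and dynamic requires, but even a rough scan like this would surface dependencies that only resolve thanks to hoisting.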
199 | 
--------------------------------------------------------------------------------
/implemented/0000-yarn-create.md:
--------------------------------------------------------------------------------
1 | - Start Date: 2017-03-31
2 | - RFC PR: (leave this empty)
3 | - Yarn Issue: (leave this empty)
4 | 
5 | # Summary
6 | 
7 | The idea would be to make it easier for users to set up projects from scratch.
8 | 
9 | # Motivation
10 | 
11 | During the past few years, we've seen an increase in the number of "boilerplate" projects, each one aiming to lower the complexity of creating new projects with JavaScript. Create React App is a good example, but we can also mention Neutrino, released by the Mozilla teams, or Next.js, which recently reached its v2 milestone.
12 | 
13 | Despite their utility, using these tools still requires manually installing them, and then keeping them updated. A common workaround has been to promote the use of a "core" package to be installed locally, and a "cli" package to be installed globally that would act as a bridge to the core package. It feels like a hack, and maybe we could do something to help both project maintainers and their users.
14 | 
15 | # Detailed design
16 | 
17 | This RFC suggests adding a new command:
18 | 
19 | ```
20 | $> yarn create <name> ...
21 | ```
22 | 
23 | Running this command would have the same effect as:
24 | 
25 | ```
26 | $> yarn global add yarn-create-<name>
27 | $> yarn-create-<name> ...
28 | ```
29 | 
30 | One could assume that a simple boilerplate would be configured as such:
31 | 
32 | ```json
33 | {
34 |   "bin": {
35 |     "yarn-create-hello": "index.js"
36 |   }
37 | }
38 | ```
39 | 
40 | With `index.js`:
41 | 
42 | ```js
43 | let fs = require(`fs`);
44 | 
45 | fs.writeFileSync(`hello.md`, `Hello World!~`);
46 | ```
47 | 
48 | This RFC doesn't cover the case where `yarn create <name>` is called in an already existing package - it is suggested that the boilerplate modules register new script commands that the user could then use:
49 | 
50 | ```
51 | $> cat package.json
52 | {
53 |   "scripts": {
54 |     "cra": "create-react-app"
55 |   }
56 | }
57 | $> yarn cra eject
58 | ```
59 | 
60 | # Alternatives
61 | 
62 | - We could do more than just running a binary file (maybe automatically copying files, etc.), but I'm not sure it would be a good idea - I feel like such a feature should remain very simple.
63 | 
64 | - The script could be named differently. However, "create" isn't currently used as a lifecycle hook, and doesn't see a lot of usage (of the 490,000+ packages on the npm registry, only 33 of them have a script called "create").
65 | 
66 | # Unresolved questions
67 | 
68 | - The best way this feature could be implemented would probably be via a plugin, since the core project would then not have to worry about clutter. Unfortunately, we've not yet reached the point where we can start exposing a public API, and as such it seems difficult to avoid adding the command into the core app right now.
69 | 
70 | - Should the extra arguments be forwarded to the create script? If it works like the regular ones then no, and users would have to type `yarn create <name> -- --no-eslint` instead of `yarn create <name> --no-eslint`. However, fixing this behaviour might require a different RFC, since consistency would suggest fixing how the parameters are passed to the scripts as well (a rough sketch of the expansion, including argument forwarding, follows below).
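As a rough sketch (not Yarn's actual implementation) of the expansion described in the detailed design above, `yarn create <name> [args...]` would behave roughly like the helper below; the function name and argument handling are assumptions, and whether `extraArgs` are forwarded is exactly the open question above:

```js
// create.js (hypothetical) — illustrates the proposed expansion of `yarn create`.
// Assumes `yarn` and the globally installed bin are available on the PATH.
const { execFileSync } = require('child_process');

function yarnCreate(name, extraArgs) {
  const pkg = `yarn-create-${name}`; // the boilerplate package providing the bin
  execFileSync('yarn', ['global', 'add', pkg], { stdio: 'inherit' });
  // Whether extraArgs are forwarded here is the unresolved question above.
  execFileSync(pkg, extraArgs, { stdio: 'inherit' });
}

// e.g. `node create.js hello --no-eslint`
yarnCreate(process.argv[2], process.argv.slice(3));
```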
71 | 
--------------------------------------------------------------------------------
/text/0000-upgrade-command-consistency.md:
--------------------------------------------------------------------------------
1 | - Start Date: 2017-06-21
2 | - RFC PR: n/a
3 | - Yarn Issue: https://github.com/yarnpkg/yarn/issues/3603
4 | 
5 | # Summary
6 | 
7 | Spawned from https://github.com/yarnpkg/yarn/issues/3603.
8 | 
9 | There is a lot of confusion among new users of Yarn as to how the `upgrade` and `upgrade-interactive` commands work.
10 | 
11 | A lot of that confusion is due to those commands not working the same way (nor are they implemented the same).
12 | 
13 | The purpose of this RFC is to align those commands and begin to share implementation between them.
14 | 
15 | # Motivation
16 | 
17 | ## Currently (yarn <=0.26):
18 | 
19 | * `upgrade` = upgrade all packages to their latest version, **respecting** the range in `package.json`
20 | * `upgrade left-pad` = upgrade only the left-pad package to its `latest` tag, **ignoring** the range in `package.json`
21 | * `upgrade-interactive` = upgrade all packages to their `latest` tag, **ignoring** the range in `package.json`
22 | 
23 | It is very confusing that `upgrade` vs `upgrade-interactive` choose different versions, and that `upgrade` vs `upgrade {package}` choose different versions.
24 | 
25 | # Detailed design
26 | 
27 | Major design ideas:
28 | 
29 | 1. `upgrade-interactive` should just be an "interactive" version of `upgrade`.
30 | 2. Both commands should respect the package.json semver range by default.
31 | 3. PR #3510 added a `--latest` flag to `upgrade` to tell it to ignore the package.json range. Utilize this change across both commands to have them ignore the package.json range and use the `latest` tag instead.
32 | 
33 | ## New Logic:
34 | 
35 | * Leave `upgrade` with no additional parameters as it is:
36 | > yarn upgrade
37 | >
38 | > This command updates all dependencies to their latest version based on the version range specified in the package.json file.
39 | 
40 | * Change passing a package without an explicit version so that it respects package.json:
41 | > yarn upgrade [package]
42 | >
43 | > This upgrades a single named package to the latest version based on the version range specified in the package.json file.
44 | 
45 | * Leave handling of an explicit version as it is:
46 | > yarn upgrade [package@version]
47 | >
48 | > This will upgrade (or downgrade) an installed package to the specified version. You can use any SemVer version number or range.
49 | 
50 | * Utilize the --latest flag from PR #3510 for an upgrade without a specific package, and add it to the docs:
51 | > yarn upgrade --latest
52 | >
53 | > This command updates all dependencies to the version specified by the latest tag (potentially upgrading the packages across major versions).
54 | 
55 | * Utilize the --latest flag from PR #3510 for an upgrade with a specific package, and add it to the docs:
56 | > yarn upgrade [package] --latest
57 | >
58 | > This upgrades a single named package to the version specified by the latest tag (potentially upgrading the package across major versions).
59 | 
60 | `upgrade-interactive` would internally just call the `upgrade` logic to follow the same rules above, but would then present the list of packages to the user so they can choose which ones to upgrade. The exception is that `upgrade-interactive` does not have the ability to take specific package names as parameters (because the user would choose them from the interactive selection list instead of specifying them on the command line).
61 | 
62 | 
63 | ## Implementation Details
64 | 
65 | Currently, `upgrade` reads all packages and ranges from package.json and forwards them to `add`. `upgrade-interactive` is implemented differently; it uses `PackageRequest.getOutdatedPackages()` to determine only the packages that are out of date, and what version they would update to.
66 | 
67 | As part of this work, the `upgrade-interactive` logic that uses `getOutdatedPackages` would be moved over to `upgrade`.
68 | 
69 | `PackageRequest.getOutdatedPackages()` already reports the "wanted" version (the latest that respects the package.json-specified range) and the "latest" version (the latest according to the registry, ignoring package.json) for all outdated packages. `upgrade` would look for the `--latest` flag to decide which of these versions to upgrade each package to.
70 | 
71 | The `upgrade-interactive` command's output will include an additional column named "range". This column will show what the current package.json-specified range is. If the `--latest` flag is passed, then the word "latest" will be displayed instead. In other words, this column shows the range specifier that `upgrade` is using to determine what to upgrade to.
72 | 
73 | Example:
74 | 
75 | ```
76 | dependencies
77 |    name     range    from      to      url
78 | ❯◯ chai     ^3.0.0   3.4.0 ❯  3.5.8    http://chaijs.com
79 | ```
80 | 
81 | This indicates: "You have chai@^3.0.0 as a dependency. Currently 3.4.0 is installed. Upgrade will move to 3.5.8."
82 | 
83 | Or, when using the `--latest` flag:
84 | 
85 | ```
86 | dependencies
87 |    name     range    from      to      url
88 | ❯◯ chai     latest   3.4.0 ❯  4.0.2    http://chaijs.com
89 | ```
90 | 
91 | The goal here is to better explain to the user why this version was selected as the upgrade target.
92 | 
93 | 
94 | ## Preserve package.json range operator
95 | 
96 | Related to #2367 and #3609, there have been requests that `upgrade` and `upgrade-interactive`, when upgrading to a new major version, preserve the version range operator specified in `package.json`, if it exists.
97 | 
98 | So, for example, if package.json specifies the dependencies
99 | 
100 | ```
101 | "foo": "~0.1.2",
102 | "bar": "^1.2.3",
103 | "baz": "2.3.4"
104 | ```
105 | 
106 | then if `upgrade --latest` jumps to a new major version, it will preserve the range specifiers and upgrade to something like:
107 | 
108 | ```
109 | "foo": "~5.0.0",
110 | "bar": "^6.0.0",
111 | "baz": "7.0.0"
112 | ```
113 | 
114 | (With the current implementation, all 3 packages would be changed to use the caret, `@^x.x.x`.)
115 | 
116 | * This only has an effect when `--latest` is specified; otherwise the package.json file would not be modified.
117 | 
118 | * This only works for simple range operators (exact, ^, ~, =, <=, >). Complex operators are not handled. When a range operator is not one of these simple cases, `^` will be used as the default, since that is the normal range operator when adding a package for the first time.
119 | 
120 | * This behavior is overridden by the following flags: `--caret`, `--tilde`, `--exact`. If any of these is passed, then that range operator will always be used.
121 | 
122 | 
123 | # How We Teach This
124 | 
125 | Docs will need to be updated to reflect these changes.
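For example, the updated docs could illustrate the range-preservation rule with a short snippet along these lines (a sketch only, not the actual implementation; it assumes plain `x.y.z` versions, ignores the `--caret`/`--tilde`/`--exact` overrides, and treats anything non-simple as the `^` fallback):

```js
// Sketch: pick the new package.json specifier when `upgrade --latest` crosses
// a major version, preserving the simple range operators listed above.
function nextRange(currentRange, latestVersion) {
  // A "simple" range looks like "<op><version>" where <op> is one of: '', ^, ~, =, <=, >.
  const match = currentRange.match(/^(\^|~|<=|>|=)?(\d+\.\d+\.\d+)$/);
  // Complex specifiers (e.g. ">=1.0.0 <2.0.0") fall back to `^`, the default
  // operator used when a package is added for the first time.
  const prefix = match ? (match[1] || '') : '^';
  return `${prefix}${latestVersion}`;
}

console.log(nextRange('~0.1.2', '5.0.0'));         // "~5.0.0"
console.log(nextRange('^1.2.3', '6.0.0'));         // "^6.0.0"
console.log(nextRange('2.3.4', '7.0.0'));          // "7.0.0"
console.log(nextRange('>=1.0.0 <2.0.0', '7.0.0')); // "^7.0.0" (complex range, default used)
```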
126 | 
127 | 
128 | # Drawbacks
129 | 
130 | This is a change to the version ranges selected by `upgrade-interactive`, so it could cause additional confusion for those who used it in previous Yarn versions.
131 | 
132 | However, I believe that overall it would reduce the confusion between these commands and make them more versatile.
133 | 
134 | 
135 | # Alternatives
136 | 
137 | Do not change the behavior. Deal with user confusion and issues that arise from it as they come up.
138 | 
139 | 
140 | # Unresolved questions
141 | 
142 | None at this time.
143 | 
--------------------------------------------------------------------------------