├── .clang-format
├── .clang-tidy
├── .envrc
├── .github
│   ├── ISSUE_TEMPLATE
│   │   └── bug_report.md
│   └── dependabot.yml
├── .gitignore
├── CONTRIBUTING.md
├── LICENSE.md
├── README.md
├── default.nix
├── dev
│   └── treefmt.nix
├── flake.lock
├── flake.nix
├── meson.build
├── pyproject.toml
├── renovate.json
├── shell.nix
├── src
│   ├── buffered-io.cc
│   ├── buffered-io.hh
│   ├── constituents.cc
│   ├── constituents.hh
│   ├── drv.cc
│   ├── drv.hh
│   ├── eval-args.cc
│   ├── eval-args.hh
│   ├── meson.build
│   ├── nix-eval-jobs.cc
│   ├── strings-portable.cc
│   ├── strings-portable.hh
│   ├── worker.cc
│   └── worker.hh
└── tests
    ├── assets
    │   ├── ci.nix
    │   ├── flake.lock
    │   └── flake.nix
    └── test_eval.py

/.clang-format:
--------------------------------------------------------------------------------
1 | BasedOnStyle: llvm
2 | IndentWidth: 4
3 | SortIncludes: false
--------------------------------------------------------------------------------
/.clang-tidy:
--------------------------------------------------------------------------------
 1 | Checks: >
 2 |   - bugprone-*
 3 |   - performance-*
 4 |   - modernize-*
 5 |   - readability-*
 6 |   - misc-*
 7 |   - portability-*
 8 |   - concurrency-*
 9 |   - google-*
10 |   - -google-readability-todo
11 | 
12 |   # don't find them too problematic
13 |   - -readability-identifier-length
14 |   - -readability-magic-numbers
15 |   - -bugprone-easily-swappable-parameters
16 | 
17 |   # maybe address this in the future
18 |   - -readability-function-cognitive-complexity
19 | 
20 |   - cppcoreguidelines-*
21 |   - -cppcoreguidelines-avoid-magic-numbers
22 | UseColor: true
23 | CheckOptions:
24 |   misc-non-private-member-variables-in-classes.IgnoreClassesWithAllMemberVariablesBeingPublic: True
--------------------------------------------------------------------------------
/.envrc:
--------------------------------------------------------------------------------
1 | if ! has nix_direnv_version || ! nix_direnv_version 3.0.5; then
2 |   source_url "https://raw.githubusercontent.com/nix-community/nix-direnv/3.0.5/direnvrc" "sha256-RuwIS+QKFj/T9M2TFXScjBsLR6V3A17YVoEW/Q6AZ1w="
3 | fi
4 | use flake
--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/bug_report.md:
--------------------------------------------------------------------------------
 1 | ---
 2 | name: Bug report
 3 | about: Create a report to help us improve
 4 | title: '[BUG] '
 5 | labels: bug
 6 | assignees: ''
 7 | ---
 8 | 
 9 | **Describe the bug** A clear and concise description of what the bug is.
10 | 
11 | **To Reproduce** Steps to reproduce the behavior:
12 | 
13 | 1. Run '...'
14 | 2. See error
15 | 
16 | **Expected behavior** A clear and concise description of what you expected to
17 | happen.
18 | 
19 | **Version** nix-eval-jobs version: [e.g. 0.1.6]
20 | 
21 | **Additional context** Add any other context about the problem here.
--------------------------------------------------------------------------------
/.github/dependabot.yml:
--------------------------------------------------------------------------------
1 | version: 2
2 | updates:
3 |   - package-ecosystem: "github-actions"
4 |     directory: "/"
5 |     schedule:
6 |       interval: "weekly"
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
 1 | .DS_Store
 2 | .idea
 3 | *.log
 4 | 
 5 | tmp/
 6 | 
 7 | 
 8 | # Prerequisites
 9 | *.d
10 | 
11 | # Compiled Object files
12 | *.slo
13 | *.lo
14 | *.o
15 | *.obj
16 | 
17 | # Precompiled Headers
18 | *.gch
19 | *.pch
20 | 
21 | # Compiled Dynamic libraries
22 | *.so
23 | *.dylib
24 | *.dll
25 | 
26 | # Fortran module files
27 | *.mod
28 | *.smod
29 | 
30 | # Compiled Static libraries
31 | *.lai
32 | *.la
33 | *.a
34 | *.lib
35 | 
36 | # Executables
37 | *.exe
38 | *.out
39 | *.app
40 | 
41 | # build directory
42 | /build
43 | # nix-build
44 | /result
45 | 
46 | # Byte-compiled / optimized / DLL files
47 | __pycache__/
48 | *.py[cod]
49 | *$py.class
50 | 
51 | # mypy
52 | .mypy_cache/
53 | .dmypy.json
54 | dmypy.json
55 | 
56 | # nix-direnv
57 | .direnv
--------------------------------------------------------------------------------
/CONTRIBUTING.md:
--------------------------------------------------------------------------------
  1 | # Contributing to nix-eval-jobs
  2 | 
  3 | Thank you for considering contributing to nix-eval-jobs! This document provides
  4 | guidelines and instructions for contributing to the project.
  5 | 
  6 | ## Development Setup
  7 | 
  8 | 1. Clone the repository:
  9 |    ```bash
 10 |    git clone https://github.com/nix-community/nix-eval-jobs.git
 11 |    cd nix-eval-jobs
 12 |    ```
 13 | 
 14 | 2. Set up the development environment:
 15 |    ```bash
 16 |    # Using nix
 17 |    nix develop
 18 |    # Or using direnv
 19 |    direnv allow
 20 |    ```
 21 | 
 22 | ## Building and Testing
 23 | 
 24 | ### Building
 25 | 
 26 | ```bash
 27 | meson setup build
 28 | cd build
 29 | ninja
 30 | ```
 31 | 
 32 | ### Running Tests
 33 | 
 34 | ```bash
 35 | pytest ./tests
 36 | ```
 37 | 
 38 | ### Checking Everything
 39 | 
 40 | To run all builds, tests, and checks:
 41 | 
 42 | ```bash
 43 | nix flake check
 44 | ```
 45 | 
 46 | This will:
 47 | 
 48 | - Build the package for all supported platforms
 49 | - Run the test suite
 50 | - Run all formatters and linters
 51 | - Perform static analysis checks
 52 | 
 53 | ## Code Quality Tools
 54 | 
 55 | ### Formatting
 56 | 
 57 | - Clang-format for C++ code
 58 | - Ruff for Python code formatting and linting
 59 | - Deno and yamlfmt for YAML files
 60 | - nixfmt for Nix files
 61 | 
 62 | ### Static Analysis
 63 | 
 64 | - MyPy for Python type checking
 65 | - deadnix for Nix code analysis
 66 | - clang-tidy for C++ code analysis
 67 |   ```bash
 68 |   # Run clang-tidy checks
 69 |   ninja clang-tidy
 70 |   # Auto-fix clang-tidy issues where possible
 71 |   ninja clang-tidy-fix
 72 |   ```
 73 | 
 74 | All formatting can be applied using:
 75 | 
 76 | ```bash
 77 | nix fmt
 78 | ```
 79 | 
 80 | ## Making Changes
 81 | 
 82 | 1. Create a branch for your changes:
 83 |    ```bash
 84 |    git checkout -b your-feature-name
 85 |    ```
 86 | 
 87 | 2. Make your changes and commit them with descriptive commit messages:
 88 |    ```bash
 89 |    git commit -m "feat: Add new feature X"
 90 |    ```
 91 | 
 92 | 3. Push your changes to your fork:
 93 |    ```bash
 94 |    git push origin your-feature-name
 95 |    ```
 96 | 
 97 | 4. Create a Pull Request against the main repository.
 98 | 
 99 | ## Additional Resources
100 | 
101 | - [Project README](README.md)
--------------------------------------------------------------------------------
/LICENSE.md:
--------------------------------------------------------------------------------
  1 | # GNU General Public License
  2 | 
  3 | _Version 3, 29 June 2007_ _Copyright © 2007 Free Software Foundation, Inc.
  4 | <>_
  5 | 
  6 | Everyone is permitted to copy and distribute verbatim copies of this license
  7 | document, but changing it is not allowed.
  8 | 
  9 | ## Preamble
 10 | 
 11 | The GNU General Public License is a free, copyleft license for software and
 12 | other kinds of works.
 13 | 
 14 | The licenses for most software and other practical works are designed to take
 15 | away your freedom to share and change the works. By contrast, the GNU General
 16 | Public License is intended to guarantee your freedom to share and change all
 17 | versions of a program--to make sure it remains free software for all its users.
 18 | We, the Free Software Foundation, use the GNU General Public License for most of
 19 | our software; it applies also to any other work released this way by its
 20 | authors. You can apply it to your programs, too.
 21 | 
 22 | When we speak of free software, we are referring to freedom, not price. Our
 23 | General Public Licenses are designed to make sure that you have the freedom to
 24 | distribute copies of free software (and charge for them if you wish), that you
 25 | receive source code or can get it if you want it, that you can change the
 26 | software or use pieces of it in new free programs, and that you know you can do
 27 | these things.
 28 | 
 29 | To protect your rights, we need to prevent others from denying you these rights
 30 | or asking you to surrender the rights. Therefore, you have certain
 31 | responsibilities if you distribute copies of the software, or if you modify it:
 32 | responsibilities to respect the freedom of others.
 33 | 
 34 | For example, if you distribute copies of such a program, whether gratis or for a
 35 | fee, you must pass on to the recipients the same freedoms that you received. You
 36 | must make sure that they, too, receive or can get the source code. And you must
 37 | show them these terms so they know their rights.
 38 | 
 39 | Developers that use the GNU GPL protect your rights with two steps: **(1)**
 40 | assert copyright on the software, and **(2)** offer you this License giving you
 41 | legal permission to copy, distribute and/or modify it.
 42 | 
 43 | For the developers' and authors' protection, the GPL clearly explains that there
 44 | is no warranty for this free software. For both users' and authors' sake, the
 45 | GPL requires that modified versions be marked as changed, so that their problems
 46 | will not be attributed erroneously to authors of previous versions.
 47 | 
 48 | Some devices are designed to deny users access to install or run modified
 49 | versions of the software inside them, although the manufacturer can do so. This
 50 | is fundamentally incompatible with the aim of protecting users' freedom to
 51 | change the software. The systematic pattern of such abuse occurs in the area of
 52 | products for individuals to use, which is precisely where it is most
 53 | unacceptable. Therefore, we have designed this version of the GPL to prohibit
 54 | the practice for those products. If such problems arise substantially in other
 55 | domains, we stand ready to extend this provision to those domains in future
 56 | versions of the GPL, as needed to protect the freedom of users.
 57 | 
 58 | Finally, every program is threatened constantly by software patents. States
 59 | should not allow patents to restrict development and use of software on
 60 | general-purpose computers, but in those that do, we wish to avoid the special
 61 | danger that patents applied to a free program could make it effectively
 62 | proprietary. To prevent this, the GPL assures that patents cannot be used to
 63 | render the program non-free.
 64 | 
 65 | The precise terms and conditions for copying, distribution and modification
 66 | follow.
 67 | 
 68 | ## TERMS AND CONDITIONS
 69 | 
 70 | ### 0. Definitions
 71 | 
 72 | “This License” refers to version 3 of the GNU General Public License.
 73 | 
 74 | “Copyright” also means copyright-like laws that apply to other kinds of works,
 75 | such as semiconductor masks.
 76 | 
 77 | “The Program” refers to any copyrightable work licensed under this License. Each
 78 | licensee is addressed as “you”. “Licensees” and “recipients” may be individuals
 79 | or organizations.
 80 | 
 81 | To “modify” a work means to copy from or adapt all or part of the work in a
 82 | fashion requiring copyright permission, other than the making of an exact copy.
 83 | The resulting work is called a “modified version” of the earlier work or a work
 84 | “based on” the earlier work.
 85 | 
 86 | A “covered work” means either the unmodified Program or a work based on the
 87 | Program.
 88 | 
 89 | To “propagate” a work means to do anything with it that, without permission,
 90 | would make you directly or secondarily liable for infringement under applicable
 91 | copyright law, except executing it on a computer or modifying a private copy.
 92 | Propagation includes copying, distribution (with or without modification),
 93 | making available to the public, and in some countries other activities as well.
 94 | 
 95 | To “convey” a work means any kind of propagation that enables other parties to
 96 | make or receive copies. Mere interaction with a user through a computer network,
 97 | with no transfer of a copy, is not conveying.
 98 | 
 99 | An interactive user interface displays “Appropriate Legal Notices” to the extent
100 | that it includes a convenient and prominently visible feature that **(1)**
101 | displays an appropriate copyright notice, and **(2)** tells the user that there
102 | is no warranty for the work (except to the extent that warranties are provided),
103 | that licensees may convey the work under this License, and how to view a copy of
104 | this License. If the interface presents a list of user commands or options, such
105 | as a menu, a prominent item in the list meets this criterion.
106 | 
107 | ### 1. Source Code
108 | 
109 | The “source code” for a work means the preferred form of the work for making
110 | modifications to it. “Object code” means any non-source form of a work.
111 | 
112 | A “Standard Interface” means an interface that either is an official standard
113 | defined by a recognized standards body, or, in the case of interfaces specified
114 | for a particular programming language, one that is widely used among developers
115 | working in that language.
116 | 
117 | The “System Libraries” of an executable work include anything, other than the
118 | work as a whole, that **(a)** is included in the normal form of packaging a
119 | Major Component, but which is not part of that Major Component, and **(b)**
120 | serves only to enable use of the work with that Major Component, or to implement
121 | a Standard Interface for which an implementation is available to the public in
122 | source code form. A “Major Component”, in this context, means a major essential
123 | component (kernel, window system, and so on) of the specific operating system
124 | (if any) on which the executable work runs, or a compiler used to produce the
125 | work, or an object code interpreter used to run it.
126 | 
127 | The “Corresponding Source” for a work in object code form means all the source
128 | code needed to generate, install, and (for an executable work) run the object
129 | code and to modify the work, including scripts to control those activities.
130 | However, it does not include the work's System Libraries, or general-purpose
131 | tools or generally available free programs which are used unmodified in
132 | performing those activities but which are not part of the work. For example,
133 | Corresponding Source includes interface definition files associated with source
134 | files for the work, and the source code for shared libraries and dynamically
135 | linked subprograms that the work is specifically designed to require, such as by
136 | intimate data communication or control flow between those subprograms and other
137 | parts of the work.
138 | 
139 | The Corresponding Source need not include anything that users can regenerate
140 | automatically from other parts of the Corresponding Source.
141 | 
142 | The Corresponding Source for a work in source code form is that same work.
143 | 
144 | ### 2. Basic Permissions
145 | 
146 | All rights granted under this License are granted for the term of copyright on
147 | the Program, and are irrevocable provided the stated conditions are met. This
148 | License explicitly affirms your unlimited permission to run the unmodified
149 | Program. The output from running a covered work is covered by this License only
150 | if the output, given its content, constitutes a covered work. This License
151 | acknowledges your rights of fair use or other equivalent, as provided by
152 | copyright law.
153 | 
154 | You may make, run and propagate covered works that you do not convey, without
155 | conditions so long as your license otherwise remains in force. You may convey
156 | covered works to others for the sole purpose of having them make modifications
157 | exclusively for you, or provide you with facilities for running those works,
158 | provided that you comply with the terms of this License in conveying all
159 | material for which you do not control copyright. Those thus making or running
160 | the covered works for you must do so exclusively on your behalf, under your
161 | direction and control, on terms that prohibit them from making any copies of
162 | your copyrighted material outside their relationship with you.
163 | 
164 | Conveying under any other circumstances is permitted solely under the conditions
165 | stated below. Sublicensing is not allowed; section 10 makes it unnecessary.
166 | 
167 | ### 3. Protecting Users' Legal Rights From Anti-Circumvention Law
168 | 
169 | No covered work shall be deemed part of an effective technological measure under
170 | any applicable law fulfilling obligations under article 11 of the WIPO copyright
171 | treaty adopted on 20 December 1996, or similar laws prohibiting or restricting
172 | circumvention of such measures.
173 | 
174 | When you convey a covered work, you waive any legal power to forbid
175 | circumvention of technological measures to the extent such circumvention is
176 | effected by exercising rights under this License with respect to the covered
177 | work, and you disclaim any intention to limit operation or modification of the
178 | work as a means of enforcing, against the work's users, your or third parties'
179 | legal rights to forbid circumvention of technological measures.
180 | 
181 | ### 4. Conveying Verbatim Copies
182 | 
183 | You may convey verbatim copies of the Program's source code as you receive it,
184 | in any medium, provided that you conspicuously and appropriately publish on each
185 | copy an appropriate copyright notice; keep intact all notices stating that this
186 | License and any non-permissive terms added in accord with section 7 apply to the
187 | code; keep intact all notices of the absence of any warranty; and give all
188 | recipients a copy of this License along with the Program.
189 | 
190 | You may charge any price or no price for each copy that you convey, and you may
191 | offer support or warranty protection for a fee.
192 | 
193 | ### 5. Conveying Modified Source Versions
194 | 
195 | You may convey a work based on the Program, or the modifications to produce it
196 | from the Program, in the form of source code under the terms of section 4,
197 | provided that you also meet all of these conditions:
198 | 
199 | - **a)** The work must carry prominent notices stating that you modified it, and
200 |   giving a relevant date.
201 | - **b)** The work must carry prominent notices stating that it is released under
202 |   this License and any conditions added under section 7. This requirement
203 |   modifies the requirement in section 4 to “keep intact all notices”.
204 | - **c)** You must license the entire work, as a whole, under this License to
205 |   anyone who comes into possession of a copy. This License will therefore apply,
206 |   along with any applicable section 7 additional terms, to the whole of the
207 |   work, and all its parts, regardless of how they are packaged. This License
208 |   gives no permission to license the work in any other way, but it does not
209 |   invalidate such permission if you have separately received it.
210 | - **d)** If the work has interactive user interfaces, each must display
211 |   Appropriate Legal Notices; however, if the Program has interactive interfaces
212 |   that do not display Appropriate Legal Notices, your work need not make them do
213 |   so.
214 | 
215 | A compilation of a covered work with other separate and independent works, which
216 | are not by their nature extensions of the covered work, and which are not
217 | combined with it such as to form a larger program, in or on a volume of a
218 | storage or distribution medium, is called an “aggregate” if the compilation and
219 | its resulting copyright are not used to limit the access or legal rights of the
220 | compilation's users beyond what the individual works permit. Inclusion of a
221 | covered work in an aggregate does not cause this License to apply to the other
222 | parts of the aggregate.
223 | 
224 | ### 6. Conveying Non-Source Forms
225 | 
226 | You may convey a covered work in object code form under the terms of sections 4
227 | and 5, provided that you also convey the machine-readable Corresponding Source
228 | under the terms of this License, in one of these ways:
229 | 
230 | - **a)** Convey the object code in, or embodied in, a physical product
231 |   (including a physical distribution medium), accompanied by the Corresponding
232 |   Source fixed on a durable physical medium customarily used for software
233 |   interchange.
234 | - **b)** Convey the object code in, or embodied in, a physical product
235 |   (including a physical distribution medium), accompanied by a written offer,
236 |   valid for at least three years and valid for as long as you offer spare parts
237 |   or customer support for that product model, to give anyone who possesses the
238 |   object code either **(1)** a copy of the Corresponding Source for all the
239 |   software in the product that is covered by this License, on a durable physical
240 |   medium customarily used for software interchange, for a price no more than
241 |   your reasonable cost of physically performing this conveying of source, or
242 |   **(2)** access to copy the Corresponding Source from a network server at no
243 |   charge.
244 | - **c)** Convey individual copies of the object code with a copy of the written
245 |   offer to provide the Corresponding Source. This alternative is allowed only
246 |   occasionally and noncommercially, and only if you received the object code
247 |   with such an offer, in accord with subsection 6b.
248 | - **d)** Convey the object code by offering access from a designated place
249 |   (gratis or for a charge), and offer equivalent access to the Corresponding
250 |   Source in the same way through the same place at no further charge. You need
251 |   not require recipients to copy the Corresponding Source along with the object
252 |   code. If the place to copy the object code is a network server, the
253 |   Corresponding Source may be on a different server (operated by you or a third
254 |   party) that supports equivalent copying facilities, provided you maintain
255 |   clear directions next to the object code saying where to find the
256 |   Corresponding Source. Regardless of what server hosts the Corresponding
257 |   Source, you remain obligated to ensure that it is available for as long as
258 |   needed to satisfy these requirements.
259 | - **e)** Convey the object code using peer-to-peer transmission, provided you
260 |   inform other peers where the object code and Corresponding Source of the work
261 |   are being offered to the general public at no charge under subsection 6d.
262 | 
263 | A separable portion of the object code, whose source code is excluded from the
264 | Corresponding Source as a System Library, need not be included in conveying the
265 | object code work.
266 | 
267 | A “User Product” is either **(1)** a “consumer product”, which means any
268 | tangible personal property which is normally used for personal, family, or
269 | household purposes, or **(2)** anything designed or sold for incorporation into
270 | a dwelling. In determining whether a product is a consumer product, doubtful
271 | cases shall be resolved in favor of coverage. For a particular product received
272 | by a particular user, “normally used” refers to a typical or common use of that
273 | class of product, regardless of the status of the particular user or of the way
274 | in which the particular user actually uses, or expects or is expected to use,
275 | the product. A product is a consumer product regardless of whether the product
276 | has substantial commercial, industrial or non-consumer uses, unless such uses
277 | represent the only significant mode of use of the product.
278 | 
279 | “Installation Information” for a User Product means any methods, procedures,
280 | authorization keys, or other information required to install and execute
281 | modified versions of a covered work in that User Product from a modified version
282 | of its Corresponding Source. The information must suffice to ensure that the
283 | continued functioning of the modified object code is in no case prevented or
284 | interfered with solely because modification has been made.
285 | 
286 | If you convey an object code work under this section in, or with, or
287 | specifically for use in, a User Product, and the conveying occurs as part of a
288 | transaction in which the right of possession and use of the User Product is
289 | transferred to the recipient in perpetuity or for a fixed term (regardless of
290 | how the transaction is characterized), the Corresponding Source conveyed under
291 | this section must be accompanied by the Installation Information. But this
292 | requirement does not apply if neither you nor any third party retains the
293 | ability to install modified object code on the User Product (for example, the
294 | work has been installed in ROM).
295 | 
296 | The requirement to provide Installation Information does not include a
297 | requirement to continue to provide support service, warranty, or updates for a
298 | work that has been modified or installed by the recipient, or for the User
299 | Product in which it has been modified or installed. Access to a network may be
300 | denied when the modification itself materially and adversely affects the
301 | operation of the network or violates the rules and protocols for communication
302 | across the network.
303 | 
304 | Corresponding Source conveyed, and Installation Information provided, in accord
305 | with this section must be in a format that is publicly documented (and with an
306 | implementation available to the public in source code form), and must require no
307 | special password or key for unpacking, reading or copying.
308 | 
309 | ### 7. Additional Terms
310 | 
311 | “Additional permissions” are terms that supplement the terms of this License by
312 | making exceptions from one or more of its conditions. Additional permissions
313 | that are applicable to the entire Program shall be treated as though they were
314 | included in this License, to the extent that they are valid under applicable
315 | law. If additional permissions apply only to part of the Program, that part may
316 | be used separately under those permissions, but the entire Program remains
317 | governed by this License without regard to the additional permissions.
318 | 
319 | When you convey a copy of a covered work, you may at your option remove any
320 | additional permissions from that copy, or from any part of it. (Additional
321 | permissions may be written to require their own removal in certain cases when
322 | you modify the work.) You may place additional permissions on material, added by
323 | you to a covered work, for which you have or can give appropriate copyright
324 | permission.
325 | 
326 | Notwithstanding any other provision of this License, for material you add to a
327 | covered work, you may (if authorized by the copyright holders of that material)
328 | supplement the terms of this License with terms:
329 | 
330 | - **a)** Disclaiming warranty or limiting liability differently from the terms
331 |   of sections 15 and 16 of this License; or
332 | - **b)** Requiring preservation of specified reasonable legal notices or author
333 |   attributions in that material or in the Appropriate Legal Notices displayed by
334 |   works containing it; or
335 | - **c)** Prohibiting misrepresentation of the origin of that material, or
336 |   requiring that modified versions of such material be marked in reasonable ways
337 |   as different from the original version; or
338 | - **d)** Limiting the use for publicity purposes of names of licensors or
339 |   authors of the material; or
340 | - **e)** Declining to grant rights under trademark law for use of some trade
341 |   names, trademarks, or service marks; or
342 | - **f)** Requiring indemnification of licensors and authors of that material by
343 |   anyone who conveys the material (or modified versions of it) with contractual
344 |   assumptions of liability to the recipient, for any liability that these
345 |   contractual assumptions directly impose on those licensors and authors.
346 | 
347 | All other non-permissive additional terms are considered “further restrictions”
348 | within the meaning of section 10. If the Program as you received it, or any part
349 | of it, contains a notice stating that it is governed by this License along with
350 | a term that is a further restriction, you may remove that term. If a license
351 | document contains a further restriction but permits relicensing or conveying
352 | under this License, you may add to a covered work material governed by the terms
353 | of that license document, provided that the further restriction does not survive
354 | such relicensing or conveying.
355 | 
356 | If you add terms to a covered work in accord with this section, you must place,
357 | in the relevant source files, a statement of the additional terms that apply to
358 | those files, or a notice indicating where to find the applicable terms.
359 | 
360 | Additional terms, permissive or non-permissive, may be stated in the form of a
361 | separately written license, or stated as exceptions; the above requirements
362 | apply either way.
363 | 
364 | ### 8. Termination
365 | 
366 | You may not propagate or modify a covered work except as expressly provided
367 | under this License. Any attempt otherwise to propagate or modify it is void, and
368 | will automatically terminate your rights under this License (including any
369 | patent licenses granted under the third paragraph of section 11).
370 | 
371 | However, if you cease all violation of this License, then your license from a
372 | particular copyright holder is reinstated **(a)** provisionally, unless and
373 | until the copyright holder explicitly and finally terminates your license, and
374 | **(b)** permanently, if the copyright holder fails to notify you of the
375 | violation by some reasonable means prior to 60 days after the cessation.
376 | 
377 | Moreover, your license from a particular copyright holder is reinstated
378 | permanently if the copyright holder notifies you of the violation by some
379 | reasonable means, this is the first time you have received notice of violation
380 | of this License (for any work) from that copyright holder, and you cure the
381 | violation prior to 30 days after your receipt of the notice.
382 | 
383 | Termination of your rights under this section does not terminate the licenses of
384 | parties who have received copies or rights from you under this License. If your
385 | rights have been terminated and not permanently reinstated, you do not qualify
386 | to receive new licenses for the same material under section 10.
387 | 
388 | ### 9. Acceptance Not Required for Having Copies
389 | 
390 | You are not required to accept this License in order to receive or run a copy of
391 | the Program. Ancillary propagation of a covered work occurring solely as a
392 | consequence of using peer-to-peer transmission to receive a copy likewise does
393 | not require acceptance. However, nothing other than this License grants you
394 | permission to propagate or modify any covered work. These actions infringe
395 | copyright if you do not accept this License. Therefore, by modifying or
396 | propagating a covered work, you indicate your acceptance of this License to do
397 | so.
398 | 
399 | ### 10. Automatic Licensing of Downstream Recipients
400 | 
401 | Each time you convey a covered work, the recipient automatically receives a
402 | license from the original licensors, to run, modify and propagate that work,
403 | subject to this License. You are not responsible for enforcing compliance by
404 | third parties with this License.
405 | 
406 | An “entity transaction” is a transaction transferring control of an
407 | organization, or substantially all assets of one, or subdividing an
408 | organization, or merging organizations. If propagation of a covered work results
409 | from an entity transaction, each party to that transaction who receives a copy
410 | of the work also receives whatever licenses to the work the party's predecessor
411 | in interest had or could give under the previous paragraph, plus a right to
412 | possession of the Corresponding Source of the work from the predecessor in
413 | interest, if the predecessor has it or can get it with reasonable efforts.
414 | 
415 | You may not impose any further restrictions on the exercise of the rights
416 | granted or affirmed under this License. For example, you may not impose a
417 | license fee, royalty, or other charge for exercise of rights granted under this
418 | License, and you may not initiate litigation (including a cross-claim or
419 | counterclaim in a lawsuit) alleging that any patent claim is infringed by
420 | making, using, selling, offering for sale, or importing the Program or any
421 | portion of it.
422 | 
423 | ### 11. Patents
424 | 
425 | A “contributor” is a copyright holder who authorizes use under this License of
426 | the Program or a work on which the Program is based. The work thus licensed is
427 | called the contributor's “contributor version”.
428 | 
429 | A contributor's “essential patent claims” are all patent claims owned or
430 | controlled by the contributor, whether already acquired or hereafter acquired,
431 | that would be infringed by some manner, permitted by this License, of making,
432 | using, or selling its contributor version, but do not include claims that would
433 | be infringed only as a consequence of further modification of the contributor
434 | version. For purposes of this definition, “control” includes the right to grant
435 | patent sublicenses in a manner consistent with the requirements of this License.
436 | 
437 | Each contributor grants you a non-exclusive, worldwide, royalty-free patent
438 | license under the contributor's essential patent claims, to make, use, sell,
439 | offer for sale, import and otherwise run, modify and propagate the contents of
440 | its contributor version.
441 | 
442 | In the following three paragraphs, a “patent license” is any express agreement
443 | or commitment, however denominated, not to enforce a patent (such as an express
444 | permission to practice a patent or covenant not to sue for patent infringement).
445 | To “grant” such a patent license to a party means to make such an agreement or
446 | commitment not to enforce a patent against the party.
447 | 
448 | If you convey a covered work, knowingly relying on a patent license, and the
449 | Corresponding Source of the work is not available for anyone to copy, free of
450 | charge and under the terms of this License, through a publicly available network
451 | server or other readily accessible means, then you must either **(1)** cause the
452 | Corresponding Source to be so available, or **(2)** arrange to deprive yourself
453 | of the benefit of the patent license for this particular work, or **(3)**
454 | arrange, in a manner consistent with the requirements of this License, to extend
455 | the patent license to downstream recipients. “Knowingly relying” means you have
456 | actual knowledge that, but for the patent license, your conveying the covered
457 | work in a country, or your recipient's use of the covered work in a country,
458 | would infringe one or more identifiable patents in that country that you have
459 | reason to believe are valid.
460 | 
461 | If, pursuant to or in connection with a single transaction or arrangement, you
462 | convey, or propagate by procuring conveyance of, a covered work, and grant a
463 | patent license to some of the parties receiving the covered work authorizing
464 | them to use, propagate, modify or convey a specific copy of the covered work,
465 | then the patent license you grant is automatically extended to all recipients of
466 | the covered work and works based on it.
467 | 
468 | A patent license is “discriminatory” if it does not include within the scope of
469 | its coverage, prohibits the exercise of, or is conditioned on the non-exercise
470 | of one or more of the rights that are specifically granted under this License.
471 | You may not convey a covered work if you are a party to an arrangement with a
472 | third party that is in the business of distributing software, under which you
473 | make payment to the third party based on the extent of your activity of
474 | conveying the work, and under which the third party grants, to any of the
475 | parties who would receive the covered work from you, a discriminatory patent
476 | license **(a)** in connection with copies of the covered work conveyed by you
477 | (or copies made from those copies), or **(b)** primarily for and in connection
478 | with specific products or compilations that contain the covered work, unless you
479 | entered into that arrangement, or that patent license was granted, prior to 28
480 | March 2007.
481 | 
482 | Nothing in this License shall be construed as excluding or limiting any implied
483 | license or other defenses to infringement that may otherwise be available to you
484 | under applicable patent law.
485 | 
486 | ### 12. No Surrender of Others' Freedom
487 | 
488 | If conditions are imposed on you (whether by court order, agreement or
489 | otherwise) that contradict the conditions of this License, they do not excuse
490 | you from the conditions of this License.
If you cannot convey a covered work so 491 | as to satisfy simultaneously your obligations under this License and any other 492 | pertinent obligations, then as a consequence you may not convey it at all. For 493 | example, if you agree to terms that obligate you to collect a royalty for 494 | further conveying from those to whom you convey the Program, the only way you 495 | could satisfy both those terms and this License would be to refrain entirely 496 | from conveying the Program. 497 | 498 | ### 13. Use with the GNU Affero General Public License 499 | 500 | Notwithstanding any other provision of this License, you have permission to link 501 | or combine any covered work with a work licensed under version 3 of the GNU 502 | Affero General Public License into a single combined work, and to convey the 503 | resulting work. The terms of this License will continue to apply to the part 504 | which is the covered work, but the special requirements of the GNU Affero 505 | General Public License, section 13, concerning interaction through a network 506 | will apply to the combination as such. 507 | 508 | ### 14. Revised Versions of this License 509 | 510 | The Free Software Foundation may publish revised and/or new versions of the GNU 511 | General Public License from time to time. Such new versions will be similar in 512 | spirit to the present version, but may differ in detail to address new problems 513 | or concerns. 514 | 515 | Each version is given a distinguishing version number. If the Program specifies 516 | that a certain numbered version of the GNU General Public License “or any later 517 | version” applies to it, you have the option of following the terms and 518 | conditions either of that numbered version or of any later version published by 519 | the Free Software Foundation. If the Program does not specify a version number 520 | of the GNU General Public License, you may choose any version ever published by 521 | the Free Software Foundation. 
522 | 523 | If the Program specifies that a proxy can decide which future versions of the 524 | GNU General Public License can be used, that proxy's public statement of 525 | acceptance of a version permanently authorizes you to choose that version for 526 | the Program. 527 | 528 | Later license versions may give you additional or different permissions. 529 | However, no additional obligations are imposed on any author or copyright holder 530 | as a result of your choosing to follow a later version. 531 | 532 | ### 15. Disclaimer of Warranty 533 | 534 | THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. 535 | EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER 536 | PARTIES PROVIDE THE PROGRAM “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER 537 | EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF 538 | MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE 539 | QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE 540 | DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 541 | 542 | ### 16. Limitation of Liability 543 | 544 | IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY 545 | COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS 546 | PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, 547 | INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE 548 | THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED 549 | INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE 550 | PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY 551 | HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. 552 | 553 | ### 17. 
Interpretation of Sections 15 and 16 554 | 555 | If the disclaimer of warranty and limitation of liability provided above cannot 556 | be given local legal effect according to their terms, reviewing courts shall 557 | apply local law that most closely approximates an absolute waiver of all civil 558 | liability in connection with the Program, unless a warranty or assumption of 559 | liability accompanies a copy of the Program in return for a fee. 560 | 561 | _END OF TERMS AND CONDITIONS_ 562 | 563 | ## How to Apply These Terms to Your New Programs 564 | 565 | If you develop a new program, and you want it to be of the greatest possible use 566 | to the public, the best way to achieve this is to make it free software which 567 | everyone can redistribute and change under these terms. 568 | 569 | To do so, attach the following notices to the program. It is safest to attach 570 | them to the start of each source file to most effectively state the exclusion of 571 | warranty; and each file should have at least the “copyright” line and a pointer 572 | to where the full notice is found. 573 | 574 | 575 | Copyright (C) 576 | 577 | This program is free software: you can redistribute it and/or modify 578 | it under the terms of the GNU General Public License as published by 579 | the Free Software Foundation, either version 3 of the License, or 580 | (at your option) any later version. 581 | 582 | This program is distributed in the hope that it will be useful, 583 | but WITHOUT ANY WARRANTY; without even the implied warranty of 584 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 585 | GNU General Public License for more details. 586 | 587 | You should have received a copy of the GNU General Public License 588 | along with this program. If not, see . 589 | 590 | Also add information on how to contact you by electronic and paper mail. 
591 | 592 | If the program does terminal interaction, make it output a short notice like 593 | this when it starts in an interactive mode: 594 | 595 | Copyright (C) 596 | This program comes with ABSOLUTELY NO WARRANTY; for details type 'show w'. 597 | This is free software, and you are welcome to redistribute it 598 | under certain conditions; type 'show c' for details. 599 | 600 | The hypothetical commands `show w` and `show c` should show the appropriate 601 | parts of the General Public License. Of course, your program's commands might be 602 | different; for a GUI interface, you would use an “about box”. 603 | 604 | You should also get your employer (if you work as a programmer) or school, if 605 | any, to sign a “copyright disclaimer” for the program, if necessary. For more 606 | information on this, and how to apply and follow the GNU GPL, see 607 | <>. 608 | 609 | The GNU General Public License does not permit incorporating your program into 610 | proprietary programs. If your program is a subroutine library, you may consider 611 | it more useful to permit linking proprietary applications with the library. If 612 | this is what you want to do, use the GNU Lesser General Public License instead 613 | of this License. But first, please read 614 | <>. 615 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # nix-eval-jobs 2 | 3 | This project evaluates nix attribute sets in parallel with streamable json 4 | output. This is useful for time and memory intensive evaluations such as NixOS 5 | machines, i.e. in a CI context. The evaluation is done with a controllable 6 | number of threads that are restarted when their memory consumption exceeds a 7 | certain threshold. 
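
Because each job is printed as one self-contained JSON object per line, consumers can start processing results while evaluation is still running. As a minimal sketch of such a consumer (the helper names `iter_jobs` and `partition_jobs` are illustrative, not part of this project; the `"error"` key is what failed evaluations carry in the output):

```python
import json


def iter_jobs(stream):
    """Yield one job dict per non-empty line of nix-eval-jobs output."""
    for line in stream:
        line = line.strip()
        if line:
            yield json.loads(line)  # each line is a complete JSON object


def partition_jobs(stream):
    """Split a JSON-lines stream into successfully evaluated and failed jobs."""
    ok, failed = [], []
    for job in iter_jobs(stream):
        # Jobs that fail to evaluate carry an "error" key in their JSON object.
        (failed if "error" in job else ok).append(job)
    return ok, failed
```

For example, `partition_jobs(open("eval.jsonl"))` separates jobs that evaluated cleanly from those that reported an evaluation error.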
8 | 9 | To facilitate integration, nix-eval-jobs creates garbage collection roots for 10 | each evaluated derivation (drv file, not the build) within the provided 11 | attribute. This prevents race conditions between the nix garbage collection 12 | service and user-started nix build processes. 13 | 14 | ## Why use nix-eval-jobs? 15 | 16 | - Faster evaluation by using threads 17 | - Memory used for evaluation is reclaimed after nix-eval-jobs finishes, so that 18 | the build can use it. 19 | - Evaluation of jobs can fail individually 20 | 21 | ## Example 22 | 23 | In the following example, we evaluate the `hydraJobs` attribute of the 24 | [patchelf](https://github.com/NixOS/patchelf) flake: 25 | 26 | ```console 27 | $ nix-eval-jobs --gc-roots-dir gcroot --flake 'github:NixOS/patchelf#hydraJobs' 28 | {"attr":"coverage","attrPath":["coverage"],"drvPath":"/nix/store/fmbqzaq8mim1423879lhn9whs6imx5w4-patchelf-coverage-0.18.0.drv","inputDrvs":{"/nix/store/23632hx2c98lbbjld279dx0w08lxn6kp-hook.drv":["out"],"/nix/store/6z1jfnqqgyqr221zgbpm30v91yfj3r45-bash-5.1-p16.drv":["out"],"/nix/store/ap9g09fxbicj836zm88d56dn3ff4clxl-stdenv-linux.drv":["out"],"/nix/store/c0gg7lj101xhd8v2b3cjl5dwwkpxfc0q-patchelf-tarball-0.18.0.drv":["out"],"/nix/store/vslywm6kbazi37q1vbq8y7bi884yc6yx-lcov-1.16.drv":["out"],"/nix/store/y964yq4vz1gsn7azd44vyg65gnr4gpvi-hook.drv":["out"]},"name":"patchelf-coverage-0.18.0","outputs":{"out":"/nix/store/gfni9sbhhwhxxfqziq1fs3n82bvw962l-patchelf-coverage-0.18.0"},"system":"x86_64-linux"} 29 |
{"attr":"patchelf-win32","attrPath":["patchelf-win32"],"drvPath":"/nix/store/s38l0fg5ja6j8qpws7slw2ws0c6v0qcf-patchelf-i686-w64-mingw32-0.18.0.drv","inputDrvs":{"/nix/store/6z1jfnqqgyqr221zgbpm30v91yfj3r45-bash-5.1-p16.drv":["out"],"/nix/store/b2p151ilwqpd47fbmzz50a5cmj12ixbf-hook.drv":["out"],"/nix/store/fbnhh18m4jh6cwa92am2sv3aqzjnzpdd-stdenv-linux.drv":["out"]},"name":"patchelf-i686-w64-mingw32-0.18.0","outputs":{"out":"/nix/store/w8r4h1xk71fryb99df8aszp83kfhw3bc-patchelf-i686-w64-mingw32-0.18.0"},"system":"x86_64-linux"} 30 | {"attr":"patchelf-win64","attrPath":["patchelf-win64"],"drvPath":"/nix/store/wxpym6d3dxr1w9syhinp7f058gwxfmd3-patchelf-x86_64-w64-mingw32-0.18.0.drv","inputDrvs":{"/nix/store/6z1jfnqqgyqr221zgbpm30v91yfj3r45-bash-5.1-p16.drv":["out"],"/nix/store/71lv5lsr1y59bv1b91jc9gg0n85kf1sq-stdenv-linux.drv":["out"],"/nix/store/b2p151ilwqpd47fbmzz50a5cmj12ixbf-hook.drv":["out"]},"name":"patchelf-x86_64-w64-mingw32-0.18.0","outputs":{"out":"/nix/store/fkq5428l2xsb84yj0cc6q1lkvsrga7sv-patchelf-x86_64-w64-mingw32-0.18.0"},"system":"x86_64-linux"} 31 | {"attr":"release","attrPath":["release"],"drvPath":"/nix/store/3xpwg8f623dpkh6cblv2fzcq5n99xl0j-patchelf-0.18.0.drv","inputDrvs":{"/nix/store/6z1jfnqqgyqr221zgbpm30v91yfj3r45-bash-5.1-p16.drv":["out"],"/nix/store/9rmihrl9ys0sap6827xyns0y73vqafjx-patchelf-0.18.0.drv":["out"],"/nix/store/am2zqx3pyc1i14f888jna785h0f841sg-patchelf-0.18.0.drv":["out"],"/nix/store/c0gg7lj101xhd8v2b3cjl5dwwkpxfc0q-patchelf-tarball-0.18.0.drv":["out"],"/nix/store/csjiccxbwpfv55m8kqs2xwrkkha14dnq-patchelf-0.18.0.drv":["out"],"/nix/store/jsrnpxdx5vmpnakd9bkb3sk3lgh0k8hm-patchelf-0.18.0.drv":["out"],"/nix/store/k8a51ax83554c67g98xf3y751vjgjs7m-patchelf-0.18.0.drv":["out"],"/nix/store/wq3ncl207isqqkqmsa5ql4fg19jbrhxg-stdenv-linux.drv":["out"]},"name":"patchelf-0.18.0","outputs":{"out":"/nix/store/d0mzprvv3vhasj23r1a6qn8qip0srbc4-patchelf-0.18.0"},"system":"x86_64-linux"} 32 | 
{"attr":"tarball","attrPath":["tarball"],"drvPath":"/nix/store/c0gg7lj101xhd8v2b3cjl5dwwkpxfc0q-patchelf-tarball-0.18.0.drv","inputDrvs":{"/nix/store/6z1jfnqqgyqr221zgbpm30v91yfj3r45-bash-5.1-p16.drv":["out"],"/nix/store/9d754glmsvpjm5kxvgsjslvgv356kbmn-libtool-2.4.7.drv":["out"],"/nix/store/ap9g09fxbicj836zm88d56dn3ff4clxl-stdenv-linux.drv":["out"],"/nix/store/f1ksgsyplvb0sli4pls6k6vsfvmv519d-autoconf-2.71.drv":["out"],"/nix/store/jf58lcnch1bmpbi2188c59w5zr1cqrx2-automake-1.16.5.drv":["out"]},"name":"patchelf-tarball-0.18.0","outputs":{"out":"/nix/store/72pz5awc7gpwdqxrdsy8j0bvg2n7z78q-patchelf-tarball-0.18.0"},"system":"x86_64-linux"} 33 | ``` 34 | 35 | The output here is newline-separated JSON, as described at https://jsonlines.org. 36 | 37 | The code is derived from [hydra's](https://github.com/nixos/hydra) eval-jobs 38 | executable. 39 | 40 | ## Further options 41 | 42 | ```console 43 | USAGE: nix-eval-jobs [options] expr 44 | 45 | --apply Apply provided Nix function to each derivation. The result of this function will be serialized as a JSON value and stored inside `"extraValue"` key of the json line output. 46 | --arg Pass the value *expr* as the argument *name* to Nix functions. 47 | --arg-from-file Pass the contents of file *path* as the argument *name* to Nix functions. 48 | --arg-from-stdin Pass the contents of stdin as the argument *name* to Nix functions. 49 | --argstr Pass the string *string* as the argument *name* to Nix functions. 50 | --check-cache-status Check if the derivations are present locally or in any configured substituters (i.e. binary cache). The information will be exposed in the `cacheStatus` field of the JSON output. 51 | --constituents whether to evaluate constituents for Hydra's aggregate feature 52 | --debug Set the logging verbosity level to 'debug'. 53 | --eval-store 54 | The [URL of the Nix store](@docroot@/store/types/index.md#store-url-format) 55 | to use for evaluation, i.e.
to store derivations (`.drv` files) and inputs referenced by them. 56 | 57 | --expr treat the argument as a Nix expression 58 | --flake build a flake 59 | --force-recurse force recursion (don't respect recurseIntoAttrs) 60 | --gc-roots-dir garbage collector roots directory 61 | --help show usage information 62 | --impure allow impure expressions 63 | --include 64 | Add *path* to search path entries used to resolve [lookup paths](@docroot@/language/constructs/lookup-path.md) 65 | 66 | This option may be given multiple times. 67 | 68 | Paths added through `-I` take precedence over the [`nix-path` configuration setting](@docroot@/command-ref/conf-file.md#conf-nix-path) and the [`NIX_PATH` environment variable](@docroot@/command-ref/env-common.md#env-NIX_PATH). 69 | 70 | --log-format Set the format of log output; one of `raw`, `internal-json`, `bar` or `bar-with-logs`. 71 | --max-memory-size maximum evaluation memory size in megabyte (4GiB per worker by default) 72 | --meta include derivation meta field in output 73 | --option Set the Nix configuration setting *name* to *value* (overriding `nix.conf`). 74 | --override-flake Override the flake registries, redirecting *original-ref* to *resolved-ref*. 75 | --override-input Override a specific flake input (e.g. `dwarffs/nixpkgs`). 76 | --quiet Decrease the logging verbosity level. 77 | --reference-lock-file Read the given lock file instead of `flake.lock` within the top-level flake. 78 | --repair During evaluation, rewrite missing or corrupted files in the Nix store. During building, rebuild missing or corrupted store paths. 79 | --show-input-drvs Show input derivations in the output for each derivation. This is useful to get direct dependencies of a derivation. 80 | --show-trace print out a stack trace in case of evaluation errors 81 | --verbose Increase the logging verbosity level. 
82 | --workers number of evaluate workers 83 | ``` 84 | 85 | ## Potential use-cases for the tool 86 | 87 | **Faster evaluator in deployment tools.** When evaluating NixOS machines, 88 | evaluation can take several minutes when run on a single core. This limits 89 | scalability for large deployments with deployment tools such as 90 | [NixOps](https://github.com/NixOS/nixops). 91 | 92 | **Faster evaluator in CIs.** In addition to evaluation speed, it is also 93 | useful in CIs if evaluation of individual jobs can fail, as opposed to failing 94 | the entire jobset. For CIs that allow dynamic build steps to be created, one can 95 | also take advantage of the fact that nix-eval-jobs outputs the derivation path 96 | separately. This allows separate logs and success status per job instead of a 97 | single large log file. In the 98 | [wiki](https://github.com/nix-community/nix-eval-jobs/wiki#ci-example-configurations) 99 | we collect example CI configurations for various CI systems. 100 | 101 | ## Projects using nix-eval-jobs 102 | 103 | - [nix-fast-build](https://github.com/Mic92/nix-fast-build) - Combine the power 104 | of nix-eval-jobs with nix-output-monitor to speed up your evaluation and 105 | building process 106 | - [buildbot-nix](https://github.com/Mic92/buildbot-nix) - A NixOS module to make 107 | buildbot a proper Nix CI 108 | - [colmena](https://github.com/zhaofengli/colmena) - A simple, stateless NixOS 109 | deployment tool 110 | - [robotnix](https://github.com/danielfullmer/robotnix) - Build Android (AOSP) 111 | using Nix, used in their 112 | [CI](https://github.com/danielfullmer/robotnix/blob/38b80700ee4265c306dcfdcce45056e32ab2973f/.github/workflows/instantiate.yml#L18) 113 | 114 | ## FAQ 115 | 116 | ### How can I check if my package has already been uploaded to the binary cache?
117 | 118 | If you provide the `--check-cache-status` flag, each JSON line will contain a 119 | `"cacheStatus"` key with the following values: 120 | 121 | | Value | Meaning | 122 | | -------- | ------------------------------------------------------- | 123 | | local | Package is present locally | 124 | | cached | Package is present in the binary cache, but not locally | 125 | | notBuilt | Package needs to be built | 126 | 127 | ### How can I evaluate nixpkgs? 128 | 129 | If you want to evaluate nixpkgs in the same way 130 | [hydra](https://hydra.nixos.org/) does it, use this snippet: 131 | 132 | ```console 133 | $ nix-eval-jobs --force-recurse pkgs/top-level/release.nix 134 | ``` 135 | 136 | ### nix-eval-jobs consumes too much memory / is too slow 137 | 138 | By default, nix-eval-jobs spawns as many worker processes as there are hardware 139 | threads in the system and limits the memory usage for each worker to 4GiB. 140 | 141 | However, keep in mind that each worker process may need to re-evaluate shared 142 | dependencies of the attributes, which can introduce some overhead for each 143 | evaluation or cause workers to exceed their memory limit. If you encounter these 144 | situations, you can tune the following options: 145 | 146 | `--workers`: This option allows you to set the number of evaluation workers that 147 | nix-eval-jobs should spawn. You can increase or decrease this number to optimize 148 | the evaluation speed and memory usage. For example, if you have a system with 149 | many CPU cores but limited memory, you may want to reduce the number of workers 150 | to avoid exceeding the memory limit. 151 | 152 | `--max-memory-size`: This option allows you to adjust the memory limit for each 153 | worker process. By default, it's set to 4GiB, but you can increase or decrease 154 | this value as needed.
For example, if you have a system with a lot of memory and 155 | want to speed up the evaluation, you may want to increase the memory limit to 156 | allow workers to cache more data in memory before getting restarted by 157 | nix-eval-jobs. Note that this is not a hard limit and memory usage may rise 158 | above the limit momentarily before the worker process exits. 159 | 160 | Overall, tuning these options can help you optimize the performance and memory 161 | usage of nix-eval-jobs to better fit your system and evaluation needs. 162 | -------------------------------------------------------------------------------- /default.nix: -------------------------------------------------------------------------------- 1 | { 2 | stdenv, 3 | lib, 4 | nixComponents, 5 | pkgs, 6 | srcDir ? null, 7 | }: 8 | 9 | stdenv.mkDerivation { 10 | pname = "nix-eval-jobs"; 11 | version = "2.29.0"; 12 | src = 13 | if srcDir == null then 14 | lib.fileset.toSource { 15 | fileset = lib.fileset.unions [ 16 | ./meson.build 17 | ./src/meson.build 18 | (lib.fileset.fileFilter (file: file.hasExt "cc") ./src) 19 | (lib.fileset.fileFilter (file: file.hasExt "hh") ./src) 20 | ]; 21 | root = ./.; 22 | } 23 | else 24 | srcDir; 25 | buildInputs = with pkgs; [ 26 | nlohmann_json 27 | curl 28 | nixComponents.nix-store 29 | nixComponents.nix-fetchers 30 | nixComponents.nix-expr 31 | nixComponents.nix-flake 32 | nixComponents.nix-main 33 | nixComponents.nix-cmd 34 | ]; 35 | nativeBuildInputs = 36 | with pkgs; 37 | [ 38 | meson 39 | pkg-config 40 | ninja 41 | # nlohmann_json can be only discovered via cmake files 42 | cmake 43 | ] 44 | ++ (lib.optional stdenv.cc.isClang [ pkgs.clang-tools ]); 45 | 46 | passthru = { 47 | inherit nixComponents; 48 | }; 49 | 50 | meta = { 51 | description = "Hydra's builtin hydra-eval-jobs as a standalone"; 52 | homepage = "https://github.com/nix-community/nix-eval-jobs"; 53 | license = lib.licenses.gpl3; 54 | maintainers = with lib.maintainers; [ 55 | adisbladis 56 | mic92 57 | 
]; 58 | platforms = lib.platforms.unix; 59 | }; 60 | } 61 | -------------------------------------------------------------------------------- /dev/treefmt.nix: -------------------------------------------------------------------------------- 1 | { pkgs, lib, ... }: 2 | let 3 | supportsDeno = 4 | lib.meta.availableOn pkgs.stdenv.buildPlatform pkgs.deno 5 | && (builtins.tryEval pkgs.deno.outPath).success; 6 | in 7 | { 8 | flakeCheck = pkgs.hostPlatform.system != "riscv64-linux"; 9 | # Used to find the project root 10 | projectRootFile = "flake.lock"; 11 | 12 | programs.deno.enable = supportsDeno; 13 | programs.yamlfmt.enable = true; 14 | 15 | programs.clang-format.enable = true; 16 | programs.clang-format.package = pkgs.llvmPackages_latest.clang-tools; 17 | 18 | programs.deadnix.enable = true; 19 | programs.nixfmt.enable = true; 20 | programs.mypy = { 21 | enable = true; 22 | directories = { 23 | "tests" = { 24 | extraPythonPackages = [ pkgs.python3Packages.pytest ]; 25 | }; 26 | }; 27 | }; 28 | programs.ruff.format = true; 29 | programs.ruff.check = true; 30 | } 31 | -------------------------------------------------------------------------------- /flake.lock: -------------------------------------------------------------------------------- 1 | { 2 | "nodes": { 3 | "flake-parts": { 4 | "inputs": { 5 | "nixpkgs-lib": [ 6 | "nixpkgs" 7 | ] 8 | }, 9 | "locked": { 10 | "lastModified": 1741352980, 11 | "narHash": "sha256-+u2UunDA4Cl5Fci3m7S643HzKmIDAe+fiXrLqYsR2fs=", 12 | "owner": "hercules-ci", 13 | "repo": "flake-parts", 14 | "rev": "f4330d22f1c5d2ba72d3d22df5597d123fdb60a9", 15 | "type": "github" 16 | }, 17 | "original": { 18 | "owner": "hercules-ci", 19 | "repo": "flake-parts", 20 | "type": "github" 21 | } 22 | }, 23 | "nix": { 24 | "flake": false, 25 | "locked": { 26 | "lastModified": 1748154947, 27 | "narHash": "sha256-rCpANMHFIlafta6J/G0ILRd+WNSnzv/lzi40Y8f1AR8=", 28 | "owner": "NixOS", 29 | "repo": "nix", 30 | "rev": "d761dad79c79af17aa476a29749bd9d69747548f", 31 | 
"type": "github" 32 | }, 33 | "original": { 34 | "owner": "NixOS", 35 | "ref": "2.29-maintenance", 36 | "repo": "nix", 37 | "type": "github" 38 | } 39 | }, 40 | "nix-github-actions": { 41 | "inputs": { 42 | "nixpkgs": [ 43 | "nixpkgs" 44 | ] 45 | }, 46 | "locked": { 47 | "lastModified": 1737420293, 48 | "narHash": "sha256-F1G5ifvqTpJq7fdkT34e/Jy9VCyzd5XfJ9TO8fHhJWE=", 49 | "owner": "nix-community", 50 | "repo": "nix-github-actions", 51 | "rev": "f4158fa080ef4503c8f4c820967d946c2af31ec9", 52 | "type": "github" 53 | }, 54 | "original": { 55 | "owner": "nix-community", 56 | "repo": "nix-github-actions", 57 | "type": "github" 58 | } 59 | }, 60 | "nixpkgs": { 61 | "locked": { 62 | "lastModified": 1747278858, 63 | "narHash": "sha256-k0C88JEwe7+U9gsM+FCDhf3LISAKMGF87fsP5Rh2944=", 64 | "owner": "nixos", 65 | "repo": "nixpkgs", 66 | "rev": "35b60d7d59f51a6a9c124b82e490d35df399832f", 67 | "type": "github" 68 | }, 69 | "original": { 70 | "owner": "nixos", 71 | "repo": "nixpkgs", 72 | "type": "github" 73 | } 74 | }, 75 | "root": { 76 | "inputs": { 77 | "flake-parts": "flake-parts", 78 | "nix": "nix", 79 | "nix-github-actions": "nix-github-actions", 80 | "nixpkgs": "nixpkgs", 81 | "treefmt-nix": "treefmt-nix" 82 | } 83 | }, 84 | "treefmt-nix": { 85 | "inputs": { 86 | "nixpkgs": [ 87 | "nixpkgs" 88 | ] 89 | }, 90 | "locked": { 91 | "lastModified": 1748243702, 92 | "narHash": "sha256-9YzfeN8CB6SzNPyPm2XjRRqSixDopTapaRsnTpXUEY8=", 93 | "owner": "numtide", 94 | "repo": "treefmt-nix", 95 | "rev": "1f3f7b784643d488ba4bf315638b2b0a4c5fb007", 96 | "type": "github" 97 | }, 98 | "original": { 99 | "owner": "numtide", 100 | "repo": "treefmt-nix", 101 | "type": "github" 102 | } 103 | } 104 | }, 105 | "root": "root", 106 | "version": 7 107 | } 108 | -------------------------------------------------------------------------------- /flake.nix: -------------------------------------------------------------------------------- 1 | { 2 | description = "Hydra's builtin hydra-eval-jobs as a 
standalone"; 3 | 4 | # Switch back after https://nixpk.gs/pr-tracker.html?pr=396710 is finished 5 | # inputs.nixpkgs.url = "https://nixos.org/channels/nixpkgs-unstable/nixexprs.tar.xz"; 6 | inputs.nixpkgs.url = "github:nixos/nixpkgs"; 7 | inputs.nix = { 8 | url = "github:NixOS/nix/2.29-maintenance"; 9 | # We want to control the deps precisely 10 | flake = false; 11 | }; 12 | inputs.flake-parts.url = "github:hercules-ci/flake-parts"; 13 | inputs.flake-parts.inputs.nixpkgs-lib.follows = "nixpkgs"; 14 | inputs.treefmt-nix.url = "github:numtide/treefmt-nix"; 15 | inputs.treefmt-nix.inputs.nixpkgs.follows = "nixpkgs"; 16 | inputs.nix-github-actions.url = "github:nix-community/nix-github-actions"; 17 | inputs.nix-github-actions.inputs.nixpkgs.follows = "nixpkgs"; 18 | 19 | outputs = 20 | inputs@{ flake-parts, ... }: 21 | let 22 | inherit (inputs.nixpkgs) lib; 23 | inherit (inputs) self; 24 | in 25 | flake-parts.lib.mkFlake { inherit inputs; } { 26 | systems = [ 27 | "aarch64-linux" 28 | "riscv64-linux" 29 | "x86_64-linux" 30 | 31 | "aarch64-darwin" 32 | "x86_64-darwin" 33 | ]; 34 | imports = [ inputs.treefmt-nix.flakeModule ]; 35 | 36 | flake.githubActions = inputs.nix-github-actions.lib.mkGithubMatrix { 37 | platforms = { 38 | "x86_64-linux" = [ 39 | "nscloud-ubuntu-22.04-amd64-4x16-with-cache" 40 | "nscloud-cache-size-20gb" 41 | "nscloud-cache-tag-nix-eval-jobs" 42 | ]; 43 | "x86_64-darwin" = "macos-13"; 44 | "aarch64-darwin" = "macos-latest"; 45 | "aarch64-linux" = [ 46 | "nscloud-ubuntu-22.04-arm64-4x16-with-cache" 47 | "nscloud-cache-size-20gb" 48 | "nscloud-cache-tag-nix-eval-jobs" 49 | ]; 50 | }; 51 | 52 | checks = { 53 | inherit (self.checks) x86_64-linux aarch64-linux aarch64-darwin; 54 | x86_64-darwin = builtins.removeAttrs self.checks.x86_64-darwin [ "treefmt" ]; 55 | }; 56 | }; 57 | 58 | perSystem = 59 | { pkgs, self', ... 
}: 60 | let 61 | nixDependencies = lib.makeScope pkgs.newScope ( 62 | import (inputs.nix + "/packaging/dependencies.nix") { 63 | inherit pkgs; 64 | inherit (pkgs) stdenv; 65 | inputs = { }; 66 | } 67 | ); 68 | nixComponents = lib.makeScope nixDependencies.newScope ( 69 | import (inputs.nix + "/packaging/components.nix") { 70 | officialRelease = true; 71 | inherit lib pkgs; 72 | src = inputs.nix; 73 | maintainers = [ ]; 74 | } 75 | ); 76 | drvArgs = { 77 | srcDir = self; 78 | inherit nixComponents; 79 | }; 80 | in 81 | { 82 | treefmt.imports = [ ./dev/treefmt.nix ]; 83 | packages.nix-eval-jobs = pkgs.callPackage ./default.nix drvArgs; 84 | packages.clangStdenv-nix-eval-jobs = pkgs.callPackage ./default.nix ( 85 | drvArgs // { stdenv = pkgs.clangStdenv; } 86 | ); 87 | packages.default = self'.packages.nix-eval-jobs; 88 | devShells.default = pkgs.callPackage ./shell.nix drvArgs; 89 | devShells.clang = pkgs.callPackage ./shell.nix (drvArgs // { stdenv = pkgs.clangStdenv; }); 90 | 91 | checks = builtins.removeAttrs self'.packages [ "default" ] // { 92 | shell = self'.devShells.default; 93 | clang-tidy-fix = self'.packages.nix-eval-jobs.overrideAttrs (old: { 94 | nativeBuildInputs = old.nativeBuildInputs ++ [ 95 | pkgs.git 96 | (lib.hiPrio pkgs.llvmPackages_latest.clang-tools) 97 | ]; 98 | buildPhase = '' 99 | export HOME=$TMPDIR 100 | cat > $HOME/.gitconfig < 2 | #include 3 | #include 4 | #include 5 | // NOLINTBEGIN(modernize-deprecated-headers) 6 | // misc-include-cleaner wants these headers rather than the C++ version 7 | #include 8 | #include 9 | // NOLINTEND(modernize-deprecated-headers) 10 | #include 11 | #include 12 | #include 13 | #include 14 | #include 15 | #include 16 | 17 | #include "buffered-io.hh" 18 | #include "strings-portable.hh" 19 | 20 | [[nodiscard]] auto tryWriteLine(int fd, std::string s) -> int { 21 | s += "\n"; 22 | std::string_view sv{s}; 23 | while (!sv.empty()) { 24 | nix::checkInterrupt(); 25 | const ssize_t res = write(fd, sv.data(), 
sv.size()); 26 | if (res == -1 && errno != EINTR) { 27 | return -errno; 28 | } 29 | if (res > 0) { 30 | sv.remove_prefix(res); 31 | } 32 | } 33 | return 0; 34 | } 35 | 36 | LineReader::LineReader(int fd) : stream(fdopen(fd, "r")) { 37 | if (stream == nullptr) { 38 | throw nix::Error("fdopen(%d) failed: %s", fd, get_error_name(errno)); 39 | } 40 | } 41 | 42 | LineReader::LineReader(LineReader &&other) noexcept 43 | : stream(other.stream.release()), buffer(other.buffer.release()), 44 | len(other.len) { 45 | other.stream = nullptr; 46 | other.len = 0; 47 | } 48 | 49 | [[nodiscard]] auto LineReader::readLine() -> std::string_view { 50 | char *buf = buffer.release(); 51 | const ssize_t read = getline(&buf, &len, stream.get()); 52 | buffer.reset(buf); 53 | 54 | if (read == -1) { 55 | return {}; // Return an empty string_view in case of error 56 | } 57 | 58 | nix::checkInterrupt(); 59 | 60 | // Remove trailing newline 61 | char *line = buffer.get(); 62 | return {line, static_cast<size_t>(read) - 1}; 63 | } 64 | -------------------------------------------------------------------------------- /src/buffered-io.hh: -------------------------------------------------------------------------------- 1 | #pragma once 2 | #include 3 | #include 4 | #include 5 | #include 6 | #include 7 | 8 | [[nodiscard]] auto tryWriteLine(int fd, std::string s) -> int; 9 | 10 | struct FileDeleter { 11 | void operator()(FILE *file) const { 12 | if (file != nullptr) { 13 | std::fclose(file); // NOLINT(cppcoreguidelines-owning-memory) 14 | } 15 | } 16 | }; 17 | 18 | struct MemoryDeleter { 19 | void operator()(void *ptr) const { 20 | // NOLINTBEGIN(cppcoreguidelines-owning-memory,cppcoreguidelines-no-malloc) 21 | std::free(ptr); 22 | // NOLINTEND(cppcoreguidelines-owning-memory,cppcoreguidelines-no-malloc) 23 | } 24 | }; 25 | 26 | class LineReader { 27 | public: 28 | LineReader(const LineReader &) = delete; 29 | explicit LineReader(int fd); 30 | auto operator=(const LineReader &) -> LineReader & = delete; 31 |
auto operator=(LineReader &&) -> LineReader & = delete; 32 | ~LineReader() = default; 33 | 34 | LineReader(LineReader &&other) noexcept; 35 | [[nodiscard]] auto readLine() -> std::string_view; 36 | 37 | private: 38 | std::unique_ptr stream = nullptr; 39 | std::unique_ptr buffer = nullptr; 40 | size_t len = 0; 41 | }; 42 | -------------------------------------------------------------------------------- /src/constituents.cc: -------------------------------------------------------------------------------- 1 | #include 2 | #include 3 | #include 4 | #include 5 | 6 | #include "constituents.hh" 7 | 8 | namespace { 9 | // This is copied from `libutil/topo-sort.hh` in Nix and slightly modified. 10 | // However, I needed a way to use strings as identifiers to sort, but still be 11 | // able to put AggregateJob objects into this function since I'd rather not have 12 | // to transform back and forth between a list of strings and AggregateJobs in 13 | // resolveNamedConstituents. 14 | auto topoSort(const std::set &items) 15 | -> std::vector { 16 | std::vector sorted; 17 | std::set visited; 18 | std::set parents; 19 | 20 | std::map dictIdentToObject; 21 | for (const auto &it : items) { 22 | dictIdentToObject.insert({it.name, it}); 23 | } 24 | 25 | std::function 26 | dfsVisit; 27 | 28 | dfsVisit = [&](const std::string &path, const std::string *parent) { 29 | if (parents.contains(path)) { 30 | dictIdentToObject.erase(path); 31 | dictIdentToObject.erase(*parent); 32 | std::set remaining; 33 | for (auto &[k, _] : dictIdentToObject) { 34 | remaining.insert(k); 35 | } 36 | throw DependencyCycle(path, *parent, remaining); 37 | } 38 | 39 | if (!visited.insert(path).second) { 40 | return; 41 | } 42 | parents.insert(path); 43 | 44 | std::set references = dictIdentToObject[path].dependencies; 45 | 46 | for (const auto &i : references) { 47 | /* Don't traverse into items that don't exist in our starting set. 
48 | */ 49 | if (i != path && 50 | dictIdentToObject.find(i) != dictIdentToObject.end()) { 51 | dfsVisit(i, &path); 52 | } 53 | } 54 | 55 | sorted.push_back(dictIdentToObject[path]); 56 | parents.erase(path); 57 | }; 58 | 59 | for (auto &[i, _] : dictIdentToObject) { 60 | dfsVisit(i, nullptr); 61 | } 62 | 63 | return sorted; 64 | } 65 | 66 | auto insertMatchingConstituents( 67 | const std::string &childJobName, const std::string &jobName, 68 | const std::function 69 | &isBroken, 70 | const std::map &jobs, 71 | std::set &results) -> bool { 72 | bool expansionFound = false; 73 | for (const auto &[currentJobName, job] : jobs) { 74 | // Never select the job itself as constituent. Trivial way 75 | // to avoid obvious cycles. 76 | if (currentJobName == jobName) { 77 | continue; 78 | } 79 | auto jobName = currentJobName; 80 | if (fnmatch(childJobName.c_str(), jobName.c_str(), 0) == 0 && 81 | !isBroken(jobName, job)) { 82 | results.insert(jobName); 83 | expansionFound = true; 84 | } 85 | } 86 | 87 | return expansionFound; 88 | } 89 | } // namespace 90 | 91 | auto resolveNamedConstituents(const std::map &jobs) 92 | -> std::variant, DependencyCycle> { 93 | std::set aggregateJobs; 94 | for (auto const &[jobName, job] : jobs) { 95 | auto named = job.find("namedConstituents"); 96 | if (named != job.end() && !named->empty()) { 97 | bool globConstituents = job.value("globConstituents", false); 98 | std::unordered_map brokenJobs; 99 | std::set results; 100 | 101 | auto isBroken = [&brokenJobs, 102 | &jobName](const std::string &childJobName, 103 | const nlohmann::json &job) -> bool { 104 | if (job.find("error") != job.end()) { 105 | std::string error = job["error"]; 106 | nix::logger->log( 107 | nix::lvlError, 108 | nix::fmt( 109 | "aggregate job '%s' references broken job '%s': %s", 110 | jobName, childJobName, error)); 111 | brokenJobs[childJobName] = error; 112 | return true; 113 | } 114 | return false; 115 | }; 116 | 117 | for (const std::string childJobName : *named) { 118 | 
auto childJobIter = jobs.find(childJobName);
119 | if (childJobIter == jobs.end()) {
120 | if (!globConstituents) {
121 | nix::logger->log(
122 |     nix::lvlError,
123 |     nix::fmt("aggregate job '%s' references "
124 |              "non-existent job '%s'",
125 |              jobName, childJobName));
126 | brokenJobs[childJobName] = "does not exist";
127 | } else if (!insertMatchingConstituents(childJobName,
128 |                                        jobName, isBroken,
129 |                                        jobs, results)) {
130 | nix::warn("aggregate job '%s' references constituent "
131 |           "glob pattern '%s' with no matches",
132 |           jobName, childJobName);
133 | brokenJobs[childJobName] =
134 |     "constituent glob pattern had no matches";
135 | }
136 | } else if (!isBroken(childJobName, childJobIter->second)) {
137 | results.insert(childJobName);
138 | }
139 | }
140 |
141 | aggregateJobs.insert(AggregateJob(jobName, results, brokenJobs));
142 | }
143 | }
144 |
145 | try {
146 | return topoSort(aggregateJobs);
147 | } catch (DependencyCycle &e) {
148 | return e;
149 | }
150 | }
151 |
152 | void rewriteAggregates(std::map<std::string, nlohmann::json> &jobs,
153 |                        const std::vector<AggregateJob> &aggregateJobs,
154 |                        nix::ref<nix::Store> &store, nix::Path &gcRootsDir) {
155 | for (const auto &aggregateJob : aggregateJobs) {
156 | auto &job = jobs.find(aggregateJob.name)->second;
157 | auto drvPath = store->parseStorePath(std::string(job["drvPath"]));
158 | auto drv = store->readDerivation(drvPath);
159 |
160 | if (aggregateJob.brokenJobs.empty()) {
161 | for (const auto &childJobName : aggregateJob.dependencies) {
162 | auto childDrvPath = store->parseStorePath(
163 |     std::string(jobs.find(childJobName)->second["drvPath"]));
164 | auto childDrv = store->readDerivation(childDrvPath);
165 | job["constituents"].push_back(
166 |     store->printStorePath(childDrvPath));
167 | drv.inputDrvs.map[childDrvPath].value = {
168 |     childDrv.outputs.begin()->first};
169 | }
170 |
171 | std::string drvName(drvPath.name());
172 | assert(nix::hasSuffix(drvName, nix::drvExtension));
173 | drvName.resize(drvName.size() - nix::drvExtension.size());
174 |
175 | auto hashModulo = hashDerivationModulo(*store, drv, true);
176 | if (hashModulo.kind != nix::DrvHash::Kind::Regular) {
177 | continue;
178 | }
179 | auto h = hashModulo.hashes.find("out");
180 | if (h == hashModulo.hashes.end()) {
181 | continue;
182 | }
183 | auto outPath = store->makeOutputPath("out", h->second, drvName);
184 | drv.env["out"] = store->printStorePath(outPath);
185 | drv.outputs.insert_or_assign(
186 |     "out", nix::DerivationOutput::InputAddressed{.path = outPath});
187 |
188 | auto newDrvPath = nix::writeDerivation(*store, drv);
189 | auto newDrvPathS = store->printStorePath(newDrvPath);
190 |
191 | /* Register the derivation as a GC root. !!! This
192 |    registers roots for jobs that we may have already
193 |    done. */
194 | auto localStore = store.dynamic_pointer_cast<nix::LocalFSStore>();
195 | if (!gcRootsDir.empty()) {
196 | const nix::Path root =
197 |     gcRootsDir + "/" +
198 |     std::string(nix::baseNameOf(newDrvPathS));
199 |
200 | if (!nix::pathExists(root)) {
201 | auto localStore =
202 |     store.dynamic_pointer_cast<nix::LocalFSStore>();
203 | localStore->addPermRoot(newDrvPath, root);
204 | }
205 | }
206 |
207 | nix::logger->log(nix::lvlDebug,
208 |                  nix::fmt("rewrote aggregate derivation %s -> %s",
209 |                           store->printStorePath(drvPath),
210 |                           newDrvPathS));
211 |
212 | job["drvPath"] = newDrvPathS;
213 | job["outputs"]["out"] = store->printStorePath(outPath);
214 | }
215 |
216 | job.erase("namedConstituents");
217 |
218 | if (!aggregateJob.brokenJobs.empty()) {
219 | std::stringstream ss;
220 | for (const auto &[jobName, error] : aggregateJob.brokenJobs) {
221 | ss << jobName << ": " << error << "\n";
222 | }
223 | job["error"] = ss.str();
224 | }
225 |
226 | std::cout << job.dump() << "\n" << std::flush;
227 | }
228 | }
229 |
--------------------------------------------------------------------------------
/src/constituents.hh:
--------------------------------------------------------------------------------
1 | #pragma once
2 |
3 | #include
4 | #include
5 | #include
6 | #include
7 | #include
8 |
9 | #include
10 |
11 | #include
12 | #include
13 |
14 | struct DependencyCycle : public std::exception {
15 | std::string a;
16 | std::string b;
17 | std::set<std::string> remainingAggregates;
18 |
19 | DependencyCycle(std::string a, std::string b,
20 |                 const std::set<std::string> &remainingAggregates)
21 |     : a(std::move(a)), b(std::move(b)),
22 |       remainingAggregates(remainingAggregates) {}
23 |
24 | [[nodiscard]] auto message() const -> std::string {
25 | return nix::fmt("Dependency cycle: %s <-> %s", a, b);
26 | }
27 | };
28 |
29 | struct AggregateJob {
30 | std::string name;
31 | std::set<std::string> dependencies;
32 | std::unordered_map<std::string, std::string> brokenJobs;
33 |
34 | auto operator<(const AggregateJob &b) const -> bool {
35 | return name < b.name;
36 | }
37 | };
38 |
39 | auto resolveNamedConstituents(const std::map<std::string, nlohmann::json> &jobs)
40 |     -> std::variant<std::vector<AggregateJob>, DependencyCycle>;
41 |
42 | void rewriteAggregates(std::map<std::string, nlohmann::json> &jobs,
43 |                        const std::vector<AggregateJob> &aggregateJobs,
44 |                        nix::ref<nix::Store> &store, nix::Path &gcRootsDir);
45 |
--------------------------------------------------------------------------------
/src/drv.cc:
--------------------------------------------------------------------------------
1 | #include
2 | #include
3 | #include
4 | #include
5 | #include
6 | #include
7 | #include
8 | #include
9 | #include
10 | #include
11 | #include
12 | #include
13 | #include
14 | #include
15 | #include
16 | #include
17 | // required for std::optional
18 | #include //NOLINT(misc-include-cleaner)
19 | #include
20 | #include
21 | #include
22 | #include
23 | #include
24 | #include
25 | #include
26 | #include
27 | #include
28 | #include
29 | #include
30 |
31 | #include "drv.hh"
32 | #include "eval-args.hh"
33 |
34 | namespace {
35 |
36 | auto queryCacheStatus(
37 |     nix::Store &store,
38 |     std::map<std::string, std::optional<std::string>> &outputs,
39 |     std::vector<std::string> &neededBuilds,
40 |     std::vector<std::string> &neededSubstitutes,
41 |     std::vector<std::string> &unknownPaths) -> Drv::CacheStatus {
42 | uint64_t downloadSize = 0;
43 | uint64_t narSize = 0;
44 |
45 | std::vector<nix::StorePathWithOutputs> paths;
46 | for (auto const &[key, val] : outputs) {
47 | if (val) {
48 | paths.push_back(followLinksToStorePathWithOutputs(store, *val));
49 | }
50 | }
51 | nix::StorePathSet willBuild;
52 | nix::StorePathSet willSubstitute;
53 | nix::StorePathSet unknown;
54 |
55 | store.queryMissing(toDerivedPaths(paths), willBuild, willSubstitute,
56 |                    unknown, downloadSize, narSize);
57 |
58 | if (!willBuild.empty()) {
59 | // TODO: can we expose the topological sort order as a graph?
60 | auto sorted = store.topoSortPaths(willBuild);
61 | std::ranges::reverse(sorted.begin(), sorted.end());
62 | for (auto &i : sorted) {
63 | neededBuilds.push_back(store.printStorePath(i));
64 | }
65 | }
66 | if (!willSubstitute.empty()) {
67 | std::vector<const nix::StorePath *> willSubstituteSorted = {};
68 | std::ranges::for_each(willSubstitute.begin(), willSubstitute.end(),
69 |                       [&](const nix::StorePath &p) {
70 |                           willSubstituteSorted.push_back(&p);
71 |                       });
72 | std::ranges::sort(
73 |     willSubstituteSorted.begin(), willSubstituteSorted.end(),
74 |     [](const nix::StorePath *lhs, const nix::StorePath *rhs) {
75 |         if (lhs->name() == rhs->name()) {
76 |             return lhs->to_string() < rhs->to_string();
77 |         }
78 |         return lhs->name() < rhs->name();
79 |     });
80 | for (const auto *p : willSubstituteSorted) {
81 | neededSubstitutes.push_back(store.printStorePath(*p));
82 | }
83 | }
84 |
85 | if (!unknown.empty()) {
86 | for (const auto &i : unknown) {
87 | unknownPaths.push_back(store.printStorePath(i));
88 | }
89 | }
90 |
91 | if (willBuild.empty() && unknown.empty()) {
92 | if (willSubstitute.empty()) {
93 | // cacheStatus is Local if:
94 | // - there's nothing to build
95 | // - there's nothing to substitute
96 | return Drv::CacheStatus::Local;
97 | }
98 | // cacheStatus is Cached if:
99 | // - there's nothing to build
100 | // - there are paths to substitute
101 | return Drv::CacheStatus::Cached;
102 | }
103 | return Drv::CacheStatus::NotBuilt;
104 | };
105 |
106 | } // namespace
107 |
108 | /* The fields of a
derivation that are printed in json form */
109 | Drv::Drv(std::string &attrPath, nix::EvalState &state,
110 |          nix::PackageInfo &packageInfo, MyArgs &args,
111 |          std::optional<Constituents> constituents)
112 |     : constituents(std::move(constituents)) {
113 |
114 | auto localStore = state.store.dynamic_pointer_cast<nix::LocalFSStore>();
115 |
116 | try {
117 | nix::PackageInfo::Outputs outputsQueried;
118 |
119 | // CA derivations do not have static output paths, so we have to
120 | // fallback if we encounter an error
121 | try {
122 | outputsQueried = packageInfo.queryOutputs(true);
123 | } catch (const nix::Error &e) {
124 | // We could be hitting `nix::UnimplementedError`:
125 | // https://github.com/NixOS/nix/blob/39da9462e9c677026a805c5ee7ba6bb306f49c59/src/libexpr/get-drvs.cc#L106
126 | //
127 | // Or we could be hitting:
128 | // ```
129 | // error: derivation 'caDependingOnCA' does not have valid outputs:
130 | // error: while evaluating the output path of a derivation at
131 | // :19:9:
132 | //
133 | //     18|         value = commonAttrs // {
134 | //     19|           outPath = builtins.getAttr outputName strict;\n
135 | //       |           ^
136 | //     20|           drvPath = strict.drvPath;
137 | //
138 | // error: path
139 | // '/0rmq7bvk2raajd310spvd416f2jajrabcg6ar706gjbd6b8nmvks' is not in
140 | // the Nix store
141 | // ```
142 | // i.e. the placeholders were confusing it.
143 | //
144 | // FIXME: a better fix would be in Nix to first check if
145 | // `outPath` is equal to the placeholder. See
146 | // https://github.com/NixOS/nix/issues/11885.
147 | if (!nix::experimentalFeatureSettings.isEnabled(
148 |         nix::Xp::CaDerivations)) {
149 | // If we do have CA derivations enabled, we should not encounter
150 | // these errors.
151 | throw;
152 | }
153 | outputsQueried = packageInfo.queryOutputs(false);
154 | }
155 | for (auto &[outputName, optOutputPath] : outputsQueried) {
156 | if (optOutputPath) {
157 | outputs[outputName] =
158 |     localStore->printStorePath(*optOutputPath);
159 | } else {
160 | outputs[outputName] = std::nullopt;
161 | }
162 | }
163 | } catch (const std::exception &e) {
164 | state
165 |     .error<nix::EvalError>(
166 |         "derivation '%s' does not have valid outputs: %s", attrPath,
167 |         e.what())
168 |     .debugThrow();
169 | }
170 |
171 | if (args.checkCacheStatus) {
172 | // TODO: is this a bottleneck, where we should batch these queries?
173 | cacheStatus = queryCacheStatus(*localStore, outputs, neededBuilds,
174 |                                neededSubstitutes, unknownPaths);
175 | } else {
176 | cacheStatus = Drv::CacheStatus::Unknown;
177 | }
178 |
179 | if (args.meta) {
180 | nlohmann::json meta_;
181 | for (const auto &metaName : packageInfo.queryMetaNames()) {
182 | nix::NixStringContext context;
183 | std::stringstream ss;
184 |
185 | auto *metaValue = packageInfo.queryMeta(metaName);
186 | // Skip non-serialisable types
187 | // TODO: Fix serialisation of derivations to store paths
188 | if (metaValue == nullptr) {
189 | continue;
190 | }
191 |
192 | nix::printValueAsJSON(state, true, *metaValue, nix::noPos, ss,
193 |                       context);
194 |
195 | meta_[metaName] = nlohmann::json::parse(ss.str());
196 | }
197 | meta = meta_;
198 | }
199 |
200 | drvPath = localStore->printStorePath(packageInfo.requireDrvPath());
201 |
202 | name = packageInfo.queryName();
203 |
204 | // TODO: Ideally we wouldn't have to parse the derivation to get the system
205 | auto drv = localStore->readDerivation(packageInfo.requireDrvPath());
206 | system = drv.platform;
207 | if (args.showInputDrvs) {
208 | std::map<std::string, std::set<std::string>> drvs;
209 | for (const auto &[inputDrvPath, inputNode] : drv.inputDrvs.map) {
210 | std::set<std::string> inputDrvOutputs;
211 | for (const auto &outputName : inputNode.value) {
212 | inputDrvOutputs.insert(outputName);
213 | }
214 |
drvs[localStore->printStorePath(inputDrvPath)] = inputDrvOutputs; 215 | } 216 | inputDrvs = drvs; 217 | } 218 | } 219 | 220 | void to_json(nlohmann::json &json, const Drv &drv) { 221 | json = nlohmann::json{{"name", drv.name}, 222 | {"system", drv.system}, 223 | {"drvPath", drv.drvPath}, 224 | {"outputs", drv.outputs}}; 225 | 226 | if (drv.meta.has_value()) { 227 | json["meta"] = drv.meta.value(); 228 | } 229 | if (drv.inputDrvs) { 230 | json["inputDrvs"] = drv.inputDrvs.value(); 231 | } 232 | 233 | if (auto constituents = drv.constituents) { 234 | json["constituents"] = constituents->constituents; 235 | json["namedConstituents"] = constituents->namedConstituents; 236 | json["globConstituents"] = constituents->globConstituents; 237 | } 238 | 239 | if (drv.cacheStatus != Drv::CacheStatus::Unknown) { 240 | // Deprecated field 241 | json["isCached"] = drv.cacheStatus == Drv::CacheStatus::Cached || 242 | drv.cacheStatus == Drv::CacheStatus::Local; 243 | 244 | switch (drv.cacheStatus) { 245 | case Drv::CacheStatus::Cached: 246 | json["cacheStatus"] = "cached"; 247 | break; 248 | case Drv::CacheStatus::Local: 249 | json["cacheStatus"] = "local"; 250 | break; 251 | default: 252 | json["cacheStatus"] = "notBuilt"; 253 | break; 254 | } 255 | json["neededBuilds"] = drv.neededBuilds; 256 | json["neededSubstitutes"] = drv.neededSubstitutes; 257 | // TODO: is it useful to include "unknown" paths at all? 
258 | // json["unknown"] = drv.unknownPaths;
259 | }
260 | }
261 |
--------------------------------------------------------------------------------
/src/drv.hh:
--------------------------------------------------------------------------------
1 | #include
2 | #include
3 | #include
4 | // we need this include or otherwise we cannot instantiate std::optional
5 | #include //NOLINT(misc-include-cleaner)
6 | #include
7 | #include
8 | #include
9 | #include
10 | #include
11 | #include
12 | #include
13 |
14 | #include "eval-args.hh"
15 |
16 | namespace nix {
17 | class EvalState;
18 | struct PackageInfo;
19 | } // namespace nix
20 |
21 | struct Constituents {
22 | std::vector<std::string> constituents;
23 | std::vector<std::string> namedConstituents;
24 | bool globConstituents;
25 | Constituents(std::vector<std::string> constituents,
26 |              std::vector<std::string> namedConstituents,
27 |              bool globConstituents)
28 |     : constituents(std::move(constituents)),
29 |       namedConstituents(std::move(namedConstituents)),
30 |       globConstituents(globConstituents) {};
31 | };
32 |
33 | /* The fields of a derivation that are printed in json form */
34 | struct Drv {
35 | Drv(std::string &attrPath, nix::EvalState &state,
36 |     nix::PackageInfo &packageInfo, MyArgs &args,
37 |     std::optional<Constituents> constituents);
38 | std::string name;
39 | std::string system;
40 | std::string drvPath;
41 |
42 | std::map<std::string, std::optional<std::string>> outputs;
43 |
44 | std::optional<std::map<std::string, std::set<std::string>>> inputDrvs =
45 |     std::nullopt;
46 |
47 | // TODO: can we lazily allocate these?
48 | std::vector<std::string> neededBuilds;
49 | std::vector<std::string> neededSubstitutes;
50 | std::vector<std::string> unknownPaths;
51 |
52 | // TODO: we might not need to store this as it can be computed from the
53 | // above
54 | enum class CacheStatus : uint8_t {
55 |     Local,
56 |     Cached,
57 |     NotBuilt,
58 |     Unknown
59 | } cacheStatus;
60 |
61 | std::optional<nlohmann::json> meta;
62 | std::optional<Constituents> constituents;
63 | };
64 | void to_json(nlohmann::json &json, const Drv &drv);
65 |
--------------------------------------------------------------------------------
/src/eval-args.cc:
--------------------------------------------------------------------------------
1 | #include
2 | #include
3 | #include
4 | #include
5 | #include
6 | #include
7 | #include
8 | #include
9 | #include
10 | #include
11 | #include
12 | #include
13 | #include
14 | #include
15 | #include
16 | #include
17 |
18 | #include "eval-args.hh"
19 |
20 | MyArgs::MyArgs() : MixCommonArgs("nix-eval-jobs") {
21 | addFlag({
22 |     .longName = "help",
23 |     .aliases = {},
24 |     .shortName = 0,
25 |     .description = "show usage information",
26 |     .category = "",
27 |     .labels = {},
28 |     .handler = {[&]() {
29 |         std::cout << "USAGE: nix-eval-jobs [options] expr\n\n";
30 |         for (const auto &[name, flag] : longFlags) {
31 |             if (hiddenCategories.contains(flag->category)) {
32 |                 continue;
33 |             }
34 |             std::cout << "  --" << std::left << std::setw(20) << name << " "
35 |                       << flag->description << "\n";
36 |         }
37 |
38 |         ::exit(0); // NOLINT(concurrency-mt-unsafe)
39 |     }},
40 |     .completer = nullptr,
41 |     .experimentalFeature = std::nullopt,
42 | });
43 |
44 | addFlag({
45 |     .longName = "impure",
46 |     .aliases = {},
47 |     .shortName = 0,
48 |     .description = "allow impure expressions",
49 |     .category = "",
50 |     .labels = {},
51 |     .handler = {&impure, true},
52 |     .completer = nullptr,
53 |     .experimentalFeature = std::nullopt,
54 | });
55 |
56 | addFlag({
57 |     .longName = "force-recurse",
58 |     .aliases = {},
59 |     .shortName = 0,
60 |     .description = "force recursion (don't respect
recurseIntoAttrs)", 61 | .category = "", 62 | .labels = {}, 63 | .handler = {&forceRecurse, true}, 64 | .completer = nullptr, 65 | .experimentalFeature = std::nullopt, 66 | }); 67 | 68 | addFlag({ 69 | .longName = "gc-roots-dir", 70 | .aliases = {}, 71 | .shortName = 0, 72 | .description = "garbage collector roots directory", 73 | .category = "", 74 | .labels = {"path"}, 75 | .handler = {&gcRootsDir}, 76 | .completer = nullptr, 77 | .experimentalFeature = std::nullopt, 78 | }); 79 | 80 | addFlag({ 81 | .longName = "workers", 82 | .aliases = {}, 83 | .shortName = 0, 84 | .description = "number of evaluate workers", 85 | .category = "", 86 | .labels = {"workers"}, 87 | .handler = {[this](const std::string &s) { nrWorkers = std::stoi(s); }}, 88 | .completer = nullptr, 89 | .experimentalFeature = std::nullopt, 90 | }); 91 | 92 | addFlag({ 93 | .longName = "max-memory-size", 94 | .aliases = {}, 95 | .shortName = 0, 96 | .description = "maximum evaluation memory size in megabyte " 97 | "(4GiB per worker by default)", 98 | .category = "", 99 | .labels = {"size"}, 100 | .handler = {[this](const std::string &s) { 101 | maxMemorySize = std::stoi(s); 102 | }}, 103 | .completer = nullptr, 104 | .experimentalFeature = std::nullopt, 105 | }); 106 | 107 | addFlag({ 108 | .longName = "flake", 109 | .aliases = {}, 110 | .shortName = 0, 111 | .description = "build a flake", 112 | .category = "", 113 | .labels = {}, 114 | .handler = {&flake, true}, 115 | .completer = nullptr, 116 | .experimentalFeature = std::nullopt, 117 | }); 118 | 119 | addFlag({ 120 | .longName = "meta", 121 | .aliases = {}, 122 | .shortName = 0, 123 | .description = "include derivation meta field in output", 124 | .category = "", 125 | .labels = {}, 126 | .handler = {&meta, true}, 127 | .completer = nullptr, 128 | .experimentalFeature = std::nullopt, 129 | }); 130 | 131 | addFlag({ 132 | .longName = "constituents", 133 | .aliases = {}, 134 | .shortName = 0, 135 | .description = 136 | "whether to evaluate 
constituents for Hydra's aggregate feature", 137 | .category = "", 138 | .labels = {}, 139 | .handler = {&constituents, true}, 140 | .completer = nullptr, 141 | .experimentalFeature = std::nullopt, 142 | }); 143 | 144 | addFlag({ 145 | .longName = "check-cache-status", 146 | .aliases = {}, 147 | .shortName = 0, 148 | .description = "Check if the derivations are present locally or in " 149 | "any configured substituters (i.e. binary cache). The " 150 | "information will be exposed in the `cacheStatus` field " 151 | "of the JSON output.", 152 | .category = "", 153 | .labels = {}, 154 | .handler = {&checkCacheStatus, true}, 155 | .completer = nullptr, 156 | .experimentalFeature = std::nullopt, 157 | }); 158 | 159 | addFlag({ 160 | .longName = "show-input-drvs", 161 | .aliases = {}, 162 | .shortName = 0, 163 | .description = 164 | "Show input derivations in the output for each derivation. " 165 | "This is useful to get direct dependencies of a derivation.", 166 | .category = "", 167 | .labels = {}, 168 | .handler = {&showInputDrvs, true}, 169 | .completer = nullptr, 170 | .experimentalFeature = std::nullopt, 171 | }); 172 | 173 | addFlag({ 174 | .longName = "show-trace", 175 | .aliases = {}, 176 | .shortName = 0, 177 | .description = "print out a stack trace in case of evaluation errors", 178 | .category = "", 179 | .labels = {}, 180 | .handler = {&showTrace, true}, 181 | .completer = nullptr, 182 | .experimentalFeature = std::nullopt, 183 | }); 184 | 185 | addFlag({ 186 | .longName = "expr", 187 | .aliases = {}, 188 | .shortName = 'E', 189 | .description = "treat the argument as a Nix expression", 190 | .category = "", 191 | .labels = {}, 192 | .handler = {&fromArgs, true}, 193 | .completer = nullptr, 194 | .experimentalFeature = std::nullopt, 195 | }); 196 | 197 | addFlag({ 198 | .longName = "apply", 199 | .aliases = {}, 200 | .shortName = 0, 201 | .description = 202 | "Apply provided Nix function to each derivation. 
" 203 | "The result of this function will be serialized as a JSON value " 204 | "and stored inside `\"extraValue\"` key of the json line output.", 205 | .category = "", 206 | .labels = {"expr"}, 207 | .handler = {&applyExpr}, 208 | .completer = nullptr, 209 | .experimentalFeature = std::nullopt, 210 | }); 211 | 212 | // usually in MixFlakeOptions 213 | addFlag({ 214 | .longName = "override-input", 215 | .aliases = {}, 216 | .shortName = 0, 217 | .description = 218 | "Override a specific flake input (e.g. `dwarffs/nixpkgs`).", 219 | .category = category, 220 | .labels = {"input-path", "flake-url"}, 221 | .handler = {[&](const std::string &inputPath, 222 | const std::string &flakeRef) { 223 | // overriden inputs are unlocked 224 | lockFlags.allowUnlocked = true; 225 | lockFlags.inputOverrides.insert_or_assign( 226 | nix::flake::parseInputAttrPath(inputPath), 227 | nix::parseFlakeRef(nix::fetchSettings, flakeRef, 228 | nix::absPath(std::filesystem::path(".")), 229 | true)); 230 | }}, 231 | .completer = nullptr, 232 | .experimentalFeature = std::nullopt, 233 | }); 234 | 235 | addFlag({ 236 | .longName = "reference-lock-file", 237 | .aliases = {}, 238 | .shortName = 0, 239 | .description = "Read the given lock file instead of `flake.lock` " 240 | "within the top-level flake.", 241 | .category = category, 242 | .labels = {"flake-lock-path"}, 243 | .handler = {[&](const std::string &lockFilePath) { 244 | lockFlags.referenceLockFilePath = { 245 | nix::getFSSourceAccessor(), 246 | nix::CanonPath(nix::absPath(lockFilePath))}; 247 | }}, 248 | .completer = completePath, 249 | .experimentalFeature = std::nullopt, 250 | }); 251 | 252 | expectArg("expr", &releaseExpr); 253 | } 254 | 255 | void MyArgs::parseArgs(char **argv, int argc) { 256 | parseCmdline(nix::argvToStrings(argc, argv), false); 257 | } -------------------------------------------------------------------------------- /src/eval-args.hh: -------------------------------------------------------------------------------- 
1 | #pragma once 2 | 3 | #include 4 | #include 5 | #include 6 | #include 7 | #include 8 | #include 9 | #include 10 | 11 | class MyArgs : virtual public nix::MixEvalArgs, 12 | virtual public nix::MixCommonArgs, 13 | virtual public nix::RootArgs { 14 | public: 15 | virtual ~MyArgs() = default; 16 | std::string releaseExpr; 17 | std::string applyExpr; 18 | nix::Path gcRootsDir; 19 | bool flake = false; 20 | bool fromArgs = false; 21 | bool meta = false; 22 | bool showTrace = false; 23 | bool impure = false; 24 | bool forceRecurse = false; 25 | bool checkCacheStatus = false; 26 | bool showInputDrvs = false; 27 | bool constituents = false; 28 | size_t nrWorkers = 1; 29 | size_t maxMemorySize = 4096; 30 | 31 | // usually in MixFlakeOptions 32 | nix::flake::LockFlags lockFlags = {.updateLockFile = false, 33 | .writeLockFile = false, 34 | .useRegistries = false, 35 | .allowUnlocked = false, 36 | .referenceLockFilePath = {}, 37 | .outputLockFilePath = {}, 38 | .inputOverrides = {}, 39 | .inputUpdates = {}}; 40 | MyArgs(); 41 | MyArgs(MyArgs &&) = delete; 42 | auto operator=(const MyArgs &) -> MyArgs & = default; 43 | auto operator=(MyArgs &&) -> MyArgs & = delete; 44 | MyArgs(const MyArgs &) = delete; 45 | 46 | void parseArgs(char **argv, int argc); 47 | }; 48 | -------------------------------------------------------------------------------- /src/meson.build: -------------------------------------------------------------------------------- 1 | src = [ 2 | 'nix-eval-jobs.cc', 3 | 'eval-args.cc', 4 | 'drv.cc', 5 | 'buffered-io.cc', 6 | 'constituents.cc', 7 | 'worker.cc', 8 | 'strings-portable.cc' 9 | ] 10 | 11 | executable( 12 | 'nix-eval-jobs', 13 | src, 14 | dependencies: [ 15 | threads_dep, 16 | nlohmann_json_dep, 17 | libcurl_dep, 18 | 19 | nix_store_dep, 20 | nix_fetchers_dep, 21 | nix_expr_dep, 22 | nix_flake_dep, 23 | nix_main_dep, 24 | nix_cmd_dep, 25 | ], 26 | install: true, 27 | ) 28 | -------------------------------------------------------------------------------- 
/src/nix-eval-jobs.cc: -------------------------------------------------------------------------------- 1 | // NOLINTBEGIN(modernize-deprecated-headers) 2 | // misc-include-cleaner wants these header rather than the C++ versions 3 | #include 4 | #include 5 | #include 6 | // NOLINTEND(modernize-deprecated-headers) 7 | #include 8 | #include 9 | #include 10 | #include 11 | #include 12 | #include 13 | #include 14 | #include 15 | #include 16 | #include 17 | #include 18 | #include 19 | #include 20 | #include 21 | #include 22 | #include 23 | #include 24 | #include 25 | #include 26 | #include 27 | #include 28 | #include 29 | #include 30 | #include 31 | #include 32 | #include 33 | #include 34 | #include 35 | #include 36 | #include 37 | #include 38 | #include 39 | #include 40 | #include 41 | #include 42 | #include 43 | #include 44 | #include 45 | #include 46 | #include 47 | #include 48 | #include 49 | #include 50 | #include 51 | #include 52 | #include 53 | #include 54 | #include 55 | #include 56 | 57 | #include "eval-args.hh" 58 | #include "buffered-io.hh" 59 | #include "worker.hh" 60 | #include "strings-portable.hh" 61 | #include "constituents.hh" 62 | 63 | namespace { 64 | MyArgs myArgs; // NOLINT(cppcoreguidelines-avoid-non-const-global-variables) 65 | 66 | using Processor = std::function; 68 | 69 | struct OutputStreamLock { 70 | private: 71 | std::mutex mutex; 72 | std::ostream &stream; 73 | 74 | struct LockedOutputStream { 75 | public: 76 | std::unique_lock lock; 77 | std::ostream &stream; 78 | 79 | public: 80 | LockedOutputStream(std::mutex &mutex, std::ostream &stream) 81 | : lock(mutex), stream(stream) {} 82 | LockedOutputStream(LockedOutputStream &&other) 83 | : lock(std::move(other.lock)), stream(other.stream) {} 84 | 85 | template LockedOutputStream operator<<(const T &s) && { 86 | stream << s; 87 | return std::move(*this); 88 | } 89 | 90 | ~LockedOutputStream() { 91 | if (lock) { 92 | stream << std::flush; 93 | } 94 | } 95 | }; 96 | 97 | public: 98 | 
OutputStreamLock(std::ostream &stream) : stream(stream) {} 99 | 100 | LockedOutputStream lock() { return {mutex, stream}; } 101 | }; 102 | 103 | OutputStreamLock coutLock(std::cout); 104 | 105 | /* Auto-cleanup of fork's process and fds. */ 106 | struct Proc { 107 | nix::AutoCloseFD to, from; 108 | nix::Pid pid; 109 | 110 | Proc(const Proc &) = delete; 111 | Proc(Proc &&) = delete; 112 | auto operator=(const Proc &) -> Proc & = delete; 113 | auto operator=(Proc &&) -> Proc & = delete; 114 | 115 | explicit Proc(const Processor &proc) { 116 | nix::Pipe toPipe; 117 | nix::Pipe fromPipe; 118 | toPipe.create(); 119 | fromPipe.create(); 120 | auto p = startProcess( 121 | [&, 122 | to{std::make_shared( 123 | std::move(fromPipe.writeSide))}, 124 | from{std::make_shared( 125 | std::move(toPipe.readSide))}]() { 126 | nix::logger->log( 127 | nix::lvlDebug, 128 | nix::fmt("created worker process %d", getpid())); 129 | try { 130 | proc(myArgs, *to, *from); 131 | } catch (nix::Error &e) { 132 | nlohmann::json err; 133 | const auto &msg = e.msg(); 134 | err["error"] = nix::filterANSIEscapes(msg, true); 135 | nix::logger->log(nix::lvlError, msg); 136 | if (tryWriteLine(to->get(), err.dump()) < 0) { 137 | return; // main process died 138 | }; 139 | // Don't forget to print it into the STDERR log, this is 140 | // what's shown in the Hydra UI. 141 | if (tryWriteLine(to->get(), "restart") < 0) { 142 | return; // main process died 143 | } 144 | } 145 | }, 146 | nix::ProcessOptions{.allowVfork = false}); 147 | 148 | to = std::move(toPipe.writeSide); 149 | from = std::move(fromPipe.readSide); 150 | pid = p; 151 | } 152 | 153 | ~Proc() = default; 154 | }; 155 | 156 | // We'd highly prefer using std::thread here; but this won't let us configure 157 | // the stack size. macOS uses 512KiB size stacks for non-main threads, and musl 158 | // defaults to 128k. While Nix configures a 64MiB size for the main thread, this 159 | // doesn't propagate to the threads we launch here. 
It turns out, running the 160 | // evaluator under an anemic stack of 0.5MiB has it overflow way too quickly. 161 | // Hence, we have our own custom Thread struct. 162 | struct Thread { 163 | pthread_t thread = {}; // NOLINT(misc-include-cleaner) 164 | 165 | Thread(const Thread &) = delete; 166 | Thread(Thread &&) noexcept = default; 167 | ~Thread() = default; 168 | auto operator=(const Thread &) -> Thread & = delete; 169 | auto operator=(Thread &&) -> Thread & = delete; 170 | 171 | explicit Thread(std::function f) { 172 | pthread_attr_t attr = {}; // NOLINT(misc-include-cleaner) 173 | 174 | auto func = std::make_unique>(std::move(f)); 175 | 176 | int s = pthread_attr_init(&attr); 177 | if (s != 0) { 178 | throw nix::SysError(s, "calling pthread_attr_init"); 179 | } 180 | s = pthread_attr_setstacksize(&attr, 181 | static_cast(64) * 1024 * 1024); 182 | if (s != 0) { 183 | throw nix::SysError(s, "calling pthread_attr_setstacksize"); 184 | } 185 | s = pthread_create(&thread, &attr, Thread::init, func.release()); 186 | if (s != 0) { 187 | throw nix::SysError(s, "calling pthread_create"); 188 | } 189 | s = pthread_attr_destroy(&attr); 190 | if (s != 0) { 191 | throw nix::SysError(s, "calling pthread_attr_destroy"); 192 | } 193 | } 194 | 195 | void join() const { 196 | const int s = pthread_join(thread, nullptr); 197 | if (s != 0) { 198 | throw nix::SysError(s, "calling pthread_join"); 199 | } 200 | } 201 | 202 | private: 203 | static auto init(void *ptr) -> void * { 204 | std::unique_ptr> func; 205 | func.reset(static_cast *>(ptr)); 206 | 207 | (*func)(); 208 | return nullptr; 209 | } 210 | }; 211 | 212 | struct State { 213 | std::set todo = 214 | nlohmann::json::array({nlohmann::json::array()}); 215 | std::set active; 216 | std::map jobs; 217 | std::exception_ptr exc; 218 | }; 219 | 220 | void handleBrokenWorkerPipe(Proc &proc, std::string_view msg) { 221 | // we already took the process status from Proc, no 222 | // need to wait for it again to avoid error messages
223 | const pid_t pid = proc.pid.release(); 224 | while (true) { 225 | int status = 0; 226 | const int rc = waitpid(pid, &status, WNOHANG); 227 | if (rc == 0) { 228 | kill(pid, SIGKILL); 229 | throw nix::Error( 230 | "BUG: while %s, worker pipe got closed but evaluation " 231 | "worker still running?", 232 | msg); 233 | } 234 | 235 | if (rc == -1) { 236 | kill(pid, SIGKILL); 237 | throw nix::Error( 238 | "BUG: while %s, waitpid for evaluation worker failed: %s", msg, 239 | get_error_name(errno)); 240 | } 241 | if (WIFEXITED(status)) { 242 | if (WEXITSTATUS(status) == 1) { 243 | throw nix::Error( 244 | "while %s, evaluation worker exited with exit code 1, " 245 | "(possible infinite recursion)", 246 | msg); 247 | } 248 | throw nix::Error("while %s, evaluation worker exited with %d", msg, 249 | WEXITSTATUS(status)); 250 | } 251 | 252 | if (WIFSIGNALED(status)) { 253 | switch (WTERMSIG(status)) { 254 | case SIGKILL: 255 | throw nix::Error( 256 | "while %s, evaluation worker got killed by SIGKILL, " 257 | "maybe " 258 | "memory limit reached?", 259 | msg); 260 | break; 261 | #ifdef __APPLE__ 262 | case SIGBUS: 263 | throw nix::Error( 264 | "while %s, evaluation worker got killed by SIGBUS, " 265 | "(possible infinite recursion)", 266 | msg); 267 | break; 268 | #else 269 | case SIGSEGV: 270 | throw nix::Error( 271 | "while %s, evaluation worker got killed by SIGSEGV, " 272 | "(possible infinite recursion)", 273 | msg); 274 | #endif 275 | default: 276 | throw nix::Error("while %s, evaluation worker got killed by " 277 | "signal %d (%s)", 278 | msg, WTERMSIG(status), 279 | get_signal_name(WTERMSIG(status))); 280 | } 281 | } // else ignore WIFSTOPPED and WIFCONTINUED 282 | } 283 | } 284 | 285 | auto joinAttrPath(nlohmann::json &attrPath) -> std::string { 286 | std::string joined; 287 | for (auto &element : attrPath) { 288 | if (!joined.empty()) { 289 | joined += '.'; 290 | } 291 | joined += element.get(); 292 | } 293 | return joined; 294 | } 295 | 296 | void 
collector(nix::Sync &state_, std::condition_variable &wakeup) { 297 | try { 298 | std::optional> proc_; 299 | std::optional> fromReader_; 300 | 301 | while (true) { 302 | if (!proc_.has_value()) { 303 | proc_ = std::make_unique(worker); 304 | } 305 | if (!fromReader_.has_value()) { 306 | fromReader_ = 307 | std::make_unique(proc_.value()->from.release()); 308 | } 309 | auto proc = std::move(proc_.value()); 310 | auto fromReader = std::move(fromReader_.value()); 311 | 312 | /* Check whether the existing worker process is still there. */ 313 | auto s = fromReader->readLine(); 314 | if (s.empty()) { 315 | handleBrokenWorkerPipe(*proc.get(), "checking worker process"); 316 | } else if (s == "restart") { 317 | proc_ = std::nullopt; 318 | fromReader_ = std::nullopt; 319 | continue; 320 | } else if (s != "next") { 321 | try { 322 | auto json = nlohmann::json::parse(s); 323 | throw nix::Error("worker error: %s", 324 | std::string(json["error"])); 325 | } catch (const nlohmann::json::exception &e) { 326 | throw nix::Error( 327 | "Received invalid JSON from worker: %s\n json: '%s'", 328 | e.what(), s); 329 | } 330 | } 331 | 332 | /* Wait for a job name to become available. */ 333 | nlohmann::json attrPath; 334 | 335 | while (true) { 336 | nix::checkInterrupt(); 337 | auto state(state_.lock()); 338 | if ((state->todo.empty() && state->active.empty()) || 339 | state->exc) { 340 | if (tryWriteLine(proc->to.get(), "exit") < 0) { 341 | handleBrokenWorkerPipe(*proc.get(), "sending exit"); 342 | } 343 | return; 344 | } 345 | if (!state->todo.empty()) { 346 | attrPath = *state->todo.begin(); 347 | state->todo.erase(state->todo.begin()); 348 | state->active.insert(attrPath); 349 | break; 350 | } 351 | state.wait(wakeup); 352 | } 353 | 354 | /* Tell the worker to evaluate it. 
*/ 355 | if (tryWriteLine(proc->to.get(), "do " + attrPath.dump()) < 0) { 356 | auto msg = "sending attrPath '" + joinAttrPath(attrPath) + "'"; 357 | handleBrokenWorkerPipe(*proc.get(), msg); 358 | } 359 | 360 | /* Wait for the response. */ 361 | auto respString = fromReader->readLine(); 362 | if (respString.empty()) { 363 | auto msg = "reading result for attrPath '" + 364 | joinAttrPath(attrPath) + "'"; 365 | handleBrokenWorkerPipe(*proc.get(), msg); 366 | } 367 | nlohmann::json response; 368 | try { 369 | response = nlohmann::json::parse(respString); 370 | } catch (const nlohmann::json::exception &e) { 371 | throw nix::Error( 372 | "Received invalid JSON from worker: %s\n json: '%s'", 373 | e.what(), respString); 374 | } 375 | 376 | /* Handle the response. */ 377 | std::vector newAttrs; 378 | if (response.find("attrs") != response.end()) { 379 | for (auto &i : response["attrs"]) { 380 | nlohmann::json newAttr = 381 | nlohmann::json(response["attrPath"]); 382 | newAttr.emplace_back(i); 383 | newAttrs.push_back(newAttr); 384 | } 385 | } else { 386 | { 387 | auto state(state_.lock()); 388 | state->jobs.insert_or_assign(response["attr"], response); 389 | } 390 | auto named = response.find("namedConstituents"); 391 | if (named == response.end() || named->empty()) { 392 | coutLock.lock() << respString << "\n"; 393 | } 394 | } 395 | 396 | proc_ = std::move(proc); 397 | fromReader_ = std::move(fromReader); 398 | 399 | /* Add newly discovered job names to the queue. */ 400 | { 401 | auto state(state_.lock()); 402 | state->active.erase(attrPath); 403 | for (auto &p : newAttrs) { 404 | state->todo.insert(p); 405 | } 406 | wakeup.notify_all(); 407 | } 408 | } 409 | } catch (...) { 410 | auto state(state_.lock()); 411 | state->exc = std::current_exception(); 412 | wakeup.notify_all(); 413 | } 414 | } 415 | } // namespace 416 | 417 | auto main(int argc, char **argv) -> int { 418 | 419 | /* Prevent undeclared dependencies in the evaluation via 420 | $NIX_PATH. 
*/ 421 | unsetenv("NIX_PATH"); // NOLINT(concurrency-mt-unsafe) 422 | 423 | /* We are doing the garbage collection by killing forks */ 424 | setenv("GC_DONT_GC", "1", 1); // NOLINT(concurrency-mt-unsafe) 425 | 426 | /* Because of an objc quirk[1], calling curl_global_init for the first time 427 | after fork() will always result in a crash. 428 | Up until now the solution has been to set 429 | OBJC_DISABLE_INITIALIZE_FORK_SAFETY for every nix process to ignore that 430 | error. Instead of working around that error we address it at the core - 431 | by calling curl_global_init here, which should mean curl will already 432 | have been initialized by the time we try to do so in a forked process. 433 | 434 | [1] 435 | https://github.com/apple-oss-distributions/objc4/blob/01edf1705fbc3ff78a423cd21e03dfc21eb4d780/runtime/objc-initialize.mm#L614-L636 436 | */ 437 | curl_global_init(CURL_GLOBAL_ALL); 438 | 439 | auto args = std::span(argv, argc); 440 | 441 | return nix::handleExceptions(args[0], [&]() { 442 | nix::initNix(); 443 | nix::initGC(); 444 | nix::flakeSettings.configureEvalSettings(nix::evalSettings); 445 | 446 | std::optional gcRootsDir = std::nullopt; 447 | 448 | myArgs.parseArgs(argv, argc); 449 | 450 | /* FIXME: The build hook in conjunction with import-from-derivation is 451 | * causing "unexpected EOF" during eval */ 452 | nix::settings.builders = ""; 453 | 454 | /* When building a flake, use pure evaluation (no access to 455 | 'getEnv', 'currentSystem' etc. 
*/ 456 | if (myArgs.impure) { 457 | nix::evalSettings.pureEval = false; 458 | } else if (myArgs.flake) { 459 | nix::evalSettings.pureEval = true; 460 | } 461 | 462 | if (myArgs.releaseExpr.empty()) { 463 | throw nix::UsageError("no expression specified"); 464 | } 465 | 466 | if (!myArgs.gcRootsDir.empty()) { 467 | myArgs.gcRootsDir = std::filesystem::absolute(myArgs.gcRootsDir); 468 | } 469 | 470 | if (myArgs.showTrace) { 471 | nix::loggerSettings.showTrace.assign(true); 472 | } 473 | 474 | nix::Sync state_; 475 | 476 | /* Start a collector thread per worker process. */ 477 | std::vector threads; 478 | std::condition_variable wakeup; 479 | threads.reserve(myArgs.nrWorkers); 480 | for (size_t i = 0; i < myArgs.nrWorkers; i++) { 481 | threads.emplace_back( 482 | [&state_, &wakeup] { collector(state_, wakeup); }); 483 | } 484 | 485 | for (auto &thread : threads) { 486 | thread.join(); 487 | } 488 | 489 | auto state(state_.lock()); 490 | 491 | if (state->exc) { 492 | std::rethrow_exception(state->exc); 493 | } 494 | 495 | if (myArgs.constituents) { 496 | auto store = myArgs.evalStoreUrl 497 | ? 
nix::openStore(*myArgs.evalStoreUrl) 498 | : nix::openStore(); 499 | 500 | std::visit( 501 | nix::overloaded{ 502 | [&](const std::vector &namedConstituents) { 503 | rewriteAggregates(state->jobs, namedConstituents, store, 504 | myArgs.gcRootsDir); 505 | }, 506 | [&](const DependencyCycle &e) { 507 | nix::logger->log(nix::lvlError, 508 | nix::fmt("Found dependency cycle " 509 | "between jobs '%s' and '%s'", 510 | e.a, e.b)); 511 | state->jobs[e.a]["error"] = e.message(); 512 | state->jobs[e.b]["error"] = e.message(); 513 | 514 | std::cout << state->jobs[e.a].dump() << "\n" 515 | << state->jobs[e.b].dump() << "\n"; 516 | 517 | for (const auto &jobName : e.remainingAggregates) { 518 | state->jobs[jobName]["error"] = 519 | "Skipping aggregate because of a dependency " 520 | "cycle"; 521 | std::cout << state->jobs[jobName].dump() << "\n"; 522 | } 523 | }, 524 | }, 525 | resolveNamedConstituents(state->jobs)); 526 | } 527 | }); 528 | } 529 | -------------------------------------------------------------------------------- /src/strings-portable.cc: -------------------------------------------------------------------------------- 1 | #include 2 | #include "strings-portable.hh" 3 | 4 | #ifdef __APPLE__ 5 | // for sys_siglist and sys_errlist 6 | #include 7 | #include 8 | #elif defined(__FreeBSD__) 9 | #include 10 | #endif 11 | 12 | #if defined(__GLIBC__) 13 | #include //NOLINT(modernize-deprecated-headers) 14 | 15 | // Linux with glibc specific: sigabbrev_np 16 | auto get_signal_name(int sig) -> const char * { 17 | const char *name = sigabbrev_np(sig); 18 | if (name != nullptr) { 19 | return name; 20 | } 21 | return "Unknown signal"; 22 | } 23 | auto get_error_name(int err) -> const char * { 24 | const char *name = strerrorname_np(err); 25 | if (name != nullptr) { 26 | return name; 27 | } 28 | return "Unknown error"; 29 | } 30 | #elif defined(__APPLE__) || defined(__FreeBSD__) 31 | // macOS and FreeBSD have sys_siglist 32 | auto get_signal_name(int sig) -> const char * { 33 | 
if (sig >= 0 && sig < NSIG) { 34 | return sys_siglist[sig]; 35 | } 36 | return "Unknown signal"; 37 | } 38 | auto get_error_name(int err) -> const char * { 39 | if (err >= 0 && err < sys_nerr) { 40 | return sys_errlist[err]; 41 | } 42 | return "Unknown error"; 43 | } 44 | #else 45 | auto get_signal_name(int sig) -> const char * { return strsignal(sig); } 46 | auto get_error_name(int err) -> const char * { return strerror(err); } 47 | #endif 48 | -------------------------------------------------------------------------------- /src/strings-portable.hh: -------------------------------------------------------------------------------- 1 | #pragma once 2 | 3 | auto get_signal_name(int sig) -> const char *; 4 | auto get_error_name(int err) -> const char *; 5 | -------------------------------------------------------------------------------- /src/worker.cc: -------------------------------------------------------------------------------- 1 | // doesn't exist on macOS 2 | // IWYU pragma: no_include 3 | 4 | #include 5 | #include 6 | #include 7 | #include 8 | #include 9 | #include 10 | #include 11 | #include 12 | #include 13 | #include 14 | #include 15 | // NOLINTBEGIN(modernize-deprecated-headers) 16 | // misc-include-cleaner wants this header rather than the C++ version 17 | #include 18 | // NOLINTEND(modernize-deprecated-headers) 19 | #include 20 | #include 21 | #include 22 | #include 23 | #include 24 | #include 25 | #include 26 | #include 27 | #include 28 | #include 29 | #include 30 | #include 31 | #include 32 | #include 33 | #include 34 | #include 35 | #include 36 | #include 37 | #include 38 | #include 39 | #include 40 | #include 41 | #include 42 | #include 43 | #include 44 | #include 45 | #include 46 | #include 47 | 48 | #include "worker.hh" 49 | #include "drv.hh" 50 | #include "buffered-io.hh" 51 | #include "eval-args.hh" 52 | 53 | namespace nix { 54 | struct Expr; 55 | } // namespace nix 56 | 57 | namespace { 58 | auto releaseExprTopLevelValue(nix::EvalState &state, 
nix::Bindings &autoArgs, 59 | MyArgs &args) -> nix::Value * { 60 | nix::Value vTop; 61 | 62 | if (args.fromArgs) { 63 | nix::Expr *e = 64 | state.parseExprFromString(args.releaseExpr, state.rootPath(".")); 65 | state.eval(e, vTop); 66 | } else { 67 | state.evalFile(lookupFileArg(state, args.releaseExpr), vTop); 68 | } 69 | 70 | auto *vRoot = state.allocValue(); 71 | 72 | state.autoCallFunction(autoArgs, vTop, *vRoot); 73 | 74 | return vRoot; 75 | } 76 | 77 | auto attrPathJoin(nlohmann::json input) -> std::string { 78 | return std::accumulate(input.begin(), input.end(), std::string(), 79 | [](const std::string &ss, std::string s) { 80 | // Escape token if containing dots 81 | if (s.find('.') != std::string::npos) { 82 | s = "\"" + s + "\""; 83 | } 84 | return ss.empty() ? s : ss + "." + s; 85 | }); 86 | } 87 | } // namespace 88 | 89 | void worker( 90 | MyArgs &args, 91 | nix::AutoCloseFD &to, // NOLINT(bugprone-easily-swappable-parameters) 92 | nix::AutoCloseFD &from) { 93 | 94 | auto evalStore = args.evalStoreUrl ? nix::openStore(*args.evalStoreUrl) 95 | : nix::openStore(); 96 | auto state = nix::make_ref( 97 | args.lookupPath, evalStore, nix::fetchSettings, nix::evalSettings); 98 | nix::Bindings &autoArgs = *args.getAutoArgs(*state); 99 | 100 | nix::Value *vRoot = [&]() { 101 | if (args.flake) { 102 | auto [flakeRef, fragment, outputSpec] = 103 | nix::parseFlakeRefWithFragmentAndExtendedOutputsSpec( 104 | nix::fetchSettings, args.releaseExpr, 105 | nix::absPath(std::filesystem::path("."))); 106 | nix::InstallableFlake flake{ 107 | {}, state, std::move(flakeRef), fragment, outputSpec, 108 | {}, {}, args.lockFlags}; 109 | 110 | return flake.toValue(*state).first; 111 | } 112 | 113 | return releaseExprTopLevelValue(*state, autoArgs, args); 114 | }(); 115 | 116 | LineReader fromReader(from.release()); 117 | 118 | while (true) { 119 | /* Wait for the collector to send us a job name. 
*/ 120 | if (tryWriteLine(to.get(), "next") < 0) { 121 | return; // main process died 122 | } 123 | 124 | auto s = fromReader.readLine(); 125 | if (s == "exit") { 126 | break; 127 | } 128 | if (!nix::hasPrefix(s, "do ")) { 129 | std::cerr << "worker error: received invalid command '" << s 130 | << "'\n"; 131 | abort(); 132 | } 133 | auto path = nlohmann::json::parse(s.substr(3)); 134 | auto attrPathS = attrPathJoin(path); 135 | 136 | /* Evaluate it and send info back to the collector. */ 137 | nlohmann::json reply = 138 | nlohmann::json{{"attr", attrPathS}, {"attrPath", path}}; 139 | try { 140 | auto *vTmp = 141 | nix::findAlongAttrPath(*state, attrPathS, autoArgs, *vRoot) 142 | .first; 143 | 144 | auto *v = state->allocValue(); 145 | state->autoCallFunction(autoArgs, *vTmp, *v); 146 | 147 | if (v->type() == nix::nAttrs) { 148 | if (auto packageInfo = nix::getDerivation(*state, *v, false)) { 149 | 150 | std::optional maybeConstituents; 151 | if (args.constituents) { 152 | std::vector constituents; 153 | std::vector namedConstituents; 154 | bool globConstituents = false; 155 | const auto *a = v->attrs()->get( 156 | state->symbols.create("_hydraAggregate")); 157 | if (a != nullptr && 158 | state->forceBool(*a->value, a->pos, 159 | "while evaluating the " 160 | "`_hydraAggregate` attribute")) { 161 | const auto *a = v->attrs()->get( 162 | state->symbols.create("constituents")); 163 | if (a == nullptr) { 164 | state 165 | ->error( 166 | "derivation must have a ‘constituents’ " 167 | "attribute") 168 | .debugThrow(); 169 | } 170 | 171 | nix::NixStringContext context; 172 | state->coerceToString( 173 | a->pos, *a->value, context, 174 | "while evaluating the `constituents` attribute", 175 | true, false); 176 | for (const auto &c : context) { 177 | std::visit( 178 | nix::overloaded{ 179 | [&](const nix::NixStringContextElem:: 180 | Built &b) { 181 | constituents.push_back( 182 | b.drvPath->to_string( 183 | *state->store)); 184 | }, 185 | [&](const 
nix::NixStringContextElem:: 186 | Opaque &o [[maybe_unused]]) {}, 187 | [&](const nix::NixStringContextElem:: 188 | DrvDeep &d [[maybe_unused]]) {}, 189 | }, 190 | c.raw); 191 | } 192 | 193 | state->forceList(*a->value, a->pos, 194 | "while evaluating the " 195 | "`constituents` attribute"); 196 | auto constituents = std::span(a->value->listElems(), 197 | a->value->listSize()); 198 | for (const auto &v : constituents) { 199 | state->forceValue(*v, nix::noPos); 200 | if (v->type() == nix::nString) { 201 | namedConstituents.emplace_back(v->c_str()); 202 | } 203 | } 204 | 205 | const auto *glob = 206 | v->attrs()->get(state->symbols.create( 207 | "_hydraGlobConstituents")); 208 | globConstituents = 209 | glob != nullptr && 210 | state->forceBool( 211 | *glob->value, glob->pos, 212 | "while evaluating the " 213 | "`_hydraGlobConstituents` attribute"); 214 | } 215 | maybeConstituents = Constituents( 216 | constituents, namedConstituents, globConstituents); 217 | } 218 | 219 | if (args.applyExpr != "") { 220 | auto applyExpr = state->parseExprFromString( 221 | args.applyExpr, state->rootPath(".")); 222 | 223 | nix::Value vApply; 224 | nix::Value vRes; 225 | 226 | state->eval(applyExpr, vApply); 227 | 228 | state->callFunction(vApply, *v, vRes, nix::noPos); 229 | state->forceAttrs( 230 | vRes, nix::noPos, 231 | "apply needs to evaluate to an attrset"); 232 | 233 | nix::NixStringContext context; 234 | std::stringstream ss; 235 | nix::printValueAsJSON(*state, true, vRes, nix::noPos, 236 | ss, context); 237 | 238 | reply["extraValue"] = nlohmann::json::parse(ss.str()); 239 | } 240 | 241 | auto drv = Drv(attrPathS, *state, *packageInfo, args, 242 | maybeConstituents); 243 | reply.update(drv); 244 | 245 | /* Register the derivation as a GC root. !!! This 246 | registers roots for jobs that we may have already 247 | done. 
*/ 248 | if (!args.gcRootsDir.empty()) { 249 | const nix::Path root = 250 | args.gcRootsDir + "/" + 251 | std::string(nix::baseNameOf(drv.drvPath)); 252 | if (!nix::pathExists(root)) { 253 | auto localStore = 254 | state->store 255 | .dynamic_pointer_cast(); 256 | auto storePath = 257 | localStore->parseStorePath(drv.drvPath); 258 | localStore->addPermRoot(storePath, root); 259 | } 260 | } 261 | } else { 262 | auto attrs = nlohmann::json::array(); 263 | bool recurse = 264 | args.forceRecurse || 265 | path.empty(); // Don't require `recurseForDerivations 266 | // = true;` for top-level attrset 267 | 268 | for (auto &i : 269 | v->attrs()->lexicographicOrder(state->symbols)) { 270 | const std::string_view &name = state->symbols[i->name]; 271 | attrs.push_back(name); 272 | 273 | if (name == "recurseForDerivations" && 274 | !args.forceRecurse) { 275 | const auto *attrv = 276 | v->attrs()->get(state->sRecurseForDerivations); 277 | recurse = state->forceBool( 278 | *attrv->value, attrv->pos, 279 | "while evaluating recurseForDerivations"); 280 | } 281 | } 282 | if (recurse) { 283 | reply["attrs"] = std::move(attrs); 284 | } else { 285 | reply["attrs"] = nlohmann::json::array(); 286 | } 287 | } 288 | } else { 289 | // We ignore everything that cannot be built 290 | reply["attrs"] = nlohmann::json::array(); 291 | } 292 | } catch (nix::EvalError &e) { 293 | const auto &err = e.info(); 294 | std::ostringstream oss; 295 | nix::showErrorInfo(oss, err, nix::loggerSettings.showTrace.get()); 296 | auto msg = oss.str(); 297 | 298 | // Transmit the error we got from the evaluation 299 | // in the JSON output. 300 | reply["error"] = nix::filterANSIEscapes(msg, true); 301 | // Don't forget to print it into the STDERR log, this is 302 | // what's shown in the Hydra UI. 303 | std::cerr << msg << "\n"; 304 | } catch ( 305 | const std::exception &e) { // FIXME: for some reason the catch block 306 | // above doesn't trigger on macOS (?)
307 | const auto *msg = e.what(); 308 | reply["error"] = nix::filterANSIEscapes(msg, true); 309 | std::cerr << msg << '\n'; 310 | } 311 | 312 | if (tryWriteLine(to.get(), reply.dump()) < 0) { 313 | return; // main process died 314 | } 315 | 316 | /* If our RSS exceeds the maximum, exit. The collector will 317 | start a new process. */ 318 | struct rusage r = {}; // NOLINT(misc-include-cleaner) 319 | getrusage(RUSAGE_SELF, &r); 320 | const size_t maxrss = 321 | r.ru_maxrss; // NOLINT(cppcoreguidelines-pro-type-union-access) 322 | if (maxrss > args.maxMemorySize * 1024) { 323 | break; 324 | } 325 | } 326 | 327 | if (tryWriteLine(to.get(), "restart") < 0) { 328 | return; // main process died 329 | }; 330 | } 331 | -------------------------------------------------------------------------------- /src/worker.hh: -------------------------------------------------------------------------------- 1 | #pragma once 2 | 3 | #include "eval-args.hh" 4 | 5 | class MyArgs; 6 | 7 | namespace nix { 8 | class AutoCloseFD; 9 | class Bindings; 10 | class EvalState; 11 | template class ref; 12 | } // namespace nix 13 | 14 | void worker(MyArgs &args, nix::AutoCloseFD &to, nix::AutoCloseFD &from); 15 | -------------------------------------------------------------------------------- /tests/assets/ci.nix: -------------------------------------------------------------------------------- 1 | { 2 | pkgs ? import (builtins.getFlake (toString ./.)).inputs.nixpkgs { }, 3 | system ? 
pkgs.system, 4 | }: 5 | 6 | let 7 | dep-a = pkgs.runCommand "dep-a" { } '' 8 | mkdir -p $out 9 | echo "bbbbbb" > $out/dep-b 10 | ''; 11 | 12 | dep-b = pkgs.runCommand "dep-b" { } '' 13 | mkdir -p $out 14 | echo "aaaaaa" > $out/dep-b 15 | ''; 16 | in 17 | { 18 | builtJob = pkgs.writeText "job1" "job1"; 19 | substitutedJob = pkgs.nix; 20 | 21 | dontRecurse = { 22 | # This shouldn't build as `recurseForDerivations = true;` is not set 23 | # recurseForDerivations = true; 24 | 25 | # This should not build 26 | drvB = derivation { 27 | inherit system; 28 | name = "drvA"; 29 | builder = ":"; 30 | }; 31 | }; 32 | 33 | "dotted.attr" = pkgs.nix; 34 | 35 | package-with-deps = pkgs.runCommand "package-with-deps" { } '' 36 | mkdir -p $out 37 | cp -r ${dep-a} $out/dep-a 38 | cp -r ${dep-b} $out/dep-b 39 | ''; 40 | 41 | recurse = { 42 | # This should build 43 | recurseForDerivations = true; 44 | 45 | # This should not build 46 | drvB = derivation { 47 | inherit system; 48 | name = "drvB"; 49 | builder = ":"; 50 | }; 51 | }; 52 | } 53 | -------------------------------------------------------------------------------- /tests/assets/flake.lock: -------------------------------------------------------------------------------- 1 | { 2 | "nodes": { 3 | "nixpkgs": { 4 | "locked": { 5 | "lastModified": 1736042175, 6 | "narHash": "sha256-jdd5UWtLVrNEW8K6u5sy5upNAFmF3S4Y+OIeToqJ1X8=", 7 | "owner": "NixOS", 8 | "repo": "nixpkgs", 9 | "rev": "bf689c40d035239a489de5997a4da5352434632e", 10 | "type": "github" 11 | }, 12 | "original": { 13 | "owner": "NixOS", 14 | "ref": "nixpkgs-unstable", 15 | "repo": "nixpkgs", 16 | "type": "github" 17 | } 18 | }, 19 | "root": { 20 | "inputs": { 21 | "nixpkgs": "nixpkgs" 22 | } 23 | } 24 | }, 25 | "root": "root", 26 | "version": 7 27 | } 28 | -------------------------------------------------------------------------------- /tests/assets/flake.nix: -------------------------------------------------------------------------------- 1 | { 2 | inputs.nixpkgs.url = 
"github:NixOS/nixpkgs/nixpkgs-unstable"; 3 | 4 | outputs = 5 | { self, nixpkgs, ... }: 6 | let 7 | pkgs = nixpkgs.legacyPackages.x86_64-linux; 8 | in 9 | { 10 | hydraJobs = import ./ci.nix { inherit pkgs; }; 11 | 12 | legacyPackages.x86_64-linux = { 13 | brokenPkgs = { 14 | brokenPackage = throw "this is an evaluation error"; 15 | }; 16 | infiniteRecursionPkgs = { 17 | packageWithInfiniteRecursion = 18 | let 19 | recursion = [ recursion ]; 20 | in 21 | derivation { 22 | inherit (pkgs) system; 23 | name = "drvB"; 24 | recursiveAttr = recursion; 25 | builder = ":"; 26 | }; 27 | }; 28 | success = { 29 | indirect_aggregate = 30 | pkgs.runCommand "indirect_aggregate" 31 | { 32 | _hydraAggregate = true; 33 | constituents = [ 34 | "anotherone" 35 | ]; 36 | } 37 | '' 38 | touch $out 39 | ''; 40 | direct_aggregate = 41 | pkgs.runCommand "direct_aggregate" 42 | { 43 | _hydraAggregate = true; 44 | constituents = [ 45 | self.hydraJobs.builtJob 46 | ]; 47 | } 48 | '' 49 | touch $out 50 | ''; 51 | mixed_aggregate = 52 | pkgs.runCommand "mixed_aggregate" 53 | { 54 | _hydraAggregate = true; 55 | constituents = [ 56 | self.hydraJobs.builtJob 57 | "anotherone" 58 | ]; 59 | } 60 | '' 61 | touch $out 62 | ''; 63 | anotherone = pkgs.writeText "constituent" "text"; 64 | }; 65 | failures = { 66 | aggregate = 67 | pkgs.runCommand "aggregate" 68 | { 69 | _hydraAggregate = true; 70 | constituents = [ 71 | "doesntexist" 72 | "doesnteval" 73 | ]; 74 | } 75 | '' 76 | touch $out 77 | ''; 78 | doesnteval = pkgs.writeText "constituent" (toString { }); 79 | }; 80 | glob1 = { 81 | constituentA = pkgs.runCommand "constituentA" { } "touch $out"; 82 | constituentB = pkgs.runCommand "constituentB" { } "touch $out"; 83 | aggregate = pkgs.runCommand "aggregate" { 84 | _hydraAggregate = true; 85 | _hydraGlobConstituents = true; 86 | constituents = [ "*" ]; 87 | } "touch $out"; 88 | }; 89 | cycle = { 90 | aggregate0 = pkgs.runCommand "aggregate0" { 91 | _hydraAggregate = true; 92 | _hydraGlobConstituents = 
true; 93 | constituents = [ "aggregate1" ]; 94 | } "touch $out"; 95 | aggregate1 = pkgs.runCommand "aggregate1" { 96 | _hydraAggregate = true; 97 | _hydraGlobConstituents = true; 98 | constituents = [ "aggregate0" ]; 99 | } "touch $out"; 100 | }; 101 | glob2 = rec { 102 | packages = pkgs.recurseIntoAttrs { 103 | constituentA = pkgs.runCommand "constituentA" { } "touch $out"; 104 | constituentB = pkgs.runCommand "constituentB" { } "touch $out"; 105 | }; 106 | aggregate0 = pkgs.runCommand "aggregate0" { 107 | _hydraAggregate = true; 108 | _hydraGlobConstituents = true; 109 | constituents = [ 110 | "packages.*" 111 | ]; 112 | } "touch $out"; 113 | aggregate1 = pkgs.runCommand "aggregate1" { 114 | _hydraAggregate = true; 115 | _hydraGlobConstituents = true; 116 | constituents = [ 117 | "tests.*" 118 | ]; 119 | } "touch $out"; 120 | indirect_aggregate0 = pkgs.runCommand "indirect_aggregate0" { 121 | _hydraAggregate = true; 122 | constituents = [ 123 | "aggregate0" 124 | ]; 125 | } "touch $out"; 126 | mix_aggregate0 = pkgs.runCommand "mix_aggregate0" { 127 | _hydraAggregate = true; 128 | constituents = [ 129 | "aggregate0" 130 | packages.constituentA 131 | ]; 132 | } "touch $out"; 133 | }; 134 | }; 135 | }; 136 | } 137 | -------------------------------------------------------------------------------- /tests/test_eval.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | 3 | import json 4 | import os 5 | import subprocess 6 | from pathlib import Path 7 | from tempfile import TemporaryDirectory 8 | from typing import Any 9 | 10 | import pytest 11 | 12 | TEST_ROOT = Path(__file__).parent.resolve() 13 | PROJECT_ROOT = TEST_ROOT.parent 14 | BIN = PROJECT_ROOT.joinpath("build", "src", "nix-eval-jobs") 15 | 16 | 17 | def check_gc_root(gcRootDir: str, drvPath: str) -> None: 18 | """ 19 | Make sure the expected GC root exists in the given dir 20 | """ 21 | link_name = os.path.basename(drvPath) 22 | symlink_path = 
os.path.join(gcRootDir, link_name) 23 | assert os.path.islink(symlink_path) and drvPath == os.readlink(symlink_path) 24 | 25 | 26 | def common_test(extra_args: list[str]) -> list[dict[str, Any]]: 27 | with TemporaryDirectory() as tempdir: 28 | cmd = [str(BIN), "--gc-roots-dir", tempdir, "--meta"] + extra_args 29 | res = subprocess.run( 30 | cmd, 31 | cwd=TEST_ROOT.joinpath("assets"), 32 | text=True, 33 | check=True, 34 | stdout=subprocess.PIPE, 35 | ) 36 | 37 | results = [json.loads(r) for r in res.stdout.split("\n") if r] 38 | assert len(results) == 5 39 | 40 | built_job = results[0] 41 | assert built_job["attr"] == "builtJob" 42 | assert built_job["name"] == "job1" 43 | assert built_job["outputs"]["out"].startswith("/nix/store") 44 | assert built_job["drvPath"].endswith(".drv") 45 | assert built_job["meta"]["broken"] is False 46 | 47 | dotted_job = results[1] 48 | assert dotted_job["attr"] == '"dotted.attr"' 49 | assert dotted_job["attrPath"] == ["dotted.attr"] 50 | 51 | package_with_deps = results[2] 52 | assert package_with_deps["attr"] == "package-with-deps" 53 | assert package_with_deps["name"] == "package-with-deps" 54 | 55 | recurse_drv = results[3] 56 | assert recurse_drv["attr"] == "recurse.drvB" 57 | assert recurse_drv["name"] == "drvB" 58 | 59 | substituted_job = results[4] 60 | assert substituted_job["attr"] == "substitutedJob" 61 | assert substituted_job["name"].startswith("nix-") 62 | assert substituted_job["meta"]["broken"] is False 63 | 64 | assert len(list(Path(tempdir).iterdir())) == 4 65 | return results 66 | 67 | 68 | def test_flake() -> None: 69 | results = common_test(["--flake", ".#hydraJobs"]) 70 | for result in results: 71 | assert "isCached" not in result # legacy 72 | assert "cacheStatus" not in result 73 | assert "neededBuilds" not in result 74 | assert "neededSubstitutes" not in result 75 | 76 | 77 | def test_query_cache_status() -> None: 78 | results = common_test(["--flake", ".#hydraJobs", "--check-cache-status"]) 79 | # FIXME in the 
nix sandbox we cannot query binary caches
    # this would need some local one
    for result in results:
        assert "isCached" in result  # legacy
        assert "cacheStatus" in result
        assert "neededBuilds" in result
        assert "neededSubstitutes" in result


def test_expression() -> None:
    results = common_test(["ci.nix"])
    for result in results:
        assert "isCached" not in result  # legacy
        assert "cacheStatus" not in result

    with open(TEST_ROOT.joinpath("assets/ci.nix")) as ci_nix:
        common_test(["-E", ci_nix.read()])


def test_input_drvs() -> None:
    results = common_test(["ci.nix", "--show-input-drvs"])
    for result in results:
        assert "inputDrvs" in result


def test_eval_error() -> None:
    with TemporaryDirectory() as tempdir:
        cmd = [
            str(BIN),
            "--gc-roots-dir",
            tempdir,
            "--meta",
            "--workers",
            "1",
            "--flake",
            ".#legacyPackages.x86_64-linux.brokenPkgs",
        ]
        res = subprocess.run(
            cmd,
            cwd=TEST_ROOT.joinpath("assets"),
            text=True,
            stdout=subprocess.PIPE,
        )
        print(res.stdout)
        attrs = json.loads(res.stdout)
        assert attrs["attr"] == "brokenPackage"
        assert "this is an evaluation error" in attrs["error"]


def test_no_gcroot_dir() -> None:
    cmd = [
        str(BIN),
        "--meta",
        "--workers",
        "1",
        "--flake",
        ".#legacyPackages.x86_64-linux.brokenPkgs",
    ]
    res = subprocess.run(
        cmd,
        cwd=TEST_ROOT.joinpath("assets"),
        text=True,
        stdout=subprocess.PIPE,
    )
    print(res.stdout)
    attrs = json.loads(res.stdout)
    assert attrs["attr"] == "brokenPackage"
    assert "this is an evaluation error" in attrs["error"]


def test_constituents() -> None:
    with TemporaryDirectory() as tempdir:
        cmd = [
            str(BIN),
            "--gc-roots-dir",
            tempdir,
            "--meta",
            "--workers",
            "1",
            "--flake",
            ".#legacyPackages.x86_64-linux.success",
            "--constituents",
        ]
        res = subprocess.run(
            cmd,
            cwd=TEST_ROOT.joinpath("assets"),
            text=True,
            stdout=subprocess.PIPE,
        )
        print(res.stdout)
        results = [json.loads(r) for r in res.stdout.split("\n") if r]
        assert len(results) == 4
        child = results[0]
        assert child["attr"] == "anotherone"
        direct = results[1]
        assert direct["attr"] == "direct_aggregate"
        indirect = results[2]
        assert indirect["attr"] == "indirect_aggregate"
        mixed = results[3]
        assert mixed["attr"] == "mixed_aggregate"

        def absent_or_empty(f: str, d: dict) -> bool:
            return f not in d or len(d[f]) == 0

        assert absent_or_empty("namedConstituents", direct)
        assert absent_or_empty("namedConstituents", indirect)
        assert absent_or_empty("namedConstituents", mixed)

        assert direct["constituents"][0].endswith("-job1.drv")

        assert indirect["constituents"][0] == child["drvPath"]

        assert mixed["constituents"][0].endswith("-job1.drv")
        assert mixed["constituents"][1] == child["drvPath"]

        assert "error" not in direct
        assert "error" not in indirect
        assert "error" not in mixed

        check_gc_root(tempdir, direct["drvPath"])
        check_gc_root(tempdir, indirect["drvPath"])
        check_gc_root(tempdir, mixed["drvPath"])


def test_constituents_all() -> None:
    with TemporaryDirectory() as tempdir:
        cmd = [
            str(BIN),
            "--gc-roots-dir",
            tempdir,
            "--meta",
            "--workers",
            "1",
            "--flake",
            ".#legacyPackages.x86_64-linux.glob1",
            "--constituents",
        ]
        res = subprocess.run(
            cmd,
            cwd=TEST_ROOT.joinpath("assets"),
            text=True,
            stdout=subprocess.PIPE,
        )
        print(res.stdout)
        results = [json.loads(r) for r in res.stdout.split("\n") if r]
        assert len(results) == 3
        assert [x["name"] for x in results] == [
            "constituentA",
            "constituentB",
            "aggregate",
        ]
        aggregate = results[2]
        assert len(aggregate["constituents"]) == 2
        assert aggregate["constituents"][0].endswith("constituentA.drv")
        assert aggregate["constituents"][1].endswith("constituentB.drv")


def test_constituents_glob_misc() -> None:
    with TemporaryDirectory() as tempdir:
        cmd = [
            str(BIN),
            "--gc-roots-dir",
            tempdir,
            "--meta",
            "--workers",
            "1",
            "--flake",
            ".#legacyPackages.x86_64-linux.glob2",
            "--constituents",
        ]
        res = subprocess.run(
            cmd,
            cwd=TEST_ROOT.joinpath("assets"),
            text=True,
            stdout=subprocess.PIPE,
        )
        print(res.stdout)
        results = [json.loads(r) for r in res.stdout.split("\n") if r]
        assert len(results) == 6
        assert [x["name"] for x in results] == [
            "constituentA",
            "constituentB",
            "aggregate0",
            "aggregate1",
            "indirect_aggregate0",
            "mix_aggregate0",
        ]
        aggregate = results[2]
        assert len(aggregate["constituents"]) == 2
        assert aggregate["constituents"][0].endswith("constituentA.drv")
        assert aggregate["constituents"][1].endswith("constituentB.drv")
        aggregate = results[4]
        assert len(aggregate["constituents"]) == 1
        assert aggregate["constituents"][0].endswith("aggregate0.drv")
        failed = results[3]
        assert "constituents" in failed
        assert failed["error"] == "tests.*: constituent glob pattern had no matches\n"

        assert results[4]["constituents"][0] == results[2]["drvPath"]
        assert results[5]["constituents"][0] == results[0]["drvPath"]
        assert results[5]["constituents"][1] == results[2]["drvPath"]


def test_constituents_cycle() -> None:
    with TemporaryDirectory() as tempdir:
        cmd = [
            str(BIN),
            "--gc-roots-dir",
            tempdir,
            "--meta",
            "--workers",
            "1",
            "--flake",
            ".#legacyPackages.x86_64-linux.cycle",
            "--constituents",
        ]
        res = subprocess.run(
            cmd,
            cwd=TEST_ROOT.joinpath("assets"),
            text=True,
            stdout=subprocess.PIPE,
        )
        print(res.stdout)
        results = [json.loads(r) for r in res.stdout.split("\n") if r]
        assert len(results) == 2
        assert [x["name"] for x in results] == ["aggregate0", "aggregate1"]
        for i in results:
            assert i["error"] == "Dependency cycle: aggregate0 <-> aggregate1"


def test_constituents_error() -> None:
    with TemporaryDirectory() as tempdir:
        cmd = [
            str(BIN),
            "--gc-roots-dir",
            tempdir,
            "--meta",
            "--workers",
            "1",
            "--flake",
            ".#legacyPackages.x86_64-linux.failures",
            "--constituents",
        ]
        res = subprocess.run(
            cmd,
            cwd=TEST_ROOT.joinpath("assets"),
            text=True,
            stdout=subprocess.PIPE,
        )
        print(res.stdout)
        results = [json.loads(r) for r in res.stdout.split("\n") if r]
        assert len(results) == 2
        child = results[0]
        assert child["attr"] == "doesnteval"
        assert "error" in child
        aggregate = results[1]
        assert aggregate["attr"] == "aggregate"
        assert "namedConstituents" not in aggregate
        assert "doesntexist: does not exist\n" in aggregate["error"]
        assert "constituents" in aggregate


def test_apply() -> None:
    with TemporaryDirectory() as tempdir:
        applyExpr = """drv: {
            the-name = drv.name;
            version = drv.version or null;
        }"""

        cmd = [
            str(BIN),
            "--gc-roots-dir",
            tempdir,
            "--workers",
            "1",
            "--apply",
            applyExpr,
            "--flake",
            ".#hydraJobs",
        ]
        res = subprocess.run(
            cmd,
            cwd=TEST_ROOT.joinpath("assets"),
            text=True,
            check=True,
            stdout=subprocess.PIPE,
        )

        print(res.stdout)
        results = [json.loads(r) for r in res.stdout.split("\n") if r]

        assert len(results) == 5  # sanity check that we assert against all jobs

        # Check that nix-eval-jobs applied the expression correctly
        # and extracted 'version' as 'version' and 'name' as 'the-name'
        assert results[0]["extraValue"]["the-name"] == "job1"
        assert results[0]["extraValue"]["version"] is None
        assert results[1]["extraValue"]["the-name"].startswith("nix-")
        assert results[1]["extraValue"]["version"] is not None
        assert results[2]["extraValue"]["the-name"] == "package-with-deps"
        assert results[2]["extraValue"]["version"] is None
        assert results[3]["extraValue"]["the-name"] == "drvB"
        assert results[3]["extraValue"]["version"] is None
        assert results[4]["extraValue"]["the-name"].startswith("nix-")
        assert results[4]["extraValue"]["version"] is not None


@pytest.mark.infiniterecursion
def test_recursion_error() -> None:
    with TemporaryDirectory() as tempdir:
        cmd = [
            str(BIN),
            "--gc-roots-dir",
            tempdir,
            "--meta",
            "--workers",
            "1",
            "--flake",
            ".#legacyPackages.x86_64-linux.infiniteRecursionPkgs",
        ]
        res = subprocess.run(
            cmd,
            cwd=TEST_ROOT.joinpath("assets"),
            text=True,
            stderr=subprocess.PIPE,
        )
        assert res.returncode == 1
        print(res.stderr)
        assert "packageWithInfiniteRecursion" in res.stderr
        assert "possible infinite recursion" in res.stderr
--------------------------------------------------------------------------------