| Commit message | Author | Age |
In practice, almost all requests to Hydra take longer than the default
timeout of 30 seconds.
This commit bumps all requests to the maximum timeout of 15 minutes. This
should make the hydra-report.hs script more reliable and fail less often.
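The idea behind the change can be sketched as follows. This is a hypothetical Python illustration, not the actual Haskell code in hydra-report.hs; the constant names are invented:

```python
# Illustrative sketch: bump a per-request timeout from a typical 30-second
# client default to Hydra's 15-minute server-side maximum.

DEFAULT_TIMEOUT_S = 30          # common HTTP client default
HYDRA_MAX_TIMEOUT_S = 15 * 60   # 15 minutes, matching Hydra's own limit

def request_kwargs(timeout_s: int = HYDRA_MAX_TIMEOUT_S) -> dict:
    """Build keyword arguments for an HTTP client call.

    Passing the maximum timeout gives slow Hydra endpoints as long as the
    server itself is willing to keep working on a response.
    """
    return {"timeout": timeout_s}
```

Every request then uses `request_kwargs()` instead of relying on the client's default.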
* luarocks-packages-updater: init
The goal is to make it possible to maintain out-of-tree luarocks packages
without needing to clone nixpkgs.
maintainers/scripts/update-luarocks-packages is renamed to
pkgs/development/lua-modules/updater/updater.py
Once merged, you can run for instance:
nix run nixpkgs#luarocks-packages-updater -- -i contrib/luarocks-packages.csv -o contrib/generated-packages.nix
I also set the parallelism (--proc) to 1 by default, since luarocks
otherwise fails because of https://github.com/luarocks/luarocks/issues/1540
* Update maintainers/scripts/pluginupdate.py
Co-authored-by: Marc Jakobi <mrcjkb89@outlook.com>
---------
Co-authored-by: Marc Jakobi <mrcjkb89@outlook.com>
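The workflow above (read a CSV of packages, update them serially because of the luarocks concurrency bug) can be sketched like this. The CSV columns and helper names are assumptions for illustration, not the exact layout of luarocks-packages.csv:

```python
import csv
import io

# Hypothetical miniature of luarocks-packages.csv; real columns may differ.
SAMPLE_CSV = """name,src,version
lpeg,,1.0.2
luafilesystem,,1.8.0
"""

def read_packages(text: str) -> list[dict]:
    """Parse one package per CSV row into a dict keyed by column name."""
    return list(csv.DictReader(io.StringIO(text)))

def update_all(packages, update_one, processes: int = 1):
    """Update packages one at a time.

    --proc defaults to 1 because luarocks misbehaves when invoked
    concurrently (see luarocks/luarocks#1540), so updates run serially.
    """
    assert processes == 1, "parallel updates are disabled by default"
    return [update_one(pkg) for pkg in packages]
```

With `update_one` being whatever actually shells out to luarocks, the serial loop avoids the race entirely.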
will apply to vimPlugins/kakoune/luarocks update
lpty was introduced in https://github.com/NixOS/nixpkgs/pull/6529, but there
has been no release in 6 years (https://luarocks.org/modules/gunnar_z/lpty),
and the archive with the source code has disappeared (it could be fetched
from the rock or the nix cache, but that is probably not worth it).
luaPackages: add some lua packages
* use attrname in log messages instead of the GitHub handle
* don't remove users simply for empty GitHub handles if their user
still exists (prevents #259555)
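The changed removal rule can be sketched as follows. The data shapes and function names are illustrative, not the script's actual code:

```python
# Sketch: a maintainer entry is only dropped when the GitHub account is
# known to be gone, not merely because the handle field is empty.

def should_remove(maintainer: dict, github_user_exists) -> bool:
    handle = maintainer.get("github", "")
    if not handle:
        # An empty handle alone is no longer grounds for removal.
        return False
    return not github_user_exists(handle)

def log_name(attrname: str, maintainer: dict) -> str:
    # Log by attribute name rather than GitHub handle, which may be empty.
    return attrname
```

`github_user_exists` stands in for whatever API call checks the account; only a handle that is present but no longer resolves triggers removal.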
This will allow building bootstrap tools for platforms with
non-default libcs, like *-unknown-linux-musl.
This gets rid of limitedSupportedSystems/systemsWithAnySupport. There
was no need to use systemsWithAnySupport for supportDarwin, because it
was always equivalent to supportedSystems for that purpose, and the
only other way it was used was for determining which platforms to
build the bootstrap tools for, so we might as well use a more explicit
parameter for that, and then we can change how it works without
affecting the rest of the Hydra jobs.
Not affecting the rest of the Hydra jobs is important, because if we
changed all jobs to use config triples, we'd end up renaming every
Hydra job. That might still be worth thinking about at some point,
but it's unnecessary at this point (and would be a lot of work).
I've checked by running
nix-eval-jobs --force-recurse pkgs/top-level/release.nix
that the actual bootstrap tools derivations are unaffected by this
change, and that the only other jobs that change are the ones that depend
on the hash of all of Nixpkgs. Of the other jobset entrypoints that
end up importing pkgs/top-level/release.nix, none used the
limitedSupportedSystems parameter, so they should all be unaffected as
well.
The nixpkgs documentation mentions how to update out-of-tree plugins, but
one problem is that it requires a nixpkgs clone.
This makes it more convenient.
I've had the need to generate vim plugin and lua overlays for other
projects unrelated to nix, and this will make updates easier (i.e. just
run `nix run nixpkgs#vimPluginsUpdater -- --proc=1`, or with the legacy commands:
`nix-shell -p vimPluginsUpdater --run vim-plugins-updater`).
I added an optional "nixpkgs" argument to the command-line parser, which is
the path to a nixpkgs checkout. By default it is the current folder.
update-luarocks-packages: format with black
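The optional nixpkgs argument described above might look roughly like this with argparse. The flag spelling and defaults are assumptions, not the exact pluginupdate.py interface:

```python
import argparse
import os

def build_parser() -> argparse.ArgumentParser:
    """Hypothetical sketch of the updater's command-line interface."""
    parser = argparse.ArgumentParser(prog="vim-plugins-updater")
    parser.add_argument(
        "--nixpkgs",
        default=os.getcwd(),
        help="path to a nixpkgs checkout (default: the current directory)",
    )
    parser.add_argument(
        "--proc",
        type=int,
        default=1,
        help="number of parallel update processes",
    )
    return parser
```

Defaulting to the current directory means the updater works from inside a checkout with no arguments at all, while out-of-tree users point `--nixpkgs` wherever their clone lives.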
Add support for `sha512`, and refactor to make it easy to add more hash
functions in the future. Also, skip autogenerated files.
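One common shape for such a refactor is a dispatch table, so that supporting another hash function is a one-line change. This is a sketch of the idea, not the script's actual structure:

```python
import hashlib

# All supported hash functions live in one table; adding another
# algorithm only means adding an entry here.
HASHERS = {
    "sha256": hashlib.sha256,
    "sha512": hashlib.sha512,   # newly supported
}

def digest(algo: str, data: bytes) -> str:
    """Hex digest of `data` under the named algorithm."""
    try:
        hasher = HASHERS[algo]
    except KeyError:
        raise ValueError(f"unsupported hash function: {algo}") from None
    return hasher(data).hexdigest()
```

Callers never branch on the algorithm name themselves, so every code path automatically gains new entries in the table.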
This should provide nicer `throw` messages, and avoid back-and-forth like
https://github.com/NixOS/nixpkgs/pull/254418#discussion_r1322076574
This flag needs to be passed through to hydra-report.hs.
This seems to be the server-side Hydra timeout as well, so it makes
sense to wait as long as Hydra will keep trying to produce a response.
This change adds a --slow flag to hydra-report.hs get-report, which
causes it to fetch the cheap evaluation overview endpoint (which only
contains build ids and metadata). The gathered information is then used
to request each build's status individually instead of in bulk. This is
very slow, but useful as a last resort if the bulk endpoint times out.
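The two code paths can be sketched like this. `fetch_overview`, `fetch_build`, and `fetch_bulk` stand in for the real Hydra HTTP calls; the structure is illustrative only:

```python
# Sketch of the --slow strategy: fetch a cheap overview (build ids only),
# then query each build individually instead of one heavy bulk request.

def get_report(fetch_overview, fetch_build, slow: bool, fetch_bulk=None):
    if not slow and fetch_bulk is not None:
        # Fast path: one bulk request that returns full build statuses,
        # but which may time out on large jobsets.
        return fetch_bulk()
    # Slow path: many small requests, each cheap enough not to time out.
    overview = fetch_overview()          # e.g. [{"id": 1}, {"id": 2}]
    return [fetch_build(build["id"]) for build in overview]
```

The trade-off is request count versus per-request cost: N small requests take longer overall, but none of them individually hits the server timeout.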
Since every failure in the jobset means one request to get the log when
generating the list of newly broken packages, we need to add an option
to disable log requesting in case a lot of new breakage needs to be
entered.
Nix does not respect `NIX_PATH` when the `nix-path` setting in nix.conf is set
dotnet: misc fixes
SuperSandro2000/check-hydra-by-maintainer-no-alias
maintainers/scripts/check-hydra-by-maintainer: don't check aliases
|
| | |
|
| | |
|
|/
|
|
| |
as this script would otherwise create a bunch of unnecessary, noisy
renames that aren't actual renames
If we want to push only one branch, we have to specify the branch and
remote explicitly. Pushing to origin doesn't work for everyone, since
some of us have an origin remote that can't be pushed to. Using plain
`git push` has the problem that it tries to push all checked-out
branches, which fails e.g. if some branches (staging, staging-next, …) are
behind their remote counterparts.
The solution is to require everyone to configure a per-branch pushRemote
for haskell-updates, which will then be used by merge-and-open-pr.sh.
copy-tarballs: use all the urls of each file
If a file specifies multiple urls, try fetching all of them until
nix-prefetch-url is successful.
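The retry behaviour can be sketched as follows; `fetch` stands in for a nix-prefetch-url invocation, and the function name is invented for illustration:

```python
# Sketch: attempt each mirror URL in turn and stop at the first success,
# instead of only ever trying the first URL listed for a file.

def fetch_first_available(urls, fetch):
    errors = []
    for url in urls:
        try:
            return fetch(url)
        except OSError as exc:
            # Remember the failure and move on to the next mirror.
            errors.append((url, str(exc)))
    raise OSError(f"all {len(urls)} urls failed: {errors}")
```

A file is only reported as unfetchable when every listed URL has failed, which matters for sources whose primary host has gone away but whose mirrors survive.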
copy-tarballs.pl: fix DEBUG mode
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
When DEBUG is defined, the script just prints the URLs without actually
checking whether they're already cached, or downloading/uploading anything.
That got broken because connecting to S3 now fails fast. This PR makes sure
we skip connecting to S3 in DEBUG mode.
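The shape of the fix can be sketched like this. The original copy-tarballs.pl is Perl; this Python illustration uses invented names (`connect_s3`, `process_url`) to show where the early return goes:

```python
import os

def process_url(url, connect_s3, debug=bool(os.environ.get("DEBUG"))):
    """Sketch: in DEBUG mode, print the URL and return before any S3
    connection is attempted, since connecting now fails fast and would
    otherwise abort the dry run."""
    if debug:
        print(url)          # dry run: no cache check, no download/upload
        return "skipped"
    bucket = connect_s3()
    return bucket.upload(url)
```

The key point is simply ordering: the DEBUG check must come before the connection is opened, because the connection itself is what fails.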