Diffstat (limited to 'nixos')
-rw-r--r--  nixos/doc/manual/release-notes/rl-2311.section.md | 10
-rw-r--r--  nixos/lib/testing/driver.nix | 7
-rw-r--r--  nixos/maintainers/scripts/lxd/lxd-container-image-inner.nix | 20
-rw-r--r--  nixos/maintainers/scripts/lxd/lxd-container-image.nix (renamed from nixos/maintainers/scripts/lxd/lxd-image.nix) | 6
-rw-r--r--  nixos/maintainers/scripts/lxd/lxd-image-inner.nix | 95
-rw-r--r--  nixos/maintainers/scripts/lxd/lxd-virtual-machine-image-inner.nix | 20
-rw-r--r--  nixos/maintainers/scripts/lxd/lxd-virtual-machine-image.nix | 27
-rw-r--r--  nixos/modules/config/update-users-groups.pl | 15
-rw-r--r--  nixos/modules/config/users-groups.nix | 2
-rw-r--r--  nixos/modules/i18n/input-method/uim.nix | 2
-rw-r--r--  nixos/modules/module-list.nix | 1
-rw-r--r--  nixos/modules/security/sudo.nix | 4
-rw-r--r--  nixos/modules/services/mail/stalwart-mail.nix | 106
-rw-r--r--  nixos/modules/services/matrix/mautrix-telegram.nix | 1
-rw-r--r--  nixos/modules/services/matrix/mautrix-whatsapp.nix | 99
-rw-r--r--  nixos/modules/services/misc/atuin.nix | 2
-rw-r--r--  nixos/modules/services/misc/gitlab.nix | 7
-rw-r--r--  nixos/modules/services/monitoring/mimir.nix | 14
-rw-r--r--  nixos/modules/services/networking/jool.nix | 313
-rw-r--r--  nixos/modules/services/networking/nftables.nix | 22
-rw-r--r--  nixos/modules/services/x11/desktop-managers/budgie.nix | 3
-rw-r--r--  nixos/modules/virtualisation/anbox.nix | 59
-rw-r--r--  nixos/modules/virtualisation/lxc-container.nix | 130
-rw-r--r--  nixos/modules/virtualisation/lxc-image-metadata.nix | 104
-rw-r--r--  nixos/modules/virtualisation/lxc-instance-common.nix | 30
-rw-r--r--  nixos/modules/virtualisation/lxd-virtual-machine.nix | 46
-rw-r--r--  nixos/modules/virtualisation/lxd.nix | 129
-rw-r--r--  nixos/release.nix | 51
-rw-r--r--  nixos/tests/all-tests.nix | 6
-rw-r--r--  nixos/tests/anbox.nix | 40
-rw-r--r--  nixos/tests/custom-ca.nix | 4
-rw-r--r--  nixos/tests/jool.nix | 106
-rw-r--r--  nixos/tests/lxd/container.nix | 13
-rw-r--r--  nixos/tests/lxd/default.nix | 3
-rw-r--r--  nixos/tests/lxd/preseed.nix | 71
-rw-r--r--  nixos/tests/lxd/virtual-machine.nix | 64
-rw-r--r--  nixos/tests/stalwart-mail.nix | 117
37 files changed, 1209 insertions, 540 deletions
diff --git a/nixos/doc/manual/release-notes/rl-2311.section.md b/nixos/doc/manual/release-notes/rl-2311.section.md
index eb6fb6fc6e45..ce3f2c579b51 100644
--- a/nixos/doc/manual/release-notes/rl-2311.section.md
+++ b/nixos/doc/manual/release-notes/rl-2311.section.md
@@ -6,6 +6,8 @@
 
 - Support for WiFi 6 (IEEE 802.11ax) and WPA3-SAE-PK was enabled in the `hostapd` package, along with a significant rework of the hostapd module.
 
+- LXD now supports virtual machine instances to complement the existing container support.
+
 ## New Services {#sec-release-23.11-new-services}
 
 - [MCHPRS](https://github.com/MCHPR/MCHPRS), a multithreaded Minecraft server built for redstone. Available as [services.mchprs](#opt-services.mchprs.enable).
@@ -34,7 +36,9 @@
 
 - [sitespeed-io](https://sitespeed.io), a tool that can generate metrics (timings, diagnostics) for websites. Available as [services.sitespeed-io](#opt-services.sitespeed-io.enable).
 
-- [Jool](https://nicmx.github.io/Jool/en/index.html), an Open Source implementation of IPv4/IPv6 translation on Linux. Available as [networking.jool.enable](#opt-networking.jool.enable).
+- [stalwart-mail](https://stalw.art), an all-in-one email server (SMTP, IMAP, JMAP). Available as [services.stalwart-mail](#opt-services.stalwart-mail.enable).
+
+- [Jool](https://nicmx.github.io/Jool/en/index.html), a kernelspace NAT64 and SIIT implementation, providing translation between IPv4 and IPv6. Available as [networking.jool.enable](#opt-networking.jool.enable).
 
 - [Apache Guacamole](https://guacamole.apache.org/), a cross-platform, clientless remote desktop gateway. Available as [services.guacamole-server](#opt-services.guacamole-server.enable) and [services.guacamole-client](#opt-services.guacamole-client.enable) services.
 
@@ -90,6 +94,8 @@
 
 - `etcd` has been updated to 3.5; you will want to read the [3.3 to 3.4](https://etcd.io/docs/v3.5/upgrades/upgrade_3_4/) and [3.4 to 3.5](https://etcd.io/docs/v3.5/upgrades/upgrade_3_5/) upgrade guides.
 
+- `gitlab` installations created or updated between versions \[15.11.0, 15.11.2] have an incorrect database schema. This will become a problem when upgrading to `gitlab` >=16.2.0. A workaround for affected users can be found in the [GitLab docs](https://docs.gitlab.com/ee/update/versions/gitlab_16_changes.html#undefined-column-error-upgrading-to-162-or-later).
+
 - `consul` has been updated to `1.16.0`. See the [release note](https://github.com/hashicorp/consul/releases/tag/v1.16.0) for more details. Once a new Consul version has started and upgraded its data directory, it generally cannot be downgraded to the previous version.
 
 - `himalaya` has been updated to `0.8.0`, which drops native TLS support (in favor of Rustls) and adds OAuth 2.0 support. See the [release note](https://github.com/soywod/himalaya/releases/tag/v0.8.0) for more details.
@@ -171,6 +177,8 @@
 
 - The `html-proofer` package has been updated from major version 3 to major version 5, which includes [breaking changes](https://github.com/gjtorikian/html-proofer/blob/v5.0.8/UPGRADING.md).
 
+- `kratos` has been updated from 0.10.1 to the first stable version 1.0.0. Please read the [0.10.1 to 0.11.0](https://github.com/ory/kratos/releases/tag/v0.11.0), [0.11.0 to 0.11.1](https://github.com/ory/kratos/releases/tag/v0.11.1), [0.11.1 to 0.13.0](https://github.com/ory/kratos/releases/tag/v0.13.0) and [0.13.0 to 1.0.0](https://github.com/ory/kratos/releases/tag/v1.0.0) upgrade guides. The most notable breaking change is the introduction of one-time passwords (`code`) and the change of the default recovery strategy from `link` to `code`.
+
 ## Other Notable Changes {#sec-release-23.11-notable-changes}
 
 - The Cinnamon module now enables XDG desktop integration by default. If you are experiencing collisions related to xdg-desktop-portal-gtk you can safely remove `xdg.portal.extraPortals = [ pkgs.xdg-desktop-portal-gtk ];` from your NixOS configuration.
diff --git a/nixos/lib/testing/driver.nix b/nixos/lib/testing/driver.nix
index 23574698c062..cc97ca72083f 100644
--- a/nixos/lib/testing/driver.nix
+++ b/nixos/lib/testing/driver.nix
@@ -175,7 +175,12 @@ in
   };
 
   config = {
-    _module.args.hostPkgs = config.hostPkgs;
+    _module.args = {
+      hostPkgs =
+        # Comment is in nixos/modules/misc/nixpkgs.nix
+        lib.mkOverride lib.modules.defaultOverridePriority
+          config.hostPkgs.__splicedPackages;
+    };
 
     driver = withChecks driver;
 
diff --git a/nixos/maintainers/scripts/lxd/lxd-container-image-inner.nix b/nixos/maintainers/scripts/lxd/lxd-container-image-inner.nix
new file mode 100644
index 000000000000..7b743d170bc6
--- /dev/null
+++ b/nixos/maintainers/scripts/lxd/lxd-container-image-inner.nix
@@ -0,0 +1,20 @@
+# Edit this configuration file to define what should be installed on
+# your system.  Help is available in the configuration.nix(5) man page
+# and in the NixOS manual (accessible by running ‘nixos-help’).
+
+{ config, pkgs, lib, ... }:
+
+{
+  imports =
+    [
+      # Include the default lxd configuration.
+      ../../../modules/virtualisation/lxc-container.nix
+      # Include the container-specific autogenerated configuration.
+      ./lxd.nix
+    ];
+
+  networking.useDHCP = false;
+  networking.interfaces.eth0.useDHCP = true;
+
+  system.stateVersion = "21.05"; # Did you read the comment?
+}
diff --git a/nixos/maintainers/scripts/lxd/lxd-image.nix b/nixos/maintainers/scripts/lxd/lxd-container-image.nix
index 07605c5c3120..3bd1320b2b68 100644
--- a/nixos/maintainers/scripts/lxd/lxd-image.nix
+++ b/nixos/maintainers/scripts/lxd/lxd-container-image.nix
@@ -1,4 +1,4 @@
-{ lib, config, pkgs, ... }:
+{ lib, pkgs, ... }:
 
 {
   imports = [
@@ -16,8 +16,8 @@
   system.activationScripts.config = ''
     if [ ! -e /etc/nixos/configuration.nix ]; then
       mkdir -p /etc/nixos
-      cat ${./lxd-image-inner.nix} > /etc/nixos/configuration.nix
-      sed 's|../../../modules/virtualisation/lxc-container.nix|<nixpkgs/nixos/modules/virtualisation/lxc-container.nix>|g' -i /etc/nixos/configuration.nix
+      cat ${./lxd-container-image-inner.nix} > /etc/nixos/configuration.nix
+      ${lib.getExe pkgs.gnused} 's|../../../modules/virtualisation/lxc-container.nix|<nixpkgs/nixos/modules/virtualisation/lxc-container.nix>|g' -i /etc/nixos/configuration.nix
     fi
   '';
 
diff --git a/nixos/maintainers/scripts/lxd/lxd-image-inner.nix b/nixos/maintainers/scripts/lxd/lxd-image-inner.nix
deleted file mode 100644
index c1a9b1aacd18..000000000000
--- a/nixos/maintainers/scripts/lxd/lxd-image-inner.nix
+++ /dev/null
@@ -1,95 +0,0 @@
-# Edit this configuration file to define what should be installed on
-# your system.  Help is available in the configuration.nix(5) man page
-# and in the NixOS manual (accessible by running ‘nixos-help’).
-
-{ config, pkgs, lib, ... }:
-
-{
-  imports =
-    [ # Include the default lxd configuration.
-      ../../../modules/virtualisation/lxc-container.nix
-      # Include the container-specific autogenerated configuration.
-      ./lxd.nix
-    ];
-
-  # networking.hostName = mkForce "nixos"; # Overwrite the hostname.
-  # networking.wireless.enable = true;  # Enables wireless support via wpa_supplicant.
-
-  # Set your time zone.
-  # time.timeZone = "Europe/Amsterdam";
-
-  # The global useDHCP flag is deprecated, therefore explicitly set to false here.
-  # Per-interface useDHCP will be mandatory in the future, so this generated config
-  # replicates the default behaviour.
-  networking.useDHCP = false;
-  networking.interfaces.eth0.useDHCP = true;
-
-  # Configure network proxy if necessary
-  # networking.proxy.default = "http://user:password@proxy:port/";
-  # networking.proxy.noProxy = "127.0.0.1,localhost,internal.domain";
-
-  # Select internationalisation properties.
-  # i18n.defaultLocale = "en_US.UTF-8";
-  # console = {
-  #   font = "Lat2-Terminus16";
-  #   keyMap = "us";
-  # };
-
-  # Enable the X11 windowing system.
-  # services.xserver.enable = true;
-
-  # Configure keymap in X11
-  # services.xserver.layout = "us";
-  # services.xserver.xkbOptions = "eurosign:e";
-
-  # Enable CUPS to print documents.
-  # services.printing.enable = true;
-
-  # Enable sound.
-  # sound.enable = true;
-  # hardware.pulseaudio.enable = true;
-
-  # Enable touchpad support (enabled default in most desktopManager).
-  # services.xserver.libinput.enable = true;
-
-  # Define a user account. Don't forget to set a password with ‘passwd’.
-  # users.users.alice = {
-  #   isNormalUser = true;
-  #   extraGroups = [ "wheel" ]; # Enable ‘sudo’ for the user.
-  # };
-
-  # List packages installed in system profile. To search, run:
-  # $ nix search wget
-  # environment.systemPackages = with pkgs; [
-  #   vim # Do not forget to add an editor to edit configuration.nix! The Nano editor is also installed by default.
-  #   wget
-  #   firefox
-  # ];
-
-  # Some programs need SUID wrappers, can be configured further or are
-  # started in user sessions.
-  # programs.mtr.enable = true;
-  # programs.gnupg.agent = {
-  #   enable = true;
-  #   enableSSHSupport = true;
-  # };
-
-  # List services that you want to enable:
-
-  # Enable the OpenSSH daemon.
-  # services.openssh.enable = true;
-
-  # Open ports in the firewall.
-  # networking.firewall.allowedTCPPorts = [ ... ];
-  # networking.firewall.allowedUDPPorts = [ ... ];
-  # Or disable the firewall altogether.
-  # networking.firewall.enable = false;
-
-  # This value determines the NixOS release from which the default
-  # settings for stateful data, like file locations and database versions
-  # on your system were taken. It’s perfectly fine and recommended to leave
-  # this value at the release version of the first install of this system.
-  # Before changing this value read the documentation for this option
-  # (e.g. man configuration.nix or on https://nixos.org/nixos/options.html).
-  system.stateVersion = "21.05"; # Did you read the comment?
-}
diff --git a/nixos/maintainers/scripts/lxd/lxd-virtual-machine-image-inner.nix b/nixos/maintainers/scripts/lxd/lxd-virtual-machine-image-inner.nix
new file mode 100644
index 000000000000..a8f2c63ac5c6
--- /dev/null
+++ b/nixos/maintainers/scripts/lxd/lxd-virtual-machine-image-inner.nix
@@ -0,0 +1,20 @@
+# Edit this configuration file to define what should be installed on
+# your system.  Help is available in the configuration.nix(5) man page
+# and in the NixOS manual (accessible by running ‘nixos-help’).
+
+{ config, pkgs, lib, ... }:
+
+{
+  imports =
+    [
+      # Include the default lxd configuration.
+      ../../../modules/virtualisation/lxd-virtual-machine.nix
+      # Include the virtual-machine-specific autogenerated configuration.
+      ./lxd.nix
+    ];
+
+  networking.useDHCP = false;
+  networking.interfaces.eth0.useDHCP = true;
+
+  system.stateVersion = "23.05"; # Did you read the comment?
+}
diff --git a/nixos/maintainers/scripts/lxd/lxd-virtual-machine-image.nix b/nixos/maintainers/scripts/lxd/lxd-virtual-machine-image.nix
new file mode 100644
index 000000000000..eb0d9217d402
--- /dev/null
+++ b/nixos/maintainers/scripts/lxd/lxd-virtual-machine-image.nix
@@ -0,0 +1,27 @@
+{ lib, pkgs, ... }:
+
+{
+  imports = [
+    ../../../modules/virtualisation/lxd-virtual-machine.nix
+  ];
+
+  virtualisation.lxc.templates.nix = {
+    enable = true;
+    target = "/etc/nixos/lxd.nix";
+    template = ./nix.tpl;
+    when = ["create" "copy"];
+  };
+
+  # copy the config for nixos-rebuild
+  system.activationScripts.config = ''
+    if [ ! -e /etc/nixos/configuration.nix ]; then
+      mkdir -p /etc/nixos
+      cat ${./lxd-virtual-machine-image-inner.nix} > /etc/nixos/configuration.nix
+      ${lib.getExe pkgs.gnused} 's|../../../modules/virtualisation/lxd-virtual-machine.nix|<nixpkgs/nixos/modules/virtualisation/lxd-virtual-machine.nix>|g' -i /etc/nixos/configuration.nix
+    fi
+  '';
+
+  # Network
+  networking.useDHCP = false;
+  networking.interfaces.enp5s0.useDHCP = true;
+}
diff --git a/nixos/modules/config/update-users-groups.pl b/nixos/modules/config/update-users-groups.pl
index 5236264e16b7..3785ba8c5fb8 100644
--- a/nixos/modules/config/update-users-groups.pl
+++ b/nixos/modules/config/update-users-groups.pl
@@ -4,7 +4,7 @@ use File::Path qw(make_path);
 use File::Slurp;
 use Getopt::Long;
 use JSON;
-use DateTime;
+use Time::Piece;
 
 # Keep track of deleted uids and gids.
 my $uidMapFile = "/var/lib/nixos/uid-map";
@@ -26,17 +26,8 @@ sub updateFile {
 # Converts an ISO date to number of days since 1970-01-01
 sub dateToDays {
     my ($date) = @_;
-    my ($year, $month, $day) = split('-', $date, -3);
-    my $dt = DateTime->new(
-        year      => $year,
-        month     => $month,
-        day       => $day,
-        hour      => 0,
-        minute    => 0,
-        second    => 0,
-        time_zone => 'UTC',
-    );
-    return $dt->epoch / 86400;
+    my $time = Time::Piece->strptime($date, "%Y-%m-%d");
+    return $time->epoch / 60 / 60 / 24;
 }
 
 sub nscdInvalidate {
diff --git a/nixos/modules/config/users-groups.nix b/nixos/modules/config/users-groups.nix
index 9629e3964c96..684b4bc8fbcc 100644
--- a/nixos/modules/config/users-groups.nix
+++ b/nixos/modules/config/users-groups.nix
@@ -648,7 +648,7 @@ in {
         install -m 0700 -d /root
         install -m 0755 -d /home
 
-        ${pkgs.perl.withPackages (p: [ p.FileSlurp p.JSON p.DateTime ])}/bin/perl \
+        ${pkgs.perl.withPackages (p: [ p.FileSlurp p.JSON ])}/bin/perl \
         -w ${./update-users-groups.pl} ${spec}
       '';
     };
diff --git a/nixos/modules/i18n/input-method/uim.nix b/nixos/modules/i18n/input-method/uim.nix
index 9491ab2640fc..7225783b2a6f 100644
--- a/nixos/modules/i18n/input-method/uim.nix
+++ b/nixos/modules/i18n/input-method/uim.nix
@@ -10,7 +10,7 @@ in
 
     i18n.inputMethod.uim = {
       toolbar = mkOption {
-        type    = types.enum [ "gtk" "gtk3" "gtk-systray" "gtk3-systray" "qt4" ];
+        type    = types.enum [ "gtk" "gtk3" "gtk-systray" "gtk3-systray" "qt5" ];
         default = "gtk";
         example = "gtk-systray";
         description = lib.mdDoc ''
diff --git a/nixos/modules/module-list.nix b/nixos/modules/module-list.nix
index ec6424682ccc..abe0847ade21 100644
--- a/nixos/modules/module-list.nix
+++ b/nixos/modules/module-list.nix
@@ -594,6 +594,7 @@
   ./services/mail/rss2email.nix
   ./services/mail/schleuder.nix
   ./services/mail/spamassassin.nix
+  ./services/mail/stalwart-mail.nix
   ./services/mail/sympa.nix
   ./services/mail/zeyple.nix
   ./services/matrix/appservice-discord.nix
diff --git a/nixos/modules/security/sudo.nix b/nixos/modules/security/sudo.nix
index 9ac91bd0d368..d225442773c6 100644
--- a/nixos/modules/security/sudo.nix
+++ b/nixos/modules/security/sudo.nix
@@ -192,6 +192,10 @@ in
   ###### implementation
 
   config = mkIf cfg.enable {
+    assertions = [
+      { assertion = cfg.package.pname != "sudo-rs";
+        message = "The NixOS `sudo` module does not work with `sudo-rs` yet."; }
+    ];
 
     # We `mkOrder 600` so that the default rule shows up first, but there is
     # still enough room for a user to `mkBefore` it.
diff --git a/nixos/modules/services/mail/stalwart-mail.nix b/nixos/modules/services/mail/stalwart-mail.nix
new file mode 100644
index 000000000000..fdbdc99070b9
--- /dev/null
+++ b/nixos/modules/services/mail/stalwart-mail.nix
@@ -0,0 +1,106 @@
+{ config, lib, pkgs, ... }:
+
+with lib;
+
+let
+  cfg = config.services.stalwart-mail;
+  configFormat = pkgs.formats.toml { };
+  configFile = configFormat.generate "stalwart-mail.toml" cfg.settings;
+  dataDir = "/var/lib/stalwart-mail";
+
+in {
+  options.services.stalwart-mail = {
+    enable = mkEnableOption (mdDoc "the Stalwart all-in-one email server");
+    package = mkPackageOptionMD pkgs "stalwart-mail" { };
+
+    settings = mkOption {
+      inherit (configFormat) type;
+      default = { };
+      description = mdDoc ''
+        Configuration options for the Stalwart email server.
+        See <https://stalw.art/docs/> for available options.
+
+        By default, the module is configured to store everything locally.
+      '';
+    };
+  };
+
+  config = mkIf cfg.enable {
+    # Default config: all local
+    services.stalwart-mail.settings = {
+      global.tracing.method = mkDefault "stdout";
+      global.tracing.level = mkDefault "info";
+      queue.path = mkDefault "${dataDir}/queue";
+      report.path = mkDefault "${dataDir}/reports";
+      store.db.path = mkDefault "${dataDir}/data/index.sqlite3";
+      store.blob.type = mkDefault "local";
+      store.blob.local.path = mkDefault "${dataDir}/data/blobs";
+      resolver.type = mkDefault "system";
+    };
+
+    systemd.services.stalwart-mail = {
+      wantedBy = [ "multi-user.target" ];
+      after = [ "local-fs.target" "network.target" ];
+
+      preStart = ''
+        mkdir -p ${dataDir}/{queue,reports,data/blobs}
+      '';
+
+      serviceConfig = {
+        ExecStart =
+          "${cfg.package}/bin/stalwart-mail --config=${configFile}";
+
+        # Base from template resources/systemd/stalwart-mail.service
+        Type = "simple";
+        LimitNOFILE = 65536;
+        KillMode = "process";
+        KillSignal = "SIGINT";
+        Restart = "on-failure";
+        RestartSec = 5;
+        StandardOutput = "syslog";
+        StandardError = "syslog";
+        SyslogIdentifier = "stalwart-mail";
+
+        DynamicUser = true;
+        User = "stalwart-mail";
+        StateDirectory = "stalwart-mail";
+
+        # Bind standard privileged ports
+        AmbientCapabilities = [ "CAP_NET_BIND_SERVICE" ];
+        CapabilityBoundingSet = [ "CAP_NET_BIND_SERVICE" ];
+
+        # Hardening
+        DeviceAllow = [ "" ];
+        LockPersonality = true;
+        MemoryDenyWriteExecute = true;
+        PrivateDevices = true;
+        PrivateUsers = false;  # incompatible with CAP_NET_BIND_SERVICE
+        ProcSubset = "pid";
+        PrivateTmp = true;
+        ProtectClock = true;
+        ProtectControlGroups = true;
+        ProtectHome = true;
+        ProtectHostname = true;
+        ProtectKernelLogs = true;
+        ProtectKernelModules = true;
+        ProtectKernelTunables = true;
+        ProtectProc = "invisible";
+        ProtectSystem = "strict";
+        RestrictAddressFamilies = [ "AF_INET" "AF_INET6" ];
+        RestrictNamespaces = true;
+        RestrictRealtime = true;
+        RestrictSUIDSGID = true;
+        SystemCallArchitectures = "native";
+        SystemCallFilter = [ "@system-service" "~@privileged" ];
+        UMask = "0077";
+      };
+    };
+
+    # Make admin commands available in the shell
+    environment.systemPackages = [ cfg.package ];
+  };
+
+  meta = {
+    maintainers = with maintainers; [ happysalada pacien ];
+  };
+}
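
Since the module stores everything locally by default, a host only needs to
enable the service and define its listeners. A minimal sketch (the hostname
and listener values below are illustrative assumptions, not module defaults):

  { ... }:
  {
    services.stalwart-mail = {
      enable = true;
      settings = {
        server.hostname = "mail.example.org";   # assumed value
        server.listener.smtp = {                # illustrative listener
          bind = [ "[::]:25" ];
          protocol = "smtp";
        };
      };
    };
  }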
diff --git a/nixos/modules/services/matrix/mautrix-telegram.nix b/nixos/modules/services/matrix/mautrix-telegram.nix
index 17032ed808e9..97a6ba858e00 100644
--- a/nixos/modules/services/matrix/mautrix-telegram.nix
+++ b/nixos/modules/services/matrix/mautrix-telegram.nix
@@ -159,7 +159,6 @@ in {
         if [ ! -f '${registrationFile}' ]; then
           ${pkgs.mautrix-telegram}/bin/mautrix-telegram \
             --generate-registration \
-            --base-config='${pkgs.mautrix-telegram}/${pkgs.mautrix-telegram.pythonModule.sitePackages}/mautrix_telegram/example-config.yaml' \
             --config='${settingsFile}' \
             --registration='${registrationFile}'
         fi
diff --git a/nixos/modules/services/matrix/mautrix-whatsapp.nix b/nixos/modules/services/matrix/mautrix-whatsapp.nix
index 80c85980196f..c4dc48213495 100644
--- a/nixos/modules/services/matrix/mautrix-whatsapp.nix
+++ b/nixos/modules/services/matrix/mautrix-whatsapp.nix
@@ -11,53 +11,47 @@
   settingsFileUnsubstituted = settingsFormat.generate "mautrix-whatsapp-config-unsubstituted.json" cfg.settings;
   settingsFormat = pkgs.formats.json {};
   appservicePort = 29318;
+
+  mkDefaults = lib.mapAttrsRecursive (n: v: lib.mkDefault v);
+  defaultConfig = {
+    homeserver.address = "http://localhost:8448";
+    appservice = {
+      hostname = "[::]";
+      port = appservicePort;
+      database.type = "sqlite3";
+      database.uri = "${dataDir}/mautrix-whatsapp.db";
+      id = "whatsapp";
+      bot.username = "whatsappbot";
+      bot.displayname = "WhatsApp Bridge Bot";
+      as_token = "";
+      hs_token = "";
+    };
+    bridge = {
+      username_template = "whatsapp_{{.}}";
+      displayname_template = "{{if .BusinessName}}{{.BusinessName}}{{else if .PushName}}{{.PushName}}{{else}}{{.JID}}{{end}} (WA)";
+      double_puppet_server_map = {};
+      login_shared_secret_map = {};
+      command_prefix = "!wa";
+      permissions."*" = "relay";
+      relay.enabled = true;
+    };
+    logging = {
+      min_level = "info";
+      writers = lib.singleton {
+        type = "stdout";
+        format = "pretty-colored";
+        time_format = " ";
+      };
+    };
+  };
+
 in {
-  imports = [];
   options.services.mautrix-whatsapp = {
-    enable = lib.mkEnableOption "mautrix-whatsapp, a puppeting/relaybot bridge between Matrix and WhatsApp.";
+    enable = lib.mkEnableOption (lib.mdDoc "mautrix-whatsapp, a puppeting/relaybot bridge between Matrix and WhatsApp");
 
     settings = lib.mkOption {
       type = settingsFormat.type;
-      default = {
-        appservice = {
-          address = "http://localhost:${toString appservicePort}";
-          hostname = "[::]";
-          port = appservicePort;
-          database = {
-            type = "sqlite3";
-            uri = "${dataDir}/mautrix-whatsapp.db";
-          };
-          id = "whatsapp";
-          bot = {
-            username = "whatsappbot";
-            displayname = "WhatsApp Bridge Bot";
-          };
-          as_token = "";
-          hs_token = "";
-        };
-        bridge = {
-          username_template = "whatsapp_{{.}}";
-          displayname_template = "{{if .BusinessName}}{{.BusinessName}}{{else if .PushName}}{{.PushName}}{{else}}{{.JID}}{{end}} (WA)";
-          double_puppet_server_map = {};
-          login_shared_secret_map = {};
-          command_prefix = "!wa";
-          permissions."*" = "relay";
-          relay.enabled = true;
-        };
-        logging = {
-          min_level = "info";
-          writers = [
-            {
-              type = "stdout";
-              format = "pretty-colored";
-            }
-            {
-              type = "file";
-              format = "json";
-            }
-          ];
-        };
-      };
+      default = defaultConfig;
       description = lib.mdDoc ''
         {file}`config.yaml` configuration as a Nix attribute set.
         Configuration options should match those described in
@@ -117,10 +111,22 @@ in {
   };
 
   config = lib.mkIf cfg.enable {
-    services.mautrix-whatsapp.settings = {
-      homeserver.domain = lib.mkDefault config.services.matrix-synapse.settings.server_name;
+
+    users.users.mautrix-whatsapp = {
+      isSystemUser = true;
+      group = "mautrix-whatsapp";
+      home = dataDir;
+      description = "Mautrix-WhatsApp bridge user";
     };
 
+    users.groups.mautrix-whatsapp = {};
+
+    services.mautrix-whatsapp.settings = lib.mkMerge (map mkDefaults [
+      defaultConfig
+      # Note: this is defined here to avoid the docs depending on `config`
+      { homeserver.domain = config.services.matrix-synapse.settings.server_name; }
+    ]);
+
     systemd.services.mautrix-whatsapp = {
       description = "Mautrix-WhatsApp Service - A WhatsApp bridge for Matrix";
 
@@ -158,10 +164,11 @@ in {
       '';
 
       serviceConfig = {
-        DynamicUser = true;
+        User = "mautrix-whatsapp";
+        Group = "mautrix-whatsapp";
         EnvironmentFile = cfg.environmentFile;
         StateDirectory = baseNameOf dataDir;
-        WorkingDirectory = "${dataDir}";
+        WorkingDirectory = dataDir;
         ExecStart = ''
           ${pkgs.mautrix-whatsapp}/bin/mautrix-whatsapp \
           --config='${settingsFile}' \
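
The `mkDefaults` helper above applies `lib.mkDefault` to every leaf of
`defaultConfig`, so user-supplied settings override individual keys rather
than replacing the whole set. A standalone sketch of that behaviour (the
option and values here are illustrative):

  let
    pkgs = import <nixpkgs> { };
    inherit (pkgs) lib;
    settingsFormat = pkgs.formats.json { };
    mkDefaults = lib.mapAttrsRecursive (n: v: lib.mkDefault v);
    eval = lib.evalModules {
      modules = [
        { options.settings = lib.mkOption { type = settingsFormat.type; default = { }; }; }
        { settings = mkDefaults { appservice = { id = "whatsapp"; port = 29318; }; }; }
        # Overriding one leaf leaves the sibling default untouched.
        { settings.appservice.port = 29319; }
      ];
    };
  in
    eval.config.settings # => { appservice = { id = "whatsapp"; port = 29319; }; }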
diff --git a/nixos/modules/services/misc/atuin.nix b/nixos/modules/services/misc/atuin.nix
index 57ff02df7d68..8d2c1b5242ff 100644
--- a/nixos/modules/services/misc/atuin.nix
+++ b/nixos/modules/services/misc/atuin.nix
@@ -6,7 +6,7 @@ in
 {
   options = {
     services.atuin = {
-      enable = lib.mkEnableOption (mdDoc "Enable server for shell history sync with atuin");
+      enable = lib.mkEnableOption (mdDoc "Atuin server for shell history sync");
 
       openRegistration = mkOption {
         type = types.bool;
diff --git a/nixos/modules/services/misc/gitlab.nix b/nixos/modules/services/misc/gitlab.nix
index c5e38b498829..b399ccc38f58 100644
--- a/nixos/modules/services/misc/gitlab.nix
+++ b/nixos/modules/services/misc/gitlab.nix
@@ -1088,6 +1088,11 @@ in {
         ''Support for container registries other than gitlab-container-registry has ended since GitLab 16.0.0 and is scheduled for removal in a future release.
           Please back up your data and migrate to the gitlab-container-registry package.''
       )
+      (mkIf
+        (versionAtLeast (getVersion cfg.packages.gitlab) "16.2.0" && versionOlder (getVersion cfg.packages.gitlab) "16.5.0")
+        ''GitLab instances created or updated between versions [15.11.0, 15.11.2] have an incorrect database schema.
+        Check the upstream documentation for a workaround: https://docs.gitlab.com/ee/update/versions/gitlab_16_changes.html#undefined-column-error-upgrading-to-162-or-later''
+      )
     ];
 
     assertions = [
@@ -1655,7 +1660,7 @@ in {
         Restart = "on-failure";
         WorkingDirectory = "${cfg.packages.gitlab}/share/gitlab";
         ExecStart = concatStringsSep " " [
-          "${cfg.packages.gitlab.rubyEnv}/bin/puma"
+          "${cfg.packages.gitlab.rubyEnv}/bin/bundle" "exec" "puma"
           "-e production"
           "-C ${cfg.statePath}/config/puma.rb"
           "-w ${cfg.puma.workers}"
diff --git a/nixos/modules/services/monitoring/mimir.nix b/nixos/modules/services/monitoring/mimir.nix
index edca9b7be4ff..6ed139b22974 100644
--- a/nixos/modules/services/monitoring/mimir.nix
+++ b/nixos/modules/services/monitoring/mimir.nix
@@ -32,11 +32,21 @@ in {
       type = types.package;
       description = lib.mdDoc ''Mimir package to use.'';
     };
+
+    extraFlags = mkOption {
+      type = types.listOf types.str;
+      default = [];
+      example = [ "--config.expand-env=true" ];
+      description = lib.mdDoc ''
+        Specify a list of additional command line flags,
+        which get escaped and are then passed to Mimir.
+      '';
+    };
   };
 
   config = mkIf cfg.enable {
     # for mimirtool
-    environment.systemPackages = [ pkgs.mimir ];
+    environment.systemPackages = [ cfg.package ];
 
     assertions = [{
       assertion = (
@@ -60,7 +70,7 @@ in {
                else cfg.configFile;
       in
       {
-        ExecStart = "${cfg.package}/bin/mimir --config.file=${conf}";
+        ExecStart = "${cfg.package}/bin/mimir --config.file=${conf} ${escapeShellArgs cfg.extraFlags}";
         DynamicUser = true;
         Restart = "always";
         ProtectSystem = "full";
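
With `extraFlags`, additional command line arguments reach the `ExecStart`
line shell-escaped; the flag below is the one from the option's `example`:

  {
    services.mimir = {
      enable = true;
      extraFlags = [ "--config.expand-env=true" ];
    };
  }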
diff --git a/nixos/modules/services/networking/jool.nix b/nixos/modules/services/networking/jool.nix
index 3aafbe40967c..d2d2b0956e8a 100644
--- a/nixos/modules/services/networking/jool.nix
+++ b/nixos/modules/services/networking/jool.nix
@@ -16,7 +16,7 @@ let
     TemporaryFileSystem = [ "/" ];
     BindReadOnlyPaths = [
       builtins.storeDir
-      "/run/current-system/kernel-modules"
+      "/run/booted-system/kernel-modules"
     ];
 
     # Give capabilities to load the module and configure it
@@ -31,26 +31,96 @@ let
 
   configFormat = pkgs.formats.json {};
 
-  mkDefaultAttrs = lib.mapAttrs (n: v: lib.mkDefault v);
+  # Generate the config file of instance `name`
+  nat64Conf = name:
+    configFormat.generate "jool-nat64-${name}.conf"
+      (cfg.nat64.${name} // { instance = name; });
+  siitConf = name:
+    configFormat.generate "jool-siit-${name}.conf"
+      (cfg.siit.${name} // { instance = name; });
+
+  # NAT64 config type
+  nat64Options = lib.types.submodule {
+    # The format is plain JSON
+    freeformType = configFormat.type;
+    # Some options with a default value
+    options.framework = lib.mkOption {
+      type = lib.types.enum [ "netfilter" "iptables" ];
+      default = "netfilter";
+      description = lib.mdDoc ''
+        The framework to use for attaching Jool's translation to the existing
+        kernel packet processing rules. See the
+        [documentation](https://nicmx.github.io/Jool/en/intro-jool.html#design)
+        for the differences between the two options.
+      '';
+    };
+    options.global.pool6 = lib.mkOption {
+      type = lib.types.strMatching "[[:xdigit:]:]+/[[:digit:]]+"
+        // { description = "Network prefix in CIDR notation"; };
+      default = "64:ff9b::/96";
+      description = lib.mdDoc ''
+        The prefix used for embedding IPv4 into IPv6 addresses.
+        Defaults to the well-known NAT64 prefix, defined by
+        [RFC 6052](https://datatracker.ietf.org/doc/html/rfc6052).
+      '';
+    };
+  };
 
-  defaultNat64 = {
-    instance = "default";
-    framework = "netfilter";
-    global.pool6 = "64:ff9b::/96";
+  # SIIT config type
+  siitOptions = lib.types.submodule {
+    # The format is, again, plain JSON
+    freeformType = configFormat.type;
+    # Some options with a default value
+    options = { inherit (nat64Options.getSubOptions []) framework; };
+  };
+
+  makeNat64Unit = name: opts: {
+    "jool-nat64-${name}" = {
+      description = "Jool, NAT64 setup of instance ${name}";
+      documentation = [ "https://nicmx.github.io/Jool/en/documentation.html" ];
+      after = [ "network.target" ];
+      wantedBy = [ "multi-user.target" ];
+      serviceConfig = {
+        Type = "oneshot";
+        RemainAfterExit = true;
+        ExecStartPre = "${pkgs.kmod}/bin/modprobe jool";
+        ExecStart    = "${jool-cli}/bin/jool file handle ${nat64Conf name}";
+        ExecStop     = "${jool-cli}/bin/jool -f ${nat64Conf name} instance remove";
+      } // hardening;
+    };
   };
-  defaultSiit = {
-    instance = "default";
-    framework = "netfilter";
+
+  makeSiitUnit = name: opts: {
+    "jool-siit-${name}" = {
+      description = "Jool, SIIT setup of instance ${name}";
+      documentation = [ "https://nicmx.github.io/Jool/en/documentation.html" ];
+      after = [ "network.target" ];
+      wantedBy = [ "multi-user.target" ];
+      serviceConfig = {
+        Type = "oneshot";
+        RemainAfterExit = true;
+        ExecStartPre = "${pkgs.kmod}/bin/modprobe jool_siit";
+        ExecStart    = "${jool-cli}/bin/jool_siit file handle ${siitConf name}";
+        ExecStop     = "${jool-cli}/bin/jool_siit -f ${siitConf name} instance remove";
+      } // hardening;
+    };
   };
 
-  nat64Conf = configFormat.generate "jool-nat64.conf" cfg.nat64.config;
-  siitConf  = configFormat.generate "jool-siit.conf" cfg.siit.config;
+  checkNat64 = name: _: ''
+    printf 'Validating Jool configuration for NAT64 instance "${name}"... '
+    jool file check ${nat64Conf name}
+    printf 'Ok.\n'; touch "$out"
+  '';
+
+  checkSiit = name: _: ''
+    printf 'Validating Jool configuration for SIIT instance "${name}"... '
+    jool_siit file check ${siitConf name}
+    printf 'Ok.\n'; touch "$out"
+  '';
 
 in
 
 {
-  ###### interface
-
   options = {
     networking.jool.enable = lib.mkOption {
       type = lib.types.bool;
@@ -64,157 +134,146 @@ in
         NAT64, analogous to the IPv4 NAPT. Refer to the upstream
         [documentation](https://nicmx.github.io/Jool/en/intro-xlat.html) for
         the supported modes of translation and how to configure them.
+
+        Enabling this option will install the Jool kernel module and the
+        command line tools for controlling it.
       '';
     };
 
-    networking.jool.nat64.enable = lib.mkEnableOption (lib.mdDoc "a NAT64 instance of Jool.");
-    networking.jool.nat64.config = lib.mkOption {
-      type = configFormat.type;
-      default = defaultNat64;
+    networking.jool.nat64 = lib.mkOption {
+      type = lib.types.attrsOf nat64Options;
+      default = { };
       example = lib.literalExpression ''
         {
-          # custom NAT64 prefix
-          global.pool6 = "2001:db8:64::/96";
-
-          # Port forwarding
-          bib = [
-            { # SSH 192.0.2.16 → 2001:db8:a::1
-              "protocol"     = "TCP";
-              "ipv4 address" = "192.0.2.16#22";
-              "ipv6 address" = "2001:db8:a::1#22";
-            }
-            { # DNS (TCP) 192.0.2.16 → 2001:db8:a::2
-              "protocol"     = "TCP";
-              "ipv4 address" = "192.0.2.16#53";
-              "ipv6 address" = "2001:db8:a::2#53";
-            }
-            { # DNS (UDP) 192.0.2.16 → 2001:db8:a::2
-              "protocol" = "UDP";
-              "ipv4 address" = "192.0.2.16#53";
-              "ipv6 address" = "2001:db8:a::2#53";
-            }
-          ];
-
-          pool4 = [
-            # Ports for dynamic translation
-            { protocol =  "TCP";  prefix = "192.0.2.16/32"; "port range" = "40001-65535"; }
-            { protocol =  "UDP";  prefix = "192.0.2.16/32"; "port range" = "40001-65535"; }
-            { protocol = "ICMP";  prefix = "192.0.2.16/32"; "port range" = "40001-65535"; }
-
-            # Ports for static BIB entries
-            { protocol =  "TCP";  prefix = "192.0.2.16/32"; "port range" = "22"; }
-            { protocol =  "UDP";  prefix = "192.0.2.16/32"; "port range" = "53"; }
-          ];
+          default = {
+            # custom NAT64 prefix
+            global.pool6 = "2001:db8:64::/96";
+
+            # Port forwarding
+            bib = [
+              { # SSH 192.0.2.16 → 2001:db8:a::1
+                "protocol"     = "TCP";
+                "ipv4 address" = "192.0.2.16#22";
+                "ipv6 address" = "2001:db8:a::1#22";
+              }
+              { # DNS (TCP) 192.0.2.16 → 2001:db8:a::2
+                "protocol"     = "TCP";
+                "ipv4 address" = "192.0.2.16#53";
+                "ipv6 address" = "2001:db8:a::2#53";
+              }
+              { # DNS (UDP) 192.0.2.16 → 2001:db8:a::2
+                "protocol" = "UDP";
+                "ipv4 address" = "192.0.2.16#53";
+                "ipv6 address" = "2001:db8:a::2#53";
+              }
+            ];
+
+            pool4 = [
+              # Port ranges for dynamic translation
+              { protocol =  "TCP";  prefix = "192.0.2.16/32"; "port range" = "40001-65535"; }
+              { protocol =  "UDP";  prefix = "192.0.2.16/32"; "port range" = "40001-65535"; }
+              { protocol = "ICMP";  prefix = "192.0.2.16/32"; "port range" = "40001-65535"; }
+
+              # Ports for static BIB entries
+              { protocol =  "TCP";  prefix = "192.0.2.16/32"; "port range" = "22"; }
+              { protocol =  "UDP";  prefix = "192.0.2.16/32"; "port range" = "53"; }
+            ];
+          };
         }
       '';
       description = lib.mdDoc ''
-        The configuration of a stateful NAT64 instance of Jool managed through
-        NixOS. See https://nicmx.github.io/Jool/en/config-atomic.html for the
-        available options.
+        Definitions of NAT64 instances of Jool.
+        See the
+        [documentation](https://nicmx.github.io/Jool/en/config-atomic.html) for
+        the available options. Also check out the
+        [tutorial](https://nicmx.github.io/Jool/en/run-nat64.html) for an
+        introduction to NAT64 and how to troubleshoot the setup.
+
+        The attribute name defines the name of the instance, with the main one
+        being `default`: this can be accessed from the command line without
+        specifying the name with `-i`.
 
         ::: {.note}
-        Existing or more instances created manually will not interfere with the
-        NixOS instance, provided the respective `pool4` addresses and port
-        ranges are not overlapping.
+        Instances created imperatively from the command line will not interfere
+        with the NixOS instances, provided the respective `pool4` addresses and
+        port ranges are not overlapping.
         :::
 
         ::: {.warning}
-        Changes to the NixOS instance performed via `jool instance nixos-nat64`
-        are applied correctly but will be lost after restarting
-        `jool-nat64.service`.
+        Changes to an instance performed via `jool -i <name>` are applied
+        correctly but will be lost after restarting the respective
+        `jool-nat64-<name>.service`.
         :::
       '';
     };
 
-    networking.jool.siit.enable = lib.mkEnableOption (lib.mdDoc "a SIIT instance of Jool.");
-    networking.jool.siit.config = lib.mkOption {
-      type = configFormat.type;
-      default = defaultSiit;
+    networking.jool.siit = lib.mkOption {
+      type = lib.types.attrsOf siitOptions;
+      default = { };
       example = lib.literalExpression ''
         {
-          # Maps any IPv4 address x.y.z.t to 2001:db8::x.y.z.t and v.v.
-          pool6 = "2001:db8::/96";
-
-          # Explicit address mappings
-          eamt = [
-            # 2001:db8:1:: ←→ 192.0.2.0
-            { "ipv6 prefix": "2001:db8:1::/128", "ipv4 prefix": "192.0.2.0" }
-            # 2001:db8:1::x ←→ 198.51.100.x
-            { "ipv6 prefix": "2001:db8:2::/120", "ipv4 prefix": "198.51.100.0/24" }
-          ]
+          default = {
+            # Maps any IPv4 address x.y.z.t to 2001:db8::x.y.z.t and v.v.
+            global.pool6 = "2001:db8::/96";
+
+            # Explicit address mappings
+            eamt = [
+              # 2001:db8:1:: ←→ 192.0.2.0
+              { "ipv6 prefix" = "2001:db8:1::/128"; "ipv4 prefix" = "192.0.2.0"; }
+              # 2001:db8:1::x ←→ 198.51.100.x
+              { "ipv6 prefix" = "2001:db8:2::/120"; "ipv4 prefix" = "198.51.100.0/24"; }
+            ];
+          };
         }
       '';
       description = lib.mdDoc ''
-        The configuration of a SIIT instance of Jool managed through
-        NixOS. See https://nicmx.github.io/Jool/en/config-atomic.html for the
-        available options.
+        Definitions of SIIT instances of Jool.
+        See the
+        [documentation](https://nicmx.github.io/Jool/en/config-atomic.html) for
+        the available options. Also check out the
+        [tutorial](https://nicmx.github.io/Jool/en/run-vanilla.html) for an
+        introduction to SIIT and how to troubleshoot the setup.
+
+        The attribute name defines the name of the instance, with the main one
+        being `default`: this can be accessed from the command line without
+        specifying the name with `-i`.
 
         ::: {.note}
-        Existing or more instances created manually will not interfere with the
-        NixOS instance, provided the respective `EAMT` address mappings are not
-        overlapping.
+        Instances created imperatively from the command line will not interfere
+        with the NixOS instances, provided the respective EAMT addresses and
+        port ranges are not overlapping.
         :::
 
         ::: {.warning}
-        Changes to the NixOS instance performed via `jool instance nixos-siit`
-        are applied correctly but will be lost after restarting
-        `jool-siit.service`.
+        Changes to an instance performed via `jool -i <name>` are applied
+        correctly but will be lost after restarting the respective
+        `jool-siit-<name>.service`.
         :::
       '';
     };
 
   };
 
-  ###### implementation
-
   config = lib.mkIf cfg.enable {
-    environment.systemPackages = [ jool-cli ];
+    # Install kernel module and cli tools
     boot.extraModulePackages = [ jool ];
+    environment.systemPackages = [ jool-cli ];
 
-    systemd.services.jool-nat64 = lib.mkIf cfg.nat64.enable {
-      description = "Jool, NAT64 setup";
-      documentation = [ "https://nicmx.github.io/Jool/en/documentation.html" ];
-      after = [ "network.target" ];
-      wantedBy = [ "multi-user.target" ];
-      reloadIfChanged = true;
-      serviceConfig = {
-        Type = "oneshot";
-        RemainAfterExit = true;
-        ExecStartPre = "${pkgs.kmod}/bin/modprobe jool";
-        ExecStart    = "${jool-cli}/bin/jool file handle ${nat64Conf}";
-        ExecStop     = "${jool-cli}/bin/jool -f ${nat64Conf} instance remove";
-      } // hardening;
-    };
-
-    systemd.services.jool-siit = lib.mkIf cfg.siit.enable {
-      description = "Jool, SIIT setup";
-      documentation = [ "https://nicmx.github.io/Jool/en/documentation.html" ];
-      after = [ "network.target" ];
-      wantedBy = [ "multi-user.target" ];
-      reloadIfChanged = true;
-      serviceConfig = {
-        Type = "oneshot";
-        RemainAfterExit = true;
-        ExecStartPre = "${pkgs.kmod}/bin/modprobe jool_siit";
-        ExecStart    = "${jool-cli}/bin/jool_siit file handle ${siitConf}";
-        ExecStop     = "${jool-cli}/bin/jool_siit -f ${siitConf} instance remove";
-      } // hardening;
-    };
-
-    system.checks = lib.singleton (pkgs.runCommand "jool-validated" {
-      nativeBuildInputs = [ pkgs.buildPackages.jool-cli ];
-      preferLocalBuild = true;
-    } ''
-      printf 'Validating Jool configuration... '
-      ${lib.optionalString cfg.siit.enable "jool_siit file check ${siitConf}"}
-      ${lib.optionalString cfg.nat64.enable "jool file check ${nat64Conf}"}
-      printf 'ok\n'
-      touch "$out"
-    '');
-
-    networking.jool.nat64.config = mkDefaultAttrs defaultNat64;
-    networking.jool.siit.config  = mkDefaultAttrs defaultSiit;
+    # Install services for each instance
+    systemd.services = lib.mkMerge
+      (lib.mapAttrsToList makeNat64Unit cfg.nat64 ++
+       lib.mapAttrsToList makeSiitUnit cfg.siit);
 
+    # Check the configuration of each instance
+    system.checks = lib.optional (cfg.nat64 != {} || cfg.siit != {})
+      (pkgs.runCommand "jool-validated"
+        {
+          nativeBuildInputs = with pkgs.buildPackages; [ jool-cli ];
+          preferLocalBuild = true;
+        }
+        (lib.concatStrings
+          (lib.mapAttrsToList checkNat64 cfg.nat64 ++
+           lib.mapAttrsToList checkSiit cfg.siit)));
   };
 
   meta.maintainers = with lib.maintainers; [ rnhmjoj ];
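
With the reworked interface, every attribute of `networking.jool.nat64` (or
`.siit`) yields its own configuration file and systemd unit. A minimal
sketch, relying on the built-in defaults for `framework` and `pool6`:

  {
    networking.jool.enable = true;
    # Creates jool-nat64-default.service; "default" is the instance the
    # jool CLI addresses when no -i flag is given.
    networking.jool.nat64.default = {
      # Optional: restate or override the well-known RFC 6052 prefix.
      global.pool6 = "64:ff9b::/96";
    };
  }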
diff --git a/nixos/modules/services/networking/nftables.nix b/nixos/modules/services/networking/nftables.nix
index 0e4cd6fa1503..47159ade328c 100644
--- a/nixos/modules/services/networking/nftables.nix
+++ b/nixos/modules/services/networking/nftables.nix
@@ -70,6 +70,26 @@ in
       '';
     };
 
+    networking.nftables.checkRulesetRedirects = mkOption {
+      type = types.addCheck (types.attrsOf types.path) (attrs: all types.path.check (attrNames attrs));
+      default = {
+        "/etc/hosts" = config.environment.etc.hosts.source;
+        "/etc/protocols" = config.environment.etc.protocols.source;
+        "/etc/services" = config.environment.etc.services.source;
+      };
+      defaultText = literalExpression ''
+        {
+          "/etc/hosts" = config.environment.etc.hosts.source;
+          "/etc/protocols" = config.environment.etc.protocols.source;
+          "/etc/services" = config.environment.etc.services.source;
+        }
+      '';
+      description = mdDoc ''
+        Set of paths that should be intercepted and rewritten while checking the ruleset
+        using `pkgs.buildPackages.libredirect`.
+      '';
+    };
+
     networking.nftables.preCheckRuleset = mkOption {
       type = types.lines;
       default = "";
@@ -282,7 +302,7 @@ in
             cp $out ruleset.conf
             sed 's|include "${deletionsScriptVar}"||' -i ruleset.conf
             ${cfg.preCheckRuleset}
-            export NIX_REDIRECTS=/etc/protocols=${pkgs.buildPackages.iana-etc}/etc/protocols:/etc/services=${pkgs.buildPackages.iana-etc}/etc/services
+            export NIX_REDIRECTS=${escapeShellArg (concatStringsSep ":" (mapAttrsToList (n: v: "${n}=${v}") cfg.checkRulesetRedirects))}
             LD_PRELOAD="${pkgs.buildPackages.libredirect}/lib/libredirect.so ${pkgs.buildPackages.lklWithFirewall.lib}/lib/liblkl-hijack.so" \
               ${pkgs.buildPackages.nftables}/bin/nft --check --file ruleset.conf
           '';
diff --git a/nixos/modules/services/x11/desktop-managers/budgie.nix b/nixos/modules/services/x11/desktop-managers/budgie.nix
index bee627ec76c0..a4f8bd5051ec 100644
--- a/nixos/modules/services/x11/desktop-managers/budgie.nix
+++ b/nixos/modules/services/x11/desktop-managers/budgie.nix
@@ -134,6 +134,7 @@ in {
         # Update user directories.
         xdg-user-dirs
       ]
+      ++ lib.optional config.networking.networkmanager.enable pkgs.networkmanagerapplet
       ++ (utils.removePackagesByName [
           cinnamon.nemo
           mate.eom
@@ -192,7 +193,7 @@ in {
     # Required by Budgie Panel plugins and/or Budgie Control Center panels.
     networking.networkmanager.enable = mkDefault true; # for BCC's Network panel.
     programs.nm-applet.enable = config.networking.networkmanager.enable; # Budgie has no Network applet.
-    programs.nm-applet.indicator = false; # Budgie doesn't support AppIndicators.
+    programs.nm-applet.indicator = true; # Budgie uses AppIndicators.
 
     hardware.bluetooth.enable = mkDefault true; # for Budgie's Status Indicator and BCC's Bluetooth panel.
     hardware.pulseaudio.enable = mkDefault true; # for Budgie's Status Indicator and BCC's Sound panel.
diff --git a/nixos/modules/virtualisation/anbox.nix b/nixos/modules/virtualisation/anbox.nix
index c7e9e23c4c92..523d9a9576ef 100644
--- a/nixos/modules/virtualisation/anbox.nix
+++ b/nixos/modules/virtualisation/anbox.nix
@@ -5,7 +5,7 @@ with lib;
 let
 
   cfg = config.virtualisation.anbox;
-  kernelPackages = config.boot.kernelPackages;
+
   addrOpts = v: addr: pref: name: {
     address = mkOption {
       default = addr;
@@ -25,6 +25,28 @@ let
     };
   };
 
+  finalImage = if cfg.imageModifications == "" then cfg.image else ( pkgs.callPackage (
+    { runCommandNoCC, squashfsTools }:
+
+    runCommandNoCC "${cfg.image.name}-modified.img" {
+      nativeBuildInputs = [
+        squashfsTools
+      ];
+    } ''
+      echo "-> Extracting Anbox root image..."
+      unsquashfs -dest rootfs ${cfg.image}
+
+      echo "-> Modifying Anbox root image..."
+      (
+      cd rootfs
+      ${cfg.imageModifications}
+      )
+
+      echo "-> Packing modified Anbox root image..."
+      mksquashfs rootfs $out -comp xz -no-xattrs -all-root
+    ''
+  ) { });
+
 in
 
 {
@@ -42,6 +64,18 @@ in
       '';
     };
 
+    imageModifications = mkOption {
+      default = "";
+      type = types.lines;
+      description = lib.mdDoc ''
+        Commands to edit the image filesystem.
+
+        This can be used to e.g. bundle a privileged F-Droid.
+
+        Commands are run with PWD set to the root of the filesystem.
+      '';
+    };
+
     extraInit = mkOption {
       type = types.lines;
       default = "";
@@ -67,16 +101,19 @@ in
   config = mkIf cfg.enable {
 
     assertions = singleton {
-      assertion = versionAtLeast (getVersion config.boot.kernelPackages.kernel) "4.18";
-      message = "Anbox needs user namespace support to work properly";
+      assertion = with config.boot.kernelPackages; kernelAtLeast "5.5" && kernelOlder "5.18";
+      message = "Anbox needs a kernel with binder and ashmem support";
     };
 
     environment.systemPackages = with pkgs; [ anbox ];
 
-    services.udev.extraRules = ''
-      KERNEL=="ashmem", NAME="%k", MODE="0666"
-      KERNEL=="binder*", NAME="%k", MODE="0666"
-    '';
+    systemd.mounts = singleton {
+      requiredBy = [ "anbox-container-manager.service" ];
+      description = "Anbox Binder File System";
+      what = "binder";
+      where = "/dev/binderfs";
+      type = "binder";
+    };
 
     virtualisation.lxc.enable = true;
     networking.bridges.anbox0.interfaces = [];
@@ -87,6 +124,9 @@ in
       internalInterfaces = [ "anbox0" ];
     };
 
+    # Ensures NetworkManager doesn't touch anbox0
+    networking.networkmanager.unmanaged = [ "anbox0" ];
+
     systemd.services.anbox-container-manager = let
       anboxloc = "/var/lib/anbox";
     in {
@@ -121,12 +161,13 @@ in
         ExecStart = ''
           ${pkgs.anbox}/bin/anbox container-manager \
             --data-path=${anboxloc} \
-            --android-image=${cfg.image} \
+            --android-image=${finalImage} \
             --container-network-address=${cfg.ipv4.container.address} \
             --container-network-gateway=${cfg.ipv4.gateway.address} \
             --container-network-dns-servers=${cfg.ipv4.dns} \
             --use-rootfs-overlay \
-            --privileged
+            --privileged \
+            --daemon
         '';
       };
     };
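
The new `imageModifications` option triggers a squashfs rebuild whenever it
is non-empty. A sketch of bundling a file into the image (the APK and the
target directory are hypothetical):

  {
    virtualisation.anbox = {
      enable = true;
      # Commands run with the unpacked image root as the working directory.
      imageModifications = ''
        mkdir -p system/priv-app/Example
        cp ${./Example.apk} system/priv-app/Example/Example.apk  # hypothetical file
      '';
    };
  }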
diff --git a/nixos/modules/virtualisation/lxc-container.nix b/nixos/modules/virtualisation/lxc-container.nix
index 55b285b69147..9402d3bf37d0 100644
--- a/nixos/modules/virtualisation/lxc-container.nix
+++ b/nixos/modules/virtualisation/lxc-container.nix
@@ -1,96 +1,16 @@
 { lib, config, pkgs, ... }:
 
-with lib;
-
 let
-  templateSubmodule = { ... }: {
-    options = {
-      enable = mkEnableOption (lib.mdDoc "this template");
-
-      target = mkOption {
-        description = lib.mdDoc "Path in the container";
-        type = types.path;
-      };
-      template = mkOption {
-        description = lib.mdDoc ".tpl file for rendering the target";
-        type = types.path;
-      };
-      when = mkOption {
-        description = lib.mdDoc "Events which trigger a rewrite (create, copy)";
-        type = types.listOf (types.str);
-      };
-      properties = mkOption {
-        description = lib.mdDoc "Additional properties";
-        type = types.attrs;
-        default = {};
-      };
-    };
-  };
-
-  toYAML = name: data: pkgs.writeText name (generators.toYAML {} data);
-
   cfg = config.virtualisation.lxc;
-  templates = if cfg.templates != {} then let
-    list = mapAttrsToList (name: value: { inherit name; } // value)
-      (filterAttrs (name: value: value.enable) cfg.templates);
-  in
-    {
-      files = map (tpl: {
-        source = tpl.template;
-        target = "/templates/${tpl.name}.tpl";
-      }) list;
-      properties = listToAttrs (map (tpl: nameValuePair tpl.target {
-        when = tpl.when;
-        template = "${tpl.name}.tpl";
-        properties = tpl.properties;
-      }) list);
-    }
-  else { files = []; properties = {}; };
-
-in
-{
+in {
   imports = [
-    ../installer/cd-dvd/channel.nix
-    ../profiles/clone-config.nix
-    ../profiles/minimal.nix
+    ./lxc-instance-common.nix
   ];
 
   options = {
     virtualisation.lxc = {
-      templates = mkOption {
-        description = lib.mdDoc "Templates for LXD";
-        type = types.attrsOf (types.submodule (templateSubmodule));
-        default = {};
-        example = literalExpression ''
-          {
-            # create /etc/hostname on container creation. also requires networking.hostName = "" to be set
-            "hostname" = {
-              enable = true;
-              target = "/etc/hostname";
-              template = builtins.toFile "hostname.tpl" "{{ container.name }}";
-              when = [ "create" ];
-            };
-            # create /etc/nixos/hostname.nix with a configuration for keeping the hostname applied
-            "hostname-nix" = {
-              enable = true;
-              target = "/etc/nixos/hostname.nix";
-              template = builtins.toFile "hostname-nix.tpl" "{ ... }: { networking.hostName = \"{{ container.name }}\"; }";
-              # copy keeps the file updated when the container is changed
-              when = [ "create" "copy" ];
-            };
-            # copy allow the user to specify a custom configuration.nix
-            "configuration-nix" = {
-              enable = true;
-              target = "/etc/nixos/configuration.nix";
-              template = builtins.toFile "configuration-nix" "{{ config_get(\"user.user-data\", properties.default) }}";
-              when = [ "create" ];
-            };
-          };
-        '';
-      };
-
-      privilegedContainer = mkOption {
-        type = types.bool;
+      privilegedContainer = lib.mkOption {
+        type = lib.types.bool;
         default = false;
         description = lib.mdDoc ''
           Whether this LXC container will be running as a privileged container or not. If set to `true` then
@@ -116,24 +36,6 @@ in
         ${config.nix.package.out}/bin/nix-env -p /nix/var/nix/profiles/system --set /run/current-system
       '';
 
-    system.build.metadata = pkgs.callPackage ../../lib/make-system-tarball.nix {
-      contents = [
-        {
-          source = toYAML "metadata.yaml" {
-            architecture = builtins.elemAt (builtins.match "^([a-z0-9_]+).+" (toString pkgs.system)) 0;
-            creation_date = 1;
-            properties = {
-              description = "${config.system.nixos.distroName} ${config.system.nixos.codeName} ${config.system.nixos.label} ${pkgs.system}";
-              os = "${config.system.nixos.distroId}";
-              release = "${config.system.nixos.codeName}";
-            };
-            templates = templates.properties;
-          };
-          target = "/metadata.yaml";
-        }
-      ] ++ templates.files;
-    };
-
     # TODO: build rootfs as squashfs for faster unpack
     system.build.tarball = pkgs.callPackage ../../lib/make-system-tarball.nix {
       extraArgs = "--owner=0";
@@ -180,7 +82,7 @@ in
           ProtectKernelTunables=no
           NoNewPrivileges=no
           LoadCredential=
-        '' + optionalString cfg.privilegedContainer ''
+        '' + lib.optionalString cfg.privilegedContainer ''
           # Additional settings for privileged containers
           ProtectHome=no
           ProtectSystem=no
@@ -193,28 +95,8 @@ in
       })
     ];
 
-    # Allow the user to login as root without password.
-    users.users.root.initialHashedPassword = mkOverride 150 "";
-
-    system.activationScripts.installInitScript = mkForce ''
+    system.activationScripts.installInitScript = lib.mkForce ''
       ln -fs $systemConfig/init /sbin/init
     '';
-
-    # Some more help text.
-    services.getty.helpLine =
-      ''
-
-        Log in as "root" with an empty password.
-      '';
-
-    # Containers should be light-weight, so start sshd on demand.
-    services.openssh.enable = mkDefault true;
-    services.openssh.startWhenNeeded = mkDefault true;
-
-    # As this is intended as a standalone image, undo some of the minimal profile stuff
-    environment.noXlibs = false;
-    documentation.enable = true;
-    documentation.nixos.enable = true;
-    services.logrotate.enable = true;
   };
 }
diff --git a/nixos/modules/virtualisation/lxc-image-metadata.nix b/nixos/modules/virtualisation/lxc-image-metadata.nix
new file mode 100644
index 000000000000..2c0568b4c468
--- /dev/null
+++ b/nixos/modules/virtualisation/lxc-image-metadata.nix
@@ -0,0 +1,104 @@
+{ lib, config, pkgs, ... }:
+
+let
+  templateSubmodule = {...}: {
+    options = {
+      enable = lib.mkEnableOption "this template";
+
+      target = lib.mkOption {
+        description = "Path in the container";
+        type = lib.types.path;
+      };
+      template = lib.mkOption {
+        description = ".tpl file for rendering the target";
+        type = lib.types.path;
+      };
+      when = lib.mkOption {
+        description = "Events which trigger a rewrite (create, copy)";
+        type = lib.types.listOf (lib.types.str);
+      };
+      properties = lib.mkOption {
+        description = "Additional properties";
+        type = lib.types.attrs;
+        default = {};
+      };
+    };
+  };
+
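+  # Serialize an attribute set to a YAML file in the Nix store.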
+  toYAML = name: data: pkgs.writeText name (lib.generators.toYAML {} data);
+
+  cfg = config.virtualisation.lxc;
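+  # Collect all enabled templates: each contributes a .tpl file shipped in
+  # the image tarball and a corresponding entry in metadata.yaml.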
+  templates = if cfg.templates != {} then let
+    list = lib.mapAttrsToList (name: value: { inherit name; } // value)
+      (lib.filterAttrs (name: value: value.enable) cfg.templates);
+  in
+    {
+      files = map (tpl: {
+        source = tpl.template;
+        target = "/templates/${tpl.name}.tpl";
+      }) list;
+      properties = lib.listToAttrs (map (tpl: lib.nameValuePair tpl.target {
+        when = tpl.when;
+        template = "${tpl.name}.tpl";
+        properties = tpl.properties;
+      }) list);
+    }
+  else { files = []; properties = {}; };
+
+in {
+  options = {
+    virtualisation.lxc = {
+      templates = lib.mkOption {
+        description = "Templates for LXD";
+        type = lib.types.attrsOf (lib.types.submodule templateSubmodule);
+        default = {};
+        example = lib.literalExpression ''
+          {
+            # create /etc/hostname on container creation
+            "hostname" = {
+              enable = true;
+              target = "/etc/hostname";
+              template = builtins.writeFile "hostname.tpl" "{{ container.name }}";
+              when = [ "create" ];
+            };
+            # create /etc/nixos/hostname.nix with a configuration for keeping the hostname applied
+            "hostname-nix" = {
+              enable = true;
+              target = "/etc/nixos/hostname.nix";
+              template = builtins.writeFile "hostname-nix.tpl" "{ ... }: { networking.hostName = "{{ container.name }}"; }";
+              # copy keeps the file updated when the container is changed
+              when = [ "create" "copy" ];
+            };
+            # allow the user to specify a custom configuration.nix
+            "configuration-nix" = {
+              enable = true;
+              target = "/etc/nixos/configuration.nix";
+              template = builtins.writeFile "configuration-nix" "{{ config_get(\"user.user-data\", properties.default) }}";
+              when = [ "create" ];
+            };
+          };
+        '';
+      };
+    };
+  };
+
+  config = {
+    system.build.metadata = pkgs.callPackage ../../lib/make-system-tarball.nix {
+      contents = [
+        {
+          source = toYAML "metadata.yaml" {
+            architecture = builtins.elemAt (builtins.match "^([a-z0-9_]+).+" (toString pkgs.system)) 0;
+            creation_date = 1;
+            properties = {
+              description = "${config.system.nixos.distroName} ${config.system.nixos.codeName} ${config.system.nixos.label} ${pkgs.system}";
+              os = "${config.system.nixos.distroId}";
+              release = "${config.system.nixos.codeName}";
+            };
+            templates = templates.properties;
+          };
+          target = "/metadata.yaml";
+        }
+      ] ++ templates.files;
+    };
+  };
+}
diff --git a/nixos/modules/virtualisation/lxc-instance-common.nix b/nixos/modules/virtualisation/lxc-instance-common.nix
new file mode 100644
index 000000000000..d6a0e05fb1c9
--- /dev/null
+++ b/nixos/modules/virtualisation/lxc-instance-common.nix
@@ -0,0 +1,30 @@
+{lib, ...}:
+
+{
+  imports = [
+    ./lxc-image-metadata.nix
+
+    ../installer/cd-dvd/channel.nix
+    ../profiles/clone-config.nix
+    ../profiles/minimal.nix
+  ];
+
+  # Allow the user to login as root without password.
+  users.users.root.initialHashedPassword = lib.mkOverride 150 "";
+
+  # Some more help text.
+  services.getty.helpLine = ''
+
+    Log in as "root" with an empty password.
+  '';
+
+  # Containers should be light-weight, so start sshd on demand.
+  services.openssh.enable = lib.mkDefault true;
+  services.openssh.startWhenNeeded = lib.mkDefault true;
+
+  # As this is intended as a standalone image, undo some of the minimal profile settings
+  environment.noXlibs = false;
+  documentation.enable = true;
+  documentation.nixos.enable = true;
+  services.logrotate.enable = true;
+}
diff --git a/nixos/modules/virtualisation/lxd-virtual-machine.nix b/nixos/modules/virtualisation/lxd-virtual-machine.nix
new file mode 100644
index 000000000000..ba729465ec2f
--- /dev/null
+++ b/nixos/modules/virtualisation/lxd-virtual-machine.nix
@@ -0,0 +1,46 @@
+{ config, lib, pkgs, ... }:
+
+let
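+  # The serial console device name depends on the guest architecture.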
+  serialDevice =
+    if pkgs.stdenv.hostPlatform.isx86
+    then "ttyS0"
+    else "ttyAMA0"; # aarch64
+in {
+  imports = [
+    ./lxc-instance-common.nix
+
+    ../profiles/qemu-guest.nix
+  ];
+
+  config = {
+    system.build.qemuImage = import ../../lib/make-disk-image.nix {
+      inherit pkgs lib config;
+
+      partitionTableType = "efi";
+      format = "qcow2-compressed";
+      copyChannel = true;
+    };
+
+    fileSystems = {
+      "/" = {
+        device = "/dev/disk/by-label/nixos";
+        autoResize = true;
+        fsType = "ext4";
+      };
+      "/boot" = {
+        device = "/dev/disk/by-label/ESP";
+        fsType = "vfat";
+      };
+    };
+
+    boot.growPartition = true;
+    boot.loader.systemd-boot.enable = true;
+
+    # image building needs to know which device to install the bootloader on
+    boot.loader.grub.device = "/dev/vda";
+
+    boot.kernelParams = ["console=tty1" "console=${serialDevice}"];
+
+    virtualisation.lxd.agent.enable = lib.mkDefault true;
+  };
+}
diff --git a/nixos/modules/virtualisation/lxd.nix b/nixos/modules/virtualisation/lxd.nix
index e22ba9a0ae2c..e30fbebb662c 100644
--- a/nixos/modules/virtualisation/lxd.nix
+++ b/nixos/modules/virtualisation/lxd.nix
@@ -2,21 +2,20 @@
 
 { config, lib, pkgs, ... }:
 
-with lib;
-
 let
   cfg = config.virtualisation.lxd;
+  preseedFormat = pkgs.formats.yaml {};
 in {
   imports = [
-    (mkRemovedOptionModule [ "virtualisation" "lxd" "zfsPackage" ] "Override zfs in an overlay instead to override it globally")
+    (lib.mkRemovedOptionModule [ "virtualisation" "lxd" "zfsPackage" ] "Override zfs in an overlay instead to override it globally")
   ];
 
   ###### interface
 
   options = {
     virtualisation.lxd = {
-      enable = mkOption {
-        type = types.bool;
+      enable = lib.mkOption {
+        type = lib.types.bool;
         default = false;
         description = lib.mdDoc ''
           This option enables lxd, a daemon that manages
@@ -32,28 +31,28 @@ in {
         '';
       };
 
-      package = mkOption {
-        type = types.package;
+      package = lib.mkOption {
+        type = lib.types.package;
         default = pkgs.lxd;
-        defaultText = literalExpression "pkgs.lxd";
+        defaultText = lib.literalExpression "pkgs.lxd";
         description = lib.mdDoc ''
           The LXD package to use.
         '';
       };
 
-      lxcPackage = mkOption {
-        type = types.package;
+      lxcPackage = lib.mkOption {
+        type = lib.types.package;
         default = pkgs.lxc;
-        defaultText = literalExpression "pkgs.lxc";
+        defaultText = lib.literalExpression "pkgs.lxc";
         description = lib.mdDoc ''
           The LXC package to use with LXD (required for AppArmor profiles).
         '';
       };
 
-      zfsSupport = mkOption {
-        type = types.bool;
+      zfsSupport = lib.mkOption {
+        type = lib.types.bool;
         default = config.boot.zfs.enabled;
-        defaultText = literalExpression "config.boot.zfs.enabled";
+        defaultText = lib.literalExpression "config.boot.zfs.enabled";
         description = lib.mdDoc ''
           Enables lxd to use zfs as a storage for containers.
 
@@ -62,8 +61,8 @@ in {
         '';
       };
 
-      recommendedSysctlSettings = mkOption {
-        type = types.bool;
+      recommendedSysctlSettings = lib.mkOption {
+        type = lib.types.bool;
         default = false;
         description = lib.mdDoc ''
           Enables various settings to avoid common pitfalls when
@@ -75,8 +74,67 @@ in {
         '';
       };
 
-      startTimeout = mkOption {
-        type = types.int;
+      preseed = lib.mkOption {
+        type = lib.types.nullOr (lib.types.submodule {
+          freeformType = preseedFormat.type;
+        });
+
+        default = null;
+
+        description = lib.mdDoc ''
+          Configuration for LXD preseed, see
+          <https://documentation.ubuntu.com/lxd/en/latest/howto/initialize/#initialize-preseed>
+          for supported values.
+
+          Changes to this option are re-applied to LXD: existing entities are
+          overwritten and missing ones are created, but entities are *never*
+          removed by preseed.
+        '';
+
+        example = lib.literalExpression ''
+          {
+            networks = [
+              {
+                name = "lxdbr0";
+                type = "bridge";
+                config = {
+                  "ipv4.address" = "10.0.100.1/24";
+                  "ipv4.nat" = "true";
+                };
+              }
+            ];
+            profiles = [
+              {
+                name = "default";
+                devices = {
+                  eth0 = {
+                    name = "eth0";
+                    network = "lxdbr0";
+                    type = "nic";
+                  };
+                  root = {
+                    path = "/";
+                    pool = "default";
+                    size = "35GiB";
+                    type = "disk";
+                  };
+                };
+              }
+            ];
+            storage_pools = [
+              {
+                name = "default";
+                driver = "dir";
+                config = {
+                  source = "/var/lib/lxd/storage-pools/default";
+                };
+              }
+            ];
+          }
+        '';
+      };
+
+      startTimeout = lib.mkOption {
+        type = lib.types.int;
         default = 600;
         apply = toString;
         description = lib.mdDoc ''
@@ -91,13 +149,13 @@ in {
           Enables the (experimental) LXD UI.
         '');
 
-        package = mkPackageOption pkgs.lxd-unwrapped "ui" { };
+        package = lib.mkPackageOption pkgs.lxd-unwrapped "ui" { };
       };
     };
   };
 
   ###### implementation
-  config = mkIf cfg.enable {
+  config = lib.mkIf cfg.enable {
     environment.systemPackages = [ cfg.package ];
 
     # Note: the following options are also declared in virtualisation.lxc, but
@@ -139,19 +197,19 @@ in {
       wantedBy = [ "multi-user.target" ];
       after = [
         "network-online.target"
-        (mkIf config.virtualisation.lxc.lxcfs.enable "lxcfs.service")
+        (lib.mkIf config.virtualisation.lxc.lxcfs.enable "lxcfs.service")
       ];
       requires = [
         "network-online.target"
         "lxd.socket"
-        (mkIf config.virtualisation.lxc.lxcfs.enable "lxcfs.service")
+        (lib.mkIf config.virtualisation.lxc.lxcfs.enable "lxcfs.service")
       ];
       documentation = [ "man:lxd(1)" ];
 
       path = [ pkgs.util-linux ]
-        ++ optional cfg.zfsSupport config.boot.zfs.package;
+        ++ lib.optional cfg.zfsSupport config.boot.zfs.package;
 
-      environment = mkIf (cfg.ui.enable) {
+      environment = lib.mkIf (cfg.ui.enable) {
         "LXD_UI" = cfg.ui.package;
       };
 
@@ -173,11 +231,26 @@ in {
         # By default, `lxd` loads configuration files from hard-coded
         # `/usr/share/lxc/config` - since this is a no-go for us, we have to
         # explicitly tell it where the actual configuration files are
-        Environment = mkIf (config.virtualisation.lxc.lxcfs.enable)
+        Environment = lib.mkIf (config.virtualisation.lxc.lxcfs.enable)
           "LXD_LXC_TEMPLATE_CONFIG=${pkgs.lxcfs}/share/lxc/config";
       };
     };
 
+    systemd.services.lxd-preseed = lib.mkIf (cfg.preseed != null) {
+      description = "LXD initialization with preseed file";
+      wantedBy = ["multi-user.target"];
+      requires = ["lxd.service"];
+      after = ["lxd.service"];
+
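+      # Pipe the rendered preseed YAML into lxd; entities are created or
+      # overwritten, never removed, so re-running is safe.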
+      script = ''
+        ${pkgs.coreutils}/bin/cat ${preseedFormat.generate "lxd-preseed.yaml" cfg.preseed} | ${cfg.package}/bin/lxd init --preseed
+      '';
+
+      serviceConfig = {
+        Type = "oneshot";
+      };
+    };
+
     users.groups.lxd = {};
 
     users.users.root = {
@@ -185,7 +258,7 @@ in {
       subGidRanges = [ { startGid = 1000000; count = 65536; } ];
     };
 
-    boot.kernel.sysctl = mkIf cfg.recommendedSysctlSettings {
+    boot.kernel.sysctl = lib.mkIf cfg.recommendedSysctlSettings {
       "fs.inotify.max_queued_events" = 1048576;
       "fs.inotify.max_user_instances" = 1048576;
       "fs.inotify.max_user_watches" = 1048576;
@@ -196,7 +269,7 @@ in {
       "kernel.keys.maxkeys" = 2000;
     };
 
-    boot.kernelModules = [ "veth" "xt_comment" "xt_CHECKSUM" "xt_MASQUERADE" ]
-      ++ optionals (!config.networking.nftables.enable) [ "iptable_mangle" ];
+    boot.kernelModules = [ "veth" "xt_comment" "xt_CHECKSUM" "xt_MASQUERADE" "vhost_vsock" ]
+      ++ lib.optionals (!config.networking.nftables.enable) [ "iptable_mangle" ];
   };
 }
diff --git a/nixos/release.nix b/nixos/release.nix
index 6da6faab73be..abaa7ef9a711 100644
--- a/nixos/release.nix
+++ b/nixos/release.nix
@@ -44,10 +44,13 @@ let
   pkgs = import ./.. { system = "x86_64-linux"; };
 
 
-  versionModule =
-    { system.nixos.versionSuffix = versionSuffix;
-      system.nixos.revision = nixpkgs.rev or nixpkgs.shortRev;
-    };
+  versionModule = { config, ... }: {
+    system.nixos.versionSuffix = versionSuffix;
+    system.nixos.revision = nixpkgs.rev or nixpkgs.shortRev;
+
+    # At creation time we do not have state yet, so just default to latest.
+    system.stateVersion = config.system.nixos.version;
+  };
 
   makeModules = module: rest: [ configuration versionModule module rest ];
 
@@ -310,7 +313,7 @@ in rec {
   );
 
   # An image that can be imported into lxd and used for container creation
-  lxdImage = forMatchingSystems [ "x86_64-linux" "aarch64-linux" ] (system:
+  lxdContainerImage = forMatchingSystems [ "x86_64-linux" "aarch64-linux" ] (system:
 
     with import ./.. { inherit system; };
 
@@ -319,14 +322,46 @@ in rec {
       modules =
         [ configuration
           versionModule
-          ./maintainers/scripts/lxd/lxd-image.nix
+          ./maintainers/scripts/lxd/lxd-container-image.nix
         ];
     }).config.system.build.tarball)
 
   );
 
   # Metadata for the lxd image
-  lxdMeta = forMatchingSystems [ "x86_64-linux" "aarch64-linux" ] (system:
+  lxdContainerMeta = forMatchingSystems [ "x86_64-linux" "aarch64-linux" ] (system:
+
+    with import ./.. { inherit system; };
+
+    hydraJob ((import lib/eval-config.nix {
+      inherit system;
+      modules =
+        [ configuration
+          versionModule
+          ./maintainers/scripts/lxd/lxd-container-image.nix
+        ];
+    }).config.system.build.metadata)
+
+  );
+
+  # An image that can be imported into lxd and used for virtual machine creation
+  lxdVirtualMachineImage = forMatchingSystems [ "x86_64-linux" "aarch64-linux" ] (system:
+
+    with import ./.. { inherit system; };
+
+    hydraJob ((import lib/eval-config.nix {
+      inherit system;
+      modules =
+        [ configuration
+          versionModule
+          ./maintainers/scripts/lxd/lxd-virtual-machine-image.nix
+        ];
+    }).config.system.build.qemuImage)
+
+  );
+
+  # Metadata for the lxd virtual machine image
+  lxdVirtualMachineImageMeta = forMatchingSystems [ "x86_64-linux" "aarch64-linux" ] (system:
 
     with import ./.. { inherit system; };
 
@@ -335,7 +370,7 @@ in rec {
       modules =
         [ configuration
           versionModule
-          ./maintainers/scripts/lxd/lxd-image.nix
+          ./maintainers/scripts/lxd/lxd-virtual-machine-image.nix
         ];
     }).config.system.build.metadata)
 
diff --git a/nixos/tests/all-tests.nix b/nixos/tests/all-tests.nix
index 40fdf0b9df8b..e5affdab8890 100644
--- a/nixos/tests/all-tests.nix
+++ b/nixos/tests/all-tests.nix
@@ -109,6 +109,7 @@ in {
   allTerminfo = handleTest ./all-terminfo.nix {};
   alps = handleTest ./alps.nix {};
   amazon-init-shell = handleTest ./amazon-init-shell.nix {};
+  anbox = runTest ./anbox.nix;
   anuko-time-tracker = handleTest ./anuko-time-tracker.nix {};
   apcupsd = handleTest ./apcupsd.nix {};
   apfs = runTest ./apfs.nix;
@@ -396,7 +397,7 @@ in {
   jibri = handleTest ./jibri.nix {};
   jirafeau = handleTest ./jirafeau.nix {};
   jitsi-meet = handleTest ./jitsi-meet.nix {};
-  jool = handleTest ./jool.nix {};
+  jool = import ./jool.nix { inherit pkgs runTest; };
   k3s = handleTest ./k3s {};
   kafka = handleTest ./kafka.nix {};
   kanidm = handleTest ./kanidm.nix {};
@@ -447,7 +448,7 @@ in {
   loki = handleTest ./loki.nix {};
   luks = handleTest ./luks.nix {};
   lvm2 = handleTest ./lvm2 {};
-  lxd = pkgs.recurseIntoAttrs (handleTest ./lxd {});
+  lxd = pkgs.recurseIntoAttrs (handleTest ./lxd { inherit handleTestOn; });
   lxd-image-server = handleTest ./lxd-image-server.nix {};
   #logstash = handleTest ./logstash.nix {};
   lorri = handleTest ./lorri/default.nix {};
@@ -729,6 +730,7 @@ in {
   sslh = handleTest ./sslh.nix {};
   sssd = handleTestOn ["x86_64-linux"] ./sssd.nix {};
   sssd-ldap = handleTestOn ["x86_64-linux"] ./sssd-ldap.nix {};
+  stalwart-mail = handleTest ./stalwart-mail.nix {};
   stargazer = runTest ./web-servers/stargazer.nix;
   starship = handleTest ./starship.nix {};
   static-web-server = handleTest ./web-servers/static-web-server.nix {};
diff --git a/nixos/tests/anbox.nix b/nixos/tests/anbox.nix
new file mode 100644
index 000000000000..d78f63ec761f
--- /dev/null
+++ b/nixos/tests/anbox.nix
@@ -0,0 +1,40 @@
+{ lib, pkgs, ... }:
+
+{
+  name = "anbox";
+  meta.maintainers = with lib.maintainers; [ mvnetbiz ];
+
+  nodes.machine = { pkgs, config, ... }: {
+    imports = [
+      ./common/user-account.nix
+      ./common/x11.nix
+    ];
+
+    environment.systemPackages = with pkgs; [ android-tools ];
+
+    test-support.displayManager.auto.user = "alice";
+
+    virtualisation.anbox.enable = true;
+    boot.kernelPackages = pkgs.linuxPackages_5_15;
+
+    # The AArch64 anbox image will not start, but the postmarketOS image works just fine.
+    virtualisation.anbox.image = pkgs.anbox.postmarketos-image;
+    virtualisation.memorySize = 2500;
+  };
+
+  testScript = { nodes, ... }: let
+    user = nodes.machine.users.users.alice;
+    bus = "DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/${toString user.uid}/bus";
+  in ''
+    machine.wait_for_x()
+
+    machine.wait_until_succeeds(
+        "sudo -iu alice ${bus} anbox wait-ready"
+    )
+
+    machine.wait_until_succeeds("adb shell true")
+
+    print(machine.succeed("adb devices"))
+  '';
+}
diff --git a/nixos/tests/custom-ca.nix b/nixos/tests/custom-ca.nix
index 25a7b6fdea46..0fcdf81022d7 100644
--- a/nixos/tests/custom-ca.nix
+++ b/nixos/tests/custom-ca.nix
@@ -131,8 +131,8 @@ let
         # chromium-based browsers refuse to run as root
         test-support.displayManager.auto.user = "alice";
 
-        # browsers may hang with the default memory
-        virtualisation.memorySize = 600;
+        # machine often runs out of memory with less
+        virtualisation.memorySize = 1024;
 
         environment.systemPackages = [ pkgs.xdotool pkgs.${browser} ];
       };
diff --git a/nixos/tests/jool.nix b/nixos/tests/jool.nix
index 6d5ded9b18e0..93575f07b1c8 100644
--- a/nixos/tests/jool.nix
+++ b/nixos/tests/jool.nix
@@ -1,9 +1,4 @@
-{ system ? builtins.currentSystem,
-  config ? {},
-  pkgs ? import ../.. { inherit system config; }
-}:
-
-with import ../lib/testing-python.nix { inherit system pkgs; };
+{ pkgs, runTest }:
 
 let
   inherit (pkgs) lib;
@@ -23,7 +18,6 @@ let
       description = "Mock webserver";
       wants = [ "network-online.target" ];
       wantedBy = [ "multi-user.target" ];
-      serviceConfig.Restart = "always";
       script = ''
         while true; do
         {
@@ -40,7 +34,7 @@ let
 in
 
 {
-  siit = makeTest {
+  siit = runTest {
     # This test simulates the setup described in [1] with two IPv6 and
     # IPv4-only devices on different subnets communicating through a border
     # relay running Jool in SIIT mode.
@@ -49,8 +43,7 @@ in
     meta.maintainers = with lib.maintainers; [ rnhmjoj ];
 
     # Border relay
-    nodes.relay = { ... }: {
-      imports = [ ../modules/profiles/minimal.nix ];
+    nodes.relay = {
       virtualisation.vlans = [ 1 2 ];
 
       # Enable packet routing
@@ -65,20 +58,13 @@ in
         eth2.ipv4.addresses = [ { address = "192.0.2.1";  prefixLength = 24; } ];
       };
 
-      networking.jool = {
-        enable = true;
-        siit.enable = true;
-        siit.config.global.pool6 = "fd::/96";
-      };
+      networking.jool.enable = true;
+      networking.jool.siit.default.global.pool6 = "fd::/96";
     };
 
     # IPv6 only node
-    nodes.alice = { ... }: {
-      imports = [
-        ../modules/profiles/minimal.nix
-        ipv6Only
-        (webserver 6 "Hello, Bob!")
-      ];
+    nodes.alice = {
+      imports = [ ipv6Only (webserver 6 "Hello, Bob!") ];
 
       virtualisation.vlans = [ 1 ];
       networking.interfaces.eth1.ipv6 = {
@@ -89,12 +75,8 @@ in
     };
 
     # IPv4 only node
-    nodes.bob = { ... }: {
-      imports = [
-        ../modules/profiles/minimal.nix
-        ipv4Only
-        (webserver 4 "Hello, Alice!")
-      ];
+    nodes.bob = {
+      imports = [ ipv4Only (webserver 4 "Hello, Alice!") ];
 
       virtualisation.vlans = [ 2 ];
       networking.interfaces.eth1.ipv4 = {
@@ -107,17 +89,17 @@ in
     testScript = ''
       start_all()
 
-      relay.wait_for_unit("jool-siit.service")
+      relay.wait_for_unit("jool-siit-default.service")
       alice.wait_for_unit("network-addresses-eth1.service")
       bob.wait_for_unit("network-addresses-eth1.service")
 
       with subtest("Alice and Bob can't ping each other"):
-        relay.systemctl("stop jool-siit.service")
+        relay.systemctl("stop jool-siit-default.service")
         alice.fail("ping -c1 fd::192.0.2.16")
         bob.fail("ping -c1 198.51.100.8")
 
       with subtest("Alice and Bob can ping using the relay"):
-        relay.systemctl("start jool-siit.service")
+        relay.systemctl("start jool-siit-default.service")
         alice.wait_until_succeeds("ping -c1 fd::192.0.2.16")
         bob.wait_until_succeeds("ping -c1 198.51.100.8")
 
@@ -132,7 +114,7 @@ in
     '';
   };
 
-  nat64 = makeTest {
+  nat64 = runTest {
     # This test simulates the setup described in [1] with two IPv6-only nodes
     # (a client and a homeserver) on the LAN subnet and an IPv4 node on the WAN.
     # The router runs Jool in stateful NAT64 mode, masquerading the LAN and
@@ -142,8 +124,7 @@ in
     meta.maintainers = with lib.maintainers; [ rnhmjoj ];
 
     # Router
-    nodes.router = { ... }: {
-      imports = [ ../modules/profiles/minimal.nix ];
+    nodes.router = {
       virtualisation.vlans = [ 1 2 ];
 
       # Enable packet routing
@@ -158,32 +139,29 @@ in
         eth2.ipv4.addresses = [ { address = "203.0.113.1"; prefixLength = 24; } ];
       };
 
-      networking.jool = {
-        enable = true;
-        nat64.enable = true;
-        nat64.config = {
-          bib = [
-            { # forward HTTP 203.0.113.1 (router) → 2001:db8::9 (homeserver)
-              "protocol"     = "TCP";
-              "ipv4 address" = "203.0.113.1#80";
-              "ipv6 address" = "2001:db8::9#80";
-            }
-          ];
-          pool4 = [
-            # Ports for dynamic translation
-            { protocol =  "TCP";  prefix = "203.0.113.1/32"; "port range" = "40001-65535"; }
-            { protocol =  "UDP";  prefix = "203.0.113.1/32"; "port range" = "40001-65535"; }
-            { protocol = "ICMP";  prefix = "203.0.113.1/32"; "port range" = "40001-65535"; }
-            # Ports for static BIB entries
-            { protocol =  "TCP";  prefix = "203.0.113.1/32"; "port range" = "80"; }
-          ];
-        };
+      networking.jool.enable = true;
+      networking.jool.nat64.default = {
+        bib = [
+          { # forward HTTP 203.0.113.1 (router) → 2001:db8::9 (homeserver)
+            "protocol"     = "TCP";
+            "ipv4 address" = "203.0.113.1#80";
+            "ipv6 address" = "2001:db8::9#80";
+          }
+        ];
+        pool4 = [
+          # Ports for dynamic translation
+          { protocol =  "TCP";  prefix = "203.0.113.1/32"; "port range" = "40001-65535"; }
+          { protocol =  "UDP";  prefix = "203.0.113.1/32"; "port range" = "40001-65535"; }
+          { protocol = "ICMP";  prefix = "203.0.113.1/32"; "port range" = "40001-65535"; }
+          # Ports for static BIB entries
+          { protocol =  "TCP";  prefix = "203.0.113.1/32"; "port range" = "80"; }
+        ];
       };
     };
 
     # LAN client (IPv6 only)
-    nodes.client = { ... }: {
-      imports = [ ../modules/profiles/minimal.nix ipv6Only ];
+    nodes.client = {
+      imports = [ ipv6Only ];
       virtualisation.vlans = [ 1 ];
 
       networking.interfaces.eth1.ipv6 = {
@@ -194,12 +172,8 @@ in
     };
 
     # LAN server (IPv6 only)
-    nodes.homeserver = { ... }: {
-      imports = [
-        ../modules/profiles/minimal.nix
-        ipv6Only
-        (webserver 6 "Hello from IPv6!")
-      ];
+    nodes.homeserver = {
+      imports = [ ipv6Only (webserver 6 "Hello from IPv6!") ];
 
       virtualisation.vlans = [ 1 ];
       networking.interfaces.eth1.ipv6 = {
@@ -210,12 +184,8 @@ in
     };
 
     # WAN server (IPv4 only)
-    nodes.server = { ... }: {
-      imports = [
-        ../modules/profiles/minimal.nix
-        ipv4Only
-        (webserver 4 "Hello from IPv4!")
-      ];
+    nodes.server = {
+      imports = [ ipv4Only (webserver 4 "Hello from IPv4!") ];
 
       virtualisation.vlans = [ 2 ];
       networking.interfaces.eth1.ipv4.addresses =
@@ -229,7 +199,7 @@ in
         node.wait_for_unit("network-addresses-eth1.service")
 
       with subtest("Client can ping the WAN server"):
-        router.wait_for_unit("jool-nat64.service")
+        router.wait_for_unit("jool-nat64-default.service")
         client.succeed("ping -c1 64:ff9b::203.0.113.16")
 
       with subtest("Client can connect to the WAN webserver"):
diff --git a/nixos/tests/lxd/container.nix b/nixos/tests/lxd/container.nix
index 9e56f6e41e05..bdaaebfc0028 100644
--- a/nixos/tests/lxd/container.nix
+++ b/nixos/tests/lxd/container.nix
@@ -1,7 +1,7 @@
 import ../make-test-python.nix ({ pkgs, lib, ... } :
 
 let
-  lxd-image = import ../../release.nix {
+  releases = import ../../release.nix {
     configuration = {
       # Building documentation makes the test unnecessarily take a longer time:
       documentation.enable = lib.mkForce false;
@@ -11,14 +11,14 @@ let
     };
   };
 
-  lxd-image-metadata = lxd-image.lxdMeta.${pkgs.stdenv.hostPlatform.system};
-  lxd-image-rootfs = lxd-image.lxdImage.${pkgs.stdenv.hostPlatform.system};
+  lxd-image-metadata = releases.lxdContainerMeta.${pkgs.stdenv.hostPlatform.system};
+  lxd-image-rootfs = releases.lxdContainerImage.${pkgs.stdenv.hostPlatform.system};
 
 in {
-  name = "lxd";
+  name = "lxd-container";
 
   meta = with pkgs.lib.maintainers; {
-    maintainers = [ patryk27 ];
+    maintainers = [ patryk27 adamcstephens ];
   };
 
   nodes.machine = { lib, ... }: {
@@ -49,6 +49,9 @@ in {
     # Wait for lxd to settle
     machine.succeed("lxd waitready")
 
+    # no preseed should mean no service
+    machine.fail("systemctl status lxd-preseed.service")
+
     machine.succeed("lxd init --minimal")
 
     machine.succeed(
diff --git a/nixos/tests/lxd/default.nix b/nixos/tests/lxd/default.nix
index 2e34907d7936..20afdd5e48bb 100644
--- a/nixos/tests/lxd/default.nix
+++ b/nixos/tests/lxd/default.nix
@@ -2,8 +2,11 @@
   system ? builtins.currentSystem,
   config ? {},
   pkgs ? import ../../.. {inherit system config;},
+  handleTestOn,
 }: {
   container = import ./container.nix {inherit system pkgs;};
   nftables = import ./nftables.nix {inherit system pkgs;};
+  preseed = import ./preseed.nix {inherit system pkgs;};
   ui = import ./ui.nix {inherit system pkgs;};
+  virtual-machine = handleTestOn ["x86_64-linux"] ./virtual-machine.nix { inherit system pkgs; };
 }
diff --git a/nixos/tests/lxd/preseed.nix b/nixos/tests/lxd/preseed.nix
new file mode 100644
index 000000000000..7d89b9f56daa
--- /dev/null
+++ b/nixos/tests/lxd/preseed.nix
@@ -0,0 +1,71 @@
+import ../make-test-python.nix ({ pkgs, lib, ... } :
+
+{
+  name = "lxd-preseed";
+
+  meta = {
+    maintainers = with lib.maintainers; [ adamcstephens ];
+  };
+
+  nodes.machine = { lib, ... }: {
+    virtualisation = {
+      diskSize = 4096;
+
+      lxc.lxcfs.enable = true;
+      lxd.enable = true;
+
+      lxd.preseed = {
+        networks = [
+          {
+            name = "nixostestbr0";
+            type = "bridge";
+            config = {
+              "ipv4.address" = "10.0.100.1/24";
+              "ipv4.nat" = "true";
+            };
+          }
+        ];
+        profiles = [
+          {
+            name = "nixostest_default";
+            devices = {
+              eth0 = {
+                name = "eth0";
+                network = "nixostestbr0";
+                type = "nic";
+              };
+              root = {
+                path = "/";
+                pool = "default";
+                size = "35GiB";
+                type = "disk";
+              };
+            };
+          }
+        ];
+        storage_pools = [
+          {
+            name = "nixostest_pool";
+            driver = "dir";
+          }
+        ];
+      };
+    };
+  };
+
+  testScript = ''
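+    # lxd-preseed is a oneshot unit, so after it has completed successfully
+    # `systemctl is-active` reports "inactive".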
+    def wait_for_preseed(_) -> bool:
+      _, output = machine.systemctl("is-active lxd-preseed.service")
+      return ("inactive" in output)
+
+    machine.wait_for_unit("sockets.target")
+    machine.wait_for_unit("lxd.service")
+    with machine.nested("Waiting for preseed to complete"):
+      retry(wait_for_preseed)
+
+    with subtest("Verify preseed resources created"):
+      machine.succeed("lxc profile show nixostest_default")
+      machine.succeed("lxc network info nixostestbr0")
+      machine.succeed("lxc storage show nixostest_pool")
+  '';
+})
diff --git a/nixos/tests/lxd/virtual-machine.nix b/nixos/tests/lxd/virtual-machine.nix
new file mode 100644
index 000000000000..93705e9350c5
--- /dev/null
+++ b/nixos/tests/lxd/virtual-machine.nix
@@ -0,0 +1,64 @@
+import ../make-test-python.nix ({ pkgs, lib, ... }:
+
+let
+  releases = import ../../release.nix {
+    configuration = {
+      # Building documentation makes the test take unnecessarily long:
+      documentation.enable = lib.mkForce false;
+
+      # Our tests require `grep` & friends:
+      environment.systemPackages = with pkgs; [busybox];
+    };
+  };
+
+  lxd-image-metadata = releases.lxdVirtualMachineImageMeta.${pkgs.stdenv.hostPlatform.system};
+  lxd-image-disk = releases.lxdVirtualMachineImage.${pkgs.stdenv.hostPlatform.system};
+
+  instance-name = "instance1";
+in {
+  name = "lxd-virtual-machine";
+
+  meta = with pkgs.lib.maintainers; {
+    maintainers = [adamcstephens];
+  };
+
+  nodes.machine = {lib, ...}: {
+    virtualisation = {
+      diskSize = 4096;
+
+      cores = 2;
+
+      # Ensure we have enough memory for the nested virtual machine
+      memorySize = 1024;
+
+      lxc.lxcfs.enable = true;
+      lxd.enable = true;
+    };
+  };
+
+  testScript = ''
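+    # The instance counts as up once a command can be executed inside it.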
+    def instance_is_up(_) -> bool:
+      status, _ = machine.execute("lxc exec ${instance-name} --disable-stdin --force-interactive /run/current-system/sw/bin/true")
+      return status == 0
+
+    machine.wait_for_unit("sockets.target")
+    machine.wait_for_unit("lxd.service")
+    machine.wait_for_file("/var/lib/lxd/unix.socket")
+
+    # Wait for lxd to settle
+    machine.succeed("lxd waitready")
+
+    machine.succeed("lxd init --minimal")
+
+    with subtest("virtual-machine image can be imported"):
+        machine.succeed("lxc image import ${lxd-image-metadata}/*/*.tar.xz ${lxd-image-disk}/nixos.qcow2 --alias nixos")
+
+    with subtest("virtual-machine can be launched and become available"):
+        machine.succeed("lxc launch nixos ${instance-name} --vm --config limits.memory=512MB --config security.secureboot=false")
+        with machine.nested("Waiting for instance to start and be usable"):
+          retry(instance_is_up)
+
+    with subtest("lxd-agent is started"):
+        machine.succeed("lxc exec ${instance-name} systemctl is-active lxd-agent")
+  '';
+})
diff --git a/nixos/tests/stalwart-mail.nix b/nixos/tests/stalwart-mail.nix
new file mode 100644
index 000000000000..b5589966a160
--- /dev/null
+++ b/nixos/tests/stalwart-mail.nix
@@ -0,0 +1,117 @@
+# Rudimentary test checking that the Stalwart email server can:
+# - receive a message through SMTP submission, then
+# - serve this message through IMAP.
+
+let
+  certs = import ./common/acme/server/snakeoil-certs.nix;
+  domain = certs.domain;
+
+in import ./make-test-python.nix ({ lib, ... }: {
+  name = "stalwart-mail";
+
+  nodes.main = { pkgs, ... }: {
+    security.pki.certificateFiles = [ certs.ca.cert ];
+
+    services.stalwart-mail = {
+      enable = true;
+      settings = {
+        server.hostname = domain;
+
+        certificate."snakeoil" = {
+          cert = "file://${certs.${domain}.cert}";
+          private-key = "file://${certs.${domain}.key}";
+        };
+
+        server.tls = {
+          certificate = "snakeoil";
+          enable = true;
+          implicit = false;
+        };
+
+        server.listener = {
+          "smtp-submission" = {
+            bind = [ "[::]:587" ];
+            protocol = "smtp";
+          };
+
+          "imap" = {
+            bind = [ "[::]:143" ];
+            protocol = "imap";
+          };
+        };
+
+        session.auth.mechanisms = [ "PLAIN" ];
+        session.auth.directory = "in-memory";
+        jmap.directory = "in-memory";  # shared with imap
+
+        session.rcpt.directory = "in-memory";
+        queue.outbound.next-hop = [ "local" ];
+
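+        # Static user database, shared by the authentication and recipient
+        # directories configured above.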
+        directory."in-memory" = {
+          type = "memory";
+          users = [
+            {
+              name = "alice";
+              secret = "foobar";
+              email = [ "alice@${domain}" ];
+            }
+            {
+              name = "bob";
+              secret = "foobar";
+              email = [ "bob@${domain}" ];
+            }
+          ];
+        };
+      };
+    };
+
+    environment.systemPackages = [
+      (pkgs.writers.writePython3Bin "test-smtp-submission" { } ''
+        from smtplib import SMTP
+        from textwrap import dedent
+
+        with SMTP('localhost', 587) as smtp:
+            smtp.starttls()
+            smtp.login('alice', 'foobar')
+            smtp.sendmail(
+                'alice@${domain}',
+                'bob@${domain}',
+                # dedent the message so the header lines start at column
+                # zero instead of being folded into one another
+                dedent("""
+                    From: alice@${domain}
+                    To: bob@${domain}
+                    Subject: Some test message
+
+                    This is a test message.
+                """).strip()
+            )
+      '')
+
+      (pkgs.writers.writePython3Bin "test-imap-read" { } ''
+        from imaplib import IMAP4
+
+        with IMAP4('localhost') as imap:
+            imap.starttls()
+            imap.login('bob', 'foobar')
+            imap.select('"All Mail"')
+            status, [ref] = imap.search(None, 'ALL')
+            assert status == 'OK'
+            [msgId] = ref.split()
+            status, msg = imap.fetch(msgId, 'BODY[TEXT]')
+            assert status == 'OK'
+            assert msg[0][1].strip() == b'This is a test message.'
+      '')
+    ];
+  };
+
+  testScript = /* python */ ''
+    main.wait_for_unit("stalwart-mail.service")
+    main.wait_for_open_port(587)
+    main.wait_for_open_port(143)
+
+    main.succeed("test-smtp-submission")
+    main.succeed("test-imap-read")
+  '';
+
+  meta = {
+    maintainers = with lib.maintainers; [ happysalada pacien ];
+  };
+})