Diffstat (limited to 'nixos/doc/manual/administration')
17 files changed, 499 insertions, 574 deletions
diff --git a/nixos/doc/manual/administration/boot-problems.xml b/nixos/doc/manual/administration/boot-problems.xml index be6ff3aac0fe..5f05ad261ef3 100644 --- a/nixos/doc/manual/administration/boot-problems.xml +++ b/nixos/doc/manual/administration/boot-problems.xml @@ -3,63 +3,83 @@ xmlns:xi="http://www.w3.org/2001/XInclude" version="5.0" xml:id="sec-boot-problems"> + <title>Boot Problems</title> -<title>Boot Problems</title> + <para> + If NixOS fails to boot, there are a number of kernel command line parameters + that may help you to identify or fix the issue. You can add these parameters + in the GRUB boot menu by pressing “e” to modify the selected boot entry + and editing the line starting with <literal>linux</literal>. The following + are some useful kernel command line parameters that are recognised by the + NixOS boot scripts or by systemd: + <variablelist> + <varlistentry> + <term><literal>boot.shell_on_fail</literal> + </term> + <listitem> + <para> + Start a root shell if something goes wrong in stage 1 of the boot process + (the initial ramdisk). This is disabled by default because there is no + authentication for the root shell. + </para> + </listitem> + </varlistentry> + <varlistentry> + <term><literal>boot.debug1</literal> + </term> + <listitem> + <para> + Start an interactive shell in stage 1 before anything useful has been + done. That is, no modules have been loaded and no file systems have been + mounted, except for <filename>/proc</filename> and + <filename>/sys</filename>. + </para> + </listitem> + </varlistentry> + <varlistentry> + <term><literal>boot.trace</literal> + </term> + <listitem> + <para> + Print every shell command executed by the stage 1 and 2 boot scripts. + </para> + </listitem> + </varlistentry> + <varlistentry> + <term><literal>single</literal> + </term> + <listitem> + <para> + Boot into rescue mode (a.k.a. single user mode). 
This will cause systemd + to start nothing but the unit <literal>rescue.target</literal>, which + runs <command>sulogin</command> to prompt for the root password and start + a root login shell. Exiting the shell causes the system to continue with + the normal boot process. + </para> + </listitem> + </varlistentry> + <varlistentry> + <term><literal>systemd.log_level=debug systemd.log_target=console</literal> + </term> + <listitem> + <para> + Make systemd very verbose and send log messages to the console instead of + the journal. + </para> + </listitem> + </varlistentry> + </variablelist> + For more parameters recognised by systemd, see <citerefentry> + <refentrytitle>systemd</refentrytitle> + <manvolnum>1</manvolnum></citerefentry>. + </para> -<para>If NixOS fails to boot, there are a number of kernel command -line parameters that may help you to identify or fix the issue. You -can add these parameters in the GRUB boot menu by pressing “e” to -modify the selected boot entry and editing the line starting with -<literal>linux</literal>. The following are some useful kernel command -line parameters that are recognised by the NixOS boot scripts or by -systemd: - -<variablelist> - - <varlistentry><term><literal>boot.shell_on_fail</literal></term> - <listitem><para>Start a root shell if something goes wrong in - stage 1 of the boot process (the initial ramdisk). This is - disabled by default because there is no authentication for the - root shell.</para></listitem> - </varlistentry> - - <varlistentry><term><literal>boot.debug1</literal></term> - <listitem><para>Start an interactive shell in stage 1 before - anything useful has been done. 
That is, no modules have been - loaded and no file systems have been mounted, except for - <filename>/proc</filename> and - <filename>/sys</filename>.</para></listitem> - </varlistentry> - - <varlistentry><term><literal>boot.trace</literal></term> - <listitem><para>Print every shell command executed by the stage 1 - and 2 boot scripts.</para></listitem> - </varlistentry> - - <varlistentry><term><literal>single</literal></term> - <listitem><para>Boot into rescue mode (a.k.a. single user mode). - This will cause systemd to start nothing but the unit - <literal>rescue.target</literal>, which runs - <command>sulogin</command> to prompt for the root password and - start a root login shell. Exiting the shell causes the system to - continue with the normal boot process.</para></listitem> - </varlistentry> - - <varlistentry><term><literal>systemd.log_level=debug systemd.log_target=console</literal></term> - <listitem><para>Make systemd very verbose and send log messages to - the console instead of the journal.</para></listitem> - </varlistentry> - -</variablelist> - -For more parameters recognised by systemd, see -<citerefentry><refentrytitle>systemd</refentrytitle><manvolnum>1</manvolnum></citerefentry>.</para> - -<para>If no login prompts or X11 login screens appear (e.g. due to -hanging dependencies), you can press Alt+ArrowUp. If you’re lucky, -this will start rescue mode (described above). (Also note that since -most units have a 90-second timeout before systemd gives up on them, -the <command>agetty</command> login prompts should appear eventually -unless something is very wrong.)</para> - -</section> \ No newline at end of file + <para> + If no login prompts or X11 login screens appear (e.g. due to hanging + dependencies), you can press Alt+ArrowUp. If you’re lucky, this will start + rescue mode (described above). 
(Also note that since most units have a + 90-second timeout before systemd gives up on them, the + <command>agetty</command> login prompts should appear eventually unless + something is very wrong.) + </para> +</section> diff --git a/nixos/doc/manual/administration/cleaning-store.xml b/nixos/doc/manual/administration/cleaning-store.xml index 4cf62947f528..ee201982a40b 100644 --- a/nixos/doc/manual/administration/cleaning-store.xml +++ b/nixos/doc/manual/administration/cleaning-store.xml @@ -3,60 +3,51 @@ xmlns:xi="http://www.w3.org/2001/XInclude" version="5.0" xml:id="sec-nix-gc"> - -<title>Cleaning the Nix Store</title> - -<para>Nix has a purely functional model, meaning that packages are -never upgraded in place. Instead new versions of packages end up in a -different location in the Nix store (<filename>/nix/store</filename>). -You should periodically run Nix’s <emphasis>garbage -collector</emphasis> to remove old, unreferenced packages. This is -easy: - + <title>Cleaning the Nix Store</title> + <para> + Nix has a purely functional model, meaning that packages are never upgraded + in place. Instead new versions of packages end up in a different location in + the Nix store (<filename>/nix/store</filename>). You should periodically run + Nix’s <emphasis>garbage collector</emphasis> to remove old, unreferenced + packages. 
This is easy: <screen> $ nix-collect-garbage </screen> - -Alternatively, you can use a systemd unit that does the same in the -background: - + Alternatively, you can use a systemd unit that does the same in the + background: <screen> # systemctl start nix-gc.service </screen> - -You can tell NixOS in <filename>configuration.nix</filename> to run -this unit automatically at certain points in time, for instance, every -night at 03:15: - + You can tell NixOS in <filename>configuration.nix</filename> to run this unit + automatically at certain points in time, for instance, every night at 03:15: <programlisting> -nix.gc.automatic = true; -nix.gc.dates = "03:15"; +<xref linkend="opt-nix.gc.automatic"/> = true; +<xref linkend="opt-nix.gc.dates"/> = "03:15"; </programlisting> - -</para> - -<para>The commands above do not remove garbage collector roots, such -as old system configurations. Thus they do not remove the ability to -roll back to previous configurations. The following command deletes -old roots, removing the ability to roll back to them: + </para> + <para> + The commands above do not remove garbage collector roots, such as old system + configurations. Thus they do not remove the ability to roll back to previous + configurations. The following command deletes old roots, removing the ability + to roll back to them: <screen> $ nix-collect-garbage -d </screen> -You can also do this for specific profiles, e.g. + You can also do this for specific profiles, e.g. <screen> $ nix-env -p /nix/var/nix/profiles/per-user/eelco/profile --delete-generations old </screen> -Note that NixOS system configurations are stored in the profile -<filename>/nix/var/nix/profiles/system</filename>.</para> - -<para>Another way to reclaim disk space (often as much as 40% of the -size of the Nix store) is to run Nix’s store optimiser, which seeks -out identical files in the store and replaces them with hard links to -a single copy. 
+ Note that NixOS system configurations are stored in the profile + <filename>/nix/var/nix/profiles/system</filename>. + </para> + <para> + Another way to reclaim disk space (often as much as 40% of the size of the + Nix store) is to run Nix’s store optimiser, which seeks out identical files + in the store and replaces them with hard links to a single copy. <screen> $ nix-store --optimise </screen> -Since this command needs to read the entire Nix store, it can take -quite a while to finish.</para> - + Since this command needs to read the entire Nix store, it can take quite a + while to finish. + </para> </chapter> diff --git a/nixos/doc/manual/administration/container-networking.xml b/nixos/doc/manual/administration/container-networking.xml index d89d262eff4e..4b977d1d82eb 100644 --- a/nixos/doc/manual/administration/container-networking.xml +++ b/nixos/doc/manual/administration/container-networking.xml @@ -3,15 +3,13 @@ xmlns:xi="http://www.w3.org/2001/XInclude" version="5.0" xml:id="sec-container-networking"> + <title>Container Networking</title> - -<title>Container Networking</title> - -<para>When you create a container using <literal>nixos-container -create</literal>, it gets it own private IPv4 address in the range -<literal>10.233.0.0/16</literal>. You can get the container’s IPv4 -address as follows: - + <para> + When you create a container using <literal>nixos-container create</literal>, + it gets it own private IPv4 address in the range + <literal>10.233.0.0/16</literal>. You can get the container’s IPv4 address + as follows: <screen> # nixos-container show-ip foo 10.233.4.2 @@ -19,40 +17,39 @@ address as follows: $ ping -c1 10.233.4.2 64 bytes from 10.233.4.2: icmp_seq=1 ttl=64 time=0.106 ms </screen> - -</para> - -<para>Networking is implemented using a pair of virtual Ethernet -devices. 
The network interface in the container is called -<literal>eth0</literal>, while the matching interface in the host is -called <literal>ve-<replaceable>container-name</replaceable></literal> -(e.g., <literal>ve-foo</literal>). The container has its own network -namespace and the <literal>CAP_NET_ADMIN</literal> capability, so it -can perform arbitrary network configuration such as setting up -firewall rules, without affecting or having access to the host’s -network.</para> - -<para>By default, containers cannot talk to the outside network. If -you want that, you should set up Network Address Translation (NAT) -rules on the host to rewrite container traffic to use your external -IP address. This can be accomplished using the following configuration -on the host: - + </para> + + <para> + Networking is implemented using a pair of virtual Ethernet devices. The + network interface in the container is called <literal>eth0</literal>, while + the matching interface in the host is called + <literal>ve-<replaceable>container-name</replaceable></literal> (e.g., + <literal>ve-foo</literal>). The container has its own network namespace and + the <literal>CAP_NET_ADMIN</literal> capability, so it can perform arbitrary + network configuration such as setting up firewall rules, without affecting or + having access to the host’s network. + </para> + + <para> + By default, containers cannot talk to the outside network. If you want that, + you should set up Network Address Translation (NAT) rules on the host to + rewrite container traffic to use your external IP address. 
This can be + accomplished using the following configuration on the host: <programlisting> -networking.nat.enable = true; -networking.nat.internalInterfaces = ["ve-+"]; -networking.nat.externalInterface = "eth0"; +<xref linkend="opt-networking.nat.enable"/> = true; +<xref linkend="opt-networking.nat.internalInterfaces"/> = ["ve-+"]; +<xref linkend="opt-networking.nat.externalInterface"/> = "eth0"; </programlisting> -where <literal>eth0</literal> should be replaced with the desired -external interface. Note that <literal>ve-+</literal> is a wildcard -that matches all container interfaces.</para> - -<para>If you are using Network Manager, you need to explicitly prevent -it from managing container interfaces: - + where <literal>eth0</literal> should be replaced with the desired external + interface. Note that <literal>ve-+</literal> is a wildcard that matches all + container interfaces. + </para> + + <para> + If you are using Network Manager, you need to explicitly prevent it from + managing container interfaces: <programlisting> networking.networkmanager.unmanaged = [ "interface-name:ve-*" ]; </programlisting> -</para> - + </para> </section> diff --git a/nixos/doc/manual/administration/containers.xml b/nixos/doc/manual/administration/containers.xml index 4cd2c8ae5563..0d3355e56a58 100644 --- a/nixos/doc/manual/administration/containers.xml +++ b/nixos/doc/manual/administration/containers.xml @@ -3,32 +3,32 @@ xmlns:xi="http://www.w3.org/2001/XInclude" version="5.0" xml:id="ch-containers"> - -<title>Container Management</title> - -<para>NixOS allows you to easily run other NixOS instances as -<emphasis>containers</emphasis>. Containers are a light-weight -approach to virtualisation that runs software in the container at the -same speed as in the host system. NixOS containers share the Nix store -of the host, making container creation very efficient.</para> - -<warning><para>Currently, NixOS containers are not perfectly isolated -from the host system. 
This means that a user with root access to the -container can do things that affect the host. So you should not give -container root access to untrusted users.</para></warning> - -<para>NixOS containers can be created in two ways: imperatively, using -the command <command>nixos-container</command>, and declaratively, by -specifying them in your <filename>configuration.nix</filename>. The -declarative approach implies that containers get upgraded along with -your host system when you run <command>nixos-rebuild</command>, which -is often not what you want. By contrast, in the imperative approach, -containers are configured and updated independently from the host -system.</para> - -<xi:include href="imperative-containers.xml" /> -<xi:include href="declarative-containers.xml" /> -<xi:include href="container-networking.xml" /> - + <title>Container Management</title> + <para> + NixOS allows you to easily run other NixOS instances as + <emphasis>containers</emphasis>. Containers are a light-weight approach to + virtualisation that runs software in the container at the same speed as in + the host system. NixOS containers share the Nix store of the host, making + container creation very efficient. + </para> + <warning> + <para> + Currently, NixOS containers are not perfectly isolated from the host system. + This means that a user with root access to the container can do things that + affect the host. So you should not give container root access to untrusted + users. + </para> + </warning> + <para> + NixOS containers can be created in two ways: imperatively, using the command + <command>nixos-container</command>, and declaratively, by specifying them in + your <filename>configuration.nix</filename>. The declarative approach implies + that containers get upgraded along with your host system when you run + <command>nixos-rebuild</command>, which is often not what you want. 
By + contrast, in the imperative approach, containers are configured and updated + independently from the host system. + </para> + <xi:include href="imperative-containers.xml" /> + <xi:include href="declarative-containers.xml" /> + <xi:include href="container-networking.xml" /> </chapter> - diff --git a/nixos/doc/manual/administration/control-groups.xml b/nixos/doc/manual/administration/control-groups.xml index 0d7b8ae910a7..bb8b7f83d9e0 100644 --- a/nixos/doc/manual/administration/control-groups.xml +++ b/nixos/doc/manual/administration/control-groups.xml @@ -3,20 +3,18 @@ xmlns:xi="http://www.w3.org/2001/XInclude" version="5.0" xml:id="sec-cgroups"> - -<title>Control Groups</title> - -<para>To keep track of the processes in a running system, systemd uses -<emphasis>control groups</emphasis> (cgroups). A control group is a -set of processes used to allocate resources such as CPU, memory or I/O -bandwidth. There can be multiple control group hierarchies, allowing -each kind of resource to be managed independently.</para> - -<para>The command <command>systemd-cgls</command> lists all control -groups in the <literal>systemd</literal> hierarchy, which is what -systemd uses to keep track of the processes belonging to each service -or user session: - + <title>Control Groups</title> + <para> + To keep track of the processes in a running system, systemd uses + <emphasis>control groups</emphasis> (cgroups). A control group is a set of + processes used to allocate resources such as CPU, memory or I/O bandwidth. + There can be multiple control group hierarchies, allowing each kind of + resource to be managed independently. 
+ </para> + <para> + The command <command>systemd-cgls</command> lists all control groups in the + <literal>systemd</literal> hierarchy, which is what systemd uses to keep + track of the processes belonging to each service or user session: <screen> $ systemd-cgls ├─user @@ -34,40 +32,34 @@ $ systemd-cgls │ └─2376 dhcpcd --config /nix/store/f8dif8dsi2yaa70n03xir8r653776ka6-dhcpcd.conf └─ <replaceable>...</replaceable> </screen> - -Similarly, <command>systemd-cgls cpu</command> shows the cgroups in -the CPU hierarchy, which allows per-cgroup CPU scheduling priorities. -By default, every systemd service gets its own CPU cgroup, while all -user sessions are in the top-level CPU cgroup. This ensures, for -instance, that a thousand run-away processes in the -<literal>httpd.service</literal> cgroup cannot starve the CPU for one -process in the <literal>postgresql.service</literal> cgroup. (By -contrast, it they were in the same cgroup, then the PostgreSQL process -would get 1/1001 of the cgroup’s CPU time.) You can limit a service’s -CPU share in <filename>configuration.nix</filename>: - + Similarly, <command>systemd-cgls cpu</command> shows the cgroups in the CPU + hierarchy, which allows per-cgroup CPU scheduling priorities. By default, + every systemd service gets its own CPU cgroup, while all user sessions are in + the top-level CPU cgroup. This ensures, for instance, that a thousand + run-away processes in the <literal>httpd.service</literal> cgroup cannot + starve the CPU for one process in the <literal>postgresql.service</literal> + cgroup. (By contrast, it they were in the same cgroup, then the PostgreSQL + process would get 1/1001 of the cgroup’s CPU time.) 
You can limit a + service’s CPU share in <filename>configuration.nix</filename>: <programlisting> -systemd.services.httpd.serviceConfig.CPUShares = 512; +<link linkend="opt-systemd.services._name_.serviceConfig">systemd.services.httpd.serviceConfig</link>.CPUShares = 512; </programlisting> - -By default, every cgroup has 1024 CPU shares, so this will halve the -CPU allocation of the <literal>httpd.service</literal> cgroup.</para> - -<para>There also is a <literal>memory</literal> hierarchy that -controls memory allocation limits; by default, all processes are in -the top-level cgroup, so any service or session can exhaust all -available memory. Per-cgroup memory limits can be specified in -<filename>configuration.nix</filename>; for instance, to limit -<literal>httpd.service</literal> to 512 MiB of RAM (excluding swap): - + By default, every cgroup has 1024 CPU shares, so this will halve the CPU + allocation of the <literal>httpd.service</literal> cgroup. + </para> + <para> + There also is a <literal>memory</literal> hierarchy that controls memory + allocation limits; by default, all processes are in the top-level cgroup, so + any service or session can exhaust all available memory. Per-cgroup memory + limits can be specified in <filename>configuration.nix</filename>; for + instance, to limit <literal>httpd.service</literal> to 512 MiB of RAM + (excluding swap): <programlisting> -systemd.services.httpd.serviceConfig.MemoryLimit = "512M"; +<link linkend="opt-systemd.services._name_.serviceConfig">systemd.services.httpd.serviceConfig</link>.MemoryLimit = "512M"; </programlisting> - -</para> - -<para>The command <command>systemd-cgtop</command> shows a -continuously updated list of all cgroups with their CPU and memory -usage.</para> - + </para> + <para> + The command <command>systemd-cgtop</command> shows a continuously updated + list of all cgroups with their CPU and memory usage. 
+ </para> </chapter> diff --git a/nixos/doc/manual/administration/declarative-containers.xml b/nixos/doc/manual/administration/declarative-containers.xml index 94f03a2ee116..2a98fb126231 100644 --- a/nixos/doc/manual/administration/declarative-containers.xml +++ b/nixos/doc/manual/administration/declarative-containers.xml @@ -3,58 +3,58 @@ xmlns:xi="http://www.w3.org/2001/XInclude" version="5.0" xml:id="sec-declarative-containers"> + <title>Declarative Container Specification</title> -<title>Declarative Container Specification</title> - -<para>You can also specify containers and their configuration in the -host’s <filename>configuration.nix</filename>. For example, the -following specifies that there shall be a container named -<literal>database</literal> running PostgreSQL: - + <para> + You can also specify containers and their configuration in the host’s + <filename>configuration.nix</filename>. For example, the following specifies + that there shall be a container named <literal>database</literal> running + PostgreSQL: <programlisting> containers.database = { config = { config, pkgs, ... }: - { services.postgresql.enable = true; - services.postgresql.package = pkgs.postgresql96; + { <xref linkend="opt-services.postgresql.enable"/> = true; + <xref linkend="opt-services.postgresql.package"/> = pkgs.postgresql96; }; }; </programlisting> - -If you run <literal>nixos-rebuild switch</literal>, the container will -be built. If the container was already running, it will be -updated in place, without rebooting. The container can be configured to -start automatically by setting <literal>containers.database.autoStart = true</literal> -in its configuration.</para> - -<para>By default, declarative containers share the network namespace -of the host, meaning that they can listen on (privileged) -ports. However, they cannot change the network configuration. 
You can -give a container its own network as follows: - + If you run <literal>nixos-rebuild switch</literal>, the container will be + built. If the container was already running, it will be updated in place, + without rebooting. The container can be configured to start automatically by + setting <literal>containers.database.autoStart = true</literal> in its + configuration. + </para> + + <para> + By default, declarative containers share the network namespace of the host, + meaning that they can listen on (privileged) ports. However, they cannot + change the network configuration. You can give a container its own network as + follows: <programlisting> -containers.database = - { privateNetwork = true; - hostAddress = "192.168.100.10"; - localAddress = "192.168.100.11"; - }; +containers.database = { + <link linkend="opt-containers._name_.privateNetwork">privateNetwork</link> = true; + <link linkend="opt-containers._name_.hostAddress">hostAddress</link> = "192.168.100.10"; + <link linkend="opt-containers._name_.localAddress">localAddress</link> = "192.168.100.11"; +}; </programlisting> - -This gives the container a private virtual Ethernet interface with IP -address <literal>192.168.100.11</literal>, which is hooked up to a -virtual Ethernet interface on the host with IP address -<literal>192.168.100.10</literal>. (See the next section for details -on container networking.)</para> - -<para>To disable the container, just remove it from -<filename>configuration.nix</filename> and run <literal>nixos-rebuild -switch</literal>. Note that this will not delete the root directory of -the container in <literal>/var/lib/containers</literal>. Containers can be -destroyed using the imperative method: <literal>nixos-container destroy - foo</literal>.</para> - -<para>Declarative containers can be started and stopped using the -corresponding systemd service, e.g. 
<literal>systemctl start -container@database</literal>.</para> - + This gives the container a private virtual Ethernet interface with IP address + <literal>192.168.100.11</literal>, which is hooked up to a virtual Ethernet + interface on the host with IP address <literal>192.168.100.10</literal>. (See + the next section for details on container networking.) + </para> + + <para> + To disable the container, just remove it from + <filename>configuration.nix</filename> and run <literal>nixos-rebuild + switch</literal>. Note that this will not delete the root directory of the + container in <literal>/var/lib/containers</literal>. Containers can be + destroyed using the imperative method: <literal>nixos-container destroy + foo</literal>. + </para> + + <para> + Declarative containers can be started and stopped using the corresponding + systemd service, e.g. <literal>systemctl start container@database</literal>. + </para> </section> diff --git a/nixos/doc/manual/administration/imperative-containers.xml b/nixos/doc/manual/administration/imperative-containers.xml index d5d8140e0764..9cc7ca3e672a 100644 --- a/nixos/doc/manual/administration/imperative-containers.xml +++ b/nixos/doc/manual/administration/imperative-containers.xml @@ -3,131 +3,114 @@ xmlns:xi="http://www.w3.org/2001/XInclude" version="5.0" xml:id="sec-imperative-containers"> + <title>Imperative Container Management</title> -<title>Imperative Container Management</title> - -<para>We’ll cover imperative container management using -<command>nixos-container</command> first. -Be aware that container management is currently only possible -as <literal>root</literal>.</para> - -<para>You create a container with -identifier <literal>foo</literal> as follows: + <para> + We’ll cover imperative container management using + <command>nixos-container</command> first. Be aware that container management + is currently only possible as <literal>root</literal>. 
+ </para> + <para> + You create a container with identifier <literal>foo</literal> as follows: <screen> # nixos-container create foo </screen> - -This creates the container’s root directory in -<filename>/var/lib/containers/foo</filename> and a small configuration -file in <filename>/etc/containers/foo.conf</filename>. It also builds -the container’s initial system configuration and stores it in -<filename>/nix/var/nix/profiles/per-container/foo/system</filename>. You -can modify the initial configuration of the container on the command -line. For instance, to create a container that has -<command>sshd</command> running, with the given public key for -<literal>root</literal>: - + This creates the container’s root directory in + <filename>/var/lib/containers/foo</filename> and a small configuration file + in <filename>/etc/containers/foo.conf</filename>. It also builds the + container’s initial system configuration and stores it in + <filename>/nix/var/nix/profiles/per-container/foo/system</filename>. You can + modify the initial configuration of the container on the command line. For + instance, to create a container that has <command>sshd</command> running, + with the given public key for <literal>root</literal>: <screen> # nixos-container create foo --config ' - services.openssh.enable = true; - users.extraUsers.root.openssh.authorizedKeys.keys = ["ssh-dss AAAAB3N…"]; + <xref linkend="opt-services.openssh.enable"/> = true; + <link linkend="opt-users.users._name__.openssh.authorizedKeys.keys">users.extraUsers.root.openssh.authorizedKeys.keys</link> = ["ssh-dss AAAAB3N…"]; ' </screen> + </para> -</para> - -<para>Creating a container does not start it. To start the container, -run: - + <para> + Creating a container does not start it. To start the container, run: <screen> # nixos-container start foo </screen> - -This command will return as soon as the container has booted and has -reached <literal>multi-user.target</literal>. 
On the host, the -container runs within a systemd unit called -<literal>container@<replaceable>container-name</replaceable>.service</literal>. -Thus, if something went wrong, you can get status info using -<command>systemctl</command>: - + This command will return as soon as the container has booted and has reached + <literal>multi-user.target</literal>. On the host, the container runs within + a systemd unit called + <literal>container@<replaceable>container-name</replaceable>.service</literal>. + Thus, if something went wrong, you can get status info using + <command>systemctl</command>: <screen> # systemctl status container@foo </screen> + </para> -</para> - -<para>If the container has started successfully, you can log in as -root using the <command>root-login</command> operation: - + <para> + If the container has started successfully, you can log in as root using the + <command>root-login</command> operation: <screen> # nixos-container root-login foo [root@foo:~]# </screen> - -Note that only root on the host can do this (since there is no -authentication). You can also get a regular login prompt using the -<command>login</command> operation, which is available to all users on -the host: - + Note that only root on the host can do this (since there is no + authentication). You can also get a regular login prompt using the + <command>login</command> operation, which is available to all users on the + host: <screen> # nixos-container login foo foo login: alice Password: *** </screen> - -With <command>nixos-container run</command>, you can execute arbitrary -commands in the container: - + With <command>nixos-container run</command>, you can execute arbitrary + commands in the container: <screen> # nixos-container run foo -- uname -a Linux foo 3.4.82 #1-NixOS SMP Thu Mar 20 14:44:05 UTC 2014 x86_64 GNU/Linux </screen> + </para> -</para> - -<para>There are several ways to change the configuration of the -container. 
First, on the host, you can edit -<literal>/var/lib/container/<replaceable>name</replaceable>/etc/nixos/configuration.nix</literal>, -and run - + <para> + There are several ways to change the configuration of the container. First, + on the host, you can edit + <literal>/var/lib/container/<replaceable>name</replaceable>/etc/nixos/configuration.nix</literal>, + and run <screen> # nixos-container update foo </screen> - -This will build and activate the new configuration. You can also -specify a new configuration on the command line: - + This will build and activate the new configuration. You can also specify a + new configuration on the command line: <screen> # nixos-container update foo --config ' - services.httpd.enable = true; - services.httpd.adminAddr = "foo@example.org"; - networking.firewall.allowedTCPPorts = [ 80 ]; + <xref linkend="opt-services.httpd.enable"/> = true; + <xref linkend="opt-services.httpd.adminAddr"/> = "foo@example.org"; + <xref linkend="opt-networking.firewall.allowedTCPPorts"/> = [ 80 ]; ' # curl http://$(nixos-container show-ip foo)/ <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">… </screen> - -However, note that this will overwrite the container’s -<filename>/etc/nixos/configuration.nix</filename>.</para> - -<para>Alternatively, you can change the configuration from within the -container itself by running <command>nixos-rebuild switch</command> -inside the container. Note that the container by default does not have -a copy of the NixOS channel, so you should run <command>nix-channel ---update</command> first.</para> - -<para>Containers can be stopped and started using -<literal>nixos-container stop</literal> and <literal>nixos-container -start</literal>, respectively, or by using -<command>systemctl</command> on the container’s service unit. To -destroy a container, including its file system, do - + However, note that this will overwrite the container’s + <filename>/etc/nixos/configuration.nix</filename>. 
+ </para> + + <para> + Alternatively, you can change the configuration from within the container + itself by running <command>nixos-rebuild switch</command> inside the + container. Note that the container by default does not have a copy of the + NixOS channel, so you should run <command>nix-channel --update</command> + first. + </para> + + <para> + Containers can be stopped and started using <literal>nixos-container + stop</literal> and <literal>nixos-container start</literal>, respectively, or + by using <command>systemctl</command> on the container’s service unit. To + destroy a container, including its file system, do <screen> # nixos-container destroy foo </screen> - -</para> - + </para> </section> diff --git a/nixos/doc/manual/administration/logging.xml b/nixos/doc/manual/administration/logging.xml index 1d5df7770e29..a41936b373d6 100644 --- a/nixos/doc/manual/administration/logging.xml +++ b/nixos/doc/manual/administration/logging.xml @@ -3,26 +3,20 @@ xmlns:xi="http://www.w3.org/2001/XInclude" version="5.0" xml:id="sec-logging"> - -<title>Logging</title> - -<para>System-wide logging is provided by systemd’s -<emphasis>journal</emphasis>, which subsumes traditional logging -daemons such as syslogd and klogd. Log entries are kept in binary -files in <filename>/var/log/journal/</filename>. The command -<literal>journalctl</literal> allows you to see the contents of the -journal. For example, - + <title>Logging</title> + <para> + System-wide logging is provided by systemd’s <emphasis>journal</emphasis>, + which subsumes traditional logging daemons such as syslogd and klogd. Log + entries are kept in binary files in <filename>/var/log/journal/</filename>. + The command <literal>journalctl</literal> allows you to see the contents of + the journal. For example, <screen> $ journalctl -b </screen> - -shows all journal entries since the last reboot. (The output of -<command>journalctl</command> is piped into <command>less</command> by -default.) 
You can use various options and match operators to restrict -output to messages of interest. For instance, to get all messages -from PostgreSQL: - + shows all journal entries since the last reboot. (The output of + <command>journalctl</command> is piped into <command>less</command> by + default.) You can use various options and match operators to restrict output + to messages of interest. For instance, to get all messages from PostgreSQL: <screen> $ journalctl -u postgresql.service -- Logs begin at Mon, 2013-01-07 13:28:01 CET, end at Tue, 2013-01-08 01:09:57 CET. -- @@ -32,21 +26,18 @@ Jan 07 15:44:14 hagbard postgres[2681]: [2-1] LOG: database system is shut down Jan 07 15:45:10 hagbard postgres[2532]: [1-1] LOG: database system was shut down at 2013-01-07 15:44:14 CET Jan 07 15:45:13 hagbard postgres[2500]: [1-1] LOG: database system is ready to accept connections </screen> - -Or to get all messages since the last reboot that have at least a -“critical” severity level: - + Or to get all messages since the last reboot that have at least a + “critical” severity level: <screen> $ journalctl -b -p crit Dec 17 21:08:06 mandark sudo[3673]: pam_unix(sudo:auth): auth could not identify password for [alice] Dec 29 01:30:22 mandark kernel[6131]: [1053513.909444] CPU6: Core temperature above threshold, cpu clock throttled (total events = 1) </screen> - -</para> - -<para>The system journal is readable by root and by users in the -<literal>wheel</literal> and <literal>systemd-journal</literal> -groups. All users have a private journal that can be read using -<command>journalctl</command>.</para> - -</chapter> \ No newline at end of file + </para> + <para> + The system journal is readable by root and by users in the + <literal>wheel</literal> and <literal>systemd-journal</literal> groups. All + users have a private journal that can be read using + <command>journalctl</command>. 
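As a companion to the `-p crit` example above: `journalctl -p` accepts either syslog priority names or their numeric levels. The mapping is fixed by syslog convention; the snippet below simply prints that table (it does not query journalctl), so `-p crit` is equivalent to `-p 2`.

```shell
# Standard syslog priority levels, in order from most to least severe.
# journalctl -p NAME shows messages at that level or more severe.
i=0
for p in emerg alert crit err warning notice info debug; do
  echo "$p=$i"
  i=$((i + 1))
done
```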
+ </para> +</chapter> diff --git a/nixos/doc/manual/administration/maintenance-mode.xml b/nixos/doc/manual/administration/maintenance-mode.xml index 17a1609e5579..71e3f9ea665d 100644 --- a/nixos/doc/manual/administration/maintenance-mode.xml +++ b/nixos/doc/manual/administration/maintenance-mode.xml @@ -3,16 +3,14 @@ xmlns:xi="http://www.w3.org/2001/XInclude" version="5.0" xml:id="sec-maintenance-mode"> + <title>Maintenance Mode</title> -<title>Maintenance Mode</title> - -<para>You can enter rescue mode by running: - + <para> + You can enter rescue mode by running: <screen> # systemctl rescue</screen> - -This will eventually give you a single-user root shell. Systemd will -stop (almost) all system services. To get out of maintenance mode, -just exit from the rescue shell.</para> - + This will eventually give you a single-user root shell. Systemd will stop + (almost) all system services. To get out of maintenance mode, just exit from + the rescue shell. + </para> </section> diff --git a/nixos/doc/manual/administration/network-problems.xml b/nixos/doc/manual/administration/network-problems.xml index 91f9eb4e22c6..570f58358845 100644 --- a/nixos/doc/manual/administration/network-problems.xml +++ b/nixos/doc/manual/administration/network-problems.xml @@ -3,31 +3,25 @@ xmlns:xi="http://www.w3.org/2001/XInclude" version="5.0" xml:id="sec-nix-network-issues"> + <title>Network Problems</title> -<title>Network Problems</title> - -<para>Nix uses a so-called <emphasis>binary cache</emphasis> to -optimise building a package from source into downloading it as a -pre-built binary. That is, whenever a command like -<command>nixos-rebuild</command> needs a path in the Nix store, Nix -will try to download that path from the Internet rather than build it -from source. The default binary cache is -<uri>https://cache.nixos.org/</uri>. If this cache is unreachable, -Nix operations may take a long time due to HTTP connection timeouts. 
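One quick way to tell whether cache timeouts are the problem is to probe the cache directly. Binary caches expose a small `nix-cache-info` file at their root; `check_cache` below is a hypothetical helper (not a Nix tool), and it assumes `curl` is available — it reports either outcome rather than failing.

```shell
# Probe a binary cache with a short timeout. check_cache is a hypothetical
# convenience function; binary caches serve /nix-cache-info at their root.
check_cache() {
  if curl -fsS --max-time 5 "$1/nix-cache-info" >/dev/null 2>&1; then
    echo "reachable: $1"
  else
    echo "unreachable: $1"
  fi
}

check_cache https://cache.nixos.org
```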
-You can disable the use of the binary cache by adding <option>--option -use-binary-caches false</option>, e.g. - + <para> + Nix uses a so-called <emphasis>binary cache</emphasis> to optimise building a + package from source into downloading it as a pre-built binary. That is, + whenever a command like <command>nixos-rebuild</command> needs a path in the + Nix store, Nix will try to download that path from the Internet rather than + build it from source. The default binary cache is + <uri>https://cache.nixos.org/</uri>. If this cache is unreachable, Nix + operations may take a long time due to HTTP connection timeouts. You can + disable the use of the binary cache by adding <option>--option + use-binary-caches false</option>, e.g. <screen> # nixos-rebuild switch --option use-binary-caches false </screen> - -If you have an alternative binary cache at your disposal, you can use -it instead: - + If you have an alternative binary cache at your disposal, you can use it + instead: <screen> # nixos-rebuild switch --option binary-caches http://my-cache.example.org/ </screen> - -</para> - + </para> </section> diff --git a/nixos/doc/manual/administration/rebooting.xml b/nixos/doc/manual/administration/rebooting.xml index 23f3a3219c6a..a5abd6f02588 100644 --- a/nixos/doc/manual/administration/rebooting.xml +++ b/nixos/doc/manual/administration/rebooting.xml @@ -3,42 +3,33 @@ xmlns:xi="http://www.w3.org/2001/XInclude" version="5.0" xml:id="sec-rebooting"> - -<title>Rebooting and Shutting Down</title> - -<para>The system can be shut down (and automatically powered off) by -doing: - + <title>Rebooting and Shutting Down</title> + <para> + The system can be shut down (and automatically powered off) by doing: <screen> # shutdown </screen> - -This is equivalent to running <command>systemctl -poweroff</command>.</para> - -<para>To reboot the system, run - + This is equivalent to running <command>systemctl poweroff</command>. 
+ </para> + <para> + To reboot the system, run <screen> # reboot </screen> - -which is equivalent to <command>systemctl reboot</command>. -Alternatively, you can quickly reboot the system using -<literal>kexec</literal>, which bypasses the BIOS by directly loading -the new kernel into memory: - + which is equivalent to <command>systemctl reboot</command>. Alternatively, + you can quickly reboot the system using <literal>kexec</literal>, which + bypasses the BIOS by directly loading the new kernel into memory: <screen> # systemctl kexec </screen> - -</para> - -<para>The machine can be suspended to RAM (if supported) using -<command>systemctl suspend</command>, and suspended to disk using -<command>systemctl hibernate</command>.</para> - -<para>These commands can be run by any user who is logged in locally, -i.e. on a virtual console or in X11; otherwise, the user is asked for -authentication.</para> - + </para> + <para> + The machine can be suspended to RAM (if supported) using <command>systemctl + suspend</command>, and suspended to disk using <command>systemctl + hibernate</command>. + </para> + <para> + These commands can be run by any user who is logged in locally, i.e. on a + virtual console or in X11; otherwise, the user is asked for authentication. + </para> </chapter> diff --git a/nixos/doc/manual/administration/rollback.xml b/nixos/doc/manual/administration/rollback.xml index ae621f33de2c..07c6acaa469c 100644 --- a/nixos/doc/manual/administration/rollback.xml +++ b/nixos/doc/manual/administration/rollback.xml @@ -3,46 +3,39 @@ xmlns:xi="http://www.w3.org/2001/XInclude" version="5.0" xml:id="sec-rollback"> - -<title>Rolling Back Configuration Changes</title> - -<para>After running <command>nixos-rebuild</command> to switch to a -new configuration, you may find that the new configuration doesn’t -work very well. 
In that case, there are several ways to return to a -previous configuration.</para> - -<para>First, the GRUB boot manager allows you to boot into any -previous configuration that hasn’t been garbage-collected. These -configurations can be found under the GRUB submenu “NixOS - All -configurations”. This is especially useful if the new configuration -fails to boot. After the system has booted, you can make the selected -configuration the default for subsequent boots: - + <title>Rolling Back Configuration Changes</title> + + <para> + After running <command>nixos-rebuild</command> to switch to a new + configuration, you may find that the new configuration doesn’t work very + well. In that case, there are several ways to return to a previous + configuration. + </para> + + <para> + First, the GRUB boot manager allows you to boot into any previous + configuration that hasn’t been garbage-collected. These configurations can + be found under the GRUB submenu “NixOS - All configurations”. This is + especially useful if the new configuration fails to boot. After the system + has booted, you can make the selected configuration the default for + subsequent boots: <screen> # /run/current-system/bin/switch-to-configuration boot</screen> + </para> -</para> - -<para>Second, you can switch to the previous configuration in a running -system: - + <para> + Second, you can switch to the previous configuration in a running system: <screen> # nixos-rebuild switch --rollback</screen> - -This is equivalent to running: - + This is equivalent to running: <screen> # /nix/var/nix/profiles/system-<replaceable>N</replaceable>-link/bin/switch-to-configuration switch</screen> - -where <replaceable>N</replaceable> is the number of the NixOS system -configuration. To get a list of the available configurations, do: - + where <replaceable>N</replaceable> is the number of the NixOS system + configuration. 
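To make the relationship between <replaceable>N</replaceable> and the profile links concrete, here is a sketch that computes the current (highest) generation number and the rollback target. It simulates the profile directory with symlinks in a temporary directory — an assumption so it runs on non-NixOS machines; on a real system you would point `profiles` at `/nix/var/nix/profiles`.

```shell
# Simulated profile directory; on NixOS use profiles=/nix/var/nix/profiles.
profiles=$(mktemp -d)
ln -s /nix/store/aaaa-nixos "$profiles/system-267-link"
ln -s /nix/store/bbbb-nixos "$profiles/system-268-link"

# The current generation is the highest N among system-N-link entries;
# a rollback switches to the generation just below it.
current=$(ls "$profiles" | sed -n 's/^system-\([0-9]*\)-link$/\1/p' | sort -n | tail -1)
echo "current generation: $current"
echo "rollback target: system-$((current - 1))-link"

rm -r "$profiles"
```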
To get a list of the available configurations, do: <screen> $ ls -l /nix/var/nix/profiles/system-*-link <replaceable>...</replaceable> lrwxrwxrwx 1 root root 78 Aug 12 13:54 /nix/var/nix/profiles/system-268-link -> /nix/store/202b...-nixos-13.07pre4932_5a676e4-4be1055 </screen> - -</para> - + </para> </section> diff --git a/nixos/doc/manual/administration/running.xml b/nixos/doc/manual/administration/running.xml index 9091511ed527..786dd5e2390d 100644 --- a/nixos/doc/manual/administration/running.xml +++ b/nixos/doc/manual/administration/running.xml @@ -3,22 +3,19 @@ xmlns:xi="http://www.w3.org/2001/XInclude" version="5.0" xml:id="ch-running"> - -<title>Administration</title> - -<partintro> -<para>This chapter describes various aspects of managing a running -NixOS system, such as how to use the <command>systemd</command> -service manager.</para> -</partintro> - -<xi:include href="service-mgmt.xml" /> -<xi:include href="rebooting.xml" /> -<xi:include href="user-sessions.xml" /> -<xi:include href="control-groups.xml" /> -<xi:include href="logging.xml" /> -<xi:include href="cleaning-store.xml" /> -<xi:include href="containers.xml" /> -<xi:include href="troubleshooting.xml" /> - + <title>Administration</title> + <partintro> + <para> + This chapter describes various aspects of managing a running NixOS system, + such as how to use the <command>systemd</command> service manager. 
+ </para> + </partintro> + <xi:include href="service-mgmt.xml" /> + <xi:include href="rebooting.xml" /> + <xi:include href="user-sessions.xml" /> + <xi:include href="control-groups.xml" /> + <xi:include href="logging.xml" /> + <xi:include href="cleaning-store.xml" /> + <xi:include href="containers.xml" /> + <xi:include href="troubleshooting.xml" /> </part> diff --git a/nixos/doc/manual/administration/service-mgmt.xml b/nixos/doc/manual/administration/service-mgmt.xml index 1627c7a2fdeb..0c2085c81559 100644 --- a/nixos/doc/manual/administration/service-mgmt.xml +++ b/nixos/doc/manual/administration/service-mgmt.xml @@ -3,26 +3,23 @@ xmlns:xi="http://www.w3.org/2001/XInclude" version="5.0" xml:id="sec-systemctl"> - -<title>Service Management</title> - -<para>In NixOS, all system services are started and monitored using -the systemd program. Systemd is the “init” process of the system -(i.e. PID 1), the parent of all other processes. It manages a set of -so-called “units”, which can be things like system services -(programs), but also mount points, swap files, devices, targets -(groups of units) and more. Units can have complex dependencies; for -instance, one unit can require that another unit must be successfully -started before the first unit can be started. When the system boots, -it starts a unit named <literal>default.target</literal>; the -dependencies of this unit cause all system services to be started, -file systems to be mounted, swap files to be activated, and so -on.</para> - -<para>The command <command>systemctl</command> is the main way to -interact with <command>systemd</command>. Without any arguments, it -shows the status of active units: - + <title>Service Management</title> + <para> + In NixOS, all system services are started and monitored using the systemd + program. Systemd is the “init” process of the system (i.e. PID 1), the + parent of all other processes. 
It manages a set of so-called “units”, + which can be things like system services (programs), but also mount points, + swap files, devices, targets (groups of units) and more. Units can have + complex dependencies; for instance, one unit can require that another unit + must be successfully started before the first unit can be started. When the + system boots, it starts a unit named <literal>default.target</literal>; the + dependencies of this unit cause all system services to be started, file + systems to be mounted, swap files to be activated, and so on. + </para> + <para> + The command <command>systemctl</command> is the main way to interact with + <command>systemd</command>. Without any arguments, it shows the status of + active units: <screen> $ systemctl -.mount loaded active mounted / @@ -31,12 +28,10 @@ sshd.service loaded active running SSH Daemon graphical.target loaded active active Graphical Interface <replaceable>...</replaceable> </screen> - -</para> - -<para>You can ask for detailed status information about a unit, for -instance, the PostgreSQL database service: - + </para> + <para> + You can ask for detailed status information about a unit, for instance, the + PostgreSQL database service: <screen> $ systemctl status postgresql.service postgresql.service - PostgreSQL Server @@ -56,28 +51,22 @@ Jan 07 15:55:57 hagbard postgres[2390]: [1-1] LOG: database system is ready to Jan 07 15:55:57 hagbard postgres[2420]: [1-1] LOG: autovacuum launcher started Jan 07 15:55:57 hagbard systemd[1]: Started PostgreSQL Server. </screen> - -Note that this shows the status of the unit (active and running), all -the processes belonging to the service, as well as the most recent log -messages from the service. - -</para> - -<para>Units can be stopped, started or restarted: - + Note that this shows the status of the unit (active and running), all the + processes belonging to the service, as well as the most recent log messages + from the service. 
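The boot behaviour described above — starting `default.target` pulls in everything it depends on — can be sketched as a recursive walk over a dependency table. The unit names and the `wants` table below are illustrative stand-ins, not read from systemd.

```shell
# Toy dependency table: which units a given unit wants started first.
wants() {
  case $1 in
    default.target)    echo "multi-user.target" ;;
    multi-user.target) echo "sshd.service fs.mount" ;;
    *)                 echo "" ;;
  esac
}

# Starting a unit first starts its dependencies, depth-first.
start() {
  for dep in $(wants "$1"); do start "$dep"; done
  echo "started $1"
}

start default.target
```

Dependencies print before their dependents, with `started default.target` last — mirroring how the real dependency graph orders startup.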
+ </para> + <para> + Units can be stopped, started or restarted: <screen> # systemctl stop postgresql.service # systemctl start postgresql.service # systemctl restart postgresql.service </screen> - -These operations are synchronous: they wait until the service has -finished starting or stopping (or has failed). Starting a unit will -cause the dependencies of that unit to be started as well (if -necessary).</para> - + These operations are synchronous: they wait until the service has finished + starting or stopping (or has failed). Starting a unit will cause the + dependencies of that unit to be started as well (if necessary). + </para> <!-- - cgroups: each service and user session is a cgroup - cgroup resource management --> - </chapter> diff --git a/nixos/doc/manual/administration/store-corruption.xml b/nixos/doc/manual/administration/store-corruption.xml index 9f567042b727..a4ca3b651e20 100644 --- a/nixos/doc/manual/administration/store-corruption.xml +++ b/nixos/doc/manual/administration/store-corruption.xml @@ -3,35 +3,34 @@ xmlns:xi="http://www.w3.org/2001/XInclude" version="5.0" xml:id="sec-nix-store-corruption"> - -<title>Nix Store Corruption</title> - -<para>After a system crash, it’s possible for files in the Nix store -to become corrupted. (For instance, the Ext4 file system has the -tendency to replace un-synced files with zero bytes.) NixOS tries -hard to prevent this from happening: it performs a -<command>sync</command> before switching to a new configuration, and -Nix’s database is fully transactional. If corruption still occurs, -you may be able to fix it automatically.</para> - -<para>If the corruption is in a path in the closure of the NixOS -system configuration, you can fix it by doing - + <title>Nix Store Corruption</title> + + <para> + After a system crash, it’s possible for files in the Nix store to become + corrupted. (For instance, the Ext4 file system has the tendency to replace + un-synced files with zero bytes.) 
NixOS tries hard to prevent this from + happening: it performs a <command>sync</command> before switching to a new + configuration, and Nix’s database is fully transactional. If corruption + still occurs, you may be able to fix it automatically. + </para> + + <para> + If the corruption is in a path in the closure of the NixOS system + configuration, you can fix it by doing <screen> # nixos-rebuild switch --repair </screen> + This will cause Nix to check every path in the closure, and if its + cryptographic hash differs from the hash recorded in Nix’s database, the + path is rebuilt or redownloaded. + </para> -This will cause Nix to check every path in the closure, and if its -cryptographic hash differs from the hash recorded in Nix’s database, -the path is rebuilt or redownloaded.</para> - -<para>You can also scan the entire Nix store for corrupt paths: - + <para> + You can also scan the entire Nix store for corrupt paths: <screen> # nix-store --verify --check-contents --repair </screen> - -Any corrupt paths will be redownloaded if they’re available in a -binary cache; otherwise, they cannot be repaired.</para> - + Any corrupt paths will be redownloaded if they’re available in a binary + cache; otherwise, they cannot be repaired. 
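The check that `--check-contents` performs boils down to re-hashing each store path and comparing against the hash recorded in the database. The snippet below simulates that on a temporary file with `sha256sum`; Nix's actual database and hashing scheme differ, so this only illustrates the principle.

```shell
# Simulated integrity check: record a hash, corrupt the file, re-hash.
f=$(mktemp)
echo "store path contents" > "$f"
recorded=$(sha256sum "$f" | cut -d' ' -f1)

echo "zeroed-out garbage" > "$f"   # stand-in for ext4 zeroing an un-synced file

current=$(sha256sum "$f" | cut -d' ' -f1)
if [ "$recorded" != "$current" ]; then
  echo "path corrupt, would repair: $f"
fi
rm -f "$f"
```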
+ </para> </section> diff --git a/nixos/doc/manual/administration/troubleshooting.xml b/nixos/doc/manual/administration/troubleshooting.xml index 351fb1883310..6496e7bde387 100644 --- a/nixos/doc/manual/administration/troubleshooting.xml +++ b/nixos/doc/manual/administration/troubleshooting.xml @@ -3,16 +3,14 @@ xmlns:xi="http://www.w3.org/2001/XInclude" version="5.0" xml:id="ch-troubleshooting"> - -<title>Troubleshooting</title> - -<para>This chapter describes solutions to common problems you might -encounter when you manage your NixOS system.</para> - -<xi:include href="boot-problems.xml" /> -<xi:include href="maintenance-mode.xml" /> -<xi:include href="rollback.xml" /> -<xi:include href="store-corruption.xml" /> -<xi:include href="network-problems.xml" /> - + <title>Troubleshooting</title> + <para> + This chapter describes solutions to common problems you might encounter when + you manage your NixOS system. + </para> + <xi:include href="boot-problems.xml" /> + <xi:include href="maintenance-mode.xml" /> + <xi:include href="rollback.xml" /> + <xi:include href="store-corruption.xml" /> + <xi:include href="network-problems.xml" /> </chapter> diff --git a/nixos/doc/manual/administration/user-sessions.xml b/nixos/doc/manual/administration/user-sessions.xml index 0a7eb8cd123c..1d95cfb22b69 100644 --- a/nixos/doc/manual/administration/user-sessions.xml +++ b/nixos/doc/manual/administration/user-sessions.xml @@ -3,14 +3,12 @@ xmlns:xi="http://www.w3.org/2001/XInclude" version="5.0" xml:id="sec-user-sessions"> - -<title>User Sessions</title> - -<para>Systemd keeps track of all users who are logged into the system -(e.g. on a virtual console or remotely via SSH). The command -<command>loginctl</command> allows querying and manipulating user -sessions. For instance, to list all user sessions: - + <title>User Sessions</title> + <para> + Systemd keeps track of all users who are logged into the system (e.g. on a + virtual console or remotely via SSH). 
The command <command>loginctl</command> + allows querying and manipulating user sessions. For instance, to list all + user sessions: <screen> $ loginctl SESSION UID USER SEAT @@ -18,12 +16,10 @@ $ loginctl c3 0 root seat0 c4 500 alice </screen> - -This shows that two users are logged in locally, while another is -logged in remotely. (“Seats” are essentially the combinations of -displays and input devices attached to the system; usually, there is -only one seat.) To get information about a session: - + This shows that two users are logged in locally, while another is logged in + remotely. (“Seats” are essentially the combinations of displays and input + devices attached to the system; usually, there is only one seat.) To get + information about a session: <screen> $ loginctl session-status c3 c3 - root (0) @@ -38,16 +34,12 @@ c3 - root (0) ├─10339 -bash └─10355 w3m nixos.org </screen> - -This shows that the user is logged in on virtual console 3. It also -lists the processes belonging to this session. Since systemd keeps -track of this, you can terminate a session in a way that ensures that -all the session’s processes are gone: - + This shows that the user is logged in on virtual console 3. It also lists the + processes belonging to this session. Since systemd keeps track of this, you + can terminate a session in a way that ensures that all the session’s + processes are gone: <screen> # loginctl terminate-session c3 </screen> - -</para> - + </para> </chapter> |
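Because the session table shown by <command>loginctl</command> is plain text, distinguishing local (seat-attached) from remote (seatless) sessions is easy to script. The sample below hard-codes output resembling the listing above rather than calling loginctl, so it runs anywhere; the session IDs and users are illustrative.

```shell
# Sample loginctl output (hard-coded for illustration): sessions with a
# SEAT column entry are local; seatless ones are remote.
sessions='SESSION  UID USER  SEAT
c1       500 alice seat0
c3         0 root  seat0
c4       500 alice'

echo "$sessions" | awk 'NR > 1 {
  if (NF >= 4) local++; else remote++
} END { printf "local=%d remote=%d\n", local, remote }'
```

This prints `local=2 remote=1` for the sample data, matching the manual's reading of its own example listing.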