Diffstat (limited to 'nixos/doc/manual/administration')
-rw-r--r--  nixos/doc/manual/administration/boot-problems.xml           |  65
-rw-r--r--  nixos/doc/manual/administration/cleaning-store.xml          |  62
-rw-r--r--  nixos/doc/manual/administration/container-networking.xml    |  50
-rw-r--r--  nixos/doc/manual/administration/containers.xml              |  34
-rw-r--r--  nixos/doc/manual/administration/control-groups.xml          |  75
-rw-r--r--  nixos/doc/manual/administration/declarative-containers.xml  |  52
-rw-r--r--  nixos/doc/manual/administration/imperative-containers.xml   | 124
-rw-r--r--  nixos/doc/manual/administration/logging.xml                 |  52
-rw-r--r--  nixos/doc/manual/administration/maintenance-mode.xml        |  18
-rw-r--r--  nixos/doc/manual/administration/network-problems.xml        |  33
-rw-r--r--  nixos/doc/manual/administration/rebooting.xml               |  44
-rw-r--r--  nixos/doc/manual/administration/rollback.xml                |  48
-rw-r--r--  nixos/doc/manual/administration/running.xml                 |  24
-rw-r--r--  nixos/doc/manual/administration/service-mgmt.xml            |  83
-rw-r--r--  nixos/doc/manual/administration/store-corruption.xml        |  37
-rw-r--r--  nixos/doc/manual/administration/troubleshooting.xml         |  18
-rw-r--r--  nixos/doc/manual/administration/user-sessions.xml           |  53
17 files changed, 872 insertions, 0 deletions
diff --git a/nixos/doc/manual/administration/boot-problems.xml b/nixos/doc/manual/administration/boot-problems.xml
new file mode 100644
index 000000000000..be6ff3aac0fe
--- /dev/null
+++ b/nixos/doc/manual/administration/boot-problems.xml
@@ -0,0 +1,65 @@
+<section xmlns="http://docbook.org/ns/docbook"
+        xmlns:xlink="http://www.w3.org/1999/xlink"
+        xmlns:xi="http://www.w3.org/2001/XInclude"
+        version="5.0"
+        xml:id="sec-boot-problems">
+
+<title>Boot Problems</title>
+
+<para>If NixOS fails to boot, there are a number of kernel command
+line parameters that may help you to identify or fix the issue.  You
+can add these parameters in the GRUB boot menu by pressing “e” to
+modify the selected boot entry and editing the line starting with
+<literal>linux</literal>.  The following are some useful kernel command
+line parameters that are recognised by the NixOS boot scripts or by
+systemd:
+
+<variablelist>
+
+  <varlistentry><term><literal>boot.shell_on_fail</literal></term>
+    <listitem><para>Start a root shell if something goes wrong in
+    stage 1 of the boot process (the initial ramdisk).  This is
+    disabled by default because there is no authentication for the
+    root shell.</para></listitem>
+  </varlistentry>
+
+  <varlistentry><term><literal>boot.debug1</literal></term>
+    <listitem><para>Start an interactive shell in stage 1 before
+    anything useful has been done.  That is, no modules have been
+    loaded and no file systems have been mounted, except for
+    <filename>/proc</filename> and
+    <filename>/sys</filename>.</para></listitem>
+  </varlistentry>
+
+  <varlistentry><term><literal>boot.trace</literal></term>
+    <listitem><para>Print every shell command executed by the stage 1
+    and 2 boot scripts.</para></listitem>
+  </varlistentry>
+
+  <varlistentry><term><literal>single</literal></term>
+    <listitem><para>Boot into rescue mode (a.k.a. single user mode).
+    This will cause systemd to start nothing but the unit
+    <literal>rescue.target</literal>, which runs
+    <command>sulogin</command> to prompt for the root password and
+    start a root login shell.  Exiting the shell causes the system to
+    continue with the normal boot process.</para></listitem>
+  </varlistentry>
+
+  <varlistentry><term><literal>systemd.log_level=debug systemd.log_target=console</literal></term>
+    <listitem><para>Make systemd very verbose and send log messages to
+    the console instead of the journal.</para></listitem>
+  </varlistentry>
+
+</variablelist>
+
+For more parameters recognised by systemd, see
+<citerefentry><refentrytitle>systemd</refentrytitle><manvolnum>1</manvolnum></citerefentry>.</para>
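+
+<para>For example, to get a rescue shell when stage 1 fails, you would
+append <literal>boot.shell_on_fail</literal> to the end of the
+<literal>linux</literal> line.  A rough sketch (the store paths are
+placeholders and the existing parameters will differ on your system):
+
+<screen>
+linux /nix/store/<replaceable>...</replaceable>-linux/bzImage init=/nix/store/<replaceable>...</replaceable>-nixos/init <replaceable>existing parameters</replaceable> boot.shell_on_fail
+</screen>
+
+</para>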
+
+<para>If no login prompts or X11 login screens appear (e.g. due to
+hanging dependencies), you can press Alt+ArrowUp.  If you’re lucky,
+this will start rescue mode (described above).  (Also note that since
+most units have a 90-second timeout before systemd gives up on them,
+the <command>agetty</command> login prompts should appear eventually
+unless something is very wrong.)</para>
+
+</section>
\ No newline at end of file
diff --git a/nixos/doc/manual/administration/cleaning-store.xml b/nixos/doc/manual/administration/cleaning-store.xml
new file mode 100644
index 000000000000..41dc65795b68
--- /dev/null
+++ b/nixos/doc/manual/administration/cleaning-store.xml
@@ -0,0 +1,62 @@
+<chapter xmlns="http://docbook.org/ns/docbook"
+        xmlns:xlink="http://www.w3.org/1999/xlink"
+        xmlns:xi="http://www.w3.org/2001/XInclude"
+        version="5.0"
+        xml:id="sec-nix-gc">
+
+<title>Cleaning the Nix Store</title>
+
+<para>Nix has a purely functional model, meaning that packages are
+never upgraded in place.  Instead new versions of packages end up in a
+different location in the Nix store (<filename>/nix/store</filename>).
+You should periodically run Nix’s <emphasis>garbage
+collector</emphasis> to remove old, unreferenced packages.  This is
+easy:
+
+<screen>
+$ nix-collect-garbage
+</screen>
+
+Alternatively, you can use a systemd unit that does the same in the
+background:
+
+<screen>
+$ systemctl start nix-gc.service
+</screen>
+
+You can tell NixOS in <filename>configuration.nix</filename> to run
+this unit automatically at certain points in time, for instance, every
+night at 03:15:
+
+<programlisting>
+nix.gc.automatic = true;
+nix.gc.dates = "03:15";
+</programlisting>
+
+</para>
+
+<para>The commands above do not remove garbage collector roots, such
+as old system configurations.  Thus they do not remove the ability to
+roll back to previous configurations.  The following command deletes
+old roots, removing the ability to roll back to them:
+<screen>
+$ nix-collect-garbage -d
+</screen>
+You can also do this for specific profiles, e.g.
+<screen>
+$ nix-env -p /nix/var/nix/profiles/per-user/eelco/profile --delete-generations old
+</screen>
+Note that NixOS system configurations are stored in the profile
+<filename>/nix/var/nix/profiles/system</filename>.</para>
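+
+<para>If you want to keep the ability to roll back to recent
+configurations while pruning older ones,
+<command>nix-collect-garbage</command> also accepts an age-based option
+(a sketch; check that your Nix version supports it):
+
+<screen>
+$ nix-collect-garbage --delete-older-than 30d
+</screen>
+
+</para>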
+
+<para>Another way to reclaim disk space (often as much as 40% of the
+size of the Nix store) is to run Nix’s store optimiser, which seeks
+out identical files in the store and replaces them with hard links to
+a single copy.
+<screen>
+$ nix-store --optimise
+</screen>
+Since this command needs to read the entire Nix store, it can take
+quite a while to finish.</para>
+
+</chapter>
\ No newline at end of file
diff --git a/nixos/doc/manual/administration/container-networking.xml b/nixos/doc/manual/administration/container-networking.xml
new file mode 100644
index 000000000000..adea3e69840d
--- /dev/null
+++ b/nixos/doc/manual/administration/container-networking.xml
@@ -0,0 +1,50 @@
+<section  xmlns="http://docbook.org/ns/docbook"
+          xmlns:xlink="http://www.w3.org/1999/xlink"
+          xmlns:xi="http://www.w3.org/2001/XInclude"
+          version="5.0"
+          xml:id="sec-container-networking">
+
+
+<title>Container Networking</title>
+
+<para>When you create a container using <literal>nixos-container
+create</literal>, it gets its own private IPv4 address in the range
+<literal>10.233.0.0/16</literal>. You can get the container’s IPv4
+address as follows:
+
+<screen>
+$ nixos-container show-ip foo
+10.233.4.2
+
+$ ping -c1 10.233.4.2
+64 bytes from 10.233.4.2: icmp_seq=1 ttl=64 time=0.106 ms
+</screen>
+
+</para>
+
+<para>Networking is implemented using a pair of virtual Ethernet
+devices. The network interface in the container is called
+<literal>eth0</literal>, while the matching interface in the host is
+called <literal>ve-<replaceable>container-name</replaceable></literal>
+(e.g., <literal>ve-foo</literal>).  The container has its own network
+namespace and the <literal>CAP_NET_ADMIN</literal> capability, so it
+can perform arbitrary network configuration such as setting up
+firewall rules, without affecting or having access to the host’s
+network.</para>
+
+<para>By default, containers cannot talk to the outside network. If
+you want that, you should set up Network Address Translation (NAT)
+rules on the host to rewrite container traffic to use your external
+IP address. This can be accomplished using the following configuration
+on the host:
+
+<programlisting>
+networking.nat.enable = true;
+networking.nat.internalInterfaces = ["ve-+"];
+networking.nat.externalInterface = "eth0";
+</programlisting>
+where <literal>eth0</literal> should be replaced with the desired
+external interface. Note that <literal>ve-+</literal> is a wildcard
+that matches all container interfaces.</para>
+
+</section>
\ No newline at end of file
diff --git a/nixos/doc/manual/administration/containers.xml b/nixos/doc/manual/administration/containers.xml
new file mode 100644
index 000000000000..4cd2c8ae5563
--- /dev/null
+++ b/nixos/doc/manual/administration/containers.xml
@@ -0,0 +1,34 @@
+<chapter xmlns="http://docbook.org/ns/docbook"
+        xmlns:xlink="http://www.w3.org/1999/xlink"
+        xmlns:xi="http://www.w3.org/2001/XInclude"
+        version="5.0"
+        xml:id="ch-containers">
+
+<title>Container Management</title>
+
+<para>NixOS allows you to easily run other NixOS instances as
+<emphasis>containers</emphasis>. Containers are a light-weight
+approach to virtualisation that runs software in the container at the
+same speed as in the host system. NixOS containers share the Nix store
+of the host, making container creation very efficient.</para>
+
+<warning><para>Currently, NixOS containers are not perfectly isolated
+from the host system. This means that a user with root access to the
+container can do things that affect the host. So you should not give
+container root access to untrusted users.</para></warning>
+
+<para>NixOS containers can be created in two ways: imperatively, using
+the command <command>nixos-container</command>, and declaratively, by
+specifying them in your <filename>configuration.nix</filename>. The
+declarative approach implies that containers get upgraded along with
+your host system when you run <command>nixos-rebuild</command>, which
+is often not what you want. By contrast, in the imperative approach,
+containers are configured and updated independently from the host
+system.</para>
+
+<xi:include href="imperative-containers.xml" />
+<xi:include href="declarative-containers.xml" />
+<xi:include href="container-networking.xml" />
+
+</chapter>
+
diff --git a/nixos/doc/manual/administration/control-groups.xml b/nixos/doc/manual/administration/control-groups.xml
new file mode 100644
index 000000000000..86c684cdfe5d
--- /dev/null
+++ b/nixos/doc/manual/administration/control-groups.xml
@@ -0,0 +1,75 @@
+<chapter xmlns="http://docbook.org/ns/docbook"
+        xmlns:xlink="http://www.w3.org/1999/xlink"
+        xmlns:xi="http://www.w3.org/2001/XInclude"
+        version="5.0"
+        xml:id="sec-cgroups">
+
+<title>Control Groups</title>
+
+<para>To keep track of the processes in a running system, systemd uses
+<emphasis>control groups</emphasis> (cgroups).  A control group is a
+set of processes used to allocate resources such as CPU, memory or I/O
+bandwidth.  There can be multiple control group hierarchies, allowing
+each kind of resource to be managed independently.</para>
+
+<para>The command <command>systemd-cgls</command> lists all control
+groups in the <literal>systemd</literal> hierarchy, which is what
+systemd uses to keep track of the processes belonging to each service
+or user session:
+
+<screen>
+$ systemd-cgls
+├─user
+│ └─eelco
+│   └─c1
+│     ├─ 2567 -:0
+│     ├─ 2682 kdeinit4: kdeinit4 Running...
+│     ├─ <replaceable>...</replaceable>
+│     └─10851 sh -c less -R
+└─system
+  ├─httpd.service
+  │ ├─2444 httpd -f /nix/store/3pyacby5cpr55a03qwbnndizpciwq161-httpd.conf -DNO_DETACH
+  │ └─<replaceable>...</replaceable>
+  ├─dhcpcd.service
+  │ └─2376 dhcpcd --config /nix/store/f8dif8dsi2yaa70n03xir8r653776ka6-dhcpcd.conf
+  └─ <replaceable>...</replaceable>
+</screen>
+
+Similarly, <command>systemd-cgls cpu</command> shows the cgroups in
+the CPU hierarchy, which allows per-cgroup CPU scheduling priorities.
+By default, every systemd service gets its own CPU cgroup, while all
+user sessions are in the top-level CPU cgroup.  This ensures, for
+instance, that a thousand run-away processes in the
+<literal>httpd.service</literal> cgroup cannot starve the CPU for one
+process in the <literal>postgresql.service</literal> cgroup.  (By
+contrast, if they were in the same cgroup, then the PostgreSQL process
+would get 1/1001 of the cgroup’s CPU time.)  You can limit a service’s
+CPU share in <filename>configuration.nix</filename>:
+
+<programlisting>
+systemd.services.httpd.serviceConfig.CPUShares = 512;
+</programlisting>
+
+By default, every cgroup has 1024 CPU shares, so this will halve the
+CPU allocation of the <literal>httpd.service</literal> cgroup.</para>
+
+<para>There also is a <literal>memory</literal> hierarchy that
+controls memory allocation limits; by default, all processes are in
+the top-level cgroup, so any service or session can exhaust all
+available memory.  Per-cgroup memory limits can be specified in
+<filename>configuration.nix</filename>; for instance, to limit
+<literal>httpd.service</literal> to 512 MiB of RAM (excluding swap)
+and 640 MiB of RAM (including swap):
+
+<programlisting>
+systemd.services.httpd.serviceConfig.MemoryLimit = "512M";
+systemd.services.httpd.serviceConfig.ControlGroupAttribute = [ "memory.memsw.limit_in_bytes 640M" ];
+</programlisting>
+
+</para>
+
+<para>The command <command>systemd-cgtop</command> shows a
+continuously updated list of all cgroups with their CPU and memory
+usage.</para>
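+
+<para>If you just want a one-off, non-interactive snapshot (for
+instance to paste into a bug report), you can limit the number of
+iterations:
+
+<screen>
+$ systemd-cgtop -n 1
+</screen>
+
+</para>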
+
+</chapter>
\ No newline at end of file
diff --git a/nixos/doc/manual/administration/declarative-containers.xml b/nixos/doc/manual/administration/declarative-containers.xml
new file mode 100644
index 000000000000..177ebdd8db17
--- /dev/null
+++ b/nixos/doc/manual/administration/declarative-containers.xml
@@ -0,0 +1,52 @@
+<section  xmlns="http://docbook.org/ns/docbook"
+          xmlns:xlink="http://www.w3.org/1999/xlink"
+          xmlns:xi="http://www.w3.org/2001/XInclude"
+          version="5.0"
+          xml:id="sec-declarative-containers">
+
+<title>Declarative Container Specification</title>
+
+<para>You can also specify containers and their configuration in the
+host’s <filename>configuration.nix</filename>.  For example, the
+following specifies that there shall be a container named
+<literal>database</literal> running PostgreSQL:
+
+<programlisting>
+containers.database =
+  { config =
+      { config, pkgs, ... }:
+      { services.postgresql.enable = true;
+        services.postgresql.package = pkgs.postgresql92;
+      };
+  };
+</programlisting>
+
+If you run <literal>nixos-rebuild switch</literal>, the container will
+be built and started. If the container was already running, it will be
+updated in place, without rebooting.</para>
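+
+<para>Declarative containers can be inspected with the same
+<command>nixos-container</command> commands as imperative ones.  For
+instance, to get a root shell in the container defined above, the
+following should work:
+
+<screen>
+$ nixos-container root-login database
+</screen>
+
+</para>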
+
+<para>By default, declarative containers share the network namespace
+of the host, meaning that they can listen on (privileged)
+ports. However, they cannot change the network configuration. You can
+give a container its own network as follows:
+
+<programlisting>
+containers.database =
+  { privateNetwork = true;
+    hostAddress = "192.168.100.10";
+    localAddress = "192.168.100.11";
+  };
+</programlisting>
+
+This gives the container a private virtual Ethernet interface with IP
+address <literal>192.168.100.11</literal>, which is hooked up to a
+virtual Ethernet interface on the host with IP address
+<literal>192.168.100.10</literal>.  (See the next section for details
+on container networking.)</para>
+
+<para>To disable the container, just remove it from
+<filename>configuration.nix</filename> and run <literal>nixos-rebuild
+switch</literal>. Note that this will not delete the root directory of
+the container in <literal>/var/lib/containers</literal>.</para>
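+
+<para>If you also want to discard the container’s state, you can remove
+that directory by hand, e.g.
+
+<screen>
+$ rm -rf /var/lib/containers/database
+</screen>
+
+</para>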
+
+</section>
\ No newline at end of file
diff --git a/nixos/doc/manual/administration/imperative-containers.xml b/nixos/doc/manual/administration/imperative-containers.xml
new file mode 100644
index 000000000000..6131d4e04ea8
--- /dev/null
+++ b/nixos/doc/manual/administration/imperative-containers.xml
@@ -0,0 +1,124 @@
+<section  xmlns="http://docbook.org/ns/docbook"
+          xmlns:xlink="http://www.w3.org/1999/xlink"
+          xmlns:xi="http://www.w3.org/2001/XInclude"
+          version="5.0"
+          xml:id="sec-imperative-containers">
+
+<title>Imperative Container Management</title>
+
+<para>We’ll cover imperative container management using
+<command>nixos-container</command> first. You create a container with
+identifier <literal>foo</literal> as follows:
+
+<screen>
+$ nixos-container create foo
+</screen>
+
+This creates the container’s root directory in
+<filename>/var/lib/containers/foo</filename> and a small configuration
+file in <filename>/etc/containers/foo.conf</filename>. It also builds
+the container’s initial system configuration and stores it in
+<filename>/nix/var/nix/profiles/per-container/foo/system</filename>. You
+can modify the initial configuration of the container on the command
+line. For instance, to create a container that has
+<command>sshd</command> running, with the given public key for
+<literal>root</literal>:
+
+<screen>
+$ nixos-container create foo --config 'services.openssh.enable = true; \
+  users.extraUsers.root.openssh.authorizedKeys.keys = ["ssh-dss AAAAB3N…"];'
+</screen>
+
+</para>
+
+<para>Creating a container does not start it. To start the container,
+run:
+
+<screen>
+$ nixos-container start foo
+</screen>
+
+This command will return as soon as the container has booted and has
+reached <literal>multi-user.target</literal>. On the host, the
+container runs within a systemd unit called
+<literal>container@<replaceable>container-name</replaceable>.service</literal>.
+Thus, if something went wrong, you can get status info using
+<command>systemctl</command>:
+
+<screen>
+$ systemctl status container@foo
+</screen>
+
+</para>
+
+<para>If the container has started successfully, you can log in as
+root using the <command>root-login</command> operation:
+
+<screen>
+$ nixos-container root-login foo
+[root@foo:~]#
+</screen>
+
+Note that only root on the host can do this (since there is no
+authentication).  You can also get a regular login prompt using the
+<command>login</command> operation, which is available to all users on
+the host:
+
+<screen>
+$ nixos-container login foo
+foo login: alice
+Password: ***
+</screen>
+
+With <command>nixos-container run</command>, you can execute arbitrary
+commands in the container:
+
+<screen>
+$ nixos-container run foo -- uname -a
+Linux foo 3.4.82 #1-NixOS SMP Thu Mar 20 14:44:05 UTC 2014 x86_64 GNU/Linux
+</screen>
+
+</para>
+
+<para>There are several ways to change the configuration of the
+container. First, on the host, you can edit
+<literal>/var/lib/containers/<replaceable>name</replaceable>/etc/nixos/configuration.nix</literal>,
+and run
+
+<screen>
+$ nixos-container update foo
+</screen>
+
+This will build and activate the new configuration. You can also
+specify a new configuration on the command line:
+
+<screen>
+$ nixos-container update foo --config 'services.httpd.enable = true; \
+  services.httpd.adminAddr = "foo@example.org";'
+
+$ curl http://$(nixos-container show-ip foo)/
+&lt;!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">…
+</screen>
+
+However, note that this will overwrite the container’s
+<filename>/etc/nixos/configuration.nix</filename>.</para>
+
+<para>Alternatively, you can change the configuration from within the
+container itself by running <command>nixos-rebuild switch</command>
+inside the container. Note that the container by default does not have
+a copy of the NixOS channel, so you should run <command>nix-channel
+--update</command> first.</para>
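+
+<para>For example, a session that updates the channel and rebuilds the
+container from the inside might look like this:
+
+<screen>
+$ nixos-container root-login foo
+[root@foo:~]# nix-channel --update
+[root@foo:~]# nixos-rebuild switch
+</screen>
+
+</para>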
+
+<para>Containers can be stopped and started using
+<literal>nixos-container stop</literal> and <literal>nixos-container
+start</literal>, respectively, or by using
+<command>systemctl</command> on the container’s service unit. To
+destroy a container, including its file system, do
+
+<screen>
+$ nixos-container destroy foo
+</screen>
+
+</para>
+
+</section>
\ No newline at end of file
diff --git a/nixos/doc/manual/administration/logging.xml b/nixos/doc/manual/administration/logging.xml
new file mode 100644
index 000000000000..1d5df7770e29
--- /dev/null
+++ b/nixos/doc/manual/administration/logging.xml
@@ -0,0 +1,52 @@
+<chapter xmlns="http://docbook.org/ns/docbook"
+        xmlns:xlink="http://www.w3.org/1999/xlink"
+        xmlns:xi="http://www.w3.org/2001/XInclude"
+        version="5.0"
+        xml:id="sec-logging">
+
+<title>Logging</title>
+
+<para>System-wide logging is provided by systemd’s
+<emphasis>journal</emphasis>, which subsumes traditional logging
+daemons such as syslogd and klogd.  Log entries are kept in binary
+files in <filename>/var/log/journal/</filename>.  The command
+<literal>journalctl</literal> allows you to see the contents of the
+journal.  For example,
+
+<screen>
+$ journalctl -b
+</screen>
+
+shows all journal entries since the last reboot.  (The output of
+<command>journalctl</command> is piped into <command>less</command> by
+default.)  You can use various options and match operators to restrict
+output to messages of interest.  For instance, to get all messages
+from PostgreSQL:
+
+<screen>
+$ journalctl -u postgresql.service
+-- Logs begin at Mon, 2013-01-07 13:28:01 CET, end at Tue, 2013-01-08 01:09:57 CET. --
+...
+Jan 07 15:44:14 hagbard postgres[2681]: [2-1] LOG:  database system is shut down
+-- Reboot --
+Jan 07 15:45:10 hagbard postgres[2532]: [1-1] LOG:  database system was shut down at 2013-01-07 15:44:14 CET
+Jan 07 15:45:13 hagbard postgres[2500]: [1-1] LOG:  database system is ready to accept connections
+</screen>
+
+Or to get all messages since the last reboot that have at least a
+“critical” severity level:
+
+<screen>
+$ journalctl -b -p crit
+Dec 17 21:08:06 mandark sudo[3673]: pam_unix(sudo:auth): auth could not identify password for [alice]
+Dec 29 01:30:22 mandark kernel[6131]: [1053513.909444] CPU6: Core temperature above threshold, cpu clock throttled (total events = 1)
+</screen>
+
+</para>
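+
+<para>To follow new messages as they are logged (similar to
+<command>tail -f</command>), add the <option>-f</option> flag, e.g.
+
+<screen>
+$ journalctl -u postgresql.service -f
+</screen>
+
+</para>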
+
+<para>The system journal is readable by root and by users in the
+<literal>wheel</literal> and <literal>systemd-journal</literal>
+groups.  All users have a private journal that can be read using
+<command>journalctl</command>.</para>
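+
+<para>For instance, to allow a (hypothetical) user
+<literal>alice</literal> to read the system journal, you could add her
+to the <literal>systemd-journal</literal> group in
+<filename>configuration.nix</filename>; a minimal sketch:
+
+<programlisting>
+users.extraUsers.alice.extraGroups = [ "systemd-journal" ];
+</programlisting>
+
+</para>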
+
+</chapter>
\ No newline at end of file
diff --git a/nixos/doc/manual/administration/maintenance-mode.xml b/nixos/doc/manual/administration/maintenance-mode.xml
new file mode 100644
index 000000000000..15c1f902da79
--- /dev/null
+++ b/nixos/doc/manual/administration/maintenance-mode.xml
@@ -0,0 +1,18 @@
+<section xmlns="http://docbook.org/ns/docbook"
+        xmlns:xlink="http://www.w3.org/1999/xlink"
+        xmlns:xi="http://www.w3.org/2001/XInclude"
+        version="5.0"
+        xml:id="sec-maintenance-mode">
+
+<title>Maintenance Mode</title>
+
+<para>You can enter rescue mode by running:
+
+<screen>
+$ systemctl rescue</screen>
+
+This will eventually give you a single-user root shell.  Systemd will
+stop (almost) all system services.  To get out of maintenance mode,
+just exit from the rescue shell.</para>
+
+</section>
\ No newline at end of file
diff --git a/nixos/doc/manual/administration/network-problems.xml b/nixos/doc/manual/administration/network-problems.xml
new file mode 100644
index 000000000000..5ba1bfd5ac9a
--- /dev/null
+++ b/nixos/doc/manual/administration/network-problems.xml
@@ -0,0 +1,33 @@
+<section xmlns="http://docbook.org/ns/docbook"
+        xmlns:xlink="http://www.w3.org/1999/xlink"
+        xmlns:xi="http://www.w3.org/2001/XInclude"
+        version="5.0"
+        xml:id="sec-nix-network-issues">
+
+<title>Network Problems</title>
+
+<para>Nix uses a so-called <emphasis>binary cache</emphasis> to
+optimise building a package from source into downloading it as a
+pre-built binary.  That is, whenever a command like
+<command>nixos-rebuild</command> needs a path in the Nix store, Nix
+will try to download that path from the Internet rather than build it
+from source.  The default binary cache is
+<uri>http://cache.nixos.org/</uri>.  If this cache is unreachable, Nix
+operations may take a long time due to HTTP connection timeouts.  You
+can disable the use of the binary cache by adding <option>--option
+use-binary-caches false</option>, e.g.
+
+<screen>
+$ nixos-rebuild switch --option use-binary-caches false
+</screen>
+
+If you have an alternative binary cache at your disposal, you can use
+it instead:
+
+<screen>
+$ nixos-rebuild switch --option binary-caches http://my-cache.example.org/
+</screen>
+
+</para>
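+
+<para>To use an alternative cache permanently, you can set it in
+<filename>configuration.nix</filename> instead; a sketch, assuming the
+<option>nix.binaryCaches</option> option is available in your NixOS
+version:
+
+<programlisting>
+nix.binaryCaches = [ "http://my-cache.example.org/" ];
+</programlisting>
+
+</para>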
+
+</section>
\ No newline at end of file
diff --git a/nixos/doc/manual/administration/rebooting.xml b/nixos/doc/manual/administration/rebooting.xml
new file mode 100644
index 000000000000..d1db7b141cf2
--- /dev/null
+++ b/nixos/doc/manual/administration/rebooting.xml
@@ -0,0 +1,44 @@
+<chapter xmlns="http://docbook.org/ns/docbook"
+        xmlns:xlink="http://www.w3.org/1999/xlink"
+        xmlns:xi="http://www.w3.org/2001/XInclude"
+        version="5.0"
+        xml:id="sec-rebooting">
+
+<title>Rebooting and Shutting Down</title>
+
+<para>The system can be shut down (and automatically powered off) by
+doing:
+
+<screen>
+$ shutdown
+</screen>
+
+This is equivalent to running <command>systemctl
+poweroff</command>.</para>
+
+<para>To reboot the system, run
+
+<screen>
+$ reboot
+</screen>
+
+which is equivalent to <command>systemctl reboot</command>.
+Alternatively, you can quickly reboot the system using
+<literal>kexec</literal>, which bypasses the BIOS by directly loading
+the new kernel into memory:
+
+<screen>
+$ systemctl kexec
+</screen>
+
+</para>
+
+<para>The machine can be suspended to RAM (if supported) using
+<command>systemctl suspend</command>, and suspended to disk using
+<command>systemctl hibernate</command>.</para>
+
+<para>These commands can be run by any user who is logged in locally,
+i.e. on a virtual console or in X11; otherwise, the user is asked for
+authentication.</para>
+
+</chapter>
\ No newline at end of file
diff --git a/nixos/doc/manual/administration/rollback.xml b/nixos/doc/manual/administration/rollback.xml
new file mode 100644
index 000000000000..23a3ece7c070
--- /dev/null
+++ b/nixos/doc/manual/administration/rollback.xml
@@ -0,0 +1,48 @@
+<section xmlns="http://docbook.org/ns/docbook"
+        xmlns:xlink="http://www.w3.org/1999/xlink"
+        xmlns:xi="http://www.w3.org/2001/XInclude"
+        version="5.0"
+        xml:id="sec-rollback">
+
+<title>Rolling Back Configuration Changes</title>
+
+<para>After running <command>nixos-rebuild</command> to switch to a
+new configuration, you may find that the new configuration doesn’t
+work very well.  In that case, there are several ways to return to a
+previous configuration.</para>
+
+<para>First, the GRUB boot manager allows you to boot into any
+previous configuration that hasn’t been garbage-collected.  These
+configurations can be found under the GRUB submenu “NixOS - All
+configurations”.  This is especially useful if the new configuration
+fails to boot.  After the system has booted, you can make the selected
+configuration the default for subsequent boots:
+
+<screen>
+$ /run/current-system/bin/switch-to-configuration boot</screen>
+
+</para>
+
+<para>Second, you can switch to the previous configuration in a running
+system:
+
+<screen>
+$ nixos-rebuild switch --rollback</screen>
+
+This is equivalent to running:
+
+<screen>
+$ /nix/var/nix/profiles/system-<replaceable>N</replaceable>-link/bin/switch-to-configuration switch</screen>
+
+where <replaceable>N</replaceable> is the number of the NixOS system
+configuration.  To get a list of the available configurations, do:
+
+<screen>
+$ ls -l /nix/var/nix/profiles/system-*-link
+<replaceable>...</replaceable>
+lrwxrwxrwx 1 root root 78 Aug 12 13:54 /nix/var/nix/profiles/system-268-link -> /nix/store/202b...-nixos-13.07pre4932_5a676e4-4be1055
+</screen>
+
+</para>
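+
+<para>The generation numbers can also be listed through the system
+profile itself:
+
+<screen>
+$ nix-env -p /nix/var/nix/profiles/system --list-generations
+</screen>
+
+</para>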
+
+</section>
\ No newline at end of file
diff --git a/nixos/doc/manual/administration/running.xml b/nixos/doc/manual/administration/running.xml
new file mode 100644
index 000000000000..9091511ed527
--- /dev/null
+++ b/nixos/doc/manual/administration/running.xml
@@ -0,0 +1,24 @@
+<part xmlns="http://docbook.org/ns/docbook"
+      xmlns:xlink="http://www.w3.org/1999/xlink"
+      xmlns:xi="http://www.w3.org/2001/XInclude"
+      version="5.0"
+      xml:id="ch-running">
+
+<title>Administration</title>
+
+<partintro>
+<para>This chapter describes various aspects of managing a running
+NixOS system, such as how to use the <command>systemd</command>
+service manager.</para>
+</partintro>
+
+<xi:include href="service-mgmt.xml" />
+<xi:include href="rebooting.xml" />
+<xi:include href="user-sessions.xml" />
+<xi:include href="control-groups.xml" />
+<xi:include href="logging.xml" />
+<xi:include href="cleaning-store.xml" />
+<xi:include href="containers.xml" />
+<xi:include href="troubleshooting.xml" />
+
+</part>
diff --git a/nixos/doc/manual/administration/service-mgmt.xml b/nixos/doc/manual/administration/service-mgmt.xml
new file mode 100644
index 000000000000..c0940a42f307
--- /dev/null
+++ b/nixos/doc/manual/administration/service-mgmt.xml
@@ -0,0 +1,83 @@
+<chapter xmlns="http://docbook.org/ns/docbook"
+         xmlns:xlink="http://www.w3.org/1999/xlink"
+         xmlns:xi="http://www.w3.org/2001/XInclude"
+         version="5.0"
+         xml:id="sec-systemctl">
+
+<title>Service Management</title>
+
+<para>In NixOS, all system services are started and monitored using
+the systemd program.  Systemd is the “init” process of the system
+(i.e. PID 1), the parent of all other processes.  It manages a set of
+so-called “units”, which can be things like system services
+(programs), but also mount points, swap files, devices, targets
+(groups of units) and more.  Units can have complex dependencies; for
+instance, one unit can require that another unit must be successfully
+started before the first unit can be started.  When the system boots,
+it starts a unit named <literal>default.target</literal>; the
+dependencies of this unit cause all system services to be started,
+file systems to be mounted, swap files to be activated, and so
+on.</para>
+
+<para>The command <command>systemctl</command> is the main way to
+interact with <command>systemd</command>.  Without any arguments, it
+shows the status of active units:
+
+<screen>
+$ systemctl
+-.mount          loaded active mounted   /
+swapfile.swap    loaded active active    /swapfile
+sshd.service     loaded active running   SSH Daemon
+graphical.target loaded active active    Graphical Interface
+<replaceable>...</replaceable>
+</screen>
+
+</para>
+
+<para>You can ask for detailed status information about a unit, for
+instance, the PostgreSQL database service:
+
+<screen>
+$ systemctl status postgresql.service
+postgresql.service - PostgreSQL Server
+          Loaded: loaded (/nix/store/pn3q73mvh75gsrl8w7fdlfk3fq5qm5mw-unit/postgresql.service)
+          Active: active (running) since Mon, 2013-01-07 15:55:57 CET; 9h ago
+        Main PID: 2390 (postgres)
+          CGroup: name=systemd:/system/postgresql.service
+                  ├─2390 postgres
+                  ├─2418 postgres: writer process
+                  ├─2419 postgres: wal writer process
+                  ├─2420 postgres: autovacuum launcher process
+                  ├─2421 postgres: stats collector process
+                  └─2498 postgres: zabbix zabbix [local] idle
+
+Jan 07 15:55:55 hagbard postgres[2394]: [1-1] LOG:  database system was shut down at 2013-01-07 15:55:05 CET
+Jan 07 15:55:57 hagbard postgres[2390]: [1-1] LOG:  database system is ready to accept connections
+Jan 07 15:55:57 hagbard postgres[2420]: [1-1] LOG:  autovacuum launcher started
+Jan 07 15:55:57 hagbard systemd[1]: Started PostgreSQL Server.
+</screen>
+
+Note that this shows the status of the unit (active and running), all
+the processes belonging to the service, as well as the most recent log
+messages from the service.
+
+</para>
+
+<para>Units can be stopped, started or restarted:
+
+<screen>
+$ systemctl stop postgresql.service
+$ systemctl start postgresql.service
+$ systemctl restart postgresql.service
+</screen>
+
+These operations are synchronous: they wait until the service has
+finished starting or stopping (or has failed).  Starting a unit will
+cause the dependencies of that unit to be started as well (if
+necessary).</para>
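+
+<para>If a unit fails to start, a quick way to find it is to list the
+failed units, and then inspect the culprit with <command>systemctl
+status</command>:
+
+<screen>
+$ systemctl --failed
+</screen>
+
+</para>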
+
+<!-- - cgroups: each service and user session is a cgroup
+
+- cgroup resource management -->
+
+</chapter>
diff --git a/nixos/doc/manual/administration/store-corruption.xml b/nixos/doc/manual/administration/store-corruption.xml
new file mode 100644
index 000000000000..0160cb45358b
--- /dev/null
+++ b/nixos/doc/manual/administration/store-corruption.xml
@@ -0,0 +1,37 @@
+<section xmlns="http://docbook.org/ns/docbook"
+        xmlns:xlink="http://www.w3.org/1999/xlink"
+        xmlns:xi="http://www.w3.org/2001/XInclude"
+        version="5.0"
+        xml:id="sec-nix-store-corruption">
+
+<title>Nix Store Corruption</title>
+
+<para>After a system crash, it’s possible for files in the Nix store
+to become corrupted.  (For instance, the Ext4 file system has the
+tendency to replace un-synced files with zero bytes.)  NixOS tries
+hard to prevent this from happening: it performs a
+<command>sync</command> before switching to a new configuration, and
+Nix’s database is fully transactional.  If corruption still occurs,
+you may be able to fix it automatically.</para>
+
+<para>If the corruption is in a path in the closure of the NixOS
+system configuration, you can fix it by doing
+
+<screen>
+$ nixos-rebuild switch --repair
+</screen>
+
+This will cause Nix to check every path in the closure, and if its
+cryptographic hash differs from the hash recorded in Nix’s database,
+the path is rebuilt or redownloaded.</para>
+
+<para>You can also scan the entire Nix store for corrupt paths:
+
+<screen>
+$ nix-store --verify --check-contents --repair
+</screen>
+
+Any corrupt paths will be redownloaded if they’re available in a
+binary cache; otherwise, they cannot be repaired.</para>
+
+</section>
\ No newline at end of file
diff --git a/nixos/doc/manual/administration/troubleshooting.xml b/nixos/doc/manual/administration/troubleshooting.xml
new file mode 100644
index 000000000000..351fb1883310
--- /dev/null
+++ b/nixos/doc/manual/administration/troubleshooting.xml
@@ -0,0 +1,18 @@
+<chapter xmlns="http://docbook.org/ns/docbook"
+        xmlns:xlink="http://www.w3.org/1999/xlink"
+        xmlns:xi="http://www.w3.org/2001/XInclude"
+        version="5.0"
+        xml:id="ch-troubleshooting">
+
+<title>Troubleshooting</title>
+
+<para>This chapter describes solutions to common problems you might
+encounter when you manage your NixOS system.</para>
+
+<xi:include href="boot-problems.xml" />
+<xi:include href="maintenance-mode.xml" />
+<xi:include href="rollback.xml" />
+<xi:include href="store-corruption.xml" />
+<xi:include href="network-problems.xml" />
+
+</chapter>
diff --git a/nixos/doc/manual/administration/user-sessions.xml b/nixos/doc/manual/administration/user-sessions.xml
new file mode 100644
index 000000000000..05e2c1a9b29f
--- /dev/null
+++ b/nixos/doc/manual/administration/user-sessions.xml
@@ -0,0 +1,53 @@
+<chapter xmlns="http://docbook.org/ns/docbook"
+        xmlns:xlink="http://www.w3.org/1999/xlink"
+        xmlns:xi="http://www.w3.org/2001/XInclude"
+        version="5.0"
+        xml:id="sec-user-sessions">
+
+<title>User Sessions</title>
+
+<para>Systemd keeps track of all users who are logged into the system
+(e.g. on a virtual console or remotely via SSH).  The command
+<command>loginctl</command> allows querying and manipulating user
+sessions.  For instance, to list all user sessions:
+
+<screen>
+$ loginctl
+   SESSION        UID USER             SEAT
+        c1        500 eelco            seat0
+        c3          0 root             seat0
+        c4        500 alice
+</screen>
+
+This shows that two users are logged in locally, while another is
+logged in remotely.  (“Seats” are essentially the combinations of
+displays and input devices attached to the system; usually, there is
+only one seat.)  To get information about a session:
+
+<screen>
+$ loginctl session-status c3
+c3 - root (0)
+           Since: Tue, 2013-01-08 01:17:56 CET; 4min 42s ago
+          Leader: 2536 (login)
+            Seat: seat0; vc3
+             TTY: /dev/tty3
+         Service: login; type tty; class user
+           State: online
+          CGroup: name=systemd:/user/root/c3
+                  ├─ 2536 /nix/store/10mn4xip9n7y9bxqwnsx7xwx2v2g34xn-shadow-4.1.5.1/bin/login --
+                  ├─10339 -bash
+                  └─10355 w3m nixos.org
+</screen>
+
+This shows that the user is logged in on virtual console 3.  It also
+lists the processes belonging to this session.  Since systemd keeps
+track of this, you can terminate a session in a way that ensures that
+all the session’s processes are gone:
+
+<screen>
+$ loginctl terminate-session c3
+</screen>
+
+</para>
+
+</chapter>
\ No newline at end of file