Diffstat (limited to 'nixos/doc/manual/configuration/kubernetes.xml')
-rw-r--r--  nixos/doc/manual/configuration/kubernetes.xml | 74
1 file changed, 62 insertions, 12 deletions
diff --git a/nixos/doc/manual/configuration/kubernetes.xml b/nixos/doc/manual/configuration/kubernetes.xml
index de5ceb83e77a..54a100e44795 100644
--- a/nixos/doc/manual/configuration/kubernetes.xml
+++ b/nixos/doc/manual/configuration/kubernetes.xml
@@ -5,10 +5,12 @@
          xml:id="sec-kubernetes">
  <title>Kubernetes</title>
  <para>
-  The NixOS Kubernetes module is a collective term for a handful of individual submodules implementing the Kubernetes cluster components.
+  The NixOS Kubernetes module is a collective term for a handful of individual
+  submodules implementing the Kubernetes cluster components.
  </para>
  <para>
-  There are generally two ways of enabling Kubernetes on NixOS. One way is to enable and configure cluster components appropriately by hand:
+  There are generally two ways of enabling Kubernetes on NixOS. One way is to
+  enable and configure cluster components appropriately by hand:
 <programlisting>
 services.kubernetes = {
   apiserver.enable = true;
@@ -19,7 +21,9 @@ services.kubernetes = {
   flannel.enable = true;
 };
 </programlisting>
-  Another way is to assign cluster roles ("master" and/or "node") to the host. This enables apiserver, controllerManager, scheduler, addonManager, kube-proxy and etcd:
+  Another way is to assign cluster roles ("master" and/or "node") to the host.
+  This enables apiserver, controllerManager, scheduler, addonManager,
+  kube-proxy and etcd:
 <programlisting>
 <xref linkend="opt-services.kubernetes.roles"/> = [ "master" ];
 </programlisting>
@@ -27,29 +31,66 @@ services.kubernetes = {
 <programlisting>
 <xref linkend="opt-services.kubernetes.roles"/> = [ "node" ];
 </programlisting>
-  Assigning both the master and node roles is usable if you want a single node Kubernetes cluster for dev or testing purposes:
+  Assigning both the master and node roles is useful if you want a single node
+  Kubernetes cluster for dev or testing purposes:
 <programlisting>
 <xref linkend="opt-services.kubernetes.roles"/> = [ "master" "node" ];
 </programlisting>
-  Note: Assigning either role will also default both <xref linkend="opt-services.kubernetes.flannel.enable"/> and <xref linkend="opt-services.kubernetes.easyCerts"/> to true. This sets up flannel as CNI and activates automatic PKI bootstrapping.
+  Note: Assigning either role will also default both
+  <xref linkend="opt-services.kubernetes.flannel.enable"/> and
+  <xref linkend="opt-services.kubernetes.easyCerts"/> to true. This sets up
+  flannel as the CNI plugin and activates automatic PKI bootstrapping.
  </para>
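+ <para>
+  If these defaults are not desired, they can be overridden explicitly. The
+  following is only an illustrative sketch of such a configuration, not a
+  recommendation:
+<programlisting>
+services.kubernetes = {
+  roles = [ "master" "node" ];
+  # Override the role defaults:
+  flannel.enable = false; # e.g. when using another CNI plugin
+  easyCerts = false;      # e.g. when managing the PKI manually
+};
+</programlisting>
+ </para>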
  <para>
-  As of kubernetes 1.10.X it has been deprecated to open non-tls-enabled ports on kubernetes components. Thus, from NixOS 19.03 all plain HTTP ports have been disabled by default. While opening insecure ports is still possible, it is recommended not to bind these to other interfaces than loopback. To re-enable the insecure port on the apiserver, see options: <xref linkend="opt-services.kubernetes.apiserver.insecurePort"/> and <xref linkend="opt-services.kubernetes.apiserver.insecureBindAddress"/>
+  As of Kubernetes 1.10.x, opening non-TLS-enabled ports on Kubernetes
+  components is deprecated. Thus, from NixOS 19.03 onward, all plain HTTP
+  ports are disabled by default. While opening insecure ports is still
+  possible, it is recommended not to bind them to interfaces other than
+  loopback. To re-enable the insecure port on the apiserver, see the options
+  <xref linkend="opt-services.kubernetes.apiserver.insecurePort"/> and
+  <xref linkend="opt-services.kubernetes.apiserver.insecureBindAddress"/>.
  </para>
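+ <para>
+  As an illustrative sketch (the port and bind address below are example
+  values, not defaults), the insecure apiserver port could be re-enabled on
+  loopback like this:
+<programlisting>
+services.kubernetes.apiserver = {
+  insecurePort = 8080;               # example value
+  insecureBindAddress = "127.0.0.1"; # keep it bound to loopback
+};
+</programlisting>
+ </para>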
  <note>
   <para>
-   As of NixOS 19.03, it is mandatory to configure: <xref linkend="opt-services.kubernetes.masterAddress"/>. The masterAddress must be resolveable and routeable by all cluster nodes. In single node clusters, this can be set to <literal>localhost</literal>.
+   As of NixOS 19.03, it is mandatory to configure:
+   <xref linkend="opt-services.kubernetes.masterAddress"/>. The masterAddress
+   must be resolvable and routable by all cluster nodes. In single node
+   clusters, this can be set to <literal>localhost</literal>.
   </para>
  </note>
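+ <para>
+  For example, a single node cluster can simply point the masterAddress at
+  itself:
+<programlisting>
+services.kubernetes.masterAddress = "localhost";
+</programlisting>
+  Multi-node clusters should instead use a hostname or address that every
+  node can resolve and route to.
+ </para>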
  <para>
-  Role-based access control (RBAC) authorization mode is enabled by default. This means that anonymous requests to the apiserver secure port will expectedly cause a permission denied error. All cluster components must therefore be configured with x509 certificates for two-way tls communication. The x509 certificate subject section determines the roles and permissions granted by the apiserver to perform clusterwide or namespaced operations. See also: <link
-     xlink:href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/"> Using RBAC Authorization</link>.
+  Role-based access control (RBAC) authorization mode is enabled by default.
+  This means that anonymous requests to the apiserver secure port are expected
+  to result in a permission denied error. All cluster components must therefore
+  be configured with x509 certificates for two-way TLS communication.
+  The x509 certificate subject section determines the roles and permissions
+  granted by the apiserver to perform clusterwide or namespaced operations. See
+  also:
+  <link
+     xlink:href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/">
+  Using RBAC Authorization</link>.
  </para>
  <para>
-  The NixOS kubernetes module provides an option for automatic certificate bootstrapping and configuration, <xref linkend="opt-services.kubernetes.easyCerts"/>. The PKI bootstrapping process involves setting up a certificate authority (CA) daemon (cfssl) on the kubernetes master node. cfssl generates a CA-cert for the cluster, and uses the CA-cert for signing subordinate certs issued to each of the cluster components. Subsequently, the certmgr daemon monitors active certificates and renews them when needed. For single node Kubernetes clusters, setting <xref linkend="opt-services.kubernetes.easyCerts"/> = true is sufficient and no further action is required. For joining extra node machines to an existing cluster on the other hand, establishing initial trust is mandatory.
+  The NixOS kubernetes module provides an option for automatic certificate
+  bootstrapping and configuration,
+  <xref linkend="opt-services.kubernetes.easyCerts"/>. The PKI bootstrapping
+  process involves setting up a certificate authority (CA) daemon (cfssl) on
+  the kubernetes master node. cfssl generates a CA-cert for the cluster, and
+  uses the CA-cert for signing subordinate certs issued to each of the cluster
+  components. Subsequently, the certmgr daemon monitors active certificates and
+  renews them when needed. For single node Kubernetes clusters, setting
+  <xref linkend="opt-services.kubernetes.easyCerts"/> = true is sufficient and
+  no further action is required. For joining extra node machines to an
+  existing cluster, on the other hand, establishing initial trust is mandatory.
  </para>
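+ <para>
+  Putting the above together, a minimal single node setup with automatic PKI
+  bootstrapping might look like the following sketch (easyCerts is listed
+  explicitly for clarity, although assigning a role already defaults it to
+  true):
+<programlisting>
+services.kubernetes = {
+  roles = [ "master" "node" ];
+  masterAddress = "localhost";
+  easyCerts = true; # already the default when a role is assigned
+};
+</programlisting>
+ </para>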
  <para>
-  To add new nodes to the cluster: On any (non-master) cluster node where <xref linkend="opt-services.kubernetes.easyCerts"/> is enabled, the helper script <literal>nixos-kubernetes-node-join</literal> is available on PATH. Given a token on stdin, it will copy the token to the kubernetes secrets directory and restart the certmgr service. As requested certificates are issued, the script will restart kubernetes cluster components as needed for them to pick up new keypairs.
+  To add new nodes to the cluster: On any (non-master) cluster node where
+  <xref linkend="opt-services.kubernetes.easyCerts"/> is enabled, the helper
+  script <literal>nixos-kubernetes-node-join</literal> is available on PATH.
+  Given a token on stdin, it will copy the token to the kubernetes secrets
+  directory and restart the certmgr service. As requested certificates are
+  issued, the script will restart kubernetes cluster components as needed for
+  them to pick up new keypairs.
  </para>
  <note>
   <para>
@@ -57,6 +98,15 @@ services.kubernetes = {
   </para>
  </note>
  <para>
-  In order to interact with an RBAC-enabled cluster as an administrator, one needs to have cluster-admin privileges. By default, when easyCerts is enabled, a cluster-admin kubeconfig file is generated and linked into <literal>/etc/kubernetes/cluster-admin.kubeconfig</literal> as determined by <xref linkend="opt-services.kubernetes.pki.etcClusterAdminKubeconfig"/>. <literal>export KUBECONFIG=/etc/kubernetes/cluster-admin.kubeconfig</literal> will make kubectl use this kubeconfig to access and authenticate the cluster. The cluster-admin kubeconfig references an auto-generated keypair owned by root. Thus, only root on the kubernetes master may obtain cluster-admin rights by means of this file.
+  In order to interact with an RBAC-enabled cluster as an administrator, one
+  needs to have cluster-admin privileges. By default, when easyCerts is
+  enabled, a cluster-admin kubeconfig file is generated and linked into
+  <literal>/etc/kubernetes/cluster-admin.kubeconfig</literal> as determined by
+  <xref linkend="opt-services.kubernetes.pki.etcClusterAdminKubeconfig"/>.
+  <literal>export KUBECONFIG=/etc/kubernetes/cluster-admin.kubeconfig</literal>
+  will make kubectl use this kubeconfig to access and authenticate to the cluster.
+  The cluster-admin kubeconfig references an auto-generated keypair owned by
+  root. Thus, only root on the kubernetes master may obtain cluster-admin
+  rights by means of this file.
  </para>
 </chapter>