Amendments to Cluster provisioning and registering documentation
Adding change to the registering cluster docs
divya-mohan0209 committed Mar 8, 2022
1 parent b7b2a99 commit 929adc3
Showing 2 changed files with 9 additions and 11 deletions.
@@ -7,7 +7,7 @@ aliases:
- /rancher/v2.x/en/cluster-provisioning/registered-clusters/
---

-The cluster registration feature replaced the feature to import clusters.
+Along with importing clusters, as of v2.5, Rancher allows you to tie in closer with cloud APIs and manage your cluster by registering existing clusters.

The control that Rancher has to manage a registered cluster depends on the type of cluster. For details, see [Management Capabilities for Registered Clusters.](#management-capabilities-for-registered-clusters)

@@ -121,7 +121,7 @@ $ curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" sh -s -

### Configuring an Imported EKS Cluster with Terraform

-You should define **only** the minimum fields that Rancher requires when importing an EKS cluster with Terraform. This is important as Rancher will overwrite what was in the EKS cluster with any config that the user has provided.
+You should define **only** the minimum fields that Rancher requires when importing an EKS cluster with Terraform. This is important as Rancher will overwrite what was in the EKS cluster with any config that the user has provided.

>**Warning:** Even a small difference between the current EKS cluster and a user-provided config could have unexpected results.
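
For illustration, a minimal configuration of this kind might look like the following sketch. It assumes the `rancher2` Terraform provider and its `eks_config_v2` block; the resource names, region, and credential reference are placeholders to verify against the provider documentation for your Rancher version.

    # Hypothetical minimal import: define only what Rancher needs to find the
    # existing EKS cluster, so none of its live configuration is overwritten.
    resource "rancher2_cluster" "imported_eks" {
      name = "my-eks-cluster"

      eks_config_v2 {
        cloud_credential_id = rancher2_cloud_credential.aws.id # assumed credential resource
        name                = "my-eks-cluster"                 # name of the existing EKS cluster
        region              = "us-east-1"                      # placeholder region
        imported            = true                             # register rather than provision
      }
    }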
@@ -256,7 +256,7 @@ Also in the K3s documentation, nodes with the worker role are called agent nodes

# Debug Logging and Troubleshooting for Registered K3s Clusters

-Nodes are upgraded by the system upgrade controller running in the downstream cluster. Based on the cluster configuration, Rancher deploys two [plans](https://github.com/rancher/system-upgrade-controller#example-upgrade-plan) to upgrade K3s nodes: one for controlplane nodes and one for workers. The system upgrade controller follows the plans and upgrades the nodes.
+Nodes are upgraded by the system upgrade controller running in the downstream cluster. Based on the cluster configuration, Rancher deploys two [plans](https://github.com/rancher/system-upgrade-controller#example-upgrade-plan) to upgrade K3s nodes: one for controlplane nodes and one for workers. The system upgrade controller follows the plans and upgrades the nodes.

To enable debug logging on the system upgrade controller deployment, edit the [configmap](https://github.com/rancher/system-upgrade-controller/blob/50a4c8975543d75f1d76a8290001d87dc298bdb4/manifests/system-upgrade-controller.yaml#L32) to set the debug environment variable to true. Then restart the `system-upgrade-controller` pod.
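
For instance, a sketch of that edit from the command line follows; the configmap name, deployment name, and `cattle-system` namespace are assumptions about how Rancher deploys the controller, so check what actually exists in your downstream cluster first.

    # Open the controller's configmap and set the debug environment variable
    # to "true" (names and namespace are assumed, not documented values):
    kubectl -n cattle-system edit configmap system-upgrade-controller-config

    # Restart the controller pod so the new setting takes effect:
    kubectl -n cattle-system rollout restart deployment/system-upgrade-controller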

@@ -326,4 +326,3 @@ To annotate a registered cluster,
1. Click **Save.**

**Result:** The annotation does not give the capabilities to the cluster, but it does indicate to Rancher that the cluster has those capabilities.

@@ -3,7 +3,7 @@ title: Registering Existing Clusters
weight: 6
---

-The cluster registration feature replaced the feature to import clusters.
+Along with importing clusters, as of v2.5, Rancher allows you to tie in closer with cloud APIs and manage your cluster by registering existing clusters.

The control that Rancher has to manage a registered cluster depends on the type of cluster. For details, see [Management Capabilities for Registered Clusters.](#management-capabilities-for-registered-clusters)

@@ -168,7 +168,7 @@ Also in the K3s documentation, nodes with the worker role are called agent nodes

# Debug Logging and Troubleshooting for Registered K3s Clusters

-Nodes are upgraded by the system upgrade controller running in the downstream cluster. Based on the cluster configuration, Rancher deploys two [plans](https://github.com/rancher/system-upgrade-controller#example-upgrade-plan) to upgrade K3s nodes: one for controlplane nodes and one for workers. The system upgrade controller follows the plans and upgrades the nodes.
+Nodes are upgraded by the system upgrade controller running in the downstream cluster. Based on the cluster configuration, Rancher deploys two [plans](https://github.com/rancher/system-upgrade-controller#example-upgrade-plan) to upgrade K3s nodes: one for controlplane nodes and one for workers. The system upgrade controller follows the plans and upgrades the nodes.

To enable debug logging on the system upgrade controller deployment, edit the [configmap](https://github.com/rancher/system-upgrade-controller/blob/50a4c8975543d75f1d76a8290001d87dc298bdb4/manifests/system-upgrade-controller.yaml#L32) to set the debug environment variable to true. Then restart the `system-upgrade-controller` pod.

@@ -196,7 +196,7 @@ Authorized Cluster Endpoint (ACE) support has been added for registered RKE2 and

> **Note:**
>
-> - These steps only need to be performed on the control plane nodes of the downstream cluster. You must configure each control plane node individually.
+> - These steps only need to be performed on the control plane nodes of the downstream cluster. You must configure each control plane node individually.
>
> - The following steps will work on both RKE2 and K3s clusters registered in v2.6.x as well as those registered (or imported) from a previous version of Rancher with an upgrade to v2.6.x.
>
@@ -223,19 +223,19 @@ Authorized Cluster Endpoint (ACE) support has been added for registered RKE2 and
context:
user: Default
cluster: Default

1. Add the following to the config file (or create one if it doesn’t exist); note that the default location is `/etc/rancher/{rke2,k3s}/config.yaml`:

kube-apiserver-arg:
- authentication-token-webhook-config-file=/var/lib/rancher/{rke2,k3s}/kube-api-authn-webhook.yaml

1. Run the following commands:

sudo systemctl stop {rke2,k3s}-server
sudo systemctl start {rke2,k3s}-server

1. Finally, you **must** go back to the Rancher UI and edit the imported cluster there to complete the ACE enablement. Click on **⋮ > Edit Config**, then click the **Networking** tab under Cluster Configuration. Finally, click the **Enabled** button for **Authorized Endpoint**. Once the ACE is enabled, you then have the option of entering a fully qualified domain name (FQDN) and certificate information.

>**Note:** The **FQDN** field is optional, and if one is entered, it should point to the downstream cluster. Certificate information is only needed if there is a load balancer in front of the downstream cluster that is using an untrusted certificate. If you have a valid certificate, then nothing needs to be added to the **CA Certificates** field.
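
As a quick sanity check after the restart (a sketch rather than an official step; the RKE2 paths are the concrete form of the `{rke2,k3s}` placeholders above, so substitute `k3s` on K3s clusters):

    # Confirm the webhook config referenced by the apiserver argument exists:
    ls -l /var/lib/rancher/rke2/kube-api-authn-webhook.yaml

    # Confirm the running kube-apiserver picked up the flag:
    ps aux | grep kube-apiserver | grep authentication-token-webhook-config-file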
# Annotating Registered Clusters
@@ -286,4 +286,3 @@ To annotate a registered cluster,
1. Click **Save**.

**Result:** The annotation does not give the capabilities to the cluster, but it does indicate to Rancher that the cluster has those capabilities.
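
Equivalently, such an annotation could be applied with kubectl against the Rancher management cluster; this is a hedged sketch rather than a documented route, and the cluster ID and capability key shown are placeholders.

    # Annotate the downstream cluster's management object (run against the
    # Rancher management cluster; the ID and key/value here are placeholders):
    kubectl annotate clusters.management.cattle.io c-xxxxx \
      "capabilities/pspEnabled=true"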
