Using --cluster-init for server node when cluster created with config file #1004
Unanswered
nwithers-ecr asked this question in Q&A
Replies: 2 comments
-
Well, I kept trying to do it on the command line by creating a cluster with zero server nodes and then adding them afterwards. Enabling it through the config file works fine.
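(A minimal sketch of the config-file form referred to here, assuming the k3d.io/v1alpha4 simple-config schema; the full working example appears in the reply below:)

options:
  k3s:
    extraArgs:
      - arg: --cluster-init   # passed only to the node(s) matched by the filter below
        nodeFilters:
          - server:0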
-
Hi @nwithers-ecr, thanks for starting this discussion!

$ k3d cluster create --k3s-arg "--cluster-init@server:0" multiserver
INFO[0000] Loadbalancer image set from env var $K3D_IMAGE_LOADBALANCER: rancher/k3d-proxy:5.3.0
INFO[0000] Loadbalancer image set from env var $K3D_IMAGE_LOADBALANCER: rancher/k3d-proxy:5.3.0
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-multiserver'
INFO[0000] Created image volume k3d-multiserver-images
INFO[0000] Starting new tools node...
INFO[0000] Tools image set from env var $K3D_IMAGE_TOOLS: rancher/k3d-tools:5.3.0
INFO[0001] Creating node 'k3d-multiserver-server-0'
INFO[0001] Pulling image 'rancher/k3d-tools:5.3.0'
INFO[0003] Pulling image 'docker.io/rancher/k3s:v1.22.7-k3s1'
INFO[0003] Starting Node 'k3d-multiserver-tools'
INFO[0007] Creating LoadBalancer 'k3d-multiserver-serverlb'
INFO[0008] Pulling image 'rancher/k3d-proxy:5.3.0'
INFO[0011] Using the k3d-tools node to gather environment information
INFO[0011] HostIP: using network gateway 172.18.0.1 address
INFO[0011] Starting cluster 'multiserver'
INFO[0011] Starting servers...
INFO[0011] Starting Node 'k3d-multiserver-server-0'
INFO[0019] All agents already running.
INFO[0019] Starting helpers...
INFO[0019] Starting Node 'k3d-multiserver-serverlb'
INFO[0026] Injecting records for hostAliases (incl. host.k3d.internal) and for 2 network members into CoreDNS configmap...
INFO[0028] Cluster 'multiserver' created successfully!
INFO[0028] You can now use it like this:
kubectl cluster-info
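(A note on the flag syntax used above, assuming k3d v5's node-filter convention: --k3s-arg takes the form "<k3s-flag>@<nodefilter>", so the argument is passed only to the nodes matched by the filter. A minimal sketch with a different, purely illustrative flag and filter:)

# Pass a k3s flag to every server node via the server:* filter;
# server:0 (as above) targets only the first server.
$ k3d cluster create multiserver --servers 3 --k3s-arg "--tls-san=127.0.0.1@server:*"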
$ k3d node create --trace anotherserver --role server --cluster multiserver
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock
DEBU[0000] Runtime Info:
&{Name:docker Endpoint:/var/run/docker.sock Version:20.10.13 OSType:linux OS:Pop!_OS 21.10 Arch:x86_64 CgroupVersion:2 CgroupDriver:systemd Filesystem:extfs}
INFO[0000] Adding 1 node(s) to the runtime local cluster 'multiserver'...
TRAC[0000] TranslateContainerDetailsToNode: Checking for default object label app=k3d on container /k3d-multiserver-serverlb
TRAC[0000] TranslateContainerDetailsToNode: Checking for default object label app=k3d on container /k3d-multiserver-server-0
TRAC[0000] Reading path /etc/confd/values.yaml from node k3d-multiserver-serverlb...
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock
INFO[0000] Using the k3d-tools node to gather environment information
INFO[0000] Starting new tools node...
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock
INFO[0000] Tools image set from env var $K3D_IMAGE_TOOLS: rancher/k3d-tools:5.3.0
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock
DEBU[0000] Detected CgroupV2, enabling custom entrypoint (disable by setting K3D_FIX_CGROUPV2=false)
TRAC[0000] Creating node from spec
&{Name:k3d-multiserver-tools Role:noRole Image:rancher/k3d-tools:5.3.0 Volumes:[k3d-multiserver-images:/k3d/images /var/run/docker.sock:/var/run/docker.sock] Env:[] Cmd:[] Args:[noop] Ports:map[] Restart:false Created: HostPidMode:false RuntimeLabels:map[app:k3d k3d.cluster:multiserver k3d.version:v5.3.0-13-ga24bda67] K3sNodeLabels:map[] Networks:[k3d-multiserver] ExtraHosts:[host.k3d.internal:host-gateway] ServerOpts:{IsInit:false KubeAPI:<nil>} AgentOpts:{} GPURequest: Memory: State:{Running:false Status: Started:} IP:{IP:zero IP Static:false} HookActions:[]}
TRAC[0000] Creating docker container with translated config
&{ContainerConfig:{Hostname:k3d-multiserver-tools Domainname: User: AttachStdin:false AttachStdout:false AttachStderr:false ExposedPorts:map[] Tty:false OpenStdin:false StdinOnce:false Env:[K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml] Cmd:[noop] Healthcheck:<nil> ArgsEscaped:false Image:rancher/k3d-tools:5.3.0 Volumes:map[] WorkingDir: Entrypoint:[] NetworkDisabled:false MacAddress: OnBuild:[] Labels:map[app:k3d k3d.cluster:multiserver k3d.role:noRole k3d.version:v5.3.0-13-ga24bda67] StopSignal: StopTimeout:<nil> Shell:[]} HostConfig:{Binds:[k3d-multiserver-images:/k3d/images /var/run/docker.sock:/var/run/docker.sock] ContainerIDFile: LogConfig:{Type: Config:map[]} NetworkMode:bridge PortBindings:map[] RestartPolicy:{Name: MaximumRetryCount:0} AutoRemove:false VolumeDriver: VolumesFrom:[] CapAdd:[] CapDrop:[] CgroupnsMode: DNS:[] DNSOptions:[] DNSSearch:[] ExtraHosts:[host.k3d.internal:host-gateway] GroupAdd:[] IpcMode: Cgroup: Links:[] OomScoreAdj:0 PidMode: Privileged:true PublishAllPorts:false ReadonlyRootfs:false SecurityOpt:[] StorageOpt:map[] Tmpfs:map[/run: /var/run:] UTSMode: UsernsMode: ShmSize:0 Sysctls:map[] Runtime: ConsoleSize:[0 0] Isolation: Resources:{CPUShares:0 Memory:0 NanoCPUs:0 CgroupParent: BlkioWeight:0 BlkioWeightDevice:[] BlkioDeviceReadBps:[] BlkioDeviceWriteBps:[] BlkioDeviceReadIOps:[] BlkioDeviceWriteIOps:[] CPUPeriod:0 CPUQuota:0 CPURealtimePeriod:0 CPURealtimeRuntime:0 CpusetCpus: CpusetMems: Devices:[] DeviceCgroupRules:[] DeviceRequests:[] KernelMemory:0 KernelMemoryTCP:0 MemoryReservation:0 MemorySwap:0 MemorySwappiness:<nil> OomKillDisable:<nil> PidsLimit:<nil> Ulimits:[] CPUCount:0 CPUPercent:0 IOMaximumIOps:0 IOMaximumBandwidth:0} Mounts:[] MaskedPaths:[] ReadonlyPaths:[] Init:0xc000291e4f} NetworkingConfig:{EndpointsConfig:map[k3d-multiserver:0xc000378600]}}
DEBU[0000] Created container k3d-multiserver-tools (ID: 4f30fdc95352e6e6e4c292f96bd8d989d5bdc1fc438f172e995397ac06ed0572)
DEBU[0000] Node k3d-multiserver-tools Start Time: 2022-03-15 13:16:49.767503616 +0100 CET m=+0.145329163
TRAC[0000] Starting node 'k3d-multiserver-tools'
INFO[0000] Starting Node 'k3d-multiserver-tools'
DEBU[0000] Truncated 2022-03-15 12:16:50.096398872 +0000 UTC to 2022-03-15 12:16:50 +0000 UTC
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock
TRAC[0000] GOOS: linux / Runtime OS: linux (Pop!_OS 21.10)
INFO[0000] HostIP: using network gateway 172.18.0.1 address
DEBU[0000] Deleting node k3d-multiserver-tools ...
TRAC[0000] TranslateContainerDetailsToNode: Checking for default object label app=k3d on container /k3d-multiserver-server-0
TRAC[0000] Dropping '--cluster-init' from source node's cmd
DEBU[0000] Adding node k3d-anotherserver-0 to cluster multiserver based on existing (sanitized) node k3d-multiserver-server-0
TRAC[0000] Sanitized Source Node: &{Name:k3d-multiserver-server-0 Role:server Image:sha256:83db45fbc39149004e78e361222f9786d39de8e48496af6a0673a96d61fb8466 Volumes:[k3d-multiserver-images:/k3d/images] Env:[K3S_TOKEN=GDdamuZgRwTGACJMmJyh K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/bin/aux CRI_CONFIG_FILE=/var/lib/rancher/k3s/agent/etc/crictl.yaml] Cmd:[server --tls-san 0.0.0.0] Args:[] Ports:map[] Restart:true Created:2022-03-15T12:16:04.546550176Z HostPidMode:false RuntimeLabels:map[k3d.cluster:multiserver k3d.cluster.imageVolume:k3d-multiserver-images k3d.cluster.network:k3d-multiserver k3d.cluster.network.external:false k3d.cluster.network.id:5eac41e7f8ea691725f922b2eef75fcd28b72449bc3501096d52e1af01ea230a k3d.cluster.network.iprange:172.18.0.0/16 k3d.cluster.token:GDdamuZgRwTGACJMmJyh k3d.cluster.url:https://k3d-multiserver-server-0:6443 k3d.role:server k3d.server.api.host:0.0.0.0 k3d.server.api.hostIP:0.0.0.0 k3d.server.api.port:39503 k3d.version:v5.3.0-13-ga24bda67] K3sNodeLabels:map[] Networks:[k3d-multiserver] ExtraHosts:[] ServerOpts:{IsInit:false KubeAPI:0xc000520640} AgentOpts:{} GPURequest: Memory: State:{Running:true Status:running Started:} IP:{IP:172.18.0.3 Static:false} HookActions:[]}
New Node: &{Name:k3d-anotherserver-0 Role:server Image:docker.io/rancher/k3s:v1.22.7-k3s1 Volumes:[] Env:[] Cmd:[] Args:[] Ports:map[] Restart:true Created: HostPidMode:false RuntimeLabels:map[k3d.role:server] K3sNodeLabels:map[] Networks:[k3d-multiserver] ExtraHosts:[] ServerOpts:{IsInit:false KubeAPI:<nil>} AgentOpts:{} GPURequest: Memory: State:{Running:false Status: Started:} IP:{IP:zero IP Static:false} HookActions:[]}
TRAC[0000] Reading path /etc/rancher/k3s/registries.yaml from node k3d-multiserver-server-0...
TRAC[0000] Resulting node &{Name:k3d-anotherserver-0 Role:server Image:docker.io/rancher/k3s:v1.22.7-k3s1 Volumes:[k3d-multiserver-images:/k3d/images] Env:[K3S_TOKEN=GDdamuZgRwTGACJMmJyh K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/bin/aux CRI_CONFIG_FILE=/var/lib/rancher/k3s/agent/etc/crictl.yaml] Cmd:[server --tls-san 0.0.0.0] Args:[] Ports:map[] Restart:true Created:2022-03-15T12:16:04.546550176Z HostPidMode:false RuntimeLabels:map[k3d.cluster:multiserver k3d.cluster.imageVolume:k3d-multiserver-images k3d.cluster.network:k3d-multiserver k3d.cluster.network.external:false k3d.cluster.network.id:5eac41e7f8ea691725f922b2eef75fcd28b72449bc3501096d52e1af01ea230a k3d.cluster.network.iprange:172.18.0.0/16 k3d.cluster.token:GDdamuZgRwTGACJMmJyh k3d.cluster.url:https://k3d-multiserver-server-0:6443 k3d.role:server k3d.server.api.host:0.0.0.0 k3d.server.api.hostIP:0.0.0.0 k3d.server.api.port:39503 k3d.version:v5.3.0-13-ga24bda67] K3sNodeLabels:map[] Networks:[k3d-multiserver] ExtraHosts:[] ServerOpts:{IsInit:false KubeAPI:0xc000520640} AgentOpts:{} GPURequest: Memory: State:{Running:true Status:running Started:} IP:{IP:zero IP Static:false} HookActions:[]}
TRAC[0000] Creating node from spec
&{Name:k3d-anotherserver-0 Role:server Image:docker.io/rancher/k3s:v1.22.7-k3s1 Volumes:[k3d-multiserver-images:/k3d/images] Env:[K3S_TOKEN=GDdamuZgRwTGACJMmJyh K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/bin/aux CRI_CONFIG_FILE=/var/lib/rancher/k3s/agent/etc/crictl.yaml K3S_URL=https://k3d-multiserver-server-0:6443] Cmd:[server --tls-san 0.0.0.0] Args:[] Ports:map[] Restart:true Created:2022-03-15T12:16:04.546550176Z HostPidMode:false RuntimeLabels:map[k3d.cluster:multiserver k3d.cluster.imageVolume:k3d-multiserver-images k3d.cluster.network:k3d-multiserver k3d.cluster.network.external:false k3d.cluster.network.id:5eac41e7f8ea691725f922b2eef75fcd28b72449bc3501096d52e1af01ea230a k3d.cluster.network.iprange:172.18.0.0/16 k3d.cluster.token:GDdamuZgRwTGACJMmJyh k3d.cluster.url:https://k3d-multiserver-server-0:6443 k3d.role:server k3d.server.api.host:0.0.0.0 k3d.server.api.hostIP:0.0.0.0 k3d.server.api.port:39503 k3d.version:v5.3.0-13-ga24bda67] K3sNodeLabels:map[] Networks:[k3d-multiserver] ExtraHosts:[] ServerOpts:{IsInit:false KubeAPI:0xc000520640} AgentOpts:{} GPURequest: Memory: State:{Running:false Status: Started:} IP:{IP:zero IP Static:false} HookActions:[]}
TRAC[0000] Creating docker container with translated config
&{ContainerConfig:{Hostname:k3d-anotherserver-0 Domainname: User: AttachStdin:false AttachStdout:false AttachStderr:false ExposedPorts:map[] Tty:false OpenStdin:false StdinOnce:false Env:[K3S_TOKEN=GDdamuZgRwTGACJMmJyh K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/bin/aux CRI_CONFIG_FILE=/var/lib/rancher/k3s/agent/etc/crictl.yaml K3S_URL=https://k3d-multiserver-server-0:6443 K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml] Cmd:[server --tls-san 0.0.0.0 --tls-san 0.0.0.0] Healthcheck:<nil> ArgsEscaped:false Image:docker.io/rancher/k3s:v1.22.7-k3s1 Volumes:map[] WorkingDir: Entrypoint:[/bin/k3d-entrypoint.sh] NetworkDisabled:false MacAddress: OnBuild:[] Labels:map[app:k3d k3d.cluster:multiserver k3d.cluster.imageVolume:k3d-multiserver-images k3d.cluster.network:k3d-multiserver k3d.cluster.network.external:false k3d.cluster.network.id:5eac41e7f8ea691725f922b2eef75fcd28b72449bc3501096d52e1af01ea230a k3d.cluster.network.iprange:172.18.0.0/16 k3d.cluster.token:GDdamuZgRwTGACJMmJyh k3d.cluster.url:https://k3d-multiserver-server-0:6443 k3d.role:server k3d.server.api.host:0.0.0.0 k3d.server.api.hostIP:0.0.0.0 k3d.server.api.port:39503 k3d.version:v5.3.0-13-ga24bda67] StopSignal: StopTimeout:<nil> Shell:[]} HostConfig:{Binds:[k3d-multiserver-images:/k3d/images] ContainerIDFile: LogConfig:{Type: Config:map[]} NetworkMode:bridge PortBindings:map[] RestartPolicy:{Name:unless-stopped MaximumRetryCount:0} AutoRemove:false VolumeDriver: VolumesFrom:[] CapAdd:[] CapDrop:[] CgroupnsMode: DNS:[] DNSOptions:[] DNSSearch:[] ExtraHosts:[] GroupAdd:[] IpcMode: Cgroup: Links:[] OomScoreAdj:0 PidMode: Privileged:true PublishAllPorts:false ReadonlyRootfs:false SecurityOpt:[] StorageOpt:map[] Tmpfs:map[/run: /var/run:] UTSMode: UsernsMode: ShmSize:0 Sysctls:map[] Runtime: ConsoleSize:[0 0] Isolation: Resources:{CPUShares:0 Memory:0 NanoCPUs:0 CgroupParent: BlkioWeight:0 BlkioWeightDevice:[] BlkioDeviceReadBps:[] BlkioDeviceWriteBps:[] BlkioDeviceReadIOps:[] BlkioDeviceWriteIOps:[] CPUPeriod:0 CPUQuota:0 CPURealtimePeriod:0 CPURealtimeRuntime:0 CpusetCpus: CpusetMems: Devices:[] DeviceCgroupRules:[] DeviceRequests:[] KernelMemory:0 KernelMemoryTCP:0 MemoryReservation:0 MemorySwap:0 MemorySwappiness:<nil> OomKillDisable:<nil> PidsLimit:<nil> Ulimits:[] CPUCount:0 CPUPercent:0 IOMaximumIOps:0 IOMaximumBandwidth:0} Mounts:[] MaskedPaths:[] ReadonlyPaths:[] Init:0xc0002d300f} NetworkingConfig:{EndpointsConfig:map[k3d-multiserver:0xc0001e0240]}}
DEBU[0000] Created container k3d-anotherserver-0 (ID: c5ae09c30cfb176e1200c606153d550c131dd1a9440b5b119ae7974633a0bb0f)
DEBU[0000] >>> enabling cgroupsv2 magic
DEBU[0000] Node k3d-anotherserver-0 Start Time: 2022-03-15 13:16:50.197535024 +0100 CET m=+0.575360571
TRAC[0000] Node k3d-anotherserver-0: Executing preStartAction 'WriteFileAction': [WriteFileAction] Writing 451 bytes to /bin/k3d-entrypoint.sh (mode -rwxr--r--): Write custom k3d entrypoint script (that powers the magic fixes)
TRAC[0000] Node k3d-anotherserver-0: Executing preStartAction 'WriteFileAction': [WriteFileAction] Writing 1325 bytes to /bin/k3d-entrypoint-cgroupv2.sh (mode -rwxr--r--): Write entrypoint script for CGroupV2 fix
TRAC[0000] Starting node 'k3d-anotherserver-0'
INFO[0000] Starting Node 'k3d-anotherserver-0'
TRAC[0000] [Docker] Deleted Container k3d-multiserver-tools
DEBU[0001] Truncated 2022-03-15 12:16:50.800417232 +0000 UTC to 2022-03-15 12:16:50 +0000 UTC
DEBU[0001] Waiting for node k3d-anotherserver-0 to get ready (Log: 'k3s is up and running')
TRAC[0001] NodeWaitForLogMessage: Node 'k3d-anotherserver-0' waiting for log message 'k3s is up and running' since '2022-03-15 12:16:50 +0000 UTC'
TRAC[0012] Found target message `k3s is up and running` in log line `time="2022-03-15T12:17:01Z" level=info msg="k3s is up and running"`
DEBU[0012] Finished waiting for log message 'k3s is up and running' from node 'k3d-anotherserver-0'
TRAC[0012] Node k3d-anotherserver-0: Executing postStartAction: [ExecAction] Executing `[sh -c echo '172.18.0.1 host.k3d.internal' >> /etc/hosts]` (0 retries): Inject /etc/hosts record for host.k3d.internal
TRAC[0012] ExecAction ([sh -c echo '172.18.0.1 host.k3d.internal' >> /etc/hosts] in k3d-anotherserver-0) try 1/1
DEBU[0012] Executing command '[sh -c echo '172.18.0.1 host.k3d.internal' >> /etc/hosts]' in node 'k3d-anotherserver-0'
TRAC[0012] Exec process '[sh -c echo '172.18.0.1 host.k3d.internal' >> /etc/hosts]' still running in node 'k3d-anotherserver-0'.. sleeping for 1 second...
DEBU[0013] Exec process in node 'k3d-anotherserver-0' exited with '0'
INFO[0013] Updating loadbalancer config to include new server node(s)
TRAC[0013] TranslateContainerDetailsToNode: Checking for default object label app=k3d on container /k3d-anotherserver-0
TRAC[0013] TranslateContainerDetailsToNode: Checking for default object label app=k3d on container /k3d-multiserver-serverlb
TRAC[0013] TranslateContainerDetailsToNode: Checking for default object label app=k3d on container /k3d-multiserver-server-0
TRAC[0013] Reading path /etc/confd/values.yaml from node k3d-multiserver-serverlb...
TRAC[0013] Current loadbalancer config:
{Ports:map[6443.tcp:[k3d-multiserver-server-0]] Settings:{WorkerConnections:1024 DefaultProxyTimeout:0}}
TRAC[0013] New loadbalancer config:
{Ports:map[6443.tcp:[k3d-multiserver-server-0]] Settings:{WorkerConnections:1024 DefaultProxyTimeout:0}}
DEBU[0013] Updating the loadbalancer with this diff: [Ports.map[6443.tcp].slice[1]: <no value> != k3d-anotherserver-0 Settings.WorkerConnections: 1024 != 1026]
DEBU[0013] Writing lb config:
ports:
6443.tcp:
- k3d-multiserver-server-0
- k3d-anotherserver-0
settings:
workerConnections: 1026
TRAC[0013] NodeWaitForLogMessage: Node 'k3d-multiserver-serverlb' waiting for log message 'start worker processes' since '2022-03-15 12:17:03 +0000 UTC'
TRAC[0013] Found target message `start worker processes` in log line `2022/03/15 12:17:03 [notice] 32#32: start worker processes`
DEBU[0013] Finished waiting for log message 'start worker processes' from node 'k3d-multiserver-serverlb'
INFO[0013] Successfully configured loadbalancer k3d-multiserver-serverlb!
INFO[0014] Successfully created 1 node(s)!
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k3d-anotherserver-0 Ready control-plane,etcd,master 12s v1.22.7+k3s1
k3d-multiserver-server-0 Ready control-plane,etcd,master 58s v1.22.7+k3s1
This worked just fine 👍

UPDATE: Also with the config file:

$ cat k3d.yaml
apiVersion: k3d.io/v1alpha4
kind: Simple
metadata:
  name: multiserver
servers: 1
agents: 0
options:
  k3s:
    extraArgs:
      - arg: --cluster-init
        nodeFilters:
          - server:0
$ k3d cluster create -c k3d.yaml
INFO[0000] Using config file k3d.yaml (k3d.io/v1alpha4#simple)
INFO[0000] Loadbalancer image set from env var $K3D_IMAGE_LOADBALANCER: rancher/k3d-proxy:5.3.0
INFO[0000] Loadbalancer image set from env var $K3D_IMAGE_LOADBALANCER: rancher/k3d-proxy:5.3.0
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-multiserver'
INFO[0000] Created image volume k3d-multiserver-images
INFO[0000] Starting new tools node...
INFO[0000] Tools image set from env var $K3D_IMAGE_TOOLS: rancher/k3d-tools:5.3.0
INFO[0000] Starting Node 'k3d-multiserver-tools'
INFO[0001] Creating node 'k3d-multiserver-server-0'
INFO[0001] Creating LoadBalancer 'k3d-multiserver-serverlb'
INFO[0001] Using the k3d-tools node to gather environment information
INFO[0001] HostIP: using network gateway 172.20.0.1 address
INFO[0001] Starting cluster 'multiserver'
INFO[0001] Starting servers...
INFO[0001] Starting Node 'k3d-multiserver-server-0'
INFO[0006] All agents already running.
INFO[0006] Starting helpers...
INFO[0006] Starting Node 'k3d-multiserver-serverlb'
INFO[0013] Injecting records for hostAliases (incl. host.k3d.internal) and for 2 network members into CoreDNS configmap...
INFO[0015] Cluster 'multiserver' created successfully!
INFO[0015] You can now use it like this:
kubectl cluster-info
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k3d-multiserver-server-0 NotReady control-plane,etcd,master 6s v1.22.7+k3s1
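(An extra check, not in the original logs: one way to confirm that --cluster-init actually landed on the server's command line is to inspect the container created above; a sketch using plain docker, with the container name taken from the log output:)

# Print the command the k3s server container was started with;
# it should contain --cluster-init if the config-file arg was applied.
$ docker inspect k3d-multiserver-server-0 --format '{{.Config.Cmd}}'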
$ k3d node create --trace anotherserver --role server --cluster multiserver
# ... truncated ...
INFO[0014] Successfully created 1 node(s)!
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k3d-anotherserver-0 NotReady control-plane,etcd,master 8s v1.22.7+k3s1
k3d-multiserver-server-0 Ready control-plane,etcd,master 39s v1.22.7+k3s1
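(Not part of the original reply: the new server can also be verified from the k3d side; a minimal sketch assuming the k3d v5 CLI:)

# List all k3d-managed nodes with their roles and cluster membership.
$ k3d node list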
-
Ran into the issue of adding more server nodes to a single control-plane cluster, as detailed here: https://k3d.io/v5.3.0/usage/multiserver/#adding-server-nodes-to-a-running-cluster
I do want my clusters to be able to have new control-plane nodes added at runtime in case the first one goes down, but to mirror our actual environment we still want to start with only a single node.
I don't see any way in the config options (https://k3d.io/v5.3.0/usage/configfile/#all-options-example) to specify that the server node should be started with --cluster-init. Here is my current config