Kubernetes Dashboard access using config file: "Not enough data to create auth info structure"
Kubernetes Problem Overview
I am trying to access the Kubernetes Dashboard using the config file. On the authentication screen, when I select the config file, it gives "Not enough data to create auth info structure." But the same config file works for the kubectl command.
Here is my config file:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://kubemaster:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
Any help to resolve this issue?
Thanks SR
Kubernetes Solutions
Solution 1 - Kubernetes
After looking at this answer https://stackoverflow.com/questions/46664104/how-to-sign-in-kubernetes-dashboard and the source code, I figured out the kubeconfig authentication. After the kubeadm install on the master server, get the default service account token and add it to the config file, then use that config file to authenticate. You can use this script to add the token:
#!/bin/bash
TOKEN=$(kubectl -n kube-system describe secret default| awk '$1=="token:"{print $2}')
kubectl config set-credentials kubernetes-admin --token="${TOKEN}"
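The awk filter in that script can be checked locally without a cluster. This is a sketch against simulated `kubectl describe secret` output; the file path and token value are placeholders, not real credentials:

```shell
# Simulated `kubectl -n kube-system describe secret default` output
# (all values are placeholders, not real credentials)
cat > /tmp/describe_secret.txt <<'EOF'
Name:         default-token-abc12
Namespace:    kube-system

Data
====
ca.crt:     1066 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiJ9.placeholder
EOF

# Same filter as the script above: print field 2 of the line
# whose first field is exactly "token:"
TOKEN=$(awk '$1=="token:"{print $2}' /tmp/describe_secret.txt)
echo "$TOKEN"
```

Matching on `$1=="token:"` (rather than grepping for "token") avoids accidentally picking up lines like `ca.crt:` or annotation text that merely contains the word.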
Your config file should then look like this:
kubectl config view | cut -c1-50 | tail -10
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.ey
Solution 2 - Kubernetes
Only the authentication options specified by the --authentication-mode flag are supported in a kubeconfig file.
You can authenticate with a token (any token in the kube-system namespace):
$ kubectl get secrets -n kube-system
$ kubectl get secret $SECRET_NAME -n=kube-system -o json | jq -r '.data["token"]' | base64 -d > user_token.txt
and authenticate with that token (see the user_token.txt file).
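Kubernetes Secrets store their data base64-encoded, which is why the pipeline above ends in `base64 -d`. A cluster-free round-trip sketch with a placeholder token value:

```shell
# The jq pipeline above reads .data["token"] (base64) and decodes it.
# Round-trip demonstration with a hypothetical token (no cluster needed):
TOKEN='eyJhbGciOiJSUzI1NiJ9.placeholder'       # hypothetical token value
ENCODED=$(printf '%s' "$TOKEN" | base64)       # as stored in .data["token"]
DECODED=$(printf '%s' "$ENCODED" | base64 -d)  # what the pipeline writes to user_token.txt
echo "$DECODED"
```

Note the use of `printf '%s'` rather than `echo`, so no trailing newline sneaks into the encoded value.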
Solution 3 - Kubernetes
Two things are going on here:
- the Kubernetes Dashboard application needs an authentication token,
- and this authentication token must be linked to an account with sufficient privileges.
The usual way to deploy the Dashboard application is just to kubectl apply a YAML file pulled from the configuration recommended at the GitHub project for the dashboard (/src/deploy/recommended/kubernetes-dashboard.yaml), then to run kubectl proxy and access the dashboard through the locally mapped port 8001.
However, this default configuration is generic and minimal. It just maps a role binding with minimal privileges. And, especially on DigitalOcean, the kubeconfig file provided when provisioning the cluster lacks the actual token, which is necessary to log into the dashboard.
Thus, to fix these shortcomings, we need to ensure there is an account with a binding to the cluster-admin ClusterRole in the kube-system namespace. The above-mentioned default setup just provides a binding to kubernetes-dashboard-minimal.
We can fix that by explicitly deploying:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
And then we also need to get the token for this ServiceAccount:
- kubectl get serviceaccount -n kube-system will list all service accounts; check that the one you created is present.
- kubectl get secrets -n kube-system should list a secret for this account.
- kubectl describe secret -n kube-system admin-user-token-XXXXXX will give you the information about the token.
The other answers to this question provide ample hints on how this access could be scripted conveniently (e.g. using awk, using grep, using kubectl get with -o=json and piping to jq, or using -o=jsonpath).
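For instance, the `-o=json` route can be sketched without a cluster against a trimmed, hypothetical Secret manifest (using sed here in place of jq, which may not be installed):

```shell
# Trimmed, hypothetical Secret as `kubectl get secret ... -o json` would emit it
cat > /tmp/secret.json <<'EOF'
{
  "apiVersion": "v1",
  "kind": "Secret",
  "data": {
    "token": "cGxhY2Vob2xkZXItdG9rZW4="
  }
}
EOF

# Extract .data.token and base64-decode it, mirroring:
#   kubectl get secret ... -o json | jq -r '.data["token"]' | base64 -d
TOKEN=$(sed -n 's/.*"token": "\([^"]*\)".*/\1/p' /tmp/secret.json | base64 -d)
echo "$TOKEN"
```

The sed version is brittle compared to jq or `-o=jsonpath='{.data.token}'`, which parse the JSON properly instead of pattern-matching it; it is shown only to make the decoding step visible.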
You can then either:
- store this token in a text file and upload that, or
- edit your kubeconfig file and paste the token into the admin user entry provided there.
Solution 4 - Kubernetes
If you want to get past the dashboard's authentication prompt and then be able to do admin things on the dashboard, I recommend this: https://github.com/kubernetes/dashboard/wiki/Creating-sample-user.
Solution 5 - Kubernetes
1 - Assuming one has followed the directions to set up the dashboard here: https://docs.aws.amazon.com/eks/latest/userguide/dashboard-tutorial.html
2 - And your normal kubectl access works from the command line (i.e. kubectl get services).
3 - And you are able to log in manually to the Dashboard with the token (from kubectl -n kube-system describe secret ...) by using copy/paste.
4 - But now you want to use the "Kubeconfig" option (instead of "Token") to log in to the Dashboard, for simplicity.
Solution:
- Find your user in the config file that is used to access the cluster.
- The user is "kubernetes-admin" in the originally posted question.
- Add a line with "token:".
- Don't forget this is YAML, so use spaces, not tabs.
Here is what it should look like...
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://kubemaster:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    token: PUT_YOUR_TOKEN_HERE_THAT_YOU_USED_TO_MANUALLY_LOGIN
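Rather than editing by hand, the token line can be spliced in with sed. This is a sketch against a minimal, hypothetical users section (GNU sed; the file path and token are placeholders):

```shell
# Minimal hypothetical kubeconfig "users" fragment
cat > /tmp/kubeconfig_users.yaml <<'EOF'
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
EOF

# Append "token: ..." below client-key-data, reusing its indentation
# (YAML requires spaces, not tabs)
sed -i 's/^\( *\)client-key-data: REDACTED$/&\n\1token: PUT_YOUR_TOKEN_HERE/' /tmp/kubeconfig_users.yaml
cat /tmp/kubeconfig_users.yaml
```

Capturing the existing indentation with `\( *\)` and reusing it as `\1` keeps the token line aligned with its siblings, which is exactly the YAML pitfall the answer warns about.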
Solution 6 - Kubernetes
For me, I realised I was following a slightly out-of-date tutorial for installing the dashboard (I was installing v1 of the dashboard, but v2 exists).
- First, go to the official source, which will tell you the correct, most up-to-date version to install.
- Then create an authentication token (RBAC).
Now you can use the token to log in to the dashboard.
Solution 7 - Kubernetes
Answers can be version-dependent. FYI, I am using K8s v1.20 and Dashboard v2.1.0. First I created a dashboard admin service account via the following dashboardsvcacct.yaml file:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kubernetes-dashboard
and applied it to the cluster via
#kubectl apply -f dashboardsvcacct.yaml
#serviceaccount/dashboard-admin created
Then I created a role binding to give the above service account cluster-admin access, instead of sharing the kubernetes-admin account. Done via the following dashboardrolebinding.yaml:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kubernetes-dashboard
and applied it to the cluster via
#kubectl apply -f dashboardrolebinding.yaml
#clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
Then I extracted the token from the newly created dashboard-admin account in the kubernetes-dashboard namespace via
#kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin | awk '{print $1}')
The output includes the token below. I have shortened the token for posting purposes. Note: do not include line breaks when copying/pasting from the terminal.
Name: dashboard-admin-token-9pzgf
Namespace: kubernetes-dashboard
Labels: <none>
Annotations: kubernetes.io/service-account.name: dashboard-admin
kubernetes.io/service-account.uid: 7efde521-60fd-40f3-9fe0-2097c123421c
Type: kubernetes.io/service-account-token
Data
====
token: eyJhbGciOiJSUzI1NiIsImtpZCI6Im1OVl<Shortened for posting>
ca.crt: 1066 bytes
namespace: 20 bytes
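The grep | awk stage of the command above just selects column 1 of the matching row. Here is a local sketch of that stage against simulated `kubectl get secret` output (all names are hypothetical):

```shell
# Simulated `kubectl -n kubernetes-dashboard get secret` output
cat > /tmp/secrets_list.txt <<'EOF'
NAME                          TYPE                                  DATA   AGE
dashboard-admin-token-9pzgf   kubernetes.io/service-account-token   3      5m
default-token-xyz99           kubernetes.io/service-account-token   3      10m
EOF

# Same filter as above: pick the dashboard-admin secret name from column 1
SECRET_NAME=$(grep dashboard-admin /tmp/secrets_list.txt | awk '{print $1}')
echo "$SECRET_NAME"
```

The random suffix (`-9pzgf` here) differs per cluster, which is why the command discovers the secret name with grep instead of hard-coding it.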
Copying and pasting the full token from the above output allowed me to access the dashboard via the Token option, but this is a pain to do every time.
The main question was about using the Config File option. The config file does not need to contain any X509 certificates, as those are not (and cannot be) used by the dashboard. Also, it is not secure to share out the kubernetes-admin config file in its entirety, since it includes both the certificate and the private key. So the config file needed to access the dashboard can be based on the kubernetes-admin config without the kubernetes-admin data, since only the cluster server API endpoint and the public CA cert data are needed. The rest of the file is information from the dashboard-admin service account, including the token. The config file needs to look like the below. Note that everything under the "clusters" section will be specific to your install; the "contexts" and "users" sections will be the same, except for the token for your install.
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tL<Shortened for posting>
    server: https://10.175.0.3:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: dashboard-admin
  name: dashboard-admin@kubernetes
current-context: dashboard-admin@kubernetes
kind: Config
preferences: {}
users:
- name: dashboard-admin
  user:
    token: eyJhbGciOiJSUzI1NiIsImtpZCI6Im1OVl<Same Token as above, Shortened for posting>
Pointing the dashboard UI to that config file allowed me to login.
Solution 8 - Kubernetes
This happened to me when I was using kubectl proxy to access the dashboard. I fixed it by accessing the minikube dashboard on the physical machine that was running minikube.
Solution 9 - Kubernetes
If you want to see the dashboard in action before going through a major investment in setting up security, here is the way I got things going quickly. I did this with v2.0.0-rc7:
- Install with the alternative method, which just sets things up a little less securely to start with.
- The ClusterRoleBinding that installs with this method needs to be replaced with this one. (You need to delete the existing one first with kubectl delete ..., then add it.)
- The second paragraph here documents the "skip method". Update your deployment to get that set up.
Now you can go to the web page and click "skip". Voila! All your keys are exposed with no password. Pray nobody gets hold of that link!
But wait, you say it's still too hard to get in? If you have a load balancer installed, here are two additional steps:
- kubectl -n kubernetes-dashboard edit service kubernetes-dashboard will allow you to change the service spec to type: LoadBalancer.
- If your load balancer is set up properly, kubectl -n kubernetes-dashboard describe service kubernetes-dashboard will now show you the IP address that it has kindly put your insecure dashboard on.
Now you have an insecure port with no password to easily browse your crown jewels. Enjoy!
Solution 10 - Kubernetes
There are two methods to provide access to Kubernetes resources:
- User
- Service account

User: create a user using a crt and key, then assign a role and bindings. However, you cannot access the dashboard with a user, as it is based on a cert/key.

Service account: create a namespace, service account, role, and rolebinding. Bind the role via the rolebinding, then assign it to the service account using the kubectl rolebinding command. Then get the secret from kubectl get secret; the secret name will start with the service account name, as a secret is also created whenever you create a service account. Copy only the token into the dashboard web GUI to gain access.

If you used an admin role and rolebinding, you will get full access; otherwise, only the access enabled in the role will be available in the dashboard.
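A minimal manifest along these lines might look like the following (all names here are hypothetical; unlike the ClusterRoleBinding examples in earlier solutions, a namespaced Role/RoleBinding limits what the dashboard token can see to a single namespace):

```yaml
# Hypothetical namespaced setup: a ServiceAccount that can only view
# pods and services in the "demo" namespace; the dashboard then shows
# only what this Role allows.
apiVersion: v1
kind: Namespace
metadata:
  name: demo
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: demo-viewer
  namespace: demo
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: demo-viewer
  namespace: demo
rules:
- apiGroups: [""]
  resources: ["pods", "services"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: demo-viewer
  namespace: demo
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: demo-viewer
subjects:
- kind: ServiceAccount
  name: demo-viewer
  namespace: demo
```

Apply it with kubectl apply -f, then fetch the token from the secret associated with the demo-viewer service account as described above.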