Trouble with AWS plugin and Kubernetes with EKS
Greetings. I'm seeing a problem with the 1Password AWS shell plugin and EKS when using kubeconfig.
I have a ~/.aws/config file that looks something like this:
[profile nonprod]
aws_account_id = 99999999999
role_arn = arn:aws:iam::99999999999:role/Developer
region = us-east-1
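For comparison, a plugin-free assume-role profile would normally pair role_arn with a source_profile or credential_source entry telling the CLI where the base credentials live; with the shell plugin, 1Password is supposed to inject those instead. A conventional version might look like this (the "default" base profile here is hypothetical):

[profile nonprod]
role_arn = arn:aws:iam::99999999999:role/Developer
source_profile = default
region = us-east-1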
The 1Password AWS plugin seems to be configured correctly according to op plugin inspect aws, and I can use this "nonprod" profile to run AWS CLI commands.
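As far as I understand it, the plugin works by aliasing aws in my interactive shell, something like this (the exact file 1Password sources may vary by setup):

# Sourced into the shell, e.g. from ~/.config/op/plugins.sh
alias aws="op plugin run -- aws"

# An interactive invocation therefore goes through 1Password:
aws sts get-caller-identity --profile nonprod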
I am able to generate the Kubernetes configuration using the AWS CLI:
$ aws eks update-kubeconfig --name EKS-Non-Prod --profile nonprod
Updated context arn:aws:eks:us-east-1:99999999999:cluster/EKS-Non-Prod in /Users/mreardon/.kube/config
This creates a Kubernetes configuration that looks correct...
- name: arn:aws:eks:us-east-1:99999999999:cluster/EKS-Non-Prod
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
        - --region
        - us-east-1
        - eks
        - get-token
        - --cluster-name
        - EKS-Non-Prod
        - --output
        - json
      command: aws
      env:
        - name: AWS_PROFILE
          value: nonprod
In fact, if I make the configured get-token call manually, it works fine:
$ aws --region us-east-1 eks get-token --cluster-name EKS-Non-Prod --profile nonprod
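For reference, it prints an ExecCredential document roughly of this shape (values truncated):

{
    "kind": "ExecCredential",
    "apiVersion": "client.authentication.k8s.io/v1beta1",
    "spec": {},
    "status": {
        "expirationTimestamp": "...",
        "token": "k8s-aws-v1...."
    }
}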
However, when kubectl commands are run, I get the following error:
$ kubectl get pods -n dev
Partial credentials found in assume-role, missing: source_profile or credential_source
E0304 15:08:42.659372 17814 memcache.go:265] couldn't get current server API group list: Get "https://zzzzzzzzz.us-east-1.eks.amazonaws.com/api?timeout=32s": getting credentials: exec: executable aws failed with exit code 255
Unable to connect to the server: getting credentials: exec: executable aws failed with exit code 255
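One detail that may matter: kubectl invokes the aws executable directly via its exec credential plugin, not through an interactive shell, so shell aliases never apply. If that's the mechanism, bypassing the alias by hand should reproduce the failure (my guess, untested):

# "command" skips shell aliases, mimicking how kubectl launches the binary;
# I'd expect the same "Partial credentials found" error here:
$ command aws --region us-east-1 eks get-token --cluster-name EKS-Non-Prod --profile nonprod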
I'm guessing that the 1Password AWS plugin is not being used when the aws eks get-token command is run. Any ideas?
1Password Version: 1Password for Mac 8.10.28 (81028011)
Extension Version: Not Provided
OS Version: Not Provided
Browser: Not Provided
Comments
-
One follow-up...
It occurred to me to manually change the command defined in ~/.kube/config to use op. That seems to work around the problem:

- name: arn:aws:eks:us-east-1:99999999999:cluster/EKS-Non-Prod
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
        - plugin
        - run
        - --
        - aws
        - --region
        - us-east-1
        - eks
        - get-token
        - --cluster-name
        - EKS-Non-Prod
        - --output
        - json
      command: op
      env:
        - name: AWS_PROFILE
          value: nonprod
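One caveat with this workaround: as far as I can tell, re-running aws eks update-kubeconfig regenerates the user entry for the cluster, so the edit has to be reapplied afterwards:

# Regenerating the kubeconfig appears to revert the exec stanza to "command: aws":
$ aws eks update-kubeconfig --name EKS-Non-Prod --profile nonprod
# ...then the op edit above must be made again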
-
1Password representatives, can we get a nicer solution from your side, or should we ask the Kubernetes folks to honor shell aliases?