terraform "status 401: Each header's KID must match the KID of the public key" error
I have a Kubernetes cluster running on Google Cloud. This cluster has an op_connect_server up and running, and I am trying to use Terraform to create items in some specific vaults.
To be able to run it locally, I am port-forwarding port 8080 to my Kubernetes op_connect_server pod:

kubectl port-forward $(kubectl get pods -A | grep onepassword-connect | grep -v operator | awk '{print $2}') 8080:8080 -n tools
My Kubernetes cluster is a private one with a public address attached to it. When running locally, I access its public address; when running on GitLab, I access its private address (because my GitLab pipeline machine runs inside the Kubernetes cluster and has access to the private address — this works for other features).
When I run it locally, everything works well. The items are created on vault without any problems, and also during the terraform plan it can connect to the op_connect_server and check the items without any error.
On my Terraform provider for one_password, I am setting the token and the op_connect_server address.
When I run it on my pipeline (GitLab), I get the error: status 401: Authentication: (Invalid token signature), Each header's KID must match the KID of the public key.
This error happens during the plan step, when checking some onepassword_item. I tried to retrieve the same information using curl and I am able to do it, but for some reason it fails in Terraform.
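The curl check mentioned above can be sketched like this (the env var names and helper function are illustrative, not from my actual setup — substitute your own Connect host and token):

```shell
# Query the Connect API with the same host and token Terraform uses.
# OP_CONNECT_HOST and OP_CONNECT_TOKEN are assumed environment variables.
connect_get() {
  curl -sf \
    -H "Authorization: Bearer ${OP_CONNECT_TOKEN}" \
    "${OP_CONNECT_HOST}$1"
}

# Example: list the vaults this token can read.
# connect_get /v1/vaults
```

Running this from both the local machine and a GitLab job with the exact values the pipeline injects helps rule out variable-substitution differences between the two environments.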
I already checked/tried:
- Checked that all variables (token, op_connect_server address) are the same in both environments (local and GitLab)
- Used the same cluster endpoint (the public one) when running both locally and from GitLab
- Deleted the cluster and created/ran everything from the GitLab pipeline.
- -> The creation process works (op_connect_server and all items are created, and so on), but when I run it again, it fails with the same error message.
This is my code for creating the items:
resource "onepassword_item" "credentials" {
  vault    = ""
  title    = "Redis Database cache"
  category = "database"
  type     = "other"
  username = ""
  database = "Redis Database"
  hostname = module.beta_redis.database_host_access_private
  port     = module.beta_redis.database_host_access_port
  password = module.beta_redis.auth_string

  section {
    label = "TLS"

    field {
      label = "tls_cert"
      value = module.beta_redis.tls_cert
      type  = "CONCEALED"
    }

    field {
      label = "tls_transit_encryption_mode"
      value = module.beta_redis.tls_transit_encryption_mode
      type  = "CONCEALED"
    }

    field {
      label = "tls_sha1_fingerprint"
      value = module.beta_redis.tls_sha1_fingerprint
      type  = "CONCEALED"
    }
  }
}
My op_connect_server has these settings:
set {
  name  = "connect.credentials_base64"
  value = data.local_file.input.content_base64
  type  = "string"
}
set {
  name  = "connect.serviceType"
  value = "NodePort"
}
set {
  name  = "operator.create"
  value = "true"
}
set {
  name  = "operator.autoRestart"
  value = "true"
}
set {
  name  = "operator.clusterRole.create"
  value = "true"
}
set {
  name  = "operator.roleBinding.create"
  value = "true"
}
set {
  name  = "connect.api.name"
  value = "beta-connect-api"
}
set {
  name  = "operator.token.value"
  value = var.op_token_beta
}
My 1Password version is 1.1.4.
Does someone have any clue why this could be happening, or how I can debug it?
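One hedged way to debug this: the 401 suggests the token's key ID (kid) does not match the key the Connect server was deployed with, which usually means the token and the server's credentials come from different credential sets (or the request is reaching a different Connect server entirely). Assuming the Connect token is a JWT-style string, its header can be decoded locally and the kid compared between the token the pipeline uses and the token that works locally — the helper below is a sketch (it assumes GNU base64 with -d):

```shell
# Print the decoded header of a JWT-style token, e.g. to inspect its "kid".
# The token is only decoded locally; nothing is sent anywhere.
decode_jwt_header() {
  header=$(printf '%s' "$1" | cut -d. -f1 | tr '_-' '/+')  # base64url -> base64
  case $(( ${#header} % 4 )) in                            # restore '=' padding
    2) header="${header}==" ;;
    3) header="${header}=" ;;
  esac
  printf '%s' "$header" | base64 -d
  echo
}

# Usage (OP_CONNECT_TOKEN assumed to hold the Connect token):
# decode_jwt_header "$OP_CONNECT_TOKEN"
```

If the kid printed in the GitLab job differs from the one printed locally, the two environments are not using the same token/credentials pair — or are not talking to the same Connect server.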
1Password Version: Not Provided
Extension Version: Not Provided
OS Version: Not Provided
Browser: Not Provided
Comments
In the end, I was calling a different Connect server. Solved.