In a previous blog post, we talked about using SSM parameters with ECS to pull secrets from a remote store. If you are using EKS instead of ECS, you’ve probably noticed that this is not a built-in feature. Kubernetes has built-in Secrets, but base64 encoding is not encryption, and many teams still prefer an external secret store that keeps secret values in a central location and only allows access to authenticated users and services.

Fortunately, the GoDaddy engineering team has created an open source project that helps with this challenge. The external-secrets project allows us to reference AWS Secrets Manager secrets from within Kubernetes pods.

Before you even install it, you’ll probably have an obvious first question: “How will the pod be granted access to Secrets Manager?” Let’s talk about that real quick, because we need to set it up before we even install the external-secrets controller and CRD.

The pod that is requesting the “ExternalSecret” needs to have AWS authentication credentials. The quick and easy way to do this is to give your EC2 node IAM role access to Secrets Manager, but this is not recommended. If you provide access to the entire node, then any pod running on it can access your secrets, even one that might have been launched by an attacker. It also doesn’t let you define granular access, such as only allowing certain pods to access certain secrets. If you want to follow the principle of least privilege, the best way forward is to use IAM Roles for Service Accounts (IRSA). The tl;dr here is that you associate the service account that runs the external-secrets controller with an IAM role that grants it access to AWS services, so that it can pull secret values from AWS and create Kubernetes secrets for you.

Configuring this happens in three fairly easy steps that AWS has already documented for us (a rough sketch of the trust relationship these steps create follows the list):

  1. Enable IAM Roles for Service Accounts
  2. Create an IAM role and policy for your service account
  3. Associate the IAM role with a service account
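
For reference, here is a rough, hypothetical sketch (in Terraform) of the trust relationship those steps end up creating. The account ID, region, and OIDC provider ID below are placeholders, and in our setup the eks-irsa module used later generates this for us:

resource "aws_iam_role" "external_secrets_example" {
  name = "external-secrets-example"

  # Trust policy: only the "external-secrets" service account in the "default"
  # namespace may assume this role through the cluster's OIDC provider.
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = "sts:AssumeRoleWithWebIdentity"
      Principal = {
        Federated = "arn:aws:iam::111122223333:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE"
      }
      Condition = {
        StringEquals = {
          "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE:sub" = "system:serviceaccount:default:external-secrets"
        }
      }
    }]
  })
}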

These docs are fantastic, but in our case we need to push these changes out to three completely separate environments (dev, test, and prod) in a reliable way, keep them segregated from each other, and avoid doing the work by hand. We are going to use Terraform for the scripting, separating our environments by Terraform workspace. If you would rather create the objects manually by following the guides above, feel free to skip the Terraform section and join us in Step 2 below.
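
If you haven’t used workspaces before, the per-environment workflow looks roughly like this (a sketch; your exact pipeline may differ):

terraform workspace new dev      # or `terraform workspace select dev` if it already exists
terraform apply                  # terraform.workspace now evaluates to "dev"

terraform workspace new test     # repeat for each environment
terraform workspace new prod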

Step 1: Terraforming

This is a rough representation of what we need to create.

The following Terraform code is what we use to create an EKS cluster with the OIDC provider enabled and IAM Roles for Service Accounts preconfigured. For this example we will keep it simple: for each environment we create one service account and one environment-specific role that only allows access to secrets whose names start with that environment’s prefix (or a shared global prefix). After running this, we will have a service account named external-secrets that is associated with the apps_role_dev IAM role. This is fine since we have a different cluster for each environment, but because the clusters share one AWS account we need segregated role names.

cluster.tf

resource "aws_eks_cluster" "eks_cluster" {
  name     = "${var.cluster_name}-${terraform.workspace}"
   
  role_arn = aws_iam_role.eks_cluster_role.arn
  enabled_cluster_log_types = ["api", "audit", "authenticator", "controllerManager", "scheduler"]
  
   vpc_config {
    subnet_ids =  concat(var.public_subnets, var.private_subnets)
  }
   
   timeouts {
     delete    = "30m"
   }
}
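
The cluster resource references aws_iam_role.eks_cluster_role, which isn’t shown above; here is a minimal sketch of what that role could look like (the role name is an assumption):

resource "aws_iam_role" "eks_cluster_role" {
  name = "eks-cluster-role-${terraform.workspace}"

  # Allow the EKS control plane to assume this role.
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "eks.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "eks_cluster_policy" {
  role       = aws_iam_role.eks_cluster_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
}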

irsa.tf

Configuring IAM Roles for Service Accounts is actually pretty easy with Terraform, with the help of the eks-irsa module. All we need to do is pass in the name of the role we want it to create, the cluster information, and any additional policy information (like pulling secrets), and it will do the hard work. We still have to set up the OIDC provider here, but again that’s very easy with Terraform. Also notice how we use the Terraform workspace within the iamSecretPolicy resource to restrict what this role will be able to access.

data "tls_certificate" "eks_cert" {
  url = aws_eks_cluster.eks_cluster.identity[0].oidc[0].issuer
  depends_on = [
    aws_eks_cluster.eks_cluster
  ]
}

resource "aws_iam_openid_connect_provider" "openid_provider" {
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = [data.tls_certificate.eks_cert.certificates[0].sha1_fingerprint]
  url             = aws_eks_cluster.eks_cluster.identity[0].oidc[0].issuer
  depends_on = [
    aws_eks_cluster.eks_cluster
  ]
}

module "eks-irsa" {
  source  = "nalbam/eks-irsa/aws"
  version = "0.13.2"

  name = "apps_role_${terraform.workspace}"
  region = var.aws_region
  cluster_name = aws_eks_cluster.eks_cluster.name
  cluster_names = [
    aws_eks_cluster.eks_cluster.name
  ]
  kube_namespace      = "default"
  kube_serviceaccount = "external-secrets"

  policy_arns = [
    aws_iam_policy.iamSecretPolicy.arn
  ]

  depends_on = [
    aws_eks_cluster.eks_cluster
  ]
}

resource "aws_iam_policy" "iamSecretPolicy" {
  name        = "${terraform.workspace}_secretPolicy"
  path        = "/"
  description = "Allow access to ${terraform.workspace} secrets"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = [
          "secretsmanager:GetResourcePolicy",
          "secretsmanager:GetSecretValue",
          "secretsmanager:DescribeSecret",
          "secretsmanager:ListSecretVersionIds"
        ]
        Effect   = "Allow"
        Resource = [
          "arn:aws:secretsmanager:${var.aws_region}:${var.account_id}:secret:${terraform.workspace}/*"
        ]
      },
    ]
  })
}

The cluster is now ready to use service accounts linked to IAM roles to pull secrets from AWS, and Terraform has created an IAM role for us that is already set up. The great thing about this is that it doesn’t have to be specific to secrets! Pods tied to service accounts now have the power to perform all sorts of AWS automation.
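
For example (a purely hypothetical sketch, with a made-up bucket name), granting the same pods read access to an S3 bucket is just another policy passed to the module’s policy_arns list:

resource "aws_iam_policy" "artifactBucketPolicy" {
  name        = "${terraform.workspace}_artifactBucketPolicy"
  description = "Allow read access to the ${terraform.workspace} artifact bucket"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = ["s3:GetObject", "s3:ListBucket"]
      Resource = [
        "arn:aws:s3:::my-artifact-bucket",
        "arn:aws:s3:::my-artifact-bucket/*"
      ]
    }]
  })
}

# Then reference it in the eks-irsa module alongside the secrets policy:
#   policy_arns = [
#     aws_iam_policy.iamSecretPolicy.arn,
#     aws_iam_policy.artifactBucketPolicy.arn
#   ]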

Step 2: Installing the external-secrets library

helm.tf

resource "helm_release" "external-secrets" {
  name       = "external-secrets"
  repository = "https://external-secrets.github.io/kubernetes-external-secrets/"
  chart      = "kubernetes-external-secrets"
  verify     = false

  values = [
    templatefile("./helm/kubernetes-external-secrets/values.yml", { roleArn = "${module.eks-irsa.arn}" })
  ]

  set {
    name  = "metrics.enabled"
    value = "true"
  }

  set {
    name  = "service.annotations.prometheus\\.io/port"
    value = "9127"
    type  = "string"
  }
}

./helm/kubernetes-external-secrets/values.yml

Here we pass the role ARN that the IRSA module created to the external-secrets values. The ‘external-secrets’ service account will be created and annotated with that role ARN via a templatefile.

serviceAccount:
  name: "external-secrets"
  annotations:
    eks.amazonaws.com/role-arn: "${roleArn}"
securityContext:
  fsGroup: 65534

Step 3: Putting it all together

Now that the cluster is configured properly, and the external-secrets library is installed, there is nothing stopping us from using the ‘ExternalSecret’ CRD to create a secret that our pods can use.

apiVersion: "kubernetes-client.io/v1"
kind: ExternalSecret
metadata:
  name: test-db-secret
spec:
  backendType: secretsManager
  data:
    - key: dev/database
      name: DB_CREDENTIALS

This will generate an Opaque Secret object in Kubernetes, but you no longer have to keep the secret values in your YAML, and they are only pulled by the external-secrets controller, which runs under a service account/IAM role with access to those secrets!
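
From there, pods consume the generated secret like any other Kubernetes secret. Here is a minimal sketch (the pod and image names are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: db-client                      # placeholder pod name
spec:
  containers:
    - name: app
      image: my-app:latest             # placeholder image
      env:
        - name: DB_CREDENTIALS
          valueFrom:
            secretKeyRef:
              name: test-db-secret     # the Secret created by the ExternalSecret above
              key: DB_CREDENTIALS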
