mirror of
https://github.com/gravitational/teleport
synced 2024-10-20 01:03:40 +00:00
5840ae7169
This PR enables AWS E2E integration tests for EKS auto-discovery. The process uses GitHub's OIDC connector to access the AWS API by assuming the `arn:aws:iam::307493967395:role/tf-aws-e2e-gha-role` role.

```yaml
- name: Configure AWS Credentials
  uses: aws-actions/configure-aws-credentials@v2
  with:
    aws-region: ${{ env.AWS_REGION }}
    role-to-assume: ${{ env.GHA_ASSUME_ROLE }}
```

The `aws-actions/configure-aws-credentials` action generates a new ID token with the required information and signs it using GitHub's OIDC workflow.

The role `arn:aws:iam::307493967395:role/tf-aws-e2e-gha-role` is an intermediate role that allows the runner to assume two distinct roles:

- `arn:aws:iam::307493967395:role/tf-eks-discovery-ci-cluster-kubernetes-service-access-role` - used by the Kubernetes Service
- `arn:aws:iam::307493967395:role/tf-eks-discovery-ci-cluster-discovery-service-access-role` - used by the Discovery Service

The Discovery Service assumes the role `arn:aws:iam::307493967395:role/tf-eks-discovery-ci-cluster-discovery-service-access-role`, whose policy grants the following permissions:

- `eks:ListClusters`
- `eks:DescribeCluster`

These are the minimal permissions required to list the available clusters and retrieve their state and labels. The Teleport Discovery Service pulls the available EKS clusters and, for each cluster to import, creates a `kube_cluster` object in the Auth Server.

Once the cluster is discovered and the `kube_cluster` exists in the Auth Server, the Teleport Kubernetes Service starts proxying the cluster. To do so, it must pull the cluster API endpoint and its CA data to create a client. The role `arn:aws:iam::307493967395:role/tf-eks-discovery-ci-cluster-kubernetes-service-access-role` allows the Kubernetes Service to describe the cluster and retrieve its details:

- `eks:DescribeCluster`

The IAM role used by the Kubernetes Service must be mapped to a Kubernetes Group that allows impersonation, so that the service can proxy requests with the user's permissions.
The following ClusterRole and ClusterRoleBinding grant the required impersonation permissions:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: teleport-role
rules:
- apiGroups:
  - ""
  resources:
  - users
  - groups
  - serviceaccounts
  verbs:
  - impersonate
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - "authorization.k8s.io"
  resources:
  - selfsubjectaccessreviews
  - selfsubjectrulesreviews
  verbs:
  - create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: teleport-crb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: teleport-role
subjects:
- kind: Group
  name: ${group_name}
```

During the cluster provisioning phase, we mapped the Kubernetes Service IAM role into the Kubernetes Group `${group_name}`:

```yaml
mapRoles:
- groups:
  - ${group_name}
  rolearn: arn:aws:iam::307493967395:role/tf-eks-discovery-ci-cluster-kubernetes-service-access-role
  username: "teleport:{{SessionName}}"
```

The final step is to validate that the client works correctly and that the Kubernetes Service was able to generate a valid token that can impersonate Kubernetes groups and users. For that, we simulate a user calling `kubectl get services -n default` through Teleport, which must return exactly one entry: the default `kubernetes` service.

Implements #27156
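The final validation check described above can be sketched as a small helper. This is a sketch under assumptions: `serviceList` models only the fields needed from `kubectl get services -n default -o json`-style output, and `validateDefaultServices` is a hypothetical name, not a function from the test suite.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// serviceList models the subset of a Kubernetes ServiceList we need:
// just each item's metadata.name (illustrative, not the full schema).
type serviceList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
	} `json:"items"`
}

// validateDefaultServices returns an error unless the raw JSON service list
// contains exactly one entry, the default `kubernetes` service.
func validateDefaultServices(raw []byte) error {
	var list serviceList
	if err := json.Unmarshal(raw, &list); err != nil {
		return fmt.Errorf("parsing service list: %w", err)
	}
	if len(list.Items) != 1 || list.Items[0].Metadata.Name != "kubernetes" {
		return fmt.Errorf("expected exactly the default kubernetes service, got %d item(s)", len(list.Items))
	}
	return nil
}

func main() {
	// Simulated response for a healthy default namespace.
	raw := []byte(`{"items":[{"metadata":{"name":"kubernetes"}}]}`)
	fmt.Println(validateDefaultServices(raw) == nil) // prints "true"
}
```

If this check passes, the whole chain worked end to end: OIDC role assumption, EKS discovery, `kube_cluster` creation, and impersonated proxying through the Kubernetes Service.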