This PR enables AWS E2E integration tests for EKS auto-discovery. The process uses GitHub's OIDC connector to access the AWS API by assuming the `arn:aws:iam::307493967395:role/tf-aws-e2e-gha-role` role.

```yaml
- name: Configure AWS Credentials
  uses: aws-actions/configure-aws-credentials@v2
  with:
    aws-region: ${{ env.AWS_REGION }}
    role-to-assume: ${{ env.GHA_ASSUME_ROLE }}
```

The `aws-actions/configure-aws-credentials` action generates a new ID token with the required information and signs it using GitHub's OIDC workflow. The role `arn:aws:iam::307493967395:role/tf-aws-e2e-gha-role` is an intermediate role that allows the runner to assume two distinct roles:

- `arn:aws:iam::307493967395:role/tf-eks-discovery-ci-cluster-kubernetes-service-access-role` - used by the Kubernetes Service
- `arn:aws:iam::307493967395:role/tf-eks-discovery-ci-cluster-discovery-service-access-role` - used by the Discovery Service

The Discovery Service will assume the role `arn:aws:iam::307493967395:role/tf-eks-discovery-ci-cluster-discovery-service-access-role`, whose policy grants the following actions:

- `eks:ListClusters`
- `eks:DescribeCluster`

These are the minimal permissions required to list the available clusters and retrieve their state and labels. The Teleport Discovery Service will pull the available EKS clusters and, for each cluster to import, create a `kube_cluster` object in the Auth Server.

Once the cluster is discovered and the `kube_cluster` exists in the Auth Server, the Teleport Kubernetes Service will start proxying the cluster. To do so, it must pull the cluster's API endpoint and CA data to create a client. The role `arn:aws:iam::307493967395:role/tf-eks-discovery-ci-cluster-kubernetes-service-access-role` allows the Kubernetes Service to describe the cluster and retrieve its details:

- `eks:DescribeCluster`

The IAM role used by the Kubernetes Service must be mapped to a Kubernetes Group that allows impersonation in order to proxy requests with the user's permissions.
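The OIDC handshake above only works because the intermediate role trusts tokens issued by GitHub's OIDC provider. A minimal sketch of such a trust policy is shown below, assuming the `token.actions.githubusercontent.com` provider is already registered in the account; the condition values (audience and repository claim) are illustrative assumptions, not the actual CI policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::307493967395:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
        },
        "StringLike": {
          "token.actions.githubusercontent.com:sub": "repo:gravitational/teleport:*"
        }
      }
    }
  ]
}
```

Scoping the `sub` claim to a single repository is what prevents any other GitHub workflow from assuming the role.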
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: teleport-role
rules:
- apiGroups:
  - ""
  resources:
  - users
  - groups
  - serviceaccounts
  verbs:
  - impersonate
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - "authorization.k8s.io"
  resources:
  - selfsubjectaccessreviews
  - selfsubjectrulesreviews
  verbs:
  - create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: teleport-crb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: teleport-role
subjects:
- kind: Group
  name: ${group_name}
```

During the cluster provisioning phase, we mapped the Kubernetes Service IAM role into a Kubernetes Group `${group_name}`.

```yaml
mapRoles:
- groups:
  - ${group_name}
  rolearn: arn:aws:iam::307493967395:role/tf-eks-discovery-ci-cluster-kubernetes-service-access-role
  username: "teleport:{{SessionName}}"
```

The final step is to validate that the client is working correctly and that the Kubernetes Service was able to generate a valid token that can impersonate Kubernetes groups and users. For that, we simulate a user calling `kubectl get services -n default` through Teleport, which must return one entry: the default service `kubernetes`.

Implements #27156
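For reference, the Teleport side of this setup looks roughly like the following configuration sketch: the Discovery Service watches a region for EKS clusters, and the Kubernetes Service picks up the `kube_cluster` resources it creates. The region and the catch-all label selectors below are illustrative assumptions, not the actual CI configuration:

```yaml
discovery_service:
  enabled: "yes"
  aws:
  - types: ["eks"]
    regions: ["us-east-1"]
    tags:
      "*": "*"

kubernetes_service:
  enabled: "yes"
  resources:
  - labels:
      "*": "*"
```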
# Dockerized Teleport Build
This directory is used to produce a containerized production Teleport build. No Go toolchain is needed on the host; only Docker is required.

It is part of the Gravitational CI/CD pipeline. To build Teleport, run:

    make
## Safely updating build box Dockerfiles
The build box images are used in Drone pipelines and GitHub Actions. The resulting image is pushed to Amazon ECR and ghcr.io. This means that to safely introduce changes to Dockerfiles, those changes should be split into two stages:
- First, open a PR that updates a Dockerfile and get it merged.
- Once it's merged, Drone picks it up, builds a new build box image, and pushes it to Amazon ECR.
- Then you can open another PR that starts using the new build box image.
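The second PR is often nothing more than bumping the image reference consumed by a workflow or pipeline, for example (the image path and tag below are illustrative assumptions, not the actual references):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    container:
      image: ghcr.io/gravitational/teleport-buildbox:teleport15
```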
## DynamoDB static binary docker build
The static binary, along with all Node.js assets, is built inside the container. From the root directory of the source checkout, run:

    docker build -f build.assets/Dockerfile.dynamodb -t teleportbuilder .
Then you can upload the result to an S3 bucket for release:

    docker run -it -e AWS_ACL=public-read -e S3_BUCKET=my-teleport-releases -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY teleportbuilder
Or simply copy the binary out of the image using a volume (it will be copied to `build/teleport` under the current directory):

    docker run -v $(pwd)/build:/builds -it teleportbuilder cp /gopath/src/github.com/gravitational/teleport/teleport.tgz /builds
## OS package repo migrations
An OS package repo migration semi-manually publishes specific releases to the new APT and YUM repos. This is required in several situations:
- A customer requests that we add an older version to the repos
- We add another OS package repo (for example APK)
- An OS package promotion fails (for example https://drone.platform.teleport.sh/gravitational/teleport/14666/1/3), requires a PR to fix, and we don't want to cut another minor version
Multiple migrations can be performed at once. To run a migration, do the following:

1. Clone https://github.com/gravitational/teleport.git.
2. Change to the directory the repo was cloned to.
3. Create a new branch from `master`.
4. Add the Teleport versions you wish to migrate, as demonstrated here: 151a2f489e (diff-2e3a64c97d186491e06fb2c7ead081b7ace2b67c4a4d974a563daf7c117a2c50).
5. Set the `migrationBranch` variable to the name of the branch you created in (3), as demonstrated here: 151a2f489e (diff-2e3a64c97d186491e06fb2c7ead081b7ace2b67c4a4d974a563daf7c117a2c50).
6. Get your Drone credentials from https://drone.platform.teleport.sh/account.
7. Export your Drone credentials as shown under "Example CLI Usage" on the Drone account page.
8. Open a new terminal.
9. Run `tsh apps login drone` and follow any prompts.
10. Run `tsh proxy app drone` and copy the printed socket. This should look something like `127.0.0.1:60982`.
11. Switch back to your previous terminal.
12. Run `export DRONE_SERVER=http://{host:port}`, replacing `{host:port}` with the socket you copied in (10).
13. Run `make dronegen`.
14. Commit the two changed files and push/publish the branch.
15. Open a PR merging your changes into `master` via https://github.com/gravitational/teleport/compare.
16. Under the "checks" section, click "details" on the check labeled "continuous-integration/drone/push".
17. Once the pipelines complete, comment out the versions you added and blank out the `migrationBranch` string set in (4) and (5), as demonstrated here: 9095880560 (diff-2e3a64c97d186491e06fb2c7ead081b7ace2b67c4a4d974a563daf7c117a2c50).
18. Run `make dronegen` again.
19. Commit and push the changes.
20. Merge the PR and backport if required.