helm: Add ingress support (#25815)

* helm: Add ingress template

* helm: Add ingress support

With the changes introducing automatic websocket upgrades for TLS routing in Teleport 13, we can finally add support for a Kubernetes ingress.

* Remove unnecessary brackets

* Tidying

* Gating

* Fix lint and schema

* Fix lint examples

* Handle wildcards

* Tidy up wildcard support

* Don't add AWS annotations when using ingress

* Update AWS docs to use Ingress/ALB with ACM

* Automatically listens on 443, make values simpler

* Support ingress.spec overrides

* Enable ingress and set spec.ingressClassName

* Update values schema

* typo

* Whitelist 'healthcheck' for spellcheck

* Address Hugo's comments from code review

* Apply Paul's comments from code review

* Few more docs fixes

* Update teleport-cluster reference

* Add values file and fix lint/tests

* Fix docs lint

* Add proxy_service.trust_x_forwarded_for when ingress is enabled and Teleport version >=14

* Fix semver check for pre-releases

* Indent ingress section correctly

* Address docs feedback from Hugo/Tiago

* Add warning about using tsh with ingress

* Fix lint spelling

* Add instructions for checking AWS LB controller installation

* Whitelist ingressclass in spellcheck

* What a stupid error
Gus Luxton 2023-07-13 16:59:02 -03:00 committed by GitHub
parent 192e623406
commit c811cd9a0f
16 changed files with 1460 additions and 115 deletions


@ -408,6 +408,7 @@
"gworkspace",
"hashfile",
"hashicorp",
"healthcheck",
"healthz",
"highavailability",
"highavailabilitycertmanager",
@ -421,6 +422,7 @@
"iamserviceaccount",
"idps",
"importcert",
"ingressclass",
"initcontainers",
"insecureskipproxytlsverify",
"ioreg",


@ -18,11 +18,28 @@ cluster to Teleport.
(!docs/pages/kubernetes-access/helm/includes/teleport-cluster-prereqs.mdx!)
## Step 1/6. Add the Teleport Helm chart repository
### Choose a Kubernetes namespace and Helm release name
<Admonition type="note">
Before starting, set your Kubernetes namespace and Helm release name here to enable easier
copying and pasting of the installation commands that follow.
If you are unsure what to use, set both values to `teleport`.
Namespace: <Var name="namespace" description="Kubernetes namespace" />
Release name: <Var name="release-name" description="Helm release name" />
</Admonition>
## Step 1/7. Install Helm
(!docs/pages/kubernetes-access/helm/includes/teleport-cluster-install.mdx!)
## Step 2/7. Add the Teleport Helm chart repository
(!docs/pages/kubernetes-access/helm/includes/helm-repo-add.mdx!)
## Step 2/6. Set up AWS IAM configuration
## Step 3/7. Set up AWS IAM configuration
For Teleport to be able to manage the DynamoDB tables, indexes, and the S3
storage bucket it needs, you'll need to configure AWS IAM policies to allow
@ -40,50 +57,61 @@ access.
(!docs/pages/includes/s3-iam-policy.mdx!)
## Step 3/6. Configure TLS certificates for Teleport
The `teleport-cluster` chart deploys a Kubernetes `LoadBalancer` to handle incoming connections to the Teleport Proxy Service.
## Step 4/7. Configure TLS certificates for Teleport
We now need to configure TLS certificates for Teleport to secure its
communications and allow external clients to connect.
Depending on the approach you use for provisioning TLS certificates, the `teleport-cluster` chart can
deploy either a Kubernetes `LoadBalancer` or Kubernetes `Ingress` to handle incoming connections to
the Teleport Proxy Service.
### Determining an approach
There are three supported options when using AWS. You must choose only one of
these options:
| Approach | AWS Load Balancer Type | Kubernetes Traffic Destination | Can use an existing AWS LB? | Caveats |
| - | - | - | - | - |
| [Using `cert-manager`](#using-cert-manager) | Network Load Balancer (NLB) | `LoadBalancer` | No | Requires a Route 53 domain and an `Issuer` configured with IAM permissions to change DNS records for your domain |
| [Using AWS Certificate Manager](#using-aws-certificate-manager) | Application Load Balancer (ALB) | `Ingress` | Yes | Requires a working instance of the AWS Load Balancer controller installed in your Kubernetes cluster |
| [Using your own TLS credentials](#using-your-own-tls-credentials) | Network Load Balancer (NLB) | `LoadBalancer` | No | Requires you to independently manage the maintenance, renewal and trust of the TLS certificates securing Teleport's web listener |
#### Using `cert-manager`
You can use `cert-manager` to provision and automatically renew TLS credentials
by completing ACME challenges via Let's Encrypt. We recommend this approach if
you require CLI access to web applications using client certificates via
the Teleport Application Service.
by completing ACME challenges via Let's Encrypt.
This method uses a Kubernetes `LoadBalancer`, which will provision an underlying AWS Network Load
Balancer (NLB) to handle incoming traffic.
#### Using AWS Certificate Manager
You can use AWS Certificate Manager to handle TLS termination with AWS-managed
certificates.
You can use AWS Certificate Manager to handle TLS termination with AWS-managed certificates.
You should be aware of the limitations for using AWS Certificate Manager to
provision TLS credentials for Teleport:
This method uses a Kubernetes `Ingress`, which can provision an underlying AWS Application Load
Balancer (ALB) to handle incoming traffic if one does not already exist. It also requires the
installation and setup of the AWS Load Balancer controller.
- This will prevent the Teleport Application Service from working via CLI using
client certificates. Application access will still work via a browser.
- Command-line application access does not work with ACM. Using ACM will prevent
Teleport from facilitating application access via CLI (using client
certificates), as Teleport will not be handling its own TLS termination.
- Using ACM through an AWS Load Balancer prevents Postgres and MongoDB traffic from passing
  through Teleport's web port. If you choose the ACM approach, we will show you how to configure
  a separate listener for Postgres or MongoDB.
You should be aware of these potential limitations and differences when using Layer 7 load balancers with Teleport:
<Notice type="warning">
- Connecting to Kubernetes clusters at the command line requires the use of the `tsh proxy kube` or
`tsh kubectl` commands. It is not possible to connect `kubectl` directly to Teleport listeners
without the use of `tsh` as a proxy client in this mode.
- Connecting to databases at the command line requires the use of the `tsh proxy db` or `tsh db connect`
commands. It is not possible to connect database clients directly to Teleport listeners without the use of `tsh`
as a proxy client in this mode.
- The reason for both of these requirements is that Teleport uses X.509 certificates for authentication, which requires
that it terminate all inbound TLS traffic itself on the Teleport Proxy Service. This is not directly possible when using
a Layer 7 load balancer, so the `tsh` client implements this flow itself
[using ALPN connection upgrades](../../architecture/tls-routing.mdx#working-with-layer-7-load-balancers-or-reverse-proxies-preview).
- The use of Teleport and `tsh` v13 or higher is required.
If you would like the Teleport Application Service and Database Service to
function as expected, you should use the `cert-manager` approach unless there is
a specific reason to use ACM.
</Notice>
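
For example, with a Layer 7 load balancer in front of the cluster, CLI access to Kubernetes
clusters and databases goes through `tsh`. The cluster address, user, and the
`my-kube-cluster`/`my-postgres` resource names below are placeholders for your own values:

```code
# Log in to your cluster (requires tsh v13 or higher)
$ tsh login --proxy=teleport.example.com:443 --user=myuser

# Kubernetes access: use tsh as the proxy client for kubectl
$ tsh kube login my-kube-cluster
$ tsh kubectl get pods

# Database access: open a local tunnel, or connect directly via tsh
$ tsh proxy db --tunnel my-postgres
$ tsh db connect my-postgres
```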
<Admonition type="warning">
Using ACM with an ALB also requires that your cluster has a fully functional installation of the AWS Load Balancer
controller with required IAM permissions. This guide provides more details below.
</Admonition>
#### Using your own TLS credentials
@ -91,7 +119,12 @@ With this approach, you are responsible for determining how to obtain a TLS
certificate and private key for your Teleport cluster, and for renewing your
credentials periodically. Use this approach if you would like to use a trusted
internal certificate authority instead of Let's Encrypt or AWS Certificate
Manager.
Manager. This method uses a Kubernetes `LoadBalancer` and will provision an
underlying AWS NLB.
### Steps to follow
Once you have chosen an approach based on the details above, select the correct tab below for instructions.
<Tabs>
<TabItem label="cert-manager">
@ -189,9 +222,9 @@ EOF
After you have created the `Issuer` and updated the values, add it to your cluster using `kubectl`:
```code
$ kubectl create namespace teleport
$ kubectl create namespace <Var name="namespace" />
$ kubectl label namespace <Var name="namespace" /> 'pod-security.kubernetes.io/enforce=baseline'
$ kubectl --namespace teleport create -f aws-issuer.yaml
$ kubectl --namespace <Var name="namespace" /> create -f aws-issuer.yaml
```
</TabItem>
@ -199,40 +232,36 @@ $ kubectl --namespace teleport create -f aws-issuer.yaml
In this step, you will configure Teleport to use AWS Certificate Manager (ACM)
to provision your Teleport instances with TLS credentials.
To use ACM to handle TLS, add annotations to the chart specifying the ACM
certificate ARN to use and the port it should be served on.
<Admonition type="warning" title="Prerequisite: Install and configure the AWS Load Balancer controller">
You must either follow the [AWS-maintained documentation on installing the AWS Load Balancer Controller](https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html)
or already have a working installation of the AWS LB controller before continuing with these instructions. Failure to do this will result in an unusable Teleport cluster.
Replace
`arn:aws:acm:us-east-1:1234567890:certificate/12345678-43c7-4dd1-a2f6-c495b91ebece`
Assuming you follow the AWS guide linked above, you can check whether the AWS LB controller is running in your cluster by looking
for pods with the `app.kubernetes.io/name=aws-load-balancer-controller` label:
```code
$ kubectl get pods -A -l app.kubernetes.io/name=aws-load-balancer-controller
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system aws-load-balancer-controller-655f647b95-5vz56 1/1 Running 0 109d
kube-system aws-load-balancer-controller-655f647b95-b4brx 1/1 Running 0 109d
```
You can also check whether `alb` is registered as an `IngressClass` in your cluster:
```code
$ kubectl get ingressclass
NAME CONTROLLER PARAMETERS AGE
alb ingress.k8s.aws/alb <none> 109d
```
</Admonition>
To use ACM to handle TLS, we will add annotations to the chart values in the section below specifying
the ACM certificate ARN to use, the port it should be served on, and other ALB configuration
parameters.
Replace `arn:aws:acm:us-east-1:1234567890:certificate/12345678-43c7-4dd1-a2f6-c495b91ebece`
with your actual ACM certificate ARN.
Edit your `values.yaml` file to complete the `annotations.service` field as
follows:
```yaml
annotations:
service:
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:1234567890:certificate/12345678-43c7-4dd1-a2f6-c495b91ebece"
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: ssl
```
To use an internal AWS network load balancer (as opposed to the default
internet-facing NLB), you should add two annotations:
```yaml
service.beta.kubernetes.io/aws-load-balancer-scheme: "internal"
service.beta.kubernetes.io/aws-load-balancer-internal: "true"
```
If you plan to use PostgreSQL or MongoDB with Teleport, add the relevant option for your database
to your values file:
```yaml
separatePostgresListener: true
separateMongoListener: true
```
</TabItem>
<TabItem label="Your own TLS credentials">
@ -242,10 +271,10 @@ UI using existing TLS credentials within a Kubernetes secret.
Use the following command to create your secret:
```code
$ kubectl -n teleport create secret tls my-tls-secret --cert=/path/to/cert/file --key=/path/to/key/file
$ kubectl -n <Var name="namespace" /> create secret tls my-tls-secret --cert=/path/to/cert/file --key=/path/to/key/file
```
Edit your `values.yaml` file to refer to the name of your secret:
Edit your `aws-values.yaml` file (created below) to refer to the name of your secret:
```yaml
tls:
@ -255,7 +284,7 @@ Edit your `values.yaml` file to refer to the name of your secret:
</TabItem>
</Tabs>
## Step 4/6. Set values to configure the cluster
## Step 5/7. Set values to configure the cluster
<ScopedBlock scope="enterprise">
@ -268,7 +297,7 @@ Create a secret from your license file. Teleport will automatically discover
this secret as long as your file is named `license.pem`.
```code
$ kubectl -n teleport create secret generic license --from-file=license.pem
$ kubectl -n <Var name="namespace" /> create secret generic license --from-file=license.pem
```
</ScopedBlock>
@ -283,6 +312,7 @@ a file called `aws-values.yaml` and write the values you've chosen above to it:
```yaml
chartMode: aws
clusterName: teleport.example.com # Name of your cluster. Use the FQDN you intend to configure in DNS below.
proxyListenerMode: multiplex
aws:
region: us-west-2 # AWS region
backendTable: teleport-helm-backend # DynamoDB table to use for the Teleport backend
@ -305,6 +335,7 @@ podSecurityPolicy:
```yaml
chartMode: aws
clusterName: teleport.example.com # Name of your cluster. Use the FQDN you intend to configure in DNS below.
proxyListenerMode: multiplex
aws:
region: us-west-2 # AWS region
backendTable: teleport-helm-backend # DynamoDB table to use for the Teleport backend
@ -315,16 +346,39 @@ aws:
dynamoAutoScaling: false # Whether Teleport should configure DynamoDB's autoscaling.
highAvailability:
replicaCount: 2 # Number of replicas to configure
ingress:
enabled: true
spec:
ingressClassName: alb
annotations:
service:
ingress:
alb.ingress.kubernetes.io/target-type: ip
alb.ingress.kubernetes.io/backend-protocol: HTTPS
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=350
alb.ingress.kubernetes.io/healthcheck-protocol: HTTPS
alb.ingress.kubernetes.io/success-codes: 200,301,302
# Replace with your AWS certificate ARN
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:1234567890:certificate/12345678-43c7-4dd1-a2f6-c495b91ebece"
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: ssl
alb.ingress.kubernetes.io/certificate-arn: "arn:aws:acm:us-east-1:1234567890:certificate/12345678-43c7-4dd1-a2f6-c495b91ebece"
# If you are running Kubernetes 1.23 or above, disable PodSecurityPolicies
podSecurityPolicy:
enabled: false
```
To use an internal AWS Application Load Balancer (as opposed to an internet-facing ALB), you should
edit the `alb.ingress.kubernetes.io/scheme` annotation:
```yaml
alb.ingress.kubernetes.io/scheme: internal
```
To automatically redirect HTTP requests on port 80 to HTTPS requests on port 443, you can
optionally provide these two values under `annotations.ingress`:
```yaml
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
alb.ingress.kubernetes.io/ssl-redirect: '443'
```
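
Before installing, you can sanity-check that these annotations land on the generated `Ingress` by
rendering the chart locally. This only renders templates; nothing is created in your cluster:

```code
$ helm template <Var name="release-name" /> teleport/teleport-cluster \
    -f aws-values.yaml | grep -B2 -A10 'kind: Ingress'
```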
</TabItem>
</Tabs>
@ -336,6 +390,7 @@ podSecurityPolicy:
```yaml
chartMode: aws
clusterName: teleport.example.com # Name of your cluster. Use the FQDN you intend to configure in DNS below.
proxyListenerMode: multiplex
aws:
region: us-west-2 # AWS region
backendTable: teleport-helm-backend # DynamoDB table to use for the Teleport backend
@ -359,6 +414,7 @@ podSecurityPolicy:
```yaml
chartMode: aws
clusterName: teleport.example.com # Name of your cluster. Use the FQDN you intend to configure in DNS below.
proxyListenerMode: multiplex
aws:
region: us-west-2 # AWS region
backendTable: teleport-helm-backend # DynamoDB table to use for the Teleport backend
@ -370,16 +426,40 @@ aws:
highAvailability:
replicaCount: 2 # Number of replicas to configure
enterprise: true # Indicate that this is a Teleport Enterprise deployment
ingress:
enabled: true
spec:
ingressClassName: alb
annotations:
service:
ingress:
alb.ingress.kubernetes.io/target-type: ip
alb.ingress.kubernetes.io/backend-protocol: HTTPS
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=350
alb.ingress.kubernetes.io/healthcheck-protocol: HTTPS
alb.ingress.kubernetes.io/success-codes: 200,301,302
# Replace with your AWS certificate ARN
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:1234567890:certificate/12345678-43c7-4dd1-a2f6-c495b91ebece"
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: ssl
alb.ingress.kubernetes.io/certificate-arn: "arn:aws:acm:us-east-1:1234567890:certificate/12345678-43c7-4dd1-a2f6-c495b91ebece"
# If you are running Kubernetes 1.23 or above, disable PodSecurityPolicies
podSecurityPolicy:
enabled: false
```
To use an internal AWS Application Load Balancer (as opposed to an internet-facing ALB), you should
edit the `alb.ingress.kubernetes.io/scheme` annotation:
```yaml
alb.ingress.kubernetes.io/scheme: internal
```
To automatically redirect HTTP requests on port 80 to HTTPS requests on port 443, you can
optionally provide these two values under `annotations.ingress`:
```yaml
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
alb.ingress.kubernetes.io/ssl-redirect: '443'
```
</TabItem>
</Tabs>
@ -388,9 +468,9 @@ podSecurityPolicy:
Install the chart with the values from your `aws-values.yaml` file using this command:
```code
$ helm install teleport teleport/teleport-cluster \
$ helm install <Var name="release-name" /> teleport/teleport-cluster \
--create-namespace \
--namespace teleport \
--namespace <Var name="namespace" /> \
-f aws-values.yaml
```
@ -398,10 +478,10 @@ $ helm install teleport teleport/teleport-cluster \
You cannot change the `clusterName` after the cluster is configured, so make sure you choose wisely. You should use the fully-qualified domain name that you'll use for external access to your Teleport cluster.
</Admonition>
Once the chart is installed, you can use `kubectl` commands to view the deployment:
Once the chart is installed, you can use `kubectl` commands to view the deployment (example using `cert-manager`):
```code
$ kubectl --namespace teleport get all
$ kubectl --namespace <Var name="namespace" /> get all
NAME READY STATUS RESTARTS AGE
pod/teleport-auth-57989d4cbd-4q2ds 1/1 Running 0 22h
@ -410,7 +490,7 @@ pod/teleport-proxy-c6bf55cfc-w96d2 1/1 Running 0 22h
pod/teleport-proxy-c6bf55cfc-z256w 1/1 Running 0 22h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/teleport LoadBalancer 10.40.11.180 xxxxx.elb.us-east-1.amazonaws.com 443:30258/TCP,3023:31802/TCP,3026:32182/TCP,3024:30101/TCP,3036:30302/TCP 22h
service/teleport LoadBalancer 10.40.11.180 xxxxx.elb.us-east-1.amazonaws.com 443:30258/TCP 22h
service/teleport-auth ClusterIP 10.40.8.251 <none> 3025/TCP,3026/TCP 22h
service/teleport-auth-v11 ClusterIP None <none> <none> 22h
service/teleport-auth-v12 ClusterIP None <none> <none> 22h
@ -424,7 +504,7 @@ replicaset.apps/teleport-auth-57989d4cbd 2 2 2 22h
replicaset.apps/teleport-proxy-c6bf55cfc 2 2 2 22h
```
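
If you chose the ACM approach, the Teleport Proxy Service is exposed through an `Ingress` rather
than a `LoadBalancer` service. Once the AWS Load Balancer controller has provisioned the ALB, its
hostname appears in the `ADDRESS` column:

```code
$ kubectl --namespace <Var name="namespace" /> get ingress
```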
## Step 5/6. Set up DNS
## Step 6/7. Set up DNS
You'll need to set up a DNS `A` record for `teleport.example.com`. In our example, this record is an alias to an ELB.
@ -434,10 +514,13 @@ You'll need to set up a DNS `A` record for `teleport.example.com`. In our exampl
Here's how to do this in a hosted zone with AWS Route 53:
<Tabs>
<TabItem label="cert-manager">
```code
# Change these parameters if you altered them above
$ NAMESPACE='teleport'
$ RELEASE_NAME='teleport'
$ NAMESPACE='<Var name="namespace" />'
$ RELEASE_NAME='<Var name="release-name" />'
# DNS settings (change as necessary)
$ MYZONE_DNS='example.com'
@ -446,7 +529,7 @@ $ MY_CLUSTER_REGION='us-west-2'
# Find AWS Zone ID and ELB Zone ID
$ MYZONE="$(aws route53 list-hosted-zones-by-name --dns-name="${MYZONE_DNS?}" | jq -r '.HostedZones[0].Id' | sed s_/hostedzone/__)"
$ MYELB="$(kubectl --namespace "${NAMESPACE?}" get "service/${RELEASE_NAME?}" -o jsonpath='{.status.loadBalancer.ingress[*].hostname}')"
$ MYELB="$(kubectl --namespace "${NAMESPACE?}" get "service/${RELEASE_NAME?}-proxy" -o jsonpath='{.status.loadBalancer.ingress[*].hostname}')"
$ MYELB_NAME="${MYELB%%-*}"
$ MYELB_ZONE="$(aws elbv2 describe-load-balancers --region "${MY_CLUSTER_REGION?}" --names "${MYELB_NAME?}" | jq -r '.LoadBalancers[0].CanonicalHostedZoneId')"
@ -492,14 +575,79 @@ $ aws route53 get-change --id "${CHANGEID?}" | jq '.ChangeInfo.Status'
# "INSYNC"
```
## Step 6/6. Create a Teleport user
</TabItem>
<TabItem label="AWS Certificate Manager">
```code
# Change these parameters if you altered them above
$ NAMESPACE='<Var name="namespace" />'
$ RELEASE_NAME='<Var name="release-name" />'
# DNS settings (change as necessary)
$ MYZONE_DNS='example.com'
$ MYDNS='teleport.example.com'
$ MY_CLUSTER_REGION='us-west-2'
# Find AWS Zone ID and Ingress Controller ALB Zone ID
$ MYZONE="$(aws route53 list-hosted-zones-by-name --dns-name="${MYZONE_DNS?}" | jq -r '.HostedZones[0].Id' | sed s_/hostedzone/__)"
$ MYELB="$(kubectl --namespace "${NAMESPACE?}" get "ingress/${RELEASE_NAME?}-proxy" -o jsonpath='{.status.loadBalancer.ingress[*].hostname}')"
$ MYELB_ROOT="${MYELB%%.*}"
$ MYELB_NAME="${MYELB_ROOT%-*}"
$ MYELB_ZONE="$(aws elbv2 describe-load-balancers --region "${MY_CLUSTER_REGION?}" --names "${MYELB_NAME?}" | jq -r '.LoadBalancers[0].CanonicalHostedZoneId')"
# Create a JSON file changeset for AWS.
$ jq -n --arg dns "${MYDNS?}" --arg elb "${MYELB?}" --arg elbz "${MYELB_ZONE?}" \
'{
"Comment": "Create records",
"Changes": [
{
"Action": "CREATE",
"ResourceRecordSet": {
"Name": $dns,
"Type": "A",
"AliasTarget": {
"HostedZoneId": $elbz,
"DNSName": ("dualstack." + $elb),
"EvaluateTargetHealth": false
}
}
},
{
"Action": "CREATE",
"ResourceRecordSet": {
"Name": ("*." + $dns),
"Type": "A",
"AliasTarget": {
"HostedZoneId": $elbz,
"DNSName": ("dualstack." + $elb),
"EvaluateTargetHealth": false
}
}
}
]
}' > myrecords.json
# Review records before applying.
$ cat myrecords.json | jq
# Apply the records and capture change id
$ CHANGEID="$(aws route53 change-resource-record-sets --hosted-zone-id "${MYZONE?}" --change-batch file://myrecords.json | jq -r '.ChangeInfo.Id')"
# Verify that change has been applied
$ aws route53 get-change --id "${CHANGEID?}" | jq '.ChangeInfo.Status'
# "INSYNC"
```
</TabItem>
</Tabs>
## Step 7/7. Create a Teleport user
Create a user to be able to log into Teleport. This needs to be done on the Teleport auth server,
so we can run the command using `kubectl`:
<ScopedBlock scope={["oss"]}>
```code
$ kubectl --namespace teleport exec deploy/teleport-auth -- tctl users add test --roles=access,editor
$ kubectl --namespace <Var name="namespace" /> exec deploy/<Var name="release-name" />-auth -- tctl users add test --roles=access,editor
User "test" has been created but requires a password. Share this URL with the user to complete user setup, link is valid for 1h:
https://teleport.example.com:443/web/invite/91cfbd08bc89122275006e48b516cc68
@ -534,29 +682,20 @@ version of Teleport. You can make sure that the repo is up to date by running `h
Here's an example where we set the chart to use 3 replicas:
<Tabs>
<TabItem label="Using values.yaml">
Edit your `aws-values.yaml` file from above and make the appropriate changes.
Edit your `aws-values.yaml` file from above and make the appropriate changes:
Upgrade the deployment with the values from your `aws-values.yaml` file using this command:
```yaml
highAvailability:
replicaCount: 3
```
```code
$ helm upgrade teleport teleport/teleport-cluster \
--namespace teleport \
-f aws-values.yaml
```
Upgrade the deployment with the values from your `aws-values.yaml` file using this command:
</TabItem>
<TabItem label="Using --set via CLI">
Run this command, editing your command line parameters as appropriate:
```code
$ helm upgrade teleport teleport/teleport-cluster \
--namespace teleport \
--set highAvailability.replicaCount=3
```
</TabItem>
</Tabs>
```code
$ helm upgrade <Var name="release-name" /> teleport/teleport-cluster \
--namespace <Var name="namespace" /> \
-f aws-values.yaml
```
<Admonition type="note">
To change `chartMode`, `clusterName`, or any `aws` settings, you must first uninstall the existing chart and then install a new version with the appropriate values.
@ -592,8 +731,8 @@ aws:
Then perform a cluster upgrade with the new values:
```code
$ helm upgrade teleport teleport/teleport-cluster \
--namespace teleport \
$ helm upgrade <Var name="release-name" /> teleport/teleport-cluster \
--namespace <Var name="namespace" /> \
-f aws-values.yaml
```
@ -602,7 +741,7 @@ $ helm upgrade teleport teleport/teleport-cluster \
To uninstall the `teleport-cluster` chart, use `helm uninstall <release-name>`. For example:
```code
$ helm --namespace teleport uninstall teleport
$ helm --namespace <Var name="namespace" /> uninstall <Var name="release-name" />
```
### Uninstalling cert-manager


@ -996,7 +996,7 @@ cluster deployed in HA mode.
You must install and configure `cert-manager` in your Kubernetes cluster yourself.
See the [cert-manager Helm install instructions](https://cert-manager.io/docs/installation/kubernetes/#option-2-install-crds-as-part-of-the-helm-release)
and the relevant sections of the [AWS](../../deploy-a-cluster/helm-deployments/aws.mdx#step-36-configure-tls-certificates-for-teleport) and [GCP](../../deploy-a-cluster/helm-deployments/gcp.mdx#step-36-install-and-configure-cert-manager) guides for more information.
and the relevant sections of the [AWS](../../deploy-a-cluster/helm-deployments/aws.mdx#step-47-configure-tls-certificates-for-teleport) and [GCP](../../deploy-a-cluster/helm-deployments/gcp.mdx#step-36-install-and-configure-cert-manager) guides for more information.
</Admonition>
### `highAvailability.certManager.addCommonName`
@ -1011,7 +1011,7 @@ Setting `highAvailability.certManager.addCommonName` to `true` will instruct `ce
You must install and configure `cert-manager` in your Kubernetes cluster yourself.
See the [cert-manager Helm install instructions](https://cert-manager.io/docs/installation/kubernetes/#option-2-install-crds-as-part-of-the-helm-release)
and the relevant sections of the [AWS](../../deploy-a-cluster/helm-deployments/aws.mdx#step-36-configure-tls-certificates-for-teleport) and [GCP](../../deploy-a-cluster/helm-deployments/gcp.mdx#step-36-install-and-configure-cert-manager) guides for more information.
and the relevant sections of the [AWS](../../deploy-a-cluster/helm-deployments/aws.mdx#step-47-configure-tls-certificates-for-teleport) and [GCP](../../deploy-a-cluster/helm-deployments/gcp.mdx#step-36-install-and-configure-cert-manager) guides for more information.
</Admonition>
`values.yaml` example:
@ -1036,7 +1036,7 @@ Sets the name of the `cert-manager` `Issuer` or `ClusterIssuer` to use for issui
You must install and configure an appropriate `Issuer` supporting a DNS01 challenge yourself.
Please see the [cert-manager DNS01 docs](https://cert-manager.io/docs/configuration/acme/dns01/#supported-dns01-providers) and the relevant sections
of the [AWS](../../deploy-a-cluster/helm-deployments/aws.mdx#step-36-configure-tls-certificates-for-teleport) and [GCP](../../deploy-a-cluster/helm-deployments/gcp.mdx#step-36-install-and-configure-cert-manager) guides for more information.
of the [AWS](../../deploy-a-cluster/helm-deployments/aws.mdx#step-47-configure-tls-certificates-for-teleport) and [GCP](../../deploy-a-cluster/helm-deployments/gcp.mdx#step-36-install-and-configure-cert-manager) guides for more information.
</Admonition>
`values.yaml` example:
@ -1425,6 +1425,25 @@ Kubernetes annotations which should be applied to the `secret` generated by
kubernetes.io/annotation: value
```
## `annotations.ingress`
| Type | Default value |
|----------|---------------|
| `object` | `{}` |
[Kubernetes reference](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/)
Kubernetes annotations which should be applied to the `Ingress` created by the chart.
`values.yaml` example:
```yaml
annotations:
ingress:
alb.ingress.kubernetes.io/target-type: ip
alb.ingress.kubernetes.io/backend-protocol: HTTPS
```
## `serviceAccount.create`
| Type | Default value | Required? |
@ -1479,6 +1498,60 @@ Allows you to specify the `loadBalancerIP`.
loadBalancerIP: 1.2.3.4
```
## `ingress.enabled`
| Type | Default value | Required? |
|-----------|---------------|-----------|
| `boolean` | `false` | No |
[Kubernetes reference](https://kubernetes.io/docs/concepts/services-networking/ingress/)
Boolean value that specifies whether to generate a Kubernetes `Ingress` for the Teleport deployment.
`values.yaml` example:
```yaml
ingress:
enabled: true
```
## `ingress.suppressAutomaticWildcards`
| Type | Default value | Required? |
|-----------|---------------|-----------|
| `boolean` | `false` | No |
Setting `suppressAutomaticWildcards` to `true` prevents the chart from automatically adding `*.<clusterName>` as a
hostname served by the `Ingress`. This may be desirable if you don't use Teleport application access, or if you want
to configure individual public addresses for applications instead.
`values.yaml` example:
```yaml
ingress:
enabled: true
suppressAutomaticWildcards: true
```
## `ingress.spec`
| Type | Default value | Required? |
|-----------|---------------|-----------|
| `object` | `{}` | No |
Object value used to define additional properties on the `Ingress` created by the chart.
For example, you can use this to set an `ingressClassName`:
`values.yaml` example:
```yaml
ingress:
enabled: true
spec:
ingressClassName: alb
```
## `extraArgs`
| Type | Default value |


@ -0,0 +1,8 @@
clusterName: teleport.example.com
publicAddr: ["my-teleport-ingress.example.com:443"]
ingress:
enabled: true
suppressAutomaticWildcards: true
proxyListenerMode: multiplex
service:
type: ClusterIP


@ -0,0 +1,6 @@
clusterName: teleport.example.com
ingress:
enabled: true
proxyListenerMode: multiplex
service:
type: ClusterIP
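
Either of these example values files can be validated against a live cluster without creating any
resources by performing a dry-run install (the `my-values.yaml` file name here is a placeholder):

```code
$ helm install teleport teleport/teleport-cluster \
    --namespace teleport --create-namespace --dry-run -f my-values.yaml
```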


@ -70,4 +70,7 @@ proxy_service:
uri: {{ .Values.acmeURI }}
{{- end }}
{{- end }}
{{- if and .Values.ingress.enabled (semverCompare ">= 14.0.0-0" (include "teleport-cluster.version" .)) }}
trust_x_forwarded_for: true
{{- end }}
{{- end -}}


@ -0,0 +1,52 @@
{{- $proxy := mustMergeOverwrite (mustDeepCopy .Values) .Values.proxy -}}
{{- if .Values.ingress.enabled -}}
{{- if (not (eq .Values.proxyListenerMode "multiplex")) -}}
{{- fail "Use of an ingress requires TLS multiplexing to be enabled, so you must also set proxyListenerMode=multiplex - see https://goteleport.com/docs/architecture/tls-routing/" -}}
{{- end -}}
{{- $publicAddr := coalesce .Values.publicAddr (list .Values.clusterName) -}}
{{- /* Trim ports from all public addresses if present */ -}}
{{- range $publicAddr -}}
{{- $address := . -}}
{{- if (contains ":" $address) -}}
{{- $split := split ":" $address -}}
{{- $address = $split._0 -}}
{{- $publicAddr = append (mustWithout $publicAddr .) $address -}}
{{- end -}}
{{- $wildcard := printf "*.%s" $address -}}
{{- /* Add wildcard versions of all public addresses to ingress, unless 1) suppressed or 2) wildcard version already exists */ -}}
{{- if and (not $.Values.ingress.suppressAutomaticWildcards) (not (hasPrefix "*." $address)) (not (has $wildcard $publicAddr)) -}}
{{- $publicAddr = append $publicAddr (printf "*.%s" $address) -}}
{{- end -}}
{{- end -}}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ .Release.Name }}-proxy
namespace: {{ .Release.Namespace }}
labels: {{- include "teleport-cluster.proxy.labels" . | nindent 4 }}
{{- if $proxy.annotations.ingress }}
annotations: {{- toYaml $proxy.annotations.ingress | nindent 4 }}
{{- end }}
spec:
{{- with $proxy.ingress.spec }}
{{- toYaml . | nindent 2 }}
{{- end }}
tls:
- hosts:
{{- range $publicAddr }}
- {{ quote . }}
{{- end }}
rules:
{{- range $publicAddr }}
- host: {{ quote . }}
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: {{ $.Release.Name }}
port:
number: 443
{{- end }}
{{- end }}
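The template above normalizes public addresses before emitting the TLS hosts and rules: any trailing port is trimmed, and a `*.` wildcard variant is appended for each address unless wildcards are suppressed, the address is already a wildcard, or the wildcard is already listed. A Python sketch of that expansion — illustrative only; `ingress_hostnames` is a hypothetical name, not part of the chart:

```python
def ingress_hostnames(public_addrs, suppress_wildcards=False):
    """Sketch of the chart's ingress hostname expansion.

    Trim a trailing port from each public address, then append a '*.'
    wildcard variant unless wildcards are suppressed, the address is
    already a wildcard, or the wildcard is already present.
    """
    # Trim ports such as ':443' from every address first
    addrs = [addr.split(":", 1)[0] for addr in public_addrs]
    if not suppress_wildcards:
        for addr in list(addrs):  # iterate over a copy while appending
            wildcard = "*." + addr
            if not addr.startswith("*.") and wildcard not in addrs:
                addrs.append(wildcard)
    return addrs
```

The expected outputs match the snapshot tests later in this diff, e.g. a single `clusterName` yields itself plus its wildcard, and an explicit `*.` entry in `publicAddr` is not doubled up.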


@@ -1,5 +1,9 @@
{{- $proxy := mustMergeOverwrite (mustDeepCopy .Values) .Values.proxy -}}
{{- $backendProtocol := ternary "ssl" "tcp" (hasKey $proxy.annotations.service "service.beta.kubernetes.io/aws-load-balancer-ssl-cert") -}}
{{- /* Fail early if proxy service type is set to LoadBalancer when ingress.enabled=true */ -}}
{{- if and $proxy.ingress.enabled (eq $proxy.service.type "LoadBalancer") -}}
{{- fail "proxy.service.type must not be LoadBalancer when using an ingress - any load balancer should be provisioned by your ingress controller. Set proxy.service.type=ClusterIP instead" -}}
{{- end -}}
apiVersion: v1
kind: Service
metadata:
@@ -8,7 +12,7 @@ metadata:
labels: {{- include "teleport-cluster.proxy.labels" . | nindent 4 }}
{{- if (or ($proxy.annotations.service) (eq $proxy.chartMode "aws")) }}
annotations:
{{- if eq $proxy.chartMode "aws" }}
{{- if and (eq $proxy.chartMode "aws") (not $proxy.ingress.enabled) }}
{{- if not (hasKey $proxy.annotations.service "service.beta.kubernetes.io/aws-load-balancer-backend-protocol")}}
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: {{ $backendProtocol }}
{{- end }}
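The `$backendProtocol` ternary above resolves to `ssl` only when the ACM certificate annotation is present on the service, and the whole AWS annotation block is now skipped when the ingress is enabled (the ingress controller terminates TLS instead). A minimal Python sketch of that decision, with hypothetical helper names:

```python
ACM_CERT_ANNOTATION = "service.beta.kubernetes.io/aws-load-balancer-ssl-cert"

def aws_backend_protocol(service_annotations, chart_mode, ingress_enabled):
    """Return the aws-load-balancer-backend-protocol value, or None when
    the annotation should not be set (non-AWS mode, or ingress enabled)."""
    if chart_mode != "aws" or ingress_enabled:
        return None
    return "ssl" if ACM_CERT_ANNOTATION in service_annotations else "tcp"
```

(The real template additionally skips the annotation if the user has already set `aws-load-balancer-backend-protocol` explicitly.)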


@@ -0,0 +1,38 @@
does not add additional wildcard publicAddrs when Ingress is enabled and a publicAddr already contains a wildcard:
1: |
- hosts:
- helm-lint.example.com
- '*.helm-lint.example.com'
- helm-lint-second-domain.example.com
- '*.helm-lint-second-domain.example.com'
does not set a wildcard of clusterName as a hostname when Ingress is enabled and ingress.suppressAutomaticWildcards is true:
1: |
- hosts:
- teleport.example.com
? does not set a wildcard of publicAddr as a hostname when Ingress is enabled, publicAddr
is set and ingress.suppressAutomaticWildcards is true
: 1: |
- hosts:
- helm-lint.example.com
exposes all publicAddrs and wildcard publicAddrs as hostnames when Ingress is enabled and multiple publicAddrs are set:
1: |
- hosts:
- helm-lint.example.com
- helm-lint-second-domain.example.com
- '*.helm-lint.example.com'
- '*.helm-lint-second-domain.example.com'
sets the clusterName and wildcard of clusterName as hostnames when Ingress is enabled:
1: |
- hosts:
- teleport.example.com
- '*.teleport.example.com'
sets the publicAddr and wildcard of publicAddr as hostnames when Ingress is enabled and publicAddr is set:
1: |
- hosts:
- helm-lint.example.com
- '*.helm-lint.example.com'
trims ports from publicAddr and uses it as the hostname when Ingress is enabled and publicAddr is set:
1: |
- hosts:
- helm-lint.example.com
- '*.helm-lint.example.com'


@@ -1,3 +1,89 @@
generates a config WITHOUT proxy_service.trust_x_forwarded_for=true when version < 14.0.0 and ingress.enabled is not set:
1: |
|-
auth_service:
enabled: false
proxy_service:
enabled: true
kube_listen_addr: 0.0.0.0:3026
listen_addr: 0.0.0.0:3023
mysql_listen_addr: 0.0.0.0:3036
public_addr: helm-test.example.com:443
tunnel_listen_addr: 0.0.0.0:3024
ssh_service:
enabled: false
teleport:
auth_server: RELEASE-NAME-auth.NAMESPACE.svc.cluster.local:3025
join_params:
method: kubernetes
token_name: RELEASE-NAME-proxy
log:
format:
extra_fields:
- timestamp
- level
- component
- caller
output: text
output: stderr
severity: INFO
version: v3
generates a config WITHOUT proxy_service.trust_x_forwarded_for=true when version < 14.0.0 and ingress.enabled=true:
1: |
|-
auth_service:
enabled: false
proxy_service:
enabled: true
public_addr: helm-test.example.com:443
ssh_service:
enabled: false
teleport:
auth_server: RELEASE-NAME-auth.NAMESPACE.svc.cluster.local:3025
join_params:
method: kubernetes
token_name: RELEASE-NAME-proxy
log:
format:
extra_fields:
- timestamp
- level
- component
- caller
output: text
output: stderr
severity: INFO
version: v3
generates a config WITHOUT proxy_service.trust_x_forwarded_for=true when version >=14.0.0 and ingress.enabled is not set:
1: |
|-
auth_service:
enabled: false
proxy_service:
enabled: true
kube_listen_addr: 0.0.0.0:3026
listen_addr: 0.0.0.0:3023
mysql_listen_addr: 0.0.0.0:3036
public_addr: helm-test.example.com:443
tunnel_listen_addr: 0.0.0.0:3024
ssh_service:
enabled: false
teleport:
auth_server: RELEASE-NAME-auth.NAMESPACE.svc.cluster.local:3025
join_params:
method: kubernetes
token_name: RELEASE-NAME-proxy
log:
format:
extra_fields:
- timestamp
- level
- component
- caller
output: text
output: stderr
severity: INFO
version: v3
generates a config with a clusterName containing a regular string:
1: |
|-
@@ -28,6 +114,60 @@ generates a config with a clusterName containing a regular string:
output: stderr
severity: INFO
version: v3
generates a config with proxy_service.trust_x_forwarded_for=true when version = 14.0.0-rc.1 and ingress.enabled=true:
1: |
|-
auth_service:
enabled: false
proxy_service:
enabled: true
public_addr: helm-test.example.com:443
trust_x_forwarded_for: true
ssh_service:
enabled: false
teleport:
auth_server: RELEASE-NAME-auth.NAMESPACE.svc.cluster.local:3025
join_params:
method: kubernetes
token_name: RELEASE-NAME-proxy
log:
format:
extra_fields:
- timestamp
- level
- component
- caller
output: text
output: stderr
severity: INFO
version: v3
generates a config with proxy_service.trust_x_forwarded_for=true when version >=14.0.0 and ingress.enabled=true:
1: |
|-
auth_service:
enabled: false
proxy_service:
enabled: true
public_addr: helm-test.example.com:443
trust_x_forwarded_for: true
ssh_service:
enabled: false
teleport:
auth_server: RELEASE-NAME-auth.NAMESPACE.svc.cluster.local:3025
join_params:
method: kubernetes
token_name: RELEASE-NAME-proxy
log:
format:
extra_fields:
- timestamp
- level
- component
- caller
output: text
output: stderr
severity: INFO
version: v3
matches snapshot for acme-on.yaml:
1: |
|-


@@ -1,9 +1,27 @@
does not expose separate listener ports by default when ingress.enabled=true:
1: |
- name: tls
port: 443
protocol: TCP
targetPort: 3080
does not expose separate listener ports when running in separate mode and ingress.enabled=true:
1: |
- name: tls
port: 443
protocol: TCP
targetPort: 3080
exposes a single port when running in multiplex mode:
1: |
- name: tls
port: 443
protocol: TCP
targetPort: 3080
exposes a single port when running in multiplex mode and ingress.enabled=true:
1: |
- name: tls
port: 443
protocol: TCP
targetPort: 3080
exposes separate listener ports by default:
1: |
- name: tls


@@ -0,0 +1,502 @@
suite: Proxy Ingress
templates:
- proxy/ingress.yaml
tests:
- it: does not create an Ingress by default
set:
clusterName: teleport.example.com
asserts:
- hasDocuments:
count: 0
- it: creates an Ingress when ingress.enabled=true and proxyListenerMode=multiplex
values:
- ../.lint/ingress.yaml
asserts:
- hasDocuments:
count: 1
- isKind:
of: Ingress
- it: fails to deploy an Ingress when ingress.enabled=true and proxyListenerMode is not set
values:
- ../.lint/ingress.yaml
set:
proxyListenerMode: ""
asserts:
- failedTemplate:
errorMessage: "Use of an ingress requires TLS multiplexing to be enabled, so you must also set proxyListenerMode=multiplex - see https://goteleport.com/docs/architecture/tls-routing/"
- it: fails to deploy an Ingress when ingress.enabled=true and proxyListenerMode=separate
values:
- ../.lint/ingress.yaml
set:
proxyListenerMode: separate
asserts:
- failedTemplate:
errorMessage: "Use of an ingress requires TLS multiplexing to be enabled, so you must also set proxyListenerMode=multiplex - see https://goteleport.com/docs/architecture/tls-routing/"
- it: wears annotations when set
values:
- ../.lint/ingress.yaml
set:
annotations:
ingress:
test-annotation: test-annotation-value
another-annotation: some-other-value
asserts:
- hasDocuments:
count: 1
- isKind:
of: Ingress
- equal:
path: metadata.annotations.test-annotation
value: test-annotation-value
- equal:
path: metadata.annotations.another-annotation
value: some-other-value
- it: sets the clusterName and wildcard of clusterName as hostnames when Ingress is enabled
values:
- ../.lint/ingress.yaml
asserts:
- equal:
path: spec.tls[0].hosts[0]
value: "teleport.example.com"
- contains:
path: spec.tls
content:
hosts:
- "teleport.example.com"
- "*.teleport.example.com"
- equal:
path: spec.rules[0].host
value: "teleport.example.com"
- contains:
path: spec.rules
content:
host: "teleport.example.com"
http:
paths:
- backend:
service:
name: RELEASE-NAME
port:
number: 443
path: /
pathType: Prefix
- equal:
path: spec.rules[1].host
value: "*.teleport.example.com"
- contains:
path: spec.rules
content:
host: "*.teleport.example.com"
http:
paths:
- backend:
service:
name: RELEASE-NAME
port:
number: 443
path: /
pathType: Prefix
- matchSnapshot:
path: spec.tls
- it: does not set a wildcard of clusterName as a hostname when Ingress is enabled and ingress.suppressAutomaticWildcards is true
values:
- ../.lint/ingress.yaml
set:
ingress:
suppressAutomaticWildcards: true
asserts:
- equal:
path: spec.tls[0].hosts[0]
value: "teleport.example.com"
- contains:
path: spec.tls
content:
hosts:
- "teleport.example.com"
- equal:
path: spec.rules[0].host
value: "teleport.example.com"
- contains:
path: spec.rules
content:
host: "teleport.example.com"
http:
paths:
- backend:
service:
name: RELEASE-NAME
port:
number: 443
path: /
pathType: Prefix
- notContains:
path: spec.rules
content:
host: "*.teleport.example.com"
http:
paths:
- backend:
service:
name: RELEASE-NAME
port:
number: 443
path: /
pathType: Prefix
- matchSnapshot:
path: spec.tls
- it: sets the publicAddr and wildcard of publicAddr as hostnames when Ingress is enabled and publicAddr is set
values:
- ../.lint/ingress.yaml
set:
publicAddr: ["helm-lint.example.com"]
asserts:
- equal:
path: spec.tls[0].hosts[0]
value: "helm-lint.example.com"
- contains:
path: spec.tls
content:
hosts:
- "helm-lint.example.com"
- "*.helm-lint.example.com"
- equal:
path: spec.rules[0].host
value: helm-lint.example.com
- contains:
path: spec.rules
content:
host: "helm-lint.example.com"
http:
paths:
- backend:
service:
name: RELEASE-NAME
port:
number: 443
path: /
pathType: Prefix
- equal:
path: spec.rules[1].host
value: "*.helm-lint.example.com"
- contains:
path: spec.rules
content:
host: "*.helm-lint.example.com"
http:
paths:
- backend:
service:
name: RELEASE-NAME
port:
number: 443
path: /
pathType: Prefix
- matchSnapshot:
path: spec.tls
- it: does not set a wildcard of publicAddr as a hostname when Ingress is enabled, publicAddr is set and ingress.suppressAutomaticWildcards is true
values:
- ../.lint/ingress.yaml
set:
publicAddr: ["helm-lint.example.com"]
ingress:
suppressAutomaticWildcards: true
asserts:
- equal:
path: spec.tls[0].hosts[0]
value: "helm-lint.example.com"
- contains:
path: spec.tls
content:
hosts:
- "helm-lint.example.com"
- equal:
path: spec.rules[0].host
value: helm-lint.example.com
- contains:
path: spec.rules
content:
host: "helm-lint.example.com"
http:
paths:
- backend:
service:
name: RELEASE-NAME
port:
number: 443
path: /
pathType: Prefix
- notContains:
path: spec.rules
content:
host: "*.helm-lint.example.com"
http:
paths:
- backend:
service:
name: RELEASE-NAME
port:
number: 443
path: /
pathType: Prefix
- matchSnapshot:
path: spec.tls
- it: trims ports from publicAddr and uses it as the hostname when Ingress is enabled and publicAddr is set
values:
- ../.lint/ingress.yaml
set:
publicAddr: ["helm-lint.example.com:443"]
asserts:
- equal:
path: spec.tls[0].hosts[0]
value: "helm-lint.example.com"
- contains:
path: spec.tls
content:
hosts:
- "helm-lint.example.com"
- "*.helm-lint.example.com"
- equal:
path: spec.rules[0].host
value: "helm-lint.example.com"
- contains:
path: spec.rules
content:
host: helm-lint.example.com
http:
paths:
- backend:
service:
name: RELEASE-NAME
port:
number: 443
path: /
pathType: Prefix
- equal:
path: spec.rules[1].host
value: "*.helm-lint.example.com"
- contains:
path: spec.rules
content:
host: "*.helm-lint.example.com"
http:
paths:
- backend:
service:
name: RELEASE-NAME
port:
number: 443
path: /
pathType: Prefix
- matchSnapshot:
path: spec.tls
- it: exposes all publicAddrs and wildcard publicAddrs as hostnames when Ingress is enabled and multiple publicAddrs are set
values:
- ../.lint/ingress.yaml
set:
publicAddr: ["helm-lint.example.com", "helm-lint-second-domain.example.com"]
asserts:
- equal:
path: spec.tls[0].hosts[0]
value: "helm-lint.example.com"
- equal:
path: spec.tls[0].hosts[1]
value: "helm-lint-second-domain.example.com"
- contains:
path: spec.tls
content:
hosts:
- "helm-lint.example.com"
- "helm-lint-second-domain.example.com"
- "*.helm-lint.example.com"
- "*.helm-lint-second-domain.example.com"
- equal:
path: spec.rules[0].host
value: "helm-lint.example.com"
- equal:
path: spec.rules[1].host
value: "helm-lint-second-domain.example.com"
- equal:
path: spec.rules[2].host
value: "*.helm-lint.example.com"
- equal:
path: spec.rules[3].host
value: "*.helm-lint-second-domain.example.com"
- contains:
path: spec.rules
content:
host: "helm-lint.example.com"
http:
paths:
- backend:
service:
name: RELEASE-NAME
port:
number: 443
path: /
pathType: Prefix
- contains:
path: spec.rules
content:
host: "helm-lint-second-domain.example.com"
http:
paths:
- backend:
service:
name: RELEASE-NAME
port:
number: 443
path: /
pathType: Prefix
- contains:
path: spec.rules
content:
host: "*.helm-lint.example.com"
http:
paths:
- backend:
service:
name: RELEASE-NAME
port:
number: 443
path: /
pathType: Prefix
- contains:
path: spec.rules
content:
host: "*.helm-lint-second-domain.example.com"
http:
paths:
- backend:
service:
name: RELEASE-NAME
port:
number: 443
path: /
pathType: Prefix
- matchSnapshot:
path: spec.tls
# this is a very contrived example which wouldn't even work in reality
# it's just to test the logic in the hostname generation code
- it: does not add additional wildcard publicAddrs when Ingress is enabled and a publicAddr already contains a wildcard
values:
- ../.lint/ingress.yaml
set:
publicAddr: ["helm-lint.example.com", "*.helm-lint.example.com", "helm-lint-second-domain.example.com:443"]
asserts:
- equal:
path: spec.tls[0].hosts[0]
value: "helm-lint.example.com"
- equal:
path: spec.tls[0].hosts[1]
value: "*.helm-lint.example.com"
- equal:
path: spec.tls[0].hosts[2]
value: "helm-lint-second-domain.example.com"
- equal:
path: spec.tls[0].hosts[3]
value: "*.helm-lint-second-domain.example.com"
- contains:
path: spec.tls
content:
hosts:
- "helm-lint.example.com"
- "*.helm-lint.example.com"
- "helm-lint-second-domain.example.com"
- "*.helm-lint-second-domain.example.com"
- equal:
path: spec.rules[0].host
value: "helm-lint.example.com"
- equal:
path: spec.rules[1].host
value: "*.helm-lint.example.com"
- equal:
path: spec.rules[2].host
value: "helm-lint-second-domain.example.com"
- equal:
path: spec.rules[3].host
value: "*.helm-lint-second-domain.example.com"
- contains:
path: spec.rules
content:
host: "helm-lint.example.com"
http:
paths:
- backend:
service:
name: RELEASE-NAME
port:
number: 443
path: /
pathType: Prefix
- contains:
path: spec.rules
content:
host: "*.helm-lint.example.com"
http:
paths:
- backend:
service:
name: RELEASE-NAME
port:
number: 443
path: /
pathType: Prefix
- contains:
path: spec.rules
content:
host: "helm-lint-second-domain.example.com"
http:
paths:
- backend:
service:
name: RELEASE-NAME
port:
number: 443
path: /
pathType: Prefix
- contains:
path: spec.rules
content:
host: "*.helm-lint-second-domain.example.com"
http:
paths:
- backend:
service:
name: RELEASE-NAME
port:
number: 443
path: /
pathType: Prefix
- matchSnapshot:
path: spec.tls
- it: sets spec when passed
values:
- ../.lint/ingress.yaml
set:
ingress:
spec:
ingressClassName: nginx
otherSpecStuff: lint
asserts:
- hasDocuments:
count: 1
- isKind:
of: Ingress
- equal:
path: spec.ingressClassName
value: nginx
- equal:
path: spec.otherSpecStuff
value: lint


@@ -162,3 +162,74 @@ tests:
asserts:
- failedTemplate:
errorMessage: "clusterName must not contain a colon, you can override the cluster's public address with publicAddr"
- it: generates a config with proxy_service.trust_x_forwarded_for=true when version >=14.0.0 and ingress.enabled=true
chart:
version: 14.0.0
values:
- ../.lint/ingress.yaml
set:
clusterName: "helm-test.example.com"
asserts:
- hasDocuments:
count: 1
- isKind:
of: ConfigMap
- matchSnapshot:
path: data.teleport\.yaml
- it: generates a config with proxy_service.trust_x_forwarded_for=true when version = 14.0.0-rc.1 and ingress.enabled=true
chart:
version: "14.0.0-rc.1"
values:
- ../.lint/ingress.yaml
set:
clusterName: "helm-test.example.com"
asserts:
- hasDocuments:
count: 1
- isKind:
of: ConfigMap
- matchSnapshot:
path: data.teleport\.yaml
- it: generates a config WITHOUT proxy_service.trust_x_forwarded_for=true when version >=14.0.0 and ingress.enabled is not set
chart:
version: 14.0.0
set:
clusterName: "helm-test.example.com"
asserts:
- hasDocuments:
count: 1
- isKind:
of: ConfigMap
- matchSnapshot:
path: data.teleport\.yaml
- it: generates a config WITHOUT proxy_service.trust_x_forwarded_for=true when version < 14.0.0 and ingress.enabled=true
chart:
version: 13.1.5
values:
- ../.lint/ingress.yaml
set:
clusterName: "helm-test.example.com"
asserts:
- hasDocuments:
count: 1
- isKind:
of: ConfigMap
- matchSnapshot:
path: data.teleport\.yaml
- it: generates a config WITHOUT proxy_service.trust_x_forwarded_for=true when version < 14.0.0 and ingress.enabled is not set
chart:
version: 14.0.0
set:
clusterName: "helm-test.example.com"
asserts:
- hasDocuments:
count: 1
- isKind:
of: ConfigMap
- matchSnapshot:
path: data.teleport\.yaml


@@ -28,6 +28,115 @@ tests:
path: spec.type
value: ClusterIP
- it: uses a ClusterIP when proxy.service.type=ClusterIP
set:
clusterName: teleport.example.com
service:
type: NodePort
proxy:
service:
type: ClusterIP
asserts:
- hasDocuments:
count: 1
- isKind:
of: Service
- equal:
path: spec.type
value: ClusterIP
- it: fails to deploy when ingress.enabled=true and proxy.service.type is set to LoadBalancer (default)
set:
clusterName: teleport.example.com
ingress:
enabled: true
asserts:
- failedTemplate:
errorMessage: "proxy.service.type must not be LoadBalancer when using an ingress - any load balancer should be provisioned by your ingress controller. Set proxy.service.type=ClusterIP instead"
- it: uses a ClusterIP when ingress.enabled=true and service.type=ClusterIP
set:
clusterName: teleport.example.com
ingress:
enabled: true
service:
type: ClusterIP
asserts:
- hasDocuments:
count: 1
- isKind:
of: Service
- equal:
path: spec.type
value: ClusterIP
- it: uses a ClusterIP when ingress.enabled=true and proxy.service.type=ClusterIP
set:
clusterName: teleport.example.com
ingress:
enabled: true
proxy:
service:
type: ClusterIP
asserts:
- hasDocuments:
count: 1
- isKind:
of: Service
- equal:
path: spec.type
value: ClusterIP
- it: uses a NodePort when ingress.enabled=true and proxy.service.type=NodePort
set:
clusterName: teleport.example.com
ingress:
enabled: true
proxy:
service:
type: NodePort
asserts:
- hasDocuments:
count: 1
- isKind:
of: Service
- equal:
path: spec.type
value: NodePort
- it: uses a NodePort when ingress.enabled=true and service.type=NodePort
set:
clusterName: teleport.example.com
ingress:
enabled: true
service:
type: NodePort
asserts:
- hasDocuments:
count: 1
- isKind:
of: Service
- equal:
path: spec.type
value: NodePort
- it: uses a NodePort when ingress.enabled=true and proxy.service.type is overridden
set:
clusterName: teleport.example.com
ingress:
enabled: true
proxy:
service:
type: NodePort
asserts:
- hasDocuments:
count: 1
- isKind:
of: Service
- equal:
path: spec.type
value: NodePort
- it: sets AWS annotations when chartMode=aws
set:
clusterName: teleport.example.com
@@ -73,6 +182,24 @@ tests:
targetPort: 5432
protocol: TCP
- it: does not add a separate Postgres listener port when separatePostgresListener is true and ingress.enabled=true
values:
- ../.lint/separate-postgres-listener.yaml
set:
ingress:
enabled: true
proxyListenerMode: multiplex
service:
type: ClusterIP
asserts:
- notContains:
path: spec.ports
content:
name: postgres
port: 5432
targetPort: 5432
protocol: TCP
- it: adds a separate Mongo listener port when separateMongoListener is true
values:
- ../.lint/separate-mongo-listener.yaml
@@ -85,6 +212,24 @@ tests:
targetPort: 27017
protocol: TCP
- it: does not add a separate Mongo listener port when separateMongoListener is true and ingress.enabled=true
values:
- ../.lint/separate-mongo-listener.yaml
set:
ingress:
enabled: true
proxyListenerMode: multiplex
service:
type: ClusterIP
asserts:
- notContains:
path: spec.ports
content:
name: mongo
port: 27017
targetPort: 27017
protocol: TCP
- it: sets AWS backend protocol annotation to ssl when in AWS mode and ACM annotation is set
values:
- ../.lint/aws-ha.yaml
@@ -98,6 +243,22 @@ tests:
path: metadata.annotations.service\.beta\.kubernetes\.io/aws-load-balancer-backend-protocol
value: ssl
- it: does not add AWS backend protocol annotation when in AWS mode, ACM annotation is set and ingress is enabled
values:
- ../.lint/aws-ha.yaml
set:
ingress:
enabled: true
service:
type: ClusterIP
annotations:
service:
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:1234567890:certificate/a857a76c-51d0-4d3d-8000-465bb3e9829b
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: 443
asserts:
- isNull:
path: metadata.annotations.service\.beta\.kubernetes\.io/aws-load-balancer-backend-protocol
- it: sets AWS backend protocol annotation to tcp when in AWS mode and ACM annotation is not set
values:
- ../.lint/aws-ha.yaml
@@ -106,6 +267,22 @@ tests:
path: metadata.annotations.service\.beta\.kubernetes\.io/aws-load-balancer-backend-protocol
value: tcp
- it: does not set AWS backend protocol annotation when in AWS mode, ACM annotation is not set and ingress is enabled
values:
- ../.lint/aws-ha.yaml
set:
ingress:
enabled: true
service:
type: ClusterIP
annotations:
service:
# required so at least one service annotation exists, to avoid non map type error
service.beta.kubernetes.io/random-annotation: helm-lint
asserts:
- isNull:
path: metadata.annotations.service\.beta\.kubernetes\.io/aws-load-balancer-backend-protocol
- it: exposes separate listener ports by default
values:
- ../.lint/example-minimal-standalone.yaml
@@ -113,6 +290,38 @@ tests:
- matchSnapshot:
path: spec.ports
- it: does not expose separate listener ports by default when ingress.enabled=true
values:
- ../.lint/example-minimal-standalone.yaml
set:
ingress:
enabled: true
proxyListenerMode: multiplex
service:
type: ClusterIP
asserts:
- notContains:
path: spec.ports
content:
- name: sshproxy
port: 3023
targetPort: 3023
protocol: TCP
- name: k8s
port: 3026
targetPort: 3026
protocol: TCP
- name: sshtun
port: 3024
targetPort: 3024
protocol: TCP
- name: mysql
port: 3036
targetPort: 3036
protocol: TCP
- matchSnapshot:
path: spec.ports
- it: exposes separate listener ports when running in separate mode
values:
- ../.lint/proxy-listener-mode-separate.yaml
@@ -120,9 +329,53 @@ tests:
- matchSnapshot:
path: spec.ports
- it: does not expose separate listener ports when running in separate mode and ingress.enabled=true
values:
- ../.lint/proxy-listener-mode-separate.yaml
set:
ingress:
enabled: true
proxyListenerMode: multiplex
service:
type: ClusterIP
asserts:
- notContains:
path: spec.ports
content:
- name: sshproxy
port: 3023
targetPort: 3023
protocol: TCP
- name: k8s
port: 3026
targetPort: 3026
protocol: TCP
- name: sshtun
port: 3024
targetPort: 3024
protocol: TCP
- name: mysql
port: 3036
targetPort: 3036
protocol: TCP
- matchSnapshot:
path: spec.ports
- it: exposes a single port when running in multiplex mode
values:
- ../.lint/proxy-listener-mode-multiplex.yaml
asserts:
- matchSnapshot:
path: spec.ports
- it: exposes a single port when running in multiplex mode and ingress.enabled=true
values:
- ../.lint/proxy-listener-mode-multiplex.yaml
set:
ingress:
enabled: true
service:
type: ClusterIP
asserts:
- matchSnapshot:
path: spec.ports


@@ -682,7 +682,8 @@
"pod",
"service",
"serviceAccount",
"certSecret"
"certSecret",
"ingress"
],
"properties": {
"config": {
@@ -736,6 +737,23 @@
}
}
},
"ingress": {
"enabled": {
"$id": "#/properties/ingress/enabled",
"type": "boolean",
"default": false
},
"suppressAutomaticWildcards": {
"$id": "#/properties/ingress/suppressAutomaticWildcards",
"type": "boolean",
"default": false
},
"spec": {
"$id": "#/properties/ingress/spec",
"type": "object",
"default": {}
}
},
"serviceAccount": {
"$id": "#/properties/serviceAccount",
"type": "object",


@@ -144,8 +144,7 @@ authentication:
# Teleport supports TLS routing. In this mode, all client connections are wrapped in TLS and multiplexed on one Teleport proxy port.
# Default mode will not utilize TLS routing and operate in backwards-compatibility mode.
#
# WARNING: setting this value to 'multiplex' requires Teleport to terminate TLS itself.
# TLS multiplexing is not supported when using ACM+NLB for TLS termination.
# To use an ingress, set proxyListenerMode=multiplex, ingress.enabled=true and service.type=ClusterIP
#
# Possible values are 'separate' and 'multiplex'
proxyListenerMode: "separate"
@@ -499,6 +498,8 @@ annotations:
# Annotations for the certificate secret generated by cert-manager v1.5+ when
# highAvailability.certManager.enabled is true
certSecret: {}
# Annotations for the Ingress object
ingress: {}
# Kubernetes service account to create/use.
serviceAccount:
@@ -516,13 +517,30 @@ rbac:
# Set to false if your cluster level resources are managed separately.
create: true
# Options for the Teleport service
# Options for the Teleport proxy service
# This setting only applies to the proxy service. The teleport auth service is internal-only and always uses a ClusterIP.
# You can override the proxy's backend service to any service type (other than "LoadBalancer") here if really needed.
# To use an Ingress, set service.type=ClusterIP and ingress.enabled=true
service:
type: LoadBalancer
# Additional entries here will be added to the service spec.
spec: {}
# loadBalancerIP: "1.2.3.4"
# Options for ingress
# If you set ingress.enabled to true, service.type MUST also be set to something other than "LoadBalancer" to prevent
# additional unnecessary load balancers from being created. Ingress controllers should provision their own load balancer.
# Using an Ingress also requires that you use the `tsh` client to connect to Kubernetes clusters and databases behind Teleport.
# See https://goteleport.com/docs/architecture/tls-routing/#working-with-layer-7-load-balancers-or-reverse-proxies-preview for details.
ingress:
enabled: false
# Setting suppressAutomaticWildcards to true will not automatically add *.<clusterName> as a hostname served
# by the Ingress. This may be desirable if you don't use Teleport Application Access.
suppressAutomaticWildcards: false
# Additional entries here will be added to the ingress spec.
spec: {}
# ingressClassName: nginx
# Extra arguments to pass to 'teleport start' for the main Teleport pod
extraArgs: []