Add an Elastic Stack guide (#16646)

* Add an Elastic Stack guide

Closes #13157

Add a guide to using Teleport's Event Handler plugin with the Elastic
Stack.

* Respond to PR feedback
Paul Gottschling 2022-09-29 11:23:25 -04:00 committed by GitHub
parent 2754a513b8
commit 448b52e9a3
7 changed files with 848 additions and 1 deletion


@@ -449,7 +449,13 @@
     },
     {
       "title": "Fluentd",
-      "slug": "/management/guides/fluentd/"
+      "slug": "/management/guides/fluentd/",
+      "forScopes": ["enterprise", "cloud"]
     },
+    {
+      "title": "Monitor Audit Events with the Elastic Stack",
+      "slug": "/management/guides/elastic-stack/",
+      "forScopes": ["enterprise", "cloud"]
+    },
     {
       "title": "EC2 Tags",

4 binary image files added (the screenshots referenced in the new guide): 567 KiB, 505 KiB, 1.2 MiB, 713 KiB.


@@ -8,6 +8,9 @@ layout: tocless-doc
 - [Terraform Provider](./guides/terraform-provider.mdx). How to configure Teleport Cloud, Open Source, and Enterprise with the Terraform Provider for Teleport.
 - [Docker](./guides/docker.mdx). Getting started with Teleport Open Source using Docker.
 - [Fluentd Event Forwarder](./guides/fluentd.mdx). Forwarding events with Fluentd and the Teleport Events Handler.
+- [Monitor Audit Events with the Elastic Stack](./guides/elastic-stack.mdx).
+  How to send audit events to Elasticsearch for visualization and alerting in
+  Kibana.
 - [EC2 tags as Teleport Nodes](./guides/ec2-tags.mdx). How to set up Teleport Node labels based on EC2 tags.
 - [Joining Nodes via AWS IAM Role](./guides/joining-nodes-aws-iam.mdx). Use the IAM join method to add Nodes to your Teleport cluster on AWS.
 <ScopedBlock scope={["oss", "enterprise"]}>


@@ -0,0 +1,838 @@
---
title: "Monitor Teleport Audit Events with the Elastic Stack"
description: "How to configure Teleport's Event Handler plugin to send audit events to the Elastic Stack"
---
Teleport's Event Handler plugin receives audit events from the Teleport Auth
Service and forwards them to your log management solution, letting you perform
historical analysis, detect unusual behavior, and form a better understanding of
how users interact with your Teleport cluster.
In this guide, we will show you how to configure Teleport's Event Handler plugin
to send your Teleport audit events to the Elastic Stack. In this setup, the
Event Handler plugin forwards audit events from Teleport to Logstash, which
stores them in Elasticsearch for visualization and alerting in Kibana.
## Prerequisites
(!docs/pages/includes/commercial-prereqs-tabs.mdx!)
- Logstash version 8.4.1 or above running on a Linux host. Logstash must be
listening on a TCP port that is open to traffic from <ScopedBlock
scope={["oss", "enterprise"]}>the Teleport Auth
Service</ScopedBlock><ScopedBlock scope="cloud">your Teleport Cloud
tenant</ScopedBlock>. In this guide, you will also run the Event Handler
plugin on this host.
- Elasticsearch and Kibana version 8.4.1 or above, either running via an Elastic
Cloud account or on your own infrastructure. You will need permissions to
create and manage users in Elasticsearch.
We have tested this guide on the Elastic Stack version 8.4.1.
## Step 1/4. Set up the Event Handler plugin
The Event Handler plugin is a binary that runs independently of your Teleport
cluster. It authenticates to your Teleport cluster and Logstash using mutual
TLS. In this section, you will install the Event Handler plugin on the Linux
host where you are running Logstash and generate credentials that the plugin
will use for authentication.
### Install the Event Handler plugin
Follow the instructions for your environment to install the Event Handler
plugin on your Logstash host:
<Tabs dropdownCaption="Method" dropdownSelected="Executable">
<TabItem options="Executable" label="Latest Release">
On the host where you are running Logstash, execute the following commands:
```code
$ curl -L -O https://get.gravitational.com/teleport-event-handler-v(=teleport.plugin.version=)-linux-amd64-bin.tar.gz
$ tar -zxvf teleport-event-handler-v(=teleport.plugin.version=)-linux-amd64-bin.tar.gz
```
Move the `teleport-event-handler` binary into your `PATH`.
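  For example, assuming `/usr/local/bin` is in your `PATH`:
  ```code
  $ sudo mv teleport-event-handler /usr/local/bin/
  $ which teleport-event-handler
  /usr/local/bin/teleport-event-handler
  ```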
</TabItem>
<TabItem options="Docker" label="Latest Release ">
On the host where you are running Logstash, execute the following command:
```code
$ docker pull public.ecr.aws/gravitational/teleport-plugin-event-handler:(=teleport.plugin.version=)
```
</TabItem>
<TabItem options="Docker" label="From Source">
You must have Docker Desktop installed on your Logstash host.
Log in to the host and execute the following commands to build the plugin:
```code
$ git clone https://github.com/gravitational/teleport-plugins.git --depth 1
$ cd teleport-plugins/event-handler/build.assets
$ make build
```
You can find the compiled binary within your clone of the `teleport-plugins`
repo, with the file path, `event-handler/build/teleport-event-handler`. Move
this into your `PATH`.
</TabItem>
<TabItem options="Executable" label="From Source ">
You will need Go >= (=teleport.golang=) installed.
Run the following commands on your Logstash host:
```code
$ git clone https://github.com/gravitational/teleport-plugins.git --depth 1
$ cd teleport-plugins/event-handler
$ go build
```
The resulting executable will have the name `event-handler`. To follow the
rest of this guide, rename this file to `teleport-event-handler` and move it
into your `PATH`.
</TabItem>
</Tabs>
### Generate a starter config file
Generate a configuration file with placeholder values for the Teleport Event
Handler plugin. Later in this guide, we will edit the configuration file for
your environment.
On the host where you are running Logstash and the Event Handler plugin, execute
the following command to generate a sample configuration that we will edit
later. Replace `mytenant.teleport.sh` with the DNS name of your Teleport Proxy
Service or Teleport Cloud tenant:
```code
$ teleport-event-handler configure . mytenant.teleport.sh
```
The `teleport-event-handler configure` command generates several files that you
will refer to when generating credentials for the plugin and configuring a
Logstash pipeline:
|File(s)|Purpose|
|---|---|
| `ca.crt` | Self-signed CA certificate that the Event Handler plugin and Logstash will use to validate one another's TLS certificates.|
| `server.crt` and `server.key`| TLS certificate and key for the Logstash server.|
| `client.crt` and `client.key`| TLS certificate and key for the Event Handler plugin.|
| `teleport-event-handler-role.yaml`| `user` and `role` resource definitions for the Event Handler plugin.|
| `fluent.conf`| Fluentd configuration file. We will use parts of this to define a Logstash pipeline.|
| `teleport-event-handler.toml` | Starter configuration file for the Event Handler plugin.|
| `ca.key` | Not used in this setup. |
### Define RBAC resources
The `teleport-event-handler configure` command generated a file called
`teleport-event-handler-role.yaml`. This file defines a `teleport-event-handler`
role and a user with read-only access to `event` and `session` resources:
```yaml
kind: role
metadata:
  name: teleport-event-handler
spec:
  allow:
    rules:
      - resources: ['event', 'session']
        verbs: ['list', 'read']
version: v5
---
kind: user
metadata:
  name: teleport-event-handler
spec:
  roles: ['teleport-event-handler']
version: v2
```
Use `tctl` to create the role and the user:
```code
$ tctl create -f teleport-event-handler-role.yaml
# user "teleport-event-handler" has been created
# role 'teleport-event-handler' has been created
```
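You can optionally confirm that both resources now exist:
```code
$ tctl get users/teleport-event-handler
$ tctl get roles/teleport-event-handler
```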
<Details title="Using tctl on the Logstash host?">
If you are running Teleport on your Elastic Stack host, e.g., you are exposing
Kibana's HTTP endpoint via the Teleport Application Service, running the `tctl
create` command above will generate an error similar to the following:
```text
ERROR: tctl must be either used on the auth server or provided with the identity file via --identity flag
```
To avoid this error, create the `teleport-event-handler-role.yaml` file on your
workstation, then sign in to your Teleport cluster and run the `tctl` command
locally.
</Details>
### Enable impersonation of the Event Handler plugin user
In order for the Event Handler plugin to forward events from your Teleport
cluster, it needs signed credentials from the cluster's certificate authority.
The `teleport-event-handler` user cannot request this itself, and requires
another user to **impersonate** this account in order to request credentials.
Create a role that enables your user to impersonate the `teleport-event-handler`
user. First, paste the following YAML document into a file called
`teleport-event-handler-impersonator.yaml`:
```yaml
kind: role
version: v5
metadata:
  name: teleport-event-handler-impersonator
spec:
  options:
    # max_session_ttl defines the TTL (time to live) of SSH certificates
    # issued to users with this role.
    max_session_ttl: 10h
  allow:
    # This section declares the users and roles that holders of this role
    # are allowed to impersonate. By default, no impersonation is allowed.
    impersonate:
      users: ["teleport-event-handler"]
      roles: ["teleport-event-handler"]
```
Next, create the role:
```code
$ tctl create teleport-event-handler-impersonator.yaml
```
Assign this role to your Teleport user by running the following commands,
depending on whether you authenticate as a local Teleport user or via the
`github`, `saml`, or `oidc` authentication connectors:
<Tabs>
<TabItem label="Local User">
Retrieve your local user's configuration resource:
```code
$ tctl get users/$(tsh status -f json | jq -r '.active.username') > out.yaml
```
Edit `out.yaml`, adding `teleport-event-handler-impersonator` to the list of
existing roles.
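For example (a sketch; your user resource will include other fields):
```diff
  spec:
    roles:
    - access
+   - teleport-event-handler-impersonator
```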
Apply your changes:
```code
$ tctl create -f out.yaml
```
</TabItem>
<TabItem label="GitHub">
Retrieve your `github` configuration resource:
```code
$ tctl get github/github > github.yaml
```
Edit `github.yaml`, adding `teleport-event-handler-impersonator` to the
`teams_to_roles` section. The team you will map to this role will depend on how
you have designed your organization's RBAC, but it should be the smallest team
possible within your organization. This team must also include your user.
Here is an example:
```diff
  teams_to_roles:
    - organization: octocats
      team: admins
      roles:
        - access
+       - teleport-event-handler-impersonator
```
Apply your changes:
```code
$ tctl create -f github.yaml
```
</TabItem>
<TabItem label="SAML">
Retrieve your `saml` configuration resource:
```code
$ tctl get saml/mysaml > saml.yaml
```
Edit `saml.yaml`, adding `teleport-event-handler-impersonator` to the
`attributes_to_roles` section. The attribute you will map to this role will
depend on how you have designed your organization's RBAC, but it should be the
smallest group possible within your organization. This group must also include
your user.
Here is an example:
```diff
  attributes_to_roles:
    - name: "groups"
      value: "my-group"
      roles:
        - access
+       - teleport-event-handler-impersonator
```
Apply your changes:
```code
$ tctl create -f saml.yaml
```
</TabItem>
<TabItem label="OIDC">
Retrieve your `oidc` configuration resource:
```code
$ tctl get oidc/myoidc > oidc.yaml
```
Edit `oidc.yaml`, adding `teleport-event-handler-impersonator` to the
`claims_to_roles` section. The claim you will map to this role will depend on
how you have designed your organization's RBAC, but it should be the smallest
group possible within your organization. This group must also include your
user.
Here is an example:
```diff
  claims_to_roles:
    - name: "groups"
      value: "my-group"
      roles:
        - access
+       - teleport-event-handler-impersonator
```
Apply your changes:
```code
$ tctl create -f oidc.yaml
```
</TabItem>
</Tabs>
Log out of your Teleport cluster and log in again to assume the new role.
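For example, replacing `mytenant.teleport.sh` with the address of your Proxy
Service or Teleport Cloud tenant:
```code
$ tsh logout
$ tsh login --proxy=mytenant.teleport.sh
```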
### Export the access plugin identity
Like all Teleport users, `teleport-event-handler` needs signed credentials in
order to connect to your Teleport cluster. You will use the `tctl auth sign`
command to request these credentials for the plugin.
<ScopedBlock scope={["enterprise", "oss"]}>
The format of the credentials depends on whether you have set up your network to
give the plugin direct access to the Teleport Auth Service, or if all Teleport
clients and services connect to the Teleport Proxy Service instead.
<Tabs>
<TabItem label="Connect to the Proxy Service">
The following `tctl auth sign` command impersonates the `teleport-event-handler`
user, generates signed credentials, and writes an identity file to the local
directory:
```code
$ tctl auth sign --user=teleport-event-handler --out=auth.pem
```
The Event Handler plugin listens for audit events by connecting to the Teleport
Auth Service's gRPC endpoint over TLS.
The identity file, `auth.pem`, includes both TLS and SSH credentials. Your
Event Handler plugin uses the SSH credentials to connect to the Proxy Service,
which establishes a reverse tunnel connection to the Auth Service. The plugin
uses this reverse tunnel, along with your TLS credentials, to connect to the
Auth Service's gRPC endpoint.
You will refer to this file later when configuring the plugin.
</TabItem>
<TabItem label="Connect to the Auth Service">
If your network allows your plugin to access the Auth Service directly, e.g.,
you are running the plugin on the Auth Service host, the plugin uses TLS
credentials to connect to the Auth Service's gRPC endpoint and listen for audit
events.
You can generate TLS credentials with the following command:
```code
$ tctl auth sign --format=tls --user=teleport-event-handler --out=auth
```
This command should result in three PEM-encoded files: `auth.crt`,
`auth.key`, and `auth.cas` (certificate, private key, and CA certs
respectively). Later, you will configure the plugin to use these credentials to
connect to the Auth Service.
</TabItem>
</Tabs>
</ScopedBlock>
<ScopedBlock scope="cloud">
The following `tctl auth sign` command impersonates the `teleport-event-handler`
user, generates signed credentials, and writes an identity file to the local
directory:
```code
$ tctl auth sign --user=teleport-event-handler --out=auth.pem
```
Teleport's Event Handler plugin listens for new and updated audit events by
connecting to the Teleport Auth Service's gRPC endpoint over TLS.
The identity file, `auth.pem`, includes both TLS and SSH credentials. The Event
Handler plugin uses the SSH credentials to connect to the Proxy Service, which
establishes a reverse tunnel connection to the Auth Service. The plugin uses
this reverse tunnel, along with your TLS credentials, to connect to the Auth
Service's gRPC endpoint.
You will refer to this file later when configuring the plugin.
</ScopedBlock>
<Admonition
title="Certificate Lifetime"
>
By default, `tctl auth sign` produces certificates with a relatively short
lifetime. For production deployments, you can use the `--ttl` flag to ensure a
more practical certificate lifetime, e.g., `--ttl=8760h` to export a one-year
certificate.
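For example:
```code
$ tctl auth sign --user=teleport-event-handler --ttl=8760h --out=auth.pem
```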
</Admonition>
## Step 2/4. Configure a Logstash pipeline
The Event Handler plugin forwards audit logs from Teleport by sending HTTP
requests to a user-configured endpoint. We will define a Logstash pipeline that
handles these requests, extracts logs, and sends them to Elasticsearch.
### Create a role for the Event Handler plugin
Your Logstash pipeline will require permissions to create and manage
Elasticsearch indexes and index lifecycle management policies, plus get
information about your Elasticsearch deployment. Create a role with these
permissions so you can later assign it to the Elasticsearch user you will
create for the Event Handler.
In Kibana, navigate to "Management" > "Roles" and click "Create role". Enter the
name `teleport-plugin` for the new role. Under the "Elasticsearch" section,
under "Cluster privileges", enter `manage_index_templates`, `manage_ilm`, and
`monitor`.
Under "Index privileges", define an entry with `audit-events-*` in the "Indices"
field and `write` and `manage` in the "Privileges" field. Click "Create role".
![Creating an Elasticsearch role](../../../img/enterprise/plugins/elasticsearch/create-role.png)
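If you prefer to work outside the Kibana UI, you can create an equivalent role
with the Elasticsearch security API. This is a sketch that assumes your
Elasticsearch endpoint is `https://elasticsearch.example.com:9200` and that you
authenticate as a superuser such as `elastic`:
```code
$ curl -u elastic -X POST "https://elasticsearch.example.com:9200/_security/role/teleport-plugin" \
  -H 'Content-Type: application/json' \
  -d '{
    "cluster": ["manage_index_templates", "manage_ilm", "monitor"],
    "indices": [
      {
        "names": ["audit-events-*"],
        "privileges": ["write", "manage"]
      }
    ]
  }'
```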
### Create an Elasticsearch user for the Event Handler
Create an Elasticsearch user that Logstash can authenticate as when making
requests to the Elasticsearch API.
In Kibana, find the hamburger menu on the upper left and click "Management",
then "Users" > "Create user". Enter `teleport` for the "Username" and provide a
secure password.
Assign the user the `teleport-plugin` role we defined earlier.
### Prepare TLS credentials for Logstash
Later in this guide, your Logstash pipeline will use an HTTP input to receive
audit events from the Teleport Event Handler plugin.
Logstash's HTTP input can only present a TLS certificate whose private key is in
the unencrypted PKCS #8 format. When you ran `teleport-event-handler configure`
earlier, the command generated an encrypted RSA key. We will convert this key to
PKCS #8.
You will need a password to decrypt the RSA key. To retrieve this, execute the
following command in the directory where you ran `teleport-event-handler
configure`:
```code
$ cat fluent.conf | grep passphrase
private_key_passphrase "ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"
```
Convert the encrypted RSA key to an unencrypted PKCS #8 key. The command will
prompt you for the password you retrieved:
```code
$ openssl pkcs8 -topk8 -in server.key -nocrypt -out pkcs8.key
```
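You can confirm the conversion by inspecting the key's PEM header, which should
read `BEGIN PRIVATE KEY` (rather than `BEGIN ENCRYPTED PRIVATE KEY` or `BEGIN
RSA PRIVATE KEY`):
```code
$ head -1 pkcs8.key
-----BEGIN PRIVATE KEY-----
```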
Enable Logstash to read the new key, plus the CA and certificate we generated
earlier:
```code
$ chmod +r pkcs8.key ca.crt server.crt
```
### Define an index template
When the Event Handler plugin sends audit events to Logstash, Logstash needs to
know how to parse these events to forward them to Elasticsearch. You can define
this logic using an index template, which Elasticsearch uses to construct an
index for data it receives.
Create a file called `audit-events.json` with the following content:
```json
{
"index_patterns": ["audit-events-*"],
"template": {
"settings": {},
"mappings": {
"dynamic":"true"
}
}
}
```
This index template applies to any index whose name matches the pattern `audit-events-*`.
Because it includes the `"dynamic": "true"` setting, it instructs Elasticsearch
to define index fields dynamically based on the events it receives. This is
useful for Teleport audit events, which use a variety of fields depending on
the event type.
### Define a Logstash pipeline
On the host where you are running Logstash, create a configuration file that
defines a Logstash pipeline. This pipeline will receive logs from port `9601`
and forward them to Elasticsearch.
On the host running Logstash, create a file called
`/etc/logstash/conf.d/teleport-audit.conf` with the following content:
```ruby
input {
http {
port => 9601
ssl => true
ssl_certificate => "/home/server.crt"
ssl_key => "/home/pkcs8.key"
ssl_certificate_authorities => [
"/home/ca.crt"
]
ssl_verify_mode => "force_peer"
}
}
output {
elasticsearch {
user => "teleport"
password => "ELASTICSEARCH_PASSPHRASE"
template_name => "audit-events"
template => "/home/audit-events.json"
index => "audit-events-%{+yyyy.MM.dd}"
template_overwrite => true
}
}
```
In the `input.http` section, update `ssl_certificate` and
`ssl_certificate_authorities` to include the locations of the server certificate
and certificate authority files that the `teleport-event-handler configure`
command generated earlier.
Logstash will authenticate client certificates against the CA file and present a
signed certificate to the Teleport Event Handler plugin.
Edit the `ssl_key` field to include the path to the `pkcs8.key` file we
generated earlier.
In the `output.elasticsearch` section, edit the following fields depending on
whether you are using Elastic Cloud or your own Elastic Stack deployment:
<Tabs>
<TabItem label="Elastic Cloud">
Assign `cloud_auth` to a string with the content `teleport:PASSWORD`, replacing
`PASSWORD` with the password you assigned to your `teleport` user earlier.
Visit `https://cloud.elastic.co/deployments`, find the "Cloud ID" field, copy
the content, and add it as the value of `cloud_id` in your Logstash pipeline
configuration. The `elasticsearch` section should resemble the following:
```ruby
elasticsearch {
cloud_id => "CLOUD_ID"
cloud_auth => "teleport:PASSWORD"
template_name => "audit-events"
template => "/home/audit-events.json"
index => "audit-events-%{+yyyy.MM.dd}"
template_overwrite => true
}
```
</TabItem>
<TabItem label="Self-Hosted">
Assign `hosts` to a string indicating the hostname of your Elasticsearch host.
Assign `user` to `teleport` and `password` to the passphrase you created for
your `teleport` user earlier.
The `elasticsearch` section should resemble the following:
```ruby
elasticsearch {
hosts => "elasticsearch.example.com"
user => "teleport"
password => "PASSWORD"
template_name => "audit-events"
template => "/home/audit-events.json"
index => "audit-events-%{+yyyy.MM.dd}"
template_overwrite => true
}
```
</TabItem>
</Tabs>
Finally, modify `template` to point to the path to the `audit-events.json` file
you created earlier.
Because the index template defined in `audit-events.json` applies to indices
matching the pattern `audit-events-*`, and we have configured our Logstash
pipeline to create an index named `audit-events-%{+yyyy.MM.dd}`, Elasticsearch
will automatically index fields from Teleport audit events.
### Disable the Elastic Common Schema for your pipeline
The Elastic Common Schema (ECS) is a standard set of fields that Elastic Stack
uses to parse and visualize data. Since we are configuring Elasticsearch to
index all fields from your Teleport audit logs dynamically, we will disable the
ECS for your Logstash pipeline.
On the host where you are running Logstash, edit `/etc/logstash/pipelines.yml`
to add the following entry:
```yaml
- pipeline.id: teleport-audit-logs
path.config: "/etc/logstash/conf.d/teleport-audit.conf"
pipeline.ecs_compatibility: disabled
```
This disables the ECS for your Teleport audit log pipeline.
<Admonition type="tip">
If your `pipelines.yml` file defines an existing pipeline that includes
`teleport-audit.conf`, e.g., by using a wildcard value in `path.config`, adjust
the existing pipeline definition so it no longer applies to
`teleport-audit.conf`.
</Admonition>
### Run the Logstash pipeline
Restart Logstash:
```code
$ sudo systemctl restart logstash
```
Make sure your Logstash pipeline started successfully by running the following
command to tail Logstash's logs:
```code
$ sudo journalctl -u logstash -f
```
When your Logstash pipeline initializes its `http` input and starts running, you
should see a log similar to this:
```text
Sep 15 18:27:13 myhost logstash[289107]: [2022-09-15T18:27:13,491][INFO ][logstash.inputs.http][main][33bdff0416b6a2b643e6f4ab3381a90c62b3aa05017770f4eb9416d797681024] Starting http input listener {:address=>"0.0.0.0:9601", :ssl=>"true"}
```
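At this point, you can optionally verify that the `http` input enforces mutual
TLS. This sketch assumes you run it from the directory containing the files that
`teleport-event-handler configure` generated:
```code
$ curl --cacert ca.crt --cert client.crt --key client.key \
  -d '{"test": "event"}' https://localhost:9601
```
A request that omits the client certificate and key should fail the TLS
handshake.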
These logs indicate that your Logstash pipeline has connected to Elasticsearch
and installed a new index template:
```text
Sep 12 19:49:06 myhost logstash[33762]: [2022-09-12T19:49:06,309][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch version determined (8.4.1) {:es_version=>8}
Sep 12 19:50:00 myhost logstash[33762]: [2022-09-12T19:50:00,993][INFO ][logstash.outputs.elasticsearch][main] Installing Elasticsearch template {:name=>"audit-events"}
```
<Details title="Pipeline not starting?">
If Logstash fails to initialize the pipeline, it may continue to attempt to
contact Elasticsearch. In that case, you will see repeated logs like the one below:
```text
Sep 12 19:43:04 myhost logstash[33762]: [2022-09-12T19:43:04,519][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://teleport:xxxxxx@127.0.0.1:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::ClientProtocolException] 127.0.0.1:9200 failed to respond"}
```
### Diagnosing the problem
To diagnose the cause of errors initializing your Logstash pipeline, search your
Logstash `journalctl` logs for the following, which indicate that the pipeline is
starting. The relevant error logs should come shortly after these:
```text
Sep 12 18:15:52 myhost logstash[27906]: [2022-09-12T18:15:52,146][INFO][logstash.javapipeline][main] Starting pipeline {:pipeline_id=>"main","pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50,"pipeline.max_inflight"=>250,"pipeline.sources"=>["/etc/logstash/conf.d/teleport-audit.conf"],:thread=>"#<Thread:0x1c1a3ee5 run>"}
Sep 12 18:15:52 myhost logstash[27906]: [2022-09-12T18:15:52,912][INFO][logstash.javapipeline][main] Pipeline Java execution initialization time {"seconds"=>0.76}
```
### Disabling Elasticsearch TLS
This guide assumes that you have already configured Elasticsearch and Logstash
to communicate with one another via TLS.
If your Elastic Stack deployment is in a sandboxed or low-security environment
(e.g., a demo environment), and your `journalctl` logs for Logstash show that
Elasticsearch is unreachable, you can disable TLS for communication between
Logstash and Elasticsearch.
Edit the file `/etc/elasticsearch/elasticsearch.yml` to set
`xpack.security.http.ssl.enabled` to `false`, then restart Elasticsearch.
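For example:
```yaml
xpack.security.http.ssl:
  enabled: false
```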
</Details>
## Step 3/4. Run the Event Handler plugin
### Complete the Event Handler configuration
Earlier, we generated a file called `teleport-event-handler.toml` to configure
the Event Handler plugin. This file includes settings similar to the following:
```toml
storage = "./storage"
timeout = "10s"
batch = 20
namespace = "default"
[forward.fluentd]
ca = "/home/ca.crt"
cert = "/home/client.crt"
key = "/home/client.key"
url = "https://localhost:8888/test.log"
[teleport]
addr = "example.teleport.com:443"
identity = "identity"
```
Update the configuration file as follows.
Change `forward.fluentd.url` to the scheme, host, and port you configured for
your Logstash `http` input earlier: `https://localhost:9601`. Change
`forward.fluentd.session-url` to the same value with the root URL path:
`https://localhost:9601/`.
Change `teleport.addr` to the host and port of your Teleport Proxy Service, or
the Auth Service if you have configured the Event Handler to connect to it
directly, e.g., `mytenant.teleport.sh:443`.
<Tabs>
<TabItem label="Connect to the Proxy Service">
Assign `teleport.identity` to a path to the identity file you exported earlier,
e.g., `/home/auth.pem`.
</TabItem>
<TabItem label="Connect to the Auth Service">
Assign `teleport.ca`, `teleport.cert`, and `teleport.key` to the paths of the
TLS credentials you generated earlier. Respectively, these are the certificate
authority, certificate, and private key.
</TabItem>
</Tabs>
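As a sketch, assuming the example values used in this guide, the Proxy Service
connection method, and credential files in `/home`, the finished configuration
would resemble the following:
```toml
storage = "./storage"
timeout = "10s"
batch = 20
namespace = "default"

[forward.fluentd]
ca = "/home/ca.crt"
cert = "/home/client.crt"
key = "/home/client.key"
url = "https://localhost:9601"
session-url = "https://localhost:9601/"

[teleport]
addr = "mytenant.teleport.sh:443"
identity = "/home/auth.pem"
```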
### Start the Event Handler
To start the Event Handler, run the following command on your Event Handler
host:
```code
$ teleport-event-handler start -d --config teleport-event-handler.toml
```
Use the `-d` flag to print debug logs and identify any issues with
starting the Event Handler. Once you have completed this guide and confirmed
that the Event Handler runs as expected, you can run the command without the
`-d` flag.
<Details title="Choose when to start exporting events">
You can configure when you would like the Event Handler to begin exporting
events when you run the `start` command. This example will start exporting from
May 5th, 2021:
```code
$ teleport-event-handler start --config teleport-event-handler.toml --start-time "2021-05-05T00:00:00Z"
```
You can only determine the start time once, when first running the Event
Handler. If you want to change the time frame later, remove the plugin state
directory that you specified in the `storage` field of the handler's
configuration file.
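For example, assuming the `storage` value from the sample configuration above:
```code
$ rm -rf ./storage
```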
</Details>
Once the Event Handler starts, you will see notifications about scanned and
forwarded events:
```txt
DEBU Event sent id:f19cf375-4da6-4338-bfdc-e38334c60fd1 index:0 ts:2022-09-21
18:51:04.849 +0000 UTC type:cert.create event-handler/app.go:140
...
```
## Step 4/4. Create a data view in Kibana
Make it possible to explore your Teleport audit events in Kibana by creating a
data view. In the Elastic Stack UI, find the hamburger menu on the upper
left of the screen, then click "Management" > "Data Views". Click "Create data
view".
For the "Name" field, use "Teleport Audit Events". In "Index pattern", use
`audit-events-*` to select all indices created by our Logstash pipeline. In
"Timestamp field", choose `time`, which Teleport adds to its audit events.
![Creating a data view](../../../img/enterprise/plugins/elasticsearch/data-view-create.png)
To use your data view, find the search box at the top of the Elastic Stack UI
and enter "Discover". On the upper left of the screen, click the dropdown menu
and select "Teleport Audit Events". You can now search and filter your Teleport
audit events in order to get a better understanding how users are interacting
with your Teleport cluster.
![Exploring the data view](../../../img/enterprise/plugins/elasticsearch/data-view-explore.png)
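You can also filter events with a KQL query in the search bar. For example, the
following query (assuming the `event` and `user` fields that Teleport audit
events include, and a hypothetical user `alice`) matches SSH session starts for
a single user:
```text
event : "session.start" and user : "alice"
```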
For example, we can click the `event` field on the left sidebar and visualize
the event types for your Teleport audit events over time:
![Creating a visualization](../../../img/enterprise/plugins/elasticsearch/lens.png)
## Next steps
Now that you are exporting your audit events to the Elastic Stack, consult our
[audit event reference](../../reference/audit.mdx#event-types) so you can plan
visualizations and alerts.