The base SSH server implementation now sends SSH keepalive pings at a rate of
1/4 of the "idle timeout" constant. The client properly responds to keepalive
pings.
The SSH client now handles SSH requests and SSH channels in the same
(existing) goroutine with a for-loop + select statement, instead of
creating two goroutines for them.
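The pattern above can be sketched as follows. The `request` and `newChannel` types and the `serveConnection` function are simplified, hypothetical stand-ins for the SSH library's request and channel streams; the point is multiplexing both streams in one goroutine.

```go
package main

import "fmt"

// request and newChannel are simplified stand-ins for the two streams
// an SSH connection produces (out-of-band requests and new channels).
type request struct{ kind string }
type newChannel struct{ kind string }

// serveConnection multiplexes both streams in a single goroutine using
// one for-loop with a select, instead of dedicating a goroutine to each.
// It returns once both streams are closed.
func serveConnection(reqs <-chan request, chans <-chan newChannel, handle func(string)) {
	for reqs != nil || chans != nil {
		select {
		case r, ok := <-reqs:
			if !ok {
				reqs = nil // a nil channel blocks forever, removing it from the select
				continue
			}
			handle("request:" + r.kind)
		case nc, ok := <-chans:
			if !ok {
				chans = nil
				continue
			}
			handle("channel:" + nc.kind)
		}
	}
}

func main() {
	reqs := make(chan request, 1)
	chans := make(chan newChannel, 1)
	reqs <- request{kind: "keepalive"}
	chans <- newChannel{kind: "session"}
	close(reqs)
	close(chans)
	serveConnection(reqs, chans, func(s string) { fmt.Println(s) })
}
```

Setting an exhausted stream to `nil` is the idiomatic way to drop one case from a `select` while continuing to serve the other.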
Fix one:
Fixed a typo in the definition of the `teleport.HOTP` constant.
This fixes bug #721
Fix two:
Removed the 'drop tunnel connection' logic that fired on any
tunnel-related error. This fixes the 2nd problem, "Handling Unreachable
nodes", of issue #717 (see klizhentas' comment there).
The BoltDB backend is now initialized the same way all backends should
be.
Also, all BoltDB-specific code and constants have been consolidated
inside the `backend.boltbk` package.
- Added idle timeout handling to every SSH connection.
- A bit of code refactoring (removing unused code paths)
Most importantly:
Added a custom SSH handshake between Teleport SSH proxies and Teleport
SSH servers. This handshake sends a custom JSON payload from a proxy to
a server, allowing them to exchange additional information, such as the
true IP of a client.
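A rough sketch of the payload exchange. The `handshakePayload` struct, its field names, and the helper functions are hypothetical; the actual field set and transport live in the Teleport source.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// handshakePayload is a hypothetical shape for the JSON the proxy sends
// to the server during the custom handshake.
type handshakePayload struct {
	ClientAddr string `json:"clientAddr"` // true address of the end client
}

// encodePayload marshals the payload the proxy writes to the server.
func encodePayload(clientAddr string) ([]byte, error) {
	return json.Marshal(handshakePayload{ClientAddr: clientAddr})
}

// decodePayload is the server-side half: it recovers the true client address.
func decodePayload(data []byte) (string, error) {
	var p handshakePayload
	if err := json.Unmarshal(data, &p); err != nil {
		return "", err
	}
	return p.ClientAddr, nil
}

func main() {
	raw, _ := encodePayload("203.0.113.7:52311")
	addr, _ := decodePayload(raw)
	fmt.Println(addr) // 203.0.113.7:52311
}
```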
* Downgraded many messages from `Debug` to `Info`
* Edited messages so they're neither too verbose nor too short
* Added "context" to some messages
* Added logical teleport component as [COMPONENT] at the beginning of
many, making logs **vastly** easier to read.
* Added one more logging level option when creating Teleport (only
Teleconsole uses it for now)
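A tiny sketch of the `[COMPONENT]` prefixing, assuming a hypothetical `withComponent` helper; the real code hooks into the logger's formatter rather than wrapping every message by hand.

```go
package main

import "fmt"

// withComponent prefixes a log message with its logical Teleport
// component, producing prefixes like [AUTH], [Node], or [Proxy].
// (Hypothetical helper for illustration only.)
func withComponent(component, msg string) string {
	return fmt.Sprintf("[%s] %s", component, msg)
}

func main() {
	fmt.Println(withComponent("AUTH", "Auth service is starting")) // [AUTH] Auth service is starting
}
```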
The output with 'info' severity now looks extremely clean.
This is startup, for example:
```
INFO[0000] [AUTH] Auth service is starting on turing:32829 file=utils/cli.go:107
INFO[0000] [SSH:auth] listening socket: 127.0.0.1:32829 file=sshutils/server.go:119
INFO[0000] [SSH:auth] is listening on 127.0.0.1:32829 file=sshutils/server.go:144
INFO[0000] [Proxy] Successfully registered with the cluster file=utils/cli.go:107
INFO[0000] [Node] Successfully registered with the cluster file=utils/cli.go:107
INFO[0000] [AUTH] keyAuth: 127.0.0.1:56886->127.0.0.1:32829, user=turing file=auth/tun.go:370
WARN[0000] unable to load the auth server cache: open /tmp/cluster-teleconsole-client781495771/authservers.json: no such file or directory file=auth/tun.go:594
INFO[0000] [SSH:auth] new connection 127.0.0.1:56886 -> 127.0.0.1:32829 vesion: SSH-2.0-Go file=sshutils/server.go:205
INFO[0000] [AUTH] keyAuth: 127.0.0.1:56888->127.0.0.1:32829, user=turing.teleconsole-client file=auth/tun.go:370
INFO[0000] [AUTH] keyAuth: 127.0.0.1:56890->127.0.0.1:32829, user=turing.teleconsole-client file=auth/tun.go:370
INFO[0000] [Node] turing connected to the cluster 'teleconsole-client' file=service/service.go:158
INFO[0000] [AUTH] keyAuth: 127.0.0.1:56892->127.0.0.1:32829, user=turing file=auth/tun.go:370
INFO[0000] [SSH:auth] new connection 127.0.0.1:56890 -> 127.0.0.1:32829 vesion: SSH-2.0-Go file=sshutils/server.go:205
INFO[0000] [SSH:auth] new connection 127.0.0.1:56888 -> 127.0.0.1:32829 vesion: SSH-2.0-Go file=sshutils/server.go:205
INFO[0000] [Node] turing.teleconsole-client connected to the cluster 'teleconsole-client' file=service/service.go:158
INFO[0000] [Node] turing.teleconsole-client connected to the cluster 'teleconsole-client' file=service/service.go:158
INFO[0000] [SSH] received event(SSHIdentity) file=service/service.go:436
INFO[0000] [SSH] received event(ProxyIdentity) file=service/service.go:563
```
You can easily tell that the auth server, SSH node, and proxy have successfully started.
When Teleport starts, it looks for web assets in the following
directories:
- The directory where the executable is located
- /usr/local/share/teleport
- /usr/share/teleport
- /opt/teleport
This commit includes refactoring and cleanup of the cert authority subsystem:
* User keys methods are deleted
* Authorities CRUD is simplified
* Lots of code removed