Compare commits


3 commits
gaming...main

| SHA1 | Message | Date |
| ---------- | -------------- | -------------------------- |
| 2319bfb378 | add k8s | 2025-06-21 09:59:49 +02:00 |
| 9b15512ff6 | add trailsense | 2025-06-21 07:46:07 +02:00 |
| 0a4140ae8c | rm todo | 2025-06-21 04:09:59 +02:00 |
9 changed files with 446 additions and 4 deletions


@@ -139,6 +139,7 @@ rev: 2025-01-30
- [Google Calendar](./office/Google%20Calendar.md)
- [Google Contacts](./office/Google%20Contacts.md)
- [OwnTracks](./mobile/OwnTracks.md)
- [TrailSense](./mobile/TrailSense.md)
# Web
- [Authelia](./web/Authelia.md)
@@ -299,6 +300,8 @@ rev: 2025-01-30
- [Ansible](../tools/Ansible/Ansible.md)
- [Docker](../tools/Docker.md)
- [Podman](../tools/Podman.md)
- [k3s](../tools/k3s.md)
- [k9s](../tools/k9s.md)
- [sops](../tools/sops.md)
- [serie](./cli/serie.md)
- [usql](./cli/usql.md)


@@ -5,8 +5,6 @@ repo: https://github.com/NationalSecurityAgency/ghidra
rev: 2024-04-15
---
#refactor
# Ghidra
Ghidra is a powerful open-source software reverse engineering (SRE) suite developed by the National Security Agency (NSA) that enables users to analyze compiled code to understand its functionality, vulnerabilities, and inner workings.

Binary file not shown. (Image: 20 KiB)


@@ -0,0 +1,11 @@
---
obj: application
repo: https://github.com/kylecorry31/Trail-Sense
android-id: com.kylecorry.trail_sense
---
# TrailSense
**Trail Sense** is a powerful, offline-first Android app that transforms your phone into a wilderness survival toolkit.
![Screenshot](TrailSense.avif)


@@ -177,11 +177,11 @@ enum Route {
 }
 fn Home() -> Element {
-    todo!()
+    // HomePage...
 }
 fn Blog() -> Element {
-    todo!()
+    // BlogPage...
 }
 ```

technology/tools/k3s.md Normal file

@@ -0,0 +1,76 @@
---
obj: application
website: https://k3s.io
repo: https://github.com/k3s-io/k3s
---
# k3s
K3s is a certified [Kubernetes](./kubernetes.md) distribution developed by Rancher (now part of SUSE). It is designed to be lightweight, simple to install, and optimized for resource-constrained environments such as edge computing, IoT devices, and development setups.
## Installation
K3s provides an installation script that offers a convenient way to install it as a service on systemd- or openrc-based systems. This script is available at https://get.k3s.io. To install K3s using this method, just run:
```sh
curl -sfL https://get.k3s.io | sh -
```
After running this installation:
- The K3s service will be configured to automatically restart after node reboots or if the process crashes or is killed
- Additional utilities will be installed, including `kubectl`, `crictl`, `ctr`, `k3s-killall.sh`, and `k3s-uninstall.sh`
- A kubeconfig file will be written to `/etc/rancher/k3s/k3s.yaml` and the `kubectl` installed by K3s will automatically use it, as shown below
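For example, you can verify the node directly on the server with the bundled `kubectl`, which picks up that kubeconfig automatically:
```sh
sudo k3s kubectl get nodes
```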
A single-node server installation is a fully-functional Kubernetes cluster, including all the datastore, control-plane, kubelet, and container runtime components necessary to host workload pods. It is not necessary to add additional server or agent nodes, but you may want to do so for additional capacity or redundancy.
To install additional agent nodes and add them to the cluster, run the installation script with the `K3S_URL` and `K3S_TOKEN` environment variables. Here is an example showing how to join an agent:
```sh
curl -sfL https://get.k3s.io | K3S_URL=https://myserver:6443 K3S_TOKEN=mynodetoken sh -
```
Setting the `K3S_URL` parameter causes the installer to configure K3s as an agent, instead of a server. The K3s agent will register with the K3s server listening at the supplied URL. The value to use for `K3S_TOKEN` is stored at `/var/lib/rancher/k3s/server/node-token` on your server node.
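For example, to read the token on the server:
```sh
sudo cat /var/lib/rancher/k3s/server/node-token
```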
## HA (Embedded etcd)
To get started, first launch a server node with the `--cluster-init` flag to enable clustering, and pass a token that will be used as a shared secret to join additional servers to the cluster.
```sh
curl -sfL https://get.k3s.io | K3S_TOKEN=SECRET sh -s - server \
    --cluster-init \
    --tls-san=<FIXED_IP> # Optional, needed if using a fixed registration address
```
After launching the first server, join the second and third servers to the cluster using the shared secret:
```sh
curl -sfL https://get.k3s.io | K3S_TOKEN=SECRET sh -s - server \
    --server https://<ip or hostname of server1>:6443 \
    --tls-san=<FIXED_IP> # Optional, needed if using a fixed registration address
```
Check to see that the second and third servers are now part of the cluster:
```
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
server1 Ready control-plane,etcd,master 28m vX.Y.Z
server2 Ready control-plane,etcd,master 13m vX.Y.Z
server3 Ready control-plane,etcd,master 10m vX.Y.Z
```
Now you have a highly available control plane. Any successfully clustered servers can be used in the `--server` argument to join additional server and agent nodes. Joining additional agent nodes to the cluster follows the same procedure as servers:
```sh
curl -sfL https://get.k3s.io | K3S_TOKEN=SECRET sh -s - agent --server https://<ip or hostname of server>:6443
```
There are a few config flags that must be identical on all server nodes (a config file sketch for keeping them in sync follows this list):
- Network related flags: `--cluster-dns`, `--cluster-domain`, `--cluster-cidr`, `--service-cidr`
- Flags controlling the deployment of certain components: `--disable-helm-controller`, `--disable-kube-proxy`, `--disable-network-policy` and any component passed to `--disable`
- Feature related flags: `--secrets-encryption`
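One way to keep these flags in sync is to distribute a shared config file instead of passing CLI flags; K3s reads `/etc/rancher/k3s/config.yaml` by default. A minimal sketch (the CIDR values shown are the K3s defaults):
```yaml
# /etc/rancher/k3s/config.yaml - keep identical on every server node
cluster-cidr: "10.42.0.0/16"
service-cidr: "10.43.0.0/16"
secrets-encryption: true
```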
### Existing single-node clusters
If you have an existing cluster using the default embedded SQLite database, you can convert it to etcd by simply restarting your K3s server with the `--cluster-init` flag. Once you've done that, you'll be able to add additional instances as described above.
If an etcd datastore is found on disk because the node has already initialized or joined a cluster, the datastore arguments (`--cluster-init`, `--server`, `--datastore-endpoint`, etc.) are ignored.
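Assuming K3s was installed via the script above, one way to apply the flag is to re-run the installer with the extra argument, which regenerates the service definition and restarts K3s:
```sh
curl -sfL https://get.k3s.io | sh -s - server --cluster-init
```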

technology/tools/k9s.avif Normal file

Binary file not shown. (Image: 141 KiB)

technology/tools/k9s.md Normal file

@@ -0,0 +1,11 @@
---
obj: application
website: https://k9scli.io
repo: https://github.com/derailed/k9s
---
# k9s
K9s is a terminal-based UI to interact with your Kubernetes clusters. The aim of this project is to make it easier to navigate, observe, and manage your deployed applications in the wild. K9s continually watches Kubernetes for changes and offers subsequent commands to interact with the observed resources.
![Screenshot](k9s.avif)
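A few common invocations (flags as provided by the k9s CLI):
```sh
k9s                       # launch against the current kubeconfig context
k9s -n example-namespace  # start scoped to a single namespace
k9s --readonly            # disable commands that modify the cluster
```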

technology/tools/kubernetes.md Normal file

@@ -0,0 +1,343 @@
---
obj: concept
website: https://kubernetes.io
---
# Kubernetes
## Overview
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It groups containers into logical units called **Pods**, which run on **Nodes** in a cluster. A simple solution to get up and running is [k3s](k3s.md).
You can manage k8s clusters via `kubectl`. Most resources are defined declaratively in YAML manifest files, which you can apply to your cluster with `kubectl apply -f FILE`.
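A few everyday `kubectl` commands (resource and file names are placeholders):
```sh
kubectl apply -f manifest.yaml         # create or update resources from a file
kubectl get pods -n example-namespace  # list pods in a namespace
kubectl describe pod mypod             # detailed state and recent events
kubectl logs mypod                     # container logs (-f to follow)
kubectl delete -f manifest.yaml        # remove what the file created
```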
## Resources
### Namespace
Logical separation of resources within a cluster.
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: example-namespace
```
### Pod
The smallest deployable unit in Kubernetes.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: full-example-pod
  namespace: example-namespace
  labels:
    app: web
    tier: frontend
  annotations:
    description: "A full-featured pod example for demonstration purposes"
spec:
  restartPolicy: Always
  # Init container (runs before main containers)
  initContainers:
    - name: init-permissions
      image: busybox
      command: ["sh", "-c", "chmod 777 /mnt/data"]
      volumeMounts:
        - name: data-volume
          mountPath: /mnt/data
  containers:
    - name: main-app
      image: nginx:1.25
      imagePullPolicy: IfNotPresent
      ports:
        - containerPort: 80
          name: http
      env:
        # Environment
        - name: ENVIRONMENT
          value: production
        # Env from ConfigMap
        - name: CONFIG_TIMEOUT
          valueFrom:
            configMapKeyRef:
              name: example-config
              key: TIMEOUT
        # Env from Secret
        - name: SECRET_PASSWORD
          valueFrom:
            secretKeyRef:
              name: example-secret
              key: password
      volumeMounts:
        - name: data-volume
          mountPath: /usr/share/nginx/html
        - name: config-volume
          mountPath: /etc/config
          readOnly: true
      resources:
        limits:
          cpu: "500m"
          memory: "256Mi"
        requests:
          cpu: "250m"
          memory: "128Mi"
      livenessProbe:
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 10
        periodSeconds: 10
      readinessProbe:
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 5
      lifecycle:
        preStop:
          exec:
            command: ["sh", "-c", "echo stopping..."]
    - name: sidecar-logger
      image: busybox
      args: ["sh", "-c", "tail -f /var/log/app.log"]
      volumeMounts:
        - name: log-volume
          mountPath: /var/log
  # Volumes
  volumes:
    # EmptyDir - backing volume for the data-volume mounts above
    - name: data-volume
      emptyDir: {}
    # ConfigMap - inject config files
    - name: config-volume
      configMap:
        name: example-config
        items:
          - key: config.json
            path: config.json
    # Secret - inject sensitive data
    - name: secret-volume
      secret:
        secretName: example-secret
        items:
          - key: password
            path: password.txt
    # EmptyDir - ephemeral shared storage between containers
    - name: log-volume
      emptyDir:
        medium: ""
        sizeLimit: 500Mi
    # HostPath - access host node's filesystem (example: logs)
    - name: host-logs
      hostPath:
        path: /var/log
        type: Directory
```
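Once applied, you can interact with the pod; a few common commands (the file name `pod.yaml` is a placeholder, resource names come from the manifest above):
```sh
kubectl apply -f pod.yaml
kubectl -n example-namespace logs full-example-pod -c main-app
kubectl -n example-namespace exec -it full-example-pod -c main-app -- sh
```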
### Deployment
Ensures a specified number of identical Pods are running and up-to-date. Supports rolling updates and rollbacks.
```yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment
  namespace: example-namespace
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - name: web
          image: nginx:alpine
          ports:
            - containerPort: 80
```
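The rolling updates and rollbacks mentioned above are driven via `kubectl rollout`; a quick sketch (the `nginx:1.26` tag is just an assumed newer image):
```sh
kubectl -n example-namespace set image deployment/example-deployment web=nginx:1.26
kubectl -n example-namespace rollout status deployment/example-deployment
kubectl -n example-namespace rollout undo deployment/example-deployment
```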
### StatefulSet
Like a Deployment, but for workloads requiring stable network IDs, persistent storage, and ordered startup/shutdown.
```yml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example-statefulset
  namespace: example-namespace
spec:
  serviceName: "example"
  replicas: 2
  selector:
    matchLabels:
      app: stateful-app
  template:
    metadata:
      labels:
        app: stateful-app
    spec:
      containers:
        - name: web
          image: nginx:alpine
          volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html
  # PVC template backing the "data" mount; each replica gets its own claim
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```
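Because each replica keeps a stable identity, pods are named by ordinal and scaled one at a time, which you can observe when scaling:
```sh
kubectl -n example-namespace scale statefulset example-statefulset --replicas=3
kubectl -n example-namespace get pods -l app=stateful-app
# example-statefulset-0, example-statefulset-1, example-statefulset-2
```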
### DaemonSet
Ensures a copy of a Pod runs on all (or some) Nodes in the cluster. Ideal for log collectors or system-level agents.
```yml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-daemonset
  namespace: example-namespace
spec:
  selector:
    matchLabels:
      name: ds-app
  template:
    metadata:
      labels:
        name: ds-app
    spec:
      containers:
        - name: node-monitor
          image: busybox
          args: ["sh", "-c", "while true; do echo hello; sleep 10; done"]
```
### Job
Runs a Pod (or multiple) to completion. Used for batch processing or one-off tasks.
```yml
apiVersion: batch/v1
kind: Job
metadata:
  name: example-job
  namespace: example-namespace
spec:
  template:
    spec:
      containers:
        - name: pi
          image: perl
          command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4
```
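To wait for completion and read the result:
```sh
kubectl -n example-namespace wait --for=condition=complete job/example-job --timeout=120s
kubectl -n example-namespace logs job/example-job
```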
### CronJob
Schedules Jobs to run periodically, similar to [UNIX cron](../linux/cron.md).
```yml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: example-cronjob
  namespace: example-namespace
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: hello
              image: busybox
              args: ["echo", "Hello from the CronJob"]
          restartPolicy: OnFailure
```
> Note: You can quickly trigger a CronJob as a one-off Job with: `kubectl create job --from=cronjob/example-cronjob example-cronjob-manual`
### Service
Defines a stable network endpoint to access a set of Pods. Supports different types like `ClusterIP`, `NodePort`, and `LoadBalancer`.
```yml
apiVersion: v1
kind: Service
metadata:
  name: example-service
  namespace: example-namespace
spec:
  selector:
    app: example
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: ClusterIP
```
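Inside the cluster, the Service is reachable via DNS as `example-service.example-namespace.svc.cluster.local`. For quick testing from your machine you can forward a local port to it:
```sh
kubectl -n example-namespace port-forward service/example-service 8080:80
# then open http://localhost:8080
```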
### ConfigMap
Injects configuration data (as key-value pairs) into Pods, keeping config decoupled from code.
```yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
  namespace: example-namespace
data:
  APP_ENV: production
  TIMEOUT: "30"
```
Usage in a Pod:
```yml
envFrom:
  - configMapRef:
      name: example-config
```
### Secret
Similar to ConfigMap, but for sensitive data like passwords, tokens, or keys.
If you want encryption at rest for your manifests, look at [sops](../tools/sops.md).
```yml
apiVersion: v1
kind: Secret
metadata:
  name: example-secret
  namespace: example-namespace
type: Opaque
data:
  username: YWRtaW4= # base64 of 'admin'
  password: cGFzc3dvcmQ= # base64 of 'password'
```
Usage in a Pod:
```yml
env:
  - name: USERNAME
    valueFrom:
      secretKeyRef:
        name: example-secret
        key: username
```
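Instead of base64-encoding values by hand, you can let `kubectl` generate the manifest (the literals here mirror the example above):
```sh
kubectl -n example-namespace create secret generic example-secret \
    --from-literal=username=admin \
    --from-literal=password=password \
    --dry-run=client -o yaml
```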