Compare commits

15 Commits

Author SHA1 Message Date
Bastian de Byl
b4ebc4bad7 feat: increase act_runner capacity for parallel job execution
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-29 13:51:24 -04:00
Bastian de Byl
d5e473304a fix: use python_env as guard for ESP-IDF install task
The tools directory can exist without the Python venv being created,
causing install.sh to be skipped on re-runs. Check for python_env
instead, which is the actual output we need.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-29 12:43:14 -04:00
Bastian de Byl
5deb2e6e48 feat: add SSH key and known_hosts for gitea-runner
Generate ed25519 deploy key and add git.skudak.com/git.debyl.io host
keys to known_hosts so the runner can clone SSH submodules in CI.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-28 17:53:01 -04:00
Bastian de Byl
1c478e6ab5 fix: add ESP-IDF to git safe.directory before submodule init
Root-owned /opt/esp-idf triggers git dubious ownership check when
running submodule update. Add safe.directory config beforehand.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-28 17:10:25 -04:00
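A minimal sketch of the `safe.directory` mechanism this commit relies on, using a scratch config file instead of `--global` so nothing outside a temp file is touched (the literal `/opt/esp-idf` path stands in for `{{ esp_idf_path }}`):

```shell
# Hypothetical sketch: register a directory as safe in a scratch git
# config file, then read it back. The real task writes to --global.
cfg=$(mktemp)
git config --file "$cfg" --add safe.directory /opt/esp-idf
git config --file "$cfg" --get-all safe.directory
```

Once the entry exists, git stops rejecting the root-owned checkout with its "dubious ownership" error during `submodule update`.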
Bastian de Byl
dbd898cb2f feat: support multiple Gitea instances for actions runner
The gitea-actions role now uses a `gitea_runners` list instead of a
single `gitea_instance_url`. Each instance gets its own config, systemd
service, working directory, and cache. Migrates from the old single
`act_runner.service` to per-instance `act_runner-{name}.service`.

Adds git.skudak.com alongside git.debyl.io as runner targets.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-28 17:04:23 -04:00
Bastian de Byl
43fbcf59a5 add n8n workflow automation and fix cloud backup rsync
- Add n8n container (n8nio/n8n:2.11.3) with Caddy reverse proxy at n8n.debyl.io
- Add --exclude .ssh to cloud backup rsync to prevent overwriting
  authorized_keys on TrueNAS backup targets

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-17 12:12:19 -04:00
Bastian de Byl
f23fc62ada fix: move cloud backup keys and scripts out of container volume paths
SSH keys moved to /etc/ssh/backup_keys/ (ssh_home_t) and backup scripts
to /usr/local/bin/ (bin_t) to fix SELinux denials - container_file_t
context blocked rsync from exec'ing ssh. Also fixes skudak key path
mismatch (was truenas_skudak, key deployed as truenas_skudak-cloud).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-05 14:45:03 -05:00
Bastian de Byl
d4b01468ba chore: update vault variables 2026-03-05 14:00:16 -05:00
Bastian de Byl
8fd220a16e noticket - update zomboid b42revamp modpack to collection 3672556207
Replaces old 168-mod collection (3636931465) with new 385-mod collection.
Cleaned BBCode artifacts from mod IDs, updated map folders for 32 maps.
LogCabin retained for player connect/disconnect logging.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-05 13:59:33 -05:00
Bastian de Byl
3637b3ba23 noticket - remove karrio, update gregtime, fix caddy duplicate redirect
Remove Karrio shipping platform (containers, config, vault secrets,
Caddy site block). Bump gregtime 3.4.1 -> 3.4.3. Remove duplicate
home.debyl.io redirect in Caddyfile. Update zomboid b42revamp mod list.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-24 17:40:00 -05:00
Bastian de Byl
9f95585daa noticket - updated gregtime 2026-02-17 14:21:02 -05:00
Bastian de Byl
495943b837 feat: add ollama and searxng, migrate to debyl.io hostname
- Add ollama role for local LLM inference (install, service, models)
- Add searxng container for private search
- Migrate hostname from home.bdebyl.net to home.debyl.io
  (inventory, awsddns, zomboid entrypoint, home_server_name)
- Update vault with new secrets

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-12 15:13:25 -05:00
Bastian de Byl
3eb6938b62 feat: switch FISTO to dolphin-mistral with dolphin-phi fallback
Benchmarked uncensored models for the gregtime FISTO bot. dolphin-mistral
produces the best uncensored creative content, dolphin-phi is faster fallback.
Added OLLAMA_NUM_PREDICT env var (300) and bumped image to 3.3.0.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-12 14:56:52 -05:00
Bastian de Byl
d10cd49cf0 refactor: use variables for graylog stack image versions
Move hardcoded image versions to variables defined in main.yml for
easier version management in one place.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 12:35:51 -05:00
Bastian de Byl
61692b36a2 refactor: reorganize fluent-bit and geoip out of containers
- Move fluent-bit to common role (systemd service, not a container)
- Move geoip to podman/tasks/data/ (data prep, not a container)
- Remove debyltech tag from geoip (not a debyltech service)
- Fix check_mode for fetch subuid task to enable dry-run mode

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 12:34:43 -05:00
45 changed files with 531 additions and 91 deletions

View File

@@ -9,7 +9,7 @@ This is a home infrastructure deployment repository using Ansible for automated
## Development Commands
### Core Commands
-- `make` or `make lint` - Run linting (yamllint + ansible-lint) on all YAML files
+- `make` or `make lint` - Run yamllint on all YAML files. Output may only show "Running yamllint..." and "Done." with no errors listed — this means linting passed. Do NOT run yamllint or ansible-lint manually; `make lint` is the only lint step needed.
- `make deploy` - Deploy all configurations to the home server
- `make deploy TAGS=sometag` - Deploy only specific tagged tasks
- `make deploy TARGET=specific-host` - Deploy to specific host instead of all
@@ -96,9 +96,22 @@ Tasks are tagged by service/component for selective deployment:
## Target Environment
-- Single target host: `home.bdebyl.net`
+- Single target host: `home.debyl.io`
- OS: Fedora (ansible_user: fedora)
- Container runtime: Podman
- Web server: Caddy with automatic HTTPS and built-in security (replaced nginx + ModSecurity)
- All services accessible via HTTPS with automatic certificate renewal
- ~~CI/CD: Drone CI infrastructure completely decommissioned~~
### Remote SSH Commands for Service Users
The `podman` user (and other service users) have `/bin/nologin` as their shell. To run commands as these users via SSH:
- **One-off commands**: `sudo -H -u podman bash -c 'command here'`
- **Interactive shell**: `sudo -H -u podman bash -c 'cd; bash'`
- **systemctl --user** requires `XDG_RUNTIME_DIR`:
```bash
sudo -H -u podman bash -c 'export XDG_RUNTIME_DIR=/run/user/$(id -u); systemctl --user <action> <service>'
```
Podman is a user-specific (rootless) container runtime, not a system service like Docker. The user context matters for all podman and systemctl --user operations. The default SSH user (`fedora`) has sudo access and can run commands directly.
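The `XDG_RUNTIME_DIR` requirement above can be sketched in plain shell; `systemctl --user` finds the per-user D-Bus socket under `/run/user/<uid>`, which `sudo` does not set for you:

```shell
# Build the runtime dir path the way the documented one-liner does.
# (systemctl --user would look for $XDG_RUNTIME_DIR/bus here.)
uid=$(id -u)
export XDG_RUNTIME_DIR="/run/user/${uid}"
echo "${XDG_RUNTIME_DIR}"
```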

View File

@@ -70,6 +70,7 @@ vault: ${ANSIBLE_VAULT} ${VAULT_FILE}
lint: ${LINT_YAML} ${SKIP_FILE}
	@printf "Running yamllint...\n"
	-@${LINT_YAML} ${YAML_FILES}
+	@printf "Done.\n"
# Git-crypt management
git-crypt-backup:

View File

@@ -8,6 +8,8 @@
    - role: podman
    # SSL certificates are now handled automatically by Caddy
    # - role: ssl  # REMOVED - Caddy handles all certificate management
+    - role: ollama
+      tags: ollama
    - role: github-actions
    - role: graylog-config
      tags: graylog-config

View File

@@ -1,5 +1,5 @@
---
all:
  hosts:
-    home.bdebyl.net:
+    home.debyl.io:
      ansible_user: fedora

View File

@@ -10,3 +10,9 @@
  ansible.builtin.service:
    name: fail2ban
    state: restarted
+- name: restart fluent-bit
+  become: true
+  ansible.builtin.systemd:
+    name: fluent-bit
+    state: restarted

View File

@@ -2,25 +2,6 @@
# Fluent Bit - Log forwarder from journald to Graylog GELF
# Deployed as systemd service (not container) for direct journal access
-# Clean up old container deployment if it exists
-- name: stop and remove fluent-bit container if exists
-  become: true
-  become_user: "{{ podman_user }}"
-  containers.podman.podman_container:
-    name: fluent-bit
-    state: absent
-  ignore_errors: true
-- name: disable old fluent-bit container systemd service
-  become: true
-  become_user: "{{ podman_user }}"
-  ansible.builtin.systemd:
-    name: fluent-bit
-    enabled: false
-    state: stopped
-    scope: user
-  ignore_errors: true
- name: install fluent-bit package
  become: true
  ansible.builtin.dnf:

View File

@@ -3,6 +3,9 @@
- import_tasks: security.yml
- import_tasks: service.yml
+- import_tasks: fluent-bit.yml
+  tags: fluent-bit, graylog
- name: create the docker group
  become: true
  ansible.builtin.group:

View File

@@ -74,6 +74,12 @@
# =============================================================================
# FILTERS: Add metadata for Graylog categorization
# =============================================================================
+# Exclude Graylog stack containers to prevent feedback loop
+[FILTER]
+    Name    grep
+    Match   podman.*
+    Exclude CONTAINER_NAME ^graylog
[FILTER]
    Name  record_modifier
    Match podman.*
@@ -143,7 +149,7 @@
    Name  gelf
    Match *
    Host  127.0.0.1
-    Port  12203
+    Port  12202
-    Mode  udp
+    Mode  tcp
    Gelf_Short_Message_Key MESSAGE
    Gelf_Host_Key host
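The new grep filter drops any record whose `CONTAINER_NAME` matches `^graylog`, so Graylog's own container logs are not re-ingested into Graylog (a feedback loop). A quick shell approximation of that anchored match:

```shell
# Approximate the fluent-bit Exclude rule: drop records whose
# CONTAINER_NAME starts with "graylog", keep everything else.
for name in graylog graylog-mongo graylog-opensearch zomboid; do
  if printf '%s\n' "$name" | grep -qE '^graylog'; then
    echo "drop $name"
  else
    echo "keep $name"
  fi
done
```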

View File

@@ -3,7 +3,17 @@ gitea_runner_user: gitea-runner
gitea_runner_home: /home/gitea-runner
gitea_runner_version: "0.2.13"
gitea_runner_arch: linux-amd64
-gitea_instance_url: https://git.debyl.io
+gitea_runner_capacity: 4
+# Multiple Gitea instances to run actions runners for
+gitea_runners:
+  - name: debyl
+    instance_url: https://git.debyl.io
+  - name: skudak
+    instance_url: https://git.skudak.com
+# Old single-instance format (replaced by gitea_runners list above):
+# gitea_instance_url: https://git.debyl.io
# Paths
act_runner_bin: /usr/local/bin/act_runner

View File

@@ -1,10 +1,11 @@
---
-- name: restart act_runner
+- name: restart act_runner services
  become: true
  ansible.builtin.systemd:
-    name: act_runner
+    name: "act_runner-{{ item.name }}"
    state: restarted
    daemon_reload: true
+  loop: "{{ gitea_runners }}"
- name: restart podman socket
  become: true
View File

@@ -35,6 +35,13 @@
  when: not esp_idf_dir.stat.exists
  tags: gitea-actions
+- name: add ESP-IDF to git safe.directory
+  become: true
+  ansible.builtin.command:
+    cmd: git config --global --add safe.directory {{ esp_idf_path }}
+  changed_when: false
+  tags: gitea-actions
- name: ensure ESP-IDF submodules are initialized
  become: true
  ansible.builtin.command:
@@ -79,7 +86,7 @@
    export IDF_TOOLS_PATH="{{ gitea_runner_home }}/.espressif"
    {{ esp_idf_path }}/install.sh esp32
  args:
-    creates: "{{ gitea_runner_home }}/.espressif/tools"
+    creates: "{{ gitea_runner_home }}/.espressif/python_env"
  environment:
    HOME: "{{ gitea_runner_home }}"
  tags: gitea-actions
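The `creates:` change can be reasoned about with a plain-shell sketch of the guard; the directory names mirror the real `~/.espressif` tree but live in a temp dir:

```shell
# Sketch of the original bug: .espressif/tools can exist while the
# Python venv does not, so guarding on tools would skip install.sh
# on every re-run. Guarding on python_env runs it until the venv exists.
espressif=$(mktemp -d)
mkdir -p "$espressif/tools"            # tools present, python_env missing

if [ ! -e "$espressif/python_env" ]; then
  echo "run install.sh"                # the install is not skipped
fi
```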

View File

@@ -15,31 +15,36 @@
    mode: "0755"
  tags: gitea-actions
-- name: create act_runner working directory
+- name: create per-runner working directory
  become: true
  ansible.builtin.file:
-    path: "{{ act_runner_work_dir }}"
+    path: "{{ act_runner_work_dir }}/{{ item.name }}"
    state: directory
    owner: "{{ gitea_runner_user }}"
    group: "{{ gitea_runner_user }}"
    mode: "0755"
+  loop: "{{ gitea_runners }}"
  tags: gitea-actions
-- name: create act_runner cache directory
+- name: create per-runner cache directory
  become: true
  ansible.builtin.file:
-    path: "{{ act_runner_work_dir }}/cache"
+    path: "{{ act_runner_work_dir }}/{{ item.name }}/cache"
    state: directory
    owner: "{{ gitea_runner_user }}"
    group: "{{ gitea_runner_user }}"
    mode: "0755"
+  loop: "{{ gitea_runners }}"
  tags: gitea-actions
-- name: deploy act_runner configuration
+- name: deploy per-runner configuration
  become: true
  ansible.builtin.template:
    src: config.yaml.j2
-    dest: "{{ act_runner_config_dir }}/config.yaml"
+    dest: "{{ act_runner_config_dir }}/config-{{ item.name }}.yaml"
    mode: "0644"
-  notify: restart act_runner
+  vars:
+    runner_name: "{{ item.name }}"
+  loop: "{{ gitea_runners }}"
+  notify: restart act_runner services
  tags: gitea-actions

View File

@@ -1,17 +1,37 @@
---
-- name: deploy act_runner systemd service
-  become: true
-  ansible.builtin.template:
-    src: act_runner.service.j2
-    dest: /etc/systemd/system/act_runner.service
-    mode: "0644"
-  notify: restart act_runner
-  tags: gitea-actions
-- name: enable act_runner service
-  become: true
-  ansible.builtin.systemd:
-    name: act_runner
-    daemon_reload: true
-    enabled: true
-  tags: gitea-actions
+- name: stop and disable legacy act_runner service
+  become: true
+  ansible.builtin.systemd:
+    name: act_runner
+    state: stopped
+    enabled: false
+  failed_when: false
+  tags: gitea-actions
+- name: remove legacy act_runner service file
+  become: true
+  ansible.builtin.file:
+    path: /etc/systemd/system/act_runner.service
+    state: absent
+  tags: gitea-actions
+- name: deploy per-runner systemd service
+  become: true
+  ansible.builtin.template:
+    src: act_runner.service.j2
+    dest: "/etc/systemd/system/act_runner-{{ item.name }}.service"
+    mode: "0644"
+  vars:
+    runner_name: "{{ item.name }}"
+  loop: "{{ gitea_runners }}"
+  notify: restart act_runner services
+  tags: gitea-actions
+- name: enable per-runner services
+  become: true
+  ansible.builtin.systemd:
+    name: "act_runner-{{ item.name }}"
+    daemon_reload: true
+    enabled: true
+  loop: "{{ gitea_runners }}"
+  tags: gitea-actions

View File

@@ -32,3 +32,42 @@
state: directory state: directory
mode: "0755" mode: "0755"
tags: gitea-actions tags: gitea-actions
- name: create .ssh directory
  become: true
  ansible.builtin.file:
    path: "{{ gitea_runner_home }}/.ssh"
    state: directory
    owner: "{{ gitea_runner_user }}"
    group: "{{ gitea_runner_user }}"
    mode: "0700"
  tags: gitea-actions
- name: generate SSH key for gitea-runner
  become: true
  become_user: "{{ gitea_runner_user }}"
  ansible.builtin.command:
    cmd: ssh-keygen -t ed25519 -f {{ gitea_runner_home }}/.ssh/id_ed25519 -N "" -C "gitea-runner@galactica"
    creates: "{{ gitea_runner_home }}/.ssh/id_ed25519"
  tags: gitea-actions
- name: add Gitea SSH host keys to known_hosts
  become: true
  become_user: "{{ gitea_runner_user }}"
  # Scan both hosts in one pass: looping per host while guarding on the
  # file with `creates:` would skip every host after the first one
  # creates the file.
  ansible.builtin.shell:
    cmd: ssh-keyscan -p 2222 git.skudak.com git.debyl.io >> {{ gitea_runner_home }}/.ssh/known_hosts 2>/dev/null
  args:
    creates: "{{ gitea_runner_home }}/.ssh/known_hosts"
  tags: gitea-actions
- name: set known_hosts permissions
  become: true
  ansible.builtin.file:
    path: "{{ gitea_runner_home }}/.ssh/known_hosts"
    owner: "{{ gitea_runner_user }}"
    group: "{{ gitea_runner_user }}"
    mode: "0644"
  tags: gitea-actions
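One subtlety worth calling out with `creates:`-guarded appends, sketched in plain shell: if each host is appended in its own guarded step, only the first host ever lands in the file, because the file exists by the time the next step runs.

```shell
# Sketch of the pitfall: an existence guard around a per-host append
# records only the first host.
known_hosts=$(mktemp -u)                # path that does not exist yet
for host in git.skudak.com git.debyl.io; do
  if [ ! -e "$known_hosts" ]; then     # analogue of Ansible `creates:`
    echo "scanned $host" >> "$known_hosts"
  fi
done
cat "$known_hosts"                      # only the first host appears
```

Scanning all hosts in a single guarded command avoids this.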

View File

@@ -1,11 +1,11 @@
[Unit]
-Description=Gitea Actions runner
+Description=Gitea Actions runner ({{ runner_name }})
Documentation=https://gitea.com/gitea/act_runner
After=network.target podman.socket
[Service]
-ExecStart={{ act_runner_bin }} daemon --config {{ act_runner_config_dir }}/config.yaml
+ExecStart={{ act_runner_bin }} daemon --config {{ act_runner_config_dir }}/config-{{ runner_name }}.yaml
-WorkingDirectory={{ act_runner_work_dir }}
+WorkingDirectory={{ act_runner_work_dir }}/{{ runner_name }}
TimeoutSec=0
RestartSec=10
Restart=always

View File

@@ -2,8 +2,8 @@ log:
  level: info
runner:
-  file: {{ act_runner_work_dir }}/.runner
+  file: {{ act_runner_work_dir }}/{{ runner_name }}/.runner
-  capacity: 1
+  capacity: {{ gitea_runner_capacity | default(4) }}
  timeout: 3h
  insecure: false
  fetch_timeout: 5s
@@ -15,7 +15,7 @@ runner:
cache:
  enabled: true
-  dir: {{ act_runner_work_dir }}/cache
+  dir: {{ act_runner_work_dir }}/{{ runner_name }}/cache
container:
  network: host
@@ -27,4 +27,4 @@ container:
  force_pull: false
host:
-  workdir_parent: {{ act_runner_work_dir }}/workdir
+  workdir_parent: {{ act_runner_work_dir }}/{{ runner_name }}/workdir

View File

@@ -0,0 +1,6 @@
---
ollama_models:
- dolphin-phi
- dolphin-mistral
ollama_host: "127.0.0.1"
ollama_port: 11434

View File

@@ -0,0 +1,8 @@
---
- name: restart ollama
  become: true
  ansible.builtin.systemd:
    name: ollama
    state: restarted
    daemon_reload: true
  tags: ollama

View File

@@ -0,0 +1,3 @@
---
dependencies:
- role: common

View File

@@ -0,0 +1,11 @@
---
- name: check if ollama is already installed
  ansible.builtin.stat:
    path: /usr/local/bin/ollama
  register: ollama_binary
- name: install ollama via official install script
  become: true
  ansible.builtin.shell: |
    curl -fsSL https://ollama.com/install.sh | sh
  when: not ollama_binary.stat.exists

View File

@@ -0,0 +1,9 @@
---
- import_tasks: install.yml
  tags: ollama
- import_tasks: service.yml
  tags: ollama
- import_tasks: models.yml
  tags: ollama

View File

@@ -0,0 +1,10 @@
---
- name: pull ollama models
  become: true
  ansible.builtin.command: ollama pull {{ item }}
  loop: "{{ ollama_models }}"
  register: result
  retries: 3
  delay: 10
  until: result is not failed
  changed_when: "'pulling' in result.stderr or 'pulling' in result.stdout"
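The `retries`/`delay`/`until` combination amounts to a bounded retry loop. A generic shell sketch of the same pattern (the 10-second delay is dropped here so the sketch runs instantly):

```shell
# Re-run a command until it succeeds, giving up after N tries.
retry() {
  tries=$1; shift
  n=1
  while ! "$@"; do
    [ "$n" -ge "$tries" ] && return 1
    n=$((n + 1))
    # a real retry loop would sleep here between attempts
  done
  return 0
}

retry 3 true && echo "pull succeeded"
retry 3 false || echo "gave up after 3 tries"
```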

View File

@@ -0,0 +1,23 @@
---
- name: create ollama systemd override directory
  become: true
  ansible.builtin.file:
    path: /etc/systemd/system/ollama.service.d
    state: directory
    mode: 0755
- name: template ollama environment override
  become: true
  ansible.builtin.template:
    src: ollama.env.j2
    dest: /etc/systemd/system/ollama.service.d/override.conf
    mode: 0644
  notify: restart ollama
- name: enable and start ollama service
  become: true
  ansible.builtin.systemd:
    name: ollama
    enabled: true
    state: started
    daemon_reload: true

View File

@@ -0,0 +1,4 @@
[Service]
Environment="OLLAMA_HOST={{ ollama_host }}:{{ ollama_port }}"
Environment="OLLAMA_NUM_PARALLEL=1"
Environment="OLLAMA_MAX_LOADED_MODELS=1"

File diff suppressed because one or more lines are too long

View File

@@ -42,11 +42,3 @@
    scope: user
  tags:
    - zomboid
-- name: restart fluent-bit
-  become: true
-  ansible.builtin.systemd:
-    name: fluent-bit
-    state: restarted
-  tags:
-    - fluent-bit

View File

@@ -4,7 +4,7 @@
    container_name: awsddns
    container_image: "{{ image }}"
-- name: create home.bdebyl.net awsddns server container
+- name: create home.debyl.io awsddns server container
  become: true
  become_user: "{{ podman_user }}"
  diff: false

View File

@@ -0,0 +1,59 @@
---
- name: create backup SSH key directory
  become: true
  ansible.builtin.file:
    path: /etc/ssh/backup_keys
    state: directory
    owner: root
    group: root
    mode: 0700
- name: deploy {{ backup_name }} backup SSH key
  become: true
  ansible.builtin.copy:
    content: "{{ ssh_key_content }}"
    dest: "{{ ssh_key_path }}"
    owner: root
    group: root
    mode: 0600
    setype: ssh_home_t
- name: template {{ backup_name }} backup script
  become: true
  ansible.builtin.template:
    src: nextcloud/cloud-backup.sh.j2
    dest: "{{ script_path }}"
    owner: root
    group: root
    mode: 0755
    setype: bin_t
- name: template {{ backup_name }} backup systemd service
  become: true
  ansible.builtin.template:
    src: nextcloud/cloud-backup.service.j2
    dest: "/etc/systemd/system/{{ backup_name }}-backup.service"
    owner: root
    group: root
    mode: 0644
  vars:
    instance_name: "{{ backup_name }}"
- name: template {{ backup_name }} backup systemd timer
  become: true
  ansible.builtin.template:
    src: nextcloud/cloud-backup.timer.j2
    dest: "/etc/systemd/system/{{ backup_name }}-backup.timer"
    owner: root
    group: root
    mode: 0644
  vars:
    instance_name: "{{ backup_name }}"
- name: enable and start {{ backup_name }} backup timer
  become: true
  ansible.builtin.systemd:
    name: "{{ backup_name }}-backup.timer"
    enabled: true
    state: started
    daemon_reload: true

View File

@@ -75,7 +75,7 @@
- import_tasks: podman/podman-check.yml
  vars:
    container_name: graylog-mongo
-    container_image: docker.io/mongo:6
+    container_image: "{{ mongo_image }}"
  tags: graylog
- name: create graylog-mongo container
@@ -83,7 +83,7 @@
  become_user: "{{ podman_user }}"
  containers.podman.podman_container:
    name: graylog-mongo
-    image: docker.io/mongo:6
+    image: "{{ mongo_image }}"
    state: started
    restart_policy: on-failure:3
    log_driver: journald
@@ -103,7 +103,7 @@
- import_tasks: podman/podman-check.yml
  vars:
    container_name: graylog-opensearch
-    container_image: docker.io/opensearchproject/opensearch:2
+    container_image: "{{ opensearch_image }}"
  tags: graylog
- name: create graylog-opensearch container
@@ -111,7 +111,7 @@
  become_user: "{{ podman_user }}"
  containers.podman.podman_container:
    name: graylog-opensearch
-    image: docker.io/opensearchproject/opensearch:2
+    image: "{{ opensearch_image }}"
    state: started
    restart_policy: on-failure:3
    log_driver: journald
@@ -135,7 +135,7 @@
- import_tasks: podman/podman-check.yml
  vars:
    container_name: graylog
-    container_image: docker.io/graylog/graylog:6.0
+    container_image: "{{ image }}"
  tags: graylog
# Graylog uses host network to reach MongoDB/OpenSearch on 127.0.0.1
@@ -145,7 +145,7 @@
  become_user: "{{ podman_user }}"
  containers.podman.podman_container:
    name: graylog
-    image: docker.io/graylog/graylog:6.0
+    image: "{{ image }}"
    state: started
    restart_policy: on-failure:3
    log_driver: journald

View File

@@ -0,0 +1,54 @@
---
- name: create n8n host directory volumes
  become: true
  ansible.builtin.file:
    path: "{{ item }}"
    state: directory
    owner: "{{ podman_user }}"
    group: "{{ podman_user }}"
    mode: 0755
  notify: restorecon podman
  loop:
    - "{{ n8n_path }}"
- name: set n8n volume ownership for node user
  become: true
  become_user: "{{ podman_user }}"
  ansible.builtin.command:
    cmd: podman unshare chown -R 1000:1000 {{ n8n_path }}
  changed_when: false
- name: flush handlers
  ansible.builtin.meta: flush_handlers
- import_tasks: podman/podman-check.yml
  vars:
    container_name: n8n
    container_image: "{{ image }}"
- name: create n8n container
  become: true
  become_user: "{{ podman_user }}"
  containers.podman.podman_container:
    name: n8n
    image: "{{ image }}"
    image_strict: true
    restart_policy: on-failure:3
    log_driver: journald
    network: shared
    volumes:
      - "{{ n8n_path }}:/home/node/.n8n"
    ports:
      - 5678:5678/tcp
    env:
      N8N_HOST: "{{ n8n_server_name }}"
      N8N_PORT: "5678"
      N8N_PROTOCOL: https
      WEBHOOK_URL: "https://{{ n8n_server_name }}/"
      N8N_SECURE_COOKIE: "true"
      GENERIC_TIMEZONE: America/New_York
- name: create systemd startup job for n8n
  include_tasks: podman/systemd-generate.yml
  vars:
    container_name: n8n

View File

@@ -83,3 +83,13 @@
  include_tasks: podman/systemd-generate.yml
  vars:
    container_name: cloud
+- include_tasks: containers/cloud-backup.yml
+  vars:
+    backup_name: cloud
+    data_path: "{{ cloud_path }}/data"
+    ssh_key_path: /etc/ssh/backup_keys/cloud
+    ssh_key_content: "{{ cloud_backup_ssh_key }}"
+    ssh_user: cloud
+    remote_path: /mnt/glacier/nextcloud
+    script_path: /usr/local/bin/cloud-backup.sh

View File

@@ -40,7 +40,13 @@
      - host
    env:
      TZ: America/New_York
-      # Gemini AI for @bot ask command
+      # Ollama + SearXNG for FISTO AI responses
+      OLLAMA_HOST: "http://127.0.0.1:11434"
+      OLLAMA_MODEL: "dolphin-mistral"
+      OLLAMA_FALLBACK_MODEL: "dolphin-phi"
+      OLLAMA_NUM_PREDICT: "300"
+      SEARXNG_URL: "http://127.0.0.1:8080"
+      # Gemini API for @bot gemini command
      GEMINI_API_KEY: "{{ gemini_api_key }}"
      # Zomboid RCON configuration for Discord restart command
      ZOMBOID_RCON_HOST: "127.0.0.1"

View File

@@ -0,0 +1,59 @@
---
- name: create searxng host directory volumes
  become: true
  ansible.builtin.file:
    path: "{{ item }}"
    state: directory
    owner: "{{ podman_subuid.stdout }}"
    group: "{{ podman_user }}"
    mode: 0755
  notify: restorecon podman
  loop:
    - "{{ searxng_path }}/config"
    - "{{ searxng_path }}/data"
- name: template searxng settings
  become: true
  ansible.builtin.template:
    src: searxng/settings.yml.j2
    dest: "{{ searxng_path }}/config/settings.yml"
    owner: "{{ podman_subuid.stdout }}"
    group: "{{ podman_user }}"
    mode: 0644
- name: unshare chown the searxng volumes for internal uid 977
  become: true
  become_user: "{{ podman_user }}"
  changed_when: false
  ansible.builtin.shell: |
    podman unshare chown -R 977:977 {{ searxng_path }}/config
    podman unshare chown -R 977:977 {{ searxng_path }}/data
- name: flush handlers
  ansible.builtin.meta: flush_handlers
- import_tasks: podman/podman-check.yml
  vars:
    container_name: searxng
    container_image: "{{ image }}"
- name: create searxng container
  become: true
  become_user: "{{ podman_user }}"
  containers.podman.podman_container:
    name: searxng
    image: "{{ image }}"
    restart_policy: on-failure:3
    log_driver: journald
    network:
      - host
    env:
      SEARXNG_BASE_URL: "http://127.0.0.1:8080/"
    volumes:
      - "{{ searxng_path }}/config:/etc/searxng"
      - "{{ searxng_path }}/data:/srv/searxng/data"
- name: create systemd startup job for searxng
  include_tasks: podman/systemd-generate.yml
  vars:
    container_name: searxng

View File

@@ -130,3 +130,13 @@
  register: trusted_domain_result
  changed_when: "'System config value trusted_domains' in trusted_domain_result.stdout"
  failed_when: false
+- include_tasks: containers/cloud-backup.yml
+  vars:
+    backup_name: skudak-cloud
+    data_path: "{{ cloud_skudak_path }}/data"
+    ssh_key_path: /etc/ssh/backup_keys/skudak-cloud
+    ssh_key_content: "{{ cloud_skudak_backup_ssh_key }}"
+    ssh_user: skucloud
+    remote_path: /mnt/glacier/skudakcloud
+    script_path: /usr/local/bin/skudak-cloud-backup.sh

View File

@@ -31,7 +31,7 @@
- import_tasks: containers/home/hass.yml
  vars:
-    image: ghcr.io/home-assistant/home-assistant:2025.9
+    image: ghcr.io/home-assistant/home-assistant:2026.1
  tags: hass
- import_tasks: containers/home/partkeepr.yml
@@ -62,13 +62,13 @@
- import_tasks: containers/home/cloud.yml
  vars:
    db_image: docker.io/library/mariadb:10.6
-    image: docker.io/library/nextcloud:32.0.1-apache
+    image: docker.io/library/nextcloud:33.0.0-apache
  tags: cloud
- import_tasks: containers/skudak/cloud.yml
  vars:
    db_image: docker.io/library/mariadb:10.6
-    image: docker.io/library/nextcloud:32.0.1-apache
+    image: docker.io/library/nextcloud:33.0.0-apache
  tags: skudak, skudak-cloud
- import_tasks: containers/debyltech/fulfillr.yml
@@ -76,6 +76,11 @@
    image: git.debyl.io/debyltech/fulfillr:20260124.0411
  tags: debyltech, fulfillr
+- import_tasks: containers/debyltech/n8n.yml
+  vars:
+    image: docker.io/n8nio/n8n:2.11.3
+  tags: debyltech, n8n
- import_tasks: containers/debyltech/uptime-kuma.yml
  vars:
    image: docker.io/louislam/uptime-kuma:2.0.2
@@ -86,21 +91,28 @@
    image: docker.io/louislam/uptime-kuma:2.0.2
  tags: home, uptime
-- import_tasks: containers/debyltech/geoip.yml
-  tags: debyltech, graylog, geoip
+- import_tasks: data/geoip.yml
+  tags: graylog, geoip
- import_tasks: containers/debyltech/graylog.yml
+  vars:
+    mongo_image: docker.io/mongo:7.0
+    opensearch_image: docker.io/opensearchproject/opensearch:2
+    image: docker.io/graylog/graylog:7.0.1
  tags: debyltech, graylog
-- import_tasks: containers/base/fluent-bit.yml
-  tags: fluent-bit, graylog
+- import_tasks: containers/home/searxng.yml
+  vars:
+    image: docker.io/searxng/searxng:latest
+  tags: searxng
- import_tasks: containers/home/gregtime.yml
  vars:
-    image: localhost/greg-time-bot:3.0.2
+    image: localhost/greg-time-bot:3.4.3
  tags: gregtime
- import_tasks: containers/home/zomboid.yml
  vars:
    image: docker.io/cm2network/steamcmd:root
  tags: zomboid

View File

@@ -112,6 +112,7 @@
- name: fetch subuid of {{ podman_user }}
  become: true
  changed_when: false
+  check_mode: false
  ansible.builtin.shell: |
    set -o pipefail && cat /etc/subuid | awk -F':' '/{{ podman_user }}/{ print $2 }' | head -n 1
  register: podman_subuid
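The awk pipeline in that task can be exercised against a sample file with the same `/etc/subuid` layout (`user:start:count`); the uid values below are made up for the sketch:

```shell
# Reproduce the subuid lookup: print the start of the first subuid
# range belonging to the podman user.
subuid=$(mktemp)
printf 'fedora:100000:65536\npodman:165536:65536\n' > "$subuid"
awk -F':' '/podman/{ print $2 }' "$subuid" | head -n 1
```

Setting `check_mode: false` lets this read-only lookup run during `--check` dry runs, so later tasks that reference `podman_subuid.stdout` still have a value.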

View File

@@ -130,11 +130,6 @@
# CI/Drone - REMOVED
# ci.bdebyl.net configuration removed - Drone CI infrastructure decommissioned
-# Home server - redirect old to new
-{{ home_server_name }} {
-    redir https://{{ home_server_name_io }}{uri} 302
-}
# Home server - {{ home_server_name_io }}
{{ home_server_name_io }} {
    {{ ip_restricted_site() }}
@@ -337,6 +332,23 @@
    }
}
+# N8N Workflow Automation - {{ n8n_server_name }}
+{{ n8n_server_name }} {
+    {{ ip_restricted_site() }}
+    handle @local {
+        import common_headers
+        reverse_proxy localhost:5678 {
+            flush_interval -1
+        }
+    }
+    log {
+        output file {{ caddy_log_path }}/n8n.log
+        format {{ caddy_log_format }}
+    }
+}
# Fulfillr - {{ fulfillr_server_name }} (Static + API with IP restrictions)
{{ fulfillr_server_name }} {
    {{ ip_restricted_site() }}

View File

@@ -0,0 +1,6 @@
[Unit]
Description=Nextcloud {{ instance_name }} backup to TrueNAS
[Service]
Type=oneshot
ExecStart={{ script_path }}

View File

@@ -0,0 +1,4 @@
#!/bin/bash
set -euo pipefail
rsync -az --exclude .ssh -e "ssh -i {{ ssh_key_path }} -o StrictHostKeyChecking=accept-new" \
{{ data_path }}/ {{ ssh_user }}@truenas.localdomain:{{ remote_path }}/

View File

@@ -0,0 +1,9 @@
[Unit]
Description=Daily Nextcloud {{ instance_name }} backup
[Timer]
OnCalendar=*-*-* 04:00:00
Persistent=true
[Install]
WantedBy=timers.target

View File

@@ -0,0 +1,35 @@
use_default_settings: true
general:
  instance_name: "SearXNG"
  debug: false
server:
  bind_address: "127.0.0.1"
  port: 8080
  secret_key: "{{ searxng_secret_key }}"
  limiter: false
  image_proxy: false
search:
  safe_search: 0
  formats:
    - html
    - json
engines:
  - name: duckduckgo
    engine: duckduckgo
    disabled: false
  - name: google
    engine: google
    disabled: false
  - name: wikipedia
    engine: wikipedia
    disabled: false
  - name: bing
    engine: bing
    disabled: false

View File

@@ -84,6 +84,6 @@ fi
# Start server
cd "${INSTALL_DIR}"
echo "=== Starting Project Zomboid Server ==="
-echo "Connect to: home.bdebyl.net:16261"
+echo "Connect to: home.debyl.io:16261"
exec su -c "export LD_LIBRARY_PATH=${INSTALL_DIR}/jre64/lib:\${LD_LIBRARY_PATH} && ./start-server.sh ${SERVER_ARGS}" steam

Binary file not shown.