Compare commits

...

37 Commits

Author SHA1 Message Date
Bastian de Byl
b4ebc4bad7 feat: increase act_runner capacity for parallel job execution
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-29 13:51:24 -04:00
Bastian de Byl
d5e473304a fix: use python_env as guard for ESP-IDF install task
The tools directory can exist without the Python venv being created,
causing install.sh to be skipped on re-runs. Check for python_env
instead, which is the actual output we need.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-29 12:43:14 -04:00
Bastian de Byl
5deb2e6e48 feat: add SSH key and known_hosts for gitea-runner
Generate ed25519 deploy key and add git.skudak.com/git.debyl.io host
keys to known_hosts so the runner can clone SSH submodules in CI.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-28 17:53:01 -04:00
Bastian de Byl
1c478e6ab5 fix: add ESP-IDF to git safe.directory before submodule init
Root-owned /opt/esp-idf triggers git dubious ownership check when
running submodule update. Add safe.directory config beforehand.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-28 17:10:25 -04:00
Bastian de Byl
dbd898cb2f feat: support multiple Gitea instances for actions runner
The gitea-actions role now uses a `gitea_runners` list instead of a
single `gitea_instance_url`. Each instance gets its own config, systemd
service, working directory, and cache. Migrates from the old single
`act_runner.service` to per-instance `act_runner-{name}.service`.

Adds git.skudak.com alongside git.debyl.io as runner targets.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-28 17:04:23 -04:00
Bastian de Byl
43fbcf59a5 add n8n workflow automation and fix cloud backup rsync
- Add n8n container (n8nio/n8n:2.11.3) with Caddy reverse proxy at n8n.debyl.io
- Add --exclude .ssh to cloud backup rsync to prevent overwriting
  authorized_keys on TrueNAS backup targets

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-17 12:12:19 -04:00
Bastian de Byl
f23fc62ada fix: move cloud backup keys and scripts out of container volume paths
SSH keys moved to /etc/ssh/backup_keys/ (ssh_home_t) and backup scripts
to /usr/local/bin/ (bin_t) to fix SELinux denials - container_file_t
context blocked rsync from exec'ing ssh. Also fixes skudak key path
mismatch (was truenas_skudak, key deployed as truenas_skudak-cloud).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-05 14:45:03 -05:00
Bastian de Byl
d4b01468ba chore: update vault variables 2026-03-05 14:00:16 -05:00
Bastian de Byl
8fd220a16e noticket - update zomboid b42revamp modpack to collection 3672556207
Replaces old 168-mod collection (3636931465) with new 385-mod collection.
Cleaned BBCode artifacts from mod IDs, updated map folders for 32 maps.
LogCabin retained for player connect/disconnect logging.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-05 13:59:33 -05:00
Bastian de Byl
3637b3ba23 noticket - remove karrio, update gregtime, fix caddy duplicate redirect
Remove Karrio shipping platform (containers, config, vault secrets,
Caddy site block). Bump gregtime 3.4.1 -> 3.4.3. Remove duplicate
home.debyl.io redirect in Caddyfile. Update zomboid b42revamp mod list.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-24 17:40:00 -05:00
Bastian de Byl
9f95585daa noticket - updated gregtime 2026-02-17 14:21:02 -05:00
Bastian de Byl
495943b837 feat: add ollama and searxng, migrate to debyl.io hostname
- Add ollama role for local LLM inference (install, service, models)
- Add searxng container for private search
- Migrate hostname from home.bdebyl.net to home.debyl.io
  (inventory, awsddns, zomboid entrypoint, home_server_name)
- Update vault with new secrets

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-12 15:13:25 -05:00
Bastian de Byl
3eb6938b62 feat: switch FISTO to dolphin-mistral with dolphin-phi fallback
Benchmarked uncensored models for the gregtime FISTO bot. dolphin-mistral
produces the best uncensored creative content, dolphin-phi is faster fallback.
Added OLLAMA_NUM_PREDICT env var (300) and bumped image to 3.3.0.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-12 14:56:52 -05:00
Bastian de Byl
d10cd49cf0 refactor: use variables for graylog stack image versions
Move hardcoded image versions to variables defined in main.yml for
easier version management in one place.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 12:35:51 -05:00
Bastian de Byl
61692b36a2 refactor: reorganize fluent-bit and geoip out of containers
- Move fluent-bit to common role (systemd service, not a container)
- Move geoip to podman/tasks/data/ (data prep, not a container)
- Remove debyltech tag from geoip (not a debyltech service)
- Fix check_mode for fetch subuid task to enable dry-run mode

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 12:34:43 -05:00
Bastian de Byl
9d562c7188 feat: smart zomboid traffic filtering with packet-size detection
Replace per-IP hashlimit with smarter filtering that distinguishes
legitimate players from scanner bots based on packet behavior:
- Players send varied packet sizes (53, 37, 1472 bytes)
- Scanners only send 53-byte query packets

New firewall rule chain:
- Priority 2: Mark + ACCEPT non-query packets (verifies player)
- Priority 3: ACCEPT queries from verified IPs (1 hour TTL)
- Priority 4: LOG rate-limited queries from unverified IPs
- Priority 5: DROP rate-limited queries (2 burst, then 1/hour)

Also includes:
- Fail2ban zomboid jail with tighter thresholds (5 retries/4h, 1w ban)
- Graylog streams for zomboid-connections, zomboid-ratelimit, fail2ban
- GeoIP pipeline enrichment for zomboid traffic
- Fluent-bit inputs for ratelimit logs and fail2ban events
- Remove Legendary Katana mod (Workshop 3418366499) - removed from Steam
- Bump Immich to v2.5.0
- Fix fulfillr config (nil → null)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 15:09:26 -05:00
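The packet-size verification described in the commit above can be sketched as a small simulation. This is a hypothetical model of the rule chain's logic, not the actual firewall rules; names like `classify` and the burst budget are illustrative:

```python
# Hypothetical simulation of the packet-size verification idea: any
# non-query packet (size != 53 bytes) marks the source as a verified
# player; queries from unverified sources are rate limited.
QUERY_SIZE = 53

def classify(packets, burst=2):
    verified, allowed, dropped = set(), [], []
    budget = {}  # remaining rate-limit budget per unverified source
    for src, size in packets:
        if size != QUERY_SIZE:
            verified.add(src)                # priority 2: mark + accept
            allowed.append((src, size))
        elif src in verified:
            allowed.append((src, size))      # priority 3: accept verified queries
        else:
            budget.setdefault(src, burst)
            if budget[src] > 0:
                budget[src] -= 1             # priority 4: rate-limited query
                allowed.append((src, size))
            else:
                dropped.append((src, size))  # priority 5: drop
    return allowed, dropped

player = [("1.1.1.1", 53), ("1.1.1.1", 1472), ("1.1.1.1", 53), ("1.1.1.1", 53)]
scanner = [("9.9.9.9", 53)] * 5
allowed, dropped = classify(player + scanner)
```

A player who ever sends a non-53-byte packet is never dropped; a scanner sending only queries exhausts its burst and everything after is dropped.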
Bastian de Byl
33eceff1fe feat: add personal uptime kuma instance at uptime.debyl.io
- Add uptime-kuma-personal container on port 3002
- Add Caddy config for uptime.debyl.io with IP restriction
- Update both uptime-kuma instances to 2.0.2
- Rename debyltech tag from uptime-kuma to uptime-debyltech

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 08:04:33 -05:00
Bastian de Byl
bc26fcd1f9 chore: fluent-bit zomboid, zomboid stats, home assistant, gregbot 2026-01-24 17:08:05 -05:00
Bastian de Byl
045eb0b5a7 chore: update fulfillr 2026-01-23 12:07:08 -05:00
Bastian de Byl
9a95eecfd5 chore: zomboid stats for gregtime, updates 2026-01-23 12:02:57 -05:00
Bastian de Byl
a59dc7a050 chore: bump gregtime to 2.0.9
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-22 23:16:51 -05:00
Bastian de Byl
2b4844b211 feat: add fulfillr outreach email configuration
- Update street2 address to Unit 95
- Add outreach config with DynamoDB tables and SES settings

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-22 23:11:58 -05:00
Bastian de Byl
86e1b88d5a chore: bump image versions
- fulfillr: 20260109.0522 -> 20260123.0109
- gregtime: 1.9.0 -> 2.0.8

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-22 23:11:57 -05:00
Bastian de Byl
9e04727b0e feat: update zomboid b42revamp server name and mods
- Rename b42revamp server from "zomboidb42revamp" to "gregboid"
- Remove mod 3238830225 from workshop items
- Replace Real Firearms with B42RainsFirearmsAndGunPartsExpanded4213
- Remove 2788256295/ammomaker mod

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-22 23:11:56 -05:00
Bastian de Byl
2c7704b6f9 feat: add zomboid world reset via systemd path unit
Deploy systemd path unit that watches for trigger file from Discord
bot and executes world reset script to delete saves and restart server.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-22 23:11:54 -05:00
Bastian de Byl
c2d117bd95 feat: add systemd timer for zomboid container stats
Deploy systemd timer that writes zomboid container stats to
zomboid-stats.json every 30 seconds for gregtime to read.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-22 23:10:05 -05:00
Bastian de Byl
5ac9b2fabe noticket - chore: update gregtime 2026-01-20 13:03:05 -05:00
Bastian de Byl
9974e2f773 chore: gregtime to use gemini API key 2026-01-18 12:50:17 -05:00
Bastian de Byl
ca1a12ba60 noticket - gregtime updates 2026-01-16 21:55:34 -05:00
Bastian de Byl
7a257f9daa chore: consolidate skudak SMTP config to shared variables
- Update Gitea Skudak to use skudaknoreply_mail_* vault variables
- Remove redundant gitea_skudak_smtp_* variables from vault
- Update skudaknoreply credentials to noreply@skudak.com

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 22:32:32 -05:00
Bastian de Byl
c96aeafb3f feat: add git.skudak.com Gitea instance and skudak domain migrations
Gitea Skudak (git.skudak.com):
- New Gitea instance with PostgreSQL in podman pod under git user
- SSH access via Gitea's built-in SSH server on port 2222
- Registration restricted to @skudak.com emails with email confirmation
- SMTP configured for email delivery

Domain migrations:
- wiki.skudakrennsport.com → wiki.skudak.com (302 redirect)
- cloud.skudakrennsport.com + cloud.skudak.com (dual-domain serving)
- BookStack APP_URL updated to wiki.skudak.com
- Nextcloud trusted_domains updated for cloud.skudak.com

Infrastructure:
- SELinux context for git user container storage (container_file_t)
- Firewall rule for port 2222/tcp (Gitea Skudak SSH)
- Caddy reverse proxy for git.skudak.com

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 22:27:02 -05:00
Bastian de Byl
9e665a841d chore: non-cifs nextcloud, partsy, zomboid updates 2026-01-15 16:48:07 -05:00
Bastian de Byl
34b45853e2 graylog updates, test.debyl.io, scripts for reference 2026-01-13 16:08:38 -05:00
Bastian de Byl
364047558c fix: add backinstock_table for fulfillr config 2026-01-09 15:16:47 -05:00
Bastian de Byl
6af3c5dc69 feat: add comprehensive access logging to Graylog with GeoIP
- Add fluent-bit inputs for Caddy access logs (JSON) and SSH logs
- Create GeoIP task to download MaxMind GeoLite2-City database
- Mount GeoIP database in Graylog container
- Enable Gitea access logging via environment variables
- Add parsers.conf for Caddy JSON log parsing
- Remove unused nosql/redis container and configuration

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-09 15:16:21 -05:00
Bastian de Byl
8685676109 feat: add esp-idf installation and depends for gitea act runner 2026-01-04 15:15:09 -05:00
Bastian de Byl
d9bf3e5c75 chore: fixed gregtime, updated immich 2026-01-03 21:50:05 -05:00
79 changed files with 2905 additions and 199 deletions

View File

@@ -9,7 +9,7 @@ This is a home infrastructure deployment repository using Ansible for automated
## Development Commands
### Core Commands
-- `make` or `make lint` - Run linting (yamllint + ansible-lint) on all YAML files
+- `make` or `make lint` - Run yamllint on all YAML files. Output may only show "Running yamllint..." and "Done." with no errors listed — this means linting passed. Do NOT run yamllint or ansible-lint manually; `make lint` is the only lint step needed.
- `make deploy` - Deploy all configurations to the home server
- `make deploy TAGS=sometag` - Deploy only specific tagged tasks
- `make deploy TARGET=specific-host` - Deploy to specific host instead of all
@@ -96,9 +96,22 @@ Tasks are tagged by service/component for selective deployment:
## Target Environment
-- Single target host: `home.bdebyl.net`
+- Single target host: `home.debyl.io`
- OS: Fedora (ansible_user: fedora)
- Container runtime: Podman
- Web server: Caddy with automatic HTTPS and built-in security (replaced nginx + ModSecurity)
- All services accessible via HTTPS with automatic certificate renewal
- ~~CI/CD: Drone CI infrastructure completely decommissioned~~
### Remote SSH Commands for Service Users
The `podman` user (and other service users) have `/bin/nologin` as their shell. To run commands as these users via SSH:
- **One-off commands**: `sudo -H -u podman bash -c 'command here'`
- **Interactive shell**: `sudo -H -u podman bash -c 'cd; bash'`
- **systemctl --user** requires `XDG_RUNTIME_DIR`:
```bash
sudo -H -u podman bash -c 'export XDG_RUNTIME_DIR=/run/user/$(id -u); systemctl --user <action> <service>'
```
Podman is a user-specific (rootless) container runtime, not a system service like Docker. The user context matters for all podman and systemctl --user operations. The default SSH user (`fedora`) has sudo access and can run commands directly.
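The runtime-dir derivation used in the one-liner above can be sanity-checked without sudo (shown here for the current user; on the server it would be wrapped in `sudo -H -u podman bash -c ...`):

```bash
# Same derivation the sudo one-liner uses; systemd creates /run/user/<uid>
# when the user has an active session or lingering enabled.
XDG_RUNTIME_DIR=/run/user/$(id -u)
echo "$XDG_RUNTIME_DIR"
```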

View File

@@ -70,6 +70,7 @@ vault: ${ANSIBLE_VAULT} ${VAULT_FILE}
lint: ${LINT_YAML} ${SKIP_FILE}
@printf "Running yamllint...\n"
-@${LINT_YAML} ${YAML_FILES}
@printf "Done.\n"
# Git-crypt management
git-crypt-backup:

View File

@@ -8,4 +8,8 @@
- role: podman
# SSL certificates are now handled automatically by Caddy
# - role: ssl # REMOVED - Caddy handles all certificate management
- role: ollama
tags: ollama
- role: github-actions
- role: graylog-config
tags: graylog-config

View File

@@ -1,5 +1,5 @@
---
all:
hosts:
-home.bdebyl.net:
+home.debyl.io:
ansible_user: fedora

View File

@@ -12,7 +12,8 @@ deps:
python-docker,
]
-fail2ban_jails: [sshd.local]
+fail2ban_jails: [sshd.local, zomboid.local]
+fail2ban_filters: [zomboid.conf]
services:
- crond

View File

@@ -0,0 +1,5 @@
[Definition]
# Match ZOMBOID_RATELIMIT firewall log entries
# Example: ZOMBOID_RATELIMIT: IN=eth0 OUT= MAC=... SRC=1.2.3.4 DST=...
failregex = ZOMBOID_RATELIMIT:.*SRC=<HOST>
ignoreregex =
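The failregex can be sanity-checked outside fail2ban by substituting `<HOST>` with a capture group. The `(?P<host>\S+)` below is a simplified stand-in for fail2ban's real `<HOST>` pattern, which also handles IPv6 and hostnames:

```python
import re

# Simplified stand-in for fail2ban's <HOST> placeholder.
failregex = r"ZOMBOID_RATELIMIT:.*SRC=<HOST>".replace("<HOST>", r"(?P<host>\S+)")

line = "ZOMBOID_RATELIMIT: IN=eth0 OUT= MAC=aa:bb SRC=1.2.3.4 DST=192.168.1.10 LEN=81"
m = re.search(failregex, line)
print(m.group("host"))  # → 1.2.3.4
```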

View File

@@ -0,0 +1,9 @@
[zomboid]
enabled = true
filter = zomboid
banaction = iptables-allports
backend = systemd
maxretry = 5
findtime = 4h
bantime = 1w
ignoreip = 127.0.0.1/32 192.168.1.0/24

View File

@@ -10,3 +10,9 @@
ansible.builtin.service:
name: fail2ban
state: restarted
- name: restart fluent-bit
become: true
ansible.builtin.systemd:
name: fluent-bit
state: restarted

View File

@@ -2,31 +2,31 @@
# Fluent Bit - Log forwarder from journald to Graylog GELF
# Deployed as systemd service (not container) for direct journal access
# Clean up old container deployment if it exists
- name: stop and remove fluent-bit container if exists
become: true
become_user: "{{ podman_user }}"
containers.podman.podman_container:
name: fluent-bit
state: absent
ignore_errors: true
- name: disable old fluent-bit container systemd service
become: true
become_user: "{{ podman_user }}"
ansible.builtin.systemd:
name: fluent-bit
enabled: false
state: stopped
scope: user
ignore_errors: true
- name: install fluent-bit package
become: true
ansible.builtin.dnf:
name: fluent-bit
state: present
- name: create fluent-bit state directory for tail db files
become: true
ansible.builtin.file:
path: /var/lib/fluent-bit
state: directory
owner: root
group: root
mode: '0755'
- name: deploy fluent-bit parsers configuration
become: true
ansible.builtin.template:
src: fluent-bit/parsers.conf.j2
dest: /etc/fluent-bit/parsers.conf
owner: root
group: root
mode: '0644'
notify: restart fluent-bit
- name: deploy fluent-bit configuration
become: true
ansible.builtin.template:

View File

@@ -3,6 +3,9 @@
- import_tasks: security.yml
- import_tasks: service.yml
- import_tasks: fluent-bit.yml
tags: fluent-bit, graylog
- name: create the docker group
become: true
ansible.builtin.group:

View File

@@ -21,6 +21,16 @@
notify: restart_sshd
tags: security
- name: setup fail2ban filters
become: true
ansible.builtin.copy:
src: files/fail2ban/filters/{{ item }}
dest: /etc/fail2ban/filter.d/{{ item }}
mode: 0644
loop: "{{ fail2ban_filters }}"
notify: restart_fail2ban
tags: security
- name: setup fail2ban jails
become: true
ansible.builtin.copy:

View File

@@ -0,0 +1,155 @@
[SERVICE]
    Flush        5
    Daemon       Off
    Log_Level    info
    Parsers_File parsers.conf

# =============================================================================
# INPUT: Podman container logs
# =============================================================================
# Container logs come from conmon process with CONTAINER_NAME field
[INPUT]
    Name              systemd
    Tag               podman.*
    Systemd_Filter    _COMM=conmon
    Read_From_Tail    On
    Strip_Underscores On

# =============================================================================
# INPUT: SSH logs for security monitoring
# =============================================================================
[INPUT]
    Name              systemd
    Tag               ssh.*
    Systemd_Filter    _SYSTEMD_UNIT=sshd.service
    Read_From_Tail    On
    Strip_Underscores On

# =============================================================================
# INPUT: Kernel firewall logs for Zomboid connections
# =============================================================================
# Captures ZOMBOID_CONN firewall events with source IP for player correlation
[INPUT]
    Name              systemd
    Tag               firewall.zomboid
    Systemd_Filter    _TRANSPORT=kernel
    Read_From_Tail    On
    Strip_Underscores On

# =============================================================================
# INPUT: Kernel firewall logs for Zomboid rate limiting
# =============================================================================
# Captures ZOMBOID_RATELIMIT firewall events for fail2ban monitoring
[INPUT]
    Name              systemd
    Tag               firewall.zomboid.ratelimit
    Systemd_Filter    _TRANSPORT=kernel
    Read_From_Tail    On
    Strip_Underscores On

# =============================================================================
# INPUT: Fail2ban actions (ban/unban events)
# =============================================================================
[INPUT]
    Name              systemd
    Tag               fail2ban.*
    Systemd_Filter    _SYSTEMD_UNIT=fail2ban.service
    Read_From_Tail    On
    Strip_Underscores On

# =============================================================================
# INPUT: Caddy access logs (JSON format)
# =============================================================================
{% for log_name in caddy_log_names %}
[INPUT]
    Name             tail
    Tag              caddy.{{ log_name }}
    Path             {{ caddy_log_path }}/{{ log_name }}.log
    Parser           caddy_json
    Read_From_Head   False
    Refresh_Interval 5
    DB               /var/lib/fluent-bit/caddy_{{ log_name }}.db
{% endfor %}

# =============================================================================
# FILTERS: Add metadata for Graylog categorization
# =============================================================================
# Exclude Graylog stack containers to prevent feedback loop
[FILTER]
    Name    grep
    Match   podman.*
    Exclude CONTAINER_NAME ^graylog

[FILTER]
    Name   record_modifier
    Match  podman.*
    Record host {{ ansible_hostname }}
    Record source podman
    Record log_type container

[FILTER]
    Name   record_modifier
    Match  ssh.*
    Record host {{ ansible_hostname }}
    Record source sshd
    Record log_type security

# Copy msg to MESSAGE for caddy logs (GELF requires MESSAGE)
[FILTER]
    Name  modify
    Match caddy.*
    Copy  msg MESSAGE

[FILTER]
    Name   record_modifier
    Match  caddy.*
    Record host {{ ansible_hostname }}
    Record source caddy
    Record log_type access

# Filter kernel logs to only keep ZOMBOID_CONN messages
[FILTER]
    Name  grep
    Match firewall.zomboid
    Regex MESSAGE ZOMBOID_CONN

[FILTER]
    Name   record_modifier
    Match  firewall.zomboid
    Record host {{ ansible_hostname }}
    Record source firewall
    Record log_type zomboid_connection

# Filter kernel logs to only keep ZOMBOID_RATELIMIT messages
[FILTER]
    Name  grep
    Match firewall.zomboid.ratelimit
    Regex MESSAGE ZOMBOID_RATELIMIT

[FILTER]
    Name   record_modifier
    Match  firewall.zomboid.ratelimit
    Record host {{ ansible_hostname }}
    Record source firewall
    Record log_type zomboid_ratelimit

# Fail2ban ban/unban events
[FILTER]
    Name   record_modifier
    Match  fail2ban.*
    Record host {{ ansible_hostname }}
    Record source fail2ban
    Record log_type security

# =============================================================================
# OUTPUT: All logs to Graylog GELF TCP
# =============================================================================
# Graylog needs a GELF TCP input configured on port 12202
[OUTPUT]
    Name                   gelf
    Match                  *
    Host                   127.0.0.1
    Port                   12202
    Mode                   tcp
    Gelf_Short_Message_Key MESSAGE
    Gelf_Host_Key          host
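GELF over TCP frames each message as a JSON document terminated by a NUL byte. A minimal sender sketch, assuming a Graylog GELF TCP input is listening on 127.0.0.1:12202 as configured above (`gelf_frame` is an illustrative helper, not part of fluent-bit or Graylog):

```python
import json
import socket

def gelf_frame(host: str, short_message: str, **extra) -> bytes:
    """Build a GELF 1.1 payload; TCP framing is the JSON doc plus a NUL byte.
    Custom fields must be prefixed with an underscore."""
    doc = {"version": "1.1", "host": host, "short_message": short_message}
    doc.update({f"_{k}": v for k, v in extra.items()})
    return json.dumps(doc).encode() + b"\x00"

frame = gelf_frame("galactica", "test message", log_type="access")
# To actually send (requires the GELF TCP input to be up):
# with socket.create_connection(("127.0.0.1", 12202)) as s:
#     s.sendall(frame)
```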

View File

@@ -0,0 +1,24 @@
[PARSER]
    Name        caddy_json
    Format      json
    Time_Key    ts
    Time_Format %s.%L

# Generic JSON parser for nested message fields
[PARSER]
    Name   json
    Format json

# Parse ZOMBOID_CONN firewall logs to extract source IP
# Example: ZOMBOID_CONN: IN=enp0s31f6 OUT= MAC=... SRC=45.5.113.90 DST=192.168.1.10 ...
[PARSER]
    Name   zomboid_firewall
    Format regex
    Regex  ZOMBOID_CONN:.*SRC=(?<src_ip>[0-9.]+).*DST=(?<dst_ip>[0-9.]+).*DPT=(?<dst_port>[0-9]+)

# Parse ZOMBOID_RATELIMIT firewall logs to extract source IP
# Example: ZOMBOID_RATELIMIT: IN=enp0s31f6 OUT= MAC=... SRC=45.5.113.90 DST=192.168.1.10 ...
[PARSER]
    Name   zomboid_ratelimit
    Format regex
    Regex  ZOMBOID_RATELIMIT:.*SRC=(?<src_ip>[0-9.]+).*DST=(?<dst_ip>[0-9.]+).*DPT=(?<dst_port>[0-9]+)
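The named capture groups can be exercised directly in Python (same regex as the `zomboid_firewall` parser above, against an illustrative sample line):

```python
import re

pattern = re.compile(
    r"ZOMBOID_CONN:.*SRC=(?P<src_ip>[0-9.]+)"
    r".*DST=(?P<dst_ip>[0-9.]+).*DPT=(?P<dst_port>[0-9]+)"
)
line = ("ZOMBOID_CONN: IN=enp0s31f6 OUT= MAC=aa:bb SRC=45.5.113.90 "
        "DST=192.168.1.10 PROTO=UDP SPT=54321 DPT=16261 LEN=61")
m = pattern.search(line)
print(m.group("src_ip"), m.group("dst_port"))  # → 45.5.113.90 16261
```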

View File

@@ -6,3 +6,7 @@ git_home: "/srv/{{ git_user }}"
gitea_debyl_server_name: git.debyl.io
gitea_image: docker.gitea.com/gitea:1.25.2
gitea_db_image: docker.io/library/postgres:14-alpine
# Skudak Gitea configuration
gitea_skudak_server_name: git.skudak.com
gitea_skudak_ssh_port: 2222

View File

@@ -0,0 +1,114 @@
---
# Deploy Gitea Skudak containers using Podman pod
# NOTE: Directories are created in podman.yml (psql dir created by postgres container)

# Ensure SELinux contexts are applied before pod creation
- name: flush handlers before gitea-skudak pod creation
  ansible.builtin.meta: flush_handlers
  tags: gitea, gitea-skudak

# Create pod for Skudak Gitea services
- name: create gitea-skudak pod
  become: true
  become_user: "{{ git_user }}"
  containers.podman.podman_pod:
    name: gitea-skudak-pod
    state: started
    ports:
      - "3101:3000"
      - "{{ gitea_skudak_ssh_port }}:2222"
  tags: gitea, gitea-skudak

# PostgreSQL container in pod
- name: create gitea-skudak-postgres container
  become: true
  become_user: "{{ git_user }}"
  containers.podman.podman_container:
    name: gitea-skudak-postgres
    image: "{{ gitea_db_image }}"
    pod: gitea-skudak-pod
    restart_policy: on-failure:3
    log_driver: journald
    env:
      POSTGRES_DB: gitea
      POSTGRES_USER: gitea
      POSTGRES_PASSWORD: "{{ gitea_skudak_db_pass }}"
    volumes:
      - "{{ git_home }}/volumes/gitea-skudak/psql:/var/lib/postgresql/data"
  tags: gitea, gitea-skudak

# Gitea container in pod
- name: create gitea-skudak container
  become: true
  become_user: "{{ git_user }}"
  containers.podman.podman_container:
    name: gitea-skudak
    image: "{{ gitea_image }}"
    pod: gitea-skudak-pod
    restart_policy: on-failure:3
    log_driver: journald
    env:
      USER_UID: "1000"
      USER_GID: "1000"
      GITEA__database__DB_TYPE: postgres
      GITEA__database__HOST: "127.0.0.1:5432"
      GITEA__database__NAME: gitea
      GITEA__database__USER: gitea
      GITEA__database__PASSWD: "{{ gitea_skudak_db_pass }}"
      GITEA__server__DOMAIN: "{{ gitea_skudak_server_name }}"
      GITEA__server__ROOT_URL: "https://{{ gitea_skudak_server_name }}/"
      GITEA__server__SSH_DOMAIN: "{{ gitea_skudak_server_name }}"
      # Use Gitea's built-in SSH server (non-privileged port inside container)
      GITEA__server__START_SSH_SERVER: "true"
      GITEA__server__DISABLE_SSH: "false"
      GITEA__server__SSH_PORT: "{{ gitea_skudak_ssh_port }}"
      GITEA__server__SSH_LISTEN_PORT: "2222"
      GITEA__security__SECRET_KEY: "{{ gitea_skudak_secret_key }}"
      GITEA__security__INTERNAL_TOKEN: "{{ gitea_skudak_internal_token }}"
      GITEA__security__INSTALL_LOCK: "true"
      # Allow registration only for @skudak.com emails
      GITEA__service__DISABLE_REGISTRATION: "false"
      GITEA__service__EMAIL_DOMAIN_ALLOWLIST: "skudak.com"
      GITEA__service__REGISTER_EMAIL_CONFIRM: "true"
      GITEA__service__REQUIRE_SIGNIN_VIEW: "false"
      # Mailer configuration for email confirmation
      GITEA__mailer__ENABLED: "true"
      GITEA__mailer__PROTOCOL: "smtps"
      GITEA__mailer__SMTP_ADDR: "{{ skudaknoreply_mail_host }}"
      GITEA__mailer__SMTP_PORT: "465"
      GITEA__mailer__USER: "{{ skudaknoreply_mail_user }}"
      GITEA__mailer__PASSWD: "{{ skudaknoreply_mail_pass }}"
      GITEA__mailer__FROM: "Skudak Git <{{ skudaknoreply_mail_user }}>"
      # Logging configuration - output to journald for fluent-bit capture
      GITEA__log__MODE: console
      GITEA__log__LEVEL: Info
      GITEA__log__ENABLE_ACCESS_LOG: "true"
    volumes:
      - "{{ git_home }}/volumes/gitea-skudak/data:/data"
      - /etc/localtime:/etc/localtime:ro
  tags: gitea, gitea-skudak

# Generate systemd service for the pod
- name: create systemd job for gitea-skudak-pod
  become: true
  become_user: "{{ git_user }}"
  ansible.builtin.shell: |
    podman generate systemd --name gitea-skudak-pod --files --new
    mv pod-gitea-skudak-pod.service {{ git_home }}/.config/systemd/user/
    mv container-gitea-skudak-postgres.service {{ git_home }}/.config/systemd/user/
    mv container-gitea-skudak.service {{ git_home }}/.config/systemd/user/
  args:
    chdir: "{{ git_home }}"
  changed_when: false
  tags: gitea, gitea-skudak

- name: enable gitea-skudak-pod service
  become: true
  become_user: "{{ git_user }}"
  ansible.builtin.systemd:
    name: pod-gitea-skudak-pod.service
    daemon_reload: true
    enabled: true
    state: started
    scope: user
  tags: gitea, gitea-skudak

View File

@@ -59,6 +59,10 @@
GITEA__security__INSTALL_LOCK: "true"
GITEA__service__DISABLE_REGISTRATION: "true"
GITEA__service__REQUIRE_SIGNIN_VIEW: "false"
# Logging configuration - output to journald for fluent-bit capture
GITEA__log__MODE: console
GITEA__log__LEVEL: Info
GITEA__log__ENABLE_ACCESS_LOG: "true"
volumes:
- "{{ git_home }}/volumes/gitea/data:/data"
- /etc/localtime:/etc/localtime:ro

View File

@@ -6,6 +6,7 @@
- import_tasks: selinux.yml
- import_tasks: selinux-podman.yml
- import_tasks: gitea.yml
- import_tasks: gitea-skudak.yml
# git-daemon no longer needed - commented out
# - import_tasks: systemd.yml

View File

@@ -45,9 +45,14 @@
- "{{ git_home }}/volumes"
- "{{ git_home }}/volumes/gitea"
- "{{ git_home }}/volumes/gitea/data"
# NOTE: psql directory is created by PostgreSQL container with container user ownership
- "{{ git_home }}/volumes/gitea/psql"
- "{{ git_home }}/volumes/gitea-skudak"
- "{{ git_home }}/volumes/gitea-skudak/data"
- "{{ git_home }}/volumes/gitea-skudak/psql"
# NOTE: psql directories may already exist with postgres ownership - ignore errors
failed_when: false
notify: restorecon git
-tags: git, gitea
+tags: git, gitea, gitea-skudak
# SELinux context for container volumes
- name: selinux context for git container volumes
@@ -57,7 +62,17 @@
setype: container_file_t
state: present
notify: restorecon git
-tags: git, gitea, selinux
+tags: git, gitea, gitea-skudak, selinux
# SELinux context for container storage (images, overlays, etc.)
- name: selinux context for git container storage
become: true
community.general.sefcontext:
target: "{{ git_home }}/.local/share/containers(/.*)?"
setype: container_file_t
state: present
notify: restorecon git
tags: git, gitea, gitea-skudak, selinux
# Enable podman socket for SSH key lookup via AuthorizedKeysCommand
- name: enable podman socket for git user

View File

@@ -3,9 +3,23 @@ gitea_runner_user: gitea-runner
gitea_runner_home: /home/gitea-runner
gitea_runner_version: "0.2.13"
gitea_runner_arch: linux-amd64
gitea_instance_url: https://git.debyl.io
gitea_runner_capacity: 4
# Multiple Gitea instances to run actions runners for
gitea_runners:
  - name: debyl
    instance_url: https://git.debyl.io
  - name: skudak
    instance_url: https://git.skudak.com
# Old single-instance format (replaced by gitea_runners list above):
# gitea_instance_url: https://git.debyl.io
# Paths
act_runner_bin: /usr/local/bin/act_runner
act_runner_config_dir: /etc/act_runner
act_runner_work_dir: /var/lib/act_runner
# ESP-IDF configuration
esp_idf_version: v5.4.1
esp_idf_path: /opt/esp-idf

View File

@@ -1,10 +1,11 @@
---
-- name: restart act_runner
+- name: restart act_runner services
become: true
ansible.builtin.systemd:
-name: act_runner
+name: "act_runner-{{ item.name }}"
state: restarted
daemon_reload: true
loop: "{{ gitea_runners }}"
- name: restart podman socket
become: true
@@ -12,3 +13,9 @@
name: podman.socket
state: restarted
daemon_reload: true
- name: restore esp-idf selinux context
become: true
ansible.builtin.command:
cmd: restorecon -R {{ esp_idf_path }}
changed_when: true

View File

@@ -0,0 +1,92 @@
---
- name: install ESP-IDF build dependencies
  become: true
  ansible.builtin.dnf:
    name:
      - git
      - wget
      - flex
      - bison
      - gperf
      - python3
      - python3-pip
      - cmake
      - ninja-build
      - ccache
      - libffi-devel
      - libusb1-devel
    state: present
  tags: gitea-actions

- name: check if ESP-IDF is installed
  ansible.builtin.stat:
    path: "{{ esp_idf_path }}"
  register: esp_idf_dir
  tags: gitea-actions

- name: clone ESP-IDF repository
  become: true
  ansible.builtin.git:
    repo: https://github.com/espressif/esp-idf.git
    dest: "{{ esp_idf_path }}"
    version: "{{ esp_idf_version }}"
    recursive: true
    force: false
  when: not esp_idf_dir.stat.exists
  tags: gitea-actions

- name: add ESP-IDF to git safe.directory
  become: true
  ansible.builtin.command:
    cmd: git config --global --add safe.directory {{ esp_idf_path }}
  changed_when: false
  tags: gitea-actions

- name: ensure ESP-IDF submodules are initialized
  become: true
  ansible.builtin.command:
    cmd: git submodule update --init --recursive
    chdir: "{{ esp_idf_path }}"
  changed_when: false
  tags: gitea-actions

- name: set ESP-IDF directory ownership
  become: true
  ansible.builtin.file:
    path: "{{ esp_idf_path }}"
    owner: "{{ gitea_runner_user }}"
    group: "{{ gitea_runner_user }}"
    recurse: true
  tags: gitea-actions

- name: set SELinux context for ESP-IDF directory
  become: true
  community.general.sefcontext:
    target: "{{ esp_idf_path }}(/.*)?"
    setype: usr_t
    state: present
  when: ansible_selinux.status == "enabled"
  notify: restore esp-idf selinux context
  tags: gitea-actions

- name: create ESP-IDF tools directory for runner user
  become: true
  ansible.builtin.file:
    path: "{{ gitea_runner_home }}/.espressif"
    state: directory
    owner: "{{ gitea_runner_user }}"
    group: "{{ gitea_runner_user }}"
    mode: "0755"
  tags: gitea-actions

- name: install ESP-IDF tools for runner user
  become: true
  become_user: "{{ gitea_runner_user }}"
  ansible.builtin.shell: |
    export IDF_TOOLS_PATH="{{ gitea_runner_home }}/.espressif"
    {{ esp_idf_path }}/install.sh esp32
  args:
    creates: "{{ gitea_runner_home }}/.espressif/python_env"
  environment:
    HOME: "{{ gitea_runner_home }}"
  tags: gitea-actions
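A CI job on the runner would then point at the same tools directory before building. The `export.sh`/`idf.py` lines are the standard ESP-IDF activation flow but are shown commented out since they require the checkout from the tasks above:

```bash
# Matches the install task above: tools live under the runner user's home.
IDF_TOOLS_PATH="$HOME/.espressif"
echo "$IDF_TOOLS_PATH"
# In a CI step you would then do (requires the ESP-IDF checkout at /opt/esp-idf):
#   export IDF_TOOLS_PATH; . /opt/esp-idf/export.sh   # puts idf.py on PATH
#   idf.py build
```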

View File

@@ -3,6 +3,8 @@
tags: gitea-actions
- import_tasks: deps.yml
tags: gitea-actions
- import_tasks: esp-idf.yml
tags: gitea-actions
- import_tasks: runner.yml
tags: gitea-actions
- import_tasks: systemd.yml

View File

@@ -15,31 +15,36 @@
mode: "0755"
tags: gitea-actions
-- name: create act_runner working directory
+- name: create per-runner working directory
become: true
ansible.builtin.file:
-path: "{{ act_runner_work_dir }}"
+path: "{{ act_runner_work_dir }}/{{ item.name }}"
state: directory
owner: "{{ gitea_runner_user }}"
group: "{{ gitea_runner_user }}"
mode: "0755"
loop: "{{ gitea_runners }}"
tags: gitea-actions
-- name: create act_runner cache directory
+- name: create per-runner cache directory
become: true
ansible.builtin.file:
-path: "{{ act_runner_work_dir }}/cache"
+path: "{{ act_runner_work_dir }}/{{ item.name }}/cache"
state: directory
owner: "{{ gitea_runner_user }}"
group: "{{ gitea_runner_user }}"
mode: "0755"
loop: "{{ gitea_runners }}"
tags: gitea-actions
-- name: deploy act_runner configuration
+- name: deploy per-runner configuration
become: true
ansible.builtin.template:
src: config.yaml.j2
-dest: "{{ act_runner_config_dir }}/config.yaml"
+dest: "{{ act_runner_config_dir }}/config-{{ item.name }}.yaml"
mode: "0644"
-notify: restart act_runner
+vars:
+runner_name: "{{ item.name }}"
+loop: "{{ gitea_runners }}"
+notify: restart act_runner services
tags: gitea-actions

View File

@@ -1,17 +1,37 @@
---
-- name: deploy act_runner systemd service
-become: true
-ansible.builtin.template:
-src: act_runner.service.j2
-dest: /etc/systemd/system/act_runner.service
-mode: "0644"
-notify: restart act_runner
-tags: gitea-actions
-- name: enable act_runner service
- name: stop and disable legacy act_runner service
become: true
ansible.builtin.systemd:
name: act_runner
state: stopped
enabled: false
failed_when: false
tags: gitea-actions
- name: remove legacy act_runner service file
become: true
ansible.builtin.file:
path: /etc/systemd/system/act_runner.service
state: absent
tags: gitea-actions
- name: deploy per-runner systemd service
become: true
ansible.builtin.template:
src: act_runner.service.j2
dest: "/etc/systemd/system/act_runner-{{ item.name }}.service"
mode: "0644"
vars:
runner_name: "{{ item.name }}"
loop: "{{ gitea_runners }}"
notify: restart act_runner services
tags: gitea-actions
- name: enable per-runner services
become: true
ansible.builtin.systemd:
name: "act_runner-{{ item.name }}"
daemon_reload: true
enabled: true
loop: "{{ gitea_runners }}"
tags: gitea-actions
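Each instance still needs a one-time registration before its service can pick up jobs. A sketch of how that could look with `act_runner register` — the `registration_token` field is an assumption (tokens come from each Gitea instance's admin UI or API), the rest mirrors the per-instance paths above:

```yaml
# Hypothetical registration task; idempotent via the .runner state file.
- name: register act_runner with each Gitea instance
  become: true
  become_user: "{{ gitea_runner_user }}"
  ansible.builtin.command:
    cmd: >-
      {{ act_runner_bin }} register --no-interactive
      --instance {{ item.instance_url }}
      --token {{ item.registration_token }}
      --name act_runner-{{ item.name }}
      --config {{ act_runner_config_dir }}/config-{{ item.name }}.yaml
  args:
    creates: "{{ act_runner_work_dir }}/{{ item.name }}/.runner"
  loop: "{{ gitea_runners }}"
  loop_control:
    label: "{{ item.name }}"
  tags: gitea-actions
```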

View File

@@ -32,3 +32,42 @@
state: directory
mode: "0755"
tags: gitea-actions
- name: create .ssh directory
become: true
ansible.builtin.file:
path: "{{ gitea_runner_home }}/.ssh"
state: directory
owner: "{{ gitea_runner_user }}"
group: "{{ gitea_runner_user }}"
mode: "0700"
tags: gitea-actions
- name: generate SSH key for gitea-runner
become: true
become_user: "{{ gitea_runner_user }}"
ansible.builtin.command:
cmd: ssh-keygen -t ed25519 -f {{ gitea_runner_home }}/.ssh/id_ed25519 -N "" -C "gitea-runner@galactica"
creates: "{{ gitea_runner_home }}/.ssh/id_ed25519"
tags: gitea-actions
- name: add Gitea SSH host keys to known_hosts
become: true
become_user: "{{ gitea_runner_user }}"
# Scan all hosts in one pass: with a loop, creates: would be satisfied
# after the first host and every later host would be skipped.
ansible.builtin.shell:
cmd: ssh-keyscan -p 2222 git.skudak.com git.debyl.io > {{ gitea_runner_home }}/.ssh/known_hosts 2>/dev/null
args:
creates: "{{ gitea_runner_home }}/.ssh/known_hosts"
tags: gitea-actions
- name: set known_hosts permissions
become: true
ansible.builtin.file:
path: "{{ gitea_runner_home }}/.ssh/known_hosts"
owner: "{{ gitea_runner_user }}"
group: "{{ gitea_runner_user }}"
mode: "0644"
tags: gitea-actions
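The generated public key still has to be added as a deploy key on each Gitea instance. A small follow-up sketch to surface it from the play output (task names are mine, the path matches the keygen task above):

```yaml
# Hypothetical convenience tasks; read and print the public key so it
# can be pasted into each instance's deploy-key settings.
- name: read generated runner public key
  become: true
  ansible.builtin.slurp:
    src: "{{ gitea_runner_home }}/.ssh/id_ed25519.pub"
  register: runner_pubkey
  tags: gitea-actions
- name: show public key to add as a deploy key
  ansible.builtin.debug:
    msg: "{{ runner_pubkey.content | b64decode | trim }}"
  tags: gitea-actions
```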

View File

@@ -1,11 +1,11 @@
[Unit]
Description=Gitea Actions runner
Description=Gitea Actions runner ({{ runner_name }})
Documentation=https://gitea.com/gitea/act_runner
After=network.target podman.socket
[Service]
ExecStart={{ act_runner_bin }} daemon --config {{ act_runner_config_dir }}/config.yaml
WorkingDirectory={{ act_runner_work_dir }}
ExecStart={{ act_runner_bin }} daemon --config {{ act_runner_config_dir }}/config-{{ runner_name }}.yaml
WorkingDirectory={{ act_runner_work_dir }}/{{ runner_name }}
TimeoutSec=0
RestartSec=10
Restart=always

View File

@@ -2,8 +2,8 @@ log:
level: info
runner:
file: {{ act_runner_work_dir }}/.runner
capacity: 1
file: {{ act_runner_work_dir }}/{{ runner_name }}/.runner
capacity: {{ gitea_runner_capacity | default(4) }}
timeout: 3h
insecure: false
fetch_timeout: 5s
@@ -15,7 +15,7 @@ runner:
cache:
enabled: true
dir: {{ act_runner_work_dir }}/cache
dir: {{ act_runner_work_dir }}/{{ runner_name }}/cache
container:
network: host
@@ -27,4 +27,4 @@ container:
force_pull: false
host:
workdir_parent: {{ act_runner_work_dir }}/workdir
workdir_parent: {{ act_runner_work_dir }}/{{ runner_name }}/workdir
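For illustration, the template above would render roughly as follows for a runner named `skudak`, assuming `act_runner_work_dir` is `/var/lib/act_runner` and `gitea_runner_capacity` is left at its default:

```yaml
# Hypothetical rendered config-skudak.yaml (paths are assumptions).
log:
  level: info
runner:
  file: /var/lib/act_runner/skudak/.runner
  capacity: 4
  timeout: 3h
cache:
  enabled: true
  dir: /var/lib/act_runner/skudak/cache
container:
  network: host
host:
  workdir_parent: /var/lib/act_runner/skudak/workdir
```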

View File

@@ -0,0 +1,211 @@
---
# Graylog API Configuration
graylog_api_url: "https://logs.debyl.io/api"
# graylog_api_token: defined in vault
# Default index set for new streams (Default Stream index set)
graylog_default_index_set: "6955a9d3cc3f442e78805871"
# Stream definitions
graylog_streams:
- title: "debyltech-api"
description: "Lambda API events from debyltech-api service"
rules:
- field: "service"
value: "debyltech-api"
type: 1 # EXACT match
inverted: false
- title: "caddy-access"
description: "Web traffic access logs from Caddy"
rules:
- field: "source"
value: "caddy"
type: 1
inverted: false
- field: "log_type"
value: "access"
type: 1
inverted: false
- title: "caddy-fulfillr"
description: "Fulfillr-specific web traffic"
rules:
- field: "source"
value: "caddy"
type: 1
inverted: false
- field: "tag"
value: "caddy.fulfillr"
type: 1
inverted: false
- title: "ssh-security"
description: "SSH access and security logs"
rules:
- field: "source"
value: "sshd"
type: 1
inverted: false
- title: "container-logs"
description: "Container stdout/stderr from Podman"
rules:
- field: "source"
value: "podman"
type: 1
inverted: false
- title: "zomboid-connections"
description: "Zomboid game server connection logs"
rules:
- field: "log_type"
value: "zomboid_connection"
type: 1
inverted: false
- title: "zomboid-ratelimit"
description: "Zomboid rate-limited connection attempts"
rules:
- field: "log_type"
value: "zomboid_ratelimit"
type: 1
inverted: false
- title: "fail2ban-actions"
description: "Fail2ban ban and unban events"
rules:
- field: "source"
value: "fail2ban"
type: 1
inverted: false
# Pipeline definitions
graylog_pipelines:
- title: "GeoIP Enrichment"
description: "Add geographic information to access logs"
stages:
- stage: 0
match: "EITHER"
rules:
- "geoip_caddy_access"
- "geoip_zomboid"
- title: "Debyltech Event Classification"
description: "Categorize debyltech-api events"
stages:
- stage: 0
match: "EITHER"
rules:
- "classify_order_events"
- "classify_review_events"
- "classify_backinstock_events"
- "classify_shipping_events"
- "classify_product_events"
- stage: 1
match: "EITHER"
rules:
- "classify_default_events"
# Pipeline rule definitions
graylog_pipeline_rules:
- title: "geoip_caddy_access"
description: "GeoIP lookup for Caddy access logs"
source: |
rule "GeoIP for Caddy Access"
when
has_field("request_remote_ip")
then
let ip = to_string($message.request_remote_ip);
let geo = lookup("geoip-lookup", ip);
set_field("geo_country", geo["country"].iso_code);
set_field("geo_city", geo["city"].names.en);
set_field("geo_coordinates", geo["coordinates"]);
end
- title: "geoip_zomboid"
description: "GeoIP lookup for Zomboid connection logs"
source: |
rule "GeoIP for Zomboid"
when
has_field("src_ip")
then
let ip = to_string($message.src_ip);
let geo = lookup("geoip-lookup", ip);
set_field("geo_country", geo["country"].iso_code);
set_field("geo_city", geo["city"].names.en);
set_field("geo_coordinates", geo["coordinates"]);
end
- title: "classify_order_events"
description: "Classify order events"
source: |
rule "Classify order events"
when
has_field("event") AND contains(to_string($message.event), "order")
then
set_field("event_category", "order");
end
- title: "classify_review_events"
description: "Classify review events"
source: |
rule "Classify review events"
when
has_field("event") AND contains(to_string($message.event), "review")
then
set_field("event_category", "review");
end
- title: "classify_backinstock_events"
description: "Classify back-in-stock events"
source: |
rule "Classify back-in-stock events"
when
has_field("event") AND contains(to_string($message.event), "backinstock")
then
set_field("event_category", "backinstock");
end
- title: "classify_shipping_events"
description: "Classify shipping events"
source: |
rule "Classify shipping events"
when
has_field("event") AND contains(to_string($message.event), "shipping")
then
set_field("event_category", "shipping");
end
- title: "classify_product_events"
description: "Classify product events"
source: |
rule "Classify product events"
when
has_field("event") AND contains(to_string($message.event), "product")
then
set_field("event_category", "product");
end
- title: "classify_default_events"
description: "Default category for unclassified events"
source: |
rule "Classify default events"
when
has_field("event") AND NOT has_field("event_category")
then
set_field("event_category", "other");
end
# Pipeline to stream connections
graylog_pipeline_connections:
- pipeline: "GeoIP Enrichment"
streams:
- "caddy-access"
- "caddy-fulfillr"
- "zomboid-connections"
- "zomboid-ratelimit"
- pipeline: "Debyltech Event Classification"
streams:
- "debyltech-api"

View File

@@ -0,0 +1,187 @@
---
# Graylog Lookup Table Management via REST API
# Creates Data Adapters, Caches, and Lookup Tables for GeoIP
# =============================================================================
# Data Adapters
# =============================================================================
- name: get existing data adapters
ansible.builtin.uri:
url: "{{ graylog_api_url }}/system/lookup/adapters"
method: GET
user: "{{ graylog_api_token }}"
password: token
force_basic_auth: true
headers:
X-Requested-By: ansible
Accept: application/json
status_code: 200
register: existing_adapters
tags: graylog-config, lookup-tables
- name: build list of existing adapter names
ansible.builtin.set_fact:
existing_adapter_names: "{{ existing_adapters.json.data_adapters | default([]) | map(attribute='name') | list }}"
tags: graylog-config, lookup-tables
- name: create GeoIP data adapter
ansible.builtin.uri:
url: "{{ graylog_api_url }}/system/lookup/adapters"
method: POST
user: "{{ graylog_api_token }}"
password: token
force_basic_auth: true
headers:
X-Requested-By: ansible
Content-Type: application/json
body_format: json
body:
name: "geoip-adapter"
title: "GeoIP MaxMind Adapter"
description: "MaxMind GeoLite2-City database adapter"
config:
type: "maxmind_geoip"
path: "/usr/share/graylog/geoip/GeoLite2-City.mmdb"
database_type: "MAXMIND_CITY"
check_interval: 86400
check_interval_unit: "SECONDS"
status_code: [200, 201]
when: "'geoip-adapter' not in existing_adapter_names"
register: created_adapter
tags: graylog-config, lookup-tables
# =============================================================================
# Caches
# =============================================================================
- name: get existing caches
ansible.builtin.uri:
url: "{{ graylog_api_url }}/system/lookup/caches"
method: GET
user: "{{ graylog_api_token }}"
password: token
force_basic_auth: true
headers:
X-Requested-By: ansible
Accept: application/json
status_code: 200
register: existing_caches
tags: graylog-config, lookup-tables
- name: build list of existing cache names
ansible.builtin.set_fact:
existing_cache_names: "{{ existing_caches.json.caches | default([]) | map(attribute='name') | list }}"
tags: graylog-config, lookup-tables
- name: create GeoIP cache
ansible.builtin.uri:
url: "{{ graylog_api_url }}/system/lookup/caches"
method: POST
user: "{{ graylog_api_token }}"
password: token
force_basic_auth: true
headers:
X-Requested-By: ansible
Content-Type: application/json
body_format: json
body:
name: "geoip-cache"
title: "GeoIP Cache"
description: "Cache for GeoIP lookups"
config:
type: "guava_cache"
max_size: 10000
expire_after_access: 3600
expire_after_access_unit: "SECONDS"
expire_after_write: 0
expire_after_write_unit: "SECONDS"
status_code: [200, 201]
when: "'geoip-cache' not in existing_cache_names"
register: created_cache
tags: graylog-config, lookup-tables
# =============================================================================
# Lookup Tables
# =============================================================================
- name: refresh adapters list
ansible.builtin.uri:
url: "{{ graylog_api_url }}/system/lookup/adapters"
method: GET
user: "{{ graylog_api_token }}"
password: token
force_basic_auth: true
headers:
X-Requested-By: ansible
Accept: application/json
status_code: 200
register: all_adapters
tags: graylog-config, lookup-tables
- name: refresh caches list
ansible.builtin.uri:
url: "{{ graylog_api_url }}/system/lookup/caches"
method: GET
user: "{{ graylog_api_token }}"
password: token
force_basic_auth: true
headers:
X-Requested-By: ansible
Accept: application/json
status_code: 200
register: all_caches
tags: graylog-config, lookup-tables
- name: build adapter and cache ID maps
ansible.builtin.set_fact:
adapter_id_map: "{{ all_adapters.json.data_adapters | default([]) | items2dict(key_name='name', value_name='id') }}"
cache_id_map: "{{ all_caches.json.caches | default([]) | items2dict(key_name='name', value_name='id') }}"
tags: graylog-config, lookup-tables
- name: get existing lookup tables
ansible.builtin.uri:
url: "{{ graylog_api_url }}/system/lookup/tables"
method: GET
user: "{{ graylog_api_token }}"
password: token
force_basic_auth: true
headers:
X-Requested-By: ansible
Accept: application/json
status_code: 200
register: existing_tables
tags: graylog-config, lookup-tables
- name: build list of existing table names
ansible.builtin.set_fact:
existing_table_names: "{{ existing_tables.json.lookup_tables | default([]) | map(attribute='name') | list }}"
tags: graylog-config, lookup-tables
- name: create GeoIP lookup table
ansible.builtin.uri:
url: "{{ graylog_api_url }}/system/lookup/tables"
method: POST
user: "{{ graylog_api_token }}"
password: token
force_basic_auth: true
headers:
X-Requested-By: ansible
Content-Type: application/json
body_format: json
body:
name: "geoip-lookup"
title: "GeoIP Lookup Table"
description: "Lookup table for GeoIP resolution"
cache_id: "{{ cache_id_map['geoip-cache'] }}"
data_adapter_id: "{{ adapter_id_map['geoip-adapter'] }}"
default_single_value: ""
default_single_value_type: "NULL"
default_multi_value: ""
default_multi_value_type: "NULL"
status_code: [200, 201]
when:
- "'geoip-lookup' not in existing_table_names"
- "'geoip-adapter' in adapter_id_map"
- "'geoip-cache' in cache_id_map"
tags: graylog-config, lookup-tables
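Once the table exists, it can be smoke-tested end to end; a sketch assuming Graylog's lookup query endpoint (`GET /system/lookup/tables/{name}/query?key=...`) is available on this version:

```yaml
# Optional smoke test: resolve a well-known public IP through the table.
- name: verify GeoIP lookup resolves a public IP
  ansible.builtin.uri:
    url: "{{ graylog_api_url }}/system/lookup/tables/geoip-lookup/query?key=8.8.8.8"
    method: GET
    user: "{{ graylog_api_token }}"
    password: token
    force_basic_auth: true
    headers:
      X-Requested-By: ansible
      Accept: application/json
    status_code: 200
  register: geoip_test
  tags: graylog-config, lookup-tables
```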

View File

@@ -0,0 +1,15 @@
---
# Graylog Configuration via REST API
# Configures lookup tables, streams, pipelines, and pipeline rules
- name: include lookup table configuration
ansible.builtin.include_tasks: lookup_tables.yml
tags: graylog-config, lookup-tables
- name: include stream configuration
ansible.builtin.include_tasks: streams.yml
tags: graylog-config, streams
- name: include pipeline configuration
ansible.builtin.include_tasks: pipelines.yml
tags: graylog-config, pipelines

View File

@@ -0,0 +1,188 @@
---
# Graylog Pipeline Management via REST API
# Idempotent: checks for existing pipelines/rules before creating
# =============================================================================
# Pipeline Rules
# =============================================================================
- name: get existing pipeline rules
ansible.builtin.uri:
url: "{{ graylog_api_url }}/system/pipelines/rule"
method: GET
user: "{{ graylog_api_token }}"
password: token
force_basic_auth: true
headers:
X-Requested-By: ansible
Accept: application/json
status_code: 200
register: existing_rules
tags: graylog-config, pipelines
- name: build list of existing rule titles
ansible.builtin.set_fact:
existing_rule_titles: "{{ existing_rules.json | map(attribute='title') | list }}"
existing_rule_map: "{{ existing_rules.json | items2dict(key_name='title', value_name='id') }}"
tags: graylog-config, pipelines
- name: create pipeline rules
ansible.builtin.uri:
url: "{{ graylog_api_url }}/system/pipelines/rule"
method: POST
user: "{{ graylog_api_token }}"
password: token
force_basic_auth: true
headers:
X-Requested-By: ansible
Content-Type: application/json
body_format: json
body:
title: "{{ item.title }}"
description: "{{ item.description | default('') }}"
source: "{{ item.source }}"
status_code: [200, 201]
loop: "{{ graylog_pipeline_rules }}"
loop_control:
label: "{{ item.title }}"
when: item.title not in existing_rule_titles
register: created_rules
tags: graylog-config, pipelines
- name: refresh rule list after creation
ansible.builtin.uri:
url: "{{ graylog_api_url }}/system/pipelines/rule"
method: GET
user: "{{ graylog_api_token }}"
password: token
force_basic_auth: true
headers:
X-Requested-By: ansible
Accept: application/json
status_code: 200
register: all_rules
tags: graylog-config, pipelines
- name: build rule ID lookup
ansible.builtin.set_fact:
rule_id_map: "{{ all_rules.json | items2dict(key_name='title', value_name='id') }}"
tags: graylog-config, pipelines
# =============================================================================
# Pipelines
# =============================================================================
- name: get existing pipelines
ansible.builtin.uri:
url: "{{ graylog_api_url }}/system/pipelines/pipeline"
method: GET
user: "{{ graylog_api_token }}"
password: token
force_basic_auth: true
headers:
X-Requested-By: ansible
Accept: application/json
status_code: 200
register: existing_pipelines
tags: graylog-config, pipelines
- name: build list of existing pipeline titles
ansible.builtin.set_fact:
existing_pipeline_titles: "{{ existing_pipelines.json | map(attribute='title') | list }}"
existing_pipeline_map: "{{ existing_pipelines.json | items2dict(key_name='title', value_name='id') }}"
tags: graylog-config, pipelines
- name: build pipeline source for each pipeline
ansible.builtin.set_fact:
pipeline_sources: "{{ pipeline_sources | default({}) | combine({item.title: lookup('template', 'pipeline_source.j2')}) }}"
loop: "{{ graylog_pipelines }}"
loop_control:
label: "{{ item.title }}"
vars:
pipeline: "{{ item }}"
tags: graylog-config, pipelines
- name: create pipelines
ansible.builtin.uri:
url: "{{ graylog_api_url }}/system/pipelines/pipeline"
method: POST
user: "{{ graylog_api_token }}"
password: token
force_basic_auth: true
headers:
X-Requested-By: ansible
Content-Type: application/json
body_format: json
body:
title: "{{ item.title }}"
description: "{{ item.description | default('') }}"
source: "{{ pipeline_sources[item.title] }}"
status_code: [200, 201]
loop: "{{ graylog_pipelines }}"
loop_control:
label: "{{ item.title }}"
when: item.title not in existing_pipeline_titles
register: created_pipelines
tags: graylog-config, pipelines
- name: refresh pipeline list after creation
ansible.builtin.uri:
url: "{{ graylog_api_url }}/system/pipelines/pipeline"
method: GET
user: "{{ graylog_api_token }}"
password: token
force_basic_auth: true
headers:
X-Requested-By: ansible
Accept: application/json
status_code: 200
register: all_pipelines
tags: graylog-config, pipelines
- name: build pipeline ID lookup
ansible.builtin.set_fact:
pipeline_id_map: "{{ all_pipelines.json | items2dict(key_name='title', value_name='id') }}"
tags: graylog-config, pipelines
# =============================================================================
# Pipeline to Stream Connections
# =============================================================================
- name: get current pipeline connections
ansible.builtin.uri:
url: "{{ graylog_api_url }}/system/pipelines/connections"
method: GET
user: "{{ graylog_api_token }}"
password: token
force_basic_auth: true
headers:
X-Requested-By: ansible
Accept: application/json
status_code: 200
register: current_connections
tags: graylog-config, pipelines
- name: connect pipelines to streams
ansible.builtin.uri:
url: "{{ graylog_api_url }}/system/pipelines/connections/to_stream"
method: POST
user: "{{ graylog_api_token }}"
password: token
force_basic_auth: true
headers:
X-Requested-By: ansible
Content-Type: application/json
body_format: json
body:
stream_id: "{{ stream_id_map[item.1] }}"
pipeline_ids:
- "{{ pipeline_id_map[item.0.pipeline] }}"
status_code: [200, 201]
loop: "{{ graylog_pipeline_connections | subelements('streams') }}"
loop_control:
label: "{{ item.0.pipeline }} -> {{ item.1 }}"
when:
- item.0.pipeline in pipeline_id_map
- item.1 in stream_id_map
ignore_errors: true
tags: graylog-config, pipelines

View File

@@ -0,0 +1,127 @@
---
# Graylog Stream Management via REST API
# Idempotent: checks for existing streams before creating
- name: get existing streams
ansible.builtin.uri:
url: "{{ graylog_api_url }}/streams"
method: GET
user: "{{ graylog_api_token }}"
password: token
force_basic_auth: true
headers:
X-Requested-By: ansible
Accept: application/json
status_code: 200
register: existing_streams
tags: graylog-config, streams
- name: build list of existing stream titles
ansible.builtin.set_fact:
existing_stream_titles: "{{ existing_streams.json.streams | map(attribute='title') | list }}"
existing_stream_map: "{{ existing_streams.json.streams | items2dict(key_name='title', value_name='id') }}"
tags: graylog-config, streams
- name: create streams
ansible.builtin.uri:
url: "{{ graylog_api_url }}/streams"
method: POST
user: "{{ graylog_api_token }}"
password: token
force_basic_auth: true
headers:
X-Requested-By: ansible
Content-Type: application/json
body_format: json
body:
title: "{{ item.title }}"
description: "{{ item.description | default('') }}"
index_set_id: "{{ item.index_set_id | default(graylog_default_index_set) }}"
remove_matches_from_default_stream: "{{ item.remove_from_default | default(true) }}"
status_code: [200, 201]
loop: "{{ graylog_streams }}"
loop_control:
label: "{{ item.title }}"
when: item.title not in existing_stream_titles
register: created_streams
tags: graylog-config, streams
- name: refresh stream list after creation
ansible.builtin.uri:
url: "{{ graylog_api_url }}/streams"
method: GET
user: "{{ graylog_api_token }}"
password: token
force_basic_auth: true
headers:
X-Requested-By: ansible
Accept: application/json
status_code: 200
register: all_streams
tags: graylog-config, streams
- name: build stream ID lookup
ansible.builtin.set_fact:
stream_id_map: "{{ all_streams.json.streams | items2dict(key_name='title', value_name='id') }}"
tags: graylog-config, streams
- name: get existing rules for each stream
ansible.builtin.uri:
url: "{{ graylog_api_url }}/streams/{{ stream_id_map[item.title] }}/rules"
method: GET
user: "{{ graylog_api_token }}"
password: token
force_basic_auth: true
headers:
X-Requested-By: ansible
Accept: application/json
status_code: 200
loop: "{{ graylog_streams }}"
loop_control:
label: "{{ item.title }}"
when: item.title in stream_id_map
register: stream_rules
tags: graylog-config, streams
- name: create stream rules
ansible.builtin.uri:
url: "{{ graylog_api_url }}/streams/{{ stream_id_map[item.0.title] }}/rules"
method: POST
user: "{{ graylog_api_token }}"
password: token
force_basic_auth: true
headers:
X-Requested-By: ansible
Content-Type: application/json
body_format: json
body:
field: "{{ item.1.field }}"
value: "{{ item.1.value }}"
type: "{{ item.1.type | default(1) }}"
inverted: "{{ item.1.inverted | default(false) }}"
description: "{{ item.1.description | default('') }}"
status_code: [200, 201]
loop: "{{ graylog_streams | subelements('rules', skip_missing=True) }}"
loop_control:
label: "{{ item.0.title }} - {{ item.1.field }}:{{ item.1.value }}"
when:
- item.0.title in stream_id_map
- stream_rules.results | selectattr('item.title', 'equalto', item.0.title) | map(attribute='json.stream_rules') | first | default([]) | selectattr('field', 'equalto', item.1.field) | selectattr('value', 'equalto', item.1.value) | list | length == 0
tags: graylog-config, streams
- name: start streams
ansible.builtin.uri:
url: "{{ graylog_api_url }}/streams/{{ stream_id_map[item.title] }}/resume"
method: POST
user: "{{ graylog_api_token }}"
password: token
force_basic_auth: true
headers:
X-Requested-By: ansible
status_code: [200, 204]
loop: "{{ graylog_streams }}"
loop_control:
label: "{{ item.title }}"
when: item.title in stream_id_map
ignore_errors: true
tags: graylog-config, streams

View File

@@ -0,0 +1,8 @@
pipeline "{{ pipeline.title }}"
{% for stage in pipeline.stages %}
stage {{ stage.stage }} match {{ stage.match | default('EITHER') }}
{% for rule in stage.rules %}
rule "{{ rule }}"
{% endfor %}
{% endfor %}
end
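Applied to the "GeoIP Enrichment" entry from `graylog_pipelines`, this template renders the pipeline source as:

```
pipeline "GeoIP Enrichment"
stage 0 match EITHER
rule "geoip_caddy_access"
rule "geoip_zomboid"
end
```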

View File

@@ -0,0 +1,6 @@
---
ollama_models:
- dolphin-phi
- dolphin-mistral
ollama_host: "127.0.0.1"
ollama_port: 11434

View File

@@ -0,0 +1,8 @@
---
- name: restart ollama
become: true
ansible.builtin.systemd:
name: ollama
state: restarted
daemon_reload: true
tags: ollama

View File

@@ -0,0 +1,3 @@
---
dependencies:
- role: common

View File

@@ -0,0 +1,11 @@
---
- name: check if ollama is already installed
ansible.builtin.stat:
path: /usr/local/bin/ollama
register: ollama_binary
- name: install ollama via official install script
become: true
ansible.builtin.shell: |
curl -fsSL https://ollama.com/install.sh | sh
when: not ollama_binary.stat.exists

View File

@@ -0,0 +1,9 @@
---
- import_tasks: install.yml
tags: ollama
- import_tasks: service.yml
tags: ollama
- import_tasks: models.yml
tags: ollama

View File

@@ -0,0 +1,10 @@
---
- name: pull ollama models
become: true
ansible.builtin.command: ollama pull {{ item }}
loop: "{{ ollama_models }}"
register: result
retries: 3
delay: 10
until: result is not failed
changed_when: "'pulling' in result.stderr or 'pulling' in result.stdout"

View File

@@ -0,0 +1,23 @@
---
- name: create ollama systemd override directory
become: true
ansible.builtin.file:
path: /etc/systemd/system/ollama.service.d
state: directory
mode: 0755
- name: template ollama environment override
become: true
ansible.builtin.template:
src: ollama.env.j2
dest: /etc/systemd/system/ollama.service.d/override.conf
mode: 0644
notify: restart ollama
- name: enable and start ollama service
become: true
ansible.builtin.systemd:
name: ollama
enabled: true
state: started
daemon_reload: true

View File

@@ -0,0 +1,4 @@
[Service]
Environment="OLLAMA_HOST={{ ollama_host }}:{{ ollama_port }}"
Environment="OLLAMA_NUM_PARALLEL=1"
Environment="OLLAMA_MAX_LOADED_MODELS=1"

File diff suppressed because one or more lines are too long

View File

@@ -633,3 +633,38 @@
entity_id: 81c486d682afcc94e98e377475cc92fc
domain: light
mode: single
- id: '1768862300896'
alias: Bedroom On
description: ''
triggers:
- type: turned_on
device_id: afb9734fe9b187ab6881a64d24e1c2f5
entity_id: 27efa149b9ebb388e7c21ba89e671b42
domain: switch
trigger: device
conditions: []
actions:
- action: light.turn_on
metadata: {}
data:
brightness_pct: 100
target:
area_id: bedroom
mode: single
- id: '1768862339192'
alias: Bedroom Off
description: ''
triggers:
- type: turned_off
device_id: afb9734fe9b187ab6881a64d24e1c2f5
entity_id: 27efa149b9ebb388e7c21ba89e671b42
domain: switch
trigger: device
conditions: []
actions:
- action: light.turn_off
metadata: {}
data: {}
target:
area_id: bedroom
mode: single

View File

@@ -42,11 +42,3 @@
scope: user
tags:
- zomboid
- name: restart fluent-bit
become: true
ansible.builtin.systemd:
name: fluent-bit
state: restarted
tags:
- fluent-bit

View File

@@ -4,7 +4,7 @@
container_name: awsddns
container_image: "{{ image }}"
- name: create home.bdebyl.net awsddns server container
- name: create home.debyl.io awsddns server container
become: true
become_user: "{{ podman_user }}"
diff: false

View File

@@ -25,6 +25,7 @@
# Legacy volume mounts removed - Caddy manages certificates automatically
# Mount static site directories
- "/usr/local/share/fulfillr-site:/usr/local/share/fulfillr-site:ro"
- "/usr/local/share/test-site:/srv/test-site:ro"
env:
CADDY_ADMIN: "0.0.0.0:2019"
restart_policy: always

View File

@@ -27,6 +27,17 @@
- caddy
- ssl
- name: create test-site directory
become: true
ansible.builtin.file:
path: /usr/local/share/test-site
state: directory
owner: "{{ ansible_user }}"
group: "{{ ansible_user }}"
mode: '0755'
tags:
- caddy
- name: deploy caddyfile
become: true
ansible.builtin.template:

View File

@@ -0,0 +1,59 @@
---
- name: create backup SSH key directory
become: true
ansible.builtin.file:
path: /etc/ssh/backup_keys
state: directory
owner: root
group: root
mode: 0700
- name: deploy {{ backup_name }} backup SSH key
become: true
ansible.builtin.copy:
content: "{{ ssh_key_content }}"
dest: "{{ ssh_key_path }}"
owner: root
group: root
mode: 0600
setype: ssh_home_t
- name: template {{ backup_name }} backup script
become: true
ansible.builtin.template:
src: nextcloud/cloud-backup.sh.j2
dest: "{{ script_path }}"
owner: root
group: root
mode: 0755
setype: bin_t
- name: template {{ backup_name }} backup systemd service
become: true
ansible.builtin.template:
src: nextcloud/cloud-backup.service.j2
dest: "/etc/systemd/system/{{ backup_name }}-backup.service"
owner: root
group: root
mode: 0644
vars:
instance_name: "{{ backup_name }}"
- name: template {{ backup_name }} backup systemd timer
become: true
ansible.builtin.template:
src: nextcloud/cloud-backup.timer.j2
dest: "/etc/systemd/system/{{ backup_name }}-backup.timer"
owner: root
group: root
mode: 0644
vars:
instance_name: "{{ backup_name }}"
- name: enable and start {{ backup_name }} backup timer
become: true
ansible.builtin.systemd:
name: "{{ backup_name }}-backup.timer"
enabled: true
state: started
daemon_reload: true

View File

@@ -75,7 +75,7 @@
- import_tasks: podman/podman-check.yml
vars:
container_name: graylog-mongo
container_image: docker.io/mongo:6
container_image: "{{ mongo_image }}"
tags: graylog
- name: create graylog-mongo container
@@ -83,7 +83,7 @@
become_user: "{{ podman_user }}"
containers.podman.podman_container:
name: graylog-mongo
image: docker.io/mongo:6
image: "{{ mongo_image }}"
state: started
restart_policy: on-failure:3
log_driver: journald
@@ -103,7 +103,7 @@
- import_tasks: podman/podman-check.yml
vars:
container_name: graylog-opensearch
container_image: docker.io/opensearchproject/opensearch:2
container_image: "{{ opensearch_image }}"
tags: graylog
- name: create graylog-opensearch container
@@ -111,7 +111,7 @@
become_user: "{{ podman_user }}"
containers.podman.podman_container:
name: graylog-opensearch
image: docker.io/opensearchproject/opensearch:2
image: "{{ opensearch_image }}"
state: started
restart_policy: on-failure:3
log_driver: journald
@@ -135,7 +135,7 @@
- import_tasks: podman/podman-check.yml
vars:
container_name: graylog
container_image: docker.io/graylog/graylog:6.0
container_image: "{{ image }}"
tags: graylog
# Graylog uses host network to reach MongoDB/OpenSearch on 127.0.0.1
@@ -145,7 +145,7 @@
become_user: "{{ podman_user }}"
containers.podman.podman_container:
name: graylog
image: docker.io/graylog/graylog:6.0
image: "{{ image }}"
state: started
restart_policy: on-failure:3
log_driver: journald
@@ -159,6 +159,7 @@
GRAYLOG_MONGODB_URI: "mongodb://127.0.0.1:27017/graylog"
volumes:
- "{{ graylog_path }}/graylog/data:/usr/share/graylog/data:z"
- "{{ geoip_path }}/{{ geoip_database_edition }}.mmdb:/etc/graylog/server/GeoLite2-City.mmdb:ro"
requires:
- graylog-mongo
- graylog-opensearch

View File

@@ -0,0 +1,54 @@
---
- name: create n8n host directory volumes
become: true
ansible.builtin.file:
path: "{{ item }}"
state: directory
owner: "{{ podman_user }}"
group: "{{ podman_user }}"
mode: 0755
notify: restorecon podman
loop:
- "{{ n8n_path }}"
- name: set n8n volume ownership for node user
become: true
become_user: "{{ podman_user }}"
ansible.builtin.command:
cmd: podman unshare chown -R 1000:1000 {{ n8n_path }}
changed_when: false
- name: flush handlers
ansible.builtin.meta: flush_handlers
- import_tasks: podman/podman-check.yml
vars:
container_name: n8n
container_image: "{{ image }}"
- name: create n8n container
become: true
become_user: "{{ podman_user }}"
containers.podman.podman_container:
name: n8n
image: "{{ image }}"
image_strict: true
restart_policy: on-failure:3
log_driver: journald
network: shared
volumes:
- "{{ n8n_path }}:/home/node/.n8n"
ports:
- 5678:5678/tcp
env:
N8N_HOST: "{{ n8n_server_name }}"
N8N_PORT: "5678"
N8N_PROTOCOL: https
WEBHOOK_URL: "https://{{ n8n_server_name }}/"
N8N_SECURE_COOKIE: "true"
GENERIC_TIMEZONE: America/New_York
- name: create systemd startup job for n8n
include_tasks: podman/systemd-generate.yml
vars:
container_name: n8n
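A post-deploy health check makes the play fail fast if the container comes up broken behind Caddy; a sketch assuming n8n's `/healthz` endpoint on the published port:

```yaml
# Hypothetical readiness check against the locally published port.
- name: wait for n8n to report healthy
  ansible.builtin.uri:
    url: "http://127.0.0.1:5678/healthz"
    status_code: 200
  register: n8n_health
  retries: 12
  delay: 5
  until: n8n_health.status == 200
```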

View File

@@ -21,21 +21,6 @@
ansible.builtin.command: |
podman unshare chown -R 33:33 {{ cloud_path }}/apps {{ cloud_path }}/data {{ cloud_path }}/config
- name: get user/group id from unshare
become: true
ansible.builtin.stat:
path: "{{ cloud_path }}/data"
register: cloud_owner
- name: mount cloud cifs
become: true
ansible.posix.mount:
src: "{{ cloud_cifs_src }}"
path: "{{ cloud_path }}/data"
fstype: cifs
opts: "username=cloud,password={{ cloud_cifs_pass }},uid={{ cloud_owner.stat.uid }},gid={{ cloud_owner.stat.uid }},file_mode=0770,dir_mode=0770"
state: mounted
- name: flush handlers
ansible.builtin.meta: flush_handlers
@@ -98,3 +83,13 @@
include_tasks: podman/systemd-generate.yml
vars:
container_name: cloud
- include_tasks: containers/cloud-backup.yml
vars:
backup_name: cloud
data_path: "{{ cloud_path }}/data"
ssh_key_path: /etc/ssh/backup_keys/cloud
ssh_key_content: "{{ cloud_backup_ssh_key }}"
ssh_user: cloud
remote_path: /mnt/glacier/nextcloud
script_path: /usr/local/bin/cloud-backup.sh

View File

@@ -10,13 +10,15 @@
notify: restorecon podman
loop:
- "{{ gregtime_path }}/logs"
- "{{ gregtime_path }}/data"
- name: unshare chown the gregtime volumes
become: true
become_user: "{{ podman_user }}"
changed_when: false
ansible.builtin.shell: |
podman unshare chown -R 1000:1000 {{ gregtime_path }}/logs
podman unshare chown -R 1000:1000 {{ gregtime_path }}/data
- name: flush handlers
ansible.builtin.meta: flush_handlers
@@ -38,14 +40,27 @@
- host
env:
TZ: America/New_York
# Ollama + SearXNG for FISTO AI responses
OLLAMA_HOST: "http://127.0.0.1:11434"
OLLAMA_MODEL: "dolphin-mistral"
OLLAMA_FALLBACK_MODEL: "dolphin-phi"
OLLAMA_NUM_PREDICT: "300"
SEARXNG_URL: "http://127.0.0.1:8080"
# Gemini API for @bot gemini command
GEMINI_API_KEY: "{{ gemini_api_key }}"
# Zomboid RCON configuration for Discord restart command
ZOMBOID_RCON_HOST: "127.0.0.1"
ZOMBOID_RCON_PORT: "{{ zomboid_rcon_port }}"
ZOMBOID_RCON_PASSWORD: "{{ zomboid_admin_password }}"
# Path to zomboid log file for mod check results
ZOMBOID_LOG_FILE: "/zomboid-logs/server-console.txt"
volumes:
- "{{ gregtime_path }}/logs:/app/logs"
- "{{ gregtime_path }}/data:/app/data"
- "{{ zomboid_path }}/data:/zomboid-logs:ro"
- "{{ podman_volumes }}/zomboid-stats.json:/app/data/zomboid-stats.json:ro"
- name: create systemd startup job for gregtime
include_tasks: podman/systemd-generate.yml
vars:
container_name: gregtime

View File

@@ -1,4 +1,6 @@
---
- import_tasks: gitea/podman-gitea-login.yml
- name: create partsy host directory volumes
become: true
ansible.builtin.file:

View File

@@ -0,0 +1,59 @@
---
- name: create searxng host directory volumes
become: true
ansible.builtin.file:
path: "{{ item }}"
state: directory
owner: "{{ podman_subuid.stdout }}"
group: "{{ podman_user }}"
mode: 0755
notify: restorecon podman
loop:
- "{{ searxng_path }}/config"
- "{{ searxng_path }}/data"
- name: template searxng settings
become: true
ansible.builtin.template:
src: searxng/settings.yml.j2
dest: "{{ searxng_path }}/config/settings.yml"
owner: "{{ podman_subuid.stdout }}"
group: "{{ podman_user }}"
mode: 0644
- name: unshare chown the searxng volumes for internal uid 977
become: true
become_user: "{{ podman_user }}"
changed_when: false
ansible.builtin.shell: |
podman unshare chown -R 977:977 {{ searxng_path }}/config
podman unshare chown -R 977:977 {{ searxng_path }}/data
- name: flush handlers
ansible.builtin.meta: flush_handlers
- import_tasks: podman/podman-check.yml
vars:
container_name: searxng
container_image: "{{ image }}"
- name: create searxng container
become: true
become_user: "{{ podman_user }}"
containers.podman.podman_container:
name: searxng
image: "{{ image }}"
restart_policy: on-failure:3
log_driver: journald
network:
- host
env:
SEARXNG_BASE_URL: "http://127.0.0.1:8080/"
volumes:
- "{{ searxng_path }}/config:/etc/searxng"
- "{{ searxng_path }}/data:/srv/searxng/data"
- name: create systemd startup job for searxng
include_tasks: podman/systemd-generate.yml
vars:
container_name: searxng

View File

@@ -1,5 +1,5 @@
---
- name: create uptime-kuma-personal host directory volumes
become: true
ansible.builtin.file:
path: "{{ item }}"
@@ -9,36 +9,30 @@
mode: 0755
notify: restorecon podman
loop:
- "{{ uptime_kuma_personal_path }}/data"
- name: flush handlers
ansible.builtin.meta: flush_handlers
- import_tasks: podman/podman-check.yml
vars:
container_name: uptime-kuma-personal
container_image: "{{ image }}"
- name: create uptime-kuma-personal container
become: true
become_user: "{{ podman_user }}"
containers.podman.podman_container:
name: uptime-kuma-personal
image: "{{ image }}"
restart_policy: on-failure:3
log_driver: journald
volumes:
- "{{ uptime_kuma_personal_path }}/data:/app/data"
env:
TZ: America/New_York
ports:
- "3002:3001/tcp"
- name: create systemd startup job for uptime-kuma-personal
include_tasks: podman/systemd-generate.yml
vars:
container_name: uptime-kuma-personal

View File

@@ -12,6 +12,95 @@
- "{{ zomboid_path }}/server"
- "{{ zomboid_path }}/data"
- "{{ zomboid_path }}/scripts"
- "{{ zomboid_path }}/logs"
- name: create podman bin directory
become: true
ansible.builtin.file:
path: "{{ podman_home }}/bin"
state: directory
owner: "{{ podman_user }}"
group: "{{ podman_user }}"
mode: '0755'
- name: deploy zomboid world reset script
become: true
ansible.builtin.template:
src: zomboid/world-reset.sh.j2
dest: "{{ podman_home }}/bin/zomboid-world-reset.sh"
owner: "{{ podman_user }}"
group: "{{ podman_user }}"
mode: '0755'
- name: deploy zomboid world reset path unit
become: true
ansible.builtin.template:
src: zomboid/zomboid-world-reset.path.j2
dest: "{{ podman_home }}/.config/systemd/user/zomboid-world-reset.path"
owner: "{{ podman_user }}"
group: "{{ podman_user }}"
mode: '0644'
notify: reload zomboid systemd
- name: deploy zomboid world reset service unit
become: true
ansible.builtin.template:
src: zomboid/zomboid-world-reset.service.j2
dest: "{{ podman_home }}/.config/systemd/user/zomboid-world-reset.service"
owner: "{{ podman_user }}"
group: "{{ podman_user }}"
mode: '0644'
notify: reload zomboid systemd
- name: deploy zomboid stats script
become: true
ansible.builtin.template:
src: zomboid/zomboid-stats.sh.j2
dest: "{{ podman_home }}/bin/zomboid-stats.sh"
owner: "{{ podman_user }}"
group: "{{ podman_user }}"
mode: '0755'
- name: create zomboid stats file with correct permissions
become: true
ansible.builtin.file:
path: "{{ podman_volumes }}/zomboid-stats.json"
state: touch
owner: "{{ podman_user }}"
group: "{{ podman_user }}"
mode: '0644'
modification_time: preserve
access_time: preserve
- name: deploy zomboid stats service unit
become: true
ansible.builtin.template:
src: zomboid/zomboid-stats.service.j2
dest: "{{ podman_home }}/.config/systemd/user/zomboid-stats.service"
owner: "{{ podman_user }}"
group: "{{ podman_user }}"
mode: '0644'
notify: reload zomboid systemd
- name: deploy zomboid stats timer unit
become: true
ansible.builtin.template:
src: zomboid/zomboid-stats.timer.j2
dest: "{{ podman_home }}/.config/systemd/user/zomboid-stats.timer"
owner: "{{ podman_user }}"
group: "{{ podman_user }}"
mode: '0644'
notify: reload zomboid systemd
- name: enable zomboid stats timer
become: true
become_user: "{{ podman_user }}"
ansible.builtin.systemd:
name: zomboid-stats.timer
scope: user
enabled: true
state: started
daemon_reload: true
- name: copy zomboid entrypoint script
become: true
@@ -148,7 +237,7 @@
when: zomboid_ini_stat.stat.exists
tags: zomboid-conf
# Mod configuration for 'modded' server profile
- name: configure zomboid mods for modded server
become: true
ansible.builtin.lineinfile:
@@ -163,9 +252,267 @@
- zomboid_ini_stat.stat.exists
tags: zomboid-conf
# Mod configuration for 'b42revamp' server profile (168 mods from Steam collection)
- name: configure zomboid mods for b42revamp server
become: true
ansible.builtin.lineinfile:
path: "{{ zomboid_path }}/data/Server/{{ zomboid_server_names[zomboid_server_mode] }}.ini"
regexp: "^{{ item.key }}="
line: "{{ item.key }}={{ item.value }}"
loop:
- { key: "Mods", value: "{{ zomboid_mods_b42revamp.mod_ids }}" }
- { key: "WorkshopItems", value: "{{ zomboid_mods_b42revamp.workshop_items }}" }
when:
- zomboid_server_mode == 'b42revamp'
- zomboid_ini_stat.stat.exists
tags: zomboid-conf
# Map configuration (for modded servers with custom maps)
- name: configure zomboid map
become: true
ansible.builtin.lineinfile:
path: "{{ zomboid_path }}/data/Server/{{ zomboid_server_names[zomboid_server_mode] }}.ini"
regexp: "^Map="
line: "Map={{ zomboid_maps[zomboid_server_mode] }}"
when:
- zomboid_server_mode != 'vanilla'
- zomboid_ini_stat.stat.exists
tags: zomboid-conf
# Firewall logging for player IP correlation
# Logs new UDP connections to Zomboid port for IP address tracking
- name: add firewall rule to log zomboid connections
become: true
ansible.builtin.command: >
firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 0
-p udp --dport 16261 -m conntrack --ctstate NEW
-j LOG --log-prefix "ZOMBOID_CONN: " --log-level 4
register: firewall_result
changed_when: "'already' not in firewall_result.stderr"
failed_when: false
notify: restart firewalld
tags: firewall
- name: add firewall rule to log zomboid connections (runtime)
become: true
ansible.builtin.command: >
firewall-cmd --direct --add-rule ipv4 filter INPUT 0
-p udp --dport 16261 -m conntrack --ctstate NEW
-j LOG --log-prefix "ZOMBOID_CONN: " --log-level 4
changed_when: false
failed_when: false
tags: firewall
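The two LOG rules above emit kernel messages with the `ZOMBOID_CONN: ` prefix in the standard netfilter `key=value` layout (`SRC=`, `DST=`, `DPT=`, …). Source IPs can then be pulled out of the journal for player correlation — a minimal sketch, with an illustrative (not real) log line:

```python
import re

def parse_zomboid_conns(lines):
    """Extract (src_ip, dst_port) pairs from netfilter LOG lines
    carrying the ZOMBOID_CONN prefix; other lines are ignored."""
    hits = []
    for line in lines:
        if "ZOMBOID_CONN:" not in line:
            continue
        src = re.search(r"\bSRC=(\S+)", line)
        dpt = re.search(r"\bDPT=(\d+)", line)
        if src and dpt:
            hits.append((src.group(1), int(dpt.group(1))))
    return hits

sample = [
    "kernel: ZOMBOID_CONN: IN=eth0 OUT= SRC=203.0.113.7 DST=198.51.100.2 "
    "LEN=81 PROTO=UDP SPT=51234 DPT=16261 LEN=61",
    "kernel: audit: unrelated message",
]
print(parse_zomboid_conns(sample))  # [('203.0.113.7', 16261)]
```

Feeding it the output of something like `journalctl -k -g ZOMBOID_CONN` would yield the per-connection IP list.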
# =============================================================================
# Add logging for port 16262 (mirrors existing 16261 logging)
# =============================================================================
- name: add firewall rule to log zomboid connections on 16262
become: true
ansible.builtin.command: >
firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 0
-p udp --dport 16262 -m conntrack --ctstate NEW
-j LOG --log-prefix "ZOMBOID_CONN: " --log-level 4
register: firewall_result_16262
changed_when: "'already' not in firewall_result_16262.stderr"
failed_when: false
notify: restart firewalld
tags: firewall
- name: add firewall rule to log zomboid connections on 16262 (runtime)
become: true
ansible.builtin.command: >
firewall-cmd --direct --add-rule ipv4 filter INPUT 0
-p udp --dport 16262 -m conntrack --ctstate NEW
-j LOG --log-prefix "ZOMBOID_CONN: " --log-level 4
changed_when: false
failed_when: false
tags: firewall
# =============================================================================
# Zomboid Rate Limiting and Query Flood Protection
# =============================================================================
# These rules mitigate Steam server query floods while allowing legitimate play.
# Query packets are typically 53 bytes; game traffic is larger and sustained.
#
# Rule priority: 0=logging (existing), 1=allow established, 2=rate limit queries
# Allow established/related connections without rate limiting
# This ensures active players aren't affected by query rate limits
- name: allow established zomboid connections on 16261
become: true
ansible.builtin.command: >
firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 1
-p udp --dport 16261 -m conntrack --ctstate ESTABLISHED,RELATED
-j ACCEPT
register: established_result
changed_when: "'already' not in established_result.stderr"
failed_when: false
notify: restart firewalld
tags: firewall
- name: allow established zomboid connections on 16262
become: true
ansible.builtin.command: >
firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 1
-p udp --dport 16262 -m conntrack --ctstate ESTABLISHED,RELATED
-j ACCEPT
register: established_result_16262
changed_when: "'already' not in established_result_16262.stderr"
failed_when: false
notify: restart firewalld
tags: firewall
# =============================================================================
# Smart Zomboid Traffic Filtering (Packet-Size Based)
# =============================================================================
# Distinguishes legitimate players from scanner bots:
# - Players send varied packet sizes (53, 37, 1472 bytes)
# - Scanners only send 53-byte query packets
#
# Rule priority:
# 0 = LOG all (existing above)
# 1 = ACCEPT established (existing above)
# 2 = Mark + ACCEPT non-query packets (verifies player)
# 3 = ACCEPT queries from verified IPs
# 4 = LOG rate-limited queries from unverified IPs
# 5 = DROP rate-limited queries from unverified IPs
# Priority 2: Mark IPs sending non-query packets as verified (1 hour TTL)
# Any packet NOT 53 bytes proves actual connection attempt
- name: mark verified players on 16261 (non-query packets)
become: true
ansible.builtin.command: >
firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 2
-p udp --dport 16261 -m conntrack --ctstate NEW
-m length ! --length 53
-m recent --name zomboid_verified --set
-j ACCEPT
register: verify_result
changed_when: "'already' not in verify_result.stderr"
failed_when: false
notify: restart firewalld
tags: firewall
- name: mark verified players on 16262 (non-query packets)
become: true
ansible.builtin.command: >
firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 2
-p udp --dport 16262 -m conntrack --ctstate NEW
-m length ! --length 53
-m recent --name zomboid_verified --set
-j ACCEPT
register: verify_result_16262
changed_when: "'already' not in verify_result_16262.stderr"
failed_when: false
notify: restart firewalld
tags: firewall
# Priority 3: Allow queries from verified players (within 1 hour)
- name: allow queries from verified players on 16261
become: true
ansible.builtin.command: >
firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 3
-p udp --dport 16261 -m conntrack --ctstate NEW
-m length --length 53
-m recent --name zomboid_verified --rcheck --seconds 3600
-j ACCEPT
register: verified_query_result
changed_when: "'already' not in verified_query_result.stderr"
failed_when: false
notify: restart firewalld
tags: firewall
- name: allow queries from verified players on 16262
become: true
ansible.builtin.command: >
firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 3
-p udp --dport 16262 -m conntrack --ctstate NEW
-m length --length 53
-m recent --name zomboid_verified --rcheck --seconds 3600
-j ACCEPT
register: verified_query_result_16262
changed_when: "'already' not in verified_query_result_16262.stderr"
failed_when: false
notify: restart firewalld
tags: firewall
# Priority 4: LOG rate-limited queries from unverified IPs
# Very aggressive: 2 burst, then 1 per hour
# Note: Uses same hashlimit name as DROP rule to share bucket
- name: log rate-limited queries from unverified IPs on 16261
become: true
ansible.builtin.command: >
firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 4
-p udp --dport 16261 -m conntrack --ctstate NEW
-m length --length 53
-m hashlimit --hashlimit-above 1/hour --hashlimit-burst 2
--hashlimit-mode srcip --hashlimit-name zomboid_query_16261
--hashlimit-htable-expire 3600000
-j LOG --log-prefix "ZOMBOID_RATELIMIT: " --log-level 4
register: unverified_log_result
changed_when: "'already' not in unverified_log_result.stderr"
failed_when: false
notify: restart firewalld
tags: firewall
- name: log rate-limited queries from unverified IPs on 16262
become: true
ansible.builtin.command: >
firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 4
-p udp --dport 16262 -m conntrack --ctstate NEW
-m length --length 53
-m hashlimit --hashlimit-above 1/hour --hashlimit-burst 2
--hashlimit-mode srcip --hashlimit-name zomboid_query_16262
--hashlimit-htable-expire 3600000
-j LOG --log-prefix "ZOMBOID_RATELIMIT: " --log-level 4
register: unverified_log_result_16262
changed_when: "'already' not in unverified_log_result_16262.stderr"
failed_when: false
notify: restart firewalld
tags: firewall
# Priority 5: DROP rate-limited queries from unverified IPs
# Note: Uses same hashlimit name as LOG rule to share bucket
- name: drop rate-limited queries from unverified IPs on 16261
become: true
ansible.builtin.command: >
firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 5
-p udp --dport 16261 -m conntrack --ctstate NEW
-m length --length 53
-m hashlimit --hashlimit-above 1/hour --hashlimit-burst 2
--hashlimit-mode srcip --hashlimit-name zomboid_query_16261
--hashlimit-htable-expire 3600000
-j DROP
register: unverified_drop_result
changed_when: "'already' not in unverified_drop_result.stderr"
failed_when: false
notify: restart firewalld
tags: firewall
- name: drop rate-limited queries from unverified IPs on 16262
become: true
ansible.builtin.command: >
firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 5
-p udp --dport 16262 -m conntrack --ctstate NEW
-m length --length 53
-m hashlimit --hashlimit-above 1/hour --hashlimit-burst 2
--hashlimit-mode srcip --hashlimit-name zomboid_query_16262
--hashlimit-htable-expire 3600000
-j DROP
register: unverified_drop_result_16262
changed_when: "'already' not in unverified_drop_result_16262.stderr"
failed_when: false
notify: restart firewalld
tags: firewall
# World reset is now triggered via Discord bot -> systemd path unit
# See zomboid-world-reset.path and zomboid-world-reset.service
- name: enable zomboid world reset path unit
become: true
become_user: "{{ podman_user }}"
ansible.builtin.systemd:
name: zomboid-world-reset.path
scope: user
enabled: true
state: started
daemon_reload: true

View File

@@ -13,6 +13,7 @@
- "{{ cloud_skudak_path }}/config"
- "{{ cloud_skudak_path }}/data"
- "{{ cloud_skudak_path }}/mysql"
- "{{ cloud_skudak_path }}/scripts"
- name: unshare chown the skudak cloud volumes
become: true
@@ -21,24 +22,19 @@
ansible.builtin.command: |
podman unshare chown -R 33:33 {{ cloud_skudak_path }}/apps {{ cloud_skudak_path }}/data {{ cloud_skudak_path }}/config
- name: flush handlers
ansible.builtin.meta: flush_handlers
- name: copy skudak cloud libresign setup script
become: true
ansible.builtin.template:
src: nextcloud/libresign-setup.sh.j2
dest: "{{ cloud_skudak_path }}/scripts/libresign-setup.sh"
owner: "{{ podman_subuid.stdout }}"
group: "{{ podman_subuid.stdout }}"
mode: 0755
notify: restorecon podman
- import_tasks: podman/podman-check.yml
vars:
container_name: skudak-cloud-db
@@ -91,6 +87,7 @@
- "{{ cloud_skudak_path }}/apps:/var/www/html/custom_apps"
- "{{ cloud_skudak_path }}/data:/var/www/html/data"
- "{{ cloud_skudak_path }}/config:/var/www/html/config"
- "{{ cloud_skudak_path }}/scripts/libresign-setup.sh:/docker-entrypoint-hooks.d/before-starting/libresign-setup.sh:ro"
ports:
- "8090:80"
@@ -99,6 +96,20 @@
vars:
container_name: skudak-cloud
# Install poppler-utils for pdfsig/pdfinfo (LibreSign handles java/pdftk/jsignpdf via occ)
# This needs to be reinstalled on each container recreation
- name: install poppler-utils in skudak-cloud
become: true
become_user: "{{ podman_user }}"
ansible.builtin.command:
cmd: >
podman exec -u 0 skudak-cloud
sh -c "apt-get update && apt-get install -y --no-install-recommends
poppler-utils && rm -rf /var/lib/apt/lists/*"
register: poppler_install
changed_when: "'is already the newest version' not in poppler_install.stdout"
failed_when: false
- name: disable nextcloud signup link in config
become: true
ansible.builtin.lineinfile:
@@ -108,3 +119,24 @@
insertbefore: '^\);'
create: false
failed_when: false
# Add cloud.skudak.com to Nextcloud trusted_domains
- name: add cloud.skudak.com to nextcloud trusted_domains
become: true
become_user: "{{ podman_user }}"
ansible.builtin.command: >
podman exec -u www-data skudak-cloud
php occ config:system:set trusted_domains 1 --value="cloud.skudak.com"
register: trusted_domain_result
changed_when: "'System config value trusted_domains' in trusted_domain_result.stdout"
failed_when: false
- include_tasks: containers/cloud-backup.yml
vars:
backup_name: skudak-cloud
data_path: "{{ cloud_skudak_path }}/data"
ssh_key_path: /etc/ssh/backup_keys/skudak-cloud
ssh_key_content: "{{ cloud_skudak_backup_ssh_key }}"
ssh_user: skucloud
remote_path: /mnt/glacier/skudakcloud
script_path: /usr/local/bin/skudak-cloud-backup.sh

View File

@@ -68,7 +68,7 @@
network:
- shared
env:
APP_URL: "https://wiki.skudak.com"
APP_KEY: "{{ bookstack_app_key }}"
DB_HOST: "bookstack-db"
DB_USERNAME: "bookstack"

View File

@@ -0,0 +1,59 @@
---
# Download MaxMind GeoLite2 database for Graylog GeoIP enrichment
# Requires free MaxMind account: https://dev.maxmind.com/geoip/geolite2-free-geolocation-data
- name: create geoip directory
become: true
ansible.builtin.file:
path: "{{ geoip_path }}"
state: directory
owner: "{{ podman_subuid.stdout }}"
group: "{{ podman_subuid.stdout }}"
mode: '0755'
notify: restorecon podman
tags: graylog, geoip
- name: download GeoLite2 database
become: true
ansible.builtin.get_url:
url: "https://download.maxmind.com/geoip/databases/{{ geoip_database_edition }}/download?suffix=tar.gz"
dest: "{{ geoip_path }}/{{ geoip_database_edition }}.tar.gz"
url_username: "{{ geoip_maxmind_account_id }}"
url_password: "{{ geoip_maxmind_license_key }}"
force: false
mode: '0644'
register: geoip_download
tags: graylog, geoip
- name: extract GeoLite2 database
become: true
ansible.builtin.unarchive:
src: "{{ geoip_path }}/{{ geoip_database_edition }}.tar.gz"
dest: "{{ geoip_path }}"
remote_src: true
extra_opts:
- --strip-components=1
- --wildcards
- "*/{{ geoip_database_edition }}.mmdb"
when: geoip_download.changed
tags: graylog, geoip
# Fix ownership of downloaded files to podman user's subuid range
- name: fix geoip files ownership for podman user
become: true
ansible.builtin.file:
path: "{{ geoip_path }}"
state: directory
owner: "{{ podman_subuid.stdout }}"
group: "{{ podman_subuid.stdout }}"
recurse: true
tags: graylog, geoip
# Graylog runs as UID 1100 inside the container
- name: fix geoip database ownership for graylog container
become: true
become_user: "{{ podman_user }}"
changed_when: false
ansible.builtin.command: |
podman unshare chown -R 1100:1100 {{ geoip_path }}
tags: graylog, geoip

View File

@@ -13,6 +13,8 @@
# web server (Caddy)
- 80/tcp
- 443/tcp
# Gitea Skudak SSH
- 2222/tcp
# pihole (unused?)
- 53/tcp
- 53/udp

View File

@@ -31,7 +31,7 @@
- import_tasks: containers/home/hass.yml
vars:
image: ghcr.io/home-assistant/home-assistant:2026.1
tags: hass
- import_tasks: containers/home/partkeepr.yml
@@ -42,7 +42,7 @@
- import_tasks: containers/home/partsy.yml
vars:
image: "git.debyl.io/debyltech/partsy:latest"
tags: partsy
- import_tasks: containers/skudak/wiki.yml
@@ -54,50 +54,65 @@
- import_tasks: containers/home/photos.yml
vars:
db_image: docker.io/tensorchord/pgvecto-rs:pg14-v0.2.0@sha256:90724186f0a3517cf6914295b5ab410db9ce23190a2d9d0b9dd6463e3fa298f0
ml_image: ghcr.io/immich-app/immich-machine-learning:v2.5.0
redis_image: docker.io/redis:6.2-alpine@sha256:eaba718fecd1196d88533de7ba49bf903ad33664a92debb24660a922ecd9cac8
image: ghcr.io/immich-app/immich-server:v2.5.0
tags: photos
- import_tasks: containers/home/cloud.yml
vars:
db_image: docker.io/library/mariadb:10.6
image: docker.io/library/nextcloud:33.0.0-apache
tags: cloud
- import_tasks: containers/skudak/cloud.yml
vars:
db_image: docker.io/library/mariadb:10.6
image: docker.io/library/nextcloud:33.0.0-apache
tags: skudak, skudak-cloud
- import_tasks: containers/debyltech/fulfillr.yml
vars:
image: git.debyl.io/debyltech/fulfillr:20260124.0411
tags: debyltech, fulfillr
- import_tasks: containers/debyltech/n8n.yml
vars:
image: docker.io/n8nio/n8n:2.11.3
tags: debyltech, n8n
- import_tasks: containers/debyltech/uptime-kuma.yml
vars:
image: docker.io/louislam/uptime-kuma:2.0.2
tags: debyltech, uptime-debyltech
- import_tasks: containers/home/uptime-kuma.yml
vars:
image: docker.io/louislam/uptime-kuma:2.0.2
tags: home, uptime
- import_tasks: data/geoip.yml
tags: graylog, geoip
- import_tasks: containers/debyltech/graylog.yml
vars:
mongo_image: docker.io/mongo:7.0
opensearch_image: docker.io/opensearchproject/opensearch:2
image: docker.io/graylog/graylog:7.0.1
tags: debyltech, graylog
- import_tasks: containers/base/fluent-bit.yml
tags: fluent-bit, graylog
- import_tasks: containers/home/searxng.yml
vars:
image: docker.io/searxng/searxng:latest
tags: searxng
- import_tasks: containers/home/gregtime.yml
vars:
image: localhost/greg-time-bot:3.4.3
tags: gregtime
- import_tasks: containers/home/zomboid.yml
vars:
image: docker.io/cm2network/steamcmd:root
tags: zomboid

View File

@@ -112,6 +112,7 @@
- name: fetch subuid of {{ podman_user }}
become: true
changed_when: false
check_mode: false
ansible.builtin.shell: |
set -o pipefail && cat /etc/subuid | awk -F':' '/{{ podman_user }}/{ print $2 }' | head -n 1
register: podman_subuid

View File

@@ -79,11 +79,16 @@
}
# Wiki/BookStack - {{ bookstack_server_name }} redirect to new domain
{{ bookstack_server_name }} {
redir https://{{ bookstack_server_name_new }}{uri} 302
}
# Wiki/BookStack - {{ bookstack_server_name_new }} (new primary domain)
{{ bookstack_server_name_new }} {
import common_headers
reverse_proxy localhost:6875
log {
output file /var/log/caddy/wiki.log
format json
@@ -125,11 +130,6 @@
# CI/Drone - REMOVED
# ci.bdebyl.net configuration removed - Drone CI infrastructure decommissioned
# Home server - {{ home_server_name_io }}
{{ home_server_name_io }} {
{{ ip_restricted_site() }}
@@ -159,7 +159,7 @@
}
}
# Uptime Kuma (Debyltech) - {{ uptime_kuma_server_name }}
{{ uptime_kuma_server_name }} {
{{ ip_restricted_site() }}
@@ -177,6 +177,24 @@
}
}
# Uptime Kuma (Personal) - {{ uptime_kuma_personal_server_name }}
{{ uptime_kuma_personal_server_name }} {
{{ ip_restricted_site() }}
handle @local {
import common_headers
reverse_proxy localhost:3002 {
# WebSocket support for live updates
flush_interval -1
}
}
log {
output file /var/log/caddy/uptime-kuma-personal.log
format json
}
}
# Graylog Logs - {{ logs_server_name }}
{{ logs_server_name }} {
# GELF HTTP endpoint - open for Lambda (auth via header)
@@ -258,28 +276,28 @@
}
}
# Skudak Nextcloud - serve both domains (migration period)
{{ cloud_skudak_server_name }}, {{ cloud_skudak_server_name_new }} {
request_body {
max_size {{ caddy_max_request_body_mb }}MB
}
reverse_proxy localhost:8090 {
header_up Host {host}
header_up X-Real-IP {remote}
}
header {
Strict-Transport-Security "max-age=31536000; includeSubDomains"
X-Content-Type-Options "nosniff"
Referrer-Policy "same-origin"
-X-Powered-By
}
# Nextcloud specific redirects
redir /.well-known/carddav /remote.php/dav 301
redir /.well-known/caldav /remote.php/dav 301
log {
output file /var/log/caddy/cloud-skudak.log
format json
@@ -300,6 +318,37 @@
}
}
# Gitea Skudak - {{ gitea_skudak_server_name }}
{{ gitea_skudak_server_name }} {
import common_headers
reverse_proxy localhost:3101 {
flush_interval -1
}
log {
output file /var/log/caddy/gitea-skudak.log
format json
}
}
# N8N Workflow Automation - {{ n8n_server_name }}
{{ n8n_server_name }} {
{{ ip_restricted_site() }}
handle @local {
import common_headers
reverse_proxy localhost:5678 {
flush_interval -1
}
}
log {
output file {{ caddy_log_path }}/n8n.log
format {{ caddy_log_format }}
}
}
# Fulfillr - {{ fulfillr_server_name }} (Static + API with IP restrictions)
{{ fulfillr_server_name }} {
{{ ip_restricted_site() }}
@@ -327,22 +376,45 @@
reverse_proxy localhost:9054
}
# Serve static files with SPA fallback
handle {
root * /usr/local/share/fulfillr-site
try_files {path} {path}/ /index.html
file_server
}
header {
Strict-Transport-Security "max-age=31536000; includeSubDomains"
X-Content-Type-Options "nosniff"
Referrer-Policy "same-origin"
}
log {
output file /var/log/caddy/fulfillr.log
format json
}
}
# ============================================================================
# TEST/STAGING SITES
# ============================================================================
# Test Site - test.debyl.io (Public static site hosting, no caching)
test.debyl.io {
import common_headers
root * /srv/test-site
try_files {path} {path}/ /index.html
file_server
# Disable all caching for test sites
header Cache-Control "no-store, no-cache, must-revalidate, max-age=0"
header Pragma "no-cache"
log {
output file /var/log/caddy/test.log
format json
level {{ caddy_log_level }}
}
}

View File

@@ -1,32 +0,0 @@
[SERVICE]
Flush 5
Daemon Off
Log_Level info
Parsers_File parsers.conf
# Read from systemd journal - filter for Podman container logs
# Container logs come from conmon process with CONTAINER_NAME field
[INPUT]
Name systemd
Tag journal.*
Systemd_Filter _COMM=conmon
Read_From_Tail On
Strip_Underscores On
# Extract container name for better filtering in Graylog
[FILTER]
Name record_modifier
Match journal.*
Record host {{ ansible_hostname }}
Record source podman
# Output to Graylog GELF UDP (local, port 12203)
# Graylog needs a GELF UDP input configured on this port
[OUTPUT]
Name gelf
Match journal.*
Host 127.0.0.1
Port 12203
Mode udp
Gelf_Short_Message_Key MESSAGE
Gelf_Host_Key host

View File

@@ -1,6 +1,7 @@
{
"snipcart_api_key": "{{ snipcart_api_key }}",
"easypost_api_key": "{{ easypost_api_key }}",
"backinstock_table": "{{ fulfillr_backinstock_table }}",
"aws": {
"access_key": "{{ fulfillr_access_key }}",
"secret_key": "{{ fulfillr_secret_key }}",
@@ -9,7 +10,7 @@
},
"tax": {
"ein": "{{ fulfillr_tax_ein }}",
"ioss": null
},
"sender_address": {
"city": "Newbury",
@@ -19,7 +20,18 @@
"phone": "6034160859",
"state": "NH",
"street1": "976 Route 103",
"street2": "Unit 95",
"zip": "03255"
}
},
"outreach": {
"outreach_table": "debyltech-outreach-prod",
"unsubscribe_table": "debyltech-unsubscribe-prod",
"email_log_table": "debyltech-email-log-prod",
"reviews_table": "debyltech-reviews-prod",
"hmac_secret_arn": "{{ fulfillr_hmac_arn }}",
"ses_from_email": "noreply@debyltech.com",
"ses_reply_to": "support@debyltech.com",
"ses_region": "us-east-1",
"base_url": "https://debyltech.com"
}
}

View File

@@ -0,0 +1,6 @@
[Unit]
Description=Nextcloud {{ instance_name }} backup to TrueNAS
[Service]
Type=oneshot
ExecStart={{ script_path }}

View File

@@ -0,0 +1,4 @@
#!/bin/bash
set -euo pipefail
rsync -az --exclude .ssh -e "ssh -i {{ ssh_key_path }} -o StrictHostKeyChecking=accept-new" \
{{ data_path }}/ {{ ssh_user }}@truenas.localdomain:{{ remote_path }}/

View File

@@ -0,0 +1,9 @@
[Unit]
Description=Daily Nextcloud {{ instance_name }} backup
[Timer]
OnCalendar=*-*-* 04:00:00
Persistent=true
[Install]
WantedBy=timers.target

View File

@@ -0,0 +1,22 @@
#!/bin/bash
# LibreSign dependency setup for Skudak Nextcloud
# Runs on container start via /docker-entrypoint-hooks.d/before-starting/
# Note: This runs as www-data, not root. poppler-utils is installed
# separately via Ansible using podman exec -u 0.

echo "=== LibreSign Setup: Installing dependencies ==="

# Install LibreSign-managed Java (required for PDFtk and jSignPdf)
# This downloads a specific Java version that LibreSign validates
echo "Installing Java..."
php /var/www/html/occ libresign:install --java || echo "Java install skipped or failed"

# Install PDFtk (requires Java)
echo "Installing PDFtk..."
php /var/www/html/occ libresign:install --pdftk || echo "PDFtk install skipped or failed"

# Install jSignPdf (requires Java)
echo "Installing jSignPdf..."
php /var/www/html/occ libresign:install --jsignpdf || echo "jSignPdf install skipped or failed"

echo "=== LibreSign Setup: Complete ==="


@@ -0,0 +1,35 @@
use_default_settings: true

general:
  instance_name: "SearXNG"
  debug: false

server:
  bind_address: "127.0.0.1"
  port: 8080
  secret_key: "{{ searxng_secret_key }}"
  limiter: false
  image_proxy: false

search:
  safe_search: 0
  formats:
    - html
    - json

engines:
  - name: duckduckgo
    engine: duckduckgo
    disabled: false
  - name: google
    engine: google
    disabled: false
  - name: wikipedia
    engine: wikipedia
    disabled: false
  - name: bing
    engine: bing
    disabled: false
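Because `json` is enabled under `search.formats`, the instance can be queried programmatically on its loopback bind address. A minimal sketch of building such a query URL (the helper name and the engine selection are illustrative, not part of the role):

```python
from urllib.parse import urlencode

def searx_query_url(query: str, engines=None) -> str:
    """Build a SearXNG JSON-format query URL for the instance configured
    above (bound to 127.0.0.1:8080). Engine names, if given, must match
    the `engines:` list in settings.yml."""
    params = {"q": query, "format": "json"}
    if engines:
        params["engines"] = ",".join(engines)
    return "http://127.0.0.1:8080/search?" + urlencode(params)
```

Fetching that URL with any HTTP client returns a JSON body containing the merged engine results.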


@@ -84,6 +84,6 @@ fi
# Start server
cd "${INSTALL_DIR}"
echo "=== Starting Project Zomboid Server ==="
echo "Connect to: home.bdebyl.net:16261"
echo "Connect to: home.debyl.io:16261"
exec su -c "export LD_LIBRARY_PATH=${INSTALL_DIR}/jre64/lib:\${LD_LIBRARY_PATH} && ./start-server.sh ${SERVER_ARGS}" steam


@@ -0,0 +1,57 @@
#!/bin/bash
# Zomboid World Reset Script
# Triggered by systemd path unit when discord bot requests reset
set -e

LOGFILE="{{ podman_home }}/.local/share/volumes/zomboid/logs/world-reset.log"
TRIGGER_FILE="{{ podman_home }}/.local/share/volumes/gregtime/data/zomboid-reset.trigger"
SERVER_NAME="{{ zomboid_server_names[zomboid_server_mode] }}"
SAVES_PATH="{{ podman_home }}/.local/share/volumes/zomboid/data/Saves/Multiplayer/${SERVER_NAME}"
DB_PATH="{{ podman_home }}/.local/share/volumes/zomboid/data/db/${SERVER_NAME}.db"

log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOGFILE"
}

# Ensure XDG_RUNTIME_DIR is set for systemctl --user
export XDG_RUNTIME_DIR="/run/user/$(id -u)"

log "World reset triggered"

# Read requester info from trigger file if available
# Note: Must use podman unshare because file is owned by container's UID (232071)
if podman unshare test -f "$TRIGGER_FILE"; then
    REQUESTER=$(podman unshare cat "$TRIGGER_FILE")
    log "Requested by: $REQUESTER"
    podman unshare rm -f "$TRIGGER_FILE"
fi

# Stop server
log "Stopping zomboid service..."
systemctl --user stop zomboid.service || true
sleep 5

# Delete world (using podman unshare to work within user namespace)
log "Deleting world saves at: $SAVES_PATH"
if [[ -d "$SAVES_PATH" ]]; then
    podman unshare rm -rf "$SAVES_PATH"
    log "World saves deleted"
else
    log "No world saves found at $SAVES_PATH"
fi

# Delete player database
log "Deleting player database at: $DB_PATH"
if [[ -f "$DB_PATH" ]]; then
    podman unshare rm -f "$DB_PATH"
    log "Player database deleted"
else
    log "No database found at $DB_PATH"
fi

# Start server
log "Starting zomboid service..."
systemctl --user start zomboid.service
log "World reset complete - new world will generate on first connection"


@@ -0,0 +1,6 @@
[Unit]
Description=Write Zomboid container stats to file

[Service]
Type=oneshot
ExecStart={{ podman_home }}/bin/zomboid-stats.sh


@@ -0,0 +1,3 @@
#!/bin/bash
# Write zomboid container stats to file for gregtime to read
podman stats --no-stream --format json zomboid 2>/dev/null > {{ podman_volumes }}/zomboid-stats.json || true
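On the consumer side, gregtime can read that file on each poll. A hypothetical sketch (the function name is illustrative; `podman stats --format json` emits a JSON array with one object per container, and the `|| true` redirect above leaves an empty file when the container is down):

```python
import json
from pathlib import Path
from typing import Optional

def read_zomboid_stats(path: str) -> Optional[dict]:
    """Return the first container entry from the stats file, or None when
    the file is missing or empty (e.g. the stats command failed and the
    redirect truncated the file)."""
    p = Path(path)
    if not p.exists() or p.stat().st_size == 0:
        return None
    entries = json.loads(p.read_text())
    return entries[0] if entries else None
```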


@@ -0,0 +1,9 @@
[Unit]
Description=Update Zomboid container stats every 30 seconds

[Timer]
OnBootSec=30s
OnUnitActiveSec=30s

[Install]
WantedBy=timers.target


@@ -0,0 +1,9 @@
[Unit]
Description=Watch for Zomboid world reset trigger

[Path]
PathExists={{ podman_home }}/.local/share/volumes/gregtime/data/zomboid-reset.trigger
Unit=zomboid-world-reset.service

[Install]
WantedBy=default.target


@@ -0,0 +1,8 @@
[Unit]
Description=Zomboid World Reset Service

[Service]
Type=oneshot
ExecStart={{ podman_home }}/bin/zomboid-world-reset.sh
StandardOutput=journal
StandardError=journal

Binary file not shown.

scripts/steam-workshop-query.py (new executable file, 401 lines)

@@ -0,0 +1,401 @@
#!/usr/bin/env python3
"""
Steam Workshop Query Tool for Project Zomboid Mods

Queries Steam API to get mod details including correct Mod IDs with special characters.
Useful for generating properly formatted mod lists for Build 42 servers.

Usage:
    # Query individual workshop items (semicolon-separated)
    python steam-workshop-query.py "ID1;ID2;ID3"

    # Query from a Steam Workshop collection
    python steam-workshop-query.py --collection 3625776190
    python steam-workshop-query.py --collection "https://steamcommunity.com/sharedfiles/filedetails?id=3625776190"

    # Output formats
    --json     Output raw JSON data
    --ansible  Output workshop_items and mod_ids strings for ansible config
    --report   Human-readable report (default)

Examples:
    python steam-workshop-query.py "3171167894;3330403100" --ansible
    python steam-workshop-query.py --collection 3625776190 --report
"""
import requests
import json
import sys
import time
import re
import argparse
from typing import List, Dict, Optional, Tuple
from datetime import datetime

STEAM_API_DETAILS = "https://api.steampowered.com/ISteamRemoteStorage/GetPublishedFileDetails/v1/"
STEAM_API_COLLECTION = "https://api.steampowered.com/ISteamRemoteStorage/GetCollectionDetails/v1/"
BATCH_SIZE = 50  # Conservative batch size to avoid rate limits
DELAY_BETWEEN_BATCHES = 1.0  # seconds


def get_collection_items(collection_id: str) -> List[str]:
    """Fetch all workshop item IDs from a Steam Workshop collection."""
    data = {"collectioncount": 1, "publishedfileids[0]": collection_id}
    response = requests.post(STEAM_API_COLLECTION, data=data)
    response.raise_for_status()
    result = response.json()

    items = []
    collection_details = result.get("response", {}).get("collectiondetails", [])
    if not collection_details:
        print(f"Warning: No collection found with ID {collection_id}", file=sys.stderr)
        return items

    for coll in collection_details:
        if coll.get("result") != 1:
            print(f"Warning: Collection {collection_id} returned error result", file=sys.stderr)
            continue
        for child in coll.get("children", []):
            file_id = child.get("publishedfileid")
            if file_id:
                items.append(file_id)
    return items


def query_workshop_items_batch(item_ids: List[str]) -> List[Dict]:
    """Query Steam API for a batch of workshop item details."""
    data = {"itemcount": len(item_ids)}
    for i, item_id in enumerate(item_ids):
        data[f"publishedfileids[{i}]"] = item_id
    response = requests.post(STEAM_API_DETAILS, data=data)
    response.raise_for_status()
    result = response.json()
    return result.get("response", {}).get("publishedfiledetails", [])


def query_all_workshop_items(item_ids: List[str]) -> List[Dict]:
    """Query Steam API for all workshop items, handling batching."""
    all_items = []
    for i in range(0, len(item_ids), BATCH_SIZE):
        batch = item_ids[i:i + BATCH_SIZE]
        print(f"Querying batch {i // BATCH_SIZE + 1} ({len(batch)} items)...", file=sys.stderr)
        items = query_workshop_items_batch(batch)
        all_items.extend(items)
        # Delay between batches to avoid rate limiting
        if i + BATCH_SIZE < len(item_ids):
            time.sleep(DELAY_BETWEEN_BATCHES)
    return all_items


def extract_mod_id(item: Dict) -> Optional[str]:
    """
    Extract Mod ID(s) from item description.
    PZ mods typically include 'Mod ID: xxx' in their description.
    Some mods have multiple Mod IDs on separate lines or comma-separated.
    """
    description = item.get("description", "")
    # Find ALL "Mod ID: xxx" patterns in description (multiple lines)
    matches = re.findall(r'Mod ID:\s*([^\r\n]+)', description, re.IGNORECASE)
    if not matches:
        return None

    all_mod_ids = []
    for match in matches:
        mod_id_str = match.strip().rstrip('.')
        # Handle comma or semicolon separated mod IDs on same line
        if ',' in mod_id_str:
            all_mod_ids.extend([m.strip() for m in mod_id_str.split(',')])
        elif ';' in mod_id_str:
            all_mod_ids.extend([m.strip() for m in mod_id_str.split(';')])
        else:
            all_mod_ids.append(mod_id_str)

    # Remove empty strings and duplicates while preserving order
    seen = set()
    unique_ids = []
    for mod_id in all_mod_ids:
        if mod_id and mod_id not in seen:
            seen.add(mod_id)
            unique_ids.append(mod_id)
    return ';'.join(unique_ids) if unique_ids else None


def check_b42_compatible(item: Dict) -> Tuple[bool, str]:
    """
    Check if mod appears to be B42 compatible.
    Returns (is_compatible, reason).
    """
    title = item.get("title", "").lower()
    tags = [t.get("tag", "").lower() for t in item.get("tags", [])]
    all_tags_str = " ".join(tags)

    # B42 indicators in title or tags
    b42_patterns = [
        r'\bb42\b',
        r'build\s*42',
        r'\b42\.\d+',
        r'\[b42\]',
        r'\(b42\)',
    ]
    for pattern in b42_patterns:
        if re.search(pattern, title) or re.search(pattern, all_tags_str):
            return True, "B42 mentioned in title/tags"

    # Check for B41 only indicators (might not be compatible)
    b41_only = re.search(r'\bb41\b.*only', title) or re.search(r'build\s*41\s*only', title)
    if b41_only:
        return False, "B41 only"

    return False, "No B42 indicator found"


def has_special_characters(text: str) -> bool:
    """Check if text contains special characters that need attention."""
    special = ["'", '"', "!", "&", "(", ")"]
    return any(c in text for c in special)


def extract_collection_id(url_or_id: str) -> str:
    """Extract collection ID from URL or return as-is if already an ID."""
    match = re.search(r'[?&]id=(\d+)', url_or_id)
    return match.group(1) if match else url_or_id


def format_timestamp(unix_ts: int) -> str:
    """Format Unix timestamp as readable date."""
    if not unix_ts:
        return "Unknown"
    return datetime.fromtimestamp(unix_ts).strftime("%Y-%m-%d")


def process_items(items: List[Dict]) -> Dict:
    """
    Process workshop items and extract relevant information.
    Returns a dict with processed data and analysis.
    """
    processed = []
    duplicates = {}
    issues = []

    for item in items:
        workshop_id = item.get("publishedfileid", "unknown")
        title = item.get("title", "Unknown")
        mod_id = extract_mod_id(item)
        b42_compat, b42_reason = check_b42_compatible(item)
        last_updated = item.get("time_updated", 0)
        result_code = item.get("result", 0)

        entry = {
            "workshop_id": workshop_id,
            "title": title,
            "mod_id": mod_id,
            "b42_compatible": b42_compat,
            "b42_reason": b42_reason,
            "last_updated": format_timestamp(last_updated),
            "has_special_chars": has_special_characters(mod_id or ""),
            "result_code": result_code,
        }

        # Track duplicates by mod_id
        if mod_id:
            if mod_id in duplicates:
                duplicates[mod_id].append(workshop_id)
            else:
                duplicates[mod_id] = [workshop_id]

        # Track issues
        if result_code != 1:
            issues.append(f"Workshop item {workshop_id} returned error (result={result_code})")
        if not mod_id:
            issues.append(f"Workshop item {workshop_id} ({title}) has no Mod ID tag")
        if entry["has_special_chars"]:
            issues.append(f"Mod ID '{mod_id}' contains special characters")

        processed.append(entry)

    # Find actual duplicates (mod_id appearing more than once)
    duplicate_mod_ids = {k: v for k, v in duplicates.items() if len(v) > 1}

    return {
        "items": processed,
        "duplicates": duplicate_mod_ids,
        "issues": issues,
        "total_count": len(items),
        "valid_count": len([i for i in processed if i["mod_id"]]),
    }


def output_report(data: Dict) -> None:
    """Output human-readable report."""
    print("\n" + "=" * 80)
    print("STEAM WORKSHOP MOD ANALYSIS REPORT")
    print("=" * 80)
    print(f"\nTotal items: {data['total_count']}")
    print(f"Valid items (with Mod ID): {data['valid_count']}")

    if data["duplicates"]:
        print(f"\n{'=' * 40}")
        print("DUPLICATE MOD IDs:")
        print(f"{'=' * 40}")
        for mod_id, workshop_ids in data["duplicates"].items():
            print(f" {mod_id}: {', '.join(workshop_ids)}")

    if data["issues"]:
        print(f"\n{'=' * 40}")
        print("ISSUES:")
        print(f"{'=' * 40}")
        for issue in data["issues"]:
            print(f" - {issue}")

    print(f"\n{'=' * 40}")
    print("MOD LIST:")
    print(f"{'=' * 40}")
    for item in data["items"]:
        b42_status = "[B42]" if item["b42_compatible"] else "[???]"
        special = " [SPECIAL CHARS]" if item["has_special_chars"] else ""
        mod_id_display = item["mod_id"] or "<NO MOD ID>"
        print(f"\n Workshop: {item['workshop_id']}")
        print(f" Title: {item['title']}")
        print(f" Mod ID: {mod_id_display}{special}")
        print(f" Status: {b42_status} {item['b42_reason']}")
        print(f" Updated: {item['last_updated']}")


def output_ansible(data: Dict) -> None:
    """Output ansible-ready configuration strings."""
    # Get unique, valid mod IDs (preserving order, removing duplicates)
    seen_workshop = set()
    seen_mod_ids = set()
    workshop_items = []
    mod_ids = []

    for item in data["items"]:
        workshop_id = item["workshop_id"]
        mod_id_str = item["mod_id"]

        # Skip if we've seen this workshop item
        if workshop_id in seen_workshop:
            continue
        seen_workshop.add(workshop_id)
        workshop_items.append(workshop_id)

        # Handle mod_id which may contain multiple IDs separated by semicolon
        if mod_id_str:
            for mod_id in mod_id_str.split(';'):
                mod_id = mod_id.strip()
                if mod_id and mod_id not in seen_mod_ids:
                    seen_mod_ids.add(mod_id)
                    mod_ids.append(mod_id)

    # Format for Build 42 (backslash prefix)
    workshop_str = ";".join(workshop_items)
    mod_ids_str = ";".join(f"\\{mid}" for mid in mod_ids)

    print("\n# Ansible Configuration for zomboid_mods")
    print("# Copy these values to ansible/roles/podman/defaults/main.yml")
    print("")
    print("zomboid_mods:")
    print("  workshop_items: >-")
    print(f"    {workshop_str}")
    print("  mod_ids: >-")
    print(f"    {mod_ids_str}")

    if data["duplicates"]:
        print("\n# WARNING: The following Mod IDs had duplicates (kept first occurrence):")
        for mod_id, workshop_ids in data["duplicates"].items():
            print(f"# {mod_id}: {', '.join(workshop_ids)}")

    if data["issues"]:
        print("\n# Issues found:")
        for issue in data["issues"]:
            print(f"# - {issue}")


def output_json(data: Dict) -> None:
    """Output JSON data."""
    print(json.dumps(data, indent=2))


def main():
    parser = argparse.ArgumentParser(
        description="Query Steam Workshop for Project Zomboid mod details",
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog=__doc__
    )
    parser.add_argument(
        "workshop_ids",
        nargs="?",
        help="Semicolon-separated workshop IDs (e.g., 'ID1;ID2;ID3')"
    )
    parser.add_argument(
        "--collection", "-c",
        help="Steam Workshop collection ID or URL"
    )
    parser.add_argument(
        "--json", "-j",
        action="store_true",
        help="Output raw JSON data"
    )
    parser.add_argument(
        "--ansible", "-a",
        action="store_true",
        help="Output ansible-ready configuration"
    )
    parser.add_argument(
        "--report", "-r",
        action="store_true",
        help="Output human-readable report (default)"
    )
    args = parser.parse_args()

    # Determine input source
    if args.collection:
        collection_id = extract_collection_id(args.collection)
        print(f"Fetching collection {collection_id}...", file=sys.stderr)
        item_ids = get_collection_items(collection_id)
        if not item_ids:
            print("Error: No items found in collection", file=sys.stderr)
            sys.exit(1)
        print(f"Found {len(item_ids)} items in collection", file=sys.stderr)
    elif args.workshop_ids:
        item_ids = [id.strip() for id in args.workshop_ids.split(";") if id.strip()]
    else:
        parser.print_help()
        sys.exit(1)

    # Query Steam API
    print(f"Querying {len(item_ids)} workshop items...", file=sys.stderr)
    items = query_all_workshop_items(item_ids)
    print(f"Retrieved {len(items)} item details", file=sys.stderr)

    # Process items
    data = process_items(items)

    # Output based on format
    if args.json:
        output_json(data)
    elif args.ansible:
        output_ansible(data)
    else:
        output_report(data)


if __name__ == "__main__":
    main()
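For a quick standalone check of the Mod ID parsing above, the same regex-and-dedup logic can be condensed (this sketch mirrors `extract_mod_id`, splitting on either separator in one pass; the function name is illustrative):

```python
import re

def extract_mod_ids(description: str) -> list:
    """Find every 'Mod ID: ...' line (case-insensitive), split comma- or
    semicolon-separated lists, and de-duplicate preserving order."""
    found = []
    for raw in re.findall(r'Mod ID:\s*([^\r\n]+)', description, re.IGNORECASE):
        for part in re.split(r'[,;]', raw.strip().rstrip('.')):
            part = part.strip()
            if part and part not in found:
                found.append(part)
    return found
```

For example, a description containing `Mod ID: BetterSorting` on one line and `Mod id: BetterSorting, TchernoLib` on another yields `['BetterSorting', 'TchernoLib']`.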