diff --git a/inventory/hosts.example b/inventory/hosts.example index 0fe0e81..0f7a5c9 100644 --- a/inventory/hosts.example +++ b/inventory/hosts.example @@ -8,3 +8,7 @@ all: marge: homer: maggie: + vars: + ansible_user: jgarcia + ansible_become: true + ansible_become_method: sudo diff --git a/roles/immich/README.md b/roles/immich/README.md new file mode 100644 index 0000000..d6eb160 --- /dev/null +++ b/roles/immich/README.md @@ -0,0 +1,378 @@ +# Immich Role + +This Ansible role deploys [Immich](https://immich.app/) - a high performance self-hosted photo and video management solution - using Podman with docker-compose files. + +## Requirements + +- Podman installed on the target system (handled by the `podman` role dependency) +- Podman compose support (`podman compose` command available) +- Sufficient disk space for photos/videos at the upload location + +## Role Variables + +See `defaults/main.yml` for all available variables and their default values. + +### Key Configuration Requirements + +#### Required Passwords + +Both passwords must be set in your inventory (min 12 characters): +- `immich_postgres_password` - PostgreSQL database password +- `immich_valkey_password` - Valkey/Redis password + +#### Valkey ACL Configuration + +**Important:** Immich requires a dedicated Valkey ACL user with specific permissions. This role provides the ACL configuration, but you must register it with the Valkey role. + +**Required Setup in Inventory:** + +Add the Immich user to your `valkey_acl_users` list in your inventory or host_vars: + +```yaml +# inventory/host_vars/yourserver.yml or group_vars/all.yml +valkey_acl_users: + - username: immich + password: "{{ immich_valkey_password }}" + keypattern: "immich_bull* immich_channel*" + commands: "&* -@dangerous +@read +@write +@pubsub +select +auth +ping +info +eval +evalsha" +``` + +**ACL Breakdown:** +- `keypattern: "immich_bull* immich_channel*"` - Restricts access to BullMQ keys used by Immich +- `&*` - Allow all pub/sub channels (required for BullMQ job queues) +- `-@dangerous` - Deny dangerous commands (FLUSHDB, FLUSHALL, KEYS, etc.) +- `+@read +@write` - Allow read/write command groups +- `+@pubsub` - Allow pub/sub commands (SUBSCRIBE, PUBLISH, etc.) +- `+select` - Allow SELECT command (database switching) +- `+auth +ping +info` - Connection management commands +- `+eval +evalsha` - Lua scripting (required by BullMQ for atomic operations) + +**Based on:** [Immich GitHub Discussion #19727](https://github.com/immich-app/immich/discussions/19727#discussioncomment-13668749) + +**Security Benefits:** +- Immich cannot access keys from other services +- Cannot execute admin commands (FLUSHDB, CONFIG, etc.) +- Cannot view all keys (KEYS command denied) +- Defense-in-depth with ACL + key patterns + database numbers + +#### External Network Configuration + +Immich requires a dedicated external network to be defined in your inventory. Add this to your `host_vars` or `group_vars`: + +```yaml +podman_external_networks: + - name: immich + subnet: 172.20.0.0/16 + gateway: 172.20.0.1 +``` + +**How it works:** +1. Define the Immich network in `podman_external_networks` list in your inventory +2. The `podman` role (a dependency) creates the external network before Immich deployment +3. The Immich docker-compose file references this external network +4. 
The network persists across container restarts and compose stack rebuilds + +## Dependencies + +This role depends on: +- `podman` - Container runtime +- `postgres` - PostgreSQL database +- `valkey` - Redis-compatible cache (formerly Redis) + +**Note:** The Valkey role must be configured with the Immich ACL user (see Valkey Configuration section above) before running this role. + +## Example Playbook + +```yaml +--- +- hosts: servers + become: true + roles: + - role: podman + - role: immich + vars: + immich_postgres_password: "your-secure-postgres-password" + immich_valkey_password: "your-secure-valkey-password" + immich_upload_location: /mnt/storage/immich/upload + immich_timezone: America/New_York +``` + +**Complete Example with Valkey ACL:** + +In `inventory/host_vars/yourserver.yml`: + +```yaml +# Podman external networks +podman_external_networks: + - name: immich + subnet: 172.20.0.0/16 + gateway: 172.20.0.1 + +# Valkey admin password +valkey_admin_password: "your-valkey-admin-password" + +# Valkey ACL users - register all service users here +valkey_acl_users: + - username: immich + password: "{{ immich_valkey_password }}" + keypattern: "immich_bull* immich_channel*" + commands: "&* -@dangerous +@read +@write +@pubsub +select +auth +ping +info +eval +evalsha" + # Add other services here as needed + +# Immich passwords +immich_postgres_password: "your-secure-postgres-password" +immich_valkey_password: "your-secure-valkey-password" +``` + +In your playbook: + +```yaml +--- +- hosts: servers + become: true + roles: + - role: valkey # Must run first to create ACL users + - role: postgres + - role: podman + - role: immich +``` + +## Architecture + +The role deploys Immich using Podman containers that connect to shared system services: + +**Immich Containers:** +1. **immich-server** - Main application server (exposed on configured port) +2. **immich-machine-learning** - ML service for facial recognition and object detection + +**Shared System Services:** +3. **PostgreSQL** - Database with vector extensions (from `postgres` role) +4. **Valkey** - Redis-compatible cache (from `valkey` role) + +### Container Networking + +Both Immich containers run on a **dedicated external Podman network** with its own CIDR block. The network is created by the `podman` role as an external network, referenced in the compose file: + +```yaml +networks: + immich: + external: true + name: immich +``` + +The actual network configuration (subnet: `172.20.0.0/16`, gateway: `172.20.0.1`) is handled by the podman role based on the `immich_network_*` variables. 
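+
+If the network ever needs to be (re)created by hand, for example while debugging outside Ansible, the manual equivalent of what the podman role sets up looks roughly like this (subnet and gateway taken from the example inventory entry above; adjust to your own values):
+
+```bash
+# Create the dedicated external network that the compose file references.
+# This roughly mirrors what the podman role does from podman_external_networks.
+podman network create \
+  --subnet 172.20.0.0/16 \
+  --gateway 172.20.0.1 \
+  immich
+```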
+ +This provides: +- **Network isolation**: Separate subnet (defined in inventory, e.g., `172.20.0.0/16`) from other containers +- **Network persistence**: Network survives compose stack rebuilds and container recreation +- **Named bridge**: Explicit interface naming for the network +- **Container-to-container communication**: The server reaches the ML container via service name (`immich-machine-learning:3003`) using Docker/Podman internal DNS +- **Container-to-host communication**: Both containers can reach PostgreSQL and Valkey on the host via `host.containers.internal:{{ podman_subnet_gateway }}` + +**Key Points:** +- The network must be defined in your inventory via `podman_external_networks` +- The network is created by the `podman` role before Immich deployment (via role dependency) +- The Immich network has its own gateway (e.g., `172.20.0.1` as defined in inventory) +- `extra_hosts` maps `host.containers.internal` to the **Podman default bridge gateway** (e.g., `10.88.0.1`), not the Immich network gateway +- This allows containers to route to the host machine for PostgreSQL/Valkey access + +**Checking the network:** +```bash +# List all Podman networks +podman network ls + +# Inspect the Immich network +podman network inspect immich +``` + +### Data Isolation + +The role implements proper data isolation for both database backends: + +- **PostgreSQL**: Immich gets its own database (`immich`) and dedicated user (`immich`) with restricted privileges (NOSUPERUSER, NOCREATEDB, NOCREATEROLE) +- **Valkey**: Immich uses a dedicated ACL user (`immich`) with: + - Dedicated password (independent from `valkey_admin_password`) + - Key pattern restriction (`immich_bull*` and `immich_channel*` only) + - Command restrictions (no admin/dangerous operations like FLUSHDB, CONFIG) + - Database number isolation (uses DB 0 by default, configurable) + - Pub/sub channel access for BullMQ job queues + +**Security Benefits:** +- Each service has unique credentials +- Compromised service cannot access other services' data +- Cannot accidentally delete all data (FLUSHDB/FLUSHALL denied) +- Cannot view keys from other services (KEYS command denied) +- Defense-in-depth: ACL + key patterns + command restrictions + database numbers + +The compose file is deployed to `{{ podman_projects_dir }}/immich/docker-compose.yml` and managed via a systemd service. + +## Post-Installation + +After deployment: + +1. Access Immich at `http://:2283` +2. Create an admin account on first login +3. Configure mobile/desktop apps to point to your server + +## Management + +The role creates a systemd service for managing the compose stack: + +```bash +# Check status +systemctl status immich + +# Stop Immich +systemctl stop immich + +# Start Immich +systemctl start immich + +# Restart Immich +systemctl restart immich + +# View logs for all containers +cd /opt/podman/immich && podman compose logs -f + +# View logs for specific service +cd /opt/podman/immich && podman compose logs -f immich-server +``` + +### Manual Management + +You can also manage containers directly with podman compose: + +```bash +cd /opt/podman/immich + +# Start services +podman compose up -d + +# Stop services +podman compose down + +# Pull latest images +podman compose pull + +# Recreate containers +podman compose up -d --force-recreate +``` + +## Updating Immich + +To update to a newer version: + +1. Update the `immich_version` variable in your playbook or inventory +2. Re-run the Ansible playbook +3. 
The systemd service will restart with the new version + +Or manually: + +```bash +cd /opt/podman/immich +podman compose pull +systemctl restart immich +``` + +## Storage + +- **Upload location**: Stores all photos, videos, and thumbnails +- **Database location**: PostgreSQL data (not suitable for network shares) +- **Model cache**: ML models for facial recognition + +Ensure adequate disk space and regular backups of these directories. + +## Files Deployed + +- `{{ podman_projects_dir }}/immich/docker-compose.yml` - Compose definition +- `/etc/systemd/system/immich.service` - Systemd service unit + +## Security Considerations + +- **Set strong passwords** for both `immich_postgres_password` and `immich_valkey_password` (min 12 chars) +- **Use Ansible Vault** to encrypt passwords in production: + ```bash + ansible-vault encrypt_string 'your-password' --name 'immich_postgres_password' + ansible-vault encrypt_string 'your-password' --name 'immich_valkey_password' + ``` +- **Configure Valkey ACL** properly (see Valkey Configuration section) - do not use `+@all` +- Consider using a reverse proxy (nginx/traefik) for HTTPS +- Restrict access via firewall rules if needed +- Keep Immich updated by changing `immich_version` and redeploying + +## Troubleshooting + +### Check service status +```bash +systemctl status immich +``` + +### View compose file +```bash +cat /opt/podman/immich/docker-compose.yml +``` + +### Check container status +```bash +cd /opt/podman/immich +podman compose ps +``` + +### View logs +```bash +cd /opt/podman/immich +podman compose logs +``` + +### Valkey ACL Issues + +**Error: "NOPERM No permissions to access a channel"** +- The Valkey ACL is missing channel permissions +- Ensure `&*` or `+allchannels` is in the ACL commands +- Verify ACL is properly loaded: `valkey-cli ACL LIST` + +**Error: "NOAUTH Authentication required"** +- Check `immich_valkey_password` is set correctly +- Verify the password matches in both inventory ACL config and immich vars + +**Error: "WRONGPASS invalid username-password pair"** +- Ensure the Immich user is registered in `valkey_acl_users` +- Check the Valkey ACL file was deployed: `cat /etc/valkey/users.acl` +- Restart Valkey to reload ACL: `systemctl restart valkey` + +**Verify Valkey ACL Configuration:** +```bash +# Connect as admin +valkey-cli +AUTH default + +# List all ACL users +ACL LIST + +# Check specific user +ACL GETUSER immich + +# Monitor commands (useful for debugging permissions) +MONITOR +``` + +**Test Immich user credentials:** +```bash +valkey-cli +AUTH immich +SELECT 0 +PING +# Should return PONG + +# Try a restricted command (should fail) +FLUSHDB +# Should return: (error) NOPERM +``` + +## License + +MIT + +## Author Information + +Created for deploying Immich on NAS systems using Podman and docker-compose. 
diff --git a/roles/immich/defaults/main.yml b/roles/immich/defaults/main.yml new file mode 100644 index 0000000..3e4808f --- /dev/null +++ b/roles/immich/defaults/main.yml @@ -0,0 +1,57 @@ +--- +# Immich version to deploy +immich_version: release + +# Storage location (@see https://docs.immich.app/install/environment-variables/) +immich_upload_location: "{{ podman_projects_dir }}/immich/data/upload" + +# PostgreSQL configuration (REQUIRED password - must be set explicitly) +immich_postgres_db_name: immich +immich_postgres_user: immich +# immich_postgres_password: "" # Intentionally undefined - role will fail if not set +immich_postgres_host: postgres.local +immich_postgres_port: 5432 + +# Valkey configuration (REQUIRED password - must be set explicitly) +immich_valkey_user: immich +# immich_valkey_password: "" # Intentionally undefined - role will fail if not set +immich_valkey_host: valkey.local +immich_valkey_port: 6379 +immich_valkey_db: 0 # Dedicated database number for isolation (0-15) + +# Valkey ACL configuration +# Based on: https://github.com/immich-app/immich/discussions/19727#discussioncomment-13668749 +immich_valkey_acl: + username: "{{ immich_valkey_user }}" + password: "{{ immich_valkey_password }}" + keypattern: "immich_bull* immich_channel*" # BullMQ patterns used by Immich + commands: "&* -@dangerous +@read +@write +@pubsub +select +auth +ping +info +eval +evalsha" + # &* = all channels (required for pub/sub) + # -@dangerous = deny dangerous commands (FLUSHDB, FLUSHALL, KEYS, etc) + # +@read +@write = allow read/write command groups + # +@pubsub = allow pub/sub commands + # +select = allow SELECT (database switching) + # +auth +ping +info = connection management + # +eval +evalsha = Lua scripting (required by BullMQ) + +# Network configuration +immich_port: 2283 + +# External network configuration +# Define in inventory via podman_external_networks list +# Example: +# podman_external_networks: +# - name: immich +# subnet: 172.20.0.0/16 +# gateway: 172.20.0.1 + +# Container images +immich_server_image: ghcr.io/immich-app/immich-server +immich_ml_image: ghcr.io/immich-app/immich-machine-learning + +# Timezone +immich_timezone: UTC + +# Nginx reverse proxy configuration +immich_nginx_enabled: false +immich_nginx_hostname: photos.nas.local diff --git a/roles/immich/files/debug-networking.sh b/roles/immich/files/debug-networking.sh new file mode 100644 index 0000000..9b79fca --- /dev/null +++ b/roles/immich/files/debug-networking.sh @@ -0,0 +1,172 @@ +#!/bin/bash +# Immich Networking Debug Script +# This script helps diagnose container communication issues between +# immich-server and immich-machine-learning containers + +set -e + +COMPOSE_DIR="/opt/podman/immich" +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +NC='\033[0m' # No Color + +echo "========================================" +echo "Immich Networking Debug Script" +echo "========================================" +echo "" + +# Check if compose directory exists +if [ ! -d "$COMPOSE_DIR" ]; then + echo -e "${RED}ERROR: Compose directory not found: $COMPOSE_DIR${NC}" + exit 1 +fi + +cd "$COMPOSE_DIR" + +# 1. Check container status +echo -e "${YELLOW}1. Container Status${NC}" +echo "---" +podman compose ps +echo "" + +# 2. Check if containers are running +echo -e "${YELLOW}2. 
Verifying Containers are Running${NC}" +echo "---" +if podman ps | grep -q "immich_server"; then + echo -e "${GREEN}✓ immich_server is running${NC}" +else + echo -e "${RED}✗ immich_server is NOT running${NC}" +fi + +if podman ps | grep -q "immich_machine_learning"; then + echo -e "${GREEN}✓ immich_machine_learning is running${NC}" +else + echo -e "${RED}✗ immich_machine_learning is NOT running${NC}" +fi +echo "" + +# 3. Check network configuration +echo -e "${YELLOW}3. Network Configuration${NC}" +echo "---" +echo "Immich networks:" +podman network ls | grep -i immich || echo "No Immich networks found" +echo "" + +# 4. Check which networks containers are on +echo -e "${YELLOW}4. Container Network Membership${NC}" +echo "---" +echo "immich_server networks:" +podman inspect immich_server 2>/dev/null | grep -A 5 '"Networks"' || echo "Container not found" +echo "" +echo "immich_machine_learning networks:" +podman inspect immich_machine_learning 2>/dev/null | grep -A 5 '"Networks"' || echo "Container not found" +echo "" + +# 5. Check environment variables +echo -e "${YELLOW}5. Machine Learning URL Configuration${NC}" +echo "---" +ML_URL=$(podman inspect immich_server 2>/dev/null | grep IMMICH_MACHINE_LEARNING_URL | head -1 || echo "Not set") +echo "IMMICH_MACHINE_LEARNING_URL: $ML_URL" +echo "" + +# 6. Check if ML container is listening +echo -e "${YELLOW}6. ML Container Listening Status${NC}" +echo "---" +if podman exec immich_machine_learning sh -c 'command -v netstat' >/dev/null 2>&1; then + podman exec immich_machine_learning netstat -ln | grep 3003 || echo "Port 3003 not listening (or netstat not available)" +else + echo "netstat not available in container, checking logs instead:" + podman logs immich_machine_learning 2>&1 | grep -i "listening\|started" | tail -5 || echo "No listening messages found in logs" +fi +echo "" + +# 7. Test connectivity from server to ML +echo -e "${YELLOW}7. Testing Server → ML Connectivity${NC}" +echo "---" +if podman exec immich_server sh -c 'command -v curl' >/dev/null 2>&1; then + echo "Testing HTTP connection to ML service..." + if podman exec immich_server curl -sf http://immich-machine-learning:3003/ping >/dev/null 2>&1; then + echo -e "${GREEN}✓ Successfully connected to ML service via service name${NC}" + else + echo -e "${RED}✗ Failed to connect to ML service${NC}" + echo "Attempting to diagnose..." + + # Try to resolve DNS + if podman exec immich_server sh -c 'command -v nslookup' >/dev/null 2>&1; then + echo "DNS resolution test:" + podman exec immich_server nslookup immich-machine-learning || echo "DNS resolution failed" + fi + + # Try with verbose curl + echo "Verbose curl output:" + podman exec immich_server curl -v http://immich-machine-learning:3003/ping 2>&1 || true + fi +else + echo "curl not available in server container, skipping connectivity test" +fi +echo "" + +# 8. Check health status +echo -e "${YELLOW}8. Container Health Status${NC}" +echo "---" +echo "immich_server health:" +podman inspect immich_server 2>/dev/null | grep -A 10 '"Health"' | head -15 || echo "No health data available" +echo "" +echo "immich_machine_learning health:" +podman inspect immich_machine_learning 2>/dev/null | grep -A 10 '"Health"' | head -15 || echo "No health data available" +echo "" + +# 9. Check recent logs for errors +echo -e "${YELLOW}9. 
Recent Error Logs${NC}" +echo "---" +echo "Server errors (last 10):" +podman logs immich_server 2>&1 | grep -i "error\|fail\|unhealthy" | tail -10 || echo "No errors found" +echo "" +echo "ML errors (last 10):" +podman logs immich_machine_learning 2>&1 | grep -i "error\|fail" | tail -10 || echo "No errors found" +echo "" + +# 10. Check compose file configuration +echo -e "${YELLOW}10. Docker Compose Configuration${NC}" +echo "---" +echo "Network configuration in docker-compose.yml:" +grep -A 3 "^networks:" docker-compose.yml 2>/dev/null || echo "No networks section found in compose file" +echo "" +echo "Server network config:" +grep -A 2 "immich-server:" docker-compose.yml | grep -A 2 "networks:" || echo "No network config for server" +echo "" +echo "ML network config:" +grep -A 2 "immich-machine-learning:" docker-compose.yml | grep -A 2 "networks:" || echo "No network config for ML" +echo "" + +# Summary +echo "========================================" +echo -e "${YELLOW}Summary${NC}" +echo "========================================" +echo "" + +# Check if both containers are healthy +SERVER_HEALTHY=$(podman inspect immich_server 2>/dev/null | grep -c '"Status": "healthy"' || echo "0") +ML_HEALTHY=$(podman inspect immich_machine_learning 2>/dev/null | grep -c '"Status": "healthy"' || echo "0") + +if [ "$SERVER_HEALTHY" -gt 0 ] && [ "$ML_HEALTHY" -gt 0 ]; then + echo -e "${GREEN}✓ Both containers appear healthy${NC}" +elif [ "$SERVER_HEALTHY" -gt 0 ]; then + echo -e "${YELLOW}⚠ Server healthy, but ML container may have issues${NC}" +elif [ "$ML_HEALTHY" -gt 0 ]; then + echo -e "${YELLOW}⚠ ML healthy, but server container may have issues${NC}" +else + echo -e "${RED}✗ One or both containers are unhealthy${NC}" +fi + +echo "" +echo "For more detailed logs, run:" +echo " cd $COMPOSE_DIR && podman compose logs -f" +echo "" +echo "To restart containers:" +echo " cd $COMPOSE_DIR && podman compose restart" +echo "" +echo "To recreate containers with updated config:" +echo " cd $COMPOSE_DIR && podman compose down && podman compose up -d" +echo "" diff --git a/roles/immich/handlers/main.yml b/roles/immich/handlers/main.yml new file mode 100644 index 0000000..b0ef10a --- /dev/null +++ b/roles/immich/handlers/main.yml @@ -0,0 +1,15 @@ +--- +- name: Reload systemd + ansible.builtin.systemd: + daemon_reload: true + +- name: Restart Immich + ansible.builtin.systemd: + name: immich + state: restarted + daemon_reload: true + +- name: Reload nginx + ansible.builtin.systemd: + name: nginx + state: reloaded diff --git a/roles/immich/meta/main.yml b/roles/immich/meta/main.yml new file mode 100644 index 0000000..0be1062 --- /dev/null +++ b/roles/immich/meta/main.yml @@ -0,0 +1,6 @@ +--- +dependencies: + - role: podman + - role: postgres + - role: valkey + - role: nginx diff --git a/roles/immich/tasks/main.yml b/roles/immich/tasks/main.yml new file mode 100644 index 0000000..0f28c61 --- /dev/null +++ b/roles/immich/tasks/main.yml @@ -0,0 +1,123 @@ +--- +- name: Validate required passwords are set + ansible.builtin.assert: + that: + - immich_postgres_password is defined + - immich_postgres_password | length >= 12 + - immich_valkey_password is defined + - immich_valkey_password | length >= 12 + fail_msg: | + immich_postgres_password and immich_valkey_password are required (min 12 chars). + See roles/immich/defaults/main.yml for configuration instructions. 
+ success_msg: "Password validation passed" + +- name: Create PostgreSQL database for Immich + community.postgresql.postgresql_db: + name: "{{ immich_postgres_db_name }}" + owner: "{{ immich_postgres_user }}" + state: present + become_user: "{{ postgres_admin_user }}" + +- name: Create PostgreSQL user for Immich + community.postgresql.postgresql_user: + name: "{{ immich_postgres_user }}" + password: "{{ immich_postgres_password }}" + state: present + become_user: "{{ postgres_admin_user }}" + +- name: Grant all privileges on database to Immich user + community.postgresql.postgresql_privs: + login_db: "{{ immich_postgres_db_name }}" + roles: "{{ immich_postgres_user }}" + type: database + privs: ALL + state: present + become_user: "{{ postgres_admin_user }}" + +- name: Ensure Immich user has no superuser privileges + community.postgresql.postgresql_user: + name: "{{ immich_postgres_user }}" + role_attr_flags: NOSUPERUSER,NOCREATEDB,NOCREATEROLE + state: present + become_user: "{{ postgres_admin_user }}" + +- name: Enable required PostgreSQL extensions in Immich database + community.postgresql.postgresql_ext: + name: "{{ item }}" + login_db: "{{ immich_postgres_db_name }}" + state: present + become_user: "{{ postgres_admin_user }}" + loop: + - cube + - earthdistance + - vector + +- name: Grant schema permissions to Immich user + community.postgresql.postgresql_privs: + login_db: "{{ immich_postgres_db_name }}" + roles: "{{ immich_postgres_user }}" + type: schema + objs: public + privs: CREATE,USAGE + state: present + become_user: "{{ postgres_admin_user }}" + +- name: Create Immich project directory + ansible.builtin.file: + path: "{{ podman_projects_dir }}/immich" + state: directory + owner: "{{ ansible_user }}" + group: "{{ ansible_user }}" + mode: "0755" + +- name: Create Immich data directories + ansible.builtin.file: + path: "{{ item }}" + state: directory + owner: "{{ ansible_user }}" + group: "{{ ansible_user }}" + mode: "0755" + loop: + - "{{ immich_upload_location }}" + +- name: Deploy docker-compose.yml for Immich + ansible.builtin.template: + src: docker-compose.yml.j2 + dest: "{{ podman_projects_dir }}/immich/docker-compose.yml" + owner: "{{ ansible_user }}" + group: "{{ ansible_user }}" + mode: "0644" + notify: Restart Immich + +- name: Create systemd service for Immich + ansible.builtin.template: + src: immich.service.j2 + dest: /etc/systemd/system/immich.service + owner: root + group: root + mode: "0644" + notify: Reload systemd + +- name: Enable and start Immich service + ansible.builtin.systemd: + name: immich + enabled: true + state: started + daemon_reload: true + +- name: Deploy nginx vhost configuration for Immich + ansible.builtin.template: + src: nginx-vhost.conf.j2 + dest: /etc/nginx/conf.d/immich.conf + owner: root + group: root + mode: "0644" + when: immich_nginx_enabled + notify: Reload nginx + +- name: Remove nginx vhost configuration for Immich + ansible.builtin.file: + path: /etc/nginx/conf.d/immich.conf + state: absent + when: not immich_nginx_enabled + notify: Reload nginx diff --git a/roles/immich/templates/docker-compose.yml.j2 b/roles/immich/templates/docker-compose.yml.j2 new file mode 100644 index 0000000..53a2b66 --- /dev/null +++ b/roles/immich/templates/docker-compose.yml.j2 @@ -0,0 +1,62 @@ +--- +services: + immich-server: + container_name: immich_server + image: {{ immich_server_image }}:{{ immich_version }} + networks: + - databases + - immich + extra_hosts: + - "{{ immich_postgres_host }}:{{ podman_subnet_gateway }}" + - "{{ immich_valkey_host 
}}:{{ podman_subnet_gateway }}" + volumes: + - /etc/localtime:/etc/localtime:ro + - {{ immich_upload_location }}:/data:rw,Z + environment: + DB_HOSTNAME: {{ immich_postgres_host }} + DB_PORT: {{ immich_postgres_port }} + DB_USERNAME: {{ immich_postgres_user }} + DB_PASSWORD: {{ immich_postgres_password }} + DB_DATABASE_NAME: {{ immich_postgres_db_name }} + REDIS_HOSTNAME: {{ immich_valkey_host }} + REDIS_PORT: {{ immich_valkey_port }} + REDIS_USERNAME: {{ immich_valkey_user }} + REDIS_PASSWORD: {{ immich_valkey_password }} + REDIS_DBINDEX: {{ immich_valkey_db }} + IMMICH_MACHINE_LEARNING_URL: http://immich-machine-learning:3003 + UPLOAD_LOCATION: {{ immich_upload_location }} + TZ: {{ immich_timezone }} + ports: + - "{{ immich_port }}:2283" + restart: always + healthcheck: + test: ["CMD-SHELL", "curl -f http://localhost:2283/api/server/ping"] + interval: 30s + timeout: 10s + retries: 3 + start_period: 60s + + immich-machine-learning: + container_name: immich_machine_learning + image: {{ immich_ml_image }}:{{ immich_version }} + networks: + - immich + volumes: + - model-cache:/cache + restart: always + healthcheck: + test: ["CMD", "python", "/usr/src/healthcheck.py"] + interval: 30s + timeout: 10s + retries: 3 + start_period: 60s + +networks: + databases: + name: podman + external: true + immich: + driver: bridge + +volumes: + model-cache: diff --git a/roles/immich/templates/immich.service.j2 b/roles/immich/templates/immich.service.j2 new file mode 100644 index 0000000..b284b33 --- /dev/null +++ b/roles/immich/templates/immich.service.j2 @@ -0,0 +1,16 @@ +[Unit] +Description=Immich Media Server +Requires=network-online.target +After=network-online.target + +[Service] +Type=oneshot +RemainAfterExit=true +WorkingDirectory={{ podman_projects_dir }}/immich +ExecStart=/usr/bin/podman compose up -d +ExecStop=/usr/bin/podman compose down +Restart=on-failure +RestartSec=10 + +[Install] +WantedBy=multi-user.target diff --git a/roles/immich/templates/nginx-vhost.conf.j2 b/roles/immich/templates/nginx-vhost.conf.j2 new file mode 100644 index 0000000..380b067 --- /dev/null +++ b/roles/immich/templates/nginx-vhost.conf.j2 @@ -0,0 +1,23 @@ +server { + listen 80; + server_name {{ immich_nginx_hostname }}; + + client_max_body_size 50000M; + + location / { + proxy_pass http://127.0.0.1:{{ immich_port }}; + proxy_set_header Host $http_host; + proxy_set_header X-Real-IP $remote_addr; + proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; + proxy_set_header X-Forwarded-Proto $scheme; + + # WebSocket support + proxy_http_version 1.1; + proxy_set_header Upgrade $http_upgrade; + proxy_set_header Connection "upgrade"; + + # Timeouts for large file uploads + proxy_read_timeout 600s; + proxy_send_timeout 600s; + } +} diff --git a/roles/nginx/README.md b/roles/nginx/README.md new file mode 100644 index 0000000..42882cf --- /dev/null +++ b/roles/nginx/README.md @@ -0,0 +1,213 @@ +# Nginx Role + +This Ansible role installs and configures Nginx as a reverse proxy for web applications. + +## Features + +- Installs Nginx +- Configurable worker processes and connections +- Gzip compression support +- SSL/TLS configuration +- Modular vhost configuration via `/etc/nginx/conf.d/` +- Zero-downtime reloads + +## Requirements + +- Systemd-based Linux distribution +- Root/sudo access + +## Role Variables + +See `defaults/main.yml` for all available variables and their default values. 
+ +### Key Configuration + +The role provides sensible defaults for worker processes, connection limits, upload sizes, compression, and SSL/TLS settings. Override as needed in your inventory. + +## Dependencies + +None. + +## Example Playbook + +### Basic Installation + +```yaml +--- +- hosts: servers + become: true + roles: + - role: nginx +``` + +### Custom Configuration + +```yaml +--- +- hosts: servers + become: true + roles: + - role: nginx + vars: + nginx_worker_processes: 4 + nginx_worker_connections: 2048 + nginx_client_max_body_size: 500M +``` + +## Service Management + +The role creates handlers for managing nginx: + +```yaml +notify: Reload nginx # Graceful reload (zero downtime) +notify: Restart nginx # Full restart +``` + +## Vhost Configuration Pattern + +This role is designed to work with service-specific vhost configurations. Each service role should: + +1. Deploy its vhost config to `/etc/nginx/conf.d/.conf` +2. Notify the nginx reload handler +3. Use a variable to enable/disable nginx integration + +### Example Service Integration + +In your service role (e.g., `immich`): + +**defaults/main.yml:** +```yaml +immich_nginx_enabled: false +immich_nginx_hostname: immich.example.com +``` + +**tasks/main.yml:** +```yaml +- name: Deploy nginx vhost for service + ansible.builtin.template: + src: nginx-vhost.conf.j2 + dest: /etc/nginx/conf.d/myservice.conf + validate: nginx -t + when: myservice_nginx_enabled + notify: Reload nginx + +- name: Remove nginx vhost when disabled + ansible.builtin.file: + path: /etc/nginx/conf.d/myservice.conf + state: absent + when: not myservice_nginx_enabled + notify: Reload nginx +``` + +**templates/nginx-vhost.conf.j2:** +```nginx +server { + listen 80; + server_name {{ myservice_nginx_hostname }}; + + location / { + proxy_pass http://127.0.0.1:{{ myservice_port }}; + proxy_set_header Host $http_host; + proxy_set_header X-Real-IP $remote_addr; + proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; + proxy_set_header X-Forwarded-Proto $scheme; + } +} +``` + +**handlers/main.yml:** +```yaml +- name: Reload nginx + ansible.builtin.systemd: + name: nginx + state: reloaded +``` + +## Independent Deployments + +This pattern allows for independent service deployments: + +1. **Deploy service A** → Only touches `/etc/nginx/conf.d/serviceA.conf` → Reload nginx +2. **Deploy service B** → Only touches `/etc/nginx/conf.d/serviceB.conf` → Reload nginx +3. **No downtime** for other services during deployment + +## Log Management + +Nginx logs are written to: +- `/var/log/nginx/access.log` - Access logs +- `/var/log/nginx/error.log` - Error logs + +These are also captured by systemd journal: +```bash +# View nginx logs +journalctl -u nginx -f + +# View traditional log files +tail -f /var/log/nginx/access.log +tail -f /var/log/nginx/error.log +``` + +## Configuration Validation + +The role automatically validates nginx configuration before applying changes using `nginx -t`. + +Manual validation: +```bash +nginx -t # Test configuration +nginx -t -c /path/to/conf # Test specific config file +``` + +## Troubleshooting + +### Check nginx status +```bash +systemctl status nginx +``` + +### Test configuration +```bash +nginx -t +``` + +### Reload configuration +```bash +systemctl reload nginx +``` + +### View error logs +```bash +journalctl -u nginx -n 100 +# or +tail -f /var/log/nginx/error.log +``` + +### List loaded vhost configs +```bash +ls -la /etc/nginx/conf.d/ +``` + +## SSL/TLS Support + +For SSL support, you can: + +1. 
**Manual certificates:** Place certs in `/etc/ssl/` and reference in vhost configs +2. **Let's Encrypt:** Use certbot or similar tools (can be added to playbook) +3. **Self-signed:** Generate with `openssl` for testing + +The base nginx.conf includes SSL protocol configuration that applies to all vhosts. + +## Performance Tuning + +Adjust these variables based on your workload: + +- `nginx_worker_processes`: Set to number of CPU cores +- `nginx_worker_connections`: Increase for high traffic (check `ulimit -n`) +- `nginx_client_max_body_size`: Increase for large file uploads + +## License + +MIT + +## Author Information + +Created for managing reverse proxy configurations in NAS/homelab environments. diff --git a/roles/nginx/defaults/main.yml b/roles/nginx/defaults/main.yml new file mode 100644 index 0000000..2a7b25b --- /dev/null +++ b/roles/nginx/defaults/main.yml @@ -0,0 +1,16 @@ +--- +# Nginx configuration directory for service vhosts +nginx_conf_dir: /etc/nginx/conf.d + +# Worker processes (auto = number of CPU cores) +nginx_worker_processes: auto + +# Worker connections +nginx_worker_connections: 1024 + +# Client max body size (for file uploads) +nginx_client_max_body_size: 100M + +# SSL configuration (volontarily omit TLSv1.2 here) +nginx_ssl_protocols: TLSv1.3 +nginx_ssl_prefer_server_ciphers: true diff --git a/roles/nginx/handlers/main.yml b/roles/nginx/handlers/main.yml new file mode 100644 index 0000000..98d2229 --- /dev/null +++ b/roles/nginx/handlers/main.yml @@ -0,0 +1,10 @@ +--- +- name: Reload nginx + ansible.builtin.systemd: + name: nginx + state: reloaded + +- name: Restart nginx + ansible.builtin.systemd: + name: nginx + state: restarted diff --git a/roles/nginx/tasks/main.yml b/roles/nginx/tasks/main.yml new file mode 100644 index 0000000..37a5aca --- /dev/null +++ b/roles/nginx/tasks/main.yml @@ -0,0 +1,49 @@ +--- +- name: Load OS-specific variables + ansible.builtin.include_vars: "{{ item }}" + with_first_found: + - "{{ ansible_facts['os_family'] }}.yml" + - debian.yml + +- name: Install nginx + ansible.builtin.package: + name: nginx + state: present + +- name: Ensure nginx conf.d directory exists + ansible.builtin.file: + path: "{{ nginx_conf_dir }}" + state: directory + owner: root + group: root + mode: "0755" + +- name: Deploy nginx main configuration + ansible.builtin.template: + src: nginx.conf.j2 + dest: /etc/nginx/nginx.conf + owner: root + group: root + mode: "0644" + validate: nginx -t -c %s + notify: Reload nginx + +- name: Allow HTTP traffic through firewall + community.general.ufw: + rule: allow + port: "80" + proto: tcp + comment: Nginx HTTP + +- name: Allow HTTPS traffic through firewall + community.general.ufw: + rule: allow + port: "443" + proto: tcp + comment: Nginx HTTPS + +- name: Enable and start nginx service + ansible.builtin.systemd: + name: nginx + enabled: true + state: started diff --git a/roles/nginx/templates/nginx.conf.j2 b/roles/nginx/templates/nginx.conf.j2 new file mode 100644 index 0000000..154eff7 --- /dev/null +++ b/roles/nginx/templates/nginx.conf.j2 @@ -0,0 +1,42 @@ +user {{ nginx_user }}; +worker_processes {{ nginx_worker_processes }}; +error_log /var/log/nginx/error.log; +pid /run/nginx.pid; + +include /usr/share/nginx/modules/*.conf; + +events { + worker_connections {{ nginx_worker_connections }}; +} + +http { + log_format main '$remote_addr - $remote_user [$time_local] "$request" ' + '$status $body_bytes_sent "$http_referer" ' + '"$http_user_agent" "$http_x_forwarded_for"'; + + access_log /var/log/nginx/access.log main; + + 
sendfile on; + tcp_nopush on; + tcp_nodelay on; + keepalive_timeout 65; + types_hash_max_size 4096; + client_max_body_size {{ nginx_client_max_body_size }}; + + include /etc/nginx/mime.types; + default_type application/octet-stream; + + # Gzip compression + gzip on; + gzip_vary on; + gzip_proxied any; + gzip_comp_level 6; + gzip_types text/plain text/css text/xml text/javascript application/json application/javascript application/xml+rss application/rss+xml font/truetype font/opentype application/vnd.ms-fontobject image/svg+xml; + + # SSL configuration + ssl_protocols {{ nginx_ssl_protocols }}; + ssl_prefer_server_ciphers {{ 'on' if nginx_ssl_prefer_server_ciphers else 'off' }}; + + # Load modular configuration files from the conf.d directory + include {{ nginx_conf_dir }}/*.conf; +} diff --git a/roles/nginx/vars/archlinux.yml b/roles/nginx/vars/archlinux.yml new file mode 100644 index 0000000..eac5938 --- /dev/null +++ b/roles/nginx/vars/archlinux.yml @@ -0,0 +1,2 @@ +--- +nginx_user: http diff --git a/roles/podman/README.md b/roles/podman/README.md index 065851d..e4ff20e 100644 --- a/roles/podman/README.md +++ b/roles/podman/README.md @@ -8,6 +8,7 @@ This Ansible role installs and configures Podman for container management on NAS - Configures container registry search paths - Creates shared projects directory for compose files - Enables short image name resolution (e.g., `redis:alpine` → `docker.io/library/redis:alpine`) +- Creates external networks for services (e.g., dedicated Immich network) ## Requirements @@ -16,85 +17,24 @@ This Ansible role installs and configures Podman for container management on NAS ## Role Variables -Available variables with defaults (see `defaults/main.yml`): +See `defaults/main.yml` for all available variables and their default values. -```yaml -# Base directory for docker-compose projects -podman_projects_dir: /opt/podman +### Key Configuration -# Unqualified search registries (for short image names) -podman_unqualified_search_registries: - - docker.io - - quay.io - - ghcr.io +#### Unqualified Search Registries -# Podman bridge network (leave empty for default dynamic assignment) -podman_subnet: "" +When you use short image names (without registry prefix), Podman searches configured registries in order (e.g., `redis:alpine` → `docker.io/library/redis:alpine`). -# Podman bridge gateway IP (used by services binding to bridge) -podman_subnet_gateway: "" +Customize via the `podman_unqualified_search_registries` variable. -# Podman bridge interface name (if using custom network) -podman_subnet_iface: podman1 -``` -### Unqualified Search Registries +#### External Networks -When you use short image names (without registry prefix), Podman searches these registries in order: - -```bash -# Short name -podman pull redis:alpine - -# Resolves to -docker.io/library/redis:alpine -``` - -**Default search order:** -1. `docker.io` - Docker Hub -2. `quay.io` - Red Hat Quay -3. `ghcr.io` - GitHub Container Registry - -You can customize this list via the `podman_unqualified_search_registries` variable. - -### Podman Bridge Network - -By default, Podman dynamically assigns network subnets to bridge interfaces. 
You can document your network configuration using these variables: - -**Default behavior (empty `podman_subnet`):** -- Podman manages networks automatically -- No manual configuration needed - -**Explicit network documentation:** - -```yaml -podman_subnet: "10.89.0.0/24" -podman_subnet_gateway: "10.89.0.1" -podman_subnet_iface: podman1 -``` - -Use this to: -- Document your infrastructure topology -- Allow services to bind to the bridge gateway (e.g., PostgreSQL, Valkey) -- Reference in other roles that need bridge network information -- Maintain consistent network configuration across deployments - -**Finding your Podman network:** - -```bash -# List Podman networks -podman network ls - -# Show bridge interfaces -ip addr show | grep podman - -# Get specific interface IP -ip -4 addr show podman1 -``` +The role can create external Podman networks for services that need dedicated network isolation. Define the `podman_external_networks` list in your inventory. Networks persist across container restarts and compose stack rebuilds. See `defaults/main.yml` for configuration details. ## Dependencies -None. +- `containers.podman` collection (installed via `requirements.yml`) ## Example Playbook @@ -108,19 +48,7 @@ None. ### Custom Configuration -```yaml ---- -- hosts: servers - become: true - roles: - - role: podman - vars: - podman_projects_dir: /mnt/storage/containers - podman_unqualified_search_registries: - - docker.io - - ghcr.io - - registry.gitlab.com -``` +See `defaults/main.yml` for all available variables. Override in your inventory as needed. ## Files Deployed diff --git a/roles/podman/defaults/main.yml b/roles/podman/defaults/main.yml index 89ba48c..b901c1b 100644 --- a/roles/podman/defaults/main.yml +++ b/roles/podman/defaults/main.yml @@ -12,13 +12,16 @@ podman_unqualified_search_registries: # Leave empty to use Podman's default dynamic network assignment # Example: "10.89.0.0/24" if you want to explicitly set it podman_subnet: "" - # Podman bridge gateway IP (typically .1 of the bridge network) # Used by services that need to bind to the bridge interface -# Example: "10.89.0.1" for the 10.89.0.0/24 network -podman_subnet_gateway: "" -# Podman bridge interface name (corresponds to the network above) -# Common values: podman0, podman1, etc. -# Only relevant if podman_subnet is set -podman_subnet_iface: podman1 +# Each network should define: name, subnet, gateway +# podman_external_networks: [] +# Example: +# podman_external_networks: +# - name: immich +# subnet: 172.20.0.0/16 +# gateway: 172.20.0.1 +# - name: nextcloud +# subnet: 172.21.0.0/16 +# gateway: 172.21.0.1 diff --git a/roles/valkey/README.md b/roles/valkey/README.md index 0bbdb84..18bb140 100644 --- a/roles/valkey/README.md +++ b/roles/valkey/README.md @@ -39,46 +39,26 @@ Valkey is a high-performance key/value datastore and a drop-in replacement for R ## Role Variables -Available variables with defaults (see `defaults/main.yml`): +See `defaults/main.yml` for all available variables and their default values. +### Key Configuration Requirements + +#### Required Password + +The `valkey_admin_password` variable must be set in your inventory (min 12 characters). The role will fail if not set. + +#### ACL Users + +Service users must be registered via the `valkey_acl_users` list. See the ACL Configuration Guide section below for details. 
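+
+A minimal entry looks like this (reproduced from the Immich role's documentation in this repository; adapt the username, key patterns, and password variable to your service):
+
+```yaml
+valkey_acl_users:
+  - username: immich
+    password: "{{ immich_valkey_password }}"
+    keypattern: "immich_bull* immich_channel*"
+    commands: "&* -@dangerous +@read +@write +@pubsub +select +auth +ping +info +eval +evalsha"
+```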
+ +#### Container Access + +For containers to access Valkey, set `valkey_bind` to include the Podman gateway: ```yaml -# Bind address (localhost only for security) -valkey_bind: 127.0.0.1 - -# Port -valkey_port: 6379 - -# Authentication (REQUIRED - must be set explicitly) -# valkey_admin_password: "" # Intentionally undefined - role will fail if not set - -# ACL users (services register their users here) -valkey_acl_users: [] -# Example: -# valkey_acl_users: -# - username: immich -# password: "secretpassword" -# keypattern: "immich_bull* immich_channel*" # Space-separated patterns (template converts to ~pattern1 ~pattern2) -# commands: "&* -@dangerous +@read +@write +@pubsub +select +auth +ping +info +eval +evalsha" - -# Max memory (0 = unlimited) -valkey_maxmemory: 256mb - -# Eviction policy when max memory is reached -valkey_maxmemory_policy: allkeys-lru - -# Data directory -valkey_dir: /var/lib/valkey - -# ACL file location -valkey_acl_file: /etc/valkey/users.acl - -# Log level -valkey_loglevel: notice +valkey_bind: "127.0.0.1 {{ podman_subnet_gateway }}" ``` -**Security Note:** This role uses ACL-based authentication. You must set `valkey_admin_password` and configure service users via `valkey_acl_users`. - -**System Requirements:** This role automatically config +**System Requirements:** This role automatically configures kernel parameters (`vm.overcommit_memory=1`) and transparent hugepage settings ## Dependencies diff --git a/roles/valkey/defaults/main.yml b/roles/valkey/defaults/main.yml index 5c41e98..16fefa4 100644 --- a/roles/valkey/defaults/main.yml +++ b/roles/valkey/defaults/main.yml @@ -1,7 +1,5 @@ --- -# Valkey bind address -# Default: localhost only -# To allow container access, set to "127.0.0.1 {{ podman_subnet_gateway }}" in your inventory +# Valkey bind address(es) # Example: "127.0.0.1 10.89.0.1" valkey_bind: 127.0.0.1