chore: initial commit of docker-tools

sapient
2026-03-22 00:54:34 -07:00
commit b9f096a090
17 changed files with 4916 additions and 0 deletions

.gitignore vendored Normal file (5 lines)

@@ -0,0 +1,5 @@
.env
.env.*
*.log
node_modules/
.venv/

bash/DEBUGGING.md Executable file (201 lines)

@@ -0,0 +1,201 @@
# Debugging and Optimization Guide
This document describes the debugging and optimization features added to the docker-tools bash scripts.
## Debug Mode
Enable comprehensive debugging with the `--debug` or `-d` flag:
```bash
./docker-backup.sh --debug --all
./docker-migrate.sh --debug --service myapp --dest user@host
```
### What Debug Mode Does
1. **Enables `set -x` (xtrace)**: Shows every command executed with line numbers
2. **Function tracing**: Shows function entry points with file:line information
3. **Stack traces**: On errors, shows call stack
4. **Detailed logging**: All operations log debug information
5. **Verbose output**: Shows internal state and decisions
### Debug Output Format
```
+ [script.sh:123] function_name(): Command being executed
[DEBUG] script.sh:123 function_name()
[DEBUG] Validating stack name: 'myapp'
[DEBUG] Stack name validation passed
```
## Verbose Mode
Enable verbose logging without full xtrace:
```bash
./docker-backup.sh --verbose --all
```
Verbose mode provides:
- Function entry/exit logging
- Operation details
- State information
In short, everything except the command-level tracing of debug mode.
## Debug Logging Functions
### `log_debug()`
Logs debug messages (only shown in DEBUG or VERBOSE mode):
```bash
log_debug "Processing stack: $stack"
```
### `log_verbose()`
Logs verbose messages (only shown in VERBOSE mode):
```bash
log_verbose "Found ${#stacks[@]} stacks"
```
### `debug_trace()`
Automatically called in functions to show entry points:
- Shows file, line number, and function name
- Only active when DEBUG or VERBOSE is enabled
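
A minimal `debug_trace` can be built from bash's `BASH_SOURCE`, `BASH_LINENO`, and `FUNCNAME` arrays. The following is a hypothetical sketch of the idea, not the actual `lib/common.sh` implementation:

```bash
#!/usr/bin/env bash
# Hypothetical sketch of debug_trace; the real lib/common.sh helper may differ.
# Emits nothing unless DEBUG or VERBOSE is enabled.
debug_trace() {
    [[ "${DEBUG:-false}" == "true" || "${VERBOSE:-false}" == "true" ]] || return 0
    # BASH_SOURCE[1], BASH_LINENO[0], and FUNCNAME[1] describe the caller
    printf '[DEBUG] %s:%s %s()\n' \
        "${BASH_SOURCE[1]##*/}" "${BASH_LINENO[0]}" "${FUNCNAME[1]}" >&2
}

my_function() {
    debug_trace
    echo "doing work"
}
```

Calling `DEBUG=true my_function` then prints the trace line to stderr before the function's own output.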
## Error Handling Improvements
### Better Error Messages
- More descriptive error messages with context
- Shows what was attempted and why it failed
- Includes relevant variable values
### Exit Codes
- Consistent exit codes across scripts
- `0`: Success
- `1`: General error
- `2`: Invalid arguments
- `3`: Dependency missing
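
A caller, such as a cron wrapper, can branch on these codes. The sketch below assumes the 0/1/2/3 convention above; `handle_exit` is an illustrative name, not part of the scripts:

```bash
#!/usr/bin/env bash
# Map the documented exit codes to human-readable actions.
handle_exit() {
    case "$1" in
        0) echo "success" ;;
        1) echo "general error: check the log file" >&2 ;;
        2) echo "invalid arguments: fix the invocation" >&2 ;;
        3) echo "missing dependency: try --install-deps" >&2 ;;
        *) echo "unexpected exit code: $1" >&2 ;;
    esac
    return "$1"
}

# Typical use after a backup run:
# /usr/local/bin/docker-backup.sh --all --quiet --yes
# handle_exit $?
```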
### Error Stack Traces
In DEBUG mode, errors show call stack:
```
[ERROR] Stack directory not found: /opt/stacks/myapp
-> docker-backup.sh:248 backup_stack()
-> docker-backup.sh:510 main()
```
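
Such a stack can be walked with bash's `caller` builtin. An illustrative printer, not necessarily how the scripts implement it:

```bash
#!/usr/bin/env bash
# Print "-> file:line function()" for each frame above the failing call.
print_stack() {
    local frame=1 info
    while info=$(caller "$frame"); do
        # caller output is: "<line> <function> <file>"
        # (word splitting here means file paths with spaces are not handled)
        set -- $info
        printf '  -> %s:%s %s()\n' "${3##*/}" "$1" "$2" >&2
        frame=$((frame + 1))
    done
}
```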
## Performance Optimizations
### Reduced Redundant Operations
- Docker commands cached where appropriate
- Stack discovery optimized
- Parallel operations properly throttled
### Improved Logging
- Log file writes are non-blocking
- Fallback to /tmp if primary log location fails
- Automatic log directory creation
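
The fallback behaviour can be sketched as follows (function and variable names are illustrative; the actual helper lives in `lib/common.sh`):

```bash
#!/usr/bin/env bash
# Append a timestamped line to LOG_FILE, falling back to /tmp on failure.
LOG_FILE="${LOG_FILE:-/tmp/docwell/docwell.log}"
FALLBACK_LOG="${FALLBACK_LOG:-/tmp/docwell/fallback.log}"

write_log() {
    local line
    line="$(date '+%Y-%m-%d %H:%M:%S') $*"
    mkdir -p "$(dirname "$LOG_FILE")" 2>/dev/null || true
    if ! printf '%s\n' "$line" >> "$LOG_FILE" 2>/dev/null; then
        # Primary location unwritable: fall back to /tmp
        mkdir -p "$(dirname "$FALLBACK_LOG")" 2>/dev/null || true
        printf '%s\n' "$line" >> "$FALLBACK_LOG" 2>/dev/null || true
    fi
}
```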
### Spinner Optimization
- Spinner disabled in DEBUG mode (would interfere with output)
- Proper cleanup on exit
## Common Debugging Scenarios
### Stack Not Found
```bash
DEBUG=true ./docker-backup.sh --stack myapp
# Shows:
# [DEBUG] Discovering stacks in: /opt/stacks
# [DEBUG] Found directory: myapp
# [DEBUG] Validating stack name: 'myapp'
# [ERROR] Stack directory not found: /opt/stacks/myapp
```
### Docker Connection Issues
```bash
DEBUG=true ./docker-backup.sh --all
# Shows:
# [DEBUG] Checking Docker installation and daemon status
# [DEBUG] Docker command found: /usr/bin/docker
# [DEBUG] Docker version: Docker version 24.0.5
# [ERROR] Docker daemon not running or not accessible.
# [DEBUG] Attempting to get more details...
```
### Transfer Failures
```bash
DEBUG=true ./docker-migrate.sh --service myapp --dest user@host
# Shows:
# [DEBUG] Executing SSH command: host=host, port=22, user=user
# [DEBUG] SSH connection: user@host:22
# [DEBUG] SSH command exit code: 255
# [ERROR] SSH connection failed
```
## Environment Variables
You can also set debug mode via environment variables:
```bash
export DEBUG=true
./docker-backup.sh --all
export VERBOSE=true
./docker-migrate.sh --service myapp --dest user@host
```
## Log Files
Debug information is written to log files:
- Default: `/tmp/docwell/docwell.log`
- Script-specific: `/tmp/docwell/docker-{script}.log`
- Fallback: `/tmp/docwell/fallback.log` (if primary location fails)
Log format:
```
2025-02-18 10:30:45 [DEBUG] Validating stack name: 'myapp'
2025-02-18 10:30:45 [INFO] Starting backup for myapp
2025-02-18 10:30:46 [OK] Backup complete: myapp
```
## Best Practices
1. **Use DEBUG for troubleshooting**: When something fails unexpectedly
2. **Use VERBOSE for monitoring**: When you want to see what's happening without full trace
3. **Check log files**: Even without DEBUG, errors are logged
4. **Combine with dry-run**: `--debug --dry-run` shows what would happen without executing anything
5. **Redirect output**: debug traces go to stderr, so append `2>debug.log` to save them to a file
## Troubleshooting Tips
### Script Hangs
- Enable DEBUG to see where it's stuck
- Check for SSH timeouts or network issues
- Look for spinner processes that didn't clean up
### Permission Errors
- DEBUG shows exact commands being run
- Check if sudo is needed (shown in debug output)
- Verify file/directory permissions
### Unexpected Behavior
- Compare DEBUG output with expected behavior
- Check environment variables (shown at script start)
- Verify configuration file values
## Performance Monitoring
With VERBOSE mode, you can see:
- How many stacks were discovered
- Time spent on each operation
- Parallel job status
- Resource usage
Example:
```bash
VERBOSE=true ./docker-backup.sh --all
# Shows:
# [VERBOSE] Found 5 directories, 5 valid stacks
# [VERBOSE] Total stacks discovered: 5 (0 from docker ps)
# [VERBOSE] Processing stack 1/5: myapp
```

bash/MIGRATION_NOTES.md Executable file (165 lines)

@@ -0,0 +1,165 @@
# Migration Notes: docker-stack.sh + docker-update.sh → docker-manager.sh
## Overview
The `docker-stack.sh` and `docker-update.sh` scripts have been combined into a single `docker-manager.sh` script that provides both stack lifecycle management and update functionality.
## Why Combine?
- **Logical grouping**: Stack management and updates are closely related operations
- **Better UX**: Single entry point for all stack operations
- **Reduced duplication**: Shared functions (get_stacks, is_stack_running, etc.)
- **Consistent interface**: All stack operations in one place
## Migration Guide
### Old Commands → New Commands
#### Stack Management
```bash
# Old
docker-stack.sh --list
docker-stack.sh --start myapp
docker-stack.sh --stop myapp
docker-stack.sh --restart myapp
docker-stack.sh --status myapp
docker-stack.sh --logs myapp
# New (same commands work!)
docker-manager.sh --list
docker-manager.sh --start myapp
docker-manager.sh --stop myapp
docker-manager.sh --restart myapp
docker-manager.sh --status myapp
docker-manager.sh --logs myapp
```
#### Update Operations
```bash
# Old
docker-update.sh --check
docker-update.sh --all
docker-update.sh --stack myapp
docker-update.sh --auto
# New (slightly different flags)
docker-manager.sh --check
docker-manager.sh --update-all # was --all
docker-manager.sh --update myapp # was --stack
docker-manager.sh --auto
```
### Flag Changes
| Old (docker-update.sh) | New (docker-manager.sh) | Notes |
|------------------------|-------------------------|-------|
| `--all` | `--update-all` or `-a` | More explicit |
| `--stack STACK` | `--update STACK` or `-u` | More consistent |
### Backward Compatibility
The old scripts (`docker-stack.sh` and `docker-update.sh`) are still available and functional. They can be:
- **Kept**: For backward compatibility with existing scripts
- **Removed**: If you prefer a single unified script
- **Aliased**: Create symlinks or aliases pointing to docker-manager.sh
### Creating Aliases (Optional)
If you want to keep using the old command names:
```bash
# Create symlinks
ln -s docker-manager.sh docker-stack.sh
ln -s docker-manager.sh docker-update.sh
# Or create aliases in your shell config
alias docker-stack='docker-manager.sh'
alias docker-update='docker-manager.sh'
```
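
Note that a symlink alone does not restore the old flag names (`--all`, `--stack`). If that mattered, `docker-manager.sh` could translate flags based on its own basename; the following is a hypothetical sketch, not something the script is known to do:

```bash
#!/usr/bin/env bash
# Hypothetical flag translator for the docker-update.sh symlink.
translate_update_flags() {
    local a out=()
    for a in "$@"; do
        case "$a" in
            --all)   out+=("--update-all") ;;
            --stack) out+=("--update") ;;
            *)       out+=("$a") ;;
        esac
    done
    printf '%s\n' "${out[@]}"
}

# docker-manager.sh could then, near the top of main():
#   if [[ "$(basename "$0")" == "docker-update.sh" ]]; then
#       mapfile -t argv < <(translate_update_flags "$@")
#       set -- "${argv[@]}"
#   fi
```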
### Interactive Mode
The combined script provides a unified interactive menu:
```
Docker Stack Manager
====================
Available stacks:
1) myapp [running]
2) test-stack [stopped]
r) Restart all
s) Stop all
u) Update all # NEW: Update all stacks
c) Check for updates # NEW: Check for updates
0) Exit
```
When selecting a stack, you get both management and update options:
```
Manage: myapp
=============
Status: ● Running
Stack Management:
1) Start
2) Stop
3) Restart
Updates:
4) Update images # NEW: Update this stack
5) Check for updates # NEW: Check this stack
Other:
6) View logs
7) View compose file
0) Back
```
## Benefits
1. **Single Script**: One script to manage instead of two
2. **Unified Interface**: Consistent CLI flags and behavior
3. **Better Organization**: Related operations grouped together
4. **Less Code Duplication**: Shared helper functions
5. **Easier Maintenance**: One script to update instead of two
## Recommendations
- **New scripts**: Use `docker-manager.sh`
- **Existing scripts**: Update to use `docker-manager.sh` when convenient
- **Old scripts**: Can be removed once migration is complete
## Examples
### Combined Operations
```bash
# Check for updates, then update all if available
docker-manager.sh --check
docker-manager.sh --update-all
# Start a stack and update it
docker-manager.sh --start myapp
docker-manager.sh --update myapp
# Check status and updates in one go
docker-manager.sh --list
docker-manager.sh --check
```
### Automation Scripts
```bash
#!/bin/bash
# Combined stack management script
# List all stacks
docker-manager.sh --list --quiet
# Check for updates
if docker-manager.sh --check --quiet | grep -q "update available"; then
# Update all stacks
docker-manager.sh --update-all --yes --quiet
fi
```

bash/README.md Executable file (161 lines)

@@ -0,0 +1,161 @@
# Docker Tools - Bash Scripts
This directory contains enhanced bash scripts that match the functionality of the Go version (`docwell.go`). All scripts are version 2.6.2 and share a common library.
## Architecture
All scripts source `lib/common.sh`, which provides:
- Color output with `NO_COLOR` support
- Spinner/progress animations (braille frames)
- Unified logging (`log_info`, `log_warn`, `log_error`, `log_success`, `log_header`)
- Config file loading from `~/.config/docwell/config`
- Compose file detection, stack discovery, validation
- Cleanup traps (signal handling for Ctrl+C)
- Dry-run wrapper (`run_or_dry`)
- Bounded parallel execution
- SSH helpers
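
The dry-run wrapper pattern deserves a quick illustration. A minimal version might look like this (a sketch only; the real `run_or_dry` may log and handle output differently):

```bash
#!/usr/bin/env bash
# Describe the action and skip it when DRY_RUN=true, else execute it.
run_or_dry() {
    local desc="$1"; shift
    if [[ "${DRY_RUN:-false}" == "true" ]]; then
        echo "[DRY-RUN] Would $desc: $*"
        return 0
    fi
    "$@"
}

# Example: run_or_dry "stop myapp" docker compose -f compose.yaml down
```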
> **Note**: `docker-stack.sh` and `docker-update.sh` have been combined into `docker-manager.sh`. The old scripts are kept for backward compatibility but `docker-manager.sh` is recommended for new usage.
## Scripts Overview
### docker-backup.sh
Backup Docker stacks with compression.
**Usage:**
```bash
./docker-backup.sh --list # List stacks
./docker-backup.sh --all # Backup all stacks
./docker-backup.sh --stack myapp # Backup specific stack
./docker-backup.sh --all --quiet --yes # Non-interactive backup
./docker-backup.sh --all --dry-run # Preview without executing
```
### docker-cleanup.sh
Clean up Docker resources (containers, images, volumes, networks, build cache).
**Usage:**
```bash
./docker-cleanup.sh --containers # Remove stopped containers
./docker-cleanup.sh --all --yes # Run all cleanups
./docker-cleanup.sh --images --volumes # Remove images and volumes
./docker-cleanup.sh --all --dry-run # Preview cleanup
```
### docker-manager.sh
Combined stack management and update functionality.
**Usage:**
```bash
./docker-manager.sh --list # List all stacks
./docker-manager.sh --start myapp # Start a stack
./docker-manager.sh --stop myapp # Stop a stack
./docker-manager.sh --check # Check for updates
./docker-manager.sh --update-all # Update all stacks
./docker-manager.sh --update myapp # Update specific stack
./docker-manager.sh --auto --yes # Auto-update all stacks
```
### docker-migrate.sh
Migrate Docker services between servers.
**Usage:**
```bash
./docker-migrate.sh myapp --dest user@remote-host
./docker-migrate.sh --service myapp --dest user@remote --method rsync
./docker-migrate.sh --service myapp --dest remote --bwlimit 50 --retries 5
./docker-migrate.sh --service myapp --dest remote --dry-run
```
### docker-auto-migrate.sh
Migrate ALL stacks to a destination host.
**Usage:**
```bash
./docker-auto-migrate.sh --dest 10.0.0.2 --dest-user admin
./docker-auto-migrate.sh --dest remote-host --method rsync --dry-run
```
## Common Options
All scripts support:
- `--version` — Show version information
- `--help, -h` — Show help message
- `--quiet, -q` — Suppress non-error output
- `--yes, -y` — Auto-confirm all prompts
- `--dry-run` — Preview without executing
- `--install-deps` — Auto-install missing dependencies (requires root)
- `--log FILE` — Custom log file path
## Configuration
### Config File
Scripts load settings from `~/.config/docwell/config` (same as Go version):
```ini
StacksDir="/opt/stacks"
BackupBase="/storage/backups/docker-myhost"
LogFile="/tmp/docwell/docwell.log"
OldHost="10.0.0.10"
OldPort="22"
OldUser="admin"
BandwidthMB="50"
TransferRetries="3"
```
### Environment Variables
Config file values can be overridden by environment variables:
- `STACKS_DIR` — Stacks directory (default: `/opt/stacks`)
- `BACKUP_BASE` — Backup base directory (default: `/storage/backups/docker-$HOSTNAME`)
- `LOG_FILE` — Log file path (default: `/tmp/docwell/docwell.log`)
- `NO_COLOR` — Disable color output when set
Migration-specific:
- `OLD_HOST`, `OLD_PORT`, `OLD_USER` — Source server defaults
- `NEW_HOST`, `NEW_PORT`, `NEW_USER` — Destination server defaults
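
This precedence works because the loader only fills values that are still unset. A sketch of the idea, using the key names from the config example above (the real `load_config` may differ):

```bash
#!/usr/bin/env bash
# Load key="value" pairs, letting already-set environment variables win.
load_config_sketch() {
    local conf="${1:-$HOME/.config/docwell/config}"
    [[ -f "$conf" ]] || return 0
    local key val
    while IFS='=' read -r key val; do
        case "$key" in ''|'#'*) continue ;; esac
        val="${val%\"}"; val="${val#\"}"   # strip surrounding quotes
        case "$key" in
            StacksDir)  STACKS_DIR="${STACKS_DIR:-$val}" ;;
            BackupBase) BACKUP_BASE="${BACKUP_BASE:-$val}" ;;
            LogFile)    LOG_FILE="${LOG_FILE:-$val}" ;;
        esac
    done < "$conf"
}
```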
## Comparison with Go Version
| Feature | Bash | Go |
|---|---|---|
| CLI Flags | ✅ | ✅ |
| Interactive Mode | ✅ | ✅ |
| Spinner/Progress | ✅ | ✅ |
| Config File | ✅ | ✅ |
| Dry-Run | ✅ | ✅ |
| Signal Handling | ✅ | ✅ |
| Transfer Methods | ✅ rsync, tar, rclone | ✅ rsync, tar, rclone |
| Parallel Operations | ✅ Bounded | ✅ Full |
| Post-Migration Verify | ✅ | ✅ |
| Bandwidth Limiting | ✅ | ✅ |
| NO_COLOR Support | ✅ | ❌ |
## Examples
### Automated Backup (cron)
```bash
0 2 * * * /usr/local/bin/docker-backup.sh --all --quiet --yes
```
### Weekly Cleanup (cron)
```bash
0 2 * * 0 /usr/local/bin/docker-cleanup.sh --all --yes --quiet
```
### Update Check Script
```bash
#!/bin/bash
/usr/local/bin/docker-manager.sh --check --quiet
```
## Notes
- All scripts require Docker and Docker Compose
- Some operations may require root or sudo
- Scripts use `set -euo pipefail` for strict error handling
- Color output disabled when `NO_COLOR=1` is set or stdout is not a terminal
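
The colour handling described above boils down to a check like this (illustrative; the actual logic lives in `lib/common.sh`):

```bash
#!/usr/bin/env bash
# Disable colours when NO_COLOR is set or stdout is not a terminal.
setup_colors() {
    if [[ -n "${NO_COLOR:-}" ]] || [[ ! -t 1 ]]; then
        RED='' GREEN='' BLUE='' GRAY='' BOLD='' NC=''
    else
        RED=$'\033[31m'  GREEN=$'\033[32m' BLUE=$'\033[34m'
        GRAY=$'\033[90m' BOLD=$'\033[1m'   NC=$'\033[0m'
    fi
}
setup_colors
```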
## See Also
- Go version: `../go/docwell.go`
- Shared library: `lib/common.sh`
- Test stack: `../test-stack/`

bash/docker-auto-migrate.sh Executable file (366 lines)

@@ -0,0 +1,366 @@
#!/bin/bash
# Docker Auto-Migration Script
# Migrates ALL stacks to a specified destination host
set -euo pipefail
# Version
VERSION="2.6.2"
# Resolve script directory and source shared library
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# shellcheck source=lib/common.sh
source "$SCRIPT_DIR/lib/common.sh"
# Enable debug trace after sourcing common.sh (so DEBUG variable is available)
enable_debug_trace
# Default configuration
LOG_FILE="${LOG_FILE:-/tmp/docwell/docker-auto-migrate.log}"
DEST_HOST="${DEST_HOST:-}"
DEST_USER="${DEST_USER:-$(whoami)}"
DEST_PORT="${DEST_PORT:-22}"
REMOTE_STACKS_DIR="${REMOTE_STACKS_DIR:-/opt/stacks}"
TRANSFER_METHOD="rclone"
COMPRESSION="${COMPRESSION:-zstd}"
# Load config file overrides
load_config
parse_args() {
while [[ $# -gt 0 ]]; do
case $1 in
--version)
echo "docker-auto-migrate.sh v$VERSION"
exit 0
;;
--quiet|-q)
QUIET=true
shift
;;
--yes|-y)
YES=true
shift
;;
--dest)
DEST_HOST="$2"
shift 2
;;
--dest-user)
DEST_USER="$2"
shift 2
;;
--dest-port)
DEST_PORT="$2"
shift 2
;;
--stacks-dir)
STACKS_DIR="$2"
shift 2
;;
--remote-stacks-dir)
REMOTE_STACKS_DIR="$2"
shift 2
;;
--method)
TRANSFER_METHOD="$2"
shift 2
;;
--install-deps)
AUTO_INSTALL=true
shift
;;
--log)
LOG_FILE="$2"
shift 2
;;
--dry-run)
DRY_RUN=true
shift
;;
--debug|-d)
DEBUG=true
shift
;;
--verbose|-v)
VERBOSE=true
shift
;;
--help|-h)
show_help
exit 0
;;
*)
echo -e "${RED}Unknown option: $1${NC}" >&2
show_help
exit 1
;;
esac
done
}
show_help() {
cat << EOF
Docker Auto-Migration Script v$VERSION
Migrates ALL Docker stacks to a specified destination host.
Requires root privileges and SSH key-based authentication.
Usage: $0 [OPTIONS]
Options:
--dest HOST Destination host (required)
--dest-user USER Destination user (default: current user)
--dest-port PORT Destination SSH port (default: 22)
--stacks-dir DIR Local stacks directory (default: /opt/stacks)
  --remote-stacks-dir DIR  Remote stacks directory (default: /opt/stacks)
--method METHOD Transfer method: rclone, rsync (default: rclone)
--quiet, -q Suppress non-error output
--yes, -y Auto-confirm all prompts
--dry-run Show what would be done without actually doing it
--debug, -d Enable debug mode (verbose output + xtrace)
--verbose, -v Enable verbose logging
--install-deps Auto-install missing dependencies (requires root)
--log FILE Log file path
--version Show version information
--help, -h Show this help message
Environment Variables:
DEST_HOST Destination host
DEST_USER Destination user
DEST_PORT Destination SSH port
STACKS_DIR Local stacks directory
REMOTE_STACKS_DIR Remote stacks directory
Examples:
$0 --dest 10.0.0.2 --dest-user admin
$0 --dest 10.0.0.2 --method rsync --dry-run
$0 --dest remote-host --yes
EOF
}
check_dependencies() {
local required=("docker" "ssh")
case "$TRANSFER_METHOD" in
rclone) required+=("rclone") ;;
rsync) required+=("rsync") ;;
esac
local missing=()
for cmd in "${required[@]}"; do
command -v "$cmd" &>/dev/null || missing+=("$cmd")
done
if [[ ${#missing[@]} -gt 0 ]]; then
log_error "Missing dependencies: ${missing[*]}"
if [[ "$AUTO_INSTALL" == "true" ]]; then
install_dependencies "${missing[@]}"
return $?
fi
return 1
fi
return 0
}
stop_local_stack() {
local stack="$1"
local stack_path="$STACKS_DIR/$stack"
local compose_file
compose_file=$(get_compose_file "$stack_path") || return 1
run_or_dry "stop $stack" docker compose -f "$stack_path/$compose_file" down >/dev/null 2>&1
}
start_remote_stack() {
local stack="$1"
local remote_path="$REMOTE_STACKS_DIR/$stack"
for f in compose.yaml compose.yml docker-compose.yaml docker-compose.yml; do
if ssh -p "$DEST_PORT" -o StrictHostKeyChecking=no "$DEST_USER@$DEST_HOST" \
"test -f '$remote_path/$f'" 2>/dev/null; then
run_or_dry "start $stack on remote" \
ssh -p "$DEST_PORT" -o StrictHostKeyChecking=no "$DEST_USER@$DEST_HOST" \
"cd '$remote_path' && docker compose -f '$f' up -d" >/dev/null 2>&1
return $?
fi
done
log_error "No compose file found on destination for $stack"
return 1
}
transfer_files() {
local src="$1" dst="$2"
case "$TRANSFER_METHOD" in
rclone)
run_or_dry "rclone sync $src to $dst" \
rclone sync "$src" ":sftp,host=$DEST_HOST,user=$DEST_USER,port=$DEST_PORT:$dst" \
--transfers 16 --progress --sftp-ask-password 2>&1
;;
rsync)
run_or_dry "rsync $src to $dst" \
rsync -avz --progress --delete \
-e "ssh -p $DEST_PORT -o StrictHostKeyChecking=no" \
"$src/" "$DEST_USER@$DEST_HOST:$dst/"
;;
esac
}
migrate_volume() {
local vol="$1"
local vol_backup_dir="$2"
local comp_info
comp_info=$(get_compressor "$COMPRESSION")
local compressor="${comp_info%%:*}"
local ext="${comp_info#*:}"
log_info "Backing up volume: $vol"
if [[ "$DRY_RUN" == "true" ]]; then
log_info "[DRY-RUN] Would migrate volume $vol"
return 0
fi
# Backup locally
docker run --rm \
-v "${vol}:/data:ro" \
-v "$vol_backup_dir:/backup" \
alpine \
sh -c "tar -cf - -C /data . | $compressor > /backup/${vol}${ext}" 2>/dev/null || {
log_error "Failed to backup volume: $vol"
return 1
}
# Transfer to remote
transfer_files "$vol_backup_dir/${vol}${ext}" "/tmp/docwell_vol_migrate/"
# Restore on remote
local decompressor
case "$compressor" in
zstd) decompressor="zstd -d" ;;
gzip) decompressor="gzip -d" ;;
*) decompressor="cat" ;;
esac
ssh -p "$DEST_PORT" -o StrictHostKeyChecking=no "$DEST_USER@$DEST_HOST" \
"docker volume create '$vol' 2>/dev/null; \
docker run --rm -v '${vol}:/data' -v '/tmp/docwell_vol_migrate:/backup:ro' alpine \
sh -c 'cd /data && cat /backup/${vol}${ext} | ${decompressor} | tar -xf -'" 2>/dev/null || {
log_warn "Failed to restore volume on remote: $vol"
return 1
}
log_success "Volume $vol migrated"
}
main() {
parse_args "$@"
# Must be root
if [[ $EUID -ne 0 ]]; then
log_error "This script requires root privileges"
exit 1
fi
if [[ -z "$DEST_HOST" ]]; then
log_error "Destination host required (--dest HOST)"
show_help
exit 1
fi
if ! check_dependencies; then
exit 1
fi
# Test SSH
if ! test_ssh "$DEST_HOST" "$DEST_PORT" "$DEST_USER"; then
exit 1
fi
local stacks
mapfile -t stacks < <(get_stacks)
if [[ ${#stacks[@]} -eq 0 ]]; then
log_error "No stacks found in $STACKS_DIR"
exit 1
fi
log_header "Auto-Migrate All Stacks"
echo -e "${GRAY}DocWell Auto-Migration v$VERSION${NC}"
echo
echo -e " Source: local ($STACKS_DIR)"
echo -e " Destination: $DEST_USER@$DEST_HOST:$DEST_PORT ($REMOTE_STACKS_DIR)"
echo -e " Method: $TRANSFER_METHOD"
echo -e " Stacks: ${#stacks[@]}"
echo
if ! confirm "Proceed with migrating ALL stacks?"; then
exit 0
fi
local start_time=$SECONDS
local total=${#stacks[@]}
local failed=()
local vol_backup_dir
vol_backup_dir=$(mktemp -d /tmp/docwell_auto_migrate.XXXXXX)
register_cleanup "rm -rf '$vol_backup_dir'"
local idx=0
for stack in "${stacks[@]}"; do
idx=$((idx + 1))   # avoid ((idx++)): it returns 1 when idx is 0, tripping set -e
echo
log_separator
echo -e "${CYAN}[$idx/$total]${NC} ${BOLD}$stack${NC}"
# Step 1: Stop local stack
log_info "Stopping local stack..."
if is_stack_running "$stack"; then
stop_local_stack "$stack" || {
log_warn "Failed to stop $stack, continuing..."
}
fi
# Step 2: Transfer files
log_info "Transferring files..."
if ! transfer_files "$STACKS_DIR/$stack" "$REMOTE_STACKS_DIR/$stack"; then
log_error "Failed to transfer files for $stack"
failed+=("$stack")
continue
fi
# Step 3: Migrate volumes
local volumes
mapfile -t volumes < <(get_service_volumes "$stack" 2>/dev/null || true)
for vol in "${volumes[@]}"; do
[[ -z "$vol" ]] && continue
migrate_volume "$vol" "$vol_backup_dir" || {
log_warn "Volume migration failed for $vol"
}
done
# Step 4: Start on remote
log_info "Starting on destination..."
if ! start_remote_stack "$stack"; then
log_warn "Failed to start $stack on destination"
failed+=("$stack")
else
log_success "$stack migrated"
fi
done
echo
log_separator
local elapsed=$(( SECONDS - start_time ))
if [[ ${#failed[@]} -gt 0 ]]; then
log_warn "Failed stacks (${#failed[@]}): ${failed[*]}"
else
log_success "All stacks migrated successfully"
fi
log_info "Auto-migration completed in $(format_elapsed "$elapsed")"
}
main "$@"

bash/docker-backup.sh Executable file (602 lines)

@@ -0,0 +1,602 @@
#!/bin/bash
# Docker Stack Backup Script
# Enhanced version matching Go docwell functionality
set -euo pipefail
# Version
VERSION="2.6.2"
# Resolve script directory and source shared library
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# shellcheck source=lib/common.sh
source "$SCRIPT_DIR/lib/common.sh"
# Enable debug trace after sourcing common.sh (so DEBUG variable is available)
enable_debug_trace
# Default configuration
HOSTNAME=$(hostname 2>/dev/null | tr -cd 'a-zA-Z0-9._-' || echo "unknown")
DATE=$(date +%Y-%m-%d)
BACKUP_BASE="${BACKUP_BASE:-/storage/backups/docker-$HOSTNAME}"
BACKUP_DIR="$BACKUP_BASE/$DATE"
LOG_FILE="${LOG_FILE:-/tmp/docwell/docker-backup.log}"
# CLI flags
BACKUP_ALL=false
BACKUP_STACK=""
LIST_ONLY=false
AUTO_MODE=false
COMPRESSION="zstd"
# Parser configuration
MAX_STACK_NAME_LEN=30
# Load config file overrides
load_config
# Parse command line arguments
parse_args() {
while [[ $# -gt 0 ]]; do
case $1 in
--version)
echo "docker-backup.sh v$VERSION"
exit 0
;;
--quiet|-q)
QUIET=true
shift
;;
--yes|-y)
YES=true
shift
;;
--all|-a)
BACKUP_ALL=true
shift
;;
--stack|-s)
require_arg "$1" "${2:-}" || exit 1
BACKUP_STACK="$2"
shift 2
;;
--list|-l)
LIST_ONLY=true
shift
;;
--stacks-dir)
STACKS_DIR="$2"
shift 2
;;
--backup-base)
BACKUP_BASE="$2"
BACKUP_DIR="$BACKUP_BASE/$DATE"
shift 2
;;
--auto)
AUTO_MODE=true
QUIET=true
YES=true
shift
;;
--install-deps)
AUTO_INSTALL=true
shift
;;
--log)
LOG_FILE="$2"
shift 2
;;
--compression|-c)
COMPRESSION="$2"
shift 2
;;
--dry-run)
DRY_RUN=true
shift
;;
--debug|-d)
DEBUG=true
shift
;;
--verbose|-v)
VERBOSE=true
shift
;;
--help|-h)
show_help
exit 0
;;
*)
echo -e "${RED}Unknown option: $1${NC}" >&2
show_help
exit 1
;;
esac
done
}
show_help() {
cat << EOF
Docker Stack Backup Script v$VERSION
Usage: $0 [OPTIONS]
Options:
--all, -a Backup all stacks
--stack STACK, -s Backup specific stack by name
--list, -l List available stacks for backup
--stacks-dir DIR Stacks directory (default: /opt/stacks)
--backup-base DIR Backup base directory (default: /storage/backups/docker-\$HOSTNAME)
--log FILE Log file path (default: /tmp/docwell/docker-backup.log)
--quiet, -q Suppress non-error output
--yes, -y Auto-confirm all prompts
--auto Auto mode: backup all stacks & clean resources (for cron)
--compression METHOD Compression method: zstd or gzip (default: zstd)
--dry-run Show what would be done without actually doing it
--debug, -d Enable debug mode (verbose output + xtrace)
--verbose, -v Enable verbose logging
--install-deps Auto-install missing dependencies (requires root)
--version Show version information
--help, -h Show this help message
Examples:
$0 --list # List all stacks
$0 --all # Backup all stacks
$0 --stack myapp # Backup specific stack
$0 --all --quiet --yes # Backup all stacks non-interactively
$0 --all --dry-run # Preview backup without executing
EOF
}
# Check for required dependencies
check_dependencies() {
local required_cmds=("docker")
local optional_cmds=("zstd")
local missing=()
local missing_optional=()
for cmd in "${required_cmds[@]}"; do
if ! command -v "$cmd" &>/dev/null; then
missing+=("$cmd")
fi
done
for cmd in "${optional_cmds[@]}"; do
if ! command -v "$cmd" &>/dev/null; then
missing_optional+=("$cmd")
fi
done
if [[ ${#missing_optional[@]} -gt 0 ]]; then
log_warn "Optional dependencies missing: ${missing_optional[*]} (will use fallback)"
fi
if [[ ${#missing[@]} -gt 0 ]]; then
log_error "Missing required dependencies: ${missing[*]}"
echo -e "${BLUE}Install commands:${NC}"
command -v apt &>/dev/null && echo -e " ${GREEN}Debian/Ubuntu:${NC} sudo apt install docker.io ${missing_optional[*]}"
command -v pacman &>/dev/null && echo -e " ${GREEN}Arch Linux:${NC} sudo pacman -S docker ${missing_optional[*]}"
if [[ "$AUTO_INSTALL" == "true" ]]; then
log_info "Auto-installing dependencies..."
install_dependencies "${missing[@]}" "${missing_optional[@]}"
return $?
fi
return 1
fi
return 0
}
find_compose_file() {
local stack="$1"
get_compose_file "$STACKS_DIR/$stack"
}
stop_stack() {
local stack="$1"
local stack_path="$STACKS_DIR/$stack"
local compose_file
log_debug "Attempting to stop stack: $stack"
compose_file=$(find_compose_file "$stack") || {
log_debug "No compose file found for stack: $stack"
return 1
}
if is_stack_running "$stack"; then
log_info "Stopping $stack..."
if run_or_dry "stop $stack" docker compose -f "$stack_path/$compose_file" down >/dev/null 2>&1; then
log_debug "Stack stopped successfully: $stack"
return 0
else
log_error "Failed to stop stack: $stack"
return 1
fi
fi
log_debug "Stack was not running: $stack"
return 1
}
start_stack() {
local stack="$1"
local stack_path="$STACKS_DIR/$stack"
local compose_file
log_debug "Attempting to start stack: $stack"
compose_file=$(find_compose_file "$stack") || {
log_error "No compose file found for stack: $stack"
return 1
}
log_info "Starting $stack..."
if run_or_dry "start $stack" docker compose -f "$stack_path/$compose_file" up -d >/dev/null 2>&1; then
log_debug "Stack started successfully: $stack"
return 0
else
log_error "Failed to start stack: $stack"
return 1
fi
}
create_backup_archive() {
local stack="$1"
log_debug "Creating backup archive for: $stack"
# Temp files in BACKUP_DIR, cleaned up on failure
local temp_tar
temp_tar=$(mktemp "${BACKUP_DIR}/.${stack}.tar.XXXXXX" 2>/dev/null) || {
log_error "Failed to create temporary file in $BACKUP_DIR"
return 1
}
register_cleanup "rm -f '$temp_tar'"
log_debug "Temporary tar file: $temp_tar"
log_info "Archiving $stack..."
local comp_info
comp_info=$(get_compressor "$COMPRESSION")
local compressor="${comp_info%%:*}"
local ext="${comp_info#*:}"
log_debug "Using compressor: $compressor, extension: $ext"
if [[ "$DRY_RUN" == "true" ]]; then
log_info "[DRY-RUN] Would create: $BACKUP_DIR/docker-$stack$ext"
rm -f "$temp_tar"
return 0
fi
log_debug "Creating tar archive..."
if ! tar -cf "$temp_tar" -C "$STACKS_DIR" "$stack" 2>/dev/null; then
log_error "Failed to create tar archive for $stack"
rm -f "$temp_tar"
return 1
fi
local tar_size
tar_size=$(stat -f%z "$temp_tar" 2>/dev/null || stat -c%s "$temp_tar" 2>/dev/null || echo "unknown")
log_debug "Tar archive size: $tar_size bytes"
log_debug "Compressing archive with $compressor..."
if ! $compressor < "$temp_tar" > "$BACKUP_DIR/docker-$stack$ext" 2>/dev/null; then
log_error "Failed to compress archive for $stack"
rm -f "$temp_tar" "$BACKUP_DIR/docker-$stack$ext" 2>/dev/null
return 1
fi
local final_size
final_size=$(stat -f%z "$BACKUP_DIR/docker-$stack$ext" 2>/dev/null || stat -c%s "$BACKUP_DIR/docker-$stack$ext" 2>/dev/null || echo "unknown")
log_debug "Final backup size: $final_size bytes"
log_debug "Backup archive created successfully: $BACKUP_DIR/docker-$stack$ext"
rm -f "$temp_tar"
return 0
}
backup_stack() {
local stack="$1"
log_debug "Starting backup for stack: $stack"
if ! validate_stack_name "$stack"; then
log_error "Invalid stack name: $stack"
return 1
fi
local stack_path="$STACKS_DIR/$stack"
if [[ ! -d "$stack_path" ]]; then
log_error "Stack directory not found: $stack_path"
return 1
fi
log_debug "Stack path verified: $stack_path"
# Create backup directory
if ! mkdir -p "$BACKUP_DIR"; then
log_error "Failed to create backup directory: $BACKUP_DIR"
return 1
fi
log_debug "Backup directory ready: $BACKUP_DIR"
local was_running=false
if stop_stack "$stack"; then
was_running=true
log_debug "Stack was running, stopped for backup"
else
log_debug "Stack was not running"
fi
if create_backup_archive "$stack"; then
log_success "Backup complete: $stack"
if [[ "$was_running" == "true" ]]; then
log_debug "Restarting stack: $stack"
start_stack "$stack" || log_warn "Failed to restart $stack"
fi
return 0
else
log_error "Backup failed: $stack"
if [[ "$was_running" == "true" ]]; then
log_debug "Attempting to restart stack after failed backup: $stack"
start_stack "$stack" || log_warn "Failed to restart $stack"
fi
return 1
fi
}
list_stacks() {
show_spinner "Loading stacks..."
local stacks
mapfile -t stacks < <(get_stacks)
hide_spinner
if [[ ${#stacks[@]} -eq 0 ]]; then
log_info "No stacks found in $STACKS_DIR"
return 1
fi
for stack in "${stacks[@]}"; do
local status="stopped"
local size
size=$(get_stack_size "$stack")
if is_stack_running "$stack"; then
status="running"
fi
echo -e "$stack\t$status\t$size"
done
}
interactive_backup() {
local stacks
show_spinner "Loading stacks..."
mapfile -t stacks < <(get_stacks)
hide_spinner
if [[ ${#stacks[@]} -eq 0 ]]; then
log_error "No stacks found in $STACKS_DIR"
exit 1
fi
log_header "Backup Docker Stacks"
echo -e "${GRAY}DocWell Backup v$VERSION${NC}"
echo
echo "Available stacks:"
echo -e " ${BOLD}0${NC}) All stacks ${GRAY}[Backup everything]${NC}"
for i in "${!stacks[@]}"; do
local stack="${stacks[$i]}"
local status="●"
local color="$GRAY"
if is_stack_running "$stack"; then
color="$GREEN"
fi
local size
size=$(get_stack_size "$stack")
echo -e " ${BOLD}$((i+1))${NC}) ${color}${status}${NC} ${stack:0:$MAX_STACK_NAME_LEN} ${GRAY}[${size}]${NC}"
done
echo
read -rp "Enter selection (0 for all, comma-separated numbers): " selection
log_info "Starting backup on $HOSTNAME"
local start_time=$SECONDS
if [[ "$selection" == "0" ]]; then
_backup_stacks_parallel "${stacks[@]}"
else
IFS=',' read -ra SELECTIONS <<< "$selection"
for sel in "${SELECTIONS[@]}"; do
sel=$(echo "$sel" | xargs)
if [[ ! "$sel" =~ ^[0-9]+$ ]]; then
log_warn "Invalid selection: $sel"
continue
fi
if [[ "$sel" -ge 1 ]] && [[ "$sel" -le "${#stacks[@]}" ]]; then
local idx=$((sel-1))
backup_stack "${stacks[$idx]}" || true
else
log_warn "Selection out of range: $sel"
fi
done
fi
local elapsed=$(( SECONDS - start_time ))
log_info "Backup completed in $(format_elapsed "$elapsed")"
}
_backup_stacks_parallel() {
local stacks=("$@")
local total=${#stacks[@]}
local fail_dir
fail_dir=$(mktemp -d /tmp/backup_fail.XXXXXX)
register_cleanup "rm -rf '$fail_dir'"
log_info "Starting backups for $total stack(s)..."
local idx=0
for stack in "${stacks[@]}"; do
idx=$((idx + 1))
echo -e "${CYAN}[$idx/$total]${NC} Backing up $stack..."
parallel_throttle "$MAX_PARALLEL"
(
if ! backup_stack "$stack"; then
touch "$fail_dir/$stack"
fi
) &
_parallel_pids+=($!)
done
parallel_wait
# Collect failures
local failed_stacks=()
for stack in "${stacks[@]}"; do
if [[ -f "$fail_dir/$stack" ]]; then
failed_stacks+=("$stack")
fi
done
if [[ ${#failed_stacks[@]} -gt 0 ]]; then
log_warn "Failed to backup ${#failed_stacks[@]} stack(s): ${failed_stacks[*]}"
else
log_success "All stacks backed up successfully"
fi
}
auto_mode() {
log_info "Starting auto mode on $HOSTNAME"
local start_time=$SECONDS
# Backup all stacks
local stacks
mapfile -t stacks < <(get_stacks)
if [[ ${#stacks[@]} -eq 0 ]]; then
log_warn "No stacks found for backup"
else
log_info "Backing up ${#stacks[@]} stack(s)..."
local failed_count=0
local idx=0
for stack in "${stacks[@]}"; do
idx=$((idx + 1))
[[ "$QUIET" == "false" ]] && echo -e "${CYAN}[$idx/${#stacks[@]}]${NC} $stack"
if ! backup_stack "$stack"; then
failed_count=$((failed_count + 1))
fi
done
if [[ $failed_count -gt 0 ]]; then
log_warn "Failed to backup $failed_count stack(s)"
else
log_success "All stacks backed up successfully"
fi
fi
# Cleanup: stopped containers
log_info "Cleaning up stopped containers..."
if run_or_dry "prune containers" docker container prune -f > /dev/null 2>&1; then
log_success "Stopped containers cleaned"
else
log_warn "Failed to clean stopped containers"
fi
# Cleanup: dangling images
log_info "Cleaning up dangling images..."
if run_or_dry "prune images" docker image prune -f > /dev/null 2>&1; then
log_success "Dangling images cleaned"
else
log_warn "Failed to clean dangling images"
fi
local elapsed=$(( SECONDS - start_time ))
log_info "Auto mode completed in $(format_elapsed "$elapsed")"
}
main() {
parse_args "$@"
log_debug "Starting docker-backup.sh v$VERSION"
log_debug "Configuration: STACKS_DIR=$STACKS_DIR, BACKUP_BASE=$BACKUP_BASE"
log_debug "Flags: QUIET=$QUIET, YES=$YES, DRY_RUN=$DRY_RUN, DEBUG=$DEBUG, VERBOSE=$VERBOSE"
# Check dependencies first
if ! check_dependencies; then
log_error "Dependency check failed"
exit 1
fi
# Validation of critical dirs
if [[ ! -d "$STACKS_DIR" ]]; then
log_error "Stacks directory does not exist: $STACKS_DIR"
log_debug "Attempting to create stacks directory..."
if mkdir -p "$STACKS_DIR" 2>/dev/null; then
log_info "Created stacks directory: $STACKS_DIR"
else
log_error "Cannot create stacks directory: $STACKS_DIR"
exit 1
fi
fi
# Check docker connectivity
if ! docker info >/dev/null 2>&1; then
log_debug "Docker info check failed, checking permissions..."
if [[ "$EUID" -ne 0 ]]; then
if ! confirm "Docker socket not accessible. Run with sudo?"; then
exit 1
fi
log_debug "Re-executing with sudo..."
exec sudo "$SCRIPT_DIR/$(basename "${BASH_SOURCE[0]}")" "$@"
else
log_error "Cannot connect to Docker daemon even as root."
exit 1
fi
fi
# Handle auto mode
if [[ "$AUTO_MODE" == "true" ]]; then
auto_mode
exit 0
fi
# Handle list only
if [[ "$LIST_ONLY" == "true" ]]; then
list_stacks
exit 0
fi
# Handle specific stack backup
if [[ -n "$BACKUP_STACK" ]]; then
if ! validate_stack_name "$BACKUP_STACK"; then
exit 1
fi
local start_time=$SECONDS
local rc=0
backup_stack "$BACKUP_STACK" || rc=$?
local elapsed=$(( SECONDS - start_time ))
log_info "Completed in $(format_elapsed "$elapsed")"
exit $rc
fi
# Handle backup all
if [[ "$BACKUP_ALL" == "true" ]]; then
local stacks
mapfile -t stacks < <(get_stacks)
if [[ ${#stacks[@]} -eq 0 ]]; then
log_error "No stacks found"
exit 1
fi
local start_time=$SECONDS
_backup_stacks_parallel "${stacks[@]}"
local elapsed=$(( SECONDS - start_time ))
log_info "Completed in $(format_elapsed "$elapsed")"
exit 0
fi
# Interactive mode
interactive_backup
}
main "$@"

536
bash/docker-cleanup.sh Executable file
View File

@@ -0,0 +1,536 @@
#!/bin/bash
# Docker Cleanup Script
# Enhanced version matching Go docwell functionality
set -euo pipefail
VERSION="2.6.2"
# Resolve script directory and source shared library
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# shellcheck source=lib/common.sh
source "$SCRIPT_DIR/lib/common.sh"
# Enable debug trace after sourcing common.sh (so DEBUG variable is available)
enable_debug_trace
# CLI flags
CLEANUP_CONTAINERS=false
CLEANUP_IMAGES=false
CLEANUP_VOLUMES=false
CLEANUP_NETWORKS=false
CLEANUP_BUILDCACHE=false
CLEANUP_SYSTEM=false
CLEANUP_PACKAGES=false
CLEANUP_ALL=false
LOG_FILE="${LOG_FILE:-/tmp/docwell/docker-cleanup.log}"
# Load config file overrides
load_config
parse_args() {
while [[ $# -gt 0 ]]; do
case $1 in
--version)
echo "docker-cleanup.sh v$VERSION"
exit 0
;;
--quiet|-q)
QUIET=true
shift
;;
--yes|-y)
YES=true
shift
;;
--containers)
CLEANUP_CONTAINERS=true
shift
;;
--images)
CLEANUP_IMAGES=true
shift
;;
--volumes)
CLEANUP_VOLUMES=true
shift
;;
--networks)
CLEANUP_NETWORKS=true
shift
;;
--buildcache)
CLEANUP_BUILDCACHE=true
shift
;;
--system)
CLEANUP_SYSTEM=true
shift
;;
--packages)
CLEANUP_PACKAGES=true
shift
;;
--all)
CLEANUP_ALL=true
shift
;;
--install-deps)
AUTO_INSTALL=true
shift
;;
--log)
LOG_FILE="$2"
shift 2
;;
--dry-run)
DRY_RUN=true
shift
;;
--debug|-d)
DEBUG=true
shift
;;
--verbose|-v)
VERBOSE=true
shift
;;
--help|-h)
show_help
exit 0
;;
*)
echo -e "${RED}Unknown option: $1${NC}" >&2
show_help
exit 1
;;
esac
done
}
show_help() {
cat << EOF
Docker Cleanup Script v$VERSION
Usage: $0 [OPTIONS]
Options:
--containers Remove stopped containers
--images Remove unused images
--volumes Remove unused volumes (DATA LOSS WARNING)
--networks Remove unused networks
--buildcache Clear build cache
--system System-wide cleanup
--packages Clean OS package cache
--all Run all cleanup operations
--install-deps Auto-install missing dependencies (requires root)
--log FILE Log file path (default: /tmp/docwell/docker-cleanup.log)
--dry-run Show what would be done without actually doing it
--debug, -d Enable debug mode (verbose output + xtrace)
--verbose, -v Enable verbose logging
--quiet, -q Suppress non-error output
--yes, -y Auto-confirm all prompts
--version Show version information
--help, -h Show this help message
Examples:
$0 --containers # Remove stopped containers
$0 --all --yes # Run all cleanups non-interactively
$0 --images --volumes # Remove images and volumes
$0 --all --dry-run # Preview cleanup without executing
EOF
}
show_docker_usage() {
log_header "Docker Disk Usage"
docker system df -v 2>/dev/null || {
log_error "Docker not running or not accessible"
exit 1
}
echo
}
cleanup_containers() {
log_header "Container Cleanup"
local stopped_count
stopped_count=$(docker ps -aq --filter status=exited 2>/dev/null | wc -l || true)
if [[ "$stopped_count" == "0" ]] || [[ -z "$stopped_count" ]]; then
log_info "No stopped containers found"
echo
return 0
fi
if [[ "$QUIET" == "false" ]]; then
echo -e "${YELLOW}Stopped containers to remove ($stopped_count):${NC}"
docker ps -a --filter status=exited --format "table {{.Names}}\t{{.Image}}\t{{.Status}}" 2>/dev/null || true
echo
fi
if ! confirm "Remove stopped containers?"; then
log_info "Skipping container cleanup"
echo
return 0
fi
if run_or_dry "prune stopped containers" docker container prune -f &>/dev/null; then
log_success "Removed stopped containers"
else
log_error "Failed to remove containers"
return 1
fi
echo
}
cleanup_images() {
local remove_all="${1:-}"
log_header "Image Cleanup"
local dangling_count
dangling_count=$(docker images -f "dangling=true" -q 2>/dev/null | wc -l || true)
if [[ "$dangling_count" != "0" ]] && [[ -n "$dangling_count" ]]; then
if [[ "$QUIET" == "false" ]]; then
echo -e "${YELLOW}Dangling images to remove ($dangling_count):${NC}"
docker images -f "dangling=true" --format "table {{.Repository}}\t{{.Tag}}\t{{.Size}}" 2>/dev/null || true
echo
fi
if ! confirm "Remove dangling images?"; then
log_info "Skipping dangling image cleanup"
echo
return 0
fi
if run_or_dry "prune dangling images" docker image prune -f &>/dev/null; then
log_success "Removed dangling images"
else
log_error "Failed to remove dangling images"
return 1
fi
else
log_info "No dangling images found"
fi
if [[ "$QUIET" == "false" ]] && [[ "$remove_all" != "--all" ]]; then
local unused_count
unused_count=$(docker images -q 2>/dev/null | wc -l || true)
if [[ "$unused_count" != "0" ]]; then
echo -e "${YELLOW}Images on this host (showing first 20):${NC}"
docker images --format "table {{.Repository}}\t{{.Tag}}\t{{.Size}}" 2>/dev/null | head -20 || true
echo
fi
fi
if [[ "$remove_all" != "--all" ]]; then
if ! confirm "Remove ALL unused images?"; then
log_info "Skipping unused image cleanup"
echo
return 0
fi
fi
if run_or_dry "prune all unused images" docker image prune -a -f &>/dev/null; then
log_success "Removed all unused images"
else
log_error "Failed to remove unused images"
return 1
fi
echo
}
cleanup_volumes() {
log_header "Volume Cleanup"
local unused_count
unused_count=$(docker volume ls -q --filter dangling=true 2>/dev/null | wc -l || true)
if [[ "$unused_count" == "0" ]] || [[ -z "$unused_count" ]]; then
log_info "No unused volumes found"
echo
return 0
fi
if [[ "$QUIET" == "false" ]]; then
echo -e "${YELLOW}Unused volumes to remove ($unused_count):${NC}"
docker volume ls --filter dangling=true --format "table {{.Name}}\t{{.Driver}}" 2>/dev/null || true
echo
fi
log_warn "This will permanently delete volume data!"
if ! confirm "Remove unused volumes?"; then
log_info "Skipping volume cleanup"
echo
return 0
fi
if run_or_dry "prune unused volumes" docker volume prune -f &>/dev/null; then
log_success "Removed unused volumes"
else
log_error "Failed to remove volumes"
return 1
fi
echo
}
cleanup_networks() {
log_header "Network Cleanup"
if [[ "$QUIET" == "false" ]]; then
echo -e "${YELLOW}Unused networks:${NC}"
docker network ls --format "table {{.Name}}\t{{.Driver}}\t{{.Scope}}"
echo
fi
if ! confirm "Remove unused networks?"; then
log_info "Skipping network cleanup"
echo
return 0
fi
if run_or_dry "prune unused networks" docker network prune -f &>/dev/null; then
log_success "Removed unused networks"
else
log_error "Failed to remove networks"
return 1
fi
echo
}
cleanup_build_cache() {
log_header "Build Cache Cleanup"
if [[ "$QUIET" == "false" ]]; then
echo -e "${YELLOW}Build cache usage:${NC}"
docker system df 2>/dev/null | grep "Build Cache" || echo "No build cache info available"
echo
fi
if ! confirm "Clear build cache?"; then
log_info "Skipping build cache cleanup"
echo
return 0
fi
if run_or_dry "prune build cache" docker builder prune -f &>/dev/null; then
log_success "Cleared build cache"
else
log_error "Failed to clear build cache"
return 1
fi
echo
}
cleanup_system() {
log_header "System-wide Cleanup"
log_warn "This will remove:"
echo "- All stopped containers"
echo "- All networks not used by at least one container"
echo "- All dangling images"
echo "- All dangling build cache"
echo
if ! confirm "Perform system cleanup?"; then
log_info "Skipping system cleanup"
echo
return 0
fi
if ! run_or_dry "system prune" docker system prune -f &>/dev/null; then
log_error "Failed to prune system"
return 1
fi
log_success "System cleanup completed"
if confirm "Also remove unused images?"; then
if run_or_dry "aggressive system prune" docker system prune -a -f &>/dev/null; then
log_success "Aggressive cleanup completed"
else
log_error "Failed to prune system with images"
return 1
fi
fi
echo
}
cleanup_packages() {
log_header "Package Manager Cleanup"
if ! check_root; then
log_error "sudo privileges required for package cleanup"
echo
return 1
fi
if command -v pacman &> /dev/null; then
log_info "Detected Arch Linux"
local orphaned
orphaned=$(pacman -Qtdq 2>/dev/null || true)
if [[ -n "$orphaned" ]]; then
if [[ "$QUIET" == "false" ]]; then
echo -e "${YELLOW}Orphaned packages:${NC}"
echo "$orphaned"
echo
fi
if confirm "Remove orphaned packages?"; then
local orphaned_packages=()
while IFS= read -r pkg; do
[[ -n "$pkg" ]] && orphaned_packages+=("$pkg")
done < <(echo "$orphaned")
if (( ${#orphaned_packages[@]} > 0 )); then
if run_or_dry "remove orphaned packages" sudo pacman -Rns "${orphaned_packages[@]}" &>/dev/null; then
log_success "Orphaned packages removed"
else
log_warn "Failed to remove orphaned packages"
fi
fi
fi
else
log_info "No orphaned packages found"
fi
if confirm "Clear package cache?"; then
if run_or_dry "clear package cache" sudo pacman -Scc --noconfirm &>/dev/null; then
log_success "Package cache cleared"
else
log_error "Failed to clear package cache"
return 1
fi
fi
elif command -v apt &> /dev/null; then
log_info "Detected Debian/Ubuntu"
if confirm "Remove unused packages?"; then
if run_or_dry "autoremove packages" sudo apt autoremove -y &>/dev/null; then
log_success "Removed unused packages"
else
log_error "Failed to remove unused packages"
return 1
fi
fi
if confirm "Clean package cache?"; then
if run_or_dry "clean package cache" sudo apt autoclean &>/dev/null; then
log_success "Cache cleaned"
else
log_error "Failed to clean cache"
return 1
fi
fi
else
log_info "No supported package manager found"
fi
echo
}
cleanup_all() {
show_docker_usage
cleanup_containers
cleanup_images
cleanup_networks
cleanup_build_cache
if confirm "Also remove unused volumes? (DATA LOSS WARNING)"; then
cleanup_volumes
fi
if confirm "Also remove ALL unused images?"; then
cleanup_images --all
fi
if confirm "Also clean package cache?"; then
cleanup_packages
fi
}
main_menu() {
while true; do
log_header "Docker Cleanup Menu"
echo -e "${GRAY}DocWell Cleanup v$VERSION${NC}"
echo
echo "1) Show Docker disk usage"
echo "2) Clean containers (stopped)"
echo "3) Clean images (dangling/unused)"
echo "4) Clean volumes (unused)"
echo "5) Clean networks (unused)"
echo "6) Clean build cache"
echo "7) System-wide cleanup"
echo "8) Package manager cleanup"
echo "9) Run all cleanups"
echo "0) Exit"
echo
read -rp "Select option [0-9]: " choice
if [[ ! "$choice" =~ ^[0-9]$ ]]; then
log_error "Invalid option. Please enter a number between 0 and 9."
echo
read -rp "Press Enter to continue..."
clear_screen
continue
fi
case $choice in
1) show_docker_usage ;;
2) cleanup_containers ;;
3) cleanup_images ;;
4) cleanup_volumes ;;
5) cleanup_networks ;;
6) cleanup_build_cache ;;
7) cleanup_system ;;
8) cleanup_packages ;;
9) cleanup_all ;;
0)
log_info "Exiting..."
exit 0
;;
*)
log_error "Invalid option. Please try again."
;;
esac
echo
read -rp "Press Enter to continue..."
clear_screen
done
}
main() {
parse_args "$@"
check_docker || exit 1
# Handle CLI flags
if [[ "$CLEANUP_ALL" == "true" ]]; then
cleanup_all
exit 0
fi
[[ "$CLEANUP_CONTAINERS" == "true" ]] && cleanup_containers
[[ "$CLEANUP_IMAGES" == "true" ]] && cleanup_images
[[ "$CLEANUP_VOLUMES" == "true" ]] && cleanup_volumes
[[ "$CLEANUP_NETWORKS" == "true" ]] && cleanup_networks
[[ "$CLEANUP_BUILDCACHE" == "true" ]] && cleanup_build_cache
[[ "$CLEANUP_SYSTEM" == "true" ]] && cleanup_system
[[ "$CLEANUP_PACKAGES" == "true" ]] && cleanup_packages
# If no flags provided, run interactive menu
if [[ "$CLEANUP_CONTAINERS" == "false" ]] && \
[[ "$CLEANUP_IMAGES" == "false" ]] && \
[[ "$CLEANUP_VOLUMES" == "false" ]] && \
[[ "$CLEANUP_NETWORKS" == "false" ]] && \
[[ "$CLEANUP_BUILDCACHE" == "false" ]] && \
[[ "$CLEANUP_SYSTEM" == "false" ]] && \
[[ "$CLEANUP_PACKAGES" == "false" ]]; then
clear_screen
main_menu
fi
}
main "$@"

746
bash/docker-manager.sh Executable file
View File

@@ -0,0 +1,746 @@
#!/bin/bash
# Docker Stack Manager Script
# Combines stack management and update functionality
set -euo pipefail
# Version
VERSION="2.6.2"
# Resolve script directory and source shared library
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# shellcheck source=lib/common.sh
source "$SCRIPT_DIR/lib/common.sh"
# Enable debug trace after sourcing common.sh (so DEBUG variable is available)
enable_debug_trace
# Default configuration
LOG_FILE="${LOG_FILE:-/tmp/docwell/docker-manager.log}"
# CLI flags
LIST_ONLY=false
START_STACK=""
STOP_STACK=""
RESTART_STACK=""
STATUS_STACK=""
LOGS_STACK=""
CHECK_UPDATES=false
UPDATE_ALL=false
UPDATE_STACK=""
AUTO_UPDATE=false
# Load config file overrides
load_config
parse_args() {
while [[ $# -gt 0 ]]; do
case $1 in
--version)
echo "docker-manager.sh v$VERSION"
exit 0
;;
--quiet|-q)
QUIET=true
shift
;;
--yes|-y)
YES=true
shift
;;
--list|-l)
LIST_ONLY=true
shift
;;
--start)
START_STACK="$2"
shift 2
;;
--stop)
STOP_STACK="$2"
shift 2
;;
--restart)
RESTART_STACK="$2"
shift 2
;;
--status)
STATUS_STACK="$2"
shift 2
;;
--logs)
LOGS_STACK="$2"
shift 2
;;
--check)
CHECK_UPDATES=true
shift
;;
--update-all|-a)
UPDATE_ALL=true
shift
;;
--update|-u)
UPDATE_STACK="$2"
shift 2
;;
--auto)
AUTO_UPDATE=true
shift
;;
--install-deps)
AUTO_INSTALL=true
shift
;;
--log)
LOG_FILE="$2"
shift 2
;;
--dry-run)
DRY_RUN=true
shift
;;
--debug|-d)
DEBUG=true
shift
;;
--verbose|-v)
VERBOSE=true
shift
;;
--help|-h)
show_help
exit 0
;;
*)
echo -e "${RED}Unknown option: $1${NC}" >&2
show_help
exit 1
;;
esac
done
}
show_help() {
cat << EOF
Docker Stack Manager Script v$VERSION
Combines stack lifecycle management and update functionality.
Usage: $0 [OPTIONS]
Stack Management:
--list, -l List all stacks and their status
--start STACK Start a specific stack
--stop STACK Stop a specific stack
--restart STACK Restart a specific stack
--status STACK Get status of a specific stack
--logs STACK View logs for a specific stack
Update Operations:
--check Check for available image updates
--update-all, -a Update all stacks
--update STACK, -u Update a specific stack
  --auto               Auto-update mode (updates only stacks with newer images)
General Options:
--quiet, -q Suppress non-error output
--yes, -y Auto-confirm all prompts
--dry-run Show what would be done without actually doing it
--debug, -d Enable debug mode (verbose output + xtrace)
--verbose, -v Enable verbose logging
--install-deps Auto-install missing dependencies (requires root)
--log FILE Log file path (default: /tmp/docwell/docker-manager.log)
--version Show version information
--help, -h Show this help message
Examples:
$0 --list # List all stacks
$0 --start myapp # Start a stack
$0 --check # Check for updates
$0 --update-all # Update all stacks
$0 --update myapp # Update specific stack
$0 --auto --yes # Auto-update all stacks
$0 --update-all --dry-run # Preview update without executing
EOF
}
# ─── Stack Management Functions ──────────────────────────────────────────────
list_stacks_display() {
show_spinner "Loading stacks..."
local stacks
mapfile -t stacks < <(get_stacks)
hide_spinner
if [[ ${#stacks[@]} -eq 0 ]]; then
log_info "No stacks found"
return 1
fi
for stack in "${stacks[@]}"; do
local status="stopped"
if is_stack_running "$stack"; then
status="running"
fi
echo -e "$stack\t$status"
done
}
start_stack_cmd() {
local stack="$1"
local stack_path="$STACKS_DIR/$stack"
local compose_file
compose_file=$(get_compose_file "$stack_path") || {
log_error "No compose file found"
return 1
}
if run_or_dry "start $stack" docker compose -f "$stack_path/$compose_file" up -d >/dev/null 2>&1; then
log_success "Started $stack"
return 0
else
log_error "Failed to start $stack"
return 1
fi
}
stop_stack_cmd() {
local stack="$1"
local stack_path="$STACKS_DIR/$stack"
local compose_file
compose_file=$(get_compose_file "$stack_path") || {
log_error "No compose file found"
return 1
}
if run_or_dry "stop $stack" docker compose -f "$stack_path/$compose_file" down >/dev/null 2>&1; then
log_success "Stopped $stack"
return 0
else
log_error "Failed to stop $stack"
return 1
fi
}
restart_stack_cmd() {
local stack="$1"
local stack_path="$STACKS_DIR/$stack"
local compose_file
compose_file=$(get_compose_file "$stack_path") || {
log_error "No compose file found"
return 1
}
if run_or_dry "restart $stack" docker compose -f "$stack_path/$compose_file" restart >/dev/null 2>&1; then
log_success "Restarted $stack"
return 0
else
log_error "Failed to restart $stack"
return 1
fi
}
get_stack_status_cmd() {
local stack="$1"
if is_stack_running "$stack"; then
echo "running"
else
echo "stopped"
fi
}
view_logs() {
local stack="$1"
local stack_path="$STACKS_DIR/$stack"
local compose_file
compose_file=$(get_compose_file "$stack_path") || {
log_error "No compose file found"
return 1
}
docker compose -f "$stack_path/$compose_file" logs -f
}
# ─── Update Functions ────────────────────────────────────────────────────────
get_stack_images() {
local stack_path="$1"
local compose_file
compose_file=$(get_compose_file "$stack_path") || return 1
docker compose -f "$stack_path/$compose_file" config --images 2>/dev/null | grep -v '^$' || true
}
get_image_digest() {
local image="$1"
docker inspect --format='{{.Id}}' "$image" 2>/dev/null || echo ""
}
get_remote_digest() {
local image="$1"
local manifest_output
manifest_output=$(docker manifest inspect "$image" 2>/dev/null) || true
if [[ -n "$manifest_output" ]]; then
# Check for a multi-platform index (Docker manifest list or OCI image index)
if echo "$manifest_output" | grep -qE '"mediaType"[[:space:]]*:[[:space:]]*"application/vnd\.(docker\.distribution\.manifest\.list\.v2|oci\.image\.index\.v1)\+json"'; then
local platform_digest
platform_digest=$(echo "$manifest_output" | \
grep -A 20 '"platform"' | \
grep -B 5 -E '"architecture"[[:space:]]*:[[:space:]]*"(amd64|x86_64)"' | \
grep '"digest"' | head -1 | cut -d'"' -f4)
if [[ -z "$platform_digest" ]]; then
platform_digest=$(echo "$manifest_output" | \
grep -A 10 '"platform"' | \
grep '"digest"' | head -1 | cut -d'"' -f4)
fi
if [[ -n "$platform_digest" ]]; then
local platform_manifest
platform_manifest=$(docker manifest inspect "$image@$platform_digest" 2>/dev/null)
if [[ -n "$platform_manifest" ]]; then
local config_digest
config_digest=$(echo "$platform_manifest" | \
grep -A 5 '"config"' | \
grep '"digest"' | head -1 | cut -d'"' -f4)
if [[ -n "$config_digest" ]]; then
echo "$config_digest"
return 0
fi
fi
fi
else
local config_digest
config_digest=$(echo "$manifest_output" | \
grep -A 5 '"config"' | \
grep '"digest"' | head -1 | cut -d'"' -f4)
if [[ -n "$config_digest" ]]; then
echo "$config_digest"
return 0
fi
fi
fi
return 1
}
check_stack_updates_display() {
local stack_path="$1"
local stack_name="$2"
local images
mapfile -t images < <(get_stack_images "$stack_path")
local has_updates=false
local temp_dir
temp_dir=$(mktemp -d /tmp/docwell_update_check.XXXXXX)
register_cleanup "rm -rf '$temp_dir'"
local pids=()
local image_index=0
for image in "${images[@]}"; do
[[ -z "$image" ]] && continue
(
local output_file="$temp_dir/image_${image_index}.out"
local status_file="$temp_dir/image_${image_index}.status"
local local_digest
local_digest=$(get_image_digest "$image")
if [[ -z "$local_digest" ]]; then
echo -e " ${YELLOW}?${NC} $image ${GRAY}[not present locally]${NC}" > "$output_file"
echo "has_update" > "$status_file"
exit 0
fi
local remote_digest
remote_digest=$(get_remote_digest "$image") || true
local used_manifest=true
if [[ -z "$remote_digest" ]]; then
used_manifest=false
if ! docker pull "$image" >/dev/null 2>&1; then
echo -e " ${RED}✗${NC} $image ${GRAY}[check failed]${NC}" > "$output_file"
exit 0
fi
remote_digest=$(get_image_digest "$image")
fi
if [[ -n "$local_digest" && -n "$remote_digest" && "$local_digest" != "$remote_digest" ]]; then
echo -e " ${YELLOW}↑${NC} $image ${GRAY}[update available]${NC}" > "$output_file"
echo "has_update" > "$status_file"
else
local method_str=""
[[ "$used_manifest" == "true" ]] && method_str=" [manifest]"
echo -e " ${GREEN}✓${NC} $image ${GRAY}[up to date${method_str}]${NC}" > "$output_file"
fi
) &
pids+=($!)
image_index=$((image_index + 1))
done
for pid in "${pids[@]}"; do
wait "$pid" 2>/dev/null || true
done
for ((i=0; i<image_index; i++)); do
local output_file="$temp_dir/image_${i}.out"
local status_file="$temp_dir/image_${i}.status"
[[ -f "$output_file" ]] && cat "$output_file"
if [[ -f "$status_file" ]] && [[ "$(cat "$status_file")" == "has_update" ]]; then
has_updates=true
fi
done
[[ "$has_updates" == "true" ]]
}
check_for_updates() {
show_spinner "Scanning stacks for available updates..."
local stacks
mapfile -t stacks < <(get_stacks)
hide_spinner
local updates_available=()
local total=${#stacks[@]}
local idx=0
for stack in "${stacks[@]}"; do
idx=$((idx + 1))
local stack_path="$STACKS_DIR/$stack"
if ! has_compose_file "$stack_path"; then
continue
fi
echo -e "${CYAN}[$idx/$total]${NC} ${BOLD}$stack:${NC}"
if check_stack_updates_display "$stack_path" "$stack"; then
updates_available+=("$stack")
fi
done
echo
if [[ ${#updates_available[@]} -gt 0 ]]; then
log_warn "${#updates_available[@]} stack(s) have updates available:"
for stack in "${updates_available[@]}"; do
echo " - $stack"
done
else
log_success "All stacks are up to date"
fi
}
update_stack_cmd() {
local stack="$1"
local stack_path="$STACKS_DIR/$stack"
if ! has_compose_file "$stack_path"; then
log_warn "No compose file found for $stack"
return 1
fi
local compose_file
compose_file=$(get_compose_file "$stack_path")
local was_running=false
if is_stack_running "$stack"; then
was_running=true
fi
log_info "Updating $stack..."
if ! run_or_dry "pull images for $stack" docker compose -f "$stack_path/$compose_file" pull >/dev/null 2>&1; then
log_warn "Failed to pull images for $stack"
return 1
fi
if [[ "$was_running" == "true" ]]; then
if ! run_or_dry "recreate $stack" docker compose -f "$stack_path/$compose_file" up -d --force-recreate >/dev/null 2>&1; then
log_error "Failed to recreate containers for $stack"
log_warn "Images pulled but containers not recreated"
return 1
fi
log_success "Updated $stack"
else
log_info "Stack was stopped, not starting $stack"
fi
return 0
}
update_all_stacks() {
log_warn "This will pull latest images and recreate all containers"
if ! confirm "Continue?"; then
return 1
fi
local stacks
mapfile -t stacks < <(get_stacks)
local failed_stacks=()
local total=${#stacks[@]}
local idx=0
local start_time=$SECONDS
for stack in "${stacks[@]}"; do
idx=$((idx + 1))
echo -e "${CYAN}[$idx/$total]${NC} Processing $stack..."
if ! update_stack_cmd "$stack"; then
failed_stacks+=("$stack")
fi
done
local elapsed=$(( SECONDS - start_time ))
if [[ ${#failed_stacks[@]} -gt 0 ]]; then
log_warn "Failed to update ${#failed_stacks[@]} stack(s): ${failed_stacks[*]}"
else
log_success "All stacks updated"
fi
log_info "Completed in $(format_elapsed "$elapsed")"
}
auto_update_mode() {
log_warn "This mode will:"
echo " 1. Check for updates"
echo " 2. Update stacks with available updates"
echo
if ! confirm "Enable auto-update mode?"; then
return 1
fi
local stacks
mapfile -t stacks < <(get_stacks)
local total=${#stacks[@]}
local idx=0
local start_time=$SECONDS
for stack in "${stacks[@]}"; do
idx=$((idx + 1))
local stack_path="$STACKS_DIR/$stack"
if ! has_compose_file "$stack_path"; then
continue
fi
echo -e "${CYAN}[$idx/$total]${NC} Processing $stack..."
local images
mapfile -t images < <(get_stack_images "$stack_path")
local needs_update=false
for image in "${images[@]}"; do
[[ -z "$image" ]] && continue
local local_digest
local_digest=$(get_image_digest "$image")
[[ -z "$local_digest" ]] && continue
local remote_digest
remote_digest=$(get_remote_digest "$image") || true
if [[ -z "$remote_digest" ]]; then
docker pull "$image" >/dev/null 2>&1 || continue
remote_digest=$(get_image_digest "$image")
fi
if [[ -n "$local_digest" && -n "$remote_digest" && "$local_digest" != "$remote_digest" ]]; then
needs_update=true
break
fi
done
if [[ "$needs_update" == "false" ]]; then
log_success "$stack already up to date"
continue
fi
log_warn "Updates available for $stack, updating..."
if update_stack_cmd "$stack"; then
sleep 3
if is_stack_running "$stack"; then
log_success "Update successful"
else
log_error "Update may have failed, check logs"
fi
fi
echo
done
local elapsed=$(( SECONDS - start_time ))
log_success "Auto-update complete in $(format_elapsed "$elapsed")"
}
# ─── Interactive UI Functions ────────────────────────────────────────────────
interactive_stack_menu() {
local selected_stack="$1"
while true; do
clear_screen
log_header "Manage: $selected_stack"
echo -e "${GRAY}DocWell Manager v$VERSION${NC}"
echo
local status_str="${GRAY}○${NC} Stopped"
if is_stack_running "$selected_stack"; then
status_str="${GREEN}●${NC} Running"
fi
echo "Status: $status_str"
echo
echo "Stack Management:"
echo " 1) Start"
echo " 2) Stop"
echo " 3) Restart"
echo
echo "Updates:"
echo " 4) Update images"
echo " 5) Check for updates"
echo
echo "Other:"
echo " 6) View logs"
echo " 7) View compose file"
echo " 0) Back"
echo
read -rp "Select action: " action
case $action in
1) start_stack_cmd "$selected_stack" ;;
2) stop_stack_cmd "$selected_stack" ;;
3) restart_stack_cmd "$selected_stack" ;;
4) update_stack_cmd "$selected_stack" ;;
5)
local stack_path="$STACKS_DIR/$selected_stack"
if has_compose_file "$stack_path"; then
echo -e "${BOLD}$selected_stack:${NC}"
check_stack_updates_display "$stack_path" "$selected_stack" || true
fi
;;
6) view_logs "$selected_stack" ;;
7)
local stack_path="$STACKS_DIR/$selected_stack"
local compose_file
compose_file=$(get_compose_file "$stack_path") && \
less "$stack_path/$compose_file" || \
log_error "No compose file found"
;;
0) break ;;
*) log_error "Invalid selection" ;;
esac
echo
read -rp "Press Enter to continue..."
done
}
interactive_main_menu() {
while true; do
clear_screen
log_header "Docker Stack Manager"
echo -e "${GRAY}DocWell Manager v$VERSION${NC}"
echo
show_spinner "Loading stacks..."
local stacks
mapfile -t stacks < <(get_stacks)
hide_spinner
if [[ ${#stacks[@]} -eq 0 ]]; then
log_error "No stacks found"
exit 1
fi
echo "Available stacks:"
for i in "${!stacks[@]}"; do
local stack="${stacks[$i]}"
local status_str="${GRAY}[stopped]${NC}"
if is_stack_running "$stack"; then
status_str="${GREEN}[running]${NC}"
fi
echo -e " ${BOLD}$((i+1))${NC}) ${stack:0:$MAX_STACK_NAME_LEN} $status_str"
done
echo
log_separator
echo -e " ${BOLD}r${NC}) Restart all"
echo -e " ${BOLD}s${NC}) Stop all"
echo -e " ${BOLD}u${NC}) Update all"
echo -e " ${BOLD}c${NC}) Check for updates"
echo -e " ${BOLD}0${NC}) Exit"
echo
read -rp "Select stack number or action: " choice
if [[ "$choice" == "0" ]] || [[ "$choice" == "q" ]] || [[ "$choice" == "quit" ]]; then
exit 0
fi
case "$choice" in
r)
for stack in "${stacks[@]}"; do
restart_stack_cmd "$stack" || true
done
read -rp "Press Enter to continue..."
;;
s)
for stack in "${stacks[@]}"; do
stop_stack_cmd "$stack" || true
done
read -rp "Press Enter to continue..."
;;
u)
update_all_stacks
read -rp "Press Enter to continue..."
;;
c)
check_for_updates
read -rp "Press Enter to continue..."
;;
*)
if [[ "$choice" =~ ^[0-9]+$ ]] && [[ "$choice" -ge 1 ]] && [[ "$choice" -le "${#stacks[@]}" ]]; then
local idx=$((choice-1))
interactive_stack_menu "${stacks[$idx]}"
else
log_error "Invalid selection"
read -rp "Press Enter to continue..."
fi
;;
esac
done
}
# ─── Main ────────────────────────────────────────────────────────────────────
main() {
parse_args "$@"
if ! check_docker; then
exit 1
fi
# Handle list only
[[ "$LIST_ONLY" == "true" ]] && { list_stacks_display; exit 0; }
# Handle stack management CLI operations
[[ -n "$START_STACK" ]] && { start_stack_cmd "$START_STACK"; exit $?; }
[[ -n "$STOP_STACK" ]] && { stop_stack_cmd "$STOP_STACK"; exit $?; }
[[ -n "$RESTART_STACK" ]] && { restart_stack_cmd "$RESTART_STACK"; exit $?; }
[[ -n "$STATUS_STACK" ]] && { get_stack_status_cmd "$STATUS_STACK"; exit 0; }
[[ -n "$LOGS_STACK" ]] && { view_logs "$LOGS_STACK"; exit $?; }
# Handle update CLI operations
[[ "$CHECK_UPDATES" == "true" ]] && { check_for_updates; exit 0; }
[[ "$UPDATE_ALL" == "true" ]] && { update_all_stacks; exit 0; }
[[ -n "$UPDATE_STACK" ]] && { update_stack_cmd "$UPDATE_STACK"; exit $?; }
[[ "$AUTO_UPDATE" == "true" ]] && { auto_update_mode; exit 0; }
# Interactive mode
interactive_main_menu
}
main "$@"

758
bash/docker-migrate.sh Executable file

@@ -0,0 +1,758 @@
#!/bin/bash
# Docker Service Migration Script
# Enhanced version matching Go docwell functionality
set -euo pipefail
# Version
VERSION="2.6.2"
# Resolve script directory and source shared library
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# shellcheck source=lib/common.sh
source "$SCRIPT_DIR/lib/common.sh"
# Enable debug trace after sourcing common.sh (so DEBUG variable is available)
enable_debug_trace
# Default configuration
LOG_FILE="${LOG_FILE:-/tmp/docwell/docker-migrate.log}"
HOSTNAME=$(hostname 2>/dev/null | tr -cd 'a-zA-Z0-9._-' || echo "unknown")
# Source/destination configuration
OLD_HOST="${OLD_HOST:-local}"
OLD_PORT="${OLD_PORT:-22}"
OLD_USER="${OLD_USER:-$(whoami)}"
NEW_HOST="${NEW_HOST:-}"
NEW_PORT="${NEW_PORT:-22}"
NEW_USER="${NEW_USER:-$(whoami)}"
# Transfer options
TRANSFER_METHOD="rsync"
TRANSFER_MODE="clone" # clone (non-disruptive) or transfer (stop source)
COMPRESSION="zstd"
BANDWIDTH_MB="${BANDWIDTH_MB:-0}"
TRANSFER_RETRIES="${TRANSFER_RETRIES:-3}"
SERVICE_NAME=""
# Load config file overrides
load_config
parse_args() {
# Support legacy positional argument: first arg as service name
if [[ $# -gt 0 ]] && [[ ! "$1" =~ ^- ]]; then
SERVICE_NAME="$1"
shift
fi
while [[ $# -gt 0 ]]; do
case $1 in
--version)
echo "docker-migrate.sh v$VERSION"
exit 0
;;
--quiet|-q)
QUIET=true
shift
;;
--yes|-y)
YES=true
shift
;;
--service)
require_arg "$1" "${2:-}" || exit 1
SERVICE_NAME="$2"
shift 2
;;
--source)
require_arg "$1" "${2:-}" || exit 1
OLD_HOST="$2"
shift 2
;;
--dest)
require_arg "$1" "${2:-}" || exit 1
NEW_HOST="$2"
shift 2
;;
--method)
require_arg "$1" "${2:-}" || exit 1
TRANSFER_METHOD="$2"
shift 2
;;
--transfer)
TRANSFER_MODE="transfer"
shift
;;
--compression)
require_arg "$1" "${2:-}" || exit 1
COMPRESSION="$2"
shift 2
;;
--bwlimit)
require_arg "$1" "${2:-}" || exit 1
BANDWIDTH_MB="$2"
shift 2
;;
--retries)
require_arg "$1" "${2:-}" || exit 1
TRANSFER_RETRIES="$2"
shift 2
;;
--stacks-dir)
require_arg "$1" "${2:-}" || exit 1
STACKS_DIR="$2"
shift 2
;;
--install-deps)
AUTO_INSTALL=true
shift
;;
--log)
require_arg "$1" "${2:-}" || exit 1
LOG_FILE="$2"
shift 2
;;
--dry-run)
DRY_RUN=true
shift
;;
--debug|-d)
DEBUG=true
shift
;;
--verbose|-v)
VERBOSE=true
shift
;;
--help|-h)
show_help
exit 0
;;
*)
echo -e "${RED}Unknown option: $1${NC}" >&2
show_help
exit 1
;;
esac
done
}
show_help() {
cat << EOF
Docker Service Migration Script v$VERSION
Usage: $0 [SERVICE] [OPTIONS]
Options:
--service SERVICE Service name to migrate
--source HOST Source host (default: local)
--dest HOST Destination host/user@host
--method METHOD Transfer method: rsync, tar, rclone (default: rsync)
--transfer Transfer mode (stop source), default is clone mode
--compression METHOD Compression: zstd or gzip (default: zstd)
--bwlimit MB Bandwidth limit in MB/s for rsync (0 = unlimited)
--retries N Number of retries for failed transfers (default: 3)
--stacks-dir DIR Stacks directory (default: /opt/stacks)
--quiet, -q Suppress non-error output
--yes, -y Auto-confirm all prompts
--dry-run Show what would be done without actually doing it
--debug, -d Enable debug mode (verbose output + xtrace)
--verbose, -v Enable verbose logging
--install-deps Auto-install missing dependencies (requires root)
--log FILE Log file path
--version Show version information
--help, -h Show this help message
Examples:
$0 myapp --dest user@remote-host
$0 --service myapp --source local --dest user@remote --method rsync
$0 --service myapp --source old-server --dest new-server --transfer
$0 --service myapp --dest user@remote --bwlimit 50 --retries 5
$0 --service myapp --dest user@remote --dry-run
EOF
}
# ─── Dependency checks ───────────────────────────────────────────────────────
check_dependencies() {
local required_cmds=("docker" "ssh")
local missing=()
case "$TRANSFER_METHOD" in
rsync) required_cmds+=("rsync") ;;
rclone) required_cmds+=("rclone") ;;
esac
for cmd in "${required_cmds[@]}"; do
if ! command -v "$cmd" &>/dev/null; then
missing+=("$cmd")
fi
done
if [[ ${#missing[@]} -gt 0 ]]; then
log_error "Missing dependencies: ${missing[*]}"
if [[ "$AUTO_INSTALL" == "true" ]]; then
install_dependencies "${missing[@]}"
return $?
fi
return 1
fi
return 0
}
# ─── Parse user@host:port ─────────────────────────────────────────────────────
parse_host_string() {
local input="$1"
local -n _host="$2"
local -n _user="$3"
local -n _port="$4"
if [[ "$input" == "local" ]]; then
_host="local"
return 0
fi
# Parse user@host:port
if [[ "$input" =~ ^(.+)@(.+):([0-9]+)$ ]]; then
_user="${BASH_REMATCH[1]}"
_host="${BASH_REMATCH[2]}"
_port="${BASH_REMATCH[3]}"
elif [[ "$input" =~ ^(.+)@(.+)$ ]]; then
_user="${BASH_REMATCH[1]}"
_host="${BASH_REMATCH[2]}"
elif [[ "$input" =~ ^(.+):([0-9]+)$ ]]; then
_host="${BASH_REMATCH[1]}"
_port="${BASH_REMATCH[2]}"
else
_host="$input"
fi
}
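# Illustrative calls (hypothetical hosts), showing how the namerefs are filled:
#   parse_host_string "alice@backup01:2222" h u p  ->  h=backup01 u=alice p=2222
#   parse_host_string "backup01:2222"       h u p  ->  h=backup01 p=2222
#   parse_host_string "backup01"            h u p  ->  h=backup01 (u/p unchanged)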
# ─── Pre-flight checks ───────────────────────────────────────────────────────
preflight_checks() {
local service="$1"
log_info "Running pre-flight checks..."
# Validate service name
if ! validate_stack_name "$service"; then
return 1
fi
# Check source
if [[ "$OLD_HOST" == "local" ]]; then
if [[ ! -d "$STACKS_DIR/$service" ]]; then
log_error "Service not found locally: $STACKS_DIR/$service"
return 1
fi
log_success "Source: local ($STACKS_DIR/$service)"
else
log_info "Source: $OLD_USER@$OLD_HOST:$OLD_PORT"
if ! test_ssh "$OLD_HOST" "$OLD_PORT" "$OLD_USER"; then
return 1
fi
# Verify service exists on source
if ! ssh_cmd "$OLD_HOST" "$OLD_PORT" "$OLD_USER" "test -d '$STACKS_DIR/$service'" 2>/dev/null; then
log_error "Service not found on source: $STACKS_DIR/$service"
return 1
fi
log_success "Service found on source"
fi
# Check destination
if [[ -z "$NEW_HOST" ]]; then
log_error "Destination host not specified"
return 1
fi
log_info "Destination: $NEW_USER@$NEW_HOST:$NEW_PORT"
if ! test_ssh "$NEW_HOST" "$NEW_PORT" "$NEW_USER"; then
return 1
fi
# Service conflict check: verify service doesn't already exist on destination
if ssh_cmd "$NEW_HOST" "$NEW_PORT" "$NEW_USER" "test -d '$STACKS_DIR/$service'" 2>/dev/null; then
log_warn "Service already exists on destination: $STACKS_DIR/$service"
if ! confirm "Overwrite existing service on destination?"; then
return 1
fi
fi
log_success "Pre-flight checks passed"
return 0
}
# ─── Transfer functions ──────────────────────────────────────────────────────
rsync_transfer() {
local src="$1" dst_host="$2" dst_port="$3" dst_user="$4" dst_path="$5"
local attempt=0
local max_retries="$TRANSFER_RETRIES"
local rsync_opts=(-avz --progress --delete)
# Bandwidth limiting
if [[ "$BANDWIDTH_MB" -gt 0 ]]; then
local bw_kbps=$((BANDWIDTH_MB * 1024))
rsync_opts+=(--bwlimit="$bw_kbps")
log_info "Bandwidth limited to ${BANDWIDTH_MB}MB/s"
fi
# Compression
local comp_info
comp_info=$(get_compressor "$COMPRESSION")
local compressor="${comp_info%%:*}"
if [[ "$compressor" == "zstd" ]]; then
rsync_opts+=(--compress-choice=zstd)
fi
rsync_opts+=(-e "ssh -p $dst_port -o StrictHostKeyChecking=no -o ConnectTimeout=10")
while [[ $attempt -lt $max_retries ]]; do
attempt=$((attempt + 1)) # not ((attempt++)): arithmetic evaluating to 0 would trip set -e
if [[ $attempt -gt 1 ]]; then
local backoff=$((attempt * 5))
log_warn "Retry $attempt/$max_retries in ${backoff}s..."
sleep "$backoff"
fi
if run_or_dry "rsync $src to $dst_user@$dst_host:$dst_path" \
rsync "${rsync_opts[@]}" "$src" "$dst_user@$dst_host:$dst_path"; then
return 0
fi
log_warn "Transfer attempt $attempt failed"
done
log_error "Transfer failed after $max_retries attempts"
return 1
}
tar_transfer() {
local src="$1" dst_host="$2" dst_port="$3" dst_user="$4" dst_path="$5"
local comp_info
comp_info=$(get_compressor "$COMPRESSION")
local compressor="${comp_info%%:*}"
log_info "Transferring via tar+$compressor..."
if [[ "$DRY_RUN" == "true" ]]; then
log_info "[DRY-RUN] Would tar $src | $compressor | ssh -> $dst_path"
return 0
fi
ssh_cmd "$dst_host" "$dst_port" "$dst_user" "mkdir -p '$dst_path'" || return 1
# Archive the directory *contents* so extraction lands directly in dst_path
# instead of nesting another "$(basename "$src")" level inside it.
tar -cf - -C "$src" . | \
$compressor | \
ssh -p "$dst_port" -o StrictHostKeyChecking=no "$dst_user@$dst_host" \
"cd '$dst_path' && $compressor -d | tar -xf -"
}
rclone_transfer() {
local src="$1" dst_host="$2" dst_port="$3" dst_user="$4" dst_path="$5"
if ! command -v rclone &>/dev/null; then
log_warn "rclone not installed; falling back to rsync"
rsync_transfer "$src" "$dst_host" "$dst_port" "$dst_user" "$dst_path"
return $?
fi
log_info "Transferring via rclone SFTP..."
if [[ "$DRY_RUN" == "true" ]]; then
log_info "[DRY-RUN] Would rclone sync $src to :sftp:$dst_host:$dst_path"
return 0
fi
rclone sync "$src" ":sftp,host=$dst_host,user=$dst_user,port=$dst_port:$dst_path" \
--transfers "$((MAX_PARALLEL > 16 ? 16 : MAX_PARALLEL))" \
--progress \
--sftp-ask-password \
2>&1 || {
log_error "rclone transfer failed"
return 1
}
}
file_transfer() {
local method="$1" src="$2" dst_host="$3" dst_port="$4" dst_user="$5" dst_path="$6"
case "$method" in
rsync) rsync_transfer "$src" "$dst_host" "$dst_port" "$dst_user" "$dst_path" ;;
tar) tar_transfer "$src" "$dst_host" "$dst_port" "$dst_user" "$dst_path" ;;
rclone) rclone_transfer "$src" "$dst_host" "$dst_port" "$dst_user" "$dst_path" ;;
*)
log_error "Unknown transfer method: $method"
return 1
;;
esac
}
# ─── Volume operations ────────────────────────────────────────────────────────
backup_volume() {
local vol="$1"
local backup_dir="$2"
local comp_info
comp_info=$(get_compressor "$COMPRESSION")
local compressor="${comp_info%%:*}"
local ext="${comp_info#*:}"
local archive="$backup_dir/${vol}${ext}"
log_info "Backing up volume: $vol"
if [[ "$DRY_RUN" == "true" ]]; then
log_info "[DRY-RUN] Would backup volume $vol to $archive"
return 0
fi
# Stock alpine ships busybox gzip but not zstd; install the compressor on
# demand inside the throwaway container.
docker run --rm \
-v "${vol}:/data:ro" \
-v "$backup_dir:/backup" \
alpine \
sh -c "command -v ${compressor%% *} >/dev/null || apk add -q --no-cache ${compressor%% *}; tar -cf - -C /data . | $compressor > /backup/${vol}${ext}" 2>/dev/null || {
log_error "Failed to backup volume: $vol"
return 1
}
log_success "Volume $vol backed up"
}
restore_volume() {
local vol="$1"
local backup_dir="$2"
local host="$3" port="$4" user="$5"
# Detect archive format
local archive=""
local decompressor=""
if [[ -f "$backup_dir/${vol}.tar.zst" ]]; then
archive="$backup_dir/${vol}.tar.zst"
decompressor="zstd -d"
elif [[ -f "$backup_dir/${vol}.tar.gz" ]]; then
archive="$backup_dir/${vol}.tar.gz"
decompressor="gzip -d"
else
log_error "No backup found for volume: $vol"
return 1
fi
log_info "Restoring volume: $vol"
if [[ "$DRY_RUN" == "true" ]]; then
log_info "[DRY-RUN] Would restore volume $vol from $archive"
return 0
fi
# As with backup_volume, install zstd on demand (stock alpine only ships gzip).
if [[ "$host" == "local" ]]; then
docker run --rm \
-v "${vol}:/data" \
-v "$backup_dir:/backup:ro" \
alpine \
sh -c "command -v ${decompressor%% *} >/dev/null || apk add -q --no-cache ${decompressor%% *}; cd /data && $decompressor < /backup/$(basename "$archive") | tar -xf -" 2>/dev/null || {
log_error "Failed to restore volume: $vol"
return 1
}
else
ssh_cmd "$host" "$port" "$user" \
"docker run --rm -v '${vol}:/data' -v '$backup_dir:/backup:ro' alpine sh -c 'command -v ${decompressor%% *} >/dev/null || apk add -q --no-cache ${decompressor%% *}; cd /data && $decompressor < /backup/$(basename "$archive") | tar -xf -'" 2>/dev/null || {
log_error "Failed to restore volume on remote: $vol"
return 1
}
fi
log_success "Volume $vol restored"
}
# ─── Post-migration verification ─────────────────────────────────────────────
verify_migration() {
local service="$1"
local host="$2" port="$3" user="$4"
log_info "Verifying migration..."
# Check service directory exists on destination
if ! ssh_cmd "$host" "$port" "$user" "test -d '$STACKS_DIR/$service'" 2>/dev/null; then
log_error "Service directory not found on destination"
return 1
fi
log_success "Service directory present"
# Check compose file exists
local compose_found=false
for f in compose.yaml compose.yml docker-compose.yaml docker-compose.yml; do
if ssh_cmd "$host" "$port" "$user" "test -f '$STACKS_DIR/$service/$f'" 2>/dev/null; then
compose_found=true
break
fi
done
if [[ "$compose_found" == "false" ]]; then
log_error "No compose file found on destination"
return 1
fi
log_success "Compose file present"
# Wait and check if service started
log_info "Waiting for service health check (5s)..."
sleep 5
if ssh_cmd "$host" "$port" "$user" \
"cd '$STACKS_DIR/$service' && docker compose ps -q 2>/dev/null | head -1" 2>/dev/null | grep -q .; then
log_success "Service is running on destination"
else
log_warn "Service may not be running on destination yet"
fi
log_success "Migration verification passed"
return 0
}
# ─── Main migration logic ────────────────────────────────────────────────────
migrate_service() {
local service="$1"
local start_time=$SECONDS
log_header "Migrating Service: $service"
log_info "Mode: $TRANSFER_MODE | Method: $TRANSFER_METHOD | Compression: $COMPRESSION"
write_log "Starting migration of $service from $OLD_HOST to $NEW_HOST"
# Create temp dir for volume backups
local vol_backup_dir
vol_backup_dir=$(mktemp -d /tmp/docwell_migrate.XXXXXX)
register_cleanup "rm -rf '$vol_backup_dir'"
# Step 1: Transfer configuration
log_info "[Step 1/5] Transferring configuration..."
local stack_path="$STACKS_DIR/$service"
if [[ "$OLD_HOST" == "local" ]]; then
file_transfer "$TRANSFER_METHOD" "$stack_path/" "$NEW_HOST" "$NEW_PORT" "$NEW_USER" "$STACKS_DIR/$service/"
else
# Remote to remote: pull from source then push to dest
local local_temp
local_temp=$(mktemp -d /tmp/docwell_config.XXXXXX)
register_cleanup "rm -rf '$local_temp'"
rsync -avz -e "ssh -p $OLD_PORT -o StrictHostKeyChecking=no" \
"$OLD_USER@$OLD_HOST:$stack_path/" "$local_temp/$service/" >/dev/null 2>&1 || {
log_error "Failed to pull config from source"
return 1
}
file_transfer "$TRANSFER_METHOD" "$local_temp/$service/" "$NEW_HOST" "$NEW_PORT" "$NEW_USER" "$STACKS_DIR/$service/"
fi
log_success "Configuration transferred"
# Step 2: Get volumes
log_info "[Step 2/5] Discovering volumes..."
local volumes
mapfile -t volumes < <(get_service_volumes "$service")
if [[ ${#volumes[@]} -eq 0 ]] || [[ -z "${volumes[0]}" ]]; then
log_info "No volumes to migrate"
else
log_info "Found ${#volumes[@]} volume(s)"
# Step 3: Backup volumes
log_info "[Step 3/5] Backing up volumes..."
local vol_idx=0
for vol in "${volumes[@]}"; do
[[ -z "$vol" ]] && continue
vol_idx=$((vol_idx + 1))
echo -e "${CYAN}[$vol_idx/${#volumes[@]}]${NC} Volume: $vol"
backup_volume "$vol" "$vol_backup_dir" || {
log_error "Failed to backup volume: $vol"
return 1
}
done
# Step 4: Transfer and restore volumes
log_info "[Step 4/5] Transferring volumes..."
file_transfer "$TRANSFER_METHOD" "$vol_backup_dir/" "$NEW_HOST" "$NEW_PORT" "$NEW_USER" "$vol_backup_dir/"
log_info "Restoring volumes on destination..."
vol_idx=0
for vol in "${volumes[@]}"; do
[[ -z "$vol" ]] && continue
vol_idx=$((vol_idx + 1))
# Create volume on destination
ssh_cmd "$NEW_HOST" "$NEW_PORT" "$NEW_USER" "docker volume create '$vol'" >/dev/null 2>&1 || true
echo -e "${CYAN}[$vol_idx/${#volumes[@]}]${NC} Restoring: $vol"
restore_volume "$vol" "$vol_backup_dir" "$NEW_HOST" "$NEW_PORT" "$NEW_USER" || {
log_warn "Failed to restore volume: $vol"
}
done
fi
# Step 5: Start on destination and optionally stop source
log_info "[Step 5/5] Starting service on destination..."
local remote_compose
for f in compose.yaml compose.yml docker-compose.yaml docker-compose.yml; do
if ssh_cmd "$NEW_HOST" "$NEW_PORT" "$NEW_USER" "test -f '$STACKS_DIR/$service/$f'" 2>/dev/null; then
remote_compose="$f"
break
fi
done
if [[ -n "${remote_compose:-}" ]]; then
if run_or_dry "start service on destination" \
ssh_cmd "$NEW_HOST" "$NEW_PORT" "$NEW_USER" \
"cd '$STACKS_DIR/$service' && docker compose -f '$remote_compose' up -d" >/dev/null 2>&1; then
log_success "Service started on destination"
else
log_error "Failed to start service on destination"
fi
fi
# Transfer mode: stop source
if [[ "$TRANSFER_MODE" == "transfer" ]]; then
log_info "Transfer mode: stopping source service..."
if [[ "$OLD_HOST" == "local" ]]; then
local compose_file
compose_file=$(get_compose_file "$stack_path") && \
run_or_dry "stop local source" docker compose -f "$stack_path/$compose_file" down >/dev/null 2>&1
else
run_or_dry "stop remote source" ssh_cmd "$OLD_HOST" "$OLD_PORT" "$OLD_USER" \
"cd '$STACKS_DIR/$service' && docker compose down" >/dev/null 2>&1 || {
log_warn "Could not stop source service"
}
fi
log_success "Source service stopped"
fi
# Post-migration verification
verify_migration "$service" "$NEW_HOST" "$NEW_PORT" "$NEW_USER" || {
log_warn "Post-migration verification had warnings"
}
local elapsed=$(( SECONDS - start_time ))
log_success "Migration completed in $(format_elapsed "$elapsed")"
}
# ─── Interactive mode ─────────────────────────────────────────────────────────
interactive_migrate() {
clear_screen
log_header "Docker Service Migration"
echo -e "${GRAY}DocWell Migration v$VERSION${NC}"
echo
# Service selection
show_spinner "Loading stacks..."
local stacks
mapfile -t stacks < <(get_stacks)
hide_spinner
if [[ ${#stacks[@]} -eq 0 ]]; then
log_error "No stacks found"
exit 1
fi
echo "Available services:"
for i in "${!stacks[@]}"; do
local status="${GRAY}○${NC}"
is_stack_running "${stacks[$i]}" && status="${GREEN}●${NC}"
echo -e " ${BOLD}$((i+1))${NC}) $status ${stacks[$i]}"
done
echo
read -rp "Select service to migrate: " choice
if [[ ! "$choice" =~ ^[0-9]+$ ]] || [[ "$choice" -lt 1 ]] || [[ "$choice" -gt "${#stacks[@]}" ]]; then
log_error "Invalid selection"
exit 1
fi
SERVICE_NAME="${stacks[$((choice-1))]}"
# Source
read -rp "Source host [${OLD_HOST}]: " input_host
[[ -n "$input_host" ]] && OLD_HOST="$input_host"
# Destination
read -rp "Destination host: " NEW_HOST
if [[ -z "$NEW_HOST" ]]; then
log_error "Destination required"
exit 1
fi
# Parse user@host for destination
parse_host_string "$NEW_HOST" NEW_HOST NEW_USER NEW_PORT
# Transfer method
echo
echo "Transfer methods:"
echo " 1) rsync (recommended)"
echo " 2) tar over SSH"
echo " 3) rclone"
read -rp "Select method [1]: " method_choice
case "${method_choice:-1}" in
2) TRANSFER_METHOD="tar" ;;
3) TRANSFER_METHOD="rclone" ;;
*) TRANSFER_METHOD="rsync" ;;
esac
# Transfer mode
echo
echo "Transfer modes:"
echo " 1) Clone (keep source running)"
echo " 2) Transfer (stop source after)"
read -rp "Select mode [1]: " mode_choice
case "${mode_choice:-1}" in
2) TRANSFER_MODE="transfer" ;;
*) TRANSFER_MODE="clone" ;;
esac
echo
log_separator
echo -e " Service: ${BOLD}$SERVICE_NAME${NC}"
echo -e " Source: $OLD_USER@$OLD_HOST:$OLD_PORT"
echo -e " Destination: $NEW_USER@$NEW_HOST:$NEW_PORT"
echo -e " Method: $TRANSFER_METHOD"
echo -e " Mode: $TRANSFER_MODE"
log_separator
echo
if ! confirm "Proceed with migration?"; then
log_info "Migration cancelled"
exit 0
fi
if ! preflight_checks "$SERVICE_NAME"; then
log_error "Pre-flight checks failed"
exit 1
fi
migrate_service "$SERVICE_NAME"
}
# ─── Main ────────────────────────────────────────────────────────────────────
main() {
parse_args "$@"
if ! check_dependencies; then
exit 1
fi
# Parse destination user@host if provided
if [[ -n "$NEW_HOST" ]]; then
parse_host_string "$NEW_HOST" NEW_HOST NEW_USER NEW_PORT
fi
# CLI mode
if [[ -n "$SERVICE_NAME" ]] && [[ -n "$NEW_HOST" ]]; then
if ! preflight_checks "$SERVICE_NAME"; then
exit 1
fi
migrate_service "$SERVICE_NAME"
exit $?
fi
# Interactive mode
interactive_migrate
}
main "$@"

712
bash/lib/common.sh Executable file

@@ -0,0 +1,712 @@
#!/bin/bash
# DocWell Shared Library
# Common functions used across all docker-tools bash scripts
# Source this file: source "$(dirname "$0")/lib/common.sh"
# Version
LIB_VERSION="2.6.2"
# ─── Colors (with NO_COLOR support) ──────────────────────────────────────────
if [[ -n "${NO_COLOR:-}" ]] || [[ ! -t 1 ]]; then
RED='' GREEN='' YELLOW='' BLUE='' GRAY='' NC='' BOLD='' CYAN=''
else
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
GRAY='\033[0;90m'
NC='\033[0m'
BOLD='\033[1m'
CYAN='\033[0;36m'
fi
# ─── Common defaults ─────────────────────────────────────────────────────────
STACKS_DIR="${STACKS_DIR:-/opt/stacks}"
QUIET="${QUIET:-false}"
YES="${YES:-false}"
DRY_RUN="${DRY_RUN:-false}"
AUTO_INSTALL="${AUTO_INSTALL:-false}"
LOG_FILE="${LOG_FILE:-/tmp/docwell/docwell.log}"
DEBUG="${DEBUG:-false}"
VERBOSE="${VERBOSE:-false}"
# Concurrency limit (bounded by nproc)
MAX_PARALLEL="${MAX_PARALLEL:-$(nproc 2>/dev/null || echo 4)}"
# ─── Debug and Tracing ──────────────────────────────────────────────────────
# Enable xtrace if DEBUG is set (but preserve existing set options)
# Note: This should be called after scripts set their own 'set' options
enable_debug_trace() {
if [[ "${DEBUG}" == "true" ]]; then
# Check if we're already in a script context with set -euo pipefail
# If so, preserve those flags when enabling -x
if [[ "${-}" == *e* ]] && [[ "${-}" == *u* ]] && [[ "${-}" == *o* ]]; then
set -euxo pipefail
else
set -x
fi
export PS4='+ [${BASH_SOURCE##*/}:${LINENO}] ${FUNCNAME[0]:+${FUNCNAME[0]}(): }'
log_debug "Debug trace enabled (xtrace)"
fi
}
# Function call tracing (when DEBUG or VERBOSE is enabled)
debug_trace() {
[[ "$DEBUG" == "true" || "$VERBOSE" == "true" ]] || return 0
local func="${FUNCNAME[1]:-main}"
local line="${BASH_LINENO[0]:-?}"
local file="${BASH_SOURCE[1]##*/}"
echo -e "${GRAY}[DEBUG]${NC} ${file}:${line} ${func}()" >&2
}
# Debug logging
log_debug() {
[[ "$DEBUG" == "true" || "$VERBOSE" == "true" ]] || return 0
local msg="$*"
local timestamp
timestamp=$(date '+%Y-%m-%d %H:%M:%S')
write_log "$timestamp [DEBUG] $msg"
echo -e "${GRAY}[DEBUG]${NC} $msg" >&2
}
# Verbose logging
log_verbose() {
[[ "$VERBOSE" == "true" ]] || return 0
local msg="$*"
local timestamp
timestamp=$(date '+%Y-%m-%d %H:%M:%S')
write_log "$timestamp [VERBOSE] $msg"
[[ "$QUIET" == "true" ]] || echo -e "${GRAY}[VERBOSE]${NC} $msg"
}
# ─── Spinner ──────────────────────────────────────────────────────────────────
_SPINNER_PID=""
_SPINNER_FRAMES=("⠋" "⠙" "⠹" "⠸" "⠼" "⠴" "⠦" "⠧" "⠇" "⠏")
show_spinner() {
local msg="${1:-Working...}"
[[ "$QUIET" == "true" ]] && return
[[ "$DEBUG" == "true" ]] && return # Don't show spinner in debug mode
(
local i=0
while true; do
printf "\r%b%s%b %s" "$CYAN" "${_SPINNER_FRAMES[$((i % ${#_SPINNER_FRAMES[@]}))]}" "$NC" "$msg"
i=$((i + 1))
sleep 0.08
done
) &
_SPINNER_PID=$!
disown "$_SPINNER_PID" 2>/dev/null || true
log_debug "Spinner started (PID: $_SPINNER_PID)"
}
hide_spinner() {
if [[ -n "$_SPINNER_PID" ]]; then
log_debug "Stopping spinner (PID: $_SPINNER_PID)"
kill "$_SPINNER_PID" 2>/dev/null || true
wait "$_SPINNER_PID" 2>/dev/null || true
_SPINNER_PID=""
printf "\r\033[K" # Clear the spinner line
fi
}
# ─── Logging ──────────────────────────────────────────────────────────────────
write_log() {
local msg="$1"
local log_dir
log_dir=$(dirname "$LOG_FILE")
# Try to create log directory if it doesn't exist
if [[ ! -d "$log_dir" ]]; then
mkdir -p "$log_dir" 2>/dev/null || {
# If we can't create the directory, try /tmp as fallback
if [[ "$log_dir" != "/tmp"* ]]; then
log_dir="/tmp/docwell"
mkdir -p "$log_dir" 2>/dev/null || return 0
else
return 0
fi
}
fi
# Check if we can write to the log file
if [[ -w "$log_dir" ]] 2>/dev/null || [[ -w "$(dirname "$log_dir")" ]] 2>/dev/null; then
echo "$msg" >> "$LOG_FILE" 2>/dev/null || {
# Fallback to /tmp if original location fails
if [[ "$LOG_FILE" != "/tmp/"* ]]; then
echo "$msg" >> "/tmp/docwell/fallback.log" 2>/dev/null || true
fi
}
fi
}
log_info() {
local msg="$*"
local timestamp
timestamp=$(date '+%Y-%m-%d %H:%M:%S')
write_log "$timestamp [INFO] $msg"
[[ "$QUIET" == "false" ]] && echo -e "${GREEN}[INFO]${NC} $msg"
debug_trace
}
log_warn() {
local msg="$*"
local timestamp
timestamp=$(date '+%Y-%m-%d %H:%M:%S')
write_log "$timestamp [WARN] $msg"
# Use '||' so log_warn exits 0 when QUIET=true (callers run under set -e)
[[ "$QUIET" == "true" ]] || echo -e "${YELLOW}[WARN]${NC} $msg" >&2
}
log_error() {
local msg="$*"
local timestamp
timestamp=$(date '+%Y-%m-%d %H:%M:%S')
write_log "$timestamp [ERROR] $msg"
echo -e "${RED}[ERROR]${NC} $msg" >&2
debug_trace
# In debug mode, show stack trace
if [[ "$DEBUG" == "true" ]]; then
local i=0
while caller $i >/dev/null 2>&1; do
local frame
frame=$(caller $i)
echo -e "${GRAY} ->${NC} $frame" >&2
i=$((i + 1))
done
fi
}
log_success() {
local msg="$*"
local timestamp
timestamp=$(date '+%Y-%m-%d %H:%M:%S')
write_log "$timestamp [OK] $msg"
[[ "$QUIET" == "true" ]] || echo -e "${GREEN}[✓]${NC} $msg"
}
log_header() {
local msg="$1"
write_log "=== $msg ==="
echo -e "${BLUE}${BOLD}═══════════════════════════════════════════════${NC}"
echo -e "${BLUE}${BOLD} $msg${NC}"
echo -e "${BLUE}${BOLD}═══════════════════════════════════════════════${NC}"
}
log_separator() {
echo -e "${GRAY}───────────────────────────────────────────────${NC}"
}
# ─── Prompts ──────────────────────────────────────────────────────────────────
confirm() {
local prompt="$1"
if [[ "$YES" == "true" ]]; then
return 0
fi
read -p "$(echo -e "${YELLOW}?${NC} $prompt (y/N): ")" -n 1 -r
echo
[[ $REPLY =~ ^[Yy]$ ]]
}
# ─── Argument validation ──────────────────────────────────────────────────────
# Call from parse_args for options that require a value. Returns 1 if missing/invalid.
require_arg() {
local opt="${1:-}"
local val="${2:-}"
if [[ -z "$val" ]] || [[ "$val" == -* ]]; then
log_error "Option $opt requires a value"
return 1
fi
return 0
}
# ─── Validation ───────────────────────────────────────────────────────────────
validate_stack_name() {
local name="$1"
debug_trace
log_debug "Validating stack name: '$name'"
if [[ -z "$name" ]] || [[ "$name" == "." ]] || [[ "$name" == ".." ]]; then
log_error "Invalid stack name: cannot be empty, '.', or '..'"
return 1
fi
if [[ "$name" =~ [^a-zA-Z0-9._-] ]]; then
log_error "Invalid stack name: contains invalid characters (allowed: a-z, A-Z, 0-9, ., _, -)"
return 1
fi
if [[ ${#name} -gt 64 ]]; then
log_error "Invalid stack name: too long (max 64 characters, got ${#name})"
return 1
fi
log_debug "Stack name validation passed"
return 0
}
validate_port() {
local port="$1"
debug_trace
log_debug "Validating port: '$port'"
if ! [[ "$port" =~ ^[0-9]+$ ]] || [[ "$port" -lt 1 ]] || [[ "$port" -gt 65535 ]]; then
log_error "Invalid port: $port (must be 1-65535)"
return 1
fi
log_debug "Port validation passed"
return 0
}
# ─── Config file support ─────────────────────────────────────────────────────
DOCWELL_CONFIG_FILE="${HOME}/.config/docwell/config"
load_config() {
[[ -f "$DOCWELL_CONFIG_FILE" ]] || return 0
while IFS= read -r line; do
# Skip comments and empty lines
line=$(echo "$line" | xargs)
[[ -z "$line" || "$line" == \#* ]] && continue
# Split on first '=' so values can contain '='
key="${line%%=*}"
key=$(echo "$key" | xargs)
value="${line#*=}"
value=$(echo "$value" | xargs | sed 's/^"//;s/"$//')
[[ -z "$key" ]] && continue
case "$key" in
StacksDir) STACKS_DIR="$value" ;;
BackupBase) BACKUP_BASE="$value" ;;
LogFile) LOG_FILE="$value" ;;
OldHost) OLD_HOST="$value" ;;
OldPort) OLD_PORT="$value" ;;
OldUser) OLD_USER="$value" ;;
NewHost) NEW_HOST="$value" ;;
NewPort) NEW_PORT="$value" ;;
NewUser) NEW_USER="$value" ;;
BandwidthMB) BANDWIDTH_MB="$value" ;;
TransferRetries) TRANSFER_RETRIES="$value" ;;
esac
done < "$DOCWELL_CONFIG_FILE"
}
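# Example ~/.config/docwell/config (illustrative values; keys are the ones
# recognized in the case statement above):
#   StacksDir=/opt/stacks
#   NewHost=backup01.example.com
#   NewUser=deploy
#   BandwidthMB=50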
# ─── Compose file detection ──────────────────────────────────────────────────
get_compose_file() {
local stack_path="$1"
debug_trace
log_debug "Looking for compose file in: $stack_path"
if [[ ! -d "$stack_path" ]]; then
log_debug "Stack path does not exist: $stack_path"
return 1
fi
for f in compose.yaml compose.yml docker-compose.yaml docker-compose.yml; do
if [[ -f "$stack_path/$f" ]]; then
log_debug "Found compose file: $f"
echo "$f"
return 0
fi
done
log_debug "No compose file found in: $stack_path"
return 1
}
has_compose_file() {
local stack_path="$1"
get_compose_file "$stack_path" >/dev/null 2>&1
}
# ─── Stack operations ────────────────────────────────────────────────────────
get_stacks() {
debug_trace
log_debug "Discovering stacks in: $STACKS_DIR"
if [[ ! -d "$STACKS_DIR" ]]; then
log_debug "Stacks directory does not exist: $STACKS_DIR"
return 1
fi
local stacks=()
local dir_count=0
while IFS= read -r -d '' dir; do
dir_count=$((dir_count + 1))
local name
name=$(basename "$dir")
log_debug "Found directory: $name"
if validate_stack_name "$name" >/dev/null 2>&1; then
stacks+=("$name")
log_debug "Added valid stack: $name"
else
log_debug "Skipped invalid stack name: $name"
fi
done < <(find "$STACKS_DIR" -maxdepth 1 -mindepth 1 -type d -print0 2>/dev/null | sort -z)
log_debug "Found $dir_count directories, ${#stacks[@]} valid stacks"
# Fallback: also discover stacks from docker ps
log_debug "Checking running containers for additional stacks..."
local docker_stacks
docker_stacks=$(docker ps --format '{{.Labels}}' 2>/dev/null | \
grep -oP 'com\.docker\.compose\.project=\K[^,]+' | sort -u || true)
local added_from_docker=0
for ds in $docker_stacks; do
[[ -z "$ds" ]] && continue
local found=false
for s in "${stacks[@]}"; do
[[ "$s" == "$ds" ]] && { found=true; break; }
done
if [[ "$found" == "false" ]] && validate_stack_name "$ds" >/dev/null 2>&1; then
stacks+=("$ds")
added_from_docker=$((added_from_docker + 1))
log_debug "Added stack from docker ps: $ds"
fi
done
log_debug "Total stacks discovered: ${#stacks[@]} (${added_from_docker} from docker ps)"
printf '%s\n' "${stacks[@]}"
}
is_stack_running() {
local stack="$1"
local stack_path="${2:-$STACKS_DIR/$stack}"
local compose_file
debug_trace
log_debug "Checking if stack '$stack' is running (path: $stack_path)"
compose_file=$(get_compose_file "$stack_path") || {
log_debug "No compose file found for stack: $stack"
return 1
}
local running
# grep -c prints 0 (and exits 1) on no match; '|| echo 0' would emit a second "0"
running=$(docker compose -f "$stack_path/$compose_file" ps -q 2>/dev/null | grep -c . || true)
if [[ "$running" -gt 0 ]]; then
log_debug "Stack '$stack' is running ($running container(s))"
return 0
else
log_debug "Stack '$stack' is not running"
return 1
fi
}
get_stack_size() {
local stack="$1"
local stack_path="$STACKS_DIR/$stack"
debug_trace
if [[ ! -d "$stack_path" ]]; then
log_debug "Stack path does not exist: $stack_path"
echo "?"
return 0
fi
local size
size=$(du -sh "$stack_path" 2>/dev/null | awk '{print $1}' || echo "?")
log_debug "Stack size for $stack: $size"
echo "$size"
}
get_service_volumes() {
local service="$1"
local stack_path="${2:-$STACKS_DIR/$service}"
local compose_file
debug_trace
log_debug "Getting volumes for service: $service (path: $stack_path)"
compose_file=$(get_compose_file "$stack_path") || {
log_debug "No compose file found, cannot get volumes"
return 1
}
local volumes
volumes=$(docker compose -f "$stack_path/$compose_file" config --volumes 2>/dev/null || true)
if [[ -n "$volumes" ]]; then
local vol_count
vol_count=$(echo "$volumes" | grep -c . || true)
log_debug "Found $vol_count volume(s) for service: $service"
else
log_debug "No volumes found for service: $service"
fi
echo "$volumes"
}
# ─── Dependency management ───────────────────────────────────────────────────
install_deps_debian() {
local deps=("$@")
local apt_pkgs=()
for dep in "${deps[@]}"; do
case "$dep" in
docker) apt_pkgs+=("docker.io") ;;
zstd) apt_pkgs+=("zstd") ;;
rsync) apt_pkgs+=("rsync") ;;
rclone) apt_pkgs+=("rclone") ;;
*) apt_pkgs+=("$dep") ;;
esac
done
if [[ ${#apt_pkgs[@]} -gt 0 ]]; then
sudo apt update && sudo apt install -y "${apt_pkgs[@]}"
fi
}
install_deps_arch() {
local deps=("$@")
local pacman_pkgs=()
for dep in "${deps[@]}"; do
case "$dep" in
docker) pacman_pkgs+=("docker") ;;
zstd) pacman_pkgs+=("zstd") ;;
rsync) pacman_pkgs+=("rsync") ;;
rclone) pacman_pkgs+=("rclone") ;;
*) pacman_pkgs+=("$dep") ;;
esac
done
if [[ ${#pacman_pkgs[@]} -gt 0 ]]; then
sudo pacman -Sy --noconfirm "${pacman_pkgs[@]}"
fi
}
install_dependencies() {
local deps=("$@")
if command -v apt &>/dev/null; then
install_deps_debian "${deps[@]}"
elif command -v pacman &>/dev/null; then
install_deps_arch "${deps[@]}"
else
log_error "No supported package manager found (apt/pacman)"
return 1
fi
}
check_docker() {
debug_trace
log_debug "Checking Docker installation and daemon status"
if ! command -v docker &>/dev/null; then
log_error "Docker not found. Please install Docker first."
echo -e "${BLUE}Install commands:${NC}"
command -v apt &>/dev/null && echo -e " ${GREEN}Debian/Ubuntu:${NC} sudo apt install docker.io"
command -v pacman &>/dev/null && echo -e " ${GREEN}Arch Linux:${NC} sudo pacman -S docker"
if [[ "$AUTO_INSTALL" == "true" ]]; then
log_info "Auto-installing dependencies..."
install_dependencies "docker"
return $?
fi
return 1
fi
log_debug "Docker command found: $(command -v docker)"
log_debug "Docker version: $(docker --version 2>&1 || echo 'unknown')"
if ! docker info &>/dev/null; then
log_error "Docker daemon not running or not accessible."
log_debug "Attempting to get more details..."
docker info 2>&1 | head -5 | while IFS= read -r line; do
log_debug " $line"
done || true
return 1
fi
log_debug "Docker daemon is accessible"
return 0
}
check_root() {
if [[ $EUID -ne 0 ]]; then
if ! sudo -n true 2>/dev/null; then
log_error "Root/sudo privileges required"
return 1
fi
fi
return 0
}
# ─── Screen control ───────────────────────────────────────────────────────────
clear_screen() {
printf '\033[H\033[2J'
}
# ─── Elapsed time formatting ─────────────────────────────────────────────────
format_elapsed() {
local seconds="$1"
# Handle invalid input
if ! [[ "$seconds" =~ ^[0-9]+$ ]]; then
echo "0s"
return 0
fi
if [[ "$seconds" -lt 60 ]]; then
echo "${seconds}s"
elif [[ "$seconds" -lt 3600 ]]; then
local mins=$((seconds / 60))
local secs=$((seconds % 60))
echo "${mins}m ${secs}s"
else
local hours=$((seconds / 3600))
local mins=$((seconds % 3600 / 60))
local secs=$((seconds % 60))
echo "${hours}h ${mins}m ${secs}s"
fi
}
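# Example (each branch of the formatter):
#   format_elapsed 45     # → "45s"
#   format_elapsed 125    # → "2m 5s"
#   format_elapsed 3725   # → "1h 2m 5s"
#   format_elapsed bogus  # → "0s" (non-numeric input falls back safely)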
# ─── Compression helper ──────────────────────────────────────────────────────
get_compressor() {
local method="${1:-zstd}"
case "$method" in
gzip)
echo "gzip:.tar.gz"
;;
zstd|*)
if command -v zstd &>/dev/null; then
echo "zstd -T0:.tar.zst"
else
log_warn "zstd not found, falling back to gzip"
echo "gzip:.tar.gz"
fi
;;
esac
}
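# Example (callers split the returned pair on the first ':' so flags
# like -T0 stay with the command):
#   pair="$(get_compressor zstd)"
#   comp_cmd="${pair%%:*}"   # e.g. "zstd -T0"
#   ext="${pair#*:}"         # e.g. ".tar.zst"
#   tar -cf - ./data | $comp_cmd > "backup$ext"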
# ─── Cleanup trap helper ─────────────────────────────────────────────────────
# Usage: register_cleanup "command to run"
_CLEANUP_CMDS=()
register_cleanup() {
local cmd="$1"
debug_trace
log_debug "Registering cleanup command: $cmd"
_CLEANUP_CMDS+=("$cmd")
}
_run_cleanups() {
hide_spinner
if [[ ${#_CLEANUP_CMDS[@]} -gt 0 ]]; then
log_debug "Running ${#_CLEANUP_CMDS[@]} cleanup command(s)"
for cmd in "${_CLEANUP_CMDS[@]}"; do
log_debug "Executing cleanup: $cmd"
eval "$cmd" 2>/dev/null || {
log_warn "Cleanup command failed: $cmd"
}
done
fi
}
trap _run_cleanups EXIT INT TERM
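# Example (protect a scratch directory for the lifetime of the script):
#   tmpdir="$(mktemp -d)"
#   register_cleanup "rm -rf '$tmpdir'"
#   ... work that may fail or be interrupted ...
# The EXIT/INT/TERM trap above then runs the registered commands in order.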
# ─── Dry-run wrapper ─────────────────────────────────────────────────────────
# Usage: run_or_dry "description" command arg1 arg2 ...
run_or_dry() {
local desc="$1"
shift
debug_trace
log_debug "run_or_dry: $desc"
if [[ "$DRY_RUN" == "true" ]]; then
log_info "[DRY-RUN] Would: $desc"
log_verbose "[DRY-RUN] Command: $*"
return 0
fi
log_verbose "Executing: $*"
local rc=0
"$@" || rc=$?   # capture failure without tripping errexit
log_debug "Command exit code: $rc"
return $rc
}
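# Example (the same call serves real runs and rehearsals):
#   run_or_dry "prune unused images" docker image prune -f
# With DRY_RUN=true this only logs "[DRY-RUN] Would: prune unused images";
# with DRY_RUN=false it executes the command and returns its exit code.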
# ─── Parallel execution with bounded concurrency ─────────────────────────────
# Usage: parallel_run max_jobs command arg1 arg2 ...
# Each arg is processed by spawning: command arg &
_parallel_pids=()
parallel_wait() {
debug_trace
if [[ ${#_parallel_pids[@]} -gt 0 ]]; then
log_debug "Waiting for ${#_parallel_pids[@]} parallel jobs to complete"
local failed_count=0
for pid in "${_parallel_pids[@]}"; do
    local exit_code=0
    # Capture the job's real status; `$?` inside `if ! wait ...` would
    # report the negated test result, not the job's exit code.
    wait "$pid" 2>/dev/null || exit_code=$?
    if [[ $exit_code -ne 0 ]]; then
        log_debug "Job $pid failed with exit code: $exit_code"
        ((failed_count++)) || true
    else
        log_debug "Job $pid completed successfully"
    fi
done
if [[ $failed_count -gt 0 ]]; then
log_warn "$failed_count parallel job(s) failed"
fi
fi
_parallel_pids=()
}
parallel_throttle() {
local max_jobs="${1:-$MAX_PARALLEL}"
debug_trace
log_debug "Throttling parallel jobs: ${#_parallel_pids[@]}/$max_jobs"
while [[ ${#_parallel_pids[@]} -ge "$max_jobs" ]]; do
local new_pids=()
for pid in "${_parallel_pids[@]}"; do
if kill -0 "$pid" 2>/dev/null; then
new_pids+=("$pid")
else
log_debug "Job $pid completed, removing from throttle list"
fi
done
_parallel_pids=("${new_pids[@]+"${new_pids[@]}"}")
[[ ${#_parallel_pids[@]} -ge "$max_jobs" ]] && sleep 0.1
done
log_debug "Throttle check passed: ${#_parallel_pids[@]}/$max_jobs"
}
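# Example (bounded-concurrency spawn loop; backup_stack stands in for a
# hypothetical per-stack job):
#   for stack in "${stacks[@]}"; do
#       parallel_throttle "$MAX_PARALLEL"
#       backup_stack "$stack" &
#       _parallel_pids+=("$!")
#   done
#   parallel_wait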
# ─── SSH helpers ──────────────────────────────────────────────────────────────
ssh_cmd() {
local host="$1" port="$2" user="$3"
shift 3
debug_trace
log_debug "Executing SSH command: host=$host, port=$port, user=$user"
log_verbose "Command: $*"
if [[ "$host" == "local" ]]; then
    log_debug "Local execution (no SSH)"
    local rc=0
    "$@" || rc=$?
    log_debug "Local command exit code: $rc"
    return $rc
fi
validate_port "$port" || return 1
log_debug "SSH connection: $user@$host:$port"
local rc=0
# NOTE: StrictHostKeyChecking=no skips host key verification for convenience
ssh -o StrictHostKeyChecking=no -o ConnectTimeout=10 -p "$port" "$user@$host" "$@" || rc=$?
log_debug "SSH command exit code: $rc"
return $rc
}
test_ssh() {
local host="$1" port="$2" user="$3"
[[ "$host" == "local" ]] && return 0
validate_port "$port" || return 1
if ssh_cmd "$host" "$port" "$user" 'echo OK' >/dev/null 2>&1; then
log_success "SSH connection to $user@$host:$port successful"
return 0
else
log_error "SSH connection failed to $user@$host:$port"
return 1
fi
}

1
go Submodule

Submodule go added at 036f0a5a8b

5
test-stack/.dockerignore Executable file
View File

@@ -0,0 +1,5 @@
README.md
*.md
.git
.gitignore

197
test-stack/README.md Executable file
View File

@@ -0,0 +1,197 @@
# DocWell Test Stack
A simple HTTP server stack for testing DocWell features.
## Overview
This is a minimal Docker Compose stack that runs an Nginx web server. It's designed to be used for testing all DocWell features including:
- Backup and restore operations
- Stack management (start/stop/restart)
- Image updates
- Service migration
- Volume management
- Health checks
## Quick Start
### 1. Install the Stack
```bash
# Copy to stacks directory (default: /opt/stacks)
sudo cp -r test-stack /opt/stacks/
# Or use your configured stacks directory
sudo cp -r test-stack /path/to/your/stacks/dir/
```
### 2. Start the Stack
```bash
cd /opt/stacks/test-stack
docker compose up -d
```
Or use DocWell:
```bash
./docwell --stack-start test-stack
```
### 3. Verify It's Running
```bash
# Check status
docker compose ps
# Or use DocWell
./docwell --stack-status test-stack
# Access the web interface
curl http://localhost:8080
# Or open in browser: http://localhost:8080
```
## Testing DocWell Features
### Backup Operations
```bash
# Backup this stack
./docwell --backup-stack test-stack
# Or backup all stacks
./docwell --backup
```
### Stack Management
```bash
# List all stacks
./docwell --stack-list
# Start stack
./docwell --stack-start test-stack
# Stop stack
./docwell --stack-stop test-stack
# Restart stack
./docwell --stack-restart test-stack
# View logs
./docwell --stack-logs test-stack
```
### Update Operations
```bash
# Check for updates
./docwell --update-check
# Update this stack
./docwell --update-stack test-stack
# Update all stacks
./docwell --update-all
```
### Migration Testing
```bash
# Migrate to another server
./docwell --migrate \
--migrate-service test-stack \
--migrate-source local \
--migrate-dest user@remote-host \
--migrate-method rsync
```
## Stack Details
- **Service**: Nginx web server
- **Image**: `nginx:alpine` (lightweight)
- **Port**: 8080 (host) → 80 (container)
- **Volume**: `web_data` (persistent cache storage)
- **Health Check**: Enabled (checks every 30s)
## Files
- `compose.yaml` - Docker Compose configuration
- `html/index.html` - Web interface served by Nginx
- `README.md` - This file
## Customization
### Change Port
Edit `compose.yaml`:
```yaml
ports:
- "9090:80" # Change 8080 to your desired port
```
### Change Image Version
Edit `compose.yaml`:
```yaml
image: nginx:latest # or nginx:1.25, etc.
```
### Add More Services
You can add additional services to test more complex scenarios:
```yaml
services:
web:
# ... existing config ...
db:
image: postgres:alpine
# ... database config ...
```
## Cleanup
To remove the test stack:
```bash
# Stop and remove
cd /opt/stacks/test-stack
docker compose down -v
# Remove directory
sudo rm -rf /opt/stacks/test-stack
```
## Troubleshooting
### Port Already in Use
If port 8080 is already in use, change it in `compose.yaml`:
```yaml
ports:
- "8081:80" # Use different port
```
### Permission Issues
Ensure the stacks directory is accessible:
```bash
sudo chown -R $USER:$USER /opt/stacks/test-stack
```
### Container Won't Start
Check logs:
```bash
docker compose logs
# Or
./docwell --stack-logs test-stack
```
## Notes
- This stack uses a named volume (`web_data`) to test volume migration features
- The health check ensures the service is actually running, not just started
- The web interface shows real-time information about the stack

263
test-stack/TESTING.md Executable file
View File

@@ -0,0 +1,263 @@
# DocWell Testing Guide
This document provides specific test scenarios for using the test-stack with DocWell.
## Prerequisites
1. Install the test stack:
```bash
cd test-stack
./setup.sh
```
2. Ensure DocWell is built:
```bash
cd go
go build -o docwell docwell.go
```
## Test Scenarios
### 1. Stack Management Tests
#### Test: List Stacks
```bash
./docwell --stack-list
```
**Expected**: Should show `test-stack` with status (running/stopped)
#### Test: Start Stack
```bash
./docwell --stack-start test-stack
```
**Expected**: Stack starts, container runs, web interface accessible at http://localhost:8080
#### Test: Check Status
```bash
./docwell --stack-status test-stack
```
**Expected**: Outputs "running" or "stopped"
#### Test: View Logs
```bash
./docwell --stack-logs test-stack
```
**Expected**: Shows Nginx logs
#### Test: Restart Stack
```bash
./docwell --stack-restart test-stack
```
**Expected**: Stack restarts, service remains available
#### Test: Stop Stack
```bash
./docwell --stack-stop test-stack
```
**Expected**: Stack stops, container removed, port 8080 no longer accessible
### 2. Backup Tests
#### Test: Backup Single Stack
```bash
./docwell --backup-stack test-stack
```
**Expected**:
- Creates backup in configured backup directory
- Backup file: `docker-<hostname>/<date>/docker-test-stack.tar.zst`
- Stack restarts if it was running
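The expected backup path can be computed ahead of time for scripted checks; the `/storage/backups` root is an assumption matching the paths used later in this guide:

```bash
# Build the expected archive path from hostname and today's date.
backup_root="/storage/backups"   # assumed backup root
expected="$backup_root/docker-$(hostname)/$(date +%Y-%m-%d)/docker-test-stack.tar.zst"
echo "$expected"
```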
#### Test: List Stacks for Backup
```bash
./docwell --backup-list
```
**Expected**: Lists all stacks with their status
#### Test: Backup All Stacks
```bash
./docwell --backup
```
**Expected**: Backs up all stacks including test-stack
### 3. Update Tests
#### Test: Check for Updates
```bash
./docwell --update-check
```
**Expected**: Shows update status for nginx:alpine image
#### Test: Update Single Stack
```bash
./docwell --update-stack test-stack
```
**Expected**:
- Pulls latest nginx:alpine image
- Recreates container if stack was running
- Service continues to work
#### Test: Update All Stacks
```bash
./docwell --update-all
```
**Expected**: Updates all stacks including test-stack
### 4. Cleanup Tests
#### Test: Cleanup Containers
```bash
# Stop the stack first
./docwell --stack-stop test-stack
./docwell --cleanup-containers
```
**Expected**: Removes stopped containers
#### Test: Cleanup Images
```bash
./docwell --cleanup-images
```
**Expected**: Removes unused images (may remove old nginx images)
#### Test: Cleanup Volumes
```bash
./docwell --cleanup-volumes
```
**Expected**: Removes unused volumes (be careful - this removes data!)
### 5. Migration Tests
#### Test: Migrate to Another Server (Clone Mode)
```bash
./docwell --migrate \
--migrate-service test-stack \
--migrate-source local \
--migrate-dest user@remote-host \
--migrate-method rsync
```
**Expected**:
- Creates backup of config and volumes
- Transfers to destination
- Starts service on destination
- Original service keeps running
#### Test: Migrate (Transfer Mode)
```bash
./docwell --migrate \
--migrate-service test-stack \
--migrate-source local \
--migrate-dest user@remote-host \
--migrate-method rsync \
--migrate-transfer
```
**Expected**:
- Stops service on source
- Transfers everything
- Starts on destination
- Source service is stopped
### 6. Interactive Mode Tests
#### Test: Interactive Backup
```bash
./docwell
# Select option 1 (Backup Stacks)
# Select test-stack
```
**Expected**: Interactive menu works, backup completes
#### Test: Interactive Stack Manager
```bash
./docwell
# Select option 4 (Stack Manager)
# Select test-stack
# Try start/stop/restart/update/logs
```
**Expected**: All operations work through interactive menu
## Verification Steps
After each test, verify:
1. **Service Status**:
```bash
docker compose -f /opt/stacks/test-stack/compose.yaml ps
```
2. **Web Interface**:
```bash
curl http://localhost:8080
# Should return HTML content
```
3. **Volume Exists**:
```bash
docker volume ls | grep test-stack
```
4. **Backup Created**:
```bash
ls -lh /storage/backups/docker-*/$(date +%Y-%m-%d)/
```
## Troubleshooting
### Port Already in Use
If port 8080 is in use, edit `compose.yaml`:
```yaml
ports:
- "8081:80" # Change port
```
### Permission Denied
```bash
sudo chown -R $USER:$USER /opt/stacks/test-stack
```
### Container Won't Start
```bash
cd /opt/stacks/test-stack
docker compose logs
docker compose up -d
```
### Backup Fails
- Check backup directory permissions
- Ensure enough disk space
- Check logs: `tail -f /var/log/docwell.log`
## Test Checklist
- [ ] Stack listing works
- [ ] Start/stop/restart operations work
- [ ] Status checking works
- [ ] Log viewing works
- [ ] Backup creates valid archive
- [ ] Update pulls new images
- [ ] Cleanup removes unused resources
- [ ] Migration transfers correctly
- [ ] Interactive mode works
- [ ] Web interface accessible after operations
## Performance Testing
For stress testing:
1. **Multiple Stacks**: Create several test stacks
2. **Parallel Operations**: Run multiple operations simultaneously
3. **Large Volumes**: Add data to volumes to test transfer speeds
4. **Network Testing**: Test migration over slow networks
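A quick way to generate the multiple stacks mentioned above is to clone the test stack with unique names and host ports. The sketch below works in a scratch directory so it is safe to run anywhere; in real use the copies would go under your stacks directory (e.g. `/opt/stacks`), and the port arithmetic is an illustrative assumption. `sed -i` as used here is the GNU form.

```bash
# Clone a minimal stack directory three times, bumping the host port each
# time. A scratch dir stands in for the real stacks directory.
stacks="$(mktemp -d)"
mkdir -p "$stacks/test-stack"
printf 'ports:\n  - "8080:80"\n' > "$stacks/test-stack/compose.yaml"
for i in 1 2 3; do
    cp -r "$stacks/test-stack" "$stacks/test-stack-$i"
    sed -i "s/8080:80/$((8080 + i)):80/" "$stacks/test-stack-$i/compose.yaml"
done
ls "$stacks"   # four stack directories
```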
## Cleanup After Testing
```bash
# Stop and remove stack
cd /opt/stacks/test-stack
docker compose down -v
# Remove directory
sudo rm -rf /opt/stacks/test-stack
# Clean up backups (optional)
rm -f /storage/backups/docker-*/$(date +%Y-%m-%d)/docker-test-stack.tar.zst
```

27
test-stack/compose.yaml Executable file
View File

@@ -0,0 +1,27 @@
services:
  web:
    image: nginx:alpine
    container_name: test-stack_web
    ports:
      - "8080:80"
    volumes:
      - ./html:/usr/share/nginx/html:ro
      - web_data:/var/cache/nginx
    restart: unless-stopped
    labels:
      - "com.docker.compose.project=test-stack"
    healthcheck:
      test: ["CMD", "wget", "--quiet", "--tries=1", "--spider", "http://localhost/"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s

volumes:
  web_data:
    driver: local

networks:
  default:
    name: test-stack_network

115
test-stack/html/index.html Executable file
View File

@@ -0,0 +1,115 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>DocWell Test Stack</title>
<style>
body {
font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
max-width: 800px;
margin: 50px auto;
padding: 20px;
background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
color: white;
}
.container {
background: rgba(255, 255, 255, 0.1);
padding: 30px;
border-radius: 10px;
box-shadow: 0 8px 32px rgba(0, 0, 0, 0.1);
backdrop-filter: blur(10px);
}
h1 {
margin-top: 0;
text-align: center;
}
.info {
background: rgba(255, 255, 255, 0.2);
padding: 15px;
border-radius: 5px;
margin: 20px 0;
}
.status {
display: inline-block;
padding: 5px 15px;
border-radius: 20px;
background: #4CAF50;
font-weight: bold;
}
code {
background: rgba(0, 0, 0, 0.3);
padding: 2px 6px;
border-radius: 3px;
font-family: 'Courier New', monospace;
}
ul {
line-height: 1.8;
}
</style>
</head>
<body>
<div class="container">
<h1>🚀 DocWell Test Stack</h1>
<div class="info">
<p><span class="status">✓ Running</span></p>
<p>This is a test stack for DocWell Docker management tool.</p>
</div>
<div class="info">
<h2>Stack Information</h2>
<ul>
<li><strong>Stack Name:</strong> <code>test-stack</code></li>
<li><strong>Service:</strong> Nginx Web Server</li>
<li><strong>Image:</strong> nginx:alpine</li>
<li><strong>Port:</strong> 8080 (host) → 80 (container)</li>
<li><strong>Volume:</strong> <code>web_data</code> (persistent cache)</li>
</ul>
</div>
<div class="info">
<h2>Testing DocWell Features</h2>
<p>Use this stack to test:</p>
<ul>
<li>✅ Stack backup and restore</li>
<li>✅ Start/stop/restart operations</li>
<li>✅ Image updates</li>
<li>✅ Service migration</li>
<li>✅ Volume management</li>
<li>✅ Health checks</li>
</ul>
</div>
<div class="info">
<h2>Quick Commands</h2>
<pre style="background: rgba(0,0,0,0.3); padding: 15px; border-radius: 5px; overflow-x: auto;">
# View logs
docker compose logs -f
# Check status
docker compose ps
# Restart service
docker compose restart
# Stop service
docker compose down
# Start service
docker compose up -d
</pre>
</div>
<div class="info">
<p style="text-align: center; margin-bottom: 0;">
<small>Generated: <span id="timestamp"></span></small>
</p>
</div>
</div>
<script>
document.getElementById('timestamp').textContent = new Date().toLocaleString();
</script>
</body>
</html>

56
test-stack/setup.sh Executable file
View File

@@ -0,0 +1,56 @@
#!/bin/bash
# Setup script for DocWell test stack
set -e
# Default stacks directory
STACKS_DIR="${STACKS_DIR:-/opt/stacks}"
STACK_NAME="test-stack"
# Colors
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
echo -e "${GREEN}DocWell Test Stack Setup${NC}"
echo "================================"
echo ""
# Check if running as root or with sudo
if [ "$EUID" -ne 0 ]; then
echo -e "${YELLOW}Note: You may need sudo to copy to $STACKS_DIR${NC}"
echo ""
fi
# Get absolute path of test-stack directory
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# Create stacks directory if it doesn't exist
if [ ! -d "$STACKS_DIR" ]; then
echo "Creating stacks directory: $STACKS_DIR"
sudo mkdir -p "$STACKS_DIR"
fi
# Copy test stack (guard first: cp -r nests into an existing destination)
if [ -d "$STACKS_DIR/$STACK_NAME" ]; then
    echo -e "${YELLOW}Replacing existing $STACKS_DIR/$STACK_NAME${NC}"
    sudo rm -rf "$STACKS_DIR/$STACK_NAME"
fi
echo "Copying test stack to $STACKS_DIR/$STACK_NAME..."
sudo cp -r "$SCRIPT_DIR" "$STACKS_DIR/$STACK_NAME"
# Set permissions
echo "Setting permissions..."
sudo chown -R "$USER:$USER" "$STACKS_DIR/$STACK_NAME"
echo ""
echo -e "${GREEN}✓ Setup complete!${NC}"
echo ""
echo "Next steps:"
echo " 1. Start the stack:"
echo " cd $STACKS_DIR/$STACK_NAME"
echo " docker compose up -d"
echo ""
echo " 2. Or use DocWell:"
echo " ./docwell --stack-start $STACK_NAME"
echo ""
echo " 3. Access the web interface:"
echo " http://localhost:8080"
echo ""