# Containers-Workspace

Various useful and useless Dockerfiles, often experimental and work in progress.
## system-toolbox

Fedora-based container with many useful tools preinstalled for various debugging and troubleshooting purposes. Run `help-toolbox` inside to see what you can do in there.

Typical container run options that allow for host data access:
```bash
podman run --rm -it --privileged \
  --network host --pid host --ipc host --no-hosts --ulimit host \
  --userns host \
  --name toolbox toolbox
```
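Once it is up, the same helper can also be invoked from another shell on the host; a minimal sketch, assuming `help-toolbox` is on the image's PATH:

```bash
# run the built-in help inside the already running "toolbox" container
podman exec -it toolbox help-toolbox
```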
## cloud-toolbox

Sounds huge, but it is just a set of tools for cloud-based work, like openstack-cli, rclone, the OpenShift CLI, etc. Also contains fzf and bash-completion. Mount your bash_history for the best experience.
```bash
podman run --rm -it \
  -v "$HOME/.bash_history:/root/.bash_history" \
  --security-opt label=disable \
  cloud-toolbox:latest
```
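If rclone is the main use case, its config can be mounted as well; a sketch assuming the default host config location `~/.config/rclone` (adjust if yours differs):

```bash
podman run --rm -it \
  -v "$HOME/.bash_history:/root/.bash_history" \
  -v "$HOME/.config/rclone:/root/.config/rclone:ro" \
  --security-opt label=disable \
  cloud-toolbox:latest
```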
## gui-container

gui-container is an experiment for running apps with a GUI.

How to run with default, permissive options:
```bash
podman run --privileged -it \
  -e XDG_RUNTIME_DIR=/runtime_dir \
  -e WAYLAND_DISPLAY="$WAYLAND_DISPLAY" \
  -e DISPLAY="$DISPLAY" \
  -v /tmp/.X11-unix:/tmp/.X11-unix:rw \
  -v "$HOME/.Xauthority:/root/.Xauthority:ro" \
  -v "$XDG_RUNTIME_DIR:/runtime_dir:rw" \
  --entrypoint bash \
  --name "gui_container" \
  gui-container:latest
```
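From the bash entrypoint, apps can then be launched directly; a sketch where `xeyes` stands in for whatever GUI app the image actually ships (hypothetical, not necessarily installed):

```bash
# inside the container: check that the forwarded displays are visible
echo "DISPLAY=$DISPLAY WAYLAND_DISPLAY=$WAYLAND_DISPLAY"
# launch a GUI app (xeyes is a placeholder)
xeyes &
```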
Minimal-permissions example (Wayland only). Mounting just the display server socket; there will be no sound or anything else:
```bash
podman run -it --security-opt label=disable \
  -e XDG_RUNTIME_DIR=/runtime_dir \
  -e WAYLAND_DISPLAY="$WAYLAND_DISPLAY" \
  -v "$XDG_RUNTIME_DIR/wayland-0:/runtime_dir/wayland-0:rw" \
  --entrypoint bash --name "gui_container" \
  gui-container:latest
```
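Note that the socket is not always named `wayland-0`; a sketch deriving the mount from `$WAYLAND_DISPLAY` instead, assuming it holds a bare socket name rather than an absolute path:

```bash
podman run -it --security-opt label=disable \
  -e XDG_RUNTIME_DIR=/runtime_dir \
  -e WAYLAND_DISPLAY="$WAYLAND_DISPLAY" \
  -v "$XDG_RUNTIME_DIR/$WAYLAND_DISPLAY:/runtime_dir/$WAYLAND_DISPLAY:rw" \
  --entrypoint bash --name "gui_container" \
  gui-container:latest
```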
Starting D-Bus inside the container (dbus-launch prints the session bus variables, which the export picks up):

```bash
export $(dbus-launch)
```
Allowing podman to connect to the X display as "non-network local connections":

```bash
xhost +"local:podman@"
```
Unsetting WAYLAND_DISPLAY or DISPLAY can force apps to use the other one:

```bash
unset DISPLAY
# or
unset WAYLAND_DISPLAY
```
To make Qt-based apps work:

```bash
export QT_QPA_PLATFORM=wayland
```
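GTK-based apps usually pick Wayland automatically, but they can be forced the same way (`GDK_BACKEND` is a standard GTK variable, not specific to this image):

```bash
export GDK_BACKEND=wayland
```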
## rathole

rathole image, compiled from source.
## snowflake

Tor Project snowflake image, compiled from source.
## Tor relay/bridge node

```bash
# prepare
cd tor/;
podman build -t tornode .;
chmod 777 ./data ./logs;

# run (network host for easy port bind on ipv6)
podman run -d --read-only --network host \
  -v "/home/user/torrc.conf:/torrc:rw,Z" \
  -v "/home/user/tor/logs:/var/log:Z,rw" \
  -v "/home/user/tor/data:/var/lib/tor:Z,rw" \
  --name tornode tornode:latest

# prepare systemd service for reboot persistence
podman generate systemd --new --name tornode > /etc/systemd/system/tornode.service;
restorecon -v /etc/systemd/system/tornode.service;
systemctl daemon-reload;
systemctl enable --now tornode.service;

# view nyx dashboard
podman exec -it tornode nyx
```
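To confirm the node bootstraps, the mounted log directory can be tailed; a sketch assuming `torrc` logs to a file such as `notices.log` (the filename depends on the `Log` lines in your config):

```bash
tail -f /home/user/tor/logs/notices.log
```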
## Wireguard

Simple container that will set up a wireguard interface according to `/data/wg0.conf` and then replace the PID 1 process with `sleep infinity`.

The MASQUERADE rule required for accessing external networks is done with nftables, so it should work with only the nftables kernel modules present; iptables-only modules can be missing.

Before setting up the wg interface, the entrypoint will execute any files found in `/setup.d/`.
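For example, a setup script could enable IPv4 forwarding before the interface comes up; a minimal sketch (the filename `10-forwarding.sh` is arbitrary, and writing sysctls needs the privileged run shown below):

```bash
#!/bin/sh
# ./setup/10-forwarding.sh -- run by the entrypoint before wg is configured
sysctl -w net.ipv4.ip_forward=1
```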
`PostUp` and `PostDown` in the network interface config should look like this:

```ini
PostUp = nft add table inet filter; nft add chain inet filter forward { type filter hook forward priority 0 \; }; nft add rule inet filter forward iifname "%i" accept; nft add rule inet filter forward oifname "%i" accept; nft add table inet nat; nft add chain inet nat postrouting { type nat hook postrouting priority 100 \; }; nft insert rule inet nat postrouting tcp flags syn / syn,rst counter tcp option maxseg size set rt mtu; nft add rule inet nat postrouting oifname "eth*" masquerade
PostDown = nft delete table inet filter; nft delete table inet nat;
```
The `nft insert rule inet nat postrouting tcp flags syn / syn,rst counter tcp option maxseg size set rt mtu` rule (TCP MSS clamping) is optional, but recommended if there are virtual networks on the client side from which discovering the MTU of the whole path can be difficult.
Example run (requires root and --privileged for the nftables setup):

```bash
podman run --privileged --name wireguard -d \
  -v './config:/data:ro' \
  -v './setup:/setup.d:ro' \
  wireguard:latest
```
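Once running, the tunnel can be inspected from the host; a sketch assuming the `wg` tool is present in the image:

```bash
podman exec wireguard wg show
```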
## zabbix-agent

Very simple Alpine-based zabbix-agent image providing the additional deps required for SMART monitoring.

Setting up such a containerized agent on a systemd-based system:
```bash
systemctl stop zabbix-agent.service;
podman rm -f zabbix-agent;
rm -f /etc/systemd/system/zabbix-agent.service;

podman run --restart no \
  --network host --pid host --ipc host --no-hosts --ulimit host --userns host \
  --privileged \
  -v "/path/to/custom/config.conf:/etc/zabbix/zabbix_agent2.conf:ro" \
  -v "/sys:/sys:ro" \
  -v "/sys/fs/cgroup:/sys/fs/cgroup:ro" \
  -v "/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw" \
  --name zabbix-agent \
  -d localhost/zabbix-agent;

podman generate systemd --new --name zabbix-agent > /etc/systemd/system/zabbix-agent.service;
restorecon -v /etc/systemd/system/zabbix-agent.service;
systemctl daemon-reload;
systemctl enable --now zabbix-agent.service;
```
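To verify the agent came up, check its default passive port (10050) and the container logs; a quick sketch, assuming a passive-mode agent on the default port:

```bash
ss -tlnp | grep 10050
podman logs zabbix-agent
```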