Migrating to FileRun - again...

In a previous post I described my migration away from Nextcloud to the much cleaner solution FileRun - well, it was fun while it lasted! The developer had responded to my ticket about the thumbnail cache, and disabling the Apache X-Sendfile download accelerator solved the issue of thumbnails not loading for external web albums. However, a new problem appeared: whenever I accessed large file galleries in FileRun through my reverse proxy, I/O wait on my server spiked massively, rendering everything hosted there unresponsive. The FileRun Docker container could no longer be stopped. The effects rippled through my home network: AdGuard stopped resolving DNS queries, Home Assistant could no longer switch the lights, and so on. I had to hard-reboot the server by cutting the power, risking data corruption every time. After spending hours digging into processes with top, iotop, log files and more, I was able to narrow the problem down to FileRun's internal Apache server in conjunction with Caddy; I did not know how to fix it, though, and could not find the issue described anywhere else online. Access via internal IPs and Tailscale still worked fine.

Enter FileBrowser

Since I was about to leave for vacation, I decided to give up on the issue for the time being and look for alternatives for accessing my files during my absence. The solution had to be rock-solid, since I would not be able to cut the power from abroad. Just using Tailscale would have been an option, but I wanted to make sure I also had access from an Internet café in case I lost access to my personal devices. Therefore, I decided to try out the simplest solution I was aware of: FileBrowser.

FileBrowser used to be a Caddy plug-in but has matured in recent years and can be installed as its own Docker container. The user files are bound to the container, and multiple users with their own root paths can be created - that's it! FileBrowser then serves user files for download under its IP address. It has fewer bells and whistles than a full-fledged solution like FileRun, but it is definitely sufficient for offering simple file downloads or browsing some pictures.
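As an illustration, a docker-compose service along these lines is enough - the host paths, port mapping and database location below are my assumptions, so check them against the official FileBrowser image documentation:

```yaml
services:
  filebrowser:
    image: filebrowser/filebrowser
    container_name: filebrowser
    ports:
      - "8080:80"                       # FileBrowser listens on port 80 inside the container
    volumes:
      - /srv/userfiles:/srv             # the user files to serve
      - ./filebrowser.db:/database.db   # persistent settings/user database
    restart: unless-stopped
```

Users and their root paths can then be created from the web UI or the filebrowser CLI.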

Authentication with Authelia

Since I had just recently enabled Authelia, I decided to use its proxy authentication feature for FileBrowser, which supports this method as well. From the FileBrowser CLI, configuring the method to work with my existing Authelia settings and Caddyfile was as easy as this:

filebrowser config set --auth.method=proxy --auth.header=Remote-User
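On the Caddy side, a site block along these lines forwards each request to Authelia and passes the Remote-User header on to FileBrowser - a sketch only, assuming Authelia is reachable at authelia:9091 and exposes the legacy /api/verify endpoint (newer Authelia versions use a different forward-auth endpoint), with filebrowser:80 as the upstream:

```
file.DOMAIN.tld {
        forward_auth authelia:9091 {
                uri /api/verify?rd=https://auth.DOMAIN.tld/
                copy_headers Remote-User
        }
        reverse_proxy filebrowser:80
}
```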

The only thing I had to adjust in addition was access control for public shares from FileBrowser - I did not want every exposed link to require authentication via Authelia, since I might need to share files with external parties who have no Authelia user account. Therefore, I allowed certain path bypasses in the Authelia config:

    - domain: "file.DOMAIN.tld"
      resources:
        - "^/api/public/dl/.*"
        - "^/share/.*"
        - "^/static/js/.*"
        - "^/static/css/.*"
        - "^/static/img/.*"
        - "^/static/themes/.*"
        - "^/static/fonts/.*"
      policy: bypass

In theory, this would be a great solution; in the current version, however, proxy authentication seems to be broken. As a result, an additional FileBrowser-native login screen is still presented for now.

I am currently hosting a photo gallery with images from an event for our friends and families - this was also based on FileRun and needed to be replaced. Even if I could personally still access FileRun behind Tailscale, every access to the external gallery (via my reverse proxy) could be fatal for my server. Since FileBrowser shares only allow downloading individual files, not browsing the pictures online, this solution was not really sufficient for my needs. Luckily, there was a very easy fix: I started an additional Docker container for pigallery2, a very lightweight, fast and responsive image library with a nice UI. I only needed to mount the photo volume into the container, create an external link and redirect the shared URL to point to the new album - done! I cannot recommend this tool enough if all you want to do is expose photo albums from your server.
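For reference, the pigallery2 container setup is equally small - the host paths and port mapping below are my own assumptions:

```yaml
services:
  pigallery2:
    image: bpatrik/pigallery2:latest
    container_name: pigallery2
    ports:
      - "3000:80"                            # web UI
    volumes:
      - /srv/photos:/app/data/images:ro      # photo library, mounted read-only
      - ./pigallery2/config:/app/data/config # persistent settings
      - ./pigallery2/db:/app/data/db         # gallery database
      - ./pigallery2/tmp:/app/data/tmp       # thumbnail cache
    restart: unless-stopped
```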

Solving WebDAV

The last requirement I had was external access to a WebDAV folder for a certain app I am using. Again, I could just use Tailscale and an internal IP for this, but I did not want to rely on FileRun if it was not stable for all my use cases. Instead of spinning up yet another container, I decided to use a Caddy plug-in to directly expose a folder mounted into Caddy with the webdav directive! To achieve this, I first had to build the Caddy Docker image myself, since the default Caddy installation does not ship with the webdav plug-in. I used Portainer for this, but the Dockerfile can of course also be built with regular Docker commands:

FROM caddy:builder-alpine AS builder

RUN xcaddy build \
    --with github.com/mholt/caddy-webdav

FROM caddy:alpine

COPY --from=builder /usr/bin/caddy /usr/bin/caddy
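With plain Docker commands, building and running the custom image looks like this - the image tag and mount paths are my own choices, not official names:

```shell
# build the image from the Dockerfile above
docker build -t caddy-webdav .

# run it with the Caddyfile and the WebDAV folder mounted in
docker run -d --name caddy \
  -p 80:80 -p 443:443 \
  -v $PWD/Caddyfile:/etc/caddy/Caddyfile \
  -v /srv/webdav:/srv \
  caddy-webdav
```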

After restarting Caddy with the new image and my WebDAV folder mounted at /srv within the container, I could expose it externally with this entry in my Caddyfile:

my.domain.tld {
        basicauth {
                user pw_hash
        }
        webdav {
                root /srv
        }
}
A hashed password can be created directly from the Caddy container by running the caddy hash-password command. I could even use Authelia to protect WebDAV shares with an additional container now, but have refrained from that so far because the WebDAV client would need to handle session cookies.
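For example (the container name and plaintext are placeholders):

```shell
docker exec -it caddy caddy hash-password --plaintext 'my-secret'
```

The resulting hash goes into the basicauth directive in place of pw_hash.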

And… revert it all

Initially, I mentioned that I assumed the root cause to lie somewhere between Caddy and Apache - so what if I used nginx for FileRun instead of Apache? I could not let go of the thought, since I vastly preferred FileRun to my new patchwork of small solutions. So I decided to try running FileRun with nginx instead of Apache to see whether this would fix my issues!

Since I could not find an up-to-date Docker container for FileRun, I decided to set up a dedicated virtual machine following the instructions on the official FileRun website. In my previous article I stated that I did not want to run Authentik partly due to its resource requirements of up to 2 GB - in contrast, FileRun is supposed to be one of the main building blocks of my infrastructure, which is why I dynamically allocated the VM 2-4 GB of RAM for stability and performance. While Ubuntu Server 22.10 had recently been released, I still went with the Ubuntu 20.04 image: FileRun relies on PHP 7.4, and while a repository can be added to install version 7.4 instead of 8 on newer versions of Ubuntu Server, some dependencies are not yet available there. In contrast to the bundled Docker container, I had to install some additional libraries for thumbnail generation (vips, imagemagick, stl-thumb, libreoffice) myself, but this was a painless experience.

I adjusted a couple of additional settings. I fixed an issue where I had stored the image thumbnails in the root of the web server and moved them to the VM home folder instead, granting the www-data user ownership of this folder via chown. Furthermore, I had to tweak the msize values for the p9 volume mount into the VM to increase throughput performance. All in all, I was able to fully replicate my Docker setup from an end-user perspective.

When I then tested access through Caddy and the external URL… everything worked! I could connect to edit files in my OnlyOffice instance, I could share external photo albums with thumbnails stored on the cache drive, and nothing crippled my server. I am still planning to set up nginx X-Accel as a download accelerator for large files, but this will only be the cherry on top: just a small entry in the nginx config file at /etc/nginx/conf.d/default.conf and a reload via service nginx reload. Therefore, FileBrowser with Authelia, pigallery2 and WebDAV via Caddy are all obsolete for me once more… but figuring out how to run this setup more stably and securely (in spite of only treating symptoms and never finding the root cause of the initial server freezes) was well worth it to me!
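The X-Accel entry I have in mind would be roughly the following location block in /etc/nginx/conf.d/default.conf - a sketch only, since the internal location name and the user-files path are my assumptions, and FileRun must be configured to emit the matching X-Accel-Redirect header:

```nginx
# Internal-only location: FileRun responds with an X-Accel-Redirect
# header pointing here, and nginx then serves the file directly,
# freeing up the PHP worker.
location /protected/ {
    internal;
    alias /home/filerun/user-files/;
}
```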