After some tinkering, **yes**, this was indeed my issue. The logs for pictrs and lemmy in particular were between 3 and 8 GB after only a couple of weeks of info-level logging.
Steps to fix (the post above has more detail, but I'm adding my full workflow in case it helps folks; some of this wasn't obvious to me). These steps assume a docker/ansible install:
- SSH to your instance.
- Change to your instance's install dir, most likely: `cd /srv/lemmy/{domain.name}`
- List the currently running containers: `docker ps --format '{{.Names}}'` (note it's `.Names`, plural; `.Name` will error)
Now for each docker container name:
- Find the path of the associated log file: `docker inspect --format='{{.LogPath}}' {one of the container names from above}`
- Optionally check the size of the log: `ls -lh {path to log file from the inspect command}`
- Clear the log: `truncate -s 0 {path to log file from the inspect command}`
After you have cleared any logs you want to clear:
- Modify docker-compose.yml, adding the following under each service:

  ```yaml
  logging:
    driver: "json-file"
    options:
      max-size: "100m"
  ```
- Restart the containers: `docker-compose restart`
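For context, the `logging` block nests at the same level as `image` and `ports` under each service. A minimal sketch of where it goes (the service and image names here are illustrative; keep whatever your docker-compose.yml already uses):

```yaml
services:
  lemmy:
    image: dessalines/lemmy:latest   # illustrative; keep your install's pinned tag
    logging:
      driver: "json-file"
      options:
        max-size: "100m"   # cap each json log file at ~100 MB
        # max-file: "3"    # optionally also rotate across a few files
```

With `max-size` set, Docker rotates the log once it hits the limit instead of letting it grow without bound.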