Any helpful tips for general care and feeding I should be doing on a regular basis?

I know I need to keep an eye on updates and re-run my ansible setup from time to time to stay up to date.

But I have also been keeping an eye on my VPS metrics to see when/if I need to beef up the server.

One thing I am noticing is steadily increasing disk utilization, which mostly makes sense, except it seems a bit faster than I expected, since almost all media is linked from external sites rather than uploaded directly to my instance.

Anything I can do to manage that short of just adding more space? Like, are there logs or cached content that need to be purged from time to time?
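In case it's useful context, here's roughly what I've been running to see where the space is going (just a sketch; the last path assumes Docker's default data root):

df -h                          # overall disk usage
docker system df               # what docker itself is using (images, containers, volumes)
sudo du -sh /var/lib/docker/*  # break down docker's data root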

Thank you!

[–] [email protected] 2 points 1 year ago

Just keep an eye on the GitHub page if you want to stay updated at all times. Other than that, just check up on your storage use from time to time. You can also set up a job to restart the server every once in a while, but that's not really necessary.
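If you do want the periodic restart, a crontab entry along these lines would do it (the path is just the typical ansible install dir, so adjust for your own domain):

# hypothetical weekly restart, Sundays at 04:00
0 4 * * 0 cd /srv/lemmy/example.com && docker-compose restart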

[–] [email protected] 2 points 1 year ago

The main cause for the steady rise in disk usage that I'm seeing is the activities table, which contains the full JSON of all ActivityPub messages seen by the instance. It appears Lemmy automatically removes entries older than 6 months though.
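If you want to check the same thing on your instance, a query like this will show the biggest tables. The container, user, and database names are just what a stock docker-compose install tends to use, so adjust to taste (and note the table may be named activity rather than activities depending on your Lemmy version):

docker exec {postgres container name} psql -U lemmy -c \
  "SELECT relname, pg_size_pretty(pg_total_relation_size(relid))
   FROM pg_statio_user_tables
   ORDER BY pg_total_relation_size(relid) DESC
   LIMIT 10;"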

[–] [email protected] 1 points 1 year ago

Gotcha, thanks! That's good to know. Based on the originating ticket: https://github.com/LemmyNet/lemmy/issues/1133

Sounds like it might be safe for me to purge that table a bit more often as well.
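Something like this is what I have in mind, purely as a sketch; the table and column names assume the schema from around that issue (activity, with a published timestamp), and I'd take a DB backup first:

docker exec {postgres container name} psql -U lemmy -c \
  "DELETE FROM activity WHERE published < now() - interval '1 month';"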

Dumb question: how are you profiling your DB (re: your mention of getting a better idea of which tables might be bloated)? Just SSHing into your box and connecting directly to the DB, or are there other recommended workflows?

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago)

UPDATE:

If anyone else is running into consistently rising disk usage, I am pretty sure this is my issue (re: logs running with no size cap):

https://lemmy.eus/post/172518

Trying out ^ and will update with my findings if it helps.

[–] [email protected] 1 points 1 year ago

After some tinkering, **yes**, this indeed was my issue. The logs for pictrs and lemmy in particular were between 3 and 8 GB after only a couple of weeks of info-level logging.

Steps to fix below. The post above has more detail, but I'm adding my full workflow in case it helps folks, since some of this wasn't super apparent to me. These steps assume a docker/ansible install:

  1. SSH to your instance.

  2. Change to your instance install dir

most likely: cd /srv/lemmy/{domain.name}

  3. List currently running containers

docker ps --format '{{.Names}}'

Now, for each docker container name (there's a loop that combines steps 3-6 at the end of this post):

  4. Find the path/name of the associated log file:

docker inspect --format='{{.LogPath}}' {one of the container names from above}

  5. Optionally check the file size of the log

ls -lh {path to log file from the inspect command}

  6. Clear the log

truncate -s 0 {path to log file from the inspect command}

After you have cleared any logs you want to clear:

  7. Modify docker-compose.yml, adding the following to each container:
logging:
  driver: "json-file"
  options:
    max-size: "100m"
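# for context, the logging key sits under each service in docker-compose.yml;
# the service name here (lemmy) is just one of the usual ones from the ansible
# install, so repeat for lemmy-ui, pictrs, postgres, etc.:
services:
  lemmy:
    logging:
      driver: "json-file"
      options:
        max-size: "100m"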
  8. Restart the containers

docker-compose restart
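For what it's worth, steps 3 through 6 can be rolled into one loop. Just a sketch, and it assumes you have root, since the Docker log files are owned by root:

for c in $(docker ps --format '{{.Names}}'); do
  log=$(docker inspect --format='{{.LogPath}}' "$c")  # path to the container's json log
  ls -lh "$log"                                       # check the size before clearing
  sudo truncate -s 0 "$log"                           # clear it
done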