philipstorry

joined 1 year ago
[–] philipstorry 1 points 1 year ago

My company uses Acronis M365 Backup Protection.

I believe it was selected because the licensing options and costs were much better than Veeam's offering.

To be honest I can't comment much further - it was set up by a colleague, it runs in a different country, and I've never needed to do a restore from it because I've always been able to recover lost files/emails from the recycle bin/recoverable items.

It's more of an insurance policy against ransomware or other malware than anything else. It's good to have, but not used day to day.

Although that does remind me that we are probably due a test restore. I'll add that to the list for this month. Thanks! 😉

4
submitted 1 year ago* (last edited 1 year ago) by philipstorry to c/sysadmin
 

One of the more interesting uses of AI is to power natural language interfaces.

Basically this means plumbing them into reporting layers so that the AI can figure out what it is you're asking, create appropriate queries for the data stores, execute them, and then present (and possibly interpret) the results.

Imagine an ELK stack that you're shipping all your logs into. As well as getting some pretty graphs for management to coo at, you could also just ask an AI interface connected to it: "Tell me who authenticated with $platform last Friday, in a table ordered by the number of authentication attempts" and it would just return that.
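
To make that concrete, here's a minimal sketch of the plumbing - my own illustration, not anything from the diary. It assumes the openai package and the 8.x elasticsearch Python client, and the index name and field names are entirely invented:

```python
# Sketch: natural language -> Elasticsearch Query DSL via an LLM.
# The "auth-logs-*" index and its fields are hypothetical examples.
import json

from elasticsearch import Elasticsearch
from openai import OpenAI

llm = OpenAI()  # reads OPENAI_API_KEY from the environment
es = Elasticsearch("http://localhost:9200")

SYSTEM = (
    "Translate the user's question into an Elasticsearch Query DSL "
    "query for an index of auth logs with fields user.name, "
    "event.outcome and @timestamp. Reply with the JSON query only."
)

def ask(question: str) -> dict:
    # Have the model draft the query...
    resp = llm.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": question},
        ],
    )
    query = json.loads(resp.choices[0].message.content)
    # ...then run it against the cluster and hand back the raw result.
    return es.search(index="auth-logs-*", query=query).body

print(ask("Who authenticated last Friday, ordered by attempt count?"))
```

(In real life you'd want to validate the generated JSON before executing it, rather than letting a model run arbitrary queries against your logs.)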

Kinda tempting, huh?

Well, this link is to a SANS Internet Storm Center diary where they look at doing that from an Incident Response point of view.

The short version - your job is safe. For now.

But I think it's a good read simply because it gives us ideas about how we could use AI, and a pointer to what's likely to work. The fact that multiple models were tested is particularly interesting...

What do you think?

[–] philipstorry 1 points 1 year ago

It's fine.

Like any CMS, it has a seemingly constant low level of patching to be applied. The more third party modules and themes you have, the worse that gets.

Remove unused modules that aren't core. Same with themes. That'll make things easier.

Otherwise its overheads are just Apache/nginx, MySQL/MariaDB, and maintenance of the TLS certificate, plus OS patching. All fairly well-understood stuff that you should have no issues with.

[–] philipstorry 1 points 1 year ago

You're welcome! 👍

[–] philipstorry 15 points 1 year ago (2 children)

I may as well make myself unpopular with some context...

Some here have compared NTFS with ZFS, which is unfair as ZFS is over 12 years younger. In 1993 machines had an average of less than 4MB of RAM, and the average disk size was probably somewhere in the 80-100MB range. NTFS required more RAM - if you wanted to run it I think you had to have 12MB of RAM minimum, maybe even 16MB. If you didn't have that, you had to install your Windows NT 3.1 copy with FAT...

A better comparison filesystem would be XFS, which was developed at around the same time and saw its first release in 1994.

XFS has had a lot more development of late than NTFS has, and it could be argued that because of that it now has the edge. But both are venerable survivors of that era. Both are reliable, robust, feature-rich and widely deployed.

A lot of problems that people have with NTFS are to do with the way Windows handles disk access rather than the filesystem itself. A filesystem is more than just an on-disk layout and a bit of code to read from or write to it; it also has to interact with OS disk buffering systems, security systems, caching mechanisms, and possibly even things like file locking and notification mechanisms.

Windows has a concept of the "installable file system" - these days it's primarily a way to load filter drivers that can inspect all I/O operations. It's how Windows security programs like antivirus work, but also how Windows prevents writes to its own folders by ordinary users. As you can guess, that slows things down. On the boot/OS drive of a Windows machine there are a lot of filter drivers. Android developers know this from how long some build operations take, and have often cursed at NTFS for it. Yet if you move the project onto a non-OS NTFS drive, suddenly it's much faster - because that drive lacks many of those filter drivers, as there is no OS to protect on that drive.

The point here being that NTFS often gets slammed for issues which aren't its fault, and it has no control over.

NTFS is probably in the top ten most-installed filesystems ever. And high on that top ten. (I wonder what that top ten would look like? I think that embedded use of ext2 probably places it near the top, but then you have wildcards like the Minix file system... anyway, back on track!)

Filesystems are one of those things that everyone takes for granted, yet are incredibly important. NTFS may not be native to Linux, and may come from somewhere that many see as "the enemy", but I think 30 years of tireless work deserves some recognition.

Happy birthday, NTFS. You've done well.

[–] philipstorry 5 points 1 year ago (1 child)

My local backups are handled by rdiff-backup to a mirror set of disks. That means my data is versioned but easily accessible for immediate restore, and now on three disks (my SSD, and two rotating rust drives). It also makes restores as simple as copying a file if I want the latest version, or an easy command if I want an older version. And testing backups is as easy as a diff command to compare the backup version with the live version.
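
For the curious, here's a minimal sketch of that workflow. The paths are invented for illustration, and I've wrapped the commands in Python purely to show the cycle in one place - in practice they're just shell one-liners:

```python
# Sketch of an rdiff-backup cycle: back up, test with diff, restore.
# All paths here are hypothetical. Classic CLI syntax shown; newer
# rdiff-backup releases also offer "rdiff-backup backup SRC DEST" etc.
import subprocess

SRC = "/home/me/documents"        # live data on the SSD
DEST = "/mnt/mirror/documents"    # the rotating-rust mirror set

# Back up: DEST becomes a plain mirror plus reverse increments.
subprocess.run(["rdiff-backup", SRC, DEST], check=True)

# The current version is just files, so testing it is a plain diff
# (excluding rdiff-backup's own metadata directory).
subprocess.run(["diff", "-r", "-x", "rdiff-backup-data", SRC, DEST])

# An older version is one command away - e.g. a file as of 7 days ago.
subprocess.run(
    ["rdiff-backup", "-r", "7D", f"{DEST}/report.txt", "/tmp/report.txt"],
    check=True,
)
```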

Having your files just be files in your backup solution is very handy. At work I don't mind having to use an application like Veeam, because I'm being paid to do that. At home I want to see my backups quickly and easily, because I'd rather be working on my files than wrestling with backup software...

Remote backups are handled by SpiderOak, who have been fine for me for almost a decade. I also use them to synchronise my desktop and laptop computer. On my desktop SpiderOak also backs up some files in an archive area on the rotating rust mirror set - stuff that's large and I don't access often, so don't need to put on my laptop but do want backed up.

I also have an encrypted USB thumbdrive that I use when I'm travelling to back up changes on my laptop via a simple rsync copy - just in case I have limited internet access and SpiderOak can't do its thing...
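
Something like this, if you want a sketch (again with invented paths, and again really just a one-liner):

```python
# Sketch of the travel backup: a plain rsync of the laptop's documents
# onto the already-mounted, encrypted thumbdrive. Paths hypothetical;
# --delete keeps the copy an exact mirror of the source.
import subprocess

subprocess.run(
    ["rsync", "-a", "--delete",
     "/home/me/documents/", "/media/thumbdrive/documents/"],
    check=True,
)
```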

I did also have a NAS in the mix once, but I realised that it was a waste of energy - both mine and electrical. In normal circumstances my data is in 5 locations (desktop SSD, laptop SSD, the two disks of the desktop mirror set, and SpiderOak's storage), and in the very worst case it's in two locations (laptop SSD, USB thumbdrive). Rdiff-backup to the NAS was simply overkill once I'd added the local mirror set into my desktop, so I retired it.

I'd added the local mirror set because I was working with large files - data sets and VM images - and backups over the network to the NAS were taking an age. A local set of cheap disks in my desktop tower was faster and yet still fairly cheap.

Here's my advice for your consideration:

  • Simple is better than complicated.
  • How you restore is more important than how you back up; perform test restores regularly.
  • Performance matters; backups that take ages are backups you won't run.
  • Look to meet the 3-2-1 criteria: 3 copies, on 2 different storage systems, with at least 1 in a different geographic location. Cloud storage helps with this.

Good luck with your backup strategy!

[–] philipstorry 2 points 1 year ago

Absolutely - rdiff-backup onto a local mirror set of disks. As you say, the big advantage is that the last "current" entry in the backup is available just by browsing, but I have a full history just a command away. Backups are no use if you can't access them, and people really underrate ease of access when evaluating their backup strategy.

[–] philipstorry 2 points 1 year ago

I think the thing to take away from this is the poor state of management/maintenance tools that Exchange had. Thankfully over the years this has improved, but in those early years it was pretty bad.

I was using both Exchange and Lotus Notes/Domino in that period. If the same thing had happened in Notes, we'd just shut down the mail router task, open the mail.box database and remove the offending message. Easy. But that can only be done because Notes re-used its database system for everything, including the mail queue.

Exchange has a history of reinventing the wheel even within its own architecture. Let's just say I'm very grateful that we're now on Microsoft 365 and these things are Microsoft's problem, not mine... 😉

[–] philipstorry 3 points 1 year ago* (last edited 1 year ago)

A friend recommended Reddit; I signed up in 2012, but the interface was... uninspiring. Especially on mobile. Out of curiosity I tried Sync in 2013 because it was getting very good reviews, and immediately preferred it.

I've never been a heavy Reddit user, but did most of my usage through Sync. It was just better.

Somewhat ironically, I've recently been thinking about doing more in some Reddit communities, but the awful behaviour of Reddit means that won't happen now. So I'm really looking forward to seeing Sync for Lemmy so that I can pursue those goals here in comfort and style! 😉