 
  • Price: $370
  • Model: Asus ROG Strix G15 (G531GV)
  • CPU: Intel Core i7 (9th gen)
  • GPU: Nvidia RTX 2060, 6 GB
  • RAM: 16 GB
  • Storage: Samsung SSD 980 Pro 1 TB (NVMe)
[–] Jesus_666 6 points 6 hours ago (1 children)

These days ROCm support is more common than it was a few years ago, so you're no longer entirely dependent on CUDA for machine learning. (Although I wish fewer tools required non-CUDA users to manually install Torch in their venv because the auto-installer assumes CUDA. At least take a parameter or something if you don't want to implement autodetection.)
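To illustrate the kind of autodetection I mean, here's a rough sketch of my own, not any actual tool's installer: pick the PyTorch wheel index based on which vendor's stack is visible. The index URLs follow PyTorch's published pattern, but the concrete version suffixes (cu121, rocm6.0) change between releases, and the vendor checks (nvidia-smi on PATH, the /dev/kfd node that ROCm's kernel driver exposes) are heuristics:

```python
# Hypothetical sketch of backend autodetection for a tool's installer.
# Assumptions: nvidia-smi is on PATH on NVIDIA systems, /dev/kfd exists on
# ROCm-capable systems, and the wheel indexes follow PyTorch's URL pattern.
import os
import shutil
import subprocess
import sys

def torch_index_url() -> str:
    if shutil.which("nvidia-smi"):
        return "https://download.pytorch.org/whl/cu121"    # CUDA build
    if os.path.exists("/dev/kfd"):                         # ROCm kernel driver node
        return "https://download.pytorch.org/whl/rocm6.0"  # ROCm build
    return "https://download.pytorch.org/whl/cpu"          # CPU-only fallback

if __name__ == "__main__":
    subprocess.check_call(
        [sys.executable, "-m", "pip", "install", "torch",
         "--index-url", torch_index_url()]
    )
```

Even just accepting a `--backend` flag would accomplish the same thing with less guesswork.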

Nvidia's Linux drivers are generally a bit behind AMD's; e.g., driver versions before 555 tended not to play well with Wayland.

Also, Nvidia's drivers tend not to give any meaningful information in case of a problem. There's typically just an error code for "the driver has crashed", no matter what reason it crashed for.
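For what it's worth, the one breadcrumb the driver does leave is an "Xid" code in the kernel log, and Nvidia publishes a table of what the numbers mean. A small sketch of my own for pulling those out (the three sample meanings are from Nvidia's public Xid documentation; reading dmesg may require root):

```python
# Sketch: extract NVIDIA "Xid" error codes from the kernel log.
# XID_HINTS is a tiny illustrative subset of NVIDIA's documented codes,
# not an exhaustive mapping.
import re
import subprocess

XID_HINTS = {
    13: "graphics engine exception",
    31: "GPU memory page fault",
    79: "GPU has fallen off the bus",
}

def recent_xids() -> list[int]:
    # dmesg lines look like: "NVRM: Xid (PCI:0000:01:00): 79, ..."
    log = subprocess.run(["dmesg"], capture_output=True, text=True).stdout
    return [int(m.group(1)) for m in re.finditer(r"NVRM: Xid .*?: (\d+),", log)]

for xid in recent_xids():
    print(f"Xid {xid}: {XID_HINTS.get(xid, 'see NVIDIA Xid documentation')}")
```

Which, as you say, still only tells you *that* it crashed, not *why*.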

Personal anecdote for the last one: I had a wonky 4080, and tracing the problem to the card took months because the logs (on both Linux and Windows) contained no error information beyond "something bad happened", and the behavior had dozens of possible causes, ranging from "the 4080 is unstable if you use XMP on some mainboards" through "some BIOS setting might need to be changed" and "sometimes the card doesn't like a specific CPU/PSU/RAM/mainboard" to "it's a manufacturing defect".

Sure, manufacturing defects can happen to anyone; I can't fault Nvidia for that. But the combination of useless logs and 4000-series cards having so many things they can possibly (but rarely) get hung up on made error diagnosis incredibly painful. I finally just bought a 7900 XTX instead. It's slower but I like the driver better.

[–] SaveMotherEarthEDF 2 points 5 hours ago* (last edited 5 hours ago) (1 children)

Finally, thanks for the clear-cut answer. I don't have any experience with training on AMD, but the errors from Nvidia are usually very obscure.

As for using GPUs other than Nvidia's, there's a slew of problems. Mostly that in the cloud, where most of the projects are deployed, our options seem limited to either Nvidia GPUs or cloud TPUs.

Each AI experiment can easily cost thousands of dollars and uses a cluster of GPUs. We have built and modified our system to fully utilize such an environment. I can't even imagine shifting to AMD GPUs at this point. The amount of work involved, and the red tape... *shudder*

[–] Jesus_666 1 points 3 hours ago

Oh yeah, the equation completely changes for the cloud. I'm only familiar with local usage where you can't easily scale out of your resource constraints (and into budgetary ones). It's certainly easier to pivot to a different vendor/ecosystem locally.

By the way, AMD does have one additional edge locally: They tend to put more RAM into consumer GPUs at a comparable price point – for example, the 7900 XTX competes with the 4080 on price but has as much memory as a 4090. In systems with one or few GPUs (like a hobbyist mixed-use machine) those few extra gigabytes can make a real difference. Of course this leads to a trade-off between Nvidia's superior speed and AMD's superior capacity.
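To put numbers on "a real difference" (my own back-of-the-envelope arithmetic, not anyone's benchmark): weights-only memory is roughly parameter count times bytes per parameter, ignoring activations and KV cache, so real usage runs higher:

```python
# Weights-only VRAM estimate: params * bytes per param. Ignores
# activations, optimizer state, and KV cache, so treat it as a floor.
def weights_gib(params_billion: float, bytes_per_param: float) -> float:
    return params_billion * 1e9 * bytes_per_param / 2**30

for params in (7, 13, 30):
    print(f"{params}B params @ fp16: ~{weights_gib(params, 2):.0f} GiB")
# 7B  @ fp16: ~13 GiB -> fits a 16 GB card
# 13B @ fp16: ~24 GiB -> needs a 24 GB card, and even that is tight
# 30B @ fp16: ~56 GiB -> quantization or multiple GPUs
```

By that arithmetic, the 7900 XTX's extra 8 GB over the 4080 is roughly the difference between a 13B model fitting at fp16 and not.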