this post was submitted on 02 Jul 2023
19 points (100.0% liked)

Stable Diffusion


I was using Stable Diffusion a lot previously, but I haven't really touched it in the past several months. What interfaces are people using these days?

Automatic1111 still seems to be popular, and that's the one I'm most familiar with. I know there are some others now though, like ComfyUI, and I guess InvokeAI is still going?

top 22 comments
[–] 2dollarsim 7 points 1 year ago

Automatic1111 is the second one I tried and I never left. It's the best.

[–] [email protected] 7 points 1 year ago

Automatic1111's Stable Diffusion WebUI is hard to give up, with how many features it has that are missing in other frontends. I use anapnoe's fork for a slightly better UI. I would use vladmandic's fork, but some of the changes have caused issues for my particular setup.

[–] Swexti 4 points 1 year ago* (last edited 1 year ago) (1 children)

Is no one here running ComfyUI? It's one of my favorite UIs, as it's completely node-based and extensive! It has everything auto1111 has and even more! EDIT: It's not really everything, but almost!

[–] [email protected] 1 points 1 year ago

I'm also using ComfyUI. It can simply do so much more than something like Automatic1111, even if it is missing a couple of features. For example, I have several workflows that make incremental changes to a photo, changing the prompt halfway through the generation, or even upscaling halfway through (see the sketch below).

I can't really imagine going back, unless there is some killer feature that Comfy is missing.
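A rough sketch of that prompt-switching idea, using the diffusers library rather than ComfyUI's own graph format; the model name, prompts, and step counts here are placeholder assumptions, not anything taken from this thread:

```python
# Stage 1 generates with one prompt; stage 2 re-denoises the result with a
# different prompt at partial strength, roughly like chaining a second
# sampler node onto the first in a graph-based UI.
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

model_id = "runwayml/stable-diffusion-v1-5"  # example checkpoint
txt2img = StableDiffusionPipeline.from_pretrained(
    model_id, torch_dtype=torch.float16
).to("cuda")
img2img = StableDiffusionImg2ImgPipeline(**txt2img.components)  # shares the weights

# Stage 1: base composition from the first prompt
base = txt2img("a castle on a hill at sunrise", num_inference_steps=30).images[0]

# Stage 2: switch the prompt and only partially re-denoise; strength < 1
# keeps most of the original structure.
final = img2img(
    prompt="a castle on a hill in a thunderstorm",
    image=base,
    strength=0.5,
    num_inference_steps=30,
).images[0]
final.save("castle.png")
```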

[–] [email protected] 2 points 1 year ago

Automatic1111

[–] the_ramzay 2 points 1 year ago

Draw Things and InvokeAI on my MacBook Pro (M1 Pro).

[–] [email protected] 2 points 1 year ago

SD.next (Vladmandic)

[–] WildBanjos 1 points 1 year ago

I mainly use the UI-UX fork of AUTOMATIC1111.

[–] [email protected] 1 points 1 year ago

I'm stuck with an AMD card for other purposes, so I pretty much have to use the DirectML fork. It's okay, but it's very slow, and despite having 12 GB of video RAM I still get out-of-memory errors all the time (see the memory-saving sketch below). But hopefully some progress will be made on those cards reasonably soon.

But it is fun, that's for sure.
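Not a fix for the DirectML fork specifically, but as an illustration of the usual VRAM-saving trade-offs: the webui exposes them through launch flags like --medvram and --lowvram, and in plain diffusers the equivalents look roughly like this (the model name and CUDA device here are assumptions; a DirectML setup uses a different backend):

```python
# Illustrative only: trade some speed for lower peak VRAM use.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

pipe.enable_attention_slicing()  # compute attention in smaller chunks
pipe.enable_vae_tiling()         # decode large images tile by tile

image = pipe("a lighthouse at dusk", num_inference_steps=25).images[0]
image.save("lighthouse.png")
```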

[–] pennomi 1 points 1 year ago

InvokeAI has a very beautiful unified inpainting and outpainting interface with an infinite canvas.

[–] danielbln 1 points 1 year ago

Auto1111 on RunDiffusion.

[–] [email protected] 1 points 1 year ago

I use DiffusionBee on Mac. It's simple and good enough for my needs.

[–] [email protected] 1 points 1 year ago

I think Automatic is already great, but you should at least mention Easy Diffusion. It's also very easy to use.

[–] [email protected] 1 points 1 year ago

I used to use InvokeAI. InvokeAI just released version 3 beta; I'll wait for a few more betas or an RC of 3 before using it again.
These days I usually use Makeayo.

[–] [email protected] -2 points 1 year ago* (last edited 1 year ago) (1 children)

The porn industry is doomed!

Edit: well, I think the title of this post changed or my comment ended up on the wrong post, so my comment is irrelevant lol

[–] 2dollarsim 1 points 1 year ago (1 children)

Not for videos; we are still quite a way from that.

[–] [email protected] 2 points 1 year ago (1 children)

yeah it was a joke. Doesn't look like it resonated well lol.

[–] 2dollarsim 1 points 1 year ago (2 children)

Haha it won't be a joke next week when the new text-to-video model comes out

[–] [email protected] 2 points 1 year ago (1 children)

With how unstable (lol) txt2img is, I don't believe a txt2video model stable enough for porn is coming soon.

[–] 2dollarsim 1 points 1 year ago (1 children)

I would agree, but the rate of innovation in AI is so unpredictable that it could go either way.

[–] [email protected] 1 points 1 year ago

I don't really agree.

Recent AI innovations are pretty modest and mostly rely on the "innovation" of raw fucking power to achieve their goals.

GPT-4 is rumoured to use around 230B parameters, while you already need about 16 GB of VRAM just to run a 7B LLM, and transformer attention scales as O(n²) with context length; I'll let you do the maths (see the rough check below).

Stable Diffusion (latent diffusion, to be more precise) is in the same boat: the initial training took billions of teraflops, and while it was relatively cheap (around $100k), it still rides on modern GPU technology.
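A quick back-of-the-envelope check of that 7B figure, assuming unquantized fp16 weights at 2 bytes per parameter (my assumption, not the commenter's) and ignoring activations and the KV cache:

```python
# ~13 GB just for the weights of a 7B-parameter model in float16, which is
# why 16 GB of VRAM is a reasonable ballpark once overhead is added.
params = 7e9
bytes_per_param = 2  # float16
weights_gb = params * bytes_per_param / 1024**3
print(f"~{weights_gb:.0f} GB just to hold the weights")
```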
