[Image: _adcbdb56-5153-4e....jpg, 1024x1024]

🧵 Untitled Thread

Anonymous No. 970092

Anyone use (((AI))) for concept art/prototyping/inspo? Right now I'm proompting 15 times a day using Bing, but I'm thinking about switching to Stable Diffusion with some specialized models or checkpoints or whatever, and I'm wondering if anyone has any good workflows set up

Anonymous No. 970097

>>970092
That is some of the most generic shit I have ever seen

[Image: Books.jpg, 2257x1520]

Anonymous No. 970128

>>970092
I used it a while back to gen a bunch of generic fantasy magic book covers to use as background props. There are a few more than what's shown; some square-shaped ones I just left out.
They're not the best, and I didn't create any proper normal maps, so I'm just feeding the diffuse into the bump, but it's alright enough for the 6 or so pixels they'll occupy on screen.
90% of them won't even be visible, since they'll be on a bookshelf so only the spine will show. Still, I figured I'd rather have them and not need them.
Compared to several days of designing and creating book covers from scratch, the 30 seconds or so it takes to gen a bunch of them is worth it I'd say, considering the time can be better spent on the rest of the scene instead of on a filler background asset.
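The diffuse-into-bump trick can also be done offline as a preprocessing step. A minimal sketch, assuming Pillow and NumPy are installed (the filenames in the usage comment are placeholders): it takes the luminance of the diffuse texture and stretches it to full range so it reads as a cheap height/bump input.

```python
import numpy as np
from PIL import Image

def diffuse_to_bump(diffuse: Image.Image) -> Image.Image:
    """Cheap bump map: luminance of the diffuse, stretched to full range.

    Good enough for props that only cover a few pixels on screen;
    a real height/normal map would be authored separately.
    """
    gray = np.asarray(diffuse.convert("L"), dtype=np.float32)
    lo, hi = gray.min(), gray.max()
    if hi > lo:  # avoid divide-by-zero on flat single-color textures
        gray = (gray - lo) / (hi - lo) * 255.0
    return Image.fromarray(gray.astype(np.uint8), mode="L")

# usage (filenames hypothetical):
# bump = diffuse_to_bump(Image.open("book_cover.png"))
# bump.save("book_cover_bump.png")
```

Plugging the result into the bump slot instead of the raw diffuse at least decouples the two channels, so you can tweak contrast on the bump without touching the color.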

Anonymous No. 970170

>>970097
That's why I'm asking; pretty much all I'm getting when using Bing is these uninspired pics that sort of show what I'm looking for, but only "sort of".

Anonymous No. 970172

i don't know if i'd use it for concept art - there isn't exactly a shortage of concepts out there. you're usually just prompting stuff like 'trending on artstation' or an artist's name, so may as well just go to artstation lol. but sure, stable diffusion is really easy to set up, just head over to /sdg/ on /g/. it'll take you a few hours/days to get used to upscaling and ControlNet workflows. midjourney is much, much easier to deal with and arguably better quality.

>>970128
did something very similar for background posters/pages. had to prompt it away from text for obvious reasons, but then these were assets for a shot that lasted < 5 seconds so w/e.

i have been thinking about a pipeline for stylised textures for a full scene. houdini's MLOPs add-on is pretty decent so i might be able to build the whole thing in there.
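outside houdini, one common way to batch textures like that is driving a local AUTOMATIC1111 webui over its HTTP API. a rough python sketch - the endpoint and payload keys follow the webui's /sdapi/v1/txt2img schema as i understand it, and the URL, prompt, and parameter values are placeholders to check against your own install:

```python
import json
from urllib import request

WEBUI_URL = "http://127.0.0.1:7860"  # default local webui address (assumption)

def txt2img_payload(prompt: str, batch_size: int = 4, size: int = 512) -> dict:
    """Build a txt2img request for a batch of tileable textures.

    Keys follow the AUTOMATIC1111 /sdapi/v1/txt2img schema;
    "tiling" asks for seamlessly wrapping output.
    """
    return {
        "prompt": prompt,
        "negative_prompt": "text, watermark, blurry",
        "steps": 25,
        "width": size,
        "height": size,
        "batch_size": batch_size,
        "tiling": True,
    }

def generate(prompt: str) -> bytes:
    # POST the payload; the response JSON carries base64 images under "images".
    data = json.dumps(txt2img_payload(prompt)).encode()
    req = request.Request(f"{WEBUI_URL}/sdapi/v1/txt2img", data=data,
                          headers={"Content-Type": "application/json"})
    return request.urlopen(req).read()

# usage (needs the webui running with --api enabled):
# resp = generate("stylised mossy stone wall texture, hand-painted")
```

batching it this way means the texture pass becomes a loop over material names instead of clicking through the UI per map.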

[Image: AIconceptProcess.jpg, 1321x884]

Anonymous No. 970174

>>970092
> if anyone has any good workflows set up

Use the standalone variant of Stable Diffusion installed locally if your computer can run it, then use 'image2image' to generate over a loose sketch of the thing you wanna create. That way you can kinda guide the AI to see what you want it to see and have some command over the sort of thing you end up generating.
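A minimal sketch of that img2img-over-a-sketch workflow using the diffusers library. The model ID, prompt, and strength value are illustrative assumptions, not recommendations; the only hard requirement is that SD wants image dimensions that are multiples of 8, which the helper below handles:

```python
from PIL import Image

def snap_to_multiple(x: int, m: int = 8) -> int:
    """Round a dimension down to a multiple of m (SD latents need /8 sizes)."""
    return max(m, (x // m) * m)

def prepare_sketch(img: Image.Image) -> Image.Image:
    """Convert a loose sketch to RGB at SD-friendly dimensions."""
    w, h = img.size
    return img.convert("RGB").resize((snap_to_multiple(w), snap_to_multiple(h)))

# the img2img call itself, assuming a working GPU install of diffusers
# (model id, prompt, and strength are placeholders):
#
# import torch
# from diffusers import StableDiffusionImg2ImgPipeline
# pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
#     "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
# out = pipe(prompt="concept art of a ruined watchtower",
#            image=prepare_sketch(Image.open("loose_sketch.png")),
#            strength=0.6,  # lower = stick closer to your sketch
#            ).images[0]
# out.save("concept.png")
```

The strength parameter is the knob that matters here: it controls how far the model is allowed to drift from your sketch, which is exactly the "command over what you generate" being described.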

Anonymous No. 970177

>>970174
Thanks, does using specific models matter all that much?

Anonymous No. 970179

>>970177
Probably, depending on what you wanna create; the whole 'waifu crowd' is kinda front and center there in showing what optimized models can achieve.

Personally, I find the loose initial model (with all the illegal data scrubbed off artstation and whatnot) seems to be the most creative and open-ended one, so it's the one I use.

I'm more interested in having the AI provide me with inspirational variants of my input to draw elements from while refining it into finished 3D designs,
rather than have it spit out something highly refined that's like a finished art piece in and of itself, so the base model suits my needs.

Anonymous No. 970180

>>970092
I use it to get my own ideas flowing, never for "final" stuff. A few refining iterations etc.; all I want is a few strokes or a blurry mess. AI is neat for that, but as soon as you've got a specific image in your head and want that replicated, it's nothing but frustration.

Anonymous No. 970768

>>970097
my apology...perhaps i should post some AI generated CSAM to make it less generic for you?

[Image: 1705558611210836.jpg, 1024x1024]

Anonymous No. 971012

>>970092
get a dedicated model or train your own. you need hardware with at least 12gb of vram, and nvidia, because amd fucking sucks and has limited features compared with nvdoot

also extra points if you train your own LoHa for features that you want to add to the generated images
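the rough shape of a LoHa training run with kohya-ss/sd-scripts plus the LyCORIS extension - flag names are from those repos as i remember them, and every path, dim, and step count below is a placeholder, so check the docs before running:

```shell
# train a LoHa on ~12GB of VRAM (all paths and values are placeholders)
accelerate launch train_network.py \
  --pretrained_model_name_or_path=/models/sd15.safetensors \
  --train_data_dir=/datasets/my_feature \
  --output_dir=/output/loha \
  --network_module=lycoris.kohya \
  --network_args "algo=loha" \
  --network_dim=16 --network_alpha=8 \
  --resolution=512 --train_batch_size=1 \
  --max_train_steps=2000 --learning_rate=1e-4 \
  --mixed_precision=fp16 --gradient_checkpointing
```

fp16 plus gradient checkpointing is what keeps it inside a 12gb card; batch size 1 is the usual trade for fitting at 512 resolution.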

[Image: fire castle.png, 512x512]

Anonymous No. 971013

>>971012
once you generate your generic crap image, you can totally remodel the aesthetics, transforming the slop you generated into another style or running it through a model with different or better details

Anonymous No. 971359

>>971012
>>971013
Alright, thanks. I do have an Nvidia card with 12 gigs so it should be fine