
๐Ÿ—‘๏ธ ๐Ÿงต Ligit 100% It's over for /3/ Stable Diffusion Plug-in for Blender

Anonymous No. 916999

Someone made a Blender plug-in that takes outputs from the AI Stable Diffusion and pumps them out as usable meshes for Blender.
I really hope you all didn't plan on having an extended career in 3d graphics.

Anonymous No. 917002

>>916999
Someone made a "plugin" that feeds viewport pictures to an AI and you are a retard

Anonymous No. 917033

Too bad no one can write a plugin that makes you able to read, or not be an ugly balding Colombian piece of shit.

Anonymous No. 917081

>>917033
OP here,
oddly accurate, I am bald and half Colombian, but I'm basically as white as they come and was born in the US.

Anonymous No. 917212

>>916999
>blender plug-in that takes outputs from the AI Stable Diffusion and pumps them out into usable meshes

Other way around, newfag. There won't be an image-to-3d AI as long as you're alive.

Too many variables to account for.

Anonymous No. 917243

>>917212
Nah... I think there will. Not as simple as 1 image to 3d meshes though.

Probably something more like a multi-view drawing that it creates something rough out of. Then a 3d modeler can clean it up.

Almost any artist would kill for such a tool: it'd speed up workflows a shitton to sketch how you want it to look in low detail, then detail it up in 3d after. Though it's prob not coming in 5 years unless a major company decides to throw money at it.

Anonymous No. 917244

>>917243


Or better yet, an AI that attempts multiview from an initial sketch which can be cleaned up and then fed to a 3d model creation AI.

Or even a workflow like:
text -> image -> human-assisted cleanup -> multiview -> human cleanup -> 3d mesh creation -> human cleanup

How much would that save in time/money to a corp?
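That staged workflow can be sketched as code. To be clear, this is a toy sketch: every stage function below is a hypothetical placeholder (no real AI or API is called), and the only point it shows is the ordering the post proposes, with a human cleanup pass sitting between each AI stage.

```python
# Toy sketch of the proposed pipeline. Each "AI" stage just tags the
# asset with its name; human_cleanup stands in for an artist pass.

def ai_stage(name):
    """Return a placeholder AI stage that appends its tag to the asset."""
    def run(asset):
        return asset + [f"ai:{name}"]
    return run

def human_cleanup(asset):
    """Placeholder for a human artist cleaning up the AI's rough output."""
    return asset + ["human:cleanup"]

PIPELINE = [
    ai_stage("text_to_image"),
    human_cleanup,
    ai_stage("image_to_multiview"),
    human_cleanup,
    ai_stage("multiview_to_mesh"),
    human_cleanup,
]

def run_pipeline(prompt):
    """Push a text prompt through every stage in order."""
    asset = [f"prompt:{prompt}"]
    for stage in PIPELINE:
        asset = stage(asset)
    return asset
```

The design choice the post is making is just that: an AI never hands its output straight to the next AI, a human checkpoint sits between every pair of stages.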

Anonymous No. 917245

>>917243
>Probably something more like a multi-view drawing

We have 3d scanning; it's shit at precision and great at giving you a noisy mesh with 400 million polygons.

If I can take pics of an object easily, it's not worth my time.

Anonymous No. 917246

>>917245
I'm not talking photogrammetry of a real object. Nor using a 3d scanner.

I'm talking multiview drawings of non-existent static objects, riggable objects, etc.

As is, plenty of artists do the front, side, 3/4, back, etc. drawings for a character reference sheet. Having AI help streamline the 2d 'sketching' into a somewhat shitty 3d "sketch" seems like a way to speed up a design workflow. Get a vague idea of what the character would look like from more views, or clean up/rework in 3d after.

Anonymous No. 917247

>>917246
Or better yet, turn your 2d character reference sheet into a virtual 3d posable mannequin.

Which in turn can be composited with other shit for 2d, 3d, whatever other shit.

Anonymous No. 917249

>>917246
The software can't do depth correctly, because it sees 2D and has to interpret another axis. That axis can't be reliably interpreted: there are 1000 cameras on the market and 10000 different light source variations that fuck with depth, so it can't match where to "connect" things. You'll get the same results as DALL-E but in 3d, a fucked-up interpretation of reality that takes 2 hours a pop to process.

There's software that does what you're saying already, and it's shit. Avatar SDK is one: it gives you long-ass heads because it can't know where the head ends from the photo, so there's a limit on "how long a head can be" in the code; if you give it a side view, it fucks up some other thing behind the photo.


Anonymous No. 917251

>>917249
>life like avatars for the metaverse
I think I vomited a little.

I still wasn't talking about photo-realism or using photos -> 3d.

I'm talking about artists like Genndy Tartakovsky, whose team starts in 2d and then mocks up to 3d, with custom rigging to allow for more 2d cartoony stretches and smearing.

I completely agree photogrammetry isn't there yet, especially given the wide variety of ways something could have been photographed.

However, many people still sketch in 2d before they draw in 3d. I'm sure plenty if not most people use 2d images when modeling. If it's possible for a person to make a model of an object from multiple 2d views, it tracks that it's possible for an AI to make something feasible from a 2d artist's character sheet.


I guess the real issue is data: there aren't many 2d character reference sheets paired with 3d models to work off.

Anonymous No. 917418

I feel like stylised AI texturing should be pretty easy now. All the tools are there to get it done from a single text prompt, but nobody's pieced it together yet.

Anonymous No. 917563

>>917212
>>917243
Please tell me if I'm wrong, but judging from these videos I believe an "image to 3d AI" isn't as far away as you're implying:
https://youtu.be/Jy_VZQnZqGk?t=47
https://youtu.be/5j8I7V6blqM?t=98

Anonymous No. 917633

>>917563
So photogrammetry (2d image to 3d model) exists already to a certain extent. The issue is that it isn't super good just yet.

When it comes to creating something that could be rigged up and animated, it isn't at all there yet. But the speed of progress is insane. Recognizing MNIST 28x28 images of digits with an under-1% error rate was around 2004; little over a decade later we had hi-res AI-generated images of human faces.

2d -> 3d for mapping tunnels and scanning objects exists already but isn't super accurate. I'd say within 15 years we'll have decent "2d to 3d". I dunno if it'll be directly usable in workflows though.

Anonymous No. 917799

>>916999
How does this work? How can you make a 3d mesh from a 2d picture?