[webm: Houdini tool - Co....webm, 1920x1080]

🧵 Untitled Thread

Anonymous No. 1009733

What is the point of doing this? Isn't it better to have two triangles with a transparent foliage texture than to convert it into a gazillion triangles with a non-transparent texture?

Anonymous No. 1009735

not for offline pathtracers
don't ask me why, but using large amounts of opacity on most renderers absolutely destroys render times
redshift has a sprite node specifically to avoid this issue, i don't think any of the others do, so you end up doing this trace/remesh thing

[image: file.png, 1074x647]

Anonymous No. 1009739

>>1009735
actually i got curious and the redshift sprite page has a nice explanation for this:

>When a ray hits a polygon using opacity, Redshift has to execute operations to read the texture at that location, setup the next transparency or refraction ray and so on. These operations are wasteful if the ray hits an area that is completely transparent but the process can't be skipped because that part may be semi-transparent and Redshift cannot assume otherwise. Because of this, the transparency trace depth has to be increased in order to prevent rays from being terminated prematurely which introduces visual artifacts. The Sprite node, on the other hand, is optimized to skip fully transparent parts of a polygon and with very few operations.

https://help.maxon.net/r3d/maya/en-us/Content/html/Sprite+Node.html
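
to make it concrete, here's a rough sketch in C++ of the difference i think they're describing. none of this is redshift's actual code, the names and the baked-mask idea are just my guess at how a sprite-style fast path could skip the fully transparent parts:

#include <cstdint>
#include <vector>

// hypothetical stand-in for the real filtered texture fetch (the expensive part)
float sampleOpacityTexture(int /*triangle*/, float u, float v) {
    return (u + v < 1.0f) ? 1.0f : 0.0f;   // pretend the leaf covers half the card
}

struct Hit { int triangle; float u, v; };   // hit position in the card's UV space

// generic opacity path: every hit pays for a full texture/shader evaluation
// before the renderer knows whether the hit can be ignored
bool opaqueAtHitGeneric(const Hit& h) {
    return sampleOpacityTexture(h.triangle, h.u, h.v) > 0.0f;
}

// sprite-style path: a small binary mask baked ahead of time lets fully
// transparent regions be rejected inside the intersection test itself,
// before any shading work gets set up
bool opaqueAtHitSprite(const Hit& h, const std::vector<uint8_t>& mask, int res) {
    int x = int(h.u * (res - 1));
    int y = int(h.v * (res - 1));
    return mask[size_t(y) * res + x] != 0;  // one byte read, no shader setup
}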

i also peeped that your webm says something about nanite. i don't do game stuff so not sure what that's about.

Anonymous No. 1009748

>>1009733
Many renderers use deferred rendering, which struggles with transparency. The quality of the foliage also depends on the quality of the image, requiring large file sizes to achieve a good look. This method solves both of those problems.

Anonymous No. 1009750

>>1009733
Testing for an intersection with a triangle is a cheap, trivial, easily optimizable operation. Testing for transparency within that triangle is orders of magnitude more expensive

Anonymous No. 1009753

>>1009750
> Testing for an intersection with a triangle is a cheap, trivial, easily optimizable operation. Testing for transparency within that triangle is orders of magnitude more expensive
That's not really so obvious honestly. It's like:
test raycast against 2 triangles + look up the pixel alpha in 1 texture if it hits either of those 2 triangles
vs
test raycast against 300 triangles + no alpha lookup
I don't think the alpha lookup is that expensive, textures are uploaded to the GPU, there's nothing expensive about reading their pixels.
What I think it tries to optimize is having less dead area, but the claim that [the single pixel lookups you have to make when a ray hits the dead (transparent) area of 2 triangles] are more expensive than [having to raycast against hundreds of triangles] is not obvious at all either.
Maybe it also heavily depends on the size of the polygons and their distance from the camera? In foliage like grass and flowers, for example, most of them in a scene, especially far away, will have their whole polygon so small that optimizing away that dead area doesn't sound worth it at all.
Really makes you hmmm.

Anonymous No. 1009754

>>1009753
Ohhhh damn I think I get it. It's maybe also about having dozens or hundreds of those foliage planes in a row in every direction, like you usually have in landscapes. That means a ray goes through a shit ton of sprites and each one of them potentially stops it or lets it through, so those lookups kinda stack up unpredictably.

Anonymous No. 1009760

>>1009753
> I don't think the alpha lookup is that expensive, textures are uploaded to the GPU, there's nothing expensive about reading their pixels
That's where you're wrong, basically.
At its core, taking alpha into account is always basically a loop of "find the closest triangle intersection, find out if the material has alpha, convert from triangle space to texture space, look up the texture, if not opaque find the next closest triangle", while without the texture it's just "find the closest triangle intersection" and that's it, you're done, no loop beyond whatever the GPU does to query triangles in the first place.

The only limitation, then, is how finding said triangle ramps up as you add more of them, but that's exactly what raytracing is optimized to minimize.
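
For a concrete picture, here's roughly what that loop looks like in C++. This is a sketch under my own assumptions, not any renderer's real API; the scene query and texture fetch are stubbed out, and the semi-transparent blending case is left out for brevity:

#include <optional>

struct Ray { float ox, oy, oz, dx, dy, dz; };               // origin + direction
struct Hit { float t; bool hasAlphaMaterial; float u, v; };

// hypothetical stand-ins for the BVH query and the texture read
std::optional<Hit> findClosestHit(const Ray&, float /*tMin*/) { return std::nullopt; }
float sampleAlpha(const Hit&) { return 1.0f; }

// Opaque geometry exits after one query; alpha-mapped geometry keeps querying
// until it finds an opaque texel or runs out of transparency depth.
std::optional<Hit> traceWithAlpha(const Ray& ray, int maxTransparencyDepth) {
    float tMin = 0.0f;
    for (int i = 0; i < maxTransparencyDepth; ++i) {
        auto hit = findClosestHit(ray, tMin);      // cheap, BVH/hardware accelerated
        if (!hit) return std::nullopt;             // missed everything
        if (!hit->hasAlphaMaterial) return hit;    // opaque: done after one query
        if (sampleAlpha(*hit) > 0.0f) return hit;  // UV transform + texture read per hit
        tMin = hit->t;                             // fully transparent: keep marching
    }
    return std::nullopt;                           // depth exhausted (the artifact case)
}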

Anonymous No. 1009761

>>1009760
> find out if material has alpha, look up texture
Those are basically zero cost operations.
> convert from triangle space to texture space
I guess this is where the price is.

Anonymous No. 1009764

>>1009761
>Those are basically zero cost operations.
To give you an idea, a texture sample is about 100x slower than a triangle check
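
Purely as illustration, here's a back-of-envelope in C++ using that ratio. The per-operation costs and the counts are made-up assumptions; only the ~100x figure comes from above:

#include <cstdio>

int main() {
    const double triangleTest  = 1.0;     // arbitrary unit
    const double textureSample = 100.0;   // ~100x a triangle check, per the claim above

    // Two-triangle cards: every card the ray passes through needs a texture
    // read before the renderer can decide whether to keep going.
    const int cardsAlongRay = 20;         // assumed layers of foliage
    const double alphaCards = cardsAlongRay * (triangleTest + textureSample);

    // Remeshed opaque foliage: more triangles, but the first real hit ends the ray.
    const int bvhSteps = 30;              // assumed traversal + intersection work
    const double opaqueRemesh = bvhSteps * triangleTest;

    std::printf("alpha cards: %.0f units, opaque remesh: %.0f units\n",
                alphaCards, opaqueRemesh);  // ~2020 vs ~30 with these made-up numbers
}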

Anonymous No. 1009792

>>1009764
I'm another anon and I don't really have the technical background to say whether what you state is correct, but assuming it is:
What is the deal with 'alphatest' being so seemingly cheap in shaders while interacting correctly with light/shadow?
Isn't a per-texel write to the depth buffer super cheap if we have to do a texture lookup anyway to see what the pixel is supposed to get from the texture?

If I was doing something like OP shows for a more high-end render, I would not stop at the generated mesh but would then bend/displace those generated leaves to give them geometric shape, sort of cupping the way a branch of leaves would, instead of being a flat 2D plane.

Anonymous No. 1009798

>>1009792
>what is the deal with 'alphatest' being so seemingly cheap in shaders
It's not! It's a perf hit, although for different reasons than the ones involved in ray tracing. That's because rasterizers can use a series of optimizations grouped under the appellation of early-Z: basically checks you can do before running the pixel shader.

For example, if you render a triangle with no alpha test, you can write depth as soon as you've run the vertex shader, so the next triangle can be depth-tested in parallel with the pixel shader for the current triangle. But if you need alpha test, then the pixel shader must run before the next triangle can be tested.

That's not the only one; another common one is dividing the screen into tiles. As long as a tile is entirely covered by a triangle, instead of writing each pixel you just write the plane parameters (4 numbers) for the whole tile and do further depth tests against those, saving tons of bandwidth (16 times less for an 8x8 tile, for example).

As a consequence, games usually render all the alpha-tested stuff after all the opaque stuff to speed things up. Another optimization is to render only depth at first, so you can run a pixel shader that does the strict minimum for an alpha test, then do a second pass of the whole scene where you don't write depth but instead test for equality. That way you only have to run the part of the shader that writes color, and part of the early-Z stuff can run in parallel because the depth buffer is static. Like, yeah, if your pixel shader's complex enough, rendering the whole scene twice to take better advantage of early-Z is faster (don't mistake it for deferred rendering, this is a different trick that can be used with or without it).

So basically it isn't cheap at all and a bunch of tricks are used to reduce its impact.
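
To make the ordering concrete, here's a sketch of the submission logic in C++. It's not any particular engine's API, just the idea; the draw functions are placeholders:

#include <algorithm>
#include <vector>

struct Draw { bool alphaTested; /* mesh, material, ... */ };

void submitDepthOnly(const Draw&) {}        // minimal shader: position only, plus alpha clip if needed
void submitColorEqualDepth(const Draw&) {}  // full shader, depth test set to EQUAL, no depth writes

void renderScene(std::vector<Draw> draws) {
    // Opaque first, alpha-tested last, so early-Z stays effective as long as possible.
    std::stable_partition(draws.begin(), draws.end(),
                          [](const Draw& d) { return !d.alphaTested; });

    // Pass 1: depth prepass with the cheapest possible pixel work.
    for (const Draw& d : draws) submitDepthOnly(d);

    // Pass 2: color pass against a now-static depth buffer, so only the
    // visible pixels ever run the expensive shading.
    for (const Draw& d : draws) submitColorEqualDepth(d);
}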

Anonymous No. 1009801

>>1009792
Also note that
>per-texel write to the depth buffer
Since it's so essential to rasterization, a depth buffer you're writing to as part of the rasterization depth test has a special, super-fast pipeline for reads and writes, as long as it satisfies specific conditions (which include the early-Z stuff mentioned earlier) that boil down to "only write planar opaque triangles". (It's possible to write a non-planar triangle, you can enable changing depth per-pixel in the pixel shader, but of course that comes at the cost of no early-Z.)

Also, note that the subdivision seen in OP's post is an optimization for *raytracing* against the mesh; it's not an optimization for rasterization (i.e. drawing the triangles to the screen).

In rasterization, alpha testing can be preferable because, basically, the smaller the triangle, the more proportionally costly it is to draw. Due to how a GPU computes derivatives, the triangle is drawn as a bunch of 2x2 quads completely covering it, pixel shader included, and the extra pixels are clipped at the end.

So a triangle that only covers a 1-pixel line would end up costing twice as much as it should, a 3-pixel triangle would cost 4 pixels, and rendering an extremely subdivided mesh with only 1-pixel triangles for that perfect look would cost 4 times the pixel count (discounting overdraw, which would make it worse). So alpha testing becomes much cheaper in comparison.

In UE5 they deal with that by detecting small triangles and drawing them with a compute shader instead of the traditional pipeline, to avoid the 2x2 thing.
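
As a purely illustrative estimate (assumed pixel counts, real hardware varies), the 2x2-quad overhead works out roughly like this:

#include <cstdio>

int main() {
    // pixels the triangle actually covers vs pixel-shader invocations paid,
    // because the rasterizer shades whole 2x2 quads and discards the extras
    struct Case { const char* what; int covered; int shaded; };
    const Case cases[] = {
        {"3-pixel triangle",                    3,    4},  // still one full quad
        {"1-pixel-wide line, 8 px long",        8,   16},  // two quad rows -> ~2x
        {"1-px-triangle mesh over 1000 px",  1000, 4000},  // ~4x the pixel count
    };
    for (const Case& c : cases)
        std::printf("%-32s covered=%4d shaded=%4d overhead=%.1fx\n",
                    c.what, c.covered, c.shaded, double(c.shaded) / c.covered);
}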

Anonymous No. 1009821

>>1009798
>>1009801

I understood about 70% of what you said, but even without comprehending the details it does make sense to me how it could be very advantageous in raytracing to handle everything as geometry.
I only touch this stuff writing raster shaders, and the project I did most such work on was using a deferred render pipeline.

When you learn shaders without really understanding the underpinning tech, the appdata and the vertex shader are this kinda black box that just hands you things.
The fragment program is easier to understand since it's fully exposed and you can just read what it does line by line to figure out how it works.

Anonymous No. 1009836

>>1009798
>so the next triangle can be depth-tested in parallel with the pixel shader for the current triangle
that's not how the vertex/fragment shader pipeline works

Anonymous No. 1009839

>>1009836
Better tell nvidia and amd then, they might want to rewrite their GDC docs

Anonymous No. 1009840

>>1009839
Yeah sorry to tell you but fragment shaders don't run at all until early-z is done

Anonymous No. 1009886

>>1009733
>thinly veiled Threat Interactive thread
we get it, modern gaymes are shit.

Anonymous No. 1009891

>>1009886
Not all of them. Genshin Impact is peak kino for example.