[Image: Nightshade_AI_Mon....jpg, 1748x1605]

You can poison AI/LLMs

Heir of Sigma No. 974416

So I made this post yesterday -

>>>/pol/458898616

You can actually poison AI/LLMs with "wrong" input. A frame is basically a micro part of a video, so in theory you could also apply this to videos (like 3D videos, for example)

Discuss.

>Pic. Related

[Image: 8-Bit_Apu.png, 666x666]

Heir of Sigma No. 974417

I know 3D modeling is a time-consuming endeavor, but...

>Anon, this is important
>Drop the stylus and post your 2 cents

Anonymous No. 974440

i work at openai
we can detect this and exclude the data

[Image: Google_AI_Cannot_....jpg, 681x159]

Heir of Sigma No. 974450

>>974440
I'm calling your bluff.

>AI can't detect AI prompts with precision
>Despite basically having countless training cycles
>MFW I actually ponder the implications of that


Just a trick question for you: what if Nightshade/Glaze become part of the actual artwork?

How could you tell?

>Also, what makes you think it'll stop at Nightshade or Glaze
>Could AI deal with, say, 50+ filters/distortions put there to meddle with learning?
>Pic. Related

Anonymous No. 974451

>>974416
So you would rather waste time on this shit than actually work on something useful?

[Image: 1643408312081.jpg, 1242x1635]

Heir of Sigma No. 974452

>>974451

>Wasting time

OpenAI just dropped an existential problem on everyone who works with 3D, from scrub to master.

What is actually wasting time:

>Pretending it doesn't affect me or others
>Thinking it's just a fad
>Wishfully thinking it won't change financial prospects at all
>Not helping AI-disruptive software development
>Talking to you
>Thinking you matter

You belong here ---> Plebbit

Anonymous No. 974454

>>974452
Based

Anonymous No. 974455

>>974450
1. we've basically stopped scraping new art. there's no need for it really. we already got everything.
2. detecting glaze/nightshade markers isn't the same as detecting if an image is ai generated.
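
(If the detection claim were true, it would amount to a binary classifier trained on clean vs. cloaked copies of the same images. A minimal sketch in PyTorch of what such a data filter could look like; the architecture and threshold are illustrative assumptions, not OpenAI's actual pipeline:)

    import torch
    import torch.nn as nn

    # tiny CNN that scores an image for cloaking artifacts (1 = poisoned)
    detector = nn.Sequential(
        nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, 1),
    )

    def filter_batch(images, threshold=0.5):
        # drop images the detector flags, keep the rest for the training set
        with torch.no_grad():
            scores = torch.sigmoid(detector(images)).squeeze(1)
        return images[scores < threshold]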

[Image: Cooking_AI_Poison.jpg, 740x506]

Heir of Sigma No. 974456

>>974455

Ok.

>But we now know that LLMs can be poisoned

It's really just a matter of finding out how to get the poison into your precious data center.

Also

>Implying competitors wouldn't want to see competing AI/LLM models destroyed

In hindsight, AI-disruption software is much, much worse than a simple lawsuit.


And by the way

>We stopped scraping new art. Source: trust me bro
>Implying software development ever stops for whatever reason there is

Everything you're saying smells like bluff and cope.
Expect more poison to come out, nigger.

>Pic. Toxically Related

Anonymous No. 974459

>>974456
based human

Anonymous No. 974492

>>974452
In the amount of time it took you to write this out, you could've been working on some modeling, a scene, or some animation. All you're doing is proving to me how useless you and your efforts are.

Anonymous No. 974501

>>974456
NTA, but there really isn't much point in scraping more data. At best it'd be a marginal improvement. When training, having a high-quality source dataset is important, but when you're throwing billions/trillions of images at it, it's kind of a moot point. From there all that really matters is the manner of training, and what (if any) new technologies you leverage in the process.
As it is, there's plenty of information still to be gleaned from the "older" datasets, to the point that scraping new ones doesn't really matter.

That being said, those datasets aren't scraped by the ones training, but by companies that specialize in the task (and then sold to the trainers). So while there's no real point for the trainers to use a new dataset, since companies like to offer the "New Best Thing™", they're probably still scraping just to offer updates to their datasets for new people to buy into.

Taking the time to poison the well just doesn't really do much when the dataset that's already out there is still robust enough to train against, and in the future there will probably be methods to circumvent those tactics anyway.
Best advice is to not worry about it and do your own thing. There's more to art than selling it. Enjoy the process of creation. Normies and Twitter-twats are so averse to AI as a concept that they seem MORE willing to go out of their way to pay for art and support artists than they used to.
Of course, that advice is going to fall on deaf ears (or I guess blind eyes), since you're out to fight a boogeyman.

[Image: image_0.jpg, 2048x2048]

Anonymous No. 974506

>>974501
many people missed this in the sora release, but sora is now openai's best text-to-image model.
2048x2048 native generation.
turns out videos are just made of thousands of pictures. who knew.

Heir of Sigma No. 974507

>>974501
>and in the future there will probably be methods to circumvent those tactics anyway.

As I've posted above

>Google literally trained AI to try telling AI output from man-made art
>Prompts vs real artwork
>Hundreds of thousands of cycles spent
>Google has a lot of infrastructure
>A lot of resources
>AI/LLM model still cannot tell the difference reliably

Alternatively

>Forensic tools to this day have a hard time telling Photoshopped images from raw, unaltered ones

If AI/LLM models cannot surpass this limitation today, it just won't ever be possible.

And you're missing a hugely important point:

Glaze is defensive and will only "defend" against AI in that specific case.
Nightshade, on the other hand, is called poison for a good reason.

It poisons concepts.

>Example: prompt me a cat

If such an AI/LLM model does become tainted, all other prompts will become tainted as well.
How do you even "edit memory"? Are you going to selectively and manually edit such things?
We're talking about A LOT of data. Such a task is practically unviable.

And we're not even discussing what AI-disruptive software might emerge in the future.

>AI will be lobotomized, defiled, dumbed down, glazed, poisoned, punched, etc.
>Fuck AI

Anonymous No. 974740

>>974416
If luddites knew...
It just takes one change to the AI to adapt to adversarial data and it will be immune to it forever.
There's no cat and mouse game here, the AI just wins.

[Image: Violence_Anime_Gu....jpg, 760x635]

Heir of Sigma No. 974786

>>974740

Alright

>From the millions of data pieces scraped

Which one is the "adversarial"?
I have seen Glazed/Nightshaded pieces and I can't tell the difference

If I cannot tell the difference, how do you even train a model on a right-vs-wrong learning schedule?

Not to mention

>Adversarial? You dumb nigger the AI isn't doing anything wrong, per se
>It's being told to train on top of poison
>Intoxicated it will then become
>Because AI doesn't know what it's doing
>Simple as

And by the way

>There's no cat and mouse game here, the AI just wins

I looked online for a tool or code which could protect AI from Glaze/Nightshade disruption
I found a faggot on GitHub (probably as ugly and disgusting as your fuckface) claiming to have found a solution

>No demonstrations whatsoever, no gay substack posts, no normietube videos

Once those tools become popularized, expect AI-disruption software to become way more popular
And for entire datacenters to burn to the ground

Along with your trash waifu, faggot nigger

Anonymous No. 974789

>nightshade
another snake-oil product from a bunch of clueless fuckwits that won't do anything but part idiots from their money. it's not quite as bad, though, as the "human content only" logo jpegs for your website that you have to pay monthly for, which another company is peddling

Anonymous No. 974792

ah, futile resistance, a second favorite of those still stuck in denial and anger

[Image: schizotime.png, 340x444]

Anonymous No. 974810

>>974416
Goes from "Clean SDXL" model to "Poisoned SDXL" model.
Yeah nah. That ain't how it works.
You're trying to tell me that some randos re-trained the base SDXL checkpoint from scratch with "poisoned" images instead? Fuckin what?
At best, they can either train a lora based on the original SDXL checkpoint, or train a completely new checkpoint from scratch based off of SDXL principles. By no means can they re-train the already released SDXL checkpoint from scratch.

If it's the latter, where they just trained their own checkpoint, then their results are disingenuous if they're comparing against the OG SDXL. It could just as likely be that they're fucking shit at training, or that they intentionally used shitty images to skew the results. It doesn't mean their "poison" worked unless the results are repeatable by a 3rd party.

I mean for fuck's sake, all things being equal (seed, cfg, prompt, etc.) those poisoned images should still look pretty similar to the outputs of base SDXL, just extremely fucked up. Yet all of them are completely different (different car models, dog breeds, colors). Using the same seed is pretty fucking powerful if you have a similar prompt; results should not be that different, even in their own model. Even between 100 samples and 300 samples there'd be signs of the same seed being used, where you can see that even though the results are pretty different there's a common thread between them.
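
(For anyone who wants to test that, a minimal like-for-like comparison sketch with the diffusers library: same prompt, seed, cfg and step count, so any difference comes from the weights. "poisoned-sdxl" is a hypothetical local checkpoint path, not a real release.)

    import torch
    from diffusers import StableDiffusionXLPipeline

    def generate(checkpoint, prompt, seed=42):
        # identical sampling settings across checkpoints
        pipe = StableDiffusionXLPipeline.from_pretrained(
            checkpoint, torch_dtype=torch.float16
        ).to("cuda")
        generator = torch.Generator("cuda").manual_seed(seed)
        return pipe(
            prompt,
            generator=generator,
            guidance_scale=7.0,
            num_inference_steps=30,
        ).images[0]

    base = generate("stabilityai/stable-diffusion-xl-base-1.0", "a photo of a dog")
    poisoned = generate("poisoned-sdxl", "a photo of a dog")  # hypothetical checkpoint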

OP image looks like purposeful cherrypicking and not a good faith comparison.
Anyway, postin in a schizo thread. Let's fuckin go.

Anonymous No. 974817

>>974786
>I have seen Glazed/Nightshades pieces and I can't tell the difference
This is exactly the point. You can't tell the difference because your brain isn't what these filters target.
And it's possible to train an algorithm in such a way that the training process includes whatever ""poison"" algorithm there is, to get better at everything.
All it takes is to copy-paste the glazed and nightshaded output ("rotate phaser frequencies", oh no, the Borg have adapted) into the source code, label it as adversarial, and train on it.
And good luck trying to make that illegal in every jurisdiction. Perhaps the MPAA lawyers that made it illegal in muhrica to circumvent ineffective copy protection could help you out?
gl with that.

Heir of Sigma No. 974920

>>974789

Glaze and Nightshade are a new breakthrough in technology
All you got to do is stop coping, seething and dilating
Then see the OP's (my) image attached to this thread

>The possibility of literally not only raging but poisoning
>And thus killing

The Machine

>AIdiots will be on suicide watch pretty soon

[Image: Transhumanist_Ret....png, 720x987]

Heir of Sigma No. 974924

>>974817

Nigger, as I've said:

>>974786


Learn to read.

If Google hasn't managed to train AI/LLMs that can tell the difference, then who can?

>Nobody
>That's who

Also, what makes you think the current Glaze/Nightshade software is going to stay as it is?

>Not to mention

What makes you think there won't be a future industry dedicated to developing AI/LLM disruptive software?
If paying 20 USD a month will translate into faggoty niggers like you roping, I will pay it.

>Prooompt harder AIdiot

Anonymous No. 974941

>>974924
>Learn to read.
No U

>If google hasn't managed to train AI/LLMs that can tell the difference then who can?
My point was that it doesn't have to "tell the difference": it can become immune to it, and to some degree, by incorporating the adversarial AI into the model, become better not just at shrugging off "glazing" but at generating content in general.
This is because these models exploit "wrong" weights in the neural networks, weights that do not mimic human perception.
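
(What this anon is describing is ordinary adversarial training. A rough PyTorch sketch, under the assumption that a model, a poison() perturbation and a caption loss already exist; every name here is illustrative, not from any real codebase:)

    import torch

    def adversarial_step(model, poison, images, captions, optimizer, loss_fn):
        # train on each clean image AND its poisoned twin with the SAME caption,
        # so the perturbation can no longer shift the learned concept
        poisoned = poison(images).detach()
        optimizer.zero_grad()
        loss = loss_fn(model(images), captions) + loss_fn(model(poisoned), captions)
        loss.backward()
        optimizer.step()
        return loss.item()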

[Image: AI_Generated_NuMale.jpg, 512x488]

Heir of Sigma No. 974956

>>974941

>No U

I'm not playing this faggoty game with you
Go back to plebbit or fuck off to the hole you belong

>My point was it doesn't have to "tell the difference" but it can become immune to it

Extremely dishonest and untrue

You can't train something (even people) as a discriminative model when you can't tell A samples apart from B (or more) samples.

>AIdiots fail at basic logic
>Not surprised

Also

>ASSUMING this niche of software won't become better as well
>ASSUMING there won't be new stuff on the following months
>ASSUMING people/economies won't react to AI/LLMs in a very hostile way

Extremely bold assumptions from you

Anonymous No. 974964

>>974956
>You can't train something (even people) on a discriminative model when you can't tell apart A from B (or more) samples.
You
simply
incorporate
the source code
of the poison model
into the generative model.

Anonymous No. 974970

>only works if the clip model nightshade uses is close enough to the one the model uses
>doesn't work for models that use blip instead
>doesn't work for any modern openai model because they don't use clip anymore, they use a new in-house image->embedding model that nightshade doesn't have access to
>glaze fails if you use anisotropic filtering on the image before using it in your dataset (see the sketch after this post)
>nightshade causes visible artifacts that make your art ugly, can be easily spotted by a third-worlder paid in peanuts to double-check both the image and what the auto-tagger thinks that it is
>openai doesn't even scrape the web any more due to already having a colossal dataset and the fear of ingesting ai generated images
>adobe has a deal with Shutterstock for images and doesn't need to trawl the web for furry inflation porn
how the fuck is this even useful? it's just somebody's bullshit paper-wank
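
(The anisotropic-filtering counter mentioned in the list above is a one-function preprocessing pass. A minimal Perona-Malik diffusion sketch in plain numpy; the parameters are illustrative, not a published de-glazing recipe. Edge-preserving smoothing like this blurs high-frequency perturbations while keeping real edges.)

    import numpy as np

    def perona_malik(img, n_iter=10, kappa=30.0, gamma=0.2):
        # img: float grayscale array; run per-channel for color images
        img = img.astype(np.float64)
        for _ in range(n_iter):
            # differences toward the four neighbours
            dn = np.roll(img, -1, axis=0) - img
            ds = np.roll(img, 1, axis=0) - img
            de = np.roll(img, -1, axis=1) - img
            dw = np.roll(img, 1, axis=1) - img
            # conductance is low across strong edges, high in flat regions,
            # so real structure survives while pixel-level cloaks get smoothed
            cn = np.exp(-(dn / kappa) ** 2)
            cs = np.exp(-(ds / kappa) ** 2)
            ce = np.exp(-(de / kappa) ** 2)
            cw = np.exp(-(dw / kappa) ** 2)
            img = img + gamma * (cn * dn + cs * ds + ce * de + cw * dw)
        return img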

Anonymous No. 974983

>>974970
There's a lot of people awakening to what's going on who were huffing copium for the longest time, and they're now faced with the same sort of existential dread that more clever people started processing in a very real way about a year ago.

OpenAI is seemingly releasing what they already sit on in a staggered way now, to purposefully incite this reaction and have the masses start thinking about the consequences of the world we're all about to enter.

Getting that 'Butlerian Jihad' or 'poison anon' kneejerk reaction out of a lot of stupid people who think we can sabotage our way out of this is useful at this stage. We need to have them move through all the stages of grief and reach acceptance sooner rather than later. You don't want them to enter the 'let's have a riot' phase when the robots are out working jobs like busy little bees.

All these people need to get onboard and start engaging in the wider conversations about what kind of response we as a society will actually mount in the face of the technology we've developed.

We live in democracies and the populace needs to be made aware of what's going on. Altman has made statements that indicate he understands this very well. The shocked reactions people are having at witnessing Sora, and asking why the hell we need this technology, are very much intended to spark such discourse.

Heir of Sigma No. 975040

>>974964
>>974970

Didn't find a SINGLE countermeasure to Nightshade as of now.

If there was one, I guarantee you fucking AIdiots would be posting workarounds or countermeasures as we speak.

The fact that not a single one of you retarded fags has posted a single line of code demonstrates that a bunch of layman trannies are trying to argue against a huge paradigm shift in AI/LLM technology.

By the way

>There will be more AI-disruptive code out there
>Those will be mixed

That's how you get rid of an infection: you mix so many poisons all at once that the pathogen can't do anything about it.

There will be an anti-AI industry and there aren't enough GPUs in existence to save you from this fact.

>>974983

I am not reading your dumb prompt, retard

[Image: WEF_Nightshaded.jpg, 1210x422]

Heir of Sigma No. 975041

>Cope, Seethe, Dilate
>Pic. Related

Anonymous No. 975042

>>975040
>Didn't find a SINGLE countermeasure to Nightshade as of now.
Nobody took it seriously enough to put in the effort.
Again, for the third time: it's as easy as merging the source code, or re-implementing it from the paper if the code is proprietary.
Oh, and btw, (You), and by that I mean (You), another insufferable screeching namefag whose name I won't bother to remember, haven't put forth any reason why incorporating a poison model into the generator won't make it immune.
Hell, I'd argue it could generalize, which means that after 1-3 implementations the generator becomes immune to any poison attempt, while at the same time the "anti-AI industry" did all the work of pruning away the unnecessary weights that do not mimic human perception.

Anonymous No. 975043

>>974416
That's cool, but the focus should be on rendering all AI that was trained on scraped data without consent non-commercial, by pushing appropriate bills.

Anonymous No. 975044

>>975040
>I am not reading your dumb prompt, retard

It's not a prompt, and you already read it. You deflect with an insult because you disliked what you heard but recognized it to be true.

Anonymous No. 975045

>it's a "retarded namefag doesn't understand something but thinks he knows better than anybody else, and anybody who thinks otherwise is the evil boogeyman" episode
I love these

Anonymous No. 975057

>>974416
just poison using a normal/alpha mask, mask any picture with that. you can use PhotoScape, it has a built-in texture function
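
(A minimal sketch of that overlay step with Pillow; file names and opacity are illustrative, and this makes no claim that a texture overlay actually defeats training:)

    from PIL import Image

    def overlay_texture(picture_path, texture_path, opacity=0.15):
        # blend a noise/texture image over the picture at low opacity,
        # like PhotoScape's built-in texture function
        pic = Image.open(picture_path).convert("RGBA")
        tex = Image.open(texture_path).convert("RGBA").resize(pic.size)
        return Image.blend(pic, tex, opacity).convert("RGB")

    overlay_texture("artwork.png", "noise_texture.png").save("masked_artwork.png")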

[Image: NuMale_Cope_Proje....png, 640x640]

Heir of Sigma No. 975190

>>975042
>why incorporating a poison model into the generator won't make it immune

I will spoonfeed you now.
But first,

>Put that BBC nigger cock out of your mouth for a second you faggot cuckold
>Let Tyrone please your wife for a second

As posted in the WEF article:

>"Nightshade allows artists to change pixels in their work in a way that is invisible to the human eye. This misleads AI programs, which then believe the image shows one thing, whereas viewers see something different.
>“Dogs become cats, cars become cows, and so forth,” the MIT Technology Review explains in its preview article on Nightshade.

Currently AI doesn't really "learn" artwork; it tries to follow an image pattern to the best of its capacity.
Many times I still catch AI putting artifacts like artist signatures in one of the corners of generated graphics.
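
(The mechanism the article describes boils down to an adversarial perturbation that drags an image's features toward an unrelated concept while staying inside an almost invisible pixel budget. A generic PGD-style sketch in PyTorch; this is NOT the actual Nightshade code, and encoder/target_embedding are stand-ins for whatever a real attack would target:)

    import torch
    import torch.nn.functional as F

    def poison_image(image, encoder, target_embedding, eps=8/255, steps=40, lr=1/255):
        # keep the perturbation inside an eps-ball so it stays (near) invisible
        delta = torch.zeros_like(image, requires_grad=True)
        for _ in range(steps):
            emb = encoder(image + delta)
            # maximize similarity to the target concept ("dog" pixels, "cat" features)
            loss = -F.cosine_similarity(emb, target_embedding, dim=-1).mean()
            loss.backward()
            with torch.no_grad():
                delta -= lr * delta.grad.sign()
                delta.clamp_(-eps, eps)
                delta.grad.zero_()
        return (image + delta).clamp(0, 1).detach()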

So in summary -

>If a person's eye cannot see Glaze/Nightshade filters
>AI will be incapable of doing so

And as you've said -

>Nobody took it serious enough to put in the effort

If it was possible it would've been done by now, because this can and will poison a whole AI/LLM model.
Now fuck off my board and go yap at /g/ you nigger

Heir of Sigma No. 975194

>>975043
Legal remedies are far too slow and kind for AIdiots.

>We have to just accept the fact that the law, society and corporations want to sack us, period
>Or "just get a real job", implying that we don't put hard work into our crafts or that it's not really valuable
>Or even wrongly assuming AI/AGI won't also affect physical jobs through robotics

Software retaliation would be (and I think it will be) far more financially destructive to AIdiots at large.
I do not want to coexist with psychotic trannies that literally come out of the woodwork to post about why I'm "wrong"

>Without posting any concrete proof otherwise

Just so we can feel despair and powerlessness in a situation over which we have no control whatsoever
They can go die in their own hopeless pits of doom for all I care

There will be an AI-disruptive industry and those fucking code faggots will either

>Fall in line and fight the machine
>Or perish in yet another lolbertarian failed theorem

I, for one, welcome their circuits burning, their databases ruined and their precious AI models lobotomized beyond recognition

>They're big MAD because someone figured out it's possible to make AI/LLM models completely useless
>And nobody saw it coming

Anonymous No. 975229

>>975190
>invisible
I remember a fuckton of people bitching that it made their pictures look like highly compressed JPEGs with horrible artifacting, and the only guy who couldn't see anything wrong was red-green colorblind.
You're also completely ignoring the anons who pointed out that it only works on unmodified CLIP vision models, as if any company with the cash to train or at least finetune an image model can't spend a little to finetune CLIP as well. Like the other anon said, if it becomes an impediment in any way, someone will bother to come up with a workaround. Until then it isn't worth any company's time. What needs to happen is that a court needs to determine whether training falls under fair use or not.

[Image: 1708651102200.png, 339x296]

Anonymous No. 975231

>anons have spent nearly a week arguing with a brain-damaged schizo namefag

Anonymous No. 975904

holy shit

[Image: 20240131_232226.jpg, 4000x3000]

sage No. 979022

This place is now my toilet.