🧵 Untitled Thread

Anonymous No. 16262037

> be me
> looking into AGI startup
> ask the developer if their AGI is real or just a fancy wrapper for a chat model
> they don't understand
> pull out illustrated diagram explaining what is AGI and what is wrapper
> they laugh and say "it's a real AGI, sir"
> run evals
> it's just a fancy wrapper for a chat model

Anonymous No. 16262058

>>16262037
Real AI is centuries away

Anonymous No. 16262121

Frog thread bumped

Anonymous No. 16262146

I wish people would actually research pre-RLHF base models (from before the chat-tuning era), they are so much better. Unfortunately, using them requires talent, so the skill floor is higher.

Anonymous No. 16262152

>>16262037
how about you go back to the 'ddit

Anonymous No. 16262170

Man what a shitty boring stale meme. Did you steal this from twitter?

Anonymous No. 16262379

>>16262146
got any good links or resources? i crave AI research that isn't just "perceptron but large"

Anonymous No. 16262403

>>16262379
no, anon, that's exactly what that still is. rlhf just modifies the training procedure. it's more of a customization of the resulting output probability distributions than just feeding in more data, but the model is still regurgitating a probability function, now with "reinforcement learning/human feedback" adjustments baked into it. perceptrons are basically the only architecture we can run or train efficiently, and have been for over half a century.
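toy sketch of the "adjust the probability distribution" point, if it helps (all token names and numbers made up, nothing to do with any real model):

```python
import math

def softmax(logits):
    """turn raw scores into a probability distribution over tokens."""
    m = max(logits.values())
    exps = {t: math.exp(v - m) for t, v in logits.items()}
    z = sum(exps.values())
    return {t: e / z for t, e in exps.items()}

# pretend next-token logits from a base model
logits = {"yes": 2.0, "no": 1.0, "maybe": 0.5}
base = softmax(logits)

# "rlhf-style" tweak: nudge the logits of preferred tokens upward.
# still the same probability machine underneath, just reshaped.
preferred = {"no": 1.5}
adjusted = softmax({t: v + preferred.get(t, 0.0) for t, v in logits.items()})
```

same architecture before and after, only the distribution it regurgitates changes.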

we actually know biological brains don't work like that, assuming neuron/synapse architecture is the "source" of intelligence. real brains have feedback paths and cyclical structures that perceptrons lack. between each perceptron layer is a complete bipartite digraph that only moves in one direction, so there's no way for information to move anywhere but toward the outputs. and no, backpropagation isn't the same thing - that's a process outside the perceptron that adjusts the weights during training. we legitimately have no idea how to train artificial neural nets with cyclic subgraphs as efficiently as we train perceptrons, and we have no idea how to train even perceptrons in real time. we've been stuck there for half a century.
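to make the "information only moves toward the outputs" point concrete, a minimal feedforward pass (toy weights, picked by hand for illustration):

```python
def feedforward(x, layers):
    """strictly feedforward pass: each layer's output feeds only the next
    layer. no edge ever points backward, so activations cannot cycle."""
    for W in layers:  # W: list of weight vectors, one per output neuron
        # ReLU activation on each neuron's weighted sum of the previous layer
        x = [max(0.0, sum(w_i * x_i for w_i, x_i in zip(w, x))) for w in W]
    return x

# two inputs -> hidden layer of two neurons -> one output neuron
layers = [
    [[1.0, 1.0], [0.5, -0.5]],  # hidden layer weights
    [[1.0, 2.0]],               # output layer weights
]
out = feedforward([1.0, 1.0], layers)  # -> [2.0]
```

note there is nowhere in that loop for an output to feed back into an earlier layer - you'd need a fundamentally different training story for that.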