🧵 Untitled Thread
raphael at Sun, 8 Sep 2024 07:43:43 UTC No. 16367600
so i'm studying the intricacies of ML and i suddenly found this bullshit
linear regression is just where algebra 2 and statistics overlap on khan academy, it's y = mx + b
it's the line of best fit
it predicts a linear value, or if the data is non-linear it still fits to it, like in quant analysis
and tree boosting uses linear regression to predict the next branch after filtering the binary classification free of logistic regression
what the fuck is this shit?
did anyone else notice this?
>t. 100 iq anti memer
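for reference, the whole y = mx + b "line of best fit" thing is a couple lines of numpy. this is just an illustrative sketch with made-up data, np.polyfit here is plain ordinary least squares:
'''
# illustrative sketch: fitting y = m*x + b, i.e. the line of best fit
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=1.5, size=x.size)  # noisy linear data

m, b = np.polyfit(x, y, deg=1)  # degree-1 polynomial fit = line of best fit
print(f"m={m:.2f}, b={b:.2f}")  # recovers roughly 2.0 and 1.0
'''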
raphael at Sun, 8 Sep 2024 07:46:16 UTC No. 16367602
>>16367600
'''
 y
 ^
 |                 *
 |             *
 |         *    *
 |      *           <- Line of best fit
 |   *
 | *
 +-----------------> x
'''
raphael at Sun, 8 Sep 2024 07:47:17 UTC No. 16367605
>>16367602
why does regression have complex syntax if it's just algebra 2
Anonymous at Sun, 8 Sep 2024 07:55:14 UTC No. 16367612
Holy schizo, what are you saying. Fix that spacing and give more context. What do you mean by "this bullshit"? Linear regression is statistics, yeah; the notation usually looks like that because you minimize the sum of squared errors (least squares). Notice what exactly?
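"least squares" just means picking the slope and intercept that minimize sum_i (y_i - (m*x_i + b))^2. a minimal sketch of doing that with a design matrix (illustrative only, made-up numbers):
'''
# illustrative: solve least squares for m, b with a design matrix
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

X = np.column_stack([x, np.ones_like(x)])       # design matrix [x, 1]
(m, b), *_ = np.linalg.lstsq(X, y, rcond=None)  # minimizes ||X @ [m, b] - y||^2
print(m, b)
'''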
raphael at Sun, 8 Sep 2024 08:04:47 UTC No. 16367619
all of ml is regression
raphael at Sun, 8 Sep 2024 08:05:23 UTC No. 16367620
>>16367612
you are low iq
raphael at Sun, 8 Sep 2024 08:05:55 UTC No. 16367622
>>16367612
thats error dumbass
Anonymous at Sun, 8 Sep 2024 08:39:28 UTC No. 16367677
most of ML is just statistical learning aka regression
did you expect something else?
raphael at Sun, 8 Sep 2024 08:39:47 UTC No. 16367679
the sigmoid activation function, for example, is an s-curve, like how bitcoiners talk about efficiency in adoption. why don't we have generative activation functions so the neural network spreads to the entire plane? that's why most of machine learning is bullshit
unless there's a layer of abstraction i'm not aware of, please explain anon
>t. 100 FSIQ anti memer
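for reference, the sigmoid s-curve being talked about is just 1 / (1 + e^-x), which squashes any real input into (0, 1). a minimal illustrative sketch:
'''
# illustrative: the sigmoid activation, an s-curve squashing R into (0, 1)
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

xs = np.linspace(-6, 6, 7)
print(np.round(sigmoid(xs), 3))  # near 0 at -6, 0.5 at 0, near 1 at +6
'''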
Anonymous at Sun, 8 Sep 2024 08:48:26 UTC No. 16367688
>>16367679
Because empirically, training the activation functions themselves gives about the same results as just training some extra weights through a fixed non-linear activation, with lower parameter counts but higher inference cost? Check out Kolmogorov-Arnold Networks.
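rough idea of what a "trainable activation" even means. NOT an actual Kolmogorov-Arnold Network (those put learned splines on the edges), just an illustrative sketch of an activation whose shape is a set of coefficients you could train instead of a fixed sigmoid/ReLU:
'''
# illustrative only, NOT a real KAN: an activation parameterized as a
# learnable mixture of fixed Gaussian bumps
import numpy as np

def learnable_activation(x, coeffs, centers):
    # in a KAN-style layer the coeffs (random here) would be learned by
    # gradient descent, so the activation's shape itself is trained
    bumps = np.exp(-(x[:, None] - centers[None, :]) ** 2)
    return bumps @ coeffs

centers = np.linspace(-3, 3, 8)
coeffs = np.random.default_rng(0).normal(size=8)
print(learnable_activation(np.linspace(-3, 3, 5), coeffs, centers))
'''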
raphael at Sun, 8 Sep 2024 09:10:56 UTC No. 16367711
>>16367688
>Kolmogorov-Arnold Network.
how the fuck do you know this
raphael at Sun, 8 Sep 2024 09:12:59 UTC No. 16367713
>>16367688
you dont know what im talking about
raphael at Sun, 8 Sep 2024 09:13:36 UTC No. 16367715
>>16367677
i discovered the syntax bullshit you academics make up, you can't seem to see that
raphael at Sun, 8 Sep 2024 09:26:02 UTC No. 16367730
>>16367688
can you explain what a markov model is in simple terms and show a visualization
raphael at Sun, 8 Sep 2024 09:36:49 UTC No. 16367738
>>16367730
probability theory isnt agi
raphael at Sun, 8 Sep 2024 09:37:20 UTC No. 16367741
>>16367738
i say this because RL uses prob theory aka markov chains
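for reference, a markov model is just a set of states plus a transition-probability matrix, where the next state depends only on the current one; RL's MDPs add actions and rewards on top of that. a minimal illustrative sketch with a made-up weather example:
'''
# illustrative: a 2-state markov chain (sunny/rainy) simulated from its
# transition matrix; row i gives P(next state | current state = i)
import numpy as np

states = ["sunny", "rainy"]
P = np.array([[0.9, 0.1],    # sunny -> sunny 0.9, sunny -> rainy 0.1
              [0.5, 0.5]])   # rainy -> sunny 0.5, rainy -> rainy 0.5

rng = np.random.default_rng(0)
s = 0  # start sunny
walk = []
for _ in range(10):
    s = rng.choice(2, p=P[s])   # next state depends only on the current one
    walk.append(states[s])
print(" -> ".join(walk))
'''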
Anonymous at Sun, 8 Sep 2024 09:41:38 UTC No. 16367746
>>16367715
I'm glad you've realized it. Next look into logistic "regression" for more lolz
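the joke being that logistic "regression" is a linear model pushed through a sigmoid and then thresholded, i.e. a binary classifier. a minimal illustrative sketch with made-up, pretend-already-fitted weights:
'''
# illustrative: logistic "regression" = linear model + sigmoid + threshold
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b = np.array([1.5, -2.0]), 0.3   # pretend these were already fitted
x = np.array([0.8, 0.1])            # one feature vector
p = sigmoid(w @ x + b)              # P(class = 1 | x)
print(p, int(p >= 0.5))             # probability and predicted class
'''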
raphael at Sun, 8 Sep 2024 09:43:03 UTC No. 16367749
>>16367746
i know what binary classification is kek
raphael at Sun, 8 Sep 2024 09:45:46 UTC No. 16367753
>>16367746
oh shit lmfao
raphael at Sun, 8 Sep 2024 09:46:32 UTC No. 16367755
>>16367746
all of machine learning is regression what the fuck is this bullshit and what the fuck are hedge funds using
Raphael at Sun, 8 Sep 2024 11:14:42 UTC No. 16367858
>>16367600
Even vector autoregression is regression, what the fuck, so language models are using “kernel” non-linear regression
Fucking hell
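for what it's worth, vector autoregression really is just linear regression of today's vector on yesterday's. an illustrative VAR(1) sketch fit by least squares on made-up data:
'''
# illustrative: VAR(1) = regress the current vector x_t on the previous x_{t-1}
import numpy as np

rng = np.random.default_rng(0)
A_true = np.array([[0.8, 0.1],
                   [0.0, 0.7]])
x = np.zeros((200, 2))
for t in range(1, 200):
    x[t] = A_true @ x[t - 1] + rng.normal(scale=0.1, size=2)

X_prev, X_next = x[:-1], x[1:]
A_hat, *_ = np.linalg.lstsq(X_prev, X_next, rcond=None)  # ordinary least squares
print(np.round(A_hat.T, 2))   # should be close to A_true
'''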
Anonymous at Sun, 8 Sep 2024 11:20:27 UTC No. 16367866
>>16367605
Because you need a mechanism to estimate the best fit parameters for the linear regression (slope in each direction if it's a plane in many dimensions).
If you already know the slope and bias then you don't need anything special. The process of getting that best-fit slope and bias is the regression part.
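one common way to do that "getting the best-fit slope and bias" step is gradient descent on the squared error; this is just an illustrative sketch (closed-form least squares gives the same answer):
'''
# illustrative: estimating slope m and bias b by gradient descent on MSE
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 5, 100)
y = 3.0 * x - 1.0 + rng.normal(scale=0.2, size=x.size)

m, b, lr = 0.0, 0.0, 0.01
for _ in range(5000):
    err = (m * x + b) - y
    m -= lr * 2 * np.mean(err * x)   # d/dm of mean squared error
    b -= lr * 2 * np.mean(err)       # d/db of mean squared error
print(round(m, 2), round(b, 2))      # approaches 3.0 and -1.0
'''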
Anonymous at Sun, 8 Sep 2024 11:45:58 UTC No. 16367908
>>16367755
No, machine learning uses universal function approximators (MLPs) as units. These are usually stacked into separate layer/convolution blocks to form a model's architecture. It's much more than just a best-fit line: a best-fit line will always leave some residual variance if the data isn't perfectly linear, while a UFA is a non-linear function that tries to capture all the detail and drive that residual variance down.
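a minimal illustrative sketch of what "stacking" those units looks like, just a two-layer MLP forward pass with random weights, not any particular model:
'''
# illustrative: a tiny 2-layer MLP forward pass, i.e. stacked linear maps
# with a non-linear activation in between
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 16)), np.zeros(16)   # input dim 3 -> hidden 16
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)    # hidden 16 -> output 1

def relu(z):
    return np.maximum(z, 0.0)

def mlp(x):
    # without the relu this would collapse back into a single linear map
    return relu(x @ W1 + b1) @ W2 + b2

print(mlp(rng.normal(size=(5, 3))).shape)   # (5, 1)
'''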
raphael at Mon, 9 Sep 2024 00:10:48 UTC No. 16368970
>>16367866
then its stats
Anonymous at Mon, 9 Sep 2024 01:18:31 UTC No. 16369049
>>16367600
Literally all of machine learning and human consciousness is just elaborate linear regression and interpolation with a large enough number of different fits.
Raphael at Mon, 9 Sep 2024 02:36:56 UTC No. 16369125
>>16369049
It’s abstract memeing syntax
raphael at Mon, 9 Sep 2024 05:28:27 UTC No. 16369245
>>16369049
just read this again
you dont know how a machine learning algorithm works kek