We are running out of time to solve the Millennium Problems
Anonymous at Wed, 30 Oct 2024 10:55:31 UTC No. 16455740
In ~10 years, all these problems will be solved by AI, so if we want them to be solved by humans, we have to hurry up
Anonymous at Wed, 30 Oct 2024 11:15:57 UTC No. 16455753
>>16455740
You can't turn lead into gold, and a computer will not auto-populate knowledge and do your homework for you. Never gonna happen, and all the baseless claims of AGI and a "God in the machine" mean nothing on the ground, where I can mindfuck your AI in 30 seconds or less.
Anonymous at Wed, 30 Oct 2024 13:07:48 UTC No. 16455814
>>16455753
would not be so sure about that.
it is relatively easy to create an infinite amount of training data for math.
in programming at least there seems to be no real limit to what it can do, so we can expect future systems to be very capable in these areas. it might very well be able to at least check your paper and find flaws, like it can check my code and find flaws.
it may well be able to solve any kind of university exam with ease in a year or two; current versions are already getting good at this.
it might not be a god, but it will have a phd quite soon
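The "infinite training data for math" claim can at least be sketched concretely. A minimal illustration (hypothetical code, not any particular lab's pipeline): a seeded generator that emits as many arithmetic problems as you like, each with a machine-checkable answer, so model outputs can be verified automatically.

```python
import random

def gen_arithmetic_examples(n, seed=0):
    """Generate n (question, answer) pairs. Because every answer is
    computed exactly, a model's output on each question can be
    verified automatically -- no human labeling needed."""
    rng = random.Random(seed)
    ops = {'+': lambda a, b: a + b,
           '-': lambda a, b: a - b,
           '*': lambda a, b: a * b}
    out = []
    for _ in range(n):
        a, b = rng.randint(0, 999), rng.randint(0, 999)
        op = rng.choice(sorted(ops))  # deterministic order for the seed
        out.append((f"{a} {op} {b} = ?", ops[op](a, b)))
    return out

# The problem space is effectively unbounded: vary the seed, the
# operand range, or the operator set to sample fresh verified data.
examples = gen_arithmetic_examples(3)
```

Whether such synthetic data teaches anything beyond arithmetic is, of course, exactly what the rest of the thread argues about.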
Anonymous at Wed, 30 Oct 2024 13:14:48 UTC No. 16455817
>>16455740
LLMs aren't real AI and can't reason, stop coping.
Anonymous at Wed, 30 Oct 2024 14:25:44 UTC No. 16455844
>>16455740
That's not how it works, bro
Anonymous at Wed, 30 Oct 2024 15:36:32 UTC No. 16455908
>>16455814
A Ph.D. requires original research. Where will this "AI" get its original research from?
Anonymous at Wed, 30 Oct 2024 15:39:58 UTC No. 16455914
>>16455740
>In ~10 years, all these problems will be solved by AI
GOOD
Then we get reverse aging
Even if I was capable of attacking those problems, which would I rather?
>experience 40 years of decay in the latter half of my life after being more capable than 99.99999% of humans
>be a normal human, feel good and go outside every day
Anonymous at Wed, 30 Oct 2024 15:41:13 UTC No. 16455915
>>16455740
SAAAR PLEASE GIVE TRAINING DATA FOR MY MODEL YOU BLOODY BENCHOD IT WILL PROVE THE RIEMANN HYPOTHESIS BY LEARNING OFF OF THE FAILED PROOFS
Anonymous at Wed, 30 Oct 2024 15:58:20 UTC No. 16455940
>>16455740
>*AI plays chess
>your consciousness is nothing special
>*AI passes Turing Test
>your consciousness is nothing special
>*AI does art
>your consciousness is nothing special
>*AI does science
>Noooo STooop my consciousness is special and AIs can't reason like me
The absolute state of /sci/tards.
Anonymous at Wed, 30 Oct 2024 16:04:41 UTC No. 16455943
>>16455914
How will solving the Riemann hypothesis solve aging? Solving aging doesn't require more intelligence, it just needs research, which takes time: it takes 36 months for a mouse to die, and if you need to try several compounds, a lot of time passes, even with AGI.
Anonymous at Wed, 30 Oct 2024 16:05:29 UTC No. 16455944
>>16455740
we've got another 975 years left surely?
Anonymous at Wed, 30 Oct 2024 18:31:32 UTC No. 16456095
>>16455940
You're downplaying a human being as nothing more than a logical system and then proceed to toot your horn that logical systems are better at being logical systems than your caricature of a human being. Classic straw man.
Anonymous at Wed, 30 Oct 2024 19:02:47 UTC No. 16456124
>>16456095
For decades the materialists on this board have been parading machine intelligence as the apex of intellect, set to overtake man in almost every sci-fi.
Now that shit gets real and AI actually starts to become comparable to, or even straight up outcompete, humans in various sectors, all of a sudden it's ohhh nooo, AI will never be able to do math/sci.
Like, are you jokers for real? A logical machine will never be able to outcompete you in math/sci, of all things? I swear, the moment /sci/tards perceive a threat to their paychecks, all rationality goes out the window and their triple-digit human IQs become wholly bent on manufacturing non-stop copium.
Anonymous at Wed, 30 Oct 2024 19:19:22 UTC No. 16456130
>>16456124
That's a bunch of sweeping statements. It's wholly unclear what ''doing'' math and science even is, and the future does not need to be humans versus AI; it can be humans with AI. It's dishonest to portray one future as an inevitability. We have more to fear from what humans are going to do with AI than from the possibility of AI itself.
Anonymous at Wed, 30 Oct 2024 19:48:57 UTC No. 16456161
>>16456130
>It's wholly unclear what ''doing'' math and science even is
Yep, here comes the cope.
Anybody genuinely doing math and science knows what it is. Empirical study comes down to one thing: predictions. Create models of reality, predict or bust. That's it.
Whatever fears the general public might have, or should have, the fear on this board over irrelevancy and job loss to AI is absolutely palpable.
Anonymous at Wed, 30 Oct 2024 20:11:13 UTC No. 16456195
>>16456161
>That's it.
No it's not. Towards what end do we build models of reality? Where do our hypotheses come from? Where is the line between facts and interpretations? What cost/benefit ratios are acceptable? Do the ends of science justify the means? Are AI capable of asking themselves these questions and re-evaluating their answers during their life cycle? Many such questions.
Anonymous at Wed, 30 Oct 2024 20:46:29 UTC No. 16456234
>>16456195
>No it's not. Towards what end do we build models of reality?
To consistently reproduce results, to get more of what people want and less of what people don't want. A simple exercise in logic based on empirical sensory data. That's it. There is nothing mystical about it.
>Are AI capable of..(more empirical questions)
Short of appealing to /x/ concepts, there is no question rooted in observable facts that an AI won't be able to consider. It's simple data input/output.
In terms of processing power and software malleability, your wetware of a brain is severely capped, while a machine's software/hardware is not. The eventual outcome is obvious to anyone not huffing copium by the barrel.
Anonymous at Wed, 30 Oct 2024 20:52:48 UTC No. 16456245
>>16456234
Perhaps my point was unclear. To clarify: it's not up to AI to decide its input, and it's not up to AI to decide what to do with its output.
Anonymous at Wed, 30 Oct 2024 20:59:52 UTC No. 16456255
>>16456245
And that's a question of subjective should, not objective could.
Anonymous at Wed, 30 Oct 2024 21:21:20 UTC No. 16456281
>>16456255
So you concede that the subjective part of science is not going to be replaced by AI.
Anonymous at Wed, 30 Oct 2024 21:24:28 UTC No. 16456284
>>16456124
>>16456161
>>16456234
absolute state of this trannyme lover with his inferiority complex.
If you weren't too busy watching anime and jerking off to hentai, you could see that science is not just input/output with randomised processes in between; it's much more.
You mentioned creating a model. It's not something you can brute-force your way through until you find the right one. A lot of problems in science and math are beyond the scope of our current knowledge and understanding in that field, so no amount of pattern recognition will help you find the answer. They aren't the type of questions where "the answer is somewhere inside the vast amounts of current data, we just have to look closer" or "we just need to connect the dots and the answer will reveal itself". To solve a lot of problems in science and math, especially the difficult ones, you need to literally create something new that has never existed before, something that would challenge well-established ideas that have been out there for decades or centuries. You need to think outside the box, like that faggot Steve Jobs used to say.
AI can't do that.
Anonymous at Wed, 30 Oct 2024 21:34:34 UTC No. 16456288
>>16456281
>subjective part of science
>science
>subjective
Do you even read what you write?
Yes, the part about how I go about eating an output cake is subjective, and also not science.
The part about how much I allow an AI to penetrate my privacy to collect input data on what type of cake I like is also subjective, and also not science.
The part where the AI turns said data into a delicious cake, measured in terms of efficiency and effectiveness, is what is science, and is not subjective.
Anonymous at Wed, 30 Oct 2024 21:34:43 UTC No. 16456289
>>16456284
Not him, but you do raise the question of where novelty comes from, if not from some combination of current data. Whether AI is better at finding a novel combination that revolutionizes a scientific field is another question, but that's exactly where he wants us, because then he'll say we're shifting the goalposts by pretending that creativity is magical.
Anonymous at Wed, 30 Oct 2024 21:41:39 UTC No. 16456294
>>16456288
>Do you even read what you write?
If you really mean to pretend that the objective can clearly be discerned from the subjective in 2024, then you're not worth anyone's time or effort to reply to. An AI in the hands of the vax crowd will generate different results from an AI in the hands of the anti-vax crowd, even though in both cases they follow logic and reason.
Anonymous at Wed, 30 Oct 2024 21:52:14 UTC No. 16456301
>>16456284
>You mentioned creating a model. It's not something you can brute-force your way through until you find the right one.
Overwhelming empirical evidence says otherwise, faggot.
>no amount of pattern recognition would help you to find the answer
>you need to literally create something new that has never existed before
Science is literally pattern recognition. What do you think a hypothesis is? It's a pattern phrased in human language. What do you think the scientific method does? It's a methodology for adjusting the patterns in a model to more closely match the patterns in reality.
Create something new? All you are creating is more best-fit lines to try to better approximate the pattern in the data points.
Yes, an AI can definitely do that, and better.
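The "best-fit lines" framing can be made concrete. A minimal sketch (plain Python, no libraries): ordinary least squares for y = m*x + c in closed form, i.e. mechanically adjusting a model's parameters until its pattern matches the pattern in the data points.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = m*x + c, in closed form:
    m = cov(x, y) / var(x),  c = mean(y) - m * mean(x)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    m = cov / var
    c = mean_y - m * mean_x
    return m, c

# "Adjust the patterns in a model to more closely match patterns
# in reality": here the fit exactly recovers the generating rule.
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]      # generated by y = 2x + 1
m, c = fit_line(xs, ys)   # m == 2.0, c == 1.0
```

Whether all of science reduces to this kind of parameter fitting is the contested claim; the mechanics of the fitting itself are not.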
Anonymous at Wed, 30 Oct 2024 22:01:43 UTC No. 16456313
>>16455740
https://www.cio.com/article/3593403
>can't even transcribe
Did we get too cocky, AI bros?
Anonymous at Wed, 30 Oct 2024 22:04:42 UTC No. 16456317
>>16456301
>best-fit
How do you know when an AI has found a better fit? In b4:
>Because more reliable and accurate predictions.
Then explain the rules for determining whether an observation fits a prediction, and by extension the rules for whether or not an AI did the science correctly. Pro tip: you can't, because otherwise you would win a Nobel Prize for solving the entire climate change debate once and for all.
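For what it's worth, one standard convention (a sketch, not a universal rule) for "an observation fits a prediction" is a tolerance test on the residual: accept if the observation lies within k standard deviations of the prediction. The code is trivial; the catch is that k itself is a chosen threshold, which is exactly the kind of subjective input the post above is pointing at.

```python
def fits_prediction(observed, predicted, sigma, k=2.0):
    """One common convention, not a derived rule: an observation
    'fits' if it lies within k standard deviations (sigma) of the
    prediction. The threshold k is a choice, not an observation."""
    return abs(observed - predicted) <= k * sigma

# Whether 21.5 "fits" a prediction of 20.0 depends entirely on k:
loose = fits_prediction(21.5, 20.0, sigma=1.0, k=2.0)   # True
strict = fits_prediction(21.5, 20.0, sigma=1.0, k=1.0)  # False
```

So the mechanical check is easy to write down; what can't be written down from the data alone is which k counts as "doing the science correctly".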
Anonymous at Wed, 30 Oct 2024 22:10:41 UTC No. 16456322
>>16456294
>>16456317
>AI cannot ever collect empirical data independently
>AI must rely on my meaty human hands to input data to verify models, about which I can lie my ass off
>This is a fundamentally insurmountable barrier for AIs
Your copium tank is running out of gas.
Anonymous at Wed, 30 Oct 2024 22:22:02 UTC No. 16456336
>>16456322
The insurmountable problem is this: all logic and reason must start with an axiom for which there is no logical or reasonable justification. In b4:
>Try all the axioms.
An axiom is still needed to determine the hierarchy of all possible frameworks of valid, accurate and reliable correlations.
Anonymous at Wed, 30 Oct 2024 22:26:13 UTC No. 16456341
>>16455817
So what's going on with "actual AI"? Is it progressing as rapidly as LLMs?
Anonymous at Wed, 30 Oct 2024 22:33:22 UTC No. 16456349
>>16456336
>can't prove axioms
Neither can humans; nobody cares.
Useful axiomatic assumptions are induced generalizations based on large quantities of observations. AIs can either use human axioms or form their own based on their own observations from ground zero.
The only thing fundamental is purpose. Axioms are secondary to that.
Anonymous at Wed, 30 Oct 2024 22:43:42 UTC No. 16456360
>>16456341
Real AI is a dead dream of wannabe-genius modern alchemists who are too egotistical to admit that neuroscience must come first before even a hint of real AGI can appear. All they can do is cope about artificial neuron counts being too low and training data being too small (despite the human brain being able to BTFO CNNs in recognition tasks with only a shred of data, achieving instant, near-perfect recognition).
Anonymous at Wed, 30 Oct 2024 23:01:57 UTC No. 16456374
>>16456349
>The only thing fundamental is purpose. Axioms are secondary to that.
That's like the chicken-and-egg of ontology and epistemology. It's indeterminable which one is primary.
>Neither can humans
The point is that all decisions, including the decisions of AI, are ultimately based on subjective norms and values. Your pretense that science and AI are objective, independent systems is unfounded.
>AIs can either use human axioms, or form their own
So you concede that AI, and science done by AI, is subjective.
>their own observations from ground zero
Again the chicken-and-egg problem: an observation is already an intentionally directed, limited perspective. Thus observation already requires axioms before axioms based on observations can be established. There is no such thing as ground zero. Neither humans nor AI are blank slates.
Unironically, AI must rely on meaty human hands, because it's of utmost importance to the wellbeing of sapient homos that AI operates on our norms and values and not on its own.
Anonymous at Wed, 30 Oct 2024 23:32:30 UTC No. 16456391
>>16456374
>AI is subjective
No, AIs are trained on objective values. You can literally see the input codes. That is how their output is measured and adjusted.
>axioms are fundamental
Axioms are only fundamental in the human epistemological system. The human epistemological system is a way to efficiently utitlize the human mind to recongize patterns. It's not some baked in fundemental aspect of reality. Intelligence is measured purely by input/output. Human minds having generalized axiomatic assumptions helps toward this purpose.
>AI must rely on meaty hands to input perimeters
>should not be completely independent without oversight
Should not. Not could not.
Anonymous at Thu, 31 Oct 2024 00:43:15 UTC No. 16456448
/sci/ is less freaked out about the possibility of AI replacing scientists in the immediate future than about the fact that AI will be used to read the vast amount of unread publications and flag suspicious papers for review.
The gruel train is about to end for many.
Anonymous at Thu, 31 Oct 2024 03:11:40 UTC No. 16456598
>>16455740
If the answer isn't in the algorithms it won't know the answer
Anonymous at Thu, 31 Oct 2024 08:07:20 UTC No. 16456735
>>16455814
>there seems to be no real limit to what it can do
You mean we don't know if there is a limit to what it can do.
>we can expect future systems to be very capable in these areas
How can you expect that when you don't know what the limit is?
>it may well be able to solve any kind of university exam with ease in a year or two; current versions are already getting good at this
That is because that shit is explicitly trained for by these jewish AI companies, so they can tell their investors: look how smart this thing is getting, you should give us another 10 gorillion dollars for our next GPU cluster.
Anonymous at Thu, 31 Oct 2024 13:37:25 UTC No. 16456910
>>16455740
Why would I be bothered by AI solving those instead of some meat blob?
Anonymous at Thu, 31 Oct 2024 14:08:33 UTC No. 16456939
>>16456910
It's like with football teams. You always want yours to win
Anonymous at Thu, 31 Oct 2024 14:19:21 UTC No. 16456952
>>16456939
Oh ok, some people are like that; they need the sense of belonging, the sense of being part of a group, so that there's an "us" and a "them".
I don't like sports and never considered anyone my team... maybe that's why I don't care.
Anonymous at Thu, 31 Oct 2024 14:28:19 UTC No. 16456958
>>16455814
>0+0=0
>0+1=1
>0+2=2
>. . .
OMFG
Anonymous at Thu, 31 Oct 2024 14:35:51 UTC No. 16456972
>>16456952
Tribalism is an evolutionary survival skill. We all have it. Cheering for your team is just evolution at work.
You are maladapted.
Anonymous at Thu, 31 Oct 2024 14:40:02 UTC No. 16456979
>>16455740
AI is very good at creating clear and succinct descriptions of well understood problems with well documented solutions.
Anonymous at Thu, 31 Oct 2024 14:45:26 UTC No. 16456989
>>16456972
Yeah, it makes sense. I always think in terms of gains and costs when doing something... I may be a sociopath. Is there an objective way to test that?
Anonymous at Thu, 31 Oct 2024 15:06:32 UTC No. 16457039
>>16456989
"DSM-5 flame war" is another thread, Anon.
This is an "AGI fan fiction" thread.
Anonymous at Thu, 31 Oct 2024 15:12:57 UTC No. 16457052
>>16457039
>AGI fan fiction
Well said. AGI is as much a fantasy as dragons.
Even if you had a humanoid robot that speaks and behaves like a human, under the hood there will always be glorified calculator software written by a fat and sweaty IT savant trying to cope with loneliness.