🧵 Untitled Thread
Anonymous at Fri, 22 Mar 2024 12:31:00 UTC No. 16090944
So... in the near future, if not now, scientists and researchers will be able to put a scientific paper into an AI summarizer to get the TL;DR, and they can cross-check the AI's conclusions by looking at the paper directly, say to confirm that what the AI is saying is true
This means researchers will be able to digest far more information going forward, and produce more papers as they aren't locked into having to read papers multiple times to understand them
What are the implications of this?
Anonymous at Fri, 22 Mar 2024 13:26:12 UTC No. 16091015
>>16090944
Generally the paper's introduction/conclusion is a good enough TL;DR. This will probably only help outsiders understand papers, i.e. people who aren't part of the specific field but want to use the research.
Anonymous at Fri, 22 Mar 2024 14:57:31 UTC No. 16091100
Eventually the AI will be able to write the papers, and humans "double-checking" AI-produced information will be seen as a stupid endeavor, equivalent to watching paint dry.
Frankly I can't wait to have information that I can rely on and use in the world. Right now research is, on the whole, fake, and that makes engineering solutions difficult because I have to constantly add safety factors beyond what the researchers intended.
Anonymous at Fri, 22 Mar 2024 14:58:57 UTC No. 16091102
>>16090944
The authors themselves put in a TL;DR already. It's called the abstract. In my field it's also common practice to put an outline of the paper at the end of the introduction. I'm not sure what an AI could do in the near future that is better than this.
Anonymous at Fri, 22 Mar 2024 15:52:17 UTC No. 16091184
>>16091102
But ChatGPT can give you the abstract without the jargon. Scientists intentionally use jargon to filter out non-scientists. It's fucking scummy.
AIFag !Gy8L8Ggb7w at Fri, 22 Mar 2024 15:56:01 UTC No. 16091187
>>16090944
Doubt it. LLMs are dumb and can't properly summarize anything harder than undergrad-level mathematics.
I can just read the abstract and the summary instead of using shitty LLMs.
Anonymous at Fri, 22 Mar 2024 16:19:02 UTC No. 16091212
>>16091184
In a field like theoretical physics or math the jargon is there to convey ideas that don't exist in natural language. There is no way around it. There might be some self-promoters dressing up something simple in buzzwords, but that is not why the majority do it.
Anonymous at Fri, 22 Mar 2024 17:07:07 UTC No. 16091266
No you tard, an abstract is made for this purpose, or the section called "Conclusions".
Anonymous at Fri, 22 Mar 2024 17:10:29 UTC No. 16091269
>>16091102
>>16091266
Except no researcher just uses the abstract. He actually reads the paper and tries to understand it
Anonymous at Fri, 22 Mar 2024 17:33:40 UTC No. 16091289
>>16091269
lmao, you have no clue. Reading and understanding a paper can take a day or more, depending on how different the material is from your specialty. The abstract is there so you can determine which of many papers are worth the extra time to even read the introduction and skim the rest, which itself might cost you half an hour.
Anonymous at Fri, 22 Mar 2024 17:44:25 UTC No. 16091302
>>16091289
And... that's OP's point
AI is a great tool to massively increase the speed at which you understand a paper. An abstract can't be compared at all to an AI summarizer, where you can ask follow-up questions, ask it to granularize or generalize the paper, and so on.
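To be concrete, this is roughly the loop I mean. Just a rough sketch with the openai Python client; the model name, the paper.txt dump, and the prompts are placeholders I made up, not anything official:

from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

paper_text = open("paper.txt").read()  # plain-text dump of the paper you want summarized

history = [
    {"role": "system", "content": "You summarize scientific papers accurately and say so when you are unsure."},
    {"role": "user", "content": "Summarize this paper in ten bullet points:\n\n" + paper_text},
]

reply = client.chat.completions.create(model="gpt-4o", messages=history)
print(reply.choices[0].message.content)

# follow-up question, keeping the conversation history so the model still has the paper as context
history.append({"role": "assistant", "content": reply.choices[0].message.content})
history.append({"role": "user", "content": "Go into more detail on the main result and the assumptions behind it."})

reply = client.chat.completions.create(model="gpt-4o", messages=history)
print(reply.choices[0].message.content)

Whether the answers are any good is the whole argument here, obviously, but the point is you can keep drilling down instead of being stuck with one fixed abstract.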
Anonymous at Fri, 22 Mar 2024 17:57:36 UTC No. 16091316
>>16091302
Yes I agree that would be helpful, but large language models are still garbage for technical fields. Maybe in 5 years.
AIFag !Gy8L8Ggb7w at Fri, 22 Mar 2024 18:10:01 UTC No. 16091330
>>16091302
>you can ask follow-up questions, ask it to granularize or generalize the paper, and so on.
have you ever tried this?
I did and they're all garbage, especially for more technical papers where the mathematics is precise.
LLMs are horrible at this: they give wrong answers and wrong mathematical references, and make it even more confusing.
No, I'd rather just read the papers myself.
AIFag !Gy8L8Ggb7w at Fri, 22 Mar 2024 18:14:20 UTC No. 16091336
>>16091184
>abstract without the jargon
The reason to use jargon in an abstract is to shorten it in a precise way, so other experts can skim the abstract and get what you are trying to do. That is the point of the abstract. Without jargon, researchers would have to write a bunch of mathematical formulas or long paragraphs to convey the same ideas.