@emilymbender
This is the paper the journalist references:
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5166938
The AI antagonists once again prove that humans do not need #AI to generate #bullshit.
All this "research" proves, on the most generous reading, is that these professors are ignorant of how #LLM models work. That is the charitable interpretation of the human #hallucinations they created; the less charitable one is that they engaged in academic fraud.
Briefly why:
Methodology.
They upload a full 185-page textbook into the LLM context window (presumably with the copyright owner's permission), then ask SUPER SPECIFIC questions without directing the AI to reference the uploaded text.
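To make the methodology gap concrete, here is a minimal sketch (no real API calls; the function and variable names are hypothetical) of the difference between merely dumping a document into the context and explicitly grounding the question in it:

```python
# Hypothetical illustration: assembling chat-style prompts with and
# without a grounding instruction. All names here are made up for
# the sketch, not taken from the study.

textbook_text = "<185 pages of textbook text would go here>"
question = "What does the author say about fiduciary duty in Chapter 7?"

def build_prompt(question, document=None, ground=False):
    """Assemble a chat-style message list."""
    messages = []
    if document is not None:
        messages.append({"role": "system", "content": f"Document:\n{document}"})
    if ground and document is not None:
        # The grounding instruction the critique says was missing:
        messages.append({
            "role": "system",
            "content": "Answer ONLY from the document above. "
                       "If the answer is not in it, say so.",
        })
    messages.append({"role": "user", "content": question})
    return messages

# What the study (per the critique) did: document present, no grounding.
ungrounded = build_prompt(question, document=textbook_text)
# What it should have done: tell the model to use the uploaded text.
grounded = build_prompt(question, document=textbook_text, ground=True)
```

Without the grounding instruction, the model is free to answer from its training data, so "hallucinations" about the textbook are the expected outcome, not a discovery.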
Once again, be sceptical when humans reference a "study" showing "Bad AI". So far, every such study I have seen turned out to be the human flailing at the controls.
Including the "famous" #BBC study.
Newsflash: "Hammers cause thumb injuries in humans who are untrained in their use."