@JagersbergKnut I have been researching the topic for several months. A first experimental study is here https://t.co/n3ub5iY2yY and I am now focused on uses and risk mitigation in fact-checking (coming from NLP in my PhD, my interest in LLMs is not surprising).
1,770 followers
@benoitraphael Lots of tests so far and nothing to report in that direction (though I find it fascinating); on the other hand, I frequently observe generated content that looks plausible but has nothing to do with the facts, regardless of the prompt (from simple to
@rasmus_kleis Current detectors lack accountability and reliability. However, such detectors are less relevant because GAI systems are used both to inform and to disinform. The solution? Focusing on content quality (especially when dealing with artificial hallucinations).
478 followers
ChatGPT as a commenter to the news: can LLMs generate human-like opinions? https://t.co/SszQT5kLxt