TY - JOUR
TI - Risks and benefits of conversational agents for naive access to scientific knowledge
AU - Viseur, Robert
AB - This article examines the risks associated with using generative conversational agents such as ChatGPT to access scientific knowledge (and, more broadly, technical and medical knowledge). The evolution of the Web has been accompanied by a shift in gatekeeping towards algorithmic forms, of which generative artificial intelligences are the latest manifestation. Their limitations, most notably hallucinations and various biases, are, however, well documented. Are these conversational agents therefore suitable for tasks of scientific mediation? Their performance depends not only on the properties of their algorithms but also on the availability of training data in sufficient quantity and quality. Access to content on news websites is, moreover, frequently hindered by publishers. What, then, of scientific content managed by commercial academic publishers? Must developers of generative chatbots rely on lower-quality material, with harmful consequences for the reliability of responses? We therefore analyse the risks of scientific misinformation stemming from constraints on data access. We then discuss these risks more broadly, when such agents are used as scientific mediators, across different usage scenarios.
DO - 10.21494/ISTE.OP.2026.1405
JF - Open Journal in Information Systems Engineering
KW - Artificial intelligence
KW - gatekeeping
KW - misinformation
KW - datasets
KW - Intelligence artificielle
KW - mésinformation
KW - jeu de données
L1 - https://openscience.fr/IMG/pdf/iste_roisi26v6n1_3.pdf
LA - en
PB - ISTE OpenScience
DA - 2026/01/28
SN - 2634-1468
TT - Risques et bénéfices des agents conversationnels pour l'accès profane aux connaissances scientifiques
UR - https://openscience.fr/Risks-and-benefits-of-conversational-agents-for-naive-access-to-scientific
IS - Special Issue
VL - 6
ER -