ROISI - ISSN 2634-1468 - © ISTE Ltd
The journal aims at providing a space for the publication of disciplinary or interdisciplinary methodological or applied French-speaking research, in the field of information systems engineering. The contributions formalize the design, implementation, and evaluation of information systems. The journal aims to promote and energize stimulating and high-quality research in the emerging themes of information systems. The language of publication is French and, exceptionally, English.
Scientific Board
Guillaume CABANAC
Nadira LAMMARI
This special issue of the Revue Ouverte de l’Ingénierie des Systèmes d’Information is devoted to the theme "The cultural industry facing digital transformation", a central topic for understanding recent developments in cultural practices, organizations, and uses in the digital age. Culture, which encompasses shared beliefs, practices, norms, values, traditions, and knowledge, shapes social interactions as well as modes of production and consumption. Digital transformation is profoundly changing these processes, offering new forms of cultural access, consumption, and creation.
This article questions the dominant narratives on generative artificial intelligence (GAI) in the cultural and creative industries, confronting them with the social and economic dynamics that accompany its integration into work processes and collectives. Three myths are analyzed: the disappearance of professions and jobs, the obsolescence of skills, and productivity gains. Using a multidisciplinary approach combining expertise in AI, a sociotechnical perspective, and fieldwork, we show that GAI reveals structural imbalances rather than producing them: the gradual deterioration of processes, value capture, and deskilling are telling illustrations of this. We highlight real uses that tend to reconfigure these technologies and propose concrete avenues for collective reappropriation through the empowerment of actors, usage labs, and technological social dialogue.
This article focuses on the recommendation system of the subscription video-on-demand (SVOD) platform Netflix®, specifically exploring what can be understood from an end-user perspective. To this end, we investigate how viewing choices influence the recommendations displayed during the subsequent platform visit. To conduct this analysis, we designed two experiments comparing two user profiles: one long-standing profile active for seven years, and another newly created profile. We concentrated on elements directly observable by the user: categories, recommended titles within personalized categories, and the "top banners" displayed prominently on the homepage. Our findings revealed the following: first, recommendations for a recent profile are more quickly and strongly influenced by the content viewed on that profile, whereas an older profile shows little change. Second, the new profile receives recommendations spanning a wide variety of genres, including popular content as well as some personalized suggestions. Finally, over time, while the older profile initially received increasing numbers of documentary recommendations based on its viewing history, only a few days of inactivity were enough for these recommendations to disappear entirely. Conversely, the recent profile continued to receive documentary suggestions. These experiments also allowed us to observe the evolution of suggested categories for each profile. The significant diversity of categories and the variability in how films are ranked within them emerged as important factors. This observation raises further questions about how the recommendation system creates and uses categories to encourage user engagement.
This article examines the risks associated with using generative conversational agents such as ChatGPT to access scientific knowledge (and, more broadly, technical and medical knowledge). The evolution of the Web has been accompanied by a shift in gatekeeping towards algorithmic forms, of which generative artificial intelligences are the latest manifestation. Their limitations, most notably hallucinations and various biases, are, however, well documented. Are these conversational agents therefore suitable for tasks of scientific mediation? Their performance depends not only on the properties of their algorithms but also on the availability of training data in sufficient quantity and quality. Access to content on news websites is, moreover, frequently hindered by publishers. What, then, of scientific content managed by commercial academic publishers? Must developers of generative chatbots rely on lower-quality material, with harmful consequences for the reliability of responses? We therefore analyse the risks of scientific misinformation stemming from constraints on data access. We then discuss these risks more broadly, when such agents are used as scientific mediators, across different usage scenarios.
Synthetic media produced by generative artificial intelligence (GAI) tools are flooding the Web, creating risks of cultural harm that need to be addressed. Yet the question of cultural harm resulting from the dissemination of synthetic media has been addressed only partially in the legal literature. This article aims to fill that gap by exploring the legal implications of culturally harmful synthetic media. To that end, it analyzes the key concept of cultural harm and the role played by cultural rights and the principle of cultural diversity in its characterization. The ways in which synthetic media can cause cultural harm, and the potential legal consequences, are then discussed. It is shown that, while international law provides mechanisms for preventing cultural harm, few of them genuinely take into account the specificities of culturally harmful synthetic media.
This special issue, "Contributions and limits of artificial intelligence for the management of tacit knowledge in companies", brings together a reworked selection of three of the eight contributions presented at the "Connaissances Tacites" (KM-IA) workshop.
Digital artificial intelligence (AI) is ubiquitous and constantly interacts with humans, drawing on both their explicit and tacit knowledge. Unlike humans, who possess both formalized knowledge and a wealth of tacit knowledge shaped by experience, AI does not hold any intrinsic knowledge. It generates responses by exploiting algorithmic models and accumulated datasets, but encounters limitations in understanding and reproducing tacit knowledge, which is often unarticulated and highly context-dependent. However, AI could play a key role in the articulation and transmission of such knowledge. By interacting with humans, it may assist in structuring informal knowledge, identifying recurring patterns in decision-making, and facilitating the exchange of expertise within organizations. Inspired by the concept of "Ba" defined by Nonaka, which describes a shared space that fosters knowledge creation, AI could act as a catalyst for formalizing certain aspects of tacit knowledge, while simultaneously raising major epistemological challenges, such as bias and the opacity of AI models. In this article, we analyze the capabilities and limitations of AI in addressing informal knowledge. We explore the mechanisms by which it could contribute to the emergence of a hybrid intelligence, combining human reasoning with algorithmic assistance, and discuss the practical, ethical, and equity-related implications of this interaction, particularly in domains where intuition and experience are essential, such as medicine, education, and strategic decision-making.
This study explores the capture and valorization of tacit knowledge within scientific organizations, using BRGM as a case study. Facing the strategic challenge of knowledge management, we examine the transformation of individual knowledge into a collective asset via a three-pronged methodological approach: a theoretical framework, the use of AI tools to identify and transcribe this knowledge, and the proposal of a case-based reasoning (CBR) architecture with an AI agent ("beregem") to solve problems in geosciences. This research contributes to the management of scientific tacit knowledge through AI-based solutions.
This paper explores the perpetuation of practitioners’ tacit knowledge in the context of projects aimed at designing Artificial Intelligence (AI) uses in organizations. By comparing an interdisciplinary review of the state of the art on tacit knowledge with an observational field study of 7 application cases in France and Switzerland, this article sheds light on the dynamics of capturing practitioners’ tacit knowledge during the design and operation of AI models and highlights three areas for consideration: (1) the emergence of new devices for translating practitioners’ know-how into data models and for capturing tacit knowledge through the maieutic process carried out in the design phase, (2) the difficulty of taking unconscious tacit knowledge into account when judging AI in use, revealing issues of interpretability, cognitive bias, and trust, and (3) the capture of knowledge, including tacit knowledge, as the primary goal of data science projects. This capture, however, may not be desired by practitioners, or may even introduce an intermediation that prevents the development of further tacit knowledge derived from real-life experience in favour of knowledge linked to the use of AI. These considerations point to ways of improving devices for perpetuating tacit knowledge, provided their legitimacy is justified and the risks are mitigated.
Editorial Board
Editor in Chief
Isabelle COMYN-WATTIAU
ESSEC Business School
[email protected]
Vice Editor in Chief
Christine VERDIER
Université Grenoble Alpes
[email protected]
Olivier TESTE
IRIT, Université de Toulouse
[email protected]