This special issue of the Revue Ouverte de l'Ingénierie des Systèmes d'Information is devoted to the theme "The cultural industry in the face of digital transformation", a central topic for understanding recent changes in cultural practices, organizations, and uses in the digital age. Culture, which encompasses shared beliefs, practices, norms, values, traditions, and knowledge, shapes social interactions as well as modes of production and consumption. Digital transformation is profoundly reshaping these processes, offering new forms of cultural access, consumption, and creation.
This article questions the dominant narratives surrounding generative artificial intelligence (GAI) in the cultural and creative industries, confronting them with the social and economic dynamics that accompany its integration into work processes and collectives. Three myths are analyzed: the disappearance of professions and jobs, the obsolescence of skills, and productivity gains. Using a multidisciplinary approach combining expertise in AI, a sociotechnical perspective, and fieldwork, we show that GAI reveals structural imbalances rather than producing them: the gradual deterioration of processes, value capture, and deskilling are telling illustrations. We highlight real uses that tend to reconfigure these technologies and propose concrete avenues for collective reappropriation through the empowerment of actors, usage labs, and technological social dialogue.
This article focuses on the recommendation system of the subscription video-on-demand (SVOD) platform Netflix®, specifically exploring what can be understood from an end-user perspective. To this end, we investigate how viewing choices influence the recommendations displayed during the subsequent platform visit. To conduct this analysis, we designed two experiments comparing two user profiles: a long-standing profile active for 7 years and a newly created one. We concentrated on elements directly observable by the user: categories, recommended titles within personalized categories, and the "top banners" displayed prominently on the homepage. Our findings revealed the following: first, recommendations for a recent profile are more quickly and strongly influenced by the content viewed on that profile, whereas an older profile shows little change. Second, the new profile receives recommendations spanning a wide variety of genres, including popular content as well as some personalized suggestions. Finally, while the older profile initially received increasing numbers of documentary recommendations based on its viewing history, only a few days of inactivity were enough for these recommendations to disappear entirely; conversely, the recent profile continued to receive documentary suggestions. These experiments also allowed us to observe how the suggested categories evolved for each profile. The significant diversity of categories and the variability in how films are ranked within them emerged as important factors. This observation raises further questions about how the recommendation system creates and uses categories to encourage user engagement.
This article examines the risks associated with using generative conversational agents such as ChatGPT to access scientific knowledge (and, more broadly, technical and medical knowledge). The evolution of the Web has been accompanied by a shift in gatekeeping towards algorithmic forms, of which generative artificial intelligences are the latest manifestation. Their limitations, most notably hallucinations and various biases, are, however, well documented. Are these conversational agents therefore suitable for tasks of scientific mediation? Their performance depends not only on the properties of their algorithms but also on the availability of training data in sufficient quantity and quality. Access to content on news websites is, moreover, frequently hindered by publishers. What, then, of scientific content managed by commercial academic publishers? Must developers of generative chatbots rely on lower-quality material, with harmful consequences for the reliability of responses? We therefore analyse the risks of scientific misinformation stemming from constraints on data access. We then discuss these risks more broadly, when such agents are used as scientific mediators, across different usage scenarios.
Synthetic media produced by generative artificial intelligence (GAI) tools are flooding the Web, creating risks of cultural harm that need to be addressed. Yet the question of cultural harm resulting from the dissemination of synthetic media has received only partial treatment in the legal literature. This article aims to fill this gap by exploring the legal implications of culturally harmful synthetic media. To that end, it analyzes the key concept of cultural harm and the role played by cultural rights and the principle of cultural diversity in its characterization. The ways in which synthetic media can cause cultural harm, and the potential legal consequences, are then discussed. It will be shown that, while international law provides mechanisms for preventing cultural harm, few means exist to genuinely account for the specificities of culturally harmful synthetic media.