This study examines whether automation bias, in situations of arbitration between human and AI-based assistance, varies as a function of individuals’ psychosocial characteristics. The literature highlights the robustness of automation bias in decision-making situations with a single aid, but a few recent studies using the dual decision-aid paradigm report more nuanced results, particularly as a function of participants’ characteristics. Two groups of participants (37 military pilot students vs. 37 operational pilots) are engaged in a close air support mission simulation, in which they must choose between information provided by a human aid and that provided by an AI-based automated aid. Trust in these aids is induced a priori by predefined levels of reliability (20%, 50%, 70%, 90%). At equal reliability, when confronted with a human aid and an AI-based aid, both novice and expert participants show a preference for the human aid; this preference is stronger among experts. The study questions the invariability of automation bias, highlighting the impact of the operator’s psychosocial characteristics on decision-making. It appears necessary to reconsider automation bias in modern contexts through individuals’ representations of technologies in order to optimize the design of decision support systems.
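As a rough illustration of how a predefined reliability level can be operationalized in a simulation of this kind, the sketch below draws each aid recommendation correctly with a fixed probability. It is a minimal, hypothetical example; the function name and trial structure are not taken from the study’s protocol.

```python
import random

def aid_recommendation(correct_answer: str, wrong_answer: str, reliability: float) -> str:
    """Return the aid's recommendation: correct with probability `reliability`.

    Over many trials, an aid configured with reliability 0.9 is right about
    90% of the time, which is one way a predefined reliability level can be
    induced a priori in a simulated decision-aid paradigm (illustrative only).
    """
    return correct_answer if random.random() < reliability else wrong_answer

# Simulate 1000 trials for each predefined reliability level.
for reliability in (0.2, 0.5, 0.7, 0.9):
    hits = sum(
        aid_recommendation("target A", "target B", reliability) == "target A"
        for _ in range(1000)
    )
    print(f"reliability {reliability:.0%}: {hits / 1000:.1%} correct recommendations")
```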
Epistemology is a meta-discipline whose structuring as a 4×4 matrix, called the epistemological polyptych, is presented here to help situate any scientific study along a simple spectrum. Some consequences of this polyptych approach are stated.
As a theoretical concept, cognitive warfare is receiving increasing attention. Yet there is a gap between the nascent literature on the subject and a thorough understanding of China’s cognitive warfare strategy and tactics, as well as their impact on democracies. The research hypothesises that China’s cognitive warfare strategy, while drawing on disruptive technologies and scientific advances, particularly in neuropsychology, is rooted in the country’s historical strategic culture, and in particular in indirect strategy and in the (de)socialisation processes embedded in China’s worldview.
Following an inventory of some characteristics of “meaning”, we consider the interaction of one human being with another as a crucial situation. The question of meaning is then discussed, with the hypothesis that meaning is the opposite of information.
To respond to the problems posed by the growing use of AI models in high-stakes applications, explainable artificial intelligence (XAI) has grown significantly in recent years. Initially dedicated to technical solutions for producing explanations automatically, the field encountered several difficulties, in particular when these solutions were put in front of non-expert end users. XAI then sought to draw inspiration from the social sciences to produce explanations that were easier to understand. Despite some encouraging results, this new approach has not delivered as much as hoped. This article analyzes the evolution of XAI through these two periods, discusses possible reasons for the difficulties encountered, and then proposes a new approach to improve the automated production of explanations. This approach, called semantic explainability or S-XAI, focuses on user cognition. Whereas previous methods are oriented towards algorithms or causality, S-XAI starts from the principle that understanding relies above all on the user’s ability to appropriate the meaning of what is explained.
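For contrast with the user-centered orientation of S-XAI, the kind of algorithm-oriented explanation produced in the first period of XAI can be as terse as a ranked list of feature attributions. The sketch below uses scikit-learn’s permutation importance as an illustrative stand-in for such methods; it is not a technique from this article, and the dataset and model are arbitrary choices.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train an opaque model, then produce an "explanation" as a ranked list of
# feature importances: technically automatic, but not necessarily meaningful
# to a non-expert end user.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(
    zip(data.feature_names, result.importances_mean), key=lambda p: p[1], reverse=True
)
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```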
Researchers are beginning to transition from studying human–automation interaction to human–autonomy teaming. This distinction has been highlighted in recent literature, and theoretical reasons why the psychological experience of humans interacting with autonomy may vary, and may affect subsequent collaboration outcomes, are beginning to emerge. In this review, we take a deep dive into human–autonomy teams (HATs) by explaining the differences between automation and autonomy and by reviewing the domain of human–human teaming to extrapolate a few core factors likely to be relevant for HATs. Notably, these factors involve critical social elements within teams that are, as argued in this review, central for HATs. We conclude by highlighting research gaps that researchers should strive to answer, which will ultimately facilitate a more nuanced and complete understanding of HATs in a variety of real-world contexts.