In their recently published paper, ‘Mirages: On anthropomorphism in dialogue systems’,1 Gavin Abercrombie, Amanda Cercas Curry, Tanvi Dinkar, Verena Rieser and Zeerak Talat survey the large body of research showing how linguistic factors in automated dialogue systems can raise issues of transparency and trust, as well as reinforce gender stereotypes and notions of acceptable language. They highlight how language models have helped to create widespread public confusion, pointing out that ‘this confusion extends from children to adults (including some journalists, policymakers, and business people) that are convinced, on the one hand, of humanity’s imminent enslavement to super-intelligent artificial agents (to the neglect of actual harms already being propagated by technological systems) or, on the other, that they provide super-human solutions to the world’s problems.’ 2 3 4
The value of anthropomorphism
Anthropomorphism involves attributing human traits to non-human entities, fostering engagement and reciprocity. The authors recognise that ‘by encouraging such types of engagements, developers can foster greater connection between people and systems, which increases user satisfaction, and plays an important role in systems becoming widely accepted and adopted. That is, developers are incentivised to engage with anthropomorphism to stimulate people to create deeper emotional connections with systems that cannot reciprocate.’ 5
The paper suggests that anthropomorphising is often automatic and prompted by interface cues rather than deliberate beliefs. Some research proposes that anthropomorphism is a default behaviour that is corrected as people acquire more knowledge about an object, since individuals tend to anchor their understanding to personal experiences.6 The tendency also has motivational determinants: the need for efficient interaction and a desire for human connection. Loneliness and attachment issues may increase the likelihood of anthropomorphising objects such as dialogue systems, which, the authors note, have been suggested as remedies for loneliness.7
Anthropomorphism emerges when interactive, language-using systems take on human roles. Characteristics that parallel human traits, such as personas, names and preferences, contribute to a system’s human-like appearance, collectively forming what the paper describes as the ‘face’ of anthropomorphism. The authors liken these linguistic elements and design choices to the ‘strokes in the anthropomorphic painting’ created by modern dialogue systems.
Linguistic factors
The design of conversational AI systems has tended to focus on visual elements, but the authors suggest that the linguistic factors influencing personification have been overlooked. Voice characteristics, including pitch, tone and prosody, play a significant role in attributing personhood to dialogue systems. Disfluencies, such as interruptions and hesitations (‘um’ and ‘er’), are integrated into text-to-speech systems, affecting user perception of system confidence. Accent and pronunciation features lead users to associate a socio-linguistic identity with synthetic voices, with potential implications for trust and transparency. The paper considers the inclusion of such linguistic elements in dialogue systems problematic, citing Google Duplex among others.8
The anthropomorphic challenges in dialogue systems arise from the blurring of lines between animate and inanimate entities, with outputs expressing preferences, opinions and even human-like needs, such as hunger. To maintain transparency, the authors regard a system’s responses to direct questions about its nature as crucial, particularly when considering regulatory requirements; issues arise in tracking dialogue context and handling follow-up queries accurately. Linguistic elements such as thought representation, sentience and agency contribute to anthropomorphic perceptions, which tend to align system outputs with human values. Anthropomorphism is also influenced by empathetic responses, claims of human-like abilities and pronoun use, especially first-person pronouns, which can suggest consciousness. The paper suggests that addressing these factors is essential to prevent confusion and to establish clear distinctions between human and machine capabilities.
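By way of illustration only (this sketch is not taken from the paper, and the function and pattern names are hypothetical assumptions), a developer might route direct questions about the system’s nature to a fixed, unambiguous disclosure rather than to model-generated text:

import re

# Hypothetical patterns covering direct questions about the system's nature.
IDENTITY_QUESTION = re.compile(
    r"are you (a |an )?(human|robot|machine|bot|real person)",
    re.IGNORECASE,
)

# A fixed, non-anthropomorphic disclosure used in place of a generated reply.
DISCLOSURE = "This is an automated dialogue system, not a person."

def respond(user_utterance, generate_reply):
    """Return the fixed disclosure for identity questions; otherwise defer to the model."""
    if IDENTITY_QUESTION.search(user_utterance):
        return DISCLOSURE
    return generate_reply(user_utterance)

print(respond("Are you a real person?", lambda text: "(model-generated reply)"))

A hard-coded response of this kind keeps the disclosure out of the hands of the generative model, although, as the authors note, follow-up queries that refer back to the question still require the dialogue context to be tracked accurately.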
Avoiding human identity
To mitigate anthropomorphism in automated system outputs, the authors suggest prioritising functional styles of language over social features, avoiding unnecessary phatic expressions such as pleasantries. Expressions of confidence and doubt also play a role, with the ‘imposter effect’ observed when people overestimate the factual accuracy of generated output. Training dialogue systems to reflect uncertainty by incorporating hedging phrases can counteract this effect, but may itself enhance anthropomorphic signals. Personas, often based on human attributes, contribute to anthropomorphism, and the paper recommends that efforts be made to prevent systems from appearing to have a human identity.
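As a minimal sketch of what prioritising a functional style might look like in practice (the rewrite rules and function name below are illustrative assumptions, not drawn from the paper), anthropomorphic first-person framings can be replaced with functional, hedged wording before the output is shown to the user:

# Illustrative mapping from anthropomorphic self-reference to functional, hedged phrasing.
REWRITES = {
    "I think that": "It appears that",
    "I believe": "The available information suggests",
    "I feel that": "The output indicates that",
    "I know that": "According to the data,",
}

def functionalise(text):
    """Replace anthropomorphic first-person phrases with functional alternatives."""
    for phrase, replacement in REWRITES.items():
        text = text.replace(phrase, replacement)
    return text

print(functionalise("I think that the flight is delayed."))
# prints: It appears that the flight is delayed.

As the authors caution, hedged wording of this kind can counteract the ‘imposter effect’ but, if phrased conversationally, may itself strengthen anthropomorphic signals.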
The roles assigned to dialogue systems by designers and users can shift them from tools to human-like companions, with implications for user interactions. Many systems are designed with subservient roles, leading to instances of verbal abuse from users. Systems can also present unqualified expertise, providing high-risk diagnoses or treatment plans on the basis of biased training data. The use of anthropomorphic terminology such as ‘know’, ‘think’ and ‘intelligence’ contributes to inaccurate perceptions of a system’s capabilities, highlighting the importance of terminology in shaping user understanding.
The consequences of anthropomorphism – norms, roles and stereotypes
The paper is clear that anthropomorphism of dialogue systems can lead to adverse societal consequences. These include generating unreliable information and reinforcing social roles, language norms and stereotypes. Trust and deception issues arise when users are unaware that they are interacting with automated systems, especially among vulnerable populations such as the young, the elderly, or those with illnesses or disabilities. Gendering machines, even without explicit gender markers, can perpetuate stereotypes, with implications for representation and diversity. Natural language processing technologies have historically centred on the dialects of white, affluent Americans, which may push users to code-switch and conform to recognised language norms, contributing to the erasure of marginalised communities. The personification of systems, particularly those centred on Western notions of acceptability, can exacerbate these issues, limiting diverse language data and reinforcing standardised linguistic representations.
Intelligence vs. human-like behaviour
The paper provides broad recommendations for dialogue system design, focusing on recognising and managing the tendency of users to personify these systems. It suggests that developers should be aware of the inherent human inclination to attribute meaning to signals, advising against unnecessary integration of anthropomorphic linguistic cues that may lead users to attribute human-like cognitive abilities to systems. The authors stress that the appropriateness of anthropomorphic features depends on context and use case, urging developers to avoid over-integrating misleading cues. Research goals should emphasise the distinction between intelligence and human-like behaviour in AI systems. The authors advise against embedding human-like personality traits in dialogue systems and against the use of anthropomorphic language in system descriptions, citing potential public confusion and the impact of language on understanding and behaviour.
A guide for developers – and policymakers
The paper’s authors acknowledge the attractions of anthropomorphism as a means of enhancing user engagement. Their principal targets are ‘highly anthropomorphised systems’, and they highlight the potential downstream harms, such as misplaced trust in the misinformation such systems can generate. They point out that, even when attempts are made specifically to avoid anthropomorphic signals in dialogue systems, users may still personify them. They also stress the importance of careful consideration of how systems might be perceived anthropomorphically, advocating the deliberate selection of appropriate features for each context. They point to a list of linguistic factors influencing anthropomorphism, but emphasise that these factors vary among individuals. They also recognise the paper’s focus on English-language systems and call for features specific to other languages to be considered. The authors are keen to highlight the paper’s potential use as a guide for those creating anthropomorphic systems; their aim, they say, is to help developers address concerns related to anthropomorphism in the light of advanced systems such as ChatGPT. These concerns may also fall within the purview of policymakers as they consider the next steps towards AI guidance and regulation.
The authors conclude: ‘by carefully considering how a system may be anthropomorphised and deliberately selecting the attributes that are appropriate for each context, developers and designers can avoid falling into the trap of creating mirages of humanity.’