Thesis title: Unlocking the potential of Artificial Intelligence: barriers and barriers’ inhibitors regarding the adoption of artificial intelligence-enabled products by consumers
In the past decade, euphoric projections foresaw mainstream adoption of artificial intelligence-enabled products, also known as smart objects (SOs) (e.g., autonomous vehicles, smart robots, virtual assistants, conversational platforms), and the connection of an estimated 50 billion devices to the Internet. But the reality of 2020 has not lived up to these predictions. Indeed, only 20 billion of the predicted 50 billion connected devices are actually in use (Kranz, 2019), suggesting that “the road to mass adoption of the smart home will likely be a long and bumpy one” (Morrissey, 2019).
The key factors preventing consumers from adopting AI-enabled products relate to both technological features (e.g., use complexity, value offered, risk, object intrusiveness; Hubert et al., 2019; Johnson, Kiser, Washington, and Torres, 2018; Laukkanen, 2016; Lee and Coughlin, 2015; Mani and Chouk, 2017, 2018; Ram and Sheth, 1989) and consumers’ individual characteristics (e.g., data collection concerns, the desire to avoid becoming dependent on AI-enabled products, low usage self-efficacy; Hsu and Lin, 2016; Johnson et al., 2018; Lee and Coughlin, 2015; Mani and Chouk, 2017, 2018; Ram and Sheth, 1989). Notably, data collection concerns (DCC) (Malhotra, Kim, and Agarwal, 2004; Smith, Milberg, and Burke, 1996) appear particularly influential. According to a recent survey, 87% of sales delays stem from consumers’ privacy concerns (Cisco, 2019). DCC thus appears to hinder purchase intentions and constitutes a crucial concern for consumers.
Despite the relevance that several market and consulting reports ascribe to DCC, no empirical evidence so far has shown the actual role this barrier plays in preventing consumers’ adoption of AI-enabled products. Therefore, Paper 1 is an exploratory study adopting a mixed-method approach. The paper aims to identify the barriers specific to the adoption of these devices and shows that DCC is the most important barrier discriminating between adopters and non-adopters of AI-enabled products. As a consequence, mainstream diffusion of AI-enabled products will require reductions in DCC, or else business efforts to neutralize or mitigate DCC’s adverse effects.
Some studies suggest that granting consumers control over their personal data management (i.e., control) is key to inhibiting DCC (e.g., Malhotra, Kim, and Agarwal, 2004; Xu, Teo, Tan, and Agarwal, 2012), and some companies (e.g., Apple, Facebook) have implemented policies that help consumers see which information is being collected and decide whether to allow such collection or when to remove their information from a company’s database. Beyond this direct effect of control on DCC, control also seems likely to interact with other relevant inhibitors, though these combined effects have surprisingly received little attention (Aguirre, Mahr, Grewal, de Ruyter, and Wetzels, 2015; Martin, Borah, and Palmatier, 2017; Martin and Murphy, 2017). For example, the quality of the information that companies provide to explain how their AI-enabled products’ algorithms process personal data to produce a certain outcome (i.e., information detail) is critical in AI contexts (Mastercard, 2020; Puntoni, Reczek, Giesler, and Botti, forthcoming; Rai, 2020), because consumers generally lack sophisticated knowledge about how the algorithms work, which strongly feeds their DCC (Cisco, 2019). Providing consumers with detailed information about the functioning of AI-enabled products’ algorithms should decrease DCC, both alone and in interaction with control. Therefore, Paper 2, using a quantitative experimental approach, investigates whether, through which pathways, and in which settings the provision of detailed information combines with control to reduce DCC in an AI-enabled product domain.
All the barriers we consider in Paper 1 and Paper 2 stem from either AI-enabled products’ technical capacities (e.g., connectivity, ubiquity, and smartness) or consumers’ personal traits. AI-enabled product capacities depend both on technical abilities (e.g., Internet connectivity, sensors, AI systems) and on human-like characteristics (e.g., the use of natural language to communicate with the user, humanized names and genders, and the ability to interact with the user in real time). The human-like characteristics enhance the AI-enabled product’s social aspect, making its social presence more salient (McLean & Osei-Frimpong, 2019). This social aspect has affected how consumers interact with AI-enabled products. Indeed, consumer-smart object interactions have acquired meanings that extend beyond the utilitarian benefits stemming from AI-enabled products’ technical capacities, as their characteristics are enriched by social aspects. The motivations leading consumers to use AI-enabled products now involve this social aspect: people use these devices for social purposes such as conversation (Ammari, Kaye, Tsai, & Bentley, 2019) or having company (Gao, Pan, Wang, & Chen, 2018). These characteristics lead consumers to view AI-enabled products as potential partners in a relationship that resembles social and interpersonal relationships (Gao et al., 2018; Hoffman & Novak, 2018; Novak & Hoffman, 2019). People may compare AI-enabled products to their romantic companions (Gao et al., 2018) and attribute social roles to them (e.g., partner, master, or servant; Schweitzer et al., 2019). Just as this evolution of consumer-smart object interaction has implications for consumer behavior (e.g., changes in the occasions of use and the types of tasks performed), it also has implications for consumers’ resistance to the adoption of AI-enabled products. However, consumer resistance frameworks have so far not included this relational perspective among the barriers that can explain the phenomenon. To fully understand why a consumer rejects an AI-enabled product, it is necessary to account for reasons that may stem from the consumer’s perception of the AI-enabled product as a potential partner in a relationship. Therefore, Paper 3 uses a qualitative methodology to identify relational barriers as a new category of barriers to consumers’ adoption of AI-enabled products.
In conclusion, this PhD dissertation focuses on the barriers to the adoption of AI-enabled products and on the identification of factors that can inhibit the effect of such barriers on adoption, from a consumer behavior perspective. More specifically, the goal of this dissertation is threefold: first, to advance knowledge about the barriers to consumers’ adoption of AI-enabled products and the factors that can inhibit these barriers; second, to provide useful suggestions and solutions that managers could implement to reduce the main barrier to consumers’ adoption (i.e., DCC); third, to provide evidence of the existence of a new category of barriers: the relational barriers. Despite its limitations, this PhD dissertation aims to encourage future research building on consumer resistance to AI-enabled products.