Introduction

Recently, Artificial Intelligence (AI) has been actively applied in a variety of educational scenarios, creating numerous promising opportunities for educational innovation (Cheng et al. 2020; Hwang et al. 2020; Wang et al. 2020). Meanwhile, as AI in education is regarded as a “highly technology-dependent and cross-disciplinary field” (Hwang et al. 2020, p. 1), most practitioners (including teachers) do not know how AI functions and cannot make full use of AI in education (Celik et al. 2022; Chounta et al. 2021; Chiu and Chai 2020; Hwang et al. 2020). Indeed, developing and implementing effective AI-based learning activities has proven very challenging in practice (Hwang et al. 2020). This highlights the critical importance of enhancing teachers’ professional learning regarding AI. However, as Lindner and Berges (2020) have noted, there is a dearth of investigations focusing on K-12 teacher education in the field of AI. Furthermore, a recent study showed that many K-12 teachers in China were anxious about the complex algorithms and code underlying AI and reluctant to learn it (Li and Gu 2021). It is therefore necessary to investigate K-12 teachers’ intentions to learn AI and the related determinants so as to promote their AI learning, which is a prerequisite for effective teaching with AI.

The Theory of Reasoned Action (TRA) (Fishbein and Ajzen 1975), the Theory of Planned Behavior (TPB) (Ajzen 1985, 1991) and the Technology Acceptance Model (TAM) (Davis 1989; Davis et al. 1989) are widely used to predict people’s behavioral intentions to use new technologies. However, these models have seldom been applied in the domain of AI education (Nazaretsky et al. 2022), and the construct of behavioral intentions to learn AI has scarcely been explored (Chai et al. 2021), especially among teachers. Furthermore, when explaining individuals’ behavioral intentions, the traditional theories (i.e., TRA, TPB, TAM) neglect certain perspectives (e.g., Akman and Turhan 2017; Scherer and Teo 2019; Ursavaş et al. 2019; Zhao et al. 2021), including literacy (Teeroovengadum et al. 2017; Nazaretsky et al. 2022) and ethical awareness (Akman and Turhan 2017). In the context of AI education, these two perspectives are highly valued. Firstly, AI literacy has been considered an indispensable capability that everybody needs in the AI-powered world of the twenty-first century (Ng et al. 2021, 2022). It is argued that individuals with higher levels of AI literacy are less likely to be fearful of AI applications (Chai et al. 2021). As AI literacy is frequently emphasized in K-12 schools (Chai et al. 2021; Ng et al. 2021, 2022), it is meaningful to take AI literacy into account when predicting teachers’ and students’ behavioral intentions to learn or use AI. Secondly, due to the novelty and complexity of AI, the risk of using it has become a pressing issue, and numerous global institutions have thus called for attention to AI ethics (Borenstein and Howard 2021; Lin et al. 2021; Qin et al. 2020; Richards and Dignum 2019; Shih et al. 2021). If individuals do not trust existing AI ethics guidelines, they will not be eager to learn or use AI (Qin et al. 2020). Akman and Turhan (2017) highlighted that exploring the complex relationship between people’s ethical concerns and their behavioral intentions can help explain their decision-making process when learning and using new technologies. As AI ethics is being incorporated into K-12 teaching (Lin et al. 2021; Shih et al. 2021), it is meaningful to examine the often-neglected perspective of ethical awareness when assessing K-12 teachers’ behavioral intentions to learn AI.

Motivated by these gaps, we aim to explore the antecedents of K-12 teachers’ intentions to learn AI. To this end, we propose and validate a model that integrates AI literacy and awareness of AI ethics with TRA and TPB. Our larger goals include using our findings as a foundation for further investigation in the area of K-12 teachers’ AI learning, which is still in its infancy, and for the future design and implementation of professional teacher programs focusing on AI.

Literature review and hypotheses development

Behavioral intentions to learn AI

TRA, proposed by Fishbein and Ajzen (1975), posits that people’s actual behavior is accurately and immediately determined by their behavioral intentions to perform that behavior. The central factor of TRA, behavioral intentions, is defined as people’s belief about their future willingness to perform a certain action (Fishbein and Ajzen 1975). Ajzen (1991) further explained that behavioral intentions are “indications of how hard people are willing to try, of how much of an effort they are planning to exert, in order to perform the behavior” (p. 181). Nowadays, the construct of behavioral intentions is widely used in various fields (e.g., Bin et al. 2020; Davis 1989; Davis et al. 1989; Kyndt et al. 2011; LaCaille 2013). For instance, Davis (1989) and Davis et al. (1989) proposed in the TAM that a user’s behavioral intentions to use a new system directly determine his or her adoption of the system. Similarly, the variable intentions to learn is recognized as “the proximal determinant of participation in learning activities” (Kyndt et al. 2011, p. 214).

As AI is a new and rapidly advancing set of technologies, many teachers feel that they do not have enough knowledge and skills to use it, let alone teach it well in practice (Celik et al. 2022; Chounta et al. 2021; Chiu and Chai 2020; Hwang et al. 2020). To address this issue and equip teachers with the required knowledge and skills, it is important to increase teachers’ readiness to learn AI. However, as Chai et al. (2021) claimed, the factor of behavioral intentions to learn, undergirded by TRA, has yet to be fully discussed and thoroughly investigated in AI education. This study operationally conceptualizes behavioral intentions to learn AI as people’s belief about their future willingness to learn AI. In this study, the term describes K–12 teachers’ belief about their future willingness to learn what constitutes AI and how to apply AI in their teaching (Chai et al. 2021). Teachers with higher behavioral intentions to learn AI are more likely to engage in different kinds of professional learning activities involving AI knowledge and skills.

Perceptions of the use of AI for social good (PAIS)

The initial TRA suggests that people’s attitudes towards a certain behavior can significantly predict their related behavioral intentions (Fishbein and Ajzen 1975). Specifically, attitudes toward a behavior refer to “the degree to which a person has a favorable or unfavorable evaluation or appraisal of the behavior in question” (Ajzen 1991, p. 188). Such attitudes are generally formed when people judge the outcomes of the behavior (Fishbein and Ajzen 1975). People hold positive attitudes toward a behavior if they consider its outcomes beneficial (Fishbein and Ajzen 1975).

The fair use of AI can produce a range of outcomes that may benefit not only the users themselves but also society. Recently, there have been growing calls for the application of AI in the domain of social good (Cowls et al. 2021; Floridi et al. 2021; Tomašev et al. 2020). The term AI for social good has been introduced to describe the practice of “leveraging AI technologies to deliver socially beneficial outcomes” (Cowls et al. 2021, p. 111). Educational researchers have recommended that the idea of AI for social good be incorporated into the K-12 school curriculum (e.g., Chiu and Chai 2020; Lin and Van Brummelen 2021). In doing so, teachers and students can develop positive attitudes towards AI learning as they realize that using AI can greatly benefit others and society (Chai et al. 2021). Indeed, Chai et al. (2021) claimed that PAIS is one important but often-neglected facet of attitudes towards AI learning. Furthermore, considering the impact of attitudes on behavioral intentions (Fishbein and Ajzen 1975), it can be assumed that if people perceive the benefit of the use of AI to society, they will be extrinsically motivated and have strong behavioral intentions to learn AI. However, to the best of our knowledge, this relationship has never been verified among teachers. We hypothesize:

H1: K-12 teachers’ PAIS will directly influence their behavioral intentions to learn AI.

Self-efficacy in learning AI

Ajzen (1985, 1991) added perceived behavioral control to TRA and proposed the TPB. TPB adds that perceived behavioral control is also an important determinant of behavioral intentions (Ajzen 1985, 1991); it describes “people’s perception of the ease or difficulty of performing the behavior of interest” (Ajzen 1991, p. 183). In particular, Ajzen (1991) pointed out that perceived behavioral control “is most compatible with Bandura’s (1977, 1982) concept of perceived self-efficacy” (p. 184). In his social cognitive theory, Bandura (1982) proposed that self-efficacy “is concerned with judgments of how well one can execute courses of action required to deal with prospective situations” (p. 122).

Previous studies have confirmed the impact of self-efficacy on behavioral intentions to learn (e.g., Evans et al. 2020; Lin et al. 2018; Kumar et al. 2020). For instance, Kumar et al. (2020) substantiated the direct influence of mobile learning self-efficacy on mobile learning intentions. Based on TPB, this study operationally conceptualizes teachers’ self-efficacy in learning AI as their perception of the ease or difficulty of learning and understanding the basic knowledge or concepts of AI, and further hypothesizes:

H2: K-12 teachers’ self-efficacy in learning AI will directly influence their behavioral intentions to learn AI.

Several prior studies have indicated that the two predictors of behavioral intentions derived from TPB, namely attitudes towards the behavior and perceived behavioral control (i.e., self-efficacy), are significantly correlated (e.g., Coban and Atasoy 2019; Kao et al. 2020; Yada et al. 2018). For instance, teachers’ attitudes towards inclusive education were found to be significantly influenced by their self-efficacy in the use of inclusive practices (Yada et al. 2018). However, as very few studies apply TPB in AI education (Chai et al. 2021), the relationship between self-efficacy in learning AI and attitudes towards the use of AI has seldom been examined, especially among teachers. Considering that PAIS is one important facet of attitudes towards the use of AI, we hypothesize:

H3: K-12 teachers’ self-efficacy in learning AI will directly influence their PAIS.

AI literacy

Beyond TPB, Fishbein and Ajzen (2010) also noted that epistemic factors could be antecedents of attitudinal and control beliefs, which may consequently predict behavioral intentions. Specifically, epistemic factors usually describe people’s conceptions about knowledge or knowing in a certain domain or field (Hofer and Pintrich 1997). In AI education, AI literacy is a critical epistemic factor (Chai et al. 2021) that encapsulates people’s knowledge and understanding of AI concepts and applications (Chai et al. 2021; Lin et al. 2021; Ng et al. 2021). Chai et al. (2021) defined the AI literate as people who “know what constitutes AI and know how to apply AI to different problems” (p. 90). Long and Magerko (2020) provided a more comprehensive definition of AI literacy: “a set of competencies that enables individuals to critically evaluate AI technologies; communicate and collaborate effectively with AI; and use AI as a tool online, at home, and in the workplace” (p. 598). Although AI episteme is not explicitly emphasized in this definition, Long and Magerko (2020) noted that literacy is historically associated with people’s access to knowledge and suggested that knowledge of AI is an important component of AI literacy.

Previous studies have verified the impact of literacy on attitudes towards a certain behavior (e.g., Jan 2018; Nam and Park 2016) and self-efficacy (e.g., Khan and Idris 2019; Prior et al. 2016). Nevertheless, to the best of our knowledge, such effects have never been explored in research on teachers’ AI learning. Based on Fishbein and Ajzen’s (2010) notes, we hypothesize:

H4: K-12 teachers’ AI literacy will directly influence their PAIS.

H5: K-12 teachers’ AI literacy will directly influence their self-efficacy in learning AI.

Furthermore, considering the hypothetical impact of K-12 teachers’ AI literacy on their PAIS, self-efficacy and behavioral intentions to learn AI, the following indirect effects are formulated:

H6: K-12 teachers’ AI literacy will indirectly influence their behavioral intentions to learn AI mediated by PAIS.

H7: K-12 teachers’ AI literacy will indirectly influence their behavioral intentions to learn AI mediated by self-efficacy.

Awareness of AI ethics

When it comes to the appropriate learning and use of AI, ethics is a critical issue that can never be ignored (Borenstein and Howard 2021; Lin et al. 2021; Qin et al. 2020; Richards and Dignum 2019; Shih et al. 2021). Indeed, the uncertainty and risk of AI have aroused wide public concern (Jobin et al. 2019; Qin et al. 2020). In response, a large number of ethical principles have been developed to promote the proper understanding and use of AI (Jobin et al. 2019; Richards and Dignum 2019). Among them, transparency, responsibility, justice and sustainability are four widely emphasized core AI ethical principles (Lin et al. 2021).

The term awareness describes people’s attention, concern (mindful or heedful) and sensitivity regarding a certain issue or action (Sudarmadi et al. 2001). Lin et al. (2021) and Shih et al. (2021) pointed out that there is a strong positive link between awareness of AI ethics and AI literacy. Indeed, according to Long and Magerko’s (2020) definition, individuals with AI literacy are able to critically evaluate AI. They may therefore pay close attention to and be concerned about the risks of AI, and thus become aware of AI ethical issues. Additionally, as the AI literate usually have a good knowledge and understanding of AI (Chai et al. 2021; Lin et al. 2021; Ng et al. 2021), they can also recognize the potential risks, limitations and uncertainties of AI, and thus realize its ethical implications. However, the direct impact of AI literacy on awareness of AI ethics has rarely been confirmed among K-12 teachers. We hypothesize:

H8: K-12 teachers’ AI literacy will directly influence their awareness of AI ethics.

Awareness can play a vital role in attitude formation (Potas et al. 2022; Shuhaiber and Mashal 2019; Sweldens et al. 2014). For instance, in the educational technology field, Potas et al. (2022) found that adolescents’ awareness of technology addiction directly affected their attitudes towards it. Hence, it is reasonable to assume that individuals’ awareness of AI ethics may influence their attitudes towards the use of AI. However, to the best of our knowledge, this effect has never been confirmed. Considering that PAIS is one of the most important facets of attitudes towards the use of AI (Chai et al. 2021), we hypothesize:

H9: K-12 teachers’ awareness of AI ethics will directly influence their PAIS.

Furthermore, considering the hypothesized impact of AI literacy on awareness of AI ethics, PAIS and behavioral intentions to learn AI, the following indirect effects are formulated:

H10: K-12 teachers’ AI literacy will indirectly influence their PAIS mediated by awareness of AI ethics.

H11: K-12 teachers’ awareness of AI ethics will indirectly influence their behavioral intentions to learn AI mediated by PAIS.

Based on the aforementioned justifications, the conceptual research model is proposed (see Fig. 1).

Fig. 1: The conceptual research model.
figure 1

Note. BI behavioral intentions to learn AI, PAIS perceptions of the use of AI for social good, SE self-efficacy in learning AI, AIL AI literacy, AAIE awareness of AI ethics.

Method

Participants and procedure

A total of 318 K-12 teachers from sixteen provinces or municipalities in China participated in our study voluntarily. Table 1 shows the profile of the participant teachers. Regarding the recruitment process, we first targeted a sample size of 300 based on Hu and Bentler’s (1999) recommendations for structural equation modeling. We then randomly selected around fifty K-12 partner schools in different regions of China. Next, we randomly selected around eight teachers in each partner school and sent them an open recruitment letter and an online questionnaire link by email. Teachers interested in our study could complete the questionnaire on mobile phones or computers. We received 339 responses, 21 of which were removed due to incompleteness. Ethics approval was obtained before the questionnaires were distributed.

Table 1 Profile of the participant teachers.

All teachers in our K–12 partner schools used an AI-based product named Zhixue, developed by iFLYTEK, in their teaching. This product has three main functions. Firstly, it automatically records and analyzes teachers’ teaching language and behaviors, helping teachers assess and improve their teaching performance. Secondly, it automatically generates teaching materials and resources, helping teachers complete their lesson plans. Thirdly, it assists teachers in evaluating students’ coursework. It is currently one of the most widely used AI-based products in Chinese K–12 schools.

Data collection tool

The scales for the four sub-dimensions (i.e., transparency, responsibility, justice, sustainability) of awareness of AI ethics were adapted from Lin et al. (2021) and Shih et al. (2021). Each sub-dimension was assessed by three items. The scales of AI literacy, self-efficacy in learning AI, PAIS and behavioral intentions to learn AI were adapted from Chai et al. (2021) and contained four, four, five and four items, respectively. After modifying some statements of the original scales to suit our teacher participants, we followed a forward- and back-translation procedure to develop the Chinese version, consulting language experts throughout the translation process. A pilot test was then conducted, and some items were revised according to the results of an exploratory factor analysis. Lastly, we finalized the formal questionnaire. All items used a seven-point Likert scale, where 1 represented strongly disagree and 7 strongly agree. The validity and reliability of the formal questionnaire are presented in the Results section.

Data analysis

Following Anderson and Gerbing’s (1988) guidelines, a two-step structural equation modeling analysis was performed using AMOS 21. First, we conducted a confirmatory factor analysis (CFA) to validate our measurement model; the validation process is reported in the Results section. Second, we estimated the structural model to test the research hypotheses and to detect the direct and indirect effects of AI literacy on the other constructs.
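
The analysis itself was run in AMOS 21, a commercial graphical tool. Purely as an illustrative sketch, the same two-step procedure could be approximated with the open-source Python package semopy; the item labels (AIL1–AIL4, SE1/SE3/SE4, SG1–SG4, BI2–BI4 and the twelve ethics indicators TR*/RE*/JU*/SU*) and the file name responses.csv are hypothetical placeholders rather than the actual variable names of our instrument.

    import pandas as pd
    from semopy import Model, calc_stats

    # Hypothetical item-level data; the column names below are illustrative only.
    data = pd.read_csv("responses.csv")

    # Step 1: measurement model (CFA), with AAIE as a second-order factor built on
    # transparency (TR), responsibility (RE), justice (JU) and sustainability (SU);
    # the three problematic items (SE2, SG5, BI1) are already excluded.
    measurement = """
    AIL  =~ AIL1 + AIL2 + AIL3 + AIL4
    SE   =~ SE1 + SE3 + SE4
    PAIS =~ SG1 + SG2 + SG3 + SG4
    BI   =~ BI2 + BI3 + BI4
    TR   =~ TR1 + TR2 + TR3
    RE   =~ RE1 + RE2 + RE3
    JU   =~ JU1 + JU2 + JU3
    SU   =~ SU1 + SU2 + SU3
    AAIE =~ TR + RE + JU + SU
    """
    cfa = Model(measurement)
    cfa.fit(data)
    print(calc_stats(cfa).T)      # chi2, df, CFI, TLI, RMSEA, etc.

    # Step 2: structural model with the hypothesized direct paths (H1-H5, H8, H9);
    # the indirect effects (H6, H7, H10, H11) follow from products of these paths.
    structural = measurement + """
    SE   ~ AIL
    AAIE ~ AIL
    PAIS ~ AIL + SE + AAIE
    BI   ~ PAIS + SE
    """
    sem = Model(structural)
    sem.fit(data)
    print(sem.inspect())          # unstandardized estimates, SEs and p values
    print(calc_stats(sem).T)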

Results

Because our conceptual research model contains four sub-dimensions of awareness of AI ethics, we first had to validate the first-order and second-order measurement models using CFA before integrating and testing all factors of the conceptual model. The CFA results were compared with the suggested fit statistics (Byrne 2010, p. 80), namely chi-square (χ2)/degrees of freedom (<5), root mean square error of approximation (RMSEA < 0.1), Tucker–Lewis index (TLI > 0.90) and comparative fit index (CFI > 0.90). Both the first-order measurement model, comprising the four factors of transparency, responsibility, justice and sustainability with 12 indicators (χ2 = 157.223; df = 48; p = 0.000; RMSEA = 0.085; CFI = 0.965; TLI = 0.952), and the second-order factor of awareness of AI ethics (χ2 = 176.537; df = 50; p = 0.000; RMSEA = 0.089; CFI = 0.960; TLI = 0.947) were valid. We then interrelated the second-order factor of awareness of AI ethics (AAIE) with the other factors, namely behavioral intentions to learn AI (BI), perceptions of the use of AI for social good (PAIS), self-efficacy in learning AI (SE) and AI literacy (AIL), as indicated in Fig. 2, and excluded three problematic items (SE2, SG5 and BI1) to validate the modified measurement model (χ2 = 856.382; df = 285; p = 0.000; RMSEA = 0.080; CFI = 0.927; TLI = 0.917) in terms of convergent and discriminant validity.
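
For reference, these indices are functions of the chi-square statistics of the hypothesized model (subscript M) and the baseline independence model (subscript B) and of the sample size N; the standard definitions (implementations vary slightly across software packages) are:

    \mathrm{RMSEA} = \sqrt{\frac{\max(\chi^2_M - df_M,\, 0)}{df_M\,(N-1)}}, \qquad
    \mathrm{CFI} = 1 - \frac{\max(\chi^2_M - df_M,\, 0)}{\max(\chi^2_B - df_B,\ \chi^2_M - df_M,\ 0)}, \qquad
    \mathrm{TLI} = \frac{\chi^2_B/df_B - \chi^2_M/df_M}{\chi^2_B/df_B - 1}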

Fig. 2: Revised measurement model.
figure 2

The figure presents the modified measurement model and its constructs.

Table 2 shows how the fit indices gradually reached the required values as highly correlated items were excluded one at a time.

Table 2 The three problematic items.

The double-headed arrows in Fig. 2 represent significant covariances among the factors, all of which are smaller than the square roots of the corresponding average variance extracted (AVE) values (Fornell and Larcker 1981). Additionally, the composite reliability (CR > 0.70) and AVE (>0.50) scores for all dimensions of the revised model exceeded Hair et al.’s (2010) recommended thresholds, with significant standardized loadings ranging from 0.73 to 0.95. Taken together, these results support the convergent and discriminant validity of our measurement model and allow us to test the hypothesized paths (see Tables 3 and 4).
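
For completeness, with λ_i denoting the standardized loading of item i on a construct measured by k items (so that the standardized error variance of item i is 1 − λ_i²), the reported statistics follow the usual formulas:

    \mathrm{AVE} = \frac{\sum_{i=1}^{k} \lambda_i^{2}}{k}, \qquad
    \mathrm{CR} = \frac{\left(\sum_{i=1}^{k} \lambda_i\right)^{2}}{\left(\sum_{i=1}^{k} \lambda_i\right)^{2} + \sum_{i=1}^{k}\left(1 - \lambda_i^{2}\right)}

The Fornell–Larcker criterion for discriminant validity then requires \sqrt{\mathrm{AVE}_j} > |r_{jk}| for every pair of constructs j ≠ k, i.e., each construct shares more variance with its own items than with any other construct.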

Table 3 The results of the discriminant validity.
Table 4 The results of the revised measurement model.

To estimate the conceptual research model, we tested the 11 hypotheses and found all of them to be supported based on the path coefficients (β), critical ratios (C.R. > 1.96) and p values. The structural model also showed an adequate fit (χ2 = 958.646; df = 366; p = 0.000; RMSEA = 0.071; CFI = 0.925; TLI = 0.917). Figure 3 confirms that K-12 teachers’ PAIS had a direct influence on their behavioral intentions (BI) to learn AI (β = 0.62, p = 0.000, C.R. = 9.037). K-12 teachers’ self-efficacy (SE) in learning AI had a direct influence on their behavioral intentions to learn AI (β = 0.29, p = 0.000, C.R. = 4.557) and PAIS (β = 0.55, p = 0.000, C.R. = 7.625). K-12 teachers’ AI literacy (AIL) had a direct influence on their PAIS (β = 0.18, p = 0.016, C.R. = 2.406), self-efficacy in learning AI (β = 0.77, p = 0.000, C.R. = 12.238) and awareness of AI ethics (β = 0.62, p = 0.000, C.R. = 10.063). K-12 teachers’ awareness of AI ethics (AAIE) also had a direct influence on their PAIS (β = 0.22, p = 0.000, C.R. = 4.249). Additionally, we tested the control variables of gender, school stage, age, school district, education background and major to examine their impacts on the constructs of the structural model. Among the control variables, only age (β = 0.24, p = 0.000, C.R. = 5.102) and major (β = −0.13, p = 0.006, C.R. = −2.733) had a significant impact on AAIE, while school district (β = 0.08, p = 0.029, C.R. = 2.177) had a significant impact on PAIS. Notably, with the control variables included, the proposed model remained valid and robust, and the hypothesized paths remained statistically significant. To produce a clear path diagram, the non-significant control variables were excluded from the structural model (see Fig. 3).
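
The critical ratio reported by AMOS is the unstandardized path estimate divided by its standard error and is evaluated as an approximate z statistic, so a path is significant at the 0.05 level (two-tailed) when its absolute value exceeds 1.96:

    \mathrm{C.R.} = \frac{\hat{b}}{SE(\hat{b})}, \qquad |\mathrm{C.R.}| > 1.96 \Rightarrow p < 0.05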

Fig. 3: The results of the structural model.
figure 3

The figure indicates the causal relationships among the constructs.

Furthermore, through the Sobel test (Sobel 1982), we found that K-12 teachers’ AI literacy had an indirect influence on their behavioral intentions to learn AI mediated by PAIS (z = 2.779; p = 0.002) and by self-efficacy (z = 4.066; p = 0.000). Teachers’ AI literacy also had an indirect influence on their PAIS mediated by awareness of AI ethics (z = 4.210; p = 0.000). Lastly, K-12 teachers’ awareness of AI ethics had an indirect influence on their behavioral intentions to learn AI mediated by PAIS (z = 3.835; p = 0.000). The accepted hypotheses and the explained variances of the mediating and endogenous variables are summarized in Table 5.
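
The Sobel (1982) test evaluates each indirect effect by dividing the product of the two constituent unstandardized path estimates by an approximate standard error of that product and referring the ratio to the standard normal distribution:

    z = \frac{\hat{a}\,\hat{b}}{\sqrt{\hat{b}^{2}\,SE_a^{2} + \hat{a}^{2}\,SE_b^{2}}}

where a is the path from the predictor to the mediator, b is the path from the mediator to the outcome, and SE_a and SE_b are their respective standard errors.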

Table 5 The summary of hypotheses.

Discussion

This study proposes an empirically based model for K-12 teachers that illustrates the power of AI literacy and exhibits the antecedents of behavioral intentions to learn AI. Our model draws on a mix of theoretical backgrounds, including: (1) Fishbein and Ajzen’s (1975) TRA, (2) Ajzen’s (1985, 1991) TPB, (3) Fishbein and Ajzen’s (2010) conceptualization regarding the impact of epistemic factors on attitudinal and control beliefs, (4) Lin et al.’s (2021) and Shih et al.’s (2021) conceptualization regarding the link between awareness of AI ethics and AI literacy, and (5) Potas et al.’s (2022), Shuhaiber and Mashal’s (2019) and Sweldens et al.’s (2014) conceptualization regarding the role of awareness in attitude formation. This study is among the first to integrate AI literacy and awareness of AI ethics with TRA and TPB to predict K-12 teachers’ behavioral intentions to learn AI. Our findings contribute theoretically and practically to the limited knowledge of K-12 teachers’ AI learning as follows.

Firstly, in light of TRA and TPB, which have seldom been used in the domain of AI education (Chai et al. 2021), this study confirms that K-12 teachers’ PAIS and self-efficacy in learning AI are direct determinants of their behavioral intentions to learn AI. Our findings extend prior research showing the impact of attitudes (e.g., Gjicali and Lipnevich 2021; Norwich and Duncan 1990; Zhu et al. 2020) and self-efficacy (e.g., Evans et al. 2020; Lin et al. 2018; Kumar et al. 2020) on behavioral intentions to learn to the context of K-12 teachers’ AI learning.

Secondly, for the first time, this study successfully incorporates two important perspectives (i.e., literacy and ethical awareness) into TRA and TPB. Our study is the first to articulate that AI literacy and awareness of AI ethics are two indirect predictors of behavioral intentions to learn AI. On the one hand, our findings extend to AI education previous studies indicating the impact of literacy on attitudes towards a certain behavior (e.g., Jan 2018; Nam and Park 2016) and on self-efficacy (e.g., Khan and Idris 2019; Prior et al. 2016). On the other hand, our findings confirm Lin et al.’s (2021) and Shih et al.’s (2021) standpoint that awareness of AI ethics and AI literacy are positively linked. More importantly, as there is some dispute about the role of awareness in attitude formation (Sweldens et al. 2014), our findings can help clarify the relationship between awareness and attitudes by detecting the direct effect of awareness of AI ethics on attitudes towards AI.

Thirdly, our findings elucidate that AI literacy has a direct impact on PAIS, self-efficacy in learning AI and awareness of AI ethics, and an indirect impact on behavioral intentions to learn AI. These effects have rarely been examined before among K-12 teachers, especially the indirect relationship between AI literacy and behavioral intentions to learn AI. Our model demonstrates the significance of AI literacy in K-12 teachers’ AI learning. Therefore, empowering teachers’ AI literacy should be a central element of their professional learning.

Fourthly, our study detects significant effects of K–12 teachers’ age and major on AAIE, and of school district on PAIS. On the one hand, our study supports Wilford and Wakunuma’s (2014) finding that age and major can affect individuals’ ethical awareness of technologies. As they pointed out, older people and information systems professionals tend to understand ethical issues better than younger people and professionals in other fields, respectively (Wilford and Wakunuma 2014). On the other hand, the impact of K–12 teachers’ school district on PAIS has never been reported before. As Tena-Meza et al. (2022) pointed out, many people in marginalized, low-income and rural communities do not have access to AI technologies. Urban teachers therefore have more opportunities than rural teachers to witness and experience the socially beneficial outcomes that AI technologies deliver, which may explain the significant difference between urban and rural teachers’ PAIS.

This study provides insights into the practical design and implementation of professional teacher programs focusing on AI. To enhance K-12 teachers’ behavioral intentions to learn AI, professional teacher programs should help eliminate teachers’ anxiety about the complexity and uncertainty of AI and enhance their self-efficacy in learning AI. Such programs can also help teachers understand the idea of AI for social good and encourage them to make full use of AI to help students and others. In addition, we recommend that professional teacher programs include AI ethics as part of their core content, so that teachers fully realize and understand AI ethical principles (e.g., transparency, responsibility, justice and sustainability). Moreover, the significance of AI literacy cannot be overemphasized: in response to the coming AI era, teachers have to thoroughly know and understand the basic concepts and knowledge of AI through their professional learning. Only in this way can they have a fruitful and successful teaching career in the AI-powered educational world.

Conclusion

This study represents one of the earliest attempts to empirically examine the power of AI literacy and explore the determinants of behavioral intentions to learn AI among K-12 teachers. Our findings demonstrate that K-12 teachers’ AI literacy directly influences their PAIS and self-efficacy in learning AI, which are the immediate antecedents of behavioral intentions to learn AI. Meanwhile, K-12 teachers’ AI literacy also directly impacts their awareness of AI ethics, which in turn influences PAIS and thereby behavioral intentions to learn AI. In summary, this study shows that PAIS and self-efficacy in learning AI are two direct determinants of behavioral intentions to learn AI, while awareness of AI ethics and AI literacy are two indirect ones. Notably, AI literacy, as the only exogenous variable in the model, has a direct impact on the three mediating variables (i.e., PAIS, self-efficacy in learning AI and awareness of AI ethics) and an indirect impact on the endogenous variable (behavioral intentions to learn AI). Most importantly, 75% of the variance in K-12 teachers’ behavioral intentions to learn AI can be accounted for by the four predictive variables, indicating the strong explanatory power of our model.

For the first time, this study successfully incorporates two important perspectives (i.e., literacy and ethical awareness) into TRA and TPB, and thus expands both theories. Moreover, this study theoretically contributes to the nascent field of K–12 teachers’ AI learning in the following ways. Firstly, our study is the first to articulate that K–12 teachers’ AI literacy and awareness of AI ethics are two indirect predictors of their behavioral intentions to learn AI. Secondly, our study detects the direct effect of K–12 teachers’ awareness of AI ethics on attitudes towards AI; notably, there is some dispute about the impact of awareness on attitudes in the existing literature (Sweldens et al. 2014). Thirdly, our model empirically shows the significance of AI literacy in K–12 teachers’ AI learning by revealing the direct impact of AI literacy on PAIS, self-efficacy in learning AI and awareness of AI ethics, and its indirect impact on behavioral intentions to learn AI. These effects have rarely been examined before among K–12 teachers. Fourthly, our study detects significant effects of K–12 teachers’ age and major on AAIE, and of school district on PAIS; such effects have seldom been reported before. Last but not least, our study contributes to the understanding of the antecedents of K–12 teachers’ behavioral intentions to learn AI, a construct that has yet to be thoroughly investigated in the literature on AI education (Chai et al. 2021).

Finally, it is necessary to acknowledge four limitations. First of all, due to limited time and funding, a few regions in China were not covered in our study, so a certain amount of caution is needed when generalizing our findings. Moreover, considering the huge development gap between coastal and inland provinces in China (Jiang et al. 2021; Jiang et al. 2024), it should be acknowledged that teachers in different regions may have different AI resources, knowledge and learning opportunities. In the future, regional comparisons are needed to understand differences in behavioral intentions to learn AI among teachers in different regions. Secondly, this is a quantitative study that relies on teachers’ self-reported data. Self-reported data are widely used in educational studies although they may be subjective (Fryer and Dinsmore 2020), and prior studies have claimed that self-report may be the only viable way to explore individuals’ self-efficacy (Fryer and Dinsmore 2020; Zimmerman 2000); we therefore collected only teachers’ self-reported data. We recognize that the use of subjective data is a limitation of our study, and future studies can collect data from more sources to increase the robustness of our findings. Thirdly, although our model has shown strong explanatory power, this does not mean that it cannot be further improved. Based on our model, future studies can include more perspectives (e.g., subjective norms) to further enhance its explanatory power. Fourthly, three items (i.e., SE2, SG5 and BI1) from the original instrument could not be validated in our study. According to Wolf et al. (2021), the validity of items may vary across cultural contexts because people in different cultural contexts may understand the meaning of the items differently. Nevertheless, the actual reasons why these three items could not be validated need further investigation.