AI in education
Technology has impacted almost every sector, though its adoption understandably takes time (Leeming, 2021). From telecommunications and health to education, it plays a significant role and assists humanity in one way or another (Stahl A., 2021a, 2021b). No one can deny its importance and its applications in daily life, which provide a solid reason for its existence and development. One of the most critical technologies is artificial intelligence (AI) (Ross, 2021). AI has applications in many sectors, and education is one of them. AI applications in education include tutoring, educational assistance, feedback, social robots, admissions, grading, analytics, trial and error, virtual reality, and more (Tahiru, 2021).
Because AI is based on computer programming and computational approaches, questions can be raised about how data are analyzed, interpreted, shared, and processed (Holmes et al., 2019), and about how biases should be prevented, since design biases are believed to grow over time, may impact students' rights, and raise concerns associated with gender, race, age, income inequality, social status, and so on (Tarran, 2018). Like any other technology, AI also faces challenges in its application to education and learning. This paper focuses on the ethical concerns of AI in education. Some problems relate to privacy, data access, responsibility for right and wrong decisions, and student records, to name a few (Petousi and Sifaki, 2020). In addition, data hacking and manipulation can threaten personal privacy and control, so a clear understanding of ethical guidelines is needed (Fjelland, 2020).
Perhaps the most important ethical guidelines for developing educational AI systems concern well-being, workplace safety, trustworthiness, fairness, respect for intellectual property rights, privacy, and confidentiality. In addition, the following ten principles have been framed (Aiken and Epstein, 2000).
1. Ensure encouragement of the user.
2. Ensure safe human–machine interaction and collaborative learning.
3. Ensure positive character traits.
4. Avoid information overload.
5. Build an encouraging and curious learning environment.
6. Consider ergonomic features.
7. Ensure the system promotes the roles and skills of a teacher and never replaces the teacher.
8. Respect cultural values.
9. Ensure accommodation of student diversity.
10. Avoid glorifying the system and weakening the human role and potential for growth and learning.
If the above principles are considered individually, many questions arise about using AI technology in education. Ethical concerns emerge at every stage, from design and planning to use and impact, regardless of the purpose for which the AI technology was developed. A technology can be advantageous for one purpose yet dangerous for another, and the problem is how to disentangle the two (Vincent and van, 2022).
Moreover, when a proper framework and principles are not followed during the planning and development of AI for education, bias, overconfidence, and wrong estimates become additional sources of ethical concern.
Security and privacy issues
Stephen Hawking once said that success in creating AI would be the most significant event in human history; unfortunately, it might also be the last unless we learn to avoid the risks. Security is one of the major concerns associated with AI and learning (Köbis and Mehner, 2021), and trustworthy AI in education brings both promises and challenges (Petousi and Sifaki, 2020; Owoc et al., 2021). Most educational institutions nowadays use AI technology in the learning process, and the area has attracted considerable research interest. Many researchers agree that AI contributes significantly to e-learning and education (Nawaz et al., 2020; Ahmed and Nashat, 2020), a claim practically demonstrated during the recent COVID-19 pandemic (Torda, 2020; Cavus et al., 2021). But AI and machine learning have also brought many concerns and challenges to the education sector, with security and privacy among the biggest.
No one can deny that AI systems and applications are becoming part of classrooms and education in one form or another (Sayantani, 2021). Each tool works in its own way, and students and teachers use it accordingly. AI creates an immersive learning experience, using voice to access information, and thereby invites potential privacy and security risks (Gocen and Aydemir, 2020). When asked about privacy concerns, respondents identify student safety as the number one concern surrounding AI devices and their usage; the same may apply to teachers.
Additionally, teachers often know little about privacy and security rights, acts, and laws, about their impact and consequences, and about what a violation costs students, teachers, and the country (Vadapalli, 2021). Machine learning and AI systems depend entirely on data availability. Without data they are nothing, and the risk of data misuse and leaks for malicious purposes is unavoidable (Hübner, 2021).
AI systems collect and use enormous amounts of data to make predictions and identify patterns, so there is a chance of bias and discrimination (Weyerer and Langer, 2019). Many people are now concerned with the ethical attributes of AI systems and believe that security must be considered during AI system development and deployment (Samtani et al., 2021). The Facebook-Cambridge Analytica scandal is one significant example of how data collected through technology is vulnerable to privacy abuses. Although much work has been done, as the National Science Foundation recognizes, much more is still necessary (Calif, 2021). According to Kurt Markley, schools, colleges, and universities hold large banks of student records comprising health data, social security numbers, payment information, and more, and these records are at risk. Learning institutions must continuously re-evaluate and re-design their security practices to keep data secure and prevent breaches. The risk is even greater in remote learning environments or where information technology support is limited (Chan and Morgan, 2019).
It is also of concern that, in the current era of advanced technology, AI systems are becoming increasingly interconnected with cybersecurity through advances in hardware and software (Mengidis et al., 2019). This raises significant concerns about the security of various stakeholders and emphasizes the procedures policymakers must adopt to prevent or minimize the threat (ELever and Kifayat, 2020). Security concerns also grow with the number of networks and endpoints in remote learning. One problem is that protecting e-learning technology from cyber-attacks is neither easy nor cheap, especially in the education sector with its limited budget for academic activities (Huls, 2021). Another reason this severe threat persists is that educational institutions employ very few technical staff, and hiring more is an economic issue. Although intelligent AI and machine-learning technology can reduce security threats to some extent, the issue remains that not every teacher is trained well enough to use the technology or to handle common threats. And as the use of AI in education increases, so does the danger of security concerns (Taddeo et al., 2019). No one can escape the threat AI poses to cybersecurity; it behaves like a double-edged sword (Siau and Wang, 2020).
Digital security is the most significant risk and ethical concern of using AI in education systems, where criminals hack machines and sell data for other purposes (Venema, 2021). In exchange for convenience, we trade away our safety and privacy (Sutton et al., 2018). The question remains whether our privacy is truly secure, and when AI systems will become able to keep our information confidential while staying connected; the answer is beyond human knowledge (Kirn, 2007).
Human interaction with AI is increasing day by day. For example, various AI applications, such as robots and chatbots, are used in e-learning and education. Many may one day learn human-like habits, but some human attributes, such as self-awareness and consciousness, will remain a dream. AI still needs data and uses it to learn patterns and make decisions, so privacy will always remain an issue (Mhlanga, 2021). It is a fact that AI systems are associated with various human rights issues, which must be evaluated case by case. AI has many complex effects on pre-existing human rights because it is not installed or implemented against a blank slate but against a backdrop of societal conditions. Among the many human rights that international law assures, privacy is one that AI impacts (Levin, 2018). From the reviewed literature, we draw the following hypothesis.
H1: There is a significant impact of artificial intelligence on security and privacy issues.
Making humans lazy
AI is a technology that significantly shapes Industry 4.0, transforming almost every aspect of human life and society (Jones, 2014). The rising role of AI in organizations and individual lives has alarmed figures such as Elon Musk and Stephen Hawking, who believe that once AI reaches an advanced level, there is a risk that it might slip out of human control (Clark et al., 2018). It is striking that AI research has increased eightfold compared to other sectors, and most firms and countries invest in capturing and growing AI technologies, skills, and education (Oh et al., 2017). Yet the primary concern of AI adoption is that it complicates the role of AI in sustainable value creation and minimizes human control (Noema, 2021).
As the usage of and dependency on AI increase, the human brain's thinking capacity is automatically limited, rapidly eroding human thinking; this strips intelligence capacities from humans and makes them more artificial. In addition, so much interaction with technology has pushed us to think like algorithms, without understanding (Sarwat, 2018). Another issue is human dependency on AI technology in almost every walk of life. Undoubtedly, it has improved living standards and made life easier, but it has also affected human life miserably, making humans impatient and lazy (Krakauer, 2016). It will slowly and gradually starve the human brain of thoughtfulness and mental effort as it penetrates deeper into each activity, such as planning and organizing. High-level reliance on AI may degrade professional skills and generate stress when physical or mental effort is needed (Gocen and Aydemir, 2020).
AI is minimizing our autonomous role, replacing our choices with its own, and making us lazy in various walks of life (Danaher, 2018). It is argued that AI undermines human autonomy and responsibility, with a knock-on effect on happiness and fulfilment (C. Eric, 2019). The impact will not remain confined to a specific group of people or area but will also encompass the education sector. Teachers and students will use AI applications while doing a task or assignment, or their work might be performed automatically. Progressively, addiction to AI use will lead to laziness and a problematic situation in the future. To summarize the review, the following hypothesis is made:
H2: There is a significant impact of artificial intelligence on human laziness
Loss of human decision-making
Technology plays an essential role in decision-making. It helps humans use information and knowledge properly to make suitable decisions for their organizations and innovations (Ahmad, 2019). Humans produce large volumes of data, and to exploit them efficiently, firms are adopting AI and cutting humans out of data use. Humans think they benefit and save time by using AI in their decisions, but it is overtaking the human biological processor by lowering cognitive capabilities (Jarrahi, 2018).
It is a fact that AI technologies and applications have many benefits. Still, AI technologies also have severe negative consequences, and the limitation of the human role in decision-making is one of them. Slowly and gradually, AI limits and replaces the human role in decision-making. Human mental capabilities such as intuitive analysis, critical thinking, and creative problem-solving are being pushed out of decision-making (Ghosh et al., 2019). Consequently, these capabilities will be lost, as the saying goes: use it or lose it. The speed of adoption of AI technology is evident from its use in strategic decision-making processes, which has increased from 10 to 80% in five years (Sebastian and Sebastian, 2021).
Walmart and Amazon have integrated AI into their recruitment processes and use it to make decisions about their products, and it is increasingly entering top-management decisions (Libert, 2017). Organizations use AI to analyze data and make complex decisions effectively to obtain a competitive advantage. Although AI assists the decision-making process in various sectors, humans still have the last say in any decision. This highlights the importance of the human role in the process and the need to ensure that AI technology and humans work side by side (Meissner and Keding, 2021). A hybrid model of human–machine collaboration is believed to be the approach that will emerge in the future (Subramaniam, 2022).
The role of AI in decision-making in educational institutions is spreading daily. Universities use AI in both academic and administrative activities. From students searching for program admission requirements to the issuance of degrees, people are now assisted by AI. Personalization, tutoring, quick responses, 24/7 access to learning, answering questions, and task automation are the leading roles AI plays in the education sector (Karandish, 2021).
In all the above roles, AI collects data, analyzes it, and then responds, i.e., makes decisions. It is necessary to ask some simple but essential questions. First, does AI make ethical choices? AI has been found to exhibit racial bias, and its choices might not be ethical (Tran, 2021). Second, does AI impact human decision-making capabilities? While using an intelligent system, applicants may submit their records directly to the system and get approval for admission tests without human scrutiny. One reason is that the authorities trust the system; another may be the laziness created by task automation among the leaders.
Similarly, in keeping student records and analyzing their data, the choice again depends on the decision made by the system, whether out of trust or out of the laziness created by task automation among the authorities. In almost every task, teachers and other workers lose cognitive engagement when making academic or administrative decisions, and their dependency on the AI systems installed in the institution increases daily. To summarize the review: in any educational organization, AI makes operations automatic and minimizes staff participation in performing various tasks and making decisions. Teachers and other administrative staff are helpless in front of AI as the machines perform many of their functions. They are losing the skills needed for traditional tasks in an educational setting and, consequently, losing the reasoning capabilities behind decision-making.
H3: There is a significant impact of artificial intelligence on the loss of human decision-making.
Conceptual framework
Fig. 1