As the digital revolution continues to sweep the world, AI technology has helped organizations optimize outcomes across multiple industries. This includes the higher education sector, which has turned to AI to improve student success and retention and to relieve overburdened teaching staff. However, it's important to remember that AI is not a silver bullet; it brings challenges of its own, such as scaling difficulties, a steep learning curve, and bias.
Keep reading as we dive into how AI is revolutionizing the higher education field and what we need to be wary of.
Improving Outcomes with Technology
AI can help higher education institutions more easily address challenges such as rising drop-out rates through custom-tailored educational experiences, discovering knowledge gaps, providing quick responses, and increasing accessibility for all students. For example, Asia University in Taiwan has a "give up on no one" policy and has implemented customized learning tracks and professor interventions to adapt to different student contexts. The university has turned to AI models to predict which students are at risk of dropping out and to identify the key variables that affect student performance.
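To make this concrete, here is a minimal, purely illustrative sketch of the kind of drop-out risk model described above. The feature names, synthetic data, and model choice are our assumptions for demonstration, not Asia University's actual system.

```python
# Illustrative sketch only: a minimal drop-out risk model on synthetic data.
# Features (attendance, GPA, platform logins) are hypothetical examples.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
attendance = rng.uniform(0.3, 1.0, n)   # fraction of classes attended
gpa = rng.uniform(0.0, 4.0, n)
logins = rng.poisson(20, n)             # learning-platform logins per term

# Synthetic labels: low attendance and GPA raise drop-out risk
risk = 3.0 - 3.5 * attendance - 0.8 * gpa - 0.02 * logins
dropped_out = (risk + rng.normal(0, 0.5, n) > 0).astype(int)

X = np.column_stack([attendance, gpa, logins])
model = LogisticRegression().fit(X, dropped_out)

# The fitted coefficients hint at which variables drive predicted risk,
# which is the kind of signal that can inform professor interventions
for name, coef in zip(["attendance", "gpa", "logins"], model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```

In practice a model like this would be trained on real enrollment and engagement data, and the coefficient inspection step is where institutions identify which variables matter most for early intervention.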
Institutions are also retaining students and supporting their academic success through AI tutors. By offering interactive tools and resources that uncover a student's knowledge gaps and learning style, AI tutors can adapt to an individual's needs, ultimately increasing engagement and helping students learn more effectively. In fact, it's been shown that students who used an online AI tutor achieved learning outcomes 2 to 2.5 times higher than those who did not use one.
And it's not just students who can benefit from AI technology. Overworked and overscheduled teaching staff can lean on AI to handle mundane tasks, freeing up their time to focus on the curriculum and their students. In a Master's-level AI class at the Georgia Institute of Technology, 300 students posted about 10,000 messages a semester to an online message board, which was nearly impossible for the professor or teaching assistants to keep up with. So the professor, Ashok Goel, and his team implemented a virtual assistant, Jill, to help out. Not only was she among the most effective teaching assistants the class had experienced (with a 97% success rate), but students didn't even realize they were interacting with a chatbot. With Jill carrying the lion's share of responding to queries, the professor and his team had more time to focus on meaningful work.
The Downsides of AI
While the benefits of AI for higher education are substantial, it's important to remember that the technology isn't perfect. Because humans build these models, bias is naturally present, and institutions must also account for scaling challenges and a learning curve when implementing them.
It can be easy to think of data as uniformly unbiased and accurate, but humans are integral to gathering, inputting, and analyzing that data. In fact, much research has documented the limitations of human decision-making and how implicit bias can rear its ugly head across a multitude of settings. A report from the Ohio State University's Kirwan Institute for the Study of Race and Ethnicity argued for the responsible use of predictive analytics and the need for a deep understanding of racial bias in these models. The report identifies potential racial biases that can significantly affect both the design of data models and the interpretation of their findings. The human element of predictive analytics thus raises a range of issues, including the accuracy, security, and privacy of data and the potential for bias against specific student groups. Predictive models may rely on student demographics such as economic status, race, gender, or cultural background, which could perpetuate the inequities that persist in access to education.
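One simple way to catch the kind of group-level bias discussed above is to compare how often a model flags students from different groups. The sketch below is an assumption-laden illustration: the group labels, flag rates, and the 80% threshold (the common "four-fifths" rule of thumb from employment-discrimination analysis) are ours, not the Kirwan Institute's methodology.

```python
# Illustrative sketch: a simple disparate-impact check on model predictions.
# Groups, rates, and the 0.8 threshold are hypothetical demonstration values.
import numpy as np

rng = np.random.default_rng(1)
groups = rng.choice(["A", "B"], size=1000)

# Hypothetical model outputs: group B gets flagged "at risk" more often
flagged = np.where(groups == "A",
                   rng.random(1000) < 0.20,
                   rng.random(1000) < 0.35)

# Flag rate per group, and the ratio of the lowest to the highest rate
rates = {g: flagged[groups == g].mean() for g in ["A", "B"]}
ratio = min(rates.values()) / max(rates.values())
print(f"Flag rates: {rates}, ratio: {ratio:.2f}")

if ratio < 0.8:  # four-fifths rule of thumb
    print("Potential disparate impact: audit the model's features and data")
```

A check like this doesn't prove or disprove bias on its own, but it flags disparities that warrant a closer audit of the model's features and training data.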
Whenever a new technology is introduced into an environment, it can be challenging to scale it as contexts change and to get individuals to learn how to leverage it. Even successful examples of AI in the classroom required enormous upfront resources and went through many initial failures. Jill, the virtual assistant, for example, had to be fed more than 40,000 posts from various forums before she could answer questions and interact with students.
Additionally, managing and using the volumes of data that AI technology requires means that staff beyond IT teams will need to be trained to use the data and the tools, raising skills gap concerns. At the University of Iowa, for example, many campus buildings use AI to monitor energy efficiency and address any problems. This means that staff will need to learn how to incorporate computers and data into their workflows in ways they weren’t initially trained for.
What Can We Do?
These challenges are not arguments against leveraging AI in higher education (or other settings). There are concrete steps we can take to mitigate them and even plan for them. When it comes to bias in data models, we must constantly challenge the source and method of data collection, the assumptions behind the models, and how their outputs are interpreted and used, especially for decisions that affect access to education. The higher education sector would do well to model student outcomes, pursue innovative approaches to improving them, and work to mitigate implicit biases. And when planning to implement AI technology, institutions must budget for staff training and the upfront resources these tools require to be successful and effective.
AI can seem like the Wild West, and if your business is feeling lost on the frontier, reach out to the OGs of AI for support. With a focus on AI and ML, we deliver transformational results for our clients by leveraging the latest technology and empowering companies to disrupt, transform, accelerate, and scale.