As Australian colleges crack down on ChatGPT, students with disabilities defend AI


Students work on computers in the computer lounge at the campus of the University of New South Wales in Sydney, Australia. Students with disabilities urge greater embrace of AI tech as a growing number of universities step up measures against OpenAI's chatbot. — Reuters

MELBOURNE: Visually impaired student Adam Whitehead has long relied on a computer and assistive technology to help him read course materials and take exams at the University of Melbourne in Australia.

He has watched with concern as universities in Australia and beyond move to crack down on ChatGPT – a free programme that generates original text about virtually any subject in response to a prompt – over fears of cheating.

As the chatbot stirs debate over the use of technology and artificial intelligence (AI) in education, disabled students and educators have said the benefits should not be overlooked in a rush to regulate.

“We need to have a really careful distinction between making things accessible and getting AI to think for us,” said Whitehead, a 30-year-old philosophy major who uses technology to convert on-screen text to speech.

Earlier this month, the Group of Eight (Go8) consortium of top Australian universities announced that its members would set more pen-and-paper assessments in response to ChatGPT, amid fears students could use it to generate essays and cheat in exams.

“Assessment redesign is critical ... as we seek to get ahead of AI developments,” its deputy chief executive Matthew Brown told the Thomson Reuters Foundation.

He said members will also use in-person supervision during assessments, and invigilation tech to monitor students taking exams online or using computers.

The consortium did not respond to a request for comment on concerns that anti-AI measures could negatively affect disabled students.

A spokesperson for the University of Melbourne – which is part of the Go8 – said: “Submitted assignments are monitored using increasingly advanced technology, with students’ knowledge and consent.”

Some professors and students argue universities should put more focus on the potential positive uses of AI tech.

“Cheating is obviously a problem,” said Anna Boucher, an associate professor at the University of Sydney who uses an AI-based voice generator to deliver lectures as she has a disability that makes speaking for long periods difficult.

“But I don’t think because one aspect of AI raises some concerns that we should dismiss all aspects of AI.”

Disability support

ChatGPT was rolled out for free public testing on Nov 30.

It has already been banned in some public schools in New York City and Seattle, according to US media reports, while several US universities have announced plans to do fewer take-home assessments and more hand-written essays and oral exams.

More than 6,000 teachers from universities including Harvard University and Yale University have also signed up to use GPTZero, a programme that claims to detect AI-generated text, its creator Edward Tian told the New York Times newspaper.

Others take a different approach, saying universities should rethink how they teach and assess to work with new tech.

For example, educators could set students practical projects such as curating a local exhibition, said Sam Illingworth, an associate professor at Edinburgh Napier University, in an article published in The Conversation.

The benefits of AI for students with disabilities are undeniable, said Leslie Loble, a professor at the University of Technology Sydney, who works on tech and education.

“There’s strengthening evidence that shows the best of these tools really can help disadvantaged students access learning in often more effective ways,” she said.

Under Australian state and federal law, students with disabilities are entitled to “reasonable adjustments” in the classroom.

But according to Australian government data, 17% of people with a disability hold a bachelor's degree or higher, compared with 35% of people without a disability.

Advocates say the divide is at least partly due to a lack of accessibility and support for students with disabilities.

‘Tremendous potential’

As AI becomes ever more ubiquitous, educational technology, or edtech, has ballooned into a multi-billion-dollar industry.

It is important that such technologies are “well-designed, appropriately used and strongly governed”, said Loble.

“We shouldn’t assume the technology is bad. We need to move quickly and put in place strong policies and protections for educators and students,” she said.

Elsewhere, there is growing pushback against some forms of AI. Performing artists are demanding copyright protection for their images and voices, and a group of artists this month filed a class action lawsuit against the makers of the AI software Stable Diffusion for using their works to generate images without their consent.

But for students and staff with disabilities, AI technology could be revolutionary, said Betty Zhang, a biotechnology major at the University of Melbourne, who is part of a campus advocacy group for students with disabilities.

“AI has tremendous potential, especially when it comes to making learning materials more accessible ... it makes more sense for universities to embrace the technology,” she said, adding that returning to pen and paper “seems a bit backwards”.

“If we’re able to use AI effectively, it’s not just going to benefit disabled students – making things accessible makes it way easier for everyone to learn.” – Thomson Reuters Foundation
