Next steps for AI in higher education
Event Details
Description
This one-day online conference focuses on the future use of AI in higher education, bringing out the latest thinking on maintaining academic integrity, as well as the opportunities the new technology presents for teaching and learning.
It will be an opportunity for key stakeholders and policymakers to discuss the implications of recent developments, such as the QAA’s Navigating the complexities of the artificial intelligence era in higher education report, which highlighted opportunities for the sector, including increasing accessibility, supporting those with additional learning needs, and improving flexibility and productivity in learning. Delegates will also discuss the potential challenges AI presents to the sector, such as narrowing the digital divide, guaranteeing accessibility, addressing infrastructure issues within universities and managing integration. Nick Watmough, Quality Enhancement and Standards Specialist, QAA, will deliver a keynote address at the conference.
Delegates will consider AI’s impact on student learning, development, creativity and achievement. Strategies for effectively integrating AI into HE will be examined, focusing on enhancing teaching methods, strengthening staff-student academic relationships, supporting different learning needs, and improving administrative efficiency.
The agenda will bring out the latest thinking on best practice and the positive potential of increased use of AI, including its short-term impact on academic staff time and university finances. Areas for discussion include its use in routine operations, such as application processing, timetabling and student support, as well as predictive analytics to optimise the utilisation of facilities and teaching staff. Longer-term planning, adaptive learning and assessment systems will also be examined.
The conference will assess approaches to maintaining academic integrity in the age of AI, including regulatory and institutional measures to deter misuse of AI tools and address academic misconduct. Attendees will discuss priorities for developing fair and proportionate guidelines for addressing suspected cases of AI-related misconduct, including the use of AI detection tools, and standardising sanctions.
Sessions will consider how HEIs are rethinking assessment strategies in light of generative AI, exploring best practice in assessment design, updating academic integrity policies, and fostering transparent communication between students and staff.
Further discussion is expected on leveraging AI to widen participation in HE and tackle the digital divide, and ensuring equitable access to technological advancements.
The conference will also look at how HEIs can respond swiftly to technological innovations by implementing policies and structures that keep pace with AI developments, as well as key considerations for the sector following confirmation in the Budget of the forthcoming Artificial Intelligence Opportunities Action Plan, which will support the Government in utilising AI opportunities and fostering growth.
Overall, areas for discussion include:
• ethical AI integration: developing guidelines for responsible use of AI in HE - fostering an ethical culture among staff and students - balancing innovation with academic integrity
• policy development: creating effective institutional policies for AI adoption - aligning with regulatory standards - ensuring consistent implementation across departments
• staff and student training: equipping educators with AI competencies - integrating AI literacy into curricula - promoting awareness of AI’s capabilities and limitations
• assessment methods: re-evaluating assessment strategies with AI in mind - exploring alternative models to traditional assessments - maintaining academic standards amidst technological change
• academic misconduct: detecting and preventing AI-assisted cheating - implementing fair and proportionate sanctions - promoting a culture of honesty and transparency
• equity and access: addressing the digital divide in AI adoption - ensuring all students have access to necessary AI tools - supporting underrepresented and disadvantaged groups
• AI for inclusivity: leveraging AI to support disabled students - personalising learning experiences - enhancing student engagement and outcomes
• data privacy: safeguarding personal data in AI applications - understanding legal and ethical implications - establishing secure data management practices
• sector collaboration: fostering partnerships between institutions, policymakers, and industry - sharing best practice - driving innovation in AI-enhanced education
• future readiness: preparing institutions for ongoing AI advancements - developing agile strategies - anticipating future challenges and opportunities.