Think differently

3 questions to Peter Daly and Julia Milner about Artificial Intelligence (AI) in Higher Education

Peter Daly, Professor
Julia Milner, Professor

Although half of the students and one-third of professors occasionally use AI, 80% of students and 90% of professors believe regulations are needed for text-generative AI in education (1). This is just one of dozens of examples of how educators, students and institutions are facing the implications, challenges and opportunities presented by these technologies (2). We spoke with Peter Daly and Julia Milner, EDHEC Professors, who shared their insights on the integration of AI in educational contexts and the associated ethical considerations.


12 Jun 2024

How can AI be effectively integrated into educational practices?

Peter Daly: AI can be utilized in a myriad of ways beyond exam correction (3). One significant application is providing feedback: with well-designed prompts, AI can give more detailed and immediate feedback than human instructors. However, there are challenges, such as feedback that is too extensive, or AI "hallucinations," where it generates nonsensical responses. It's crucial for faculty to embrace AI tools, as students are already using them (4). Notably, understanding how to craft effective prompts is a skill of the future. As academics, we need to ask whether we can use AI to automate repetitive tasks, and how it can augment our work to assist us in our teaching and research. Hence, artificial intelligence becomes ‘additional’ intelligence for academics and scholars.


Julia Milner: In education, AI can assist with administrative tasks, personalized learning, and enhancing student engagement. However, educators must maintain the human touch: personal interaction and emotional connection. It's about balancing technology use with empathy and understanding the nuances of its impact on learning.


What are the primary ethical concerns related to the use of AI in education?

Julia Milner: The ethical considerations of AI in education are multifaceted. One major issue is the disconnect between predefined ethics and actual behavior, as seen in our research on smartphone use (5). This is applicable to AI as well, where the ethical stance of individuals might not align with their actions under pressure. Privacy concerns, the potential for academic dishonesty, and the need for transparency in AI's functioning are paramount. Institutions should foster open conversations about these ethical challenges and support educators in navigating them, ensuring that AI enhances rather than detracts from the educational experience.


Peter Daly: Ethical concerns around AI in education include the potential for algorithmic bias, where AI systems may perpetuate biases present in their training data, such as gender or language biases. There's also the risk of students becoming overly reliant on AI, potentially undermining their critical thinking skills. Additionally, the transparency of AI tools and their decision-making processes is a significant concern.


How can institutions support educators and students in adapting to the rise of AI in education?

Peter Daly: Institutions need to establish clear guidelines and policies to address these issues and ensure that AI is used responsibly and ethically. They should also adopt a proactive approach by providing training and resources to help educators understand and utilize AI tools effectively. This includes developing internal policies that address the ethical use of AI and integrating these technologies into the curriculum in a way that complements traditional teaching methods. For instance, at EDHEC, we organize learning expeditions that incorporate visits to companies, where students and faculty can engage with cutting-edge AI applications and gain certifications (6). It's also crucial to promote a culture of academic integrity, ensuring that AI is used to augment learning rather than replace it.


Julia Milner: Institutions need to provide professional development opportunities that help educators integrate AI into their teaching while maintaining the human touch. This includes training on the ethical use of AI, understanding its limitations, and developing strategies to use AI creatively. Encouraging open dialogue about the benefits and challenges of AI can help alleviate apprehensions and promote a balanced approach. Being playful, trying things out and remaining open can also help in navigating the technology landscape, while open discussion of AI regulation is essential to move the application of AI in education forward.



(1) Nov. 2023, "L’IA dans l’enseignement : résultats détaillés d’une enquête où étudiants et enseignants confrontent leurs regards"

(2) Dec. 2023, ENSSIB - "IA et Enseignement Supérieur : quels enjeux et impacts ?"

(3) Daly, P. & Deglaire, E. (2024) ‘AI-enabled correction: A professor’s journey’. Paper presented at the European Academy of Management (EURAM) Conference, 25–28 June 2024, University of Bath School of Management, Bath, UK.

(4) Le Monde, 3 juin 2024 « L’explosion de l’intelligence artificielle a été beaucoup plus rapide que le temps universitaire »

(5) Daly, P., Milner, J. & Milner, T. (2018) ‘The Ethics of Smartphone Usage: A Business Student Perspective’. Paper presented at the European Academy of Management (EURAM) Conference, 19 – 21 June 2018, University of Iceland, Reykjavik, Iceland.

(6) See also the EDHEC IA cross-disciplinary initiative in the new strategic plan Générations 2050
