𝗚𝗲𝗻 𝗭 𝘂𝘀𝗲𝘀 𝗔𝗜 𝗺𝗼𝗿𝗲 𝘁𝗵𝗮𝗻 𝗮𝗻𝘆 𝗼𝘁𝗵𝗲𝗿 𝗴𝗲𝗻𝗲𝗿𝗮𝘁𝗶𝗼𝗻. 𝗕𝘂𝘁 𝗵𝗲𝗿𝗲'𝘀 𝘄𝗵𝗮𝘁 𝘄𝗲 𝗼𝗳𝘁𝗲𝗻 𝗼𝘃𝗲𝗿𝗹𝗼𝗼𝗸: 𝘁𝗵𝗲𝘆 𝗱𝗼𝗻'𝘁 𝗳𝘂𝗹𝗹𝘆 𝘁𝗿𝘂𝘀𝘁 𝗶𝘁.

This isn't speculation. It's something I've observed through my consulting work, coaching interventions, and increasingly in my doctoral research on AI's role in performance management for Gen Z.

Yes, they're digital natives. Yes, they're fast to adapt. But that doesn't mean they trust what's behind the screen. Many avoid using AI in school or work, not because they're resistant, but because they lack clarity. The rules are fuzzy. The outcomes feel opaque. The accountability is unclear.

That's where I believe 𝗘𝘅𝗽𝗹𝗮𝗶𝗻𝗮𝗯𝗹𝗲 𝗔𝗜 (𝗫𝗔𝗜) becomes critical, not just for compliance, but for confidence.

𝗪𝗵𝗲𝗻 𝗔𝗜 𝗯𝗲𝗰𝗼𝗺𝗲𝘀 𝗮 𝗯𝗹𝗮𝗰𝗸 𝗯𝗼𝘅, 𝘁𝗿𝘂𝘀𝘁 𝗲𝗿𝗼𝗱𝗲𝘀. 𝗪𝗵𝗲𝗻 𝗶𝘁's 𝘁𝗿𝗮𝗻𝘀𝗽𝗮𝗿𝗲𝗻𝘁, 𝘁𝗿𝘂𝘀𝘁 𝗯𝗲𝗰𝗼𝗺𝗲𝘀 𝘁𝗲𝗮𝗰𝗵𝗮𝗯𝗹𝗲.

We must move beyond just integrating AI into learning and workspaces. We need to make the thinking behind AI visible, so users (especially Gen Z) understand how decisions are made, what data is used, and what boundaries exist.

𝗫𝗔𝗜 𝗶𝘀𝗻'𝘁 𝗷𝘂𝘀𝘁 𝗮 𝘁𝗲𝗰𝗵𝗻𝗶𝗰𝗮𝗹 𝗳𝗲𝗮𝘁𝘂𝗿𝗲: 𝗶𝘁'𝘀 𝗮 𝗹𝗲𝗮𝗱𝗲𝗿𝘀𝗵𝗶𝗽 𝗿𝗲𝘀𝗽𝗼𝗻𝘀𝗶𝗯𝗶𝗹𝗶𝘁𝘆. If we fail to explain AI clearly, we'll see more hesitation, more disengagement, and eventually a wider skills-confidence gap.

So here's my question for leaders, educators, and designers of work: are you building trust into your AI systems, or assuming it will show up by default?
Openness and trust in digital learning systems
Summary
Openness and trust in digital learning systems refer to creating transparent and reliable environments where users understand and feel confident about how technologies like AI operate in education. This means making the rules, processes, and decisions of digital tools clear, so learners and educators can use them with confidence and accountability.
- Prioritize transparency: Share the reasoning behind digital tools, clearly explain their role in learning, and make guidelines for responsible use easy to understand.
- Encourage shared dialogue: Create opportunities for students and educators to discuss how and when AI is used, so everyone feels comfortable sharing experiences and questions.
- Promote critical engagement: Teach learners to think carefully about digital outputs, distinguish AI-generated content from their own work, and reflect before accepting recommendations.
🔍 European Digital Education Hub: Explainable AI (XAI) in Education

This comprehensive report, developed by an expert group of practitioners, explores how explainability in AI is critical to fostering trust, accountability, and transparency in educational settings.

📘 What's it about?
The report provides guidance for policymakers, educators, developers, and institutional leaders on how to responsibly integrate explainable AI into teaching, learning, and governance systems. It links legal obligations (AI Act, GDPR) with ethical and pedagogical concerns.

📌 Key Themes and Insights:

🧠 Core Concepts of XAI
▪️ Transparency, interpretability, explainability, and understandability are distinct but interconnected
▪️ XAI bridges technical development and human comprehension, especially in complex, data-driven AI models
▪️ Transparency is not just a technical issue; it is also an ethical and pedagogical imperative

⚙️ Legal and Ethical Foundations
▪️ The EU AI Act does not mandate explainability outright but requires human oversight, fairness, and risk mitigation
▪️ In education, this means AI systems must be understandable by learners, actionable for educators, and auditable by authorities
▪️ High-risk AI in education (e.g., grading, tutoring systems) must meet specific compliance criteria

👨🏫 Implications for Vocational Education and Training (VET)
▪️ VET learners often engage with automated decision systems (e.g., intelligent tutoring, skill assessments)
▪️ XAI ensures learners understand why an AI tool makes recommendations, which is vital for preserving learner agency, building metacognitive skills, and enabling self-regulated learning
▪️ Teachers in VET must develop competences in AI literacy and critical thinking to use and explain these tools responsibly

🤝 Stakeholder Roles & Collaboration
▪️ Developers must build for clarity and accountability
▪️ Educators and institutions must scrutinise AI outputs, especially where decisions impact learners' futures
▪️ Multi-stakeholder collaboration is essential, particularly involving pedagogues in the design phase

📊 Pedagogical Design of Explanations
▪️ Explanations should be tailored: local vs. global, simple vs. technical, and conditional vs. correlational
▪️ In VET, actionable feedback (e.g., "you need to improve welding precision due to X pattern in your practice log") is more effective than opaque scores (see the sketch after this post)

👩🏫 Educator Competences for XAI
▪️ The report defines core digital and pedagogical competences for integrating XAI in curricula, including:
✔️ Understanding AI models' logic
✔️ Interpreting and communicating AI-driven outputs
✔️ Teaching students to question, reflect, and act on AI advice
▪️ Includes practical activities adaptable for all levels of education

📈 Conclusion
Explainable AI isn't a technical luxury; it's a pedagogical necessity. In VET, it safeguards learner agency, fosters trust, and supports equitable learning outcomes.

#AIinEducation #DigitalCompetence
Francisco Bellas European Commission
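To make the report's local-versus-global distinction and its "actionable feedback over opaque scores" point concrete, here is a minimal sketch of a local explanation for one learner's automated skill assessment. It assumes a toy scikit-learn logistic-regression model; the feature names, data, and feedback wording are illustrative assumptions, not taken from the report.

```python
# Minimal illustrative sketch (not from the report): a local explanation of one
# learner's automated skill assessment, turned into actionable feedback.
# Model, feature names, and feedback wording are assumptions for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical practice-log features for a welding module.
FEATURES = ["seam_precision", "torch_angle_consistency", "travel_speed", "prep_cleanliness"]

# Toy training data: rows are learners, columns follow FEATURES; 1 = assessed competent.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, len(FEATURES)))
y = (X[:, 0] + 0.5 * X[:, 1] - 0.3 * X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def local_explanation(x: np.ndarray) -> list[tuple[str, float]]:
    """Per-feature contribution (coefficient * value) to this one prediction:
    a *local* explanation, as opposed to the *global* view the coefficients
    alone give across all learners."""
    contributions = model.coef_[0] * x
    order = np.argsort(contributions)  # most negative (hurting the score) first
    return [(FEATURES[i], float(contributions[i])) for i in order]

def actionable_feedback(x: np.ndarray) -> str:
    """Translate the weakest contributor into feedback a learner can act on,
    instead of returning only an opaque pass/fail score."""
    feature, contribution = local_explanation(x)[0]
    score = model.predict_proba(x.reshape(1, -1))[0, 1]
    return (f"Predicted competence: {score:.0%}. "
            f"Biggest drag on this result: '{feature}' (contribution {contribution:+.2f}). "
            f"Focus your next practice sessions there.")

learner = np.array([-1.2, 0.3, 0.8, 0.1])  # one learner's practice-log summary
print(actionable_feedback(learner))
```

The coefficients alone describe the model globally; the per-learner contributions and the feedback string are the local, actionable layer the report argues learners need in order to preserve agency and self-regulated learning.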
-
Yesterday, a student in my class candidly shared with me some of their go-to AI resources. That openness was a big moment for me, not because of the tools themselves, but because it showed me that they felt comfortable enough to talk freely about how they're using AI in their work. It's a sign that the trust we've been building in the classroom is paying off. When students start sharing how they're leveraging AI without hesitation, you know the atmosphere you've created supports real learning and growth.

Trust is the cornerstone of effective AI integration. Here are five ways I've worked to cultivate that trust:

Be Transparent About AI's Role: I'm upfront about how AI fits into our learning goals. I set clear guidelines but also explain the reasoning behind them, so students see AI as a supportive tool, not a replacement for their thinking.

Show Vulnerability: I let students know that I'm also figuring things out as we go. By being honest about the learning curve I'm experiencing, I encourage them to be open about their own challenges and discoveries.

Encourage Real-Time Conversations: When students mention how they've used AI, I don't just nod and move on; I dive in. We talk through what worked, what didn't, and how they approached it. This normalizes AI use and turns it into a shared learning experience.

Celebrate Their Process: Whether they successfully apply AI or run into challenges, I make sure to recognize their efforts. This reinforces that AI is a tool for growth and experimentation, not just a quick fix.

Model Responsible AI Use: I regularly demonstrate how I incorporate AI in my own work. When students see me using AI thoughtfully, they're more likely to adopt similar practices, knowing that the tools have a real, practical role in our classroom.

In the end, trust allows AI to become more than just another tool; it becomes part of a larger dialogue about learning, creativity, and innovation. And when students trust the process, they engage with AI more confidently and effectively.

Amanda Bickerstaff Aco Momcilovic Brian Schoch Christina B. 👨🏫🤖 "Dr. Greg" Loughnane Goutham Kurra Iulia Nandrea Mike Kentz Michael Spencer Milly Snelling Anna Mills David H.
-
❓👩🏫 Do you trust your students on the responsible use of AI? 🎓🤖

👉 Lecture time is starting again, and I had a discussion with my students at the Technische Universität Wien on the papers they will have to deliver for exams.
👉 For more than 25 years I have supported an open book policy at exams - but LLMs? I needed to think about that!
👉 So I did some research and came across the study titled "I don't trust you (anymore)! – The Effect of Students' LLM Use on Lecturer-Student Trust in Higher Education", which provides some insightful answers.

☝️ Transparency in using AI tools significantly enhances trust. Lecturers are less concerned about whether using these tools is fair and more interested in knowing how and when students employ them.
☝️ Responsible AI use can enhance research quality (sure about that), but students need to critically evaluate the outputs and clearly distinguish AI-generated content from their original analysis.
☝️ Properly integrated, LLMs can serve as powerful resources, helping students explore new ideas, verify facts, and elevate their critical thinking.

Some takeaways:
✅ Transparency First: Clearly disclose and reference any AI-generated insights or text.
✅ Critical Engagement: Encourage students to critically evaluate AI outputs rather than unreflectively adopting them.
✅ Clear Guidelines: Establish policies that clarify responsible AI usage.

👉 Discussion: How can we best integrate AI tools into educational environments while maintaining trust and academic rigor? Share your thoughts!

🔗 to the paper in the comments

#artificialintelligence #AcademicIntegrity #ResearchQuality #ResponsibleAI #education