
Inclusive by Design: Rethinking AI for Environmental and Social Sustainability
By Dr Dhouha Kbaier and Prof Khalid M. O. Nahar.
“AI will shape the future, but for whom, and at what cost?” This question lies at the heart of my work. In an era where artificial intelligence is celebrated for its transformative power, it’s time we ask what kind of futures we’re building, and who gets left out in the process.
The term “sustainable AI” is becoming a buzzword, typically invoked in conversations about energy efficiency, model size, or computational power. But true sustainability goes beyond carbon footprints. It must be human-centred. It must include digital equity, linguistic inclusion, and accessibility. It must reflect the needs of the many, not just the connected, the privileged, or the loudest voices in the room.
This broader view of sustainability isn’t abstract; it’s anchored in my research and practice. At The Open University, I’ve had the opportunity to lead a number of initiatives focused on designing AI that is not only technically robust but socially inclusive. These projects have consistently reinforced a powerful insight: accessibility is not a bolt-on; it’s a system transformer.
Inclusive Innovation in Practice: Lab Assist
Take the Lab Assist project, for example. Developed within the OpenSTEM Labs at the Open University, Lab Assist began as a targeted intervention to support students undertaking remote STEM experiments, particularly those facing accessibility challenges, whether due to physical, cognitive, or situational barriers.
Our aim was to create an intelligent assistant that could guide and adapt to students’ varied learning needs. While the system wasn’t initially trialled with explicitly identified disabled users, we expanded testing to include a wide range of learners, some of whom may have had accessibility requirements.
What emerged was a powerful insight: designing with inclusivity in mind improves the experience for everyone. What began as a focused support tool evolved into a more universally beneficial learning enhancement system.
This reflects a key principle in inclusive design: when we build with equity and flexibility at the core, the resulting systems don’t just accommodate difference; they elevate the experience for all learners. It’s not about separate solutions; it’s about designing with responsiveness and fairness from the ground up.

Expanding the Map: AI Beyond English and Beyond Sight
Another strand of my research addresses a different kind of exclusion: the linguistic and cultural bias embedded in mainstream AI tools. English and, by extension, other Latin-script languages dominate most of today’s AI infrastructure. But the world doesn’t speak with one voice or think in one script.
That’s why I’m currently leading a project funded under the Open Societal Challenges programme, focused on developing a brain-computer interface (BCI) for direct Semitic text generation, with Arabic serving as the initial case study.
This work is being conducted in close collaboration with Professor Khalid Nahar from Yarmouk University in Jordan, whose expertise in neural engineering and Arabic linguistics is vital to the project’s success. Together, we are building systems that can translate brain activity into Arabic words. But this isn’t just about cutting-edge neuroscience or machine learning; it’s about recognising the deep cognitive and cultural layers tied to language.
Arabic, as a morphologically rich, right-to-left Semitic script with diacritical markings and complex grammar, presents challenges that most Western-designed NLP and BCI systems are ill-equipped to address. Professor Nahar brings crucial regional insight into Arabic script processing and cognitive diversity, ensuring that the system we develop is not only technically sound but also culturally grounded and linguistically authentic. This partnership exemplifies the kind of cross-border, interdisciplinary collaboration that inclusive AI design requires, one that respects local contexts while driving global innovation.
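To make one of these challenges concrete, consider how Arabic diacritics behave at the text-processing level. This short Python sketch is purely illustrative (it is not code from the project): Arabic short vowels (tashkeel) are separate combining code points, so a fully vowelled word and its bare consonantal skeleton differ in length and content, which trips up tokenisers and models built with Latin-script assumptions.

```python
import unicodedata

def strip_tashkeel(text: str) -> str:
    """Remove Arabic diacritics by dropping Unicode combining marks.

    Illustrative only: a real Arabic NLP pipeline must decide when
    diacritics carry meaning (they can disambiguate homographs)
    rather than always discarding them.
    """
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

word = "كَتَبَ"  # "kataba" (he wrote), fully vowelled: 6 code points
bare = strip_tashkeel(word)  # consonantal skeleton: كتب, 3 code points
print(len(word), len(bare))  # the "same" word at two different lengths
```

Systems that treat one character as one code point, or that never normalise their input, will see these as two unrelated strings; handling such variation correctly is a precondition for any Arabic-capable NLP or BCI output stage.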
Importantly, this work also intersects with disability inclusion. BCIs hold transformative potential for people who are non-verbal or have limited motor control. But if these systems only “speak” in English or are trained exclusively on Western linguistic norms, they risk reinforcing exclusion even as they claim to offer liberation.
Our project poses a fundamental, proactive question: Can AI learn to think differently, to hear different minds, and to respect diverse identities?
Our answer is clear: Yes, but only if we design it to do so, intentionally and inclusively.

From Tools to Ecosystems: Rethinking AI Ethics
Sustainability, then, is not just about whether our algorithms run on renewable energy; it’s about whether they are ethically aligned, culturally fluent, and socially responsive.
This ethos runs through other areas of my research as well. My work in cybersecurity, digital trust, and misinformation has shown how fragile our digital ecosystems can be, especially when designed without inclusion in mind. In vulnerable communities, digital exclusion often compounds social and economic inequality. In such contexts, the ethical stakes of AI become even more urgent.
Sustainable digital futures must be resilient in more ways than one: they must withstand technical shocks and social injustices alike.
A Call to Design Differently
We are at a crossroads. AI is no longer a futuristic concept; it’s a present reality shaping our education systems, healthcare delivery, climate models, and public discourse. But if we fail to embed inclusion into our design processes now, we risk baking systemic inequity into the tools of tomorrow.
Sustainability is not just about low emissions or efficient code. It’s about who we choose to include in the future we’re building.
So, here’s my invitation: Let’s design AI that empowers more people. Let’s co-create systems with those who are often excluded from the conversation. Let’s ensure that sustainability includes justice.
Because if we’re designing AI for the future, everyone deserves a seat at the table.
Author biography:
Dr Dhouha Kbaier is a Senior Lecturer in Computing and Communications at The Open University. Her work spans AI for accessibility, misinformation resilience, and inclusive digital systems. Dhouha actively engages in interdisciplinary collaborations and has earned recognition for her impactful research.
Prof. Khalid M. O. Nahar is a professor at the Faculty of Information Technology and Computer Sciences, Yarmouk University, Jordan. He received his BS and MS degrees in computer science from Yarmouk University in 1992 and 2005, respectively, and was awarded a full scholarship to pursue his PhD in Computer Science and Engineering at King Fahd University of Petroleum and Minerals (KFUPM), KSA. After completing his PhD in 2013, he spent two years as an assistant professor at Tabuk University, KSA. In 2015, he returned to Yarmouk University, where he served as Assistant Dean for Quality Control and later chaired the Training Department at the Accreditation and Quality Assurance Centre from 2020 to 2022. His research interests include artificial intelligence, speech recognition, Arabic computing, natural language processing, multimedia computing, content-based retrieval, machine learning, IoT, and data science. Prof. Khalid was nominated as an expert in AI at Yarmouk University and was elected as the University’s representative at the United Nations Office for AI Ethics. He holds a German patent in AI (DE202023100058U1) and has a US patent application under review for driver inattention prediction using fuzzy logic. He has published more than 60 papers in reputable Scopus-indexed venues, including IEEE, MDPI, and Springer journals.

Dr Dhouha Kbaier

Prof. Khalid M. O. Nahar

Delivering a public talk at Soapbox Science, Milton Keynes – “Innovation at the Intersection: Where Technology Meets Environmental Science,” 6th July 2024.