CSULB's Mohamed Abdelhamid places people at the center of AI

Published January 14, 2026

Artificial intelligence can prompt feelings of excitement, anxiety or confusion, depending on the individual. As more businesses deploy the technology, CSULB's Mohamed Abdelhamid wants students to know how the tool can be used responsibly.

A technology expert in the College of Business, Abdelhamid serves as director of the Master of Science in Information Systems program. Its Responsible Artificial Intelligence course, developed about a year after the 2022 debut of ChatGPT, rests on the reality that AI technologies are created by and for people. The upshot is that developers and users need to take care that models are not compromised by biases or other errors.

"When people were focusing on utilizing AI, we were focusing on the consequences of AI," he said.

Q: What inspired you to research business technologies, and how did that lead to your interest in AI? 

A: My background is in computer engineering, and I worked as a programmer and data analyst before ultimately doing my PhD in an information systems context. That always inspired me to improve technology.

We started developing the responsible AI course in early 2023, right at the ChatGPT moment. We were already ahead, and then we started seeing universities talking about ethical AI and responsible AI.

Q: How do you define responsible AI use and how has your research influenced your view? 

A: Responsible AI is basically the deployment of AI systems that are fair, unbiased, transparent, secure, safe and accountable. I was already working in privacy and security. Now I also look at more model-related dimensions like fairness, bias, accountability and transparency.

When the average person hears "responsible AI," they think about what users can do with AI, like using it unethically: cheating, for example, or creating work that is not theirs. But the main concept of responsible AI is the design and deployment of AI systems before they ever reach users.

Q: What are the most important concepts you want Master of Science in Information Systems students to know about responsible AI use? 

A: Fairness and transparency, and then privacy and security. That's another thing. We have a track for security, and some people want to go into AI. The two actually cross paths, because you can't have AI systems without security, without safety, without privacy. But mainly, the thing people don't realize is the amount of bias in AI systems.

Q: If I'm a bank, what happens to me if there's some kind of bias built into a system evaluating creditworthiness? Isn't that creating liability if somebody says, 'Well, you're discriminatory'?

A: What you just described is accountability. If there's a mistake, who is held accountable? In general, humans are not unbiased. You can't create a perfect system from an imperfect foundation, and that's where the problem is.

In my research, I focus on healthcare. What people don't realize is that you could actually lead an AI system to give you biased results without giving it direct unethical commands. My work focuses on benchmarking AI systems, specifically in healthcare.

Q: What do you see as the most promising opportunities for AI in business? 

A: I see a lot of promise in healthcare. That's one area I think will benefit the most in the next few years.

Q: Are these benefits in terms of diagnoses? Or record keeping? 

A: Both. Improving diagnoses, which could save lives, but also helping medical professionals. They don't have to spend time writing reports, and with the time saved they can serve more patients or learn about new research.

Q: What are the pitfalls? 

A: There is a lot of risk in terms of privacy and security, because everyone is using AI more now. There is more about you and what you do in more places, and that makes your data, and everyone's data, very valuable to cybercriminals. The same goes for companies: the more they utilize AI systems, the more they are subject to attacks. Not only their own data but their users' data becomes a target.

Q: How would you like to see students' understanding grow after completing the Responsible Artificial Intelligence course?

A: We want them to understand that AI does not understand context and does not understand policy unless humans have embedded it. We want them to understand the importance of humans, especially developers of AI systems, and to act responsibly when they are building those models.

Ultimately, if something goes wrong, the responsibility is on the developers. If a system makes a mistake or is biased or unfair, the institution or organization using the AI system is not the one held responsible. It goes back to the developers, and that's what we want them to take from this.