BTS: Making sense of ethical AI
A UN AI Advisor takes us behind the scenes on AI ethics
As artists have candid discussions about the creative harm and IP issues that accompany generative AI, journalists like me contemplate how best to use AI to advance our work while maintaining authentic storytelling (see this nifty use case of using AI to pore through content to better understand Kash Patel’s worldview without sitting through weeks of tedium; note that the AI isn’t doing any of the writing).
Lawyers, healthcare professionals, educators and others are all grappling with what’s right and what’s wrong.
And they’re not alone.
Even at the highest level, decision makers struggle to identify what makes AI ethical.
“Getting the member nations to agree upon a baseline set of ethics when each of us have our own different moral code, it’s an uphill but necessary battle,” said Neil Sahota, AI Advisor for the United Nations, which is currently made up of 193 countries.

Neil Sahota, UN AI Advisor and CEO of ACSILabs
Sahota also helped create AI for Good, a foundation that thinks up world solutions using AI and other emerging technologies. Meanwhile, he heads up ACSILabs, which develops AI tools like immersive simulators that can help us prepare for various potential futures. In other words, he’s the guy to talk to about AI ethics.
If everyone’s ethics were on the same page, wars would be nonexistent and we’d have a utopian society à la Aldous Huxley’s Island. But, as world events like large-scale dissent against the Trump administration or the scenarios that necessitate the Free Palestine movement make clear, interpersonal priorities are far from parallel.
This extends to AI ethics, too. AI helps in some ways and harms in others. For example, Facebook’s digital redlining is largely considered harmful. But when I talk about how AI can help increase clinical trial diversity by using more sophisticated patient selection methods, I generally get a positive reaction.
And there’s another issue: what seems helpful to some seems harmful to others. For example, China’s use of smart glasses in policing, which can automatically identify people on sight, would never fly in the US. 🤷‍♀️
But while we fight over how to use AI correctly, bad actors continue to use the tech in overtly harmful ways that developers may never have considered.
“Technologists tend to build to an outcome. Someone wants to do x, so no one's thinking about y or z. A couple of years ago, there was a guy in Canada that was taking images and videos of real children from social media and generating child pornography from that using deepfake technology,” said Sahota. When asked why technologists didn’t foresee this outcome, he said, “The honest truth is because I'm not a pedophile. It never dawned on me that someone would do something like that.”
According to Sahota, the UN is currently working to create focus groups that can anticipate potential misuses of AI. He adds that anyone developing this technology should think of AI as much more than an IT project. Engineers can’t foresee potential bias and misuse all on their own. Much like diversity creates better business outcomes, diverse perspectives create fairer AI outcomes.
AI ethics may be a buzz-phrase, but it all boils down to “ensuring that new technologies benefit all communities, especially those that are underserved,” according to the Brookings Institution.
As criminals and misguided folks continue to operate, experts agree that the only way to combat the bad is to propel ethical AI forward. Industries are beginning to come to terms with what that means on a smaller scale (for example, the American Bar Association issued its first AI ethics guidelines last year), but for larger groups like the UN, a baseline definition remains in the works.
In the meantime, in your life or at your company, you can propel diverse perspectives, critical thinking and ethical guardrails that create a safe space for AI use in your little corner of the world.
Thanks,
