The ethics of code: Developing AI for business with 5 core principles

The “Canadian Mafia” was known for years of extensive deep learning research in Canada, much of it at the University of Toronto. The work they put in eventually paid off.

Yoshua Bengio, Yann LeCun, and Geoffrey Hinton now lead AI research at some of the world’s largest tech companies, and their legacy in Canada lives on: the country is investing heavily in research and talent, directed from the top down, to maintain its position as a leader in technology innovation.

So much so that Prime Minister Justin Trudeau’s latest federal budget proposes $125 million to launch a Pan-Canadian Artificial Intelligence Strategy for research and talent, intended to “promote collaboration between Canada’s main centres of expertise in Montréal, Toronto-Waterloo, and Edmonton” and, ultimately, to “cement Canada’s position as a world leader in AI.”

So what exactly is all the fuss about? Simply put, AI is the creation of intelligent machines that think and learn like humans. Every time Google predicts your search, you speak to Alexa or Siri, or your iPhone suggests the next word in a text, that’s AI in action.

AI can also be found in less obvious situations, like when you make an unusual purchase with your card and get a fraud alert from your bank. AI is everywhere, and it’s making a huge difference in our lives every day.

Game changing

I began working on AI a few years ago, and even in this short time, the game has changed massively.

As AI engineers, coders, and hackers (whatever you want to call us), we now have a lot of choices to make about how we implement AI in the products we develop. For instance, we can create our own AI technology, or we can leverage generic tools and apply them to the specific problems we are working on.

Let me give you an example of how we approached this at Sage when building our own AI chatbot, Pegg.

First, we developed and trained our own AI for the financial domain, with skills to take the admin out of accounting, payments, invoices, and expenses (everyone hates doing expenses, right?).

We partnered with Microsoft, Amazon, and Facebook to teach the AI to understand generic entities like dates and locations.
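To make that concrete, here is a minimal sketch of generic entity extraction, not Sage’s actual stack: it uses spaCy’s pre-trained English model to pull dates and locations out of an invented user message.

```python
# Illustrative only: extract generic entities (dates, locations) with a
# pre-trained model, the kind of commodity capability a chatbot can lean on
# instead of training its own. Requires: python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

message = "Invoice the Toronto office for the work done on 3 April"
doc = nlp(message)

for ent in doc.ents:
    if ent.label_ in {"DATE", "GPE", "LOC"}:  # dates and geographic locations
        print(ent.text, ent.label_)

# Expected output (model-dependent):
#   Toronto GPE
#   3 April DATE
```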

We then chose to design our own personality for Pegg to suit the needs of our business users. Pegg has British accounting humour, does not pretend to be human, and is proud of being a bot!

The democratization of technology

The democratization of technology we are experiencing with AI is awesome. It reduces time to market, deepens the talent pool, and gives businesses of all sizes access to cutting-edge technology.

But with great power comes great responsibility. With a few large organizations developing the AI fundamentals that all businesses can use, we need to take a step back and ensure that the work we do is ethical and responsible.

Summarized below is the set of values I work to when building AI, along with the guardrails I believe the tech community should adopt to develop AI that is accountable and purpose-driven, especially at a time when AI is poised to revolutionize our lives.

The Ethics of Code: Developing AI for Business with Five Core Principles

Below is a condensed version of the five core principles; the full version can be found here.

AI should reflect the diversity of the users it serves

Both industry and community must develop effective mechanisms to filter bias, as well as negative sentiment, in the data that AI learns from, ensuring that AI does not perpetuate stereotypes.
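To illustrate one such mechanism, here is a minimal sketch of a pre-training data filter, assuming NLTK’s VADER sentiment analyzer; the blocklist and threshold are invented for illustration and are not anything Sage has published.

```python
# Hypothetical pre-training filter: drop clearly negative or blocklisted
# examples before they reach the model. The blocklist terms and threshold
# are illustrative assumptions.
from nltk.sentiment import SentimentIntensityAnalyzer  # needs: nltk.download("vader_lexicon")

BLOCKLIST = {"placeholder_slur"}  # a real project would curate this carefully
analyzer = SentimentIntensityAnalyzer()

def is_acceptable(utterance: str, min_compound: float = -0.5) -> bool:
    """Keep an utterance unless it is blocklisted or clearly negative."""
    if set(utterance.lower().split()) & BLOCKLIST:
        return False
    return analyzer.polarity_scores(utterance)["compound"] >= min_compound

training_data = [
    "Thanks, that invoice is sorted!",
    "You are useless and I hate this",
]
clean_data = [u for u in training_data if is_acceptable(u)]
print(clean_data)  # the clearly negative example is dropped
```

A real pipeline would pair a filter like this with human review; no single sentiment score catches subtle bias.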
AI must be held to account – and so must users

Users build a relationship with AI and start to trust it after just a few meaningful interactions. With trust comes responsibility, and AI needs to be held accountable for its actions and decisions, just like humans. Technology should not be allowed to become too clever to be accountable; we don’t accept this kind of behaviour from other ‘expert’ professions, so why should technology be the exception?
Reward AI for ‘showing its workings’

Any AI system learning from bad examples could end up socially inappropriate; we have to remember that most AI today has no cognition of what it is saying. Only broad listening and learning from diverse data sets will solve this.

One approach is to build a reward mechanism into AI training. Reinforcement learning rewards should be based not just on what the AI or robot does to achieve an outcome, but also on how well its actions align with human values in accomplishing that result.
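As a hedged sketch of that idea, the code below blends a task reward with an alignment term; `alignment_score` and its weight are hypothetical stand-ins for whatever audit of the agent’s behaviour a real system would use.

```python
# Illustrative reward shaping: score an episode on the outcome *and* on how
# the agent got there. `alignment_score` is a hypothetical stand-in for a
# real audit of actions against human values.
from typing import Sequence

DISALLOWED = {"deceive_user", "hide_reasoning"}  # invented action labels

def alignment_score(actions: Sequence[str]) -> float:
    """Toy audit: fraction of actions that are not disallowed."""
    if not actions:
        return 1.0
    return sum(a not in DISALLOWED for a in actions) / len(actions)

def shaped_reward(task_reward: float, actions: Sequence[str],
                  weight: float = 0.5) -> float:
    """Blend the raw task outcome with an alignment bonus (weight is an assumption)."""
    return task_reward + weight * alignment_score(actions)

# An agent that succeeds by deceiving scores lower than one that succeeds openly.
print(shaped_reward(1.0, ["explain_step", "deceive_user"]))   # 1.25
print(shaped_reward(1.0, ["explain_step", "show_workings"]))  # 1.5
```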

AI should level the playing field

Voice technology and social robots provide newly accessible solutions, particularly for people disadvantaged by impaired sight, dyslexia, or limited mobility. The business technology community needs to accelerate the development of such technologies to level the playing field and broaden the available talent pool.

AI will replace, but it must also create

The robotification of tasks will create new opportunities, and we need to train people for them. If business and AI work together, people will be able to focus on what they are good at: building relationships and caring for customers.

Kriti Sharma is vice-president of Bots and Artificial Intelligence at Sage. She is an artificial intelligence technologist, a mobile product inventor, and one of the first chatbot executives in the technology industry.
