Could Your AI (Artificial Intelligence) Machines Land You in Trouble?

Written by EO Executives on Sep 13, 2017


Szu Hill, EO Executives – VP & C-Suite exec search, 13th September 2017

In industries where AI has proliferated, one of the biggest immediate minefields for the C-suite is perhaps the GDPR implications of data collection and processing by AI entities.  When the AI entities used by your organisation collect vast amounts of data, either:

  • impersonally, through the Internet of Things (such as sensors or trackers), or
  • through more personal interactions with customers - e.g. chatbots handling automated online banking transactions, or an Alexa-like AI customer assistant helping a customer purchase goods on an e-commerce platform,

… we have to ask whether all the information collected about the customer has been obtained in a GDPR-compliant manner.  And once that data has been collected, how do we ensure it is processed lawfully, so that it doesn’t infringe the customer’s data privacy or consumer rights?
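To make that second question concrete, here is a minimal sketch, in Python, of one way a data-collection pipeline could refuse to store personal data unless a lawful basis (here, consent for a specific purpose) is already on record. The names (ConsentRegistry, DataPoint, collect) are invented for illustration; this is one possible approach, not a compliance recipe.

```python
# Minimal sketch only: a collection step that refuses to store personal data
# without a recorded lawful basis. All names here are hypothetical.
from dataclasses import dataclass


@dataclass
class DataPoint:
    subject_id: str   # the customer the data relates to
    category: str     # e.g. "contact", "purchase_history"
    value: str


class ConsentRegistry:
    """Records which processing purposes each customer has consented to."""

    def __init__(self):
        self._consents = {}   # subject_id -> set of consented purposes

    def grant(self, subject_id, purpose):
        self._consents.setdefault(subject_id, set()).add(purpose)

    def has_basis(self, subject_id, purpose):
        return purpose in self._consents.get(subject_id, set())


def collect(point, purpose, registry, store):
    """Store the data point only if a lawful basis (here: consent) is on record."""
    if not registry.has_basis(point.subject_id, purpose):
        return False   # discard rather than hold data we cannot justify keeping
    store.append((point, purpose))
    return True


# Usage: a reading is kept only once consent for that purpose has been recorded.
registry, store = ConsentRegistry(), []
reading = DataPoint("cust-42", "purchase_history", "3 items, £120")
print(collect(reading, "personalised_offers", registry, store))   # False - no consent yet
registry.grant("cust-42", "personalised_offers")
print(collect(reading, "personalised_offers", registry, store))   # True
```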

How, then, can senior management and the C-suite ensure that the AI machines working for them can tell the difference between data that is acceptable to collect and process, and data that isn’t?

For example, can we fully control or limit the types and depth of interaction that AI entities (such as chatbots or virtual customer service assistants) have with our customers?  At what point does the AI entity know to escalate a customer interaction or transaction to a human colleague?

No doubt GDPR-compliant policies and procedures can be built into the questions AI entities ask customers, and the way they ask them, so that inappropriate data isn’t accidentally solicited in the customer’s responses.  One possible shape for such a safeguard is sketched below.
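Purely as an illustration (the keyword lists, function names and escalation rule are all invented, not anyone’s actual implementation), a hypothetical chatbot pipeline might flag messages that appear to touch on special-category data, keep the raw text out of its logs, and hand the conversation over to a human colleague:

```python
# Minimal sketch only: flag special-category data in chatbot messages, keep the
# raw text out of the transcript, and escalate to a human. Keyword lists, names
# and the escalation rule are invented for illustration.
import re

SPECIAL_CATEGORY_TERMS = {
    "health": ["diagnosis", "medication", "disability"],
    "beliefs": ["religion", "political party"],
}


def classify(message):
    """Return the special-category topics a customer message appears to touch on."""
    found = []
    for topic, terms in SPECIAL_CATEGORY_TERMS.items():
        if any(re.search(rf"\b{re.escape(term)}\b", message, re.IGNORECASE)
               for term in terms):
            found.append(topic)
    return found


def handle(message, transcript):
    """Log safe messages; redact and escalate anything that looks special-category."""
    topics = classify(message)
    if topics:
        transcript.append(f"[redacted: possible {', '.join(topics)} data]")
        return "escalate_to_human"
    transcript.append(message)
    return "continue_with_bot"


transcript = []
print(handle("What time do you open on Saturday?", transcript))            # continue_with_bot
print(handle("My medication affects which policy I can buy", transcript))  # escalate_to_human
print(transcript)   # the second message is stored only in redacted form
```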

However, in more advanced machine-learning settings, AI entities may learn to contextualise data and draw more sophisticated logical inferences from the customer responses (i.e. data) they acquire.  This makes AI entities such as virtual customer service assistants in finance, mortgage or insurance services able to process data in far more sophisticated ways (across multiple dimensions), and therefore potentially able to make more sophisticated decisions for, or on behalf of, the customer.

But what happens if the customer decides that the wrong decision has been made for them, or on their behalf? Especially in weightier cases such as mortgage application processing, algorithm-driven financial investment decisions, or medical diagnosis?

The GDPR (Article 22) states that individuals “shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her, or similarly significantly affects him or her”.  The customer or consumer therefore has the right to contest decisions borne out of AI-enabled capabilities, including the right to receive a justification of the automated decision.
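One way an organisation might operationalise that principle - sketched below with hypothetical class and field names, and not as a reference implementation - is to block the release of any decision with significant effect until a named human reviewer signs it off, and to store a plain-language justification so the customer can contest the outcome later:

```python
# Minimal sketch only: significant automated decisions are not released without a
# named human reviewer, and a justification is logged so the customer can contest
# it. Class and field names are hypothetical, not a reference implementation.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional


@dataclass
class Decision:
    subject_id: str
    outcome: str                 # e.g. "mortgage_declined"
    model_rationale: str         # what the system reports about its own reasoning
    significant_effect: bool     # does this legally or similarly affect the person?
    human_reviewer: Optional[str] = None
    released: bool = False
    audit_log: list = field(default_factory=list)


def release(decision, reviewer=None):
    """Release a decision, forcing human sign-off whenever the effect is significant."""
    if decision.significant_effect and reviewer is None:
        raise ValueError("significant decisions require a human reviewer before release")
    decision.human_reviewer = reviewer
    decision.released = True
    decision.audit_log.append({
        "at": datetime.now(timezone.utc).isoformat(),
        "reviewer": reviewer,
        "justification": decision.model_rationale,
    })
    return decision


d = Decision("cust-42", "mortgage_declined",
             "income below affordability threshold", significant_effect=True)
try:
    release(d)                              # rejected: no human in the loop
except ValueError as err:
    print(err)
release(d, reviewer="underwriter_jane")     # accepted, with an auditable justification
print(d.audit_log[0]["justification"])
```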

A key issue for senior management arises when the AI their companies use becomes so advanced or complicated, and its decisions are based on such a vast quantity of data, that it becomes impossible to give a clear justification of a specific decision taken or enabled by an AI entity.  Even the technology companies that build these AI functions into systems and machines may struggle to explain their AI’s decisions, as the algorithms used in deep learning can be so complex that no one can explain how the AI entity reached a particular decision.
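To illustrate the contrast, the sketch below uses an invented, deliberately simple linear scoring model: because every feature has an explicit weight, its contribution to a given decision can be read off directly and offered to the customer as a justification. A deep neural network offers no such direct read-out, which is exactly the explainability problem described above.

```python
# Minimal sketch only: with a deliberately simple linear model, each feature's
# contribution to a decision can be read off and shown to the customer. The
# weights and feature names are invented purely for illustration.
import math

WEIGHTS = {"income_to_loan_ratio": 2.1, "missed_payments": -1.4, "years_at_address": 0.3}
BIAS = -0.8


def score(applicant):
    """Return an approval probability plus each feature's contribution to the raw score."""
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    raw = BIAS + sum(contributions.values())
    probability = 1 / (1 + math.exp(-raw))   # logistic link
    return probability, contributions


prob, why = score({"income_to_loan_ratio": 0.9, "missed_payments": 2, "years_at_address": 5})
print(f"approval probability: {prob:.2f}")
for feature, contribution in sorted(why.items(), key=lambda kv: kv[1]):
    print(f"  {feature}: {contribution:+.2f}")   # the 'justification' a customer could be given
```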

So, going back to the question: what happens if the customer decides that the wrong decision has been made for them, or on their behalf?

This is something C-level leaders will have to safeguard their organisations and their customers against when harnessing AI capabilities in their companies - but how, exactly?

I look forward to your ideas and opinions!

If you haven't already, take a look at the other blogs in this series.
