Expert Q&A with KPMG’s Sylvia Klasovec Kingsmill on Risk, Privacy, and Your AI Roadmap in 2020

Data and privacy issues are – and will continue to be – among the foremost concerns and challenges for executives seeking to effectively manage risk and change in the realm of AI. We sought advice from one of KPMG Canada’s experts in the field, Sylvia Klasovec Kingsmill, to gather actionable insights on how organizations can future-proof their AI projects. Sylvia brings 15 years of experience to the table, having provided strategic privacy risk management and advisory services to clients and partners in the US and Canada. Here, we ask Sylvia six essential questions on risk, data, intelligent partnerships, and more.

1. When implementing an AI roadmap, many organizations weigh various risk factors surrounding data, strategic decisions, infrastructure, etc. What are the most important things to keep in mind when assessing risk?

In order to harness the benefits that AI technologies can bring, we need to proactively manage the risks. There’s no shortage of stories in which harmful algorithmic bias has negatively affected the public sphere: think of the Cambridge Analytica scandal and the manipulation surrounding the 2016 US election. For AI to be truly transformative – in the enterprise and elsewhere – we must have confidence in how it functions.

I recommend that business leaders implement a comprehensive trust model built on reliable algorithms, strong cyber security practices, sound IT processes, trustworthy data management, and effective control mechanisms. Without a thorough understanding of how your AI will affect each of these areas, you are setting yourself up for significant risk.

A handful of these risks include (one way to track them is sketched after the list):

  1. Risks of human bias in training data, coding, or logic, or of inappropriate modelling, resulting in flawed algorithmic design
  2. Data risks, including inappropriate data collection (lacking consent or a legitimate basis for collection), inaccurate, incomplete, or irrelevant data, and inappropriate contextual use of, or assumptions about, data outputs
  3. Poor implementation or integration with business operations, resulting in inappropriate decisions
  4. Lack of governance or of alignment with an organization’s values and principles
  5. Technical design flaws due to a lack of testing, validation, or training of algorithms
  6. Security and privacy flaws due to hacked data inputs, which result in flawed outcomes
  7. Lack of transparency, given the proprietary nature of the solution, resulting in decisions being made in a “black box”
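
A lightweight way to make categories like these actionable is a simple risk register that scores each item and flags the ones needing review. The Python sketch below is illustrative only: the category names mirror the list above, while the scoring scale and review threshold are hypothetical and would need to be calibrated to your organization.

```python
from dataclasses import dataclass

@dataclass
class AIRisk:
    category: str       # e.g. "algorithmic design", "data", "transparency"
    description: str
    likelihood: int     # 1 (rare) to 5 (almost certain) -- hypothetical scale
    impact: int         # 1 (negligible) to 5 (severe)   -- hypothetical scale

    @property
    def score(self) -> int:
        # Classic likelihood-times-impact scoring; swap in whatever
        # weighting scheme your risk function already uses.
        return self.likelihood * self.impact

def flag_risks(register: list[AIRisk], threshold: int = 12) -> list[AIRisk]:
    """Return risks whose score meets or exceeds the review threshold."""
    return sorted(
        (r for r in register if r.score >= threshold),
        key=lambda r: r.score,
        reverse=True,
    )

# Example register mirroring the categories above (values are illustrative).
register = [
    AIRisk("algorithmic design", "Human bias in training data or logic", 4, 4),
    AIRisk("data", "Collection without consent or a legitimate basis", 3, 5),
    AIRisk("transparency", "Black-box decisions in a proprietary model", 3, 3),
]

for risk in flag_risks(register):
    print(f"[{risk.score:>2}] {risk.category}: {risk.description}")
```

The point is not the arithmetic but the habit: every risk in the list gets an owner, a score, and a review trigger instead of living only in a slide deck.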

2. From a more general perspective, what should business leaders remain hyper-aware of when they are beginning their AI journey? What types of risks are they likely to be challenged on the most?

First and foremost, I will say that it’s a good thing to be challenged on risks. It might be frustrating when you’re focused on getting your project up and running, but it’s worth it in the long run. There are three main types of risk: 1) strategic and financial risks; 2) reputation risks; 3) regulatory risks.

In the first category, errors can put an organization and its customers at risk when algorithms are relied upon for financial or strategic decisions, where a flaw could result in losses or inaccurate reporting. Reputation risks, on the other hand, arise when algorithmic output is misaligned with the values, principles, and ethics of an organization. And lastly, regulatory risks relate to flawed algorithmic decisions that violate the rule of law, as well as constitutional rights and freedoms, including privacy rights. If a fully fledged AI Risk Framework is out of scope for your AI roadmap this year, you should ensure that you’re working with a partner that checks all of the above boxes in their own processes.

The chart below lists my recommended components for an AI Risk Framework; try applying them to your organization.

[Chart: AI Risk Framework]

3. Strategic AI partners can offer immense benefits to organizations that are onboarding new, transformative technologies. What should today’s enterprises seek in an AI partner?

It’s essential that partners bring complementary skill sets, forming a dynamic team to implement AI solutions. AI deployments require integrated digital and human collaboration, with organizations working alongside government, business, and educational institutions. Strategic partners are well positioned to help AI providers enter the marketplace and to ensure their solutions are scalable, adaptable, and implemented with the right support to drive adoption. This is at the core of the Stradigi AI & KPMG Canada partnership.

4. You work with a number of different clients across North America. To date, what’s the biggest blind spot they typically encounter when it comes to privacy and regulations in the context of AI?

I would say this is more of a misconception than a blind spot: clients may think they can comply with a weaker set of privacy rules or standards in one jurisdiction than in another simply because local laws are non-existent or piecemeal, or because enforcement is ineffective. Right now, Canada and the EU have stronger privacy rules at the national and local levels, backed by vocal privacy regulators and data protection authorities. Simply put, those who adopt the highest privacy and ethical practices for data processing will reap the benefits of earning customer trust. I believe that local laws will soon catch up to the speed of AI, and it will be imperative that organizations stay well-informed and up to speed leading up to, and during, these legislative evolutions.
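
One concrete way to read “adopt the highest standard” is to derive a single internal policy from the strictest requirement in every jurisdiction you operate in. The sketch below is a toy illustration: the jurisdictions, fields, and values are all hypothetical placeholders, not actual legal requirements.

```python
# Hypothetical per-jurisdiction requirements -- placeholders, not legal advice.
REQUIREMENTS = {
    "CA": {"explicit_consent": True,  "max_retention_days": 365},
    "EU": {"explicit_consent": True,  "max_retention_days": 180},
    "US": {"explicit_consent": False, "max_retention_days": 730},
}

def strictest_policy(reqs: dict) -> dict:
    """Build one policy that satisfies every jurisdiction at once."""
    return {
        # Require explicit consent if any jurisdiction does.
        "explicit_consent": any(r["explicit_consent"] for r in reqs.values()),
        # Retain data no longer than the shortest limit allows.
        "max_retention_days": min(r["max_retention_days"] for r in reqs.values()),
    }

print(strictest_policy(REQUIREMENTS))
# -> {'explicit_consent': True, 'max_retention_days': 180}
```

Engineering to the strictest common denominator costs more up front, but it removes the per-jurisdiction audit burden Sylvia describes and positions you for the local laws she expects to catch up.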

5. What are your recommended resources for business executives who want to keep a close watch on developments in AI-related policies?

Most of my work is in the Canadian context – so that’s where most of my recommendations come from! For my clients and AI partners in Canada, I always recommend the Office of the Privacy Commissioner of Canada, a great resource because it issues guidance on regulatory expectations. In general, governments, businesses, and citizens alike are demanding more accountability for how AI technologies are used. To stay on top of what these accountability items are, and to know the practices inside and out, you can also look at standards-setting bodies such as the CIO Strategy Council.

Editor’s note: companies in the United States should check out this resource as a starting point. The CSO website also features up-to-date news and insights.

6. Where do you see the future of the regulations, policy, and governance space? How will innovative organizations continually ensure they’re upholding evolving standards?

In 2020, organizations can ensure they are abiding by standards by adopting a proactive risk mitigation strategy that leverages traditional risk principles while also standing up to data protection regulations and guidelines. This involves new risk management techniques that continuously assess, evaluate, and test the effectiveness of algorithmic outputs, as opposed to a point-in-time approach to assessing algorithmic risks. In addition to implementing the above-mentioned AI Risk Framework, the next step is adopting a “privacy by design” approach. Privacy by design recognizes that privacy is not a “one and done” exercise but a continual obligation for any enterprise managing its operational and reputational data. Stradigi AI’s legal counsel, Helene Beauchemin, has great insights to share on that topic in this article.
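
To make “continuous rather than point-in-time” concrete, here is a minimal monitoring sketch. It assumes a binary classifier whose live predictions arrive tagged with a group attribute; the metric (a demographic-parity gap), window size, and alert threshold are hypothetical choices for illustration, and a production system would sit on a proper monitoring stack.

```python
from collections import deque

class OutputMonitor:
    """Rolling check of a model's positive-prediction rate per group.

    Continuously compares group rates over a sliding window instead of
    auditing once at deployment; window and gap tolerance are hypothetical.
    """

    def __init__(self, window: int = 1000, max_gap: float = 0.10):
        self.max_gap = max_gap
        self.window = window
        self.history: dict[str, deque] = {}

    def record(self, group: str, prediction: int) -> None:
        # Keep only the most recent `window` predictions per group.
        self.history.setdefault(group, deque(maxlen=self.window)).append(prediction)

    def parity_gap(self) -> float:
        # Spread between the highest and lowest positive rates across groups.
        rates = [sum(d) / len(d) for d in self.history.values() if d]
        return max(rates) - min(rates) if len(rates) > 1 else 0.0

    def within_tolerance(self) -> bool:
        return self.parity_gap() <= self.max_gap

# Usage: feed every live prediction to the monitor and escalate on drift.
monitor = OutputMonitor(window=500, max_gap=0.08)
for group, pred in [("A", 1), ("B", 0), ("A", 1), ("B", 1)]:
    monitor.record(group, pred)
if not monitor.within_tolerance():
    print(f"Parity gap {monitor.parity_gap():.2f} exceeds tolerance; trigger a review.")
```

The same pattern extends to accuracy, drift, or any other control in the AI Risk Framework: the check runs on every batch of outputs, not once at sign-off.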

This Q&A is an excerpt from The AI Ascent: Engineering your pathway to innovation. Download our eBook here!

 
