At Stradigi AI, we are deeply passionate about the ethical and legal considerations surrounding artificial intelligence. We have been discussing different ways to not only shed light on this much-talked-about issue, but also to contribute our insights in a hopeful and refreshing manner without overlooking the challenges associated with the disruptive nature of AI. How do we integrate this technology into our lives in an equitable and meaningful way? How do we ensure that the end results not only reflect the values and freedoms we strive for as a society, but can thrive and be improved upon?
I previously discussed why legal and ethical issues must be part of the conversation; now I would like to discuss a few of the key legal issues involved in building this framework.
So, are robots going to dominate our world and steal all our jobs?
Yes, and no. It seems highly probable that jobs will be lost (and others created), and robots will dominate many aspects of our lives (and probably improve efficiency and quality of life).
But this isn’t about being a futurist. While it is a very important consideration, whether or not the robots take over, and the impact that may have, will depend greatly on how we regulate development, both from a legislative and an industry-standards perspective. Regulation is extremely challenging. First, as is the case with most technology that develops at the speed of light, laws and regulations addressing AI directly are few and far between. However, that isn’t necessarily a bad thing, for two reasons: (1) our legal framework (in this case, Canada’s) allows for flexibility in interpretation, which provides the space to apply well-known principles to new concepts or technologies; and (2) attempting to regulate a technology that is constantly changing is like trying to shoot a moving target: if you do it too soon, you’ll miss, and if you wait too long, the rules will be outdated.
To resolve this conundrum, finding a balance is essential.
Part of the solution is having an honest discussion with all stakeholders to obtain a better understanding of different perspectives. We are also lucky enough in Montreal to have a few groups dedicated to fostering these conversations and bringing together these actors, such as Forum IA responsable and Mtl Ethics Meetup. Many other organizations are doing similar important work around the globe.
For companies in the AI space, a key question is “what does the legal landscape in Canada look like right now regarding AI?”
What are you allowed or not allowed to do? Where can you incur the most risk? To answer those questions, let’s first clarify some terms. AI can refer to many things, but for the purposes of this conversation we will focus only on “weak” (or narrow) AI – that is, machine learning algorithms – rather than the hypothetical general-purpose “strong AI” often associated with robots. If you’re interested in AI, you know that an algorithm is a set of rules or calculations for solving a problem, programmed to achieve a specific action. The process for building an AI algorithm typically goes like this: you select a dataset relevant to your problem, you feed it to your algorithm (that is, you “train” it using a chosen learning method), and you are left with an output – your results.
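The dataset-in, train, output-out pipeline just described can be sketched in a few lines of Python. This is a toy illustration only – the 1-nearest-neighbour “learning method” and the tiny labelled dataset are invented for this example, and real products use proper ML libraries and far larger data:

```python
# Toy sketch of the train-and-predict pipeline: a 1-nearest-neighbour
# classifier built from scratch on an invented labelled dataset.

def euclidean(a, b):
    # Distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def train(dataset):
    # "Training" for nearest-neighbour simply means memorising
    # the labelled examples.
    return list(dataset)

def predict(model, point):
    # The output: the label of the closest training example.
    nearest = min(model, key=lambda ex: euclidean(ex[0], point))
    return nearest[1]

# Invented dataset: two clusters of points labelled "approve" / "deny".
dataset = [((1.0, 1.0), "approve"), ((1.2, 0.8), "approve"),
           ((5.0, 5.0), "deny"), ((4.8, 5.2), "deny")]

model = train(dataset)
print(predict(model, (1.1, 0.9)))  # near the "approve" cluster
print(predict(model, (5.1, 4.9)))  # near the "deny" cluster
```

The key point for the legal discussion: the model’s output is entirely determined by the examples it was fed, which is why the dataset questions below matter so much.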
In order to understand how our current legal framework can be applied to these issues, we need to understand how an AI product is built and used. As we mentioned before, a key component is having an appropriate dataset. The nature and quality of these datasets is crucial in determining how accurate and efficient your algorithm will be. As they say, “garbage in, garbage out”. Since your product will be influenced by the data you feed it, you need to ensure that (a) it includes all relevant information for the problem you are trying to solve; and (b) you have identified its weaknesses and their potential consequences on the final output. Once your algorithm is trained on the one or several datasets you have chosen, you need to test it and refine it. After testing and achieving a reasonable level of accuracy, the product is ready to be released into the wild.
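The test-and-refine step above can likewise be sketched: hold out some labelled examples the model did not see, and measure accuracy before release. The simple threshold “model” and the held-out examples below are invented purely for illustration:

```python
# Toy sketch of the test-and-refine step: measure a model's accuracy
# on held-out labelled data before releasing it "into the wild".

def accuracy(model_fn, test_set):
    # Fraction of held-out examples the model labels correctly.
    correct = sum(1 for features, label in test_set
                  if model_fn(features) == label)
    return correct / len(test_set)

# Invented rule-based "model": a simple threshold on summed features.
def model_fn(features):
    return "positive" if sum(features) > 1.0 else "negative"

# Invented held-out examples the model was never trained on.
held_out = [((0.9, 0.4), "positive"), ((0.2, 0.3), "negative"),
            ((0.8, 0.9), "positive"), ((0.1, 0.2), "negative")]

print(accuracy(model_fn, held_out))  # 1.0 on this toy set
```

In practice, the accuracy figure you accept before release is exactly the kind of judgment call that the liability questions below turn on.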
This is the heart of the most pressing legal issues.
At its core, AI is a tool that learns and adapts to new data according to an algorithm. You can’t predict everything it is going to do, which gives it an inherent uncertainty (often expressed as a confidence level). Our legal system is based on providing (relative) certainty and guidance to individuals and companies as to how they are expected to behave in society, so the unpredictable nature and potential impact of AI is, unsurprisingly, what scares people the most.
In response to this uncertainty, many legal questions will need to be answered to assess and control the risks related to an AI solution. Concretely, the usual suspects will be involved:
Bias and Discrimination:
If your dataset is biased at the outset, what happens to the output of the algorithm? Will certain groups or populations be disadvantaged? What will the consequences be for the company and for individuals?
Contracts and Liability:
If you’re a company, you can’t possibly assume all the risks related to your product (especially the unpredictable ones). How, then, do you draft a fair contract between yourself, your clients and, ultimately, the users?
Privacy and Data Protection:
You have access to millions and millions of data points – how do you protect them? Can you really protect information, or is it hopeless? If your algorithm is based on confidential or sensitive information, how do you make sure that this information is useful for the product while also protecting the owners of said information? Does AI mean that privacy will be a thing of the past?
Intellectual Property:
Since AI algorithms are highly collaborative, built upon prior research and development, how do we balance the competing concerns of the public good and market advantages? How do we increase collaboration between academia and industry? How can the private legal concerns of creating IP and value be addressed while we push forward the collective good?
It’s a tall order. One thing is certain: despite everything, we remain convinced that by reflecting on these issues and integrating practical steps into our own products, we are going in the right direction. We will continue reviewing and answering the questions raised by the legal and ethical dimensions of AI, and attempt to offer solutions that promote equitable development.