When Academia and AI Industry Experts Meet

Jaime Camacaro, Chief Innovation Officer, Ben Tang, Business Analyst, and Bobby Prévost, Research Scientist

I recently had the pleasure of representing Stradigi AI at the IVADO Quebec Data Valorization Conference hosted by HEC Montréal with Jaime Camacaro, our Chief Innovation Officer, and Bobby Prévost, one of our research scientists. Holding true to IVADO’s mission, the two-day conference offered valuable lessons on fostering productive collaboration between academia and industry.

Here are some of my thoughts and key takeaways!

1. On Public Private Partnerships

Artificial intelligence (AI) research has historically been conducted in universities by academics seeking to advance our understanding of the world. However, with the recent resurgence of AI led by deep learning and methods such as convolutional neural networks, prominent professors have been recruited by major tech companies like Google.

Although this is not inherently a bad thing (our own Chief Scientist, Carolina Bessega, also hails from academia), we should be mindful that universities and companies have different interests and fulfill different roles. Thierry Badard from Laval University spoke on how universities are often more focused on answering complex long-term questions, whereas companies work to address concrete business problems. Even when a company engages in fundamental research, it is likely tangential to larger strategic business objectives.

At Stradigi AI we believe that the biggest benefit of uniting academia and industry is the diversity of opinion that emerges. Our research team is composed of different disciplines and backgrounds, ranging from computer science and fundamental physics, to software engineering and neuroscience. Researchers with purely academic backgrounds have successfully adapted to the industry environment. As a result, we are able to successfully combine the talents of those who are interested in theory with others who are intensely product-focused. We break difficult problems into smaller challenges and constantly iterate to design optimal solutions.

2. On Smart Cities

It was announced in 2017 that Sidewalk Labs, a sister company of Google under Alphabet, had been awarded the opportunity to develop 800 acres of waterfront property in Toronto. Sidewalk Toronto promises to be the epitome of urbanism in the 21st century, complete with interconnected digital health infrastructure, affordable mixed-use housing, and autonomous vehicles of all sorts.

Although exciting, this poses questions for local governments and their citizens. As attracting world-class talent becomes increasingly important for cities, more and more concessions may be made to companies like Google to do as they please with municipal land. Stéphane Guidon, acting director of Montréal Smart City, warned that the proliferation of autonomous vehicles presents new concerns for data privacy. It would be impossible to constantly gain consent from every single person that interacts with a Google-operated device or service.

Who will make the decisions that urban planners are currently responsible for, such as whether to prioritize cars or pedestrians? Perhaps smart cities are better designed by local companies, as they will have a larger stake in the outcome and a stronger connection with the community.

Municipalities need to carefully manage their relationships with tech companies and should strive to maintain control of key infrastructure and decision making. Otherwise, we risk repeating the missteps of modernists from the late 1950s and 1960s, who were so consumed with utopian visions of the future that they failed to design livable spaces for real people.

AI ethics panel at IVADO Quebec Data Valorization Conference

3. On AI Ethics

Although I am optimistic about the benefits of AI, we must also consider its potential negative impacts on our social, political, and economic systems. The second day of the conference acted as a kick-off for the co-construction of the Montreal Declaration for Responsible AI.

The declaration outlines seven main principles: well-being, autonomy, justice, privacy, knowledge, democracy, and responsibility. The document is intended to guide our thinking about how to address changes in society at large. At Stradigi AI, we believe that the questions surrounding AI ethics are some of the most important of our time.

One area with near-term implications is healthcare. AI-powered medical imaging can now match or outperform human doctors on certain diagnostic tasks. The FDA has already approved clinical decision support tools that use deep learning to help diagnose stroke and certain types of cancer. The physician still gets the final say on the diagnosis, but at what point, if ever, will this change?

If AI systems one day become so accurate that introducing human judgment into the mix worsens health outcomes, should the doctor’s word be disregarded and the AI allowed to function autonomously? If a physician overrides the AI’s assessment and misdiagnoses their patient, would this amount to malpractice? There was some disagreement among the panel: François Laviolette of Laval University expressed concerns about delegating full autonomy to AI systems, whereas others conceded that this may be necessary in situations where humans are time-constrained. Technologies we are developing at Stradigi AI in medical signal processing may eventually need to address these questions, and we are happy to be actively involved in the conversation.

All in all, I am incredibly glad that Stradigi AI took part in these important discussions, and I look forward to sharing more of my thoughts on both the societal impacts of AI, as well as its business applications.