Welcome – Shivon Zilis

Hello and welcome!

Wow, it’s the fifth year we’ve done this conference. Every year, Ajay and I discuss whether we’re going to do it again (it’s not a given!), and what it comes down to is whether there are enough topics we’re grappling with that we feel aren’t getting enough light of day. This year they were in abundance, so thank you for showing up today and hopefully helping us get closer to some answers!

I thought I’d go out on a limb and share the thorniest topics I’ve found myself wrestling with over the past year. I’ll readily admit that they’re further out there than your average AI discussion, but in leaning into some of these difficult issues I’ve come out the other side with a more realistic and optimistic perspective, so I figured it was worth sharing. At the very least, consider it a fly-on-the-wall view of some of the thinking in Silicon Valley.

As always, please take this as merely one of many perspectives and of course come to your own conclusions!


When to withhold and when to deploy an AI system

One observation I’ve had over the years is that people get ~10,000x angrier at a machine that causes harm than at a human who causes the same harm. If the end goal is to maximize benefit and minimize harm, one implication is that we’re placing an incredibly high tax on the deployment of machine intelligence in mission-critical situations (e.g., healthcare, transportation). We should, of course, be incredibly fastidious about safety and consistently work to improve it; that is not up for debate. The real question is: what’s the right framework for pressing “go” on a solution that could deliver high benefit but operates in a safety-critical domain?

One point of view is that one shouldn’t deploy an AI system unless it’s 100% certain never to err or cause harm. I find that line of argument confusing because it fails to take into account a baseline for comparison. Humans make the majority of decisions today and we often err in ways that cause harm, so what’s struck me as a better way to evaluate these questions is to compare an AI system to the current baseline and reason from there.

Suppose 12% of medical images are currently misdiagnosed (an 88% human baseline) and an AI system reads them with 98% accuracy. Is the AI harming the 2% of patients it gets wrong, or is it providing a significantly safer readout by cutting the error rate six-fold? Likewise, if an algorithm has some inherent bias that is difficult to remove but is nonetheless far less biased than the humans currently making the decision, which solution do we go with?

I’m not certain what the right threshold for deploying a solution is (should it be just over the human baseline? 2x better? 10x?), but I’ve realized that a lot of good won’t happen if perfection, an often unattainable bar, is the bar, so I’ve tried to take a more nuanced perspective!
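To make the threshold question a bit more concrete, here’s a minimal sketch (in Python, with made-up numbers) of what a “beat the human baseline by a factor of X” rule could look like. The `should_deploy` helper, the error rates, and the 2x/10x factors are all illustrative assumptions, not a real deployment policy.

```python
# Minimal sketch of a "compare to the human baseline" deployment rule.
# All numbers are hypothetical illustrations, not real clinical figures.

def should_deploy(human_error_rate: float,
                  ai_error_rate: float,
                  required_improvement: float = 2.0) -> bool:
    """Deploy only if the AI's error rate beats the human baseline
    by at least `required_improvement` times."""
    return human_error_rate / ai_error_rate >= required_improvement

# Medical-imaging thought experiment above: humans misread 12% of
# images, the AI misreads 2%, i.e. a 6x reduction in errors.
print(should_deploy(0.12, 0.02, required_improvement=2.0))   # True: 6x clears a 2x bar
print(should_deploy(0.12, 0.02, required_improvement=10.0))  # False: 6x falls short of 10x
```

The interesting (and unresolved) part is, of course, choosing `required_improvement`, which is exactly the judgment call described above.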


Planning for the future of AI

When I ask people what makes humans unique, the most common answer is that humanity developed the neocortex. This is true, but the question is why it matters. Sure, we’re smarter than other species in many ways, but there are also ways in which we’re outclassed. So what’s the unique thing that gave our species a breakaway lead? I think what is most fundamentally human is that we are a technology-creating species, specifically in a way that compounds intergenerationally.

We often use “technology” to describe bleeding-edge digital innovations, but that’s an overly narrow meaning that misses just how core harnessing technology is to humanity. From controlling fire, to developing language, to inventing agriculture, to building industrial machines, humanity’s major milestones have all been technology-based. When I think about it, I struggle to find anything that is more uniquely human.

The reason I raise this is that it’s at the core of many discussions about the far future of AI. In conversations about what AI looks like in 10-50 years, I notice that people naturally assume we have complete control over whether AI continues to progress, or even exists at all. While I agree we have a lot of control over how we shape AI, it doesn’t feel plausible that we have a choice as to whether it continues to become smarter and more powerful over time.

If humanity has shown one thing, it’s that in spite of war, plague, and famine we still manage to move the technology bar forward. Sure, it ebbs and flows in a non-linear way, but it happens! When I ask people to define a ceiling past which it’s impossible for AI to become more intelligent, I’ve always failed to get a plausible answer. And, unlike other cases where the development of a technology can be somewhat constrained (e.g., nuclear weapons), the building blocks of AI are readily available: compute, data, and algorithms.

This is a long way of saying it took me years to realize I shouldn’t waste energy on debating “if” AI would get extremely intelligent and whether I want that to happen or not. Rather, because I take it as a given, at least on some time scale, I focus 100% of my attention on the set of actions we can take to ensure the best possible outcome for humanity. Some would call this a “fatalistic” approach, but I view it more as pragmatic: take certain forces you can’t control as a given, then focus on everything you can affect!


Winner take all dynamics

One of the things that’s been really hard for me to stomach is the market dynamics of digital technologies and AI. It’s network effects on top of network effects, with the end result likely being a winner-take-all outcome, at least within each subdomain. Historically, many industries with network effects have been nationally regulated to prevent winner-take-all outcomes, but these markets are now thoroughly global. So you have this stacked network-effects phenomenon coupled with a waning of effective governance. Yikes!

The existence of monopolies goes against all of my sentiments, and “winner take all” is perhaps the statement most offensive to the general Canadian ethos, so the question is what one should do to make sure we wind up with a society we like in spite of these phenomena. Clearly, better global governance of AI is a positive step, but many of us are not in policy positions. If that’s the case, what positive action could Canadians interested in best positioning our country for the future take?

It’s pretty hard to argue for what should happen with a given technology unless you have a true seat at the table because you’ve helped create it, so I think the only answer is to fight hard to be the winner in several valuable AI subdomains and then choose what happens with the ensuing technology and returns. Value creation will be disproportionate in AI, but the distribution of its benefits doesn’t have to be! My hope is that Canadians will double down to win in specific areas that are important to us and then do wonderful things for our own population and the world.

One of my favorite article titles to come out of this conference is “Why Artificial Intelligence Should Be More Canadian”. I won’t lie: I’m heavily biased towards our national morality and way of life, so I do hope we get to play a significant role in dictating how AI affects the world!
