Patrick M. Pilarski, Ph.D. Canada CIFAR AI Chair (Amii) – Interview Series
Dr. Patrick M. Pilarski is a Canada CIFAR Artificial Intelligence Chair, past Canada Research Chair in Machine Intelligence for Rehabilitation, and an Associate Professor in the Division of Physical Medicine and Rehabilitation, Department of Medicine, University of Alberta.

In 2017, Dr. Pilarski co-founded DeepMind's first international research office, situated in Edmonton, Alberta, where he served as office co-lead and a Senior Staff Research Scientist until 2023. He is a Fellow and Board of Directors member of the Alberta Machine Intelligence Institute (Amii), co-leads the Bionic Limbs for Improved Natural Control (BLINC) Laboratory, and is a principal investigator with the Reinforcement Learning and Artificial Intelligence Laboratory (RLAI) and the Sensory Motor Adaptive Rehabilitation Technology (SMART) Network at the University of Alberta.

Dr. Pilarski is the award-winning author or co-author of more than 120 peer-reviewed articles, a Senior Member of the IEEE, and has been supported by provincial, national, and international research grants.

We sat down for an interview at the annual 2023 Upper Bound conference on AI, held in Edmonton, AB and hosted by Amii (Alberta Machine Intelligence Institute).

How did you end up in AI? What attracted you to the industry?

Those are two separate questions. In terms of what attracts me to AI, there's something beautiful about how complexity can emerge, and how structure can emerge out of complexity. Intelligence is just one of these amazing examples of that, so whether it's coming from biology or from how we see elaborate behavior emerge in machines, I think there's something beautiful about that. That has fascinated me for a very long time, and my long, winding trajectory to the area of AI I work in now, which is machines that learn through trial and error, reinforcement learning systems that interact with humans while both are immersed in the stream of experience and the flow of time, came through all sorts of different plateaus. I studied how machines and humans could interact in terms of biomechatronic devices and biotechnology, things like artificial limbs and prostheses.

I looked at how AI could be used to support medical diagnostics, how we can use machine intelligence to begin to understand the patterns that lead to disease, or how different diseases might present in terms of recordings on a machine. But that is all part of this long-winded drive to really start to understand how you might get very complex behaviors out of quite simple foundations. And that is what I really love, especially about reinforcement learning: the idea that a machine can embed itself in the flow of time and learn from its own experience to exhibit very complex behaviors and capture the complex phenomena in the world around it. That has been a driving force.

As for the mechanics of it, I actually did a lot of sports medicine training back in high school. I studied sports medicine, and now here I am working in an environment where I look at how machine intelligence and rehabilitation technologies come together to support people in their daily lives. It's a very interesting journey: the side fascination with complex systems and complexity, and then the very practical pragmatics of how we can begin to think about how humans might be better supported to live the lives they want to live.

How did sports initially lead you to prosthetics?

What's really interesting about fields like sports medicine is looking at the human body and how someone's unique needs, whether sporting or otherwise, can in fact be supported by other people, by procedures and processes. Bionic limbs and prosthetic technologies are about building devices, building systems, building technology that helps people live the lives they want to live. These two things are really tightly connected. It's really exciting to be able to come full circle and have some of those much earlier interests come to fruition in co-leading a lab where we look at machine learning systems that work, in a tightly coupled way, with the person they are designed to support.

You've previously discussed how a prosthetic adapts to the person instead of the person adapting to the prosthetic. Could you talk about the machine learning behind this?

Absolutely. Throughout the history of tool use, humans have adapted ourselves to our tools, and then we have adapted our tools to the needs that we have. So there's this iterative process of us adapting to our tools. We are right now at an inflection point where, for the first time (you've perhaps heard me say this before if you've looked at some of the talks I've given), we can imagine building tools that bring in some of those hallmarks of human intelligence. Tools that can actually adapt and improve while they're being used by a person. The underlying technologies support continual learning: systems that can continually learn from an ongoing stream of experience. In this case, reinforcement learning and the mechanisms that underpin it, things like temporal difference learning, are really critical to building systems that can continually adapt while they're interacting with a person and while they're in use by a person, supporting them in their daily life.

Could you define temporal difference learning?

Absolutely. What I really like about this is that we can think about the core technologies: temporal difference learning and the fundamental prediction learning algorithms that underpin much of what we work on in the lab. You have a system that, much like we do, is making a prediction about what the future is going to look like with respect to some signal. Future reward is what we usually see, but you might imagine any other signal: how much force am I exerting right now? How hot is it going to be? How many donuts am I going to have tomorrow? These are all things you might imagine predicting. And so the core algorithm is really looking at the difference between my guess about what is going to happen right now and my guess about what is going to happen in the future, together with whatever signal I'm currently receiving.

How much force am I exerting as a robot arm lifting up a cup of coffee or a cup of water? This might mean looking at the difference between the prediction about the amount of force the arm will be exerting right now, or the amount it will exert over some period of the future, and comparing that to its expectations about the future and the force it's actually exerting. Put those all together and you get an error, the temporal difference error: a nice accumulation of temporally extended forecasts of the future and the differences between them, which you can then use to update the structure of the learning machine itself.
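To make this concrete, here is a minimal sketch of a one-step temporal difference update for a linear predictor, written in Python. The step size, discount, and feature vectors are illustrative assumptions, not details taken from Dr. Pilarski's work.

```python
import numpy as np

def td0_update(w, x, x_next, signal, gamma=0.9, alpha=0.1):
    """One TD(0) step for a linear predictor v(x) = w . x.

    w       -- weight vector of the predictor
    x       -- feature vector for the current moment
    x_next  -- feature vector for the next moment
    signal  -- the signal of interest received this step
               (reward, grip force, temperature, ...)
    gamma   -- discount: how far into the future the forecast extends
    alpha   -- step size for the update
    """
    prediction_now = w @ x          # guess about the future, made now
    prediction_next = w @ x_next    # guess about the future, one step later
    # Temporal difference error: the received signal plus the discounted
    # next guess, minus the current guess.
    td_error = signal + gamma * prediction_next - prediction_now
    # Move the predictor's weights to reduce that error.
    return w + alpha * td_error * x
```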

Again, for conventional reinforcement learning based on reward, this could mean updating the way the machine acts based on the expected future reward it might perceive. For a lot of what we do, it means looking at other kinds of signals, using generalized value functions, which extend the reinforcement learning process, the temporal difference learning of reward signals, to any signal of interest that might be relevant to the operation of the machine.
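Continuing the sketch above, a generalized value function reuses exactly the same update with a non-reward signal plugged in. Everything here, including the feature vectors and the measured force value, is hypothetical illustration.

```python
# A generalized value function (GVF) for grip force: the same TD(0)
# update as above, but the "signal" is a measurement rather than a reward.
w_force = np.zeros(8)           # weights of a force-predicting GVF
x = np.random.rand(8)           # stand-in features for this moment
x_next = np.random.rand(8)      # stand-in features for the next moment
measured_force = 2.3            # e.g., Newtons sensed on this step
w_force = td0_update(w_force, x, x_next, measured_force, gamma=0.8)
```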

You often speak about a prosthetic called the Cairo Toe in your presentations. What does it have to teach us?

The Cairo Toe. University of Basel, LHTT. Image: Matjaž Kačičnik

I like using the example of the Cairo Toe, a 3,000-year-old prosthesis. I work in the area of neuroprosthetics, where we now see very advanced robotic systems that can in some cases offer the same level of control, or degrees of control, as biological body parts. And yet I go back to a very stylized wooden toe from 3,000 years ago. I think what's neat is that it's an example of humans extending themselves with technology. What we're seeing right now in terms of neuroprosthetics and human-machine interaction is not something weird, new, or wacky. We've always been tool users, and non-human animals use tools too. There are many great books on this, especially Frans de Waal's "Are We Smart Enough to Know How Smart Animals Are?"

This extension of ourselves, the augmentation and enhancement of ourselves through the use of tools, is not something new; it's something ancient. It has been happening since time immemorial, on the very land we're on right now, by the people who lived here. The other interesting thing about the Cairo Toe is that the evidence, at least from the scholarly reports on it, shows that it was adapted multiple times over the course of its interactions with its users. They actually went in and customized it, adjusted it, modified it during its use.

My understanding is that it was not just a fixed tool attached to a person during their lifetime; it was a tool that was attached but also modified. It's an example of how the idea of tools being adapted over a sustained span of use is itself quite ancient. It isn't something new, and there are a lot of lessons we can learn from the co-adaptation of people and tools over many, many years.

You've previously mentioned the feedback pathway between prosthetics and the human. Could you elaborate on the feedback pathway?

We're also at a special time in terms of how we view the relationship between a person and a machine that aims to support them in their daily life. When someone is using an artificial limb, let's say someone with limb difference or an amputation, traditionally they would be using it very much like a tool, like an extension of their body, and we largely see them relying on what we think of as the control pathway: some sense of their will or their intent is passed down to the device, which is then tasked with figuring out what that intent is and executing on it, whether that's opening and closing a hand, bending an elbow, or making a pinch grip to grab a key. We often don't see people studying or considering the feedback pathway.

In many of the artificial limbs you might see deployed commercially, the pathway of information flowing from the device back to the person might be the mechanical coupling, the way the person actually feels the forces of the limb and acts upon them. It might be them hearing the whirring of the motors, or watching as they pick up a cup and move it across a desk or grab it from another part of their workspace. Those pathways are the traditional way of doing it. There are amazing things happening across the globe to look at how information can be fed back better from an artificial limb to the person using it. Even here in Edmonton, there's a lot of really cool work using the rewiring of the nervous system, targeted nerve reinnervation and other techniques, to support that pathway. But it is still a very hot emerging area of study to think about how machine learning can support interactions along that feedback pathway.

How can machine learning help a system that is perceiving and predicting a great deal about its world transmit that information clearly and effectively back to the person using it? I think that is a great topic, because when you have both the feedback pathway and the control pathway, both pathways are adapting, and both the device being used and the person themselves are building models of each other. You can do something almost miraculous: you can almost transmit information for free. If you have two systems that are really well attuned to each other, that have built very powerful models of each other, and that are adapting on both the control and the feedback pathways, you can form very tight partnerships between humans and machines that pass an enormous amount of information with very little effort and very little bandwidth.

And that opens up whole new realms of human-machine coordination, especially in the area of neuroprosthetics. I really think this is a pretty miraculous time for us to start studying this area.

Do you think these will be 3D printed in the future, or how do you think the manufacturing will proceed?

I don't feel like I'm in the best place to speculate on how that will unfold. I can say, though, that we're seeing a significant uptick in commercial providers of neuroprosthetic devices using additive manufacturing, 3D printing, and other forms of on-the-spot manufacturing to create their devices. It's really neat to see that additive manufacturing and 3D printing aren't just for prototypes; 3D printing is becoming an integral part of how we provide devices to individuals and how we optimize those devices for the specific people using them.

Additive manufacturing, bespoke manufacturing, customized prosthesis fitting: this happens in hospitals all the time. It's a natural part of care provision for individuals with limb difference who need assistive technologies or other kinds of rehabilitation technologies. I think we're starting to see a lot of that customization blend into the manufacturing of the devices, rather than being left only to the point-of-care providers. And that's also really exciting. I think there's a great opportunity for devices that don't just look like hands or get used like hands, but devices that very precisely meet the needs of the person using them, that allow people to express themselves the way they want to, and that let them live the lives they want to live the way they want to live them, not just the way we think a hand should be used in daily life.

You've written over 120 papers. Is there one that stands out that we should know about?

There's a recently published paper in Neural Computing and Applications, but it represents the tip of an iceberg of thinking we have put forward for well over a decade now, on frameworks for how humans and machines interact, and especially how humans and prosthetic devices interact. It's the idea of communicative capital. And so that's the paper we recently published.

This paper lays out our view on how predictions that are learned and maintained in real time by, say, a prosthetic device interacting with a person can essentially form capital, a resource that both of those parties can rely on. Remember, earlier I said we can do something really spectacular when we have a human and a machine that are both building models of each other, adapting in real time based on experience, and starting to pass information over a bidirectional channel. As a sidebar, because we live in a magical world where there are recordings and you can cut things out of them.

It’s essentially like magic.

Exactly. It seems like magic. If we go back to thinkers like W. Ross Ashby, back in the 1960s, his book "An Introduction to Cybernetics" talked about how we might amplify the human intellect. He really said it comes down to amplifying the ability of a person to select from one of many options. And that is made possible by systems where a person is interacting with, say, a machine, and there is a channel of communication open between them. So if we have that channel of communication open, if it is bidirectional, and if both systems are building capital in the form of predictions and other things, then you can begin to see them really align themselves and become more than the sum of their parts. You can get more out than they're putting in.

And I think that is why I consider this to be one of our most exciting papers: it represents a shift in thinking. It represents a shift toward thinking of neuroprosthetic devices as systems with agency, systems that we don't just ascribe agency to, but rely on to co-adapt with us to build up these resources. The communicative capital that lets us multiply our ability to interact with the world lets us get more out than we're putting in, and allows people, I'll say from a prosthetic lens, to stop thinking about the prosthesis in their daily life and start thinking about living their daily life, not about the device that is helping them live it.

What are some of the applications you foresee for brain-machine interfaces, given what you just discussed?

One of my favorites is something we have put forward, again, over the past almost ten years: a technology called adaptive switching. Adaptive switching relies on the observation that many systems we interact with every day depend on us switching between many modes or functions. Whether I'm switching between apps on my phone, trying to figure out the right setting on my drill, or adjusting other tools in my life, we switch between many modes or functions all the time; thinking back to Ashby, it's our ability to select from many options. So in adaptive switching, we use temporal difference learning to allow an artificial limb to learn what motor function a person might want to use and when they want to use it. It's really quite a simple premise: consider just the act of me reaching over to a cup and closing my hand.

Well, a system should be able to build up predictions through experience that, in this case, I'm likely going to be using the hand's open-close function, that I will be opening and closing my hand, and then, in the future, in similar situations, it should be able to predict that. When I'm navigating the swirling cloud of modes and functions, it can give me more or less the ones I want without my having to sort through all of those many options. This is a very simple example of building up that communicative capital. You have a system that is in fact building up predictions through interaction, predictions about that person, that machine, and their relationship in that situation at that moment. That shared resource then allows the system to reconfigure its control interface on the fly, so that the person gets what they want when they want it. And actually, in a situation where the system is very, very sure about what motor function a person might want, it can in fact just select that function for them as they're going in.

And the cool thing is that the person always has the ability to say, "Ah, that isn't what I actually wanted," and switch to another motor function. In a robotic arm, that might be different kinds of hand grasps, whether it's shaping the grip to grab a doorknob, pick up a key, or shake someone's hand. Those are different modes or functions, different grasp patterns. It is very interesting that the system can start to build up an appreciation of what is appropriate in which situation: units of capital that both parties can rely on to move more swiftly through the world with less cognitive burden, especially on the part of the user.
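As a rough sketch of this idea (not the BLINC Lab's actual implementation), an adaptive switcher can keep one learned predictor per motor function, rank the functions by predicted imminent use, and pre-select only when a prediction is very confident. The class, mode names, thresholds, and feature vectors below are all hypothetical.

```python
import numpy as np

class AdaptiveSwitcher:
    """Hypothetical sketch of adaptive switching: one TD-learned
    predictor per motor function, ranked by predicted imminent use."""

    def __init__(self, modes, n_features, alpha=0.1, gamma=0.8):
        self.modes = list(modes)
        # One weight vector (a GVF-style predictor) per motor function.
        self.w = {m: np.zeros(n_features) for m in self.modes}
        self.alpha = alpha
        self.gamma = gamma

    def update(self, mode, x, x_next, used_now):
        """TD(0) update of the predictor for `mode`. `used_now` is 1.0
        if the user engaged that function on this step, else 0.0."""
        w = self.w[mode]
        td_error = used_now + self.gamma * (w @ x_next) - (w @ x)
        self.w[mode] = w + self.alpha * td_error * x

    def suggest(self, x, auto_threshold=0.9):
        """Rank functions by predicted imminent use; auto-select the
        top one only if its prediction clears the threshold."""
        scores = {m: float(self.w[m] @ x) for m in self.modes}
        ranked = sorted(self.modes, key=scores.get, reverse=True)
        auto = ranked[0] if scores[ranked[0]] > auto_threshold else None
        return ranked, auto

# The user can always override: if the auto-selected function is wrong,
# they pick another one, and the next `update` call folds that
# correction back into the predictions.
switcher = AdaptiveSwitcher(["hand_open_close", "pinch_grip", "key_grip"], 8)
```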
