Podcast Link
I’ve been meaning to get my thoughts on the recent developments and likely disruptive impacts of #ArtificialIntelligence into a video, and to do this free of any constraining context, as most of my keynotes, workshops etc. over the coming year will be specific to certain contexts or domains (e.g. #education, #tech etc.).
So here it is: a free-form video where I argue that recent #AI developments are most definitely not “business as usual”, and explain the key reasons why.
My key points are:
• my perspective is shaped both by a deep tech research background and by spending a substantial fraction of the last few years advising and learning from a very wide range of stakeholders about the effects of transformative tech; from big tech to local community advocate groups, and from pre-school children to retirees, both internationally and within Australia
• that the likely depth and breadth of impact on what so many of us do on a day-to-day basis is unprecedented – past AI tech innovations have primarily had either shallow impacts on many people or very deep impacts on a small subset of people
• that the “barriers” to access that have so commonly both hindered enthusiastic early adopters and comforted reluctant organisations into inaction are no longer there – the capabilities of these systems are stark and immediately, viscerally graspable by anyone within minutes of interacting with them
• that, again unlike past AI tech disruptions, the likely disruption doesn’t require one or more further “miracles” of improvement: the status quo is enough even given their substantial flaws, and
• that nevertheless any reasonable projection of future development would suggest that at least some of their known shortcomings will be mitigated, further scaling their impact
Finally, whilst they need a treatment in and of themselves, I also talk about:
• why I am cautiously optimistic that the misuse and abuse of these systems will be successfully countered and moderated, and, to the primary point of the video,
• why I think their immediate, huge disruptive potential is in no way reliant on this tech getting close to #artificialgeneralintelligence (#AGI) – a whole other debate
With this video I hope to provoke discussion and welcome your thoughts and comments!
Pls reshare as you see fit!
#research #technology #robotics #science #AI #artificialintelligence #tech #chatgpt #chatgpt4 #bard #google #meta
Full Video Transcript
Hello, it’s a Friday morning in sunny Brisbane, Australia, and I thought I’d take the time to chat a little bit about what is happening in the artificial intelligence field at the moment, and my particular perspective on what I think the likely impact is going to be. The reason I’m making this video, which is off the cuff, is simply the urgent need to consider this topic, and the fact that what we’re seeing happen is, I think, in many aspects unprecedented in the history of technological development.
Now, there are a lot of voices in the room with all sorts of different perspectives, and that inherently isn’t a bad thing. It’s a good thing; it brings a lot of different viewpoints to what is a complex and sometimes challenging, sometimes very exciting topic. Whenever you add your voice to the room, though, you’ve always got to think about whether there is really the need for an extra voice. In this case, I think there is a particular perspective that I can add to the conversation and would like to put out there, and I’ll explain why.
So, if you don’t already know me, my name is Michael Milford. I’m a professor of robotics, and I jointly direct a large robotics research center. I work extensively in the robotics area and related areas of AI, and we work all the way from blue-sky fundamental research to very applied research. I’m speaking as an individual here; I’m not representing any of the particular organizations I work with, just to make that very clear.
So, why am I adding my voice to the conversation? I’m a typical senior researcher, so I’ve worked on a lot of projects where we try to translate technologies into useful applications for our various collaborators and partners, and we’ve also worked extensively in the fundamental research space. The one particular aspect of this conversation that I think I bring a relatively rare perspective to as a senior researcher is that I’ve spent a very large amount of my time, especially over the last six or seven years, talking with, learning from, and advising an incredibly wide range of stakeholders around transformative or potentially transformative new technologies like robotics, automation, autonomous vehicles, and most frequently, artificial intelligence.
When I say a diverse range of stakeholders, I really do mean that. So, a typical week for me, and this is every week, would be something like working in Sydney or Melbourne, providing some insights into AI’s potential effect on the legal industry, and then that very day, flying back to Brisbane and advising a community advocate group who is perhaps concerned about the effects of autonomous vehicles on cyclists.
It’s really been one of the great privileges and most enjoyable aspects of my job. I spend time talking to four-year-olds and six-year-olds, I spend time talking to retirees, and I spend a lot of time talking to people at Fortune 500 companies and to people from not-for-profit groups. In the course of doing an incredible amount of that, I guess I’ve learned a lot about people’s perceptions of these new technologies and of the barriers around them: if people are scared of a new technology, what those fears are founded on, and if people are really enthusiastic to adopt a new technology, what the motivations for that enthusiasm are and what barriers they are encountering that are frustrating their efforts to deploy that technology.
So, those are basically the perspectives, or credentials, that I bring to this, and it’s particularly those many years of talking extensively with, and learning from, people from a very wide cross-section of society that I bring to this particular video. So, what is happening? Well, people have made a great point of saying that recent AI developments aren’t in and of themselves particularly new, and that a lot of the foundations for this work go back many years, and that is true. But that does not take into account the fact that recent artificial intelligence capabilities, specifically in areas like large language models (you’ve seen things like ChatGPT) and generative artificial intelligence (which you’ve probably seen generate amazing visuals), have really hit the mainstream consciousness.
And the point I’d like to make in this video is that this is not business as usual; it is really a transformative step change, and there are a small number of very specific reasons why I think that’s the case. For the rest of this video, I’ll try to illustrate what those specific reasons are and provide some reasoning for why I think they’re important, add a little bit of humor from some of our experiences in this area, and hopefully leave you with some food for thought regarding this very fast-moving, potentially incredibly impactful field.
I’ll start with an anecdote, though. Without making the obvious mistake of assuming that being a professor of robotics relates at all to what the majority of the population does on a day-to-day basis, I would like to point out that what I do and what my group does on a day-to-day basis has changed radically, in a way it has not in the last 20 years. We lived through the so-called deep learning revolution that took off around 2012. We invested a lot of time back then in responding to and proactively adopting those new AI technologies; we actually created a lot of the pioneering work in our particular domain, which is navigation systems for robotics and autonomous vehicles, and did a lot of the high-impact initial work in that area.
So, we’ve lived through what people would sort of refer to as a major revolution in the AI space, and I’m here today to tell you that what we’re going through now makes those previous ones pale into insignificance.
So, on a day-to-day basis, our group’s work has changed substantially. People in our group are using these tools to help them with writing papers, writing grants, writing social media posts, and writing reports. We’re using them to help with our 3D prototyping, to generate a bunch of potential ideas for 3D prototypes of embedded devices we might be using. We’re using them to create illustrations, and our team is using them to help with coding tasks and a number of other core parts of what our job involves.
Now, are the tools perfect? No. Are we able to just hand over what we do to the tools entirely? No, of course not. But they are affecting a substantial fraction of the activities that we do on a day-to-day basis. And that really is, I guess, the overarching point of this video: the developments we’re seeing recently are, for the first time, transforming, or potentially transforming, a substantial fraction of the activities that many of us do on a day-to-day basis, and they are doing this across a substantial fraction of the types of roles that we play in society and at work. That has really not happened before, especially with past advances in the AI space.
So, in the past, developments have typically either deeply affected a very narrow range of professions or had a very shallow, minor effect across a wider range of professions. Now, this is where those years of talking to end users and stakeholders from such a diverse range of backgrounds are important.
Let’s focus first on the people who are reluctant to adopt these technologies or have been in the past.
One humorous aside I will add to illustrate this: I’ve had moments, epiphanies even, while giving a talk in a boardroom on the 50th floor of some luxury building in Melbourne, Sydney, or Brisbane. Midway through the talk, I’ve realized that the people who invited me to speak there were basically using me to entertain and educate their clients, but had no feeling whatsoever that the technologies I was talking about were actually going to have any meaningful, substantial, disruptive effect on what they saw their day-to-day business being.
And that’s one of the things that has changed. The effects of what these new tools can do are very confronting, and you can’t really bury your head in the sand. Unlike past tools, where you might have needed specialist experts or technicians to deploy them, or might have needed to change all your data, you can, within the course of a minute or two, sign up, usually get access to these tools, and immediately start seeing how transformative they could be for substantial fractions of what you do or what your team does on a daily basis. That really has not happened before.
So, while some slow-moving sectors in the past could quite successfully have said that a new technology is a flash in the pan and can be ignored because it only affects a tiny fraction of what they do, that is much less likely to play out here, because of the depth of the effect of these technologies and the breadth of different roles and activities which they affect or are likely to affect.
This also goes for enthusiasts. Enthusiasts in the past have been really keen to adopt a lot of these technologies: automation, AI, and so on. And despite that level of enthusiasm, there have been substantial technical, financial, and other barriers that have really stopped them from being able to deploy these technologies in their organization. Those barriers have largely been removed for the first time, and so all of the enthusiasts, the early adopters, can actually get their hands on this technology and start using it almost instantly.
And that again is a relatively new phenomenon that has not happened extensively in the past, which makes what’s happening now very different from most of what we’ve seen in the past couple of decades.
Another key difference between what is happening now and what has happened before is the amount of further improvement or addition needed for the technology to really reach its full potential or to have a substantial disruptive effect. In the past, many people have joked that some technologies, for example autonomous vehicles, need multiple miracles to occur in parallel or simultaneously for the technology to be viable. I don’t want to get into autonomous vehicles in this video; I mention them purely to draw the point of contrast, or perceived contrast.
These new technologies, even with no further improvements, are likely to be substantially disruptive. You can make pretty reasonable, valid assumptions that, in some of the aspects these technologies currently struggle with, there will still be substantial, fairly rapid improvements that will increase their utility, their accessibility, and the range of tasks that they impact. It’s important to note that we don’t need all of the shortcomings of these technologies to be solved in any time frame for them to have this really disruptive effect. That is another key contrast between what’s happening now with recent AI developments and what has happened in the past with technologies that have come along and perhaps been a bit disappointing in terms of their actual deployment and reach.
When people have talked about the potential consequences or impact of technologies in the past, they have used well-evidenced research from past technology changes that shows there is typically a period of disruption, but that things do settle down in the long term. While some jobs change, or the roles within jobs change, and some job types do genuinely go extinct, there have always been new job types and new economic sectors generated by past technology changes. So the net effect has not necessarily been negative at all, and has often been quite positive, but there has been a period of disruption.
Again, I would say that the key change here is that disruption in the past has typically either been isolated to a very small fraction of what many of us do on a daily basis, or has substantially disrupted only a very small percentage of the roles that we each play in society. The difference is that the technologies that have come out and become widely accessible in just the last year or so have the potential to affect a much larger range of roles, and to affect them much more deeply. What that means is that the consequences for how we go about upskilling and retraining people to adapt to these technologies are more significant than ever before.
If we are to adapt to the new roles that result from these new technologies, the urgency, pace, and scale of those retraining and upskilling initiatives need to be much, much greater than they were for past, much more isolated technological changes. It also means that if we don’t manage these changes well, the level of disruption, again in terms of both depth and breadth, will be much more substantial. So, we will want to put extra emphasis and focus on making the transitions as streamlined as possible.
Finally, the potential for a wide range of, I guess what you’d call, modalities by which this technology can be disruptive is much higher now than it was for past technologies. Even if the technology and its impact were entirely limited to the digital domain, to disrupting people who write, who communicate, who do reporting, who do online analysis, that could already, by itself, be the most disruptive technological transformation, or one of the most disruptive, in history.
But there is substantial potential for this disruption to extend into more physical domains. My particular area of work, and that of many of our colleagues, is robotics, which is really about physical, embodied intelligence. The entire community is scrambling to see the extent to which these large language models and other new foundational technologies will transfer to the physical, embodied world. While a lot of this investigation is literally only a few months old, the initial indications are that, at the very least, it will substantially change how we approach things like embodied intelligence in robotics.
The full extent of this transformation is yet to be understood. It may be that it stalls substantially because of some harsh realities of physics in the real world, but we really don’t know. We are working hard to understand this, and it will take months or years to understand the full potential transformative effect outside of the pure digital domain. If it ends up being really revolutionary in embodied intelligence as well, then its potential disruptive effect is even larger: rather than being restricted to things we do on a computer or on the Internet, it will start to potentially affect all the physical aspects of the roles that many of us perform in society, and that would increase its disruptive potential even further.
Over the last few years, it’s become increasingly clear to me that one of the most valuable and relatively rare roles that researchers in the university or research system have is to attempt to provide deeply informed, expert advice that is also as objective and impartial as possible on potentially transformative new technologies like the AI developments I’ve been talking about in this video.
Obviously, we also do a lot of training of new talent and research development. But a particular strength, I think, or contribution, is in trying to provide impartial advice, which is the purpose of this video.
I should mention that there are two other major aspects of these new technologies that I have not addressed in this video, not because I think they are any less important than the topic I’ve talked about, but because addressing them would turn this into a four-hour video rather than something that people can actually sit down and watch.
The first is the ethical aspects of these new technologies and, I guess, the potential for misuse and abuse. These are huge potential issues. I have some substantial optimism in this area, because the level of sophistication of our responses to technology misuse, in terms of dealing with fake news and other things, is already pretty high. So, while there will still be the need for a lot of attention paid to these issues, I’m optimistic that a balance can be reached that is acceptable for society, as it has been for other technologies.
The other, more philosophical, point is this whole question of whether these technologies are potentially leading to artificial general intelligence (AGI): intelligence that is on par with, or superior in some respects to, human intelligence. That is the topic of a whole other video. There are a lot of strong opinions on both sides. The current technologies are clearly not AGI, but what people are debating is whether they are on the pathway to what might potentially be AGI in 10 years or 100 years.
The main point I would make here is that the disruptive and transformative effect of these technologies on the vanilla tasks that we do every day as part of our roles in no way relies upon this technology ever getting close to being AGI, artificial general intelligence. And I think that point is really important to make. It does not need to be as genuinely intelligent as we are in order to be the most incredibly disruptive and transformative technological revolution society has seen in many years, potentially ever.
I look forward to seeing and thinking deeply about the developments that we see play out in this field and encourage everyone to, at the very least, pay attention to what happens over the coming months and years. I look forward to any feedback that others may have on this issue and will probably revisit where we’re at in a future video. Thank you for listening.