The Coming Wave: AI and the edge of human frontiers

I have just read a bracing book – bracing as in part-terrifying, with some uncomfortable truths. Over the years, though, I have found that most bracing books also serve as prompts to positive action, and this one was no exception. The book? ‘The Coming Wave’ by Mustafa Suleyman (with Michael Bhaskar). Subtitled ‘AI, Power and the 21st Century’s Greatest Dilemma’, it delves in fascinating detail into Suleyman’s insights, born of a career in AI that included co-founding DeepMind and leading AI product management and policy at Google.

Very well-written, with recent, relevant and real ‘hooks’ on which we can readily hang our understanding, ‘The Coming Wave’ makes a compelling case for the appropriate containment of AI as we explore its potential – and, moreover, introduces a framework of ideas that shows how we could achieve this. I have been reading it as part of my preparation for chairing an upcoming AMCIS webinar in May, which I am really looking forward to – and a huge thank you to Nick Richardson of Marketing Advisers for making the introduction of Michael Bhaskar to AMCIS! ‘The Coming Wave’ traces the history of AI through the unique lens of Mustafa Suleyman, and draws on events – recent and past – in human history to explore the patterns of world-changing waves of innovation, and to situate the development of AI within this history. (Michael has also written ‘Human Frontiers’, another fascinating book!)

Our greatest dilemma as a human race, according to Suleyman and Bhaskar, is this: that either new technologies or the absence of those very same technologies could be equally catastrophic for the world, if we do not prepare sufficiently well for them … and at present our preparation does not appear adequate. Containment – by which they mean the ability to monitor, curtail, control and potentially even close down technologies – is, they argue, necessary, although it is an especially challenging option with this particular wave of innovation, because of its unique features (not least the speed of its development).

It should be noted, however, that in arguing for containment, the authors are not advocating a Luddite-like rebellion against AI … on the contrary, despite the grave warnings embedded in the text (the terrifying bits), a discernible positive, forward-looking perspective runs throughout the chapters, culminating in Chapter 14 – ‘Ten Steps Towards Containment’. These ten steps include developing strong safety protocols, bringing in challenge from intelligent critics who are prepared to test the ‘what if’ questions, and developing international alliances and treaties, in much the same way as nuclear power is managed globally.

The proposed step that spoke to me particularly powerfully, from the perspective of what we, as educators in schools, can achieve, was number 8: ‘Culture: Respectfully Embracing Failure’. In addition to its nod to the inherent importance of building and deploying a growth mindset, it emphasises the social and moral responsibility that developers and scientists – and, indeed, all of us – collectively bear in the field of AI.

Schools can be real generators of this social and moral responsibility, with deep values at their core which, when voiced, practised and modelled, genuinely enable young people to become decent human beings and good citizens – able to distinguish between right and wrong, and motivated to make a positive difference in the world. With schools explicitly part of the task force of AI development, help is on its way.

So … there is light ahead of us, but a lot of work to be done. The message to take away is that the sooner schools step up to their responsibility to guide the development of AI, the better for us all …
