‘Give It the Moon’, and Other Bad Ideas for Stopping the AI Apocalypse


Will Dayble | 26 July 2018 at 8:40 am
Once you get your head around artificial intelligence, it stops being so ridiculous, which is terrifying, in the most absolute sense, writes Will Dayble, the founder of the Fitzroy Academy.



Yes, that’s a ridiculous title for a Pro Bono Australia article. But once you get your head around artificial intelligence, it stops being so ridiculous, which is terrifying, in the most absolute sense, writes Will Dayble, the founder of the Fitzroy Academy.

I’m writing this from a cafe in Bangkok, a day after two excellent events: The SingularityU Thailand Summit (all about technology and AI and exponential everything) and IFC Asia (all about impact, people, and social change).

Here’s what got me thinking: These two events were in the same city, at the same time, and no-one I met knew about the other event.

This speaks to a hunch I’ve harboured about the social change world:

We don’t grok exponentials, and it’s dangerous

Exponentials just don’t “click”, practically or instinctually, because so much of what we do is about being in service of people and community.

Some changemakers are wonderfully technically proficient, and smarter people than I have stood astride the boundary of impact and technology to do incredible things. But they’re a tiny minority.

To me, the scariest (and most fun) thing to think about in the exponential technology world is the AI singularity. A bit like the previous beginner’s guide to decentralisation, I’m going to put on my social sector hat and dig into another technical topic.

This time, let’s talk about a sentient artificial intelligence eating the world.

This is another long one, so let’s get apocalyptic! Whee! 🙌

Sidebar: What is the singularity?

This chunk is for the newbies; if you’re already (un)comfortable with this stuff, skip down to “Impact, exponentials and change”.

The singularity is the theoretical end result of someone building an artificial intelligence that is “generally” smart.

“General” in this case means smart like a dog or monkey or human is smart: thinking generally about lots of different problems of different types.

This is different to a “narrow” AI, which does one thing really well. The key difference is that an AGI (artificial general intelligence) can learn to get smarter, and re-engineer itself to keep getting smarter.

You already use “narrow” AIs all the time, from Siri’s voice recognition to the way Google ranks search results, through to more obvious examples like self-driving cars, and programs that can beat humans at board games.

The freaky thing about a self-improving AGI is that silicon is much faster than biology or evolution, in the same way that carbon fibre and steel are physically stronger than flesh. A self-optimising, clever AI built upon a “substrate” of computing power could get abominably smart, very quickly.

It will also likely live in the cloud, be capable of hacking every computing device connected to the net, and very quickly have a scary amount of power and reach. Think about how Russia has been hacking US elections, then imagine that happening to everyone, everywhere, all at once. One scenario: Immediate catastrophe.

Some singularity geeks think that “quickly” means that within a few minutes of becoming sentient the AGI could become millions of times smarter than us, in a form of intelligence that we can’t comprehend. We’d create it, but we couldn’t control or understand it. To the AI, we’d be tiny weird creatures, just as humans rarely think of monkeys as “mum and dad”, or particularly care about the emotional lives of bacteria. The AGI may not think about humans as a meaningful form of life at all.

Note that I’m speaking about the AGI using the definite article (the), because it’s entirely likely that the first AI to become sentient would immediately wipe out any other sentient AI competition the moment it wakes up, leaving it the first and only.

After killing off all competition, the singularity could start working on converting all ~5.9×10^24 kg of matter we call Earth into “smart matter”, or a giant computer brain, and nudge all that matter into space to be as close as possible to all that free nuclear energy the sun gives off.

It sounds ridiculous, yes, but not much more ridiculous than telling someone from the 1950s that an 8-year-old kid in 2018 is instantly connected to every other human and piece of knowledge we’ve ever created, all from a pocket-sized computer, for free.

And that’s where the title of this article comes from, one possible safeguard against this risk: avert the apocalypse by doing all our AGI research on the moon.

In theory, any runaway intelligence is kept physically separate from us by the void of space, the way we make hermetically sealed environments in labs for researching nasty biological things like bird flu and anthrax.

Give it the moon, so it’s less likely to eat the Earth.

There’s a mere ~7.3 × 10^22 kg of mass in the moon, only 1.2 per cent of Earth’s, and it’s a whopping 384,000 kilometres away.
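For the sceptics, those numbers are easy to sanity-check. A minimal back-of-the-envelope sketch in Python, using rounded textbook values (nothing authoritative):

    # Back-of-the-envelope check of the moon-versus-Earth numbers above.
    earth_mass_kg = 5.97e24
    moon_mass_kg = 7.35e22          # the moon, give or take
    moon_distance_km = 384_000      # average Earth-moon distance

    # The moon is a little over 1 per cent of Earth's mass.
    print(f"Moon mass as a share of Earth's: {moon_mass_kg / earth_mass_kg:.1%}")  # ~1.2%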

Safe as houses, right?

Probably not.

What breaks my brain is that the moon scenario is one of many ideas that almost certainly wouldn’t work in a rogue singularity scenario. Jumping from planet to planet to gobble up matter is easy stuff for a super-intelligent, space-faring entity.

An AGI on the moon might think of Earth as 1.3 light seconds away, while it blasts virus-laden packets of data via laser beam at our poorly firewalled networked systems.

Consider that the USA’s “use them or lose them” nuclear war scenario has a five-minute timer. 1.3 seconds isn’t much time to prepare for the apocalypse.
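If you want to check the 1.3 seconds yourself, it’s just the Earth-moon distance divided by the speed of light. A rough sketch in Python:

    # One-way light delay for an attack beamed from the moon to Earth.
    speed_of_light_km_per_s = 299_792   # km per second
    moon_distance_km = 384_000          # average Earth-moon distance

    delay_s = moon_distance_km / speed_of_light_km_per_s
    print(f"One-way light delay: {delay_s:.2f} seconds")  # ~1.28 s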

A bad singularity scenario is a lot like nuclear proliferation, climate change, or any other huge, exponential threat. It’s just exponentially bigger, faster, and more absolute.

This scenario is difficult to get your head around, sort of like explaining nuclear deterrence to a goldfish. The numbers are weird.

Impact, exponentials and change

In Phnom Penh last week I saw the physical difference that around four years can make to a skyline. Buildings are going up incredibly quickly, planning controls are virtually non-existent, and the externalities of that approach are affecting local people in profound ways.

But walking the streets, I’m struck that even that rate of change is entirely insignificant compared to the speed of change a super-intelligence would impose upon us.

So where do we land? With more questions:

  1. Can we teach social impact folks to understand the singularity?
  2. Will doing so make any difference?

From my conversations with social impact people so far, the first question should probably be rephrased as: “Can we teach social impact folks to even care about the singularity?”

I’d gamble that I’m just doing a bad job of explaining it. Perhaps adults just find it tricky to think about this stuff? Perhaps this conundrum is a generational one.

Think of the children!

In the same way that toddlers these days get confused when any screen isn’t a touch screen (and try to “pinch and zoom” on pictures in magazines), we might want to deliberately teach our kids to think in both the exponentials of computing power and the grounded, interpersonal realities of being human.

There’s a good chance that the youth of the near future will be augmented humans, constantly plugged into the cloud via direct neural interfaces, i.e. devices plugged directly into their brains.

We’re already augmented humans: we have a phone in our pocket that tells us when we have to leave to get to a meeting on time, a meeting we’d forget about if a cloud-enabled calendar hadn’t remembered it for us.

Some AI geeks like to refer to phones and the internet as an “exocortex”, a kind of extra cortex outside our bodies that does thinking for us, in the cloud.

The future is now! We may have noticed this too late…

What if the future crash is already happening?

Try this thought experiment on for size: Imagine you’re standing at the intersection of a quiet street, and a car is speeding around the corner at 100 km/h. You don’t even realise until, with whining engine and headlights a-flashing, it’s upon you.

When would you need to react to get out of the way in time?

Obviously not the moment you see the car; that’s too late. This isn’t an action film, and you don’t have Hollywood-grade reaction speed. You’d need to somehow see the rogue car before it turns the corner, right? Only then would you have enough time to move your slow, pathetic human muscles.

There’s a good argument that the speeding car crash of AGI is already shining its high beams at us, and we’re simply stunned in the glare.

When things move fast, just seeing them happen at all means we’ve noticed too late.

For example, AIs and video games:

  1. In 1997, Deep Blue beat Kasparov at chess. It’s now impossible for a human to beat a sufficiently powerful artificial intelligence at chess.
  2. In May 2017, Google’s AlphaGo beat Ke Jie at Go, a distinctly more complex and artful game than chess.
  3. In August 2017, OpenAI created a bot that beat the world’s top professionals at 1v1 matches of Dota 2 under standard tournament rules.
  4. Last month (June 2018), OpenAI’s bots started beating human players at 5v5, team-based Dota 2.

Just to clarify for the non-gamers: Dota 2 is the world’s biggest eSports video game. The prize pool for the Dota 2 International is in the order of US$20 million and growing, roughly double the measly $10 million purse of a major 2017 PGA golf tournament. It’s a big deal.

The freaky thing about this is that at every point along the scale, we assumed AIs would “never” be able to beat the next level of game.

  1. Chess is a game that takes geniuses decades to master.
  2. Go is hugely more complex than chess.
  3. Go and chess are both games where both players know the complete state of the board (i.e. no information is hidden from the opposing player). In Dota 2, most of the game information is hidden from other players, forcing them to guess and gamble to win.
  4. In 5v5 Dota 2, not only is the game full of hidden information, but it also requires teamwork between players.

At each level of game, things get orders of magnitude more difficult, and the relevant AI is learning more complex and nuanced behaviour. Check out the OpenAI Five blog for details. Now, these are narrow AIs, programmed to do one thing well, but they’re getting good absurdly quickly.

To oversimplify, OpenAI Five plays 180 years’ worth of games against itself every day, to train. That’s ~900 years per day counting each hero separately. Holy moly.
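If you want the arithmetic behind those figures (the 180 years per day is OpenAI’s number; the rest is just multiplication and division), here’s a tiny sketch in Python:

    # Rough arithmetic on OpenAI Five's stated self-play rate.
    years_of_selfplay_per_day = 180     # figure quoted from the OpenAI Five blog
    heroes_per_team = 5

    # Counting each of the five heroes' experience separately:
    print(years_of_selfplay_per_day * heroes_per_team, "hero-years of Dota 2 per day")  # 900

    # For scale: 80 years of non-stop human play fits inside a fraction of one training day.
    print(f"{80 / years_of_selfplay_per_day:.2f} training days per 80-year human lifetime")  # ~0.44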

But I can hear some of you saying things like “so what?”, or “it’s just video games”, and I must admit your reactions are reasonable.

And therein hides my worry: Being reasonable when the stakes are so high could be dangerous.

This is a story written in silicon, and it evolves too fast for the puny humans to keep up.

Story fallacies

As you’ve probably experienced yourself, people fall prey to good stories all the time. We only remember the events with a nice beginning, middle and end. We want a hero, called to action by a higher power, who only falters before finally succeeding against all odds. Huzzah!

We don’t enjoy the stories where everything goes to hell in a handbasket, or the stories that make us uncomfortable because they don’t fit our world view, or its cultural, geographical and biological underpinnings.

We’ve seen this over and over in politics, history, fundraising, and anything else where a well-told story can pervert the truth.

The story of AI taking over the world goes something like this: “Once upon a time, the end.”

The story is over before it starts. It’s just like the tales of disruption we hear from tech companies who go from nothing to billions of users in a few years. In a blink the game has changed. But even those companies are run by humans.

The truth will hopefully be slower and more nuanced than that, and the alarmist soundbites from smart people like Stephen Hawking have been softened through debate and investigation.

But what if we’re wrong?

If what’s at stake is the fate of all DNA-based life, forever, how should we stack that threat up against the important, human stories that the social sector works on every day?

Again, no conclusion

This seems to be the theme with these Pro Bono News opinion pieces; I’m forced to finish without a satisfying conclusion, and with more annoying questions. If you’d like to compare notes, email me: will@fitzroyacademy.com.

If you’re interested in digging deeper, check out the Concerning AI podcast (a wonderful listen, and the main inspiration for this article), dig into OpenAI, and read more near-future science fiction.

I have a sneaky feeling that like other technological step changes, the thinking that got us here will not get us out again, and we need to listen to ancient wisdom, spiritual leadership, or completely non-technical intelligence for the answers. Or, to help us remove the desire for answers.

Remember: Once upon a time, the end. Sleep well! 🙂

About the author: Will Dayble is a teacher, and founder of the Fitzroy Academy, an online social impact school. The academy works with students and educators to teach people about entrepreneurship and social impact. Will is at once a loyal supporter and fierce critic of both the startup and impact ecosystems.

This is part of a regular series of articles for Pro Bono Australia exploring impact, education and startups.

