Understanding the Everyday Chaos of Life

The dominant trend in business, and seemingly in life, is towards chaos. Disruption has become a virtuous pursuit. Everyday Chaos gives the cleanest insight into what the people driving that trend are thinking.

I draw two things out in this week’s video:

1- “If this, then that”, or cause-and-effect reasoning, is being challenged by deep learning models which can determine probabilistic outcomes from data, without a working model or laws.

2- Recognizing that causality is a model rather than reality itself can open us up to new webs of meaning, complexity and particularity.

Want to know more?

This blog post is part of a series I am making called Reading For The Aspirational Self. Don’t think of this as book summaries – I’m not doing that. Instead, I’m drawing out specific lessons that I find particularly interesting, and which I think could act, together, to help people who share my aspirations. If you, too, want to be present, family-centric, intrinsically motivated and polymathic, I can help.

  • The most distilled version of what I’m offering is a free mailing list designed for learning, “Think On Thursday” – each e-mail will include a lesson designed around the content. Click here for some information on that.
  • The series is also on YouTube in the form of 7-12 minute videos. Here’s the channel link – the video and transcript are below.
  • I’m tweeting excerpts from the videos, as well as some of the story of this project, how we’re doing it, and where it is going, on Twitter. @DaveCBeck

If you want to know more about Everyday Chaos, take a look at Weinberger’s website here.

Starboard reflections,

Dave.

This week’s video:-

Transcript:-

The dominant trend in business, and if you believe the media, in life, is towards chaos and disruption. Everyday Chaos gives a very clear insight into how the people who are driving that change are thinking.

Law-based reasoning, and particularly simple causality (“if this, then that”), is being left behind by machine learning, opening up new possibilities and increasing the rate of change for everything.

In this video, I’ll talk through “if this, then that” reasoning and how it’s being challenged by deep learning models, which can determine probabilistic outcomes from data without generating a working model or causal laws.

Secondly, I’ll talk about how recognizing that causality is a model rather than a reality can open us up to new webs of meaning, complexity and particularities.

I’ll start with a caveat. I disagree with a lot of this book, particularly its underlying assumption about how we currently think. I think Weinberger has it a little bit wrong there: he’s describing how scientists talk about how we think, rather than how our brains actually work.

But it’s a really important book because it’s emblematic of a trend that is already shaping our future: a trend towards chaos, or disruption, the word you’ll hear most from the business community.

And that trend is important to understand.

This book offers a useful insight into the thinking behind that trend, which is why I think it’s important enough to share with you.

“If this, then that” reasoning

We think, according to Everyday Chaos, that things happen according to understandable and universal laws. The scientific method is based on the idea that if we investigate the world around us, we can eventually produce laws, and through those laws, we can project the future.

The idea that we can distill the world around us into singular laws and then apply those to everything else is, according to Weinberger, coming to an end.

Our natural sciences have made three centuries of immense progress based on this idea that we can distill the complexity of the world around us into universal, singular laws, and that we can then use those scientific laws, those theorems, to predict the future and to intervene in the physical world to make things different for the future.

This applies to everything from how we understand physics, to how we’ve understood farming and nutrition inputs.

Just about every field of the sciences has, according to Weinberger, been operating on these principles. And this is how we as humans think: we use laws to simplify the world and to predict it. That has given us a few centuries of improving progress. We can predict the world much better than we could previously. We can intervene and do amazing things that were impossible 50 years ago, let alone a few centuries ago. And that era of progress, according to Weinberger, is coming to an end now.

We are realizing the limits of it. Law-based reasoning can’t explain the weather perfectly: despite the huge amounts of data collection, the satellites, the sensors and everything that gathers weather information at an incredibly fine scale, we can only predict three, sometimes five, days ahead with any real degree of accuracy. The same goes, to an even greater extent, for earthquakes.

With earthquakes, we know where they might occur, but the actual trigger of an earthquake event can be something as small as a microscopic pebble being crushed across a fault line, which releases some tension and leads to an earthquake. They’re unpredictable by modern technology; our laws don’t have enough resolution or enough complexity to do that.

When it comes to our everyday reasoning, what we find is that accidents are the exception. They might be the exception that proves the rule, as the saying goes. The idea is that there’s a rule behind everyday life, like the average traffic on the way to work or the average time it will take you to get ready in the morning with your kids, and you can use that rule.

And the accidents are the aberrations, the accidents are the abnormals. Weinberger makes the argument that that’s a trick of focus. And this is something I agree with him on, actually.

Because we’re focused on having those rules, we ignore the everyday accidents all around us and the continuity of accidents that kind of average out to what we call rules.

It’s a trick of focus, and we also lead our lives according to those rules. So if you think it’ll take you and your kids about 30 minutes to get ready to go out so daddy can record a video, then at about 25 minutes you’ll start shooing the kids along. And then it takes 30 minutes, because the kids kind of pick up on that urgency and go out.

So the way that we lead our lives according to averages, rules, set times and predictions leads to them happening according to that. And for Weinberger, that’s a kind of trick of focus, and it’s something that proves that accidents are actually the most common thing and that averages are probabilistic predictions.

Deep learning models

On a scientific level, deep learning, and machine learning in general, is changing the idea that there are law-based principles behind everything. What deep learning does is feed a huge amount of data into something called a neural network, which is a set of computer circuits loosely modeled on the human brain.

They’re loosely modeled on how neurons interact in a network with each other, and they can understand complexity and make probabilistic predictions. So they can guess what’s going to happen next. And in some areas they can now guess with more accuracy, and certainly more quickly because of the scale they’re operating at, than even trained humans.
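
To make that concrete, here’s a minimal sketch, my own illustration rather than anything from the book, of a small neural network learning probabilistic predictions from data alone. The dataset, layer sizes, seeds and the use of scikit-learn are all arbitrary choices for the demonstration.

```python
# A minimal sketch, not from the book: a small neural network learns
# probabilistic predictions straight from noisy data, without ever being
# handed a law or producing one. Dataset, layer sizes and seeds are
# arbitrary illustration choices.
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

# Noisy example data for which no simple "if this, then that" rule exists.
X, y = make_moons(n_samples=500, noise=0.3, random_state=0)

# A small neural network: it fits the data but never outputs a human-readable law.
model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(X, y)

# The output is not "this causes that" but a probability for each outcome.
print(model.predict_proba(X[:3]))  # e.g. [[0.95, 0.05], ...] - probabilistic, not certain
```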

There are exceptions to machines outperforming trained humans. One of the interesting exceptions is chicken sexing. A trained chicken sexer can look at about 1,200 baby chicks an hour, that’s one every three seconds, and determine with 98% accuracy which sex each chick is, but they can’t explain why. They just do it through practice.

It’s a trained intuition that these people have. And what machine learning does is try to emulate that trained intuition and apply it to complex things.

And in some areas, machine learning has surpassed what humans are capable of, certainly in terms of the speed at which it operates. When it comes to detecting breast cancer, for example, I think it’s Deep Patient, one of the Google projects, and some of the medical software companies, that can now detect breast cancer from mammogram images better than doctors can, with greater probabilistic accuracy.

It’s still not perfect, and there are questions around how those tumors would develop as opposed to how the tumors detected by doctors would develop, but the models are doing the task set quicker and more accurately than people can. The same goes for facial recognition, on a scale that would be impossible with trained individuals.

More controversial applications include working out who’s going to re-offend while on bail, which is only a question because the machine learning is apparently more accurate at doing that.

That is to say, the ethical questions you might have heard around machine learning and deep learning, about handing over our ethics to computers, only arise because these things work at scale. When you’re trying to make predictions about huge numbers of people, with huge amounts of complexity, for precise problems, these models work better than our existing methods.

They don’t have underlying principles. They don’t output a rule, or even a set of factors understandable by a human, that leads to the predictions they make. Most of the models just look at the facts and make the predictions. Those which are particularly ethically controversial are often programmed in such a way that they have to output a why as well as a prediction.
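
As a hedged illustration of what “outputting a why” can look like in practice, here’s one common technique, permutation importance, which ranks inputs by how much the prediction suffers when each one is scrambled. It’s my example, not necessarily the method Weinberger has in mind, and the data, model and seeds below are invented for the sketch.

```python
# A sketch of "outputting a why": permutation importance ranks inputs by how
# much accuracy drops when each one is shuffled. One common interpretability
# technique among many; the data, model and seeds are invented for illustration.
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=5, n_informative=3,
                           random_state=0)
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X, y)

# A rough ranking of which inputs mattered - a "why" of sorts, though still
# not a causal law in the "if this, then that" sense.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, drop in enumerate(result.importances_mean):
    print(f"feature {i}: accuracy drop when shuffled = {drop:.3f}")
```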

Machine learning is suggesting that in many cases there is no dominant factor, no dominant “if we change this, then that will happen”. There’s just a huge web of interconnected things, which Weinberger argues goes against how we ourselves currently think.

For much of our past, the universe wasn’t seen as a clock or a computer or anything that was predictable in an “if this, then that” way; it was instead a web of meaning. And what Weinberger argues is that this trend in machine learning is returning us to that past. It’s stripping away the clock-like simplicity of the world, or even the internet-like network effects.

And it’s returning us to a world in which humans can read signs, signs for meaning, and machine learning apparently can read those signs better than humans. A world in which simple causality is discarded for a greater degree of freedom, a greater degree of autonomy and interoperability.

For how we think about the world, machine learning is then returning us to this world of signs and to galaxies of meaning. So it’s stripping away the idea of words even having singular, defined meanings, because as you look at the complexity of how words are actually used, they often don’t mean just one thing. This goes particularly for international languages, but also for languages spoken in only one country: who is saying the word, and the context in which it is said, affects the meaning hugely.

Machine learning is also, in a way, ending simplification. So when one of these deep learning algorithms comes up with a certain prediction, one it treats as 100% sure, that’s often flagged as an error: if it “knows” that something will happen, it’s probably a mistake in the programming, simply because very few things in life are certain.

So with machine learning, we’re getting closer to something like probabilistic truth.

I know that sounds a bit like an oxymoron if you’ve been taught to think, particularly by universities, recently, but probabilistic truth is the idea of making predictions that are as accurate as they could be, given the chaos and complexity of the world. So when you say something’s 80% likely to happen, if across a hundred predictions where you said 80%, eighty of them occur, that’s a probabilistic prediction as accurate as you could possibly have made it.
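
Here’s a tiny sketch of that calibration idea with made-up numbers, my illustration rather than Weinberger’s: simulate a hundred events you said were 80% likely and check how many actually occur.

```python
# A tiny sketch of "probabilistic truth" as calibration, with made-up numbers:
# say "80% likely" a hundred times and check how often the event really happens.
import numpy as np

rng = np.random.default_rng(0)

stated_probability = 0.80
n_predictions = 100

# Simulate 100 events that each genuinely occur with the stated probability.
outcomes = rng.random(n_predictions) < stated_probability

observed_frequency = outcomes.mean()
print(f"stated: {stated_probability:.0%}, observed: {observed_frequency:.0%}")
# A well-calibrated forecaster's stated and observed frequencies stay close,
# even though no single prediction was ever certain.
```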

The price that we pay, according to Weinberger, is a universe that’s simple enough for us to understand. He argues that our understanding is limited to this law-based reasoning, and that once we strip away this law-based reasoning, we need to give up the idea that we can understand the world. We need to give up the idea that we can make perfect predictions, that we can plan our days and that accidents are aberrations.

Actually, we can’t predict, except probabilistically, and we can’t plan perfectly.

This paradox of control and comprehension, that you can’t have both in a way, is felt through awe. Awe is the emotion Weinberger closes on. And the idea is that if you open yourself up to the complexity and the astounding nature of the world around us, and the fact that we can’t predict it despite centuries of innovation in science, the emotion you should feel when you realize that is awe.

Awe opens more of the world.