Articles

 

A Living Systems Approach to Sustainable Healthcare, Robin de Carteret

Originally published on the CleanMed Europe website



I’m seeing a pattern. We currently face multiple crises:

    A crisis in healthcare, with rapidly rising demand for expensive, high-tech treatments and an ageing population.

    A financial crisis, with banks, businesses and whole countries facing bankruptcy.

    A resource crisis, with many of the key resources we currently depend on, like oil and rare earth metals, approaching peak
    production, the point at which many analysts say prices will rise dramatically.

    An ecological crisis, with strong scientific consensus that climate change, biodiversity loss, and our disruption of the nitrogen
    cycle are well past safe limits.

So is it a coincidence that we face all these crises together, at this moment in time, or are they all linked to something more fundamental? I believe a key factor in creating these problems is our mainstream culture’s linear, mechanistic view of the world; our society has been seduced by the simplicity and clarity we find in analytical, linear thinking. It’s understandable, as this kind of thinking has given us incredibly powerful and useful technologies, including engines, electricity, and much of modern medicine. However, many of the systems we deal with, such as living things, communities, cultures and economies, are not mechanisms; they are dynamic, interconnected, Complex Systems. If you treat Complex Systems as if they are machines, you’ll find yourself in trouble when unacknowledged feedback loops create unintended effects. We are trying to control nature, people and economies, acting as if they are predictable mechanisms rather than inherently unpredictable but creative living systems.

I’m inspired by a new, and also ancient, view of the world as a living, interdependent set of nested systems. Understanding the emergent behaviour in ant colonies, for example, or how river deltas form, can inform our understanding of embryo development, genetic networks, and how we influence and are influenced by the ecosystems we live in.

In my workshop ‘Complex Adaptive Systems’ I will use interactive activities to explore ideas from complexity theory and a Living Systems approach, offering new perspectives on systems at multiple levels - the way human bodies work, how we organise in teams and organisations, social and economic dynamics, and our impacts and dependence on ecological systems.

I am not a medical practitioner, but work in Complex Systems Education. It’s encouraging to see the work already mentioned in this blog, such as the SHE network, which is already teaching medical students about a Living Systems approach. I think everyone, including healthcare professionals, could benefit from applying systemic concepts in their work, and I’m keen to hear your views on how they can be, and already are being, applied. Are there systems you work with that can’t be understood by breaking them down into their constituent parts? Where do you notice self-regulating or amplifying feedback loops, and do you see signs of tipping points, where the state of a patient or system passes a point at which it suddenly changes and is hard to bring back to the original state? How resilient is the system you work in to shocks and changes? Have you got the right balance of efficiency and redundancy to allow for responsiveness to the unexpected?

We are involved in complex dynamic interactions all the time, and I believe seeing the world more explicitly in these terms will give us new perspectives and offer creative, deeper, more interconnected solutions to many of the problems we face.

When we look at our current crises in isolation they can seem hopelessly insoluble. But something that really gives me hope is that when we look at them together, and as part of the same dynamic, there is evidence that the very same things that make people healthy and happy - meaningful work, being part of a supportive community, high self-esteem, sense of belonging, connection to nature - are also ones that reduce our ecological impact, and our dependence on high energy and resource use. They help create resilient, more self-reliant communities of people, who look after each other and are less dependent on a global financial system. That’s a vision of a future that I am excited to work towards!

Robin de Carteret is a freelance educator and consultant in complexity science and sustainability, see systemsgames.org.uk


More info on the conference



Living systems are not machines, by Robin de Carteret

Originally published in the Transition Free Press, Issue 3, Autumn 2013


This blog post by Stu Packer of Transition Glastonbury, about a recent Next Generation Leaders weekend at Sharpham Estate in Devon, gives a good sense of some of the work I do:

Next Generation Leaders for ….. sshhh, don’t call it Sustainability!
by Stu Packer of Transition Glastonbury

Published on October 16, 2013, by Isabel Carlisle

Artificial Intelligence needs Artificial Wisdom


A conversation with Robin de Carteret



Originally published on Transition Tech - Friday, 27 June 2014


Robin de Carteret recently gave a workshop at Google’s London office entitled ‘A Living Systems Perspective: Should we allow our technological systems to develop a mind of their own?’. Afterwards, Nick Arini sat down with Robin and asked him to explain more.



Nick: You recently gave a Tech Talk at Google’s London office. What were you talking to them about?


Robin: I was talking about the need for a shift to a more systemic way of seeing the world and what light this sheds on issues around developing artificial intelligence.

Our technical systems are approaching the complexity of living systems, but the way we perceive and talk about the world is still largely based in linear thinking and reductionist, 19th-century mechanistic metaphors. We have gained a huge amount from reductionist science - which breaks the world down into parts and analyses them, with the assumption that once you understand the parts of a system you can understand the whole system. This works very well for relatively simple systems like machines and most human-made technology. But most of the systems we deal with in life - people and other organisms, organisations, social networks, economic systems, and ecosystems - are complex systems that do not behave in a linear way.

Even if you understand how the parts of a complex system operate, that doesn’t mean you understand the whole system, because of its often very unpredictable emergent properties. For example, I can predict exactly how a watch will behave by understanding the battery, motor, gears and so on, but understanding how an isolated individual ant behaves does not mean I will be able to predict all the behaviours of an ant colony. In complex systems the dynamics and relationships between the parts are often more important than the behaviour of the parts themselves.

Our current education system, and the language and framing used in the media and politics, often foster a linear type of thinking with an oversimplified sense of cause and effect. If you apply linear thinking to non-linear complex systems you won’t understand them well - and if you are interacting with those systems, problems are likely to occur! I think the current economic and ecological crises we are facing could be addressed much better with more systemic thinking that takes into account multiple causes and influences, feedback loops and emergent properties.
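The point about emergence - that knowing each part's rule does not let you foresee the whole - can be seen in even a tiny program. As a minimal sketch (my own illustration, not an example from the talk), here is Wolfram's Rule 30 cellular automaton: every cell follows one fixed local rule, yet the row as a whole develops intricate, hard-to-predict patterns.

```python
# Rule 30: each cell looks only at itself and its two neighbours,
# yet the pattern that emerges across the whole row is famously complex.

def rule30_step(row):
    """Apply Rule 30 to every cell: new cell = left XOR (centre OR right)."""
    n = len(row)
    return [row[(i - 1) % n] ^ (row[i] | row[(i + 1) % n]) for i in range(n)]

def run(width=31, steps=15):
    """Start from a single 'on' cell and record each generation."""
    row = [0] * width
    row[width // 2] = 1
    history = [row]
    for _ in range(steps):
        row = rule30_step(row)
        history.append(row)
    return history

if __name__ == "__main__":
    for row in run():
        print("".join("#" if c else "." for c in row))
```

The rule for one cell fits in a single line, but the only way to know what the whole row looks like after many steps is to run it - a small-scale analogue of the watch-versus-ant-colony contrast above.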


Nick: Why do you think integrated systems thinking is not more ingrained in our culture, education systems, businesses and systems of governance? What impact does this have?



Robin: Linear, mechanistic thinking has been incredibly successful in achieving the main goals of our current society - economic growth, industrial development and understanding the material universe. I think the question we are starting to ask now is: are these goals meeting the more fundamental goals we assumed they would lead to - sustained happiness, fulfilment and well-being? There is a certain amount of material wealth that leads to well-being, but once an individual or community has enough to meet their needs, well-being stops rising in proportion to wealth and can even decrease.

So mechanisation and industrialisation may have worked very well in the past - but now we need to be more discerning about whether a new technology will make things better or worse. For example, a solution to urban pollution may be encouraging people to drive high-tech electric cars, but, as is being demonstrated in cities around the world, the lower-tech solution of cycling could be much better, as there are health and social benefits in addition to the environmental ones.

Creative people and organisations are already thinking systemically. It will take time for mainstream culture, education and politics to catch up. There is still a strong narrative about ‘growth’ and ‘progress’ being unquestioningly good. The more interesting conversation is ‘growth of what?’ and ‘progress towards what?’ I think a Living Systems perspective has a lot to say about good growth - that death and decay are part of life and need to be integrated into systems, rather than aiming for a clearly impossible state of endless growth. And it gives us a vision of what healthy living systems look like, to progress towards.


Nick: In the workshop you used entertaining games to demonstrate these ideas.


Robin: Yes, we ran a couple of complex system models using ourselves as the parts of the system. Each person followed a simple rule in how they related to others in the system, and we saw unpredictable, emergent group behaviours coming from very simple individual behaviours.

We also experienced a very important feature of complex systems - a tipping point. We were modelling the climate system and found that, after a period of chaotic movement, we settled into a stable arrangement, just as the climate has a number of relatively stable states, such as an ice age or the current temperate climate. We imagined human activities pushing the CO2 element of the system. When this person took a very small step forward, the whole group went into chaotic movement again. Often when you disturb a self-regulating system like this it will stabilise back to the original state. But in this case even that small movement of ‘CO2’ was too much, and the system went past a tipping point into chaos until it stabilised in a new state.

In the current climate situation, this could mean a period of chaotic and unpredictable climate change followed by a stable but much hotter climate that is not habitable for a large human civilisation. This is a very worrying possibility, and an example of how hard we find it, in the linear mode of thinking, to understand the implications of changing a complex system like the climate. Being non-linear, it will not necessarily go back to its current habitable state if we decide extreme weather events are becoming too damaging and stop adding CO2 to the atmosphere.

Realising the unpredictable and non-linear nature of complex systems can be worrying, but it can also be inspiring - small changes for good in the world can have a much bigger influence than we might expect. And as demonstrated many times over human history, cultural and social changes, like the Berlin Wall coming down, can happen surprisingly quickly once you reach a tipping point in the social dynamic.
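The dynamics described here - a system with more than one stable state, which tips and then does not return when the push is removed - can be sketched in a few lines. This is my own toy illustration, not the workshop's model: a standard bistable system, dx/dt = x − x³ + f, where x is the system state and f is an external forcing (think of it as CO2). For small f the system has two stable states near ±1; past a threshold (f ≈ 0.385) the lower state vanishes and the system tips.

```python
# A toy bistable system with a fold tipping point.
# State x follows dx/dt = x - x**3 + f; forcing f is ramped up and then removed.

def simulate(forcing, x0=-1.0, dt=0.01, steps_per_stage=2000):
    """Integrate the system with simple Euler steps, holding each forcing
    value long enough for the state to settle, and record where it ends up."""
    x = x0
    trace = []
    for f in forcing:
        for _ in range(steps_per_stage):
            x += dt * (x - x**3 + f)
        trace.append(x)
    return trace

if __name__ == "__main__":
    # Ramp the forcing past the tipping point, then switch it off entirely.
    trace = simulate([0.0, 0.2, 0.5, 0.0])
    print(trace)
    # The state starts near -1 and tracks the lower stable state while the
    # forcing is small; once f exceeds ~0.385 that state disappears and x
    # jumps to the upper branch - and stays there even after f returns to 0.
```

That final step is the hysteresis Robin describes: simply stopping the forcing does not bring the system back, because the original stable state is no longer where the system sits.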


Nick: The thing I took away from this was that the types of systems you are describing can’t effectively be controlled in the way we apply control to linear systems, yet we attempt to control them anyway using the same old tools and approaches. Historically the systems we have built as humans (as opposed to natural systems) have not been too complex to manage in this way, but we are rapidly approaching the development of complex artificial systems which operate more like natural systems, and may be beyond our complete control or even understanding. What do you think this means for the way we go about developing this kind of technology?


Robin: My question is: ‘will it be possible to create artificially intelligent systems that are not complex adaptive systems?’ I think probably not. I think we will find that that is what works, and the linear processing route becomes way too clunky. The question then is: will these systems still be within our control? Well, in short, no. In natural systems this is just how it is. Natural living systems - a field of plants, for instance - are too complex to fully control, and I think we need to let go of the aim of controlling nature and move to a more participatory model, where we recognise the value of natural systems, learn from them and then influence them to provide what we need in a smart way - like encouraging predator species to reduce pests rather than using a simplistic method like spraying with pesticides. So with natural complex systems we just can’t fully control them, and we’ll do better work if we recognise this and behave accordingly. But we currently do have control of the technical systems we are designing, and I think it is a very serious decision that humanity has to make about whether we go ahead and make tech systems that aren’t fully under our control. I’m glad that one of the conditions of sale when Google bought artificial intelligence startup DeepMind was that Google set up an ethics committee to deal with these sorts of questions.


If we are going to create complex intelligent systems that we don’t have full control over (and I am not advocating that we do), what could we do to make this safer and less likely to end in a HAL (from 2001: A Space Odyssey) or Terminator-style disaster?

Systems thinker Donella Meadows, in her article ‘Leverage Points: Places to Intervene in a System’, suggests that the most effective place to intervene in a system is “The mindset or paradigm out of which the system - its goals, power structure, rules, its culture - arises.” So what’s the mindset leading us to want to create artificial intelligences? I think a key focus of our current mindset is the value of knowledge and information. I think the more systemic equivalent of knowledge is wisdom. And though information and knowledge are important and useful, without wisdom they can also be very damaging - as in the example of knowledge about splitting the atom being applied to creating nuclear bombs.


Nick: You raise some big points there. Let me take these one at a time. Firstly, are you suggesting that it would be possible and desirable to create artificial systems which we relate to as partners rather than controllers? Our role then becomes one of participant rather than master-slave. Could this ever be an equal relationship? One of the most famous artificial intelligence systems in the world is IBM’s Watson computer, which beat the human champions on Jeopardy!. This AI is now being used to aid doctors in diagnosis. The final decision is not (yet?) left to the AI; rather, it rapidly searches the available medical literature and provides the doctor with its diagnosis, the process it used to determine it, and the most relevant papers it drew on. This strikes me as a kind of participatory model. The doctors involved don’t understand how the AI works, but it isn’t a black box either, because the decision (diagnosis A based on B and C) is presented. This will either give the doctor confidence in their own diagnosis, or a conflict which will force them to check their analysis. What happens, however, when the AI becomes so powerful, and the knowledge base it is searching so vast, that no human could ever understand the process it is going through? The chances are the AI will greatly outperform any human by objective benchmarks in such a scenario. Do we trust them?


Robin: I do think it will be possible to create artificial systems that we relate to as partners. I personally don’t think it is desirable. It could be in the future, but I think our technology and ability to affect the material world is running way ahead of our wisdom to decide if, when and how to apply it. The ecological crisis we are facing does not need a superhuman artificial intelligence to come up with solutions; there are plenty of brilliant, beautiful and elegant solutions for change at individual, community, and global levels that would allow us to live high well-being lives using a fraction of our current resource use.

It’s the social and personal changes that are the real challenges, and I’m not sure creating artificial intelligences will help with this; I think it’s more likely to continue distracting us from where the real change needs to happen. It is interesting to think what a superhuman AI would say about the state of the world. I imagine a very clever but compassionless intelligence would say “for the sake of continuing diverse life on earth and future generations of humans we need to kill three-quarters of the people on earth”! I could imagine an intelligence even with high compassion could come to the same conclusion. So you ask, would I trust them? I don’t think we can trust them to do what we want! I think we need wisdom in making choices about our use and development of AI, and if we are going ahead developing artificial intelligences we need to create not only intelligent but wise systems.


Nick: You suggest we need to develop wise systems but this is a difficult concept to even define. How would you define wisdom?


Robin: This is not easy. To answer this myself I started off by thinking ‘who is wise?’. And I think it’s quite indicative that if you do a search on the internet for wise people, it’s difficult to find any well-known people, alive today, from western culture - though this may partly be that wise people don’t tend to look for fame or positions of power. But it’s interesting that, though we don’t have many examples of wisdom in our culture, it is still a highly valued trait.

Paraphrasing Robert J. Sternberg in his book ‘Wisdom: Its Nature, Origins, and Development’, here’s a list of some key features of wisdom. A wise person:

    Has empathy and compassion

    Understands things deeply

    Is humble and aware of the limitations of knowing

    Can see things from many perspectives and avoids black-and-white thinking


How do we become wise? Again there are no clear answers, but there are a few things that I think can lead to greater wisdom:

    Being present

    Slowing down - creating space/silence

    Listening to your heart, body & soul as well as your mind

    Spending time with Nature

    Experience of life


Something I notice here is that a lot of the things in this list are made much harder by easy access to amazing technology like smartphones and the internet - which, though incredibly useful, can easily become ‘addictive’ and take away opportunities for stillness and reflection, which come to be seen as ‘wasting time’. But this is a separate debate!


If we are going to create intelligent systems that aren’t fully in our control then I think we need wisdom both in the processes that create the systems and in the systems themselves. So my questions to companies like Google who are currently working on systems like this are:


Have you got wise people working on these systems?

and

What would Artificial Wisdom look like & how would we create it?


Some of the things to think about might be: How would an artificially intelligent system have empathy and compassion, and be aware of the limitations of its knowing? Why are meditation and stillness so important for humans, and what would be the equivalent in a technical system? How do you create deep and broad understanding?


Nick: There seems to be a dominant mythology in our culture that technology is a modern invention, yet technology in the form of tools, fire and so on has been in use by humans for millions of years. Some people might look upon the pace of development and say it’s too much, that we need to take a step back. I see it slightly differently. I believe we need an evolution in the way we perceive technology. Rather than building technical systems which seek to dominate nature, we need systems which integrate seamlessly with natural systems and complement them without disrupting their essence. This would require a very different mindset in the development of those systems. Would such a system meet your definition of wise? Permaculture could possibly be described as such a "technology". Permaculture also allows for high-tech appropriate technology such as solar PV. Could this concept stretch to something like AI? How different would the world be if we could build permaculture ethics and principles into our technical systems?


To turn the whole thing on its head what about considering how an AI or AW might help us to understand and relate to natural systems in a positive and beneficial way. There is an obvious link between the natural complex system which is the human brain and artificial intelligence systems. Maybe by building systems like these and other forms of biomimicry we can better understand our place in nature?


Robin: I agree with you that we should be shifting our technical systems from ones that dominate nature to ones that integrate with and complement it. I wonder, though, if we will ever be able to create wise artificial intelligences based on natural systems when we have so little connection with natural systems ourselves. Studying biology in the way it is practised today, with its focus on the mechanics of life, does not necessarily give us a good sense of how to relate to and be part of living systems. I think we need direct and prolonged experience of the natural world to get a holistic understanding of the systemic nature of the world. Once we have this (and there are people and cultures in the world that have much more of it than the mainstream ‘western’ culture), perhaps we would have the ability to create artificial wisdom - but I think it more likely that we would wonder why anyone would want to do that in the first place, when there is such intelligence and wisdom already in the natural world.

This raises an interesting point: our culture seems to have lost the ability to question itself (one of the key aspects of wisdom). It seems nearly impossible to ask, in the long term, taking into account all of the possible implications: do we actually need and want any type of artificial intelligence? And if the answer is yes - why is that? Usually it’s because it makes life ‘easier’ (or more profitable), but it also makes life quicker and more resource-intensive, and disconnects us even more from our sense of being dependent on and part of nature. And this feeling of connection, of being part of something, is, I think, essential to our well-being as humans. We can have this in conjunction with our use of technology - but again, it needs wisdom to get the balance right.