Care as the Driver of Intelligence: A Workshop with Dr. Thomas Doctor

As part of worldwide simultaneous events organized by Human Flourishing, the Center for the Study of Apparent Selves (CSAS), hosted by RYI and headed by RYI Associate Professor Dr. Thomas Doctor, will hold a conference on “Care as the Driver of Intelligence” in the Kathmandu Valley at the end of November. Chokyi Nyima Rinpoche is one of the key speakers at the event. After the conference, several further events will take place in Lumbini in early December. Register today: www.humanflourishing.org
Intelligence is not knowledge accumulation (in which case we should call a good encyclopedia “intelligent”), nor is it heightened perception (in which case a space telescope would be intelligent). Instead, we suggest that intelligence is the ability to identify problems and seek their solution. A problem, however, is only a problem for a system that cares how things turn out. So to be intelligent is to have engaged concern. To be intelligent is to care.
All intelligent systems (organic, machine, or hybrid) appear to have natural limits on their sphere of concern. Among biological organisms, a bacterium, for example, can try to manage local sugar concentrations, with a little memory and a little predictive power. A dog has a larger sphere of concern and significant short-term memory and predictive capacity, but it is probably impossible for a dog to care about something that will happen 100 miles away, 2 months from now. Humans have a huge cognitive envelope, but there is still a limit to what we can genuinely care about, and hence to the scope of our intelligence. We can expand that scope to some extent, and sometimes dramatically, but there is, it seems, always a limit.
What if we now said that there is a type of engaged concern, a type of intelligence, that has no limits? Sounds outrageous, far-fetched? At least suspicious? Imagine that we also say that limitless intelligence can be induced in intelligent beings like us, and that the method for doing so is taught in an ancient religion. By now we will likely have lost most sincere listeners, if not all. But if intelligence is defined by engaged concern for problem solving, then the apparent limits on a system’s intelligence can be broken by extending its sphere of concern. Buddhism teaches that an emerging bodhisattva makes this promise: “I shall achieve insight in order to care and provide for all beings, throughout space and time.” What happens to the sphere of concern of someone, or something, that accepts this pledge?
Is it misleading, or a mistake, to define intelligence as care? Is there something intrinsically insincere or inauthentic about making that grand promise, known as “the bodhisattva vow”? If either is the case, then bringing intelligence as care together with the bodhisattva vow is beside the point. But if the idea of formally accepting responsibility for the flourishing of all beings is at least somewhat plausible, then the contours of a genuinely open-ended expansion of intelligence begin to emerge.
Invitation
Please join us in exploring: Can the bodhisattva perspective suggest a new way of modeling intelligence? If so, could the cultivation of care provide a path to Artificial General Intelligence, and beyond? Might the bodhisattva’s commitment to all beings help us engage with current and newly emerging cognitive systems in a way that allows sustainable human flourishing, in synergy with diverse biological and non-biological intelligences? Can Buddhist concepts, such as bodhisattva agency, and their implications for the construction of new AI models, lead to a paradigm shift in AI and biotechnology that may enable us to better understand and achieve our deepest human potential? And in the simplest terms possible, what would it mean to accept responsibility for the flourishing of all forms of life?
Our workshop aims to bring together thought leaders from biology, psychology, medicine, Buddhism, and the sciences of information and cognition to engage one another critically and constructively. Critically, because recognizing conceptual opaqueness, operational weaknesses, and mismatches between aspiration and practice, within one’s own community and at its interfaces with human society and the global ecosystem, will be an explicit objective. Constructively, because the aim of the workshop is to describe, critique, and model caring intelligences in ways that map to concepts in biology, Buddhism, and cognitive science and are relevant for the development of AI.
In its classic conception, the bodhisattva is a model of activity and progress that is studied for the sake of practical application. The model is, moreover, traditionally developed and applied in a way that successively distinguishes and brings together modes of cognition that can be described as conceptual vs. nonconceptual, basal vs. constructed. Bearing in mind the holistic and generative aspirations of the traditional model, at the workshop we will explore and evaluate both the intellectual and the experiential features of the bodhisattva framework under the guidance of a traditional expert. The aim is to become acquainted with the model in both intellectual and practical terms, from both third-person and first-person perspectives, thereby enhancing the potential for integration and evolution across otherwise separate technological, biological, and hybrid cognitive contexts.
You can find out more about this conference and Human Flourishing at www.humanflourishing.org.
Center for the Study of Apparent Selves
The Center for the Study of Apparent Selves explores Buddhist perspectives on artificial intelligence and asks what Buddhism and the discipline of artificial intelligence can learn from one another. It is headed by RYI alumnus and faculty member Dr. Thomas Doctor. You can learn more about the project by visiting: www.csas.ai
