Who Gets to Build the World We Will Live In?
Ontological Design for Back Loop Realities: the Threat of AI
This fall I am presenting three courses on Ontological Design for Back Loop Realities. If you missed them the first time around, or want to revisit them, you can read the introductory posts here:
Lyric Culture: Ontological Design for Back Loop Realities (pt 1)
Lyric Culture: Ontological Design for Back Loop Realities (pt 2)
Lyric Culture: Ontological Design for Back Loop Realities (pt 3)
In this post I want to discuss the question of Who gets to build the world we will live in? from the perspective of AI. This is a question of intelligence, agency, and ontological design. Today we are on the brink of accommodating the world to the substantial needs and significant weaknesses of AI — to nightmarish proportions.
Here is an essay version of a talk I am delivering to an internal working education forum for the upcoming UNGA and their Summit of the Future initiative.
Intelligence, Agency and World-Building - Education for the Future of All of Us
“Let’s keep education at the heart of the global agenda including the Summit of the Future this September. Let’s develop solutions and investment pathways for every country to develop true learning societies. And let’s support the dreams, ambitions and talents of every person, young and old, throughout their lives.” ~ UN Secretary-General António Guterres
“Over time, societies will make it more convenient to be a slave than a free man.”
~ Anonymous
We live on the cusp of a revolution that is changing how we think about intelligence, transforming the architecture of global agency, and challenging our assumptions about who gets to build the world we will live in. The AI revolution is being promoted as a tremendous opportunity, yet also portrayed as a grave danger. What does it mean for education in general, and for educators specifically?
As these societal and potentially global transformations are gaining momentum, my question for us here is:
What will the educators do? What will we do?
It is said that we are “training” AI, that machines are “learning” by themselves and that they are becoming “intelligent.” These three words – training, learning, and intelligence – aren’t they the purview of education?
The danger here at the dawn of the AI revolution is that the technocrats are redefining what these words mean. They are slowly but surely isolating them from real human experience and defining them as what machines are good at. (What we are doing is trading in machine-like metaphors for human intelligence — predictive processing, Bayesian computation and the like — and trading in human-like metaphors for what machines are doing.)
This has happened before with the industrialization of work and the urbanization of former farm families. Factories were designed around the needs of the machines, and workers were forced to accommodate them by performing tedious, repetitive actions at a fast but monotonous pace along an assembly line. As a result, work became a kind of drudgery, and schools followed suit, by training students to rigid clock time and seating them in linear rows.
As modern nations moved away from industrial production and into the knowledge economy, schools again followed suit. Knowledge, which is properly participatory knowing how, became redefined as “information” — propositionally knowing about. Huge amounts of information propagated through the knowledge economy, creating an entirely new consumer class seeking higher degrees by “downloading more and more information.” Education focused on training students to store highly standardized information and to retrieve it in highly standardized ways. As a result, school became another form of drudgery, preparing students to endure endless hours in front of screens along another kind of assembly line.

The economy became a closed loop which continuously compounds the information that flows through the system. People became both the producers of information and the consumers of it. Unlike the production of real goods and services, this system was almost frictionless. It led to the development of social media platforms and the world of competitive mimetics. The irony here is that what had been completely displaced from learning and work was the social field itself. And so the algorithms merely capitalized on this by producing a false sense of the social — one that was built on the continuous flow of “information” that had been gutted of real social context.
The standardization and routinization of information opened the doors for AI and its Orwellian distortions. AI cannot be trained in any skills, cannot learn, and is in no sense “intelligent.” These terms have all been distorted to fit what AI is actually good at — storing, sorting, statistically computing and weighing information. The illusion of “intelligence” is particularly effective in the absence of real-life social context. As educators, as experts in training, learning, and intelligence, we need to expose the illusion. Training is a word that applies to skills, and machines cannot master skills. Learning is a word that applies to affect-laden values that are meaningful to the organism and serve several functions:
(1) assessing situations in the context of experience; (2) expanding perception of the environment to search for affordances; (3) reasoning from available affordances to actual possibilities within the context of choice and action, which (4) themselves depend upon the degrees of freedom in the agent’s perception and skill set; (5) fine-tuning goals in real time in relation to changing circumstances (context plus environment); and finally, (6) evaluative reflexivity in the context of the experience.
“Intelligence,” therefore, is the successful outcome of learning in these terms. The net result of intelligence is that both the agent and the environment (i.e., other participating agents) learn from each other.
The illusion of machine “intelligence” depends upon the implementation of significant constraints that satisfy the machines’ needs and accommodate their substantial weaknesses.
The first egregious constraint is the machines’ need for standardization, routinization, and repetition. In other words, the need for drudgery.
The second constraint involves reshaping the terrain in order for the machines to be functionally mobile. This is a serious ontological commitment that would reduce the planet to monotonous, levelled, regularly shaped spaces that make machine mobility possible while depleting the environment of the richness that allows organic life to flourish.
The last constraint is even more sinister — the machines’ insatiable need for investment and energy. Today, the data centers that AI depends upon are competing with human needs for investment money, and forcing a scramble for large energy sources. At a recent talk at Stanford University, Eric Schmidt characterized the markets as “believing that investing in intelligence has infinite returns,” and said that the current demands of data centers are larger than all the energy that the United States is currently able to generate. At that same Stanford talk, Schmidt admitted that in this revolutionary new future, as nations compete for investment and energy and other resources needed for AI domination, well, as you know, he quipped, “the rich get richer and the poor do the best they can.”
If AI is not intelligent, can’t be trained, and doesn’t learn, then why does it pose such a threat to humans? According to the philosopher of digital information Luciano Floridi, AI represents a new kind of agency that is decoupled from intelligence. It is the kind of agency that the markets have — the agency of enforced compulsory social protocols. Let me explain.
Enforced compulsory social protocols, or ECSP for short, are social mechanisms which direct human agency in some directions and not others by two means: (1) lowering the action threshold toward the designated direction and (2) raising the barriers for action in alternative directions. For instance, each newly created financial instrument — from currency to credit, to credit cards, to PayPal and Venmo — lowers the threshold of purchasing, while central banks and regulatory systems (enforced by international law) erect barriers against alternative means. The system is obviously subject to failure and perturbations, but for most of us this means that we can never opt out of the system in order to live.
We experience the lower thresholds for action as easier and more convenient, but are unaware of the costs that are built into the system. With financial instruments, there is always a delta between the real owners of the instrument (the central banks) and the users. For example, the delta in using the dollar is the interest the banks assign to it — a debt obligation that grows with every dollar used. The same is true when banks transfer money to digital accounts — the thresholds are lower, but the delta only grows.
AI represents a more insidious ECSP on both fronts. It lowers the threshold for action to simple verbal commands. At his Stanford talk, Schmidt was actually giddy about these prospects: “Imagine just telling the AI to write all the code you need … no more complaining, snivelling coders to deal with!” This sounds to many like the power of God — “say it and thy will be done.” The machines will be designed to talk to us as if we were Gods, only to prop up the delusion. But in the event of AI domination by a single party, the delta will also be huge — everything we think, do and say will have to be run through AI as a governing constraint, binding all human agency to machine protocols that are thereby made compulsory for people to live. The magic trick is to deceive us that AI increases our agency, when in fact it steals it, because there is a secret agent in between our commands and their outcomes — the people who own and control the machines. Everyone else becomes a compulsory user. The real reason why people like Schmidt are so excited about AI domination is that they can see it will enable them to make the delta very, very, VERY large — gargantuan.
The bottom line is: AI has an alignment problem. It is not aligned with intelligence — biological or otherwise. It is not aligned with the geological and ecological terrain of the living planet. It is not aligned with the complex richness of the natural world. It is not aligned with human flourishing, freedom and equitable opportunities. In the face of these dystopic events, I am proposing that education be restored by a global council of educators who are not merely concerned that people are equitably schooled according to the needs and constraints of the economy of machines (aka, the Internet of Things), but insistent on the rights of persons to be educated toward the futures they would choose to live in, given sufficient degrees of freedom and adequate opportunities for participation. To that end, I propose a countervailing revolution that insists on equal global investment — in both financial and human resources — in studying and developing unexplored possibilities for biological intelligence that are naturally aligned with the values of the living world. It would be a travesty of great proportions if these demands are not made, as organizations like the UN roll out advances in education for a global society eager to learn.
Our ontological design series this fall is our way to voice alternative pathways for building the world we will live in. Sign up as a paid subscriber to read the course content, join the live online sessions (five 2-hour sessions each month) and watch the video recordings.