Learning Without Thinking
BY JACOB BROWNING
As machine learning continues to develop, the intuition that thinking necessarily precedes learning — much less that humans alone can learn — should wane.
“Mindless learning.” The phrase looks incoherent — how could there be learning without a learner?
Learning — broadly defined as improving at solving a given task over time — seems to require some conscious agent reflecting on what is happening, drawing out connections and deciding which strategies will work better. But contrary to that intuition, learning is possible in the absence of any thought, or even the capacity to think: agentless, mindless learning. In fact, this kind of learning is central to contemporary artificial intelligence.
Tracing a history of the idea of mindless learning can replace our anthropocentric intuitions about learning and thinking with an awareness of the different ways — both natural and artificial — that problems can be solved. It can also reshape our sense of what is capable of learning, and the benefits attached to non-human kinds of learning.
As it is commonly understood, thinking is a matter of consciously trying to connect the dots between ideas. It’s only a short step for us to assume that thinking must precede learning, that we need to consciously think something through in order to solve a problem, understand a topic, acquire a new skill or design a new tool. This assumption — an assumption shared by early AI researchers — suggests that thinking is the mechanism that drives learning. Learning depends on reasoning, our capacity to detect the necessary connections — causal, logical and mathematical — between things.
Think of how someone learns to grasp a few geometric proofs about the length of lines and then the area of squares, moving and turning imaginary shapes in their head until they discern how the pieces relate. Identifying the essential features of lines and squares allows them to draw out necessary connections between other shapes and their interrelations — using old rules to generate novel inferences about circles, triangles and a host of irregular shapes.
“Learning is possible in the absence of any thought or even the capacity to think.”
Our capacity to reason so impressed Enlightenment philosophers that they took this as the distinctive character of thought — and one exclusive to humans. The Enlightenment approach often simply identified the human by its impressive reasoning capacities — a person understood as synonymous with their mind.
This led to the Enlightenment view that took the mind as the motor of history: Where other species toil blindly, humans decide their own destiny. Each human being strives to learn more than their parents and, over time, the overall species is perfected through the accumulation of knowledge. This picture of ourselves held that our minds made us substantively different and better than mere nature — that our thinking explains all learning, and thus our brilliant minds explain “progress.”
“Where other species toil blindly, humans decide their own destiny.”
Many philosophers and scientists have argued that associative learning need not be limited to explaining how an individual animal learns. They contend that arbitrary events in diverse and non-cooperative agents could still lead to problem-solving behavior — a spontaneous organization of things without any organizer.
The 18th-century economist Adam Smith, for example, formulated the concept of the “invisible hand of the market,” which revealed the capacity for millions of localized interactions between strangers to push a market toward a dynamic, efficient and thoroughly amoral deployment of resources over time. This perspective makes it possible to cast off the rationalist conviction that the mind is the acting force of history and that all progress is the result of “geniuses” and “great men” (because it was always men in these male philosophers’ telling).
Rather, “progress” — if it is appropriate to use the term — in politics, language, law and science results not from any grand plan but instead from countless, undirected interactions over time that adaptively shape groups toward some stable equilibrium among themselves and their environment.
As Charles Darwin saw, this kind of adaptation was not unique to humans or their societies. The development of species emerges from chance events over time snowballing into increasingly complex capacities and organs. This doesn’t involve progress or the appearance of “better” species, as some assumed at the time. Rather, it suggests that members of a species will eventually develop an adequate fit with other species in a given environment. As such, mindless learning is more natural and commonplace than the minded variety we value so highly.
“‘Progress’ in politics, language, law and science results not from any grand plan but instead from countless, undirected interactions over time.”
New mindless learning machines — like the headline-grabbing GPT-3, a tool that uses deep learning to create human-readable text — help point toward the expansive possibilities of non-human intelligence. GPT-3 is trained on millions upon millions of texts with words hidden from view, forcing the machine to guess each hidden word — at first, essentially at random — and then telling it how far its answers fall from the original. In the process, it learns when to use a noun instead of a verb, which adjective works, when a preposition is needed, and so on.
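The core of this kind of word-guessing can be sketched in miniature. The toy model below is not GPT-3’s architecture (which uses deep neural networks over billions of parameters); it is a hypothetical, stripped-down illustration of the same mindless principle: a program that gets better at guessing a hidden word purely by tallying which words tend to follow which in its training text, with no understanding anywhere in the loop.

```python
from collections import Counter, defaultdict

# A toy, hand-picked training text (an assumption for illustration only).
corpus = "the cat sat on the mat and the cat ate the fish".split()

# "Training": count, for each word, which words follow it.
# No reflection, no reasoning -- just accumulated statistics.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def guess_next(word):
    """Guess the hidden word after `word` by picking the most frequent follower."""
    if word not in follows:
        return None  # never seen this word in a leading position
    return follows[word].most_common(1)[0][0]

# "cat" follows "the" more often than "mat" or "fish" do, so the
# statistics alone produce a sensible guess.
print(guess_next("the"))
```

Nothing in this program knows what a cat is; the “learning” is simply the frequency table adapting to the regularities of the text, which is the essay’s point writ small.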
Many critics rightly note that GPT-3 doesn’t “understand” the world it is talking about, but this shouldn’t be seen as a criticism. The algorithm adapts to the world of text itself — the statistically relevant ways humans deploy symbols — not to the world as such. It does not occupy our niche, the niche of social beings who use language for diverse reasons, but its own: the regular interplay of signs.
Undermining the anthropocentrism of earlier assumptions, GPT-3 displays a wildly diverse facility with language. It can write poetry in many styles, code in many programming languages, create immersive interactive adventures, design websites and prescribe medicines, conduct interviews in the guise of famous people, explain jokes and generate surprisingly plausible undergraduate philosophy papers.
Mindless learning proves that many of our ideas about what a mind is can be broken into distinct mindless capacities. It pushes us to ask more insightful questions: If learning doesn’t need a mind, why are there minds? Why is any creature conscious? How did consciousness evolve in the first place? These questions help clarify both how minded learning works in humans, and why it would be parochial to treat this as the only possible kind of learning AI should aspire to.
“Much human learning has itself been mindless.”
We should be clear that much human learning has itself been, and still is, mindless. The history of human tools and technologies — from the prehistoric hammer to the current search for effective medicines — reveals that conscious deliberation plays a much less prominent role than trial and error. And there are plenty of gradations between the mindless learning at work in bacteria and the minded learning seen in a college classroom. It would be needlessly reductive to claim, as some have, that human learning is the only “real learning” or “genuine cognition,” with all other kinds — like association, evolution and machine learning — as mere imitations.
Rather than singling out the human, we need to identify those traits essential for learning to solve problems without exhaustive trial and error. The task is figuring out how minded learning plays an essential role in minimizing failures. Simulations sidestep fatal trials; discovering necessary connections rules out pointless efforts; communicating discards erroneous solutions; and teaching passes on success. Identifying these features helps us come up with machines capable of similar skills, such as those with “internal models” able to simulate trials and grasp necessary connections, or systems capable of being taught by others.
But a second insight is broader. While there are good reasons to make machines that engage in human-like learning, artificial intelligence need not — and should not — be confined to simply imitating human intelligence. Evolution is fundamentally limited because it can only build on solutions it has already found, permitting only limited changes in DNA from one individual to the next before a variant is unviable. The result is (somewhat) clear paths in evolutionary space from dinosaurs to birds, but no plausible path from dinosaurs to cephalopods. Too many design choices, and their corresponding trade-offs, are already built in.
If we imagine a map of all possible kinds of learning, the living beings that have popped up on Earth take up only a small territory, and came into being along connected (if erratic) lines. In this scenario, humans occupy only a tiny dot at the end of one of a multitude of strands. Our peculiar mental capacities could only arise in a line from the brains, physical bodies and sense-modalities of primates.
Constraining machines to retrace our steps — or the steps of any other organism — would squander AI’s true potential: leaping to strange new regions and exploiting dimensions of intelligence unavailable to other beings. There are even efforts to pull human engineering out of the loop, allowing machines to evolve their own kinds of learning altogether.
The upshot is that mindless learning makes room for learning without a learner, for rational behavior without any “reasoner” directing things. This helps us better understand what is distinctive about the human mind, at the same time that it underscores why the human mind isn’t the key to understanding the natural universe, as the rationalists believed. The existence of learning without consciousness permits us to cast off the anthropomorphizing of problem-solving and, with it, our assumptions about intelligence.