What We See Depends on What We Can Hold: When the Puzzle Outgrows the Table

“Your perspective is always limited by how much you know. Expand your knowledge and you will transform your mind.” ― Bruce Lipton

I was helping my son with a jigsaw recently. The jigsaws have grown in size as he has. Long gone are the toddler puzzles with oversized pieces you could solve with your eyes closed (luckily, because I was half-asleep in those early parenting years). This jigsaw had 750 pieces: a sky in a single shade of almost-purple and trees indistinguishable from one another. I found myself regularly stuck.

Some clusters stayed undone for days. I worried my son would abandon it, but then two awkward fragments that had resisted every attempt finally clicked together. At one point we almost ran out of room on the table. That is a lesson in itself: you can’t see how things fit when your surface is too small. You start stacking pieces, losing them under the box, forgetting combinations you’ve already tried.

I was reminded of George Miller’s famous paper arguing that we can hold only about seven items in working memory, plus or minus two. That’s not much of a table. Many pieces fall off the edge long before we can see how they might connect.

As John Sweller — the cognitive psychologist behind cognitive load theory — put it,

“Working memory… is limited in capacity and duration if dealing with novel information.”

His cognitive load theory suggests that once the mind’s table is too full, our broader understanding collapses. It isn’t intelligence that fails us; it’s our cognitive capacity (our table of the mind). We can’t hold enough pieces in view long enough to see the picture waiting to emerge.

Pieces We Can’t Hold Alone

That jigsaw moment brought to mind something Elliot Aronson shared on The Innovation Show. In the early 1970s, Aronson — one of the most influential living social psychologists — was working in newly desegregated schools in Austin, Texas. While the legal barrier had fallen, the psychological one had not. Children from different racial backgrounds were suddenly placed in the same classrooms, yet understanding didn’t follow. In many cases, tensions worsened.

Aronson realised that the traditional classroom made every child a rival for the teacher’s attention. Under those conditions, children saw one another as competitors.

His answer became the Jigsaw Classroom.

Aronson broke lessons into fragments and gave each student just one essential piece. No one could grasp the full topic alone. The only way to see the picture was to sit together, listen, and rely on someone else’s fragment. Each child held something the others needed.

The atmosphere changed and empathy increased because the task made every child a resource for someone else. The whole picture emerged through the combination of their fragments, rather than individual effort.

The message is that no single person holds enough of the picture alone.

There was a time, of course, when a single person could hold most of the pieces — the world of Renaissance polymaths.

When One Mind Could Hold Enough

“The knowledge of all things is possible.” ― Leonardo da Vinci

Not to take anything away from such greats as Da Vinci, but he could sketch anatomical drawings in the morning, design a flying machine in the afternoon, and paint into the night because the boundaries between fields were looser and the volume of recorded knowledge was modest enough for a single mind to wander it. Waqās Ahmed, a former guest on The Innovation Show, wrote about how Leonardo’s range wasn’t superhuman so much as suited to a world where knowledge was still relatively unified. Back then, specialisation was limited and people worked across a breadth of tasks. Today, widespread specialisation thwarts creativity, confining people to their swim lanes of expertise. Polymaths flourished when patronage systems allowed them to roam outside their lanes. During the Renaissance, the table was smaller and the jigsaw pieces were fewer.

That world is gone. Samuel Arbesman captures this in The Half-Life of Facts. Knowledge no longer accumulates gently; it accelerates at an exponential rate. A fact you learned two decades ago may already be outdated. Sometimes even a fact you learned two days ago. Whole scientific domains double within a working lifetime. The puzzle has swollen far beyond the reach of any individual workspace.

Arbesman’s point is echoed by others who have tried to take the long view. The great innovation thinker and architect Buckminster Fuller once estimated that all human knowledge from our earliest ancestors to the birth of Christ amounted to a single “knowledge unit,” and that it took another 1,500 years to double it. After that, the doubling interval kept shrinking.

The physicist and science historian John Ziman, whose work examined how knowledge systems grow, later suggested that global scientific activity doubles roughly every fifteen years — a pattern also known as Ziman’s Law.
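A doubling pattern like Ziman’s is easy to put into numbers. The sketch below (a toy calculation: the clean fifteen-year doubling period is only a rough empirical estimate, not a law of nature) shows how quickly the multiplication compounds over a single working life:

```python
def growth_factor(years, doubling_period=15):
    """Factor by which activity multiplies after `years`,
    assuming it doubles every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# Over a 45-year career, fifteen-year doubling means an eightfold expansion.
print(growth_factor(45))   # 8.0
print(growth_factor(100))  # roughly a 100-fold expansion in a century
```

Under these assumptions, a researcher who started work 45 years ago now faces a field eight times the size of the one they entered — which is the sense in which the puzzle outgrows any one table.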

The picture keeps expanding. The pieces multiply. The puzzle has outgrown the individual table.

So, if a single person can no longer roam the full landscape, how do we continue to make sense of it?

One way is through the kind of breadth David Epstein explores in Range. His argument isn’t that generalists know more; it’s that they know differently. They’ve wandered, sampled, and moved sideways. Generalists, like polymaths, carry varied fragments from unexpected domains — fragments that often sit dormant until the right problem comes along and those stray pieces suddenly form a bridge no domain specialist could see from their lane.

Generalists survive complexity not because they out-think specialists, but because they out-connect them. They have more edges to test, more varied jigsaw pieces to connect.

But even the best-connected minds face the same biological limits. Miller’s seven-plus-or-minus-two still applies, as do Sweller’s constraints on working memory. If anything, our capacity has weakened. Many of us now experience a kind of digital dementia — outsourcing the wrong things to machines while feeding the mind a diet of short-form fragments. All of this becomes even more challenging when set against Arbesman’s observation that knowledge expands exponentially. Each year the puzzle grows faster than our ability to hold the pieces, and more of them spill off the table.

This is where technology — used in the right way — can play an outsized role.

In our recent 3-part series with Manu Kapur, the learning scientist known for pioneering Productive Failure, he explains that learning deepens when we stay with a problem long enough to form structure. That productive struggle collapses, however, when the table is saturated. Offloading the overflow keeps the struggle intact while removing the part that suffocates it.

AI is entering that space, or rather, enlarging our space.

Don’t Take The Bait — Emergence or Not?

“What we call chaos is just patterns we haven’t recognized.” ―Chuck Palahniuk

“Very often, we can’t see the larger web of connections that might make a system behave in unwanted ways.” — Jamais Cascio and Bob Johansen, Navigating the Age of Chaos

In preparing for the forthcoming episode of The Innovation Show with Jamais Cascio and Bob Johansen, I came across a small story that captures this perfectly. In 2021, a drug dealer in the UK posted a photo of his hand holding a block of Stilton cheese. From that one image the police were able to extract fingerprint data and identify him. The dealer had used encrypted messaging, avoided showing his face, and even turned off metadata. None of it mattered once the image-analysis tools had enough resolution and context to see what he had assumed harmless.

The photo had been posted on the EncroChat messaging app.

It reminded me of the jigsaw I was working on with my son — those moments when two pieces that made no sense for days suddenly snapped together because the surrounding picture had grown large enough for their relationship to become visible. The pieces were always connected; we just lacked the context.

That same dynamic underpins what is often considered “emergent” behaviour in AI.

A much-discussed early paper suggested that certain abilities appear unpredictably once a model crosses a mysterious size threshold — as if intelligence simply switches on. But more recent work, including Jin’s excellent Medium essay and the analysis reported in Quanta, suggests something far more grounded.

What looks like a leap is really a capacity threshold — the moment when the model finally has enough parameters and enough varied, high-quality data to stabilise a pattern that was already there. The behaviour isn’t emergent. The pattern is.

It is the pattern that was waiting to be noticed, not the AI that suddenly became clever.

And the scaling tells the story:

GPT-2 lived in a world of 1.5 billion parameters.
GPT-3 expanded to 175 billion.
GPT-4 reportedly operates in the trillion-parameter range.
GPT-5 continues that trajectory not only in scale, but in the tools and controls around it — the parameters we use to shape how it thinks and how it interacts with external systems.

Each generation also gained access to new kinds of data, often from previously absent domains. When researchers switched from all-or-nothing scoring to more sensitive measures — partial progress, incremental accuracy — those dramatic jumps flattened into smooth curves. The learning was continuous. It was our measurement that wasn’t.
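That measurement effect can be sketched in a few lines. In this toy simulation (every number is an illustrative assumption, not real benchmark data), a model’s per-step accuracy improves smoothly with scale, but a ten-step task scored all-or-nothing appears to “switch on” between parameter counts:

```python
import math

def per_step_accuracy(params):
    """Toy assumption: per-step accuracy improves as a smooth logistic
    curve in log-parameter space. Purely illustrative."""
    return 1 / (1 + math.exp(-1.5 * (math.log10(params) - 10)))

TASK_LENGTH = 10  # exact-match scoring only counts a run of 10 correct steps

for params in [1.5e9, 1.75e11, 1e12, 1e13]:
    p = per_step_accuracy(params)
    exact_match = p ** TASK_LENGTH  # all-or-nothing: every step must be right
    print(f"{params:>8.1e} params | per-step {p:.2f} | exact-match {exact_match:.2f}")
```

The per-step column climbs gradually, while the exact-match column sits near zero and then leaps — the same underlying numbers, read through two different rulers. The “jump” lives in the metric, not the model.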

From our perspective, ability appears suddenly.

From the model’s perspective, nothing sudden emerged at all.

The table simply became large enough to hold more diverse pieces in parallel.

AI doesn’t replace human thought.

It expands the workspace and gives us the bigger table that individuals — and even institutions — can no longer build alone.

This blog has benefited enormously from collecting a wide range of jigsaw pieces. Hosting The Innovation Show has given me access to amazing thinkers across disciplines — neuroscientists, futurists, economists, psychologists, technologists, anthropologists, historians, organisational theorists, learning scientists, and all those still to come. Each guest offers a fragment from a different corner of the puzzle, and over time those fragments start to speak to one another.

Many of the fragments in today’s essay come from people who have already joined us on The Innovation Show over the last decade. Elliot Aronson’s work on cooperation and human biases, Waqās Ahmed’s insights into polymathy, Samuel Arbesman’s understanding of how knowledge accelerates, David Epstein’s exploration of breadth, and Manu Kapur’s work on productive failure all sit somewhere on the table. Two recent pieces of writing also played a part: Jin’s thoughtful Medium article on how model behaviour scales, and the Quanta analysis explaining why so-called “emergent abilities” in AI are better understood as capacity thresholds grounded in data richness and dimensional space.

And as I read Navigating the Age of Chaos to prepare a 2-part episode with Jamais Cascio and Bob Johansen, it added further pieces to the jigsaw.

What We See Depends on What We Can Hold: When the Puzzle Outgrows the Table was originally published in The Thursday Thought on Medium, where people are continuing the conversation by highlighting and responding to this story.

The post What We See Depends on What We Can Hold: When the Puzzle Outgrows the Table appeared first on The Innovation Show.
