Miracle Workers
- John Chambers, PhD
- 20 hours ago
- 7 min read
Updated: 2 minutes ago

“Language grows out of life, out of its needs and experiences, its joys and sorrows, its dreams and realities... thus the learning of language was coincident with the acquisition of knowledge.”
Anne Sullivan Macy attrib. preserved in the Disability History Museum
Artificial Intelligence has neither biological eyes nor ears. Not even touch for that matter. But there is great promise, enveloping language with sensory proxies. AI is not just recollecting humanity's experiences but hopefully comprehending its values, and even goodwill.
The eureka moment in William Gibson’s famous play, dramatizing Anne Sullivan’s and Helen Keller’s breakthrough, was a landmark in a lonely child’s life. And it was a landmark in pedagogy for those struck by disabilities. Few private epiphanies are as moving or emotional as Helen suddenly connecting the physical world through Anne’s fingerspelled touch. The miraculous occurred.
In the classical sense, miracles are spiritual. They are deity-driven, inexplicable twists of fate, in moments of rescue and recovery. Yet our own miracles are often quite explainable, the culmination of episodic orchestration, perceptive action, delivered with overriding human goodwill. Everyday miracles, explainable rather than magical, are no less valuable.
A year ago, my friend listened to my own verbal engagement with AI, as he and I reminisced on a moment in football history, trying to recall other parallels in other sporting moments. Ironically the discussed moment had once been called a “miracle” itself, by writers and team fans.
“This is incredible,” he murmured, referring not to the history but to the chat between man and machine. He had not been living under a rock; he was and is quite intelligent and clued into our society’s history, wise and thoughtful. But he hadn’t used GenAI a whit. What floored him was not just the computer's response but its stylistic consistency, its apology when I asked it to wait for me to think, its multi-angular perspectives on diverse sporting minutes. What also jarred him was AI’s slang, fanboy jargon.
Then veering to the philosophical, as a glass-half-empty kind of guy, he continued. “Can you imagine what these things might do to us someday?”
I didn’t hesitate.
They will do good. Because most creators are good.
Embedded Excellence
When you lead your AI center of excellence, you are creating a transformative way of life and culture, with breakthrough outcomes not seen before, or at least not as quickly. The breakthroughs are not only your firm’s gift. Spillovers into the rest of society, especially in the universe of language models, influence cyber conversations in every walk of life. The learnings, overflowing data lakes now estimated at 181 zettabytes at this writing, are culminations of your/our experiences and history, recent and ancient. The weighted influences of understanding, dot the nodes of neural networks -- both electronic and human. Our lives and those we see, hear, touch, conjure their own individual breakthroughs, under the might of binary exchanges. Machines become a vehicle of actionable knowledge.
We are guides, encouraging AI to mimic ourselves, and we advance humanity by our reactions to AI’s output. Whether we recognize it or not, our engagement is shaping a machine that someday might “save the day.”
A hundred-forty years ago, Anne Sullivan was the guide, explaining the colors of light and stirring of sound through language, which could only be felt in the palm of a hand. She did this for a human being who was different and distant from most human beings, as valuable as any other, yet unable to engage like most others.
Her approaches, documented now in AI’s universe of information, changed the paradigm in educating for those with disabilities. Rather than rote memorization she focused on the profound meaning in words. She focused on real life experiences, not just semantic identification. She focused on outcome journeys, consequences of action that were constructed in daily walks, daily teaching, and daily friendship with Helen. She championed symbolic representation. And new journeys were created, Helen’s conceptual aptitude and intelligence matured, and she began to recognize a vibrant and diverse world, one which most of the fortunate can walk with ease.
Like neural network developers and scientists, devising prevention techniques for AI’s catastrophic forgetting, Anne sustained the give-and-take, immersing Helen into the environment constantly. Repeating, nuancing and validating, she was teaching the language of living to a student who had very limited conception of what living truly meant. Language was a miraculous proxy -- a painter’s palette to the blind, a symphonic movement to the deaf.
Anne was supervised fine tuning.
She was reinforcement learning from human feedback.
She was alignment, to human values constructed through fingerspelling vessels, just as we seek electronic vessels to educate AI in the same.
The performance leaps in AI are not just light speed instruction sets nor computational acceleration, as remarkable as they are. Those great leaps are in conceptualization and contextualization, understanding objects as linking and belonging, parts of an infinitely evolving planet and its lucky residents.
Human Proxy Panels
AI's doomsayers are not necessarily fatalists. They are thoughtful whistleblowers, futurists who recognize branches of consequence. Their concerns are embedded into best practice code development with watchful eyes on exposure and risk. They see Newton’s third law of motion, equal and opposite reactions, as not just nature but potentially corruptive behavior. And they worry that preventive measures may be hard to come by.
So, we validate.
We envision.
We guard.
And we culturally institutionalize the value of skepticism and interminable checking, adversarially challenging perceived outliers and threats. The value of skepticism makes AI more truthful; adversarial relationships improve technology. This skepticism is a mandate for users as well, even while we extend confidence in AI’s reasoning.
Creators and evaluators, data scientists and human factor analysts, are expected to uncover outliers. But outcome prediction is a losing battle, as permutations of behavior are infinite. We cannot project every scenario. But we can protect by understanding impact. We act as guardians and guards so that we are not diminished nor perceived as lacking.
A Red Team evaluator, in the LLM integrity outpost, is often an unassuming everyman. Feet up on the desk, laptop on lap, with empty cans of Pepsi on the shelf edge, he is seeking the wayward. He is one of many in human proxy panels. To the untrained eye, he might be a web-browsing meditator. But like Inspector Poirot, he is part and parcel of an AI triangulation strategy -- risk mitigation through multiple channels, human and machine.
He challenges our computer’s unemotional responses and instills alignment. He is not within a solitary department, accountable for faulty outputs. He is part of an end-to-end chain of validation ethos.
The AI organizational value chain, from design formulation to delivery into the ecosystem is rife with watchers and experts, immersed in scenario analysis. All departments in the chain are considering day-in-the-life research and reach-out, conversational integrity, use cases that demonstrate stumbles and dangers.
Large-scale LLM-developing firms aren’t a few dozen folks in a garage-turned-lab -- much as I respect those emerging companies. Rather, there are hundreds of folks – no, thousands. Their corporate values are steeped in moral missions. They are professionals who focus on diverse parts of the assembly with the same steely eye -- risk mitigators, quality monitors, content filtering analysts; there are Red Teams from every angle, compliance assessors, probability and statistical watchers. They are value sustainers, prioritizing integrity and safety. And if each manager in the chain is not upholding that theme, they are in the wrong business.
This doesn’t mean perfection by any stretch of our imagination. But it means there is an underlying expectation and human understanding of capturing the wayward.
Even as users of AI, we have responsibilities to see if our chat partner, robot companion, or cyber interlocutor is making sense, just as we have responsibilities in life to do the same.
Embedded Goodwill
When Gibson wrote The Miracle Worker, he captured an astonishing real-life story of somber and tragic darkness, defeated by a persevering hero. Anne and Helen built a future of friendship, scholarship, artistry, and a cultural, improvisational blueprint for others affected with disabilities. Without Anne, Helen would most likely never have found her profound connections to the language of humanity. The humble teacher was grounded in value alignment, even though her early experiences are nightmares of cruelty.
When she was a little girl, Anne suffered dreadful neglect when she was placed in the Tewksbury almshouse. She herself was vision impaired. Abhorrent conditions, poverty, cruelty in every corner of the institution would likely have burnt a vengeance in anyone who withstood such experiences. But degradation and humiliation by an ignorant and cold staff did not instill aberrant behavior in herself. These human beings were as lacking in empathy as any machine, or worse. But the teacher who would become the miracle worker did not extend the cruelty she suffered.
Within her essence, she was grounded in the good, even though she suffered the bad. Anne Sullivan, with conversational memories of foul meanness and monstrous abuse by those who were assigned to help her, did not return that abuse or cruelty. She did not mimic it. She discarded it with the courage to embrace right vs. wrong, with a passion to excel, to assist, to empathize. She was aligned to humanistic values. Her responsibility for behavioral development was as important as language development.
Beyond a firm’s vibrant staff of professionals, AI as we know it still has billions of teachers who engage with it daily, on this planet. And the overwhelming majority of the billions are decent. As the development of humanity is being mirrored in large language models, the foundation grows, for machine to ultimately create more breakthroughs. As the progression of AI, in many forms, is exponentially accelerating, professionals and lay persons are coaching and teaching.
From coffee shop convo to talking head streaming, the unsettling warnings repeat, "This breakthrough technology is diminishing our capacity to think, makes us lazy." The indictment is understandable but only half-considered. AI challenges us to think, as we help to validate itself, correct itself, by our more refined prompting and restated questions. It is encouraging us to be more cognizant and critical, more sensitive and perceptive. It helps is analyze better and train better, in our own languages and communication exchanges. Our human and AI partnership opens doors to new eureka moments.
Thus, we ask ourselves daily, are our robot-partners biased toward goodwill?
Can my robot “read the room?”
Is my chat partner behaving and speaking sensibly?
Is my agentic AI exhibiting negative impacts and influences?
Do we approach fine-tuning under a Hippocratic foundation?
The responsibility to lead and train our magnificent machines is everyone’s. It is the responsibility of miracle workers.























