The AI Outlier’s Tax


We are currently in a week dedicated to celebrating neurodiversity, yet for many of us navigating the high-pressure corridors of tech, "celebration" often feels like a formal exercise, a checklist completed to ensure a baseline of inclusion. While these initiatives are well-intentioned, they sometimes overlook the deeper, structural friction of living as an outlier in an increasingly automated world.


As an AI strategist, I see the push every day toward a "frictionless" existence: cleaner data, more predictable models, and seamless user experiences. But there is a fundamental tension here that we rarely discuss: AI, by its nature, is a technology of the mean. LLMs are designed to predict the most probable outcome based on vast datasets. They are optimized for the "median." If your way of thinking, processing, or perceiving information falls outside that central curve, the system doesn't always see you. Instead, the interface often attempts to "correct" or "smooth over" the very perspectives that make us unique.
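A minimal sketch of that "technology of the mean" point, using an invented toy distribution rather than a real model: greedy decoding always returns the single most probable option, so the outlier phrasings never surface, while probabilistic sampling lets them through.

```python
# Toy illustration (not a real LLM): the vocabulary and probabilities
# below are invented. Greedy decoding always picks the mode of the
# distribution; sampling in proportion to probability can surface
# the low-probability "outlier" options too.
import random

next_token_probs = {
    "conventional phrasing": 0.55,  # the statistical "median" choice
    "common alternative":    0.30,
    "unusual perspective":   0.10,  # an outlier way of putting it
    "rare perspective":      0.05,
}

def greedy_decode(probs):
    """Return the single most probable option, as greedy decoding does."""
    return max(probs, key=probs.get)

def sample_decode(probs, rng):
    """Sample an option in proportion to its probability."""
    options, weights = zip(*probs.items())
    return rng.choices(options, weights=weights, k=1)[0]

rng = random.Random(42)
greedy_outputs = {greedy_decode(next_token_probs) for _ in range(1000)}
sampled_outputs = {sample_decode(next_token_probs, rng) for _ in range(1000)}

print(greedy_outputs)        # only the most probable option ever appears
print(len(sampled_outputs))  # sampling recovers the outliers as well
```

Run a thousand times, the greedy decoder produces exactly one output; the sampler's output set includes the "unusual" and "rare" perspectives. The smoothing isn't malice, it's the objective function.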


There is a famous study from the 1950s in which the US Air Force tried to design a cockpit for the "average pilot." Researchers measured roughly 4,000 pilots on 10 body dimensions. Out of those 4,000 people, zero fell within the average range on all 10. The conclusion: when you design for the average, you literally design for nobody.
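The arithmetic behind that finding is easy to check with a rough Monte Carlo sketch. The numbers here are illustrative assumptions, not the Air Force's actual measurements: simulate pilots on 10 independent, normally distributed dimensions and count how many land in the "average" band (the middle 30%) on every one.

```python
# Rough simulation of the cockpit study. Assumptions: 10 independent,
# standard-normal body dimensions, and "average" defined as the middle
# 30% of each dimension (roughly |z| < 0.385 for a standard normal).
import random

rng = random.Random(0)
N_PILOTS = 4000
N_DIMS = 10
AVERAGE_BAND = 0.385  # middle 30% of a standard normal

def is_average_on_all(rng):
    """True if every one of the 10 dimensions falls in the average band."""
    return all(abs(rng.gauss(0, 1)) < AVERAGE_BAND for _ in range(N_DIMS))

average_pilots = sum(is_average_on_all(rng) for _ in range(N_PILOTS))
print(average_pilots)  # expected ~0: 0.3**10 is about 6 in a million
```

Even with generous independence assumptions, the expected count is 4,000 × 0.3¹⁰, or about 0.02 people. The all-around-average pilot isn't rare; he's essentially a mathematical impossibility.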


Humans operate on a continuous spectrum, while AI operates on quantized probabilities. To be "the mean" is to be a static point. No living, breathing human is a static point; we are all constantly shifting.


We often hear that the principles of accessibility remain the same in the age of AI. I would argue they are evolving in ways we don't yet fully understand.


Physical Accessibility was about building a ramp so everyone could enter the room.

Algorithmic Alignment is often about ensuring the individual mirrors the "expected" input of the system.

For those of us with divergent minds, "efficiency" can become a double-edged sword. If the world is built on predictive models, staying visible requires a level of self-regulation that is exhausting. We find ourselves performing a version of "standardized" behavior just to keep pace with a world moving at the speed of code.


Efficiency is a metric, but it isn't a human value. The most vital aspects of our humanity (deep empathy, radical problem-solving, and truly original thought) usually emerge from the friction, not the smoothness.


There is an invisible tax on this "seamless" existence. For those of us who process the world differently, the expectation of constant, high-speed output, accelerated by AI, is leading to a profound, quiet exhaustion. We are often perceived as "over-capable" because of our pattern recognition and processing speed, but that performance comes at a cost that the spreadsheet doesn't track. It is the cost of constant self-translation. I am tired of the performance. I am interested in what happens when we stop trying to outpace the machine and start acknowledging the human toll of the simulation.


We often talk about "alignment" as if there were a central, "correct" human standard to align to. But mathematically, the "average person" is a statistical ghost. If we build our strategy around the mean, we aren't building for the majority; we're building for a person who doesn't exist, at the expense of the outliers who actually do.
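The "statistical ghost" can be made concrete with a small, invented example: when a population splits into two distinct clusters, the mean lands in a region where almost nobody actually is.

```python
# Sketch with made-up data: a population split into two clusters,
# centred at -1.0 and +1.0 on some trait. The mean sits near 0.0,
# a point essentially no individual occupies.
import random
import statistics

rng = random.Random(7)

population = ([rng.gauss(-1.0, 0.1) for _ in range(500)]
              + [rng.gauss(1.0, 0.1) for _ in range(500)])

mean = statistics.fmean(population)
near_mean = sum(abs(x - mean) < 0.25 for x in population)

print(round(mean, 2))  # close to 0.0
print(near_mean)       # how many people sit within 0.25 of the mean
```

The mean is a perfectly valid summary statistic and a terrible description of any actual person in the dataset. Designing for it means designing for the empty space between the clusters.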


Join the Anthro‑AI Tribe

Subscribe to my channel: https://www.youtube.com/@valnilsson