
Languaging To Dehumanize In AI

The Slippery Slope In Decreasing The Perceived Societal Value Of Human Lives

By Dr. Cody Dakota Wooten, DFM, DHM, DAS (hc)

What are you worth?

What is your value?

These are questions that have fascinated...

And perhaps in some instances plagued...

Humanity since the beginning of recorded history.

Now, I am of the opinion that human beings are beyond value...

Meaning our worth is infinite.

There is a challenge with this idea when it comes to the running of the world.

We have limited resources.

Different people choose to contribute more...

And others contribute less.

Some change the world...

And some enjoy their comfort zone, wishing to remain within it their entire lives.

So, how do we distribute those limited resources given these different factors?

Well, the generally accepted principle in today's world is to put a value on us as humans so that these resources can be distributed accordingly.

Different systems view humans in different ways.

Capitalism often looks at output and innovation to determine value...

Even if those innovations and outputs are not beneficial to humans.

Socialism often looks at treating everyone as perfectly equal...

Even when individuals are not contributing, even when it requires seizing resources from the people, or even when it causes mass starvation.

I do not think these systems are perfect, and they surely have their flaws.

But one thing that I believe we should always pay attention to...

Is when languaging is utilized as a way to dehumanize individuals...

By reducing their value further.

This is frequently done by individuals and organizations through what is known as false equivalence...

Where a comparison is made which might "sound" good on paper or while talking...

But its impact when applied decreases the value of humans...

Which is frequently used as a means to justify doing terrible things to humans.

Not to mention that, often, the false equivalence is not even made based on facts...

Leading people to make improper associations that cause harm.

So, why am I bringing this up now?

Artificial Intelligence.

You are hearing about it everywhere and from everyone.

You have the techno-utopia perspective, which claims that AI will bring about human salvation...

And you have the techno-apocalyptic perspective, which claims that AI will turn on humans and decide to eliminate us.

Your perspective of AI is your own...

But what I am interested in at this very moment is how languaging is being used in AI.

There is specifically something that Sam Altman, poster child of today's AI movement, recently said...

"One of the things that is always unfair in this comparison is … people talk about how much energy it takes to train an AI model relative to how much it costs a human to do one inference query. But it also takes a lot of energy to train a human. It takes like 20 years of life and all of the food you eat during that time before you get smart."

Now, on paper, this might sound "unfair" as Sam Altman is proposing.

If there is an amount of time and energy required to train AI...

And there is an amount of time and energy required to train a human...

Then we can evaluate these to make business decisions.

Here is the problem...

This dehumanizes us.

It treats us, humans, as just a product of how much time and resources are required to "become smart"...

At least, as AI companies define "smart"...

Even if that is only a very specific, small subset of what is scientifically meant when we say "intelligence".

Essentially, what Altman is getting at...

Is that, in theory, if we can train and develop AI to become "smarter" than humans...

Then we can justify the costs because it will be more "important" than humans.

This thinking is already being utilized to justify exorbitant fees that will be placed upon us to "sustain" these AI...

As well as justification to commit mass layoffs for "dumber" and "more expensive" humans.

He is utilizing languaging to devalue the human growth cycle...

Even if the comparison does not actually make any sense.

For instance...

If you actually look at the costs of raising humans versus the costs of training GPT-4...

A human consumes roughly 2,000 kcal per day over the 20 years it takes to become "smart" in adulthood, which totals about 17,000 kWh of energy...

Versus GPT-4, which consumes 50 GWh (50,000,000 kWh) for "one" training run of the model.

This means that roughly 3,000 times "more" energy is needed to train an AI model...

For relatively mediocre results.
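The back-of-the-envelope arithmetic above can be verified in a few lines. This sketch assumes the figures quoted in the article (2,000 kcal per day for 20 years, and an estimated 50 GWh for one GPT-4 training run), which are rough estimates rather than precise measurements:

```python
# Back-of-the-envelope check of the human vs. GPT-4 training-energy comparison.
# All figures are the article's assumptions, not precise measurements.

KCAL_TO_KWH = 1.163 / 1000  # 1 kcal ≈ 1.163 Wh = 0.001163 kWh

human_kcal = 2000 * 365 * 20          # 2,000 kcal/day for 20 years
human_kwh = human_kcal * KCAL_TO_KWH  # ≈ 17,000 kWh total

gpt4_kwh = 50_000_000                 # 50 GWh, one estimated training run

ratio = gpt4_kwh / human_kwh          # ≈ 2,900, i.e. roughly 3,000x
print(f"Human: {human_kwh:,.0f} kWh | GPT-4: {gpt4_kwh:,} kWh | ratio: {ratio:,.0f}x")
```

Running this gives a ratio just under 3,000, consistent with the figure above.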

AI models still continuously hallucinate...

Or provide false information with no basis for how it came to that conclusion...

With some estimating it occurring 30% to 50% of the time on average...

And up to 94% of the time in more complex queries.

The comparison also ignores the fact that AI only addresses a hyper-specific type of intelligence...

Which, again, it frequently gets wrong...

While ignoring many other types of intelligence that humans naturally have.

This includes...

Spatial-Visual Intelligence...

Bodily-Kinesthetic Intelligence...

Emotional Intelligence...

Existential Intelligence...

And more.

These are things that AI simply does not have...

Aspects that are "brushed away" as "irrelevant" when we make false equivalency statements.

But Altman and other AI proponents do not want society to make decisions about AI based on the fullness of human value...

They want us to make sacrifices to make their businesses succeed...

So that they can have their success...

Even if it is to our own detriment.

What makes this worse is...

These sacrifices that are being asked of us...

Do not seem to be based on any real statistics or evidence.

There is a lot of smoke and mirrors...

As well as "trust us" examples.

The famous example here is the promise of "AGI Superintelligence"...

That is "right around the corner"...

And "closer than we think"...

All while every single AI product we have seen...

Seems to be having diminishing returns...

Exponential costs...

Failed implementation of projects...

All while failing to really prove any form of intelligence that is remotely close to humans.

No proof.

No demonstrations.

Just...

"Trust us".

Now, should we or should we not invest into AI?

That will be your call.

But what we should avoid...

Is falling into psychological traps of false equivalency...

That ask us to devalue and dehumanize ourselves.

---

Are You Ready to Go Beyond Leadership?

Tired of Broken Algorithms and AI Slop?

Excited to Dive Deeper into Psychophysiological Mastery?

Want to Change The World?

The Seeking Sageship Newsletter is for You!

Click Here to Subscribe for Free!


