Could Shopify be right in requiring teams to demonstrate why AI can’t do a job before approving new human hires? Will companies that prioritize AI solutions eventually evolve into AI entities with significantly fewer employees?

These are open-ended questions that have made me wonder where such transformations might leave us in our quest for knowledge and ‘truth’ itself.

“Knowledge is so frail!”

It’s still fresh in my memory: 
A hot summer day, large south-facing classroom windows with burgundy frames, and Tuesday’s Latin-class marathon, when our professor turned around and quoted a famous Croatian poet who wrote a poem called “The Return.”

Who knows (ah, no one, no one knows anything.
Knowledge is so frail!)
Perhaps a ray of truth fell on me,
Or perhaps I was dreaming.

He was evidently upset with my class because we had forgotten the proverb he loved so much and hadn’t learned the second declension properly. Hence, he found a convenient opportunity to quote, in front of a full class of sleepy and uninterested students, a love poem filled with the “scio me nihil scire” message and thoughts on life after death.

Ah, well. The teenage rebel in us decided back then that we didn’t want to learn the “dead language” properly because there was no beauty in it. (What a mistake this was!)

But there is so much truth in this small passage, “knowledge is so frail,” which was my professor’s favourite quote.

No one is exempt from this, and science itself understands especially well how frail knowledge is. It’s contradictory, messy, and flawed; one paper’s findings dispute another’s, experiments can’t be replicated, and it’s full of “politics” and “ranks” that pull the focus from discovery to prestige.

And yet, within this inherent messiness, we see an iterative process that continuously refines what we accept as “truth,” acknowledging that scientific knowledge is always open to revision.

Because of this, science is indisputably beautiful, and as it progresses one funeral at a time, it gets firmer in its beliefs. We could now go deep into theory and discuss why this is happening, but then we would question everything science ever did and how it did it.

Instead, it would be more effective to establish a better relationship with “not knowing” and patch the holes in our knowledge that go all the way back to the fundamentals. (From Latin to math.)

Because the difference between people who are very good at what they do and the very best is this:

“The very best in any field are not the best because of the flashy advanced things they can do, rather they tend to be the best because of mastery of the fundamentals.”

Behold, frail knowledge, the era of LLMs is here

Welcome to the era where LinkedIn will probably have more job titles labelled “AI [insert_text]” than “Founder,” and where the employees of the month are AI agents.

The fabulous era of LLMs, filled with unlimited knowledge, and with clues that this knowledge stands as frail as before.


Cherry on top: it’s on you to figure this out and test the outcomes, or bear the consequences of not doing so.

“Testing”, proclaimed the believer, “that is part of the process.”

How could we ever forget the process? The “concept” that gets invoked whenever we need to obscure the truth: that we’re trading one type of labour for another, often without understanding the exchange rate.

The irony is exquisite.

We built LLMs to help us know or do more things so we can focus on “what’s important.” However, we now find ourselves facing the challenge of constantly identifying whether what they tell us is true, which prevents us from focusing on what we should be doing. (Getting the knowledge!)

No strings attached: for an average of $20 per month, cancellable at any time, your most arcane questions will be answered with the confidence of a professor emeritus, in one firm sentence: “Sure, I can do that.”

Sure, it can…and then delivers complete hallucinations within seconds.

You could argue now that the price is worth it: if you spend 100–200x this on someone’s salary and still get the same output, that is not an acceptable cost.

Glory be to the trade-off between technology and cost, which passionately battled on-premise vs. cloud costs before and now additionally battles human vs. AI labour costs, all in the name of generating “the business value.”

“Teams must demonstrate why they cannot get what they want done using AI,” possibly to people who have done similar work only at an abstract level. (But you will have a process to prove this!)

Of course, this holds only if you think that the cutting edge of technology can be solely responsible for generating the business value without the people behind it.

Think twice, because this cutting edge of technology is nothing more than a tool. A tool that can’t understand. A tool that needs to be maintained and secured.

A tool that people who already knew what they were doing, and were very skilled at this, are now using to some extent to make specific tasks less daunting.

A tool that helps them get from point A to point B in a more performant way, while they still take ownership of what’s important: the full development logic and decision-making.

Because they understand how to do things and know what the goal is, and that goal should stay fixed in focus.

And knowing and understanding are not the same thing, and they don’t yield the same results.

“But look at how much [insert_text] we’re producing,” proclaimed the believer again, mistaking volume for value, output for outcome, and lies for truth.

All because of frail knowledge.

“The good enough” truth

To paraphrase Sheldon Cooper from one of my favourite Big Bang Theory episodes:

“It occurred to me that knowing and not knowing can be achieved by creating a macroscopic example of quantum superposition.

If you get presented with multiple stories, only one of which is true, and you don’t know which one it is, you will forever be in a state of epistemic ambivalence.”

The “truth” now has multiple versions, but we are not always (or straightforwardly) able to determine which (if any) is correct without putting in precisely the mental effort we were trying to avoid in the first place.

These large models, trained on almost the entire collective digital output of humanity, simultaneously know everything and nothing. They are probability machines, and when we interact with them, we’re not accessing the “truth” but engaging with a sophisticated statistical approximation of human knowledge. (Behold, the knowledge gap; it won’t get closed!)
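A toy sketch of what “probability machine” means here. The vocabulary and probabilities below are invented for illustration, not taken from any real model; the point is only that a model samples a statistically likely next token rather than retrieving a verified fact:

```python
import random

# Hypothetical next-token probabilities for the context "Knowledge is so" --
# invented numbers for illustration, not from any real model.
next_token_probs = {"frail": 0.55, "powerful": 0.25, "vast": 0.15, "cheap": 0.05}

def sample_next_token(probs, seed=None):
    """Pick the next token by sampling the distribution, not by 'knowing'."""
    rng = random.Random(seed)
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

token = sample_next_token(next_token_probs, seed=42)
print(token)  # a statistically likely continuation, not a verified truth
```

Run it with different seeds and you get different, equally confident continuations, which is the whole problem.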

Human knowledge is frail itself; it comes with all our collective uncertainties, assumptions, biases, and gaps.

We know that we don’t know, so we rely on the tools that “assure us” that they know, with open disclaimers that they don’t.

This is our interesting new world: confident incorrectness at scale, democratized hallucination, and the industrialisation of the “good enough” truth.

“Good enough,” we say as we skim the AI-generated report without checking its references.
“Good enough,” we mutter as we implement the code snippet without fully understanding its logic.
“Good enough,” we reassure ourselves as we build businesses atop foundations of statistical hallucinations.
(At least we demonstrated that AI can do it!)

The “good enough” truth is heading boldly towards becoming the standard that follows lies and damned lies, backed up with processes and a starting price tag of $20 per month, reminding us that the knowledge gaps will never be patched, and echoing my Latin professor’s favourite poem passage:

“Ah, no one, no one knows anything. Knowledge is so frail!”


This post was originally published on Medium in the AI Advances publication.


Thank You for Reading!

If you found this post valuable, feel free to share it with your network. 👏

Stay connected for more stories on Medium ✍️ and LinkedIn 🖇️.
