
Welcome to the 37 new techies who have joined us this week.
If you haven’t already, subscribe to join our community and receive weekly tech insights, updates, and interviews with industry experts straight to your inbox.
Weekly insights and inspiration from the global LEAP community.
What Le Guin said:
“For magic consists in this, the true naming of a thing.”
It’s from her 1968 novel A Wizard of Earthsea. Le Guin’s idea, that power flows from precise naming, feels important today – in the age of LLMs. If you’ve ever launched a vague prompt and received a vague answer, you already know: the way we name the task shapes the world a model builds in response.
You can use this idea to create your own strategy for working with LLMs. (Yes, really – give us a minute to explain).
In ancient Greece, seekers climbed the slope at Delphi with questions, and the Pythia answered in language that demanded interpretation.
Today we route our questions to model endpoints. The laurel smoke of Delphi has given way to GPU heat, but the human part hasn’t changed. We still invent rituals to interrogate the unknown.
Heraclitus warned that we never step in the same river twice. Models drift, contexts shift, and yesterday’s incantation may fail you tomorrow. But maybe that’s the point of a good ritual: it lets you exercise curiosity in the face of uncertainty.
Start by naming the outcome you want, not just the action: ‘Draft a two-paragraph brief that a non-technical VP can read in 60 seconds’ will beat ‘Summarise’ almost every time.
Give the model a role with constraints (for example, ‘You are a cautious fact-checker; if unsure, say so and list sources’) and show your workings by pasting relevant context while drawing a clear boundary around what to ignore.
Ask for reasoning without the theatrics: assumptions, alternatives, unknowns, next steps.
And importantly, invite uncertainty into the mix deliberately. You could do this by requesting a confidence score, with a sentence on what would raise or lower it.
Then close the loop by asking the model how your prompt could be improved. Each pass names the task more truly; each refinement sharpens the magic.
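Here’s what that ritual can look like as a reusable template – a minimal sketch in Python. The structure and field names are ours for illustration, not a standard, and the final print stands in for whatever model client you actually use.

```python
# A minimal sketch of the prompting ritual above. Field names are
# illustrative; route the result to whichever LLM client you use.

def build_prompt(role: str, outcome: str, context: str, ignore: str) -> str:
    """Assemble a prompt that names the role, the outcome, and the boundaries."""
    return "\n\n".join([
        f"Role: {role}",                           # a role with constraints
        f"Task: {outcome}",                        # name the outcome, not just the action
        f"Context (use this):\n{context}",         # show your workings
        f"Out of scope (ignore this):\n{ignore}",  # draw a clear boundary
        "Before answering, list your assumptions, one alternative you "
        "considered, what you don't know, and suggested next steps.",  # reasoning without theatrics
        "End with a confidence score out of 100 and one sentence on "
        "what would raise or lower it.",           # invite uncertainty deliberately
        "Finally, suggest one improvement to this prompt.",  # close the loop
    ])

prompt = build_prompt(
    role="You are a cautious fact-checker; if unsure, say so and list sources.",
    outcome="Draft a two-paragraph brief that a non-technical VP can read in 60 seconds.",
    context="(paste the relevant report or notes here)",
    ignore="(anything outside this quarter, earlier drafts, etc.)",
)
print(prompt)  # send this to your model endpoint of choice
```

Keeping the prompt in a function has a side benefit: each refinement becomes a diff you can compare, not a chat message you’ve lost.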
Scepticism is a charm against false awe and irrational trust. Treat LLM outputs as hypotheses until you’ve verified them.
Prioritise answers that cite sources you can check – then actually check them.
Re-ask the question from a different angle and see if the story holds. Keep a tiny log of failures and near-misses so you can improve where it counts. And be wary of the temptation to anthropomorphise: models don’t believe or intend. They predict.
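If you want to make that habit concrete, here’s a small sketch. The `ask` callable is a stand-in for whatever model client you use, and the log format is just an illustration, not a standard.

```python
# A sketch of treating outputs as hypotheses: ask the same question from two
# angles and log the pair, so failures and near-misses are easy to review later.
import json
from datetime import datetime, timezone
from typing import Callable

def cross_examine(ask: Callable[[str], str], question: str, reframed: str,
                  log_path: str = "llm_checks.jsonl") -> tuple[str, str]:
    """Ask twice, record both answers, and return them for comparison."""
    first, second = ask(question), ask(reframed)
    with open(log_path, "a") as log:
        log.write(json.dumps({
            "when": datetime.now(timezone.utc).isoformat(),
            "question": question,
            "reframed": reframed,
            "answers": [first, second],
        }) + "\n")
    return first, second

# Usage with a dummy client; swap in a real model call.
a, b = cross_examine(
    lambda q: f"(model answer to: {q})",
    "What drove last quarter's churn?",
    "If churn hadn't changed last quarter, what evidence would we expect to see?",
)
```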
In Earthsea, knowing a thing’s true name grants both power and responsibility. In the tech world, ‘true names’ look like access controls that define what a system may do; provenance trails that say which data shaped which answer; evaluation criteria that spell out how behaviour is judged; and disclosure norms that state when and how AI helped.
Name these elements clearly and you can wield them well, because you’ve spelled out exactly what’s expected of the system. Name them poorly and even good tools become tricksters.
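One way to start is to write those names down as data rather than leave them implicit. The class and values below are hypothetical, just to show the shape:

```python
# The four 'true names' above, made explicit. A hypothetical schema for
# illustration only; adapt the fields to your own stack and policies.
from dataclasses import dataclass

@dataclass
class TrueNames:
    access_controls: list[str]      # what the system may do
    provenance: list[str]           # which data shaped which answer
    evaluation_criteria: list[str]  # how behaviour is judged
    disclosure: str                 # when and how AI helped

policy = TrueNames(
    access_controls=["read: internal knowledge base", "deny: external network"],
    provenance=["every answer cites the source document IDs it drew on"],
    evaluation_criteria=["accuracy against cited sources", "declines out-of-scope requests"],
    disclosure="Drafted with LLM assistance; reviewed by a human editor.",
)
```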
Let us know one decision you’re wrestling with.
It could be anything from ‘which of these three product ideas should we prototype first?’ to ‘how do we roll out an AI policy without freezing experimentation?’ – or you could make it less business and more personal if you want to. We’re open to all dilemmas.
In an upcoming newsletter, we’ll analyse a reader case two ways: the Oracle, a narrative reading of context, stakes, and patterns; and the Debugger, a logical pass through assumptions, metrics, counter-tests, and next steps. Between oracle and debugger lies a practical truth: better questions make better futures.
Keep your questions sharp – and remember Heraclitus’ river.
Have an idea for a topic you'd like us to cover? We're eager to hear it. Drop us a message and share your thoughts.
Catch you next week,
The LEAP Team