How I Tricked ChatGPT Into Being 100% Accurate
After making personalized PDF Guidebooks for my clients since 2018, I’d come to terms with a simple fact: Maybe 5% of my clients actually use them, at least as far as I know. And that’s totally cool. Human Design can be intense, there’s jargon to learn, and the core ideas can fade over time.
But sometimes I’ll get a message from a client, years later, telling me just how impactful their session was, or how their Guidebook continued to pay dividends every time they dove in for a refresher. I love that. It’s why I do what I do.
Human Design reveals itself in layers, in stages of discovery, but unless you're a real investigator, immersing yourself in 30-40 pages of the written word is a little old school, even if it’s valuable long term.
So when I set out to create the ultimate Human Design GPT, I had a clear vision: No more studying–only perfect, practical advice for real world situations on demand.
Ambitious? Yes. Here’s why: As soon as someone asks me for “just the facts”–the absolute need-to-know stuff about their design–and I deliver it, that’s the exact moment they ask for more context :) And vice versa: Someone asks for a deep dive? Done. But then they ask, “Could you please explain that to me in simple terms?” Finding the perfect balance between the two has been my holy grail of Human Design.
But Human Design on demand? That’s actually the promise of LLMs–something they can deliver like no other tool: pattern recognition, instant synthesis, and, if done right, real-time application. The goal was to create a living Q+A portal almost as good as, and sometimes even better than, a session with yours truly.
As I began to create this ultimate ChatGPT experience, I immediately ran up against some obstacles. I made some assumptions about how it was all going to work from the outset, mostly because, like you, I’m a logical thinker, and I fell into the familiar trap of believing that AI could help me with the logic.
Bad mistake. It couldn’t, and since AI isn’t logical (yet), I was the one who was going to have to architect it. So I thought I’d share the surprising lessons I learned along the way that made it possible (and maybe this logic can inform how you design your own custom GPTs–in a way that maximizes results for those ultra-specific tasks you want to optimize).
Hint: If I spoke ChatGPT, I would say, “It’s NOT a magic prompt. It’s the architecture.”
AI Problem #1: Inaccuracy
We’re all familiar with the fact that AI is very rarely able to access “the truth, the whole truth, and nothing but the truth,” and so it will often produce inaccuracies that compound into outright falsehoods.
Reason: Open Sources
Just like any topic on the internet, there’s a mountain of distorted information available about Human Design. Why? Mostly because enthusiasts believe that the primary source material is too difficult to explain, and therefore, they morph, interpret, and soften it for their intended audience. But when practitioners adapt the language, they change the teaching, and so as AI combs the internet for answers to your questions, what you get back is mostly pop culture Human Design soup.
Side note: It would be great if SEO sent seekers to jovianarchive.com or mybodygraph.com, the only two official sources in existence, but for a host of reasons, it doesn’t.
Solution: Closed Source Material
There was a simple fix: I was going to need to provide the closed-source material myself–sharp, real Human Design, plus experience-based applications–in order to provide all the answers.
AI Problem #2: GPTs Can’t Keep Data Siloed
At first, my thought was to provide the GPT with:
(1) enough source knowledge to serve as a reference plus
(2) your individual chart
But that quickly spawned another problem: If you upload two separate documents into a GPT, suddenly they’re no longer separate. As soon as two files get uploaded, they’re effectively merged into one pool of reference text, which leads to a plethora of reporting errors. This was initially quite surprising, because even in our human brains, it’s easy to keep one file separate from another inside the same folder.
Another assumption was that AI was capable of cross-referencing between two data sets–a basic element of core reasoning. It couldn’t. That limitation killed the grand idea of accessing deep Human Design knowledge in relation to the data in your personal chart.
Reason: A GPT Is Not A Database
This was clear. I could have bolted a database onto the GPT with other apps, but that would basically have meant building my own app to get the performance I wanted. (And I think Human Design apps face the same problem–which is why there are no good ones to mention here.) So I refocused on a much simpler, low-tech idea–one that would work with how the GPT was actually designed while focusing on total personalization at the same time.
Solution: Clean Divide Between Data And Instructions
Now things started making sense. All I needed was one clean PDF of source material all about your Human Design and a separate set of instructions for the GPT to run:
(1) a PDF for your personal source knowledge, and
(2) GPT instructions that own how to present it to you–not the knowledge itself. This was the key insight.
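To make the divide concrete, here’s a minimal sketch in code, assuming a simple prompt-assembly step. Everything here (`build_prompt`, the strings, the sample chart line) is illustrative only, not an actual ChatGPT or OpenAI API:

```python
# A sketch of the "clean divide": the GPT's instructions are fixed and only
# describe presentation; all knowledge lives in one swappable document.

PRESENTATION_INSTRUCTIONS = (
    "Answer only from the source document below. "
    "Use plain language, short paragraphs, and no jargon."
)

def build_prompt(source_document: str, question: str) -> str:
    """Fixed presentation rules + the single source of truth + the question."""
    return (
        f"{PRESENTATION_INSTRUCTIONS}\n\n"
        f"--- SOURCE DOCUMENT ---\n{source_document}\n\n"
        f"--- QUESTION ---\n{question}"
    )

# The instructions never change; only the document and the question do.
prompt = build_prompt(
    "Anne is a Projector. Her strategy is to wait for the invitation.",
    "How should I approach a new job offer?",
)
```

The design point: because the instructions never carry knowledge, swapping in a different person’s PDF can’t contaminate how answers are presented, and vice versa.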
AI Problem #3: GPTs Can’t Sort Out Competing Logic
GPTs are fantastic at repackaging the known universe, but bad at open-ended synthesis. Even if a GPT can read thousands of pages of source material and summarize it, that’s not the same as applying it on the spot like an enlightened professor.
I asked ChatGPT to elaborate on what it can teach objectively and what it can’t. Here’s what it told me:
The Good: In finite areas, AI is essentially compressing and delivering validated human knowledge. It’s not “deciding” truth—it’s retrieving and restructuring the context that’s already there.
The Bad: The moment you move into domains like Leadership, Psychology, Strategy, Purpose, or Human Behavior, you’re no longer dealing with purely objective principles. You’re dealing with context-dependent truths within competing frameworks, observer bias, and many other hidden variables (emotion, unconscious patterns, etc.). AI can still teach principles here, but answers depend on you asking the right questions and providing nearly all of the necessary context.
Takeaway: I needed to sharpen this tool for better self-inquiry, not just for better answers.
Reason: GPTs Don’t Work From Principles, So They Can’t Reason
Here’s what I mean: Human Design is a set of core principles stacked upon core principles, layered in order of significance. Sometimes one principle impacts another principle. It’s just like any other discipline in that sense.
You want to be able to “ask AI anything” about how those core principles apply to you, and while it’s capable of reflecting back what it knows, what it can’t do is correctly weigh or apply each core principle to Anne, Bill, or Christine and their differences. This is where misalignments and “hallucinations” turn up, because there is zero actual reasoning happening behind the screen. This article illustrates the point as well as I could.
In other words, GPTs are unable to intelligently apply principles 1, 2, and 3 to variables A, B, and C–unless the GPT already knows every possible outcome impacting A, B, and C. That’s pattern matching, not logic.
At best, a GPT is only simulating logic anyway, but to the point, it can only seem as “intelligent” as the applied source knowledge it already knows.
Solution: Bake The Personal Applications Into The Source Knowledge
This final insight sent me off to make a brand-new PDF with tried-and-true leadership applications already baked in. In other words, for the engine to work, I needed to sandwich each person, each principle, and each application into one bite for easy digestion–minus the background Human Design knowledge that offers wisdom in a broader system context (this was a huge shift and a balancing act).
For faster learning and leadership application, less was way more. Rather than a tool for learning about yourself against the backdrop of the textbook (the academic way I had taught myself), the real gains during beta testing came from focusing only on you–no grand system logic included…
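Here’s a toy sketch of what “baking it in” means, with invented entries and a hypothetical lookup (none of this is the Guide’s actual format): each record already pairs a person, a principle, and its applied advice, so answering becomes retrieval rather than on-the-fly reasoning.

```python
# Hypothetical pre-baked source knowledge: person + principle + application
# are combined ahead of time, so the GPT only has to match and retrieve.
# All names and advice below are invented for illustration.

GUIDE_ENTRIES = [
    {
        "person": "Anne",
        "principle": "Wait for the invitation",
        "application": "Before taking a new leadership role, confirm you were explicitly invited into it.",
    },
    {
        "person": "Bill",
        "principle": "Respond, don't initiate",
        "application": "Let your team bring proposals to you, then commit to the ones you respond to.",
    },
]

def applied_answer(person: str) -> str:
    """Look up the pre-written application for a person; no live logic."""
    for entry in GUIDE_ENTRIES:
        if entry["person"] == person:
            return entry["application"]
    return "No baked-in application found."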
And that’s how I created The Leader’s Guide PDF: A direct slant on “leading thyself” with Human Design in practical terms.
“Leading thyself is the first and often most difficult act of leadership. It requires intentional self-awareness, accountability, and emotional mastery before stepping in to guide others. It asks for tangible ‘behind the scenes’ work to build integrity, credibility, and confidence.” –Ahram
Want to feel it in action? You’re going to love the insights and accuracy.
Purchase The Leader’s Guide PDF—it’s 99.00
Pair it with the companion GPT called CoPilot™.
Access for life. Breakthroughs for leadership on demand.
And like I’ve said a million times, if Human Design hasn’t blown your mind yet, then you haven’t experienced the real thing.