Why ChatGPT Can't Do Astrology

People use AI to write wedding vows, decode contracts, and diagnose rashes. Naturally, they are also turning to ChatGPT and other generative AI tools for astrology readings. After all, if this seemingly magical technology can impersonate a lawyer, why not an astrologer?
Astrology is commonly dismissed as a baseless parlor trick or psychic guesswork. But in reality, it is a structured discipline grounded in a consistent, rule-based framework. For millennia, it has functioned as a form of cosmic geometry, using precise astronomical data and mathematical principles to analyze and interpret life's potentials. One in three Americans believes in astrology, and a reported 70 million Americans check their horoscope daily, signaling a widespread cultural value that has stood the test of time.
In theory, a discipline grounded in data and mathematical rules is the perfect job for a computer. But whenever a large language model (LLM) assumes the role of an expert, we must ask: is it right, or merely confident? LLMs are known to hallucinate, and any professional astrologer – myself included – will tell you that AI readings are consistently, fundamentally flawed. Yet the delivery is often smooth enough to fool even seasoned seekers.
So how can astrology fans learn to spot the digital snake oil?
The Parrot Versus the Analyst: Core Differences in How AI and Astrology Operate
To recognize AI's failures, one must first understand the fundamental differences in how an LLM and a trained astrologer operate. LLMs like ChatGPT are not oracles but parrots, trained to recognize patterns in unfathomable quantities of text. Chatting with an LLM feels like magic because it tends to nail tone and syntax with eerie precision. But it doesn't know anything, nor does it verify truth by default. When you ask an LLM a question, it predicts, word by word, the string of text statistically most likely to resemble the answer you expected. In other words, an LLM is a fancy autocomplete that is very good at guessing.
As for astrologers: while some are deeply intuitive, we are analysts, not fortune tellers. We produce insights by observing the objective, mathematical positions of the planets in your birth chart against those of another chart (another person's, or a moment in time). Based on the geometry between each set of planets – and conventional astrological wisdom – we make predictions.
In sum, the practice of astrology requires calculating an accurate birth chart, consulting real-time planetary movements, and synthesizing complex insights.
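That geometry is concrete arithmetic. An astrologer measures the angular separation between two planets' ecliptic longitudes and matches it against a handful of named angles – aspects – within a tolerance called an orb. A minimal Python sketch of the idea (the 6° orb is an illustrative convention; in practice, orbs vary by astrologer and by planet):

```python
# Classify the major astrological aspect between two ecliptic longitudes (degrees).
MAJOR_ASPECTS = {
    "conjunction": 0,
    "sextile": 60,
    "square": 90,
    "trine": 120,
    "opposition": 180,
}

def angular_separation(lon_a: float, lon_b: float) -> float:
    """Shortest angle between two points on the ecliptic, in [0, 180]."""
    diff = abs(lon_a - lon_b) % 360
    return min(diff, 360 - diff)

def find_aspect(lon_a: float, lon_b: float, orb: float = 6.0):
    """Return the aspect name if the separation falls within the orb, else None."""
    sep = angular_separation(lon_a, lon_b)
    for name, angle in MAJOR_ASPECTS.items():
        if abs(sep - angle) <= orb:
            return name
    return None

# A natal Sun at 15° Capricorn (285°) against a partner's Moon at 17° Taurus (47°):
# the separation is 122°, within 6° of a 120° trine.
print(find_aspect(285, 47))  # trine
```

Note that nothing here is open to interpretation until the final step: given the same two longitudes, the same aspect comes out every time.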
Fortunately, you need not be a professional astrologer to see the cracks in an AI's analysis; you just need to know where to look. While an LLM can be deceptively impressive, its failures are consistent and identifiable. By performing a few simple checks, any astrology fan can learn to spot the difference between valid insight and seemingly eloquent nonsense.
Pitfall #1: AI Cannot Calculate Basic Critical Details
ChatGPT may provide a resonant personality profile if you feed it your “Big Three” placements – say, Capricorn Sun, Scorpio Moon, Pisces Rising – but if you provide only your raw birth data, you will watch it fail at the most rudimentary step.
When provided a birth time, date, and location, LLMs like ChatGPT will consistently return incorrect placements, typically getting the sun sign correct, but fabricating the moon, rising, and other signs. This happens because an LLM is a text predictor, not a precision calculator; it is bound to "hallucinate" your birth chart, inventing planetary positions out of thin air. Before proceeding with any AI reading, always compare the chart output to an accurate one generated by a dedicated astrological calculator. If even one planet is in the wrong sign, the entire reading is invalid.
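This is also why a dedicated calculator cannot hallucinate a placement. Once an ephemeris supplies a planet's ecliptic longitude for your birth moment, the mapping to a sign is fixed arithmetic: each of the twelve signs spans exactly 30°, starting at 0° Aries. A minimal Python sketch (the longitude here is a hand-supplied illustration; a real calculator would derive it from astronomical ephemeris data, such as the Swiss Ephemeris):

```python
# Map a tropical ecliptic longitude (degrees) to its zodiac sign.
# Each sign occupies exactly 30 degrees, starting at 0° Aries.
SIGNS = [
    "Aries", "Taurus", "Gemini", "Cancer", "Leo", "Virgo",
    "Libra", "Scorpio", "Sagittarius", "Capricorn", "Aquarius", "Pisces",
]

def sign_of(longitude: float) -> str:
    """Zodiac sign for an ecliptic longitude; deterministic, never guessed."""
    return SIGNS[int(longitude % 360) // 30]

def degree_in_sign(longitude: float) -> float:
    """Position within the sign, 0-30 degrees."""
    return longitude % 30

# A Moon computed at ecliptic longitude 232.5° is always 22.5° Scorpio:
print(sign_of(232.5), degree_in_sign(232.5))  # Scorpio 22.5
```

The same input always yields the same sign, which is exactly the property a text predictor lacks.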
Pitfall #2: AI Has No Sense of Time
Beyond the static birth chart, a second critical failure emerges when dealing with time-sensitive questions. You can test this by asking ChatGPT something like “What is my astrology for September?”
This requires analyzing “transits” – the real-time locations of the planets relative to the zodiac – and any astrologically significant angles these make with your natal chart planets. It also exposes a fatal technical flaw: standard LLMs have a knowledge cut-off date and are disconnected from live astronomical data. To produce a horoscope anyway, they resort to sleight of hand. Sometimes they search the web and cobble together a report from relevant headlines – often headlines from years prior. When they don't search the web, they fabricate a plausible-sounding forecast of pure generative fiction.
Pitfall #3: AI Analysis is Mad Libs, Not Synthesis
Even on the rare occasion an LLM gets all the requisite facts right, how good is the analysis?
Here, I have asked ChatGPT for a compatibility reading. Setting aside the glaring factual inaccuracies – like the fact that his Mars is in Gemini, not Taurus – this is a textbook case of AI's failure to synthesize the complex, often conflicting layers that give a real reading resonance. It reads like a jumble of generic takes because that is effectively what it is: LLMs like ChatGPT are trained on astrology scraped from the internet – mostly surface-level, one-size-fits-all blurbs. A real reading demands context, nuance, and a mind that can honor complexity rather than reducing it to a list of soundbites.
The bottom line? People love astrology readings, and until AI came around, splurging on a pricey appointment was the only way to get one. On the surface, AI seems perfectly qualified to be your personal astrologer. But now that you know where to look, you can see for yourself how the illusion breaks.
A Crossroads for an Ancient Tradition
Astrology has survived for millennia not because it promises easy answers, but because it invites deep reflection. That tradition is now at a crossroads, as the rise of confidently inaccurate AI tools threatens to reduce it to a vat of meaningless nonsense. So what can we—my fellow astrologers—do to safeguard our practice in this inevitable age of AI? Our primary job, whether we like it or not, has now become one of public stewardship.
To my colleagues inclined to protest: I understand the instinct, but burying our heads in the sand will not put the AI genie back in the bottle. The only path forward is proactive engagement. This means educating the public on how to spot opportunistic dreck; it means educating ourselves to differentiate worthy tools from the worthless; and for some, it means getting directly involved in building AI that respects the integrity of our craft.
Ultimately, if the real experts decline this fleeting opportunity to set the standards, technologists who see astrology as just another way to make a quick buck off gullible consumers will happily take it from here – and integrity will not be their primary consideration. The fate of this tradition is in our hands.