Research Sceptical of Astrology
McGrew & McFall, "A Scientific Inquiry into the Validity of Astrology", 1990
This experiment appeared in the wake of the notorious Carlson Double Blind Astrology Test, published in Nature in 1985. Six astrologers attempted to match twenty-three astrological birth charts with biographical, psychological and photographic information about each subject. The experiment attempted to test the practice of astrology without the limitations of the California Psychological Inventory (CPI). However, it failed to live up to the more famous Carlson test in other ways.

A small group of local astrologers

John McGrew and Richard McFall, psychology professors at Indiana University, enlisted six astrologers from a little-known local group, the Indiana Federation of Astrologers. This compares with the twenty-eight nationally known astrologers, selected by the NCGR, who agreed to participate in Carlson's test. [1] The experimenters claimed that "Superior ability among the subject group was documented in several areas." However, one example of such a superior astrologer was someone who had "been a professional astrology writer for a syndicated column". This sounds remarkably like a Sun Sign column. While a few good astrologers do write newspaper or magazine astrology, it is quite wrong to assume that those who write the columns have superior abilities as astrologers, especially for the task McGrew and McFall planned for them. Though one of the astrologers was a published author, it is surprising that, in justifying the claim that these were 'superior astrologers', there is no mention of any of the astrologers holding a qualification in astrology (from the NCGR, for example) or in psychology. If you were one of the astrologers who participated, please contact me as I would like to hear more about the experience.

Too much information resulted in a huge number of variables

One of the major criticisms of the Carlson test was the limitation of the California Psychological Inventory (CPI) as an accurate measure for identifying a horoscope. Its limits were evident from the fact that the subjects were unable to identify their own CPI from a selection containing their own analysis plus two random CPIs. Recent investigations into Carlson's data show that the astrologers were able to overcome the limitations of the CPI and make successful matches to a statistically significant level (p=.037). Mindful of the CPI's limitations, McGrew and McFall went out of their way to give the astrologers an abundance of information. The subjects answered a 61-item questionnaire created by the astrologers, plus two standardized psychological tests. [2] While Vidmar claims these are superior clinical tests to the CPI, the huge amount of data generated by these three tests made the astrologers' task more complex. It was not the quantity of data that was a problem with the Carlson test; it was the quality.

The charts of the subjects were too similar, being within a two-year time range

Unfortunately, this abundance of data also included photographs of the subjects. This meant that the age range of the subjects (30-31) had to be very narrow, to avoid clues from the photos. Though the subjects appeared to have diverse lives, they were all born around the Saturn-Neptune conjunction in Libra in the late 1950s. So, given that the subjects were of the same generational (and astrological) cohort, there was a higher level of homogeneity than among Carlson's subjects, along with a similar lack of self-awareness and life experience.

Selecting 1 chart in 23: like finding a needle in a haystack

A criticism of Carlson was that he required the astrologers to match a horoscope to three psychological profiles, of which only one was genuine, rather than a choice of two as designed by Vernon Clark (1961) in the previous test of this type.[3] Professor Suitbert Ertel states: "The advantage of a pair comparison is that the complexity of the subject's task is minimized and the precision of the results is increased. Scientists testing a null hypothesis, which they prefer to come true a priori, should provide, following Karl Popper's demand, a fair chance for its refutation."[4] This is especially important in an experiment like this one, where many of the psychological profiles were very similar due to the uniformity of the group. Here, the astrologers were given an Everest-like challenge: they were required to match each birth chart not to three profiles but, I kid you not, to twenty-three psychological profiles! It is as if they were told: "We would like you to take penalty shots at goal, but from the halfway line, with your left (or weaker) foot, and with Casillas of Real Madrid in goal."
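To put a number on the chance baseline the astrologers faced (my own illustrative sketch, not a calculation from the paper): when each of 23 charts is paired with one of 23 profiles at random, any single match has only a 1-in-23 chance (about 4.3%) of being right, and the expected number of correct matches across the whole set is just one. A short simulation makes this concrete:

```python
import random

def expected_correct(n, trials=100_000, seed=42):
    """Simulate randomly pairing n charts with n profiles and
    return the average number of correct matches per trial."""
    rng = random.Random(seed)
    profiles = list(range(n))
    total = 0
    for _ in range(trials):
        rng.shuffle(profiles)
        # A match is correct when chart i is paired with profile i.
        total += sum(1 for i, p in enumerate(profiles) if i == p)
    return total / trials

# With 23 charts, as in McGrew and McFall's design, random guessing
# yields roughly one correct match on average, and each individual
# match succeeds by chance only 1 time in 23.
print(expected_correct(23))  # ~1.0
print(1 / 23)                # ~0.0435
```

A notable property of this setup is that the expected number of chance hits stays at one whether there are 3 subjects or 23, while the odds of any particular match being right collapse as the pool grows: hence the needle-in-a-haystack comparison.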

Could psychologists or teachers have made a similar match?

To establish the feasibility of the test, it might have been useful to have a group of psychologists demonstrate their ability to match the three tests (assuming these tests are not duplicated and that there are no obvious clues) for each of the 23 subjects. Would they find this task easier than making their own assessment of a client or patient? How well would teachers, who regularly provide their students with an end-of-year report, be able to match 23 students' self-assessments to their reports? And would that matching be easier to perform accurately than writing their regular reports? If these professionals failed at these matches, would they be failures in their profession? I believe that even these relatively simple tasks are much harder than one might think. Whatever your opinion, it is an untried and unknown task. Nevertheless, McGrew and McFall try to justify their untried experimental task by asserting, without any supporting evidence, that it was easier to perform accurately than astrological counselling. They are asking us to take a leap of faith to rescue their experiment from its obvious fatal flaw: it set an impossible task.

Astrologers were not given a fair opportunity to rate or even rank the range of match possibilities

The astrologers were asked only for a first-choice match, which they could rate as a percentage. However, there was no opportunity to rate less fitting secondary matches or to reject poor matches - a feature that dramatically increased the power of the Carlson experiment. Even asking the astrologers to rate the closest and the poorest match would have provided more useful data. Though McGrew and McFall did not know it at the time, this was probably the most important lesson from the Carlson test: the more scope the astrologers were given to express their judgement and confidence, the more precisely their decisions could be measured and assessed. As this precision increased, so did the statistical significance of the results.

No graphs or tables of data to support conclusions or to enable analysis

Though there are no data tables to analyze, McGrew & McFall assure us that the astrologers were unable to match the charts significantly and were surprised at their lack of agreement. This is what happens when you design a test that favours random results in a small sample.

Criticism of astrological techniques without understanding them

McGrew and McFall attempt to analyze why the astrologers came up with different results. It is hard to tell whether their attempts to use astrological terminology are meant to impress other scientists with their expertise in what they call an 'overcomplex subject'. Any astrologer (or anyone versed in ancient history or philosophy) reading "many possible combinations that result from assigning different weightings to various elements of the chart..." will think in terms of the four classical elements. However, 'elements' is used here in a non-astrological sense. A few lines later, the word 'aspect' is dropped in a misleading way: "... at least, from the aspect of personality characteristics. Aspects of timing are probably less ...". To an astrologer, an aspect refers to the angular relationship between planets and points in a chart; here McGrew and McFall use the word in a muddled way. The lesson is that before attempting to criticise how astrologers analyze a chart, one should know what they are doing. Astrology is bound to be an 'overcomplex subject' to anyone who thinks they can learn it by osmosis.

Submission to the science journal, Nature

Apparently, McGrew and McFall submitted their paper to Nature - perhaps hoping to emulate the 'success' of Shawn Carlson. However, cobbling together a few local astrologers to do an impossible test with an inevitable result and a sceptical conclusion does not gain entry to a journal like Nature. At that time, with Nature under the stewardship of Maddox, the modus operandi seemed to be that what you knew mattered less than whom you knew. The young undergraduate Carlson appears to have been shoe-horned in by CSICOP.

Rejection by Nature

Nature rejected their paper by peer review. And who did the peer review? None other than Shawn Carlson, the author of the previous astrology test of which McGrew and McFall had been critical. Carlson was scathing, accusing them of 'sloppy scholarship' and of being 'unaware of the current state of the literature'. Understandably, McGrew and McFall responded with a letter to the deputy editor of Nature, Peter Newark, complaining of Carlson's vehement and disturbingly personal tone. Newark replied: "While we appreciate that Dr. Carlson (he was not a Dr at the time) has a position to defend, nevertheless his critique contains several points that seem seriously to undermine any case for publication in Nature". (Official correspondence with John McGrew, Sept 7 1988.)

Bias & Questionable ethics within Nature

The fact that two psychology professors conducting psychological research were reviewed by a graduate student in physics who was himself a subject of criticism in their paper does not reflect well on the quality and ethics of the peer-review process under the editorship of John Maddox, a CSICOP fellow. Membership of unscientific groups like CSICOP (now named CSI) is unacceptable for those in positions where matters of science should be judged objectively rather than by personal prejudice.

In this instance, the astrologers were not given a fair chance.

While the scientists were unjustifiably, unfairly and impolitely treated by Nature, their experiment was not worthy of publication, as it set an impossible task. At the end of the study, McGrew and McFall noticed that the astrologers were unimpressed with their 'evidence'. They archly commented that the astrologers' "response to the study raises interesting questions about the nature of belief systems and the resistance of belief systems to change in the face of disconfirming evidence." For most professional astrologers, astrology is not a belief system; it is based on knowledge. This is why the astrologers would have known that it was McGrew and McFall, in their benign ignorance, who failed to give astrology a fair chance. Given a history of experiments that have failed to create a realistic measure of astrology, it is small wonder that astrologers will trust their own empirical studies rather than the spurious claims of a scientific test by scientists who do not understand astrology.
A Scientific Inquiry Into the Validity of Astrology
Psychology Department, Indiana University, Bloomington, IN 47401
Journal of Scientific Exploration, Vol. 4, No. 1, pp. 75-83, 1990
[1] Though 28 astrologers accepted the invitation to participate in Carlson's experiment, he does not state how many actually completed their assignment.
[2] The Strong-Campbell Interest Inventory (Form T325) and the Cattell 16 P.F. Form.
[3] Vernon Clark, "Experimental Astrology", Aquarian Agent, 1961, pp. 22-23.
[4] Ertel, "Appraisal of Shawn Carlson's Renowned Astrology Tests", Journal of Scientific Exploration, p. 128.

Robert Currey

Cognitive bias in the McGrew and McFall experiment: Review of "A scientific inquiry into the validity of astrology". ISAR International Astrologer 41(1), 31-37. A published peer reviewed paper on this experiment by independent researcher, Ken McRitchie.