Architectural Blatherations

James Cramer & DesignIntelligence Bungle Rating the Architecture Schools


The Good Oil

Note: Along with the rest of this site, this page ceased updating in 2014. No further results will be published.

Every few years we rate the world's architecture schools on their research performance.

DesignIntelligence (DI) also provides an annual rankings list of American schools. It purports to guide prospective students to the light: you just have to pay them a hefty fee to receive this illumination. On this page we sort out some of the differences between DI's rankings and our own. We also take a look at the Cramer Report, a once-off attempt in 2008 by Mr Cramer to lift his game.

We think our rankings serve a worthwhile but very specific purpose. We also think our stats are valid and robust. We have no idea what purpose DI's rankings serve—apart from making them buckets of money—and we do not believe their rankings are valid or robust.

Our challenge to DesignIntelligence and James Cramer

We challenge you to describe your methodology in detail, to reveal key statistical information such as standard deviations, correlations, and distributions: as we do.

We challenge you to increase your sample size to all measurable American schools: as we do.

We challenge you to explain the apparent methodological discrepancies between the 2008 Cramer Report and the annual DI rankings.

We challenge you to move from your antiquated pay-per-view downloadable-pdf business model to a free web-page: as we do.

Ranking Schools

As Aussies, we are a bit mystified by the whole rankings mystique. Americans seem to have a particular penchant for numbers and ratings. Maybe it's that baseball thing. Mr James Cramer—founding editor, publisher, CEO, Grand Wizard and Chief Dingleberry of DI—puts this down to the American love of a meritocratic system, in which the best people get to the top. So that explains George W. Bush and Kim Kardashian.

We think it's nuts, for reasons we discuss below. Sure, we can understand that you want to choose a good school rather than a dud. At the coarsest level, that's easy: Harvard is better than Emphysema U in Buttfuck, Montana. But we knew that already, didn't we?

And, yes, we do publish our own rankings, but we go out of our way to say they should be taken with a grain of salt; no, make that a salt mine of salt. Mr Cramer and DI hold their ratings as sacred as the tablets of Moses.

The Cramer Report and the DesignIntelligence Ratings

Apart from our own system, we'll be talking about two sets of ratings in this article, both published by James Cramer of DesignIntelligence. The most widely known is the annual ranking of American schools published by DI: America's Best Architecture & Design Schools (hereafter the DI report). This simply provides two Top Twenty lists, one of undergraduate and one of graduate architecture programs. It provides no numerical data: we do not know what score the top school received compared to the 20th. The second is the once-off 2008 Cramer Report, which assigns a numerical score to each school.

How many schools are covered by the rankings?

At Architectural Blatherations we include over 160 schools from the United States, the United Kingdom, Canada, Australia, New Zealand and other nations. We surveyed every single school that we could get reliable data from: 103 from the USA. The last DI report we could be bothered to pay for (2010) lists 33 schools. In the 2008 Cramer Report, James Cramer finally got off his well-padded arse {ass} to provide some meagre information about 71 schools.

For comparison, the NAAB accredits 117 schools, and the ACSA has 122 members (2010 data).

                                                                      Dr Garry (2009)   DI (2010)   Cramer Report (2008)
Percentage of American schools covered                                      83%            27%             56%
Percentage of the American architectural student population covered        88%            33%             62%

Data on schools and student populations from the ACSA (2010).

Assuming, for the moment, that DI's rankings are of some use, this means that DI's list covers the schools attended by only one in every three American architecture students. It gives no guidance at all to the majority of people looking for a school.

The aims of the rankings

We make very clear here what the aim of our ranking system is: to guide architecture students wishing to pursue research, to those schools that do it best. There are many utter morons who pretend to read our intentions otherwise, claiming that we publish a general guide to the schools. No, we don't.

We are not sure what the purpose of DI's rankings is. DI talks about "quality programs". Given its target audience, we assume that the purpose of DI's system is to direct young people into those schools that will…

  • Maximize their income over their professional lifetime?
  • Give them a good education in architecture?
  • Give them a liberal education?
  • Make them attractive to future employers?
  • Reduce their parents' neuroses?
  • Justify their parents' school fees?
  • Increase their prospects of being laid?

So what is DI's aim in publishing its rankings? We would be called cynical if we said that the primary aim was to make money for DI.

Making money from rankings

Dr Garry does not make a single goddam penny from this website or his rankings. He has no interest in promoting one school over another, nor one nation over another. He wants to spend his days swanning around in a smoking jacket, while dusky maidens cater to his every need and spaniels gambol around his feet. But every so often buffoons like James Cramer rouse him from his indolence and require a response. Dr Garry's had to fire all his dusky maidens to get the time to write this page, and replace his spaniels with cats. We hope you're satisfied, James Cramer!

James Cramer and DI demand you pay them money. DI has a vested interest in making sure you pay money year after year for the same annual report of the same tiny set of American schools. Who's the top school this year? Yale! Columbia! Emphysema U!

Dr Garry's methodology at Architectural Blatherations

We have a defined measure, and a defined purpose. We tell you exactly what we measured and how we did it.

We do not give spurious accuracy. In our rankings of the American schools, we report our results to two significant figures (tops), the limit of our accuracy.

We conduct our rankings at respectable time intervals. We used to do it every two years, but we have now moved to a five-year schedule. The quality of academic research production changes slowly. We discovered that the turnover (entrants, retirements, and moves to other schools) of American professors is very low: less than 10% per two-year period. So a whole school's research score will only change slowly. Annual rankings would just pick up noise in the data.
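
For the sceptical, here is a minimal back-of-the-envelope simulation of what that low turnover implies. It is ours alone, it is not our actual scoring method and it is certainly not DI's, and every number in it is an assumption for illustration: 100 hypothetical schools of 20 faculty each, roughly 10% faculty turnover per two-year period, and a dash of measurement noise.

    # A rough simulation, not our scoring method and not DI's: assume 100
    # hypothetical schools of 20 faculty each, roughly 10% faculty turnover per
    # two-year period, plus a little measurement noise, then ask how much the
    # school rankings move between one survey and the next.
    import numpy as np

    rng = np.random.default_rng(0)
    n_schools, n_faculty = 100, 20

    # Each school's "research score" is the sum of its faculty members' outputs.
    faculty = rng.lognormal(mean=0.0, sigma=1.0, size=(n_schools, n_faculty))

    def school_ranks(faculty_scores, noise_sd=0.02):
        """Rank schools on total faculty output plus a dash of measurement noise."""
        totals = faculty_scores.sum(axis=1)
        observed = totals * (1 + rng.normal(0, noise_sd, size=n_schools))
        return observed.argsort().argsort()    # rank of each school by observed score

    ranks_now = school_ranks(faculty)

    # Two years later: roughly 10% of faculty leave and are replaced at random.
    faculty_later = faculty.copy()
    movers = rng.random(faculty.shape) < 0.10
    faculty_later[movers] = rng.lognormal(mean=0.0, sigma=1.0, size=movers.sum())

    ranks_later = school_ranks(faculty_later)

    # Correlation between the two sets of ranks: typically around 0.9.
    print(np.corrcoef(ranks_now, ranks_later)[0, 1])

With those assumed numbers, the two surveys agree to a rank correlation of roughly 0.9: the ordering barely budges over two years, so re-surveying every year is mostly an exercise in re-measuring the noise.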

James Cramer and DI's confused methodology

You would think that the methodologies behind the 2008 Cramer report and the annual DI rankings would be in complete harmony. Especially since they are authored by the same guy. But fracked if we can work out what is going on.

The Cramer Report's Methodology

Fracked if we know what it is. The sidebar in the Cramer Report we have lists five confused and opaque measures or categories of measure. Mr Cramer shuffles a few numbers, conjures a score into existence, and hopes no-one notices his sleight-of-hand. He's been pulling off the same tacky trick for more than ten years. Take Category 2:

Category 2: Results of the 10 years of DesignIntelligence rankings by professional practices, with more weight given to recent years.

Eh? How were the surveys conducted? Who did you talk to? Were the firms population-weighted? What weights did you use for each year? How did you decide them? Why ten years? How was this entire category weighted against the other four? How did you decide that? Now look at Mr Cramer's Category 1:

Category 1: Results of the 2009 [Dr Garry: actually 2008 data] DesignIntelligence rankings by professional practices.

Whoa! Doesn't that collide with your Category 2? And what weighting did you give that?

They seem to be conflating several measuring instruments and mashing them into one scale. What are the correlations between these different measures? If several of Mr Cramer's categories are highly correlated, then including them all gives a deceptive super-weight to whatever they share. For example, if DI's Categories 1 and 2 are highly correlated, then including both simply over-weights the underlying quality they are measuring. We'd like to see some studies.
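
To see why this matters, here is a toy calculation with entirely made-up numbers: it uses none of DI's data and assumes nothing about what their real categories contain. Two of three "equally weighted" categories are noisy copies of the same underlying quantity (call it practitioner regard), so that quantity ends up dominating the composite.

    # A toy calculation with made-up numbers (none of DI's data, and no claim
    # about their real categories). Two of the three "equally weighted"
    # categories are noisy copies of the same latent quantity, so that quantity
    # dominates the composite score.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 1000                                    # hypothetical schools

    practitioner_regard = rng.normal(size=n)    # latent quality behind cats 1 and 2
    cat1 = practitioner_regard + rng.normal(scale=0.2, size=n)   # this year's survey
    cat2 = practitioner_regard + rng.normal(scale=0.2, size=n)   # ten years of surveys
    cat3 = rng.normal(size=n)                   # some genuinely independent measure

    composite = (cat1 + cat2 + cat3) / 3        # nominally "equal" weights

    print(np.corrcoef(composite, practitioner_regard)[0, 1])   # roughly 0.9
    print(np.corrcoef(composite, cat3)[0, 1])                  # roughly 0.45

The composite tracks the shared practitioner factor tightly and the genuinely independent category only weakly: the nominal one-third weights are a fiction. Whether DI's real categories behave like this, only DI can tell us, which is rather the point.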

Here's a tip, guys: hire a statistician, just for a few weeks. No, better: hire us for a couple of months! We don't claim to be statisticians, but even we could do a better job than you.

Implicit in DI's opaque rankings is the assumption that the distribution of architecture schools looks something like a normal or bell curve. In this sort of distribution a school with an average or mean score is a typical school. Fair enough.

Now take a look at the distribution of America's schools on our research rankings. Very different. In this sort of distribution, a school with an average or mean score is not typical: for that you have to look at the median score. Are we saying that our ratings distribution corresponds to DI's? No. But we are saying DI should justify to their paying customers WTF they are doing.

Chart: The distribution of American architecture schools in our research ratings.
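
If you want to see the mean-versus-median point for yourself, here is a tiny demonstration on synthetic data: the distributions below are invented, not our scores and not DI's.

    # A tiny demonstration on synthetic data (not our scores, and not DI's) of why
    # the mean describes a bell curve well but misleads on a skewed distribution.
    import numpy as np

    rng = np.random.default_rng(2)

    bell   = rng.normal(loc=50, scale=10, size=100_000)        # symmetric bell curve
    skewed = rng.lognormal(mean=3.0, sigma=1.0, size=100_000)  # long right tail

    for name, scores in [("bell", bell), ("skewed", skewed)]:
        print(name, round(np.mean(scores), 1), round(np.median(scores), 1))

    # bell:   mean ~ 50, median ~ 50  -> the "average" school is also the typical one
    # skewed: mean ~ 33, median ~ 20  -> most schools sit well below the mean

In a distribution like ours, a handful of heavyweight research schools drags the mean well above what a typical school achieves, which is why the median, not the mean, tells you what a typical school looks like.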

World-class?

The Cramer Report is subtitled "America's World-Class Schools of Architecture". So, are they? What makes a world-class school? Mr Cramer has a very simple criterion.

Category 5: DesignIntelligence considers NAAB accreditation to be a minimum standard for world-class status.

No, Mr Cramer, NAAB accreditation in the USA is a minimum standard for accreditation in the USA. Mere accreditation in your country does not constitute world-class status. You really do need to get out more, James. Do you really have to display your naivete to the rest of the world? Take a look at this page.

DI's Methodology

The annual DI rankings are as methodologically confused as the Cramer report. The Cramer report uses five confused categories. The DI rankings have a much simpler but equally crap system. We quote from the 2010 report (page 13):

The 11th annual…study…ranks accredited undergraduate and graduate programs from the perspective of leading practitioners. The survey, conducted in mid-2009, tapped professional practice leaders who have had direct experience in hiring and evaluating the performance of recent architecture…graduates.

Right. You are relying on the ravings of sad old farts trying to relive their youth (strike that: the astute observations of practitioners from eminent firms). Ok, fair enough.

What we don't understand is the rest of the text on that same page. You say there that you also queried participants about many other issues. Did these factor into your ratings? You don't say. You go on to say:

The rankings are compiled using data from surveys conducted and analyzed by DI with supplemental information from the National Council of Architectural Registration Boards, the National Architectural Accreditation Board, [and] the American Institute of Architectural Students.

WTF? At the top of page 13 you said you relied entirely on information from leading practitioners. Now you say you have used a whole bunch of other stuff! Exactly how does that work, Mr Cramer? We're confused! And we suspect you are too!

Time-lag effects

DI produces an annual ranking, which implies that schools jump around enough to justify such a short time interval. Why would you pick a school to study architecture based on DI's ranking now? If annual rankings are so volatile, wouldn't you want to know DI's ranking of the school when you graduate? We hope Mr Cramer has a Tardis.

Prospective employer: You spent five years at school at Podunk U. Why did you choose Podunk?

You: When I entered, DI ranked it the very bestest school in the whole universe, ma'am.

Prospective employer: This year it didn't even make their list. So during your time there the whole place fell apart? What sort of crap education did you get?

Cramer vs Cramer

The 2008 Cramer report gives the lie to Mr Cramer's other annual money-spinning venture, the DI annual rankings. The chart below is a scree-plot or scree-chart, derived from the Cramer report. Each school assigned a numerical score in the Cramer report is ordered from left to right, the best school at the left. We show the breakpoints Mr Cramer used to distinguish "Super Dooperest" schools from those merely "Super". We also show the breakpoint at which a school would enter DI's Top 20 list.

Chart: Scree-plot from the 2008 Cramer Report. Scores of the schools, best at left. The breakpoint at which a school would enter DI's Top 20 list is also shown.

Are the breakpoints justified? With regard to the Cramer report we'd say: sort of. But they do seem to be eensy-weensy differences, don't they? A school with a score of 468 is Highest Distinction, but one with 440 is in the lowest category of Distinction. That's only 28 points out of almost 500. Really: 28 out of 500.

The difference between a Highest Distinction school and a High Distinction school is one point (468 vs 467). We'll say that again: one point. Really? One point is all it takes? Of course there must be breakpoints between categories, but we are not convinced that the Cramer report has constructed the right ones. We have no idea how the scores were assigned, and more importantly, we have no information about the other 50 or so American schools Mr Cramer did not include in his report. What scores did they get? Let's take a look at the exact same chart above, but with the score axis set to zero.

Chart: The same scree-plot as above, rescaled to show the zero point of the score axis.

Yuh, right. How did you choose those breakpoints, Mr Cramer? And how do you justify choosing only a Top 20 for your annual DI rankings? You obviously have some notion of the concept of a breakpoint, but you persist in a naive Top 20 list. That's crap and you know it is. How many points' difference is there between your 20th school and the 21st? Was that a reasonable breakpoint? Did your 20th school have as many points as your 21st, 22nd, 23rd, and so on?
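
Here, for what it is worth, is a sketch (ours alone) of the sanity check we have in mind: do the breakpoints fall at unusually large gaps between neighbouring schools, or at one-point differences? The scores in it are invented for illustration; they loosely echo the 468, 467 and 440 discussed above but are not the Cramer Report's actual numbers.

    # The sort of sanity check we would like to see published alongside any cut-off:
    # do the breakpoints fall at unusually large gaps between neighbouring schools,
    # or at one-point differences? The scores below are invented for illustration
    # and are NOT the Cramer Report's actual numbers.
    def natural_breaks(scores, n_breaks=3):
        """Return the positions of the largest score gaps between neighbouring
        schools; scores are assumed sorted best-first."""
        gaps = [(scores[i] - scores[i + 1], i) for i in range(len(scores) - 1)]
        gaps.sort(reverse=True)
        return sorted(i for _, i in gaps[:n_breaks])

    scores = [492, 488, 487, 468, 467, 466, 465, 464, 440, 439, 431, 430, 429]
    for i in natural_breaks(scores):
        print(f"gap of {scores[i] - scores[i + 1]} points after school #{i + 1} "
              f"({scores[i]} vs {scores[i + 1]})")

On a real score list, a defensible breakpoint sits at one of the big gaps. A Top 20 whose 20th and 21st schools differ by a single point is not a breakpoint; it is a marketing decision.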

We acknowledge the Gadigal and Wangal peoples of the Eora nation as the traditional custodians of the land upon which this website is produced.

Copyright © 2001–2014 Garry Stevens. All rights reserved. Original research on this site is commercial-in-confidence and copyright © 2001–2014 Garry Stevens. It is not public domain. Notwithstanding all this legal palaver, you may freely quote any research on this site or other material to your heart's delight; provided proper attribution to Dr Garry and this site is given.

All opinions expressed are professional opinions, given in good faith.

This website is manufactured entirely from recycled electrons that may have once belonged to nuts and crustacea. View at your own risk.