A Small Change Leads to Big Questions
Because I'm a student at the University of St. Thomas, I regularly receive news items that UST would like me to know about. Something that caught my eye this week was an item about UST's ranking in the annual BloombergBusiness rankings of MBA programs. It wasn't the news of yet another round of rankings by yet another news agency that caught my attention. It was the methodology used in the rankings, or more specifically, a change in the methodology from 2014 to 2015, that intrigued me.
In past years, BloombergBusiness has said the purpose of its rankings research is to determine which business schools offer the strongest education and best prepare MBAs for their careers. Prior to 2015, the rankings were based on three weighted measures: a survey of student satisfaction (45% of the ranking); a survey of employers who hire those graduates (45%); and the expertise of each school's faculty, measured by faculty research in esteemed journals (10%).
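To make the arithmetic of such a weighted ranking concrete, here is a minimal sketch. Only the weights (45% / 45% / 10%) come from the old methodology; the component scores for the example school are made-up numbers, and the real Bloomberg formula for normalizing survey responses into scores is not public.

```python
# Sketch of a weighted composite like the pre-2015 methodology.
# Weights are from the article; everything else is illustrative.

WEIGHTS = {
    "student_survey": 0.45,   # student satisfaction survey
    "employer_survey": 0.45,  # survey of employers who hire graduates
    "faculty_research": 0.10, # faculty research in esteemed journals
}

def composite_score(scores):
    """Weighted sum of component scores, each on a 0-100 scale."""
    return sum(WEIGHTS[name] * value for name, value in scores.items())

# Hypothetical school: strong research, weaker survey results.
example = {"student_survey": 70.0, "employer_survey": 65.0,
           "faculty_research": 95.0}
print(composite_score(example))  # 0.45*70 + 0.45*65 + 0.10*95 = 70.25
```

Note how little the research component moves the total: even an outstanding 95 on research adds at most 9.5 points to the composite, which is one reason dropping it reshuffles the rankings less than you might expect for research-heavy schools.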
This year (2015) BloombergBusiness says that their annual ranking for full-time MBA programs now focuses on "what most people hope to get after business school: a satisfying well-paying job".
With this change in focus, the methodology was changed to expand the base of survey subjects (alumni are now included in addition to employers and students). Additional measures were also added: job placement rates now account for 10% of the ranking, and starting salaries for another 10%. Notably and unceremoniously, the measure of faculty research was dropped from the methodology:
"Older elements of our ranking, including a tally of faculty research, have been scrapped because they don't get at our fundamental question: how well does this business school channel its graduates into good jobs?"
Methodologically speaking, based upon the research question being posed, this could be a justifiable decision. But I wonder whether the new methodology produced rankings that are any more meaningful than those produced by the previous one. And who is being served (or dis-served) by this change? For instance, in 2014 UC Berkeley (Haas) and NYU (Stern) were ranked #19 and #22 respectively. In 2015, Haas is #9 (up ten spots) and Stern is #24 (down two spots). Really? What is this saying? (Other than that rankings are bullshit publicity tools.) What is a prospective MBA student to surmise from this?
But mostly, I can't help but think about the unintended consequences of dropping faculty research from the rankings. This affects both schools of business and the students who attend them. I'll be elaborating on what I think are potential consequences in future blog entries. But what do you think? Was this a good decision? Is faculty research no longer important to an MBA degree? Do you think dropping faculty research from the methodology of MBA program rankings affects the debate surrounding the relevance of research in business schools?
If you want to read more, here is a link to the article about the 2015 rankings: http://www.bloomberg.com/features/2015-best-business-schools/