Wednesday, October 28, 2015

Methodology Matters

A Small Change Leads to Big Questions

Because I'm a student at the University of St. Thomas, I regularly receive news items about things that UST would like me to know about. Something that caught my eye this week was an item about UST's ranking in the annual BloombergBusiness rankings of MBA programs. It wasn't the news of yet another round of rankings by yet another news agency that caught my attention. It was the methodology used in the rankings, or more specifically, a change in the methodology from 2014 to 2015 that intrigued me.  

In past years, BloombergBusiness has said the purpose of its rankings research is to determine which business schools offer the strongest education and best prepare MBAs for their careers. Prior to 2015, the rankings were based on three weighted measures: a survey of student satisfaction (45% of the ranking); a survey of employers who hire those graduates (45%); and the expertise of each school's faculty, measured by faculty research in esteemed journals (10%).

This year (2015) BloombergBusiness says that their annual ranking for full-time MBA programs now focuses on "what most people hope to get after business school: a satisfying well-paying job".

With this change in focus, the methodology was changed to expand the base of survey subjects (alumni are now included in addition to employers and students).  Additional measures were also added: job placement rates now account for 10% of the ranking, and starting salaries for another 10%.  Notably and unceremoniously, the measure of faculty research was dropped from the methodology:

"Older elements of our ranking, including a tally of faculty research, have been scrapped because they don't get at our fundamental question: how well does this business school channel its graduates into good jobs?"

Methodologically speaking, based upon the research question being posed, this could be a justifiable decision.  But I wonder if the new methodology produces rankings that are any more meaningful than those produced by the previous methodology.  And who is being served (or dis-served) by this change?  For instance, in 2014 UC Berkeley (Haas) and NYU (Stern) were ranked #19 and #22 respectively.  In 2015 Haas is #9 (up ten spots) and Stern is #24 (down two spots).  Really?  What is this saying? (Other than that rankings are bullshit publicity tools.) What is a prospective MBA student to surmise from this?
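
To see how much a reweighting like this can move schools around, here is a small sketch. The schools and component scores are entirely made up, the 2014 weights are the ones described above, and the way the 2015 methodology splits its survey weight is my own assumption, since Bloomberg only specifies the two new 10% measures.

```python
# Illustrative only: hypothetical schools and scores, NOT Bloomberg's data.
# old_weights follow the 2014 methodology described above; the split among
# the three surveys in new_weights is an assumption for illustration.

def weighted_score(scores, weights):
    """Combine component scores (0-100) using fractional weights summing to 1."""
    return sum(scores[k] * w for k, w in weights.items())

old_weights = {"students": 0.45, "employers": 0.45, "research": 0.10}
new_weights = {"students": 0.30, "employers": 0.30, "alumni": 0.20,   # assumed split
               "placement": 0.10, "salary": 0.10}                     # stated 10% each

schools = {
    "School A": {"students": 90, "employers": 70, "research": 95,
                 "alumni": 65, "placement": 60, "salary": 60},
    "School B": {"students": 75, "employers": 80, "research": 50,
                 "alumni": 85, "placement": 90, "salary": 90},
}

for name, s in schools.items():
    print(name,
          round(weighted_score(s, old_weights), 2),   # 2014-style score
          round(weighted_score(s, new_weights), 2))   # 2015-style score
```

With these invented numbers, School A leads under the old weights (81.5 vs 74.75) and School B leads under the new ones (81.5 vs 73.0): same schools, same underlying data, opposite ranking. That is the kind of swing that could explain a Haas-sized jump without anything at the school actually changing.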

But mostly, I can't help but think about the unintended consequences of dropping faculty research from the rankings. This affects both schools of business and the students who attend them. I'll be elaborating on what I think are potential consequences in future blog entries.  But what do you think?  Was this a good decision?  Is faculty research no longer important to an MBA degree?  Do you think dropping faculty research from the methodology of MBA program rankings affects the debate surrounding the relevance of research in business schools?

If you want to read more, here is a link to the article about the 2015 rankings: http://www.bloomberg.com/features/2015-best-business-schools/

Monday, October 26, 2015

Coming At It from Different Angles

Triangulation

The idea of triangulation comes from navigational and land surveying techniques, where a single point, whether on land, on water, or in space, is calculated using multiple angles originating from multiple locations.  In everyday parlance, the term is often used when someone refers to a considered approach that reviews a situation from multiple perspectives.

We have three texts and other assigned readings for the Research Methods class, and each looks at research methodologies through a different lens.  One presents research methodologies as they are "classically" understood within the framework of scientific inquiry.  Another presents them as they are applied in the setting of organizations.  Yet another presents them through a framework of usefulness and relevance.

Because of these different lenses, I've been experiencing a push and pull effect. I find myself becoming engaged with the theoretical concept of a certain methodology and then rejecting it once I understand the more practical challenges of application and relevance within organizations.

Now, I'm starting to think that it is more helpful to think in terms of a triangulated approach. Instead of thinking of the textbooks as looking at methodologies through totally separate lenses, I'm beginning to think about what impact different settings and different research designs have on the different methodologies.  Ultimately this becomes a series of trade-offs based upon what it is you want to accomplish.

Once again we are back to the importance of the research question. There is so much to consider.  For instance, do you want the research to produce findings that are specific to one setting, or do you want to be able to 'generalize' the findings to a larger group?  Is there a theory being tested, or is the research designed to generate a theory?  Triangulation is also useful as an approach that uses both quantitative and qualitative data to examine a research question.

I've found triangulation to be very useful for everyday decision making.  I have a better understanding of why I make the decisions I do, and ultimately it leads to higher quality and greater confidence in my decisions.


Monday, October 19, 2015

What Happens to Research Failures?

The Positive Results Bias

Last week I learned about a thing called the Positive Results Bias in research.  

Basically, this describes a situation where research that produces a positive result, or "significant findings of interest," is considered more worthy of attention in the research marketplace than research producing what is called a negative result. Negative results aren't an indication of finding something bad. The term "negative" is used when the research findings fail to support a new hypothesis, a new theory "doesn't work," or a commonly held belief could not be disproved.

When results are characterized as 'negative,' they become immediately less appealing to research journals and can even reflect negatively on the perceived quality of the research.  This contributes to what is sometimes referred to as the "file drawer effect": when early research findings show that a hypothesis may not be supported, or the results are negative or inconclusive, the findings may be 'filed away' and never shared with the broader research community.
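
The file drawer effect can be made vivid with a toy simulation. In the sketch below, 2,000 hypothetical studies all test an effect that is truly zero, and only the ones that happen to reach p < 0.05 get "published." Every number here is invented for illustration; nothing comes from a real study.

```python
# Toy simulation of the "file drawer effect": the true effect is ZERO, yet
# the "published" subset (p < 0.05 only) reports a sizable average effect.
import random
from statistics import NormalDist, fmean

random.seed(42)

def run_study(n=30, true_effect=0.0):
    """One study: sample n observations, return the observed mean effect and
    a two-sided z-test p-value against zero (population sd assumed to be 1)."""
    sample = [random.gauss(true_effect, 1.0) for _ in range(n)]
    mean = fmean(sample)
    z = mean / (1.0 / n ** 0.5)          # standard error of the mean is 1/sqrt(n)
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return mean, p

results = [run_study() for _ in range(2000)]
all_effects = [abs(m) for m, p in results]
published = [abs(m) for m, p in results if p < 0.05]   # the file drawer keeps the rest

print(f"mean |effect|, all 2000 studies: {fmean(all_effects):.3f}")
print(f"mean |effect|, 'published' only: {fmean(published):.3f}")
print(f"share of studies 'published':    {len(published) / len(results):.1%}")
```

Roughly 5% of the studies clear the significance bar by chance alone, and those survivors report average effects several times larger than the full set of studies, even though nothing real was found. A reader who sees only the published record would conclude there is an effect.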

This bias is of particular concern in biomedical research, where, for instance, the failure to publicize negative results in clinical drug trials could have life-and-death consequences.  But for any researcher concerned with a balance of rigor and relevance, as I am in the social sciences and in research on organizations, this is very concerning.

When research journals and the popular media focus their attention on studies with positive results, they do so at the expense of understanding all the knowledge being created through research in any given field.  From my perspective as a scholar-practitioner who uses research as a tool to help organizations be more effective, this keeps valuable knowledge out of the marketplace of ideas.

Change in organizations is constant and influenced by many different factors. A finding of something NOT producing an expected result or findings that support already existing theories and practices can be just as relevant and valuable as, for instance, findings that isolate a statistically significant change in the contribution of one individual factor to a multi-factored result.  

Some of the best ways I can help organizations are to identify barriers to progress and to help eliminate things that DON'T work.  These can often be found in what would be considered negative research results.  When researchers limit the sharing of their work to that which produced only positive results, other researchers will continue to invest in needless repeated trips down the same rabbit hole.

Researchers and research journals should consider a research project a success if its methodology, design, and execution are rigorous and produce reliable results, regardless of whether those results are positive or negative.

Sunday, October 4, 2015

Diving In...

Whenever I take on a new project, I tend to immerse myself in it.  I call it being at least three questions deep - being able to answer three consecutive questions about the how, why or what of something.  

Right now, I'm immersed in the Research Methods Lab and trying to understand research methodologies that are used in the field of Organization Development.  It's beginning to look like three questions deep doesn't even scratch the surface.

Something I'm finding interesting is how, in research, things can go wrong before you even start.  A common mistake is to go out and ask a lot of research questions before the problem is fully understood.  The process should start by identifying the problem.

A research problem is found where there is a void of knowledge (something we don't know) and from there, a research question is formulated (what we want to know).  How this research question is formulated (how we will fill the knowledge void) begins to determine the methodology used to answer the question. This can be a circular, iterative process.  The research question is important in determining the methodology and concurrently, the methodology for how the question will be answered helps shape the question.   All of this happens before we can start the research!

With that in mind, here are my top five goals for the Research Method Labs course.

1. Stay Patient.   - Methodology comes from the Greek methodikos, meaning methodical or systematic.  Oh, this will be hard for me.

2. Keep the jargon in check. - Hermeneutics and epistemology are important concepts for a researcher to learn and understand but probably off-putting as small talk at the dinner table.

3. Keep the "practical" in practice - both words coming from the Greek praktikos meaning "fit for action, fit for business".  As a scholar-practitioner, I want to understand how to formulate research questions that produce relevant "practical" research results.

4. Keep the "search" in research - I can see how it would be possible to become so focused on methodological rigor that we lose sight of the goal of answering questions. A good researcher should stay open and curious, regardless of the methodology being used.

5.  Find my "Methodology Match"  - This involves learning as much about myself as about methodologies.  Research is not a one-size-fits-all proposition.  Everyone has natural inclinations and preferences, with different methodologies resonating differently based upon a person's interests and experiences.   It's important for me to "try on" different methodologies to see how they fit, so I can eventually gravitate toward those that suit me best and for which I am best suited.