Wednesday, December 2, 2015

Is it a balance? Or is it a tug-of-war?

Rigor and Relevance

I have been reading the postings for our final class assignment, where we have worked in teams to think about a potential research problem that we find interesting, relevant, and current in the field of OD.  The assignment asks us to define a problem and to support and describe both a qualitative and a quantitative approach to researching this problem. Additionally, the assignment asks us to be sure to account for the context in which we plan to conduct the study, as this will be central to identifying the pros, cons, and values of each approach.  

In reading through the reports, I'm struck by several things.  First is a reaffirmation of the diversity of the field of OD, as evidenced in the wide range of research questions and methodological approaches presented by the cohort thus far.  They contain references to the social and behavioral sciences, management studies, industrial/organizational psychology, human resources management, communication, and sociology, and cover a wide range of research settings.  OD is truly a big tent.

I've also been thinking a lot about the concept of a scholar-practitioner and what it means to be a bridge between scholarship and practice. I'm learning that being this bridge isn't as easy as the logic makes it sound and that the idea of a balance between theory and practice sometimes feels more like a plain old tug-of-war.  In reading through the reports, I have found myself locked in my own internal debate over how to evaluate the different approaches.  

Something I found very helpful was that last line of the assignment: be sure to account for the context in which you plan to conduct the study, as this will be central to identifying the pros, cons, and values of each approach.  I was reminded that, once again, we are back to having no rights or wrongs, just "it depends".  

Even though it may seem like this complicates the matter, to me it felt just the opposite. With the reminder that context is central to the approach, I felt more grounded in my considerations and more attentive to questions like: Who or what is being served?  I found it particularly helpful at those moments when it felt as if there was an inverse relationship between the methodological rigor and the potential for discovery and usefulness.  Who is being served? Is this adding value?  

I consider this to be a valuable lesson in my development as a scholar-practitioner. We are not bound by rigid scholarly expectations, nor are we simply doing without thinking.  We adapt to situations, take context into consideration, and draw from sound principles and practices in multiple disciplines to guide our work. This, combined with the diversity of settings appropriate for the application of OD, reaffirms, for me, how valuable the scholar-practitioner can be to organizations.

I'm interested in knowing how others in the cohort have felt about the assignment. Are there any interesting thoughts you have had from reading the papers that have been posted?


Monday, November 16, 2015

Explaining Me

At our recent face-to-face meeting for the Research Methods class, we briefly discussed how the blogging assignment was going.  As usual, our diverse cohort had diverse opinions, and I came to realize that my approach to, and frankly my enjoyment of, the assignment may be unique to me.  That's why, in this blog entry, I'd like to share some thoughts I've had since that discussion.

I appreciate the blogging assignment for the opportunity it presents to practice critical thinking and to present that thinking in writing.  I consider this to be an important competency for becoming the kind of scholar-practitioner that I want to be, and I'm glad to have the chance to do it. I use my blog to integrate what I am learning from class into the context of my everyday life and what interests me.  I also use it as a chance to practice developing and expressing my point of view on important topics in our field of study. 

In addition to actively posting on my own blog, I have been active in making comments on blogs written by other cohort members, and sometimes those comments take the form of challenging questions or provocative statements.  My purpose in posting these comments has been to encourage more lively discourse, which is something that I have enjoyed in other learning communities in the past.  I was attempting to model a behavior that I desired from others.  

Upon reflection, I realize I was wrong to think that because I desire this kind of feedback that others do, too.  More importantly, I realize that my choices could be misunderstood and that there is a potential for my comments to be considered disrespectful or worse yet, as a kind of intellectual bullying.  I am sorry I created this situation, and I hope this explanation serves as an adequate apology to any of my fellow cohort members who may have misunderstood my efforts.

From the very start of our program, I have been questioning my obligations, and the expectations of me as a member of the cohort in supporting the other members.  I decided and still believe, despite this failed attempt, that we can and should learn from each other, and what I can contribute lies in sharing what I have learned from my experiences in working in organizations.  I'm not going to stop trying to help others, and will consider this a learning opportunity, as we so often extol the virtues of learning from our failures.  

But, you can't learn from failure unless you actually do fail at something - and admit it.  And what better place for failure to happen than in the psychologically safe environment that we consider our cohort to be.  So with that in mind, I wonder, why aren't we doing it more? Why aren't we challenging each other, expressing different opinions or stretching ourselves beyond our comfort zone? This is our chance!  It's our opportunity to safely fail and learn.

We have about 18 months remaining to take advantage of the psychological safety of our cohort and, speaking only for myself, I want to make the most of it.  I will continue to use my blog to share my thoughts with contemplative or provocative posts - probably even after this class is over.  And I am imploring my fellow cohort members and other followers to comment with thoughts that continue the conversation and stretch our collective thinking - or just tell me that you think I'm full of "it" and why!  In return, I will no longer use comments on other blogs for provocation unless it is specifically asked for.  

Thanks for allowing me to express my thoughts and for staying open.  And happy blogging.

"Don't it always seem to go, that you don't know what you've got 'til it's gone"  - Joni Mitchell


Wednesday, November 11, 2015

A "Real Life" OD Research Problem

The assignment for this blog entry is to find a real-life OD research problem that has not been studied yet.  Yikes!  That sounds intimidating.  Hasn't everything been studied?  

In thinking about it, I realized the answer can't be yes because there is still so much we don't know, there are still many problems needing to be solved, and there will always be new data, tools, and insights that provide new ways to look at old problems.  Unfortunately for me, that realization seemed to make the assignment that much harder. 

The idea of looking at old problems in new ways is always intriguing to me.  I have, more than once, found myself reading the findings of a research study and feeling that something was missing or having a sense that some assumption in an analysis was flawed. The MBA rankings research that I blogged about is a good example of this.

I like to think about problems at the highest level, integrating knowledge from many disciplines to make meaning of situations.  And I'm drawn to theory development; particularly to developing theories that challenge conventional wisdom and produce real insights that can benefit organizations and the people in them.  

The lack of relevance in academic research has been talked about for a long time, but it can't go on forever.  Leaders relying upon quick fixes and short-term thinking, even when addressing fundamental strategic problems, is not new, but it can't go on forever, either. These are ultimately unsustainable situations that cry out for new ideas and solutions, and they are the types of organizational problems that are well-suited for OD work and interest me. 

So with that, here are three research questions that I would find very interesting for theory development.

1. Can there be a model for a sustainable, functional organization?  What characteristics do the organization and the people within it have?  How does it work?  How is it led?   (by functional, I mean the opposite of dysfunctional)

2.  How does the concept of a sustainable organization change the dialog about inclusion and the ways diversity can make a positive difference in organizations?

3. How would the concept of a sustainable organization change the dialog about the purpose of business schools, and how would their students and faculty benefit from looking at the purpose of business schools through a sustainable-model lens?  

This represents just the very beginning of my thinking in this area so I'm interested in hearing how it sounds to you.  What do you think of the idea of a model for a sustainable organization? Do you think this could be the basis of a useful theory?

Monday, November 9, 2015

MBA rankings dilemma, continued

This is a follow-up to my previous post on the MBA program rankings.

I'm intrigued with this because it touches on two timely topics for our class; research methodology and the debate surrounding the relevance of research being done in business schools.  

Several good comments on my first post pointed out various shortcomings in the Bloomberg methodology. These comments echo the ongoing criticism coming from academia surrounding the various methodologies being used to rank business schools. So, I wonder, if the rankings are so flawed, why does the business media continue to do them, and why do business schools continue to take them seriously? (even while they are simultaneously protesting their very existence)  

At the risk of sounding overly simplistic, I am attributing this to what I see as a series of disconnects surrounding the 'business' of business schools.  It's as if all the stakeholders (students, faculties, university administrators, donors, businesses, and the business media)  are taking on different agendas and driving them in different directions. 

For instance, most business school leaders would recoil at the thought of their purpose being, as the Bloomberg study said, "to channel its graduates into good jobs".  Most of the top business schools compete for the top students by saying their value lies in how they prepare students to be great business leaders and entrepreneurs. Yet the Bloomberg research showed fewer than 10% of the full-time MBA students surveyed either went to start-ups or started their own business, and 43% of the full-time MBA students went into positions in either consulting or financial services, which, coincidentally or not, is where the highest starting salaries were found.  

Additionally, BloombergBusiness has set a new agenda for the conversation regarding the importance of creating new knowledge and the relevance of this once highly-regarded role of business school faculty, saying that it is irrelevant to what students or businesses want from MBA programs and/or business schools.  Whether students or business leaders value the academic research done at business schools is, I believe, still an open question, but this decision is a disconnect that puts business schools in the defensive position of having to justify the value of their faculty research programs.

Lastly, Bloomberg's decisions bring to light that academic research - generally considered by the faculty to be the most prestigious and valued work of a business school - does not generate revenue for the school and that undergraduate teaching and expanding MBA programs are generating significant revenue, in both tuition and donations, for business schools that are otherwise strapped for cash.  All of this highlights the disconnect between that which generates revenue for business schools, the expectations and desires of the academic business scholar, and what students, employers and donors expect from their investment in a business school.

It seems as if there are many research questions embedded in this dilemma.  


Wednesday, October 28, 2015

Methodology Matters

A Small Change Leads to Big Questions

Because I'm a student at the University of St. Thomas, I regularly receive news items about things that UST would like me to know about. Something that caught my eye this week was an item about UST's ranking in the annual BloombergBusiness rankings of MBA programs. It wasn't the news of yet another round of rankings by yet another news agency that caught my attention. It was the methodology used in the rankings, or more specifically, a change in the methodology from 2014 to 2015 that intrigued me.  

In past years, BloombergBusiness has said the purpose of their rankings research is to determine which business schools offer the strongest education and best prepare MBAs for their careers. Prior to 2015, the rankings were based on three weighted measures: a survey of student satisfaction (45% of the rankings), a survey of employers who hire those graduates (45%), and the expertise of each school's faculty, measured by faculty research in esteemed journals (10%).  

This year (2015) BloombergBusiness says that their annual ranking for full-time MBA programs now focuses on "what most people hope to get after business school: a satisfying well-paying job".

With this change in focus, the methodology was changed to expand the base of survey subjects (alumni are now included in addition to employers and students).  Additional measures were also added: job placement rates are now 10% of the ranking, and starting salaries are another 10%.  Notably and unceremoniously, the measure of faculty research was dropped from the methodology:  

"Older elements of our ranking, including a tally of faculty research, have been scrapped because they don't get at our fundamental question: how well does this business school channel its graduates into good jobs?"

Methodologically speaking, based upon the research question being posed, this could be a justifiable decision.  But I wonder if the new methodology produced rankings that are any more meaningful than those produced by the previous methodology.  And who is being served (or dis-served) by this change?  For instance, in 2014 UC Berkeley (Haas) and NYU (Stern) were ranked #19 and #22 respectively.  In 2015 Haas is #9 (up ten spots) and Stern is #24 (down two spots).  Really?  What is this saying? (other than that rankings are bullshit publicity tools). What is a prospective MBA student to surmise from this?

But mostly, I can't help but think about the unintended consequences of dropping faculty research from the rankings. This affects both schools of business and the students who attend them. I'll be elaborating on what I think are potential consequences in future blog entries.  But, what do you think?  Was this a good decision?  Is faculty research no longer important to an MBA degree?  Do you think dropping faculty research from the methodology of MBA program rankings impacts the debate surrounding the relevance of research in business schools?  

If you want to read more, here is a link to the article about the 2015 research. http://www.bloomberg.com/features/2015-best-business-schools/

Monday, October 26, 2015

Coming At It from Different Angles

Triangulation

The idea of triangulation comes from navigational and land surveying techniques where a single point, either on land, water or in space, is calculated using multiple angles originating from multiple locations.  In everyday parlance, the term is often used when someone is referring to a considered approach that reviews a situation from multiple perspectives. 

We have three texts and other assigned readings for the Research Methods class and each textbook and chapter is looking at research methodologies through a different lens.  One presents research methodologies as they are "classically" understood within the framework of scientific inquiry.  Another presents research methodologies as they are applied in the setting of organizations.  Yet another presents research methodologies through a framework of usefulness and relevance.  

Because of these different lenses, I've been experiencing a push and pull effect. I find myself becoming engaged with the theoretical concept of a certain methodology and then rejecting it once I understand the more practical challenges of application within organizations and relevance.  

Now, I'm starting to think that it is more helpful to think in terms of a triangulated approach. Instead of thinking of the textbooks as looking at methodologies through totally separate lenses, I'm beginning to think about what impact different settings and different research designs have on the different methodologies.  Ultimately this becomes a series of trade-offs based upon what it is you want to accomplish.

Once again we are back to the importance of the research question. There is so much to consider.  For instance, do you want the research to produce findings that are specific to one setting, or do you want to be able to 'generalize' the findings to a larger group?  Is there a theory being tested, or is the research designed to generate a theory?  Triangulation is also useful as an approach that uses both quantitative and qualitative data to look at a research question.  

I've found triangulation to be very useful for everyday decision making.  I have a better understanding of why I make the decisions I do, and ultimately it leads to higher-quality decisions and greater confidence in them.


Monday, October 19, 2015

What Happens to Research Failures?

The Positive Results Bias

Last week I learned about a thing called the Positive Results Bias in research.  

Basically this describes a situation where research that produces a positive result or "significant findings of interest" is considered more worthy of attention in the research marketplace than research producing what is called a negative result. Negative results aren't an indication of finding something bad. The term "negative" is used when the research findings failed to support a new hypothesis, a new theory "doesn't work" or a commonly held belief could not be disproved.   

By characterizing results as 'negative' they become immediately less appealing to research journals and can even reflect negatively on the perceived quality of the research.  This contributes to what is sometimes referred to as the "file drawer effect."  When early research findings show that a hypothesis may not be supported, or the results are negative or inconclusive, the findings may be 'filed away' and never shared with the broader research community.  

This bias is of particular concern in biomedical research where, for instance, the failure to publicize negative results in clinical drug trials could have life and death consequences.  But for any researcher concerned with a balance of rigor and relevance, as I am in the social sciences and research work regarding organizations, this is very concerning.

When research journals and the popular media focus their attention on studies with positive results, they do it at the expense of understanding all the knowledge being created through research in any given field.  From my perspective - a scholar-practitioner who uses research as a tool to help organizations be more effective - this is keeping valuable knowledge from the marketplace of ideas.

Change in organizations is constant and influenced by many different factors. A finding of something NOT producing an expected result or findings that support already existing theories and practices can be just as relevant and valuable as, for instance, findings that isolate a statistically significant change in the contribution of one individual factor to a multi-factored result.  

One of the best ways I can help organizations is to identify barriers to progress and to help eliminate things that DON'T work.  This can often be found in what would be considered negative research results.  When researchers limit the sharing of their work to that which only produced positive results, it means that other researchers will continue to invest in needless repeated trips down the same rabbit hole. 

Researchers and research journals should consider a research project to be a success if its methodology, design, and execution are rigorous and of high quality and produce reliable results, regardless of whether those findings are positive or negative.  

Sunday, October 4, 2015

Diving In...

Whenever I take on a new project, I tend to immerse myself in it.  I call it being at least three questions deep - being able to answer three consecutive questions about the how, why or what of something.  

Right now, I'm immersed in the Research Methods Lab and trying to understand research methodologies that are used in the field of Organization Development.  It's beginning to look like three questions deep doesn't even scratch the surface.

Something I'm finding interesting is how, in research, things can go wrong before you even start.  A common mistake is to go out and ask a lot of research questions before the problem is fully understood.  The process should start by identifying the problem.

A research problem is found where there is a void of knowledge (something we don't know) and from there, a research question is formulated (what we want to know).  How this research question is formulated (how we will fill the knowledge void) begins to determine the methodology used to answer the question. This can be a circular, iterative process.  The research question is important in determining the methodology and concurrently, the methodology for how the question will be answered helps shape the question.   All of this happens before we can start the research!

With that in mind, here are my top five goals for the Research Method Labs course.

1. Stay Patient.   - Methodology comes from the Greek methodikos, meaning methodical or systematic.  Oh, this will be hard for me.  

2. Keep the jargon in check. - Hermeneutics and epistemology are important concepts for a researcher to learn and understand but probably off-putting as small talk at the dinner table.

3. Keep the "practical" in practice - both words coming from the Greek praktikos meaning "fit for action, fit for business".  As a scholar-practitioner, I want to understand how to formulate research questions that produce relevant "practical" research results.

4. Keep the "search" in research - I can see how it would be possible to become so focused on methodological rigor that we lose sight of the goal of answering questions. A good researcher should stay open and curious, regardless of the methodology being used.

5.  Find my "Methodology Match"  - This involves learning as much about myself as about methodologies.  Research is not a one-size-fits-all proposition.  Everyone has natural inclinations and preferences, with different methodologies resonating differently based upon a person's interests and experiences.   It's important for me to "try on" different methodologies to see how they fit so I can eventually gravitate toward those methodologies that suit me best and for which I am best suited.