Class is in Session! Interpreting Survey Data 101.
The first day of fall and the start of the school year have reminded me that there is a great deal of survey fact and lore that doesn’t appear in textbooks or get taught in class. Knowledge of these facts, however, can boost the quality of survey data interpretation and the insight pulled from the results. So grab a desk, open your notebooks and sharpen your #2 pencil!
- Engagement peaks in the first year of service and it’s all downhill from there. OK, that’s a bit of a generalization, but one of the most resilient patterns in survey data is the pattern by years of service. In the vast majority of surveys, scores peak in the first year, take a steep drop soon after, and then bottom out at “mid-career”. The definition of mid-career varies by company: some companies encourage long tenures, while others have few employees with more than 10 years of service. What happens after the bottom varies as well. I have seen scores improve as service grows (though they almost never reach early levels), and I have seen scores remain stable at their low point. The shape you are likely to see is a fishhook (improvement after the bottom) or a hockey stick (stability at the low point). So that “generations” pattern you have discovered in your survey results? Sorry to disappoint you. It’s been around for generations.
- Scores get better as you move up the food chain. Senior executives provide more favorable ratings than directors. Directors are more favorable than managers. Managers are more favorable than the rank and file. There are occasional exceptions to this finding, but in general you should see higher levels of favorability as you move up the hierarchy. If you don’t, there is a serious problem in the organization. If senior executives don’t feel positive about the organization, that negativity infects everyone below them, and addressing leader issues becomes the top priority for action. First-line managers with scores on par with their direct reports can be interpreted as having “gone native”, meaning they are no longer acting as agents of the organization but have thrown in with employees instead. That needs to be addressed as well.
- Don’t EVER compare results from Japan to those from India. We all know that life in Japan is very different from life in the US or Germany or India. The cultures are vastly different, so why would we assume that survey scores in one culture should be on par with results in another? They aren’t, even though employees in those different countries work for the same company. National cultures have a significant influence on survey response patterns, and it’s a rookie mistake to conclude that Japan has serious issues to address because scores there are far less positive than those in India. There has been a great deal written on cultural differences, so I won’t repeat it all here. The bottom line is that interpretation of country data should be based on a like-to-like comparison: compare results in Japan to a Japanese external norm, India to an Indian norm, and so on. While absolute scores for countries can be very different, the pattern of results will often be consistent across countries in the same company. For instance, there may be similar declines or improvements in scores for all or many countries within a company, or a particular topic may score very well or poorly across countries. Interpretation of country results can be quite nuanced, so don’t jump to conclusions.
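The like-to-like idea above can be sketched in a few lines. This is a minimal illustration with invented numbers, not real survey data or norms; the point is that each country is compared only against its own external norm:

```python
# Illustrative only: all scores and norms below are made-up numbers.
country_scores = {"Japan": 48, "India": 78, "Germany": 62}  # % favorable
country_norms = {"Japan": 50, "India": 80, "Germany": 60}   # external norms

def delta_vs_norm(scores, norms):
    """Return each country's gap versus its own external norm."""
    return {country: scores[country] - norms[country] for country in scores}

gaps = delta_vs_norm(country_scores, country_norms)
# Japan at 48% looks far worse than India at 78% in absolute terms,
# but both sit roughly 2 points below their respective local norms.
print(gaps)  # {'Japan': -2, 'India': -2, 'Germany': 2}
```

In this made-up example, the absolute gap between Japan and India is 30 points, yet relative to local norms the two countries tell the same story.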
- Neutral and unfavorable scores have feelings too. And they don’t like to be ignored. Many companies emphasize percent favorable scores in survey reporting and analysis, and that makes sense. However, the interpretation of a 60% favorable score can be vastly different when there is a corresponding 10% unfavorable score versus a 35% unfavorable score. Pay attention to all response choices, and when you have historical data, examine how each changes from the previous year. A favorable score that didn’t change year-over-year means something different if a significant number of respondents moved from neutral to unfavorable.
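To make the point concrete, here is a small sketch with invented figures: two years with an identical 60% favorable score, where the full breakdown reveals a large shift from neutral to unfavorable that a favorable-only view would hide.

```python
# Illustrative only: all percentages below are invented for demonstration.
year_1 = {"favorable": 60, "neutral": 30, "unfavorable": 10}
year_2 = {"favorable": 60, "neutral": 5, "unfavorable": 35}

def year_over_year_shift(previous, current):
    """Return the point change for each response choice."""
    return {choice: current[choice] - previous[choice] for choice in previous}

shift = year_over_year_shift(year_1, year_2)
# Favorable is flat, but 25 points of respondents moved from
# neutral to unfavorable -- a meaningful deterioration.
print(shift)  # {'favorable': 0, 'neutral': -25, 'unfavorable': 25}
```

The headline favorable number is unchanged, yet the underlying distribution has clearly worsened, which is exactly why all three response buckets deserve attention.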
That’s a smattering of common patterns in survey data. Which ones have you seen? What questions do you have for the class?
Having successfully completed this class, you are ready for Advanced Survey Data Interpretation 201. Stay tuned.
Is that the bell? Class dismissed.