In a recent article in the journal PLOS ONE, we tested this possibility through a new theory called the semantic theory of survey responses, or STSR (Arnulf, Larsen, Martinsen, & Bong, 2014). We used text algorithms to assess the overlap in meaning among the items of commonly used leadership instruments: the Multifactor Leadership Questionnaire (MLQ), the Leader–Member Exchange (LMX) scale, and the Ohio State Leader Behavior Description Questionnaire (LBDQ). We also included a five-factor personality inventory, the NEO-FFI.
Related variables such as economic exchange, intrinsic motivation, organizational citizenship behavior (OCB), work effort, turnover intention, and job satisfaction were also included. The semantic values were compared to the observed statistics from human respondents in four large samples.
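To give a concrete sense of the semantic step, the sketch below computes pairwise semantic overlap between a few item wordings. The published study used dedicated semantic text algorithms; here, a simple TF-IDF cosine similarity in Python stands in as a rough illustration, and the item texts are hypothetical rather than the actual survey items.

```python
# Illustrative only: TF-IDF cosine similarity as a stand-in for the semantic
# algorithms used in the study; the item texts below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

items = [
    "My leader communicates a clear and positive vision of the future",
    "My leader encourages staff to see change as an opportunity",
    "I put a great deal of effort into my work",
]

tfidf = TfidfVectorizer().fit_transform(items)
semantic_sim = cosine_similarity(tfidf)  # symmetric matrix of pairwise semantic overlap
print(semantic_sim.round(2))
```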
Determined by semantics
The main finding was that the observed statistics were predictable a priori from the semantic properties of the items. By the most conservative estimate, semantics explained 79% of the variance in the inter-item correlation matrix of the MLQ; depending on assumptions, it could explain as much as 86%. Similarly, in a large sample surveyed on transformational leadership, LMX, the Ohio State two-factor model, and motivational variables, semantics explained between 47% and 87% of the variation.
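The "variance explained" figures refer to how well such semantic similarities predict the observed inter-item correlations. A minimal sketch of that comparison, using toy numbers in place of real survey data, might look like this:

```python
# Toy illustration: regress observed inter-item correlations on semantic
# similarities and report R^2. Both matrices are made-up placeholders.
import numpy as np

observed_corr = np.array([[1.00, 0.62, 0.35],
                          [0.62, 1.00, 0.41],
                          [0.35, 0.41, 1.00]])
semantic_sim = np.array([[1.00, 0.55, 0.20],
                         [0.55, 1.00, 0.30],
                         [0.20, 0.30, 1.00]])

iu = np.triu_indices_from(observed_corr, k=1)            # unique item pairs
r = np.corrcoef(semantic_sim[iu], observed_corr[iu])[0, 1]
print(f"Share of variance explained by semantics: R^2 = {r**2:.2f}")
```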
Moreover, mediational relationships also appear to be determined by semantics. In a commonly applied regression model, intrinsic motivation was found to mediate between transformational leadership and outcome variables such as work effort and OCB. These relationships, too, were explainable through semantics. Only in the case of the NEO-FFI personality inventory did semantic values fail to explain a significant part of the variation.
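For readers unfamiliar with the mediation setup, the following is a bare-bones sketch of the kind of regression model referred to here, run on simulated data; the variable names and effect sizes are assumptions for illustration, not the study's results.

```python
# Simulated mediation example (Baron & Kenny style): transformational
# leadership -> intrinsic motivation -> work effort. All data are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 500
leadership = rng.normal(size=n)
motivation = 0.6 * leadership + rng.normal(size=n)   # mediator
effort = 0.5 * motivation + rng.normal(size=n)       # outcome

total = sm.OLS(effort, sm.add_constant(leadership)).fit()
direct = sm.OLS(effort, sm.add_constant(
    np.column_stack([leadership, motivation]))).fit()

print("total effect of leadership:", round(total.params[1], 3))
print("direct effect controlling for motivation:", round(direct.params[1], 3))
```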
Our findings cast doubt on the prevalent use of surveys as research methods in this field, because semantics (similarity in item content) is only supposed to play a role in within-scale variation. Relationships between different variables should ideally be driven by patterns in the respondents' attitude strength. To ensure this, methodologists recommend various types of factor analysis to demonstrate that the scales are relatively independent and not inherently correlated (Nunnally & Bernstein, 2010).
As expected, we found that intra-scale relationships (measured by Cronbach’s alpha) were semantically determined. However, we could also show that the same semantic relationships pervaded all meaningful relationships among the leadership and organizational behavior variables. Confirmatory factor analysis could neither detect nor prevent this; in fact, better fit indices were associated with more obvious semantic determination of the data patterns.
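Cronbach's alpha, the intra-scale statistic mentioned above, can be computed directly from a respondents-by-items score matrix; a short Python version of the standard formula, applied to made-up scores, is shown below.

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of sum scores).
# The score matrix below is fabricated for illustration only.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Alpha for a respondents-by-items matrix of scale scores."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

scores = np.array([[4, 5, 4],
                   [2, 2, 3],
                   [5, 5, 5],
                   [3, 4, 3],
                   [1, 2, 2]])
print(f"alpha = {cronbach_alpha(scores):.2f}")
```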
Four consequences
These findings may have alarming consequences for prevalent methodologies in leadership research. First, the semantic patterns are properties of the items themselves, present a priori, before any responses are obtained. If the item correlation matrices are largely produced by the respondents' recognition of the item language, the emerging statistical patterns really do not tell us anything we did not know already (Smedslund, 1994).
Second, as has been argued theoretically (Van Knippenberg & Sitkin, 2013), many variables in leadership research appear to have tautological relationships. As we could show in all three samples, the values of the outcome variables were already given by their semantic relationships to the independent variables, suggesting theoretical confounding in the basic concepts used. As two of the authors show in another empirical study, language also links leadership to other phenomena, such as heroism, in ways that are difficult to disentangle through survey research (Arnulf & Larsen, 2015).
Third, the prevalent statistical modeling techniques used to demonstrate variable independence seem inadequate for this purpose, which may explain the frequently observed levels of common method variance in leadership research (Podsakoff, MacKenzie, & Podsakoff, 2012).
A fourth and illustrative consequence concerns the cross-cultural validity of survey data. The crucial point about semantic relations is that they reflect only what is proposed by the linguistic structure of the sentences. If similar survey correlation matrices are observed across languages, this merely means that the survey was correctly translated. In our case, the items were fed into the text-analytical algorithms in American English, whereas the respondents were all Norwegians responding to Norwegian versions of the surveys. This casts doubt on inferences from cross-cultural survey data, as such data merely show that the same statements are expressible across languages. They provide no information about whether actual behavioral interactions, such as leadership or employee behavior, are comparable across cultural divides. This possibility has also been proposed by psychological theories exploring the discrepancy between attitudes, language, thought, and action (Gollwitzer & Sheeran, 2006; Parks-Stamm, Oettingen, & Gollwitzer, 2010; Prinz, Aschersleben, & Koch, 2009).
A new approach to psychological measurements
Finally, we believe that our study offers a new approach to understanding why some psychological measurements carry more predictive value for future behaviors or effects than others. Mere compliance with the semantic structures in surveys could be a reason why some instruments show appropriate psychometric properties but little predictive validity (Bing, LeBreton, Davison, Migetz, & James, 2007). In our study, the five-factor model did not seem strongly influenced by semantics, yet it is known to predict a number of life outcomes ranging from health to career success (McCrae & Costa, 2004).
References
Arnulf, J. K., & Larsen, K. R. (2015). Overlapping semantics of leadership and heroism: Expectations of omnipotence, identification with ideal leaders and disappointment in real managers. Scandinavian Psychologist, 2, e3. doi: 10.15714/scandpsychol.2.e3.
Arnulf, J. K., Larsen, K. R., Martinsen, Ø. L., & Bong, C. H. (2014). Predicting survey responses: How and why semantics shape survey statistics in organizational behavior. PLOS ONE, 9(9), 1–13. doi: 10.1371/journal.pone.0106361.
Bing, M. N., LeBreton, J. M., Davison, H. K., Migetz, D. Z., & James, L. R. (2007). Integrating implicit and explicit social cognitions for enhanced personality assessment: A general framework for choosing measurement and statistical methods. Organizational Research Methods, 10(1), 136–179. doi: 10.1177/1094428107301148.
Gollwitzer, P. M., & Sheeran, P. (2006). Implementation intentions and goal achievement: A meta-analysis of effects and processes. Advances in Experimental Social Psychology, 38, 69–119. doi: 10.1016/S0065-2601(06)38002-1.
McCrae, R. R., & Costa, P. T. (2004). A contemplated revision of the NEO Five-Factor Inventory. Personality and Individual Differences, 36(3), 587–596. doi: 10.1016/S0191-8869(03)00118-1.
Nunnally, J. C., & Bernstein, I. H. (2010). Psychometric theory (3rd ed.). New York, NY: McGraw-Hill.
Parks-Stamm, E. J., Oettingen, G., & Gollwitzer, P. M. (2010). Making sense of one’s actions in an explanatory vacuum: The interpretation of nonconscious goal striving. Journal of Experimental Social Psychology, 46, 531–542. doi: 10.1016/j.jesp.2010.02.004.
Podsakoff, P. M., MacKenzie, S. B., & Podsakoff, N. P. (2012). Sources of method bias in social science research and recommendations on how to control it. In S. T. Fiske, D. L. Schacter & S. E. Taylor (Eds.), Annual Review of Psychology. Vol. 63 (pp. 539–569). Palo Alto: Annual Reviews.
Prinz, W., Aschersleben, G., & Koch, I. (2009). Cognition and action. In E. Morsella, J. A. Bargh & P. M. Gollwitzer (Eds.), Oxford Handbook of Human Action. Boston: Oxford University Press.
Smedslund, J. (1994). Nonempirical and empirical components in the hypotheses of 5 social-psychological experiments. Scandinavian Journal of Psychology, 35(1), 1–15. doi: 10.1111/j.1467-9450.1994.tb00928.x.
Van Knippenberg, D., & Sitkin, S. B. (2013). A critical assessment of charismatic-transformational leadership research: Back to the drawing board? The Academy of Management Annals, 7(1), 1–60. doi: 10.1080/19416520.2013.759433.