There is a fundamental flaw in business and management research. While most studies excel at explaining what is happening in their data, they pay surprisingly little attention to how well their findings would predict outcomes in new data.
In other words, they are good at telling us what has happened in a specific context, but not very good at telling us whether that also applies elsewhere.
In a world where we are increasingly seeking solutions to complex challenges, research should primarily aim to offer practical implications, helping practitioners and students understand the world as it was and as it will be.
The model is supposed to work like this: research is conducted, business schools and universities teach it, consultants build frameworks around it and policymakers shape policies with it. Yet a crucial question typically goes unasked: when applied to new situations, does the research-based explanation hold? Management scholars care more about explaining than about predicting.
In a study recently published in the Journal of the Operational Research Society (JORS), the flagship journal of the Operational Research Society, we examined more than 6,500 management studies. We found that virtually all claimed to make predictions, yet few tested whether their models actually worked on new data, even data from just a year or two later in the same context.
There is a balance to strike between explanation and prediction. If researchers use all available data to explain, they end up with a highly specific explanation, one tailored to that particular dataset. The problem is that such explanations rarely transfer to other settings, including the future: they predict poorly. In statistical terms, this is overfitting: the model captures the noise of one sample along with the signal.
Management research, then, needs to strike that balance between providing a good explanation and making accurate predictions. For example, the circumstances shaping 2015-20 might not shape 2021-23 in the same way, even if the underlying relationships seem stable.
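To see why, consider a toy sketch with made-up numbers (illustrative Python, not data or code from our study): a flexible model that tracks every wiggle in past figures will usually "explain" them better than a simple trend line, yet extrapolate worse into the years that follow.

```python
# Toy illustration with invented numbers: a flexible model explains the past
# better but typically predicts the near future worse.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(2009, 2023)                                   # 2009-2022
pay_gap = 0.20 - 0.005 * (years - 2009) + rng.normal(0, 0.01, len(years))  # invented trend + noise

train_y, test_y = years[:10], years[10:]                        # "explain" 2009-18, predict 2019-22
train_g, test_g = pay_gap[:10], pay_gap[10:]

for degree in (1, 7):                                           # simple trend line vs flexible polynomial
    coefs = np.polyfit(train_y - 2009, train_g, degree)
    fit_err = np.mean(np.abs(np.polyval(coefs, train_y - 2009) - train_g))
    pred_err = np.mean(np.abs(np.polyval(coefs, test_y - 2009) - test_g))
    print(f"degree {degree}: in-sample error {fit_err:.3f}, out-of-sample error {pred_err:.3f}")
```

On a series like this, the flexible fit typically shows the smaller in-sample error and the much larger out-of-sample error: a better explanation, a worse prediction.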
When good explanations make bad predictions
The consequences play out across real business decisions. To illustrate what happens when explanation is pursued without prediction, we tested different analytical approaches on UK data, examining the gender pay gap for employees and for entrepreneurs separately.
The gender pay gap is a significant management and policy topic, with research yielding diverse and inconsistent results across studies. When testing previously used models, we found that complex models, which best explained historical results, were sometimes poor at predicting future outcomes. Simpler models, in some instances, provided better predictions, despite appearing to explain less.
For employees, one sophisticated model suggested the gender gap had nearly disappeared. However, when tested against future earnings data – that is, new data – a different approach proved more accurate, revealing that the gap persisted, particularly for mothers.
For entrepreneurs, the pattern reversed entirely. What initially appeared to be a robust explanation crumbled when used to predict the future, even in the absence of dramatic policy changes.
This isn’t about major disruptions. Even in relatively stable periods, models that best explained the past often failed to predict the near future. Some changes that seemed important turned out not to matter moving forward. Other shifts that looked minor became more significant in subsequent years.
Different analytical approaches didn’t just produce slightly different estimates; they told fundamentally different stories about whether problems were resolving or persisting. For policymakers weighing equality legislation, that distinction is critical. For companies deciding on diversity initiatives, it shapes resource allocation. For individuals planning their careers, it affects strategies and expectations.
Without testing predictions against future outcomes, there's no way to know which story is accurate. You're left with multiple plausible explanations of history, all claiming to inform the future, none of which has been shown to do so.
What’s the answer?
The solution doesn’t require abandoning existing methods. Instead, we suggest adding one crucial step: after building a model using historical data, test it on data it has not yet seen. Develop the model to explain a specific period and use it to predict outcomes for other years. Then, verify whether the model holds.
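One way to picture this step is the minimal sketch below (illustrative Python with hypothetical column names such as year and log_wage; it is not the code from our study): fit a model on one period, then score that same model on the years that follow.

```python
# Minimal sketch of the added step: build on historical data, test on unseen years.
# Assumes a pandas DataFrame `df` with a 'year' column, an outcome 'log_wage' and
# predictors already encoded as numbers -- all hypothetical names for illustration.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

X_COLS = ["female", "experience", "sector_code"]   # illustrative predictor columns

def temporal_holdout(df: pd.DataFrame, train_until: int):
    """Fit on years up to `train_until`; check the same model on the years after it."""
    train = df[df["year"] <= train_until]
    test = df[df["year"] > train_until]

    model = LinearRegression().fit(train[X_COLS], train["log_wage"])

    explain_error = mean_absolute_error(train["log_wage"], model.predict(train[X_COLS]))
    predict_error = mean_absolute_error(test["log_wage"], model.predict(test[X_COLS]))
    return explain_error, predict_error

# A large gap between the two errors signals a model that explains but does not predict.
```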
This basic forecasting approach is prevalent in meteorology and operations research but has been notably absent from management research. More importantly, when forecasting methods have been used elsewhere, they often rely on randomly selecting periods or individuals to hold back for testing. The problem is that different random draws produce different results, making it impossible for researchers to know whether findings reflect genuine patterns or simply which data happened to be selected.
Our approach combines randomness with reproducibility. The method systematically tests multiple combinations of out-of-sample data, but in a structured way, producing identical results each time. The randomness eliminates selection bias, so there’s no cherry-picking favourable test periods, while the systematic structure ensures replicability.
The method is straightforward. For example, if you have data from 2009 to 2022, systematically hold back different combinations of years and samples, build models on each subset and test predictions on the out-of-sample portions.
Equally important, hold back different groups (specific companies, industries or individuals) to test whether patterns found in some units predict outcomes for others. Models capturing meaningful patterns perform reasonably well across multiple tests. Those merely fitting historical quirks fall apart. Because the process follows a defined structure, other researchers can verify findings exactly.
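The sketch below illustrates that structure (again with hypothetical column and variable names, and a simple linear model standing in for whatever model is being tested; our published tools differ in detail): a fixed seed decides which year-and-group combinations are held out, so the selection is random but every re-run produces exactly the same splits.

```python
# Sketch of structured, reproducible out-of-sample testing: a seeded sample of
# year/group holdout combinations, evaluated in a fixed order.
from itertools import combinations
from random import Random

import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

X_COLS = ["female", "experience", "sector_code"]   # hypothetical predictor columns

def structured_out_of_sample(df, years_out=2, groups_out=1,
                             group_col="industry", n_splits=50, seed=42):
    """Score out-of-sample error over a seeded, reproducible set of holdouts."""
    year_combos = list(combinations(sorted(df["year"].unique()), years_out))
    group_combos = list(combinations(sorted(df[group_col].unique()), groups_out))
    all_splits = [(y, g) for y in year_combos for g in group_combos]

    # Random which combinations are tested, but reproducible: same seed, same splits.
    splits = Random(seed).sample(all_splits, min(n_splits, len(all_splits)))

    results = []
    for held_years, held_groups in splits:
        test_mask = df["year"].isin(held_years) | df[group_col].isin(held_groups)
        train, test = df[~test_mask], df[test_mask]
        model = LinearRegression().fit(train[X_COLS], train["log_wage"])
        results.append({
            "held_years": held_years,
            "held_groups": held_groups,
            "oos_error": mean_absolute_error(test["log_wage"],
                                             model.predict(test[X_COLS])),
        })
    return pd.DataFrame(results)
```

The names here (industry, log_wage, the predictors) are placeholders; what matters is the structure: an explicit, seeded list of splits is what lets another researcher reproduce the test exactly.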
We have made this accessible by providing tools that any academic can use, without needing to learn entirely new statistical methods. It works alongside traditional approaches rather than replacing them, with a crucial advantage: avoiding both the selection bias of hand-picked test periods and the irreproducibility of purely random approaches.
Why this matters
The timing couldn’t be more critical. Organisations face unprecedented uncertainty: hybrid work is reshaping careers, GenAI is transforming jobs and demographic shifts are remaking workforces. Distinguishing between research that genuinely illuminates the future and research that merely explains the past becomes crucial.
When businesses invest in leadership programmes or organisational restructuring, they’re implicitly betting that patterns will continue or change predictably. If the underlying research hasn’t been tested on future data, everyone operates with less certainty than they realise.
A way forward
We are not calling for revolution but for evolution: keep explaining the past, while also testing predictions. Don't just show that models fit historical data well; demonstrate that they maintain reasonable accuracy when applied to unseen periods and contexts.
This will not completely eliminate uncertainty. Even well-tested models sometimes fail when genuinely new conditions emerge. However, it would separate models that capture enduring patterns from those that capture historical accidents, helping to identify which findings are likely to remain relevant and which might be artefacts of specific circumstances.
Management research needs both explanation and prediction. Understanding why something happened matters. But if that understanding can’t predict what happens next, even in relatively stable conditions, its practical value diminishes sharply.
In an era of disruption, yesterday’s patterns make uncertain guides to tomorrow. But some patterns are more reliable than others. It’s time for management and business research to develop better ways to distinguish between them.
Ahmed Maged Nofal and Frédéric Delmar are both professors of entrepreneurship at EMLyon Business School.