REvaluation Conference 2027

25–26 Feb 2027 | Vienna, Austria


Scope

The REvaluation Conference 2027 invites scholars, policymakers, practitioners, and evaluators to contribute to an international dialogue on the future of science, technology, and innovation (STI) policy evaluation. Under the overarching theme “Scouting paths on shifting maps”, the conference explores how evaluation can remain meaningful, responsible, and constructive in times of accelerating change and growing pressure on science and innovation systems.

We welcome submissions of extended abstracts for academic contributions, case studies and lessons learned, policy analyses, tool and prototype demonstrations, and practical insights from a broad range of topics related to the following themes:

Finding Ways

1. Nurturing evaluation cultures – Communities of Practice
2. New methods and approaches in evaluation – Methodological track
3. Data, tools and technology in evaluation – Technical track

Bridging Gaps

4. An “industrial turn” in STI policy
5. Evidence for policy: Navigating across increasingly diverse communities
6. The big picture: System-level evaluations

Coping with Uncertainty

7. Shifting science systems: Changing conditions of research production
8. Evaluators under pressure: Changing demands and roles
9. Open strand: Your ideas – our uncertainty

Finding Ways

Building on the conversations around Communities of Practice initiated at the REvaluation Conference 2024, this strand focuses on three aspects of how evaluators explore, adapt, and co-create novel approaches to make sense of complexity in evaluation. We invite contributors to historicize, document, and reflect on 1) current practices nurturing an evaluation culture among all actors in a field, 2) emerging methods and approaches in evaluation practice, and 3) technological advances and their implications for evaluation. How do tools, approaches, and collaborations influence evaluative reasoning, ethical considerations, and the boundaries of professional practice? And what does it mean to “find a way” when both our methods and our maps are being redrawn?

1. Nurturing evaluation cultures – Communities of Practice

Evaluation does not develop in isolation but is shaped through sustained interaction among diverse actors across institutions, disciplines, and national contexts. This subtheme focuses on Communities of Practice as collective arrangements that help align expectations, norms, and practices in the field of STI policy evaluation. It invites contributions that explore how evaluation cultures are actively nurtured: how communities of practice emerge, how they are supported over time, and what enables them to thrive. Particular attention is given to mechanisms that foster shared standards, mutual learning, and professional commitment, as well as to the role of platforms and networks in coordinating stakeholders and strengthening evaluative practice. We explicitly welcome experiences from other fields and networks, reflecting on how they cultivate good evaluation and reflection practices, sustain engagement, and balance openness with coherence in evolving evaluation ecosystems.

2. New methods and approaches in evaluation – Methodological track

Amid shifting societal expectations and policy challenges, evaluators are expanding their methodological repertoires to capture dynamic, interconnected processes. This subtheme invites contributors to critically engage with methodological pluralism – ranging from developmental and participatory evaluation to systems approaches, foresight methods, and real-time learning designs. It also encourages reflection on the tensions between innovation and reliability: How innovative can evaluation methods be without losing credibility or comparability? How do clients’ expectations and institutional frameworks shape what counts as “acceptable” methodological novelty? Contributions can discuss how evaluators can navigate the risks and opportunities of methodological experimentation – balancing the need for robustness with the urge to explore untested, yet potentially transformative, approaches. We also welcome presentations of new developments, prototypes, and testing architectures for new evaluation methods and approaches.

3. Data, tools and technology in evaluation – Technical track

Digital and computational technologies increasingly structure how evaluation data are collected, processed, linked, and analysed. This technical track focuses on the use of data- and software-based tools in policy evaluation, including AI-enabled approaches, and addresses their role within contemporary evaluation infrastructures. It invites contributions that engage with technical architectures, platforms, and analytical systems supporting evaluation practice, such as monitoring environments, evaluation dashboards, data pipelines, interoperable data spaces, and governance models for evaluation data. Submissions may present or examine software tools, analytical workflows, visual analytics, decision-support systems, and machine-learning or large-language-model-based applications used in evaluation contexts, as well as approaches to standardisation, interoperability, and reproducibility across administrative, statistical, and STI systems. Beyond individual tools, the track also welcomes analyses of how technical systems interact with organisational settings, professional roles, and accountability arrangements in evaluation, particularly in contexts where evaluation processes are subject to increasing demands, constraints, or scrutiny.

Bridging Gaps

Evaluation plays a crucial role in connecting different worlds – science and policy, evidence and action, industry and societal needs. This strand focuses on how evaluators, scientists and policymakers can bridge such gaps to foster more coherent, informed, and adaptive systems. Contributions may address the role of evaluation in industrial and innovation policy, the translation of evidence into decision-making, or the creation of shared spaces between diverse actors. How can evaluation act as both a mirror and a bridge across these divides?

4. An ‘industrial turn’ in science, technology and innovation (STI) policy

Analogous to the ‘innovation turn’, which describes the political dominance of innovation in STI funding, we introduce at the REvaluation Conference 2027 the concept of the ‘industrial turn’ as a diagnostic shorthand for the observation that STI policy is increasingly functioning as an instrument of industrial, location, and security policy. Under growing pressure to strengthen economic performance, STI funding is shifting back toward a focus on industrial competitiveness and near-market applications. This has two consequences. First, it raises fundamental questions about the long-term consequences for the value of basic research, scientific autonomy, and the overall balance within national innovation systems. This subtheme invites contributions that analyse how this reorientation affects STI policy agendas, institutional behaviour, collaboration patterns, and the conditions for breakthrough innovation. We are particularly interested in how evaluators and policymakers can derive meaningful conclusions when economic, scientific, political, and broader societal objectives intersect.

Second, STI policy evaluation is challenged not only to support STI policy but is increasingly called upon to support industrial policy. Both STI and industrial policy are undergoing profound transformation, shaped by the green and digital transitions, new mission-oriented approaches, increased international competition, and the quest for technology sovereignty. STI policy evaluation therefore faces a dual challenge: it needs to produce robust evidence on a broader range of dimensions than ever before, and, to be meaningful for policymaking, it needs to go beyond accountability and effectiveness – offering learning, foresight, and reflexivity for policy adaptation in the face of competing demands on STI policy. This subtheme invites contributions that explore how evaluation can bridge the worlds of policy design, implementation, and industrial practice. How can evaluators support mutual learning between policymakers, businesses, and intermediaries, and what new forms of collaboration and evidence generation are emerging at these interfaces?

5. Evidence for policy: Navigating across increasingly diverse communities

As policy challenges become more complex and the authority of expertise more contested, expectations around the role of evidence in policymaking have shifted. Rather than focusing solely on evidence-based decision-making, attention increasingly turns to how different forms of evidence are generated, combined, and mobilised within policy processes. This subtheme examines the contribution of evaluation as one important source of policy-relevant evidence, while also engaging with other modes of evidence production, such as monitoring systems, administrative data, modelling, foresight, expert advice, and stakeholder input. Contributions are invited that explore how these different evidence streams interact, complement, or at times interfere with one another as they move into decision-making arenas. We welcome analyses of the institutional, organisational, and political conditions that shape the use of evaluative and non-evaluative evidence, as well as reflections on how evaluators position their work within broader science-policy interfaces. In dialogue with debates on knowledge brokerage and policy advice, the subtheme encourages reflection on how evaluative knowledge circulates across epistemic and organisational boundaries, and how tensions between rigour, relevance, and timeliness are managed in practice.

6. The big picture: System-level evaluations

System-level evaluations aim to capture the dynamics, performance, and coherence of STI ecosystems as a whole. While more important than ever, comprehensive system evaluations remain rare. They are resource-intensive, methodologically demanding, and often produce multifaceted findings that struggle to find traction in policymaking. Yet when they succeed, they provide unique opportunities for strategic learning, cross-sector alignment, and long-term orientation. This subtheme invites contributions that critically reflect on real-world experiences with (partial) system evaluations. Contributions might ask, for example: What are state-of-the-art approaches to capturing the role of STI policy in system developments? Where and how have system evaluations meaningfully influenced policy or organisational practice? What design choices, governance arrangements, communication strategies, or contextual factors helped ensure that complex findings translated into action? And what have been the major obstacles – methodological, political, or institutional – that limited their uptake? By learning from concrete cases, we aim to better understand how system evaluations can bridge the gap between structural insight and practical relevance.

Coping with Uncertainty

Science, policy, and evaluation systems worldwide face growing uncertainty – from geopolitical instability to technological disruption and shifting public trust. This strand invites critical reflections on how evaluation can remain relevant and resilient under pressure. What does it mean to evaluate in times of crisis or systemic change? By comparing experiences from different regions and contexts, contributors are encouraged to explore how evaluators navigate complexity and uncertainty, and what new forms of practice emerge along the way.

7. Shifting science systems: Changing conditions of research production

Innovation systems are increasingly exposed to multiple and overlapping pressures – economic constraints, political polarisation, digital transformation, and demands for societal relevance. This subtheme explores how these developments reshape both the objects and practices of evaluation. How do evaluators respond when science systems themselves become subjects of contestation or reform? We welcome comparative and conceptual contributions that examine how evaluation can illuminate, accompany, or even mediate change in science systems across Europe and beyond. Particular attention may be given to the interplay between global trends and local contexts, and to how evaluation practices adapt to shifting institutional logics, values, and expectations – and can thus make a meaningful contribution to the resilience of the science system.

8. Evaluators under pressure: Changing demands and roles

In a rapidly changing world, evaluators are required to continuously adapt how they work, what tools they use, and how they position their expertise. In times of fiscal tightening, evaluation is increasingly drawn into politically sensitive contexts, particularly when budget reductions, programme terminations, or organisational restructuring are at stake. At the same time, the evaluation field itself is subject to growing competitive pressure, with new technologies – including AI-based tools and automated analytical systems – reshaping expectations around speed, scope, and cost of evaluative work. This subtheme invites reflection on what it means to practise evaluation under these combined pressures. How do evaluators navigate the introduction of new technologies while maintaining professional judgement, transparency, and accountability? What new role tensions, ethical dilemmas, and vulnerabilities arise when evidence is produced under time, budgetary, and technological constraints? We welcome contributions that examine strategies for sustaining professional integrity, methodological soundness, and independence in contexts where both the use of evaluation and the conditions under which it is conducted are increasingly shaped by external pressures.

9. Open strand: Your ideas – our uncertainty

We invite evaluators, scholars, policymakers, and all others concerned with evaluation to submit their ideas on what has been missing from our call. We invite you to follow the prompt: “My contribution does not fit the above, but is still important for the REvaluation Conference 2027, because…”