Running an experiment in a large organisation
At Springer Nature, we push for continuous improvement to our systems, developing the ideas and innovations that add the most value for our users. In our programme of work, we've found that running an experiment, or trialling a new tool or piece of technology before incorporating it into our work, is an invaluable part of our process.
Our digital department is made up of 150 people in a 13,000-person global organisation, so we work closely with other departments to set up these experiments and test our ideas and assumptions. However, as in any large company, experiments can also run without us, and it is useful to share our ways of working to help empower others to make informed decisions about their ideas.
Hence, we've created a simple checklist for anyone who wants to run an experiment themselves.
We’ve found the most effective approach is to pose ourselves a series of questions:
What problems are we solving?
We always ask what problems we think we're solving for our users. This means thinking about how the pilot came about and what its primary purpose is. What are the short-, mid- and long-term aims for the pilot? Understanding the full value proposition allows us to see how it fits with both the vision for our users' workflow and our programme goals within the organisation's strategic framework.
How does this apply to the company/programme strategy?
With multiple experiments going on across the business units, tying an experiment back to the company and/or programme goals helps us to see how it fits within the wider vision. When we're looking at experiments, we also consider how they will work with the current workflow we're building for. We aim to create user-centric, data-informed services and products that are seamless for the user.
Who benefits?
We think carefully about who the experiment benefits. Is it for our researchers, our editorial colleagues or our funders? Is it going to help with the researcher experience or provide internal efficiencies? This helps us to identify priorities and understand the strategic fit of the pilot within our programme of work.
How will it interact with our current technology landscape?
This means thinking about the technology architecture underneath the programme's products and services, and trying to understand how integration might work. For example, if it's a third-party tool, what is their business model? How does their technology work and how is it built? What dependencies does it have that we need to keep in mind? Given our extensive portfolio of journals and our large staff, will it work at scale? We try to make sure all these questions are answered so we understand how we would integrate and maintain it going forward.
How will we measure success and failure?
This is about knowing whether the experiment has helped us to validate our hypotheses. For example, did the test prove or disprove the idea? We establish a baseline measure (what is the current state of play?) and work out how we will measure against it. Are we looking for a percentage uplift in activity, an improvement in user satisfaction, or something else? We like to make sure everything has been fairly evaluated. Understanding the value through these measures helps inform which experiments to prioritise, which need to evolve for further testing and which to abandon.
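To make the idea of measuring against a baseline concrete, here is a minimal, hypothetical sketch in Python of how a baseline-versus-pilot comparison might look for a simple activity metric. The metric, the figures and the 10% success threshold are illustrative assumptions for this post, not real data or a tool we use.

```python
# Hypothetical example: comparing a pilot metric against its baseline.
# The numbers and the 10% success threshold are illustrative only.

def percentage_uplift(baseline: float, pilot: float) -> float:
    """Relative change of the pilot measure over the baseline, as a percentage."""
    if baseline == 0:
        raise ValueError("Baseline must be non-zero to compute uplift")
    return (pilot - baseline) / baseline * 100


baseline_submissions_per_week = 120   # measured before the pilot started
pilot_submissions_per_week = 138      # measured during the pilot
success_threshold = 10.0              # agreed up front: a 10% uplift validates the hypothesis

uplift = percentage_uplift(baseline_submissions_per_week, pilot_submissions_per_week)
print(f"Uplift: {uplift:.1f}%")
print("Hypothesis validated" if uplift >= success_threshold else "Hypothesis not validated")
```

The important part is not the code but the discipline it represents: the baseline, the metric and the threshold for success are all agreed before the experiment starts, so the evaluation afterwards is fair.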
Each of these questions helps us prioritise our workload and better understand what value we are bringing to our end users and the business. It's important to us that we are constantly learning, and we've had lots of opportunities to refine our approach to running experiments with our colleagues. If you have other questions or ideas to add, please feel free to comment below.