Project Governance – Creating a Value-Added Clinical Operations Metrics Program – Part 3

In Part 1, I confessed my love for metrics and talked about the need to understand the Critical Success Factors and Key Performance Questions of each of your stakeholder groups. In Part 2, I discussed some of the challenges of implementation. Here, I’ll talk about the need to understand metrics in context and to be practical: metrics are a tool, not an end in themselves.

Don’t fall in love with the numbers for their own sake. While front-line study managers are less prone to this danger, it’s important for managers and people with a portfolio view to keep it in mind. You need to complement quantitative measures with a qualitative assessment of what’s really going on. The numbers certainly tell a story, but they tell only part of it.

One way to address this is with systematic, brief project review meetings. These need not be lengthy or complex. A check-in once per quarter, or even less frequently, can be as simple as asking “what’s going well and what are your concerns?”

Because we’re all so busy, there’s an understandable tendency to assume no news is good news. Unfortunately, it’s not a good idea to rely on issue management alone, even if you have an issue escalation system you trust. A small or medium-sized problem that’s being experienced by multiple teams across your organization is no longer a small problem. Teams will often exert herculean effort to keep their projects on track – the project always wins – which means valuable resources are being spent overcoming those extra challenges. Proactive qualitative reviews will also surface “best practice” stories that may constitute good news you’d like to learn from and share.

The most valuable metrics programs, then, combine a focused set of quantitative measures with a project review mechanism that provides context and interpretation. Those quantitative metrics need to be revisited and refined periodically, particularly if they form the basis of an employee reward system. The qualitative information provides another checkpoint to make sure your view is fair.

I still love metrics, but I love them for what they allow us to do.

Project Governance – Creating a Value-Added Clinical Operations Metrics Program – Part 2

In Part 1, I discussed the need to begin by understanding Critical Success Factors and Key Performance Questions when you’re setting up a metrics program. In this article, I’ll talk about iterative approaches to implementation.

Once you have figured out what performance questions you’re trying to answer, you can determine which measures different stakeholders will use to answer those questions and how to present the numbers. Very quickly, you will also need to identify data sources that enable you to always present the best version of “the truth.” Automate wherever possible: avoid the need to compile or manipulate data before they can be used. Many of the data points you’ll need are already being generated or captured in systems such as your CTMS or IxRS. Wherever possible, use automated feeds (or exports) of data from those source systems to avoid duplicate work.
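To make that concrete, here’s a minimal sketch in Python of what an automated read-out could look like. It assumes a hypothetical CSV export from a CTMS; the file name, the column names and the 90% threshold are illustrative inventions, not any real system’s schema.

    import pandas as pd

    # Hypothetical CTMS export; the file name and columns are illustrative.
    # Assumed columns: study_id, site_id, planned_enrolled, actual_enrolled
    export = pd.read_csv("ctms_enrollment_export.csv")

    # Roll up to one row per study for a portfolio-level view.
    by_study = export.groupby("study_id")[["planned_enrolled", "actual_enrolled"]].sum()

    # Percent of enrollment plan achieved: a simple "speed" measure.
    by_study["pct_of_plan"] = (
        100 * by_study["actual_enrolled"] / by_study["planned_enrolled"]
    ).round(1)

    # Flag studies below an (arbitrary) 90% threshold for reviewer attention.
    by_study["behind_plan"] = by_study["pct_of_plan"] < 90

    print(by_study.sort_values("pct_of_plan"))

Because the numbers flow straight from the system export, there’s no manual compilation step and the read-out always reflects the source data.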

From the start, think of the development of your metrics dashboard or scorecard as an iterative process. You won’t get it right the first time, and you will need to be flexible. Work with your stakeholder groups to figure out whether their respective performance questions are being answered adequately and whether they’re getting the information they need to assess their Critical Success Factors.

Because of the need for flexibility, it’s imperative to choose the right tools to create your metrics dashboard or read-outs. Larger companies may leverage data management or analytics groups to do this. Ad hoc query tools (like Spotfire and others) are flexible and can be set up and modified quickly. That’s key. Smaller organizations may need to rely on Excel. That can be fine, but it’s important to get the measures right before investing in extensive programming. You know things will change, so plan for change from the start.

My final thought on implementation is not to be afraid to take things away. It sounds like I’m getting ahead of myself, I know. But quite often, a data or analytics group will happily produce a suite of standard reports to meet stakeholder needs, then add to those reports as time goes by. The problem is that new reports pile on top of older ones, and we end up with a long list of standard reports that intimidates users and scares them off.

Understand what’s being used and what’s not useful, and take away the less effective reports so users can find the best ones. Unfortunately, understanding what’s most useful is not as simple as monitoring traffic (as with so much in life, wisdom doesn’t necessarily live in the majority). You will also need to identify your lead users and understand what they use, how they use it and why, then use that information to educate other users.
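If your reporting platform can export access logs, a first pass at both tasks – spotting neglected reports and identifying candidate lead users – might look like the sketch below. The log file and its columns are assumptions for illustration, not any particular tool’s format.

    import pandas as pd

    # Hypothetical access log; columns assumed: user, report, timestamp.
    log = pd.read_csv("report_access_log.csv", parse_dates=["timestamp"])

    # Keep only the last 90 days of activity.
    cutoff = log["timestamp"].max() - pd.Timedelta(days=90)
    recent = log[log["timestamp"] > cutoff]

    # Reports opened fewer than five times are candidates for retirement.
    usage = recent["report"].value_counts()
    print("Rarely used reports:")
    print(usage[usage < 5])

    # Users who touch the widest range of reports are candidate lead users,
    # worth interviewing about what they use, how they use it and why.
    breadth = recent.groupby("user")["report"].nunique().sort_values(ascending=False)
    print("Candidate lead users:")
    print(breadth.head(10))

As noted above, traffic counts are only a starting point; the conversations with lead users are where the real insight comes from.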

Project Governance – Creating a Value-Added Clinical Operations Metrics Program – Part 1

I love metrics.  In particular, I love the idea of using metrics to monitor the progress and the “health” of a program, and even to proactively identify project issues before they become critical.  An ideal metrics program will also help identify where to look – which studies in a program and what aspects of those studies – to understand what may be going wrong.  Who wouldn’t love that idea?

The problem with many metrics programs is that they quickly become too complicated (let’s measure everything because, well, we can) or too burdensome (ensuring the source data are complete and available and analyses and presentations are done in near-real-time).  Metrics programs need to be selective in what they highlight to managers if they are to help us get our work done.

This short series of articles will provide guidance and explore some of the challenges we face in Clinical Operations as we endeavor to implement metrics to help manage a portfolio of clinical programs.

Part 1 – First, Know What’s Critical for Your Success

Thanks to great software and your local Microsoft Excel expert, there’s no shortage of measurements we can produce.  With all the things we can measure, we need to distinguish between what to monitor, what to manage and what to set as a goal or reward people on.  It’s really important to select and focus on the measurements that will help you do your work.

A modern automobile has a lot of computers and microprocessors on board.  But the dashboard still displays the speedometer and the fuel gauge most prominently.

The first thing you need to establish is what’s important to you.  The Metrics Champion Consortium (a collaboration of stakeholders from across the clinical trials industry) talks about identifying Critical Success Factors and Key Performance Questions.  Your Critical Success Factors are the things that really define success or failure.  For example, if your program needs to recruit its first patient to trigger a tranche of venture capital funding, or to complete recruitment to enable filing ahead of a competitor, then start-up and recruitment metrics need to be a focus.  If you’ve recently completed recruitment and are anticipating filings, your focus may need to shift to quality and compliance metrics in anticipation of upcoming inspections.  Either way, when you set up your metrics program you’ll want to include metrics for both speed and quality.

Key Performance Questions help you figure out what you want to know about each Critical Success Factor, and they will help you identify what you will actually measure and how it will be presented.  For example, if your CSF is about start-up and recruitment, the questions would probably be “Are we achieving study site set-up as quickly as planned?” and “Are patients being enrolled as quickly as planned?”
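To show how the pieces connect, here’s a small sketch in Python of how the mapping from Critical Success Factor to Key Performance Question to candidate measures might be captured before any dashboard work begins. Every CSF, question and measure below is a hypothetical example, not a recommended standard.

    # Illustrative only: hypothetical CSFs, questions and candidate measures.
    metrics_framework = {
        "CSF: rapid start-up and recruitment": {
            "Are we achieving study site set-up as quickly as planned?": [
                "median days from site selection to activation vs. plan",
                "% of sites activated by their planned date",
            ],
            "Are patients being enrolled as quickly as planned?": [
                "cumulative enrollment vs. the planned enrollment curve",
            ],
        },
        "CSF: inspection-ready quality and compliance": {
            "Are we resolving data queries promptly?": [
                "median days from query open to query close",
            ],
        },
    }

    # Print the framework as a simple outline for stakeholder review.
    for csf, questions in metrics_framework.items():
        print(csf)
        for question, measures in questions.items():
            print("  " + question)
            for measure in measures:
                print("    - " + measure)

Writing the chain down this explicitly makes it easy to review with each stakeholder group and to prune any measure that doesn’t answer someone’s question.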

The final framing question is to determine who the metrics are for.  The metrics used by study teams will be different from the ones needed by senior managers, because the Key Performance Questions each of those groups asks may be different.  Of course, the two sets should still be consistent and feed into one another.

In Part 2 of this series, I will discuss things you’ll need to keep in mind when implementing your metrics program. Then in Part 3, I’ll talk about ensuring you have other ways to understand the full context for the measures you’re seeing.

Project Governance – Where to Look for Root Causes

Scenario: Your pivotal clinical trial is far behind schedule.  Although CTA (Clinical Trial Application) approvals are beginning to come in, site activations are proving difficult to schedule, you’ve discovered that one of your key vendors will have trouble delivering devices to sites on time, and your study managers inform you that the CRO is warning they won’t be able to meet the timelines originally contracted.  Sound familiar?

There’s no shortage of things that can go wrong when we’re running large (or even smaller) clinical trials.  As we try to “fix” the problem and get things back on track, it’s easy to jump to conclusions and attribute the situation to one thing.  In this case, maybe we blame the vendor.  Or we go after the CRO… after all, they promised they’d manage our study and they’re not living up to their commitments, right?  Unfortunately, life is rarely that simple.

Having worked for years overseeing a large portfolio of clinical trials, I have found that problems usually fall into three categories:

  • Protocol-Specific
  • Process and Systems
  • Performance

Protocol-Specific includes things that are specific to the particular study at hand, like competitive trials you didn’t anticipate (timing!), eligibility criteria that end up eliminating an unexpectedly large proportion of patients, or site and patient burden from a demanding study that ends up being particularly unattractive to participate in.

Process and Systems includes factors like technical vendor issues, drug supply shortages and shipping problems.  Maybe there are issues with data uploads or transfers, or even frequent device malfunctions that frustrate patients and sites.

Finally, Performance covers cases where members of the study team are not living up to expectations.  This is rarely a result of poor attitude; more often it’s an issue of training, skills or motivation.  Most people really want to do a good job, and they also want to back a “winner”; they need a reason to push on when things get tough.

In the vast majority of cases, the full story behind study issues will not fall neatly within one of these three categories.  When studies are falling behind planned timelines, it’s important to completely investigate all three of these areas to unpack the whole story so you can appropriately focus corrective actions.

In our example, we would want to know why it’s difficult to schedule site activations.  Is it because of the device deliveries, or because of competitive trials?  Are the device shortages temporary or a long-term issue?  Are you and the CRO managing the situation (and site engagement) appropriately to keep sites’ interest high?

Without scanning all three of these areas for possible root causes, you could end up only addressing part of the problem.

Let’s make clinical trial technical vendors part of the (training) solution.

One of the factors driving the burden of clinical trials is the number and complexity of instruments (including equipment, software or other devices) needed to measure study endpoints.  Each of these instruments requires proper set-up and calibration at the study site, staff training and sometimes user certification to make sure the assessments are valid and can be used in a trial setting.  This means there’s a significant need to train study site personnel and to ensure they understand how to conduct each of these assessments properly.

The usual way to deliver this training is to run sessions at face-to-face Investigator Meetings held during study start-up.  While the vendors who supply these assessments tend to be quite adept at conducting these sessions, there are a few shortcomings of this approach.  The most significant issues are timing, since many sites will start the study months after the IM, and the reliance on face-to-face training methods that are difficult to replicate later.  Although the sessions can be recorded (video) and the training materials stored (user manuals or PowerPoint), follow-up training for those who couldn’t attend or who need a refresher is often sub-optimal.  It’s then left to sponsor Study Managers to figure out which materials to use to fill the gap and how to deliver them.  Study Managers are certainly not training experts.

I propose we consider a different way, and ensure that excellent ongoing training modules are also provided by these technical vendors.  The content and format could take different forms.  Instrument providers know their technology best, including the pitfalls that need to be addressed in training.  We should ask them to obtain (or consult with) any expertise needed to choose the best training methods, and then provide modules that can be delivered remotely and to individuals who may be trained long after the IM.

This will be a new expectation and a bigger burden for many vendors.  Sponsors and vendors will be concerned about the added scope and expense.

I would argue that sponsors are already bearing both cost and risk either by using stop-gap training methods or, in some cases, by approving additional travel by vendor or site staff to attend remedial face-to-face training sessions.  Making remote training modules part of the core expectation of technical vendors is the most efficient way to partner with those vendors to make them part of the solution.

What’s so hard about patient centricity in clinical trials?

There’s a lot of talk about customer centricity and patient centricity in the pharma industry these days.  The discussion extends to drug development and clinical trials.

This started a few years ago, as we realized that our clinical trials were getting more and more difficult for patients due to the number of assessments and the grueling schedules often required of study participants.  To lower the barriers to participation, we’ve looked for ways to incorporate the patient’s viewpoint into study designs, and to make study participation less burdensome and time-consuming.

These efforts should really be applauded.  Any effort that makes study participation significantly easier for patients, their families and caregivers is worthwhile.  But so far, the gains have been modest.  Why is that?

Years ago while teaching service management courses, I came across a great thought question:  What would a hotel look like if it were built for the ease of support departments like housekeeping, catering and security?  Answer: it would look like a hospital, and not a particularly nice one at that!  Anyone who’s had great service in a hotel will appreciate how those support departments can be orchestrated to create a first-rate and seamless customer experience.

Currently, clinical trials are definitely not built around the patient.  Sites will assure you that they are also not built around study coordinators, clinics or Investigators.  They are mostly built around the trial’s schedule of assessments, and it’s up to the study coordinator and other clinic staff to bring it all together.

Why is it so difficult for us to re-think study participation so it’s built more around patients and their caregivers?

The demands of organizing and managing studies are significant and getting worse.  The biopharma company’s clinical scientist is under pressure to design the study around valid endpoints that will lead to approval, or at least to a significant advancement of our understanding of the disease or treatment.  It’s difficult for biopharma study managers and procurement departments to identify, qualify and contract with the many technical vendors (sometimes dozens) needed.  All of these challenges need to be met under strict time pressure.

Given these challenges, it’s not surprising that patient centricity has not yet been addressed.  In addition, biopharma sponsors and CROs have lacked the internal expertise for the re-engineering that may be required.  But patient centricity remains a frontier that the clinical trials enterprise really needs to conquer.

Addressing this won’t be simple, but it’s not impossible.  The ability to put the customer in the center of designing a service experience can definitely differentiate a great provider from the pack.  Returning to my comparison, most of us have had many more mediocre or terrible hotel service experiences than terrific ones.

This will require sponsors to re-think the study journey through the eyes of the patient and the study site.  It’s an exercise that won’t happen accidentally, but it can begin when study assessments are being finalized and most vendors have been identified.

The roadmap to achieve this includes three steps that should be considered imperatives for all clinical trials:

  1. Ensure the study assessments represent the simplest study you can conduct to achieve your objective. Remember that each procedure is more than a tick mark in a table – it also represents cost, logistical challenges and a real patient doing their best to complete your trial.
  2. Take time to understand the patient’s journey through the clinic and what the experience of participating in the study would look like. How many times will they need to move from one assessment to another, and how long will it all take?
  3. Understand what the site’s challenge will look like. If your study requires several technical vendors, how can you ensure you know how they all work together, before it becomes the site’s problem to work it out?

Each of these steps requires time, a modest amount of expertise and consultation with experts, but the result will be worthwhile.  Not only will the project be more attractive to patients and investigative sites, but the exercise will also build your reputation as an empathetic sponsor who thinks differently.

How to think about Training – three levels to Mastery

A few years ago, I led a team trying to organize a massive training challenge.  We were changing the way we employed some of our key project personnel around the globe.  It was a change that would impact nearly 1,100 professionals working on more than 300 projects.  There was a significant risk that people would leave during the transition, and certainly lots of project handovers to be anticipated.

We had to figure out how to organize ourselves for such a massive training challenge, particularly in the area of project-specific training.  In our context, each project was a clinical trial.  Each clinical trial had an overall protocol that explained the study in detail.  Because many of our trials were conducted in complex disease areas, most also required a certain amount of training on the specific disease area.  And many involved specific assessments using complex technical processes or instruments (such as medical imaging, laboratory or tissue sampling).  While the people affected knew their jobs and general company processes well, we still had to figure out the most efficient way to prepare to deliver all this training on demand.

We came up with a clear way of considering all the things that go into preparing someone to work on a new project.  I conceive of this as three levels of knowledge:

  1. Core Training includes the essential, “core” elements that are unique to the project.  This would include the project description and master protocol, as well as key disease information (in our case) and information about the key, unique assessments.  We created a list of core training modules that all project teams needed to provide and upload to a central Learning Management System (LMS).
  2. Extended information includes the Core, plus all the other manuals and instructions about how to set up the project at a particular site and the processes that ensure the people actually doing the project are set up.  This could run to hundreds of documents, and it would be impractical to try to manage them all centrally, but the project team should have an idea of what would be on their “long list” of process documents.
  3. Mastery involves understanding all the information in the core and extended materials, but it also involves knowing how all these things fit together and developing certain instincts about risks and challenges that may not be captured in those manuals or in the training modules.

While Core and Extended materials can be “trained” and tested using written, presentation or in-person courses, mastery requires a level of experience and practice that’s impossible to substitute.  Mastery is the main reason we still have apprenticeships and residency programs in many professions and trades.

Looking at it a slightly different way, think about a project person completing all those core training modules, and perhaps reviewing and studying the materials and manuals in the Extended information.  Even once they had completed that, they would still be unsure how to actually get started on the project, unless they also had some level of mastery around their broader role and context.

Projects in all industries are getting more and more complex, and there’s more information to keep track of.  Our training challenge is to figure out how to move people along this full learning curve: training and assuring full competency on the project-specific Core Training, then making sure they have access to all the Extended information they’ll need to understand the relevant technical instructions and, finally, ensuring they can efficiently arrive at the level of consolidation and confidence that constitutes Mastery.

Leadership is a Responsibility

For a company video interview a few years ago, I was asked if I have a “leadership philosophy.”  The question caught me a little off guard and my first answer was a quick “no.”  About a minute later, I realized that I do have a couple of consistent expectations that I try to live by.  If that qualifies, then I suppose I do indeed have a leadership philosophy.

I should start by clarifying that not all leadership comes from the person “in charge” of a team.  Leadership can come from any team member and, ideally, should come from all of them.

The first expectation is that leadership is a responsibility.  While I don’t think most leaders take their responsibilities lightly, there are many who don’t appreciate the impact their engagement could have on team members.  Leaders must take their roles seriously, particularly considering how they support, challenge and engage the people who are watching and following them.

Leadership is a responsibility in another regard as well: it is an imperative for those of us who can bring our credibility, integrity and skills to a challenge.  There are so many challenges facing us, and so few people who know where to start.  If you can help out and move things forward, then you really must.

The second part of my philosophy is most important to those who are in charge of a group or team, but it’s relevant to everyone.  I truly believe that the most effective leaders work for the team, rather than the other way around.  The leader’s most important job is to provide what the team needs: supporting, challenging and engaging people from where they are to get them where they need to go.  This is grounded in the idea of servant leadership, though it occurred to me long before I heard of that important concept.

So I suppose I do have a leadership philosophy.  Do you agree?