Catching Lightning in a Bottle: Advice for Human Research in Times of Crisis
In March 2020 the world came to a halt due to the COVID-19 pandemic, and we've been collecting data through it all. Here, we share some of the research lessons we have learned along the way. The data descriptor is available in Scientific Data, and incoming data is regularly uploaded for open access.
Like many at the beginning of March 2020, I had a growing sense of unease and anticipation as rumors of a brutal virus running rampant across parts of Eurasia trickled more and more into the global headlines. My apprehension reached its zenith on March 13, 2020, when I finally decided to email my advisor, Dr. Elizabeth Kensinger, and get her thoughts on preparing to collect some data “just in case” this superbug got out of control, so that we might contribute to the effort by helping to understand its effects. On March 20, the same day the first stay-at-home orders were issued in the US, we enrolled our first subject in the study, and for the last 18 months I have logged many, many screen hours collecting longitudinal data, to the tune of nearly 60,000 survey responses from almost 2,000 participants. This not only resulted in the publication of our initial Scientific Data descriptor in February 2021; data collection has also continued, and we have made all of the processed data collected to date available for open access (https://osf.io/gpxwa/). As we begin to consider the end of data collection for this study, we wanted to use this platform to share some tips, both to provide additional context for our data and to help prepare others should we ever face similar disasters in the future.
Cast a wide, flexible net.
When we initially launched the study, my two primary aims were to determine whether certain sleep or activity patterns conferred any additional protection from, or risk of, contracting COVID-19, and whether they affected recovery if infected. To do this, we put together a “daily survey” that we had really only expected to run for a few weeks, maybe a month, while we “waited for it all to blow over.” As such, the potential impacts of social isolation and quarantine were initially only my tertiary goal. Moreover, at the time there was incredibly little information available about the course and symptoms of COVID-19, and we obviously had no idea of the extent the stay-at-home orders and school and business closures would reach. All this is to say that in times of crisis and disaster, it can be incredibly challenging even to know the most important variables to consider. Even still, we believe that we ultimately collected some incredibly valuable information. We did this by including questions that covered a relatively wide range of topics from the very beginning (sleep, activity, mood, stress, worry, depression, etc.). Given that we didn’t know the course of the disease, it stands to reason that we didn’t know what would ultimately be deemed important either. Once the stay-at-home orders were in place and the infection curve began to flatten, it became apparent that our data would be more valuable for determining the effects of the rapid shift in societal functioning than of acute infection, and we were able to make some adjustments to optimize for this new direction. First, once the gravity of the situation became clear and we realized that data collection might continue indefinitely, we made adjustments to decrease participant burden with the goal of enhancing retention.
Specifically, we created a shorter version of the daily survey such that metrics we did not expect to change as rapidly were assessed on a less frequent basis (importantly, though, we maintained the basic framework of the initial design so that we can compare all longitudinal data throughout the entire assessment period). Second, to collect a much richer dataset, we began launching occasional “one-time assessments” that helped us gather both trait-level information on our participants and more nuanced information about their experiences during COVID-19, the importance of which only became clear with time (we are just wrapping up our 7th round of additional assessments, with at least one more planned). As such, by casting a wide initial net of assessments and remaining flexible enough to update it as needed, we were able to optimize the eventual utility of our dataset, making changes both to retain as many participants as we could and to supplement their already expansive longitudinal data with more targeted assessments at less frequent intervals. While there are certainly a number of questions that we still wish we had asked or phrased differently, this general approach to human subjects research in a time of crisis has already proven quite fruitful in capturing knowledge that might otherwise have been missed. I’ll end this section by encouraging careful use of fishing-trip analogies in guiding your research. In this instance, “casting a wide net” was not the same as the more pejorative “fishing expedition”: we collaboratively discussed our interest in, and the potential relevance of, every measure we selected before it was employed.
Moreover, as we have begun to publish findings from this dataset, we have both brought on statistical experts as collaborators and relied heavily on pre-registrations to help ensure the quality of the research and minimize concerns that scuba equipment was involved in our data analysis plan. We hope that others who use this dataset, or other similarly large epidemiological datasets, will do the same.
Be ready and act fast.
As researchers, we all know that one of the primary hurdles to starting data collection is the IRB process, and in situations where time is of the essence, it can make a huge difference in the usefulness of the data collected. I was fortunate to be credentialed at an institution in which the members of the Office of Research Protections similarly recognized the potential importance of the moment and worked equally hard during that week between study conception and launch to expedite review while simultaneously ensuring the safety of our participants. This allowed us to have one of the earliest launch dates for a study of this magnitude that specifically targeted the emotional and societal toll of COVID-19. I know of a number of institutions where researchers hit immediate roadblocks (e.g., no protocol for online consenting) that not only significantly slowed or entirely halted research during the pandemic, but also prevented them from getting involved with COVID-19 research during its early days. If this was the case at your institution, I’d strongly encourage you to engage in conversations with your IRB sooner rather than later to discuss the current limitations and to develop a protocol that meets the standards of the institution and could be employed safely, effectively, and flexibly should the need arise in a crisis.
Get creative.
Research during emergency situations necessitates an ability to be creative, as it is almost certain that no two crises are exactly alike. I’ve already discussed the inherent challenges in designing surveys when you as the experimenter don’t know what the outcome is going to be. Another area in which I believe we have successfully been creative is with funding and remuneration. When we launched data collection in March 2020, we did not have a grant for “pandemic outbreak research,” and spending freezes were quickly implemented at a number of institutions (including our own). We launched largely on good faith that many citizens of the world would want to find a way to contribute during a time of crisis, and that we would work equally hard to provide them with whatever compensation we could. As such, my advisor scraped together any discretionary funds that she could make available, and I began to write grant after grant after grant (a few successes, a lot of failures). With the funds we could muster, we have been able to offer semi-frequent raffles for gift cards that hundreds of our nearly 2,000 participants have now received at one point or another. Additionally, some participants requested the opportunity to donate their winnings to charity, which we have incorporated as another regular option. We have also sought non-monetary forms of expressing our appreciation, the most successful of which has been describing some of the initial results from the study to our participants once we believe it will no longer affect their future responses (e.g., sleep and emotion changes specifically during the early days of the pandemic). We have found our participants to be absolutely ravenous for these informational follow-ups, and plan to continue them for quite some time as a reward for their dedication.
Once the study has officially wrapped, we also plan to provide them with a certificate listing the number of survey responses they contributed to the dataset as a recognition of their efforts and a token of our appreciation. Finally, I also wanted to mention an area in which I believe we failed in our creativity, and that was in the recruitment of a diverse subject pool. At the launch of our study, we did everything we could to spread the word about the study as far and wide as possible, without any strategies to target certain populations. This ultimately resulted in a relatively homogenous sample of ~75% White females. That was a failure on my part, and should I ever find myself in another similar recruitment scenario, I will be more intentional and creative in recruiting more diverse groups.
Stay human.
Far and away the most important thing I learned over the last year and a half is to treat your participants with respect and gratitude when doing research in highly stressful and uncertain times, and to allow yourself to be humanized in the process. I’ve detailed a number of ways that we have attempted to do just that above (e.g., finding ways to reduce participant burden; working hard to compensate them; finding creative ways to thank them for their effort, including following up with results from the study). Given the situation, I have been fiercely protective of the load on my participants, making all assessments optional and reminding them frequently to respond only when they feel that they have the time and energy to do so. I think it is very easy to come across as a robot beaming questions down from an ivory tower to “capitalize” on the situation, so we as researchers need to make it a priority to relate to participants in a way that makes them feel confident that their efforts will contribute to our knowledge of the situation and ultimately to the greater good of humanity. I’ve strived to do this by personally responding to nearly every email I’ve received from participants, taking the time to describe certain aspects of the research process, and utilizing the occasional well-timed emoji. Throughout the 18 months of data collection, the feedback on this approach from my participants has been overwhelmingly positive (in numbers, we’re nearing 60,000 survey responses with a withdrawal rate of only ~9% across more than a year and a half of data collection, even after sending them literally hundreds of emails at this point). A number of my participants have emailed me (unsolicited and outside of the study) to share some of their good times (pregnancies, new babies, new jobs) and bad (loss of loved ones), and not once did they doubt that it was a human on the other end receiving and responding to the messages.
So, in summary, should you ever choose to engage in research during a disaster or time of crisis, don’t forget to include a healthy amount of “human” in the human subjects research.