
Why we made our digital biomarker code open source

February 23, 2021

By Anzar Abbas, PhD

There is little doubt about the usefulness and practicality of digital phenotyping for measuring patient health and behavior. Traditional clinical scales, the current gold standard for primary endpoints, can be cumbersome and demanding. Digital measurements purport to solve this problem, but questions about their reliability as accurate assessments remain.

Ultimately, both digital and traditional measures rest on a common understanding: disease often manifests itself in observable behavior, and that behavior can be measured. Traditional clinical scales rely on clinician observation. Digital measurements rely on computational tools that mimic, and in many cases enhance, what a clinician would observe.

Of course, none of this is novel. Researchers have been developing digital measurement tools for decades. Computer vision-based coding of facial expressions demonstrated its reliability in the 1990s [1], as did now-ubiquitous software for measuring vocal acoustics [2]. From the very beginning, our goal has simply been to replicate the reliability of these measurements on the AiCure platform.
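To make the vocal acoustics example concrete, here is a minimal sketch, not AiCure's actual implementation, of how two classic acoustic biomarkers can be computed in Python with Parselmouth, an open-source interface to the Praat software behind Boersma's method [2]. The file name below is a placeholder.

```python
# A minimal sketch (not AiCure's pipeline) of two classic vocal acoustic
# biomarkers: fundamental frequency (F0) and harmonics-to-noise ratio (HNR),
# computed with Parselmouth, a Python interface to Praat.
import parselmouth

# Hypothetical mono speech recording.
sound = parselmouth.Sound("patient_speech.wav")

# F0 track; Praat reports unvoiced frames as 0 Hz, so we drop them.
pitch = sound.to_pitch()
f0 = pitch.selected_array["frequency"]
voiced_f0 = f0[f0 > 0]

# Harmonics-to-noise ratio, the measure analyzed by Boersma (1993).
# Praat marks unvoiced frames with -200 dB, so we exclude those as well.
harmonicity = sound.to_harmonicity()
hnr = harmonicity.values[harmonicity.values != -200]

print(f"Mean F0: {voiced_f0.mean():.1f} Hz")
print(f"Mean HNR: {hnr.mean():.1f} dB")
```

Summary statistics like these, computed over a recorded speech sample, are the kind of measurements a digital biomarker pipeline produces.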

And that’s what we did. We scoured the scientific literature for digital measurements shown to relate to clinical functioning and replicated those methods so we could make the same measurements on our own platform. Before long, we realized we were in sole possession of a unique codebase for reliably measuring a multimodal set of facial and vocal digital biomarkers.

And so we put it all back in the public domain.

Folks have been confused by our decision to do this, but to us it’s the only way forward. We believe that trust in novel measures should be built in the public domain, through peer review, by the scientific and medical community. That process has been remarkably slow, partly because methods for digital phenotyping have historically been disparate and difficult to access.

By making our code open source, we make these methods more accessible to researchers and to our partners in industry. As a result, we can all contribute to them, comment without restriction on their reliability and validity, and collectively move closer to their adoption in patient care and clinical research.

I invite you all to check out our code on GitHub, read through the list of biomarkers that can be calculated with it, and watch a recording of the webinar we hosted when we launched the software in the fall of 2020. And as always, please reach out if you want to chat.

On March 3, Anzar will present with Colin Sauder, PhD, Director, Clinical Scientist at Karuna Therapeutics, on “Moving Digital Biomarkers from the Future into the Present” at SCOPE Virtual 2021.


1. Lien, J. J., Kanade, T., Cohn, J. F., & Li, C. C. (1998, April). Automated facial expression recognition based on FACS action units. In Proceedings Third IEEE International Conference on Automatic Face and Gesture Recognition (pp. 390-395). IEEE.

2. Boersma, P. (1993, March). Accurate short-term analysis of the fundamental frequency and the harmonics-to-noise ratio of a sampled sound. In Proceedings of the institute of phonetic sciences (Vol. 17, No. 1193, pp. 97-110).

   
