Technology has dramatically changed diabetes care for the better over the last few decades. It’s allowed people to go from boiling needles before dosing insulin to microdosing insulin at the touch of a button. From occasionally checking glucose levels by matching the color of a saturated test strip against a printed chart, to a continual stream of readings automatically collected from a sensor discreetly attached to the body.

But what is the true impact of these technological advancements when they remain out of reach for so many? Especially when the reasons behind this lack of access come from systemic and societal bias and racism?

Also, can we truly trust that, as medical care becomes more reliant on software algorithms, those algorithms are themselves free from bias? Just how large and representative are the datasets used by artificial intelligence (AI) to generate everything from suggested care plans to lab test results? What assumptions underlie the calculations that humans are developing to measure our biological state of health?

Is there a danger that some groups of people are being left behind due to bias as medical technology and practices progress? Are the people in these groups more likely to, ultimately, experience more health complications and poorer health outcomes?

Many would say “yes,” and point to working for “TechQuity” as the answer.

We explored TechQuity and its implications for diabetes care with two experts in the field:

  • Dr. Harpreet Nagra, a licensed psychologist, behavioral scientist, and VP of behavior science and advanced technology at One Drop
  • Hana Nagel, service design manager at Deloitte Digital and a UX researcher focused on ethical AI

TechQuity brings technology and equity together. It’s a broad concept that is applicable anywhere technology is applied — including healthcare and diabetes.

TechQuity in the context of healthcare has a working definition of “the strategic development and deployment of technology to advance health equity.”

In diabetes care, TechQuity calls for all medical technology to be designed and deployed so that all groups of people have access and can benefit. Groups seeking equity are most often talked about in terms of race/ethnicity, gender and gender identity, age, sexual orientation, and economic status. In the context of diabetes, equity is also talked about in terms of diagnosis and diabetes type.

In diabetes and healthcare, the barriers to TechQuity can be found in both healthcare delivery and the medical technology itself.

“In care delivery, we know there are discrepant levels of introduction to diabetes technology for marginalized communities,” said Nagra.

Nagra says that diabetes technology use rates among people with type 1 diabetes reflect the gap between non-Hispanic white, non-Hispanic Black, and Hispanic people. According to a January 2021 study published in the Endocrine Society’s Journal of Clinical Endocrinology & Metabolism, use rates in the United States break down as follows:

  • White people with type 1 diabetes: 61 percent use an insulin pump and 53 percent use a continuous glucose monitor (CGM)
  • Black people with type 1 diabetes: 20 percent use an insulin pump and 31 percent use a CGM
  • Hispanic people with type 1 diabetes: 49 percent use an insulin pump and 58 percent use a CGM

Regarding the development of diabetes technology itself, Nagel pointed out that “the challenges in diabetes technology are more around dataset diversity, as in software and algorithms rather than hardware. Most medical datasets are based on white men and this creates computational bias.”

One acknowledged real-life example of how this computational bias can play out is the pulse oximeter, a medical device for measuring oxygen saturation levels in the blood. It was developed based on data from a population that was not racially diverse. One study comparing results for Black and white hospital patients found that the pulse oximeter can overestimate blood oxygen levels in people with darker skin. These findings put patients with darker skin at risk of developing hypoxemia (blood oxygen below the normal range) and having it go undetected.

Even when different groups of people are considered during the development of medical technology, bias can still create negative results. One example is how kidney function is calculated using the glomerular filtration rate (GFR) test. The equation behind this test has long included a multiplier that is applied only to Black people, based on the assumption that all Black people have high muscle mass. As a result, results for Black patients skew toward higher levels of kidney function than may actually be present.
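
To make this concrete, here is a minimal sketch of the race-adjusted MDRD equation, one widely used eGFR formula, showing how the race coefficient alone shifts the reported number. The patient values are illustrative only, not from any real record:

```python
# A minimal sketch of the race-adjusted MDRD eGFR equation (IDMS-traceable
# coefficients), illustrating how the race multiplier alone changes the result.

def egfr_mdrd(serum_creatinine_mg_dl: float, age_years: float,
              is_female: bool, is_black: bool) -> float:
    """Estimated GFR in mL/min/1.73 m^2 per the MDRD study equation."""
    egfr = 175.0 * serum_creatinine_mg_dl ** -1.154 * age_years ** -0.203
    if is_female:
        egfr *= 0.742
    if is_black:
        egfr *= 1.212  # the built-in race multiplier discussed above
    return egfr

# Identical labs, different reported kidney function:
labs = dict(serum_creatinine_mg_dl=1.4, age_years=55, is_female=False)
print(round(egfr_mdrd(**labs, is_black=False)))  # ~53
print(round(egfr_mdrd(**labs, is_black=True)))   # ~64 (21.2 percent higher)
```

With identical labs, the two reported values land on opposite sides of the eGFR threshold of 60 that is commonly used to stage chronic kidney disease, showing how a single hard-coded assumption can delay diagnosis and treatment.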

These pervasive, often unnoticed biases in healthcare technology put people at risk of not getting the care they need, experiencing more complications, and, ultimately, poorer health outcomes.

Bias in healthcare delivery can lead to misdiagnosis, to continuing a particular treatment approach even when it isn’t working, or to dismissing information provided by the patient or their caregiver. Assumptions about a person’s education, affluence, and even their willingness to learn and use technology get in the way of all care options being discussed or offered.

A 2020 survey conducted by DiabetesMine showed that people in the Black, Indigenous, and People of Color (BIPOC) community living with diabetes are often given minimal or even false medical advice, such as a misdiagnosis. Among those who mentioned misdiagnosis, a common theme was that healthcare providers were making “spot judgements” about them having type 2 diabetes simply based on their looks — a form of healthcare racial profiling that needs to be eradicated.

Bias is built into the assumptions people carry with them. Every one of us, patients and practitioners alike, brings inherent cognitive biases into our interactions.

In a talk presented at the September 2021 POCLWD (People of Color Living with Diabetes) Summit, Nagra explained that the most common sources of inherent bias are:

  • Anchoring – Giving more weight to information that supports an initial impression, even when that impression is incorrect.
  • Confirmation – Selectively gathering and interpreting evidence that confirms one’s existing beliefs, while neglecting evidence that contradicts them.
  • Availability – Treating whatever comes to mind most quickly as the most significant, often incorrectly.

Still, the biases that get built into our diabetes technology and healthcare systems aren’t always easy to spot.

We don’t know what data and assumptions went into building a medical device or developing a healthcare algorithm. Would any of us be able to determine whether a sensor works differently based on skin tone, or whether test results are influenced by our racial designation? Probably not.

One obvious — and common — red flag is when medical tech is developed based on data from a very small or homogeneous population. For example, an algorithm that is tested primarily with white males might work great for that group, but there’s no guarantee that it will also work well for Black males or even white females if these groups weren’t included in testing efforts.
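
One way developers and evaluators can surface this kind of gap is to report a model’s performance separately for each demographic group rather than as a single overall score. Here is a hedged sketch of such a subgroup audit; the dataset and column names are hypothetical, not from any specific product:

```python
# A sketch of a subgroup performance audit: score one model separately for
# each demographic group instead of reporting a single overall number.
# The DataFrame and column names are hypothetical.
import pandas as pd
from sklearn.metrics import roc_auc_score

def audit_by_group(df: pd.DataFrame, group_col: str,
                   y_true_col: str, y_score_col: str) -> pd.DataFrame:
    """Report sample size and AUC per demographic group."""
    rows = []
    for group, sub in df.groupby(group_col):
        if sub[y_true_col].nunique() < 2:
            continue  # AUC is undefined when a group has only one outcome class
        rows.append({group_col: group, "n": len(sub),
                     "auc": roc_auc_score(sub[y_true_col], sub[y_score_col])})
    return pd.DataFrame(rows)

# A strong overall AUC can hide a model that underperforms for groups that
# were underrepresented during testing, e.g.:
# print(audit_by_group(test_df, "race_ethnicity", "outcome", "model_score"))
```

A wide spread between groups, or a group too small to score at all, is exactly the red flag described above.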

Another red flag is when technology is developed with the assumption that all people in a particular group share a common characteristic. We saw this with the GFR equation’s assumption that all Black people have higher muscle mass. This is simply not true, just as it is not true that all women are petite.

Bias happens both at an individual level and a systemic level. Different tactics are needed to address both.

But first, we need to decide (individually and collectively) that we have the will and commitment necessary to make these changes. This is not easy work.

On an individual level, we need to be willing to, as Nagel says, “grapple with our uncomfortable history.” We haven’t gotten here based solely on random chance. We as individuals, our leaders, and our institutions have built systems that reinforce a status quo that favors some over others. We need to institute new processes that include and meet the needs of all groups, not just the most dominant or powerful.

We also need to take an active role in shaping the technology we choose to use. It’s not enough to simply accept the algorithms handed down to us by their developers. Nagra calls on us to “be more knowledgeable and demand more transparency” when it comes to the medical technology we use.

In September 2021, the Journal of the American Medical Informatics Association published a perspective piece titled “TechQuity is an imperative for health and technology business: Let’s work together to achieve it.”

The authors called on organizations, leaders, and individuals to take these essential steps toward fostering TechQuity and addressing systemic racism in healthcare:

  • Invest in people and communities – Calling on organizations to diversify their workforces, mentor and promote diverse managers, and engage with diverse communities in their outreach and investment activities.
  • Be trustworthy, collect data that are relevant to diverse communities, and keep those data secure – Building trust is imperative to addressing a history of exploiting People of Color and other historically marginalized groups in the name of science. Historical events like the Tuskegee syphilis study and the plight of Henrietta Lacks continue to cast a shadow of mistrust over many communities.
  • Use AI and analytics to promote health equity – The datasets used to train AI and generate algorithms must reflect the whole of the population they serve. Additionally, the people who build these algorithms must represent those same communities so that real-world experience and knowledge inform the development effort (see the sketch after this list).
  • Purchasers of technology must also drive change – We as individuals can (sometimes) choose which medical tech we use, and consider TechQuity as part of our purchase criteria. More importantly, our medical institutions (e.g., healthcare systems, CMS, payers) can drive equity in medical tech by including TechQuity in their purchase and performance criteria.
  • Develop innovative partnerships that engage diverse communities – For TechQuity efforts to succeed, diverse groups representing all affected communities need to be welcomed and to work together.
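
On the dataset point above, a simple first check is to compare a training set’s demographic mix against the population the technology will serve. Below is a hypothetical sketch; the column name and reference shares are placeholders, not real census figures:

```python
# A hypothetical dataset-representation check: compare each group's share of
# the training data to its share of the population being served.
# The column name and reference shares are placeholders.
import pandas as pd

def representation_gap(df: pd.DataFrame, group_col: str,
                       reference_shares: dict[str, float]) -> pd.DataFrame:
    """Compare each group's dataset share to its population share."""
    dataset_shares = df[group_col].value_counts(normalize=True)
    rows = []
    for group, population_share in reference_shares.items():
        dataset_share = float(dataset_shares.get(group, 0.0))
        rows.append({group_col: group,
                     "dataset_share": dataset_share,
                     "population_share": population_share,
                     "gap": dataset_share - population_share})
    return pd.DataFrame(rows)

# e.g. representation_gap(train_df, "race_ethnicity",
#                         {"white": 0.60, "black": 0.13, "hispanic": 0.19})
```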

As more and more healthcare decisions are driven by technology, any barrier to equitable access will foster a separate and unequal environment for those who are excluded. It’s up to all of us who engage with the healthcare system to make sure this doesn’t happen, and that we collectively move toward TechQuity.

Designer and researcher Hana Nagel will be speaking at the upcoming Fall 2021 DiabetesMine Innovation Days. She will share her thoughts on how to better design inclusive services and technology for diabetes. She will apply a socio-technical lens toward understanding the challenges that are causing disparate health outcomes, and explore how these challenges have their roots in systemic racism. Ultimately, she will suggest a path forward that includes diversifying datasets, design teams, and healthcare teams. Follow our event website to see her recorded presentation after the event.