“Desensitization” Process Improves Compatibility of Donor Kidneys

By ThinkReliability Staff

Many patients with advanced and permanent kidney failure are recommended for kidney transplants, where a donor kidney is placed into their body. Because most of us have two kidneys, donor kidneys can come from either living or deceased donors. If a compatible living donor is not found, a patient is placed on the waiting list for a deceased donor organ. Unfortunately, there are about 100,000 people on that waiting list. While waiting for a new kidney, patients must undergo dialysis, which is not only time-consuming but also expensive.

Researchers estimate that about 50,000 people on the kidney transplant waiting list have antibodies that impact their ability to find a compatible donor kidney. Of those, 20,000 are so sensitive that finding a donor kidney is “all but impossible”… until now.

A study published March 9, 2016 in the New England Journal of Medicine provides promising results from a procedure that alters patients’ immune systems so they can accept previously “incompatible” donor kidneys. This procedure is called desensitization. First, antibodies are filtered out of a patient’s blood. Then the patient is given an infusion of other antibodies. The immune system then regenerates its own antibodies which are, for reasons as yet unknown, less likely to attack a donated organ. (If there’s still a concern about the remaining antibodies, the patient is treated with drugs to prevent them from making antibodies that may attack the new kidney.)

The study examined 1,025 patients with incompatible living donors at 22 medical centers and compared them to an equal number of patients on waiting lists or who received a compatible deceased donor kidney. After 8 years, 76.5% of the patients who were desensitized and received an “incompatible” living donor kidney were alive compared to only 43.9% of those who remained on the waiting list and did not receive a transplant.

The cost for desensitization is about $30,000 and a transplant costs about $100,000. However, this avoids the yearly life-long cost of $70,000 for dialysis. The procedure also takes about two weeks, so it can only be performed with a living donor; a deceased-donor organ cannot wait that long. The key is that ANY living donor will work, because desensitization makes just about any kidney suitable, even for those patients who previously would have had significant trouble finding a compatible organ. Says Dr. Krista L. Lentine, “Desensitization may be the only realistic option for receiving a transplant.”
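The cost comparison above is simple enough to check with a few lines of arithmetic. Here is a minimal sketch using only the dollar figures cited in this post:

```python
# Rough break-even estimate for desensitization plus transplant
# versus staying on dialysis, using the cost figures cited above.

DESENSITIZATION_COST = 30_000    # one-time, per this post's estimate
TRANSPLANT_COST = 100_000        # one-time
DIALYSIS_COST_PER_YEAR = 70_000  # recurring, for life

upfront = DESENSITIZATION_COST + TRANSPLANT_COST
breakeven_years = upfront / DIALYSIS_COST_PER_YEAR

print(f"Up-front cost: ${upfront:,}")
print(f"Break-even vs. dialysis: {breakeven_years:.1f} years")
```

Under these estimates, the one-time cost of desensitization plus a transplant is recovered in under two years of avoided dialysis.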

The study discusses only kidney transplants but there’s hope that the process will work for living-donor transplants of livers and lungs. Although the study has shown great success, the shortage of organ donations – of all kinds – is still a concern.

To view the process map for kidney failure without desensitization, and how the process map can be improved with desensitization, click on “Download PDF” above. To learn more about other methods to increase the availability of kidney donations, see our previous blog on a flushing process that can allow the use of kidneys previously considered too damaged for donation.


Patients and Insurers Pay Big for Discarded Cancer Drugs

By ThinkReliability Staff

A recent study has found that the size of vials used for cancer drugs directly results in waste and accounts for a significant portion of the high – and steadily increasing – cost of cancer drugs.  With most cancer medications available in only one or two vial sizes, usually designed to provide enough medication for the largest patients, medication is often left over in each vial.

The researchers estimate that about $2.8 billion is spent by Medicare and other insurers reimbursing for medication that is discarded.

This cost – paying for medication that is literally thrown out in most cases – can be considered an impact to the property goal.  As the cost of drugs increases, it’s not only Medicare and other insurers that are impacted, but also patients, many of whom pay a fixed percentage of their drug costs.  This impacts the patient services goal.  The disposal of these drugs has a potential environmental impact, impacting the environmental goal.  The impacts to the goals, as well as the what, when and where of the issue, are captured in a problem outline.  This is the first step of the Cause Mapping process, which develops a visual diagram of cause-and-effect relationships (a type of root cause analysis).

The second step of the process is to begin with an impacted goal and develop the cause-and-effect relationships.  This can be done by asking “why” questions and ensuring that all the causes necessary to result in an effect are included.  In some cases, more than one cause is required to produce an effect.  In these cases, the causes are both connected to the effect and joined with an “AND”.

In this case, beginning with the property goal, we can ask “Why do Medicare and other insurers have increased costs?”  This is due to the increased cost of cancer drugs, which results from a significant amount of medication being thrown away.  We can add evidence to the causes to support their inclusion in the Cause Map or provide additional information.  For example, the study found that the earnings on disposed medication made up 30% of the overall sales for one cancer medication.

A significant amount of medication is being thrown away because there is medication left over in each vial used to deliver the medication, and the leftover medication in the vials is thrown away.  Both of these causes are required to result in the medication waste.  Leftover medication is thrown away because it can only be reused in rare circumstances (within six hours at a specialized pharmacy).  There is leftover medication in the vials because the vials hold too much medication for many patients.  (Most medication is dosed based on patient weight.)  The vials hold too much medication because many medications are provided in only one or two vial sizes.  This is true of 18 of the top 20 cancer drugs.  Providing alternate vial sizes is not required by regulators, whose concern is limited to patient safety and potential medical errors.  Specifically, Congress has not authorized the US Food and Drug Administration (FDA) to consider cost.  Drug manufacturers select vial sizes based on “marketing concerns” or, effectively, profit.  The study found that providing more vial sizes for one medication would reduce waste by 84% but would also reduce sales by $261 million a year.
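The mismatch between weight-based dosing and fixed vial sizes described above can be sketched with a short calculation. The dose rate and vial size below are hypothetical illustrative numbers, not figures from the study:

```python
import math

def vial_waste(weight_kg, dose_mg_per_kg, vial_size_mg):
    """Return (vials_opened, leftover_mg) for a weight-based dose
    drawn from fixed-size single-use vials."""
    dose = weight_kg * dose_mg_per_kg
    vials = math.ceil(dose / vial_size_mg)       # must open whole vials
    leftover = vials * vial_size_mg - dose       # discarded remainder
    return vials, leftover

# Hypothetical drug: 3 mg/kg dose, sold only in 400 mg vials.
for weight in (60, 75, 90):
    vials, leftover = vial_waste(weight, 3, 400)
    print(f"{weight} kg patient: {vials} vial(s), {leftover:.0f} mg discarded")
```

For every patient whose dose is not an exact multiple of the vial size, the remainder is paid for and thrown away, which is exactly the waste mechanism the study quantifies.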

Several of the vials for cancer medications are sized based on a larger (6’6″, 250 lb.) patient.  According to one drug manufacturer, this is done by design, resulting from working with the FDA for a vial that would provide enough medication “for a patient of almost any size.”  At least one drug manufacturer has suggested that the full vial be administered regardless of patient size, but one of the study’s co-authors says that extra medication does nothing to help patients, so it would still be wasted.

Instead, the researchers propose that the government either mandate that the drugs be distributed in multiple vial sizes that would minimize waste, or require refunds for wasted quantities.  Alternate vial sizes are already provided in Europe, “where regulators are clearly paying attention to this issue,” says Dr. Leonard Saltz, a co-author of the study.

To view the initial outline, Cause Map and proposed solutions, please click on “Download PDF” above.  Click here to view the study and drug waste calculator.

Do you know how an MRI works?

By Kim Smiley

About 30 million magnetic resonance imaging (MRI) scans are performed in the United States each year. They are most frequently used to create images of the brain and spinal cord, but can also help diagnose aneurysms, eye and inner ear disorders, strokes, tumors and other medical issues. MRIs are painless and do not expose a patient to potentially harmful radiation, making them one of the safest medical procedures available.

MRIs are fairly common and most people have heard of them, but do you have any idea how they work?  A Process Map is used to document how a work process is performed, which can be useful when explaining how a process works to somebody who is unfamiliar with it.  To view a high level Process Map of how an MRI is used to create an image, click on “Download PDF”.

The high level Process Map is very basic and would not be useful to somebody trying to learn how to perform an MRI, but it might be helpful in explaining to a patient what to expect during the procedure and how an MRI image is produced.  A more detailed Process Map that includes each step needed to perform an MRI could be built for use as a training aid or as a way to document best work practices, but sometimes a basic high level Process Map can also be helpful.

So how does an MRI create detailed images of the inside of a human body? An MRI uses a strong magnet to create a large, steady magnetic field around the patient’s body.  Many atoms, such as hydrogen atoms, have strong magnetic moments that cause them to align in the same direction when exposed to a magnetic field.  Once atoms in the patient’s body are aligned along the field lines of the large magnet, the MRI machine produces a pulse of radio frequency current.  During this extremely brief pulse of energy, atoms in the patient’s body absorb the energy and rotate to align with the radio frequency current.  Once the pulse is over, the atoms rotate back to their original positions, emitting energy.  Atoms in different types of body tissue return to their original positions at different rates and release different energy signals. The body is pulsed many times by different frequencies at different locations to target the specific type of issue being examined by the MRI. All of the energy emitted by the atoms during these pulses is collected by antennas, and a computer uses a mathematical formula (a Fourier transform) to convert the data into images.

Obviously this is a very high level explanation that leaves out a lot of detail, but the basic idea is that an MRI uses changing magnetic fields and the body’s natural magnetic properties to produce detailed images of the human body.  The patient’s role during an MRI is simple (if maybe a little claustrophobic), but the process by which the MRI image is produced is fairly complicated to understand.  Having a simple, visible explanation of what is going on may help make a patient feel more comfortable with their experience.

Can you think of a time when it would be useful to explain the big picture of a work process to somebody, whether a manager or a customer? Creating a simple high level Process Map to help explain a process to people who aren’t directly involved in the work can be useful across many industries.

Hospital pays hackers ransom of 40 bitcoins to release medical records

By Kim Smiley

In February 2016, Hollywood Presbyterian Medical Center’s computer network was hit with a cyberattack.  The hackers took over the computer system, blocking access to medical records and email, and demanded ransom in return for restoring the system.  After days without access to their computer system, the hospital paid the hackers 40 bitcoins, worth about $17,000, in ransom and regained control of the network.

A Cause Map, an intuitive visual format for performing a root cause analysis, can be built to analyze this incident.  Not all of the information from the investigation has been released to the public, but an initial Cause Map can be created to capture what is now known.  As more information is available, the Cause Map can easily be expanded to incorporate it.

The first step in the Cause Mapping process is to fill in an Outline with the basic background information.  The bottom portion of the Outline has a place to list the impacts to the goals.  In this incident, as with most, more than one goal was impacted.  The patient safety goal was impacted because patient care was potentially disrupted when the hospital was unable to access medical records.  The economic goal was also impacted because the hospital paid about $17,000 to the hackers.  The fact that the hackers got away with the crime could be considered an impact to the compliance goal.  To view a filled-in Outline as well as a high level Cause Map, click on “Download PDF” above.

Once the Outline is completed, defining the problem, the next step is to build the Cause Map to analyze the issue. The Cause Map is built by asking “why” questions and laying out the answers to show all the cause-and-effect relationships that contributed to an issue.  In this example, the hospital paid ransom to hackers because they were unable to access their medical records.  This occurred because the hospital used electronic medical records, hackers blocked access to them and there was no back-up of the information.  (When more than one cause contributed to an effect, the causes are listed vertically on the Cause Map and separated with an “and”.)
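The cause-and-effect structure described above can be represented as a simple data structure, where each effect maps to the list of causes required to produce it, and multiple entries are implicitly joined with an “AND”. This is a hypothetical sketch, not ThinkReliability’s actual tooling:

```python
# A Cause Map as a simple effect -> required-causes mapping.
# Multiple causes listed for one effect are AND-ed: all must be present.
cause_map = {
    "hospital paid ransom": ["unable to access medical records"],
    "unable to access medical records": [
        "records were electronic only",  # AND
        "hackers blocked access",        # AND
        "no usable back-up",             # AND
    ],
}

def walk(effect, depth=0):
    """Print the cause-and-effect chain by repeatedly asking 'why'."""
    print("  " * depth + effect)
    for cause in cause_map.get(effect, []):
        walk(cause, depth + 1)

walk("hospital paid ransom")
```

Each level of indentation in the printed output corresponds to one “why” question, mirroring how the visual Cause Map is read from an impacted goal back through its causes.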

How the hackers were able to gain access to the network hasn’t been released, but these types of ransomware attacks generally start with the hacker sending what seems to be a routine email with an attached file, such as a Word document. If somebody enables content on the attachment, the virus can access the system. Once the system is infected, the data on it is encrypted and the user is told that they must pay the hackers for the encryption key that will unlock the system. Once the system has been locked up by ransomware, it can be very difficult to regain access to the data unless the ransom is paid.  Unless a system is designed with robust back-ups, the only choices are likely to be to pay the ransom or lose the data.

The best way to deal with these types of attacks is to prevent them. Do not click on unknown links or attachments.  Good firewalls and anti-virus software may help if a person does click on something suspicious, but they can’t always prevent infection.  Many experts are concerned about the precedent set by businesses choosing to pay the ransom and fear these attacks may become increasingly common as they prove effective.

Patient death after ambulance delayed due to “extreme demand”

By ThinkReliability Staff

An inquest into the death of a young patient in London after a significant delay in the arrival of an ambulance released some disturbing details into the emergency process. We can perform a root cause analysis of the issues that led to the delay, and death, by capturing cause-and-effect relationships in a visual Cause Map.   As with many complex incidents, it will be helpful to capture the chronology of an event within a timeline. This timeline should not be confused with an analysis, but can be useful in organizing information related to the incident.

In this case, the patient, who had type 1 diabetes and had been feeling sick for more than a day, asked a friend to call an ambulance at about 5:00 pm on September 7, 2015. The friend dialed 111, which is the non-emergency medical helpline from the National Health Service. The initial call handler determined that the situation was not an emergency, but marked it for a 20-minute follow-up with a clinician. A clinical supervisor called back and spoke to the patient at 5:42 pm. She determined that it was an emergency that required an ambulance within 30 minutes. However, because it was known that the ambulance service was delayed, she asked the patient if she could get a friend to drive her to the hospital. The patient said she preferred an ambulance.

At this point it appears there was no contact until 10:15 pm, at which point a call-back was made to check on the patient’s ongoing symptoms. The friend at this time found the patient unconscious, having suffered cardiac arrest, and called 999, the emergency call system, at 10:23 pm. The ambulance arrived at 10:30 pm and took the patient to a hospital, where she died 5 days later.

At the inquest, the coroner testified that if the patient “had received definitive hospital care before she suffered a cardiac arrest in the evening of September 7, the likelihood is she would have survived.” Thus, from the perspective of the National Health Service, the patient safety goal is impacted because a death occurred that was believed to be at least partially due to an ambulance delay. Additional goals impacted include the patient services goal, because of the delayed emergency treatment (the stated goal for the patient’s medical condition was 30 minutes, whereas the ambulance arrived nearly 4 hours after that goal). The schedule and operations goal is also impacted due to the insufficient capacity of both the ambulances and the call system.

The Cause Mapping begins with an impacted goal and develops cause-and-effect relationships by asking “why” questions. The patient death was due to diabetic ketoacidosis, a severe complication of type 1 diabetes that may have resulted from an additional illness or underlying condition. As stated by the coroner, the delayed emergency treatment also resulted in the patient’s death. The ambulance that would take the patient to the hospital was delayed because the demand exceeded capacity. Demand was “extreme” (there were 200 other patients waiting for ambulances in London at the same time). The lack of capacity resulted from low operational resourcing, though no other information was available about what caused this. (This is a question that should be addressed by the service’s internal investigation.)

The patient was not driven to the hospital, which would potentially have gotten her treated faster and maybe even saved her life. The patient requested an ambulance and the potentially significant delay time was not discussed with the friend who had originally called. At the time of the first call-back, the estimated arrival time of an ambulance was not known. (By the time of the second call-back, it was too late.)

The second call-back was also delayed. Presumably this call was to update the patient’s symptoms as necessary and reclassify the call (to be more or less urgent) as appropriate. However, the demand exceeded supply for the call center as well as for ambulances. The call center received 300 calls during the hour of the initial call regarding this patient, which resulted in the service operations being upgraded to “purple-enhanced”. (This is the third-most serious category, the most serious being “black” or “catastrophic”.)   The change in operations meant that personnel normally assigned to call-backs were instead assigned to take initial emergency calls. Additionally, it’s likely the same “operational resourcing” issues that affected ambulance availability also impacted the call center.

Additional details of the causes related to the insufficient capacity of emergency medical services are required to come up with effective solutions. The ambulance service has completed its own internal investigation, which was presented to the family of the patient. The patient’s brother says, “I hope these lessons will be learnt and this case will not happen again” and the family says they will continue to raise awareness of the dangers of diabetes.

To view the initial analysis of this issue, including the timeline, click on “Download PDF” above. Or click here to read more.


Study finds many patients don’t understand their discharge instructions

By Kim Smiley 

Keeping patients as comfortable and safe as possible following hospitalization is difficult if they aren’t receiving appropriate follow-up care after returning home.  But a recent study “Readability of discharge summaries: with what level of information are we dismissing our patients?” found that many patients struggle to understand their follow-up care instructions after leaving the hospital.  

Generally, follow-up care instructions are verbally explained to patients prior to discharge, but many find it difficult to remember all the necessary information once they return home.  The stress of the hospitalization, memory-clouding medication, injuries that may affect memory and the sheer number of instructions can make remembering the details of verbal follow-up care instructions difficult. 

In order to help patients understand and remember their recommended discharge instructions, written instructions are provided at the time of discharge.  However, the study found that many patients cannot understand their written follow-up care instructions.  The study determined that a significant percentage of patients are either functionally illiterate or marginally literate and lack the reading skills necessary to understand their written instructions.  One assessment found that follow-up care instructions were written at about a 10th grade level, and another determined that the instructions required the reading level of 13- to 15-year-old students.

One of the causes contributing to this problem is that discharge instructions are written with two audiences in mind – the patient and their family, as well as their doctor.  Many patients need simple, clear instructions, but doctors understand medical jargon and can handle more complicated care instructions.

It is important to note that the study did have several limitations.  Researchers did not give patients reading tests and instead relied on the highest level of education attained to estimate literacy skills.  Non-English speakers were excluded.  Even with these limitations, the study provided information that should help medical professionals provide clear guidance on follow-up care recommendations.

The obvious solution is to work towards writing care instructions that are as simple and clear to understand as possible. In order to help patients clearly understand their follow-up care instructions, the American Medical Association already recommends that health information be written at a sixth grade reading level.  Providing clear contact information and encouraging patients to call their nurse or doctor with any questions about discharge instructions could also improve the follow-up care patients are receiving.

What’s the best way to screen for breast cancer? Opinions differ.

By ThinkReliability Staff

In 2015, there were 40,000 deaths from breast cancer and 232,000 new cases of breast cancer in the United States. It is the second-leading cause of cancer death among women in the United States. The very high level cause-and-effect is that people (primarily women) die from breast cancer due to ineffective treatment. The later the cancer is detected, the later treatment begins, so early detection can help prevent breast cancer deaths. Currently the best tool for detecting breast cancer is a mammogram. But the question of when mammograms should occur is based on a risk-benefit analysis.

There’s no question that mammograms save lives by detecting breast cancer. This is the benefit in the analysis. Less well known are the risks of mammograms: false negatives, false positives, unnecessary biopsies, and unnecessary treatment. The radiation that may be used in treatment can actually cause future breast cancer (and other types of cancer).

On January 11, 2016, the United States Preventive Services Task Force (USPSTF) issued an update of their guidelines on mammogram starting and ending age (as well as other related recommendations). To develop these recommendations, the task force attempted to quantify the risks and benefits of receiving mammograms at varying ages.

For women aged 40 to 49, the task force found that “there is at least moderate certainty that the net benefit is small.” The net benefit here reflects the benefits of screening (~0.4 cancer deaths prevented for every 1,000 screened and an overall reduction in the risk of dying from breast cancer from ~2.7% to ~1.8%) compared to the risks of screening. Risks of mammograms every other year for women aged 40 to 49 include ~121 false positives, ~200 unnecessary biopsies, ~20 harmless cancers treated, and ~1 false negative for every 1,000 women screened. The task force determined that in this case, the benefits do not significantly outweigh the risks for the average woman. Thus, the recommendation was rated as a C, meaning “The USPSTF recommends selectively offering or providing this service to individual patients based on professional judgment and patient preferences.” (Women who are at high risk or who feel that in their individual case, the benefits outweigh the risks, may still want to get screened before age 50.)

For women aged 50 to 74, the task force found that “there is high certainty that the net benefit is moderate or there is moderate certainty that the net benefit is moderate to substantial.” The types of benefits and risks are the same as for screening women aged 40 to 49, but the benefits are greater and the risks are less. For women aged 50 to 74, there are ~4.2 cancer deaths prevented for every 1,000 screened and an overall reduction in the risk of dying from breast cancer from ~2.7% to ~1.8%.   Risks of mammograms every other year for women aged 50 to 74 include ~87 false positives, ~160 unnecessary biopsies, ~18 harmless cancers treated, and ~1.2 false negatives for every 1,000 women screened. The task force determined that for women aged 50 to 74, the benefits of mammograms every other year outweigh the risks. Thus, the recommendation was rated as a B (the USPSTF recommends the service).
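The per-1,000-women figures quoted above can be laid side by side. This is a minimal sketch using only the approximate numbers from this post, not the full USPSTF tables:

```python
# Benefits and risks of biennial mammography per 1,000 women screened,
# as quoted above for the two age groups.
per_1000 = {
    "40-49": {"deaths_prevented": 0.4, "false_positives": 121,
              "unnecessary_biopsies": 200, "overtreated": 20,
              "false_negatives": 1.0},
    "50-74": {"deaths_prevented": 4.2, "false_positives": 87,
              "unnecessary_biopsies": 160, "overtreated": 18,
              "false_negatives": 1.2},
}

for group, stats in per_1000.items():
    ratio = stats["false_positives"] / stats["deaths_prevented"]
    print(f"Ages {group}: {stats['deaths_prevented']} deaths prevented, "
          f"{stats['false_positives']} false positives "
          f"(~{ratio:.0f} false positives per death prevented)")
```

Laid out this way, the ratio of harms to benefits is roughly an order of magnitude worse for the younger group, which is exactly the asymmetry behind the C rating for ages 40 to 49 and the B rating for ages 50 to 74.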

The task force determined it did not have enough evidence to provide a recommendation either way for screening women over age 74.

Comparing these risks to benefits is a subjective analysis, and some do not agree with the findings. Says Dr. Clifford A. Hudis, chief of breast cancer medicine at Memorial Sloan Kettering Cancer Center, “The harm of a missed curable cancer is something profound. The harm of an unnecessary biopsy seems somewhat less to me.” To those who disagree, the task force reiterates that personal preference should determine the age at which screening begins. However, insurers may choose to base coverage on these recommendations. (Currently, private insurers are required to pay for mammograms for women 40 and over through 2017.)

Determining these recommendations – like performing any risk-benefit analysis – was no easy task and demonstrates the difficulty of evaluating risks vs. benefits. Because these analyses are subjective, results may vary. To view the risk vs. benefit comparison overview by the task force, click on “Download PDF” above.

Shoveling snow really can trigger heart attacks

By Kim Smiley

You may have heard that shoveling snow can trigger a heart attack and studies have found that there is truth behind that concern.  Before you pick up a shovel this winter, there are a few things you should know.

Shoveling can be much more strenuous than many people realize – even more strenuous than running at full speed on a treadmill.  Snow shoveling also tends to be a goal-oriented task.  People want to clear the driveway before they stop, and they may push their bodies harder than they would if they were exercising for fitness.

Cold temperatures can increase the risk of heart problems occurring.  When a body gets cold, the arteries constrict and blood pressure can increase, which in turn increases the risk of heart issues.  High blood pressure and a sudden increase in physical activity can be a dangerous combination.  Additionally, if emergency help is needed, it may take longer than normal to arrive because of snow and ice on the roadways, making the situation potentially even more dangerous.

If you are young and fit, snow shoveling can be a great workout (and maybe you could help out your elderly neighbors if possible…), but if you are at risk of heart problems, you may want to put some thought into how you attack the problem of clearing your driveway and/or sidewalks.  First off, you should know if you are potentially at high risk.  Studies have found that people over 55 are four times more likely to experience heart-related issues while shoveling and men are twice as likely as women. People with known heart problems, diabetes or high blood pressure are also potentially high-risk.  Anybody who is sedentary is also at a higher risk of heart issues than somebody who exercises regularly.

So what should you do if you are concerned about the risk of heart problems and shoveling?  If possible, you may want to avoid shoveling if there is somebody else who can do it.  If you are determined to shovel yourself, make sure you drink lots of water and dress warmly.  Try to push the snow if possible, rather than shoveling it.  It is also generally better to shovel lots of lighter loads rather than fewer, heavy loads.  If possible, you may want to shovel several times throughout the storm to spread the work out over time. Take frequent breaks and stop immediately if you feel tired, lightheaded, short of breath or your chest hurts. Stay safe this winter!

To see a Cause Map, a visual root cause analysis, of this issue, click on “Download PDF” above.  A Cause Map visually lays out all the causes that contribute to an issue so that it can be better understood.  This example Cause Map also includes evidence and potential solutions.

Chipotle Improves Food Safety Processes After Outbreaks

By ThinkReliability Staff

On February 8, all Chipotle stores will close in order for employees to learn how to better safeguard against food safety issues.  This is just one step of many being taken after a string of outbreaks affected Chipotle restaurants across the United States in 2015.  Three E. coli outbreaks (in Seattle in July, across 9 states in October and November, and in Kansas and Oklahoma in December) sickened more than 50 customers.  There were also two (unrelated) norovirus outbreaks (in California in August and in Boston in December) and a salmonella outbreak in Minnesota from August through September.

In addition to customers being sickened, the impacts to the company have been severe.  The outbreaks have resulted in significant negative publicity, reducing Chipotle’s share price by at least 40% and same-store sales by 30% in December.  Food from the restaurants impacted by the fall E. coli outbreak was disposed of during voluntary closings, and the company has invested in significant testing and food safety expertise.

Restaurant customers are typically sickened when they are served food contaminated with E. coli. Food ingredients can enter the supply chain already contaminated (as in the 2011 E. coli outbreak due to contaminated sprouts), or be contaminated during preparation, either from contact with a contaminated surface or from a person infected with E. coli. Testing hasn’t found contamination on any surfaces in the affected restaurants, any employees infected with E. coli, or any contaminated food products. While this is not uncommon (the source of the listeria outbreak that resulted in the recall of ice cream products has not yet been definitively determined), it does require more extensive solutions to ensure that any potential sources of contamination are eliminated.

Performing an investigation with potential, rather than known, causes can still lead to solutions that will reduce the risk of a similar incident recurring.  Potential or known causes can be determined with the use of a Cause Map, a visual form of root cause analysis.  To create a Cause Map, begin with an impacted goal and ask “why” questions to determine cause-and-effect relationships.  In this case, the safety goal was impacted because people got sick from an E. coli outbreak.  A contaminated ingredient was served to customers.  This means the ingredient either entered the supply chain contaminated or it was contaminated during preparation, as discussed above.  In order for a contaminated ingredient to enter the supply chain, it has to be contaminated with E. coli and not be tested for E. coli.  Testing all raw ingredients isn’t practical.

Chipotle is instituting solutions that address all potential causes of the outbreak.  Weekly and quarterly audits, as well as external assessments, will increase oversight.  Cilantro will be added to hot rice to decrease the presence of microbes.  The all-employee meeting on February 8 will cover food safety, including new sanitation procedures that will be used going forward.  The supply chain department is working with suppliers to increase sampling and testing of ingredients.  Certain raw ingredients that are difficult to test individually (such as tomatoes) will be washed, diced, and tested in a centralized prep kitchen, then shipped to individual restaurants.  Other fresh produce items delivered to restaurants (like onions) will be blanched (submerged in boiling water for 3-5 seconds) for sanitation prior to being prepared.

Chipotle has released a statement describing their efforts: “In the wake of recent food safety-related incidents at a number of Chipotle restaurants, we have taken aggressive actions to implement pioneering food safety practices. We have carefully examined our operations—from the farms that produce our ingredients, to the partners that deliver them to our restaurants, to the cooking techniques used by our restaurant crews—and determined the steps necessary to make the food served at Chipotle as safe as possible.”  It is hoped that the actions being implemented will result in the delivery of safe food, with no outbreaks, in 2016.

To view the impacts to the goals, timeline of outbreaks, analysis, and solutions, please click on “Download PDF” above.

The water crisis in Flint, Michigan

By Kim Smiley

The quality of tap water, or rather the lack thereof, in Flint, Michigan has been all over the headlines in recent weeks. But before a state of emergency was declared and the National Guard was called up, residents of the town reported strangely colored and foul-tasting water for months and were largely ignored. In fact, they were repeatedly assured that their water was safe.

Researchers have determined that lead levels in the tap water in Flint, Michigan are 10 times higher than previously measured. Forty-three people have been found to have elevated lead levels in their blood, and there are suspected to be more cases that have not been identified. Even at low levels, lead can be extremely damaging, especially to children under 6. Lead exposure can cause neurological damage, decreased IQ, learning disabilities and behavior problems. The effects of lead exposure are irreversible.

The water woes in Flint, Michigan began when the city switched its water supply to the Flint River in April 2014. Previously, the city’s water came from Lake Huron (through the city of Detroit water system). The driving force behind the change was economics. Using water from the Flint River was cheaper, and the struggling city needed to cut costs. Supplying water from the Flint River was meant to be a temporary move to hold the city over while a new connection to the Great Lakes was built within a few years.

The heart of the problem is that the water from the Flint River is more corrosive than the water previously used. The older piping infrastructure in the area used lead pipes in some locations as well as lead solder in some joints. As the more corrosive water flowed through the piping, lead leached into the water.

A Cause Map, a visual root cause analysis, can be built to document what is known about this issue. A Cause Map intuitively lays out the cause-and-effect relationships that contributed to an issue. Understanding the many causes that contribute to an issue leads to better, more detailed solutions that address the problem and prevent it from recurring. The Flint water crisis Cause Map was built using publicly available information and is meant to provide an overview of the issue. At this point, most of the ‘whats’ are known, but some of the ‘whys’ haven’t been answered. It isn’t clear why the Flint River water wasn’t treated to make it less corrosive, or why it took so long for officials to do something about the unsafe water. Open questions are noted on the Cause Map by including a box with a question mark in it.

This issue is now getting heavy media coverage, and officials are working on implementing short-term solutions to ensure the safety of residents. The National Guard and other authorities are going door-to-door handing out bottled water, water filters, and testing kits. Michigan Governor Richard Snyder declared a state of emergency in Flint on January 5, 2016, which allows more resources to be used to solve the issue. However, long-term solutions are going to be expensive and difficult.

The city’s water supply was switched back to Lake Huron in October 2015, but it will take more than that to “fix” the problem because there is still a concern about lead leaching from corroded piping. The corrosive water did significant damage to the piping infrastructure, and the tap water in at least some Flint homes is still not safe. It is estimated that fixing the piping infrastructure could cost up to $1.5 billion. A significant amount of resources will be needed to undo the damage done to the city’s infrastructure, and there is no way to undo the damage lead poisoning has already done to the area’s residents, especially the children.