
Patient receives double dose of radiotherapy

By ThinkReliability Staff

The risks associated with administering medical treatment are high. Errors are likely because of the complexity of the process involved in not only choosing a treatment, but ensuring that the amount and rate of treatment are appropriately calculated for the patient. The consequences of treatment errors are significant: death can and does result from inappropriately administered treatment.

Medical treatment includes delivery of both medication and radiation. Because of the high risk associated with administering both medication and radiation therapy, independent checks are frequently used to reduce risk.

Independent checks work in the following way: one trained healthcare worker performs the calculation associated with delivering the treatment. If the treatment is then delivered without further checks, the probability that the patient receives incorrect treatment is simply that worker’s error rate. (For example, a typical error rate for highly trained personnel is 1/1,000, so if only one worker is involved in the process, there is a 0.1% chance the patient will receive incorrect treatment.) With an independent check, a second trained worker performs the same calculation, and the results are compared. If the results match, the treatment is administered; if they don’t, a secondary process is implemented. The probability of the patient receiving incorrect treatment is then the product of both error rates. (If the second worker also has an error rate of 1/1,000, the probability that both workers will make an error on the same independently performed calculation is 1/1,000 x 1/1,000, or 0.0001%.)
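The arithmetic above can be sketched in a few lines of Python. (The 1-in-1,000 error rate is the illustrative figure used above, not a measured value, and the function name is ours.)

```python
def undetected_error_probability(error_rates):
    """Probability that every independent checker errs on the same calculation.

    Assumes the errors are truly independent. Shared procedures or
    training can correlate the errors and break this assumption.
    """
    p = 1.0
    for rate in error_rates:
        p *= rate
    return p

single = undetected_error_probability([1 / 1000])            # one worker
double = undetected_error_probability([1 / 1000, 1 / 1000])  # with independent check

print(f"single worker:     {single:.6%}")    # 0.100000%
print(f"independent check: {double:.10%}")   # 0.0001000000%
```

Note that multiplying the error rates assumes the two workers err independently; if both follow the same confusing procedure, the errors become correlated and most of this benefit disappears.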

However, in a case last year in Scotland, a patient received a significant radiotherapy overdose despite the use of independent checks and computer verification. In order to better understand how the error occurred, we can visually diagram the cause-and-effect relationships in a Cause Map. The error in this case is an impact to the patient safety goal, as a radiotherapy overdose carries a significant risk of serious harm. The Cause Map is built by starting at an impacted goal and asking “why” questions. All causes that contribute to an effect should be included on the Cause Map.

In this case, the radiotherapy overdose occurred because the patient was receiving palliative radiotherapy, the incorrect dose was entered into the treatment plan, and the incorrect dose was not caught by verification methods. Each of these causes is also an effect, and continuing to ask “why” questions develops more cause-and-effect relationships. The incorrect dose was entered into the treatment plan because two radiographers, working independently, calculated it incorrectly in the same way. Both made the same error in their manual calculations: this particular radiotherapy program involved two beams (whereas one beam is more common), so the prescribed dose had to be divided by two to give the dose per beam and keep the overall dose as ordered. That division was not performed, doubling the calculated dose. The inquiry into the overdose found that both radiographers used an old procedure that was confusing and not recommended by the manufacturer of the software controlling the radiotherapy delivery. A new procedure had been implemented in February 2015, but the radiographers had not been trained in it.
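The beam arithmetic can be illustrated with a minimal sketch. (The 20 Gy prescription here is a made-up number for illustration, not the dose from the actual case.)

```python
def dose_per_beam(prescribed_dose_gy, num_beams):
    """Correct calculation: split the prescribed dose across the beams."""
    return prescribed_dose_gy / num_beams

prescribed = 20.0  # Gy, hypothetical prescription
beams = 2

# Correct: each beam delivers half, so the total matches the order
correct_per_beam = dose_per_beam(prescribed, beams)   # 10.0 Gy per beam
delivered_correct = correct_per_beam * beams          # 20.0 Gy total

# The error: the full prescribed dose was entered for each beam
incorrect_per_beam = prescribed                       # division skipped
delivered_incorrect = incorrect_per_beam * beams      # 40.0 Gy total: double dose
```

Skipping one division step doubles the delivered dose, which is exactly why the calculation is supposed to be caught by an independent check.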

Once the two manual calculations were performed, the treatment plan (including the dose) was entered into the computer by a third radiographer. If the treatment plan does not match the computer’s calculations, the computer raises an alert and registers an error. The treatment plan cannot be delivered to the patient until this error is cleared. The facility’s process at this point involves bringing in a treatment planner to reconcile the computer and calculated doses. In this case, the treatment planner was one of the radiographers who had (incorrectly) performed the dose calculation in the first place. The radiographers involved testified that alerts came up frequently, and that any click would remove them from the screen, so some were missed altogether.

The inquiry found that somehow the computer settings were changed to make the computer agree with the (incorrect) manual calculations, essentially performing an error override. The inquiry found that the radiographers involved in the case believed that the manually calculated dose was correct, likely because they didn’t understand how the computer calculated doses (not having had any training on its use) and held a general belief that the computer didn’t work well for calculating two beams.

As a result of this incident, the inquiry made several recommendations for the treatment plan process to prevent this type of error from recurring. Specifically, the inquiry recommended that the procedure and training for manual calculation be improved, that independent verification be performed using a different method, that procedures for use of the computer be improved (including required training on its use), and that manual calculations be redone when not in agreement with the computer. All of these solutions would reduce the risk of the error occurring.

There is also a recommended solution that doesn’t reduce the risk of having an error, but increases the probability of it being caught quickly. This is to outfit patients receiving radiotherapy with a dosimeter so their received dose can be compared with the ordered dose. (In this case, the patient received 5 treatments; had a dosimeter been used and checked, the error would likely have been noticed after only one.)

To view the Cause Map for this incident, please click on “Download PDF” above.

Man Paralyzed By Medical Error Hopes to Fix System for Others

By ThinkReliability Staff

The team investigating medical errors that happened at a Washington hospital has an unusual member: the man who was paralyzed as a result of these medical mistakes.  Not only does he want to know what happened, he hopes that his design experience (he formerly designed for Microsoft) can be translated to healthcare to “make hospitals everywhere safer for patients.”

While the full analysis of his particular case is not yet complete, the information that is known can be captured in a Cause Map, a visual form of root cause analysis.  The process begins by capturing the what, when and where of the incident, as well as the impact to the organization’s goals.  In this case, the patient was treated at a Washington hospital’s emergency room for a back injury sustained on May 11, 2013. The interaction with the facility involved a missed diagnosis and poor communication, and eventually resulted in paralysis of the patient.  The patient safety goal is impacted due to the paralysis of the patient.  The financial goal is impacted due to a $20 million settlement against the hospital. (Part of the settlement included the hospital working with the patient on the analysis.)  The labor/time goal is impacted due to the months of rehabilitation required after the injury.

The second step of the process, the analysis, develops cause-and-effect relationships beginning with one of the impacted goals.  In this case, the patient safety goal was impacted due to the paralysis of a patient. The paralysis resulted from a spinal cord injury, which was caused by a significant back fracture.  There are times when more than one cause is required to produce an effect.  The significant back fracture was caused by an untreated hairline fracture on the back AND the patient being moved inappropriately.  If either of these things had not occurred, the outcome may have been very different.  The analysis continues by asking ‘why’ questions of both causes.

The patient had a hairline fracture on his back that resulted from a fall out of bed (due to “luxurious sheets”) and a condition (ankylosing spondylitis) that makes the spine brittle and more prone to fractures.  Beginning on May 12, 2013, the patient visited the hospital’s emergency room four times in two weeks. The hairline fracture was untreated because it was not diagnosed during any of those visits, despite the patient’s insistence that, because of his condition, he was concerned about the possibility of a back fracture.  While the hairline fracture was visible on the imaging scan, according to the patient’s lawyers, it was missed because the scan was focused on the abdomen.  The notes from the first doctor’s visit were not documented until five days after the encounter.

On May 25, 2013, two weeks after the initial injury, the patient returned to the emergency room for severe pain and an MRI was ordered.  While being positioned in the MRI, the patient lost neurological function from about the neck down.  He was transferred to another hospital, which found it likely that the paralysis had resulted from his positioning in the MRI.

The patient was inappropriately moved, given his injury (which at this point was still undiagnosed and untreated).  The patient was being positioned for an MRI ordered to find the cause of his back pain (probably due to the untreated hairline fracture of his back).  Either the previous imaging scan was not reviewed by the doctors at this visit, or the scan was unavailable.  Had the imaging scan indicating a hairline fracture been available, the MRI may not have been necessary. Even if an MRI had still been performed, the staff would have been aware of the fracture and would likely have moved the patient more carefully.

However, the staff was not aware of the injury.  The patient’s repeated concern about having a back fracture went unheeded during all of his visits, and the staff appeared to be unaware of the medical information from the three previous visits, likely due to ineffective communication between providers (a common issue in medical errors).

As more detail regarding the case is discovered, it can be added to the Cause Map.  Once all the information related to the case is captured, solutions that would reduce the risk of the problem recurring can be developed and those that are most effective can be implemented.  The patient will be a part of this entire process.

To view the initial Cause Map of this issue, click on “Download PDF” above. Or click here to read more.

Death from Patient-Controlled Morphine Overdose

By ThinkReliability Staff

Could improving the reliability of the supply chain improve patient safety?

The unexpected death of a patient at a medical facility should always be investigated to determine if there are any lessons learned that could increase safety at that facility. A thorough analysis is important to determine all the lessons that can be learned. For example, the investigation into a case where a patient death was caused by a morphine overdose delivered by a patient-controlled analgesia (PCA) found that increasing the reliability of the supply chain, as well as other improvements, could increase patient safety.

The information related to this patient death was presented as a morbidity and mortality case study by the Agency for Healthcare Research and Quality. The impacts to goals, analysis, and lessons learned from the case study can be captured in a Cause Map, a visual form of root cause analysis that develops the cause-and-effect relationships in sufficient detail to be able to find solutions that will reduce the risk of similar incidents recurring.

Problem-solving methodologies such as Cause Mapping begin with defining the problem. In the Cause Mapping method, the what, when and where of the problem is captured, as well as the impact to the goals, which defines the problem. In this case, the patient safety goal is impacted due to the death of a patient. Because the death of a patient under medical care can cause healthcare providers to be second victims, this is an impact to the employee safety goal. A death associated with a medication error is a “Never Event”, which is an impact to the compliance goal. The morphine overdose is an impact to the patient services goal. In this case, the desired medication concentration (1 mg/mL morphine) was not available, which can be considered an impact to the property goal. Lastly, the response and investigation are an impact to the labor/time goal.

The analysis begins with one impacted goal and develops cause-and-effect relationships. One way to do this is by asking “why” questions, but it’s also important to ensure that the cause listed is sufficient to have resulted in the effect. If it’s not, another cause is required, and will be joined with an “AND”. In this case, the patient death resulted from a morphine overdose AND a delayed response to the patient overdose. (If the response had come earlier, the patient might have survived.) It’s important to validate causes with evidence where possible. For example, the morphine overdose is a known cause because the autopsy found a toxic concentration of morphine. Each cause in the Cause Map then becomes an effect for which causes are captured until the Cause Map is developed to the point where effective solutions can be found.

The available information suggests that the patient was not monitored by any equipment, and that signs of deep sedation, which preceded respiratory depression, were missed during nurse checks. Related suggestions for promoting the safe use of PCA include the use of monitoring technology, such as capnography and oximetry, and assessing and recording vital signs, including depth of respiration, pain and sedation.

The patient in this case was given PCA morphine. However, too much morphine was administered. The pump settings were based on the concentration of morphine typically used (1 mg/mL). However, that concentration was not available, so a much higher concentration (5 mg/mL) was used instead. The settings on the pump were entered incorrectly for the concentration of morphine used, likely because of confirmation bias (effectively assuming that things are the way they always are: that the morphine on the shelf will be the one that’s usually there). There was no effective double check of the order, medication and pump settings.
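A rough sketch shows why the mismatch produces a fivefold overdose. (The 1 mg per-demand dose is a hypothetical number; the key point is that the pump meters volume, so the programmed concentration determines how many milligrams each bolus actually contains.)

```python
def delivered_dose_mg(volume_ml, concentration_mg_per_ml):
    """Milligrams actually delivered for a given volume and drug concentration."""
    return volume_ml * concentration_mg_per_ml

intended_dose_mg = 1.0   # hypothetical per-demand dose ordered
programmed_conc = 1.0    # mg/mL, the concentration the pump was set up for
actual_conc = 5.0        # mg/mL, the concentration actually loaded

# Programmed for 1 mg/mL, the pump pushes 1 mL to deliver a "1 mg" dose
volume_per_dose_ml = intended_dose_mg / programmed_conc   # 1.0 mL

actual_dose = delivered_dose_mg(volume_per_dose_ml, actual_conc)
print(actual_dose)   # 5.0 mg: five times the intended 1.0 mg per demand
```

Every bolus, not just the first, carries the same fivefold error, which is why the toxic concentration accumulated over time.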

Related suggestions for promoting the safe use of PCA include the use of “smart” pumps, which suspend infusion when physiological parameters are breached, the use of barcoding technology for medication administration (which would have flagged the use of a different concentration), performing an independent double check, storing only one concentration of medications in a dispensing cabinet (requiring other concentrations to be specially ordered from the pharmacy), standardizing and limiting concentrations used for PCA, and yes, improving the supply chain so that it’s more likely that the lower concentration of morphine will be available. Any of these suggestions would improve patient safety; implementation of more than one solution may be required to reach an acceptable level of risk. Imagine just improving the supply chain so that there would be very few (if any) circumstances where the 1 mg/mL concentration of morphine is unavailable. Clearly the risk of using the wrong concentration would be lessened (though not zero), which would reduce the potential for patient harm.

To view a one-page downloadable PDF with the outline, Cause Map, and action items, click “Download PDF” above. Click here to read the case study.

Healthy kidney removed by mistake

By Kim Smiley

The Patient Safety Network presented a case study where a patient with suspected kidney cancer had the wrong kidney removed.  Instead of the right kidney that showed suspected renal cell carcinoma in a CT scan, the healthy left kidney was removed. A second surgery was then performed to remove the right kidney and the patient was left dependent on dialysis after losing both kidneys.  The patient wasn’t a candidate for a kidney transplant because of the cancer.

Reviewing and understanding case studies such as this one is important because wrong-site surgeries are one of the more common serious medical errors.  A Cause Map, a visual root cause analysis, can be used to better understand the many causes that contributed to this wrong-site surgery, and better understanding the causes of an incident leads to development of better solutions.  The first step in building a Cause Map is to fill in an Outline with the basic background information.  These details are often not published for medical errors to protect patient privacy, but the information should be recorded if available.  The bottom of the Outline also includes space to list how the issue impacts the overall organizational goals. The Cause Map itself is built by starting at one of the impacted goals and asking “why” questions.

Focusing on the patient safety goal as a starting point, the investigation could be started by asking “why was a healthy left kidney removed instead of the right?” The surgeon who performed the surgery believed the tumor was in the left kidney because all patient information readily available stated the tumor was in the left kidney.  The case study didn’t include details on how this error in the patient’s record occurred, but it is known that a CT scan was initially performed at a different hospital than the one that performed the surgery.  The patient sought treatment at the first hospital after suffering from abdominal pain and hematuria and a CT scan was performed.  He was transferred to a second hospital for the surgery after the CT scan revealed suspected renal cell carcinoma.  An image of the CT scan was not included with the patient records at the time of transfer and the records noted that there was a tumor in the incorrect (left) kidney.

The stage was essentially set for a wrong-site surgery and the surgeon missed the opportunity to prevent it.  The surgeon chose to perform the surgery based on the records without either verifying the original CT (because it was not available) or requesting an additional CT scan to confirm the diagnosis.  It does not appear that the surgeon was required to review the CT scan; the decision on whether to do so was left up to the surgeon’s judgment. The error was only identified after the pathologist who examined the left kidney found no evidence of cancer and informed the surgeon, who then reviewed the original CT scan and realized the wrong kidney had been removed.

Once the causes that contributed to an issue have been identified, the final step in the Cause Mapping process is to identify and implement solutions to prevent a problem from reoccurring.  One way to prevent similar errors is to require labeled radiology images to be available to the surgeon prior to any surgery.  Requiring a review of images prior to the surgery would build in a double check to ensure the surgery is performed at the correct site.  Building in a double check of medical records may also reduce errors like the wrong kidney being listed as potentially cancerous or a patient being transferred with medical files missing important radiology images.

Why You Will Experience a Diagnostic Error

By ThinkReliability Staff

On September 22, 2015, the Institute of Medicine released a report entitled “Improving Diagnosis in Health Care“. The report was the result of a request in 2013 by the Society to Improve Diagnosis in Medicine to the Institute of Medicine (IOM) to undertake a study on diagnostic error. The tasking to the committee formed by the IOM matched the three-step problem-solving process: first, to define the problem by examining “the burden of harm and economic costs associated with diagnostic error”; second, to analyze the issue by evaluating diagnostic error; third, to provide recommendations as “action items for key stakeholders”.

The burden of harm determined to result from diagnostic errors is significant. Diagnostic errors are estimated to contribute to about 10% of hospital deaths, and 6-17% of hospital adverse events, clearly impacting patient safety. Not only patient safety is impacted, however. Diagnostic errors are the leading type of paid malpractice claims. They also impact patient services, leading to ineffective, delayed, or unnecessary treatment. This then impacts schedule and labor as additional treatment is typically required. The report found that, in a “conservative” estimate, 5% of adults who seek outpatient care in the United States experience a diagnostic error each year, and determined that it is likely that everyone in the US will experience a meaningful diagnostic error in their lifetime.

The report also provided an analysis of issues within the diagnostic process (to learn more about the diagnostic process, see our previous blog) that can lead to diagnostic errors. Errors at any step of the process can lead to an incorrect diagnosis. If a provider receives inaccurate or incomplete patient information, due to inadequate time or communication with a patient, compatibility issues with health information technology, or an ineffective physical exam, making a correct diagnosis will be difficult. Ineffective diagnostic testing or imaging, which can be caused by numerous errors during the process (detailed in the report), can also lead to a misdiagnosis, as can diagnostic uncertainty or bias. However, not all errors are due to “human error”. The report asserts that diagnostic errors often occur because of errors in the health care system, including both systemic and communication errors.

When diagnostic errors do occur, they can be difficult to identify. The data on diagnostic errors is sparse due to both liability concerns and a historical lack of focus on diagnostic errors. In addition, there are few reliable methods for measuring diagnostic errors, and diagnostic errors can frequently be definitively determined only in retrospect.

The report identifies eight goals for improving diagnosis and reducing diagnostic errors that address these potential causes of diagnostic errors. These goals are presented as a call to action to health care professionals, organizations, patients and their families, as well as researchers and policy makers.

To view a high-level overview of the impacts to the goals, potential causes and recommendations related to diagnostic error presented in a Cause Map, or visual root cause analysis, click on “Download PDF” above. To learn more:

To read the report, click here.

For an overview of the diagnostic process, click here.

For an example of a diagnostic error with extensive public health impacts, click here.

Identifying and Preventing Causes of Lab Errors

By ThinkReliability Staff

A man was mistakenly told he had HIV. A baby who died from a blood disorder that could have been treated during pregnancy, but wasn’t because the routine blood screen came back clear. A little girl who had to receive a second transplant after the test to verify her acceptance of a new organ was run incorrectly. These are just some of the cases mentioned in a watchdog report about how laboratory errors and weak oversight put patients at risk.

There are 7 to 10 billion medical laboratory tests run in the US every year. Lab tests influence about 70% of medical decisions. Having the wrong information from these tests can be deadly, and there is no good data about how many lab tests may be inaccurate, or may be negatively impacting patient safety. Laboratories are generally overseen by accrediting organizations, but the results are almost always private, and there have been recent cases where federal regulators had to step in because serious deficiencies in lab processes were identified.

The risk isn’t just for patients. An employee was infected with HIV and hepatitis C after a machine malfunctioned, splashing contaminated blood product onto her face. The employee had warned her boss previously that the machine was broken and cross-contaminating samples. Patients can also receive wrong information that isn’t harmful to their physical health but causes all sorts of other problems, such as incorrectly run paternity tests that improperly rule out a man as the father of a child.

The process involved in laboratory testing – from taking a specimen from a patient to delivering the results – is complex, and there are potential issues at each step that can lead to inaccurate results. These causes can be visually diagrammed in a Cause Map, or a visual cause-and-effect diagram. (To view the Cause Map, click “Download PDF” above.) In this case, potential causes of lab errors are captured and analyzed for potential solutions. These causes include labeling of samples, time and storage conditions of the samples, use of proper (and non-expired) products to treat the samples, and calibration of the machines used for the testing.

Actions that reduce the risk of inaccurate lab results should be in place at all labs, but even with a well-planned process, mistakes can happen. That makes the addition of checks and oversight into the process incredibly important. Says Michael Baird, the chief science officer and laboratory director at DNA Diagnostics Center, “I will agree that mistakes are something that can happen whatever you do. You just need to have the appropriate controls in place for when a mistake happens, (so) you can catch it before it goes out the door.”

For example, at the lab Baird runs, samples used for DNA checks are run independently by two different technicians and when a man is ruled out as the father of a child, there is a double-check in place. Other labs have incorporated alert systems for time-sensitive specimens and have hired technical directors responsible for overseeing the labs.

There are also steps patients themselves can take to minimize the impact on their safety from potential lab testing errors. First, ensure that any samples taken are labeled immediately and with accurate information. If you’re at all unsure about a test result, get a second opinion at a different lab. Complaints about a lab should be directed to state health officials.

To view the Cause Map addressing potential causes of laboratory errors, click “Download PDF”. To learn more, read the watchdog report.

Patient Rediagnosis Lifts Death Sentence

By ThinkReliability Staff

A patient diagnosed with stage-four lung cancer and given less than a year to live was relieved to have the correct diagnosis – treatable sarcoidosis. However, he was then concerned about how the misdiagnosis occurred in the first place, risking his health had he gone along with a recommendation for chemotherapy.

After nearly a year of coughing, the patient underwent a CT scan and was diagnosed with sarcoidosis, a treatable lung disease. Shortly after that diagnosis, a lung biopsy found the presence of stage-four lung cancer. The patient was referred to an oncologist, who recommended chemotherapy. Luckily for the patient, the diagnosis didn’t seem right and he sought a second opinion. Although the tissue samples from the initial biopsy tested positive for stage-four lung cancer, additional biopsy results showed that he did not have cancer.

In order to reduce the risk of a diagnosis error like this one occurring in the future, it’s important to identify all the causes that contributed to the issue. We can capture these cause-and-effect relationships in a Cause Map, or visual root cause analysis. The first step in the Cause Mapping process is to capture the impacts to the goals. In this case, there was a potential impact to patient safety from the patient receiving unneeded chemotherapy, which could have worsened the actual disease from which he was suffering. The patient services goal was impacted by the misdiagnosis, and the property and labor goals were both impacted by the potential for unneeded treatment, as can occur with any misdiagnosis.

Once the impacts to the goals have been determined, the next step is to determine the cause-and-effect relationships by beginning with an impacted goal and asking “why” questions. The misdiagnosis resulted from the contamination of a biopsy sample sent to the lab to determine the pathology of the disease. The patient’s lab sample was contaminated with stage-four lung cancer from another patient. (DNA testing confirmed the presence of both the patient’s biopsy sample and tissue from a sample of the lung cancer sufferer.) The presence of DNA from both samples indicates that they were cross-contaminated, though how the cross-contamination occurred is still unknown. (The patient with lung cancer was properly diagnosed.) Because the two diseases are pathologically similar, it was not immediately clear that there was a problem with the sample used to make the diagnosis.

Once the patient sought another opinion, it was verified that the first biopsy sample did contain cancer cells. However, another biopsy and blood tests showed he did not have cancer. Even with this information, the original hospital stood by its cancer diagnosis until another biopsy was performed at that facility; only five months after the initial cancer diagnosis was the diagnosis updated to sarcoidosis. The patient filed a complaint with that hospital, as well as the hospital where chemotherapy was recommended, on October 23, 2014.

Once all the causes are determined, solutions can be determined that address the various causes. Because it’s still not clear how the cross-contamination at the lab occurred, an investigation specifically addressing that issue should occur, looking in detail at the specimen handling procedures and adding improvements where necessary to reduce the risk of cross-contamination. (The risk is already very low; the lab has said that it generally handles 70,000 specimens a year and this is the first contamination issue known.)

Additionally, the method for reconsidering diagnoses based on additional testing from alternate providers must be examined. Though the initial misdiagnosis in this case, based on a lab sample that clearly showed the presence of cancer cells, is understandable enough, the ensuing delay in updating the diagnosis despite heavy pushback from the patient is not. Ideally the lessons learned from this case will provide safer and more effective healthcare for everyone.

Re-engineered connectors may prevent tubing mix-ups

By ThinkReliability Staff

Patients being treated at healthcare facilities may have multiple tubing connections designed to provide different medical treatment products.  Disaster can occur when tubing is connected incorrectly.  The most well-studied example is when enteral feeding solutions are accidentally directed into intravenous (IV) lines.  The Joint Commission (TJC) found 116 case studies, including 21 that resulted in deaths, due to this type of misconnection, discussed in a recent Sentinel Event Alert.  In 2006, this happened to a pregnant woman, resulting in the deaths of both her and her fetus.  (See our previous blog on this topic.)

Issues involving tubing misconnections have been reported since the 1970s and are not limited to feeding tube/IV mix-ups.  Many types of tubing for medical product delivery are susceptible to misconnection due to the use of luer connectors, which allow a high degree of connection compatibility.  Concern is growing over these types of connections, and it’s become clear that training is not adequate to prevent these types of misconnections.  According to the Joint Commission, “The basic lesson from these cases is that if it can happen, it will happen.  Luer connectors are implicated in or contributed to many of these errors because they enable functionally dissimilar tubes or catheters to be connected.”

Action is being taken to prevent these misconnections by ensuring that each type of tubing has its own, non-compatible connector.  The Association for the Advancement of Medical Instrumentation (AAMI) is leading an international initiative to develop small-bore connector standards.  The standards are now open for public review (along with other helpful information, including case studies) on the AAMI website.  The State of California has passed legislation requiring manufacturers of intravenous, epidural or enteral feeding devices to implement the new standards that aim to prevent misconnection, though it does not take effect until January 1, 2016.

Connectors meeting the new, non-compatible standards are becoming available, but compatible connectors may still be in use and in inventory systems.  To limit the risk of patient death or serious impairment while the new connectors are phased in, TJC recommends that tubing be traced from the patient to its point of origin every time it is connected or disconnected, and at every patient hand-off.  Labeling high-risk tubing and routing tubes and catheters in specific directions (towards the head for IVs; towards the feet for enteric feeding tubes) have also been recommended to reduce the risk of misconnection.

Going forward, only non-IV equipment that is incompatible with female luer connectors on IVs should be purchased, and acceptance testing should be performed on new purchases to confirm they cannot be connected to IV tubing.

The cause-and-effect relationships leading to the patient safety risks, along with the solutions being recommended, can be visually diagrammed in a Cause Map, or visual root cause analysis.  To view a one-page overview of the issue, please click on “Download PDF” above.

Wrong Dye Injected into Spine During Surgery

By Kim Smiley

In the high-stress, fast-paced operating room environment, high-consequence errors can and do happen, but the risk can be reduced by analyzing medical errors and improving standard work processes.  A recent case, in which a woman died unexpectedly after a routine procedure to implant a medication pump beneath her skin, offers many potentially useful lessons.  The wrong dye was injected into her spine during the surgery, the type of error that should be entirely preventable.

A Cause Map, or visual root cause analysis, can be used to analyze this issue.  To build a Cause Map, all causes that contribute to an issue are laid out visually to show the cause-and-effect relationships.  The general idea is to ask “why” questions to determine ALL the causes (plural) that contributed to the problem, rather than focusing the investigation on a single root cause, because this allows a wider range of solutions to be considered.

So why did the wrong dye get injected into the patient?  Dye was used during the surgery to verify the location of tubing threaded into the patient’s spine, and the wrong dye was selected.  The surgeon needed the dye to verify the location because the tubing was inserted during the surgery and was difficult to see.  The tubing was part of a pump being stitched under the patient’s skin to administer medication directly to the spine to treat symptoms from a back injury; the patient had broken several vertebrae in a fall.

And now on to the meatier part of the discussion with regard to medical error prevention: why was the wrong dye used?  The request for the medication (dye) was given orally by the doctor to a nurse, who passed it along to the pharmacy, so it is possible that the pharmacist missed that the dye was intended for use in the spine.  The exact point where the work process broke down wasn’t clear in the media reports, but it is known that several checks in the process had to fail for this type of error to occur.
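The observation that several checks had to fail echoes the independent-check math at the top of this page: when checks are truly independent, their error rates multiply.  A minimal sketch of that arithmetic (in Python, using the illustrative 1/1,000 error rate from above, not figures from this incident):

```python
def p_all_checks_fail(error_rates):
    """Probability that every independent check misses the same error."""
    p = 1.0
    for rate in error_rates:
        p *= rate
    return p

# One worker alone: roughly a 0.1% chance of an uncaught error.
print(p_all_checks_fail([1/1000]))

# Two independent checks: roughly one in a million.
print(p_all_checks_fail([1/1000, 1/1000]))
```

The multiplication only holds if the checks are genuinely independent; a shared upstream error, such as a misheard oral order, can defeat every downstream check at once.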

Following this incident, the hospital did change its work processes to help reduce the likelihood of a similar error.  The hospital now uses detailed written orders for medications, except in emergencies when that isn’t possible.  The written order includes information about how the medication will be administered, which in this case would have clarified that the dye was intended for use in the spine.

The Willie King Case: Wrong Foot Amputated

By Kim Smiley

In one of the most notorious medical errors in US history, the wrong foot was amputated from a patient named Willie King on February 20, 1995.  Both the hospital and the surgeon involved paid hefty fines, and the media had a feeding frenzy covering the dramatic and alarming mistake.

So how did a doctor remove the wrong foot?  Such a mistake seems difficult to comprehend, but was it really as mind-boggling as it looks at first glance?

The bottom line is that the doctor honestly believed he was removing the correct foot when he began the surgery.  The blackboard in the operating room and the operating room schedule both listed the wrong foot because the scheduler had accidentally recorded it.  After reading the incorrect paperwork, the nurse prepped the wrong foot.  When the doctor entered the operating room, the wrong foot was prepped and the most visible documentation listed the wrong foot.  Basically, the stage was set for a medical error to occur.

The foot itself also looked the part.  The patient was suffering from complications of diabetes and both of his feet were in bad shape.  The “good” foot that was incorrectly removed looked like a candidate for amputation, so there were no obvious visual clues that it wasn’t the intended surgical site.  Other doctors testified in the surgeon’s defense, saying that the majority of surgeons would have made the same mistake under the same circumstances.

Some paperwork did list the correct foot, such as the patient’s consent form and medical history.  This paperwork was available in the operating room, but no procedures in place at the time required the doctor to check these forms, and they were far less visible than the documents listing the incorrect information.  Additionally, the doctor never spoke directly with the patient prior to the surgery, another missed opportunity for the mistake to be caught.

Clearly, procedures needed to change to prevent future wrong-site surgeries, and a number of changes have been incorporated since this case to help reduce the risk of this type of medical error.  Since 2004, surgeons in Florida have been required to take a timeout prior to beginning a surgery, during which they must confirm that they have the right patient, the right procedure and the right surgical site.

Mistakes will always happen, such as numbers being transposed or words misheard over the phone, but small mistakes need to be caught before they become big problems.  Procedures like a timeout can significantly reduce the likelihood of an error going uncorrected.  In an ideal world, the scheduler’s simple mistake would have been caught long before it culminated in surgery on the wrong body part.

A visual root cause analysis, called a Cause Map, can be built to illustrate the facts of this case.  A Cause Map intuitively lays out the cause-and-effect relationships including all the causes that contributed to an issue.  To view a Cause Map of this example, click on “Download PDF” above.