

Evolution of a Diagnosis

How Testing HIV Antibody Positive Became Equivalent to Having HIV

by Rodney Richards

"You Bet Your Life"
6 February 2007




If it is indeed the case, as the FDA and manufacturers of HIV antibody tests contend, that the significance of a positive Western Blot (WB) in healthy blood donors (or anyone without symptoms of AIDS) is not known, then what could have possibly motivated the FDA to approve WB for use in this population in the first place? In fact, there was a very compelling reason for this approval, and it had nothing at all to do with confirming persons to be infected with HIV.

With the release of Abbott’s ELISA screening test in 1985, it was well known that the vast majority of positive screening tests in blood donors would likely represent false-positives (1). In fact, according to estimates from the experts, at least 25,000 units of blood tested falsely positive for antibodies to HIV in the first year of screening alone (2), and by the time WB was approved, this number was likely well in excess of 50,000.
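To see why experts expected most reactive screens in donors to be false positives, it helps to work through the arithmetic of screening a population in which true infection is rare. The following sketch is illustrative only: the prevalence, sensitivity, and specificity values are hypothetical round numbers chosen for the example, not figures taken from the references cited here.

    # Illustrative only: why a screening test applied to a very-low-prevalence
    # population yields mostly false positives. Prevalence, sensitivity, and
    # specificity below are hypothetical round numbers, not measured values.

    prevalence = 0.0001     # assume 1 in 10,000 donors truly infected (hypothetical)
    sensitivity = 0.99      # assume the screen detects 99% of true infections (hypothetical)
    specificity = 0.997     # assume 3 false reactions per 1,000 uninfected donors (hypothetical)

    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * (1 - specificity)
    ppv = true_positives / (true_positives + false_positives)

    print(f"Share of reactive screens that are true positives: {ppv:.1%}")   # about 3%
    print(f"False positives per million donations: {false_positives * 1e6:,.0f}")

Under these assumed numbers, roughly 97 of every 100 reactive screens would come from uninfected donors – the same qualitative point made above.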

What the experts do not emphasize is that according to CDC guidelines (3), every single one of these donors had to be informed by collection agencies that they might be infected with HIV, and that they should work it out with their private physicians. Unfortunately, physicians had no FDA-approved tools that could be used to distinguish false- from true-positive results at that time. As such, the human cost of protecting the blood supply was to condemn tens of thousands of healthy blood donors each year to a compromised life of fear and anxiety as a result of false-positive screening test results. This was more than a problem – it was a silent catastrophe.

One has to remember that at the time these healthy donors were informed they might be infected, there were absolutely no treatment options, and the media had already hyped such infection as an implicit death sentence. Furthermore, since the only possible risk factor for these donors was heterosexual sex, they were left with the additional burden of pondering how many others they might have condemned to death.

While it may be the case that some had the strength to carry this burden in silence, it is likely that many felt morally and ethically obliged to share the bad news with their contacts in order to stop the scourge they had possibly sown from spreading even further. In other words, like a contagion, the fear and anxiety wrought by these 50,000 false diagnoses likely spread well beyond the direct victims; and while each of these individuals remained ignorant of the fact that there were tens of thousands of others fighting the same battle, public health officials were well aware that they had created an epidemic of fear and confusion that might grow as fast as the very epidemic they sought to avert.

Ordinarily, public officials might embrace such widespread fear as a tool for motivating behaviors deemed to be in the interest of the public. For example, terror spawned from news that the AIDS epidemic had already spread widely into the heterosexual population could have served nicely for encouraging either abstention – for those on the right – or alternatively, condom use – for those on the left. However, in the case at hand, there was a problem; namely, the vast majority of these donors were among the healthiest of the healthy. Clearly, if HIV was going to be held out to the public as an invariable death sentence, something had to be done to demonstrate that these individuals were not infected; and that something was the approval of WB in 1987.

Specifically, as a result of this approval, the 50,000 persons who had already been told they might be infected with HIV could finally be reassured that they were not infected by virtue of either a negative, or “persistently indeterminate,” WB test (3). This is why the FDA approved the WB in 1987 – not for confirming persons as positive for antibodies (i.e., a result of unknown significance), but rather for confirming healthy blood donors as negative for antibodies (i.e., a result of immense significance). In fact, to date, this product has likely spared close to a half-million donors – not to mention their families and sexual contacts – from the devastating psychological and sociological consequences of being informed they might be infected with a deadly virus on the basis of a false-positive screening assay.

While this was certainly good news for blood donors, it simultaneously proved to be a nightmare for epidemiologists – in particular, those who were actively seeking to shore up the perceived link between HIV and AIDS. Specifically, even though the Department of Health and Human Services (DHHS) felt that the 36% correlation between HIV and AIDS revealed by Gallo et al. in 1984 was sufficient to announce to the global media that the probable cause of AIDS had been discovered, those in the research community knew this hypothesis would never survive the test of time unless the perceived correlation between this germ and the new syndrome could be strengthened substantially.

Given that even as early as 1985, researchers could use WB to confirm the presence of antibodies to HIV in as many as 80% of AIDS patients, the possibility of using antibody tests to declare infection looked quite attractive; and as outlined above, the CDC had already been busy setting the stage for this to happen. Unfortunately, the emergence of tens of thousands of antibody-positive – but healthy and risk-free – blood donors over the next two years prevented this from happening. And while it is the case that WB could successfully be used to demonstrate that the vast majority (about 95%) of these healthy donors were indeed not infected, this was achieved only by adopting a very strict set of rules (i.e., interpretive criterion) for declaring WB as positive. In other words, the more difficult it was to score positive on WB, the more ELISA-positive blood donors could ultimately be told they were negative. The discouraging consequence of this strict criterion, however, was that only about half of all AIDS patients could now be confirmed as positive for antibodies to HIV (4).
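The tradeoff described here can be made concrete with a toy calculation. In the sketch below the band-count distributions are invented for illustration; only the rough endpoints (about 95% of ELISA-reactive donors cleared, and roughly 80% versus 50% of AIDS patients confirmed under looser versus stricter readings) echo figures cited in the text. The point is simply that the more bands a “positive” WB requires, the more donors can be told they are negative and the fewer AIDS patients can be confirmed.

    # Toy illustration with hypothetical numbers: tightening the Western blot
    # "positive" criterion clears more ELISA-reactive donors as negative,
    # but confirms fewer AIDS patients as antibody-positive.

    # fraction of each group showing at least k reactive bands (invented values)
    elisa_reactive_donors = {1: 0.80, 2: 0.40, 3: 0.15, 4: 0.08, 5: 0.05}
    aids_patients         = {1: 0.95, 2: 0.90, 3: 0.80, 4: 0.65, 5: 0.50}

    for k in range(1, 6):
        cleared   = 1 - elisa_reactive_donors[k]   # donors who can be told they are negative
        confirmed = aids_patients[k]               # AIDS patients scoring WB-positive
        print(f"require >= {k} bands: {cleared:.0%} of reactive donors cleared, "
              f"{confirmed:.0%} of AIDS patients confirmed")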

It had now been three years since the probable cause of AIDS had been announced to the world, and epidemiologists were still unable to establish even a remotely respectable correlation between Gallo’s hypothetical germ and the new syndrome. With the sacrifice of WB to the cause of saving the blood donors, epidemiologists were back to where they started; however, this time, not only were they without a correlation, they were likewise without a single scientific tool that could be used to establish one. With this background, it is perhaps easier to understand why the CDC had to start inventing things out of thin air back in 1987 – the fact that evidence for infection could be demonstrated in only about half of all AIDS patients was unacceptable, and aggressive damage control was in order.

For starters, the CDC went ahead with the “antibody indicates current infection” proclamation, as detailed in their August 14, 1987 publication. At least this allowed scientists to create the impression that about half of all AIDS patients were infected. Furthermore, with the implementation of WB testing in 1987, only about 1/10,000 blood donors would now have to be told they were infected (about 1,500 per annum); and in spite of the FDA’s and manufacturers’ insistence that the significance of such test results is not known, telling these donors they were infected was apparently a small price to pay for the illusion of a laboratory test for HIV (i.e., declaration of infection through the detection of antibodies on WB). But how did the CDC create the illusion that the other half of their AIDS patients were likewise infected?
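The per-annum figure here can be checked against the donation volume given in note 3 (roughly 15 million units per year); the quick sketch below simply combines the approximate numbers quoted in this essay and is not based on independent data.

    # Rough check of the figures quoted in this paragraph and in note 3
    # (all numbers are the essay's approximations, not independent data).
    annual_donations = 15_000_000        # ~15 million units donated in the US per year (note 3)
    wb_positive_rate = 1 / 10_000        # ~1 in 10,000 donors confirmed positive after 1987

    print(f"Donors told they are infected per year: {annual_donations * wb_positive_rate:,.0f}")
    # prints 1,500 – versus the 15,000 to 45,000 repeat-reactive ELISA results
    # per year cited in note 3 before WB confirmation was available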

Well, on the very same day that the CDC conjured up the “antibody indicates current infection” story, they also revised the case definition for AIDS so that persons perceived to have AIDS could be declared infected either on the basis of ELISA testing alone, or even in the absence of any testing whatsoever (i.e., presumptive diagnoses), provided they had certain confirmed illnesses considered indicative of HIV disease (5).

How did the CDC justify these presumptive diagnoses? Well, “because not to count them would be to ignore substantial morbidity resulting from HIV infection.” And if this were not absurd enough, in cases where persons had PCP or low T-cell counts (<400/mm³), they could be declared infected (i.e., counted as AIDS patients) even if they tested negative for antibody to HIV (5). As such, the AIDS epidemic marched on unfettered by the threat of any scientific facts. In fact, 1987 was a banner year for the CDC, with reported AIDS cases increasing from only 13,195 in 1986 to 20,745 in 1987 (6).

Although the CDC managed to avert the disaster posed by the release of the FDA-approved WB in 1987, they knew their desperate actions would only serve as a short-term band-aid, and that further actions would have to be taken. Over the next few years, the CDC would continue to bring perceived harmony between infection and clinical AIDS by encouraging myths about disappearing antibodies in AIDS patients (i.e., the reason AIDS patients don’t test positive on WB is that they have lost their ability to make antibodies), and finally by encouraging diagnostic testing facilities to once again simply use a different criterion for scoring WB.

So what was the CDC’s justification for encouraging the use of this new WB criterion? Well, to put it quite simply, because the new criterion maximized the number of AIDS patients and healthy homosexuals who could be told they were infected. Furthermore, since persons scoring indeterminate under the old criterion could now be told they were infected, this would “reduce the...cost and difficulty of counseling persons with indeterminate test results, and cost of specimen testing.” (7) In other words, why waste time and money on follow-up testing of patients with indeterminate test results – an exercise that may reveal them to be uninfected – when you can simply tell them they are infected right up-front? This would also serve to spare patients the confusion and anxiety associated with indeterminate results.

So while blood banks were using one set of rules for scoring persons positive on WB, diagnostic testing laboratories – at the behest of the CDC – were using another. In keeping with this practice, scientists could continue to minimize the number of blood donors who had to be told they were infected (using the FDA criterion), while at the same time maximizing the number of homosexuals, bisexuals, and drug users who could be told they were infected (using the CDC criterion). But still, this duplicity could not be hidden forever, and the CDC’s only hope for establishing a credible link between perceived infection with HIV and AIDS was to campaign for a formal change to the FDA-approved criteria for scoring WB as positive. Their persistence was rewarded when, nearly six years later, in 1993, the FDA approved a new WB kit that utilized the CDC criterion for defining samples as positive.

So engaged were scientists in the debate over what should be the appropriate criteria for declaring a WB as positive that they all but forgot that the CDC’s original proclamation that antibody indicates current infection was without any merit in the first place. By 1993, scoring positive for antibodies on WB – rightly or wrongly – had become synonymous with infection. As such, with the newly approved criterion for scoring WB, infection could finally be demonstrated in the vast majority of patients with clinical AIDS. But still, there remained one last hurdle. Even with the revised criteria for scoring WB, there remained a substantial number of AIDS patients in whom no evidence for antibody could be confirmed; and in order for HIV to be the putative cause of AIDS, it necessarily had to be found in 100% of patients.

So how did the experts deal with these remaining antibody-negative AIDS patients? Well, they simply reclassified them as having something other than AIDS. After all, by the end of 1992, cumulative reported AIDS cases had already reached a quarter million, and the loss of a few thousand AIDS cases was only a drop in the bucket – a small price to pay for a perfect correlation between HIV and AIDS. In other words, the CDC constructed a perfect 100% correlation between HIV and AIDS simply by getting rid of all AIDS patients for whom no evidence of HIV could be found. The only problem was...if not AIDS, what did these patients have? Well, nobody knew, but if they didn’t have evidence for HIV, they couldn’t have AIDS, so epidemiologists simply invented a new syndrome for them; and in order to make it sound real, they gave it an official name: idiopathic CD4+ T-lymphocytopenia, or ICL.

Finally, nine years after the fact, the CDC had their correlation. In fact, so attractive was this correlation that the CDC revised its AIDS case definition in 1993 to include confirmed antibody testing as a prerequisite for a diagnosis of AIDS. In other words, since the new case definition required persons to test positive on ELISA and WB before they could be counted as an AIDS case, the correlation between perceived HIV and AIDS would necessarily be a self-fulfilling 100% on into the future. Furthermore, the minority of patients who would be lost from the AIDS statistics as a result of ICL (i.e., sick persons in risk groups who fail to score positive on WB – the ICL patients) would not even be noticed, because the CDC also dramatically expanded the list of conditions that could be used to declare other antibody-positive persons as AIDS patients. And finally, in cases where researchers or physicians really wanted an ICL patient to have AIDS – for example, in order to treat them with antiretroviral drugs – they could simply invoke the disappearing-antibody principle.

Although no one can argue that 1993 was anything but a banner year for the CDC, their victory did not come without a cost. Specifically, with the adoption of the CDC’s liberal criterion for scoring WB, researchers associated with blood banks noticed a sudden and statistically significant increase in the number of blood donors who had to be informed they were infected (8, 9). And in spite of the fact that this new criterion was known to be prone to false-positive reactions (10, 11); that extensive follow-up studies (8, 9) have indicated “most of these [new] results are false positives” (9); and that false positives arising from the CDC criterion “may represent as many as 10% of all HIV-positive interpretations among donor populations” (12); the research community remains silent – apparently content to knowingly sacrifice the lives of these donors in exchange for the illusion of a correlation between HIV and AIDS.

Given that the FDA and manufacturers of these tests contend that the significance of a positive ELISA and WB in healthy blood donors is not known, it may well be that 100% of the persons in low-risk populations who have been told they are infected are actually HIV negative. However, to know with certainty that at least 10% of low-risk individuals diagnosed with HIV since 1993 are actually not infected, and to do nothing about it, is incredible. According to the authors of one of the above studies, “our data suggest that from 48 to 56 blood donors annually are misclassified as HIV-1 infected based on a combination of false-positive EIA and Western blot results.” (10) To be sure, these are not astronomical numbers, but the authors go on to emphasize that: “The misclassification of even one HIV-uninfected person as HIV infected has serious consequences for that person, their family, and the institution providing the notification.” (10)

So why is nothing done to rectify this problem? Well according to the authors of one of the studies that uncovered false-positive WBs arising from the CDC criterion: “After reviewing the findings of the present study with CDC and FDA scientists, we decided that a revision of WB interpretive criteria is not warranted at present. The rationale is that the public health benefits of correct classification of a large number of infected persons as positive under the revised criteria (rather than their misclassification as indeterminate under the earlier criteria) outweigh the rare occurrence of false-positive WBs.” (9)

In other words, since many more gay men, bisexuals and IV drug users can be told they are infected under the revised criteria, it must be right, and to knowingly tell a few HIV-negative blood donors they are infected is worth it.

Footnotes and references

  1. CDC. "Provisional Public Health Service inter-agency recommendations for screening donated blood and plasma for antibody to the virus causing acquired immunodeficiency syndrome." MMWR January 11, 1985; 34: 1-5.
  2. Leitman SF, et al. "Clinical implications of positive tests for antibodies to Human Immunodeficiency Virus Type I in asymptomatic blood donors." NEJM 1989; 321: 917-24.
  3. Although the manufacturers of WB tests make no formal claim that their tests can be used to exclude infection (because a person may have been recently infected and therefore not yet have developed the antibody response needed to score positive; i.e., they may not yet have seroconverted), it was, and still is, used for that purpose. In fact, the CDC would come to formally endorse the use of WB for this purpose in 1989 (see the reference at the end of this note). In routine practice, and depending on the screening assay used, anywhere from about 1/300 to 1/1000 donated blood samples will score repeatedly reactive on screening with ELISA. Given that there are approximately 15 million units of blood donated in the US annually, this would correspond to 15,000 to 45,000 positive screening results annually. Depending on the collection site, follow-up WB testing will reveal 90-99% of these results to be false positives. Unfortunately, about a third of these false positives will initially score “indeterminate” on WB, which requires the patient to be retested in 1-3 months in order to confirm that their initial result was not representative of seroconversion. If their follow-up sample scores either negative on ELISA, or positive on ELISA but negative or again indeterminate (“persistently indeterminate”) on WB, the patient can be “reassured that they are almost certainly not infected.” (CDC. MMWR July 21, 1989; 38/S-7: 1-7.) A schematic sketch of this screening and follow-up flow, in code form, appears after these notes.
  4. The Consortium for Retrovirus Serology Standardization. "Serological diagnosis of Human Immunodeficiency Virus infection by Western blot testing." JAMA 1988; 260: 674-9.
  5. CDC. "Revision of the CDC surveillance case definition for Acquired Immunodeficiency Syndrome." MMWR (Supplement) August 14, 1987; 36/No. 1S: 1-15S.
  6. Center for Infectious Disease, Centers for Disease Control. "AIDS weekly surveillance report – United States AIDS program." December 28, 1987.
  7. Sayre KR, et al. "False-positive human immunodeficiency virus type 1 Western blot tests in non-infected blood donors." Transfusion 1996; 36: 45-52.
  8. Kleinman S, et al. "False-positive HIV-1 test results in a low-risk screening setting of voluntary blood donation." JAMA 1998; 280: 1080-5.
  9. Aberle-Grasse J, et al. "Impact on human immunodeficiency virus type 1 (HIV-1) seroprevalence of the change in HIV-1 Western blot criteria." Transfusion 1997; 37: 246-7.
  10. Bukrinsky MI, et al. "Reactivity to gag- and env-related proteins in immunoblot assay is not necessarily indicative of HIV infection." AIDS 1988; 2: 405-406.
  11. Healey DS and Bolton WV. "Apparent HIV-1 glycoprotein reactivity on Western blot in uninfected blood donors." AIDS 1993; 7: 655-8.
  12. Dodd RY and Stramer SL. "Indeterminate results in blood donor testing: What you don’t know can hurt you." Transfus Med Rev 2000; 14: 151-60.
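The screening and follow-up flow described in note 3 can be summarized schematically. The sketch below is only a paraphrase of that note in code form; the function name, parameters, and return strings are invented for illustration, and this is not an official or complete testing algorithm.

    # Schematic paraphrase of the donor screening flow described in note 3.
    # Names and return strings are invented; this is not an official algorithm.

    from typing import Optional

    def donor_screening_outcome(elisa_repeat_reactive: bool,
                                wb_result: str,
                                followup_result: Optional[str] = None) -> str:
        """wb_result: 'positive', 'negative', or 'indeterminate'.
        followup_result: result of retesting 1-3 months after an initial
        indeterminate WB ('positive', 'negative', or 'indeterminate')."""
        if not elisa_repeat_reactive:
            return "not repeatedly reactive on ELISA: no further action"
        if wb_result == "negative":
            return "WB negative: donor reassured they are almost certainly not infected"
        if wb_result == "positive":
            return "WB positive: donor notified (the result whose significance this essay disputes)"
        # Initial WB indeterminate: retest in 1-3 months to rule out recent seroconversion.
        if followup_result is None:
            return "WB indeterminate: retest donor in 1-3 months"
        if followup_result in ("negative", "indeterminate"):
            return "persistently indeterminate: donor reassured they are almost certainly not infected"
        return "follow-up WB positive: donor notified"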

© 2007 by Rodney Richards
Originally published at "You Bet Your Life"
