Tag Archives: health
There is a growing demand for fruit and vegetables across the Western world, thanks to increased awareness of their nutritional and health benefits. But we’ve always been taught they might not be safe to eat straight from the supermarket, and that they have to be washed first. Is this the case? And what might happen if we don’t?
What’s in a veggie?
Fruits and some vegetables are often consumed raw, fresh-cut or minimally processed, which is why there are concerns about their safety. Fresh fruits and vegetables and unpasteurised juices can harbour disease-causing bugs (known as pathogens) such as Salmonella, Campylobacter, Listeria and Shiga-toxin-producing strains of E. coli. They can also contain pesticide residues and toxic compounds produced by moulds on the surface, or even inside the tissues, of these foods.
Fresh fruits and vegetables may also contain allergens, either naturally occurring or introduced through contamination, that can cause severe discomfort to people with allergies or intolerances. Of the potential risks, contamination with tiny bugs, or organisms called microbes, is the most prevalent.
The ingestion of very small numbers of dangerous bugs may not be harmful as our immune system can fight them off. But problems begin when the body’s defences fail, causing these “bad bugs” to multiply and spread throughout the body.
In recent years, fruits and vegetables such as sprouts, celery and rockmelons have been identified as sources of food-borne pathogens, as they are particularly susceptible to contamination. This has caused a number of health and social issues and major economic losses worldwide.
Last year there was an outbreak of listeriosis in the US, a disease caused by ingestion of the bacterium Listeria monocytogenes, linked to commercially produced, prepacked whole caramel apples. Thirty-five people across 12 states were infected with the disease, and three people died.
In May 2011, Germany experienced the largest epidemic of haemolytic-uraemic syndrome (a disease characterised by anaemia, acute kidney failure and low platelet counts), caused by Shiga-toxin-producing E. coli associated with fresh produce such as fenugreek sprouts. Over a period of about three months, nearly 4,000 people fell ill with symptoms such as headache and diarrhoea, and a further 800 contracted haemolytic-uraemic syndrome. Authorities reported 53 deaths.
In the US in 2011, cantaloupes became contaminated with the bacterium Listeria monocytogenes. One hundred and forty-six people across 28 states fell ill and 30 died.
While Australia is considered one of the safest food suppliers in the world, a significant number of foodborne illnesses are still reported every year. The government-funded organisation OzFoodNet reported 674 outbreaks of enteric illness, including those transmitted by contaminated foods, in the last quarter of 2013 alone.
Does washing help?
The washing of fruits and vegetables is one of the most important processing steps at the industrial level. Washing is designed to remove dirt and dust and some pesticides, and to detach bugs. Washing improves not only the safety and quality, but also the product’s shelf-life.
However, the quality of water used for washing is crucial. Washing water can serve as a source of cross-contamination as it may be re-used during harvesting and processing stages. Washing with sanitising agents is much better; washing removes microorganisms by detaching them from the products, and sanitising kills them.
Although this first stage of washing can significantly reduce the level of pathogens, infiltration of pathogens into cracks, crevices, and between the cells of fruits and vegetables has been shown to be possible.
Once positioned in these niches, pathogens may survive and multiply until the contaminated produce is consumed. So pre-washed produce may not be 100% safe. Peeling can help to get rid of bugs on the surface, but it also risks cross-contaminating the inner part of the product.
Cooking temperatures kill most of the pathogenic bugs, but the compounds produced by them (metabolites) may be heat-tolerant and can cause serious health issues. Washing may help to remove some of these compounds, but not necessarily all.
What to do
The risk of eating contaminated produce is much greater now than in previous centuries, because primary production, processing and trade of fruits and vegetables now span diverse climates and different countries’ rules, regulations and food-processing systems.
Most of these foodborne illnesses are preventable. Washing in clean running tap water significantly reduces the level of E. coli bacteria on broccoli and lettuce, although it doesn’t completely eliminate it. Therefore washing fruits and vegetables using clean water at home – including pre-washed products – before consumption may help minimise the risk of foodborne infections.
Never eat or buy produce that looks spoiled. But be aware that contaminated produce may look, taste and smell the same as produce that is safe to eat. Make sure kitchen surfaces are clean, and use the correct temperature and time for cooking.
Washing fresh produce is an important part of ensuring your favourite fruits and veggies are safe to consume, but also be sure to pay regular attention to the media for any outbreaks or updates related to fresh produce safety.
It is three years since Australia fully implemented its historic tobacco plain packaging law. From December 1, 2012, all tobacco products have been required to be sold in the mandated standardised packs, which, with their large disturbing graphic health warnings, are anything but “plain”.
Ever since, there have been frenzied efforts by the tobacco industry and its ideological baggage carriers to discredit the policy as a failure.
The obvious subtext of this effort has been to megaphone a message to other governments that they should not contemplate introducing plain packaging because it has “failed”: smoking, it is claimed, has not fallen any faster in Australia after plain packaging than it was already falling before. All that has occurred, they argue, is that illicit trade has increased.
The supreme irony here is of course that if such criticism was correct, then to paraphrase Hamlet’s mother Gertrude, “The tobacco industry doth protest too much, methinks.” Why would the industry and its astro-turfed bloggers waste so much money and effort denigrating a policy which was having little or no impact?
Why take the Australian government to the High Court (and fail six to one) to try to block the law? Why invest in supporting minnow tobacco-growing states such as the Dominican Republic, Honduras and Cuba in their efforts to have the World Trade Organization rule against plain packaging?
Why not just ignore an ineffective policy instead of making it only too obvious to all by such actions that it is in fact a grave threat to your industry?
Two key assumptions have underpinned efforts to discredit the impact of plain packaging. First, critics assume the impacts of the law should have been evident immediately upon implementation: as one colleague put it recently, “within ten seconds of the law passing”.
Second, they assume (but never actually state) that the impact of plain packaging on smoking by children (the principal target) and adults was supposedly going to be greater than anything we have previously observed in the entire history of tobacco control.
In 1999, the late Tony McMichael, professor of epidemiology at the Australian National University, published a classic paper called Prisoners of the Proximate where he wrote about the need to understand the determinants of population health in terms that extend beyond proximate single risk factors and influences.
In tobacco control, both proximal (discrete, recent and quick-acting) and distal (on-going, slow-burn) effects of policies and campaigns can occur.
Price rises (and falls through discounting) can have both immediate and lasting effects, jolting smokers into sometimes unplanned quitting and also slowly percolating an unease about the costs of smoking that translate into quitting down the track.
Tobacco advertising bans are a good example of a policy that has such slow-burn effects across many years. Few if any quit smoking in direct response to tobacco advertising bans. They work instead by causing the next generations of kids to grow up in an environment devoid of massive promotional campaigns depicting smoking in positive ways.
I have often heard smokers say “plain packaging won’t make me quit smoking”. This is akin to the myopic self-awareness of those who swear “advertising (for any product) never influences me”, conceding only that it influences other, more impressionable people.
Plain packs were unlikely to act suddenly in the way tax rises do, although the unavoidably huge graphic health warnings may well have acted like straws that broke the Camel’s back of worry about smoking. Their impact was far more likely to be of the slow-burn sort: a constant reminder that tobacco, unique among all products, is the only consumer good treated this way by the law because it is exceptionally dangerous, with a recent estimate that two in every three long-term smokers will die from tobacco use.
In 1994 I wrote a now highly cited paper in the British Medical Journal which talked about the impossibility of “unravelling gossamer with boxing gloves” when it came to being certain about precisely why smokers quit. I took a day in the life of a smoker who quit, and pointed to the myriad of influences both distal and proximal that coalesce to finally stimulate a smoker to quit.
While a smoker might nominate a particular policy, conversation with a doctor or anti-smoking campaign as being “the reason” they quit, much of what went on before provides the broad shoulders of concern that carry the final attribution. There are synergies between all these factors and the demand to separate them all is like the demand to unscramble an omelette.
So what has happened to smoking in Australia since plain packs?
Data released this month from a national schools survey involving more than 23,000 high school children found smoking rates were the lowest ever recorded since the studies first commenced in 1984 (see graph). This momentum is starving the tobacco industry of new smokers, which is one important reason why all tobacco companies are now busily acquiring e-cigarette brands.
Proportion of 12- to 15-year-olds who smoke, 1984 to 2014
National Drug Strategy report 2014.
With adults, National Accounts data just released show that over the 11 quarters since March 2013, consumption of tobacco products in aggregate fell an unprecedented 20.8%. Over the previous 11 quarters it fell 15.7%, and over the 11 before that, only 2.2%.
The latest available data on adult smoking prevalence are from 2013 and show just 12.8% of Australians over 14 smoked on a daily basis. This is the lowest on record and, again, represents the biggest percentage fall since the surveys commenced (see graph).
Reductions in daily smoking among Australians aged over 14, 1991 to 2013
Meanwhile, the tobacco industry plods along funding heavily lambasted studies which purport to show none of this is happening.
The argument that plain packaging would cause illicit trade to boom was made with monotonous regularity by Big Tobacco between April 2010 when plain packaging was announced and its December 2012 implementation. When the industry lost its case in the High Court, the argument was quietly dropped.
Today, the industry explains illicit trade entirely by the heinous government tobacco tax rises cloaked in a sanctimonious rhetoric of speaking up for poor smokers and corporate citizen concern about tax avoidance bleeding Treasury. In all this it fails to mention that it has long used tax rises as air cover to quietly raise its own profit margins.
As I wrote recently in The Conversation:
From August 2011 to February 2013, while excise duty rose 24¢ for a pack of 25, the tobacco companies’ portion of the cigarette price (which excludes excise and GST), jumped A$1.75 to A$7.10. While excise had risen 2.8% over the period, the average net price had risen 27%. Philip Morris’ budget brand Choice 25s rose A$1.80 in this period, with only 41¢ of this being from excise and GST.
Ireland, the United Kingdom and France have already passed laws to introduce plain packs. Norway and Canada will soon follow, and New Zealand, Chile, Turkey, South Africa and Brazil have also made high-level noises about joining in. The world has a lot to thank the Rudd and Gillard governments (and particularly Health Minister Nicola Roxon) for taking this initiative, and the subsequent Coalition government for continuing to support it strongly as it comes under attack from those it has hurt and will continue to hurt.
Seasonal hay fever is usually caused by pollen from trees, grasses and weeds. In Australia, the major triggers are spring-flowering grasses such as ryegrass, but also summer-flowering Bahia, Bermuda and Johnson grasses.
So what’s the best way to manage your symptoms with medication?
Nasal decongestant sprays are effective for unblocking noses. They work very quickly by constricting the blood vessels in the lining of the nose. They’re also useful for opening the nasal passages to allow better access for other, more long-term nasal sprays, which we’ll discuss below.
But beware – they can’t be used for more than a few days before they cause “rebound” problems, where the nose becomes even more blocked.
Oral decongestant tablets aren’t as effective as nasal sprays. They’re most commonly used in combination with antihistamines. Together, these two drugs tackle most of the symptoms of hay fever.
Oral decongestants, such as pseudoephedrine, don’t cause rebound symptoms. But they’re stimulants and have unpleasant side effects such as sleep disturbance, irritability, raising blood pressure and urinary retention. So they’re for short-term use only.
Antihistamines are the most commonly used over-the-counter medications. They’re very effective for alleviating itchy, runny noses and sneezing. But they’re less effective for blocked noses, which in the longer term, becomes the most prominent symptom.
There are two major classes: the older, sedating drugs, such as Benadryl; and the newer, less- or non-sedating drugs, such as Zyrtec, Claratyn and Telfast.
Sedating antihistamines are generally not recommended for hay fever as they cause problems aside from drowsiness. They have unfortunate interactions with alcohol and some other medications, leading to significant risks when driving or operating machinery.
The non-sedating antihistamines as a class are safe, effective and relatively quick-acting. Most act within one to two hours and have a 12-to-24-hour duration of action. There are no meaningful differences in safety or efficacy between the newer antihistamines with active ingredients such as cetirizine, loratadine and fexofenadine.
Antihistamines work best when used before allergen exposure, if this can be predicted. So, if you’re going bush walking or picnicking on a warm windy spring day, take an antihistamine before venturing out.
Contrary to popular belief, our bodies do not “get used to” antihistamines and their effectiveness does not lessen over time.
Two topical antihistamine sprays are available, both of which are effective and can work more quickly than tablets or syrups.
Antihistamine eye drops can ease the irritation and discomfort of itchy eyes more effectively than antihistamine tablets.
Nasal steroid sprays
These are the Rolls-Royce of treatments for hay fever and are especially useful for those experiencing regular or severe symptoms. They will dampen down all the symptoms of hay fever and are particularly good for managing nasal blockage in a safe manner.
A number of different nasal steroid sprays are available, some over the counter.
Because they’re a preventative treatment, they must be used on a daily basis to be effective. Ideally, treatment should start at the very beginning of the hay fever season to stop the development of allergic inflammation in the lining of the nose. They also need to be applied correctly to the nose in order to prevent irritation.
Contrary to popular belief, these are very safe medications despite the name “steroid”. Intranasal steroid sprays are applied and act locally in the nose; only the smallest amounts reach the general circulation.
Nevertheless, their use should be monitored in children, particularly if they are also using inhaled corticosteroids as an asthma preventer medication.
The most common side effect of nasal steroid sprays is nasal bleeding. This can occur even if used correctly.
Immunotherapy (allergy ‘vaccines’)
Immunotherapy involves administering doses of allergen extracts at gradually increasing doses. The aim is to “re-educate” the immune system to down-regulate the allergic response, reducing allergic symptoms affecting the airways.
This treatment has been available for more than a century, but these days two forms are used: injections and sublingual (under-the-tongue) drops or tablets. This should be prescribed by an allergy specialist who determines the correct “vaccine” for the therapy.
Immunotherapy is expensive and is typically given for three to four years. However, in the long term, it may be more cost-effective than treatments just targeting the symptoms.
There are many safe and effective over-the-counter treatments for hay fever symptoms, though some treatments may suit you better than others. If you continue to experience symptoms, talk to your GP about other treatment options.
Janet Davies, Associate professor, Queensland University of Technology; Connie Katelaris, Professor of Immunology and Allergy, UWAS & Head of Unit, South Western Sydney Local Health District, and Danielle Medek, Ecophysiologist; junior medical officer, ACT Health
We all get headaches from time to time. In fact, nearly every second person in the world had a headache at least once in the past year. But these can feel very different, depending on which of the nearly 200 types of headache you have.
More than half (52%) of people will have a tension-type headache at some point in their life, around 18% will get a migraine, and 4% will suffer from chronic daily headaches. These are the most common headache-related diagnoses. Although there are some variations globally, the figures seem remarkably consistent across populations.
Secondary headaches can be initiated by triggering factors such as medication overuse, medication side effects, neck pain, sinus disease or dental problems. These account for small percentages individually compared to the primary headaches, but may be more treatable if the predisposing problem can be sorted out.
Tension-type headaches (TTH) feel like a dull or heavy, non-pulsating band of pain, usually on both sides of the head. The name comes from an erroneous belief that overly tight muscles are the main reason for the headache.
TTH usually occurs in episodes, with each lasting from several hours up to a few days at a time. There is not usually much associated nausea, light sensitivity or sound sensitivity.
Chronic TTH is a less common form and is diagnosed when you have experienced at least 180 days with a headache per year. It is generally not aggravated by routine physical activity; it’s just there all the time.
Genetic tendencies explain some of the risk for developing TTH, with your own risk increased threefold if you have an immediate family member with the condition.
Infrequent episodic TTH does not appear to be strongly associated with psychological stress, despite this common belief. Chronic TTH has a stronger association with higher psychological distress, but it is unclear whether this is a cause or effect of having long-term disabling headaches.
Strangely for such a common and problematic condition, there is still little agreement about exactly how the pain is produced in TTH.
The most attractive hypothesis to me is that it represents a “virtual” pain whereby multiple low-grade inputs (likely including inputs that are “almost-painful”, or below the threshold for conscious pain) add up to produce sensitisation of the trigeminal nerve nuclei (the nerve shown in orange below).
This turmoil registers as pain referred across the head, usually to the forehead, temples and back of the head. Examination of these areas doesn’t show any abnormalities because in TTH, there is no single driving mechanism of the headache.
Treatment remains almost trivially simple, despite years of research. It’s almost true to say that the proverbial “cup of tea, a Bex and a good lie down” sums it up. Aspirin, paracetamol or ibuprofen plus rest and possibly some cold packs seem to be the most reliable treatment. There is conflicting or negative evidence for almost every other, fancier therapy.
Migraine alone is the sixth most disabling condition globally.
Migraines are usually one-sided, associated with nausea and light sensitivity (photophobia) and may also be preceded by idiosyncratic sensory experiences called an “aura”. Aura phenomena can include moods or emotions, such as deja vu, visual symptoms (flashing lights or jagged lines are common) or problems with speech.
Migraine is a clinical diagnosis; there is no objective test that can verify it with our current technology. But compared to the frustration of researching and treating tension-type headaches, migraine has been steadily giving up its secrets over the past decade.
Migraine physiology is extremely complex. The headaches seem to arise because of dysfunctional regulation of the tone of some of the blood vessels inside the skull.
Migraine sufferers (migraineurs) may have a genetic vulnerability to migraines because of overly responsive calcium channels in their nerve membranes, or other mutations that result in overactive signalling pathways in the brain.
Environmental or internal triggers can provoke these nerves to over-react, resulting in the activation of a reflex pathway. This dysregulation of normal structures causes the headache, nausea, photophobia and phonophobia (sound sensitivity) typical of an attack.
The period of headache in a migraine attack corresponds with a rise in blood levels in the head of a peptide called calcitonin gene-related peptide (CGRP). CGRP is one of the most common pain-inducing signal molecules in the body. When CGRP falls, the headache goes away. Where the extra CGRP comes from is not clear, but it is probably released from the overactive networks of cells in the brainstem.
The most effective group of drugs for migraine are the triptans. So effective and specific are these drugs that the diagnosis of migraine needs to be reconsidered if they don’t abort the headache attacks most of the time.
Triptans work by activating certain subtypes of serotonin receptors in the brain. Taking a triptan early in a migraine attack seems to directly lower CGRP release and oppose its effects on blood vessels, thereby stopping the attack. Triptans are not, however, useful for preventing frequent attacks of migraine.
Migraine prophylaxis is achieved by several drugs of different classes, with radically differing mechanisms of action. Some are anticonvulsants, which clearly work by suppressing the nerve overactivity typical of migraineurs. Others, such as the beta-blockers (propranolol) and calcium-channel blockers (verapamil) target the nerve endings on the blood vessels. Others which are known to be effective, such as botulinum toxin (Botox) and amitriptyline (Endep) work by means which are yet to be fully understood.
Severe migraineurs suffer years of disability and as a public service I would like to suggest that if you know someone who has severe migraines (you almost certainly do) please read this excellent list of what not to say to them when trying to be sympathetic or helpful.
Chronic daily headache
Imagine that you never had a day without headache. You can remember vaguely the time when you didn’t feel that pounding in the temples, squeezing in the back of the head or piercing pain above the eyes but it seems like another life. Such is the lot of sufferers of chronic daily headache (CDH).
Some headaches begin as frequent but clearly episodic tension-type headache, or migraine, but then “transform” into what is essentially a continuous headache for at least some part of every day.
There are a number of rare headache types which may cause chronic daily headache, and diagnosing these can lead to specific treatments that work well. This is the role of a neurologist or pain specialist with a special interest in headache.
Possibly the most common reason why tension-type headache or migraine can transform is medication overuse, especially short-acting opioids such as codeine. The best solution to this problem is to avoid long-term regular use of codeine for headaches, though the evidence would suggest we may never achieve this goal except by making codeine prescription-only.
Frequent use of triptans is also believed to sensitise the trigeminovascular networks in the brainstem, thereby lowering the bar for triggering of migraine attacks. If the threshold for an attack becomes too low, they may never quite switch off, and one attack will run into the next one.
If you have more than just the occasional headache, it pays to get a proper diagnosis, as the reasons for your headache can be many and varied. Some have specific treatments, while for others, such as TTH, a specific treatment remains elusive. There are new classes of drug treatment under development, for migraine in particular, so it looks hopeful that future generations may not have to labour under the burden of poorly treated headaches.
There seems to be a shortening gap between studies about diet, nutrition and health. And each starts another conversation about trans vs saturated vs polyunsaturated fats, or this diet vs that, or, as is today’s case, fats vs carbohydrates.
In a paper published today in the journal Cell Metabolism, researchers found that when 30% of a day’s kilojoules were restricted by cutting fats (diets with a higher intake of carbohydrates), participants in their study lost more body fat compared to when the same amount of energy was restricted by cutting carbs (diets with a higher intake of fat).
This study used a type of meticulous metabolic research, which is expensive and unsuited to lengthy periods, but valuable for exploring the physiology of reducing equal dietary contributions from fat or carbohydrate. But like much dietary analysis, it may be shining a light on the wrong issues altogether.
The good, the bad and the ugly
The most important aspect of any diet is that it should be practical and healthy enough to follow for the rest of your life. There’s no magic bullet for weight loss. While some people claim they find it easier to cut out foods high in carbohydrates, others find it easier to avoid high-fat foods.
If you need to lose weight, cutting down is what helps. But few people can stick to any extreme diet for life, so what you substitute is just as important as what you cut out – especially for long-term health.
Choices based only on macronutrients (foods required in large amounts in the diet, such as fats, carbohydrates and protein) miss important aspects of many foods and open the diet to imbalance. Carbohydrate foods, for instance, include nutritionally worthy choices – such as legumes, wholegrains, fruits, milk and yoghurt – but also a huge range of items high in sugar or refined starches with little or no nutritional attributes. “Cutting carbs” doesn’t distinguish between the good and bad foods in this category.
The same thing happens with fats. Sources of unsaturated fat – such as nuts, seeds, avocado or extra virgin olive oil – have proven health benefits. But there’s no evidence for any benefits of lard, dripping, cream, fast foods or any of the fatty snack foods that account for much of our saturated fat intake. And no long-term study shows sustained weight loss or other health benefits from a diet high in saturated fats.
Some foods are even more problematic. Most fast foods are high in saturated fat and salt, and lack dietary fibre. And they’re not only largely devoid of vegetables (apart from the odd gherkin), but often displace meals that would have contained vegetables.
Biscuits, cakes, pastries, many desserts and confectionery provide a double whammy with high levels of unhealthy fats as well as sugar and refined starches. Make that a triple whammy because most lack any nutritional virtue as well.
From bad to worse
Assumptions based on macronutrients are simply too crude to be meaningful. This is apparent in so-called meta-analyses based on a mixture of cohort and case-control studies that use different methods and time frames for assessing what people eat, and fail to report all aspects of the diet.
One review, for instance, claimed that saturated fat was unrelated to cardiovascular disease. But it ignored the adverse impacts of the foods that had replaced saturated fats and provided no information about the foods that provided saturated fat in the first instance.
Worse still, such analyses are prone to many errors. A thorough check of every reference used in that meta-analysis showed the conclusion would have differed had 25 studies not been omitted or misreported (sadly, it’s paywalled).
Another recent review also failed to show any clear association between higher saturated fat intake and all-cause mortality, heart disease, ischaemic stroke or type 2 diabetes, although the authors were unable to confidently rule out increased risk for heart disease deaths. They also noted that the certainty of associations between saturated fat and all outcomes was “very low”, which means we don’t yet understand the association between saturated fats and disease.
Hopefully, further research will distinguish between food sources of saturated fats; they are not all equal. There’s already good evidence that processed meats can have more deleterious effects than fresh meat. And that fermented dairy products, such as yoghurt and cheese, may also have health benefits and are distinctly different for heart health risk compared to butter.
Swapping saturated fat for sugar or refined starches is worse than useless for preventing cardiovascular disease. But please direct criticism of foods where fat has been replaced by sugar at the food industry. Dietary guidelines have always recommended limiting sugar as well as saturated fat.
A sorry state of affairs
Unfortunately, in most developed countries, sugar consumption remains high while intakes of vegetables, legumes, fruits, nuts and wholegrains are low. And while macronutrient intakes in countries such as Australia may look fine (31% of energy from fat and 44% from carbs), problems remain with the kinds and amounts of foods we consume.
Junk food and drinks were once consumed only as an occasional treat, but they now contribute significant portions of both adults’ and children’s diets – in Australia, 35% of adults’ and 41% of children’s energy intake. Confectionery and starchy, fatty, savoury snack-food intake have also increased significantly.
It really is time to focus on foods instead of wasting time on macronutrients. Australia’s Dietary Guidelines have made this change, as has the new simple Swedish equivalent, which emphasises sustainable choices. Norway and 20 European countries also take a food focus and the number one point in Brazil’s enlightened guidelines is that diet is more than the intake of nutrients.
Consider the dozens of studies on Mediterranean diets, including randomised trials, where the fat and carbohydrate content vary but the health value depends on particular foods: extra virgin olive oil, nuts, vegetables, fruits, grains and legumes and a low intake of highly processed products. The take-home message from these is that we need to stop fussing over macronutrients and think about foods.
The immune system protects us from the constant onslaught of viruses, bacteria and other types of pathogens we encounter throughout life. It also remembers past infections so it can fight them off more easily the next time we encounter them.
But the immune system can sometimes misbehave. It can start attacking its own proteins, rather than the infection, causing autoimmunity. Or, it can effectively respond to one variant of a virus, but then be unable to stop another variant. This is termed the original antigenic sin (OAS).
OAS occurs when the initial successful immune response blocks an effective response when the person is next exposed to the virus. This can have potentially devastating consequences for illnesses such as the mosquito-borne dengue.
There are around 400 million dengue infections worldwide each year and no vaccine is available. Reinfection of someone who has been exposed to dengue previously can result in life-threatening hemorrhagic fever.
OAS is also thought to limit our immune responses to the highly variable influenza virus, increasing the chance of pandemics.
To understand why OAS occurs, we need to go back to basics about how immunity is formed.
The race begins
When a virus enters the body, a race begins between responding immune cells and the infecting pathogen. The pathogen replicates and finds a target cell or organ that will allow it to thrive.
So, the effectiveness of a response depends on the immune system winning the race to clear the pathogen before it causes irreversible damage to the body.
Immune cells called “B cells” make antibodies. A pathogen such as a virus is a large molecule with different components, called antigens. When a B cell recognises an antigen, it is activated and interacts with other immune cells to receive directions.
B cells then set out on two main paths. Some of the cells begin to make an antibody early in the response. But this antibody is often not of sufficient quality to rid the body of the infection.
The B cells that choose the alternate pathway go through a process that improves the quality of the antibody. This strengthens the binding between antibody and antigen. Antibodies are also grouped depending on the way they help eliminate the pathogen.
Some groups are better at clearing viruses and other pathogens. So, the antibody group that is tailored to be most effective at clearing the type of infection comes to dominate the response over this period.
Although the increase in quality of antibody can take weeks, there are two critical benefits. It means the pathogen is cleared. And high-quality “memory” cells remain to provide us with immunity to future infections.
Immune memory cells consist of long-lived plasma cells and memory B cells. Long-lived plasma cells live in the bone marrow and can continuously pump out high-quality antibody, providing a first wave of protection when we’re reinfected with a virus.
This is the same type of antibody that is transferred from mother to a breastfed baby, providing passive immunity against pathogens the mother has previously been infected with. But this level of antibody may not be enough to clear the infection.
This is where memory cells step in. Because memory cells have already undergone quality improvement, they can respond quickly after reinfection to produce a large number of plasma cells secreting high-quality antibody.
Therefore, memory cells can clear the infection much more rapidly than the initial infection. This means the pathogen doesn’t have time to damage the body.
When the quality improvement process fails
The quality improvement process that allows B cells to bind and clear the pathogen more effectively is highly selective to the dominating antigen.
In most responses to infection, this is critical to clear the infection. But in the case of some pathogens, such as dengue, the virus may have variant strains that can fool the immune memory response.
Dengue virus has four major variant serotypes. Within each major variant, one antigen dominates and is targeted by the immune system.
Infection by variant A results in extremely selective targeting towards antigen A. If the body is reinfected with the same variant (A), it can effectively clear the virus.
However, after reinfection by a second variant in which antigen B dominates, immune memory cells recognise the virus but make antibody specific for antigen A rather than for the now-dominant antigen B.
So antibody is being made but is unable to bind and eliminate the virus. To make matters worse, it appears that any new immune response to antigen B is inhibited by the memory response, although the reasons why this occurs are unclear.
Influenza is a highly variable virus, and these variations each season are why we require yearly vaccinations.
But the role of OAS in limiting our ability to respond to different variants of influenza is still highly controversial. Almost 60 years after OAS was proposed to describe the response to influenza infections, it is still a source of much current research.
How can we avoid OAS?
We need to train our immune system to be more flexible and produce antibodies that can adapt when viruses try to evade the immune system.
To this end, researchers are designing vaccines to respond to multiple variants of pathogens. This has shown promising results and may be the way forward to overcome OAS for potentially life-threatening viruses such as dengue.
With her lies about having cancer and her willingness to cash in on the hopes of actual cancer patients, Belle Gibson – the Australian woman behind The Whole Pantry app – is indicative of our run-down, self-indulgent and narcissistic moral world, right?
From an insatiable desire for fame and attention to the shallowness and consumerism of the wellness and New Age movement, Gibson’s tale of deceit embodies all that is wrong with the modern world. Or so the thinking goes.
But there’s a different story to be told here: one that focuses not on Gibson’s shocking lie that she healed herself naturally of cancer but the overwhelming moral response to her mass deception and the social role this plays.
From a raging Twittersphere to talk-back radio to water cooler conversations, the backlash to Gibson’s deceit reminds us that we have a strong and lively moral culture.
The collective disgust works, effectively, to reinforce the values we hold dear.
Belle Gibson: a fallen god of self-improvement culture
You know the argument: Western culture has been overtaken by an “anything goes” relativism and as a result we have lost touch with foundational moral laws and principles. Without the moral anchors of the past – community, religion and tradition – we are left bobbing on the ocean without a moral compass.
The growth of self-help or “therapeutic” culture gets wrapped up in this narrative of decline. Belle Gibson, of course, had been elevated to a modern-day god of our wellness and self-improvement culture.
Gibson’s Whole Pantry brand provided an exemplary story of the power of the “self-improved”. The idea of beating cancer through a commitment to wellness and healthy eating strikes a major chord within a culture that accords so much importance to choice, wellness and self-improvement. As the motto of The Whole Pantry states: “Your Whole Life, Starts with You”.
For some, Belle Gibson’s lies are the fulfilment of New Age thought and the contemporary turn to self-improvement. JR Hennessy, in The Guardian, located Gibson’s fall from grace as part of the inevitable commercialisation of the New Age and its hollow culture of self-fulfilment and faked authenticity.
On this analysis, health gurus, new age dieters, meditators, anti-vaxxers and the paleo set are lumped in one big pile of narcissism and irresponsibility.
A lively moral discourse: reminding ourselves about what matters
There may be merit in the concerns being expressed about wellness culture – especially in its more extreme faux-miracle forms. But the important lesson to draw from Gibson’s exposure is the health and vitality of morality in our culture.
Despite all the hand-wringing about moral relativism and the like, the outrage Gibson has provoked shows that we are actually very quick to point out moral wrongs and to seek to restore the balance.
A quick scan of the Twitter hashtag #bellegibson reveals a wide range of moral commentary. There are people calling for punitive action and incarceration and others pointing to her “extraordinary irresponsibility”. Still others are treading a more sympathetic line, highlighting the harm of internet shaming and Gibson’s fragile mental state.
This reaction suggests we have a keen sense of what the moral rules are and when they have been transgressed. We might not precisely agree as to what those rules are – but in our daily lives we question what constitutes the good life.
Proponents of moral decline overlook this “water-cooler” morality: how moral life is established and created in the relationships, communication and moral discourse of everyday life.
Culture as moral antibodies
Culture works here as a series of moral antibodies that seek to redress violations of shared basic moral principles and values. In the Gibson case, these are truth, justice and a sense of fairness.
As 19th-century sociologist Emile Durkheim famously argued, deviance plays a healthy role in society, working to clarify social norms, to preserve the moral boundaries of the community, and to help strengthen feelings of cohesiveness.
Belle Gibson may reveal our vulnerability to be hoodwinked by food fads and wellness warriors but the response to her transgressions is a powerful reminder of the values we share in common – and what happens when you violate them.
The next question our moral culture must ask itself is how healthy it is to publicly shame a vulnerable person, and what the right balance is between culpability and a sense of care and generosity towards those who have done wrong.
It is arguably the latter which provides the strongest test of the health of a moral society.
The government’s ‘no jab, no pay’ policy, which will restrict childcare benefits for those parents who refuse to have their kids immunised, may seem harsh to some. Most parents, however, will see the wisdom of a policy which puts the collective welfare of all children above the conscientious objections of a few parents.
The rate of non-immunisation of children has risen from 1% to 2% in a decade, noted Tony Abbott at a Sunday morning press conference announcing the new policy. Some 40,000 children in Australia are not immunised, he added, and rates of avoidable but potentially lethal childhood diseases such as measles and whooping cough have gone up.
That 2% put at risk the other 98%, and using the tax and benefits system to send that message is tough, but justified.
In the United States and the UK, too, immunisation rates have fallen over recent years, and diseases which once plagued our children, and were then all but wiped out by immunisation programs, have returned in significant numbers. So what has been going on? Why are so many parents refusing to take advantage of a preventive medical technology which has saved literally millions of children’s lives across the world?
One answer, if not the only one – some have deep religious objections, for example – is the news media, and their role in what we might call the amplification of irrational anxiety.
A small but significant minority of parents have come to believe, in all sincerity (and no-one doubts that they have the best interests of their children at heart), that immunisation is dangerous – certainly riskier than leaving their kids unvaccinated.
Even though there is no solid evidence to support that belief, and plenty of evidence to support the benefits of immunisation, some parents are so anxious that they will put their own children, and more importantly, other people’s children, at heightened risk of exposure to a preventable disease which could cause disability and even death.
So where have these anxieties come from?
Back in 1998, an English doctor by the name of Andrew Wakefield published research claiming to demonstrate a link between the MMR triple vaccine (to immunise children against measles, mumps and rubella) and the onset of autism. As followers of the story will know, Wakefield's work was subsequently discredited and retracted, and he himself was struck off the medical register in the UK for his unethical research methods.
Before that happened, however, the alleged risks of MMR became a major news story in the UK and all over the world. Around that time, the reported global incidence of autism had risen dramatically. Between 1996 and 2007 in the United States, for example, the reported incidence of autism rose from 0.8 per 1,000 to 5.2 – an increase of around 550%.
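A percentage rise like this is easy to misstate, so here is a quick sanity check using the per-1,000 figures quoted above:

```python
# Reported US autism incidence per 1,000 children (figures quoted above).
old_rate = 0.8  # 1996
new_rate = 5.2  # 2007

# Percentage increase = (new - old) / old * 100
increase_pct = (new_rate - old_rate) / old_rate * 100
print(f"Increase: {increase_pct:.0f}%")  # → 550%
```

Note that a rise from 0.8 to 5.2 is a 6.5-fold level, which corresponds to a 550% increase, not 650%.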
Similar increases were recorded in many other countries. In Australia, the first survey of the prevalence of autism did not take place until 2006 so historical data are lacking. In 2014, however, the Australian Bureau of Statistics found a 79% increase in diagnoses between 2009 and 2012. A NSW parliament report of 2013 noted that:
… the growing number of children diagnosed with Autism Spectrum Disorder (ASD) is an issue of concern both in Australia and overseas.
This does not mean that the actual prevalence of autism has risen, though. Rather, the public awareness of autism has risen, through movies such as Rain Man and the explosion of media visibility around the condition seen since the 1990s. Documentaries were made about autistic ‘savants’, and families where parents struggled to cope with autistic children. The Curious Incident of the Dog in the Night Time became a global publishing phenomenon, and an entire sub-genre of ‘autism lit’ emerged.
Many people, children and adults, who might hitherto have been described as ‘different’ or ‘eccentric’, or even just ‘shy’, were labelled with Asperger’s Syndrome, or some other condition on the autistic spectrum.
Through a heightened media visibility, parents, medical professionals, teachers and others involved with children were sensitised to a condition which until recently was little known and poorly understood. In other words, autism has always existed, but only recently has it been recognised and given a name. As a result, its recorded incidence has risen dramatically, not because more children are acquiring autism from one cause or another, but because more of those born with it – and autism is often a genetic condition that runs in families and mainly affects males – are being identified.
This is a positive development, because autism is very real, and heightened public awareness has led to support services being put in place for people with autism where there had been none.
Notwithstanding this context, one cannot blame parents for becoming more anxious about the causes of autism, and many quite plausible, if never substantiated, theories have circulated. Wakefield's research, when it was published, spoke directly to that anxiety, and his hypothesis – that autism was 'caused' by immunisation – seemed credible to many.
In the UK, where the scare was centred, and Wakefield’s work taken very seriously by most of the media, hundreds of thousands of parents withdrew their children from the MMR program. Then-prime minister Tony Blair was asked by journalists to reveal if his baby son Leo had been vaccinated or not. He refused to answer on privacy grounds, while making clear his own absolute confidence in the safety of the vaccine.
Despite such reassurances, and the widespread scepticism which greeted Wakefield’s research amongst his medical peers from the outset, the impact of the scare was very real. Rates of immunisation fell, while the incidence of measles and other preventable diseases began to rise. Ill-founded anxiety about the dangers of immunisation ended up having very real consequences on public health.
Years after Wakefield’s work had been discredited by his peers, his theories on MMR and autism have continued to influence parents all over the world. And where he has had influence, so the incidence of the diseases targeted by the MMR vaccine have risen.
In February this year, the Sunday Times reported on the anti-immunisation advocacy of US group Generation Rescue, who were reported to “seek inspiration” from Wakefield, who now lives and works in that country. The result of this campaign:
… say experts, has been to plunge America into the first national debate since the 1970s about the safety and necessity of vaccines — and led to the return of measles, a highly contagious childhood disease judged extinct by the US government’s Centers for Disease Control (CDC) 15 years ago.
In the US, vaccination rates had fallen by 3%, amid what the article called “a mounting sense of panic”. As in the UK a decade previously, erroneous health information spread through a variety of media channels had provoked a health crisis with strong political reverberations.
Politicians faced with anxious parents were encouraged to comment and pronounce on the vaccination ‘issue’, even when ignorant of the science. Republican contenders for the 2016 presidential race – Chris Christie and Rand Paul – both declared their approval of parental exemptions from MMR vaccination.
Rigorous research into media coverage of autism and its causes has not been done in Australia, and we cannot assume that all of those 'conscientious objectors' to immunisation are directly influenced by the Wakefield hypothesis. But his work, and the way it was reported at the time and since, undoubtedly contributed to a climate of fear around the risks of vaccination, irrational in so far as it lacks foundation in scientifically validated evidence.
The government is therefore right to take strong action against parents whose irrational fears knowingly put other children at risk. It is an example of firm government in the face of myth and unreason, and should be supported by all who care about the health of our kids.
Take this quick medical pop quiz: which of the following conditions would you prefer to have during your next stay in hospital? A. Staphylococcus aureus (golden staph) bloodstream infection; or B. a heart attack?
I am guessing most non-medical readers voted for the Staph option and, if my experience is anything to go by, the majority of medical readers will have also made a microbial choice.
The disturbing truth is that a Staph aureus bloodstream infection has a 12-month death rate of between 20 and 35%, compared with 3-5% for a heart attack in hospital. Although antibiotic-resistant Staph aureus (MRSA) infections carry a slightly higher death rate, even the drug-sensitive Staphs are among the most potent of pathogens.
Staph aureus lives on our skin and in our nose where it usually causes no harm. But if we are admitted to hospital and have an intravenous catheter inserted through our skin, the Staph aureus can be carried on the tip of the needle into the vein.
Usually our immune system mops up any stray microbes but the reason for coming to the hospital in the first place may have weakened our defences. Infections such as pneumonia, the effects of cancer and its treatment, diabetes, drugs that suppress the immune system and surgery make us more vulnerable to hospital-acquired infections.
Very sick patients often require long-term intravenous access through central venous catheters (which are inserted into a large vein at the chest, neck or groin). These carry a higher risk of infection than small peripheral cannulas, usually inserted in veins of the hand or arm.
Patients with bloodstream infections develop chills, fever, headache, muscle and back pain and may go on to develop failure of one or more organ systems.
The complications of Staph aureus bloodstream infections (which, going back to our quiz, include a heart attack) may take weeks or months to develop; by the time the patients who survive have been discharged from the intensive care unit, the original infection may have been forgotten.
Today the National Health Performance Authority released its report on health-care-associated Staph aureus bloodstream infections in Australia in 2013-14. This is the third year the data have been reported nationally and the news is mildly encouraging. In 2013-14, there were 1,621 bloodstream infections caused by Staph aureus, which is 100 fewer than in 2012-13.
Nearly 90% of the infections occurred in the 115 major and large Australian public hospitals. To make sensible comparisons, hospitals are grouped by their size and the complexity of the patients they treat. Patients with burns, cancer, HIV and those who have undergone surgery are considered to be more vulnerable to infection.
For the 36 major Australian hospitals with more vulnerable patients, the average rate of infection was 1.28 per 10,000 patient bed days, although the rate was more than three times higher in some of these hospitals than in others. At the 40 major hospitals with fewer vulnerable patients, the average rate was 0.78 per 10,000 patient days.
The agreed national benchmark is less than 2.0 per 10,000 patient days and only a handful of hospitals exceeded this rate.
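For readers unfamiliar with rates per 10,000 patient bed days, here is a minimal sketch of how such a rate is calculated and compared against the benchmark. The hospital figures below are hypothetical, for illustration only:

```python
# Minimal sketch: computing a Staph aureus infection rate per 10,000
# patient bed days and comparing it against the national benchmark.
# The infection and bed-day counts here are made up for illustration.

BENCHMARK = 2.0  # agreed national benchmark, per 10,000 patient days

def infection_rate(infections: int, patient_bed_days: int) -> float:
    """Infections per 10,000 patient bed days."""
    return infections / patient_bed_days * 10_000

rate = infection_rate(infections=12, patient_bed_days=95_000)
print(f"Rate: {rate:.2f} per 10,000 bed days "
      f"({'exceeds' if rate > BENCHMARK else 'within'} benchmark)")
# → Rate: 1.26 per 10,000 bed days (within benchmark)
```

Normalising by bed days, rather than by raw patient numbers, is what allows hospitals of very different sizes to be compared fairly.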
While these data show that the risk of Staph aureus infection for an individual patient is low, considered across the entire health system they reveal an important and costly problem.
These figures only relate to infections that have been acquired in a health-care setting. Staph aureus can also originate in people in the community who have had no contact with the health system and these infections also carry a high risk of death.
There isn’t much we can do to reduce community Staph aureus blood stream infections but we can influence the number of hospital-associated infections – as these data so happily show. One important reason for the reduction is the increasing compliance of health-care workers with hand hygiene.
The most recent data from Hand Hygiene Australia show that average compliance in Australian hospitals is now 81.9% across the five “moments” of hand hygiene. Even my recalcitrant doctor colleagues have lifted their game – from an average of 59.6% in 2011, they have now reached 70.2% (which, I am ashamed to say, is still 15.3% behind our much cleaner nursing colleagues).
Other reasons for the reduction include the implementation of protocols for the insertion, maintenance and early removal of central venous catheters and, possibly, the increased preference for peripherally inserted central catheters.
Staph aureus is only one of many bacteria that can invade the bloodstream but, for the moment, it is the only centrally monitored and reported bacterium in Australia. Gram-negative bacteria such as E. coli are increasingly common causes of serious infections and antibacterial resistance is arguably a more important problem in these organisms. We need to watch this medical space.
Nevertheless, the modest 6% reduction in the number of bloodstream infections indicates that something as banal as keeping your hands clean can make a real difference. The 100 hospitalised patients who didn't get a Staph aureus bloodstream infection last year will never know how lucky they were.
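The "modest 6% reduction" follows directly from the figures above – 1,621 infections in 2013-14, which was 100 fewer than the year before:

```python
# Checking the "6% reduction": 1,621 infections in 2013-14,
# which was 100 fewer than in 2012-13.
infections_2013_14 = 1_621
infections_2012_13 = infections_2013_14 + 100  # 1,721

reduction_pct = (
    (infections_2012_13 - infections_2013_14) / infections_2012_13 * 100
)
print(f"Reduction: {reduction_pct:.1f}%")  # → Reduction: 5.8%
```

That 5.8% rounds to the roughly 6% figure quoted in the report.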