
Monday, August 27, 2012

The fridge and the human food chain


I grew up in a house without a fridge, but then again I grew up in Ireland which, thanks to the Gulf Stream, suffers neither extreme cold in winter nor extreme heat in summer. In summer, we had a cold box which was kept in a shed and was used to store milk and butter. Today, it is impossible to imagine a fridge-free house in any developed country. The advent of mass refrigeration totally transformed the human food chain. Ice has of course long been a means of preserving perishable food, but it was not until the start of the 19th century that the concept of the mass use of ice began to be developed. Frederic Tudor, also known as the “Ice King”, effectively started what was to become a big industry: the harvesting of large quantities of ice from naturally frozen waters and its shipping across long distances[1]. As the demand for ice grew, so did the technology for harvesting it, and one of Tudor’s suppliers invented a horse-drawn ice plough that cut large uniform blocks of ice. Between 1827 and 1830, the price of ice fell from five cents to as low as half a cent. As with all natural resources, mother nature could help or hinder by way of winter temperatures, which meant that the price of ice fluctuated quite considerably. In the suburbs of Boston, ice was delivered daily to homes in fifteen-pound blocks by the ice man with his horse-drawn carriage. Between 1843 and 1856, Boston’s consumption of ice grew from 6,000 to 85,000 tons. The ice was placed in an ice box to preserve meats, milk and vegetables. Ice was also beginning to be used for the transport of food, and in 1851 the first refrigerated rail car shipped butter from Boston to New York. In contrast to the US, ice was not widely used in Europe, where housewives shopped daily for their food.
US consulates were asked about the likelihood of Europeans adopting ice boxes and the French consul replied thus: “In the great cities of Marseilles and Bordeaux butchering is done every day in winter and twice a day in summer, and the meat is cooked within a few hours of killing”. Indeed, in the great Parisian market, Les Halles, traders were generally forbidden from keeping one day’s stock of food to be sold the next day.
The idea of producing “winter-free” ice took off in the mid nineteenth century. Ice harvested from ponds was by no means pure and could be contaminated by debris, insects and dirt. Moreover, as the concept of food hygiene took off, the possibility that harvested ice might be contaminated by sewage was a worry. In addition, the ice-man who delivered the daily block of ice was described as a national joke - uncouth and dirty. While mechanical refrigeration was developed in the mid 1850s, the units were very large, very noisy and quite dangerous, particularly as fire hazards. The first domestic fridges were so large that they were installed in basements with the coolant piped up into the iceless box in the kitchen. By the turn of the twentieth century, the big manufacturing companies began to take an interest in domestic refrigeration: General Electric, Frigidaire (subsequently bought by General Motors) and Kelvinator (founded by ex-GM executives). The big switch was the move from a gas-driven system to an electric one. In 1927, GE released its first compact domestic fridge, the Monitor Top. The fridge still had to compete with the traditional ice box stocked with harvested ice, and the ice trade was not about to give up: the ice-man was given a uniform, ice-box insulation was improved and external portals were developed so that ice could be delivered into the home with no one present. A marketing war broke out between harvested ice and the fridge. The fridge manufacturers hit back, producing booklets with recipes and emphasising the value of refrigeration for summer fruits.
In 1930, a Frigidaire engineer developed a new gas, freon, which was non-toxic, non-flammable and required less pressure to achieve colder temperatures, leading to even smaller motors and thus more space for food storage. The ice-man vanished. By 1940, half of all US homes owned a fridge and today, a home without a fridge in the developed world is unthinkable. Refrigeration transformed the human food chain, allowing foods to be transported great distances to everyone’s economic gain. It transformed shops and shopping and, with that, lifestyles, liberating people from frequent and nearby shopping. Freshness became the consumer’s expectation with respect to food. All of this is exactly what the locavore movement would want to see reversed but, as is sung in the famous Irish song, Galway Bay: “They might as well go chasing after moonbeams or light a penny candle from a star”.
Ironically, the gas that came to transform refrigeration was later targeted as the main cause of the loss of the earth’s ozone layer, and thus alternatives to freon and other CFCs had to be developed. The Montreal Protocol was the international treaty that all countries signed up to in order to eliminate CFC gases, and it has been hailed as the most successful collaborative effort in relation to the environment.

The fridge lives on and is taken for granted in today’s food chain.


[1] Based on a chapter in “Freshness: a perishable history” by Susanne Freidberg, published by Harvard University Press 

Tuesday, August 21, 2012

The obesity epidemic re-visited


When we measure the prevalence of obesity, it is usually by way of a survey over a defined period. Thus the National Adult Nutrition Survey (NANS) here in Ireland was conducted over a 12-month period in 2009-2010. This gives us a single measure in time but it tells us nothing about the dynamics of obesity. In other words, any individual selected at random from within the NANS database might have acquired any excess weight at any time prior to being measured, and even a subject within a normal weight range might have been fat at some previous time. Thus, there is a growing literature on the use of birth cohorts to gain a more accurate picture of the dynamics of the present obesity epidemic. Such studies seek to examine the separate effects of age, period and cohort (APC studies). We know that as we get older, our body fat rises and our lean body mass falls. The question is, does this happen at an equal rate independent of year of birth or period of life? Birth cohorts are groups of subjects born in the same year or over a small number of years. They grow old together and they experience major period effects (war, depression, economic boom, technological innovation and so on) at the same age. A period effect is the consequence of some event which affects all ages when it occurs. Thus the advent of the internet is a period effect, in this sense, which all ages encounter at a given time.

All studies show that as we age, there is a gradual rise in our level of body fat. It rises gradually from the late teens up to the mid-fifties and then begins to decline. So age affects obesity. Looking at data from 1976 to 2000[1], there is a rise in the prevalence of obesity. The prevalence was about 12% in 1976. Some 8 years later, in 1984, it had increased to 15%. A further 8 years saw the obesity prevalence rise to 22%, and by 2000 it was 28%. So period has an effect. The key question is whether every birth cohort experiences the same effect of age in the development of obesity. The Reither paper shows that for birth cohorts starting in 1895, there was a gradual but constant rise in the percentage of the population that was obese, reaching a peak in the late 1920s. Then the rate of obesity declined until the late 1950s, when it took off again. Clearly, this paper shows that each birth cohort, or in lay terms each generation, experiences the age-related gain in weight in a different manner. A second paper[2] also looked at US birth cohorts and found a similar effect. This second paper presented data in a novel manner such that we can look at what the authors call iso-BMI lines. This is like looking at a map and examining the contours of mountainous regions, where specific heights appear on a continuous line: the closer the lines get, the steeper the climb. Thus we can plot birth year on one axis and age on the other and see the manner in which iso-BMI contours change. Those born in 1900 could expect to have a BMI of 21 by age 40. However, for those born ten years later, that BMI of 21 was achieved at 30 years of age. A decade later again, it was achieved at 20 years of age. If we look at a BMI of 25, this was achieved by age 20 for those born in 1980, by 30 in 1950, by 40 in 1940, by 50 in 1920 and by about 65 in 1900. The paper of Komlos & Brabec also shows that the rise in obesity came in waves.
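The "age at which a cohort reaches a given BMI" reading of an iso-BMI contour is just linear interpolation along a cohort's BMI-by-age trajectory. A minimal sketch of that calculation, using entirely hypothetical cohort data (not the published Komlos & Brabec figures):

```python
def age_at_bmi(ages, bmis, target):
    """Linearly interpolate the age at which a cohort's mean BMI
    first reaches `target`, given mean BMI measured at each age."""
    for (a0, b0), (a1, b1) in zip(zip(ages, bmis), zip(ages[1:], bmis[1:])):
        if b0 <= target <= b1:
            # linear interpolation between the two bracketing ages
            return a0 + (target - b0) * (a1 - a0) / (b1 - b0)
    return None  # target never reached in this age range

# Hypothetical trajectories (mean BMI by age), loosely shaped like the
# pattern described above -- illustrative values, NOT the published data.
cohort_a = ([20, 30, 40, 50], [19.0, 20.0, 21.0, 22.0])
cohort_b = ([20, 30, 40, 50], [21.0, 23.0, 25.0, 26.5])

print(age_at_bmi(*cohort_a, target=21.0))  # -> 40.0
print(age_at_bmi(*cohort_b, target=25.0))  # -> 40.0
```

An iso-BMI contour on the birth-year vs age plot is then simply this interpolated age computed for every birth cohort in the dataset.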
There was a surge between 1900 and 1920 and then a decline with a second surge in the late 1940s. This pattern of surges differed according to both race and sex with African American women showing the highest level of obesity at all times and the highest post WW2 surge.
These data tell us quite a lot about the dynamics of the obesity epidemic. Firstly, more recent generations experience greater levels of obesity, and at an earlier age, than the older generations. One could argue that this is a period effect: there is greater sedentary leisure time and a more abundant obesogenic environment. However, one could also look for a biological explanation and argue that uterine programming is involved, and that as each generation experiences obesity, it somehow enhances the likelihood of even greater obesity in the next generation. In my recent book, I called my chapter on obesity “A tsunami of lard”, citing other data from the US and Europe on the cyclical growth of obesity which began well over a century ago. In fact, pooled US and Canadian actuarial data from 34 insurance companies examined the health-related risks of obesity for 163,000 policyholders who were deemed overweight. Of the overweight men, those whose body fat was distributed around their abdomen had a higher risk of death than other overweight men. Among the severely overweight men, those with abdominal obesity had a 52 percent increased risk of death, whereas those without abdominal obesity had only a 35 percent greater risk compared to the general population[3].
So when the likes of David Kessler write in his book “The End of Overeating” that the incidence of obesity soared from the late 1980s, he is ignoring an indisputable fact: obesity more or less tracked the industrial revolution. This is of huge importance. If Kessler chooses to ignore the early origins of obesity, then he can be comfortable blaming the advent of foods high in salt, sugar and fat[4]. Others can comfortably blame the advent of high-fructose corn syrup, fast food and sugar-sweetened beverages. Well, consider Lucius Columella in his great work De Re Rustica (On Agriculture) in 65 AD, when he wrote: "The consequence is that ill health attends so slothful a manner of living; for the bodies of our young men are so flabby and enervated that death seems likely to make no change to them." It is a simple fact of life that obesity is one of the drawbacks of affluence, where food is abundant and where labour-saving devices (and slave labour) are accessible. This is not for one iota to play down the health consequences of obesity. It is simply of enormous importance in understanding its causes.


[1] Reither EN et al (2009) Soc Sci Med November;69 (10) 1439-1448
[2] Komlos J & Brabec M (2010) Am J Hum Biol;22, 631-638
[3] Kahn HS & Williamson DE (1995) Int J Obes Relat Metab Disord; 18 (10) 686-691
[4] That will be the subject of a future and highly critical blog on Kessler’s book. 

Tuesday, August 14, 2012

Book Review: "Panic on a plate"


On my holidays here in Kerry, in between the Olympics and the rain, I have been reading several books. Last week I blogged on the “Locavore’s Dilemma” and this week I’m going to cover the book “Panic on a Plate” by Rob Lyons. Rob runs a blog (www.paniconaplate.com) which is well worth connecting to. He is deputy editor of Spiked (www.spiked-online.com), which has the magnificent objective of being “...dedicated to raising the horizons of humanity by waging a culture war of words against misanthropy, priggishness, prejudice, Luddism, illiberalism and irrationalism in all their ancient and modern forms”. Wow, don’t you love it!

“Panic on a Plate - How Society Developed an Eating Disorder” is an excellent book for those who want to read behind the headlines of doom-and-gloom or shock-and-awe stories from the mass media on the dangers of the modern food chain. It is a small book, concise but covering all of the important myths that need to be addressed. The chapter which most interested me was the one entitled “How has our food changed?”. Here Rob Lyons looks deep into the past eating habits of the poorer social classes in London and also explores the middle classes’ assumptions about the diets of the poor. Thus he shows that today’s obsession with such issues is over a century old, and probably older if we had access to the right data. He cites a study of the diets of the poorer classes in 1901. The following are daily averages in grams, with the figures in brackets being the intake data of the Irish population from a recent national nutrition survey: 435 for bread (115), 104 for potatoes (71), 57 for sugar (75), 11 for cereals (57), 91 for meat (140), 114 for milk (195) and 20 for fats (14). The quantitative differences between today in Ireland and then in London hide several problems. The first concerns micronutrients, because we know that these limited growth and development in the early part of the 20th century to such an extent that mandatory food fortification was introduced, following the appalling rate of rejection of military recruits for the Boer war on the grounds of poor nutritional status. The second hidden problem was access to adequate cooking facilities. He cites studies of eating habits in 1914 by Maud Pember Reeves: “Another difficulty which dogs the path of the Lambeth housekeeper is, either that there is no oven or only a gas oven which requires a good deal of gas, or that the stove oven needs much fuel to heat it. Once a week for the Sunday dinner, the plunge is taken. Homes where there is no oven send out to the bakehouse on that occasion.
The rest of the week is managed on cold food, or the hard-worked saucepan and frying pan are brought into play.” I had never heard of a bakehouse! On the plus side was the fish and chip shop, and in the latter part of the 19th century, one study showed that “working class families in industrial areas use the fish and chip shop three or four times a week”. And of course, there were critics of these fish and chip shops. He quotes from J. K. Walton’s ‘Fish and Chips and the British Working Class, 1870-1940’: “...Critics alleged that fish and chips was indigestible, expensive and unwholesome. It was seen as a route to, or an aspect of, the ‘secondary poverty’ which arose from the incompetent or immoral misapplication of resources that would otherwise have been sufficient to sustain an adequate standard of living”. Apparently, in 1906, the English seaside town of Blackpool had 182 sweet shops, 79 fish and chip shops and 58 restaurants. The point that Rob Lyons is making is that it is a common myth to believe that “....there was a Golden Age in which everyone ate well, with lots of locally produced meat, fruit and vegetables, lovingly prepared at home....Eating out was rare and convenience food non-existent”.

A second issue which he covers in this area is that the present-day obsession of the well-heeled with the diets of the hoi polloi is nothing new. He cites George Orwell, who in The Road to Wigan Pier wrote of: “Parties of dames now have the cheek to walk into East End houses and give shopping lessons to the wives of the unemployed”. He also cites a government commission of 1904 on physical deterioration which decried the food choices of the poor: “It is no doubt that with greater knowledge, the poor might live more cheaply than they do but ..the tendency is to spend as little as possible on food”.

Lyons tackles the myth of ‘junk food’, and I particularly like his quotation of A. A. Gill of the Sunday Times in relation to organic food: “What I really mind about all this is that organic is making food into a class issue. Organic brings back this prewar system of posh, politically correct food for Notting Hill people and filthy, rubbish chemical food for filthy, rubbish chemical people. Either you are a nice organic person or you are a filthy, overweight McDonald’s person. I find that really obscene. It has very little to do with food and a lot to do with weird snobbery”. He has an excellent chapter on school meals and the efforts of the celebrity chef Jamie Oliver to right the wrongs of the school menu.

He writes about fear and how key words - ‘epidemic’, ‘time bomb’ and ‘plague’ - are used to stoke up fears in relation to food, and quotes the sociologist David Altheide: “Fear does not just happen; it is socially constructed and then manipulated by those who seek to benefit”.
I highly recommend this book to anyone with an interest in food and health and because Rob Lyons is a journalist, I’d particularly recommend it to that profession.

Tuesday, August 7, 2012

The Locavore's dilemma ~ review of an excellent book


“The Locavore’s Dilemma - In Praise of the 10,000-Mile Diet” is the title of a new book by Pierre Desrochers, a professor of geography at the University of Toronto, and his Japanese wife Hiroko Shimizu, who has worked at Johns Hopkins University. A locavore is someone who espouses the concept of eating locally produced food. The book ends with a quotation from the historian Paul Johnson, who wrote that history “is a powerful antidote to contemporary arrogance”, and it begins in this vein with a look back to 65 AD, when Lucius Junius Moderatus Columella wrote in De Re Rustica (On Agriculture): “Again and again I hear leading men of our state condemning now the unfruitfulness of the soil, now the inclemency of the climate for some seasons past, as harmful to crops; and some I hear reconciling the aforesaid complaints, as if on well-founded reasoning, on the ground that, in their opinion, the soil was worn out and exhausted by the over-production of earlier days and can no longer furnish sustenance to mortals with its old-time nourishment”. It is indeed reassuring to know that two millennia ago, we had the same old arguments by urban romantics about the fortunes of agriculture.

The first myth challenged by the authors is that local food nurtures social capital by creating a link between the producer and the purchaser. They go on to point out that each and every locality has its own growing conditions, from soil type to micro-climate. Some favour the growth of wheat, others soft fruit, others oil seeds or grass or vegetables. If locally grown food is to meet the nutritional needs of a sizable urban population, it must produce a variety of foods. Because some are less suited to the local climate, by definition, productivity will fall and prices will rise. The second myth is that by supporting local food, the local economy is stimulated. The authors point to data comparing supermarket prices to those of locally grown foods: by and large, the locally grown food costs about twice as much. When geography is not a feature of the buyer’s agenda, he or she can buy from the cheapest source at that time of the year. The cheapest source may be continents away, but economies of scale will mean that it is produced under very efficient agricultural systems and shipped in considerable bulk in a very cost-efficient manner. The authors cite David Cleveland, a professor at the University of California, Santa Barbara, talking of “two produce-laden trailers passing on the highway, one bringing food into the county; the other hauling it out”. The authors point out two glaring omissions in this emotive statement. One is that the county of Santa Barbara produces nine times more food than it needs, and what it does import comes from Chile, Argentina and New Zealand. The other is that if Santa Barbara did not export its food, the market price would collapse. Economics will dictate the majority of consumer purchases of foods.

The third myth they challenge is that locally grown foods are more environmentally friendly on the basis that they involve fewer “food miles”. However, the authors point out that the data on food energy reveal that just 4% of greenhouse gas equivalents are due to the journey of food from harvest to fridge. Some 83% is used in its production from seed to produce. They also point out that for the UK food chain, just 1% of “food miles” is due to air travel. Kenyan roses grown in the local sunshine emit one sixth the carbon dioxide of Dutch-grown roses. The fourth myth is the belief that locally grown food will increase food security, and it doesn’t take much to demolish that theory. When geography is not an issue, a climatic or pest event that reduces foods grown in one area simply means that competing parts of the globe can trade without them. When a community relies solely on local food, any major climatic or pest event can dramatically increase food insecurity. Myth 5 would have us believe that locally grown food is tastier, more nutritious and safer. Freshly picked food is believed to be tasty, but I’m not aware of any published studies to verify that opinion. As regards nutrition, as I have pointed out in several blogs, the nutritional quality of plants varies according to microclimate and growing conditions, not according to whether the food is locally or organically grown, and endless studies have refuted the myth that local or organic foods are nutritionally superior. As regards food safety, the bigger the producer, the bigger the investment in food safety.

Modern agriculture has evolved to embrace all manner of new technologies in much the same way that other industries have: energy, transport, health, communication and so on. Nobody is hankering back to the good old days of the typewriter, the Model T Ford or the telegram. But food is an exception here, and it is reflected in the fact that scientists who consult for Boeing, Google or Apple are held in high regard, but those who consult for the food giants are shunned from expert committees. The authors of this excellent book point out that modern agriculture has developed to where it is because it is successful and competitive. Locavores dream of a food chain that is economically unsustainable. They quote Michael Pollan, the guru of locavorism, who argues that at the end of World War 2, the US home garden sector was providing 40% of food consumed in the US. As the authors point out, someone else was responsible for the remaining 60%, and the minute the opportunity arose, the romantic but highly burdensome home garden was abandoned in favour of the cheaper, more convenient and more accessible supermarket.
Some time back, my blog covered food insecurity in the US; with a declining economic environment, the number of citizens who are challenged to provide adequate nutrition will rise to perhaps one in six or seven. The authors write: ”Michael Pollan’s mantra ‘pay more, eat less’ may seem eminently sensible to the upper middle class consumers who can always cut back on the cappuccinos in order to spend eight dollars for a dozen eggs and $3.90 for a pound of Frog Hollow peaches”. Locavorism is for the privileged and the selfish.

Monday, July 30, 2012

Fibre, farts and faeces


Some 2,500 years ago, Hippocrates declared that: “Wholemeal bread cleans out the gut and passes through as excrement. White bread is more nutritious as it makes less feces”.  For a long time fibre was regarded as a non-nutrient and of no importance. Thomas Richard Allinson[1] was a medical doctor who advocated vegetarianism and wholemeal bread and in 1892 he was struck off the medical register in the UK for his non-establishment views. In 1936 the American Medical Association formally condemned the use of bran, a view which would dominate for the next three decades.
Fibre was of course of enormous importance in animal nutrition, a science that was far more developed than human nutrition. To understand why, it’s best to first consider fibre from the botanical viewpoint. When a seedling finally gets to the sunshine of boundless photosynthetic energy, it is green and very leafy. As it grows, its stem moves from a predominantly photosynthesising function to the dual function of physical support and the transport of minerals to the growing leafy top. Its chemistry changes and it develops strong cell walls for support; these cell walls are very fibrous, growing less green as the harvest nears. The leafy top gives rise to the seeds, which will also be wrapped in a fibrous outer husk. For animal nutritionists, the higher the fibre content of forage, the lower its energy value. The most abundant carbohydrate on the planet is cellulose which, like starch, is a polymer of glucose. The cellulose polymer is organised slightly differently from starch, such that the starch-digesting enzymes in our gut cannot break down cellulose. Animals that depend on a cellulose-rich diet have developed a very sophisticated association with gut bacteria that can break down this cellulose: the bacteria use some of the cellulose as energy and the leftovers are used by the host animal. It’s a win-win situation. In most of the grazing animals, such as cows and sheep, the bacteria are found at the start of the digestive tract, in the rumen, while in humans, who don’t have to forage on cellulose, our gut bacteria are at the end of the gut, in the colon.
Some 30 years after the AMA had downgraded bran to junk status, a small number of distinguished medical doctors challenged conventional wisdom and fibre entered the lexicon of human nutrition. The most notable was Denis Burkitt, a County Fermanagh man educated at Trinity College Dublin. He raised huge interest in fecal matters in relation to diseases of the digestive tract, ranging from constipation to colon cancer. Higher-fibre diets lead to higher fecal outputs, and this can be achieved in either of two ways. Some fibres, bran in particular, are pretty resistant to any form of microbial degradation but have a huge capacity to absorb water. Thus, if taken in sufficient quantity, they promote a high fecal output of undigested fibre with a very high water content, leading to a soft stool. Indeed, Burkitt used to discuss two types of fecal stools, floaters and sinkers, the former on low fibre diets and the latter on high fibre diets. The second route to increased stool output is to provide the colonic microflora with a very fermentable fibre such that the biomass increases, leading to more frequent excretion of softer stools. However, if the fibre is very water soluble and fermented very rapidly, as happens for example with the small sugar-like fibres in beans, flatulence is promoted, leading to the ditty: “Beans, beans good for your heart. The more you eat, the more you fart. The more you fart the better you feel so beans, beans at every meal”!

The evidence that dietary fibre reduces the risk of colon cancer is strong: a recent meta-analysis of all studies in this field has shown that colon cancer risk is 17% lower with three servings of wholegrain per day, and that increasing fibre intake from 5 g/d to 30 g/d led to a decline of about 30% in the risk of colon cancer. Cereal fibre is way ahead of fruit and vegetable fibre in reducing this risk[2]. This cancer accounts for about 10% of all cancers and is of course among the most preventable, not simply by diet, but also by endoscopic screening.
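Dose-response figures like these are usually summarised on a log-linear scale, so the quoted numbers can be converted between increments. A small sketch of that arithmetic, taking the blog's own figures (roughly 30% lower risk, i.e. a relative risk of 0.70, over a 25 g/d increase) and backing out the implied relative risk per extra 10 g/d; the log-linear assumption is the standard convention in such meta-analyses, not something stated explicitly above:

```python
import math

def rr_per_increment(rr_total, dose_change, increment=10.0):
    """Under a log-linear dose-response, convert an overall relative
    risk over `dose_change` g/d into the RR per `increment` g/d."""
    return math.exp(math.log(rr_total) * increment / dose_change)

# ~30% lower risk (RR 0.70) going from 5 to 30 g/d, a 25 g/d change
rr10 = rr_per_increment(0.70, dose_change=30 - 5)
print(round(rr10, 3))  # -> 0.867, i.e. ~13% lower risk per extra 10 g/d
```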

Fibre has taken its rightful place in human nutrition, but now there is a slightly different twist in that the colonic bacteria have raced to the top of biomedical fashion. We started with the genome (the collective term for all our genes), then we moved to the next level, the proteins, which we called the proteome, and soon after we had the metabolome. Now we have the microbiome. In my recent book, I devote an entire chapter to this topic because it is very interesting and potentially important. However, I fear that it will go down the route of many great biomedical fashions and deliver very little. Many of the studies in this field are built on bizarre animal models which involve the breeding of germ-free animals without any gut bacteria. Many others are built on association studies which claim that some pattern of colonic microbial flora is biologically superior to some other pattern. However, few human intervention studies have been conducted and, within that limited literature, the data are not unanimous. Yet the hype rolls on. To some extent, and I have been in this business a long time, it reminds me of the anti-oxidant theory, a veritable bio-Klondike: it rose, dominated and fizzled out. Beware of theories that explain everything, well, almost everything.

An adequate intake of dietary fibre is important. I remain a healthy sceptic about the promise of the microbiome.


[1] His flour company still flourishes, if you excuse the pun
[2] Aune D et al (2011) BMJ 343 Dietary fibre, whole grains, and risk of colorectal cancer: systematic review and dose-response meta-analysis of prospective studies

Monday, July 23, 2012

Iodine - now a problem in developed countries


When we think of hunger, we think of the gaunt and emaciated children in sub-Saharan Africa. There is another form of hunger, known as “hidden hunger”, which requires the services of a biochemist to detect deficiency symptoms in blood or urine. The three main nutrients of hidden hunger are iron, whose deficiency leads to anaemia; vitamin A, leading to blindness; and iodine, leading to goiter. Today’s blog focuses on the latter[1]. Iodine is required as an element within the molecular structure of thyroid hormone, and this hormone plays an essential role in every cell, where it regulates metabolic rate. Iodine also has a major role in brain development during pregnancy and the very early years of life. If iodine intakes are inadequate, a condition known as goiter emerges, in which the thyroid gland in the neck swells as it seeks to extract every last drop of iodine in blood for thyroid hormone synthesis. Most importantly, iodine deficiency in pregnancy will lead to impaired cognitive ability and, if severe, to severe mental retardation, a condition known as cretinism.
Iodine is a rare element in the earth’s crust and is highly water-soluble, and is thus concentrated in the oceans. It is deposited on land in rainwater, and the further the distance from the sea, the lower the impact of this iodine-rich rainfall. Inland mountainous regions are particularly susceptible to low soil iodine levels, as rainwater and streams wash the iodine away, returning it to the sea. Thus there are large tracts of land, known as goiter belts, where soil iodine is sufficiently low to create a high prevalence of goiter. Iodine deficiency remains one of the great global nutritional challenges, with the WHO estimating that 1.9 billion people, almost one third of humanity, had inadequate intakes of iodine over the period 1994 to 2006, with approximately 20 million cases of impaired mental function. Great progress has been made in reducing this figure through the fortification of salt with iodine. In contrast to the improving situation in developing countries, inadequate iodine intake is now becoming an issue in developed countries.
Without any planned intervention, the problem of inadequate iodine intakes in certain developed countries was considerably reduced from the 1960s onwards because of the introduction of iodized cleansing substances (iodophors) into modern milking parlours and also the introduction of iodine-enriched salt licks to ensure adequate iodine intake by dairy cows. Milk became one of the main sources of iodine, and milk consumption was encouraged, particularly in children. However, the use of iodophor sanitizing agents in dairy farming has declined dramatically in recent times, and the use of table salt, including iodized salt, has fallen, given the negative nutritional messages about salt and health. In a recent survey of the iodine status of UK schoolgirls using urinary iodine levels, 51% were deemed to have mild iodine deficiency, 16% had moderate deficiency and 1% had severe deficiency. Just about one third had adequate iodine status[2]. The UK is now ranked eighth in the top ten iodine-deficient countries according to the International Council for Control of Iodine Deficiency Disorders. Two studies have shown that if young children with poor nutritional status receive iodine supplements, their cognitive function rises significantly.

From a nutrition policy point of view, the issue of iodine adequacy is of greatest importance in women of childbearing age. The standard method of assessing iodine status in populations is to take a single urine sample from each person and compare the average value to WHO reference ranges for mild, moderate and severe iodine deficiency. However, whereas that approach can tell us whether a population overall has a problem with iodine status, it cannot tell us who within that population truly has a problem. A recent paper has shown that ten separate urine samples from one individual are needed to assess that individual’s iodine status. Thus, at present, there is a need to ensure that all women of childbearing age receive an adequate iodine intake. If, as the data indicate, inadequate iodine status is a serious issue, especially as it relates to early-life cognitive function, then iodine must become a very important public health nutrition issue.

Ironically, the many pregnant women who shift to organic foods in the belief that this will help ensure as healthy a baby as possible will see a very significant fall in iodine intake. Organic animal production greatly restricts the use of mineral and vitamin supplements in animal feeds. A recent survey of the iodine content of milk from organic and conventional farms showed that organic milk is 42% lower in iodine than conventional milk, and milk accounts for almost half of UK iodine intake. Indeed, pregnant women should be counseled to avoid organic milk.


[1] This blog was prompted by an excellent paper presented to the recent Nutrition Society meeting by Sarah Bath, a PhD student at the University of Surrey.
[2] Vanderpump MPJ et al (2011). Iodine status of UK schoolgirls: a cross-sectional survey. The Lancet, 377, 2007-12.

Monday, July 16, 2012

Nature, nurture and the control of food intake


Some weeks ago, my blog was entitled “Ever seen a fat fox?”, the gist of which was that whereas biology can tell us a lot about the control of food intake in animals, in humans, with our large prefrontal cortex and highly social existence, social factors may be far more important than the biology. I return to this theme today in light of some intriguing research carried out jointly by researchers at the Universities of Washington and Toronto[1]. The research centers on the phenomenon of restrained eating, so let me first explain what this means. As I have previously pointed out, overweight and obesity are not usually the outcome of any conscious decision to get fat (Sumo wrestlers excluded). Restrained eating, on the other hand, is a very conscious decision, as the name implies, to count calories in order not to gain weight or to maintain weight loss. Restrained eaters have frequently experienced some weight gain and have then made a lifetime commitment to being fussy eaters.

We know that obesity is highly heritable: based on studies of identical and non-identical twins, the figure for heritability is as high as 90%. So it seemed a reasonable question to ask whether brain function in response to food cues in restrained eating is under a similar level of genetic control. The researchers therefore used the University of Washington’s register of twins. In 2006, the twins took part in a health survey that involved completing a ten-question restrained-eating questionnaire, effectively examining the subjects’ concerns about chronic dieting and weight gain. The researchers then mined the database to identify sets of identical twins that differed radically in their restrained eating patterns: effectively, each twin pair had one serious restrained eater and one with no restraint as regards eating. By using identical twins, the study immediately eliminated any genetic difference, since identical twins are genetic clones. The participants were all female and in their early thirties. The twins were shown photographs of “fattening” and “non-fattening” foods and then underwent a brain scan using a technique known as functional Magnetic Resonance Imaging (fMRI). This was then repeated after the volunteers had consumed a milkshake.
The first test showed that the twin with the restrained-eating personality showed much higher brain activity following exposure to pictures of high-calorie “fattening” foods than the twin with no tendency to food restraint. Specifically, the areas of brain activity most affected were the amygdala (involved in the emotional processing of external cues), the occipital lobe (involved in visual processing) and the right thalamus (a relay for sensory information). Interestingly, there was no difference between restrained eaters and non-restrained eaters in the prefrontal cortex. This is somewhat surprising to a non-neurobiologist such as myself, since this is the area of the brain associated with all the higher complex brainpower of humans, and since humans are the only species that exhibits restrained eating, one could be forgiven for thinking that the prefrontal cortex would be involved. Things get more complex after the subjects had consumed the milkshake. In the restrained eaters, all those areas of enhanced activity seen in the first test with pictures of high-calorie “fattening” foods were greatly reduced. Strangely, however, the sight of the non-fattening foods now switched on several parts of the brain, an effect not observed in the non-restrained twins.
The authors point out that the fMRI data are consistent with other findings showing that restrained eaters are highly sensitive to external cues, which in this study led to enhanced activity in the three brain regions mentioned. They see this as a conflict between the appeal of food and the desire to restrain eating. After the milkshake, this enhanced activity diminished, which the authors say may have been “cognitively driven”. I take that to mean that the intake of the milkshake “woke them up”, in a sense, so as to inhibit their natural desires.
For me, the big deal in this paper is that an acquired habit regarding the regulation of food intake, restrained eating, is not entirely genetically driven. Moreover, it is not associated with the prefrontal cortex, which makes man man and mice mice. Previous studies by the same authors in a very large twin cohort found that restrained eating was partly heritable, with heritability estimates in the range of 35% to 50%. So some element of restrained eating is inherited and another learned. Such is the utter complexity of the regulation of food intake in humans. So, the next time you are at a conference and someone starts to discuss the regulation of food intake in rats and mice, take out the smart phone and do your e-mails!



[1] Schur EA et al (2012). Acquired differences in brain responses among monozygotic twins discordant for restrained eating. Physiology & Behavior, 105, 560-567.