
Risk–benefit analysis of mineral intakes: case studies on copper and iron

Published online by Cambridge University Press:  22 September 2010

Susan J. Fairweather-Tait*
Affiliation:
School of Medicine, University of East Anglia, Norwich NR4 7TJ, UK
Linda J. Harvey
Affiliation:
School of Medicine, University of East Anglia, Norwich NR4 7TJ, UK
Rachel Collings
Affiliation:
School of Medicine, University of East Anglia, Norwich NR4 7TJ, UK
*Corresponding author: Professor Susan Fairweather-Tait, email s.fairweather-tait@uea.ac.uk, fax +44 1603 593752

Abstract

Dietary reference values for essential trace elements are designed to meet requirements with minimal risk of deficiency and toxicity. Risk–benefit analysis requires data on habitual dietary intakes, an estimate of variation and effects of deficiency and excess on health. For some nutrients, the range between the upper and lower limits may be extremely narrow and even overlap, which creates difficulties when setting safety margins. A new approach for estimating optimal intakes, taking into account several health biomarkers, has been developed and applied to selenium, but at present there are insufficient data to extend this technique to other micronutrients. The existing methods for deriving reference values for Cu and Fe are described. For Cu, there are no sensitive biomarkers of status or health relating to marginal deficiency or toxicity, despite the well-characterised genetic disorders of Menkes and Wilson's disease which, if untreated, lead to lethal deficiency and overload, respectively. For Fe, the wide variation in bioavailability confounds the relationship between intake and status and complicates risk–benefit analysis. As with Cu, health effects associated with deficiency or toxicity are not easy to quantify, therefore status is the most accessible variable for risk–benefit analysis. Serum ferritin reflects Fe stores but is affected by infection/inflammation, and therefore additional biomarkers are generally employed to measure and assess Fe status. Characterising the relationship between health and dietary intake is problematic for both these trace elements due to the confounding effects of bioavailability, inadequate biomarkers of status and a lack of sensitive and specific biomarkers for health outcomes.

Type
Symposium on ‘Nutrition: getting the balance right in 2010’
Copyright
Copyright © The Authors 2010

Abbreviations: DRV, dietary reference values; EAR, estimated average requirement; RNI, reference nutrient intake; UL, upper level

Background

Risk assessment of non-nutrient constituents of food, for example, chemical additives, follows the well-established process of hazard identification, hazard characterisation, exposure assessment and risk characterisation. Critical decisions are made on causality (exposure and effect), the nature of the effect and its health impact, the strength of any reported associations, inter-species extrapolation if relevant and the applicability of the data used in the model to populations(Reference Renwick, Flynn and Fletcher1). Risk management and communication require translation of the risk assessor's findings for adoption at a local, regional, national or an international level(2).

Risk assessment was introduced into the process of deriving dietary reference values for essential trace elements approximately 15 years ago by Walter Mertz(Reference Mertz3). All mineral nutrients must have an intake range that is safe in relation to toxicity but adequate to meet nutrient requirements. This is reflected in the dose–response curve (Fig. 1), with upper limits and estimated average requirements (EAR) determined from toxicological and nutritional data, respectively. The EAR is the best estimate of requirement that can be obtained from published data, and by definition 50% of the population will fall above and 50% below this figure. The distribution of requirements is assumed to be normal, and therefore the upper limit that covers the needs of 97·5% of the population is 2 sd above the EAR, and the lower limit, which covers the needs of only 2·5%, is 2 sd below the EAR. These statistically derived values are based on probability modelling; although it is not yet possible to determine an individual's exact requirement for a nutrient, this may well change if and when personalised nutrition becomes a reality.
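Expressed as formulae (a sketch of the convention described above, with the standard deviation written as CV × EAR):

$$\text{RNI} = \text{EAR} + 2\,\text{sd} = \text{EAR}\,(1 + 2\,\text{CV}), \qquad \text{lower limit} = \text{EAR} - 2\,\text{sd} = \text{EAR}\,(1 - 2\,\text{CV})$$

With the commonly assumed CV of 15%, the RNI is therefore 1·3 times the EAR.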

Fig. 1. Dose–response curves derived from evidence-based (open boxes) and derived (shaded box) data used to derive dietary reference values using European Union (EU), Institute of Medicine (IOM) and United Nations University (UNU) terminology respectively: AR, average requirement; EAR, estimated average requirement; ANR, average nutrient requirement; PRI, population reference intake; RDA, recommended daily allowance; INLx, individual nutrient level for x% of the population; UL, tolerable upper intake limit; UL, upper level; UNL, upper nutrient level.

The range between the upper and lower limits may be a narrow zone or may even overlap, mainly because the upper limit is a derived value, and the uncertainty factors used in toxicological assessment differ from those used to estimate the risk of nutrient deficiency(4, Reference Schümann5). Recommendations on upper levels (UL) are based on the lowest observed adverse effect level or the no observed adverse effect level, divided by an uncertainty factor that reflects the degree (or lack) of confidence in the data. Commonly used uncertainty factors include inter-individual variability, animal-to-human extrapolation, availability of less than chronic data, use of the lowest observed adverse effect level instead of the no observed adverse effect level and an incomplete database, each of which may merit an uncertainty factor of up to 10(Reference Olin6); when combined, the overall factor could therefore reach 1000 or more. The resulting UL may then fall below the population reference intake, namely the quantity that meets the needs of 97·5% of the population, calculated as 2 sd above the EAR. These situations can be dealt with by applying lower than conventionally used uncertainty factors. A modelling approach was developed by the ILSI Europe Expert Group on Risk–Benefit Analysis for Nutrients Added to Foods(Reference Renwick7) that integrates the % incidence of deficiency with the % incidence of toxicity over a range of intakes and thereby applies risk–benefit analysis to the intake–response relationship(Reference Renwick, Flynn and Fletcher1).
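A minimal sketch of this arithmetic is given below. The mineral, the NOAEL and the individual uncertainty factors are hypothetical, chosen only to show how a combined uncertainty factor can push a derived UL below the population reference intake; the functions are not taken from any of the cited reports.

```python
from math import prod


def upper_level(noael_mg_d, uncertainty_factors):
    """UL = NOAEL (or LOAEL) divided by the product of the individual
    uncertainty factors, as described in the text above."""
    return noael_mg_d / prod(uncertainty_factors)


def population_reference_intake(ear_mg_d, cv=0.15):
    """PRI/RNI = EAR + 2 sd, with the sd expressed as a CV of the EAR."""
    return ear_mg_d * (1 + 2 * cv)


# Hypothetical mineral: NOAEL of 10 mg/d, with uncertainty factors for
# inter-individual variability (3) and a less-than-chronic database (2).
ul = upper_level(10, [3, 2])               # ~1.7 mg/d
pri = population_reference_intake(1.5)     # 1.95 mg/d
print(ul, pri, ul < pri)                   # the derived UL falls below the PRI
```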

Risk–benefit analysis of mineral nutrients requires information on the impact of high and low intakes on health, together with dietary intake data for population groups and measures of their variation (Table 1). If systems biology applied to nutrition delivers on its current promises, data describing phenotype and genotype will become available(Reference de Graaf, Freidig and De Roos8) and can then be used to derive mineral intakes that optimise health on an individual basis (personalised nutrition). The type of analysis (qualitative or quantitative) depends on the most likely public health issues, for example, the presence of deficiency disorders or concern about toxicity. It is also important to consider how risk–benefit managers will use the information, for example, dietary advice for sub-groups of the population or regulatory issues relating to individual foods or mineral fortificants. Food fortification is a special case in which micronutrients have to be classified according to a safety margin that relates to the size of the interval between requirements and the UL, and it has been suggested that all minerals, together with retinol, vitamin D, niacin and folate, should be classified as Category A (⩽5-fold range between requirement and UL), namely ‘to be handled with care’(Reference Meltzer, Aro and Andersen9). In addition to the size of the safety margin, there may be additional problems with interactions between micronutrients, for example, the negative impact of Zn on Fe and Cu metabolism(Reference Collins, Prohaska and Knutson10).

Table 1. Information required for risk–benefit analysis of mineral intakes
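As a purely illustrative sketch of the safety-margin classification mentioned above: the ⩽5-fold criterion for Category A comes from Meltzer et al.(Reference Meltzer, Aro and Andersen9), but the function and the example values are hypothetical, and categories for wider margins are not defined in this paper.

```python
def fortification_category(requirement_mg_d, upper_level_mg_d):
    """Classify a micronutrient by the ratio of its UL to its requirement.
    Only Category A (<=5-fold margin, 'to be handled with care') is
    defined in the text; wider margins are simply reported as such."""
    margin = upper_level_mg_d / requirement_mg_d
    if margin <= 5:
        return f"Category A (margin {margin:.1f}-fold): to be handled with care"
    return f"wider safety margin ({margin:.1f}-fold)"


# Hypothetical requirement/UL pairs for illustration only
print(fortification_category(0.9, 4.0))   # ~4.4-fold -> Category A
print(fortification_category(0.9, 10.0))  # ~11-fold -> wider margin
```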

The relationship between the intake of minerals and health is illustrated in Fig. 2. For Fe and Cu, there are many confounders that complicate the relationship, such as limited and varying bioavailability, a particular problem for Fe(Reference Hurrell and Egli11), the absence of sensitive and specific biomarkers of status, a particular problem for Cu(Reference Harvey, Ashton and Hooper12), and the lack of specific early biomarkers of health for either mineral. Although Fe deficiency causes low Hb concentration (anaemia), Hb cannot be used as a biomarker on its own because it is non-specific; Fe deficiency should be confirmed using biomarkers such as serum/plasma ferritin, transferrin receptor and, possibly, hepcidin(Reference Zeleniuch-Jacquotte, Zhang and Dai13). There is also the added complication that most diet-related chronic diseases have a multifactorial aetiology, and the effects of diet on these diseases are a consequence of habitual intake of a number of nutrients/dietary components. This underpins the move towards providing recommendations based on the nutrient density of foods/diets(Reference Hallberg14), and also the need for more biomarkers of health to be developed(Reference Micheel and Ball15).

Fig. 2. Intake–status–health relationships and examples of confounding factors for Cu and Fe.

The outcome of risk assessment of eight trace elements, including Fe and Cu, was reported in 2003 as a narrative review(Reference Goldhaber16). The information used included effects of deficiency and evidence for its existence in human subjects, the relationship between intake and status, and the effects of high intakes (toxicity). A generic decision tree for micronutrient risk assessment for UL was developed, including questions about intake, bioavailability, absorption–distribution–metabolism–excretion data and effects of high intakes(Reference Renwick and Walker17), and it should be possible to extend this process to cover the risk of deficiency.

A recent quantitative technique for estimating optimal intakes has been proposed(Reference Renwick, Flynn and Fletcher1), which defines the incidence of deficiency and adverse effects above and below the reference nutrient intake (RNI), also referred to as the population reference intake and the RDA, and extends the intake–incidence data to provide a range of estimates, instead of a single point, thereby providing greater flexibility for risk managers. The model requires data on the incidence of a response at one or more levels of intake and a suitable CV that represents inter-individual variation in populations. The proposed default CV is 15% for requirements and 45% for toxicity, and this model has been applied to selenium(Reference Renwick, Dragsted and Fletcher18). The average requirement for selenium derived by the Food and Nutrition Board of the US Institute of Medicine(19) used data for the plasma glutathione peroxidase response to different intakes of selenium; the mean from the best two studies was 45 μg/d (Fig. 3(a)). In relation to high intakes, four studies provided information on clinical signs of selenosis (hair and nail loss and mottling of teeth) and from these the tolerable upper intake level was set at 400 μg/d (Fig. 3(b)). However, in order to take other health effects into consideration, a more complex model is required, as illustrated in Fig. 3(c), but this is difficult for a risk manager to use; therefore, tabulated incidence data were generated from the model to provide the % incidence of glutathione peroxidase deficiency, frank selenosis and intermediate health biomarkers for selenium intakes ranging from very low (20 μg/d) to very high (1600 μg/d). The application of this novel approach to other micronutrients was explored, but unfortunately there were insufficient data, and at present this does not appear to be a feasible way forward (A. Flynn, personal communication). In the absence of a common generic risk–benefit tool, case studies on Cu and Fe are presented in which current knowledge about the relationships between intake, status and health is summarised.

Fig. 3. Population distribution for (a) glutathione peroxidase activity with low selenium intakes and (b) frank selenosis with high selenium intakes. (c) Population distribution modelling for other possible effects resulting from different levels of intake of selenium. (Reprinted with permission from Renwick et al.(Reference Renwick, Dragsted and Fletcher18).)
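A minimal sketch of the incidence modelling described above, assuming normally distributed thresholds for simplicity: the average requirement (45 μg/d) and the default CV of 15% for requirements and 45% for toxicity are taken from the text, whereas the mean selenosis threshold used here (1200 μg/d) is a placeholder for illustration, not a value from the source.

```python
from scipy.stats import norm


def incidence_curves(intake, mean_requirement, mean_toxic_threshold,
                     cv_requirement=0.15, cv_toxicity=0.45):
    """% of a population whose requirement exceeds a given intake (risk of
    deficiency) and % whose toxic threshold falls below it (risk of adverse
    effects), assuming normal distributions of both thresholds."""
    deficiency = 1 - norm.cdf(intake, loc=mean_requirement,
                              scale=cv_requirement * mean_requirement)
    toxicity = norm.cdf(intake, loc=mean_toxic_threshold,
                        scale=cv_toxicity * mean_toxic_threshold)
    return 100 * deficiency, 100 * toxicity


# Selenium-like illustration over the intake range discussed in the text
for intake_ug_d in (20, 55, 400, 1600):
    defic, tox = incidence_curves(intake_ug_d, 45, 1200)
    print(f"{intake_ug_d} ug/d: deficiency {defic:.1f}%, adverse effects {tox:.1f}%")
```

Tabulating such paired incidences over a grid of intakes reproduces, in spirit, the kind of output generated for risk managers in the selenium example.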

Case study on copper

When the traditional method for deriving dietary reference values is used, the paucity of data on the distribution of intake requirements of Cu for normal health requires an assumption about the inter-individual variation (CV); this was taken to be 15% in the Institute of Medicine(19) and the Nordic(20) recommendations. The EAR requires the existence of a dose–response relationship between one or more biomarkers of status (which are predictive of optimal health) and dietary intake. For Cu, there are no sensitive and specific biomarkers of status(Reference Harvey, Ashton and Hooper12), and therefore a combination of indicators was used and compared with the alternative factorial approach. In the latter, the quantity of Cu required to maintain balance (and support growth, where appropriate) is multiplied by a ‘bioavailability factor’, which varies with total Cu intake because of homeostatic adaptive responses. The factorial estimate was slightly lower than the EAR derived from biochemical indicators of status(19). The RDA for adults, 0·9 mg/d, was calculated by adding twice the CV (15%) to the EAR derived from biochemical dose–response data(19, 20), although other expert committees suggest different figures, e.g. 0·8 mg/d(21, Reference Doets, de Wit and Dhonukshe-Rutten22), which serves to illustrate the uncertainty surrounding Cu requirements for optimal health.
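Working backwards from the figures quoted above (the implied adult EAR of about 0·7 mg/d is inferred from the 0·9 mg/d RDA rather than quoted directly from the source):

$$\text{RDA} = \text{EAR}\,(1 + 2 \times 0.15) \approx 0.7\ \text{mg/d} \times 1.3 \approx 0.9\ \text{mg/d}$$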

When using traditional approaches, the safe UL is calculated from the no observed adverse effect level divided by an uncertainty factor. In the US(19) and Europe(4), liver damage was selected as the critical endpoint on which to base an upper limit, and there was agreement that the no observed adverse effect level was 10 mg/d; however, the suggested uncertainty factors of 1 and 2 result in daily upper limits of 10 mg in the US and 5 mg in Europe, respectively. To date, however, there has been only one reported case of clinical liver toxicity resulting from excessive oral Cu intake (dietary supplements) in an individual with no known genetic predisposition(Reference O'Donohue, Reid and Varghese23, Reference O'Donohue, Reid and Varghese24).
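Written out, the derivation above is simply:

$$\text{UL}_{\text{US}} = \frac{10\ \text{mg/d}}{1} = 10\ \text{mg/d}, \qquad \text{UL}_{\text{EU}} = \frac{10\ \text{mg/d}}{2} = 5\ \text{mg/d}$$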

Cu deficiency in adults is associated with impairment of a range of Cu-dependent enzymes, leading to symptoms such as anaemia, hypercholesterolaemia and weakened immune function(Reference Danzeisen, Araya and Harrison25). Recently, long-term marginal Cu deficiency has also been implicated in the development of adult-onset myeloneuropathy(Reference Goodman, Bosch and Ross26), further supporting the proposed association between prolonged sub-optimal Cu intakes and adverse health effects in later life, e.g. osteoporosis and CVD(Reference Stern, Solioz and Krewski27). Potentially devastating and irreversible consequences may result from Cu deficiency during pregnancy, including impaired development of the brain, heart, skeleton and blood vessels in the fetus(Reference Gambling and McArdle28).

Indicators that have been used by dietary reference values (DRV) panels to establish the EAR include serum Cu, ceruloplasmin, erythrocyte superoxide dismutase and platelet Cu concentration(19, 20). However, while frank Cu deficiency is relatively straightforward to diagnose, identifying marginal deficiency remains problematic. The tight homeostatic regulation of Cu concentrations in the bloodstream generally limits major perturbations in concentration to the extremes of dietary intake, thus potentially restricting biomarker responses to either severe deficiency or overload states. A recent systematic review assessed the evidence for biomarkers of Cu status(Reference Harvey, Ashton and Hooper12) and highlighted the lack of robust data, identifying only 16 suitable human studies in which data on a total of 16 potential Cu biomarkers were provided. Analysis of the data suggested that serum Cu, assessed from a total of seven studies, was the most useful biomarker of Cu status at the population level since it reflected changes in status in both depleted and replete individuals, albeit a more limited response in the latter. However, due to limited data, it was not possible to determine the relative usefulness of serum Cu in different population groups, thus uncertainty remains concerning its reliability under different circumstances. Various diseases and inflammatory states, as well as pregnancy, generally increase the concentrations of blood-based Cu indicators, which further undermines the use of this biomarker. The review assessed plasma Cu independently from serum Cu and highlighted the possibility that the former may also be responsive to Cu repletion in depleted individuals following supplementation. However, as with serum Cu there was only limited evidence that this marker reflects severe depletion, and even less that it reflects Cu status in replete individuals. The insufficiency of data meant that overall no conclusions could be drawn on the usefulness of plasma Cu as a status biomarker.

Several articles have been published recently in which both ‘traditional’ and ‘novel’ biomarkers of Cu status have been reviewed in a non-systematic way(Reference Danzeisen, Araya and Harrison25, Reference Harvey and McArdle29, Reference Olivares, Méndez and Astudillo30), and there is a significant lack of evidence to support the use of the majority of putative indicators. It has been suggested that novel biomarkers such as peptidylglycine α-amidating monooxygenase, lysyl oxidase and the Cu chaperone for superoxide dismutase may prove useful for evaluating Cu status(Reference Danzeisen, Araya and Harrison25, Reference Harvey and McArdle29, Reference Olivares, Méndez and Astudillo30). However, the systematic review did not identify any studies containing data relevant to the evaluation of these potential biomarkers(Reference Harvey, Ashton and Hooper12).

Despite investigations into a range of modelling strategies, the classic U-shaped dose–response curve (Fig. 1) for Cu has not been well defined(Reference Stern, Solioz and Krewski27). The modelling process is hampered by the lack of status biomarkers coupled with a paucity of robust health outcome data at the extremes of both deficiency and toxicity. In 2002, a working group of biologists, epidemiologists and toxicologists initiated the construction of a database to include information from both animal and human studies of Cu deficiency and excess, which would facilitate these analyses(Reference Krewski, Chambers and Birkett31). Subsequently, attempts have been made to characterise the curve using categorical regression to model Cu dose–response relationships(Reference Krewski, Chambers and Stern32). This statistical approach has the advantage of allowing multiple studies and endpoints to be modelled simultaneously by organising the data into categories of severity of toxic response(Reference Dourson, Teuschler and Durkin33, Reference Haber, Strickland and Guth34). However, it is apparent that the database currently contains insufficient data, particularly in the marginal deficiency and excess ranges, to permit derivation of a complex model(Reference Krewski, Chambers and Birkett31). Ongoing updates to the database may result in a dose–response model facilitating the determination of a recommended range for oral Cu intake.
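A toy sketch of the categorical regression idea described above is given below, using an ordered-logit (proportional odds) model: studies are pooled by coding each observation with an ordered severity category and regressing on dose. The dose and severity values are invented for illustration, and the statsmodels OrderedModel class is used only as one convenient implementation; no claim is made that it matches the software used by Krewski et al.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical pooled observations: log10 Cu dose (mg/d) and an ordered
# severity score (0 = no effect, 1 = mild effect, 2 = marked adverse effect)
log_dose = np.array([-0.5, 0.0, 0.3, 0.7, 1.0, 1.3, 1.5, 1.7, 2.0, 2.3])
severity = pd.Series(pd.Categorical([0, 0, 0, 1, 0, 1, 1, 2, 1, 2],
                                    categories=[0, 1, 2], ordered=True))
exog = pd.DataFrame({"log10_dose": log_dose})

# Proportional-odds model: one slope for dose, with estimated thresholds
# separating the severity categories
model = OrderedModel(severity, exog, distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.params)

# Predicted probability of each severity category at 10 mg/d (log10 dose = 1)
print(model.predict(result.params, exog=pd.DataFrame({"log10_dose": [1.0]})))
```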

The understanding of Cu transport and homeostasis in human subjects is largely due to the characterisation of two rare genetic diseases of Cu metabolism(Reference Harris35, Reference Mercer36). Menkes disease is an X-linked syndrome (1/300 000 live male births) presenting as profound systemic Cu deficiency, whereas Wilson's disease manifests as Cu toxicity in 1/90 000 individuals(Reference Llanos and Mercer37). Both conditions result from defects in genes encoding similar Cu ATPase pumps. Mutations in the ATP7A gene result in Menkes disease, in which Cu cannot be transported across the gastrointestinal mucosa or the blood–brain barrier(Reference Kaler38). Affected individuals have severe neurological, skeletal and soft tissue abnormalities and rarely survive beyond childhood(Reference Kaler39). Daily Cu histidine injections are the only viable treatment, but these only prevent the severest neurological problems.

Wilson's disease is caused by mutations in the ATP7B gene, which codes for a transport protein (ATPase) responsible for hepatobiliary Cu excretion(Reference Llanos and Mercer37). Although some 300 mutations of ATP7B have been described, most cases of Wilson's disease result from a small number of mutations specific to a given population(Reference Ala, Walker and Ashkan40). Diagnosis of Wilson's disease generally occurs between infancy and early adulthood, and individuals routinely present with symptoms related to liver toxicity, accompanied by psychiatric and/or neurological problems(Reference Scheinberg and Sternlieb41). Treatment of the disease involves restriction of dietary Cu intake coupled with Cu chelation therapy(Reference Roberts and Schilsky42, Reference Das and Ray43).

The rarity of genetic conditions of Cu metabolism means that no allowance is made for these groups when setting dietary reference values. Despite a lack of robust evidence, it has been suggested that heterozygote carriers of mutations for Wilson's disease may be at increased risk of Cu toxicity compared with the general population(44, Reference Brewer45). Other potentially ‘at risk’ groups have been identified through sporadic cases of idiopathic Cu toxicosis in children ingesting increased amounts of Cu in milk and water. Clusters of cases have been reported in the Pune area of India (Indian childhood cirrhosis) and the Tyrolean region of Europe (Tyrolean infantile cirrhosis); these result from genetic defects of Cu metabolism that predispose individuals to Cu toxicity when dietary Cu exposure is increased(Reference Müller, Müller and Feichtinger46–Reference Tanner48).

Ideally, robust risk assessment models for evaluating Cu dose–response relationships in individuals need to incorporate both genotype and phenotype data. For the foreseeable future, this goal is likely to remain an aspiration as risk–benefit analyses for Cu are still hampered by insufficient deficiency and toxicity population data.

Case study on iron

When the dietary supply of Fe is insufficient to meet physiological needs, or intakes are excessively high, there are well-documented health sequelae(49). Unlike Cu, there are no active excretory mechanisms for Fe, and body levels are maintained within a narrow range to avoid deficiency or overload through changes in the efficiency of absorption(49). However, there appears to be no clear relationship between Fe intake and body Fe(Reference Blanck, Cogswell and Gillespie50, Reference Harvey, Armah and Dainty51) because of various factors, collectively referred to as modifiers of bioavailability(Reference Hurrell and Egli11). There are a number of dietary enhancers and inhibitors of absorption, such as phytate, meat and ascorbic acid, and host-related factors, including serum/plasma ferritin and hepcidin, a recently discovered protein that modulates Fe absorption(Reference Roe, Collings and Dainty52). This complicates the evaluation of intake–status–health relationships and risk–benefit analysis in relation to dietary intake.

EAR for Fe have traditionally been based on factorial estimates taking into account Fe losses and requirements for growth and maintenance and applying a bioavailability factor. Requirements are not normally distributed in pre-menopausal women because of the effects of menstrual blood loss, which vary considerably between individuals and are skewed(Reference Harvey, Armah and Dainty51), and dietary reference values have to be derived taking this into account. In setting the UK EAR, the expert panel chose to use the 75th centile of blood losses, yet acknowledged that this would still result in an RNI that might not be sufficient for women with the greatest menstrual losses(53). Within many of the DRV reports, special consideration is given to groups with higher Fe requirements, greater losses or low intakes, beyond the usual distinction of age groups and reproductive life stage. For Fe, these include vegetarians, blood donors, populations with high parasite burden and athletes. Although these groups are acknowledged to be at high risk of Fe deficiency, no DRV panel has yet set specific requirements for any of these groups. There is some evidence that obese people are at increased risk of Fe deficiency(Reference Zimmermann, Zeder and Muthayya54) probably due to the increased expression of hepcidin associated with low-grade chronic inflammation(Reference del Giudice, Santoro and Amato55, Reference Aeberli, Hurrell and Zimmermann56), although no DRV body has yet taken this into consideration. Within ‘normal healthy’ populations inter-individual variability in Fe losses is thought to be about 15%, excluding menstruating women(53).

In calculating DRV, each expert panel must also apply a value or values to take into consideration the variability of absorption from the diet. A typical absorption figure of 15–18% is applied to Western diets (by the UK DRV and US Dietary Reference Intakes panels, respectively), which normally contain meat and high levels of ascorbic acid. The bioavailability factors applied to the different life stages may vary based on the increased efficiency of absorption (e.g. during pregnancy) or different types of diet, e.g. infant diets(Reference Otten, Pitzi Hellwig and Meyers57). The WHO, however, uses a range of bioavailability factors to cover a variety of global diets: 5, 10, 12 and 15%. Even a 10 percentage-point difference in bioavailability can have a considerable impact on the RNI: for adult males, the RNI set by the WHO for a diet with high bioavailability (15%) is 9·1 mg/d, compared with 27·4 mg/d for a low-bioavailability diet (5%)(58).
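A small sketch of this factorial arithmetic is shown below; the absorbed requirement is inferred from the WHO figures quoted above rather than taken directly from the report.

```python
def dietary_rni(absorbed_requirement_mg_d, bioavailability):
    """Convert an absorbed (physiological) Fe requirement into a dietary
    RNI by dividing by the assumed fractional absorption."""
    return absorbed_requirement_mg_d / bioavailability


# Working backwards from the WHO figure for adult men (9.1 mg/d at 15%
# bioavailability), the implied absorbed requirement is about 1.37 mg/d
absorbed = 9.1 * 0.15
for bio in (0.15, 0.12, 0.10, 0.05):
    print(f"{bio:.0%} bioavailability -> RNI ~ {dietary_rni(absorbed, bio):.1f} mg/d")
# The 5% diet gives ~27.3 mg/d, close to the 27.4 mg/d quoted above
```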

Unlike Cu, there are a number of good biomarkers of Fe status, although infection/inflammation confounds some of the indices, notably serum ferritin. Therefore, it is common practice to measure more than one biomarker. Normally a measure of body Fe stores is made, in addition to a basic marker of function, usually Hb. At present, the generally accepted optimum approach for detecting and measuring the degree of Fe deficiency is the ratio of serum transferrin receptor to serum ferritin, the so-called body Fe method(Reference Cook, Flowers and Skikne59). The ideal scenario is for individuals to have sufficient Fe to meet physiological requirements, with a small surplus for periods of dietary inadequacy and/or increased requirements, such as acute blood loss (e.g. blood donation) and infections (when Fe absorption is significantly reduced). In women of child-bearing age, the stores should be sufficient to maintain pregnancy by ensuring that the fetus is not Fe deprived, and although absorptive efficiency increases significantly during the second trimester and even more in the third(Reference Barrett, Whittaker and Williams60), additional Fe will be required in mothers with low Fe stores at the beginning of pregnancy.
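A sketch of the body Fe calculation follows. The regression constants are those reported by Cook et al.(Reference Cook, Flowers and Skikne59); the unit conventions assumed here (serum transferrin receptor in mg/l, ferritin in μg/l) and the example values are illustrative assumptions and should be checked against the specific assays used.

```python
import math


def body_iron_mg_per_kg(stfr_mg_l, ferritin_ug_l):
    """Body iron estimate from the serum transferrin receptor:ferritin ratio
    ('body Fe method'). Negative values indicate a tissue iron deficit;
    positive values indicate iron stores."""
    ratio = (stfr_mg_l * 1000) / ferritin_ug_l
    return -(math.log10(ratio) - 2.8229) / 0.1207


# Hypothetical examples: an iron-replete adult vs an iron-depleted one
print(body_iron_mg_per_kg(stfr_mg_l=5.0, ferritin_ug_l=50))   # ~ +6.8 mg/kg
print(body_iron_mg_per_kg(stfr_mg_l=12.0, ferritin_ug_l=6))   # ~ -4.0 mg/kg
```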

Fe deficiency is the most common micronutrient deficiency in the world(58) and is typically classified into three main stages according to status and function biomarkers. Fe deficiency anaemia is the most severe stage of Fe deficiency and occurs when Fe stores are sufficiently depleted to lower Hb levels below a critical threshold. Anaemia, however, is not a health outcome; it describes a particular Fe status. The health effects associated with anaemia are tiredness, pallor, fatigue and impaired work performance(Reference Gibson61). These health outcomes are difficult to measure and are normally subjective and therefore do not lend themselves well to the assessment of risk through the prevalence of the health outcome in a population. Therefore, anaemia tends to be used as a health indicator for risk modelling. There are, however, anaemias of chronic disease(Reference Weiss and Goodnough62) that are not related to Fe intake, absorption or losses, and so this biomarker should be used with caution or the cause of the anaemia should first be established. Although there are fewer chronic diseases linked to low Fe intake than for some other micronutrients, the potential impact of high anaemia prevalence within a community or population can be extremely detrimental to both work output (productivity) and to birth outcome. This, coupled with its widespread prevalence within some populations, ensures that Fe deficiency remains at the forefront of global public health.

Other consequences linked to Fe deficiency include impaired immune function, thermoregulation and cognitive development(Reference Gibson61). Fe is an important component in the structure of a number of enzymes and acts as a co-factor in numerous other metabolic reactions, and therefore the consequences of even mild Fe deficiency can be far reaching(20). The impact on cognitive function appears to be of particular concern during periods of rapid growth in infants and adolescents, and evidence suggests that if sub-optimal Fe status occurs over a prolonged period, the potential damage can be irreversible. Beyond the status markers associated with anaemia, however, there are few reliable early health indicators that can be used to assess the impact of Fe deficiency.

When considering the risks associated with high intakes of Fe, although there are some suggestive associations between high body Fe levels and chronic disease(49), the data are inconclusive and therefore do not provide a useful evidence base. Expert groups from the UK(63) and Europe(4) concluded that there were insufficient data to establish a safe UL for dietary Fe, although the UK expert group provided a guidance figure for Fe supplements of 17 mg/d; this was based on adverse gastrointestinal effects that are associated with Fe supplements but not food Fe. The Institute of Medicine(19) recommends a tolerable upper intake level (UL) of 45 mg/d based on gastrointestinal distress as the critical adverse effect, but its recommendation covers all sources of Fe: food, supplements and water. Data from the National Health and Nutrition Examination Survey (NHANES)(Reference Blanck, Cogswell and Gillespie50) indicate that the highest intake from food and supplements at the 90th percentile for any life stage and gender group (excluding pregnancy and lactation) is 34 mg/d, for men aged 51 years and older. This is well below the UL, and it was therefore concluded that the risk of adverse effects of Fe from dietary sources is low. Among pregnant and lactating women, 50–75% have intakes above 45 mg/d, but Fe supplementation is usually supervised as part of pre-natal and post-natal care, so the higher intakes should not pose a health risk. Interestingly, the WHO recommendation for premenopausal women consuming a low-bioavailability diet (5%) is 58·8 mg/d, which exceeds the UL set by the Institute of Medicine(19).

Other potential consequences of high Fe intake or overload include possible interactions with other minerals that share common pathways or similar activity, such as Zn, Cu and Ca. However, there is limited in vivo evidence to suggest any real adverse effects, either of high Fe intake on the body stores of other micronutrients, or of high intake of other minerals on Fe status(49). Fe also has the potential to cause oxidative damage within the body, but again, there is limited evidence to suggest that high Fe intakes over time will cause any health problems associated with oxidative stress. The tight regulation of Fe absorption prevents most of the body, other than the gastrointestinal tract, from being exposed to excess Fe.

Acute toxicity appears to be relatively rare, particularly in adults. Children are far more likely both to consume high single doses and to suffer the effects associated with them, which typically include vomiting and disruption of the gastrointestinal tract. Supplement use appears to be relatively limited, with pregnant and pre-menopausal women being the most common users. The NHANES III study found that about 16–20% of adults regularly consumed Fe in a supplement, and this equated to a median supplemental intake of just 1 mg/d(Reference Otten, Pitzi Hellwig and Meyers57). There are also populations at risk of excessive Fe intakes due to traditional practices such as cooking and brewing in vessels that contain high amounts of Fe, e.g. the Bantu people of sub-Saharan Africa(20).

The strict regulation of dietary Fe absorption and its low bioavailability make the accumulation of Fe in the body (Fe overload) and chronic toxicity relatively rare. Overload is most likely to be caused by hereditary haemochromatosis (see later), a genetic disorder that affects the regulation of Fe absorption. Fe overload is also a potential problem for patients receiving parenteral nutrition, which bypasses the regulatory systems(49). Although overload is rare, once excess Fe is in the body the lack of an excretory system makes regulation impossible without medical intervention. The main health implication of excess body Fe is the deposition of large amounts of the mineral in the major organs over time, particularly the liver, heart and pancreas.

Risk modelling for Fe deficiency on a population basis requires the selection of one or two key variables. On an individual basis, however, the picture is extremely complicated, with a long list of potential confounding factors for each person, including dietary patterns, physical activity, Fe status, menstruation, current infection/inflammatory state and genotype. The last of these can be extremely influential in determining an individual's risk of deficiency or excess. In women, the age of menopause is partially heritable(Reference Varea, Bernis and Montero64) and the extent of menstrual blood loss is highly heritable(Reference Rybo and Hallberg65). The amount of Fe absorbed from the diet is determined by multiple factors, and a large proportion of the intra-individual and inter-individual variation has yet to be explained.

A number of specific polymorphisms have already been identified that influence Fe absorption and metabolism. The most widely studied is HFE-linked haemochromatosis, in which Fe absorption is not regulated appropriately, leading to Fe overload. The condition is a serious public health problem in some populations, as the frequency within European populations is thought to be between 0·2 and 1·0%(Reference Janssen and Swinkels66, Reference Wrede, Hutzler and Bollheimer67). However, penetrance is relatively low, with some homozygotes for the C282Y mutation showing no clinical signs of Fe overload(Reference Janssen and Swinkels66). The C282Y/C282Y genotype accounts for the majority of cases of Fe overload due to haemochromatosis, although about 25% of cases have other genotypes, including 14% with H63D(Reference Burke, Imperatore and McDonnell68); what exactly determines the degree of penetrance is not yet established. Heterozygotes do not appear to absorb Fe inappropriately(Reference Roe, Heath and Oyston69) and so are not at increased risk of Fe overload. The treatment for haemochromatosis is usually a combination of avoidance of high-Fe foods and, more importantly, periodic reduction of stores through blood removal. The condition cannot be managed through dietary means alone, and therefore it is not practical to include patients with this particular disorder in risk–benefit analysis at the population level, despite the relatively high occurrence of the polymorphism. Instead, individuals must be treated through specific risk-reduction measures.

In conclusion, despite the fact that severe Cu deficiency and overload are fully characterised conditions in the genetic abnormalities of Menkes and Wilson's disease, respectively, the lack of biomarkers in healthy individuals has precluded a robust risk assessment. For Fe, there is no active excretory mechanism and absorptive efficiency is central to the control of homeostasis. This results in fluctuating values for bioavailability, which is also dependent on various dietary factors. Further research is required before a comprehensive model of risk–benefit analysis can be applied to dietary Fe intake. High-priority topics include the development of more robust biomarkers to precisely define status, and markers of function to assess the impact of Fe deficiency and excess, as well as relevant early biomarkers of health. The impact of obesity and effects of genotype on Fe requirements and metabolism, and the implication of high Fe intakes for risk of chronic disease also require further evaluation. Despite an apparent wealth of data on the metabolic consequences of deficiency and excess for both Fe and Cu, in each case there is no adequate approach for modelling risk assessment across the entire deficiency–toxicity range.

Acknowledgements

Preparation of this manuscript was partially funded by the EURRECA Network of Excellence (www.eurreca.org) which is financially supported by the Commission of the European Communities, specific Research, Technology and Development (RTD) Programme Quality of Life and Management of Living Resources, within the Sixth Framework Programme, contract no. 036196. This report does not necessarily reflect the Commission's views or its future policy in this area. The authors declare no conflict of interest. S.F.-T. was responsible for the first draft and final editing of the manuscript, L. H. contributed to the case study on copper and R. C. to the case study on iron.

References

1. Renwick, AG, Flynn, A, Fletcher, RJ et al. (2004) Risk–benefit analysis of micronutrients. Food Chem Toxicol 42, 1903–1922.
2. European Food Safety Authority (2007) Summary Report EFSA Scientific Colloquium 6. Risk–Benefit Analysis of Foods. Tabiano, Italy: EFSA.
3. Mertz, W (1995) Risk assessment of essential trace elements: new approaches to setting recommended dietary allowances and safety limits. Nutr Rev 53, 179–185.
4. EFSA (2006) Tolerable Upper Intake Levels for Vitamins and Minerals. Scientific Committee on Food, Scientific Panel on Dietetic Products, Nutrition and Allergies. Tabiano, Italy: European Food Safety Authority. ISBN 92-9199-014-0.
5. Schümann, K (2006) Dietary reference intakes for trace elements revisited. J Trace Elem Med Biol 20, 59–61.
6. Olin, SS (1998) Between a rock and a hard place: methods for setting dietary allowances and exposure limits for essential minerals. J Nutr 128, 364S–367S.
7. Renwick, AG (2006) Toxicology of micronutrients: adverse effects and uncertainty. J Nutr 136, 493S–501S.
8. de Graaf, AA, Freidig, AP, De Roos, B et al. (2009) Nutritional systems biology modeling: from molecular mechanisms to physiology. PLoS Comput Biol 5, e1000554 (Epublication ahead of print 26 November 2009).
9. Meltzer, HM, Aro, A, Andersen, NL et al. (2003) Risk analysis applied to food fortification. Public Health Nutr 6, 281–290.
10. Collins, JF, Prohaska, JR & Knutson, MD (2010) Metabolic crossroads of iron and copper. Nutr Rev 68, 133–147.
11. Hurrell, R & Egli, I (2010) Iron bioavailability and dietary reference values. Am J Clin Nutr 91, 1461S–1467S.
12. Harvey, LJ, Ashton, K, Hooper, L et al. (2009) Methods of assessment of copper status in humans: a systematic review. Am J Clin Nutr 89, 2009S–2024S.
13. Zeleniuch-Jacquotte, A, Zhang, Q, Dai, J et al. (2007) Reliability of serum assays of iron status in postmenopausal women. Ann Epidemiol 17, 354–358.
14. Hallberg, L (1981) Bioavailable nutrient density: a new concept applied in the interpretation of food iron absorption data. Am J Clin Nutr 34, 2242–2247.
15. Institute of Medicine (2010) Evaluation of Biomarkers and Surrogate Endpoints in Chronic Disease [Micheel, CM and Ball, JR, editors]. Washington, DC: National Academies Press.
16. Goldhaber, SB (2003) Trace element risk assessment: essentiality vs. toxicity. Regul Toxicol Pharmacol 38, 232–242.
17. Renwick, AG & Walker, R (2008) Risk assessment of micronutrients. Toxicol Lett 180, 123–130.
18. Renwick, AG, Dragsted, LO, Fletcher, RJ et al. (2008) Minimising the population risk of micronutrient deficiency and over-consumption: a new approach using selenium as an example. Eur J Nutr 47, 17–25.
19. Food and Nutrition Board, Institute of Medicine (2001) Dietary Reference Intakes. A Report of the Panel on Micronutrients. Washington, DC: National Academies Press.
20. Nordic Nutrition Council (2004) Nordic Nutrition Recommendations 2004: Integrating Nutrition and Physical Activity, 4th ed. Copenhagen, Denmark: Nordic Council of Ministers.
21. Reports of the Scientific Committee for Food (1993) Nutrient and Energy Intakes for the European Community. Luxembourg: Commission of the European Communities.
22. Doets, EL, de Wit, LS, Dhonukshe-Rutten, RA et al. (2008) Current micronutrient recommendations in Europe: towards understanding their differences and similarities. Eur J Nutr 47, 17–40.
23. O'Donohue, JW, Reid, MA, Varghese, A et al. (1993) Micronodular cirrhosis and acute liver failure due to chronic copper self-intoxication. Eur J Gastroenterol 5, 561–562.
24. O'Donohue, J, Reid, M, Varghese, A et al. (1999) A case of adult chronic copper self-intoxication resulting in cirrhosis. Eur J Med Res 4, 252.
25. Danzeisen, R, Araya, M, Harrison, B et al. (2007) How reliable and robust are current biomarkers for copper status? Br J Nutr 98, 676–683.
26. Goodman, BP, Bosch, EP, Ross, MA et al. (2009) Clinical and electrodiagnostic findings in copper deficiency myeloneuropathy. J Neurol Neurosurg Psychiatry 80, 524–527.
27. Stern, BR, Solioz, M, Krewski, D et al. (2007) Copper and human health: biochemistry, genetics, and strategies for modelling dose–response relationships. J Toxicol Environ Health Part B 10, 157–222.
28. Gambling, L & McArdle, HJ (2004) Iron, copper and fetal development. Proc Nutr Soc 63, 553–562.
29. Harvey, LJ & McArdle, HJ (2008) Biomarkers of copper status: a brief update. Br J Nutr 99, Suppl. 3, S10–S13.
30. Olivares, M, Méndez, MA, Astudillo, PA et al. (2008) Present situation of biomarkers for copper status. Am J Clin Nutr 88, 859S–862S.
31. Krewski, D, Chambers, A & Birkett, N (2010) The use of categorical regression in modelling copper exposure–response relationships. J Toxicol Environ Health Part A 73, 187–207.
32. Krewski, D, Chambers, A, Stern, BR et al. (2010) Development of a copper database for exposure–response analysis. J Toxicol Environ Health Part A 73, 208–216.
33. Dourson, ML, Teuschler, LK, Durkin, PR et al. (1997) Categorical regression of toxicity data: a case study using aldicarb. Regul Toxicol Pharmacol 25, 121–129.
34. Haber, L, Strickland, JA & Guth, DJ (2001) Categorical regression analysis of toxicity data. Comments Toxicol 7, 437–452.
35. Harris, ED (2000) Cellular copper transport and metabolism. Annu Rev Nutr 20, 291–310.
36. Mercer, JF (2001) The molecular basis of copper transport diseases. Trends Mol Med 7, 64–69.
37. Llanos, RM & Mercer, JF (2002) The molecular basis of copper homeostasis and copper-related disorders. DNA Cell Biol 21, 259–270.
38. Kaler, SG (1998) Diagnosis and therapy of Menkes syndrome, a genetic form of copper deficiency. Am J Clin Nutr 67, 1029S–1034S.
39. Kaler, SG (1996) Menkes disease mutations and response to early copper histidine treatment. Nat Genet 13, 21–22.
40. Ala, A, Walker, AP, Ashkan, K et al. (2007) Wilson's disease. Lancet 369, 397–408.
41. Scheinberg, IH & Sternlieb, I (1996) Wilson disease and idiopathic copper toxicosis. Am J Clin Nutr 63, 842S–845S.
42. Roberts, EA & Schilsky, ML (2003) A practice guideline on Wilson disease. Hepatology 37, 1475–1492.
43. Das, SK & Ray, K (2006) Wilson's disease: an update. Nat Clin Pract Neurol 2, 482–493.
44. National Research Council (2000) Copper in Drinking Water [Committee on Copper in Drinking Water, editor]. Washington, DC: National Academy Press.
45. Brewer, GJ (2000) Editorial. J Trace Elem Exp Med 13, 249–254.
46. Müller, T, Müller, W & Feichtinger, H (1998) Idiopathic copper toxicosis. Am J Clin Nutr 67, 1082S–1086S.
47. Müller, T, Feichtinger, H & Berger, H (1996) Endemic Tyrolean infantile cirrhosis: an ecogenetic disorder. Lancet 347, 877–880.
48. Tanner, MS (1998) Role of copper in Indian childhood cirrhosis. Am J Clin Nutr 67, 1074S–1081S.
49. Scientific Advisory Committee on Nutrition (2010) Iron and Health. London: The Stationery Office (In the Press).
50. Blanck, HM, Cogswell, ME, Gillespie, C et al. (2005) Iron supplement use and iron status among US adults: results from the third National Health and Nutrition Examination Survey. Am J Clin Nutr 82, 1024–1031.
51. Harvey, LJ, Armah, CN, Dainty, JR et al. (2005) Impact of menstrual blood loss and diet on iron deficiency among women in the UK. Br J Nutr 94, 557–564.
52. Roe, MA, Collings, R, Dainty, JR et al. (2009) Plasma hepcidin concentrations significantly predict interindividual variation in iron absorption in healthy men. Am J Clin Nutr 89, 1088–1091.
53. Committee on Medical Aspects of Food Policy (1991) Report on Health and Social Subjects 41: Dietary Reference Values (DRVs) for Food Energy and Nutrients for the UK. Report of the Panel on DRVs. London: The Stationery Office.
54. Zimmermann, MB, Zeder, C, Muthayya, S et al. (2008) Adiposity in women and children from transition countries predicts decreased iron absorption, iron deficiency and a reduced response to iron fortification. Int J Obes (Lond) 32, 1098–1104.
55. del Giudice, EM, Santoro, N, Amato, A et al. (2009) Hepcidin in obese children as a potential mediator of the association between obesity and iron deficiency. J Clin Endocrinol Metab 94, 5102–5107.
56. Aeberli, I, Hurrell, RF & Zimmermann, MB (2009) Overweight children have higher circulating hepcidin concentrations and lower iron status but have dietary iron intakes and bioavailability comparable with normal weight children. Int J Obes (Lond) 33, 1111–1117.
57. Institute of Medicine (2006) Iron. In Dietary Reference Intakes: The Essential Guide to Nutrient Requirements, pp. 329–339 [Otten, JJ, Pitzi Hellwig, J and Meyers, LD, editors]. Washington, DC: The National Academies Press.
58. World Health Organization and Food and Agriculture Organization of the United Nations (2004) Vitamin and Mineral Requirements in Human Nutrition. Geneva: World Health Organization.
59. Cook, JD, Flowers, CH & Skikne, BS (2003) The quantitative assessment of body iron. Blood 101, 3359–3364.
60. Barrett, JF, Whittaker, PG, Williams, JG et al. (1994) Absorption of non-haem iron from food during normal pregnancy. BMJ 309, 79–82.
61. Gibson, RS (2005) Principles of Nutritional Assessment, 2nd edn. Oxford: Oxford University Press.
62. Weiss, G & Goodnough, LT (2005) Anemia of chronic disease. N Engl J Med 352, 1011–1023.
63. Expert Group on Vitamins and Minerals (2003) Safe Upper Levels for Vitamins and Minerals. London: Food Standards Agency.
64. Varea, C, Bernis, C, Montero, P et al. (2000) Secular trend and intrapopulational variation in age at menopause in Spanish women. J Biosoc Sci 32, 383–393.
65. Rybo, G & Hallberg, L (1966) Influence of heredity and environment on normal menstrual blood loss. A study of twins. Acta Obstet Gynecol Scand 45, 389–410.
66. Janssen, MCH & Swinkels, DW (2009) Hereditary haemochromatosis. Best Pract Res Clin Gastroenterol 23, 179–183.
67. Wrede, CE, Hutzler, S, Bollheimer, LC et al. (2004) Correlation between iron status and genetic hemochromatosis (codon C282Y) in a large German population. Isr Med Assoc J 6, 30–33.
68. Burke, W, Imperatore, G, McDonnell, SM et al. (2000) Contribution of different HFE genotypes to iron overload disease: a pooled analysis. Genet Med 2, 271–277.
69. Roe, MA, Heath, AL, Oyston, SL et al. (2005) Iron absorption in male C282Y heterozygotes. Am J Clin Nutr 81, 814–821.