Aphids, carotenoids and photosynthesis

When it comes to finding food for dinner, plants have it pretty easy. They can use light from the Sun to convert carbon dioxide into sugar, literally making food out of thin air. You might think that a superpower like photosynthesis should have evolved in animals too, but it doesn’t come easily.

Plants only came by this ability by engulfing a bacterium that could photosynthesize. Over time, the bacterium became part of the plant’s cells, allowing the ability to be passed on to the next generation. Photosynthesis is so complicated that it is unlikely to develop from scratch again.

However, news published this month suggests that aphids, through their unique ability to produce pigments known as carotenoids, may be the first animal able to photosynthesize. There are several reasons why this is not entirely true: spider mites can also produce carotenoids; the aphids don’t appear to be making sugars, so they would be phototrophic rather than photosynthetic; and a species of sea slug is already known to photosynthesize after taking chloroplasts and genes from the algae it eats.

So what is the news then? Aphids have used their carotenoid-making ability to construct a rudimentary light-harvesting system – and that’s pretty cool.

Pea aphids are remarkable in many ways: not only might they be able to harvest energy from light, but they can also be born pregnant.
Credit: Shipher Wu (photograph) and Gee-way Lin (aphid provision), National Taiwan University

The vast majority of animals have no need to produce their own carotenoids because they can be obtained through diet. Our bodies do need carotenoids, though: they are used to produce vitamin A and retinal, the light-absorbing chemical that allows us to see. Carotenoids are easily found in our diet, and one of them – beta-carotene – is what gives carrots their colour (and carrots are what gave carotenoids their name). [Note the link between carrots and being able to see better. It’s not just an old wives’ tale.]


Beta-carotene can act as a molecular wire across the cell membrane, be converted into retinal, and act as an anti-oxidant. All the more reason to eat your carrots (if you’re not an aphid).

The scientists in this study investigated the aphids to find out why they would produce their own carotenoids. Never mind how they gained the ability (it was likely horizontal gene transfer from yeast, but that’s a different story); the point is that producing carotenoids is energetically expensive, so there should be a good reason for doing it. And it turns out there is.

The membrane of a cell is a fatty layer that acts as a barrier between the carefully balanced inner workings of the cell and the outside world. Because carotenoids are non-polar, they are found only in this membrane. The alternating double bonds along their carbon backbone also mean that electrons can travel freely along the length of the molecule, so carotenoids can transport electrons from one side of the membrane to the other.

Carotenoids absorb light in the blue region of the spectrum, making them look orange/yellow (hence the colour of carrots). When a carotenoid molecule absorbs light, an electron is ‘excited’, allowing it to jump to a neighbouring molecule outside the membrane. Gaining an electron in this way is known as reduction, and it can start a chain of reduction and oxidation reactions. These can lead to an imbalance of charges across the membrane, with electrons on one side and protons (H+ ions) on the other. This imbalance is what drives energy production. Think of it as a hydroelectric dam: water (or protons) builds up on one side of the dam (or membrane), and when the channel is opened we can capture the energy of the water (or protons) rushing through. In this case, the turbine is the remarkable ATP synthase complex, which produces ATP, the body’s ‘currency’ of energy.
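To get a feel for the numbers, here is a rough back-of-envelope sketch of how much energy that proton ‘dam’ stores. The membrane voltage and pH difference below are illustrative textbook values for energy-converting membranes, not measurements from the aphid study:

```python
import math

# Free energy released per mole of protons crossing a membrane.
# All values are illustrative assumptions, not from the aphid paper.
F = 96485.0        # Faraday constant, C/mol
R = 8.314          # gas constant, J/(mol K)
T = 310.0          # roughly body temperature, K
delta_psi = 0.15   # electrical potential across the membrane, V (assumed)
delta_pH = 0.5     # pH difference across the membrane (assumed)

# Proton-motive force as a molar free energy:
# an electrical term plus a concentration (pH) term.
dG_per_mol_H = F * delta_psi + 2.303 * R * T * delta_pH   # J/mol

# Making ATP costs roughly 50 kJ/mol under cellular conditions, so the
# ATP synthase 'turbine' must let several protons through per ATP made.
atp_cost = 50e3    # J/mol, approximate
protons_per_atp = math.ceil(atp_cost / dG_per_mol_H)

print(f"Energy per mole of protons: {dG_per_mol_H / 1e3:.1f} kJ/mol")
print(f"Protons needed per ATP: at least {protons_per_atp}")
```

With these assumed values, each mole of protons carries about 17 kJ, so at least three protons must flow through ATP synthase for every ATP produced – in the dam analogy, several bucketfuls of water per turn of the turbine.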

Not bad for a bug…

Bird’s Eye View: How to see magnetic fields

The ancient Greeks, like many people since, were confounded and fascinated by the migration of birds. Homer recognised that cranes “flee the winter and the terrible rains and fly off to the world’s end”. Meanwhile, Aristotle wrongly asserted that each year summer redstarts would transform into robins come winter, as the two species were never seen in Greece together. In modern times, we’ve come to appreciate the vast distances covered by migratory animals and the remarkable precision with which they make the journey. How is this feat achieved?

It is known that animals use sounds, landmarks or even smells to guide and navigate their way across continents. But the most intriguing and least understood navigation ability is magnetoreception: the detection of the Earth’s magnetic field through an internal, biological compass. This ability has been seen in a variety of animals, from ants to crocodiles. In fact, wildlife rangers in Florida resorted to taping magnets to the heads of crocodiles to prevent them from finding their way back after being relocated. Magnetoreception has even been suggested in the humble cow, after researchers using Google Earth accidentally discovered that cows tend to line up with the Earth’s magnetic field.

Magnetoreception was first observed in captive robins in 1957. In autumn, when it was time for them to migrate from Frankfurt to Spain, they kept flying southwest in their cage. This happened even though the room was isolated from any outside visual stimuli with which the robins could orientate themselves. This led to the idea that robins might use an internal magnetic compass to migrate. Many studies have been conducted since, but controversy still rages over the exact underlying mechanism of magnetoreception.

Robin in the winter

Robins can find their way with only the Earth’s magnetic field to guide them, but how do they achieve this?
Photo credit: Christine Matthews

Over fifty animal species have been found to use an internal magnetic compass so far, and several different mechanisms have been proposed and observed. The most established relies on small crystals of magnetite, a naturally magnetic mineral, in either the nose or the beak, surrounded by receptor nerves. Magnetite has been found in many animals, including humans, where it could be used to sense the Earth’s magnetic field and create a magnetic map for migration. However, in experiments where this magnetite receptor was deliberately disrupted by anaesthetic or a strong magnetic pulse, birds could still orientate themselves along the magnetic field, suggesting that an alternative mechanism is at work. Even more intriguingly, this alternative mechanism works only when light is present, and does not appear to be influenced by reversing the direction of the field.

In 1978, Klaus Schulten suggested a mechanism for this type of magnetoreception, known as the radical pair mechanism. This proposes that there is a light-activated reaction in the bird’s eye that is affected by magnetism. By detecting the rate of the reaction, birds can sense the strength and alignment of Earth’s magnetic field. The problem with this idea is that the Earth’s magnetic field is incredibly weak: its interaction with a molecule is around a million times smaller than the energies involved in a normal chemical reaction. How could it possibly have a detectable effect?
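That ‘million times’ claim is easy to check with standard physical constants. The calculation below compares the magnetic (Zeeman) energy of a single electron spin in the Earth’s field with ordinary thermal energy; the field strength is a typical mid-latitude value, assumed for illustration:

```python
# Compare the magnetic energy of an electron spin in the Earth's field
# with thermal energy at room temperature. Standard physical constants;
# the field value is a typical mid-latitude figure, assumed here.
g = 2.0023          # electron g-factor
mu_B = 9.274e-24    # Bohr magneton, J/T
B_earth = 50e-6     # Earth's magnetic field, T
k_B = 1.381e-23     # Boltzmann constant, J/K
T = 300.0           # temperature, K

zeeman = g * mu_B * B_earth   # splitting between electron spin states
thermal = k_B * T             # typical thermal energy per molecule

print(f"Zeeman energy:  {zeeman:.2e} J")
print(f"Thermal energy: {thermal:.2e} J")
print(f"Thermal energy is about {thermal / zeeman:,.0f} times larger")
```

The thermal jostling of molecules swamps the magnetic interaction by a factor of several million, which is exactly why a spin-based trick is needed to make the field detectable at all.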

The secret to detecting the magnetic field lies in generating a pair of radicals, which are molecules with unpaired electrons that interact strongly with magnetic fields. Creating these radicals requires a burst of energy, which is provided when the molecules are exposed to light. Within a suitable molecule or protein, two radicals can form what is known as a ‘spin-correlated pair’ that exists in two different spin states, called singlet and triplet. Conversion between these two states is affected by a magnetic field, and the rate of conversion can be monitored through the concentration of the radicals. In this way, a weak magnetic field can become detectable by cells in an organism.
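The key point is that the weak field does not need to supply energy, only to steer how fast the pair flips between its two spin states. The timescale of that steering is set by the electron precession (Larmor) frequency in the Earth’s field, a simple calculation sketched below with standard constants and an assumed 50 microtesla field:

```python
# How fast does the Earth's field mix the two spin states of a radical
# pair? The relevant timescale is the electron Larmor precession
# frequency in a ~50 microtesla field (assumed typical value).
g = 2.0023          # electron g-factor
mu_B = 9.274e-24    # Bohr magneton, J/T
h = 6.626e-34       # Planck constant, J s
B_earth = 50e-6     # Earth's magnetic field, T

larmor_hz = g * mu_B * B_earth / h      # precession frequency, Hz
period_us = 1e6 / larmor_hz             # one precession period, microseconds

print(f"Larmor frequency: {larmor_hz / 1e6:.2f} MHz")
print(f"Precession period: {period_us:.2f} microseconds")
```

The precession period comes out at under a microsecond, which implies the radical pair must survive for a comparable time before recombining if the Earth’s field is to leave a measurable fingerprint on the reaction.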

The radical pair mechanism fits with the observations that cannot be reconciled with magnetite receptors—it is both dependent on the presence of light and unresponsive to the polarity of the field. Experimental evidence was lacking in 1978 when Schulten proposed the mechanism, so the idea received little attention for twenty years.

In 2000, a research group from Illinois suggested that proteins known as cryptochromes may be behind this form of magnetoreception. Cryptochrome proteins are found in the eye of robins, and absorb blue light to start a radical reaction—the perfect candidate to generate biologically detectable spin-correlated radical pairs. This led to renewed interest in the area, including the development of a proof-of-principle artificial magnetoreceptor system by a team of researchers at Oxford University. This was the first man-made chemical compass: an artificial chemical system sensitive enough to detect a magnetic field as weak as the Earth’s at its surface.

Cryptochrome protein with a flavin radical initiator

Cryptochrome proteins are found in many creatures and absorb blue light through a co-factor known as FAD (shown in yellow)

The contribution of cryptochrome and the radical pair mechanism to magnetoreception in animals is still being investigated. Despite initial scepticism, evidence from model systems and computational work has shown that this mechanism is feasible for detecting magnetism. Cryptochromes are primarily responsible for maintaining circadian rhythms in many animals, including humans. Like many proteins throughout evolution, cryptochromes have found a new role in a different part of the body. Because they sit in the eye, it has even been suggested that robins could sense the results of the radical reaction along the optic nerve and actually ‘see’ the magnetic field.

With growing evidence of weak magnetic fields affecting biological processes, there is increasing interest in how they might affect us. Numerous studies have shown a significant correlation between proximity to high-voltage power lines, which carry a low-frequency magnetic field, and increased rates of childhood leukaemia. In 2001 the International Agency for Research on Cancer classified extremely low-frequency magnetic fields as a possible carcinogen. Yet several attempts to demonstrate magnetic field induced carcinogenesis or tumour promotion in cells have failed, so this issue is still surrounded by uncertainty.

Perhaps in years to come our suspicion that magnetic fields transform healthy cells into cancerous ones will be viewed as just as fanciful as Aristotle’s redstarts-to-robins hypothesis. While we cannot be sure yet that power lines cause cancer, further analysis of Google Earth has shown that they can certainly disrupt the ability of cows to line up with the Earth’s magnetic field—tricking them into aligning with the magnetic field of the power line instead.

Above Genetics: How your behaviour can affect your DNA

It is now regarded as a simple fact of life that you are stuck with the genes your parents gave you. Genetic mutations from long ago have been passed down from generation to generation, whether it is joined earlobes, knobbly knees or a higher risk of breast cancer. This is Darwinian inheritance in action.

But what if that isn’t the full story? What if your actions as a ten-year-old not only affected your genes, but those of your future children and grandchildren?

Darwin wasn’t the first to offer a theory on evolution; before him was Jean-Baptiste Lamarck, who suggested that traits developed by animals during their lifetime could be passed on to their children. The classic example is the giraffe, which, after years of stretching to reach higher leaves, would have offspring with longer necks. After several generations of stretching, giraffes would have necks as long as they are today.

This made intuitive sense and even Darwin was reluctant to dismiss it — he was not quite the Darwinist he is now made out to be. However, now that DNA is known to be responsible for passing on genetic information, it is hard to see how this could work. No amount of neck-straining will change your genes, so surely this cannot happen.

How could stretching your neck be passed on to the next generation?
Source: Brocken Inaglory

The Human Genome Project aimed to read out our entire DNA sequence and find out what makes a human. Even after decoding all three billion ‘letters’, there was still not enough information to provide all the answers. One of the problems is that many genes can be switched off, or ‘silenced’, by the body. This occurs through methylation, where a small marker (a methyl group: just one carbon and three hydrogen atoms) is attached to a section of DNA and switches it off. Since this silences a gene without actually altering the genetic code, methylation is described as epigenetic, from the Greek ‘epi-’ for ‘above’. This system of silencing genes is another layer of information on top of the genetic code — an epigenetic code.

Methylation of DNA is a natural way that an organism regulates itself, but external factors can affect this dramatically. When a honey bee larva is fed on royal jelly its methylation levels fall, causing it to develop ovaries, grow larger and live longer, i.e. become a queen. However, it is not just food that is important; mice change their DNA methylation patterns depending on how much attention their mothers paid them as pups.

Methylation of DNA is a very subtle change but its effects can be enormous.
Source: Christoph Bock

Epigenetic changes due to environment can happen even before birth. Studies of several famines, including the Dutch Hunger Winter of 1944 and the Great Chinese Famine of 1958–61, have shown that children conceived during these periods were underweight at birth. Now adults, they have increased rates of obesity, heart disease and schizophrenia, and have increased methylation of genes linked to these diseases. This makes clear the importance of epigenetics as not just an aside to genetics but a powerful force in its own right.

It was long thought that epigenetic changes could not be passed to our children, as sperm and egg cells undergo ‘reprogramming’ to wipe the slate clean. However, recent experiments have shown that mice are able to pass on epigenetic changes and historical records suggest that this may be happening in humans too.

As an isolated farming community in the early 1800s, Överkalix in north Sweden suffered from extreme fluctuation in food supply; some years would be plentiful while others would be ruinous. This variation prompted Lars Olov Bygren, a preventative health specialist, to trace the ancestries of a hundred villagers and cross-reference them with harvest records. Remarkably, he found that boys who had survived a year of famine while aged 9–12 went on to have sons and grandsons who lived on average 32 years longer than those of boys who had enjoyed a year of feasting. This prepubescent stage is when the body is most susceptible to environmental changes; even so, it is remarkable that the repercussions can be seen two hundred years later in the lives of their grandchildren, who were never directly exposed to famine.

This seems very similar to Lamarck’s theory that the impact of an animal’s life is passed down to the next generation. However, epigenetics cannot replace Darwinian inheritance, where a genetic mutation is permanent, as epigenetic changes should only be temporary. But how long is temporary? We have seen effects passed down at least two generations in humans, while experiments with roundworms have shown epigenetic changes surviving over 40 generations.

Epigenetics gives us the flexibility that genetics could never provide, allowing us to adapt to our environment during our lifetime. However, it is not just our own lives being adjusted; we could be determining the lives of our future children. Take heed: the sins of the father may well be visited upon the son.