The hidden potential of MDMA and psilocybin

Earlier this week I went to a talk by Professor David Nutt entitled ‘Time to Grow Up?’, where he spoke about the damage caused by the UK government’s approach towards drugs. David Nutt became quite well known in the UK after he was fired as chairman of the Advisory Council on the Misuse of Drugs by the Home Secretary in 2009 for clashing with government policy. Most notably, he had published an editorial in the Journal of Psychopharmacology in which he said that the dangers from ecstasy were comparable to those of horse-riding.

I was already well aware of his views on the relative risks of drugs (alcohol causes far more damage overall than anything else) and his views on the government’s approach to banning new drugs (such as mephedrone, which potentially saved more lives than it took). He laid out the evidence very clearly and presented it in a very engaging manner – I would recommend checking out his blog if you’re interested.


When adding together harm to self and harm to others, alcohol tops the list of most harmful drugs. Image credit: The Economist

However, one aspect really caught my attention – the study of illegal drugs as therapeutics or medicines. The case for cannabis is well known: its potency as a pain reliever has led to it now being prescribed by doctors in some countries. But what about MDMA (ecstasy) used to treat post-traumatic stress disorder, or psilocybin (the hallucinogen found in magic mushrooms) used to treat depression or obsessive-compulsive disorder? These drugs have been suggested as potential treatments, but so few studies have been conducted that we can’t be sure.

Science on drugs

Cannabis, MDMA and psilocybin are all considered by the UK government to have no therapeutic value and so are listed as Schedule 1 drugs. This means that not only are researchers required to get a licence from the Home Office, but they also have to find a supplier who holds the correct licence too. These licences cost thousands of pounds, take about a year to be approved and then result in regular police inspections. On top of that, many research funding bodies are reluctant to sponsor studies on illegal drugs or refuse to fund such research altogether.

It is easy to see why there hasn’t been much research done.

Nonetheless, David Nutt made a big splash two months ago when he televised a study on the neurological effects of MDMA. In Drugs Live: The Ecstasy Trial, 25 volunteers were studied by fMRI after taking either the drug or a placebo in a double-blind trial – making it the largest brain-imaging study of MDMA ever conducted. In time, the full results will be published in peer-reviewed journals; the researchers expect to publish five or six papers from this single study.


The researchers used pure MDMA, avoiding the contaminants usually found in ecstasy pills

This research was not just about looking at the effects of taking an illegal drug – understanding the neurological mechanism of MDMA could potentially help millions. Back in 2010, a US study of patients with post-traumatic stress disorder showed that MDMA, combined with psychotherapy, cured 10 of the 12 people given the drug in a randomised controlled trial. These were people who had not responded to government-approved drugs or psychotherapy alone, and two months after the study they were free of symptoms. MDMA appears to dampen negative emotions, which allows a patient to revisit their traumatic memories without the associated emotional pain. This can be the starting point for the patient to come to terms with their trauma and deal with it through therapy. These results, although from a very small sample size, are near miraculous – and yet it took over ten years for the researchers to get approval.

The magic of mushrooms

David Nutt has also previously studied the effects of psilocybin on the brain, and the results were surprising. Psilocybin is a hallucinogen, causing colours, sounds and memories to appear much more vivid than usual. With this kind of effect it was expected that psilocybin would activate certain areas of the brain, which would be seen as bright patches on an fMRI scan. However, they actually saw dark blue patches of decreased brain activity, and only in specific areas of the brain that act as central hubs for connections.


Psilocybin is the hallucinogen in magic mushrooms. Like MDMA, it is a class A drug in the UK – but could it be used to treat depression?

Decreasing brain activity doesn’t sound like a good thing, but one of these connector hubs, known as the posterior cingulate cortex, is over-active in people with depression. This area has several roles, but when it is over-active it is associated with anxiety. When David Nutt and his colleagues asked people on psilocybin to think of a happy memory, they found that the volunteers could remember happy memories more vividly – almost as if they were reliving them. These test subjects were also much happier after recalling positive memories. Dampening the activity in the connector hubs and getting a boost from recalling happier times could be enough to get people out of the vicious circle of depression. Fortunately, the Medical Research Council has since funded David Nutt to conduct further studies into the use of psilocybin as a treatment for depression.

Post-traumatic stress disorder and depression are two of the biggest mental health issues that we face and current treatments are plainly inadequate. It seems ridiculous that the government’s policies on drugs prevent scientific studies aiming to reap benefits from them. These treatments show good potential and yet little is being done to take advantage of them. This situation is nothing new – after being fêted as a treatment for alcoholism in the 1960s, LSD has largely been ignored ever since it was declared illegal. Even though, according to David Nutt, it is probably as good at treating alcoholism as anything we’ve got now, researchers are limited to reanalysing studies completed 50 years ago.

What treatments are out there, undiscovered because of governments’ heavy-handed approaches towards recreational drugs? I’m just glad that there are researchers like David Nutt who are willing to make the effort, and take the flak, in order to find out.


Keeping up with the Red Queen: Co-evolution of hosts and pathogens

While most micro-organisms are harmless to humans, and in some cases even beneficial, there are those that are not welcome. These are known as pathogens, from the Greek for ‘producer of suffering’. For us, pathogens can be very damaging and even deadly, and so we have evolved ways to resist their attacks. For the pathogen, infecting a human might be the only chance to survive and reproduce. The potential reward is so great that pathogens will develop ways around our defences. This becomes an ongoing battle and means that neither side can stop evolving in case the opponent becomes too dominant; in the words of the Red Queen from Lewis Carroll’s Through the Looking-Glass: “It takes all the running you can do, to keep in the same place.” This interplay between host and pathogen is known as co-evolution and can lead to very different strategies being used on both sides.

The Red Queen (not to be confused with the Queen of Hearts) and Alice
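
As a simple way to picture this endless chase, here is a toy ‘matching-alleles’ simulation – a standard textbook-style model with made-up fitness costs, rather than anything specific to a real host or pathogen. Hosts and pathogens each come in two types, a pathogen can only infect the host type it matches, and whichever type does better becomes more common in the next generation.

```python
import numpy as np

# Toy matching-alleles model of host-pathogen co-evolution (illustrative only).
# Hosts and pathogens each come in two types, A and B. A pathogen can only
# infect a host of the matching type, so common host types are punished and
# common pathogen types are rewarded: classic negative frequency dependence.

generations = 300
cost_to_host = 0.3       # fitness lost by a host when infected (assumed value)
cost_to_pathogen = 0.6   # fitness lost by a pathogen that fails to find a match

p = 0.7   # frequency of host type A
q = 0.3   # frequency of pathogen type A

history = []
for _ in range(generations):
    # A host's fitness falls as the matching pathogen type becomes more common
    host_a = 1.0 - cost_to_host * q
    host_b = 1.0 - cost_to_host * (1.0 - q)
    # A pathogen's fitness falls as its matching host type becomes rarer
    path_a = 1.0 - cost_to_pathogen * (1.0 - p)
    path_b = 1.0 - cost_to_pathogen * p

    # Standard replicator update: each type's share is weighted by its fitness
    p = p * host_a / (p * host_a + (1.0 - p) * host_b)
    q = q * path_a / (q * path_a + (1.0 - q) * path_b)
    history.append((p, q))

history = np.array(history)
print("final frequencies (host A, pathogen A):", history[-1].round(3))
# Plotting the two columns of `history` shows the frequencies cycling out of
# phase: each side keeps adapting, yet neither ever gains a lasting advantage.
```

Neither population ever settles on a stable mix; the only way to stay level is to keep changing, which is exactly the Red Queen’s point.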

All-out war

Aggressive strategies, like that used by the malarial parasite Plasmodium falciparum, can drive co-evolution rapidly. Once it has infected a person, the pathogen replicates as quickly as possible. This is risky because it seriously harms the infected human and can even cause death – and if the host dies, the pathogen dies with it. The strategy works because P. falciparum spreads between humans via mosquitoes: a high concentration of the parasite in the bloodstream is the best way to ensure that it is taken up by a mosquito and passed on to a new host. However, the severity of malaria and the high likelihood of death mean there is a strong evolutionary pressure for humans to develop resistance, even if that comes at a price.

Indeed, resistance has developed, via a specific mutation in the gene coding for the blood protein haemoglobin. Everyone has two copies of the haemoglobin gene – one from each parent. A mutation in both copies of the gene provides malarial resistance but causes sickle cell anaemia, which restricts blood flow and eventually leads to premature death – a cost that obviously outweighs any benefit. Fortunately, having just one copy of the mutation maintains some resistance while avoiding sickle cell anaemia. However, it can never be guaranteed that a child will inherit exactly one copy of the mutated gene, so there is always a risk of children being born with sickle cell anaemia. This means that the level of the mutation present in the population is determined by the balance between the risk of malaria and the risk of sickle cell anaemia. There is little risk of malaria in the UK, and so the mutation is rare. In sub-Saharan Africa, where malaria causes a quarter of all deaths in children under five, the balance shifts: the mutated gene is present in up to 40% of the population, meaning that approximately 2% of children are born with sickle cell anaemia.

When a specific glutamic acid residue (highlighted in magenta on the far right) is mutated to a valine, it causes haemoglobin molecules to aggregate, which distorts the shape of red blood cells
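
The arithmetic behind that balance is straightforward Mendelian genetics: each child inherits one copy of the gene from each parent at random, so the fractions of carriers and of affected children follow directly from how common the mutated copy is. The sketch below is a rough illustration assuming random mating (Hardy-Weinberg proportions) and an illustrative allele frequency – the point is the shape of the trade-off rather than the exact figures.

```python
# Back-of-the-envelope Hardy-Weinberg sketch of the sickle-cell trade-off.
# Assumes random mating; the allele frequency is illustrative rather than a
# measured figure for any particular population.

def genotype_proportions(mutant_allele_freq):
    """Expected genotype proportions for a given frequency of the sickle-cell allele."""
    q = mutant_allele_freq   # frequency of the mutated (HbS) copy of the gene
    p = 1.0 - q              # frequency of the normal (HbA) copy
    return {
        "two normal copies (no protection from malaria)": p * p,
        "one copy (malaria-resistant carrier, no anaemia)": 2 * p * q,
        "two mutated copies (sickle cell anaemia)": q * q,
    }

# With roughly one in seven copies of the gene mutated, about a quarter of
# people are protected carriers while about 2% of children are born with
# sickle cell anaemia.
for genotype, fraction in genotype_proportions(0.14).items():
    print(f"{genotype}: {fraction:.1%}")
```

Because the cost of the disease rises with the square of the allele frequency while the benefit to carriers rises only linearly, the balance point shifts with the local risk of malaria – high in sub-Saharan Africa, negligible in the UK.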

Under the radar

Evolving against every new infectious disease that we encounter is not possible. Humans, with our relatively slow reproductive rate, take a long time to develop new, beneficial mutations. This is why many infections, such as the common cold caused by the rhinovirus, have evolved to avoid affecting humans severely. Catching a cold will not affect the ability to reproduce, which is the main driving force behind evolution. Without this evolutionary pressure, humans do not appear to have developed mutations to specifically deal with the rhinovirus. Keeping the host able and mobile also has the benefit of boosting the spread of the disease. If the host feels well enough to leave bed and go to work, then the rhinovirus is presented with a wide range of potential hosts with every sneeze.

The immune system provides a flexible defence against a huge variety of diseases, lessening the need to evolve in response to each new infection. The system learns to recognise a part of the pathogen known as the antigen, which allows the pathogen to be targeted. This means that a strain of rhinovirus – or any pathogen – that has mutated its antigen can evade detection by the immune system for longer. Unfortunately for us, this happens often: the rhinovirus mutates readily and reproduces incredibly quickly, allowing a huge number of mutations to accumulate within a population in a short time.

Gaining ground

Today the risk of dying from an infectious disease is lower than at any point in human history. After thousands of years locked in an impasse with disease, we have finally developed an advantage through modern medicine. But although it may seem that we have surpassed the power of evolution, pathogens are still working towards breaking through our defences. Whether it is a new strain of flu or an antibiotic-resistant superbug, if we want to maintain our advantage it will take all the running we can do.

Note: this post was updated 16:53 GMT 21/10/2012 to remove any suggestions that Plasmodium falciparum is a bacterium


Bird’s Eye View: How to see magnetic fields

The ancient Greeks, like many people since, were confounded and fascinated by the migration of birds. Homer recognised that cranes “flee the winter and the terrible rains and fly off to the world’s end”. Meanwhile, Aristotle wrongly asserted that each year summer redstarts would transform into robins come winter, as the two species were never seen in Greece together. In modern times, we’ve come to appreciate the vast distances covered by migratory animals and the remarkable precision with which they make the journey. How is this feat achieved?

It is known that animals use sounds, landmarks or even smells to guide and navigate their way across continents. But the most intriguing and least understood navigation ability is magnetoreception: the detection of the Earth’s magnetic field through an internal, biological compass. This ability has been seen in a variety of animals, from ants to crocodiles. In fact, wildlife rangers in Florida resorted to taping magnets to the heads of crocodiles to prevent them from finding their way back after being relocated. Magnetoreception has even been suggested in the humble cow, after researchers using Google Earth accidentally discovered that cows tend to line up with the Earth’s magnetic field.

Magnetoreception was first observed in captive robins in 1957. In autumn, when it was time for them to migrate from Frankfurt to Spain, they kept flying southwest in their cage. This happened even though the room was isolated from any outside visual stimuli with which the robins could orientate themselves. This led to the idea that robins might use an internal magnetic compass to migrate. Many studies have been conducted since, but controversy still rages over the exact underlying mechanism of magnetoreception.

Robin in the winter

Robins can find their way with only the Earth’s magnetic field to guide them, but how do they achieve this?
Photo credit: Christine Matthews

Over fifty animal species have been found to use an internal magnetic compass so far, and several different mechanisms have been proposed and observed. The most established mechanism relies on the presence of small crystals of magnetite, a naturally magnetic mineral, in either the nose or the beak, surrounded by receptor nerves. Magnetite has been found in many animals, including humans, where it could be used to sense the magnetic field of the Earth and create a magnetic map for migration. However, in experiments on birds where this magnetite receptor was deliberately disrupted by anaesthetic or a strong magnetic pulse, the birds could still orientate themselves along the magnetic field. This suggests that there is an alternative mechanism at work. Even more intriguingly, this alternative mechanism only works when light is present, and does not appear to be influenced by reversing the direction of the field.

In 1978, Klaus Schulten suggested a mechanism for this type of magnetoreception, known as the radical pair mechanism. This proposes that there is a light-activated reaction in the bird’s eye that is affected by magnetism. By detecting the rate of the reaction, birds can sense the strength and alignment of Earth’s magnetic field. The problem with this idea is that the Earth’s magnetic field is incredibly weak, so its influence is millions of times smaller than the energies involved in a typical chemical reaction. How could it possibly have a detectable effect?
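
To get a feel for just how weak, here is a rough back-of-the-envelope check (assuming a field of about 50 microtesla, a free-electron g-factor of roughly 2 and standard physical constants) comparing the magnetic energy of a single electron spin in the Earth’s field with thermal energy and a typical chemical bond:

```python
# Rough orders of magnitude: the magnetic (Zeeman) energy of one electron spin
# in the Earth's field versus thermal energy and a typical chemical bond.
# Assumes B ~ 50 microtesla and a free-electron g-factor of about 2.

MU_B = 9.274e-24   # Bohr magneton, J/T
K_B = 1.381e-23    # Boltzmann constant, J/K
EV = 1.602e-19     # one electronvolt in joules

B_EARTH = 50e-6    # Earth's magnetic field, roughly 50 microtesla
G_FACTOR = 2.0     # free-electron g-factor (approximate)

zeeman = G_FACTOR * MU_B * B_EARTH   # energy to flip an electron spin in Earth's field
thermal = K_B * 300                  # thermal energy at room temperature
bond = 3.0 * EV                      # a typical covalent bond, a few electronvolts

print(f"Zeeman energy in Earth's field : {zeeman:.1e} J")
print(f"Thermal energy at 300 K        : {thermal:.1e} J ({thermal / zeeman:.0e} times larger)")
print(f"Typical chemical bond          : {bond:.1e} J ({bond / zeeman:.0e} times larger)")
```

The magnetic energy comes out millions of times smaller than even the random thermal jostling of molecules, so no mechanism based on energy differences alone can explain the effect – whatever is going on must depend on something other than energetics.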

The secret to detecting the magnetic field lies in generating a pair of radicals, which are molecules with unpaired electrons that interact strongly with magnetic fields. Creating these radicals requires a burst of energy, as provided when the molecules are exposed to light. Within a suitable molecule or protein, two radicals can form what is known as a ‘spin-correlated pair’ that exists in two different states. Conversion between these two states is affected by a magnetic field, and the rate of conversion can be monitored through the concentration of the radicals. In this way, a weak magnetic field can become detectable by cells in an organism.
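
A minimal numerical sketch of this idea follows – an illustration only, not taken from any of the published studies, with a hyperfine coupling and field strength chosen simply as plausible orders of magnitude. It tracks a pair of electron spins, one of them coupled to a single nearby nucleus, and asks how likely the pair is to still be in its singlet state as time passes. Switching an Earth-strength field on or off changes that pattern.

```python
import numpy as np

# Toy radical-pair model: two electron spins, one coupled to a single spin-1/2
# nucleus by an isotropic hyperfine interaction, both coupled to an external
# magnetic field. We follow the probability that the pair remains a singlet.
# All parameter values are illustrative, not fitted to any real molecule.

sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
one = np.eye(2, dtype=complex)

def embed(op, pos, n=3):
    """Place a single-spin operator at position `pos` in the 3-spin Hilbert space."""
    mats = [one] * n
    mats[pos] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

S1 = [embed(s, 0) for s in (sx, sy, sz)]   # electron 1
S2 = [embed(s, 1) for s in (sx, sy, sz)]   # electron 2
I1 = [embed(s, 2) for s in (sx, sy, sz)]   # nucleus coupled to electron 1

def singlet_probability(field_tesla, times):
    gamma_e = 28.0e9   # electron gyromagnetic ratio, ~28 GHz per tesla
    a_hfc = 1.0e6      # hyperfine coupling in Hz (illustrative value)
    H = 2 * np.pi * (
        gamma_e * field_tesla * (S1[2] + S2[2])          # Zeeman interaction
        + a_hfc * sum(S1[i] @ I1[i] for i in range(3))   # hyperfine interaction
    )
    # Projector onto the electron singlet state: P_S = 1/4 - S1.S2
    P_S = 0.25 * np.eye(8, dtype=complex) - sum(S1[i] @ S2[i] for i in range(3))
    rho = P_S / np.trace(P_S).real          # start as a singlet, nucleus unpolarised
    energies, states = np.linalg.eigh(H)    # diagonalise once, then evolve in time
    probs = []
    for t in times:
        U = states @ np.diag(np.exp(-1j * energies * t)) @ states.conj().T
        probs.append(np.trace(P_S @ U @ rho @ U.conj().T).real)
    return np.array(probs)

times = np.linspace(0, 3e-6, 300)   # a few microseconds
print("mean singlet probability, zero field :", singlet_probability(0.0, times).mean().round(3))
print("mean singlet probability, Earth field:", singlet_probability(50e-6, times).mean().round(3))
```

If the pair reacts differently depending on whether it is caught in the singlet or the triplet state, the field leaves its fingerprint on the yields of the reaction products – a weak magnetic field converted into a chemical signal.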

The radical pair mechanism fits with the observations that cannot be reconciled with magnetite receptors—it is both dependent on the presence of light and unresponsive to the polarity of the field. Experimental evidence was lacking in 1978 when Schulten proposed the mechanism, so the idea received little attention for twenty years.

In 2000, a research group from Illinois suggested that proteins known as cryptochromes may be behind this form of magnetoreception. Cryptochrome proteins are found in the eye of robins and absorb blue light to start a radical reaction – the perfect candidate for generating biologically detectable spin-correlated radical pairs. This led to renewed interest in the area, including the development of a proof-of-principle artificial magnetoreceptor by a team of researchers at Oxford University: the first man-made chemical compass, sensitive enough to detect a magnetic field as weak as the Earth’s at the planet’s surface.

Cryptochrome protein with a flavin radical initiator

Cryptochrome proteins are found in many creatures and absorb blue light through a co-factor known as FAD (shown in yellow)

The contribution of cryptochrome and the radical pair mechanism to magnetoreception in animals is still being investigated. Despite initial scepticism, evidence from model systems and computational work has shown that this mechanism is feasible for detecting magnetism. Cryptochromes are primarily responsible for maintaining circadian rhythms in many animals, including humans. Like many proteins throughout evolution, cryptochromes have found a new role in a different part of the body. From their presence in the eye, it has even been suggested that robins could sense the results of the radical reaction along the optic nerve and actually ‘see’ the magnetic field.

With growing evidence of weak magnetic fields affecting biological processes, there is increasing interest in how they might affect us. Numerous studies have shown a significant correlation between proximity to high-voltage power lines – which carry a low-frequency magnetic field – and increased rates of childhood leukaemia. In 2001 the International Agency for Research on Cancer classified extremely low-frequency magnetic fields as a possible carcinogen. Yet several attempts to demonstrate magnetic-field-induced carcinogenesis or tumour promotion in cells have failed, so this issue is still surrounded by uncertainty.

Perhaps in years to come our suspicions of magnetic fields transforming healthy cells into cancerous ones will be viewed as just as fanciful as Aristotle’s redstarts-to-robins hypothesis. While we cannot yet be sure that power lines cause cancer, further analysis of Google Earth has shown that they can certainly disrupt the ability of cows to line up with the Earth’s magnetic field – tricking them into aligning with the magnetic field of the power line instead.

Above Genetics: How your behaviour can affect your DNA

It is now regarded as a simple fact of life that you are stuck with the genes your parents gave you. Genetic mutations from long ago have been passed down from generation to generation, whether they bring joined earlobes, knobbly knees or a higher risk of breast cancer. This is Darwinian inheritance in action.

But what if that isn’t the full story? What if your actions as a ten-year-old not only affected your genes, but those of your future children and grandchildren?

Darwin wasn’t the first to offer a theory of evolution; before him was Jean-Baptiste Lamarck, who suggested that traits developed by animals during their lifetime could be passed on to their children. The classic example is the giraffe, which, after years of stretching to reach higher leaves, would have offspring with longer necks. After several generations of stretching, giraffes would have necks as long as they are today.

This made intuitive sense and even Darwin was reluctant to dismiss it — he was not quite the Darwinist he is now made out to be. However, now that DNA is known to be responsible for passing on genetic information, it is hard to see how this could work. No amount of neck-straining will change your genes, so surely this cannot happen.

How could stretching your neck be passed on to the next generation?
Source: Brocken Inaglory

The Human Genome Project aimed to sequence our DNA and find out what makes a human. Even after decoding all three billion ‘letters’, there was still not enough information to provide all the answers. One of the problems is that many genes can be switched off, or ‘silenced’, by the body. This occurs through methylation, where a small marker, just one carbon and three hydrogen atoms, is attached to a section of DNA and switches it off. Since this silences a gene without actually altering the genetic code, methylation is described as epigenetic, from the Greek ‘epi-’ for ‘above’. This system of silencing genes is another layer of information on top of the genetic code – an epigenetic code.

Methylation of DNA is a natural way for an organism to regulate itself, but external factors can affect it dramatically. When a honey bee larva is fed on royal jelly its methylation levels fall, causing it to develop ovaries, grow larger and live longer – in other words, to become a queen. And it is not just food that matters: mice change their DNA methylation patterns depending on how much attention their mothers paid them as pups.

Methylation of DNA is a very subtle change but its effects can be enormous.
Source: Christoph Bock

Epigenetic changes due to environment can happen even before birth. Studies of several famines, including the Dutch Hunger Winter of 1944 and the Great Chinese Famine of 1958–61, have shown that children conceived during these periods were underweight at birth. Now adults, they have increased rates of obesity, heart disease and schizophrenia, and show altered methylation of genes linked to these diseases. This makes clear that epigenetics is not just an aside to genetics but a powerful force in its own right.

It was long thought that epigenetic changes could not be passed to our children, as sperm and egg cells undergo ‘reprogramming’ to wipe the slate clean. However, recent experiments have shown that mice are able to pass on epigenetic changes and historical records suggest that this may be happening in humans too.

As an isolated farming community in the early 1800s, Överkalix in northern Sweden suffered from extreme fluctuations in food supply; some years would be plentiful while others would be ruinous. This variation encouraged Lars Olov Bygren, a preventative health specialist, to trace the ancestries of a hundred villagers and cross-reference them with harvest records. Remarkably, he found that boys who had survived a year of famine while aged 9–12 went on to have sons and grandsons who lived on average 32 years longer than those of boys who had enjoyed a year of feasting. This prepubescent stage is when the body is most susceptible to environmental changes; even so, it is remarkable that the repercussions can be seen two hundred years later in the lives of their grandchildren, who were never directly exposed to famine.

This seems very similar to Lamarck’s theory that the impact of an animal’s life is passed down to the next generation. However, epigenetics cannot replace Darwinian inheritance, where a genetic mutation is permanent, as epigenetic changes should only be temporary. But how long is temporary? Effects have been seen to persist for at least two generations in humans, while experiments with roundworms have shown epigenetic changes surviving for over 40 generations.

Epigenetics gives us the flexibility that genetics could never provide, allowing us to adapt to our environment during our lifetime. However, it is not just our own lives being adjusted; we could be determining the lives of our future children. Take heed, the sins of the father may well be visited upon the son.