Nature Podcast

This is a transcript of the 17th April edition of the weekly Nature Podcast. Audio files for the current show and archive episodes can be accessed from the Nature Podcast index page (http://www.nature.com/nature/podcast), which also contains details on how to subscribe to the Nature Podcast for FREE, and has troubleshooting top tips. Send your feedback to podcast@nature.com.

Advertisement

The Nature Podcast, brought to you by Bio-Rad's 1000-Series Thermal Cycling Platform. When you rethink PCR, you think about how easy it can be. See how at http://www.bio-rad.com/1000series.

End Advertisement

Adam Rutherford: Coming up this week, genome sequencing gets personal.

Jonathan M. Rothberg: So we were able in less than 2 months, for less than a million dollars to sequence the 6 billion bases of Jim Watson.

Kerri Smith: And our Podium speaker looks beyond the Renaissance for the roots of rationalism.

Philip Ball: The emergence of western science as a branch of humanism should be properly located in the 12th and 13th centuries.

Kerri Smith: This is the Nature Podcast. I am Kerri Smith.

Adam Rutherford: And I am Adam Rutherford. First this week, you might be forgiven for thinking that we've published the human genome more than a couple of times in the last few years. In 2001, following more than a decade of research, hundreds of scientists published the first draft of the human genome in Nature. The technique has become so refined that this week a single institute has sequenced the genome of an individual in a matter of a few weeks. As a nice touch, the individual they chose was James Watson, co-discoverer of the structure of DNA. I've got Nature's chief biology editor, Ritu Dhand with me in the studio. But first I spoke to Jonathan Rothberg of the Rothberg Institute for Childhood Diseases in Connecticut. I asked him how his team had managed to transform the art of genome sequencing. Nature 452, 872–876 (17 April 2008)

Jonathan M. Rothberg: The technique that has allowed us to make the next hundred- or thousand-fold reduction in the cost of sequencing a genome, and allowed us to go from sequencing a collection of genomes -- which was the human genome project -- to an individual genome, was the development of 454 sequencing, which really took the concept that allowed transistors to be moved to a chip and gave us the integrated circuit and the cost reduction we've all seen in our personal computers. So, as the inventor of 454 sequencing, I tried to find the equivalent of the transistor and created a company to move it to a chip, and once it was moved to a chip, every year when we make the density of our reactions on this chip twice as dense, it gets 4 times faster and 4 times cheaper. So we were able, in less than 2 months and for less than a million dollars, to sequence the 6 billion bases of Jim Watson -- 3 billion that he got from his mom, 3 billion from his dad.
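The scaling Rothberg describes -- each doubling of reaction density making sequencing 4 times cheaper -- can be sketched as a back-of-the-envelope projection. This is purely illustrative, not his model; the $1,000,000 starting point is the Watson-genome figure he quotes.

```python
# Illustrative only: project the cost per genome if each doubling of
# reaction density cuts the cost by a factor of 4, as described above.
def projected_cost(start_cost: float, doublings: int) -> float:
    """Cost per genome after `doublings` density doublings."""
    return start_cost / (4 ** doublings)

# Starting from the ~$1,000,000 Watson genome:
for n in range(6):
    print(f"after {n} doublings: ${projected_cost(1_000_000, n):,.2f}")
```

On these assumptions, five doublings would already bring the cost below the much-touted thousand-dollar genome.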

Adam Rutherford: And what's the advantage of doing a diploid genome over a haploid - which is what the previous sequencing efforts have resulted in?

Jonathan M. Rothberg: In order to understand the biology of an individual, you have to have the diploid genome. There is no such thing as your haploid genome; we're all diploid. We each carry two copies of the majority of our genes, and it's those two copies that determine whether we have brown eyes or whether we are susceptible to a particular disease. So in the Jim Watson case, we found 12 cases where Jim Watson had a mutation that, if he had two copies of it, would most likely be lethal, but we know Jim Watson was a healthy 79-year-old male when we sequenced him.

Adam Rutherford: How has Jim Watson reacted to the sequence data that you revealed?

Jonathan M. Rothberg: Well, Jim has said publicly that the reason, and I'll quote him, he gave Jonathan his sequence is that "he actually didn't think we'd be able to do it". So his reaction was first surprise, then pleasure, but on a serious note he wanted to show that we don't have to be scared of our own genetic heritage, and he wanted to show, by putting his genome in the public domain, that this type of information can be used for good, first and foremost.

Adam Rutherford: Okay! One last question for you. When you were choosing the individual, was Jim Watson the clear front-runner, or did you consider other people -- maybe the President, I don't know, Madonna, or maybe Brad Pitt? Was it Jim Watson the whole way?

Jonathan M. Rothberg: When I first conceived this technology, the individual I conceived of it for was my son, Noah, who had just been rushed to the newborn intensive care unit, and I wanted to sequence his genome. Since Noah's now a healthy 8-year-old playing hockey, we had to sequence somebody else. The second person that came to mind was myself, but at that point I don't think I had the ego or courage to do that, so Jim was our third choice but, we think, in the end the most fitting.

Adam Rutherford: That's Jonathan Rothberg. Now I am joined in the studio by Nature's chief biology editor, Ritu Dhand. Now of course Jim Watson has a long history with Nature. Is there any significance to the genome being his?

Ritu Dhand: Jim Watson is a great scientist and a great guy, but the point of publishing this paper was the fact that it was a technical tour de force and it could have been anyone's genome. It is a nice little touch that it is Jim's genome, but it really didn't make any difference to whether or not we were going to publish the paper.

Adam Rutherford: As you say it is a technical tour de force. Rothberg mentioned that it cost him less than a million dollars to sequence Jim Watson's genome; that cost is only going to come down in the future. What are the ethical implications of having such a cheap method for sequencing personal genomes?

Ritu Dhand: Well, it's not quite so cheap yet; it's not at the one thousand dollars per genome that is being touted, but I agree that, the way the technology is going, that figure will become a reality. Publishing a genome means you need to get the permission of the donor of the DNA, but that is not the only person it's important to consider. Your genome reflects qualities from your parents and also has implications for the genomes of your children, and both parties need to be considered: whether or not they wish the information to be made public, or even to be made aware of the genome at all. Having said that, it is very, very important that we do continue to sequence genomes, both from the perspective of gaining biological insights and for enabling much better medical research to be done.

Adam Rutherford: Thanks Ritu.

Jingle

Kerri Smith: Now sequencing your genome is a sure way for a part of you to live forever. It might be something that's currently limited to famous geneticists and entrepreneurs, but Jim Watson's gene sequence isn't the only thing to be given an elixir of life in this week's Nature. Charlotte Stoddart has more.

Charlotte Stoddart: The assortment of specialist blood cells that carry oxygen around our bodies and fight off infections have only a short lifespan. So a population of self-renewing cells in the bone marrow, called haematopoietic stem cells, continually churns out new template cells, which in turn divide a limited number of times to produce the mature blood cells. These templates are called multipotent progenitor cells. Now researchers at Stanford University have figured out how to confer the everlasting property of stem cells onto their descendants. By knocking out 3 key genes in mice, all regulated by another gene called Bmi1, they found that the progenitor cells became able to self-renew, just like the parental stem cells. Since these genes are often mutated in tumours, this finding could help us understand how cells first become cancerous. Lead author Michael Clarke told me more. Nature advance online publication (16 April 2008)

Michael F. Clarke: Now what's unique about a stem cell is that a single blood stem cell, for example, can regenerate the bone marrow of a mouse that's had a lethal dose of radiation or chemotherapy, and do so for the lifetime of the animal, and that's because the stem cell has the ability to self-renew and to make essentially a photocopy of itself. However, when it makes a multipotent progenitor cell during a cell division, this cell has all the capacity to generate the mature cells in the tissue, but it can't self-renew, and what this means is that if you take another lethally irradiated mouse and inject even tens of thousands of multipotent progenitor cells, they can only transiently restore the blood system, and they do so only for a couple of months.

Charlotte Stoddart: Do we know why these stem cells are able to renew themselves again and again but their descendants, the multipotent progenitor cells, lack this ability?

Michael F. Clarke: So we understand some of the mechanisms that allow stem cells to regenerate themselves, or to self-renew. Our lab found a few years ago that Bmi1, a member of a protein complex that controls gene transcription, was absolutely required for self-renewal of blood stem cells. On the other hand, nothing was known about the regulation of, or the restriction on, self-renewal in multipotent progenitor cells.

Charlotte Stoddart: And that's exactly what you've been looking at in this latest study; you and your team have been trying to restore the ability of these cells to self-renew. So I wonder if you could tell me a little bit about that.

Michael F. Clarke: So when we found that Bmi1 was important for the regulation of self-renewal, we wanted to understand which of these pathways were important for normal stem cell self-renewal. And so Bolagi Akala systematically started looking at the cellular hierarchy in mice mutant for downstream targets of Bmi1. When he looked at mice mutant for 3 of these downstream genes, he found that there was a 12-fold increase in stem cell activity of whole bone marrow. The surprising thing he found is that the increase in stem cell activity actually came from the early multipotent progenitor cells. Furthermore, when he transplanted the bone marrow of these mice, which had been rescued by the mutant multipotent progenitor cells, he could also restore blood production in a second group of mice that had been lethally irradiated. So what this told us is that these three genes are part of the pathway that is preventing early progenitor cells from self-renewing.

Charlotte Stoddart: So what your work is showing then is that these progenitor cells have an underlying ability to self-renew, but that ability is normally blocked by these genes that you are talking about and when you knock out the genes then you can restore that ability to self-renew.

Michael F. Clarke: I wouldn't say restore; I would say they can regain the ability to self-renew. Actually, what's happening is they are regaining a stem cell property that's actively being shut down by the cells, and the reason I think this is happening is that multicellular organisms like people or mice need to have very, very tight control on self-renewing cells, because cancer can be viewed as a disease of abnormal self-renewal, and so a long-lived multicellular organism needs to minimize the risk of mutations accumulating that cause cancer in any tissue or organ.

Charlotte Stoddart: So, can your findings in that case help us to understand a bit more about how cells become cancerous?

Michael F. Clarke: I think so, and I think that's the real importance of these findings, because these 3 genes, which are epigenetically repressed in normal stem cells, are very, very commonly mutated in many, many types of cancers. What this suggests is that the abnormally self-renewing cancer cells in these tumours probably arose not from a normal stem cell but from a progenitor cell, because these progenitor cells are so much more frequent in any tissue than the stem cells, and that has major implications for developing drugs against these cancer stem cells.

Charlotte Stoddart: And does that mean then that drugs perhaps should be targeting the multipotent progenitor cells rather than the stem cells?

Michael F. Clarke: This work suggests that. We have to very carefully go in and look at the self-renewing cells in various cancers. In the next series of experiments we need to look very closely at that, but it certainly implies that your explanation will hold true.

Kerri Smith: Michael Clarke there.

Jingle

Adam Rutherford: Coming up shortly, we'll be finding out how science is benefiting from the kinds of technologies that have given us eBay and Facebook. But first, science writer and regular Nature contributor Phil Ball takes to the Podium with a tale of science's beginnings. Essay: Nature 452, 816–818 (17 April 2008)

Philip Ball: The middle ages in Western Europe are often portrayed as a time of ignorance, slumped between Greek philosophy and Renaissance rationalism. This caricature is wide of the mark. In fact, the notion of a universe governed by laws accessible to human reason emerged in Western Europe largely during the 12th century. This development set the scene for a truly scientific view of the world and our place within it. It also opened up the schism between faith and reason that today is a chasm. In the early 12th century, European thought was energized by Latin translations of Greek manuscripts on natural philosophy preserved in Arabic versions. The new translations also spread the original contributions of Islamic scholars in topics ranging from medicine to maths. They guided progressive western scholars towards a way of thinking about problems that was governed by scepticism rather than by Holy Scripture. The rationalist position was nurtured at the Cathedral School of Chartres in France. The school was guided by the Platonic belief that a transcendental realm, governed by order and geometry, underpins the mundane world. One of its most controversial figures, William of Conches, argued that natural phenomena arise from God-created forces acting under their own agency. William agreed with Plato that if we ask questions of nature, we can expect to get answers and to be able to understand them, something we need to believe before we can even imagine conducting science. The 12th-century trust in reason and interest in nature for its own sake flourished in the following century. Under the influence of Aristotle, scholars emphasized careful observation, signalling the beginnings of an experimental approach to nature. Some conservative theologians were outraged. They felt that it was futile and impious to seek anything akin to what we would now regard as physical law.
This 13th-century theological backlash culminated in a papal declaration in 1277 condemning many propositions in Aristotle's works. It forced some scholars into an inelegant compromise that we are still struggling to expunge: they conceded that although scientific truth was right about the world, it could be overruled by theological truth, with its mysteries and miracles. Yet opposition to medieval rationalism was partly motivated by valid concerns about the dangers of bringing science into scripture. Read as a kind of moral mythology, holy books may have some social value; making them sources of natural facts must lead to the absurdities of today's creationism. By making God a natural phenomenon, the medieval rationalists turned him into an explicatory contingency for which there has since seemed ever less need. Secular learning was gradually revealed to have so much power that it rivalled, not rationalized, theology itself. The consequent rift between faith and reason has now left traditional religions so compromised that they are susceptible to being displaced by more dogmatic varieties. Nevertheless, the emergence of western science as a branch of humanism should properly be located in the 12th and 13th centuries. It was then that the universe ceased to be a forest of symbols designed by God for humankind's spiritual edification and became instead a source of intrinsic intellectual value and fascination.

Adam Rutherford: Phil Ball there. His new book, Universe of Stone, is published next month by Harper Collins.

Kerri Smith: Next up, from the roots of science to one of its newest trends: scientists are taking advantage of the technologies that have brought us Facebook and Wikipedia. Blogs, wikis, social networking -- these terms are familiar to many of us and are usually labelled with the term Web 2.0. The phrase reflects how we have become more interactive on the web, and now scientists are getting in on the act. They call it Science 2.0. Nature editor Mitch Waldrop has written a feature on it for Nature's sister publication, Scientific American. He joined us in the studio. Hi Mitch!

Mitchell M. Waldrop: Hi Kerri!

Kerri Smith: Tell us first of all, what triggered your interest in Science 2.0.

Mitchell M. Waldrop: What got me into the story was that I happened to come across, in the course of researching another article, a site at MIT called OpenWetWare that had been put together by grad students. It was a wiki, meaning that the individuals participating could write their own pages and make modifications and so forth, a lot like Wikipedia. But what they were doing was actually using it to do science, to communicate with one another. So if a student learned something about how to do a certain lab procedure, he or she would write it up there on the wiki so that the next person would know how to do it better. Once they started doing that, they realized that they could communicate in a lot of other ways. People started putting class notes up there; teachers started using it to plan classes. Other people started actually keeping their lab notebooks online. This is considered radical, because usually the standard lab notebook is a hardbound paper thing that people keep in their labs and that no one else gets to see until the data is finally published, maybe a year or two later, in a research paper. But by letting other people see it, they could get interesting feedback and suggestions from other researchers elsewhere, so this thing grew quite rapidly, and by using the kind of collaborative communication tools that Web 2.0 offers, people were able to start doing science in a better, more collegial, more open kind of way than they had.

Kerri Smith: But isn't science supposed to be collegial anyway? What does this new approach add to the traditional model?

Mitchell M. Waldrop: One of the things I was surprised and intrigued to realize is that in many ways science is a very natural fit to Web 2.0, because for at least 400 years, since the time of Galileo and Newton, scientists have been "crowdsourcing" their knowledge; they have been sharing information, trading ideas, criticizing ideas, synthesizing things in a group, sort of, fashion. Science has traditionally been quite open and collaborative, but what has happened, especially from about the middle of the twentieth century and accelerating in about the last 20 or 30 years, is that there are a lot of forces tending to inhibit communication in science. These have to do with wanting to keep information secret so that you get the first publication, and the first publication has to do with whether or not you get promotion and tenure at your university. It may have to do with whether or not you are awarded a patent, which can be very valuable and lucrative. It certainly has to do with prestige. It turns out there are lots of reasons that have developed, especially in recent years, to have less open communication in science, but the people who are getting involved in this Science 2.0 type of activity feel that it is opening things up again.

Kerri Smith: Now, you've made it sound really quite positive there for the most part, and why wouldn't we want more communication? So what's making some scientists less happy with this new form of communication? What might they stand to lose?

Mitchell M. Waldrop: Well, some scientists are leery in part because they associate the internet with, you know, hackers and malicious mischief, so they're worried that if they put their lab notebooks or anything valuable up there, it could get, you know, distorted or vandalized. But of course there are protections for that: much of the software, like wiki software, allows you to just roll it back to a previous version, so it is actually not such a very risky thing. I think a much more worrisome aspect is that many people are afraid of their colleagues. They are afraid that if they reveal what they are working on, or what their data is, before it is published, someone else may look at that, use that data, use the hints that they provide, rush something else into publication and get the credit.

Kerri Smith: Now finally, this article itself is particularly interesting not only because its subject is Science 2.0, but also because it is quite Web 2.0 itself. Tell us why.

Mitchell M. Waldrop: So what we did was an experiment: Scientific American put the article up in its draft form on its web site and invited readers to comment, and I asked a series of questions in the introduction to the posting, you know, do you think this is a good idea, or where do you see the problems, and so forth, and we very quickly got well over a hundred comments.

Kerri Smith: Some useful ones?

Mitchell M. Waldrop: Yes, actually, I would say several dozen very substantive and thoughtful discussions. So we ended up taking these comments and, in a few cases, I incorporated changes into the article itself, but we realized it would be much more useful to extract, you know, the essence, and so when the article ran -- you'll see it in the magazine -- we have quotations from the online discussion along the sides of the article. When you read it, it's like here is the main article and here is this dialogue going on around it in response. I was very pleased with how it came out. It was a very effective use of the two media -- print and online.

Kerri Smith: Nature's Mitch Waldrop there. And we at Nature are excited to have been nominated for a Webby Award -- the Oscars of the Internet. http://www.nature.com is one of five web sites shortlisted in the People's Voice science category, so do help us get that gong by voting at http://www.webbyawards.com.

Adam Rutherford: Finally this week, Geoff Brumfiel gets to grips with quasiparticles and the rather complex world of quantum computing.

Geoff Brumfiel: Normal particles like protons and electrons have charges of plus or minus one, but in the 1980s physicists found that under certain conditions electrons in a sheet of semiconducting material could collectively behave like particles of fractional charge. In other words, there appeared to be particles of, say, plus 1/2 or minus 1/4. These quasiparticles, as they were called, were considered an interesting oddity, though perhaps not very useful, but a paper in this week's Nature suggests that under certain conditions the particles might be useful for quantum computing. I called Moty Heiblum and Merav Dolev of the Weizmann Institute of Science in Israel to learn how, and Moty started by explaining the original phenomenon, known as the quantum Hall effect. Nature 452, 829–834 (17 April 2008)

Moty Heiblum: Okay, so the system is very complicated: you have electrons sitting in a sheet. They interact with each other by pull and repulsion. Then you have a magnetic field, and they start rotating around themselves and around each other, so it is a very complex soup with many, many interactions in there, and the great insight of some theoreticians was that you can take all the soup and replace it with imaginary particles, which we call quasiparticles, which sit, sort of, like in free space; they don't interact any more with each other, but the price you have to pay for this is that you have to give them a fractional charge.

Geoff Brumfiel: So in the quantum hall system that you are looking at in this particular experiment, you are dealing with quasiparticles that have a fractional charge, a quarter that of the electron, and they behave kind of unusually. Tell me a little bit about what they are doing.

Moty Heiblum: Okay. So when you rotate this and you move them around one another or you move one in a very sort of intricate way among many many of them, the system does not stay in the same state, but it changes to a totally different quantum state which can be measured later on.

Geoff Brumfiel: So you have this system where you can actually change the quantum state of the whole thing by moving these quasiparticles around, and as I understand it, it can be used for something called quantum computing. Merav can you tell me what that is?

Merav Dolev: Yes. In regular computing, the computer is made out of switches, each of which has the value of either 0 or 1. Quantum computing takes a quantum system which has two levels, and then you call it a qubit. Now, in quantum physics you can have a superposition of states, so the system can be, with some probability, in the state 0 and, with another probability, in the state 1.
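Merav's description of a qubit can be sketched numerically. This is a hypothetical illustration, not code from the paper: a qubit state is a pair of complex amplitudes for the states 0 and 1, and the squared magnitudes of those amplitudes give the probabilities she mentions.

```python
import math

def measurement_probabilities(a: complex, b: complex) -> tuple[float, float]:
    """Probabilities of reading 0 or 1 from a qubit with amplitudes (a, b).

    The amplitudes are normalized here so the probabilities sum to 1.
    """
    norm = abs(a) ** 2 + abs(b) ** 2
    return abs(a) ** 2 / norm, abs(b) ** 2 / norm

# An equal superposition of 0 and 1 gives a 50/50 measurement outcome:
p0, p1 = measurement_probabilities(1 / math.sqrt(2), 1 / math.sqrt(2))
print(p0, p1)
```

A definite state, by contrast, such as amplitudes (1, 0), always measures as 0, which is the ordinary-switch case Merav contrasts with superposition.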

Geoff Brumfiel: Then tell me how this system and how these fractional particles can be used to make a quantum computer.

Merav Dolev: Well, first of all, the main difficulty in regular quantum computing is that the system is very sensitive to interactions with the environment, and this interaction with the environment can change the state of the system and ruin the computation. What I can say is that these quasiparticles can be used for something which is called topological quantum computing. The operations here are performed when you change the positions of quasiparticles in the system: you drive them one around the other in a specific way which performs the computation. Now, the great advantage here is that all you care about is the topology of the path that these quasiparticles take. You don't really care about the small changes that can happen during the path, and therefore the system is very robust and is not so much affected by the environment.

Geoff Brumfiel: I'd just like to know how far we have to go -- what's left before this system can actually be made into a quantum computer?

Merav Dolev: There is a very, very long distance to go before you could actually make a quantum computer out of it. First of all, we do not really know that the quasiparticles indeed obey what theory predicts, which is what would enable us to use them for a quantum computer. So the first step would be just to prove that, which is not at all simple; it is very complicated. Then it is a very hard task to build the qubits, to build the quantum computer, out of these quasiparticles, so the distance is pretty long, I would say.

Geoff Brumfiel: Merav Dolev and before her Moty Heiblum of the Weizmann Institute of Science in Israel.

Kerri Smith: That's all from us this week. Our Sound of Science is from a band called Massukos, from Mozambique. Their lead singer, Feliciano dos Santos, has this week won a top environmental award, the Goldman Prize, for his work campaigning for clean water and sanitation. The band use their star status to spread the word about hygiene and HIV/AIDS. I'm Kerri Smith.

Adam Rutherford: And I am Adam Rutherford. Thanks for listening.

[Sound of Science Plays]

Advertisement

The Nature Podcast is sponsored by Bio-Rad, at the centre of scientific discovery for over 50 years, on the web at http://www.discover.bio-rad.com.

End Advertisement