Genesis Machines: The New Science of Biocomputing

by Martyn Amos

eBook

$10.99 (original price $11.99; you save 8%)


Overview

Silicon chips are out. Today's scientists are using real, wet, squishy, living biology to build the next generation of computers. Cells, gels and DNA strands are the 'wetware' of the twenty-first century. Much smaller and more intelligent, these organic computers open up revolutionary possibilities.
Tracing the history of computing and revealing a brave new world to come, Genesis Machines describes how this new technology will change the way we think not just about computers, but about life itself.


Product Details

ISBN-13: 9781782394914
Publisher: Atlantic Books
Publication date: 06/14/2007
Sold by: Barnes & Noble
Format: eBook
Pages: 368
File size: 1 MB

About the Author

Dr Martyn Amos was awarded the world's first Ph.D. in DNA computing; he is currently a Senior Lecturer in Computing and Mathematics at Manchester Metropolitan University, UK. His webpage is http://www.martynamos.com
Genesis Machines was first published in 2006.

Read an Excerpt

Genesis Machines

The New Science of Biocomputing


By Martyn Amos

Grove Atlantic Ltd

Copyright © 2006 Martyn Amos
All rights reserved.
ISBN: 978-1-84354-225-4



CHAPTER 1

The Logic of Life


At the end of 2005, the computer giant IBM and the Lawrence Livermore National Laboratory in the USA announced that they had built the world's fastest supercomputer. Made up of over 130,000 computer chips wired up into 64 air-cooled cabinets, the machine known as Blue Gene/L cost one hundred million dollars and was capable (at its peak) of performing more than 280 trillion calculations per second. Computer scientists salivated at the thought of such vast computational power, forecasters anticipated the creation of global weather models capable of predicting hurricanes weeks in advance, and astrophysicists dreamed of simulating the very first fiery instant after the birth of the universe. Biologists, on the other hand, had other ideas. The problem they had earmarked for the machine was rather more interesting than any of these other projects. They wanted to work out how to unscramble an egg.

What could possibly justify spending hundreds of millions of dollars of American taxpayers' money on reverse engineering an omelette? The answer lies in just how proteins form their particular complex shapes, and the implications are huge, not just for chefs, but for the whole of mankind. When preparing scrambled eggs, we begin by cracking a couple of eggs into a bowl. What we generally see is the orange-yellow yolk, and its surrounding liquid 'egg white'. This white (known as albumen) is essentially made up of water and a lot of protein. Individual protein molecules are made up of long amino-acid chains, like beads on a string. The amino-acid 'beads' are sticky, so the whole thin string repeatedly folds in, on and around itself when it's first made, twisting and turning to form a compact spherical ball (proteins can take many weird and wonderful forms, as we'll see, but egg-white proteins are generally globular). In their normal state (i.e. inside the egg), these globular proteins float around quite happily in the albumen, bouncing off one another and the various other molecules present. However, when heat is introduced into the equation, things begin to get messy. This new energy begins to shake the egg-white molecules around, and they start to bounce off one another. This constant bashing weakens the sticky bonds holding the protein balls together, and they quickly begin to unfurl back into their original long, stringy shape. With so many molecules bouncing around in solution, the sticky beads begin to stick to their counterparts in other molecules, quickly binding the protein strings together into the dense, rubbery mesh we see on tables the world over.

Why is this process so interesting to biologists? The reason is that our understanding of protein structure formation is infuriatingly incomplete. We can take a folded protein apart, mapping the precise location of every individual atom, until we have a complete three-dimensional picture of the entire structure. That's the easy part, and it was done decades ago. Putting it all back together again – well, that's rather more difficult. As we'll see, predicting in advance how an arbitrary chain of amino-acid beads will fold up (that is, what precise shape it will adopt) is one of the main driving forces of modern biology. As yet, nobody knows how to do this completely, and the problem of protein structure prediction is taxing some of the best minds in science today. A complete understanding of how to go from bead sequence to 3-D molecule will have massive implications for the treatment of diseases such as cancer and AIDS, as well as yielding fundamental insights into the mechanics of life.

As we can begin to appreciate, nature is often remarkably coy; huge proteins fold up in fractions of a second, and yet the biggest human-built computer on the planet could take over a year's worth of constant processing just to predict how a single, simple protein might adopt its particular shape. We should not be surprised that simulating even simple natural processes comes at such a high cost, and advances in computing technology and its application to biology will reap huge dividends in terms of a deeper understanding of natural systems. Such knowledge, though, is also beginning to suggest an entirely new way of thinking about how we build computers and other devices. 'Traditional' computers are shedding new light on how living systems process information, and that understanding is now itself being used to build entirely new types of information-processing machine. This new form of engineering lies at the heart of what follows.

Nature has computation, compression and contraptions down to a fine art. A honeybee, with a brain one twenty-thousandth the size of our own, can perform complex face recognition that requires state-of-the-art computer systems to automate. A human genome sequence may be stored on a single DVD, and yet pretty much every cell in our body contains a copy. Science-fiction authors tell stories of 'microbots' – incredibly tiny devices that can roam around under their own power, sensing their environment, talking to one another and destroying intruders. Such devices already exist, but we know them better as bacteria. Of course, the notion of biomimicry – using nature as inspiration for human designs – is nothing new. Velcro, for example, was patented in 1955, but was originally inspired by plant burrs. Spider silk is now used as the basis for bulletproof vests. Away from the realm of materials science, nature-inspired design permeates our modern way of life. Telephone traffic is now routed through the global communications grid using models of how ants communicate using chemical signals. Computer systems based on the operation of the human brain detect fraudulent trading patterns on the stock market. As author Janine Benyus explains, 'The core idea is that nature, imaginative by necessity, has already solved many of the problems we are grappling with. Animals, plants and microbes are the consummate engineers. They have found what works, what is appropriate, and most important, what lasts here on Earth. After 3.8 billion years of research and development, failures are fossils, and what surrounds us is the secret to survival.'

Biocomputing – the main focus of this book, building computers not from silicon but from DNA molecules and living cells – has emerged in the last decade as a serious scientific research area. In his book The Selfish Gene, Richard Dawkins coined the phrase gene machine to describe early life forms in terms of their being nothing more than 'replication devices' to propagate their genetic programs. In the title of this book I use the similar phrase 'genesis machine', but with exactly the same intention as Dawkins: to emphasize the fact that there are direct parallels between the operation of computers and the gurglings of living 'stuff' – molecules, cells and human beings. As Dawkins puts it, 'Genes are master programmers, and they are programming for their lives.' Of course, the operation of organic, biological logic is a lot more noisy, messy and complex than the relatively simple and clear-cut execution of computer instructions. Genes are rarely 'on' or 'off'; in reality, they occupy a continuous spectrum of activity. Neither are they arranged like light switches, directly affecting a single, specific component. In fact, as we'll see, genes are wired together like an electrician's worst nightmare – turn up a dimmer switch in London, and you could kill the power to several city blocks in Manhattan. So how can we possibly begin to think about building computers from (maybe quite literally!) a can of worms? State-of-the-art electronic computers are unpredictable enough, without introducing the added messiness, ambiguity and randomness that biology brings. As computer scientist Dennis Shasha puts it, 'It's hard to imagine how two scientific cultures could be more antagonistic than computer science and biology ... In their daily work, computer scientists issue commands to meshes of silicon and metal in air-conditioned boxes; biologists feed nutrients to living cells in petri dishes. Computer scientists consider deviations to be errors; biologists consider deviations to be objects of wonder.' But, rather than shying away from the complexity of living systems, a new generation of bioengineers are seeking to embrace it – to harness the diversity of behaviour that nature offers, rather than trying to control or eliminate it. By building our own gene(sis) machines (devices that use this astonishing richness of behaviour at their very core) we are ushering in a new era, both in terms of practical devices and applications, and of how we view the very notion of computation – and of life.

If you believe the considerable hype that has surrounded biocomputing in recent years, you could be forgiven for thinking that our desktop PCs are in imminent danger of being usurped by a whole new generation of bio-boxes, thousands of times more powerful than the silicon-based dinosaurs they will replace. This is, of course, absolute nonsense. What concerns us here is not simply the construction of much smaller bioelectronic devices along the lines of what has gone before. We are not just in the business of replacing silicon with organic 'mush'. Silicon-based machines will, for the foreseeable future, be the weapons of choice for scientists probing the fundamental mysteries of nature. Device miniaturisation may well be one of the main side benefits of using molecules such as DNA to compute, but it is certainly not the major driving force behind this work. Instead, researchers in the field of biocomputing are looking to force a fundamental shift in our understanding of computation. In the main, traditional computers will still be important in our everyday lives for the foreseeable future. Our electricity bills will still be calculated using silicon-based computers built along existing principles. DNA computers will not do your tax return in double-quick time. Nobody, at least not in the foreseeable future, will be able to buy an off-the-shelf organic computer on which to play games or surf the Web.

This may sound like an abruptly negative way to begin a book on biocomputing. Far from it. I believe that alternatives to silicon should be sought if we are to build much smaller computers in the near to mid term. But what really interests me (and what motivated me to write this book) is the long term – by which I mean, not five or ten years down the line, but decades into the future. As Len Adleman, one of the main researchers in the field, told the New York Times in 1997, 'This is scouting work, but it's work that is worth pursuing, and some people and resources should be sent out to this frontier to lay a path for what computers could be like in 50 years as opposed to Intel's explorations for faster chips only a few years down the road.'

The key phrase here is 'what computers could be like'. The question being asked is not 'Can we build much smaller processor chips?', or 'How do we run existing computers at a much faster pace?', but what sorts of computers are possible in the future? This isn't tinkering around the edges, it's 'blue-sky' research – the sort of high-risk work that could change the world, or crash and burn. It's exhilarating stuff, and it has the potential to change for ever our definition of a 'computer'. Decades ago, scientists such as John von Neumann and Alan Turing laid the foundations of this field with their contemplation of the links between computation and biology. The fundamental questions that drive our research include the following: Does nature 'compute', and, if so, how? What does it mean if we say that a bacterium is 'doing computation'? How might we exploit or draw inspiration from natural systems in order to suggest entirely new ways of doing computation? Are there potential niches of application where new, organic-based computers could compete with their silicon cousins? How can mankind as a whole benefit from this potentially revolutionary new technology? What are the dangers? Could building computers with living components put us at risk from our own creations? What are the ethical implications of tinkering with nature's circuits? How do we (and, indeed, should we) reprogramme the logic of life?

I hope that in what follows I can begin to answer at least some of these questions. By tracing the development of traditional computers up to the present day, I shall try to give an idea of how computers have evolved over time. It is important that we are clear on what it means to 'compute'. Only by understanding what is (and what is not) computable may we fully comprehend the strengths and weaknesses of the devices we have built to do this thing we call 'computation'. By describing the development of the traditional computer all the way from its roots in ancient times, it will become clear that the notion of computation transcends any physical implementation. Silicon-based or bio-based, it's all computation. Once we understand this fact – that computation is not just a human-defined construct, but part of the very fabric of our existence – only then can we fully appreciate the computational opportunities offered to us by nature.


Life, the Universe and Everything

Descartes was dying. The once proud mathematician, philosopher, army officer and now tutor to the Queen of Sweden lay in a feverish huddle in his basement room. Racked with pneumonia, his already frail body could no longer bear the intolerable illness, and at four o'clock on the morning of 11 February 1650, he passed away. Barely five months after being summoned to court by Queen Christina, the merciless chill of the Stockholm winter claimed the life of the man who had coined the immortal phrase, 'I think, therefore I am'. Christina had summoned Descartes for tuition in the methods of philosophy. The Queen was a determined pupil, and Descartes would be regularly woken at 5 a.m. to begin the day's work. During one such gruelling session, Descartes declared that animals could be considered to be no different from machines. Intrigued by this, the Queen wondered about the converse case; if animals are nothing more than 'meat machines', could we equally consider machines to be 'alive', with all of the properties and capabilities of living creatures? Could a steam turbine be said to 'breathe'? Did an adding machine 'think'? She pointed to a nearby clock and challenged Descartes to explain how it could reproduce. He had no answer.

Thomas Hobbes, the English philosopher most famous for his work Leviathan, disagreed with the notion of Cartesian duality (body and soul as two separate entities), in that he believed that the universe consisted simply of matter in motion – nothing more, nothing less. Hobbes believed that the idea of an immaterial soul was nonsense, although he did share Descartes's view that the universe operates with clockwork regularity. In the opening lines of Leviathan, Hobbes gives credence to the view that life and machinery are one and the same:

Nature, the art whereby God has made and governs the world, is by the art of man, as in many other things, so in this also imitated – that it can make an artificial animal. For seeing life is but a motion of limbs, the beginning whereof is in some principal part within, why may we not say that all automata (engines that move themselves by springs and wheels as does a watch) have an artificial life? For what is the heart but a spring, and the nerves but so many strings, and the joints but so many wheels giving motion to the whole body such as was intended by the artificer?


A slim volume entitled What is Life? is often cited by leading biologists as one of the major influences over their choice of career path. Written by the leading physicist Erwin Schrödinger, and published in 1944, What is Life? has inspired countless life scientists. In his cover review of the 1992 edition (combined with two other works), physicist Paul Davies observed that

Erwin Schrödinger, iconoclastic physicist, stood at the pivotal point of history when physics was the midwife of the new science of molecular biology. In these little books he set down, clearly and concisely, most of the great conceptual issues that confront the scientist who would attempt to unravel the mysteries of life. This combined volume should be compulsory reading for all students who are seriously concerned with truly deep issues of science.


(Continues...)

Excerpted from Genesis Machines by Martyn Amos. Copyright © 2006 Martyn Amos. Excerpted by permission of Grove Atlantic Ltd.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Excerpts are provided by Dial-A-Book Inc. solely for the personal use of visitors to this web site.

Table of Contents


Acknowledgements     ix
Prologue     xi
Introduction     1
The Logic of Life     7
Birth of the Machines     36
There's Plenty of Room at the Bottom     83
The TT-100     118
The Gold Rush     151
Flying Fish and Feynman     203
Scrap-heap Challenge     237
Epilogue     301
Notes     306
Index     343