Where Science and Faith Converge
  • Evolutionary Paradigm Lacks Explanation for Origin of Mitochondria and Eukaryotic Cells

Oct 03, 2017

    You carried the cross
    Of my shame
    Oh my shame
    You know I believe it
    But I still haven’t found
    What I’m looking for

    —Adam Clayton, Dave Evans, Larry Mullen, Paul David Hewson, Victor Reina

One of my favorite U2 songs is “I Still Haven’t Found What I’m Looking For.” For me, it’s a reminder that because of Christ, my life has meaning, purpose, and a sense of destiny. Still, no matter how hard I search, I will never discover ultimate fulfillment in this world, only in the world to come—the new heaven and new earth.

    Though their pursuit is scientific and not religious, many scientists have also failed to find what they have been looking for. Physicists are on a quest to find the Theory of Everything—a Grand Unified Theory (GUT) that can account for everything in physics. However, a GUT eludes them.

    On the other hand, life scientists appear to have found it. They claim to have discovered biology’s GUT: the theory of evolution. Many biologists assert that evolutionary mechanisms can fully account for the origin, history, and design of life. And they are happy to sing about their discovery any chance they get.

    Yet, despite this claim, the evolutionary paradigm seems to come up short time and time again when it comes to explaining key events in life’s history. And this failure serves as the basis for my skepticism regarding the evolutionary paradigm.

Currently, evolutionary biologists lack explanations for the key transitions in life’s history, including these:

    • origin of life,
    • origin of eukaryotic cells,
    • origin of sexual reproduction,
    • origin of body plans,
    • origin of consciousness,
    • and the origin of human exceptionalism.

To be certain, evolutionary biologists have proposed models to explain each of these transitions, but the models consistently fail to deliver, as a recent review article published by two prominent evolutionary biologists from the Hungarian Academy of Sciences illustrates.1 In this article, these researchers point out the insufficiency of the endosymbiont hypothesis—the leading evolutionary model for the origin of eukaryotic cells—to account for the origin of mitochondria and, hence, eukaryogenesis.

    The Endosymbiont Hypothesis

Lynn Margulis (1938–2011) advanced the endosymbiont hypothesis for the origin of eukaryotic cells in the 1960s, building on the ideas of the Russian botanist Konstantin Mereschkowski. Taught in introductory high school and college biology courses, Margulis’s work has become a cornerstone of the evolutionary paradigm. This classroom exposure explains why students often ask me about the endosymbiont hypothesis when I speak on university campuses. Many first-year biology students and professional life scientists alike find the evidence for this idea compelling and, consequently, view it as providing broad support for an evolutionary explanation for the history and design of life.

    According to the hypothesis, complex cells originated when symbiotic relationships formed among single-celled microbes after free-living bacterial and/or archaeal cells were engulfed by a “host” microbe. (Ingested cells that take up permanent residence within other cells are referred to as endosymbionts.)

Presumably, organelles such as mitochondria were once endosymbionts. Once engulfed, the endosymbionts took up permanent residency within the host, growing and dividing inside it. Over time, the endosymbionts and the host became mutually interdependent, with the endosymbionts providing a metabolic benefit for the host cell. The endosymbionts gradually evolved into organelles through a process referred to as genome reduction. This reduction resulted when genes from the endosymbionts’ genomes were transferred into the genome of the host organism. Eventually, the host cell evolved the machinery to produce the proteins needed by the former endosymbionts and processes to transport those proteins into the organelle’s interior.

    Evidence for the Endosymbiont Hypothesis

The similarity between organelles and bacteria serves as the main line of evidence for the endosymbiont hypothesis. For example, mitochondria—which are believed to be descended from a group of alpha-proteobacteria—are about the same size and shape as a typical bacterium and have a double-membrane structure like gram-negative cells. These organelles also divide in a way that is reminiscent of bacterial cells.

    Biochemical evidence also exists for the endosymbiont hypothesis. Evolutionary biologists view the presence of the diminutive mitochondrial genome as a vestige of this organelle’s evolutionary history. They see the biochemical similarities between mitochondrial and bacterial genomes as further evidence for the evolutionary origin of these organelles.

The presence of the unique lipid cardiolipin in the mitochondrial inner membrane also serves as evidence for the endosymbiont hypothesis. This important lipid component of bacterial inner membranes is not found in the membranes of eukaryotic cells—except for the inner membranes of mitochondria. In fact, biochemists consider it a signature lipid for mitochondria and a vestige of this organelle’s evolutionary history.2

    Does the Endosymbiont Hypothesis Successfully Account for the Origin of Mitochondria?

Despite the seemingly compelling evidence for the endosymbiont hypothesis, evolutionary biologists lack a genuine explanation for the origin of mitochondria and, in a broader context, the origin of eukaryotic cells. In their recently published critical review, Zachar and Szathmary point out that evolutionary biologists have proposed more than twenty different evolutionary scenarios for mitochondrial origins that fall under the umbrella of the endosymbiont hypothesis. Of these, they identify eight as reasonable, casting the others aside. Still, these eight hypotheses fail to fully account for the origin of mitochondria. The Hungarian biologists delineate twelve questions that any successful endosymbiogenesis model must answer. In turn, they demonstrate that none of these models answers all the questions. In doing so, the two researchers call for a new theory.

    In the article’s abstract, the authors state, “The origin of mitochondria is a unique and hard evolutionary problem, embedded within the origin of eukaryotes. . . . Contending theories widely disagree on ancestral partners, initial conditions and unfolding events. There are many open questions but there is no comparative examination of hypotheses. We have specified twelve questions about the observable facts and hidden processes leading to the establishment of the endosymbiont that a valid hypothesis must address. There is no single theory capable of answering all questions.”3

    Space doesn’t permit me to discuss each of the questions posed by the pair of biologists. Still, I would like to call attention to a few problems confronting the endosymbiont hypothesis, highlighted in their critical review.

    Lack of Transitional Intermediates. Biologists have yet to discover any single-celled organisms that represent transitional intermediates between prokaryotes and eukaryotic cells. (There are some eukaryotes that lack mitochondria, but they appear to have lost these organelles.) All complex cells display the eukaryotic hallmark features. In other words, it looks as if eukaryotic cells emerged in a short period of time, without any transitional forms. In fact, some biologists dub the transition the eukaryotic big bang.

    Chimeric Nature of Eukaryotic Cells. Eukaryotic cells possess an unusual combination of features. Their information-processing systems resemble those of archaea, but their membranes and energy metabolism are bacteria-like. There is no plausible evolutionary scenario to explain this blend of features. It would require the archaeon host to replace its membranes while retaining all its information-processing genes. Evolutionary biologists know of no instance in which this type of transition took place, nor do they know how it could have occurred.

    Absence of Membrane Bioenergetics in the Host. All prokaryotic organisms rely on their plasma membrane to produce energy. If eukaryotic cells emerged via endosymbiogenesis, then the plasma membranes of eukaryotic cells should possess vestiges of that past function. Yet, the plasma membranes of eukaryotic cells show no traces of this essential biochemical feature.

Mechanism of Inclusion. The most plausible way for the endosymbiont to be taken up by the host cell is through a process called phagocytosis. But why wouldn’t the engulfed cell be digested by the host? How did the endosymbiont escape destruction? And, if it somehow survived, why don’t mitochondria possess a triple-membrane system, with the outermost membrane derived from the phagosome?

    Early Selective Advantage. Once inside the host, why didn’t the endosymbiont simply reproduce, overrunning the host cell? What benefit would it be for the host cell to initially harbor the endosymbiont? Currently, evolutionary biologists don’t have answers to troubling questions such as these.

    The challenges delineated by the Hungarian biologists aren’t the only ones faced by evolutionary models for endosymbiogenesis. As I discuss in a previous article, mitochondrial protein biogenesis poses another difficult problem for the endosymbiont hypothesis.

    The authors of the critical review sum it up this way: “The integration of mitochondria was a major transition, and a hard one. It poses puzzles so complicated that new theories are still generated 100 years since endosymbiogenesis was first proposed by Konstantin Mereschkowsky and 50 years since Lynn Margulis cemented the endosymbiotic origin of mitochondria into evolutionary biology. . . . One would expect that by this time, there is a consensus about the transition, but far from that even the most fundamental points are still debated.”4

    Though evolutionary biologists claim to have life’s history all figured out, in reality they are like most of us—they still haven’t found what they are looking for.

    Resources

    Endnotes

    1. Istvan Zachar and Eors Szathmary, “Breath-Giving Cooperation: Critical Review of Origin of Mitochondria Hypotheses,” Biology Direct 12 (August 14, 2017): 19, doi:10.1186/s13062-017-0190-5.
    2. In previous posts (here, here, and here), I explain the rationale for mitochondrial DNA and the presence of cardiolipin in the inner mitochondrial membrane from a creation model/intelligent design vantage point and, in doing so, demonstrate that the two biochemical features aren’t uniquely explained by the endosymbiont hypothesis.
    3. Zachar and Szathmary, “Breath-Giving Cooperation.”
    4. Zachar and Szathmary, “Breath-Giving Cooperation.”
  • Whale Vocal Displays Make Beautiful Case for a Creator

Sep 26, 2017

    There is the sea, vast and spacious,
    teeming with creatures beyond number—
    living things both large and small.
    There the ships go to and fro,
    and Leviathan, which you formed to frolic there.

    —Psalm 104:25–26

A few weeks ago, I did something I had always wanted to do. I listened to the uncut, live version of the Allman Brothers’ “Mountain Jam” from beginning to end. Thirty-four minutes in length, this song appears on the band’s live At Fillmore East album. Though the Allman Brothers are among my favorite groups, I had never had the time and motivation to listen to this song in its entirety. I like listening to jam bands, but a thirty-four-minute song . . . in any case, a cross-country flight finally afforded me the opportunity to give my undivided attention to this jam band masterpiece. What an incredible display of musicianship!

    Humpback Whale Acoustical Displays

    Rockers aren’t the only ones who can get a bit carried away when performing a song. Humpback whales are notorious for their jam-band-like acoustical displays. These creatures produce elaborate patterns of sounds that researchers dub songs. The whale songs can last for up to 30 minutes, and some whales will repeatedly perform the same song for up to 24 hours.

    Humpback whale songs display a complex hierarchical organization. The most basic element of the song consists of a single sound, called a unit. These creatures combine units together to form phrases. In turn, they combine phrases to form themes. Finally, they combine themes to form a song, with each theme connected by transitional phrasing.
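The nested structure described above can be sketched as a simple data model. This is only an illustration of the unit → phrase → theme → song hierarchy; the unit names below are hypothetical placeholders, not actual whale vocalizations.

```python
# A minimal sketch of the hierarchical song structure described above.
# Unit names ("moan", "cry", etc.) are hypothetical placeholders.
from dataclasses import dataclass
from typing import List

@dataclass
class Phrase:
    units: List[str]       # a phrase is a sequence of single sounds (units)

@dataclass
class Theme:
    phrases: List[Phrase]  # a theme is a sequence of phrases

@dataclass
class Song:
    themes: List[Theme]    # a song is a sequence of themes

    def all_units(self) -> List[str]:
        # Flatten the hierarchy back down to its most basic elements.
        return [u for t in self.themes for p in t.phrases for u in p.units]

# A toy two-theme song built from placeholder units.
song = Song(themes=[
    Theme(phrases=[Phrase(units=["moan", "cry"]), Phrase(units=["moan", "chirp"])]),
    Theme(phrases=[Phrase(units=["groan", "groan", "cry"])]),
])
print(song.all_units())
```

Flattening the toy song recovers the unit sequence in order, which is just the point of the hierarchy: a long display is built from a small inventory of basic sounds combined at several levels.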

    Researchers aren’t certain why humpback whales engage in these complex acoustical displays. Only the males sing. Perhaps their singing establishes dominance within the group. Most researchers think that the males sing to attract females. (Even for whales, the musicians get the girls.)

    Humpback whales in the same area perform the same song. But, their songs continually evolve. Researchers refer to the complete transformation of one whale song into another as a revolution. As the songs evolve, each member of the group learns the new variant. When one group of humpback whales encounters another group, the two groups exchange songs. This exchange accelerates the song revolution. As a result of this encounter, members of both groups develop and learn a new song.

    How Do Humpback Whales Learn Songs?

    Researchers from the UK and Australia wanted to understand how humpback whales learn new songs.1 Their query is part of a bigger question: How do animals transmit culture—learned information and behaviors—to other members of the group and to the next generation?

To answer this question, the research team recorded 9,300 acoustical displays over the course of two complete song revolutions for the humpback whales of the South Pacific. Among these recordings, they discovered hybrid songs—vocal displays composed of bits and pieces of both the old and the new songs. They concluded that these hybrid songs captured the transition from one song to the next.

    These song hybrids consisted of phrases and themes from the old and new songs spliced together. The structure of hybrid songs indicated to the research team that humpback whales must learn songs in the same way that humans learn languages, by learning bits and piecing them together.

    Rock on!

    The Creator’s Artistry

Sometimes, as Christian apologists, we tend to think of God solely as an Engineer who creates with only one specific purpose or function in mind. But the insights researchers have gained into the vocal displays of humpback whales remind me that the God I worship is also a Divine Artist—a God who creates for his enjoyment.

Scripture supports this idea. Psalm 104:26 states that God formed the leviathan (which in this passage seems to refer to whales) on day five to frolic in the vast, spacious seas. In other words, God created the great sea mammals for no other purpose than to play!

    Artistry and engineering are not mutually exclusive. Engineers often design cars and buildings to be both functionally efficient and aesthetically pleasing. But sometimes, as humans, we create for no other reason than for our pleasure and for others to enjoy and be moved by our work.

    Nature’s Beauty and God’s Existence

    The humpback whale exemplifies the remarkable beauty of the natural world. Everywhere we look in nature—whether the night sky, the oceans, the rain forests, the deserts, even the microscopic world—we see a grandeur so great that we are often moved to our very core.

    Watching a humpback whale breach or hearing a recording of its vocal displays is more than sufficient to produce in us that sense of awe and wonder. And yet, our wonder and amazement only grow as we study these creatures using sophisticated scientific techniques.

    For Christians, nature’s beauty prompts us to worship the Creator. But it also points to the reality of God’s existence and supports the biblical view of humanity.

    As philosopher Richard Swinburne argues, “If God creates a universe, as a good workman, he will create a beautiful universe. On the other hand, if the universe came into existence without being created by God, there is no reason to suppose that it would be a beautiful universe.”2 In other words, the beauty in the world around us signifies the Divine.

    But, as human beings, why do we perceive beauty in the world? In response to this question, Swinburne asserts, “There is certainly no particular reason why, if the universe originated uncaused, psycho-physical laws…would bring about aesthetic sensibilities in humans.”3 But, if human beings are made in God’s image, as Scripture teaches, we should be able to discern and appreciate the universe’s beauty, made by our Creator to reveal his glory and majesty.

In short, the humpback whales’ acoustical displays—a jam band masterpiece—sing of the Creator’s existence and his artistry.

    Resources

    Endnotes

    1. Ellen C. Garland et al., “Song Hybridization Events during Revolutionary Song Change Provide Insights into Cultural Transmission in Humpback Whales,” Proceedings of the National Academy of Sciences USA 114 (July 25, 2017): 7822–29, doi:10.1073/pnas.1621072114.
    2. Richard Swinburne, The Existence of God, 2nd ed. (New York: Oxford University Press, 2004), 190–91.
    3. Swinburne, Existence of God, 190–91.
  • The Human Genome: Copied by Design

Sep 19, 2017

The years my wife Amy and I spent in graduate school studying biochemistry were some of the best of our lives. But it wasn’t all fun and games. For the most part, we spent long days and nights working in the lab.

    But we weren’t alone. Most of the graduate students in the chemistry department at Ohio University kept the same hours we did, with all-nighters broken up around midnight by “Dew n’ Donut” runs to the local 7-Eleven. Even though everybody worked hard, some people were just more productive than others. I soon came to realize that activity and productivity were two entirely different things. Some of the busiest people I knew in graduate school rarely accomplished anything.

    This same dichotomy lies at the heart of an important scientific debate taking place about the meaning of the ENCODE project results. This controversy centers around the question: Is the biochemical activity measured for the human genome merely biochemical noise or is it productive for the cell? Or to phrase the question the way a biochemist would: Is biochemical activity associated with the human genome the same thing as biochemical function?

    The answer to this question doesn’t just have scientific implications. It impacts questions surrounding humanity’s origin. Did we arise through evolutionary processes or are we the product of a Creator’s handiwork?

    The ENCODE Project

    The ENCODE project—a program carried out by a consortium of scientists with the goal of identifying the functional DNA sequence elements in the human genome—reported phase II results in the fall of 2012. To the surprise of many, the ENCODE project reported that around 80% of the human genome displays biochemical activity, and hence function, with the expectation that this percentage should increase with phase III of the project.

    If valid, the ENCODE results force a radical revision of the way scientists view the human genome. Instead of a wasteland littered with junk DNA sequences (as the evolutionary paradigm predicts), the human genome (and the genomes of other organisms) is packed with functional elements (as expected if a Creator brought human beings into existence).

    Within hours of the publication of the phase II results, evolutionary biologists condemned the ENCODE results, citing technical issues with the way the study was designed and the way the results were interpreted. (For a response to these complaints go here, here, and here.)

    Is Biochemical Activity the Same Thing As Function?

One of the technical complaints relates to how the ENCODE consortium determined biochemical function. Critics argue that ENCODE scientists conflated biochemical activity with function. For example, the ENCODE Project determined that about 60% of the human genome is transcribed to produce RNA. ENCODE skeptics argue that most of these transcripts lack function. Evolutionary biologist Dan Graur has asserted that “some studies even indicate that 90% of transcripts generated by RNA polymerase II may represent transcriptional noise.”1 In other words, the biochemical activity measured by the ENCODE project can be likened to busy but nonproductive graduate students who hustle and bustle about the lab but fail to get anything done.

When I first learned how many evolutionary biologists interpreted the ENCODE results, I was skeptical. As a biochemist, I am well aware that living systems could not tolerate such high levels of transcriptional noise.

    Transcription is an energy- and resource-intensive process. Therefore, it would be untenable to believe that most transcripts are mere biochemical noise. Such a view ignores cellular energetics. Transcribing 60% of the genome when most of the transcripts serve no useful function would routinely waste a significant amount of the organism’s energy and material stores. If such an inefficient practice existed, surely natural selection would eliminate it and streamline transcription to produce transcripts that contribute to the organism’s fitness.

    Most RNA Transcripts Are Functional

Recent work supports my intuition as a biochemist. Genomics scientists are quickly realizing that most of the RNA molecules transcribed from the human genome serve critical functional roles.

    For example, a recently published report from the Second Aegean International Conference on the Long and the Short of Non-Coding RNAs (held in Greece between June 9–14, 2017) highlights this growing consensus. Based on the papers presented at the conference, the authors of the report conclude, “Non-coding RNAs . . . are not simply transcriptional by-products, or splicing artefacts, but comprise a diverse population of actively synthesized and regulated RNA transcripts. These transcripts can—and do—function within the contexts of cellular homeostasis and human pathogenesis.”2

    Shortly before this conference was held, a consortium of scientists from the RIKEN Center for Life Science Technologies in Japan published an atlas of long non-coding RNAs transcribed from the human genome. (Long non-coding RNAs are a subset of RNA transcripts produced from the human genome.) They identified nearly 28,000 distinct long non-coding RNA transcripts and determined that nearly 19,200 of these play some functional role, with the possibility that this number may increase as they and other scientific teams continue to study long non-coding RNAs.3 One of the researchers involved in this project acknowledges that “There is strong debate in the scientific community on whether the thousands of long non-coding RNAs generated from our genomes are functional or simply byproducts of a noisy transcriptional machinery . . . we find compelling evidence that the majority of these long non-coding RNAs appear to be functional.”4

    Copied by Design

    Based on these results, it becomes increasingly difficult for ENCODE skeptics to dismiss the findings of the ENCODE project. Independent studies affirm the findings of the ENCODE consortium—namely, that a vast proportion of the human genome is functional.

We have come a long way from the early days of the Human Genome Project. When it was completed in 2003, many scientists estimated that around 95% of the human genome consisted of junk DNA. In doing so, they seemingly provided compelling evidence that humans must be the product of an evolutionary history.

    But, here we are, nearly 15 years later. And the more we learn about the structure and function of genomes, the more elegant and sophisticated they appear to be. And the more reasons we have to think that the human genome is the handiwork of our Creator.

    Resources

    Endnotes

1. Dan Graur et al., “On the Immortality of Television Sets: ‘Function’ in the Human Genome According to the Evolution-Free Gospel of ENCODE,” Genome Biology and Evolution 5 (March 1, 2013): 578–90, doi:10.1093/gbe/evt028.
    2. Jun-An Chen and Simon Conn, “Canonical mRNA is the Exception, Rather than the Rule,” Genome Biology 18 (July 7, 2017): 133, doi:10.1186/s13059-017-1268-1.
    3. Chung-Chau Hon et al., “An Atlas of Human Long Non-Coding RNAs with Accurate 5′ Ends,” Nature 543 (March 9, 2017): 199–204, doi:10.1038/nature21374.
    4. RIKEN, “Improved Gene Expression Atlas Shows that Many Human Long Non-Coding RNAs May Actually Be Functional,” ScienceDaily, March 1, 2017, www.sciencedaily.com/releases/2017/03/170301132018.htm.
  • Dollo’s Law at Home with a Creation Model, Reprised*

Sep 12, 2017

    *This article is an expanded and updated version of an article published in 2011 on reasons.org.

Published posthumously, Thomas Wolfe’s 1940 novel You Can’t Go Home Again—considered by many to be his most significant work—explores how brutally unfair the passage of time can be. In the finale, George Webber (the story’s protagonist) concedes, “You can’t go back home” to family, childhood, familiar places, dreams, and old ways of life.

    In other words, there’s an irreversible quality to life. Call it the arrow of time.

Like Wolfe, most evolutionary biologists believe there is an irreversibility to life’s history and the evolutionary process. In fact, this idea is codified in Dollo’s Law, which states that an organism cannot return, even partially, to a previous evolutionary stage occupied by one of its ancestors. Yet, several recent studies have uncovered what appear to be violations of Dollo’s Law. These violations call into question the sufficiency of the evolutionary paradigm to fully account for life’s history. On the other hand, the return to “ancestral states” finds an explanation in an intelligent design/creation model approach to life’s history.

    Dollo’s Law

The French-born Belgian paleontologist Louis Dollo formulated the law that bears his name in 1893, before the advent of modern-day genetics, basing it on patterns he unearthed from the fossil record. Today, his idea finds undergirding in the contemporary understanding of genetics and developmental biology.

    Evolutionary biologist Richard Dawkins explains the modern-day conception of Dollo’s Law this way:

    “Dollo’s Law is really just a statement about the statistical improbability of following exactly the same evolutionary trajectory twice . . . in either direction. A single mutational step can easily be reversed. But for larger numbers of mutational steps . . . mathematical space of all possible trajectories is so vast that the chance of two trajectories ever arriving at the same point becomes vanishingly small.”1

    If a biological trait is lost during the evolutionary process, then the genes and developmental pathways responsible for that feature will eventually degrade, because they are no longer under selective pressure. In 1994, using mathematical modeling, researchers from Indiana University determined that once a biological trait is lost, the corresponding genes can be “reactivated” with reasonable probability over time scales of five hundred thousand to six million years. But once a time span of ten million years has transpired, unexpressed genes and dormant developmental pathways become permanently lost.2

    In 2000, a scientific team from the University of Oregon offered a complementary perspective on the timescale for evolutionary reversals when they calculated how long it takes for a duplicated gene to lose function.3 (Duplicated genes serve as a proxy for dormant genes rendered useless because the trait they encode has been lost.) According to the evolutionary paradigm, once a gene becomes duplicated, it is no longer under the influence of natural selection. That is, it undergoes neutral evolution, and eventually becomes silenced as mutations accrue. As it turns out, the half-life for this process is approximately four million years. To put it another way, sixteen to twenty-four million years after the duplication event, the duplicated gene will have completely lost its function. Presumably, this result applies to dormant, unexpressed genes rendered unnecessary because the trait they specify is lost.
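The timescale above follows from simple half-life arithmetic. The sketch below assumes straightforward exponential decay with the four-million-year half-life cited in the study; the function name is mine, chosen for illustration.

```python
# Back-of-the-envelope arithmetic behind the timescale discussed above:
# with a half-life of ~4 million years, the fraction of duplicated genes
# still functional after t million years is 0.5 ** (t / 4).
def functional_fraction(t_myr: float, half_life_myr: float = 4.0) -> float:
    """Fraction of genes retaining function after t_myr million years,
    assuming exponential decay with the given half-life."""
    return 0.5 ** (t_myr / half_life_myr)

for t in (4, 8, 16, 24):
    print(f"{t:>2} million years: {functional_fraction(t):.4f}")
# After 16 million years (4 half-lives), only 1/16 (~6%) remain functional;
# after 24 million years (6 half-lives), about 1/64 (under 2%) remain.
```

This is why a window of roughly sixteen to twenty-four million years corresponds to effectively complete loss of function: by then, only a few percent of the original functional copies would survive.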

Both scenarios assume neutral evolution and the accumulation of mutations in a clocklike manner. But what if the loss of gene function is advantageous? Collaborative work by researchers from Harvard University and NYU in 2007 demonstrated that loss of gene function can take place on the order of about one million years if natural selection influences gene loss.4 This research team studied the loss of eyes in a cave fish, the Mexican tetra. Because they live in a dark cave environment, eyes serve no benefit for these creatures. The team discovered that eye reduction offers an advantage for these fish, because of the high metabolic cost associated with maintaining eyes. The reduced metabolic cost associated with eye loss accelerates the loss of gene function through the operation of natural selection.

    Based on these three studies, it is reasonable to conclude that once a trait has been lost, the time limit for evolutionary reversals is on the order of about 20 million years.

    The very nature of evolutionary mechanisms and the constraints of genetic mutations make it extremely improbable that evolutionary processes would allow an organism to revert to an ancestral state or to recover a lost biological trait. You can’t go home again.

    Violations of Dollo’s Law

    Despite this expectation, over the course of the last several years, researchers have uncovered several instances in which Dollo’s Law has been violated. A brief description of a handful of these occurrences follows:

The re-evolution of mandibular teeth in the frog genus Gastrotheca. This group is the only one that includes living frogs with true teeth on the lower jaw. When examined from an evolutionary framework, mandibular teeth were present in ancient frogs and then lost in the ancestor of all living frogs. It also looks as if teeth had been absent in frogs for 225 million years before they reappeared in Gastrotheca.5

The re-evolution of oviparity in sand boas. When viewed from an evolutionary perspective, it appears as if live birth (viviparity) evolved from egg-laying (oviparity) behaviors in reptiles several times. For example, estimates indicate that this evolutionary transition has occurred in snakes at least thirty times. As a case in point, there are 41 species of boas in the Old and New Worlds that give live birth. Yet, two recently described sand boas, the Arabian sand boa (Eryx jayakari) and the Saharan sand boa (Eryx muelleri), lay eggs. Phylogenetic analysis carried out by researchers from Yale University indicates that egg-laying in these two species of sand boas re-evolved 60 million years after the transition to viviparity took place.6

The re-evolution of rotating sex combs in Drosophila. Sex combs are modified bristles unique to male fruit flies, used for courtship and mating. In contrast to transverse sex combs, rotating sex combs result when several rows of bristles undergo a rotation of ninety degrees. In the ananassae fruit fly group, most of the twenty or so species have simple transverse sex combs, with Drosophila bipectinata and Drosophila parabipectinata the two exceptions. These fruit fly species possess rotating sex combs. Phylogenetic analysis conducted by investigators from the University of California, Davis indicates that the rotating sex combs in these two species re-evolved twelve million years after being lost.7

    The re-evolution of sexuality in mites belonging to the taxon Crotoniidae. Mites exhibit a wide range of reproductive modes, including parthenogenesis. In fact, this means of reproduction is prominent in the group Oribatida, which clusters into two subgroups that reproduce almost exclusively by parthenogenesis. However, residing within one of these clusters is the taxon Crotoniidae, which displays sexual reproduction. Based on an evolutionary analysis, a team of German researchers concluded that this group re-evolved the capacity for sexual reproduction.8

    The re-evolution of shell coiling in limpets. From an evolutionary perspective, the coiled shell has been lost numerous times in gastropod lineages, producing the limpet shape: a cap-shaped shell and a large foot. Evolutionary biologists have long thought that the loss of the coiled shell represents an evolutionary dead end. However, researchers from Venezuela have shown that coiled shell morphology re-evolved at least once in calyptraeids, 20 to 100 million years after its loss.9

    This short list gives just a few recently discovered examples of Dollo’s Law violations. Surveying the scientific literature, evolutionary biologist J. J. Wiens identified an additional eight examples in which Dollo’s Law was violated and determined that in all cases the lost trait reappeared after at least 20 million years had passed and in some instances after 120 million years had transpired.10

    Violation of Dollo’s Law and the Theory of Evolution

    Given that the evolutionary paradigm predicts that re-evolution of traits should not occur after the trait has been lost for twenty million years, the numerous discoveries of Dollo’s Law violations provide a basis for skepticism about the capacity of the evolutionary paradigm to fully account for life’s history. The problem is likely worse than it initially appears. J. J. Wiens points out that Dollo’s Law violations may be more widespread than imagined, but difficult to detect for methodological reasons.11

    In response to this serious problem, evolutionary biologists have offered two ways to account for Dollo’s Law violations.12 The first is to question the validity of the evolutionary analyses that expose the violations. To put it another way, these scientists claim that the recently identified Dollo’s Law violations are artifacts of the evolutionary analysis, and not real. However, this work-around is unconvincing. The evolutionary biologists who discovered the different examples of Dollo’s Law violations were aware of this complication and took pains to ensure the validity of their analyses.

    Other evolutionary biologists argue that some genes and developmental modules serve more than one function. So, even though the trait specified by a gene or developmental module is lost, the gene or module remains intact because it serves other roles. This retention makes it possible for traits to re-evolve, even after a hundred million years. Though reasonable, this explanation must still be viewed as speculative. Evolutionary biologists have yet to apply the same mathematical rigor to it as they have to estimating the timescale for loss of function in dormant genes. Such calculations are critical given the expansive timescales involved in some of the Dollo’s Law violations.

    Considering the nature of evolutionary processes, this response also neglects the fact that genes and developmental pathways will continue to evolve under natural selection once a trait is lost. Freed from the constraints of the lost function, the genes and developmental modules experience new evolutionary possibilities previously unavailable to them. The more functional roles a gene or developmental module assumes, the less freedom it has to evolve; shedding one of those roles increases the likelihood that it will become modified as the evolutionary process explores the space newly available to it. In this scenario, it is reasonable to think that natural selection could modify the genes and developmental modules to such an extent that the lost trait would be just as unlikely to re-evolve as it would be if gene loss were a consequence of neutral evolution. In fact, the study of eye loss in the Mexican tetra suggests that the modification of these genes and developmental modules could occur at a faster rate under natural selection than under neutral evolution.

    Violation of Dollo’s Law and the Case for Creation

    While Dollo’s Law violations are problematic for the evolutionary paradigm, the re-evolution—or perhaps, more appropriately, the reappearance—of the same biological traits after their disappearance makes sense from a creation model/intelligent design perspective. The reappearance of biological systems could be understood as the work of the Creator. It is not unusual for engineers to reuse the same design or to revisit a previously used design feature in a new prototype. While there is an irreversibility to the evolutionary process, designers are not constrained in that way and can freely return to old designs.

    Dollo’s Law violations are at home in a creation model, highlighting the value of this approach to understanding life’s history.

    Endnotes

    1. Richard Dawkins, The Blind Watchmaker: Why the Evidence of Evolution Reveals a Universe without Design (New York: W.W. Norton, 2015), 94.
    2. Charles R. Marshall, Elizabeth C. Raff, and Rudolf A. Raff, “Dollo’s Law and the Death and Resurrection of Genes,” Proceedings of the National Academy of Sciences USA 91 (December 6, 1994): 12283–87.
    3. Michael Lynch and John S. Conery, “The Evolutionary Fate and Consequences of Duplicate Genes,” Science 290 (November 10, 2000): 1151–54, doi:10.1126/science.290.5494.1151.
    4. Meredith Protas et al., “Regressive Evolution in the Mexican Cave Tetra, Astyanax mexicanus,” Current Biology 17 (March 6, 2007): 452–54, doi:10.1016/j.cub.2007.01.051.
    5. John J. Wiens, “Re-evolution of Lost Mandibular Teeth in Frogs after More than 200 Million Years, and Re-evaluating Dollo’s Law,” Evolution 65 (May 2011): 1283–96, doi:10.1111/j.1558-5646.2011.01221.x.
    6. Vincent J. Lynch and Günter P. Wagner, “Did Egg-Laying Boas Break Dollo’s Law? Phylogenetic Evidence for Reversal to Oviparity in Sand Boas (Eryx: Boidae),” Evolution 64 (January 2010): 207–16, doi:10.1111/j.1558-5646.2009.00790.x.
    7. Thaddeus D. Seher et al., “Genetic Basis of a Violation of Dollo’s Law: Re-Evolution of Rotating Sex Combs in Drosophila bipectinata,” Genetics 192 (December 1, 2012): 1465–75, doi:10.1534/genetics.112.145524.
    8. Katja Domes et al., “Reevolution of Sexuality Breaks Dollo’s Law,” Proceedings of the National Academy of Sciences USA 104 (April 24, 2007): 7139–44, doi:10.1073/pnas.0700034104.
    9. Rachel Collin and Roberto Cipriani, “Dollo’s Law and the Re-Evolution of Shell Coiling,” Proceedings of the Royal Society B 270 (December 22, 2003): 2551–55, doi:10.1098/rspb.2003.2517.
    10. Wiens, “Re-evolution of Lost Mandibular Teeth in Frogs.”
    11. Wiens, “Re-evolution of Lost Mandibular Teeth in Frogs.”
    12. Rachel Collin and Maria Pia Miglietta, “Reversing Opinions on Dollo’s Law,” Trends in Ecology and Evolution 23 (November 2008): 602–9, doi:10.1016/j.tree.2008.06.013.
  • Is 75% of the Human Genome Junk DNA?

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Aug 29, 2017

    By the rude bridge that arched the flood,
    Their flag to April’s breeze unfurled,
    Here once the embattled farmers stood,
    And fired the shot heard round the world.

    –Ralph Waldo Emerson, Concord Hymn

    Emerson referred to the Battles of Lexington and Concord, the first skirmishes of the Revolutionary War, as the “shot heard round the world.”

    While not as loud as the gunfire that triggered the Revolutionary War, a recent article published in Genome Biology and Evolution by evolutionary biologist Dan Graur has garnered a lot of attention,1 serving as the latest salvo in the junk DNA wars—a conflict between genomics scientists and evolutionary biologists about the amount of functional DNA sequences in the human genome.

    Clearly, this conflict has important scientific ramifications, as researchers strive to understand the human genome and seek to identify the genetic basis for diseases. The functional content of the human genome also has significant implications for creation-evolution skirmishes. If most of the human genome turns out to be junk after all, then the case for a Creator potentially suffers collateral damage.

    According to Graur, no more than 25% of the human genome is functional—a much lower percentage than reported by the ENCODE Consortium. Released in September 2012, phase II results of the ENCODE project indicated that 80% of the human genome is functional, with the expectation that the percentage of functional DNA in the genome would rise toward 100% when phase III of the project reached completion.

    If true, Graur’s claim would represent a serious blow to the validity of the ENCODE project conclusions and devastate the RTB human origins creation model. Intelligent design proponents and creationists (like me) have heralded the results of the ENCODE project as critical in our response to the junk DNA challenge.

    Junk DNA and the Creation vs. Evolution Battle

    Evolutionary biologists have long considered the presence of junk DNA in genomes as one of the most potent pieces of evidence for biological evolution. Skeptics ask, “Why would a Creator purposely introduce identical nonfunctional DNA sequences at the same locations in the genomes of different, though seemingly related, organisms?”

    When the draft sequence was first published in 2000, researchers thought only around 2–5% of the human genome consisted of functional sequences, with the rest being junk. Numerous skeptics and evolutionary biologists claim that such a vast amount of junk DNA in the human genome is compelling evidence for evolution and the most potent challenge against intelligent design/creationism.

    But these arguments evaporate in the wake of the ENCODE project. If valid, the ENCODE results would radically alter our view of the human genome. No longer could the human genome be regarded as a wasteland of junk; rather, the human genome would have to be recognized as an elegantly designed system that displays sophistication far beyond what most evolutionary biologists ever imagined.

    ENCODE Skeptics

    The findings of the ENCODE project have been criticized by some evolutionary biologists who have cited several technical problems with the study design and the interpretation of the results. (See articles listed under “Resources to Go Deeper” for a detailed description of these complaints and my responses.) But ultimately, their criticisms appear to be motivated by an overarching concern: if the ENCODE results stand, then it means key features of the evolutionary paradigm can’t be correct.

    Calculating the Percentage of Functional DNA in the Human Genome

    Graur (perhaps the foremost critic of the ENCODE project) has tried to discredit the ENCODE findings by demonstrating that they are incompatible with evolutionary theory. Toward this end, he has developed a mathematical model to calculate the percentage of functional DNA in the human genome based on mutational load—the amount of deleterious mutations harbored by the human genome.

    Graur argues that junk DNA functions as a sponge absorbing deleterious mutations, thereby protecting functional regions of the genome. Considering this buffering effect, Graur wanted to know how much junk DNA must exist in the human genome to buffer against the loss of fitness—which would result from deleterious mutations in functional DNA—so that a constant population size can be maintained.

    Historically, the replacement level fertility rates for human beings have been two to three children per couple. Based on Graur’s modeling, this fertility rate requires 85–90% of the human genome to be composed of junk DNA in order to absorb deleterious mutations—ensuring a constant population size, with the upper limit of functional DNA capped at 25%.

    Graur also calculated a fertility rate of 15 children per couple, at minimum, to maintain a constant population size, assuming 80% of the human genome is functional. According to Graur’s calculations, if 100% of the human genome displayed function, the minimum replacement level fertility rate would have to be 24 children per couple.
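    The arithmetic behind these fertility figures can be sketched with a toy mutational-load calculation. The parameter values below (mutations per generation, fraction deleterious) are illustrative assumptions chosen to reproduce the trend, not Graur’s published inputs:

```python
import math

def required_fertility(functional_fraction,
                       mutations_per_generation=100,  # assumed de novo mutations per birth
                       deleterious_fraction=0.025):   # assumed share of functional-site hits that are harmful
    """Toy mutational-load model: the expected number of deleterious
    mutations per generation (U) scales with the functional fraction
    of the genome, and the fertility needed to hold population size
    constant grows exponentially with U."""
    U = mutations_per_generation * functional_fraction * deleterious_fraction
    return 2 * math.exp(U)  # two offspring per couple must survive selection

for f in (0.25, 0.80, 1.00):
    print(f"{f:.0%} functional -> ~{required_fertility(f):.0f} children per couple")
```

    With these assumed parameters, 80% functional yields roughly 15 required children per couple and 100% functional roughly 24, matching the figures quoted above; the essential point is that the required fertility grows exponentially with the functional fraction.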

    He argues that both conclusions are unreasonable. On this basis, therefore, he concludes that the ENCODE results cannot be correct.

    Response to Graur

    So, has Graur’s work invalidated the ENCODE project results? Hardly. Here are four reasons why I’m skeptical. 

    1. Graur’s estimate of the functional content of the human genome is based on mathematical modeling, not experimental results.

    An adage I heard repeatedly in graduate school applies: “Theories guide, experiments decide.” Though the ENCODE project results theoretically don’t make sense in light of the evolutionary paradigm, that is not a reason to consider them invalid. A growing number of studies provide independent experimental validation of the ENCODE conclusions. (Go here and here for two recent examples.)

    To question experimental results because they don’t align with a theory’s predictions is a “Bizarro World” approach to science. Experimental results and observations determine a theory’s validity, not the other way around. Yet when it comes to the ENCODE project, its conclusions seem to be weighed based on their conformity to evolutionary theory. Simply put, ENCODE skeptics are doing science backwards.

    While Graur and other evolutionary biologists argue that the ENCODE results don’t make sense from an evolutionary standpoint, I would argue as a biochemist that the high percentage of functional regions in the human genome makes perfect sense. The ENCODE project determined that a significant fraction of the human genome is transcribed. They also measured high levels of protein binding.

    ENCODE skeptics argue that this biochemical activity is merely biochemical noise. But this assertion does not make sense because (1) biochemical noise costs energy and (2) random interactions between proteins and the genome would be harmful to the organism.

    Transcription is an energy- and resource-intensive process. To believe that most transcripts are merely biochemical noise would be untenable. Such a view ignores cellular energetics. Transcribing a large percentage of the genome when most of the transcripts serve no useful function would routinely waste a significant amount of the organism’s energy and material stores. If such an inefficient practice existed, surely natural selection would eliminate it and streamline transcription to produce transcripts that contribute to the organism’s fitness.

    Apart from energetics considerations, this argument ignores the fact that random protein binding would make a dire mess of genome operations. Without minimizing these disruptive interactions, biochemical processes in the cell would grind to a halt. It is reasonable to think that the same considerations would apply to transcription factor binding with DNA.

    2. Graur’s model employs some questionable assumptions.

    Graur uses an unrealistically high rate for deleterious mutations in his calculations.

    Graur determined the deleterious mutation rate using protein-coding genes. These DNA sequences are highly sensitive to mutations. In contrast, other functional regions of the genome—such as those that (1) dictate the three-dimensional structure of chromosomes, (2) serve as transcription factor binding sites, and (3) serve as histone binding sites—are much more tolerant of mutations. Ignoring these sequences in the modeling work artificially inflates the amount of junk DNA required to maintain a constant population size.

    3. The way Graur determines if DNA sequence elements are functional is questionable. 

    Graur uses the selected-effect definition of function. According to this definition, a DNA sequence is only functional if it is undergoing negative selection. In other words, sequences in genomes can be deemed functional only if they evolved under evolutionary processes to perform a particular function. Once evolved, these sequences, if they are functional, will resist evolutionary change (due to natural selection) because any alteration would compromise the function of the sequence and endanger the organism. If deleterious, the sequence variations would be eliminated from the population due to the reduced survivability and reproductive success of organisms possessing those variants. Hence, functional sequences are those under the effects of selection.

    In contrast, the ENCODE project employed a causal definition of function. Accordingly, function is ascribed to sequences that play some observationally or experimentally determined role in genome structure and/or function.

    The ENCODE project focused on experimentally determining which sequences in the human genome displayed biochemical activity using assays that measured

    • transcription,
    • binding of transcription factors to DNA,
    • histone binding to DNA,
    • DNA binding by modified histones,
    • DNA methylation, and
    • three-dimensional interactions between enhancer sequences and genes.

    In other words, if a sequence is involved in any of these processes—all of which play well-established roles in gene regulation—then the sequence must have functional utility. That is, if sequence Q performs function G, then sequence Q is functional.

    So why does Graur insist on a selected-effect definition of function? For no other reason than a causal definition ignores the evolutionary framework when determining function. He insists that function be defined exclusively within the context of the evolutionary paradigm. In other words, his preference for defining function has more to do with philosophical concerns than scientific ones—and with a deep-seated commitment to the evolutionary paradigm.

    As a biochemist, I am troubled by the selected-effect definition of function because it is theory-dependent. In science, cause-and-effect relationships (which include biological and biochemical function) need to be established experimentally and observationally, independent of any particular theory. Once these relationships are determined, they can then be used to evaluate the theories at hand. Do the theories predict (or at least accommodate) the established cause-and-effect relationships, or not?

    Using a theory-dependent approach poses the very real danger that experimentally determined cause-and-effect relationships (or, in this case, biological functions) will be discarded if they don’t fit the theory. And, again, it should be the other way around. A theory should be discarded, or at least reevaluated, if its predictions don’t match these relationships.

    What difference does it make which definition of function Graur uses in his model? A big difference. The selected-effect definition is more restrictive than the causal-role definition. This restrictiveness translates into overlooked function and increases the replacement level fertility rate.

    4. Buffering against deleterious mutations is a function.

    As part of his model, Graur argues that junk DNA is necessary in the human genome to buffer against deleterious mutations. By adopting this view, Graur has inadvertently identified function for junk DNA. In fact, he is not the first to argue along these lines. Biologist Claudiu Bandea has posited that high levels of junk DNA can make genomes resistant to the deleterious effects of transposon insertion events in the genome. If insertion events are random, then the offending DNA is much more likely to insert itself into “junk DNA” regions instead of coding and regulatory sequences, thus protecting information-harboring regions of the genome.
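    Bandea’s buffering argument is essentially probabilistic: if insertions land uniformly at random along the genome, the chance that any one of them hits functional sequence equals the functional fraction. A minimal simulation (the functional fraction and insertion count are assumed, illustrative values):

```python
import random

random.seed(42)

FUNCTIONAL_FRACTION = 0.10  # assumed share of the genome that is functional
N_INSERTIONS = 100_000      # simulated random transposon insertions

# Model each insertion as a uniform draw along the genome; it lands in
# functional DNA with probability equal to the functional fraction.
hits = sum(random.random() < FUNCTIONAL_FRACTION for _ in range(N_INSERTIONS))

print(f"{hits / N_INSERTIONS:.1%} of insertions hit functional DNA")
```

    The larger the junk fraction, the smaller the share of insertions that strike information-bearing sequence, which is precisely the protective effect described above.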

    If the last decade of work in genomics has taught us anything, it is this: we are in our infancy when it comes to understanding the human genome. The more we learn about this amazingly complex biochemical system, the more elegant and sophisticated it becomes. Through this process of discovery, we continue to identify functional regions of the genome—DNA sequences long thought to be junk.

    In short, the criticisms of the ENCODE project reflect a deep-seated commitment to the evolutionary paradigm and, bluntly, are at war with the experimental facts.

    Bottom line: if the ENCODE results stand, it means that key aspects of the evolutionary paradigm can’t be correct.

    Resources to Go Deeper

    Endnotes

    1. Dan Graur, “An Upper Limit on the Functional Fraction of the Human Genome,” Genome Biology and Evolution 9 (July 2017): 1880–85, doi:10.1093/gbe/evx121.
  • DNA Replication Winds Up the Case for Intelligent Design

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Aug 08, 2017

    One of my classmates and friends in high school was a kid we nicknamed “Radar.” He was a cool kid who had special needs. He was mentally challenged. He was also funny and as good-hearted as they come, never causing any real problems—other than playing hooky from school, for days on end. Radar hated going to school.

    When he eventually showed up, he would be sent to the principal’s office to explain his unexcused absences to Mr. Reynolds. And each time, Radar would offer the same excuse: his grandmother died. But Mr. Reynolds didn’t buy it—for obvious reasons. It didn’t require much investigation on the principal’s part to know that Radar was lying.

    Skeptics have something in common with my friend Radar. They use the same tired excuse when presented with compelling evidence for design from biochemistry. Inevitably, they dismiss the case for a Creator by pointing out all the “flawed” designs in biochemical systems. But this excuse never sticks. Upon further investigation, claimed instances of bad designs turn out to be elegant, in virtually every instance, as recent work by scientists from UC Davis illustrates.

    These researchers accomplished an important scientific milestone by using single molecule techniques to observe the replication of a single molecule of DNA.1 Their unexpected insights have bearing on how we understand this key biochemical operation. The work also has important implications for the case for biochemical design.

    If you are familiar with DNA’s structure and the replication process, you can skip the next two sections. But if you are not, a little background information is necessary to appreciate the research team’s findings and their relevance to the creation-evolution debate.

    DNA’s Structure

    DNA consists of two molecular chains (called “polynucleotides”) aligned in an antiparallel fashion. (The two strands are arranged parallel to one another with the starting point of one strand of the polynucleotide duplex located next to the ending point of the other strand, and vice versa.) The paired molecular chains twist around each other, forming the well-known DNA double helix. The cell’s machinery generates the polynucleotide chains using four different nucleotides: adenosine, guanosine, cytidine, and thymidine, abbreviated as A, G, C, and T, respectively.

    A special relationship exists between the nucleotide sequences of the two DNA strands. Biochemists say the DNA sequences of the two strands are complementary. When the DNA strands align, the adenine (A) side chains of one strand always pair with thymine (T) side chains from the other strand. Likewise, the guanine (G) side chains from one DNA strand always pair with cytosine (C) side chains from the other strand. Biochemists refer to these relationships as “base-pairing rules.” Consequently, if biochemists know the sequence of one DNA strand, they can readily determine the sequence of the other strand. Base-pairing plays a critical role in DNA replication.
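    The base-pairing rules are simple enough to capture in a few lines of code. A minimal sketch (the function name is ours) that derives one strand from the other, reading the result in the reverse direction because the strands run antiparallel:

```python
# Watson-Crick base-pairing rules: A pairs with T, G pairs with C.
PAIRING = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complementary_strand(sequence):
    """Return the partner strand implied by the base-pairing rules.
    The strands are antiparallel, so the partner is read in reverse."""
    return "".join(PAIRING[base] for base in reversed(sequence))

print(complementary_strand("ATGGCAT"))  # -> ATGCCAT
```

    Applying the function twice returns the original sequence, reflecting the fact that each strand fully determines the other.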

    Image 1: DNA’s Structure

    DNA Replication

    Biochemists refer to DNA replication as a “template-directed, semiconservative process.” By “template-directed,” biochemists mean that the nucleotide sequences of the “parent” DNA molecule function as a template, directing the assembly of the DNA strands of the two “daughter” molecules using the base-pairing rules. By “semiconservative,” biochemists mean that after replication, each daughter DNA molecule contains one newly formed DNA strand and one strand from the parent molecule.

    Image 2: Semiconservative DNA Replication

    Conceptually, template-directed, semiconservative DNA replication entails the separation of the parent DNA double helix into two single strands. By using the base-pairing rules, each strand serves as a template for the cell’s machinery to use when it forms a new DNA strand with a nucleotide sequence complementary to the parent strand. Because each strand of the parent DNA molecule directs the production of a new DNA strand, two daughter molecules result. Each one possesses an original strand from the parent molecule and a newly formed DNA strand produced by a template-directed synthetic process.

    DNA replication begins at specific sites along the DNA double helix, called “replication origins.” Typically, prokaryotic cells have only a single origin of replication. More complex eukaryotic cells have multiple origins of replication.

    The DNA double helix unwinds locally at the origin of replication to produce what biochemists call a “replication bubble.” During the course of replication, the bubble expands in both directions from the origin. Once the individual strands of the DNA double helix unwind and are exposed within the replication bubble, they are available to direct the production of the daughter strand. The site where the DNA double helix continuously unwinds is called the “replication fork.” Because DNA replication proceeds in both directions away from the origin, there are two replication forks within each bubble.

    Image 3: DNA Replication Bubble

    DNA replication can only proceed in a single direction, from the top of the DNA strand to the bottom. Because the strands that form the DNA double helix align in an antiparallel fashion with the top of one strand juxtaposed with the bottom of the other strand, only one strand at each replication fork has the proper orientation (bottom-to-top) to direct the assembly of a new strand, in the top-to-bottom direction. For this strand—referred to as the “leading strand”—DNA replication proceeds rapidly and continuously in the direction of the advancing replication fork.

    DNA replication cannot proceed along the strand with the top-to-bottom orientation until the replication bubble has expanded enough to expose a sizable stretch of DNA. When this happens, DNA replication moves away from the advancing replication fork. DNA replication can only proceed a short distance for the top-to-bottom-oriented strand before the replication process has to stop and wait for more of the parent DNA strand to be exposed. When a sufficient length of the parent DNA template is exposed a second time, DNA replication can proceed again, but only briefly before it has to stop again and wait for more DNA to be exposed. The process of discontinuous DNA replication takes place repeatedly until the entire strand is replicated. Each time DNA replication starts and stops, a small fragment of DNA is produced.

    Biochemists refer to these pieces of DNA (that will eventually compose the daughter strand) as “Okazaki fragments”—after the biochemist who discovered them. Biochemists call the strand produced discontinuously the “lagging strand” because DNA replication for this strand lags behind the more rapidly produced leading strand. One additional point: the leading strand at one replication fork is the lagging strand at the other replication fork since the replication forks at the two ends of the replication bubble advance in opposite directions.

    An ensemble of proteins is needed to carry out DNA replication. Once the origin recognition complex (which consists of several different proteins) identifies the replication origin, a protein called “helicase” unwinds the DNA double helix to form the replication fork.

    Image 4: DNA Replication Proteins

    Once the replication fork is established and stabilized, DNA replication can begin. Before the newly formed daughter strands can be produced, a small RNA primer must be produced. The protein that synthesizes new DNA by reading the parent DNA template strand—DNA polymerase—can’t start production from scratch. It must be primed. A massive protein complex, called the “primosome,” which consists of over 15 different proteins, produces the RNA primer needed by DNA polymerase.

    Once primed, DNA polymerase will continuously produce DNA along the leading strand. However, for the lagging strand, DNA polymerase can only generate DNA in spurts to produce Okazaki fragments. Each time DNA polymerase generates an Okazaki fragment, the primosome complex must produce a new RNA primer.

    Once DNA replication is completed, the RNA primers are removed from the continuous DNA of the leading strand and from the Okazaki fragments that make up the lagging strand. A protein with 5’-3’ exonuclease activity removes the RNA primers. A different DNA polymerase fills in the gaps created by the removal of the RNA primers. Finally, a protein called a “ligase” connects all the Okazaki fragments together to form a continuous piece of DNA out of the lagging strand.

    Are Leading and Lagging Strand Polymerases Coordinated?

    Biochemists had long assumed that the activities of the leading and lagging strand DNA polymerase enzymes were coordinated. If not, then DNA replication of one strand would get too far ahead of the other, increasing the likelihood of mutations.

    As it turns out, the research team from UC Davis discovered that the activities of the two polymerases are not coordinated. Instead, the leading and lagging strand DNA polymerase enzymes replicate DNA autonomously. To the researchers’ surprise, they learned that the leading strand DNA polymerase replicated DNA in bursts, suddenly stopping and starting. And when it did replicate DNA, the rate of production varied by a factor of ten. On the other hand, the researchers discovered that the rate of DNA replication on the lagging strand depended on the rate of RNA primer formation.

    The researchers point out that if not for single molecule techniques—in which replication is characterized for individual DNA molecules—the autonomous behavior of leading and lagging strand DNA polymerases would not have been detected. Up to this point, biochemists have studied the replication process using a relatively large number of DNA molecules. These samples yield average replication rates for leading and lagging strand replication, giving the sense that replication of both strands is coordinated.
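    The masking effect of ensemble averaging is easy to demonstrate with a toy simulation (all rates and probabilities below are invented for illustration): each simulated polymerase stalls and bursts with a roughly tenfold spread in speed, yet the average over a thousand molecules looks nearly constant.

```python
import random

random.seed(1)

STEPS = 200

def single_molecule_trace():
    """Toy trace of one polymerase: at each time step it is either
    stalled (rate 0) or replicating in a burst whose speed varies
    about tenfold (100-1000 bases per step, assumed units)."""
    return [0.0 if random.random() < 0.5 else random.uniform(100, 1000)
            for _ in range(STEPS)]

traces = [single_molecule_trace() for _ in range(1000)]

# Ensemble average at each time step: the stalls and bursts wash out.
ensemble = [sum(t[i] for t in traces) / len(traces) for i in range(STEPS)]

print(f"one molecule:     rate spans {min(traces[0]):.0f} to {max(traces[0]):.0f}")
print(f"ensemble average: rate spans {min(ensemble):.0f} to {max(ensemble):.0f}")
```

    The individual trace swings between zero and the maximum burst rate, while the ensemble average hovers near the mean; a bulk experiment sees only the latter.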

    According to the researchers, this discovery is a “real paradigm shift, and undermines a great deal of what’s in the textbooks.” Because the DNA polymerase activity is not coordinated but autonomous, they conclude that the DNA replication process is a flawed design, driven by stochastic (random) events. Also, the lack of coordination between the leading and lagging strands means that leading strand replication can get ahead of the lagging strand, yielding long stretches of vulnerable single-stranded DNA.

    Diminished Design or Displaced Design?

    Even though this latest insight appears to undermine the elegance of the DNA replication process, other observations made by the UC Davis research team indicate that the evidence for design isn’t diminished, just displaced.

    These investigators discovered that the activity of helicase—the enzyme that unwinds the double helix at the replication fork—somehow senses the activity of the DNA polymerase on the leading strand. When the DNA polymerase stalls, the activity of the helicase slows down by a factor of five until the DNA polymerase catches up. The researchers believe that another protein (called the “tau protein”) mediates the interaction between the helicase and DNA polymerase molecules. In other words, the interaction between DNA polymerase and the helicase compensates for the stochastic behavior of the leading strand polymerase, pointing to a well-designed process.

    As already noted, the research team also learned that the rate of lagging strand replication depends on primer production. They determined that the rate of primer production exceeds the rate of DNA replication on the leading strand. This fortuitous coincidence ensures that as soon as enough of the replication bubble opens for lagging strand replication to continue, the primase can immediately lay down the RNA primer, restarting the process. It turns out that the rate of primer production is controlled by the primosome concentration in the cell, with primer production increasing as the number of primosome copies increases. The primosome concentration appears to be fine-tuned. If the concentration of this protein complex is too large, the replication process becomes “gummed up”; if too small, the disparity between leading and lagging strand replication becomes too great, exposing single-stranded DNA. Again, the fine-tuning of primosome concentration highlights the design of this cellular operation.
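    The trade-off can be illustrated with a deliberately simplified cost model. In this sketch, both cost functions and their constants are invented for illustration and do not represent the cell’s actual kinetics; the sketch only shows why two opposing failure modes imply an optimal intermediate concentration:

```python
def ssdna_exposure(copies):
    # Too few primosomes: priming lags the fork, lengthening the
    # stretch of vulnerable single-stranded DNA (toy units).
    return 1000.0 / copies

def crowding_penalty(copies):
    # Too many primosomes: the fork gets "gummed up" (toy linear cost).
    return 2.0 * copies

def total_cost(copies):
    return ssdna_exposure(copies) + crowding_penalty(copies)

# Search a range of copy numbers for the one with the lowest combined cost.
best = min(range(1, 201), key=total_cost)
print(best)
```

    An intermediate copy number minimizes the combined cost; pushing the concentration in either direction makes one of the two failure modes worse.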

    It is remarkable how two people can see the same things so differently. Scientists influenced by the evolutionary paradigm tend to dismiss evidence for design; instead of seeing elegance, they become conditioned to see flaws. Though DNA replication takes place in a haphazard manner, other features of the replication process appear to be engineered to compensate for the stochastic behavior of the DNA polymerases and, in the process, elevate the evidence for design.

    And, that’s no lie.

    Resources

    Endnotes

    1. James E. Graham et al., “Independent and Stochastic Action of DNA Polymerases in the Replisome,” Cell 169 (June 2017): 1201–13, doi:10.1016/j.cell.2017.05.041.
    2. Bec Crew, “DNA Replication Has Been Filmed for the First Time, and It’s Not What We Expected,” ScienceAlert, June 19, 2017, https://sciencealert.com/dna-replication-has-been-filmed-for-the-first-time-and-it-s-stranger-than-we-thought.
  • How Are Sea Slugs a Failed Prediction of the Evolutionary Paradigm?

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Jul 18, 2017

    Test them all; hold on to what is good.

    –1 Thessalonians 5:21

    What is your definition of success?

    The answer to this question most likely depends on the person you ask. People view success differently.

    However, such subjectivity has no place when it comes to scientific theories. Success in science rests on a single criterion: how well does the theory perform at predicting future scientific outcomes?

    Scientific predictions arise as the logical entailments of the theory at hand. In turn, scientists use these predictions to assess the theory’s validity. If experimental results and observations fulfill the theory’s predictions, then scientists consider it sound. If observations and results don’t match the predictions, then scientists are forced to revise, or even discard, the theory under evaluation. In short, successful scientific theories have explanatory and predictive power.

    It is for this reason many biologists view the theory of evolution as a valid paradigm for interpreting the origin, history, and design of life. And it is for this reason many biologists regard the theory of evolution as biology’s grand unifying theory.

    However, the evolutionary paradigm has yet to adequately explain key events in life’s history, such as (1) the origin of life, (2) the origin of body plans, (3) the origin of sexual reproduction, (4) the trigger for the sociocultural big bang and human exceptionalism, and (5) the origin of consciousness. The evolutionary paradigm also suffers from failed predictions, as recent work by a team of neuroscientists from Georgia State University attests.1

    Swimming Sea Slugs

    The Georgia State University researchers characterized the neural circuits involved in the swimming behavior of a group of sea slugs called the nudibranchs. These creatures serve as an ideal model system to study neural circuits because relatively large neurons make up their neural systems. The sea slugs’ neural circuits are simple and straightforward to map. On top of that, the sea slugs’ neural circuits regulate simple behaviors. These properties make it easy to characterize and, then, manipulate the neural circuitry of these creatures.

    Biologists have identified about 2,000 species of nudibranchs. Of this number, about 50 swim with a characteristic left-right motion.

    The Georgia State scientists investigated the neural mechanism associated with the left-right swimming behavior of two sea slug species: the giant nudibranch and the hooded nudibranch. From an evolutionary perspective, these two sea slugs share an evolutionary ancestor. In fact, all 50 left-right swimming sea slugs belong to the same branch of the evolutionary tree. (In technical terms, they are monophyletic.)

    Predictions of the Evolutionary Model

    Given that the left-right swimming nudibranchs are monophyletic, the evolutionary model predicts that the morphology, genetics, and behavior originated in the common ancestor of this group. And, given that the swimming behavior of this group is shared among all members (homologous), the expectation is that the neurons and neural circuitry that control this behavior should also be shared among all members.

    The Georgia State scientists say, “. . . Behavioral morphology is often assumed to involve similarity in underlying neuronal mechanisms. . . . Behaviors that are homologous and similar in form would naturally be assumed to be produced by similar neural mechanisms.”2

    Sea Slug Neural Circuitry

    Consistent with the predictions of the evolutionary paradigm, the researchers discovered that the neurons of the giant and hooded nudibranchs were homologous. But, to their surprise, they discovered that the underlying neural mechanisms that controlled the swimming behavior of the two sea slugs were distinct.

    In fact, using a technique called dynamic clamping, the Georgia State scientists were able to rewire the neural circuitry of one sea slug species to match that of the other, while still eliciting the same swimming behavior.

    Masking the Failure of the Evolutionary Paradigm

    The unexpected discovery of distinct neural circuitry in the giant and hooded nudibranchs stands as a failed prediction of the evolutionary paradigm. So how do the Georgia State scientists respond to this discovery?

    First, they point out that their findings support the notion of neural plasticity, with the same neurons supporting multiple neural circuits and varying neural circuits producing the same behavior. But, neural plasticity doesn’t fully account for this finding. If the two sea slugs weren’t part of the same branch on the evolutionary tree, one could argue that the difference in neural circuits represents an example of convergence.

    The researchers suggest that perhaps the divergence of the neural circuitry from the neural mechanism displayed by the shared ancestor of the nudibranch is due to a phenomenon they dub neural drift. This doesn’t seem plausible given the importance of the swimming behavior for sea slug survival. Altering the neural circuitry would alter this behavior, compromising the sea slug’s fitness.

    In fact, there is no independent evidence whatsoever for neural drift. It is a made-up, ad hoc phenomenon that creates a diversion, masking the fact that the results from this study represent a failed prediction of the evolutionary paradigm.

    While this failed prediction is not sufficient to overthrow the evolutionary paradigm, it does justify skepticism about the capacity of evolutionary theory—as currently conceived—to fully explain life’s design and diversity.

    Resources

    Endnotes

    1. Akira Sakurai and Paul S. Katz, “Artificial Synaptic Rewiring Demonstrates that Distinct Neural Circuit Configurations Underlie Homologous Behaviors,” Current Biology 27 (June 19, 2017): 1–14, doi:10.1016/j.cub.2017.05.016.
    2. Ibid.
  • Why Did God Create the Thai Liver Fluke?

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Jul 11, 2017

    The Thai liver fluke causes quite a bit of human misery. This parasite infects fish living in the rivers of Southeast Asia; people who eat the infected fish become infected in turn.

    Raw and fermented fish make up a big part of the diet of people in Southeast Asia. For example, in Thailand, a popular culinary item is called sour fish. This “delicacy” is prepared by mixing raw fish with garlic, salt, seasoning, and rice. After rolling the mixture into a ball, it is placed in a plastic bag and left to ferment in the hot sun for several days.

    The fermentation process isn’t sufficient to kill the cysts of the Thai liver fluke embedded in the muscles of the infected fish. So, when people eat sour fish (or raw fish), they risk ingesting the parasite.

    The Thai Liver Fluke Life Cycle

    After ingestion, the cysts open in the digestive tract of the human host, releasing the fluke. This parasite travels through the bile duct, making its way into the liver, where it takes up residence.

    Once in the liver, the fluke lays eggs that are carried into the host’s digestive tract by bile secreted by the liver. In turn, the eggs are released into the environment with human excrement. After being ingested by snails, the eggs hatch, producing larvae that escape from the snail. The free-living larvae infect fish, forming cysts in their skin, fins, and muscle.

    Image: Life cycle of Opisthorchis viverrini. Image source: Wikipedia

    The Thai liver fluke is a master of disguise, evading the immune system of the human host and living for decades in the liver. Unless the infestation is extreme, people infected with the fluke are completely unaware that they harbor this parasite.

    Estimates indicate that 10% of the Thai population is infected with the Thai liver fluke. But in the villages of northern Thailand, where the consumption of raw and fermented fish is higher than in other areas of the country, 45% of the people carry the parasite.

    The Thai Liver Fluke and Cancer

    The Thai liver fluke can live for several decades in the host’s liver without much consequence. But eventually, the burden of the infection catches up with the human host, leading to an aggressive and deadly form of liver cancer that claims about 26,000 Thai lives each year. Once the cancer is detected, most patients die within a year.

    Biomedical researchers think the liver cancer is triggered by the Thai liver fluke, which munches on the host’s liver. Interestingly, the fluke’s saliva contains a protein (called granulin-like protein) that stimulates cell growth and division. These processes help the liver to repair itself after being damaged by the fluke. In effect, the parasite eats part of the liver, supercharges the liver to repair itself, and then eats the new tissue, repeating the cycle for decades. The repeated wounding and repairing of the liver tissue accompanied by rapid cell division eventually leads to the onset of cancer.

    The Thai Liver Fluke and God’s Goodness

    The problems caused by the Thai liver fluke are not limited to the biomedical arena. This parasite causes theological issues, as well. Why would a good God create the Thai liver fluke? Questions like this one fall under the problem of evil.

    Philosophers and theologians recognize two kinds of evil: moral and natural. Moral evil stems from human action (or inaction in some cases). Natural evil proceeds from nature itself—earthquakes, tornadoes, floods, diseases, and the like.

    Natural evil seems to present a greater theological challenge than moral evil does. Skeptics could agree that God can be excused for the free-will actions of human beings who violate his standard of goodness, but they reason that natural disasters and disease don’t result from human activity. Therefore, this type of “evil” must be attributed solely to God.

    Are Some Forms of Natural Evil Actually Moral Evil?

    As I have previously argued, many times natural evil is moral evil in disguise. (See the Resources section below.) In other words, the suffering humans experience stems from human moral failing and poor judgment, not the actual natural phenomenon.

    This most certainly seems to be the case when it comes to the Thai liver fluke. Liver cancer caused by parasite infestations would plummet if people stopped eating raw fish and developed better public sanitation systems and practices.

    So, is it God’s fault that humans become infected with the Thai liver fluke? Or is it because the people of northern Thailand suffer from poverty and a lack of sanitation—ultimately, conditions caused by human moral failing? Is it God’s fault that people of Southeast Asia develop liver cancer from fluke infestations, when they eat raw and fermented fish instead of properly cooking the meat, knowing the adverse health effects?

    Parasites Play a Critical Role in Ecological Systems

    Still, the question remains: Why would God create parasites at all?

    As it turns out, parasites play an indispensable role in ecosystem health.1 Though these creatures make minor contributions to the biomass of ecosystems, they have a significant effect on several ecosystem parameters, including biodiversity. In fact, some ecologists believe that an ecosystem becomes more robust and functions better as parasite diversity increases.

    Considering this insight, a rationale exists as to why God would create the Thai liver fluke as a member of the river ecosystems of Southeast Asia. This parasite infects any carnivore that eats fish from these rivers (dogs, cats, rats, and pigs), not just humans. Undoubtedly, infecting these carnivores influences a variety of ecosystem processes, such as species competition and energy flow through the ecosystem. The harm this parasite causes humans is an unintended consequence of imprudent human activities—not the inherent design of nature.

    Parasites and God’s Providence

    Remarkably, recent work by scientists from the Australian Institute of Tropical Health and Medicine (AITHM) indicates that the suffering caused by the Thai liver fluke may fulfill a higher purpose: a greater good.

    These researchers believe that the Thai liver fluke may hold the key to effectively treat slow- and non-healing wounds caused by diabetes.2

    High blood glucose levels associated with diabetes compromise the circulatory and immune systems. This compromised condition inhibits wound repair due to restricted blood flow to the site of the injury. It also makes the wound much more prone to infection.

    The AITHM researchers realized that the granulin-like protein produced by the Thai liver fluke could be used to promote healing of chronic wounds because it promotes rapid cell proliferation in the liver. If incorporated into a cream, this protein could be topically applied to the wounds, stimulating wound repair. This treatment would dramatically reduce the cost of treating chronic wounds and significantly improve the treatment outcomes.

    Ironically, the properties of the granulin-like protein that make this biomolecule so insidious are exactly the properties that make it useful to treat diabetics’ wounds. To put it another way, the Thai liver fluke is beneficial to humanity.

    The idea that God designed nature to be useful for humanity is a facet of divine providence. In Christian theology, this idea refers to God’s continual role in (1) preserving his creation, (2) ensuring that everything happens, and (3) guiding the universe. The concept of divine providence also posits that when God created the world he built into the creation everything humans (and other living organisms) would need. Accordingly, every good thing that people possess has been provided and preserved by God, either directly or indirectly.

    On this basis, as counterintuitive as this may initially seem, it could be argued that as part of his providence, God created the Thai liver fluke for humanity’s use and benefit.

    And we know that in all things God works for the good of those who love him, who have been called according to his purpose.

    –Romans 8:28

    Resources

    Endnotes

    1. Peter J. Hudson, Andrew P. Dobson, and Kevin D. Lafferty, “Is a Healthy Ecosystem One that Is Rich in Parasites?” Trends in Ecology and Evolution 21 (July 2006): 381–85, doi:10.1016/j.tree.2006.04.007.
    2. Paramjit S. Bansal et al., “Development of a Potent Wound Healing Agent Based on the Liver Fluke Granulin Structural Fold,” Journal of Medicinal Chemistry 60 (April 20, 2017): 4258–66, doi:10.1021/acs.jmedchem.7b00047.
  • Can Intelligent Design Be Part of the Construct of Science?

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Jun 27, 2017

    “If this result stands up to scrutiny, it does indeed change everything we thought we knew about the earliest human occupation of the Americas.”1

    This was the response of Christopher Stringer—a highly regarded paleoanthropologist at the Natural History Museum in London—to the recent scientific claim that Neanderthals made their way to the Americas 100,000 years before the first modern humans.2

    At this point, many anthropologists have expressed skepticism about this claim, because it requires them to abandon long-held ideas about the way the Americas were populated by modern humans. As Stringer cautions, “Many of us will want to see supporting evidence of this ancient occupation from other sites before we abandon the conventional model.”3

    Yet, the archaeologists making the claim have amassed an impressive cache of evidence that points to Neanderthal occupation of North America.

    As Stringer points out, this work has radical implications for anthropology. But, in my view, the importance of the work extends beyond questions relating to human migrations around the world. It demonstrates that intelligent design/creation models have a legitimate place in science.

    The Case for Neanderthal Occupation of North America

    In the early 1990s, road construction crews working near San Diego, CA, uncovered the remains of a single mastodon. Though the site was excavated from 1992 to 1993, scientists were unable to date the remains. Both radiocarbon and luminescence dating techniques failed.

    Recently, researchers turned failure into success, age-dating the site to be about 130,000 years old, using uranium-series disequilibrium methods. This result shocked them because analysis at the site indicated that the mastodon remains were deliberately processed by hominids, most likely Neanderthals.

    The researchers discovered that the mastodon bones displayed spiral fracture patterns that looked as if a creature, such as a Neanderthal, struck the bone with a rock—most likely to extract nutrient-rich marrow from the bones. The team also found rocks (called cobble) with the mastodon bones that bear markings consistent with having been used to strike bones and other rocks.

    To confirm this scenario, the archaeologists took elephant and cow bones and broke them open with a hammerstone. In doing so, they produced the same type of spiral fracture patterns in the bones and the same type of markings on the hammerstone as those found at the archaeological site. The researchers also ruled out other possible explanations, such as wild animals creating the fracture patterns on the bones while scavenging the mastodon carcass.

    Despite this compelling evidence, some anthropologists remain skeptical that Neanderthals—or any other hominid—modified the mastodon remains. Why? Not only does this claim fly in the face of the conventional explanation for the populating of the Americas by humans, but the sophistication of the tool kit does not match that produced by Neanderthals 130,000 years ago based on archaeological sites in Europe and Asia.

    So, did Neanderthals make their way to the Americas 100,000 years before modern humans? An interesting debate will most certainly ensue in the years to come.

    But, this work does make one thing clear: intelligent design/creation is a legitimate part of the construct of science.

    A Common Skeptical Response to the Case for a Creator

    Based on my experience, when confronted with scientific evidence for a Creator, skeptics will often summarily dismiss the argument by asserting that intelligent design/creation isn’t science and, therefore, it is not legitimate to draw the conclusion that a Creator exists from scientific advances.

    Undergirding this objection is the conviction that science is the best, and perhaps the only, way to discover truth. By dismissing the evidence for God’s existence—insisting that it is nonscientific—they hope to undermine the argument, thereby sidestepping the case for a Creator.

    There are several ways to respond to this objection. One way is to highlight the fact that intelligent design is part of the construct of science. This response is not motivated by a desire to reform science, but by a desire to move the scientific evidence into a category that forces skeptics to interact with it properly.

    The Case for a Creator’s Role in the Origin of Life

    It is interesting to me that the line of reasoning the archaeologists use to establish the presence of Neanderthals in North America parallels the line of reasoning I use to make the case that the origin of life reflects the product of a Creator’s handiwork, as presented in my three books: The Cell’s Design, Origins of Life, and Creating Life in the Lab. There are three facets to this line of reasoning.

    The Appearance of Design

    The archaeologists argued that (1) the arrangement of the bones and the cobble and (2) the markings on the cobble and the fracture patterns on the bones appear to result from the intentional activity of a hominid. To put it another way, the archaeological site shows the appearance of design.

    In The Cell’s Design I argue that the analogies between biochemical systems and human designs evince the work of a Mind, serving to revitalize Paley’s Watchmaker argument for God’s existence. In other words, biochemical systems display the appearance of design.

    Failure to Explain the Evidence through Natural Processes

    The archaeologists explored and rejected alternative explanations—such as scavenging by wild animals—for the arrangement, fracture patterns, and markings of the bones and stones.

    In Origins of Life, Hugh Ross (my coauthor) and I explore and demonstrate the deficiency of natural process, mechanistic explanations (such as replicator-first, metabolism-first, and membrane-first scenarios) for the origin of life and, hence, biological systems.

    Reproduction of the Design Patterns

    The archaeologists confirmed—by striking elephant and cow bones with a rock—that the markings on the cobble and the fracture patterns on the bone were made by a hominid. That is, through experimental work in the laboratory, they demonstrated that the design features were, indeed, produced by intelligent agency.

    In Creating Life in the Lab, I describe how work in synthetic biology and prebiotic chemistry empirically demonstrates the necessary role intelligent agency plays in transforming chemicals into living cells. In other words, when scientists go into the lab and create protocells, they are demonstrating that the design of biochemical systems is intelligent design.

    So, is it legitimate for skeptics to reject the scientific case for a Creator by dismissing it as nonscientific?

    Work in archaeology illustrates that intelligent design is an integral part of science, and it highlights the fact that the same scientific reasoning used to interpret the mastodon remains discovered near San Diego, likewise, undergirds the case for a Creator.

    Resources

    Endnotes

    1. Colin Barras, “First Americans May Have Been Neanderthals 130,000 Years Ago,” New Scientist, April 26, 2017, https://www.newscientist.com/article/2129042-first-americans-may-have-been-neanderthals-130000-years-ago/.
    2. Steven R. Holen et al., “A 130,000-Year-Old Archaeological Site in Southern California, USA,” Nature 544 (April 27, 2017): 479–83, doi:10.1038/nature22065.
    3. Barras, “First Americans.”
  • DNA Wired for Design

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Jun 20, 2017

    Though this be madness, yet there is method
    in ’t.

    Hamlet (Act II, scene II)

    Was Hamlet crazy? Or was he feigning madness so he could investigate the murder of his father without raising suspicion?

    In my senior year of high school, Mrs. Hodges assigned our class these questions as the topic for the first essay we wrote for honors English. I made the case that Hamlet was perfectly sane. Indeed, there was method to his madness.

    I wound up with a B- on the assignment. Mrs. Hodges wasn’t impressed with my reasoning, writing on my paper in red ink, “You aren’t qualified to comment on Hamlet’s sanity. You are not a psychologist!” When she returned my paper, I muttered, “Of course, I’m not a psychologist. I’m a high school student. You were the one who asked me to speculate on his sanity. And then when I do . . .”

    I was reminded of this high school memory a few days ago while contemplating the structure and function of DNA. This biomolecule’s design is “crazy.” Yet every detail of DNA’s structure is crucial for the role it plays as an information storage system in the cell. You might say there is biochemical method to DNA’s madness when it comes to its properties. One of DNA’s “insane” features is its capacity to conduct electrical current through the interior of the double helix.

    DNA Wires

    Caltech chemist Jacqueline Barton discovered this phenomenon in the early 1990s. Barton and her collaborators attached different chemical groups to the two ends of the DNA double helix. Both compounds possessed redox centers (metal atoms that can give off and take up electrons). When they blasted one of the redox centers with a pulse of light, it ejected an electron that was taken up by the redox center attached to the opposite end of the DNA molecule, causing the compound to emit a flash of light. The researchers concluded that the ejected electron must have travelled through the interior of the double helix from one redox center to the other.

    Shortly after this discovery, Barton and her team learned that electrical charges move through DNA only when the double helix is intact. Electrical current won’t flow through single-stranded DNA, nor will it flow if the DNA double helix is distorted due to damage or misincorporation of DNA subunits during replication.

    These (and other) observations indicate that the conductance of electrical charge through the DNA molecule stems from π-π stacking interactions of the nucleobases in the double helix interior. These interactions produce a molecular orbital that spans the length of the double helix. In effect, the molecular orbital functions like a wire running through DNA’s interior.

    DNA Wires and Nanoelectronics

    Charge conductance through the DNA double helix occurs more rapidly than it does through “standard” molecular wires made from inorganic materials. These “insane” transport speeds have inspired researchers to explore the possibility of using DNA as molecular scale wiring in nanoelectronic devices. In fact, some researchers think that DNA wires might become an integral feature for the next generation of medical diagnostic equipment.

    Does DNA Function as a Wire in the Cell?

    While the charge conductance through the DNA double helix is an interesting and potentially useful property, biochemists have long wondered if DNA functions as a nanowire in the cell.

    In 2009, Barton and her team discovered the answer to this question. DNA’s capacity to transmit electrical charges along the length of the double helix plays a key role in the DNA repair process, and recently Barton’s collaborators have demonstrated that DNA’s wire property plays an important role in the initiation of DNA replication. Both processes are important for DNA to function as an information storage system. Repairing damage to DNA ensures the integrity of the information it houses. And DNA replication makes it possible to pass this information on to the next generation. There is a purpose to every aspect of DNA’s properties—a method to the madness.

    Detecting Damage to DNA

    Damage to DNA distorts the double helix. In a process called base excision repair, the cell’s machinery recognizes and removes the damaged portion of the DNA molecule, replacing it with the correct DNA subunits.

    For some time, biochemists puzzled over how the DNA repair enzymes located the damaged regions. In the bacterium E. coli, two repair enzymes, dubbed EndoIII and MutY, occur at low levels. (E. coli is a model organism often used by biochemists to study cellular processes.) Biochemists estimate that fewer than 500 copies of EndoIII exist in the cell and around 30 copies of MutY. These are low numbers considering the task at hand. These repair enzymes bear the responsibility of surveying the E. coli genome for damage—a genome that consists of over 4.6 million base pairs (genetic letters).
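    A back-of-the-envelope division using the figures above shows how much territory each enzyme copy must patrol:

```python
genome_bp = 4_600_000      # approximate E. coli genome size (base pairs)
endo_iii_copies = 500      # upper estimate of EndoIII copies per cell
mut_y_copies = 30          # approximate MutY copies per cell

print(genome_bp // endo_iii_copies)  # 9200 bp per EndoIII copy
print(genome_bp // mut_y_copies)     # 153333 bp per MutY copy
```

    Each EndoIII copy is responsible for roughly 9,200 base pairs, and each MutY copy for more than 150,000, which makes an undirected, site-by-site search look hopelessly slow.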

    Barton and her team discovered that the two repair enzymes possess a redox center consisting of an iron-sulfur cluster (4Fe4S) that has no enzymatic activity.1 They speculated and then demonstrated that the 4Fe4S cluster functions just like the compounds they attached to the DNA double helix in their original experiment in the 1990s.

    It turns out Barton and her team were right. These repair proteins bind to DNA. Once bound, they send an electron from the 4Fe4S redox center through the interior of the double helix, which establishes a current through the DNA molecule. Once the repair protein loses an electron, it cannot dissociate from the DNA double helix. Other repair proteins bound to the DNA pick up the electrons from the DNA’s interior at their iron-sulfur redox center. When they do, they dissociate from the DNA and resume their migration along the double helix. Eventually, the migrating repair protein will bind to the DNA again, sending an electron through the DNA’s interior.

    This process is repeated, over and over again. However, if the DNA becomes damaged and the double helix distorted, then the DNA wire breaks, interrupting the flow of electrons. When this happens, the repair proteins remain attached to the DNA close to the location of the damage—thus, initiating the repair process.
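    A toy simulation captures why this broken-wire strategy is such an efficient search. In this Python sketch, the genome length, probe distance, protein count, round count, and step sizes are all illustrative assumptions, not measured parameters:

```python
import random

def redox_scan(genome_len=10_000, lesion=7_000, n_proteins=30,
               probe=200, rounds=200, seed=2):
    """Toy model of the search strategy described above: a bound repair
    protein sends charge through a stretch of the helix; if the stretch
    is intact, the protein dissociates and rebinds elsewhere, but a
    lesion breaks the wire, so the protein stays and closes in on it."""
    rng = random.Random(seed)
    positions = [rng.randrange(genome_len) for _ in range(n_proteins)]
    for _ in range(rounds):
        for i, p in enumerate(positions):
            if abs(lesion - p) <= probe:
                # Wire broken: remain bound and step toward the damage.
                step = min(10, abs(lesion - p))
                positions[i] = p + (step if lesion > p else -step)
            else:
                # Stretch verified intact: move on to a new region.
                positions[i] = rng.randrange(genome_len)
    return positions

final = redox_scan()
near_lesion = sum(abs(p - 7_000) <= 200 for p in final)
print(near_lesion, "of 30 proteins end near the damage site")
```

    Starting from random positions, nearly all of the simulated repair proteins finish the run clustered at the lesion: intact stretches are cleared quickly, while the broken wire pins proteins near the damage.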

    Initiating DNA Replication

    Recently, Barton and her team discovered that charge conductance through DNA also plays a critical role in the early stages of DNA replication. DNA replication—the process of generating two daughter molecules identical to the parent molecule—serves an essential life function.

    DNA replication begins at specific sites along the double helix, called replication origins. Typically, prokaryotic cells, such as E. coli, have only a single origin of replication.

    The replication machinery locally unwinds the DNA double helix at the origin of replication to produce a replication bubble. Once the individual strands of the DNA double helix unwind and are exposed within the replication bubble, they are available to direct the production of the daughter strand.

    Before the newly formed daughter strands can be produced, a small RNA primer must be produced. DNA polymerase—the protein that synthesizes new DNA by reading the parent template strand—can’t start production from scratch. It must be primed. The primosome, a massive protein complex that consists of over 15 different proteins (including the enzyme primase), produces the RNA primer. From there, DNA polymerase takes over and begins synthesizing the daughter DNA strand.

    Barton and her team discovered that the handoff between primase and DNA polymerase relies on DNA’s wire property. Both primase and DNA polymerase possess 4Fe4S redox clusters. When primase’s 4Fe4S redox center loses an electron, this protein binds to DNA to produce the RNA primer. When primase’s 4Fe4S redox center picks up an electron, the protein detaches from the DNA to end the production of the RNA primer.

    When DNA polymerase binds to the DNA to begin the process of daughter strand synthesis, it sends an electron from its 4Fe4S redox center along the double helix formed by the parent DNA-RNA primer. When the electron reaches the 4Fe4S redox center of primase, it brings the production of the RNA primer to a halt.

    DNA Wires and the Case for a Creator

    The work by Barton and her colleagues highlights the elegant and sophisticated design of biochemical systems. DNA's wire property is so remarkable that it serves as inspiration for the design of the next generation of electronic devices—at the nanoscale. The use of biological designs to drive technological advance is one of the most exciting areas in engineering. This area of study—called biomimetics and bioinspiration—presents us with new reasons to believe that life stems from a Creator. It paves the way for a new type of design argument I dub the converse Watchmaker argument: If biological designs are the work of a Creator, then these systems should be so well-designed that they can serve as engineering models and otherwise inspire the development of new technologies.

    The converse Watchmaker argument complements William Paley’s classical Watchmaker argument for God’s existence. In my book The Cell’s Design, I describe how recent advances in biochemistry revitalize this classical argument. Over the last few decades, one of the most astounding insights from biochemistry is the recognition that many biochemical systems display the same properties as human designs. This similarity can be used to argue that life must come from the work of a Mind.

    The Watchmaker Prediction

    In conjunction with my presentation of the revitalized Watchmaker argument in The Cell’s Design, I proposed the Watchmaker prediction. I contend that many of the cell’s molecular systems currently go unrecognized as analogs to human designs because the corresponding technology has yet to be developed. That is, the Watchmaker argument may well become stronger in the future, and its conclusion more certain, as human technology advances.

    The possibility that advances in human technology will ultimately mirror the molecular technology that already exists as an integral part of biochemical systems leads to the Watchmaker prediction: As human designers develop new technologies, examples of these technologies, which previously went unrecognized, will become evident in the operation of the cell’s molecular systems. In other words, if the Watchmaker analogy truly serves as evidence for the Creator’s existence, then it is reasonable to expect that life’s biochemical machinery anticipates human technological advances.

    The Watchmaker Prediction, Satisfied

    The discovery that DNA’s wire properties are critical for DNA repair and the initiation of DNA replication fulfills the Watchmaker prediction. Barton and her team recognized the physiological importance of DNA charge conductance a year after The Cell’s Design was published.

    Nanoscientists have been working to develop molecular-scale nanowires for the last couple of decades. The discovery of DNA’s wire properties occurred in this context. In other words, as new technology emerged—in this case, nanoelectronics—we discovered its counterpart already at work inside the cell.

    Considering the wire properties of DNA, it is not madness to think that a Creator exists and played a role in life’s genesis.

    Endnotes

    1. Amie K. Boal et al., “Redox Signaling between DNA Repair Proteins for Efficient Lesion Detection,” Proceedings of the National Academy of Sciences, USA 106 (September 8, 2009): 15237–42, doi:10.1073/pnas.0908059106; Pamela A. Sontz et al., “DNA Charge Transport as a First Step in Coordinating the Detection of Lesions by Repair Proteins,” Proceedings of the National Academy of Sciences, USA 109 (February 7, 2012): 1856–61, doi:10.1073/pnas.1120063109; Michael A. Grodick, Natalie B. Muren, and Jacqueline K. Barton, “DNA Charge Transport within the Cell,” Biochemistry 54 (February 3, 2015): 962–73, doi:10.1021/bi501520w.
    2. Elizabeth O’Brien et al., “The [4Fe4S] Cluster of Human DNA Primase Functions as a Redox Switch Using DNA Charge Transport,” Science 355 (February 24, 2017): doi:10.1126/science.aag1789.
  • DNA: Digitally Designed

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | May 24, 2017

    We live in uncertain and frightening times.

    There seems to be no end to the serious risks confronting humanity. In fact, in 2014, USA Today published an article identifying the 10 greatest threats facing our world:

    • Fiscal crises in key economies
    • Structurally high unemployment/underemployment
    • Water crises
    • Severe income disparity
    • Failure of climate change mitigation and adaptation
    • Greater incidence of extreme weather events (e.g., floods, storms, fires)
    • Global governance failure
    • Food crises
    • Failure of a major financial mechanism/institution
    • Profound political and social instability

    If this list isn’t bad enough, another crisis looms in our near future: a data storage crisis.

    Thanks to the huge volume of scientific data generated by disciplines such as genomics and the explosion of YouTube videos, 44 trillion gigabytes of digital data currently exist in the world. To put this in context, each person in a worldwide population of 10 billion people would have to store over 6,000 CDs to house this data. Estimates are that if we keep generating data at this pace, we will run out of high-quality silicon needed to make data storage devices by 2040.1
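
    The arithmetic behind these figures is easy to verify. Here is a quick back-of-the-envelope check, using the article's round numbers and assuming a standard 700 MB CD:

    ```python
    # Back-of-the-envelope check of the figures quoted above, using the
    # article's round numbers (44 trillion GB of data, 10 billion people)
    # and assuming a standard 700 MB CD.

    total_bytes = 44e12 * 1e9     # 44 trillion gigabytes
    population = 10e9             # 10 billion people
    cd_capacity = 700e6           # ~700 MB per CD

    cds_per_person = total_bytes / population / cd_capacity
    assert cds_per_person > 6000  # consistent with "over 6,000 CDs" per person
    ```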

    Compounding this problem are the limitations of current data storage technology. Because of degradative processes, hard disks have a lifetime of about 3 years and magnetic tapes about 10 years. These storage systems must be kept in controlled environments—which makes data storage an expensive proposition.

    Digital Data Storage in DNA

    Because of DNA’s role as a biochemical data storage system (in which the data is digitized), researchers are exploring the use of this biomolecule as the next-generation digital data storage technology. As proof of principle, a team of researchers from Harvard University headed up by George Church coded the entire contents of a 54,000-word book (including 11 JPEG images) into DNA fragments.

    The researchers chose to encode the book’s contents into small DNA fragments—devoting roughly two-thirds of the sequence for data and the remainder for information that can be used to locate the content within the entire data block. In this sense, their approach is analogous to using page numbers to order and locate the contents of a book.
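
    To see how such an address-plus-payload layout works in principle, consider this minimal, hypothetical sketch. It is not the actual encoding Church's team used (their bit-to-base mapping and fragment sizes differed); it simply illustrates mapping bytes to bases and using an address block to reorder fragments:

    ```python
    # A toy DNA data-storage codec: each fragment carries a 1-byte address
    # followed by a short data payload, so fragments can be read back in any
    # order and reassembled. The 2-bits-per-base mapping and fragment layout
    # are hypothetical simplifications, not the published encoding scheme.

    B2N = {"00": "A", "01": "C", "10": "G", "11": "T"}
    N2B = {v: k for k, v in B2N.items()}

    def to_bits(data: bytes) -> str:
        return "".join(f"{byte:08b}" for byte in data)

    def encode(data: bytes, payload: int = 4) -> list[str]:
        """Split data into addressed fragments and map bits to bases."""
        fragments = []
        for i in range(0, len(data), payload):
            addressed = bytes([i // payload]) + data[i:i + payload]
            bits = to_bits(addressed)
            fragments.append("".join(B2N[bits[j:j + 2]]
                                     for j in range(0, len(bits), 2)))
        return fragments

    def decode(fragments: list[str]) -> bytes:
        """Recover the original bytes, using each fragment's address to reorder."""
        pieces = []
        for frag in fragments:
            bits = "".join(N2B[base] for base in frag)
            raw = bytes(int(bits[j:j + 8], 2) for j in range(0, len(bits), 8))
            pieces.append((raw[0], raw[1:]))   # (address, payload)
        return b"".join(chunk for _, chunk in sorted(pieces))

    fragments = encode(b"DNA stores digital data")
    # Even if the fragments come back scrambled, the addresses restore order:
    assert decode(sorted(fragments)) == b"DNA stores digital data"
    ```

    The one-byte address limits this toy to 256 fragments; a real system would devote more of each fragment to addressing, which is why roughly a third of each sequence in the Harvard work was reserved for location information.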

    Since then, researchers have encoded computer programs, operating systems, and even movies into DNA.

    Because DNA is so highly optimized to store information, it is an ideal data storage medium. (For details regarding the optimal nature of DNA’s structure, see The Cell’s Design.) Researchers think that DNA has the capacity to store data near the theoretical maximum. About one-half pound of DNA can store all the data that exists in the world today.

    Limitations of DNA Data Storage

    Despite its promise, there are significant technical hurdles to overcome before DNA can serve as a data storage system. Cost and time are two limitations. It is expensive and time-consuming to produce and read the synthetic DNA used to store information. As technology advances, the cost and time requirements associated with DNA data storage will likely improve. Still, because of these limitations, most technologists think that the best use of DNA will be for archival storage of data.

    Another concern is the long-term stability of DNA. Over time, DNA degrades. Researchers believe that redundancy may be one way around this problem. By encoding the same data in multiple pieces of DNA, data lost because of DNA degradation can be recovered.
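
    The redundancy idea can be illustrated with a toy example: store one extra XOR-parity fragment, and any single lost fragment can be reconstructed from the survivors. (Published DNA storage schemes use far more sophisticated error-correcting codes; this sketch, with made-up fragment contents, is only a conceptual illustration.)

    ```python
    # Toy redundancy scheme: one extra XOR-parity fragment allows any
    # single lost fragment to be rebuilt from the remaining ones.
    from functools import reduce

    def xor_bytes(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    data_fragments = [b"GATT", b"ACAG", b"TTCA"]
    parity = reduce(xor_bytes, data_fragments)  # stored as an extra fragment

    # Suppose the middle fragment degrades: XOR the survivors with the parity.
    recovered = reduce(xor_bytes, [data_fragments[0], data_fragments[2], parity])
    assert recovered == data_fragments[1]
    ```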

    The processes of making and reading synthetic DNA also suffer from error. Current technology has an error rate of 1 in 100. Recently, researchers from Columbia University achieved a breakthrough that allows them to elegantly address loss of information from DNA due to degradation or miscoding that takes place when DNA is made and read. These researchers successfully applied techniques used for “noisy communication” operations to DNA data storage.2

    With these types of advances, the prospects of using DNA to store digital data may soon become a reality. And unlike other data storage technologies, DNA will never become obsolete.

    Biomimetics and Bioinspiration

    The use of biological designs to drive technological advance is one of the most exciting areas in engineering. This area of study—called biomimetics and bioinspiration—presents us with new reasons to believe that life stems from a Creator. As the names imply, biomimetics involves direct copying (or mimicry) of designs from biology, whereas bioinspiration relies on insights from biology to guide the engineering enterprise. DNA’s capacity to inspire engineering efforts to develop new data storage technology highlights this biomolecule’s elegant, sophisticated design and, at the same time, raises a troubling question for the evolutionary paradigm.

    The Converse Watchmaker Argument

    Biomimetics and bioinspiration pave the way for a new type of design argument I dub the converse Watchmaker argument: If biological designs are the work of a Creator, then these systems should be so well-designed that they can serve as engineering models and otherwise inspire the development of new technologies.

    At some level, I find the converse Watchmaker argument more compelling than the classical Watchmaker analogy. It is remarkable to me that biological designs can inspire engineering efforts.

    It is even more astounding to think that biomimetics and bioinspiration programs could be so successful if biological systems were truly generated by an unguided, historically contingent process, as evolutionary biologists claim.

    Biomimetics and Bioinspiration: The Challenge to the Evolutionary Paradigm

    To appreciate why work in biomimetics and bioinspiration challenges the evolutionary paradigm, we need to discuss the nature of the evolutionary process.

    Evolutionary biologists view biological systems as the outworking of unguided, historically contingent processes that co-opt preexisting designs to cobble together new systems. Once these designs are in place, evolutionary mechanisms can optimize them, but still, these systems remain—in essence—kludges.

    Most evolutionary biologists are quick to emphasize that evolutionary processes and pathways seldom yield perfect designs. Instead, most biological designs are flawed in some way. To be certain, most biologists would concede that natural selection has produced biological designs that are well-adapted, but they would maintain that biological systems are not well-designed. Why? Because evolutionary processes do not produce biological systems from scratch, but from preexisting systems that are co-opted through a process dubbed exaptation and then modified by natural selection to produce new designs. Once formed, these new structures can be fine-tuned and optimized through natural selection to produce well-adapted designs, but not well-designed systems.

    If biological systems are, in effect, kludged together, why would engineers and technologists turn to them for inspiration? If produced by evolutionary processes—even if these processes operated over the course of millions of years—biological systems should make unreliable muses for technology development. Does it make sense for engineers to rely on biological systems—historically contingent and exapted in their origin—to solve problems and inspire new technologies, much less build an entire subdiscipline of engineering around mimicking biological designs?

    Using biological designs to guide engineering efforts seems to be fundamentally incompatible with an evolutionary explanation for life’s origin and history. On the other hand, biomimetics and bioinspiration naturally flow out of an intelligent design/creation model approach to biology. Using biological systems to inspire engineering makes better sense if the designs in nature arise from a Mind.

    Resources

    The Cell’s Design: How Chemistry Reveals the Creator’s Artistry by Fazale Rana (book)
    “iDNA: The Next Generation of iPods?” by Fazale Rana (article)
    “Harvard Scientists Write the Book on Intelligent Design—in DNA” by Fazale Rana (article)
    “Digital and Analog Information Housed in DNA” by Fazale Rana (article)
    “Engineer’s Muse: The Design of Biochemical Systems” by Fazale Rana (article)

    Endnotes
    1. Andy Extance, “How DNA Could Store All the World’s Data,” Nature 537 (September 2, 2016): 22–24, doi:10.1038/537022a.
    2. Yaniv Erlich and Dina Zielinski, “DNA Fountain Enables a Robust and Efficient Storage Architecture,” Science 355 (March 3, 2017): 950–54, doi:10.1126/science.aaj2038.
  • A Critical Reflection on Adam and the Genome, Part 2

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | May 17, 2017

    When I began college, I signed up for a premed major but quickly changed my course of study after my first biology class. Biology 101 introduced me to the fascinating molecular world inside the cell. At that point, I was hooked. All I wanted was to become a biochemist.

    But there was another reason why I gave up on the prospects of becoming a physician. I didn’t think I had the mental wherewithal to make decisions with life and death consequences for patients. And to this day, I deeply admire men and women who do possess that mental fortitude.

    The problem is that once someone dies, they don’t come back to life. I knew this reality would loom large for every decision I would make as a physician. Over 100,000 years of human experience teaches that when people die, they remain dead. And this experience is borne out by centuries of scientific study into human biology.

    When It Comes to the Virgin Birth and the Resurrection, Christianity Is Anti-Scientific

    Yet at the heart of the Christian faith is the idea that Jesus Christ was raised from the dead. To be clear: This idea is counter to human experience and thoroughly anti-scientific.

    On the other hand, a strong circumstantial case based on historical facts can be marshaled for the life, death, and resurrection of Jesus Christ. The historical evidence for the resurrection combined with the fact that this event transcends the laws of nature is clear evidence for Christians that God intervened in human history to perform a miracle—to act in a way that contravenes the laws of nature.

    Even though alternative explanations for the facts surrounding the resurrection fall short, many skeptics remain unconvinced that the resurrection happened. Why? Because it defies scientific explanation—dead people don’t come back to life.

    Yet, I don’t know of any evangelical or conservative Christian who would deny the resurrection. Nor would these same Christians deny the virgin birth—another event that also defies scientific explanation. As Christians, we readily embrace anti-scientific ideas when they are central to Christianity. We don’t view them as allegorical or as literary constructs that teach theological truths so that they can be accommodated to scientific truth. We regard them as real events in space and time, in which God discernibly acted in a miraculous way.

    Not only are the resurrection and the virgin birth anti-scientific, but the explanations for these two events completely fly in the face of methodological naturalism—the philosophical idea undergirding contemporary science. According to this philosophical system, scientific explanations must rely on material causes—natural process mechanisms. Any explanation that appeals to the work of a supernatural agent—a Creator—or processes that defy known laws of nature can’t be part of the scientific construct. By definition, these types of explanations are forbidden. Yet when it comes to the resurrection and the virgin birth, Christians reject methodological naturalism without apology. We don’t try to force these events into the framework of methodological naturalism by arguing that God used the laws of nature to effect the virgin birth or the resurrection. Why? Because the explanations for these events go beyond nature’s laws—these events are transcendent miracles.

    Adam and Eve’s Creation and Importance to the Christian Faith

    Should we not be willing to adopt the same posture when it comes to the question of origins, including the historicity of Adam and Eve?

    Like the virgin birth and the resurrection, Adam and Eve’s existence and role as humanity’s founding couple impacts key doctrines of the Christian faith, such as inerrancy, the image of God, the fall, original sin, marriage, and the atonement.

    Venema and McKnight’s Adam and the Genome

    The importance of a historical Adam and Eve to the Christian faith explains why New Testament scholar Scot McKnight (Northern Seminary) spent four chapters—half of a book—in Adam and the Genome trying to convince the reader that the existence of this primordial couple is not critical to the Christian faith. McKnight felt this exercise necessary because he concedes that comparative genomics and population genetics demonstrate the truth of human evolution and the impossibility that humanity arose from a primordial pair—an Adam and an Eve.

    Coauthored with biologist Dennis Venema (Trinity Western University), Adam and the Genome presents a scientific and theological case for evolutionary creationism—the idea that God employed evolutionary processes to bring about the design, origin, and history of life, including humanity.1

    The case Venema presents for human evolution serves as the motivation for McKnight’s contribution to the book. In fact, McKnight’s portion of Adam and the Genome is just the latest in a growing list of responses by evangelical and conservative Christian theologians to the specter of human evolution. Though this idea has been in play since the late 1800s with the publication of Darwin’s The Descent of Man, recently, Christian scholars, such as McKnight, feel compelled to sort through the theological fallout of this scientific explanation for human origins because of the emergence of genomics. Now that we have the capability to efficiently sequence and compare the entire genetic makeup of humans and other creatures, such as the great apes, the sense is that the case for human evolution has become undeniable.

    So, have Venema and McKnight made their case? Is human evolution a fact? Are Adam and Eve merely theological constructs?

    Having left the theological response to McKnight in the hands of scholars such as Gavin Ortlund and Ken Keathley, in part 1 of this review I offered my reflections on Venema’s intellectual journey from an antievolutionary intelligent design proponent to someone who embraces and now advocates for evolutionary creationism. I concluded that it wasn’t scientific evidence alone that motivated Venema and many other evolutionary creationists to adopt this view. I contend that many evolutionary creationists adopt it, in part, because they are reacting to the disappointment they felt when they realized that they had been unintentionally misled (when they were young and scientifically naïve) by well-meaning Christians who taught them young-earth creationism. I argue that in abandoning young-earth creationism, many evolutionary creationists have moved to the opposite extreme, rejecting any science-faith model that doesn’t fully embrace mainstream scientific ideas—even if those ideas challenge key biblical doctrines.

    In this second part of my review, I offer my thoughts on the core of Venema’s case for human evolution: namely, work in comparative genomics and population genetics, found in chapters two and three, respectively, of Adam and the Genome.

    Venema’s goal in his contribution to Adam and the Genome is to communicate the “undeniable” evidence for human evolution. Specifically, Venema discusses recent work in comparative genomics with the hope of explaining to the motivated layperson why many biologists regard the shared features in genomes as evidence for common ancestry. Applying that insight to whole genome comparisons of humans, chimpanzees, and other great apes, Venema explains why biologists think humanity shares an evolutionary history with the great apes—in fact, with all life on Earth. Focusing on pseudogenes, Venema concludes the case for common descent by discussing the widespread occurrence of nonfunctional DNA sequences located throughout the genomes of humans and the great apes—usually in corresponding locations in these genomes. Venema argues that these one-time functional DNA sequence elements were rendered nonfunctional through mutational events and are retained in genomes as vestiges of evolutionary history.

    Role of Methodological Naturalism in Venema’s Argument

    Admittedly, the scientific case Venema presents for common descent is strong—at least at first glance. (Though, in making his case, he does overlook some significant scientific issues confronting evolutionary biologists, such as the incongruence of evolutionary trees. In other words, evolutionary biologists wind up with different evolutionary trees depending on the region of the genome they use to build the trees. This is certainly the case when the human genome is compared to the genomes of chimpanzees and gorillas. One-third of the human genome more closely aligns with the gorilla genome than with the chimpanzee genome, indicating that gorillas, not chimpanzees, are our closest evolutionary relative.)

    Having acknowledged the strong case Venema makes for human evolution, I want to make sure that the reader recognizes the powerful, yet often unrecognized, role methodological naturalism plays in propping up the case for common descent and, hence, human evolution. Because of the influence of methodological naturalism, the only permissible way to interpret shared genetic features within the mainstream scientific enterprise is from an evolutionary framework. Any explanation invoking a Creator’s involvement is off the table—even if a creation model can account for the data, and it can. However, this approach will never receive a hearing in the scientific community today because it violates the tenets of methodological naturalism. In other words, because of methodological naturalism’s sway, common descent, and consequently human evolution, must be true by default. No other option is allowed. No other explanation, no matter how valid, is permitted.

    Like most evolutionary creationists, Venema and McKnight embrace methodological naturalism when it comes to the question of human origins. Yet they readily reject this idea when it comes to the virgin birth and the resurrection. As a result, their approach to science is inconsistent. Why apply the principles of methodological naturalism to human origins but not to questions surrounding the resurrection or the virgin birth?

    It is true that methodological naturalism has a demonstrated track record of success—when it guides investigation of secondary, proximal causes. But this scientific approach often comes up short when scientific questions focus on primary or ultimate causes, such as the origin of the universe or the origin of life.

    In fact, I wonder if Christians should embrace methodological naturalism at all. At its essence, this philosophical approach to science is inherently atheistic. A Christian could justify embracing a limited or weak form of methodological naturalism because Scripture teaches that God has providentially instituted processes that operate within the creation to sustain it. When studying these types of phenomena, application of methodological naturalism appears to be justified because the focus is on identifying and characterizing secondary, proximal causes.

    But what about the question of origins? Given the descriptions of God’s creative work in the creation accounts, it looks as if God intervened in a direct, personal way in the origin of the universe and the origin and history of life—particularly when it comes to humanity’s beginnings. If so, then methodological naturalism becomes an impotent guide for scientific study because it insists that these events must have mechanistic causes—even if they may not. By default, an atheistic worldview is imposed on the scientific enterprise. Within the framework of methodological naturalism, science is no longer a quest for truth but a game, the goal of which is to produce a material-cause explanation for the universe and the phenomena within it—even if material causes aren’t the true explanation, and even if the explanations leave something to be desired.

    Adherents of methodological naturalism defend its restrictions by arguing that science can’t put God in a test tube. Yet it is a straightforward exercise to show that science does have the tool kit to detect the work of intelligent agents within nature and to characterize their capabilities. By extension, science should have no problem detecting a Creator’s handiwork—and even determining the Creator’s identity.

    So, what happens if we relax the restrictive requirements of methodological naturalism when we investigate the question of human origins? If we do, it becomes evident that human evolution isn’t unique in its capacity to explain shared genetic features. It becomes conceivable that the shared genetic features in the genomes of humans and the great apes could reflect similar designs employed by a Creator. To put it another way, the shared genetic features could reflect common design, not common descent.

    Though this approach to the data is forbidden by contemporary mainstream science, this interpretative approach is not anti-scientific. In fact, there is a historical precedent for viewing shared genetic features as evidence for common design, not common descent. Prior to Darwin, distinguished biologist Sir Richard Owen interpreted shared (homologous) biological structures (and, consequently, related organisms) as manifestations of an archetype that originated in the mind of the First Cause, not the products of descent with modification. Darwin later replaced Owen’s archetype with a common ancestor. Again, the key point is that it is possible to conceive of an alternative interpretation of shared biological features, if one is willing to allow for the operation of a Creator within the history of life.

    If the action of an intelligent agent becomes part of the construct of science, and hence, biology, then the shared molecular fossils in the genomes of humans and the great apes (such as pseudogenes) could be seen as shared design features. These sequence elements point to common descent only if certain assumptions are true:

    1. the genomes’ shared structures and sequences are nonfunctional;
    2. the events that created these features are rare, random, and nonrepeatable;
    3. no mechanisms other than common descent (vertical gene transfer) can generate shared features in genomes.

    However, recent studies raise questions about the validity of these assumptions. For example, in the last decade or so, molecular biologists and molecular geneticists have discovered that most classes of “junk DNA,” including pseudogenes, have function. (Interested readers can find references to the original scientific papers in the expanded second edition of Who Was Adam? and The Cell’s Design.) In fact, the recently proposed competitive endogenous RNA hypothesis explains why pseudogenes must display similar sequences to their functional counterparts in order to carry out their cellular function.

    Moreover, as discussed in Who Was Adam?, researchers are now learning that many of the events that alter genomes’ structures and DNA sequences are not necessarily rare and random. For example, biochemists have known for quite some time that mutations occur at hotspots in genomes. Recent work also indicates that transposon insertion and intron insertion occur at hotspots, and that gene loss is repeatable. New studies also reveal that horizontal gene transfer can mimic common descent. This phenomenon is not confined to bacteria and archaea but has been observed in higher plants and animals as well, via a vector-mediated pathway or organelle capture.

    These advances serve to undermine key assumptions needed for a common descent argument. Considering these discoveries, is it possible to make sense of the shared genomic architecture and DNA sequences within the framework of a creation model?

    A Scientific Creation Model for Common Design

    What follows is a brief abstract of the RTB genomics model. A more detailed description and defense of our model can be found in the second expanded edition of Who Was Adam?

    A key tenet of the model is the idea that organisms—and hence, their genomes—are the products of God’s direct creative activity. But once created, genomes are subjected to microevolutionary processes.

    In brief, our model explains the similarities among organisms’ genomes in one of two ways:

    1. Reflecting the work of a Creator who deliberately designed similar features in genomes according to: (1) a common function, or (2) a common blueprint.
    2. Reflecting the outworking of physical, chemical, or biochemical processes that (1) occur frequently, (2) are nonrandom, and (3) are reproducible. These processes cause the independent origin of the same features in the genomes of different organisms. These features can be either functional or nonfunctional.

    Our model also explains genomes’ differences in one of two ways:

    1. Reflecting the work of a Creator who deliberately designed differences in genomes with distinct functions.
    2. Reflecting the outworking of physical, chemical, or biochemical processes that reflect microevolutionary changes.

    In principle, our model can account for similarities and differences in the genomes of organisms as either the deliberate work of a Creator or via natural-process mechanisms that alter the genomes after creation.

    Were Adam and Eve Real?

    Having argued for the reality of human evolution, Venema focuses attention on Adam and Eve’s historicity. If humanity arose through an evolutionary process, then Venema rightly points out that humanity must have begun as a population, not a primordial couple—by definition. According to evolutionary biologists, evolution is a population-level phenomenon. That being the case, if humanity arose via evolutionary processes, then there could never have been an Adam and an Eve. In support of this idea, Venema then discusses population genetics studies that indicate humanity began as an initial group of around 10,000 individuals. According to these methods, the genetic diversity among humans today is too great to have come from just two individuals. Venema then goes on to explain how evolutionary biologists reconcile the existence of Mitochondrial Eve and Y-chromosomal Adam (understood to be an actual woman and man, respectively) with the idea that humanity began as a population.

    Some Thoughts on Methods Used to Estimate Humanity’s Initial Population Size

    Did humanity originate from a primordial pair?

    One point Venema fails to acknowledge is that, at best, the population sizes generated from genetic diversity data are merely estimates, not hard-and-fast values. The reason: the mathematical models these methods are based on are highly idealized and generate differing estimates depending on several factors.

    More significantly, recent studies focusing on birds and mammals raise questions as to whether these models accurately predict population size. As the author of one study states, “Analyses of mitochondrial DNA (mtDNA) have challenged the concept that genetic diversity within populations is governed by effective population size and mutation rate . . . the variation in the rate of mutation rather than in population size is the main explanation for variations in mtDNA diversity observed among bird species.”2

    In fact, in several studies—involving white-tailed deer, mouflon sheep, Przewalski’s horses, white-tailed eagles, the copper redhorse, and gray whales—in which the original population size was known, the measured genetic diversity generations later was much greater than expected based on the models. In turn, if these data were used to estimate initial population size, the estimates would come out much greater than the actual starting numbers.

    Did humanity originate from a single pair? Although population estimates based on mathematical models indicate humanity originated from several hundred to several thousand individuals, it could well be that these numbers overestimate the size of the first human population. And given how poorly these population size models perform, it is hard to argue that science has falsified the notion that humanity descended from a primordial pair.

    Final Thoughts

    In Adam and the Genome, Venema makes a compelling case for human evolution, but he fails to tell the entire story. Venema overlooks a serious problem facing the evolutionary paradigm: namely, the incongruencies of evolutionary trees built from genetic data. He also neglects to communicate a legion of exciting discoveries made since the human genome sequence was completed—discoveries indicating that virtually every class of junk DNA has function. These discoveries undermine evolution’s case and make it apparent that we are in our infancy when it comes to understanding the structure and function of the human genome. The more we learn, the more evident its elegant and ingenious design becomes.

    At the end of the day, the case for human evolution is propped up by the restrictions of methodological naturalism. As we have demonstrated in Who Was Adam?, when this restriction is relaxed, it is possible to advance a competing creation model that can account for the data from comparative genomics.

    One thing has become clear to me after reading Adam and the Genome. It is no longer effective for creationists and intelligent design proponents to focus our efforts on taking potshots at human evolution. We must move beyond that type of critique, develop a philosophically robust framework for science that can compete with methodological naturalism, and advance scientific models within that framework capable of explaining the data from comparative genomics and population genetics.

    I am confident we can. We simply must roll up our sleeves and get to work.

    Resources—Theological Reflections on Adam and the Genome

    Resources—An Old-Earth Creationist Perspective on the Scientific Case for a Traditional Biblical View of Human Origins

    Resources—The Problem of Incongruent Evolutionary Trees

    Resources—Science Can Detect the Creator’s Handiwork in Nature

    Resources—Common Design as a Valid Scientific Model

    Resources—Junk DNA is Functional

    Resources—Pseudogenes are Functional

    Resources—Mutational Hot Spots in Genomes

    Resources—Adam and Eve’s Historicity

    Endnotes
    1. Dennis R. Venema and Scot McKnight, Adam and the Genome: Reading Scripture after Genetic Science (Grand Rapids, MI: Brazos Press, 2017).
    2. Hans Ellegren, “Is Genetic Diversity Really Higher in Large Populations?” Journal of Biology 8 (April 2009): 41, doi:10.1186/jbiol135.
  • A Critical Reflection on Adam and the Genome, Part 1

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | May 10, 2017

    Who doesn’t like a bargain? I sure do. And I am a sucker for 2-for-1 specials.

    For those interested in science-faith discussions, the recent book by biologist Dennis Venema (Trinity Western University) and New Testament scholar Scot McKnight (Northern Seminary) is quite the deal. Two books in one, Adam and the Genome presents a scientific and theological case for evolutionary creationism—the idea that God employed evolutionary processes to bring about the design, origin, and history of life, including humanity.1

    The first half of the book, written by Venema, presents a case for human evolution from recent work in comparative genomics and population genetics. As part of his case for human evolution, Venema makes it clear that the genetic diversity of humanity is too extensive to have come from a primordial couple—Adam and Eve.

    As an author who works in the science-faith arena, I am impressed with the writing of Venema’s portion of the book. He does a masterful job of communicating complex ideas in genomics and population genetics in an accessible way. He makes it easy for the uninitiated to understand why a growing number of evangelical Christians feel compelled to embrace evolutionary creationism.

    The author of the book’s second half, Scot McKnight, assumes the reality of human evolution along with the inevitable requirement that humanity emerged as a population, not a primordial pair. Making these two concessions, McKnight explains why he doesn’t think the Christian faith depends on a historical Adam and Eve as the sole progenitors of all humanity. Instead, he argues that Adam and Eve should be viewed as archetypal—as literary and theological concepts.

    So, have Venema and McKnight made their case?

    Even though I can’t resist 2-for-1 deals, I am not going to offer the reader a 2-for-1 review. Instead, I am limiting my critical reflections to Dennis Venema’s portion of the book. Because I’m not a biblical scholar or a theologian, I will refrain from sharing my thoughts on McKnight’s contribution to Adam and the Genome. Instead, I encourage the curious reader to take a look at articles by theologians Ken Keathley and Gavin Ortlund. Both scholars offer insightful commentary on McKnight’s analysis of the historical Adam—a much better bargain than anything I could hope to offer.

    Venema’s Case for Human Evolution

    Venema opens his case for human evolution by maintaining that the theory of biological evolution is well evidenced—the real deal. He argues that the theory of evolution has broad explanatory and predictive power.

    He then turns to recent work in comparative genomics, explaining why many biologists regard the shared features in genomes as evidence for common ancestry. Applying that insight to whole genome comparisons of humans, chimpanzees, and other great apes, Venema explains why biologists think humanity shares an evolutionary history with the great apes—and, in fact, with all life on Earth. Focusing on pseudogenes, Venema concludes the case for common descent by discussing the widespread occurrence of nonfunctional DNA sequences located throughout the genomes of humans and the great apes—usually in corresponding locations in these genomes. Venema argues that these once-functional DNA sequence elements were rendered nonfunctional through mutational events and are retained in genomes as vestiges of evolutionary history.

    Venema then turns his attention to the question of Adam and Eve. If humanity arose through an evolutionary process, then Venema rightly points out that humanity must have begun as a population, not a primordial couple—by definition. According to evolutionary biologists, evolution is a population-level phenomenon. That being the case, if humanity arose via evolutionary processes, then there could never have been an Adam and an Eve. In support of this idea, Venema then discusses population genetics studies that indicate humanity began as an initial group of around 10,000 individuals. Based on these methods, the genetic diversity among humans today is too great to have come from just two individuals. Venema then goes on to explain how evolutionary biologists reconcile the existence of mitochondrial Eve and Y-chromosomal Adam (understood to be an actual woman and man, respectively) with the idea that humanity began as a population.

    Finally, Venema closes out his portion of the book by offering a critique of the two most common challenges to biological evolution raised by the intelligent design movement: (1) irreducible complexity, and (2) the improbability of biological information arising by chance. Venema does a nice job of explaining why most biologists are not impressed with these challenges to biological evolution and, hence, with the case for intelligent design.

    Venema’s Story

    One of the things Venema does exceptionally well in Adam and the Genome is interweave throughout his four chapters the story of his intellectual conversion—from intelligent design to evolutionary creationism.

    Venema recounts growing up in a conservative Christian home and attending a private Christian school where he learned that “‘Darwin’ and ‘evolution’ were evil, of course—things that atheist scientists believed despite their overwhelming flaws, because those scientists had purposefully blinded their eyes to the truth.”2

    Venema tells how, at an early age, he was fascinated with the natural world and wanted to be a scientist. His frustration evident, Venema describes how his dreams of becoming a scientist were waylaid because of the influence of the young-earth creationism that pervaded his home, school, and church community.

    Unable to afford a private Christian college, Venema headed off to a secular university, sure that his faith would be challenged by his course work. Enrolled in a premed program (because he felt it safer than pursuing a science major), Venema describes how biology failed to capture his interest until he began to do research in a university lab as an undergraduate student. That experience transformed him from a lackluster student into a highly motivated one. It also inspired him to give up on medicine (even though he had the grades to get into medical school) and pursue a career in science. After completing his undergraduate education, Venema earned a PhD in genetics. He recounts how his antievolutionary views remained intact throughout his undergraduate and graduate training. In fact, he recalls how deeply he was impacted by the challenges biochemist Michael Behe leveled against Darwinian evolution in his book Darwin’s Black Box. In this book, and elsewhere, Behe argues that biochemical systems are irreducibly complex and, because of this property, cannot arise through a stepwise evolutionary process but must originate all at once, with all components coming together simultaneously.

    It was only later that Venema realized the deficiency of Behe’s case and other intelligent design arguments. According to Venema, he eventually concluded that intelligent design was based on god-of-the-gaps reasoning. Venema states, “Over the course of my personal journey away from ID, I came to an uncomfortable conclusion: ID seemed strong only where there was a lack of relevant evidence.”3

    Is Evolutionary Creationism an Overreaction to Ill-Conceived Science-Faith Models?

    Venema does a masterful job of explaining why so many biologists are convinced that life’s design, origin, and history—including humanity’s origin—are best explained by the theory of evolution. Reading through Venema’s chapters, it becomes clear that strong evidential support exists for the theory of evolution, and along with it, human evolution. But, in my view, Venema doesn’t tell the full story. There are also significant events in life’s history that evolutionary theory fails to explain—for example, the origin and design of biochemical systems. In fact, Venema readily acknowledges the scientific community’s failure to explain the origin of life through evolutionary means. It was this failure, combined with the elegant, sophisticated, and ingenious designs of biochemical systems, that convinced me that life’s origin and design at a molecular level must be the handiwork of a Creator. Despite Venema’s assertion, when it comes to the origin and design of biochemical systems, the case for intelligent design, and hence a Creator’s role in life’s origin, has become stronger over the last three decades—not because of our ignorance, but because of what we have learned about the origin-of-life problem and the structure and function of biochemical systems.

    Yet, having staked out and defended this claim in Origins of Life, The Cell’s Design, and Creating Life in the Lab, I am sympathetic to the critique Venema levels against: (1) Behe’s idea of irreducible complexity, and (2) the popular claim made by many Christian apologists that evolutionary mechanisms cannot generate biological information. Like Venema, at one time I found both arguments compelling. But as I carefully listened to the rebuttals to these arguments from origin-of-life researchers and evolutionary biologists over the years, I found myself less convinced that these specific arguments represent valid critiques of abiogenesis or evolutionary theory. (For more details, see the Resources section of this review.)

    Unlike Venema, I didn’t abandon progressive creationism for evolutionary creationism when I soured on these two popular design arguments. Why? In spite of the limitations of these two arguments, I am more convinced than ever that the origin of life and the design of biochemical systems can’t be explained by evolutionary mechanisms. The case for a Creator doesn’t rise and fall on the validity of the arguments from irreducible complexity and the improbability of evolutionary mechanisms generating information. Instead, as I outline in a recently released video, How to Make a Case for Biochemical Design, the case for God’s role in the genesis of life and design of biochemical systems finds its basis in several different lines of evidence that collectively form a powerful weight-of-evidence case for biochemical design.

    Yet Venema doesn’t see it that way, even though he acknowledges the challenges facing an evolutionary explanation for life’s origin. Why?

    I am sure Venema would answer that his reluctance to embrace any form of intelligent design/creationism is the overwhelming evidence for common descent and human evolution. But given his story, I can’t help but wonder if there is more to it. I can’t help but wonder if Venema’s move away from intelligent design to evolutionary creationism isn’t possibly an overreaction, in part, to feeling duped by well-meaning Christians who authoritatively taught flawed scientific ideas as truth. I can’t help but wonder if Venema’s embrace of mainstream scientific ideas about evolution finds some motivation in the safety of this approach. By embracing evolutionary creationism, he will never be at odds with mainstream scientific thinking again. Those of us who espouse ideas about the design, origin, and history of life outside of the scientific mainstream know the cost of adopting these views. All of us have been ridiculed and dismissed by skeptics and people in the scientific community simply because we have the impertinence to challenge mainstream scientific ideas regarding origins and the temerity to claim that the evidence points to God’s role in the origin and design of the universe and life.

    Over the years, I have gotten to know several evolutionary creationists who have stories similar to Venema’s. I have often heard evolutionary creationists express disappointment at being unintentionally misled, when they were young and scientifically naïve, by well-meaning Christians who taught them young-earth creationism, only to later discover the scientific deficiencies of that idea. It seems to me that in abandoning young-earth creationism, they, like Venema, have moved to the opposite extreme, rejecting any science-faith model that doesn’t fully embrace mainstream scientific ideas—even if those ideas challenge key biblical doctrines.

    In fact, I have had many evolutionary creationists say to me both publicly and privately: If evangelical Christians don’t accept the evolutionary paradigm, we will lose all credibility with the scientific community. I have heard evolutionary creationists argue that evangelical Christianity must adapt to the reality of evolution if the Christian faith is to remain relevant.

    I will address these concerns more fully in part 2 of this review. For now, Venema’s story serves as a cautionary tale for all of us involved in science-faith discussions. We need to make sure that our ideas are scientifically credible, even if they lie outside the scientific mainstream. It is important that we faithfully communicate the scientific consensus and why the scientific community holds to it before we offer alternative models. We also need to be willing to acknowledge the shortcomings of our approach and models, whether our ideas fall within or outside the mainstream. Young-earth, old-earth, and evolutionary creationists alike need to exercise humility when it comes to advocating for their views. Perhaps if these practices were more commonplace, extreme views such as evolutionary creationism (and young-earth creationism) wouldn’t hold such sway.

    What Motivations Influence My Views?

    Venema’s story has caused me to reflect on my own intellectual journey: from an agnostic to a theist; from a theist to a Christian who embraced theistic evolution; and, finally, from an adherent of theistic evolution to one who now espouses progressive creationism. Do I hold my views based on evidence alone? Or are there other motivating factors? Like Venema, I would like to think that I hold my views because they best account for all the evidence, both scientific and biblical. But maybe I have a deep-seated skepticism of biological evolution because I, too, felt duped by well-meaning biology professors who taught me that the case for the evolutionary paradigm was airtight, when, in fact, I later learned it was not the case at all. I feel as if my journey to faith in Christ was waylaid because of my wholehearted embrace of the evolutionary paradigm, again based on a simplistic treatment of biological evolution. At one time in my life, I reasoned that if evolution can account for everything, then why is a Creator needed? God becomes superfluous in the evolutionary paradigm.

    My point is this: a complex interplay of several factors determines the views that each of us holds, including the relationship between science and the Christian faith. Sincere, thoughtful, highly educated Christians can look at the same scientific and biblical data and come to rather different conclusions. It is for this reason that when we discuss science-faith issues with others (both inside and outside the Church) we need to move past the evidence and learn about one another’s experiences and control beliefs. In doing so, hopefully we realize that no one position uniquely holds the scientific or biblical high ground.

    Unfortunately, like many evolutionary creationists, Venema writes as if evolutionary creationism is the only scientifically credible view. And McKnight, like other evolutionary creationists, adopts the posture that it is exegetically unreasonable to embrace a traditional biblical view of human origins. But, what if one reaches a different conclusion? Namely, that Scripture teaches humanity was created in God’s image through direct and personal Divine action and that all humanity comes from Adam and Eve? Does that mean, as Christians, we must abandon the scientific high ground?

    In part 2, I will argue that the answer is no. I maintain that it is possible to hold to a scientifically credible view of human origins, while at the same time embracing the traditional biblical view of human origins. However, to do so, we must abandon methodological naturalism as the philosophical framework for science. If we do so, we will find the theory of evolution doesn’t uniquely account for the data from comparative genomics and population genetics. It is possible to present a robust scientific model (see Who Was Adam?) that explains the shared similarities and differences found in the genomes of humans and the great apes as shared design features—manifestations of an archetypal design.

    Resources—Theological Reflections on Adam and the Genome

    Resources—An Old-Earth Creationist Perspective on the Scientific Case for a Traditional Biblical View of Human Origins

    Resources—The Challenges to Two Popular Design Arguments

    Endnotes
    1. Dennis R. Venema and Scot McKnight, Adam and the Genome: Reading Scripture after Genetic Science (Grand Rapids, MI: Brazos Press, 2017).
    2. Ibid., 1.
    3. Ibid., 90.
  • Conservation Biology Studies Elicit Doubts about the First Human Population Size

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Apr 26, 2017

    Adam named his wife Eve, because she would become the mother of all the living.

    –Genesis 3:20

    Prior to joining Reasons to Believe in June of 1999, I spent seven years working in research and development for a Fortune 500 company. Part of my responsibilities included method development. My lab worked on developing analytical methods to measure the active ingredients in our products. But more interesting to me was the work we did designing methods to predict consumer responses to our product prototypes.

    Before we could deploy either type of method, it was critical for us to ensure that the techniques we developed would generate reliable, accurate data that could be used to make sound business decisions.

    Method Validation

    Researchers assess the soundness of scientific methods through a process called method validation. A key part of this process involves applying the method to “known” samples. If the method produces the expected result, it passes the test. For example, the team in my lab would often develop analytical methods to measure the active ingredients in our products. To validate these methods, we would carefully weigh and add specified amounts of the actives to prepared samples and then use our newly developed method to measure the ingredient levels. If we got the right results, it gave us the confidence to apply the method to real world samples.

    A Controversy about the Size of the First Human Population

    Currently, a set of scientific methods resides at the center of an important controversy among conservative and evangelical Christians about the historicity of Adam and Eve. Specifically, the scientific methods in question are designed to measure the population size of the first humans. Even though the traditional reading of the biblical creation accounts indicates humanity began as a primordial pair—an Adam and Eve—these methods indicate that the initial human population consisted of several thousand individuals, not two, raising serious questions about the traditional Christian understanding of humanity’s origin. Some evangelical Christians argue that we must accept these findings and reinterpret the biblical creation accounts, regardless of the theological consequences. Others (myself included) question the validity of these methods. It is important to make sure that these techniques perform as intended before abandoning the traditional biblical view of humanity’s beginnings.

    The Importance of Adam and Eve’s Historicity

    The finding that humanity began as a population, not a pair, causes quite a bit of consternation for me and many other evangelical and conservative Christians. Adam and Eve’s existence and role as humanity’s founding couple are not merely academic concerns. For the Christian faith, the question of Adam and Eve’s historicity is more significant than any business decision that relied on the analytical methods my lab developed. (Data from my lab were used to make some decisions that involved millions of dollars.) The historicity of Adam and Eve impacts key doctrines of the Christian faith, such as inerrancy, the image of God, the fall, original sin, marriage, and the atonement.

    Again, given the profound implications of abandoning Adam and Eve’s historicity, it is important to know if these population size methods perform as intended. They are a big part of the reason evolutionary biologists and geneticists reject Adam and Eve’s existence. To put it another way, are these methods valid, yielding accurate, reliable results?

    Measuring the Initial Human Population Size

    Currently, geneticists use three approaches to estimate the size of the initial human population.1

    1. The most prominent method finds its basis in mathematical expressions relating the current genetic variability among humans today to mutation rate and initial population size. Using these relationships, geneticists develop mathematical models that allow them to calculate the initial population size for the first humans after measuring genetic variability of contemporary human population groups (and assuming a constant mutation rate).
    2. A more recently developed approach relies on a phenomenon called linkage disequilibrium to measure the initial population size of the first humans.
    3. The final approach (also relatively new on the scene) makes use of a process called incomplete lineage sorting to estimate humanity’s initial population size.
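
    The first approach can be sketched with the textbook relationship from population genetics: under the idealized infinite-alleles model at equilibrium, expected heterozygosity H relates to θ = 4Neμ (effective population size Ne, per-generation mutation rate μ) by H = θ/(1 + θ), so Ne can be back-calculated from measured diversity. The sketch below is only a minimal illustration of that relationship, not the actual models geneticists use; the input values (a roughly human-like per-site diversity and mutation rate) are commonly cited approximations assumed here for illustration, not figures from this article:

```python
def effective_population_size(heterozygosity, mutation_rate):
    """Back-calculate effective population size Ne from genetic diversity.

    Assumes the idealized infinite-alleles model at equilibrium:
        H = theta / (1 + theta), where theta = 4 * Ne * mu
    """
    theta = heterozygosity / (1.0 - heterozygosity)  # solve H = theta/(1+theta)
    return theta / (4.0 * mutation_rate)             # solve theta = 4*Ne*mu

# Illustrative inputs (assumptions for this sketch, not data from the article):
#   H  ~ 0.0008  per-site diversity, roughly human-like
#   mu ~ 2e-8    mutations per site per generation
ne = effective_population_size(0.0008, 2e-8)
print(round(ne))  # ~10,000, the order of magnitude cited in the article
```

    Note that such an estimate is only as good as the model's idealizations (constant mutation rate, equilibrium, random mating), which is precisely what the validation concerns discussed below call into question.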

    Are Population Size Methods Valid?

    So are these methods valid? When I have asked evolutionary creationists this question, they usually hem and haw, and then reply: These methods are based on sound, well-understood phenomena, and therefore should be considered reliable.

    I believe that to be true. The methods do appear to be based on sound principles. But that is not enough—not if we are to draw rigorous scientific conclusions. Scientific methods can only be considered reliable if they have been validated. When I worked in R&D, had I insisted to my bosses that they accept the results of methods I developed because those methods were based on sound principles but lacked validation data, I would have been fired.

    So given the importance of the historical Adam and Eve, why should we accept anything less for population size measurements?

    To my surprise, when I survey the scientific literature, I can’t find any studies that demonstrate successful validation of any of these three population size methods. For me, this is a monumental concern, particularly given the importance of Adam and Eve’s historicity. The fact that these methods haven’t been validated provides justification for Christians to hold the results of these studies at arm’s length.

    In fact, when it comes to the first category of methods, I find something even more troubling: studies in conservation biology raise serious questions about the validity of these methods. Of course, we can’t directly validate methods designed to measure the numbers of the first humans because we don’t have access to that initial population. But we can gain insight into the validity of these methods by turning to work in conservation biology. When a species is on the verge of extinction, conservationists often know the number of individuals that remain. And because genetic variability is critical for a species’ recovery and survival, conservation biologists monitor the genetic diversity of endangered species. In other words, conservation biologists have the means to validate population size methods that rely on genetic diversity.

    In my book Who Was Adam? I discuss three separate studies (involving mouflon sheep, Przewalski’s horses, and gray whales) in which the initial populations were known. When the researchers measured the genetic diversity generations after the initial populations were established, the genetic diversity was much greater than expected—again, based on the models relating genetic diversity and population size.2 In other words, this method failed validation in each of these cases. If researchers had used the measured genetic variability to estimate the original population sizes, the estimates would have come out larger than the populations actually were.

    In Who Was Adam? I also cite studies that raise doubts about the reliability of linkage disequilibrium methods to accurately measure population sizes.3 Not only is this method not validated, it, too, has failed validation.

    Recently, I conducted another survey of the scientific literature to see if I had missed any important studies involving population size and genetic diversity. Again, I was unable to find any studies that demonstrated the validity of any of the three approaches used to measure population size. Instead, I found three more studies indicating that when genetic diversity was measured for animal populations on the verge of extinction it was much greater than expected, based on the predictions derived from the mathematical models.4

    The Surprisingly High Genetic Diversity of White-Tailed Deer in Finland

    Of specific interest is a study published in 2012 by researchers from Finland. These scientists monitored the genetic diversity (focusing on 14 locations in the genome consisting of microsatellite DNA) of a population of white-tailed deer that was introduced into Finland from North America in 1934.5 The initial population consisted of three females and one male, and has since grown to between 40,000 and 50,000 individuals. This population has remained isolated from all other deer populations since its introduction.

    Though the researchers found that the genetic diversity of this population was lower than for a comparable population in Oklahoma (reflecting the genetic bottleneck that occurred when the original members of the population were relocated), it was still surprisingly high. Because of this unexpectedly high genetic diversity, size estimates for the initial population would be much greater than four individuals. To put it another way, this population size method fails validation—one more time.

    Why is this approach to measuring population sizes so beleaguered, when the method is based on sound, well-understood principles? In Who Was Adam? (and elsewhere), I point out that the equations undergirding this method are simplified, idealized mathematical relationships that do not take into account several relevant factors that are difficult to mathematically model, such as population dynamics through time and across geography.

    Recently, conservation biologists have identified another factor influencing genetic diversity that confounds the straightforward application of the equations used to calculate initial population size: long generation times. That is, animals with long generation times display greater-than-anticipated genetic diversity, even when the population begins with a limited number of individuals.6

    This finding is significant when it comes to discussions about Adam and Eve’s historicity. Human beings have long generation times—longer than white-tailed deer. From a creation model perspective, these long generation times help to explain why humanity displays such relatively large genetic diversity, even though we come from a primordial pair. And it suggests that the initial population size estimates for modern humans are likely exaggerated.

    So did humanity originate as a population or a primordial pair?

    The claims of some geneticists and evolutionary biologists notwithstanding, it’s hard to maintain that humanity began as a population of thousands of individuals, because the methods used to generate these numbers haven’t been validated—in fact, work in conservation biology makes me wonder if these methods are trustworthy at all. Given their track record, I would never have used these methods when I worked in R&D to make a business decision.

    Resources

    Endnotes
    1. For a recent and accessible discussion of these methods see Dennis R. Venema and Scot McKnight, Adam and the Genome: Reading Scripture after Genetic Science (Grand Rapids, MI: Brazos Press, 2017), 45–48.
    2. Fazale Rana with Hugh Ross, Who Was Adam? A Creation Model Approach to the Origin of Humanity (Covina, CA: RTB Press, 2015), 349–353.
    3. Ibid.
    4. Catherine Lippé, Pierre Dumont, and Louis Bernatchez, “High Genetic Diversity and No Inbreeding in the Endangered Copper Redhorse, Moxostoma hubbsi (Catostomidae, Pisces): The Positive Sides of a Long Generation Time,” Molecular Ecology 15 (June 2006): 1769–1780, doi:10.1111/j.1365-294X.2006.02902.x; Frank Hailer et al., “Bottlenecked But Long-Lived: High Genetic Diversity Retained in White-Tailed Eagles upon Recovery from Population Decline,” Biology Letters 2 (June 2006): 316–319, doi:10.1098/rsbl.2006.0453; Jaana Kekkonen, Mikael Wikström, and Jon E. Brommer, “Heterozygosity in an Isolated Population of a Large Mammal Founded by Four Individuals Is Predicted by an Individual-Based Genetic Model,” PLoS ONE 7 (September 2012): e43482, doi:10.1371/journal.pone.0043482.
    5. Kekkonen, Wikström, and Brommer, “Heterozygosity in an Isolated Population.”
    6. Lippé, Dumont, and Bernatchez, “High Genetic Diversity and No Inbreeding”; Hailer et al., “Bottlenecked but Long-Lived.”
  • Does Radiocarbon Dating Prove a Young Earth? A Response to Vernon R. Cupps

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Apr 19, 2017

    In my experience, one of the most persuasive scientific claims for a young Earth is the detection of carbon-14 in geological samples such as coal and fossilized dinosaur remains.1 According to young-earth creationists (YECs), if the coal samples and fossils are truly millions of years old (as the scientific community claims), then there shouldn’t be any trace of carbon-14 in these samples. Why? It’s because the half-life of carbon-14 is about 5,700 years, meaning that all the detectable carbon-14 should have disappeared from the samples long before they reach even 100,000 years of age.

    In Dinosaur Blood and the Age of the Earth, I respond to this young-earth argument, suggesting three mechanisms that can account for carbon-14 in fossil remains (and by extension, in geological materials) from an old-earth perspective.

    When YECs detect carbon-14, they find it at low levels, corresponding to age dates older than 30,000 years (not 3,000 to 6,000 years old, as their model predicts, by the way). These low levels make it reasonable to think that some of the carbon-14 signal comes from contamination of the sample by, say, microorganisms picked up from the environment.

    These low levels also make it conceivable that some of the detected carbon-14 is due to a ubiquitous carbon-14 background. Cosmic rays are continuously producing radiocarbon from nitrogen-14. Because of this nonstop production, carbon-14 is everywhere and will show up at extremely low levels in any measurement that is made, even if it isn’t present in the actual sample.

    It is also possible that some of the carbon-14 in the fossil and coal samples arises from the in situ conversion of nitrogen-14 to carbon-14 driven by the decay of radioactive elements in the environment. Because fossils and coal derive from once-living organisms, there will be plenty of nitrogen-14 contained in these specimens. For example, environmental uranium and thorium would readily infuse into the interiors of fossils, and as these elements decay, the high energy they release will convert nitrogen-14 to carbon-14.

    Employing a “back-of-the-envelope” flux analysis, Vernon Cupps—a YEC affiliated with the Institute for Creation Research—has challenged my assessment, concluding that neither (1) the production of carbon-14 from cosmic radiation nor (2) the decay of radioactive isotopes in the environment is sufficient to account for the carbon-14 detected in fossil and geological samples.2

    Though I think his analysis may be unrealistically simplistic, let’s assume Cupps’s calculations are correct. He still misses my point. In Dinosaur Blood and the Age of the Earth, I argue that all three possible sources simultaneously contribute to the detectable carbon-14. In other words, while no single source may fully account for the detectable carbon-14, when combined, all three can. Cupps’s analysis neglects the contribution of the ubiquitous background carbon-14 and possible sources of contamination from the environment.

    Ironically, the low levels of carbon-14 detected in fossils and geological specimens by YECs actually argue against a young Earth, not an old Earth.

    How can that be?

    If fossil and geological specimens are between 3,000 and 6,000 years old, then roughly half to 70 percent of the original carbon-14 should remain in the sample. This amount of material should generate a strong carbon-14 signal. The fact that these specimens all age-date to 30,000 to 45,000 years old means that less than 3 percent of the original carbon-14 remains in these samples—if the results of this measurement are taken at face value. It becomes difficult to explain this result if these samples are less than 6,000 years old. On the other hand, the weak carbon-14 signal measured by YECs does make sense if the carbon-14 does not reflect the material originally in the sample but instead stems from a combination of (1) contamination from the environment, (2) ubiquitous background radiocarbon, and/or (3) irradiation of the samples by isotopes such as uranium or thorium in the environment.
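These rough percentages follow directly from exponential decay. A quick back-of-the-envelope sketch, using the commonly cited half-life of about 5,730 years (rounded to 5,700 earlier in the article):

```python
# Fraction of the original carbon-14 remaining after t years,
# from the standard half-life decay law: N/N0 = (1/2)**(t / t_half).
HALF_LIFE = 5730.0  # years

def c14_remaining(years):
    return 0.5 ** (years / HALF_LIFE)

# A 3,000- to 6,000-year-old sample should retain about half to
# 70 percent of its original carbon-14...
print(f"{c14_remaining(3000):.2f}")   # prints 0.70
print(f"{c14_remaining(6000):.2f}")   # prints 0.48

# ...while a 30,000- to 45,000-year age-date implies only a few
# percent at most remains.
print(f"{c14_remaining(30000):.3f}")  # prints 0.027
print(f"{c14_remaining(45000):.3f}")  # prints 0.004
```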

    To put it plainly, it is difficult to reconcile the carbon-14 measurements made by YECs with fossil and geological samples that are 3,000 to 6,000 years old, Cupps’s analysis notwithstanding.

    On the other hand, an old-earth perspective has the explanatory power to account for the low levels of carbon-14 associated with fossils and other geological samples.

    Resources

    Endnotes
    1. Vernon R. Cupps, “Radiocarbon Dating Can’t Prove an Old Earth,” Acts & Facts, April 2017, http://www.icr.org/article/9937.
    2. Ibid.
  • Can Science Detect the Creator’s Fingerprints in Nature?

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Apr 12, 2017

    Which of these does not know that the hand of the Lord has done this?

    Job 12:9

     

    In early March (2017), I took part in a forum at Samford University (Birmingham, AL), entitled Genesis and Evolution. At this two-day event, the panelists presented and discussed scientific and biblical perspectives on Young-Earth, Old-Earth, and Evolutionary versions of creationism.

    The organizers charged me with the responsibility of describing Old-Earth Creationism (OEC) from a scientific vantage point, while also providing the rationale for my views.

    As part of my presentation, the organizers asked me to discuss the assumptions that undergird my views. One of the foundational tenets of OEC is an important idea taught in Scripture: God has revealed Himself to us through the record of nature. According to passages such as Job 12:7–9, part of that revelation includes the ‘fingerprints’ He has left on His creation.

    If Scripture is true, then scientific investigation should uncover evidence for design throughout the natural realm. Science should find God’s fingerprints. And, indeed, it has. As a biochemist, I am deeply impressed with the elegance, sophistication, and ingenuity of the cell’s molecular systems. In my view, these features reflect the work of a mind—a Divine Mind. But the evidence for intelligent design in the biochemical realm is even more extensive. For example, the eerie similarity between the structure and function of biochemical systems and the objects and devices produced by human designers further evinces the Creator’s handiwork. In my book The Cell’s Design, I show how these remarkable similarities serve to revitalize William Paley’s Watchmaker Argument for God’s existence.

    To describe the hallmark features of human designs, Paley used the term contrivance. Human designs are contrivances. And so are biological systems. If human contrivances require the work of human designers, then it follows that biological systems—which, too, are contrivances—require a Divine designer. In The Cell’s Design, I introduce the concept of an intelligent design pattern. Following Paley, I identify several features that characterize human designs. Collectively, these characteristics form a pattern that can be matched to the features of biological and biochemical systems. The greater the match between the intelligent design pattern and biological/biochemical systems, the greater the certainty that the designs found in living systems are the work of a mind.

    In response to my presentation at the Genesis and Evolution event, cell biologist Ken Miller from Brown University—a well-known critic of intelligent design—argued that creationism and intelligent design cannot be part of the construct of science, because science lacks the capability of detecting the supernatural. In his book, The Triumph of Evolution and the Failure of Creationism, paleontologist Niles Eldredge makes this very point:

    “We humans can directly experience the material world only through our senses, and there is no way we can directly experience the supernatural. Thus, in the enterprise that is science, it isn’t an ontological claim that a God does not exist, but rather an epistemological recognition that even if such a God did exist, there would be no way to experience that God given the impressive, but still limited, means afforded by science. And that is true by definition.”1

    But, as I pointed out during my presentation and elsewhere, there are scientific disciplines predicated on science’s capacity to detect the activity of intelligent agency. One is SETI: the Search for Extraterrestrial Intelligence. Astronomers involved in this research program seek ways to distinguish electromagnetic radiation emanating from astronomical objects from signals hypothetically generated by intelligent agents belonging to alien civilizations. To put it another way, SETI is an intelligent design research program.

    Research by scientists from the Harvard-Smithsonian Center for Astrophysics powerfully illustrates this point.2 These investigators propose that fast radio bursts (FRBs) emanate from alien technology, specifically planet-sized transmitters powering interstellar probes.

    Astronomers discovered FRBs in 2007. Since then, around two dozen exceedingly bright, millisecond bursts of radio emissions have been detected. Astronomers think that FRBs originate in distant galaxies, billions of light years away.

    The Harvard-Smithsonian scientists calculate that the transmitters could generate enough energy from sunlight to move probes through space, if the light were directed onto structures twice the size of Earth. Given the energies involved, the transmitters would have to be cooled. Again, the researchers estimate that a water-cooled device twice Earth’s size could keep the transmitter from melting.
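To get a feel for the scale involved, one can estimate how much sunlight a collector of that size would intercept. This is my own back-of-the-envelope sketch, not the researchers' calculation, and it loosely interprets "twice the size of Earth" as a circular collector of twice Earth's radius sitting at Earth's distance from a Sun-like star:

```python
import math

# Sunlight intercepted by a flat circular collector facing its star.
SOLAR_CONSTANT = 1361.0   # W/m^2 at 1 AU from a Sun-like star
EARTH_RADIUS = 6.371e6    # m

collector_radius = 2 * EARTH_RADIUS            # "twice Earth's size" (assumed)
collector_area = math.pi * collector_radius**2  # m^2
power = SOLAR_CONSTANT * collector_area         # W

print(f"{power:.1e} W")  # on the order of 10^17 watts
```

For comparison, total human energy consumption is on the order of 10^13 watts, which conveys why the researchers concede such a transmitter lies far beyond our technology.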

    The researchers recognize that construction of the transmitters lies beyond our technology but is possible given the laws of physics. They speculate that aliens built these transmitters to power light sails, moving spacecraft weighing a million tons and carrying living creatures across interstellar space.

    These astronomers maintain that the transmitter would have to continually focus its beam on the light sails. FRBs originate when the transmitter and light sail briefly point in Earth’s direction due to their relative motion.

    So, are FRBs evidence for alien technology? Avi Loeb, one of the Harvard-Smithsonian scientists, admits that their proposal is speculative, but justifies it because, “we haven’t identified a possible natural source with any confidence.”3 But, Loeb argues, “Deciding what’s likely ahead of time limits the possibilities. It’s worth putting ideas out there and letting the data be the judge.”4

    So, contrary to the protests of scientists such as Miller and Eldredge, science does have the tool kit to detect the handiwork of intelligent agents and even discern the capabilities and motives of the intelligent designer(s). So why not let intelligent design proponents and creationists put their ideas out there and let the data be the judge?

    It is interesting that the Harvard-Smithsonian astronomers think they can recognize the work of intelligent designers who possess capabilities beyond what we can understand—and, maybe, even imagine. They also think that they can discern the purpose behind the alien technology—space exploration. So why can’t science recognize the work of a Creator whose capabilities exist beyond what we can imagine?

    So, considering the proposal by the Harvard-Smithsonian investigators, it is disingenuous for Miller, Eldredge, and other scientists to reject, out of hand, the claim that there is scientific evidence for God’s fingerprints in biochemical systems. I contend that the intelligent design pattern I describe in The Cell’s Design can be used to rigorously—and even quantitatively—characterize the Creator’s activity in biological systems. Moreover, as I have discussed previously, science has the tools to identify the designer.

    As the apostle Paul wrote, evidence for the Creator is “clearly seen from what has been made.” (Romans 1:20) If only the scientific community would be willing to look.

    Resources:

    Fast Radio Bursts: E. T. Is Not Calling Home by Hugh Ross (article)

    Fast Radio Bursts Update by Hugh Ross (article)

    A Biochemical Watch Found in a Cellular Heath by Fazale Rana (article)

    Can Science Identify the Intelligent Designer? by Fazale Rana (article)

    The Cell’s Design By Fazale Rana (book)

    Endnotes
    1. Niles Eldredge, The Triumph of Evolution and the Failure of Creationism (New York: Holt and Company, 2000), 13.
    2. Harvard-Smithsonian Center for Astrophysics, “Could Fast Radio Bursts Be Powering Alien Probes?” ScienceDaily (March 9, 2017), sciencedaily.com/releases/2017/03/170309120419.htm.
    3. Ibid.
    4. Ibid.
  • What Does the Discovery of Earth’s Oldest Fossils Mean for Evolutionary Models?

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Mar 29, 2017

    Communication can be a complex undertaking. Often, people don’t say what they really mean. And if they do, their meaning is often veiled in what they say. That’s why it’s important to learn how to read between the lines. Understanding the real meaning when something isn’t explicitly stated usually requires experience and some insider’s knowledge.

    Thanks to my expertise in biochemistry and origin-of-life research and 20 years of experience as a Christian apologist, I can usually read between the lines when scientists respond to discoveries that challenge the evolutionary paradigm, such as the recently reported discovery of Earth’s oldest fossils. Because of their fear that intelligent design proponents and creationists will make use of these types of discoveries to advance the case for a Creator, scientists can be adept at masking their concern when they discuss the implications of these discoveries. But if you know how to read between the lines, their consternation is as plain as day.

    Earth’s Oldest Fossils

    An international team made up of scientists from the United Kingdom, United States, Canada, and Australia recently reported on the discovery of microfossils from a geological formation in the northern part of Quebec, Canada.1 Formed from ancient hydrothermal vents, this iron-rich geological system dates somewhere between 3.77 and 4.3 billion years in age.

    The putative microfossils consist of microscopic hematite filaments and tubes, like those found in modern hydrothermal vents. Today, iron-oxidizing microbes produce hematite filaments and tubes when sheaths of extracellular materials become coated by iron oxyhydroxide. Added evidence for the biogenicity of these microfossils comes from carbonate and apatite associated with the hematite structures. These compounds can also be produced as by-products of the metabolic activity of microorganisms. The research team also discovered graphite inclusions enriched in carbon-12, a geochemical signature of life. Finally, the Raman spectrum of the carbonaceous deposits displays features that also point to the biological origin of this material.

    Matthew Dodd, one of the research team members, argues that “we can think of alternative explanations for each of these singular observations, but why all of these features occur together can really only be explained by one thing, which is a biological interpretation.”2

    The discovery of these microfossils comes on the heels of the discovery of stromatolites in newly exposed rock outcroppings in Greenland, dating at 3.7 billion years.3 Both recent discoveries corroborate earlier work that yielded several different geochemical markers for biological activity. In short, an impressive weight of evidence points to the early appearance of complex and diverse microbial life on Earth.

    Skepticism about Bioauthenticity

    Despite this impressive collection of evidence, several scientists have expressed skepticism about the bioauthenticity of the fossils. Journalist Sarah Kaplan explains why: “Findings like these are subject to intense scrutiny because they have potentially far-reaching implications for the study of early organisms on Earth and other planets.”4

    As I discussed previously, when 3.7-billion-year-old stromatolite fossils were unearthed in Greenland, one of the implications of the early appearance of metabolically complex and diverse microbial life on Earth is that it calls into question evolutionary explanations for the origin of life. These discoveries indicate that life appeared suddenly on Earth, in a geological instant. Yet traditionally, origin-of-life researchers have maintained that life’s origin via chemical evolution would have required hundreds of millions of years, perhaps even a billion years.

    This concern can be read between the lines in the objections raised by scientists responding to this discovery.

    Some argue that the research team hasn’t amassed enough evidence to convince them of the biogenicity of the fossils, pointing out that extraordinary claims require extraordinary evidence. But the claim that life appeared early in Earth’s history is only extraordinary within the evolutionary paradigm. To view these microfossils as extraordinary highlights the trouble these fossil finds cause for an evolutionary approach to the origin-of-life question.

    Others argue that iron-oxidizing microbes are too complex to have appeared this early in Earth’s history. Some assert that the rock layers containing the fossils are much younger than 3.77 billion years, raising concerns about the dating methods used to determine the age of the rocks harboring the microfossils. Again, both complaints reveal concerns about the impact that this fossil find has on the evolutionary explanation for life’s beginning. The hope is that by forcing the fossils to appear much later in Earth’s history, scientists can explain the metabolic complexity of the organisms that produced the hematite deposits by giving evolutionary processes more time. Yet there is no reason to dispute the dates for the rock formations in northern Canada, and the case for the biogenicity of the fossils is strong.

    Some dismiss the bioauthenticity of the microfossils because it would require life to originate under hostile conditions, caused by the late heavy bombardment. These hostile conditions would have frustrated the origin-of-life process, potentially sterilizing Earth, making it difficult to imagine how life could have emerged, let alone diversified, at 3.77 billion years ago—at least from an evolutionary vantage point. If these fossils aren’t authentic, then scientists don’t have to confront the counterintuitive fact that life appeared under hostile conditions.

    It seems to me that these scientists are dangerously close to evaluating the validity of the 3.77-billion-year-old microfossils based on how well they fit into the evolutionary paradigm, instead of evaluating evolutionary explanations for the origin of life based on the fossil evidence—a complete reversal of the way that the scientific method is supposed to work.

    Nevertheless, a quick read between the lines reveals just how awkwardly this fossil find fits within the evolutionary paradigm.

    Implications for Creation Models

    Though the discovery of 3.77-billion-year-old microfossils confounds evolutionary origin-of-life models, it affirms RTB’s origin-of-life model. As described in Origins of Life, two key predictions of this model include (1) life appearing on Earth soon after the planet’s formation and (2) first life possessing intrinsic complexity. And these predictions are satisfied by this latest advance.

    The writing is on the wall: the case for a Creator’s role in the origin of life is becoming more and more evident.

    Resources

    Endnotes
    1. Matthew S. Dodd et al., “Evidence for Early Life in Earth’s Oldest Hydrothermal Vent Precipitates,” Nature 543 (March 2017): 60–64, doi:10.1038/nature21377.
    2. Sarah Kaplan, “Newfound 3.77-Billion-Year-Old Fossils Could Be Earliest Evidence of Life on Earth,” Washington Post, March 1, 2017, https://www.washingtonpost.com/news/speaking-of-science/wp/2017/03/01/newfound-3-77-billion-year-old-fossils-could-be-earliest-evidence-of-life-on-earth.
    3. Allen P. Nutman et al., “Rapid Emergence of Life Shown by Discovery of 3,700-Million-Year-Old Microbial Structures,” Nature 537 (September 2016): 535–38, doi:10.1038/nature19355.
    4. Kaplan, “Newfound 3.77-Billion-Year-Old Fossils.”
  • Latest Insights into Obesity Fatten the Case for Human Design

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Mar 22, 2017

    As a biochemist, I have come up with a radical new diet plan: Eat less and exercise more. Yet, recent work by a research team led by Herman Pontzer at Hunter College exposed the flaws in my newfangled diet before I could even try it out. As it so happens, an emerging body of data indicates that exercise contributes very little to weight loss.

    This surprising, counterintuitive finding has important implications for medical practitioners trying to combat a worldwide obesity epidemic. It also highlights the elegant design of the human body and supports the growing case for human exceptionalism.

    The Obesity Epidemic

    Some of the latest statistics indicate that worldwide, 1 in 3 people are overweight and 1 in 10 suffer from obesity. This problem has serious consequences because obesity plays a part in the etiology of type 2 diabetes, cardiovascular disease, and certain forms of cancer.

    Of course, the cause of obesity is straightforward: People consume more calories than they need. One common-sense solution is to have people exercise more. Presumably the obesity epidemic is linked to a sedentary lifestyle. Throughout most of human history, our forebears lived physically demanding lives. In contrast, people today engage in limited physical activity. Presumably, this inactivity lowers daily energy expenditure, leading to excessive weight gain as caloric intake exceeds caloric output. Ready access to energy-dense foods only serves to exacerbate this caloric imbalance.

    But as it turns out, exercise appears to have little to no bearing on weight loss, defying conventional wisdom. While exercise has many health benefits, weight loss doesn’t appear to be one of them. Why? Because, based on the latest research, increasing our physical activity doesn’t lead to a greater caloric expenditure. As a corollary, the only way to lose weight is to restrict caloric intake.

    Constrained Energy Expenditure

    Over the course of the last few years, researchers at Hunter College have sought to understand what, if any, aspect of the Western lifestyle contributes to obesity. In the process, they have learned that the sedentary lifestyle in the West is not the problem. They discovered that when people transition from an inactive lifestyle to one characterized by moderate activity, a small increase in energy expenditure occurs. But, beyond that point, energy expenditure plateaus. Additional activity doesn’t translate into increased energy expenditure; instead total energy outlay appears to be constrained.
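The contrast between the traditional "additive" view and the constrained-expenditure finding can be sketched in a few lines. The figures below are hypothetical, chosen only to illustrate the plateau; they are not the Hunter College team's fitted values:

```python
def additive_tee(base, activity_cost):
    # Traditional view: every extra calorie of activity adds to
    # total daily energy expenditure (TEE).
    return base + activity_cost

def constrained_tee(base, activity_cost, ceiling):
    # Constrained view: TEE rises with modest activity, then
    # plateaus near a ceiling as the body compensates elsewhere.
    return min(base + activity_cost, ceiling)

# Hypothetical figures (kcal/day) for illustration only.
BASE, CEILING = 1600, 2600
for activity in (200, 600, 1200, 1800):
    print(activity,
          additive_tee(BASE, activity),
          constrained_tee(BASE, activity, CEILING))
```

Under the additive model, expenditure climbs without limit as activity increases; under the constrained model, the last two activity levels produce the same total output, which is the plateau the researchers observed.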

    For example, in 2012 the research team from Hunter College published the results of a study in which they examined the energy expenditure of the Hadza people, indigenous hunter-gatherers who live in the woodland and savanna of northern Tanzania. Anthropologists think that their way of living closely resembles the lifestyle of the first modern humans. As expected, the investigators determined that the Hadza are much more active than people who live Western lifestyles. Despite that difference, the average daily energy expenditure of the Hadza was no different from that of people from the Western world (once corrected for age, body size, and body composition).1

    In a broader study, the Hunter College scientists found the same trend when examining average daily energy expenditure for a sample of 332 people from Africa and North America. The sample included 25- to 45-year-old men and women representing people with a variety of lifestyles. After correcting for age, body size, and composition, average daily energy expenditure appeared to be constant, regardless of the amount of daily activity.2

    The Hunter College researchers speculate that as physical activity increases, our bodies conserve calories by reducing (1) our basal metabolic rate, (2) our repair processes, and (3) our growth rate. Additionally, women conserve energy by reducing estrogen production and (for women who are nursing) decreasing lactation. The researchers also speculate that men and women may reduce energy expenditure by altering their posturing behaviors.

    Constrained Energy Expenditure and the Case for Human Design

    In many ways, constrained energy expenditure functions as an ingenious design to ensure human survival. For most of human history, our ancestors lived as hunter-gatherers—a highly active, physically demanding way of life. Yet when hunting and foraging for food, day-to-day success is not guaranteed. Humans could never have endured as a species if our daily energy expenditures didn’t plateau. When caloric intake is low (because of food scarcity), reducing activity level is not an option for hunter-gatherers because reduced activity makes it even less likely that they will find enough food to provide the minimal daily caloric intake. When food is scarce, the only way to endure is to double down on foraging efforts. But increased foraging wouldn’t be possible if caloric expenditures increased linearly with activity. Constraining caloric output by slowing down basal metabolic rates and other processes allows hunter-gatherers to maintain high activity levels even when food isn’t plentiful.

    As a creationist, I see constrained energy expenditure as an ingenious biological design, befitting a Creator whose human creations are fearfully and wonderfully made.

    Constrained Energy Expenditure and the Case for Human Exceptionalism

    When it comes to constraining daily energy expenditure, humans aren’t unique. It appears as if all primates limit their daily energy outlay. For example, the daily energy expenditures of primates in the wild are no different from the caloric output of primates living out their lives in a zoo or in a laboratory setting.

    But what does make us unique is the magnitude of our daily energy expenditure. Humans require about 600 more calories per day than chimpanzees and nearly 1,000 more calories than orangutans.3 The primary reason for this difference is our large brain size. Maintenance of our large brains requires an energy outlay not demanded of other primates. Compared to other primates, we have accelerated metabolic processes.

    But our large brain size (and the advanced cognitive abilities, capacity for symbolism, and theory of mind that go along with it) allows us to thrive in the face of this additional energy demand. The first anatomically modern humans were adept at shaping their diets to consist of calorie-rich foods. Cooking their food also allowed them to extract more calories and other nutrients from the food they collected. They also shared food with one another. These practices reflect our unique nature as human beings and arise from our symbolism and capacity for theory of mind—properties that reflect the image of God.

    The unexpected relationship between physical activity and energy expenditure yields insights about human beings that may initially surprise those of us who view humans as the product of God’s handiwork. Constrained energy expenditure doesn’t make much sense if we think about it in the context of a Western lifestyle. But when we consider it in light of the way human beings have lived for most of human history, it makes perfect sense. And the difference in our average energy expenditure compared to other primates highlights our unique and exceptional nature, adding to the weight (pun intended) of evidence for human exceptionalism.

    Returning to my diet plan: I guess it doesn’t take a biochemist to know what to do to lose weight—just eat less.

    Resources

    Endnotes
    1. Herman Pontzer et al., “Hunter-Gatherer Energetics and Human Obesity,” PLoS ONE 7 (July 2012): id. e40503, doi:10.1371/journal.pone.0040503.
    2. Herman Pontzer et al., “Constrained Total Energy Expenditure and Metabolic Adaptation to Physical Activity in Adult Humans,” Current Biology 26 (February 2016): 410–17, doi:10.1016/j.cub.2015.12.046.
    3. Herman Pontzer et al., “Metabolic Acceleration and the Evolution of Human Brain Size and Life History,” Nature 533 (May 2016): 390–92, doi:10.1038/nature17654.
  • Protein-Binding Sites ENCODEd into the Design of the Human Genome

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Mar 15, 2017

    At last year’s AMP Conference, I delivered a talk titled: “How the Greatest Challenges Can Become the Greatest Opportunities for the Gospel.” I illustrated this point by describing three scientific concepts related to the origin of humanity that 20 years ago stood as insurmountable challenges to the traditional biblical view of human origins. But, thanks to scientific advances, these concepts have been replaced with new insights that turn these challenges into evidence for the Christian faith.

    The Challenge of Junk DNA

    One of the challenges I discussed centered on junk DNA—nonfunctional DNA littering the genomes of most organisms. Presumably, these nonfunctional DNA sequences arose through random biochemical, chemical, and physical events, in some instances converting functional DNA into useless junk. In fact, when the scientific community declared the human genome sequence completed in 2003, estimates indicated that around 95 percent of the human genome consisted of junk sequences.

    For as long as I have been involved in apologetics (around 20 years), skeptics (and believers) have regarded the high percentage of junk DNA in genomes as a significant problem for intelligent design and creation models. Why would an all-powerful, all-knowing, and all-good God create organisms with so much junk in their genomes? The shared junk DNA sequences found in the genomes of humans and the great apes compound this challenge. For many, these shared sequences serve as compelling evidence for common ancestry among humans and the other primates. Why would a Creator introduce nonfunctional DNA sequences into corresponding locations in the genomes of humans and the great apes?

    But what if the junk DNA sequences are functional? It would undermine the case for common descent, because these shared sequences could reasonably be interpreted as evidence for common design.

    The ENCODE Project

    In recent years, numerous discoveries indicate that virtually every class of junk DNA displays function, providing mounting support for a common-design interpretation of junk DNA. (For a summary, see the expanded and updated edition of Who Was Adam?) Perhaps the most significant advance toward that end came in the fall of 2012 with the publication of phase II results of the ENCODE project—a program carried out by a consortium of scientists with the goal of identifying the functional DNA sequence elements in the human genome.

    To the surprise of many, the ENCODE project reported that around 80 percent of the human genome displays function, with the expectation that this percentage should increase with phase III of the project. Many of the newly recognized functional elements play a central role in regulating gene expression. Others serve critical roles in establishing and maintaining the three-dimensional hierarchical structure of chromosomes.

    If valid, the ENCODE results would force a radical revision of the way scientists view the human genome. Instead of a wasteland littered with junk DNA sequences, the human genome (and the genome of other organisms) would have to be viewed as replete with functional elements, pointing to a system far more complex and sophisticated than ever imagined—befitting a Creator’s handiwork. (See the articles listed in the Resources section below for more details.)

    ENCODE Skeptics

    Within hours of the publication of the phase II results, evolutionary biologists condemned the ENCODE project, citing a number of technical issues with the way the study was designed and the way the results were interpreted. (For a response to these complaints go here, here, and here.)

    These technical complaints continue today, igniting the junk DNA war between evolutionary biologists and genomics scientists. Though the concerns expressed by evolutionary biologists are technical, some scientists have suggested that the real motivation behind the criticisms of the ENCODE project is philosophical—even theological—in nature. For example, molecular biologists John Mattick and Marcel Dinger write:

    There may also be another factor motivating the Graur et al. and related articles (van Bakel et al. 2010; Scanlan 2012), which is suggested by the sources and selection of quotations used at the beginning of the article, as well as in the use of the phrase ‘evolution-free gospel’ in its title (Graur et al. 2013): the argument of a largely non-functional genome is invoked by some evolutionary theorists in the debate against the proposition of intelligent design of life on earth, particularly with respect to the origin of humanity. In essence, the argument posits that the presence of non-protein-coding or so-called ‘junk DNA’ that comprises >90% of the human genome is evidence for the accumulation of evolutionary debris by blind Darwinian evolution, and argues against intelligent design, as an intelligent designer would presumably not fill the human genetic instruction set with meaningless information (Dawkins 1986; Collins 2006). This argument is threatened in the face of growing functional indices of noncoding regions of the genome, with the latter reciprocally used in support of the notion of intelligent design and to challenge the conception that natural selection accounts for the existence of complex organisms (Behe 2003; Wells 2011).1

    Is DNA-Binding Activity Functional?

    Even though there may be nonscientific reasons for the complaints leveled against the ENCODE project, it is important to address the technical concerns. One concern relates to how the ENCODE project determined biochemical function. Critics argued that ENCODE scientists conflated biochemical activity with function. As a case in point, three of the assays employed by the ENCODE consortium measured the binding of proteins to the genome, on the assumption that the binding of transcription factors and histones to DNA indicates a functional role for the target sequences. ENCODE skeptics, however, argue that most of the measured protein binding was random.

    Most DNA-binding proteins recognize and bind to short stretches of DNA (4 to 10 base pairs long) with highly specific nucleotide sequences. But given the massive size of the human genome (3.2 billion genetic letters), nonfunctional binding sites will occur throughout the genome for statistical reasons alone. To illustrate: many DNA-binding proteins functionally target roughly 1 to 100 sites in the genome, yet the genome potentially harbors between 1 million and 1 billion chance binding sites. Hundreds of sites that are slight variants of the target sequence will bind a given protein with strong affinity, and thousands more will bind it with weaker affinities. Hence, the ENCODE critics maintain that much of the protein binding measured by the ENCODE team was random and nonfunctional. To put it differently, much of the protein binding measured in the ENCODE assays is merely a consequence of random biochemical activity.
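    A quick back-of-envelope calculation (my own illustration, not a figure from the critics, and assuming for simplicity that all four bases occur with equal frequency) shows why chance matches to a short binding motif are unavoidable in a genome this large:

    ```python
    # Rough estimate of how many exact matches to one specific short motif
    # are expected by chance alone in a genome the size of the human genome.
    # Assumption: each of the four bases (A, C, G, T) occurs with probability 1/4.

    GENOME_SIZE = 3_200_000_000  # ~3.2 billion base pairs

    def expected_random_sites(motif_length: int, genome_size: int = GENOME_SIZE) -> float:
        """Expected number of exact chance matches to one specific motif."""
        return genome_size * (0.25 ** motif_length)

    for k in (4, 6, 8, 10):
        print(f"{k}-bp motif: ~{expected_random_sites(k):,.0f} chance matches")
    ```

    For an 8-base-pair site, that works out to roughly 48,800 exact chance matches, before even counting the near-miss variants that bind with weaker affinity, which is why the critics' statistical point has to be taken seriously.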

    Nonfunctional Protein Binding to DNA Is Rare

    This challenge has some merit, but it may not be decisive. In an earlier response to it, I acknowledged that some protein binding in genomes will be random and nonfunctional. Yet, based on my intuition as a biochemist, I argued that random binding of proteins throughout the genome would disrupt DNA metabolism and, from an evolutionary perspective, would have been eliminated by natural selection. (From an intelligent design/creation model vantage point, it is reasonable to expect that a Creator would design genomes with minimal nonfunctional protein-binding sites.)

    As it happens, new work by researchers from NYU affirms my assessment.2 These investigators demonstrated that protein binding in genomes is not random but highly specific. As a corollary, the human genome (and genomes of other organisms) contains very few nonfunctional protein-binding sites.

    To reach this conclusion, the researchers looked for nonfunctional protein-binding sites in the genomes of 75 organisms, representative of nearly every major biological group, and assessed the strength of their interaction with DNA-binding proteins. They began their project by measuring the binding affinity of a sample of regulatory proteins (from humans, mice, fruit flies, and yeast) for every possible 8-base-pair sequence (32,896 combinations, counting each sequence and its reverse complement as a single motif). Based on the binding affinity data, the NYU scientists discovered that nonfunctional binding sites with a high affinity for DNA-binding proteins occur infrequently in genomes. To use scientific jargon: the researchers found a negative correlation between a site's binding affinity and the frequency of nonfunctional copies of that site in genomes. Using statistical methods, they demonstrated that this pattern holds for all 75 genomes in their study.
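    The 32,896 figure can be checked with a short script. This is a minimal sketch of my own, not the NYU team's code: it enumerates every 8-base-pair sequence and collapses each with its reverse complement, since a double-stranded binding site presents the same motif no matter which strand is read.

    ```python
    # Enumerate all 8-base-pair DNA sequences and collapse reverse complements,
    # confirming the 32,896 distinct motifs mentioned in the text.

    from itertools import product

    COMPLEMENT = str.maketrans("ACGT", "TGCA")

    def reverse_complement(seq: str) -> str:
        return seq.translate(COMPLEMENT)[::-1]

    def canonical(seq: str) -> str:
        """Represent a motif and its reverse complement by a single string."""
        return min(seq, reverse_complement(seq))

    all_8mers = {"".join(p) for p in product("ACGT", repeat=8)}
    distinct_motifs = {canonical(s) for s in all_8mers}

    print(len(all_8mers))        # 65,536 raw 8-mers (4^8)
    print(len(distinct_motifs))  # 32,896 after collapsing reverse complements
    ```

    The count works out because 256 of the 65,536 8-mers are their own reverse complement, so the total is (65,536 + 256) / 2 = 32,896.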

    They then attempted to account for the frequency of nonfunctional binding sequences in genomes by modeling the evolutionary process, assuming neutral evolution in which random mutations accrue over time, free from the influence of natural selection. This modeling failed to reproduce the sequence distributions they observed in the genomes, leading them to conclude that natural selection must have weeded out high-affinity nonfunctional binding sites.

    These results make sense. The NYU scientists point out that protein mis-binding would be catastrophic for two reasons: (1) it would interfere with several key processes, such as transcription, gene regulation, replication, and DNA repair (the interference effect); and (2) it would create inefficiencies by rendering DNA-binding proteins unavailable to bind at functional sites (the titration effect). Though these problems may be insignificant for a given DNA-binding protein, the cumulative effects would be devastating because there are 100 to 1,000 DNA-binding proteins per genome with 10 to 10,000 copies of each protein.
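    As a rough illustration (my own arithmetic, using only the ranges quoted above), the sheer number of DNA-binding protein molecules implied by those figures shows why even a small per-protein mis-binding rate would accumulate into a serious burden:

    ```python
    # Combine the ranges quoted in the text: 100 to 1,000 distinct DNA-binding
    # proteins, with 10 to 10,000 copies of each, to estimate the total pool
    # of DNA-binding molecules searching the genome at once.

    proteins_low, proteins_high = 100, 1_000   # distinct DNA-binding proteins
    copies_low, copies_high = 10, 10_000       # copies of each protein

    molecules_low = proteins_low * copies_low
    molecules_high = proteins_high * copies_high

    print(f"{molecules_low:,} to {molecules_high:,} DNA-binding molecules")
    # Even a tiny mis-binding probability per molecule, multiplied across a
    # pool of thousands to millions of molecules, yields constant interference
    # and titration effects.
    ```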

    The Human Genome Is ENCODEd for Design

    Though the NYU researchers conducted their work from an evolutionary perspective, their results also make sense from an intelligent design/creation model vantage point. If genome sequences are truly the product of a Creator’s handiwork, then it is reasonable to think that the sequences comprising genomes would be optimized—in this case, to minimize protein mis-binding. Though evolutionary biologists maintain that natural selection shaped genomes for optimal protein binding, as a creationist I contend that an intelligent Agent—a Creator—shaped them.

    These results also have important implications for how we interpret the results of the ENCODE project. Given that the NYU researchers discovered that high-affinity nonfunctional binding sites rarely occur in genomes (and provided a rationale for why that is the case), it is difficult for critics of the ENCODE project to argue that the transcription factor and histone binding assays were measuring mostly random binding. Considering this recent work, it makes the most sense to interpret the protein-binding activity in the human genome as functionally significant, bolstering the original conclusion of the ENCODE project—namely, that most of the human genome consists of functional DNA sequence elements. It goes without saying: if the original conclusion of the ENCODE project stands, one of the best arguments for the evolutionary paradigm unravels.

    Our understanding of genomes is in its infancy. Forced by their commitment to the evolutionary paradigm, many biologists see genomes as the cobbled-together product of an unguided evolutionary history. But as this recent study attests, the more we learn about the structure and function of genomes, the more elegant and sophisticated they appear to be. And the more reasons we have to believe that genomes are the handiwork of our Creator.

    Resources

    Endnotes
    1. John S. Mattick and Marcel E. Dinger, “The Extent of Functionality in the Human Genome,” The HUGO Journal 7 (July 2013): doi:10.1186/1877-6566-7-2.
    2. Long Qian and Edo Kussell, “Genome-Wide Motif Statistics Are Shaped by DNA Binding Proteins over Evolutionary Time Scales,” Physical Review X 6 (October–December 2016): id. 041009, doi:10.1103/PhysRevX.6.041009.
  • Hagfish Slime Expands the Case for a Creator

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Mar 08, 2017

    The designs found in biological systems never cease to amaze me. Even something as gross and seemingly insignificant as hagfish slime displays remarkable properties, befitting the handiwork of a Creator. In fact, the design of hagfish slime is so ingenious, it is serving as the source of inspiration for researchers from the US Navy in their quest to develop new types of military technology.

    What Are Hagfish?

    Hagfish are ancient creatures that first appeared on Earth around 520 million years ago, with representative specimens recovered in the Cambrian fossil assemblages. These eel-like creatures are about 20 inches in length, with loose-fitting skin that varies in color from pink to blue-gray, depending on the species.

    Hagfish are jawless but have a mineralized encasement around their skull (cranium). With eyespots instead of true eyes, these creatures have no vision. Hagfish are bottom-dwellers that explore their environment using whisker-like structures. As scavengers, hagfish consume dead and dying creatures by burrowing into their bodies and ingesting the remains from the inside out. Remarkably, hagfish absorb nutrients through their skin and gills, in addition to feeding with their mouths. In fact, researchers estimate that close to half their nutrient intake comes through absorption.

    Hagfish Slime

    When disturbed or attacked by predators, hagfish secrete a slime from about 100 glands that line the flanks of their bodies. (This behavior explains why hagfish are sometimes called slime eels.) Produced by epithelial cells and gland thread cells, the slime rapidly expands to 10,000 times its original volume. A single hagfish can generate around 5.5 gallons of slime each time it’s disturbed. Once secreted, the slime coats the gills of attacking fish, suffocating the predator. With the predator distracted, the hagfish escapes, scraping the slime off its own body to prevent self-suffocation.

    Hagfish slime is made of two different types of proteins. One component, mucin, is a large protein found widely throughout nature that serves as the primary component of mucus. Secreted by epithelial cells, mucin interacts with water molecules, restricting their movement and contributing to the slime’s viscosity.1

    Additionally, hagfish slime contains long, thread-like proteins. These protein threads are 12 nanometers in diameter and 15 centimeters long! (That is one big molecule.) Scaled up to the thickness of a human hair, such a thread would be roughly a kilometer long. These protein fibers are also incredibly strong: though far thinner than a strand of human hair, a fiber is 10 times stronger than a comparable piece of nylon.

    Inside the gland thread cells, these protein fibers are carefully packaged like a skein of yarn, held together by other proteins that serve as a type of molecular glue.2 When the secreted hagfish slime contacts seawater, the glue proteins dissolve, leading to an explosive unraveling of the protein skeins, without any of the fibers becoming tangled. The protein threads contribute to the slime’s viscoelastic properties and provide the mechanism for the rapid swelling of the slime.

    Hagfish Slime Inspires Military Technologies

    The unusual and ingenious properties of the slime and its thread proteins have inspired researchers from the US Navy to explore their use in military technology. For example, the remarkable durability of the protein fibers (reminiscent of Kevlar) suggests an application in bulletproof vests. The slime itself could also serve as a flame retardant and as a shark repellent for Navy divers.

    Other commercial labs are exploring applications that include food packaging, bungee cords, and bandages. In fact, some have gone as far as to dub the thread proteins the ultimate biodegradable biofiber.

    Biomimetics and the Case for a Creator

    In recent years, engineers have routinely and systematically benefited from insights from biology, addressing engineering problems and inspiring new technologies either by directly copying (or mimicking) biological designs or by using insights from those designs to guide the engineering enterprise.

    From my perspective, the use of biological designs to guide engineering efforts fits awkwardly within the evolutionary paradigm. Why? Because evolutionary biologists view biological systems as the products of an unguided, historically contingent process that co-opts preexisting systems to cobble together new ones. Evolutionary mechanisms can optimize these systems, but they are still kludges.

    Given the unguided nature of evolutionary mechanisms, does it make sense for engineers to rely on biological systems to solve problems and inspire new technologies? Conversely, biomimetics and bioinspiration find a natural home in a creation model approach to biology. Using designs in nature to inspire engineering makes sense only if these designs arose from an intelligent Mind—even if they are as disgusting as the slime secreted by a bottom-dwelling scavenger.

    Resources

    Endnotes
    1. Lukas Böni et al., “Hagfish Slime and Mucin Flow Properties and Their Implications for Defense,” Scientific Reports 6 (July 2016): id. 30371, doi:10.1038/srep30371.
    2. Timothy Winegard et al., “Coiling and Maturation of a High-Performance Fibre in Hagfish Slime Gland Thread Cells,” Nature Communications 5 (April 2014): id. 3534, doi:10.1038/ncomms4534; Mark A. Bernards Jr. et al., “Spontaneous Unraveling of Hagfish Slime Thread Skeins Is Mediated by a Seawater-Soluble Protein Adhesive,” Journal of Experimental Biology 217 (April 2014): 1263–68, doi:10.1242/jeb.096909.
