Where Science and Faith Converge
  • But Do Watches Replicate? Addressing a Logical Challenge to the Watchmaker Argument

    Jan 22, 2020

    Were things better in the past than they are today? It depends who you ask.

    Without question, there are some things that were better in years gone by. And, clearly, there are some historical attitudes and customs that, today, we find hard to believe our ancestors considered to be an acceptable part of daily life.

    It isn’t just attitudes and customs that change over time. Ideas change, too—some for the better, some for the worse. Consider how the practice of science has evolved, particularly the study of biological systems. Was the way we approached the study of biological systems better in the past than it is today?

    It depends who you ask.

    As an old-earth creationist and intelligent design proponent, I think the approach biologists took in the past was better than today’s approach for one simple reason. Prior to Darwin, teleology was central to biology. In the late 1700s and early to mid-1800s, life scientists viewed biological systems as the product of a Mind. Consequently, design was front and center in biology.

    As part of the Darwinian revolution, teleology was cast aside. Mechanism replaced agency, and design was no longer part of the construct of biology. Instead of reflecting the purposeful design of a Mind, biological systems were now viewed as the outworking of unguided evolutionary mechanisms. For many people in today’s scientific community, biology is better for it.

    Prior to Darwin, the ideas shaped by thinkers (such as William Paley) and biologists (such as Sir Richard Owen) took center stage. Today, their ideas have been abandoned and are often lampooned.

    But, advances in my areas of expertise (biochemistry and origins-of-life research) justify a return to the design hypothesis, indicating that there may well be a role for teleology in biology. In fact, as I argue in my book The Cell’s Design, the latest insights into the structure and function of biomolecules bring us full circle to the ideas of William Paley (1743-1805), revitalizing his Watchmaker argument for God’s existence.

    In my view, many examples of molecular-level biomachinery stand as strict analogs to human-made machinery in terms of architecture, operation, and assembly. The biomachines found in the cell’s interior reveal a diversity of form and function that mirrors the diversity of designs produced by human engineers. The one-to-one relationship between the parts of man-made machines and the molecular components of biomachines is startling (e.g., the flagellum’s hook). I believe Paley’s case continues to gain strength as biochemists continue to discover new examples of biomolecular machines.

    The Skeptics’ Challenge

    Despite the powerful analogy that exists between machines produced by human designers and biomolecular machines, many skeptics continue to challenge the revitalized watchmaker argument on logical grounds by arguing in the same vein as David Hume.1 These skeptics assert that significant and fundamental differences exist between biomachines and human creations.

    In a recent interaction on Twitter, a skeptic raised just such an objection. Here is what he wrote:

    “Do [objects and machines designed by humans] replicate with heritable variation? Bad analogy, category mistake. Same one Paley made with his watch on the heath centuries ago.”

    In other words, biological systems replicate, whereas devices and artifacts made by human beings don’t. This difference is fundamental. The dissimilarity is so significant that it undermines the analogy between biological systems (in general) and biomolecular machines (in particular), on the one hand, and human designs, on the other, invalidating the conclusion that life must stem from a Mind.

    This is not the first time I have encountered this objection. Still, I don’t find it compelling because it fails to take into account man-made machines that do, indeed, replicate.

    Von Neumann’s Universal Constructor

    In the 1940s, mathematician, physicist, and computer scientist John von Neumann (1903–1957) designed a hypothetical machine called a universal constructor. This machine is a conceptual apparatus that can take materials from the environment and build any machine, including itself. The universal constructor requires instructions to build the desired machines and to build itself. It also requires a supervisory system that can switch back and forth between using the instructions to build other machines and copying the instructions prior to the replication of the universal constructor.
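
    To make the architecture concrete, here is a minimal sketch in Python of the logic von Neumann described. It is a toy model, and the machine, part, and mode names are illustrative inventions rather than von Neumann’s formalism. But it captures the essential point: the instruction tape gets used in two distinct modes, interpreted to build the machine and copied verbatim to equip the offspring, under a supervisory unit that switches between them.

    ```python
    # Toy model of von Neumann's universal constructor (illustrative only).
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Machine:
        parts: list                   # the machine "body," built from the tape
        tape: Optional[list] = None   # instructions; present in self-replicators

    def construct(tape):
        """Interpretation mode: read each instruction and fabricate a part."""
        return [f"part::{instruction}" for instruction in tape]

    def copy_tape(tape):
        """Copying mode: duplicate the tape without interpreting it."""
        return list(tape)

    def replicate(parent: Machine) -> Machine:
        """Supervisory unit: build the body, then attach a copy of the tape."""
        body = construct(parent.tape)             # mode 1: interpret the instructions
        offspring_tape = copy_tape(parent.tape)   # mode 2: copy them verbatim
        return Machine(parts=body, tape=offspring_tape)

    # A self-replicator: its tape describes its own parts.
    ancestor = Machine(parts=["part::rotor", "part::arm"], tape=["rotor", "arm"])
    child = replicate(ancestor)
    assert child == ancestor   # dataclass equality: same parts, same tape
    ```

    The cell exhibits the same division of labor: genetic information is interpreted (to build proteins) and copied verbatim (to supply daughter cells).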

    Von Neumann’s universal constructor is a conceptual apparatus, but today researchers are actively trying to design and build self-replicating machines.2 Much work needs to be done before self-replicating machines are a reality. Nevertheless, one day machines will be able to reproduce, making copies of themselves. To put it another way, reproduction isn’t necessarily a quality that distinguishes machines from biological systems.

    It is interesting to me that a description of von Neumann’s universal constructor bears remarkable similarity to a description of a cell. In fact, in the context of the origin-of-life problem, astrobiologists Paul Davies and Sara Imari Walker noted the analogy between the cell’s information systems and von Neumann’s universal constructor.3 Davies and Walker think that this analogy is key to solving the origin-of-life problem. I would agree. However, Davies and Walker support an evolutionary origin of life, whereas I maintain that the analogy between cells and von Neumann’s universal constructor adds vigor to the revitalized Watchmaker argument and, in turn, the scientific case for a Creator.

    In other words, the reproduction objection to the Watchmaker argument has little going for it. Self-replication is not the basis for viewing biomolecular machines as fundamentally dissimilar to machines created by human designers. Instead, self-replication stands as one more machine-like attribute of biochemical systems. It also highlights the sophistication of biological systems compared to systems produced by human designers. We are a long way from creating machines that are as sophisticated as the machines found inside the cell. Nevertheless, as we continue to move in that direction, I think the case for a Creator will become even more compelling.

    Who knows? With insights such as these, maybe one day we will return to the good old days of biology, when teleology was paramount.

    Resources

    Biomolecular Machines and the Watchmaker Argument

    Responding to Challenges to the Watchmaker Argument

    Endnotes
    1. “Whenever you depart, in the least, from the similarity of the cases, you diminish proportionably the evidence; and may at last bring it to a very weak analogy, which is confessedly liable to error and uncertainty.” David Hume, “Dialogues Concerning Natural Religion,” in Classics of Western Philosophy, 3rd ed., ed. Steven M. Cahn (1779; repr., Indianapolis: Hackett, 1990), 880.
    2. For example, Daniel Mange et al., “Von Neumann Revisited: A Turing Machine with Self-Repair and Self-Reproduction Properties,” Robotics and Autonomous Systems 22 (1997): 35–58, doi:10.1016/S0921-8890(97)00015-8; Jean-Yves Perrier, Moshe Sipper, and Jacques Zahnd, “Toward a Viable, Self-Reproducing Universal Computer,” Physica D: Nonlinear Phenomena 97, no. 4 (October 15, 1996): 335–52, doi:10.1016/0167-2789(96)00091-7; Umberto Pesavento, “An Implementation of von Neumann’s Self-Reproducing Machine,” Artificial Life 2, no. 4 (Summer 1995): 337–54, doi:10.1162/artl.1995.2.4.337.
    3. Sara Imari Walker and Paul C. W. Davies, “The Algorithmic Origins of Life,” Journal of the Royal Society Interface 10 (2013), doi:10.1098/rsif.2012.0869.
  • The Flagellum’s Hook Connects to the Case for a Creator

    Jan 08, 2020

    What would you say is the most readily recognizable scientific icon? Is it DNA, a telescope, or maybe a test tube?


    Figure 1: Scientific Icons. Image credit: Shutterstock

    Marketing experts recognize the power of icons. When used well, icons prompt consumers to instantly identify a brand or product. They can also communicate a powerful message with a single glance.

    Though many skeptics question whether it’s science at all, the intelligent design movement has identified a powerful icon that communicates its message. Today, when most people see an image of the bacterial flagellum, they immediately think: Intelligent Design.

    This massive protein complex communicates sophisticated engineering that could only come from an Intelligent Agent, and along these lines, it serves as a powerful piece of evidence for a Creator’s handiwork. Careful study of its molecular architecture and operation provides detailed evidence that an Intelligent Agent must be responsible for biochemical systems and, hence, the origin of life. And, as it turns out, the more we learn about the bacterial flagellum, the more evident it becomes that a Creator must have played a role in the origin and design of life—at least at the biochemical level—as new research from Japan illustrates.1

    The Bacterial Flagellum

    This massive protein complex looks like a whip extending from the bacterial cell surface. Some bacteria have only a single flagellum; others possess several. Rotation of the flagellum (or flagella) allows the bacterial cell to navigate its environment in response to various chemical signals.


    Figure 2: Typical Bacteria with Flagella. Image credit: Shutterstock

    An ensemble of 30 to 40 different proteins makes up the typical bacterial flagellum. These proteins function in concert as a literal rotary motor. The flagellum’s components include a rotor, stator, drive shaft, bushing, universal joint, and propeller. It is essentially a molecular-sized electrical motor directly analogous to human-produced rotary motors. The rotation is powered by positively charged hydrogen ions flowing through the motor proteins embedded in the inner membrane.


    Figure 3: The Bacterial Flagellum. Image credit: Wikipedia

    The Bacterial Flagellum and the Revitalized Watchmaker Argument

    Typically, when intelligent design proponents/creationists use the bacterial flagellum to make the case for a Creator, they focus the argument on its irreducibly complex nature. I prefer a different tack. I like to emphasize the eerie similarity between rotary motors created by human designers and nature’s bacterial flagella.

    The bacterial flagellum is just one of a large number of protein complexes with machine-like attributes. (I devote an entire chapter to biomolecular machines in my book The Cell’s Design.) Collectively, these biomolecular machines can be deployed to revitalize the Watchmaker argument.

    Popularized by William Paley in the eighteenth century, this argument states that as a watch requires a watchmaker, so too, life requires a Creator. Following Paley’s line of reasoning, a machine is emblematic of systems produced by intelligent agents. Biomolecular machines display the same attributes as human-crafted machines. Therefore, if the work of intelligent agents is necessary to explain the genesis of machines, shouldn’t the same be true for biochemical systems?

    Skeptics inspired by atheist philosopher David Hume have challenged this simple, yet powerful, analogy. They argue that the analogy would be compelling only if there is a high degree of similarity between the objects that form the analogy. Skeptics have long argued that biochemical systems and machines are too dissimilar to make the Watchmaker argument work.

    However, the striking similarity between the machine parts of the bacterial flagellum and human-made machines causes this objection to evaporate. New work on flagella by Japanese investigators lends yet more support to the Watchmaker analogy.

    New Insights into the Structure and Function of the Flagellum’s Universal Joint

    The flagellum’s universal joint (sometimes referred to as the hook) transfers the torque generated by the motor to the propeller. The research team wanted to develop a deeper understanding of how the hook’s molecular structure influences its function as a universal joint.

    Composed of nearly 100 copies (monomers) of a protein called FlgE, the hook is a curved, tube-like structure with a hollow interior. FlgE monomers stack on top of one another to form a protofilament. Eleven protofilaments organize to form the hook’s tube, with the long axes of the protofilaments aligned along the long axis of the hook.

    Each FlgE monomer consists of three domains, called D0, D1, and D2. The researchers discovered that when the FlgE monomers stack to form a protofilament, the D0, D1, and D2 domains of the monomers align along the length of the protofilament to form three distinct layers in the hook: the tube layer, the mesh layer, and the spring layer.

    During the rotation of the flagellum, the protofilaments experience compression and extension. The movement of the domains, which changes their spatial arrangement relative to one another, mediates the compression and extension. These domain movements allow the hook to function as a universal joint that maintains a rigid tube shape against a twisting “force,” while concurrently transmitting torque from the motor to the flagellum’s filament as it bends along its axis.
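
    For readers who want to see the universal-joint behavior in more concrete terms, here is a short geometric sketch in Python. The tube radius and curvature values are illustrative assumptions, not measurements from the study. The point is simply that, for a bent tube, a protofilament’s extension or compression depends on its angular position around the tube, so each of the 11 protofilaments cycles smoothly through extension and compression as the hook rotates, transmitting torque around the bend.

    ```python
    import math

    N_PROTOFILAMENTS = 11
    TUBE_RADIUS = 9.0     # nm; illustrative assumption, not from the paper
    CURVATURE = 0.01      # 1/nm; illustrative assumption, not from the paper

    def protofilament_strain(i: int, rotation: float) -> float:
        """Fractional length change of protofilament i at a given motor angle.

        A protofilament on the outside of the bend is extended (positive
        strain); one on the inside is compressed (negative strain).
        """
        phi = 2 * math.pi * i / N_PROTOFILAMENTS    # azimuth around the tube
        return CURVATURE * TUBE_RADIUS * math.cos(phi + rotation)

    # One half-turn of the motor in quarter-turn steps: the pattern of
    # extension and compression sweeps around the tube, so the hook stays
    # bent (transmitting torque around the corner) while no protofilament
    # is permanently stretched or crushed.
    for step in range(3):
        rotation = step * math.pi / 2
        strains = [protofilament_strain(i, rotation) for i in range(N_PROTOFILAMENTS)]
        print(" ".join(f"{s:+.3f}" for s in strains))
    ```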

    Regardless of one’s worldview, it is hard not to marvel at the sophisticated and elegant design of the flagellum’s hook!

    The Bacterial Flagellum and the Case for a Creator

    If the Watchmaker argument is valid, it seems reasonable to think that the more we learn about protein complexes, such as the bacterial flagellum, the more machine-like they should appear to be. This work by the Japanese biochemists bears out this expectation. The more we characterize biomolecular machines, the more reason we have to think that life stems from a Creator’s handiwork.

    Dynamic properties of the hook assembly add to the Watchmaker argument (when applied to the bacterial flagellum). This structure is much more sophisticated and ingenious than the design of a typical universal joint crafted by human designers. The elegance and ingenuity of the hook are exactly the attributes I would expect if a Creator played a role in the origin and design of life.

    Message received, loud and clear.

    Resources

    The Bacterial Flagellum and the Case for a Creator

    Can Intelligent Design Be Part of the Scientific Construct?

    Endnotes
    1. Takayuki Kato et al., “Structure of the Native Supercoiled Flagellar Hook as a Universal Joint,” Nature Communications 10 (2019): 5295, doi:10.1038/s4146.
  • Genome Code Builds the Case for Creation

    Dec 18, 2019

    A few days ago, I was doing a bit of Christmas shopping for my grandkids and happened across some really cool construction kits designed to teach children engineering principles while encouraging imaginative play.

    These building block sets are a far cry from the simple Lego kits I played with as a kid.

    As cool as these construction toys may be, they don’t come close to the sophisticated construction kit cells use to build the higher-order structures of chromosomes. This point is powerfully illustrated by the insights of Italian investigator Giorgio Bernardi. Over the course of the last several years, Bernardi’s research teams have uncovered design principles that account for chromosome structure, a set of rules that he refers to as the genome code.1

    To appreciate these principles and their theological implications, a little background information is in order. (For those readers familiar with chromosome structure, skip ahead to The Genome Code.)

    Chromosomes

    DNA and proteins interact to make chromosomes. Each chromosome consists of a single DNA molecule wrapped around a series of globular protein complexes. These complexes repeat to form a supramolecular structure resembling a string of beads. Biochemists refer to the “beads” as nucleosomes.


    Figure 1: Nucleosome Structure. Image credit: Shutterstock

    The chain of nucleosomes further coils to form a structure called a solenoid. In turn, the solenoid condenses to form higher-order structures that constitute the chromosome.


    Figure 2: Chromosome Structure. Image credit: Shutterstock
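
    Some back-of-the-envelope arithmetic conveys how much packaging this hierarchy accomplishes. The numbers below are common textbook approximations, not values from the work discussed here: roughly 147 base pairs wrap each nucleosome, a repeat of about 200 base pairs includes the linker DNA, and the double helix rises about 0.34 nanometers per base pair.

    ```python
    BP_PER_REPEAT = 200          # ~147 bp around the nucleosome + linker DNA
    NM_PER_BP = 0.34             # rise of the B-form double helix per base pair
    NUCLEOSOME_DIAMETER = 11.0   # nm; the "bead" on the string

    naked_dna_per_repeat = BP_PER_REPEAT * NM_PER_BP       # ~68 nm of free DNA
    compaction = naked_dna_per_repeat / NUCLEOSOME_DIAMETER
    print(f"beads-on-a-string compaction: ~{compaction:.0f}-fold")

    # Coiling the nucleosome chain into the ~30 nm solenoid fiber, and then
    # folding the fiber into higher-order loops, multiplies this factor
    # thousands of times over by the time a chromosome fully condenses.
    ```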

    Between cell division events (called the interphase of the cell cycle), the chromosome exists in an extended diffuse form that is not readily detectable when viewed with a microscope. Just prior to and during cell division, the chromosome condenses to form its readily recognizable compact structures.

    Biologists have discovered that there are two distinct regions—labeled euchromatin and heterochromatin—for chromosomes in the diffuse state. Euchromatin resists staining with the dyes that help researchers view chromosomes with a microscope. On the other hand, heterochromatin stains readily. Biologists believe that heterochromatin is more tightly packed (and, hence, more readily stained) than euchromatin. They have also learned that heterochromatin associates with the nuclear envelope.


    Figure 3: Structure of the Nucleus Showing the Distribution of Euchromatin and Heterochromatin. Image credit: Wikipedia

    The Genome Code

    Historically, biologists have viewed chromosomes as consisting of compositionally distinct units called isochores. In vertebrate genomes, five isochore families exist (L1, L2, H1, H2, and H3). The isochores differ in their composition of guanine- and cytosine-containing deoxyribonucleotides (two of the four building blocks of DNA). The GC composition increases from L1 to H3. Gene density also increases, with the H3 isochore possessing the greatest number of genes. On the other hand, the size of the compositionally homogeneous DNA segments decreases from L1 to H3.
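
    As a hedged illustration of how such a classification works in practice, the sketch below bins DNA windows into the five isochore families by GC fraction. The threshold values are approximations of ranges commonly attributed to Bernardi’s isochore work; treat them as illustrative rather than exact.

    ```python
    ISOCHORE_BOUNDS = [          # (family, upper bound on GC fraction); approximate
        ("L1", 0.37), ("L2", 0.41), ("H1", 0.46), ("H2", 0.53), ("H3", 1.00),
    ]

    def gc_fraction(window: str) -> float:
        """Fraction of G and C bases in a DNA sequence window."""
        window = window.upper()
        return (window.count("G") + window.count("C")) / len(window)

    def classify_isochore(window: str) -> str:
        """Assign a window to the first isochore family whose bound it fits under."""
        gc = gc_fraction(window)
        for family, bound in ISOCHORE_BOUNDS:
            if gc <= bound:
                return family
        return "H3"

    print(classify_isochore("ATATATGCAT" * 10))  # GC-poor window -> "L1"
    print(classify_isochore("GCGCGCATGC" * 10))  # GC-rich window -> "H3"
    ```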

    Bernardi and his collaborators have developed evidence that the isochores reflect a fundamental unit of chromosome organization. The H isochores correspond to GC-rich euchromatin (containing most of the genes) and the L isochores correspond to GC-poor heterochromatin (characterized by gene deserts).

    Bernardi’s research teams have demonstrated that the two groups of isochores are characterized by different distributions of DNA sequence elements. GC-poor isochores contain a disproportionately high level of oligo A sequences while GC-rich isochores harbor a disproportionately high level of oligo G sequences. These two different types of DNA sequence elements form stiff structures that mold the overall three-dimensional architecture of chromosomes. For example, oligo A sequences introduce curvature to the DNA double helix. This topology allows the double helix to wrap around the protein core that forms nucleosomes. The oligo G sequence elements adopt a topology that weakens binding to the proteins that form the nucleosome core. As Bernardi points out, “There is a fundamental link between DNA structure and chromatin structure, the genomic code.”2

    In other words, the genomic code refers to a set of DNA sequence elements that:

    1. Directly encodes and molds chromosome structure (while defining nucleosome binding),
    2. Is pervasive throughout the genome, and
    3. Overlaps the genetic code by constraining sequence composition and gene structure.

    Because of the existence of the genomic code, variations in DNA sequence caused by mutations will alter the structure of chromosomes and lead to deleterious effects.

    The bottom line: Most of the genomic sequence plays a role in establishing the higher-order structures necessary for chromosome formation.

    Genomic Code Challenges the Junk DNA Concept

    According to Bernardi, the discovery of the genomic code explains the high levels of noncoding DNA sequences in genomes. Many people view such sequences as vestiges of an evolutionary history. But because of the existence and importance of the genomic code, the vast proportion of noncoding DNA found in vertebrate genomes must be viewed as functionally vital. Bernardi writes:

    Ohno, mostly focusing on pseudo-genes, proposed that non-coding DNA was “junk DNA.” Doolittle and Sapienza and Orgel and Crick suggested the idea of “selfish DNA,” mainly involving transposons visualized as molecular parasites rather than having an adaptive function for their hosts. In contrast, the ENCODE project claimed that the majority (~80%) of the genome participated “in at least one biochemical RNA-and/or chromatin-associated event in at least one cell type.”…At first sight, the pervasive involvement of isochores in the formation of chromatin domains and spatial compartments seems to leave little or no room for “junk” or “selfish” DNA.3

    The ENCODE Project

    Over the last decade or so, ENCODE Project scientists have been seeking to identify the functional DNA sequence elements in the human genome. The most important landmark for the project came in the fall of 2012, when the ENCODE Project reported phase II results. (Currently, ENCODE is in phase IV.) To the surprise of many, the project reported that around 80 percent of the human genome displays biochemical activity—hence, function—with many scientists anticipating that the percentage would increase as phases III and IV moved toward completion.

    The ENCODE results have generated quite a bit of controversy, to say the least. Some researchers accept the ENCODE conclusions. Others vehemently argue that the conclusions fly in the face of the evolutionary paradigm and, therefore, can’t be valid. Of course, if the ENCODE Project conclusions are correct, then it becomes a boon for creationists and intelligent design advocates.

    One of the most prominent complaints about the ENCODE conclusions relates to the way the consortium determined biochemical function. Critics argue that ENCODE scientists conflated biochemical activity with function. These critics assert that, at most, about ten percent of the human genome is truly functional, with the remainder of the activity reflecting biochemical noise and experimental artifacts.

    However, as Bernardi points out, his work (independent of the ENCODE Project) affirms the project’s conclusions. In this case, the so-called junk DNA plays a critical role in molding the structures of chromosomes and must be considered functional.

    Function for “Junk DNA”

    Bernardi’s work is not the first to recognize pervasive function for noncoding DNA. Other researchers have identified additional functional attributes of noncoding DNA. To date, researchers have identified at least five distinct functional roles that noncoding DNA plays in genomes:

    1. Helps in gene regulation
    2. Functions as a mutational buffer
    3. Forms a nucleoskeleton
    4. Serves as an attachment site for mitotic apparatus
    5. Dictates three-dimensional architecture of chromosomes

    A New View of Genomes

    These types of insights are forcing us to radically rethink our view of the human genome. It appears that genomes are incredibly complex, sophisticated biochemical systems, with most of their sequences serving useful and necessary functions.

    We have come a long way from the early days of the human genome project. Just 15 years ago, many scientists estimated that around 95 percent of the human genome consists of junk. That acknowledgment seemingly provided compelling evidence that humans must be the product of an evolutionary history. Today, the evidence suggests that the more we learn about the structure and function of genomes, the more elegant and sophisticated they appear to be. It is quite possible that most of the human genome is functional.

    For creationists and intelligent design proponents, this changing view of the human genome provides reasons to think that it is the handiwork of our Creator. A skeptic might wonder why a Creator would make genomes littered with so much junk. But if a vast proportion of genomes consists of functional sequences, then this challenge no longer carries weight and it becomes more and more reasonable to interpret genomes from within a creation model/intelligent design framework.

    What a Christmas gift!

    Resources

    Junk DNA Regulates Gene Expression

    Junk DNA Serves as a Mutational Buffer

    Junk DNA Serves a Nucleoskeletal Role

    Junk DNA Plays a Role in Cell Division

    ENCODE Project

    Studies that Affirm the ENCODE Results

    Endnotes
    1. Giorgio Bernardi, “The Genomic Code: A Pervasive Encoding/Molding of Chromatin Structures and a Solution of the ‘Non-Coding DNA’ Mystery,” BioEssays 41, no. 12 (November 8, 2019), doi:10.1002/bies.201900106.
    2. Bernardi, “The Genomic Code.”
    3. Bernardi, “The Genomic Code.”
  • Mutations, Cancer, and the Case for a Creator

    Dec 11, 2019

    Cancer. Perhaps no other word evokes more fear, anger, and hopelessness.

    It goes without saying that cancer is an insidious disease. People who get cancer often die way too early. And even though a cancer diagnosis is no longer an immediate death sentence—thanks to biomedical advances—there are still many forms of cancer that are difficult to manage, let alone effectively treat.

    Cancer also causes quite a bit of consternation for those of us who use insights from science to make a case for a Creator. From my vantage point, one of the most compelling reasons to think that a Creator exists and played a role in the origin and design of life is the elegant, sophisticated, and ingenious designs of biochemical systems. And yet, when I share this evidence with skeptics—and even seekers—I am often met with resistance in the form of the question: What about cancer?

    Why Would God Create a World Where Cancer Is Possible?

    In effect, this question typifies one of the most common—and significant—objections to the design argument. If a Creator is responsible for the designs found in biochemistry, then why are so many biochemical systems seemingly flawed, inelegant, and poorly designed?

    The challenge cancer presents for the design argument carries an added punch. It’s one thing to cite inefficiency of protein synthesis or the error-prone nature of the rubisco enzyme, but it’s quite another to describe the suffering of a loved one who died from cancer. There’s an emotional weight to the objection. These deaths feel horribly unjust.

    Couldn’t a Creator design biochemistry so that a disease as horrific as cancer would never be possible—particularly if this Creator is all-powerful, all-knowing, and all-good?

    I think it’s possible to present a good answer to the challenge that cancer (and other so-called bad designs) poses for the design argument. Recent insights published by a research duo from Cambridge University in the UK help make the case.1

    A Response to the Bad Designs in Biochemistry and Biology

    Because the “bad designs” challenge is so significant (and so frequently expressed), I devoted an entire chapter in The Cell’s Design to addressing the apparent imperfections of biochemical systems. My goal in that chapter was to erect a framework that comprehensively addresses this pervasive problem for the design argument.

    In the face of this challenge it is important to recognize that many so-called biochemical flaws are not genuine flaws at all. Instead, they arise as the consequences of trade-offs. In their cellular roles, many biochemical systems face two (or more) competing objectives. Effectively managing these opposing objectives means that it is impossible for every aspect of the system to perform at an optimal level. Some features must be carefully rendered suboptimal to ensure that the overall system performs robustly under a wide range of conditions.

    Cancer falls into this category. It is not a consequence of flawed biochemical designs. Instead, cancer reflects a trade-off between DNA repair and cell survival.

    DNA Damage and Cancer

    The etiology (cause) of most cancers is complex. While about 10 percent of cancers have a hereditary basis, the vast proportion results from mutations to DNA caused by environmental factors.

    Some of the damage to DNA stems from endogenous (internal) factors, such as water and oxygen in the cell. These materials cause hydrolysis and oxidative damage to DNA, respectively. Both types of damage can introduce mutations into this biomolecule. Exogenous chemicals (genotoxins) from the environment can also interact with DNA and cause damage leading to mutations. So does exposure to ultraviolet radiation and radioactivity from the environment.

    Infectious agents such as viruses can also cause cancer. Again, these infectious agents cause genomic instability, which leads to DNA mutations.


    Figure: Tumor Formation Process. Image credit: Shutterstock

    In effect, DNA mutations are an inevitable consequence of the laws of nature, specifically the first and second laws of thermodynamics. These laws make possible the chemical structures and operations necessary for life to even exist. But, as a consequence, these same life-giving laws also undergird chemical and physical processes that damage DNA.

    Fortunately, cells have the capacity to detect and repair damage to DNA. These DNA repair pathways are elaborate and sophisticated. They are the type of biochemical features that seem to support the case for a Creator. DNA repair pathways counteract the deleterious effects of DNA mutation by correcting the damage and preventing the onset of cancer.

    Unfortunately, these DNA repair processes function incompletely. They fail to fully compensate for all of the damage that occurs to DNA. Consequently, over time, mutations accrue in DNA, leading to the onset of cancer. The inability of the cell’s machinery to repair all of the mutation-causing DNA damage and, ultimately, protect humans (and other animals) from cancer is precisely the thing that skeptics and seekers alike point to as evidence that counts against intelligent design.

    Why would a Creator make a world where cancer is possible and then design cancer-preventing processes that are only partially effective?

    Cancer: The Result of a Trade-Off

    Even though mutations to DNA cause cancer, it is rare that a single mutation leads to the formation of a malignant cell type and, subsequently, tumor growth. Biomedical researchers have discovered that the onset of cancer involves a series of mutations to highly specific genes (dubbed cancer genes). The mutations that cause cells to transform into cancer cells are referred to as driver mutations. Researchers have also learned that most cells in the body harbor a vast number of mutations that have little or no biological consequence. These mutations are called passenger mutations. As it turns out, there are thousands of passenger mutations in a typical cancer cell and only about ten driver mutations to so-called cancer genes. Biomedical investigators have also learned that many normal cells harbor both passenger and driver mutations without ever transforming. (It appears that other factors unrelated to DNA mutation play a role in causing a cancer cell to undergo extensive clonal expansion, leading to the formation of a tumor.)
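
    A toy Monte Carlo model helps convey why pervasive mutation coexists with rare transformation. All of the numbers below are illustrative assumptions rather than measured rates: a lifetime mutation count per cell, a small probability that any given mutation hits a cancer gene, and transformation modeled as accumulating ten driver hits.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    N_CELLS = 100_000              # cells simulated
    LIFETIME_MUTATIONS = 3_000     # assumed mutations per cell over a lifetime
    P_DRIVER = 1e-3                # assumed chance a mutation hits a cancer gene
    DRIVERS_TO_TRANSFORM = 10      # assumed driver count needed to transform

    # Draw each cell's driver count; the rest of its mutations are passengers.
    drivers = rng.binomial(LIFETIME_MUTATIONS, P_DRIVER, size=N_CELLS)

    print(f"mean drivers per cell: {drivers.mean():.1f} "
          f"(so ~{LIFETIME_MUTATIONS - int(drivers.mean())} passengers)")
    transformed = int((drivers >= DRIVERS_TO_TRANSFORM).sum())
    print(f"cells reaching {DRIVERS_TO_TRANSFORM} drivers: {transformed} of {N_CELLS}")

    # With these assumptions, every cell carries thousands of passenger
    # mutations, yet only ~0.1% accumulate enough drivers to transform:
    # mutation is pervasive, but transformation is the rare exception.
    ```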

    What this means is that mutations to DNA are quite extensive, even in normal, healthy cells. But this factor prompts the question: Why is the DNA repair process so lackluster?

    The research duo from Cambridge University speculate that DNA repair is so costly to cells—making extensive use of energy and cell resources—that to maintain pristine genomes would compromise cell survival. These researchers conclude that “DNA quality control pathways are fully functional but naturally permissive of mutagenesis even in normal cells.”2 And, it seems as if the permissiveness of the DNA repair processes generally have little consequence given that a vast proportion of the human genome consists of noncoding DNA.

    Biomedical researchers have uncovered another interesting feature about the DNA repair processes. The processes are “biased,” with repairs taking place preferentially on the DNA strand (of the double helix) that codes for proteins and, hence, is transcribed. In other words, when DNA repair takes place it occurs where it counts the most. This bias displays an elegant molecular logic and rationale, strengthening the case for design.

    Given that driver mutations are not in and of themselves sufficient to lead to tumor formation, the researchers conclude that cancer prevention pathways are quite impressive in the human body. They conclude, “Considering that an adult human has ~30 trillion cells, and only one cell develops into a cancer, human cells are remarkably robust at preventing cancer.”3

    So, what about cancer?

    Though cancer ravages the lives of so many people, it is not because of poorly designed, substandard biochemical systems. Given that we live in a universe that conforms to the laws of thermodynamics, cancer is inevitable. Despite this inevitability, organisms are designed to effectively ward off cancer.

    Ironically, as we gain a better understanding of the process of oncogenesis (the development of tumors), we are uncovering more—not less—evidence for the remarkably elegant and ingenious designs of biochemical systems.

    The insights by the research team from Cambridge University provide us with a cautionary lesson. We are often quick to declare a biochemical (or biological) feature as poorly designed based on incomplete understanding of the system. Yet, inevitably, as we learn more about the system we discover an exquisite rationale for why things are the way they are. Such knowledge is consistent with the idea that these systems stem from a Creator’s handiwork.

    Still, this recognition does little to dampen the fear and frustration associated with a cancer diagnosis and the pain and suffering experienced by those who battle cancer (and their loved ones who stand on the sidelines watching the fight take place). But, whether we are a skeptic or a believer, we all should be encouraged by the latest insights developed by the Cambridge researchers. The more we understand about the cause and progression of cancers, the closer we are to one day finding cures to a disease that takes so much from us.

    We can also take added encouragement from the powerful scientific case for a Creator’s existence. The Old and New Testaments teach us that the Creator revealed by scientific discovery has suffered on our behalf and will suffer alongside us—in the person of Christ—as we walk through the difficult circumstances of life.

    Resources

    Examples of Biochemical Trade-Offs

    Evidence that Nonfunctional DNA Serves as a Mutational Buffer

    Endnotes
    1. Serena Nik-Zainal and Benjamin A. Hall, “Cellular Survival over Genomic Perfection,” Science 366, no. 6467 (November 15, 2019): 802–03, doi:10.1126/science.aax8046.
    2. Nik-Zainal and Hall, 802–03.
    3. Nik-Zainal and Hall, 802–03.
  • Evolutionary Story Tells the Tale of Creation

    Dec 04, 2019

    In high school I was a bit of a troublemaker. It wasn’t out of the ordinary for me to be summoned to the office of Mr. Reynolds—the school’s vice principal—for some misdeed or other. After a few office visits, I quickly learned the value of a good story. If convincing enough, I could deflect the accusations leveled against me. All I had to do was create plausible deniability.

    Story Telling in the Evolutionary Paradigm

    Storytelling isn’t just the purview of a mischievous kid facing the music in the principal’s office; it is part of the construct of science.

    Recent work by a team of scientific investigators from the University of Florida (UF) highlights the central role that storytelling plays in evolutionary biology.1 In fact, it is not uncommon for evolutionary biologists to weave grand narratives that offer plausible evolutionary stories for the emergence of biological or behavioral traits. And, though these accounts seem scientific, they are often unverifiable.

    Inspired by Rudyard Kipling’s (1865–1936) book of children’s origin stories, the late evolutionary biologist Stephen Jay Gould (1941–2002) referred to these evolutionary tales as just-so stories. To be fair, others have been critical of Gould’s cynical view of evolutionary accounts, arguing that, in reality, just-so stories in evolutionary biology are actually hypotheses about evolutionary transformations. But still, more often than not, these “hypotheses” appear to be little more than convenient fictions.

    An Evolutionary Just-So Story of Moths and Bats

    The traditional evolutionary account of ultrasonic sound detection in nocturnal moths serves as a case in point. Moths (and butterflies) belong to one of the most important groups of insects: Lepidoptera. This group consists of about 160,000 species, with nocturnal moths comprising over 75 percent of the group.

    Moths play a key role in ecosystems. For example, they serve as one of the primary food sources for bats. Bats use echolocation to help them locate moths at night. Bats emit ultrasonic cries that bounce off the moths and reflect back to the bats, giving these predators the pinpoint location of the moths, even during flight.

    Many nocturnal moth species have defenses that help them escape predation by bats. One defense is ears (located in different areas of their bodies) that detect ultrasonic sounds. This capability allows the moths to hear the bats coming and get out of their way.

    For nearly a half century, evolutionary biologists explained moths’ ability to hear ultrasonic sounds as the outworking of an “evolutionary arms race” between echolocating bats and nocturnal moths. Presumably, bats evolved the ability to echolocate, allowing them to detect and prey upon moths at night by plucking them out of the air in mid-flight. In response, some groups of moths evolved ears that allowed them to detect the ultrasonic screeches emitted by bats, helping them to avoid detection.


    Figure: Flying Pipistrelle bat. Image credit: Shutterstock

    For 50 years, biologists have studied the relationship between echolocating bats and nocturnal moths with the assumption that this explanation is true. (I doubt Mr. Reynolds ever assumed my stories were true.) In fact, evolutionary accounts like this one provide evidence for the idea of coevolution. Advanced by Paul Ehrlich and Peter Raven in 1964, this evolutionary model maintains that ecosystems are shaped by species that affect one another’s evolution.

    If the UF team’s work is to be believed, then it turns out that the story recounting the evolutionary arms race between nocturnal moths and echolocating bats is fictional. As team member Jesse Barber, a researcher who has studied bats and moths, complains, “Most of the introductions I’ve written in my papers [describing the coevolution of bats and moths] are wrong.”2

    An Evolutionary Study on the Origin of Moths and Butterflies

    To reach this conclusion, the UF team generated the most robust evolutionary tree (phylogeny) for lepidopterans to date. They also developed an understanding of the timing of events in lepidopteran natural history. They were motivated to take on this challenge because of the ecological importance of moths and butterflies. As noted, these insects play a central role in terrestrial ecosystems all over the world and coevolutionary models provide the chief explanations for their place in these ecosystems. But, as the UF researchers note, “These hypotheses have not been rigorously tested, because a robust lepidopteran phylogeny and timing of evolutionary novelties are lacking.”3

    To remedy this problem, the researchers built a lepidopteran evolutionary tree from a data set of DNA sequences that collectively specified 2,100 protein-coding genes from 186 lepidopteran species. These species represented all the major divisions within this biological group. Then, they dated the evolutionary timing of key events in lepidopteran natural history from the fossil record.
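
    To give a feel for what tree building from sequence data involves, here is a deliberately tiny sketch. The study itself used sophisticated maximum-likelihood phylogenomics on 2,100 genes; this example instead clusters a few made-up sequences by their pairwise differences using simple UPGMA (average-linkage) clustering, which conveys the underlying idea but is not the study’s actual method.

    ```python
    from itertools import combinations
    from scipy.cluster.hierarchy import average

    # Made-up toy sequences standing in for aligned gene data.
    taxa = {
        "moth_A":      "ACGTACGTAC",
        "moth_B":      "ACGTACGTCC",
        "butterfly_A": "ACGAACTTCC",
        "butterfly_B": "ACGAACTTCG",
    }
    names = list(taxa)

    def hamming(s: str, t: str) -> int:
        """Count of positions at which two aligned sequences differ."""
        return sum(a != b for a, b in zip(s, t))

    # Condensed pairwise-distance matrix in the order SciPy expects.
    condensed = [float(hamming(taxa[a], taxa[b])) for a, b in combinations(names, 2)]

    # UPGMA-style clustering yields a rooted tree: each row of the linkage
    # matrix lists the two clusters merged, their distance, and the new size.
    tree = average(condensed)
    print(names)
    print(tree)   # the two moths pair first, then the two butterflies
    ```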

    Based on their analysis, the research team concluded that the first lepidopteran appeared around 300 million years ago. This creature fed on nonvascular plants. Around 240 million years ago, lepidopterans with tubelike proboscises (long, sucking mouthparts) appeared, allowing these insects to extract nectar from flowering plants.

    These results cohere with the coevolutionary model that the first lepidopterans fed internally on plants and, later, externally, as they evolved the ability to access nectar from plants. Flowering plants appear around 260 million years ago, which is about the time that the tubelike proboscis appears in lepidopterans.

    But perhaps the most important and stunning finding from their study stems from the appearance of hearing organs in moths. It looks as if these organs arose independently nine separate times—around 80 to 90 million years ago—well before bats began to echolocate. (The earliest known bat from the fossil record with the capacity to echolocate is around 45 to 50 million years old.)

    The UF investigators uncovered another surprising result related to the appearance of butterflies. They discovered that butterflies became diurnal (active in the daytime) around 98 million years ago. According to the traditional evolutionary story, butterflies (which are diurnal) evolved from nocturnal moths when they transitioned to daytime activities to escape predation of echolocating bats, which feed at night. But as with the origin of hearing organs in moths, the transition from nocturnal to diurnal behavior occurred well before the first appearance of echolocating bats and seems to have occurred independently at least two separate times.

    It Just Isn’t So

    The UF evolutionary biologists’ study demonstrates that the coevolutionary models for the origin of hearing organs in moths and diurnal behavior of butterflies—dominant for over a half century in evolutionary thought—are nothing more than just-so stories. They appear to make sense on the surface but are no closer to the truth than the tales I would weave in Mr. Reynolds’ office.

    In light of this discovery, the research team posits two new evolutionary models for the origin of these two traits, respectively. Now scientists think that the evolutionary emergence of hearing organs in moths may have provided these insects the capacity for auditory surveillance of their environment. Their capacity to hear may have helped them detect the low-frequency sounds of flapping bird wings, for example, and avoid predation. Presumably, these same hearing organs later evolved to detect the high-frequency cries of bats. As for the evolutionary origin of diurnal behavior characteristic of butterflies, researchers now speculate that butterflies became diurnal to take advantage of flowers that bloom in the daytime.

    Again, on the surface, these explanations seem plausible. But one has to wonder if these models, like their predecessors, are little more than just-so stories. In fact, this study raises a general concern: How much confidence can we place in any evolutionary account? Could it be that other evolutionary accounts are, in reality, good stories, but in the end will turn out to be just as fanciful as the stories written by Rudyard Kipling?

    In and of itself, recognizing that many evolutionary models could just be stories doesn’t provide sufficient warrant for skepticism about the evolutionary paradigm. But it does give pause for thought. Plus, two insights from this study raise real concerns about the capacity of evolutionary processes to account for life’s history and diversity:

    1. The discovery that ultrasonic hearing in moths arose independently nine separate times
    2. The discovery that diurnal behavior in butterflies appeared independently in at least two separate instances

    Convergence

    Evolutionary biologists use the term convergence to refer to the independent origin of identical or nearly identical biological and behavioral traits in organisms that cluster into unrelated groups.

    Convergence isn’t a rare phenomenon or limited to the independent origin of hearing organs in moths and diurnal behavior in butterflies. Instead, it is a widespread occurrence in biology, as evolutionary biologists Simon Conway Morris and George McGhee document in their respective books Life’s Solution and Convergent Evolution. It appears as if the evolutionary process routinely arrives at the same outcome, time and time again.4 In fact, biologists observe these repeated outcomes at the ecological, organismal, biochemical, and genetic levels.

    From my perspective, the widespread occurrence of convergent evolution is a feature of biology that evolutionary theory can’t explain. I see the widespread occurrence of convergence as a failed scientific prediction of the evolutionary paradigm.

    Convergence Should Be Rare, Not Widespread

    In effect, chance governs biological and biochemical evolution at its most fundamental level. Evolutionary pathways consist of a historical sequence of chance genetic changes operated on by natural selection, which, too, consists of chance components. The consequences are profound. If evolutionary events could be repeated, the outcome would be dramatically different every time. The inability of evolutionary processes to retrace the same path makes it highly unlikely that the same biological and biochemical designs should appear repeatedly throughout nature.5

    In support of this view, consider a 2002 landmark study carried out by two Canadian investigators who simulated macroevolutionary processes using autonomously replicating computer programs. In their study, the computer programs operated like digital organisms.6 The programs could be placed into different “ecosystems” and, because they replicate autonomously, they could evolve. By monitoring the long-term evolution of these digital organisms, the two researchers determined that evolutionary outcomes are historically contingent and unpredictable. Every time they placed the same digital organism in the same environment, it evolved along a unique trajectory.
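
    The flavor of such experiments can be captured in a few lines of code. The sketch below is not the 2002 study’s system; it is a minimal stand-in under stated assumptions. A population of bit-string “organisms” starts from the same state in the same environment—a rugged fitness landscape with several equally good peaks—and evolves by chance mutation plus selection. Replicate runs differing only in their random seed often settle on different peaks: historical contingency in miniature.

    ```python
    import random

    GENOME_LEN = 20
    _landscape = random.Random(42)   # one fixed, shared "environment"
    PEAKS = [tuple(_landscape.choices([0, 1], k=GENOME_LEN)) for _ in range(4)]

    def fitness(genome):
        """Closeness to the nearest peak: many equally good solutions exist."""
        return max(sum(a == b for a, b in zip(genome, peak)) for peak in PEAKS)

    def nearest_peak(genome):
        return max(range(len(PEAKS)),
                   key=lambda i: sum(a == b for a, b in zip(genome, PEAKS[i])))

    def replay(seed, pop_size=50, generations=300, mu=0.02):
        """Re-run evolution from the same start; only the random seed differs."""
        rng = random.Random(seed)
        population = [tuple([0] * GENOME_LEN)] * pop_size   # identical start
        for _ in range(generations):
            weights = [fitness(g) + 1 for g in population]           # selection
            parents = rng.choices(population, weights=weights, k=pop_size)
            population = [tuple(b ^ (rng.random() < mu) for b in g)  # mutation
                          for g in parents]
        best = max(population, key=fitness)
        return nearest_peak(best), fitness(best)

    for seed in range(5):
        peak, fit = replay(seed)
        print(f"replay {seed}: settled near peak {peak} (fitness {fit}/{GENOME_LEN})")
    ```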

    In other words, given the historically contingent nature of the evolutionary mechanisms, we would expect convergence to be rare in the biological realm. Yet, biologists continue to uncover example after example of convergent features—some of which are quite astounding.

    Bat Echolocation and Convergence

    Biologists have discovered one such example of convergence in the origin of echolocating bats. Echolocation appears to have arisen two times independently: once in microbats and once in the rhinolophoids.7 Prior to this discovery, reported in 2000, biologists classified the rhinolophoids as microbats based on their capability to echolocate. But DNA evidence indicates that this superfamily has greater affinity to megabats than to other microbats. This result means that echolocation must have originated separately in the microbats and the rhinolophoids. Researchers have also shown that the same genetic and biochemical changes occurred in microbats and megabats to create their echolocating ability. These changes appear to have taken place in the prestin gene and in its protein product, prestin.8

    In other words, we observe two outcomes: (1) the traditional evolutionary accounts for coevolution among echolocating bats, nocturnal moths, and diurnal butterflies turned out to be just-so stories, and (2) the convergence observed in these three groups stands as independent and separate instances of failed predictions of the evolutionary paradigm.

    Convergence and the Case for Creation

    If the widespread occurrence of convergence can’t be explained through evolutionary theory, then how can it be explained?

    It is not unusual for architects and engineers to redeploy the same design features, sometimes in objects, devices, or systems that are completely unrelated to one another. So, instead of viewing convergent features as having emerged through repeated evolutionary outcomes, we could understand them as reflecting the work of a divine mind. From this perspective, the repeated origins of biological features equate to the repeated creations by an intelligent Agent who employs a common set of solutions to address a common set of problems facing unrelated organisms.

    Now that’s a story even Mr. Reynolds might believe.

    Resources

    Convergence of Echolocation

    The Historical Contingency of the Evolutionary Process

    Endnotes
    1. Akito Y. Kawahara et al., “Phylogenomics Reveals the Evolutionary Timing and Pattern of Butterflies and Moths,” Proceedings of the National Academy of Sciences, USA 116, no. 45 (November 5, 2019): 22657–63, doi:10.1073/pnas.1907847116.
    2. Ed Yong, “A Textbook Evolutionary Story about Moths and Bats Is Wrong,” The Atlantic (October 21, 2019), https://www.theatlantic.com/science/archive/2019/10/textbook-evolutionary-story-wrong/600295/.
    3. Kawahara et al., “Phylogenomics.”
    4. Simon Conway Morris, Life’s Solution: Inevitable Humans in a Lonely Universe (New York: Cambridge University Press, 2003); George McGhee, Convergent Evolution: Limited Forms Most Beautiful (Cambridge, MA: MIT Press, 2011).
    5. Stephen Jay Gould, Wonderful Life: The Burgess Shale and the Nature of History (New York: W. W. Norton & Company, 1990).
    6. Gabriel Yedid and Graham Bell, “Macroevolution Simulated with Autonomously Replicating Computer Programs,” Nature 420 (December 19, 2002): 810–12, doi:10.1038/nature01151.
    7. Emma C. Teeling et al., “Molecular Evidence Regarding the Origin of Echolocation and Flight in Bats,” Nature 403 (January 13, 2000): 188–92, doi:10.1038/35003188.
    8. Gang Li et al., “The Hearing Gene Prestin Reunites Echolocating Bats,” Proceedings of the National Academy of Sciences, USA 105, no. 37 (September 16, 2008): 13959–64, doi:10.1073/pnas.0802097105.
  • Evolution of Antibiotic Resistance Makes the Case for a Creator

    Nov 27, 2019

    What would it be like to live in a world without antibiotics?

    It isn’t that hard to imagine, because antibiotics weren’t readily available for medical use until after World War II. And since that time, widespread availability of antibiotics has revolutionized medicine. However, the ability to practice modern medicine is being threatened because of the rise of antibiotic-resistant bacteria. Currently, there exists a pressing need to understand the evolution of antibiotic-resistant strains and to develop new types of antibiotics. Surprisingly, this worthy pursuit has unwittingly stumbled upon evidence for a Creator’s role in the design of biochemical systems.

    Alexander Fleming (1881–1955) discovered the first antibiotic, penicillin, in 1928. But it wasn’t until Ernst Chain, Howard Florey, and Edward Abraham purified penicillin in 1942 and Norman Heatley developed a bulk extraction technique in 1945 that the compound became available for routine medical use.


    Figure 1: Alexander Fleming. Image Credit: Wikipedia

    Prior to this time, people often died from bacterial infections. Complicating this vulnerability to microbial pathogens was the uncertain outcome of many medical procedures. For example, patients often died after surgery due to complications arising from infections.


    Figure 2: A generalized structure for penicillin antibiotics. Image credit: Shutterstock

    Bacterial Resistance Necessitates New Antibiotics

    Unfortunately, because of the growing threat of superbugs—antibiotic-resistant strains of bacteria—health experts around the world worry that we soon will enter into a post-antibiotic era in which modern medicine will largely revert to pre-World War II practices. According to Dr. David Livermore, laboratory director at Public Health England, which is responsible for monitoring antibiotic-resistant strains of bacteria, “A lot of modern medicine would become impossible if we lost our ability to treat infections.”1

    Without antibiotics, people would routinely die of infections that we easily treat today. Abdominal surgeries would be incredibly risky. Organ transplants and chemotherapy would be out of the question. And the list continues.

    The threat of entering into a post-antibiotic age highlights the desperate need to develop new types of antibiotics. It also highlights the need to develop a better understanding of evolutionary processes that lead to the emergence of antibiotic resistance in bacteria.

    Recently, a research team from Michigan State University (MSU) published a report that offers insight into the latter concern. These researchers studied the evolution of antibiotic resistance in bacteria that had been serially cultured in the laboratory for decades in antibiotic-free media.2 Through this effort, they learned that the genetic history of a bacterial strain plays a key role in its acquisition of resistance to antibiotics.

    This work has important implications for public health, but it also carries theological implications. The decades-long experiment provides evidence that the elegant designs characteristic of biochemical and biological systems most likely stem from a Creator’s handiwork.

    The Long-Term Evolution Experiment

    To gain insight into the role that genetic history plays in the evolution of antibiotic resistance, the MSU researchers piggy-backed on the famous Long-Term Evolution Experiment (LTEE) at Michigan State University. Inaugurated in 1988, the LTEE is designed to monitor evolutionary changes in the bacterium E. coli, with the objective of developing an understanding of the evolutionary process.


    Figure 3: A depiction of E. coli. Image Credit: Shutterstock

    The LTEE began with a single cell of E. coli that was used to generate twelve genetically identical lines of cells. The twelve clones of the parent E. coli cell were separately inoculated into a minimal growth medium containing low levels of glucose as the only carbon source. After growing overnight, an aliquot (equal fractional part) of each of the twelve cultures was transferred into fresh growth media. This process has been repeated every day for about thirty years. Throughout the experiment, aliquots of cells have been frozen every 500 generations. These frozen cells represent a “fossil record” of sorts that can be thawed out and compared to current and other past generations of cells.
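
    The arithmetic connecting daily transfers to generation counts is straightforward. Assuming the LTEE’s standard 1:100 daily dilution into fresh medium, the population must double log2(100), or about 6.6, times each day to regrow, which is how tens of thousands of generations accumulate over a couple of decades.

    ```python
    import math

    DILUTION = 100                              # assumed 1:100 daily transfer
    generations_per_day = math.log2(DILUTION)   # doublings needed to regrow

    days = 50_000 / generations_per_day         # transfers to reach 50,000 generations
    print(f"{generations_per_day:.2f} generations per day")
    print(f"50,000 generations = {days:,.0f} days = {days / 365.25:.1f} years")
    ```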

    Relaxed Selection and Decay of Antibiotic Resistance

    In general, when a population of organisms no longer experiences natural selection for a particular set of traits (antibiotic resistance, in this case), the traits designed to handle that pressure may experience functional decay as a result of mutations and genetic drift. This process is called relaxed selection.

    In the case of antibiotic resistance, when the threat of antibiotics is removed from the population (relaxed selection), it seems reasonable to think that antibiotic resistance would decline in the population because in most cases antibiotic resistance comes with a fitness cost. In other words, bacterial strains that acquire antibiotic resistance face a trade-off that makes them less fit in environments without the antibiotic.
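
    A short sketch shows why such a trade-off causes resistance to decay under relaxed selection. The fitness cost below is an assumed illustrative value, not one measured in the study: with no antibiotic present, resistant cells reproduce slightly more slowly, and standard selection dynamics steadily push their frequency down.

    ```python
    # Frequency of a resistant strain under relaxed selection (no antibiotic),
    # assuming an illustrative 5% fitness cost for carrying resistance.
    resistant_fitness = 0.95   # assumed relative fitness of resistant cells
    sensitive_fitness = 1.00   # relative fitness of sensitive cells

    p = 0.50                   # starting frequency of the resistant strain
    for generation in range(201):
        if generation % 40 == 0:
            print(f"generation {generation:3d}: resistant frequency = {p:.3f}")
        mean_fitness = p * resistant_fitness + (1 - p) * sensitive_fitness
        p = p * resistant_fitness / mean_fitness   # standard selection update
    ```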

    Genetic History and the Re-Evolution of Antibiotic Resistance

    In light of this expectation, the MSU researchers wondered how readily bacteria that have experienced relaxed selection can overcome loss of antibiotic resistance when the antibiotic is reintroduced to the population.

To explore this question, the researchers exposed the LTEE ancestor to a set of different antibiotics and compared its propensity to acquire antibiotic resistance with that of four strains of E. coli derived from the LTEE ancestor (strains that underwent 50,000 generations of daily growth and transfer into fresh media without exposure to antibiotics).

    As expected, the MSU team discovered that 50,000 generations of relaxed selection rendered the four strains more susceptible to four different antibiotics (ampicillin, ceftriaxone, ciprofloxacin, and tetracycline) compared to the LTEE ancestor. When they exposed these strains to the different antibiotics, the researchers discovered that acquisition of antibiotic resistance was idiosyncratic: some strains more readily evolved antibiotic resistance than the LTEE ancestor and others were less evolvable.

Investigators explained this difference by arguing that during the period of relaxed selection some of the strains experienced mutations that constrained the subsequent evolution of antibiotic resistance, whereas others experienced mutations that potentiated (facilitated) it. That is, historical contingency played a key role in the acquisition of antibiotic resistance. Different bacterial lineages accumulated genetic differences that influence their capacity to evolve and adapt in new directions.

    Historical Contingency

This study follows on the heels of previous studies that demonstrate the historical contingency of the evolutionary process.3 In other words, chance governs biological and biochemical evolution at its most fundamental level. As the MSU researchers observed, evolutionary pathways consist of a historical sequence of chance genetic changes operated on by natural selection (or experiencing relaxed selection), which itself includes chance components.

    Because of the historically contingent nature of the evolutionary process, it is highly unlikely that the same biological and biochemical designs should appear repeatedly throughout nature. In his book Wonderful Life, Stephen Jay Gould used the metaphor of “replaying life’s tape.” If one were to push the rewind button, erase life’s history, and then let the tape run again, the results would be completely different each time.4

    The “Problem” of Convergence

    And yet, we observe the opposite pattern in biology. From an evolutionary perspective, it appears as if the evolutionary process independently and repeatedly arrived at the same outcome, time and time again (convergence). As evolutionary biologists Simon Conway Morris and George McGhee point out in their respective books Life’s Solution and Convergent Evolution, identical evolutionary outcomes are a widespread feature of the biological realm.5

    Scientists see these repeated outcomes at ecological, organismal, biochemical, and genetic levels. To illustrate the pervasiveness of convergence at the biochemical level, I describe 100 examples of convergence in my book The Cell’s Design.6

    From my perspective, the widespread occurrence of convergent evolution is a feature of biology that evolutionary theory can’t genuinely explain. In fact, given the clear-cut demonstration that the evolutionary process is historically contingent, I see the widespread occurrence of convergence as a failed scientific prediction for the evolutionary paradigm.

     

    Evolution in Bacteria Doesn’t Equate to Large-Scale Evolution

    The evolution of E. coli in the LTEE doesn’t necessarily validate the evolutionary paradigm. Just because such change is observed in a microbe doesn’t mean that evolutionary processes can adequately account for life’s origin and history, and the full range of biodiversity.

     

    Convergence and the Case for Creation

    Instead of viewing convergent features as having emerged through repeated evolutionary outcomes, we could understand them as reflecting the work of a divine Mind. In this scheme, the repeated origins of biological features equate to the repeated creations by an intelligent Agent who employs a common set of solutions to address a common set of problems facing unrelated organisms.

    Sadly, many in the scientific community are hesitant to embrace this perspective because they are resistant to the idea that design and purpose may play a role in biology. But, one can hope that someday the scientific community will be willing to move into a post-evolution future as the evidence for a Creator’s role in biology mounts.

    Resources

    The Historical Contingency of the Evolutionary Process

    Microbial Evolution and the Validity of the Evolutionary Paradigm

    Endnotes
1. Sarah Boseley, “Are You Ready for a World without Antibiotics?” The Guardian, August 12, 2010, https://www.theguardian.com/society/2010/aug/12/the-end-of-antibiotics-health-infections.
    2. Kyle J. Card et al., “Historical Contingency in the Evolution of Antibiotic Resistance after Decades of Relaxed Selection,” PLoS Biology 17, no. 10 (October 23, 2019): e3000397, doi:10.1371/journal.pbio.3000397.
    3. Zachary D. Blount et al., “Historical Contingency and the Evolution of a Key Innovation in an Experimental Population of Escherichia coli,” Proceedings of the National Academy of Sciences USA 105, no. 23 (June 10, 2008): 7899-7906, doi:10.1073/pnas.0803151105.
    4. Stephen Jay Gould, Wonderful Life: The Burgess Shale and the Nature of History (New York: W.W. Norton & Company, 1990).
    5. Simon Conway Morris, Life’s Solution: Inevitable Humans in a Lonely Universe (New York: Cambridge University Press, 2003); George McGhee, Convergent Evolution: Limited Forms Most Beautiful (Cambridge, MA: MIT Press, 2011).
6. Fazale Rana, The Cell’s Design: How Chemistry Reveals the Creator’s Artistry (Grand Rapids, MI: Baker, 2008).
  • Analysis of Genomes Converges on the Case for a Creator

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Nov 13, 2019

    Are you a Marvel or a DC fan?

    Do you like the Marvel superheroes better than those who occupy the DC universe? Or is it the other way around for you?

    Even though you might prefer DC over Marvel (or Marvel over DC), over the years these two comic book rivals have often created superheroes with nearly identical powers. In fact, a number of Marvel and DC superheroes are so strikingly similar that their likeness to one another is obviously intentional.1

Here are just a few of the superheroes Marvel and DC have ripped off from each other:

    • Superman (DC, created in 1938) and Hyperion (Marvel, created in 1969)
    • Batman (DC, created in 1939) and Moon Knight (Marvel, created in 1975)
    • Green Lantern (DC, created in 1940) and Nova (Marvel, created in 1976)
    • Catwoman (DC, created in 1940) and Black Cat (Marvel, created in 1979)
    • Atom (DC, created in 1961) and Ant-Man (Marvel, created in 1962)
    • Aquaman (DC, created in 1941) and Namor (Marvel, created in 1939)
    • Green Arrow (DC, created in 1941) and Hawkeye (Marvel, created in 1964)
    • Swamp Thing (DC, created in 1971) and Man Thing (Marvel, created in 1971)
    • Deathstroke (DC, created in 1980) and Deadpool (Marvel, created in 1991)

This same type of striking similarity is also found in biology. Life scientists have discovered countless examples of biological designs that are virtually exact replicas of one another. Yet these identical (or nearly identical) designs occur in organisms that belong to distinct, unrelated groups (such as the camera eyes of vertebrates and octopuses). Therefore, they must have an independent origin.

     


    Figure 1: The Camera Eyes of Vertebrates (left) and Cephalopods (right); 1: Retina; 2: Nerve Fibers; 3: Optic Nerve; 4: Blind Spot. Image credit: Wikipedia

    From an evolutionary perspective, it appears as if the evolutionary process independently and repeatedly arrived at the same outcome, time and time again. As evolutionary biologists Simon Conway Morris and George McGhee point out in their respective books, Life’s Solution and Convergent Evolution, identical evolutionary outcomes are a widespread feature of the biological realm.2 Scientists observe these repeated outcomes (known as convergence) at the ecological, organismal, biochemical, and genetic levels.

From my perspective, the widespread occurrence of convergent evolution is a feature of biology that evolutionary theory can’t genuinely explain. In fact, I see pervasive convergence as a failed scientific prediction of the evolutionary paradigm. Recent work by a research team from Stanford University demonstrates my point.3

    These researchers discovered that identical genetic changes occurred when: (1) bats and whales “evolved” echolocation, (2) killer whales and manatees “evolved” specialized skin in support of their aquatic lifestyles, and (3) pikas and alpacas “evolved” increased lung capacity required to live in high-altitude environments.

    Why do I think this discovery is so problematic for the evolutionary paradigm? To understand my concern, we first need to consider the nature of the evolutionary process.

    Biological Evolution Is Historically Contingent

    Essentially, chance governs biological and biochemical evolution at its most fundamental level. Evolutionary pathways consist of a historical sequence of chance genetic changes operated on by natural selection, which, too, consists of chance components. The consequences are profound. If evolutionary events could be repeated, the outcome would be dramatically different every time. The inability of evolutionary processes to retrace the same path makes it highly unlikely that the same biological and biochemical designs should appear repeatedly throughout nature.

    The concept of historical contingency embodies this idea and is the theme of Stephen Jay Gould’s book Wonderful Life.4 To help illustrate the concept, Gould uses the metaphor of “replaying life’s tape.” If one were to push the rewind button, erase life’s history, and then let the tape run again, the results would be completely different each time.

    Are Evolutionary Processes Historically Contingent?

    Gould based the concept of historical contingency on his understanding of the evolutionary process. In the decades since Gould’s original description of historical contingency, several studies have affirmed his view.

    For example, in a landmark study in 2002, two Canadian investigators simulated macroevolutionary processes using autonomously replicating computer programs, with the programs operating like digital organisms.5 These programs were placed into different “ecosystems” and, because they replicated autonomously, could evolve. By monitoring the long-term evolution of the digital organisms, the two researchers determined that evolutionary outcomes are historically contingent and unpredictable. Every time they placed the same digital organism in the same environment, it evolved along a unique trajectory.
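The flavor of such experiments can be captured in a few lines of code. The toy model below is my own much-simplified illustration, not the digital-organism system the researchers used: every replay starts from the same genome on the same rugged (epistatic) fitness landscape, and only the order of chance mutations differs. Replays typically halt on different local fitness peaks.

```python
import random

# A fixed, rugged fitness landscape shared by every replay: each site's
# contribution depends on its own allele and its neighbor's (epistasis).
rng0 = random.Random(42)
N = 20
TABLE = [[[rng0.random() for _ in range(2)] for _ in range(2)] for _ in range(N)]

def fitness(genome):
    return sum(TABLE[i][genome[i]][genome[(i + 1) % N]] for i in range(N))

def replay_tape(seed, steps=2000):
    """One 'replay of life's tape': identical start and landscape,
    but a different sequence of chance mutations."""
    rng = random.Random(seed)
    genome = [0] * N
    for _ in range(steps):
        trial = genome.copy()
        site = rng.randrange(N)
        trial[site] = 1 - trial[site]         # a random point mutation
        if fitness(trial) > fitness(genome):  # selection keeps only improvements
            genome = trial
    return tuple(genome)

endpoints = {replay_tape(seed) for seed in range(10)}
print(f"{len(endpoints)} distinct evolutionary endpoints from 10 replays")
```

Because the landscape has many local peaks, identical starting conditions routinely produce divergent outcomes, which is the essence of historical contingency.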

    In other words, given the historically contingent nature of the evolutionary mechanisms, we would expect convergence to be rare in the biological realm. Yet, biologists continue to uncover example after example of convergent features—some of which are quite astounding.

    The Origin of Echolocation

One of the most remarkable examples of convergence is the independent origin of echolocation (locating objects by emitting sound waves and detecting the echoes that return from them) in bats (chiropterans) and cetaceans (toothed whales). Research indicates that echolocation arose independently in two different groups of bats and, separately, in the toothed whales.

     


    Figure 2: Echolocation in Bats. Image credit: Shutterstock

    One reason why this example of convergence is so remarkable has to do with the way some evolutionary biologists account for the widespread occurrences of convergence in biological systems. Undaunted by the myriad examples of convergence, these scientists assert that independent evolutionary outcomes result when unrelated organisms encounter nearly identical selection forces (e.g., environmental, competitive, and predatory pressures). According to this idea, natural selection channels unrelated organisms down similar pathways toward the same endpoint.

    But this explanation is unsatisfactory because bats and whales live in different types of habitats (terrestrial and aquatic). Consequently, the genetic changes responsible for the independent emergence of echolocation in the chiropterans and cetaceans should be distinct. Presumably, the evolutionary pathways that converged on a complex biological system such as echolocation would have taken different routes that would be reflected in the genomes. In other words, even though the physical traits appear to be identical (or nearly identical), the genetic makeup of the organisms should reflect an independent evolutionary history.

    But this expectation isn’t borne out by the data.

    Genetic Convergence Parallels Trait Convergence

    In recent years, evolutionary biologists have developed interest in understanding the genetic basis for convergence. Specifically, these scientists want to understand the genetic changes that lead to convergent anatomical and physiological features (how genotype leads to phenotype).

    Toward this end, a Stanford research team developed an algorithm that allowed them to search through entire genome sequences of animals to identify similar genetic features that contribute to particular biological traits.6 In turn, they applied this method to three test cases related to the convergence of:

    • echolocation in bats and whales
    • specialized skin in killer whales and manatees
    • lung structure and capacity in pikas and alpacas

The investigators discovered that for echolocating animals, the same 25 convergent genetic changes took place in their genomes and were distributed among the same 18 genes. As it turns out, these genes play a role in the development of the cochlear ganglion, thought to be involved in echolocation. They also discovered that for aquatic mammals, there were 27 identical convergent genetic changes that occurred in the same 15 genes that play a role in skin development. And finally, for high-altitude animals, they learned that the same 25 convergent genetic changes occurred in the same 16 genes that play a role in lung development.
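The core of such a scan is easy to sketch. The tiny alignment below is fabricated, and the Stanford method additionally corrects for phylogeny and tests for functional enrichment, but the sketch conveys the basic idea: flag alignment positions where two independent lineages share an amino acid that none of their relatives carries.

```python
# Toy protein alignment; species names and sequences are invented for illustration.
alignment = {
    "bat_echolocating":    "MKTAYIAKQR",
    "bat_nonecholocating": "MKSAYIAKQR",
    "whale_echolocating":  "MKTAYIVKQR",
    "whale_baleen":        "MKSAYIVKQR",
    "outgroup":            "MKSAYIAKQR",
}

def convergent_sites(aln, lineage_a, lineage_b, background):
    """Positions where two independent lineages match each other
    but differ from every background lineage."""
    hits = []
    length = len(next(iter(aln.values())))
    for pos in range(length):
        a, b = aln[lineage_a][pos], aln[lineage_b][pos]
        others = {aln[s][pos] for s in background}
        if a == b and a not in others:
            hits.append((pos, a))
    return hits

print(convergent_sites(alignment,
                       "bat_echolocating", "whale_echolocating",
                       ["bat_nonecholocating", "whale_baleen", "outgroup"]))
# -> [(2, 'T')]: both echolocators share a T that no relative carries
```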

    In response to this finding, study author Gill Bejerano remarked, “These genes often control multiple functions in different tissues throughout the body, so it seems it would be very difficult to introduce even minor changes. But here we’ve found that not only do these very different species share specific genetic changes, but also that these changes occur in coding genes.”7

    In other words, these results are not expected from an evolutionary standpoint. It is nothing short of amazing that genetic convergence would parallel phenotypic convergence.

    On the other hand, these results make perfect sense from a creation model vantage point.

    Convergence and the Case for Creation

    Instead of viewing convergent features as having emerged through repeated evolutionary outcomes, we could understand them as reflecting the work of a Divine Mind. In this scheme, the repeated origins of biological features equate to the repeated creations by an Intelligent Agent who employs a common set of solutions to address a common set of problems facing unrelated organisms.

    Like the superhero rip-offs in the Marvel and DC comics, the convergent features in biology appear to be intentional, reflecting a teleology that appears to be endemic in living systems.

    Resources

    Convergence of Echolocation

    The Historical Contingency of the Evolutionary Process

    Endnotes
1. Jamie Gerber, “15 DC and Marvel Superheroes Who Are Strikingly Similar,” ScreenRant (November 12, 2016), screenrant.com/marvel-dc-superheroes-copies-rip-offs/.
    2. Simon Conway Morris, Life’s Solution: Inevitable Humans in a Lonely Universe (New York: Cambridge University Press, 2003); George McGhee, Convergent Evolution: Limited Forms Most Beautiful (Cambridge, MA: MIT Press, 2011).
3. Amir Marcovitz et al., “A Functional Enrichment Test for Molecular Convergent Evolution Finds a Clear Protein-Coding Signal in Echolocating Bats and Whales,” Proceedings of the National Academy of Sciences, USA 116, no. 42 (October 15, 2019): 21094–21103, doi:10.1073/pnas.1818532116.
    4. Stephen Jay Gould, Wonderful Life: The Burgess Shale and the Nature of History (New York: W. W. Norton & Company, 1990).
    5. Gabriel Yedid and Graham Bell, “Macroevolution Simulated with Autonomously Replicating Computer Programs,” Nature 420 (December 19, 2002): 810–12, doi:10.1038/nature01151.
    6. Marcovitz et al., “A Functional Enrichment Test.”
    7. Stanford Medicine, “Scientists Uncover Genetic Similarities among Species That Use Sound to Navigate,” ScienceDaily, October 4, 2019, sciencedaily.com/releases/2019/10/191004105643.htm.
  • Glue Production Is Not Evidence for Neanderthal Exceptionalism

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Nov 06, 2019

    Football players aren’t dumb jocks—though they often have that reputation. Football is a physically demanding sport that requires strength, toughness, agility, and speed. But it is also an intellectually demanding game.

    Mastering a playbook, understanding which plays work best for the various in-game scenarios, recognizing defenses and offenses, and adjusting on the fly require hours of study and preparation. Football really is a thinking person’s game.


    Figure 1: Quarterback Calling an Audible at the Line of Scrimmage. Image Credit: Shutterstock

    Some anthropologists view Neanderthals in the same way that many people view football players: as the “dumb jock” version of a hominin, a creature cognitively inferior to modern humans. Yet, other anthropologists dispute this characterization, arguing that it is undeserved. Instead, they claim that Neanderthals had cognitive capabilities on par with modern humans.

In support of their claim, these scientists point to finds in the archaeological record that seemingly suggest these hominins were exceptional, just like modern humans. As a case in point, archaeologists have unearthed evidence of tar production at a site in Italy dating to around 200,000 years ago. They interpret this discovery as evidence that Neanderthals were using tar as glue for hafting (fixing) flint spearheads to wooden spear shafts.1 Archaeologists have also unearthed spearheads with tar residue from two sites in Germany, one dating to 120,000 years ago and the other to between 40,000 and 80,000 years ago.2 Because these dates precede the arrival of modern humans in Europe, anthropologists assume the tar at these sites was deliberately produced and used by Neanderthals.

    Adhesives as a Signature for Superior Cognition

    Anthropologists consider the development of adhesives as a transformative technology. These materials would have provided the first humans the means to construct new types of complex devices and combine different types of materials (composites) into new technologies. Because of this new proficiency, anthropologists consider the production and use of adhesives to be diagnostic of advanced cognitive capabilities such as forward planning, abstraction, and understanding of materials.

    Production of adhesives from natural sources, even by the earliest modern humans, appears to have been a complex operation that required precise temperature control and the use of earthen mounds, or ceramic or metal kilns. In addition, birch bark needed to be heated in the absence of oxygen. Because the first large-scale production of adhesives usually centered around the dry distillation of birch and pine barks to produce tar and pitch, researchers have assumed that this technique is the only way to generate tar.


    Figure 2: Tar Produced from Birch Bark. Image credit: Wikipedia

    So, if Neanderthals were using tar as an adhesive, the reasoning goes, they must have been pretty impressive creatures.

    In the summer of 2017 researchers from the University of Leiden published work that seemed to support this view.3 To address the question of how Neanderthals may have produced adhesives, these investigators conducted a series of experiments. They sought to learn how Neanderthals used the resources most reasonably available to them to obtain tar from birch bark through dry distillation.

    By studying a variety of methods for dry distillation of tar from birch in a laboratory setting, the research team concluded that Neanderthals could have produced tar from birch bark if they had used methods that were simple enough that they wouldn’t require precise temperature control during the distillation. Still, these methods are complex enough that the researchers concluded that for Neanderthals to pull off this feat, they must have had advanced cognitive abilities similar to those of modern humans.

    Is Adhesive Production and Use Evidence for Neanderthal Exceptionalism?

    At the time this work was reported, I challenged this conclusion by noting that the simplicity of these production methods argued against advanced cognitive abilities in Neanderthals, not for them.

Recent work by researchers from Germany affirms my skepticism. Their research challenges the view that adhesive production and use constitute evidence for human exceptionalism.4 The team wondered if there is a way to produce tar even simpler than the methods identified by the University of Leiden researchers. They also wondered if it is possible to produce tar in the presence of oxygen.

    From their work, they discovered that burning birch bark (or branches from a birch tree with the bark still attached) adjacent to a rock with a vertical or subvertical surface is a way to collect tar, which naturally deposits on the rock surface as the bark burns. In other words, tar can be produced accidentally, instead of deliberately. And once produced, it can be scraped from the rock surface.

    Using analytical techniques (gas chromatography coupled to mass spectrometry) to characterize the chemical makeup of the tar produced by this simple method, the research team showed that it is comparable to the chemical composition of tars produced by sophisticated dry distillation methods under anaerobic conditions. Because of the simplicity of this method, the research team thinks that collecting tar deposits from burning birch on rocks is the most likely way that Neanderthals produced tar, if they intentionally produced it at all.

    According to the research team, “The identification of birch tar at archaeological sites can no longer be considered as a proxy for human (complex, cultural) behavior as previously assumed. In other words, our finding changes textbook thinking about what tar production is a smoking gun of.”5

    One other point merits consideration: A growing body of evidence indicates that Neanderthals did not master fire, but rather used it opportunistically. In other words, these creatures could not create fire, but did harvest wildfires. Evidence demonstrates that there were vast periods of time during Neanderthals’ tenure in Europe when wildfires were rare because of cold climatic conditions. During these periods, Neanderthals didn’t use fire.

    Because fire is central to the dry distillation methods, for a significant portion of their time on Earth Neanderthals would have been unable to extract tar and use it for hafting. Perhaps this factor explains why recovery of tar from Neanderthal sites is so rare. And could it be that Neanderthals were not intentionally producing tar? Instead, did tar just happen to collect on rock surfaces as a consequence of burning birch branches when these creatures were able to harvest fire?

    What Difference Does It Make?

    One of the most important ideas taught in Scripture is that human beings uniquely bear God’s image. As such, every human being has immeasurable worth and value. And because we bear God’s image, we can enter into a relationship with our Maker.

    However, if Neanderthals possessed advanced cognitive ability just like that of modern humans, then it becomes difficult to maintain the view that modern humans are unique and exceptional. If human beings aren’t exceptional, then it becomes a challenge to defend the idea that human beings are made in God’s image.

    Yet, claims that Neanderthals are cognitive equals to modern humans fail to withstand scientific scrutiny, time and time again, as this latest study demonstrates. It is unlikely that any of us will see a Neanderthal run onto the football field anytime soon.

    Resources

    Neanderthals Did Not Master Fire

    Differences in Human and Neanderthal Brains

    Endnotes
    1. Paul Peter Anthony Mazza et al., “A New Palaeolithic Discovery: Tar-Hafted Stone Tools in a European Mid-Pleistocene Bone-Bearing Bed,” Journal of Archaeological Science 33, no. 9 (September 2006): 1310–18, doi:10.1016/j.jas.2006.01.006.
    2. Johann Koller, Ursula Baumer, and Dietrich Mania, “High-Tech in the Middle Palaeolithic: Neandertal-Manufactured Pitch Identified,” European Journal of Archaeology 4, no. 3 (December 1, 2001): 385–97, doi:10.1179/eja.2001.4.3.385; Alfred F. Pawlik and Jürgen P. Thissen, “Hafted Armatures and Multi-Component Tool Design at the Micoquian Site of Inden-Altdorf, Germany,” Journal of Archaeological Science 38, no. 7 (July 2011): 1699–1708, doi:10.1016/j.jas.2011.03.001.
    3. P. R. B. Kozowyk et al., “Experimental Methods for the Palaeolithic Dry Distillation of Birch Bark: Implications for the Origin and Development of Neandertal Adhesive Technology,” Scientific Reports 7 (August 31, 2017): 8033, doi:10.1038/s41598-017-08106-7.
    4. Patrick Schmidt et al., “Birch Tar Production Does Not Prove Neanderthal Behavioral Complexity,” Proceedings of the National Academy of Sciences, USA 116, no. 36 (September 3, 2019): 17707–11, doi:10.1073/pnas.1911137116.
    5. Schmidt et al., “Birch Tar Production.”
  • Scientists Reverse the Aging Process: Exploring the Theological Implications

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Oct 30, 2019

    During those days people will seek death but will not find it; they will long to die, but death will elude them.

    Revelation 9:6

     

    I make dad noises now.

    When I sit down, when I stand up, when I get out of bed, when I get into bed, when I bend over to pick up something from the ground, and when I straighten up again, I find myself involuntarily making noises—grunting sounds.

    I guess it is all part of the aging process. My body isn’t quite what it used to be. If someone offered me an elixir that could turn back time and reverse the aging process, I would take it without hesitation. It’s no fun growing old.

    Well, I just might get my wish, thanks to the work of a research team from the US and Canada. These researchers demonstrated that they could disrupt the aging process and, in fact, reverse the biological clock in humans.1

    This advance is nothing short of stunning. It opens up exciting—and disquieting—biomedical possibilities rife with ethical and theological ramifications. The work has other interesting implications, as well. It can be marshaled to demonstrate the scientific credibility of the Old Testament by making scientific sense of the long life spans of the patriarchs listed in the Genesis 5 and 11 genealogies.

    Some Biological Consequences of Aging

Involuntary grunting is not the worst part of aging, by far. There are other, more serious consequences, such as loss of immune function. Senescence (aging) of the immune system can contribute to the onset of cancer and increased susceptibility to pathogens. It can also lead to wide-scale inflammation. None of these outcomes are good.

    As we age, our thymus decreases in size. And this size reduction hampers immune system function. Situated between the heart and sternum, the thymus plays a role in maturation of white blood cells, key components of the immune system. As the thymus shrinks with age, the immune system loses its capacity to generate sufficient levels of white blood cells, rendering older adults vulnerable to infections and cancers.

    A Strategy to Improve Immune Function

Previous studies in laboratory animals have shown that administering growth hormone enlarges the thymus and, consequently, improves immune function. The research team reasoned that the same effect would be seen in human patients. But the team couldn’t simply administer growth hormone without other considerations, because of at least one negative side effect: growth hormone promotes insulin resistance, which can lead to a form of type 2 diabetes. To prevent this adverse effect, the researchers also administered two drugs commonly used to treat type 2 diabetes.


    Figure 1: The Structure of Human Growth Hormone. Image credit: Shutterstock

    To test this idea, the researchers performed a small-scale clinical trial. The study began with ten men (finishing with nine) between the ages of 51 and 65. The volunteers self-administered the drug cocktail three to four times a week for a year. During the course of the study, the researchers monitored white blood cell levels and thymus size. They observed a rejuvenation of the immune system (based on the count of white blood cells in the blood). They also noticed changes in the thymus, with fatty deposits disappearing and thymus tissue returning.

    Reversing the Aging Process

As an afterthought, the researchers decided to test the participants’ blood using an epigenetic clock that measures biological age. To their surprise, the researchers discovered that the drug cocktail reversed the biological age of the study participants by two years, compared to their chronological age. In other words, even though the participants gained one year in their chronological age during the course of the study, their bodies became younger, based on biological markers, by two years. This age reversal lasted for six months after the trial ended.
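For readers unfamiliar with epigenetic clocks, the principle can be sketched in a few lines: biological age is estimated as a weighted sum of DNA methylation levels at a fixed panel of genomic sites. The site names, weights, and numbers below are invented for illustration; real clocks are regression models fit over hundreds of sites and thousands of samples.

```python
# Hypothetical panel: per-site regression weights and an intercept (invented values).
WEIGHTS = {"cg_site_A": 31.0, "cg_site_B": -14.5, "cg_site_C": 22.3}
INTERCEPT = 12.0

def epigenetic_age(methylation: dict[str, float]) -> float:
    """Estimate biological age from the fraction methylated (0..1) at each site."""
    return INTERCEPT + sum(WEIGHTS[s] * methylation[s] for s in WEIGHTS)

before = {"cg_site_A": 0.80, "cg_site_B": 0.30, "cg_site_C": 0.75}
after  = {"cg_site_A": 0.76, "cg_site_B": 0.33, "cg_site_C": 0.71}
print(f"biological age before treatment: {epigenetic_age(before):.1f}")  # ~49.2
print(f"biological age after treatment:  {epigenetic_age(after):.1f}")   # ~46.6
```

In this toy example, small shifts in methylation move the estimated biological age down by a couple of years even as chronological age marches on, which is the kind of signal the researchers reported.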

    Thus, for the first time ever, researchers have been able to extend human life expectancy through an aging-intervention therapy. And while the increase in life expectancy was limited, this accomplishment serves as a harbinger of things to come, making the prospects of dramatically extending human life expectancy significantly closer to a reality.

    This groundbreaking work carries significant biomedical, ethical, and theological implications, which I will address below. But the breakthrough is equally fascinating to me because it can be used to garner scientific support for Genesis 5 and 11.

    Anti-Aging Technology and Biblical Long Life Spans

    The mere assertion that humans could live for hundreds of years as described in the genealogies of Genesis 5 and 11 is, for many people, nothing short of absurd. Compounding this seeming absurdity is the claim in Genesis 6:3, which describes God intervening to shorten human life spans from about 900 to about 120 years. How can this dramatic change in human life spans be scientifically rational?

    As I discuss in Who Was Adam?, advances in the biochemistry of aging provide a response to these challenging questions. Scientists have uncovered several distinct biochemical mechanisms that either cause, or are associated with, senescence. Even subtle changes in cellular chemistry can increase life expectancy by nearly 50 percent. These discoveries point to several possible ways that God could have allowed long life spans and then altered human life expectancy—simply by “tweaking” human biochemistry.

    Thanks to these advances, biogerontologists have become confident that in the near future, they will be able to interrupt the aging process by direct intervention through altered diet, drug treatment, and gene manipulation. Some biogerontologists such as Aubrey de Grey don’t think it is out of the realm of possibility to extend human life expectancy to several hundred years—about the length of time the Bible claims that the patriarchs lived. The recent study by the US and Canadian investigators seems to validate de Grey’s view.

    So, if biogerontologists can alter life spans—maybe someday on the order of hundreds of years—then the Genesis 5 and 11 genealogies no longer appear to be fantastical. And, if we can intervene in our own biology to alter life spans, how much easier must it be for God to do so?

    Ethical Concerns

    As mentioned, I would be tempted to take an anti-aging elixir if I knew it would work. And so would many others. What could possibly be wrong with wanting to live a longer, healthier, and more productive life? In fact, disrupting—and even reversing—the aging process would offer benefits to society by potentially reducing medical costs associated with age-related diseases such as dementia, cancer, heart disease, and stroke.

    Yet, these biomedical advances in anti-aging therapies do hold the potential to change who we are as human beings. Even a brief moment of reflection makes it plain that wide-scale use of anti-aging treatments could bring about fundamental changes to economies, to society, and to families and put demands on limited planetary resources. In the end, anti-aging technologies may well be unsustainable, undesirable, and unwise. (For a more detailed discussion of the ethical issues surrounding anti-aging technology check out the book I cowrote with Kenneth Samples, Humans 2.0.)

    Anti-Aging Therapies and Transhumanism

    Many people rightly recognize the ethical concerns surrounding applications of anti-aging therapies, but a growing number see these technologies in a different light. They view them as paving the way to an exciting and hopeful future. The increasingly real prospects of extending human life expectancy by disrupting the aging process or even reversing the effects of aging are the types of advances (along with breakthroughs in CRISPR gene editing and computer-brain interfaces) that fuel an intellectual movement called transhumanism.

    This idea has long been on the fringes of respected academic thought, but recently transhumanism has propelled its way into the scientific, philosophical, and cultural mainstreams. Advocates of the transhumanist vision maintain that humanity has an obligation to use advances in biotechnology and bioengineering to correct our biological flaws—to augment our physical, intellectual, and psychological capabilities beyond our natural limits. Perhaps there are no greater biological limitations that human beings experience than those caused by aging bodies and the diseases associated with the aging process.


    Figure 2: Transhumanism. Image credit: Shutterstock

Transhumanists see science and technology as the means to alleviate pain and suffering and to promote human flourishing. In the case of aging, they point to the pain, suffering, and loss associated with human senescence. And the biotechnology needed to fulfill the transhumanist vision now appears to be within our grasp.

    Anti-Aging as a Source of Hope and Salvation?

    Using science and technology to mitigate pain and suffering and to drive human progress is nothing new. But transhumanists desire more. They advocate that we should use advances in biotechnology and bioengineering for the self-directed evolution of our species. They seek to fulfill the grand vision of creating new and improved versions of human beings and ushering in a posthuman future. In effect, transhumanists desire to create a utopia of our own design.

    In fact, many transhumanists go one step further, arguing that advances in gene editing, computer-brain interfaces, and anti-aging technologies could extend our life expectancy, perhaps even indefinitely, and allow us to attain a practical immortality. In this way, transhumanism displays its religious element. Here science and technology serve as the means for salvation.

Transhumanism: A False Gospel?

    But can transhumanism truly deliver on its promises of a utopian future and practical immortality?

    In Humans 2.0, Kenneth Samples and I delineate a number of reasons why transhumanism is a false gospel, destined to disappoint, not fulfill, our desire for immortality and utopia. I won’t elaborate on those reasons here. But simply recognizing the many ethical concerns surrounding anti-aging technologies (and gene editing and computer-brain interfaces) highlights the real risks connected to pursuing a transhumanist future. If we don’t carefully consider these concerns, we might create a dystopian future, not a utopian world.

The mere risk of this type of unintended future should give us pause about turning to science and technology for our salvation. As theologian Ronald Cole-Turner so aptly put it:

    “We need to be aware that technology, precisely because of its beneficial power, can lead us to the erroneous notion that the only problems to which it is worth paying attention involve engineering. When we let this happen, we reduce human yearning for salvation to a mere desire for enhancement, a lesser salvation that we can control rather than the true salvation for which we must also wait.”2


    Endnotes
    1. Gregory M. Fahy et al., “Reversal of Epigenetic Aging and Immunosenescent Trends in Humans,” Aging Cell (September 8, 2019): e13028, doi:10.1111/acel.13028.
2. Ronald Cole-Turner, “Transhumanism and Christianity,” in Transhumanism and Transcendence: Christian Hope in an Age of Technological Enhancement, ed. Ronald Cole-Turner (Washington, D.C.: Georgetown University Press, 2011), 201.
  • Origin and Design of the Genetic Code: A One-Two Punch for Creation

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Oct 23, 2019

    True confession: I am a sports talk junkie. It has gotten so bad that sometimes I would rather listen to people talk about the big game than actually watch it on TV.

    So, in the spirit of the endless debates that take place on sports talk radio, I ask: What duo is the greatest one-two punch in NBA history? Is it:

    • Kareem and Magic?
    • Kobe and Shaq?
    • Michael and Scottie?

    Another confession: I am a science-faith junkie. I never tire when it comes to engaging in discussions about the interplay between science and the Christian faith. From my perspective, the most interesting facet of this conversation centers around the scientific evidence for God’s existence.

    So, toward this end, I ask: What is the most compelling biochemical evidence for God’s existence? Is it:

    • The complexity of biochemical systems?
    • The eerie similarity between biomolecular motors and machines designed by human engineers?
    • The information found in DNA?

    Without hesitation I would say it is actually another feature: the origin and design of the genetic code.

    The genetic code is a biochemical code that consists of a set of rules defining the information stored in DNA. These rules specify the sequence of amino acids used by the cell’s machinery to synthesize proteins. The genetic code makes it possible for the biochemical apparatus in the cell to convert the information formatted as nucleotide sequences in DNA into information formatted as amino acid sequences in proteins.
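The rule set just described can be pictured as a lookup table. The Python sketch below shows a handful of the 64 real codon assignments (these particular assignments are standard) and a toy translation routine; it illustrates the concept, not the cell's actual machinery.

```python
# A few of the 64 standard codon assignments (three-letter mRNA codon -> amino acid).
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "UUC": "Phe", "GGC": "Gly",
    "AAA": "Lys", "UGG": "Trp", "UAA": "STOP", "UAG": "STOP",
}

def translate(mrna: str) -> list[str]:
    """Read an mRNA string three bases at a time until a stop codon."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE[mrna[i:i + 3]]
        if residue == "STOP":
            break
        peptide.append(residue)
    return peptide

print(translate("AUGUUUGGCAAAUAA"))  # ['Met', 'Phe', 'Gly', 'Lys']
```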

     


    Figure: A Depiction of the Genetic Code. Image credit: Shutterstock

In previous articles (see the Resources section), I discussed the code’s most salient feature, the one that I think points most clearly to a Creator’s handiwork: its multidimensional optimization. That optimization is so extensive that evolutionary biologists struggle to account for its origin, as illustrated by the work of biologist Steven Massey.1

    Both the optimization of the genetic code and the failure of evolutionary processes to account for its design form a potent one-two punch, evincing the work of a Creator. Optimization is a marker of design, and if it can’t be accounted for through evolutionary processes, the design must be authentic—the product of a Mind.

    Can Evolutionary Processes Generate the Genetic Code?

For biochemists working to understand the origin of the genetic code, its extreme optimization means that it is not the “frozen accident” that Francis Crick proposed in his classic paper “The Origin of the Genetic Code.”2

Many investigators now think that natural selection shaped the genetic code, producing its optimal properties. However, I question whether natural selection could evolve a genetic code with the degree of optimality displayed in nature. In The Cell’s Design (published in 2008), I cite the work of the late biophysicist Hubert Yockey in support of my claim.3 Yockey determined that natural selection would have to explore 1.40 x 10^70 different genetic codes to discover the universal genetic code found in nature. He estimated that 6.3 x 10^15 seconds (about 200 million years) is the maximum time available for the code to originate. To find the universal genetic code in that window, natural selection would have to evaluate roughly 10^55 codes per second. And even if the search time were extended to the entire duration of the universe’s existence, it would still require searching through about 10^52 codes per second to find nature’s genetic code. Put simply, natural selection lacks the time to find the universal genetic code.
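As a quick sanity check on the arithmetic (the figure of roughly $4.3 \times 10^{17}$ seconds for the universe's age is my assumption here, corresponding to about 13.8 billion years):

$$
\frac{1.40 \times 10^{70}\ \text{codes}}{6.3 \times 10^{15}\ \text{s}} \approx 2.2 \times 10^{54}\ \text{codes per second}, \qquad
\frac{1.40 \times 10^{70}\ \text{codes}}{4.3 \times 10^{17}\ \text{s}} \approx 3.3 \times 10^{52}\ \text{codes per second},
$$

consistent with the order-of-magnitude figures quoted above.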

    Researchers from Germany raised the same difficulty for evolution recently. Because of the genetic code’s multidimensional optimality, they concluded that “the optimality of the SGC [standard genetic code] is a robust feature and cannot be explained by any simple evolutionary hypothesis proposed so far. . . . the probability of finding the standard genetic code by chance is very low. Selection is not an omnipotent force, so this raises the question of whether a selection process could have found the SGC in the case of extreme code optimalities.”4

    Two More Evolutionary Mechanisms Considered

Massey reached a similar conclusion through a detailed analysis of two possible evolutionary mechanisms, both based on natural selection.5

    If the genetic code evolved, then alternate genetic codes would have to have been generated and evaluated until the optimal genetic code found in nature was discovered. This process would require that coding assignments change. Biochemists have identified two mechanisms that could contribute to coding reassignments: (1) codon capture and (2) an ambiguous intermediate mechanism. Massey tested both mechanisms.
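To give a feel for what such simulations involve, here is a toy hill-climbing search in the spirit of, but far simpler than, Massey's: start from a random code, propose single reassignments, and keep those that lower a mutation-error cost. The two-letter codons, the eight-value "polarity" scale, and the cost function are all invented for illustration.

```python
import random

rng = random.Random(0)
CODONS = [a + b for a in "ACGU" for b in "ACGU"]   # 16 toy two-letter codons
values = list(range(8)) * 2                        # toy "polarity" of 8 amino acids
rng.shuffle(values)
code = dict(zip(CODONS, values))                   # a random starting code

def error_cost(code):
    """Mean squared 'polarity' change over all single-letter mutations."""
    total, n = 0.0, 0
    for codon in CODONS:
        for pos in range(2):
            for base in "ACGU":
                if base != codon[pos]:
                    mutant = codon[:pos] + base + codon[pos + 1:]
                    total += (code[codon] - code[mutant]) ** 2
                    n += 1
    return total / n

reassignments = 0
for _ in range(5000):
    a, b = rng.sample(CODONS, 2)
    trial = dict(code)
    trial[a], trial[b] = trial[b], trial[a]        # swap two codon assignments
    if error_cost(trial) < error_cost(code):       # keep only error-reducing changes
        code, reassignments = trial, reassignments + 1

print(f"accepted reassignments: {reassignments}, final cost: {error_cost(code):.2f}")
```

Swapping assignments (rather than overwriting them) keeps every amino acid encoded, mirroring the constraint that a workable code must cover the full amino acid set.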

Massey discovered that neither mechanism can evolve the optimal genetic code. When he ran computer simulations of the evolutionary process using codon capture as a mechanism, the simulations all ended in failure, unable to find a highly optimized genetic code. When Massey ran simulations with the ambiguous intermediate mechanism, he could evolve an optimized genetic code. But he didn’t view this result as a success. He learned that it takes between 20 and 30 codon reassignments to produce a genetic code with the same degree of optimization as the genetic code found in nature.

The problem with this evolutionary mechanism is that coding reassignments appear to be rare in nature, judging from the few deviant genetic codes thought to have evolved since the origin of the last common ancestor. On top of this problem, the structure of the optimized codes that evolved via the ambiguous intermediate mechanism differs from the structure of the genetic code found in nature. In short, the result obtained via the ambiguous intermediate mechanism is unrealistic.

As Massey points out, “The evolution of the SGC remains to be deciphered, and constitutes one of the greatest challenges in the field of molecular evolution.”6

    Making Sense of Explanatory Models

In the face of these discouraging results for the evolutionary paradigm, Massey concludes that perhaps another evolutionary force apart from natural selection shaped the genetic code. One idea Massey thinks has merit is the coevolution theory proposed by J. T. Wong, who argued that the genetic code evolved in conjunction with the evolution of the biosynthetic pathways that produce amino acids. Yet Wong’s theory doesn’t account for the extreme optimization of the genetic code in nature. In fact, the relationships between coding assignments and amino acid biosynthesis appear to be a statistical artifact, and nothing more.7 In other words, Wong’s ideas don’t work.

    That brings us back to the question of how to account for the genetic code’s optimization and design.

    As I see it, in the same way that two NBA superstars work together to help produce a championship-caliber team, the genetic code’s optimization and the failure of every evolutionary model to account for it form a potent one-two punch that makes a case for a Creator.

    And that is worth talking about.

    Resources

    Endnotes
    1. Steven E. Massey, “Searching of Code Space for an Error-Minimized Genetic Code via Codon Capture Leads to Failure, or Requires at Least 20 Improving Codon Reassignments via the Ambiguous Intermediate Mechanism,” Journal of Molecular Evolution 70, no. 1 (January 2010): 106–15, doi:10.1007/s00239-009-9313-7.
    2. F. H. C. Crick, “The Origin of the Genetic Code,” Journal of Molecular Biology 38, no. 3 (December 28, 1968): 367–79, doi:10.1016/0022-2836(68)90392-6.
    3. Hubert P. Yockey, Information Theory and Molecular Biology (Cambridge, UK: Cambridge University Press, 1992), 180–83.
    4. Stefan Wichmann and Zachary Ardern, “Optimality of the Standard Genetic Code Is Robust with Respect to Comparison Code Sets,” Biosystems 185 (November 2019): 104023, doi:10.1016/j.biosystems.2019.104023.
    5. Massey, “Searching of Code Space.”
    6. Massey, “Searching of Code Space.”
7. Ramin Amirnovin, “An Analysis of the Metabolic Theory of the Origin of the Genetic Code,” Journal of Molecular Evolution 44, no. 5 (May 1997): 473–76, doi:10.1007/PL00006170.
  • New Insights into Genetic Code Optimization Signal Creator’s Handiwork

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Oct 16, 2019

    I knew my career as a baseball player would be short-lived when, as a thirteen-year-old, I made the transition from Little League to the Babe Ruth League, which uses official Major League Baseball rules. Suddenly there were a whole lot more rules for me to follow than I ever had to think about in Little League.

    Unlike in Little League, at the Babe Ruth level the hitter and base runners have to know what the other is going to do. Usually, the third-base coach is responsible for this communication. Before each pitch is thrown, the third-base coach uses a series of hand signs to relay instructions to the hitter and base runners.


    Credit: Shutterstock

My inability to pick up the signs from the third-base coach was a harbinger of my doomed baseball career. I did okay when I was on base, but I struggled to pick up his signs when I was at bat.

The issue wasn’t that there were too many signs for me to memorize. I struggled to recognize the indicator sign.

    To prevent the opposing team from stealing the signs, it is common for the third-base coach to use an indicator sign. Each time he relays instructions, the coach randomly runs through a series of signs. At some point in the sequence, the coach gives the indicator sign. When he does that, it means that the next signal is the actual sign.

    All of this activity was simply too much for me to process. When I was at the plate, I couldn’t consistently keep up with the third-base coach. It got so bad that a couple of times the third-base coach had to call time-out and have me walk up the third-base line, so he could whisper to me what I was to do when I was at the plate. It was a bit humiliating.

    Codes Come from Intelligent Agents

The signs relayed by a third-base coach to the hitter and base runners are a type of code: a set of rules used to convert and convey information across formats.

    Experience teaches us that it takes intelligent agents, such as baseball coaches, to devise codes, even those that are rather basic in their design. The more sophisticated a code, the greater the level of ingenuity required to develop it.

    Perhaps the most sophisticated codes of all are those that can detect errors during data transmission.

    I sure could have used a code like that when I played baseball. It would have helped me if the hand signals used by the third-base coach were designed in such a way that I could always understand what he wanted, even if I failed to properly pick up the indicator signal.
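A single parity bit is the simplest example of the kind of error-detecting code just described. A short sketch (the message bits are arbitrary):

```python
def add_parity(bits: str) -> str:
    """Append a parity bit so every valid message has an even number of 1s."""
    return bits + str(bits.count("1") % 2)

def looks_corrupted(message: str) -> bool:
    """Any single flipped bit leaves an odd number of 1s, exposing the error."""
    return message.count("1") % 2 == 1

sent = add_parity("1011")        # -> '10111'
garbled = "10101"                # one bit flipped in transit
print(looks_corrupted(sent))     # False: parity checks out
print(looks_corrupted(garbled))  # True: the error is detected
```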

    The Genetic Code

    As it turns out, just such a code exists in nature. It is one of the most sophisticated codes known to us—far more sophisticated than the best codes designed by the brightest computer engineers in the world. In fact, this code resides at the heart of biochemical systems. It is the genetic code.

    This biochemical code consists of a set of rules that define the information stored in DNA. These rules specify the sequence of amino acids that the cell’s machinery uses to build proteins. In this process, information formatted as nucleotide sequences in DNA is converted into information formatted as amino acid sequences in proteins.

    Moreover, the genetic code is universal, meaning that all life on Earth uses it.1

    Biochemists marvel at the design of the genetic code, in part because its structure displays exquisite optimization. This optimization includes the capacity to dramatically curtail errors that result from mutations.

    Recently, a team from Germany identified another facet of the genetic code that is highly optimized, further highlighting its remarkable qualities.2

    The Optimal Genetic Code

    As I describe in The Cell’s Design, scientists from Princeton University and the University of Bath (UK) quantified the error-minimization capacity of the genetic code during the 1990s. Their work indicated that the universal genetic code is optimized to withstand the potentially harmful effects of substitution mutations better than virtually any other conceivable genetic code.3

    In 2018, another team of researchers from Germany demonstrated that the universal genetic code is also optimized to withstand the harmful effects of frameshift mutations—again, better than other conceivable codes.4

    In 2007, researchers from Israel showed that the genetic code is also optimized to harbor overlapping codes.5 This is important because, in addition to the genetic code, regions of DNA harbor other overlapping codes that direct the binding of histone proteins, transcription factors, and the machinery that splices genes after they have been transcribed.

    The Robust Optimality of the Genetic Code

    With these previous studies serving as a backdrop, the German research team wanted to probe more deeply into the genetic code’s optimality. These researchers focused on potential optimality of three properties of the genetic code: (1) resistance to harmful effects of substitution mutations, (2) resistance to harmful effects of frameshift mutations, and (3) capacity to support overlapping genes.

    As with earlier studies, the team assessed the optimality of the naturally occurring genetic code by comparing its performance with sets of random codes that are conceivable alternatives. For all three property comparisons, they discovered that the natural (or standard) genetic code (SGC) displays a high degree of optimality. The researchers write, “We find that the SGC’s optimality is very robust, as no code set with no optimised properties is found. We therefore conclude that the optimality of the SGC is a robust feature across all evolutionary hypotheses.”6
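The comparison methodology is simple to sketch in miniature. The toy below scores a deliberately structured 16-codon code against thousands of randomly shuffled alternatives on an invented "polarity" error measure; the real studies do the equivalent with all 64 codons, measured amino acid properties, and several optimality criteria at once.

```python
import random

rng = random.Random(1)
CODONS = [a + b for a in "ACGU" for b in "ACGU"]   # 16 toy two-letter codons
BASE = {b: i for i, b in enumerate("ACGU")}

def error_cost(code):
    """Mean squared 'polarity' change over all single-letter mutations."""
    total, n = 0.0, 0
    for codon in CODONS:
        for pos in range(2):
            for base in "ACGU":
                if base != codon[pos]:
                    mutant = codon[:pos] + base + codon[pos + 1:]
                    total += (code[codon] - code[mutant]) ** 2
                    n += 1
    return total / n

# A structured reference code: similar codons receive similar "polarities,"
# loosely mimicking how the real code groups similar amino acids together.
reference = {c: 2 * BASE[c[1]] + BASE[c[0]] // 2 for c in CODONS}
ref_cost = error_cost(reference)

# Compare against 10,000 random codes built by shuffling the same assignments.
values = list(reference.values())
better = 0
for _ in range(10_000):
    rng.shuffle(values)
    if error_cost(dict(zip(CODONS, values))) < ref_cost:
        better += 1

print(f"random codes outperforming the structured code: {better} of 10,000")
```

A structured code beats the vast majority of its shuffled rivals even in this toy setting; the published analyses report that the natural code's advantage over random alternatives holds across multiple optimality measures simultaneously.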

On top of this insight, the research team adds one other dimension to the multidimensional optimality of the genetic code: its capacity to support overlapping genes.

    Interestingly, the researchers also note that the results of their work raise significant challenges to evolutionary explanations for the genetic code, pointing to the code’s multidimensional optimality that is extreme in all dimensions. They write:

    We conclude that the optimality of the SGC is a robust feature and cannot be explained by any simple evolutionary hypothesis proposed so far. . . . the probability of finding the standard genetic code by chance is very low. Selection is not an omnipotent force, so this raises the question of whether a selection process could have found the SGC in the case of extreme code optimalities.7

    While natural selection isn’t omnipotent, a transcendent Creator would be, and could account for the genetic code’s extreme optimality.

    The Genetic Code and the Case for a Creator

    In The Cell’s Design, I point out that our common experience teaches us that codes come from minds. It’s true on the baseball diamond and true in the computer lab. By analogy, the mere existence of the genetic code suggests that biochemical systems come from a Mind—a conclusion that gains additional support when we consider the code’s sophistication and exquisite optimization.

    The genetic code’s ability to withstand errors that arise from substitution and frameshift mutations, along with its optimal capacity to harbor multiple overlapping codes and overlapping genes, seems to defy naturalistic explanation.

    As a neophyte playing baseball, I could barely manage the simple code the third-base coach used. How mind-boggling it is for me when I think of the vastly superior ingenuity and sophistication of the universal genetic code.

    And, just like the hitter and base runner work together to produce runs in baseball, the elegant design of the genetic code and the inability of evolutionary processes to account for its extreme multidimensional optimization combine to make the case that a Creator played a role in the origin and design of biochemical systems.

    With respect to the case for a Creator, the insight from the German research team hits it out of the park.


    Endnotes
    1. Some organisms have a genetic code that deviates from the universal code in one or two of the coding assignments. Presumably, these deviant codes originate when the universal genetic code evolves, altering coding assignments.
2. Stefan Wichmann and Zachary Ardern, “Optimality of the Standard Genetic Code Is Robust with Respect to Comparison Code Sets,” Biosystems 185 (November 2019): 104023, doi:10.1016/j.biosystems.2019.104023.
3. David Haig and Laurence D. Hurst, “A Quantitative Measure of Error Minimization in the Genetic Code,” Journal of Molecular Evolution 33, no. 5 (November 1991): 412–17, doi:10.1007/BF02103132; Gretchen Vogel, “Tracking the History of the Genetic Code,” Science 281, no. 5375 (July 17, 1998): 329–31, doi:10.1126/science.281.5375.329; Stephen J. Freeland and Laurence D. Hurst, “The Genetic Code Is One in a Million,” Journal of Molecular Evolution 47, no. 3 (September 1998): 238–48, doi:10.1007/PL00006381; Stephen J. Freeland et al., “Early Fixation of an Optimal Genetic Code,” Molecular Biology and Evolution 17, no. 4 (April 2000): 511–18, doi:10.1093/oxfordjournals.molbev.a026331.
    4. Regine Geyer and Amir Madany Mamlouk, “On the Efficiency of the Genetic Code after Frameshift Mutations,” PeerJ 6 (May 21, 2018): e4825, doi:10.7717/peerj.4825.
    5. Shalev Itzkovitz and Uri Alon, “The Genetic Code Is Nearly Optimal for Allowing Additional Information within Protein-Coding Sequences,” Genome Research 17, no. 4 (April 2007): 405–12, doi:10.1101/gr.5987307.
    6. Wichmann and Ardern, “Optimality.”
    7. Wichmann and Ardern, “Optimality.”
  • Is the Optimal Set of Protein Amino Acids Purposed by a Mind?

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Oct 09, 2019

    As a graduate student and a postdoc, I spent countless hours in the lab doing research. Part of my work involved performing biochemical assays—laboratory procedures designed to measure the activities of biomolecules and biochemical systems.

    To get our assays to work properly, we had to carefully design and optimize each test before executing it with exacting precision in the laboratory. Optimizing these assays was no easy feat. It could take weeks of painstaking effort to get the protocols just right.

    My experiences working in the lab taught me some important lessons that I carry with me today as a Christian apologist. One of these lessons has to do with optimization. Optimized systems don’t just happen, whether they are laboratory procedures, manufacturing operations, or well-designed objects or devices. Instead, optimization results from the insights and efforts of intelligent agents, and therefore serves as a sure indicator of intelligent design.

    As it turns out, nearly every biochemical system appears to be highly optimized. For me, this fact indicates that life stems from a Mind. And as life scientists continue to characterize biochemical systems, they keep discovering more and more examples of biochemical optimization, as recent work by a large team of collaborators working at the Earth-Life Science Institute (ELSI) in Tokyo, Japan, illustrates.1

    These researchers uncovered more evidence that the twenty amino acids encoded by the genetic code possess the optimal set of physicochemical properties. If not for these properties, it would not be possible for the cell to build proteins that could support the wide range of activities required to sustain living systems. This insight gives us important perspective into the structure-function relationships of proteins. It also has theological significance, adding to the biochemical case for a Creator.

    Before describing the ELSI team’s work and its theological implications, a little background might be helpful for some readers. For those who are familiar with basic biochemistry, just skip ahead to Why These Twenty Amino Acids?

    Background: Protein Structure

    Proteins are large, complex molecules that play a key role in virtually all of the cell’s operations. Biochemists have long known that the three-dimensional structure of a protein dictates its function. Because proteins are such large, complex molecules, biochemists categorize protein structure into four different levels: primary, secondary, tertiary, and quaternary structures.

    Figure 1: The Four Levels of Protein Structure. Image credit: Shutterstock

    • A protein’s primary structure is the linear sequence of amino acids that make up each of its polypeptide chains.
    • The secondary structure refers to short-range three-dimensional arrangements of the polypeptide chain’s backbone arising from the interactions between chemical groups that make up its backbone. Three of the most common secondary structures are the random coil, alpha (α) helix, and beta (β) pleated sheet.
    • Tertiary structure describes the overall shape of the entire polypeptide chain and the location of each of its atoms in three-dimensional space. The structure and spatial orientation of the chemical groups that extend from the protein backbone are also part of the tertiary structure.
    • Quaternary structure arises when several individual polypeptide chains interact to form a functional protein complex.

    Background: Amino Acids

    The building blocks of proteins are amino acids. These compounds are characterized by an amino group and a carboxylic acid group bound to a central carbon atom. Also bound to this carbon are a hydrogen atom and a substituent that biochemists call an R group.

    Figure 2: The Structure of a Typical Amino Acid. Image credit: Shutterstock

    The R group determines the amino acid’s identity. For example, if the R group is hydrogen, the amino acid is called glycine. If the R group is a methyl group, the amino acid is called alanine.

    Close to 150 amino acids are found in proteins. But only 19 amino acids (plus 1 imino acid, called proline) are specified by the genetic code. Biochemists refer to these 20 as the canonical set.

    Figure 3: The Protein-Forming Amino Acids. Image credit: Shutterstock

    A protein’s primary structure forms when amino acids react with each other to form a linear chain, with the amino group of one amino acid combining with the carboxylic acid of another to form an amide linkage. (Sometimes biochemists call the linkage a peptide bond.)

    Figure 4: The Chemical Linkage between Amino Acids. Image credit: Shutterstock

    The repeating amide linkages along the amino acid chain form the protein’s backbone. The amino acids’ R groups extend from the backbone, creating a distinct physicochemical profile along the protein chain for each unique amino acid sequence. To a first approximation, this unique physicochemical profile dictates the protein’s higher-order structures and, hence, its function.

    Why These Twenty Amino Acids?

    Research has revealed that the set of amino acids used to build proteins is universal. In other words, the proteins found in every organism on Earth are made up of the same canonical set.

    Biochemists have long wondered: Why these 20 amino acids?

    In the early 1980s, biochemists discovered that an exquisite molecular rationale undergirds the amino acid set used to make proteins.2 Every aspect of amino acid structure has to be precisely the way it is for life to be possible. On top of that, biochemists concluded that the set of 20 amino acids possesses “just-right” physical and chemical properties that vary evenly and uniformly across a broad range of size, charge, and hydrophobicity (aversion to water). In fact, the amino acids selected for proteins appear to form a uniquely optimal set of 20 compared to random sets of amino acids.3

    With these previous studies as a backdrop, the ELSI investigators wanted to develop a better understanding of the optimal nature of the universal set of amino acids used to build proteins. They also wanted to gain insight into the origin of the canonical set.

    To do this they used a library of 1,913 amino acids (including the 20 amino acids that make up the canonical set) to construct random sets of amino acids. The researchers varied the set sizes from 3 to 20 amino acids and evaluated the performance of the random sets in terms of their capacity to support: (1) the folding of protein chains into three-dimensional structures; (2) protein catalytic activity; and (3) protein solubility.

    They discovered that if a random set of amino acids included even a single amino acid from the canonical set, it dramatically outperformed random sets of the same size that lacked any canonical amino acids. Based on these results, the researchers concluded that each of the 20 amino acids used to build proteins stands out, possessing highly unusual properties that make it ideally suited for its biochemical role, confirming the results of previous studies.
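
    The comparison logic of this kind of study can be illustrated with a toy sketch (my own construction; the amino acid library, properties, and scoring function here are synthetic stand-ins, not the ELSI team’s data or methods). It scores an amino acid set by how broadly and evenly its members span three property axes, then asks how often random sets drawn from a large pool match an evenly spaced “canonical-like” set:

    ```python
    import random
    import statistics

    random.seed(1)

    # Synthetic stand-in for a large amino acid library: each candidate gets a
    # random (size, charge, hydrophobicity) triple in [0, 1]. The real study
    # used 1,913 chemical structures; these values are purely illustrative.
    POOL = [tuple(random.random() for _ in range(3)) for _ in range(1913)]

    def coverage_score(members):
        """Score a set by how broadly and evenly it spans each property axis,
        a crude reading of the 'broad and even coverage' criterion used in
        adaptive amino-acid-set comparisons."""
        total = 0.0
        for axis in range(3):
            values = sorted(m[axis] for m in members)
            spread = values[-1] - values[0]
            gaps = [b - a for a, b in zip(values, values[1:])]
            evenness = 1.0 - statistics.pstdev(gaps)  # equal gaps score near 1
            total += spread * evenness
        return total / 3.0

    # A "canonical-like" set: 8 members spaced evenly along every axis.
    canonical_set = [((i + 0.5) / 8,) * 3 for i in range(8)]
    canonical_score = coverage_score(canonical_set)

    # How many random sets of the same size cover the property space as well?
    trials = 5000
    as_good = sum(coverage_score(random.sample(POOL, 8)) >= canonical_score
                  for _ in range(trials))
    print(f"canonical-like set score: {canonical_score:.3f}")
    print(f"random sets at least as good: {as_good}/{trials}")
    ```

    Under this toy scoring, the evenly spaced set sits at or near the top of the random-set distribution, which is the shape of the result the actual studies report for the canonical amino acids.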

    An Evolutionary Origin for the Canonical Set?

    The ELSI researchers believe that—from an evolutionary standpoint—these results also shed light as to how the canonical set of amino acids emerged. Because of the unique adaptive properties of the canonical amino acids, the researchers speculate that “each time a CAA [canonical amino acid] was discovered and embedded during evolution, it provided an adaptive value unusual among many alternatives, and each selective step may have helped bootstrap the developing set to include still more CAAs.”4

    In other words, the researchers offer the conjecture that whenever the evolutionary process stumbled upon one of the amino acids in the canonical set and incorporated it into nascent biochemical systems, the addition offered such a significant evolutionary advantage that it became instantiated into the biochemistry of the emerging cellular systems. Presumably, as this selection process occurred repeatedly over time, members of the canonical set would be added, one by one, to the evolving amino acid set, eventually culminating in the full canonical set.

    Scientists find further support for this scenario in the following observation: some of the canonical amino acids seemingly play a more important role in optimizing smaller sets of amino acids, some in optimizing intermediate-size sets, and others in optimizing larger sets. The researchers argue that this difference may reflect the sequence by which amino acids were added to the evolving set as life emerged.

    On the surface, this evolutionary explanation is not unreasonable. But more careful consideration of the idea raises concerns. For example, just because a canonical amino acid becomes incorporated into a set of amino acids and improves its adaptive value doesn’t mean that the resulting set could produce the range of proteins with the solubility, foldability, and catalytic range needed to support life processes. Intuitively, it seems to me as a biochemist that any set of amino acids must contain a threshold number of canonical amino acids before it has the range of physicochemical properties needed to build all the proteins required to support minimal life.

    I also question this evolutionary scenario because some of the amino acids that optimize smaller sets would not have been present initially on the early Earth, because they cannot be made by prebiotic reactions. Instead, many of the amino acids that optimize smaller sets can only be generated through biosynthetic routes that must have emerged much later in any evolutionary scenario for the origin of life.5 This limitation means that the only way for some of the canonical amino acids to join the canonical set is for multistep biosynthetic routes to those amino acids to evolve first. But if the full canonical set isn’t available, it is questionable whether the proteins needed to catalyze the biosynthesis of these amino acids would exist, resulting in a chicken-and-egg dilemma.

    In light of these concerns, is there a better explanation for the highly optimized canonical set of amino acids?

    A Creator’s Role?

    The optimality of the universal set of protein amino acids finds a ready explanation if life stems from a Creator’s handiwork. As noted, optimization is an indicator of intelligent design, achieved through foresight and preplanning. Optimization requires inordinate attention to detail and careful craftsmanship. By analogy, the optimized biochemistry epitomized by the amino acid set that makes up proteins rationally points to the work of a Creator.

    Is There a Biochemical Anthropic Principle?

    This discovery also leads to another philosophical implication: It lends support to the existence of a biochemical anthropic principle.

    The ELSI researchers speculate that no matter the starting point in the evolutionary process, the pathways will all converge at the canonical set of amino acids because of the acids’ unusual adaptive properties. In other words, the amino acids that make up the universal set of protein-coding amino acids are not the outworking of an historically contingent evolutionary process, but instead seem to be fundamentally prescribed by the laws of nature. To put it differently, it appears as if the canonical set of amino acids has been preordained in some way.6 One of the study’s authors, Rudrarup Bose, suggests that “Life may not be just a set of accidental events. Rather, there may be some universal laws governing the evolution of life.”7

    Though I prefer to see the origin of life as a creation event, it is important to recognize that even if one were to adopt an evolutionary perspective on life’s origin, it looks as if a Mind is responsible for rigging the process toward a predetermined endpoint. It looks as if a Mind purposed for life to be present in the universe and structured the laws of nature so that, in this case, the uniquely optimal canonical set of amino acids would inevitably emerge.

    Along these lines, it is remarkable to think that the canonical set of amino acids has the precise properties needed for life to exist. This “coincidence” is eerie, to say the least. As a biochemist, I interpret this coincidence as evidence that our universe has been designed for a purpose. It is provocative to think that regardless of one’s perspective on the origin of life, the evidence converges toward a single conclusion: namely that life manifests from an intelligent agent—God.

    Resources

    The Optimality of Biochemical Systems

    The Biochemical Anthropic Principle

    Endnotes
    1. Melissa Ilardo et al., “Adaptive Properties of the Genetically Encoded Amino Acid Alphabet Are Inherited from Its Subset,” Scientific Reports 9 (August 28, 2019): 12468, doi:10.1038/s41598-019-47574-x.
    2. Arthur L. Weber and Stanley L. Miller, “Reasons for the Occurrence of the Twenty Coded Protein Amino Acids,” Journal of Molecular Evolution 17, no. 5 (September 1981): 273–84, doi:10.1007/BF01795749; H. James Cleaves II, “The Origin of the Biologically Coded Amino Acids,” Journal of Theoretical Biology 263, no. 4 (April 2010): 490–98, doi:10.1016/j.jtbi.2009.12.014.
    3. Gayle K. Philip and Stephen J. Freeland, “Did Evolution Select a Nonrandom ‘Alphabet’ of Amino Acids?” Astrobiology 11, no. 3 (April 2011): 235–40, doi:10.1089/ast.2010.0567; Matthias Granold et al., “Modern Diversification of the Amino Acid Repertoire Driven by Oxygen,” Proceedings of the National Academy of Sciences, USA 115, no. 1 (January 2, 2018): 41–46, doi:10.1073/pnas.1717100115.
    4. Ilardo et al., “Adaptive Properties.”
    5. J. Tze-Fei Wong and Patricia M. Bronskill, “Inadequacy of Prebiotic Synthesis as Origin of Proteinous Amino Acids,” Journal of Molecular Evolution 13, no. 2 (June 1979): 115–25, doi:10.1007/BF01732867.
    6. Tokyo Institute of Technology, “Scientists Find Biology’s Optimal ‘Molecular Alphabet’ May Be Preordained,” ScienceDaily, September 10, 2019, http://www.sciencedaily.com/releases/2019/09/190910080017.htm.
    7. Tokyo Institute, “Scientists Find.”
  • Can Dinosaurs Be Resurrected from Extinction?

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Sep 25, 2019

    If you could visit a theme park that offered you a chance to view and even interact with real-life dinosaurs, would you go? I think I might. Who wants to swim with dolphins when you can hang out with dinosaurs? Maybe even ride one?

    Well, if legendary paleontologist Jack Horner has his way, we just might get our wish—and it could be much sooner than any of us realize. Horner is a champion of the scientific proposal to resurrect dinosaurs from extinction. And it looks like this idea might have a real chance at success.

    Horner’s not taking the “Jurassic Park/World” approach of trying to clone dinosaurs from ancient DNA (which won’t work for myriad technical reasons). He wants to transform birds into dinosaur-like creatures by experimentally manipulating their developmental processes in a laboratory setting.

    The Evolutionary Connection between Birds and Dinosaurs

    The basis for Horner’s idea rises out of the evolutionary paradigm. Most paleontologists think that birds and dinosaurs share an evolutionary history. These scientists argue that shared anatomical features (a key phrase we’ll return to) between birds and certain dinosaur taxa demonstrate their evolutionary connection. Currently, paleontologists place dinosaurs into two major groups: avian and nonavian dinosaurs. Accordingly, paleontologists think that birds are the evolutionary descendants of dinosaurs.

    So, if Horner and others are successful, what does this mean for creation? For evolution?

    Reverse Evolution

    In effect, Horner and other interested scientists seek to reverse what they view as the evolutionary process, converting birds into an evolutionarily ancestral state. Dubbed reverse evolution, this approach will likely become an important facet of paleontology in the future. Evolutionary biologists believe that they can gain understanding of how biological transformations took place during life’s history by experimentally reverting organisms to their ancestral state. Reverse evolution experiments fuse insights from paleontology with those from developmental biology, molecular biology, comparative embryology, and genomics. Many life scientists are excited, because, for the first time, researchers can address questions in evolutionary biology using an experimental strategy.

    Proof-of-Principle Studies

    The first bird that researchers hope to reverse-evolve into a dinosaur-like creature is the chicken (Gallus gallus). This makes sense. We know a whole lot about chicken biology, and life scientists can leverage this understanding to precisely manipulate the embryonic progression of chicks so that they develop into dinosaur-like creatures.

    As I described previously (see Resources for Further Exploration), in 2015 researchers from Harvard and Yale Universities moved the scientific community one step closer to creating a “chickenosaurus” by manipulating chickens in ovo to develop snout-like structures, instead of beaks, just like dinosaurs.1

    Now, two additional proof-of-principle studies demonstrate the feasibility of creating a chickenosaurus. Both studies were carried out by a research team from the Universidad de Chile.

    In one study, the research team coaxed chicken embryos to develop a dinosaur-like foot structure, instead of the foot structure characteristic of birds.2 A bird’s foot has a perching digit that points in the backward direction, in opposition to the other toes. The perching digit allows birds to grasp. In contrast, the corresponding toe in dinosaurs is nonopposable, pointing forward.

    Figure 1: Dinosaur Foot Structure. Image credit: Shutterstock

    Figure 2: Bird Foot Structure. Image credit: Shutterstock

    The researchers took advantage of the fact that vertebrate skeletons are plastic, meaning that their structure can be altered by muscle activity. These types of skeletal alterations most commonly occur during embryonic and juvenile stages of growth and development.

    Investigators discovered that muscle activity causes the perching toe of birds to reorient during embryonic development from originally pointing forward to adopting an opposable orientation. Specifically, the activity of three muscles (flexor hallucis longus, flexor hallucis brevis, and musculus extensor hallucis longus) creates torsion that twists the first metatarsal, forcing the perching digit into the opposable position.

    The team demonstrated that by injecting the compound decamethonium bromide into a small opening in the eggshell just before the torsional twisting of the first metatarsal takes place, they could prevent this foot bone from twisting. The compound causes muscle paralysis, which limits the activity of the muscles that cause the torsional stress on the first metatarsal. The net result: the chick developed a dinosaur-like foot structure.

    In a second study, the same research team manipulated the development of chicken embryos to produce a dinosaur-like leg structure.3 The lower legs of vertebrates consist of two bones: the tibia and the fibula. In most vertebrates, the fibula is shaped like a tube, extending all the way to the ankle. In birds, the fibula is shorter than the tibia and has a spine-like morphology (think chicken drumsticks).

    Figure 3: The Lower Leg of a Chicken. Image credit: Shutterstock

    Universidad de Chile researchers discovered that the gene encoding the Indian Hedgehog protein becomes active at the distal end of the fibula during embryonic development of the lower leg in chicks, causing the growth of the fibula to cease. They also learned that the event triggering the increased activity of the Indian Hedgehog gene likely relates to the depletion of the Parathyroid Hormone-Related Protein near the distal end of the fibula. This protein plays a role in stimulating bone growth.

    The researchers leveraged this insight to experimentally create a chick with dinosaur-like lower legs. Specifically, they injected the amniotic region of the chicken embryo with cyclopamine, a compound that inhibits the activity of Indian Hedgehog. The injection altered fibula development so that the bone grew to the same length as the tibia and contacted the ankle, just as in dinosaurs.

    These two recent experiments on foot and leg structure, along with the previous one on snout structure, represent science at its best. While the experiments reside at the proof-of-principle stage, they still give scientists like Jack Horner reason to think that we just might be able to resurrect dinosaurs from extinction one day. These experiments also raise scientific and theological questions.

    Do Studies in Reverse Evolution Support the Evolutionary Paradigm?

    On the surface, these studies seemingly make an open-and-shut case for the evolutionary origin of birds. It is impressive that researchers can rewind the tape of life and convert chickens into dinosaur-like creatures.

    But deeper reflection points in a different direction.

    All three studies highlight the amount of knowledge and insight about the developmental process required to carry out the reverse evolution experiments. The ingenious strategy the researchers employed to alter the developmental trajectory is equally impressive. They had to precisely time the addition of chemical agents at the just-right levels in order to influence muscle activity in the embryo’s foot or gene activity in the chick’s developing lower legs. Recognizing the knowledge, ingenuity, and skill required to alter embryological development in a coherent way that results in a new type of creature forces the question: Is it really reasonable to think that unguided, historically contingent processes could carry out such transformations when small changes in development can have profound effects on an organism’s anatomy?

    It seems that the best the evolutionary process could achieve would be the generation of “monsters” with little hope of survival. Why? Because evolutionary mechanisms can only change gene expression patterns in a random, haphazard manner. I would contend that the coherent, precisely coordinated genetic changes needed to generate one biological system from another signals a Creator’s handiwork, not undirected evolutionary mechanisms, as the explanation for life’s history.

    Can a Creation Model Approach Explain the Embryological Similarities?

    Though the work in reverse evolution seems to fit seamlessly within an evolutionary framework, observations from these studies can be explained from a creation model perspective.

    Key to this explanation is the work of Sir Richard Owen, a preeminent biologist who preceded Charles Darwin. In contemporary biology, scientists view shared features possessed by related organisms as evidence of common ancestry. Birds and theropod dinosaurs would be a case in point. But for Owen, shared anatomical features reflected an archetypal design that originated in the Mind of the First Cause. Toward this end, the anatomical features shared by birds and theropods can be understood as reflecting common design, not common descent.

    Though few biologists embrace Owen’s ideas today, it is important to note that his ideas were not tried and found wanting. They simply were abandoned in favor of Darwin’s theory, which many biologists preferred because it provided a mechanistic explanation for life’s history and the origin of biological systems. In fact, Darwin owes a debt of gratitude to Owen’s thinking. Darwin coopted the idea of the archetype, but then replaced the canonical blueprint that existed in the Creator’s Mind (per Owen) with a hypothetical common ancestor.

    This archetypal approach to biology can account for the results of reverse-evolution studies. Accordingly, the researchers have discovered differences in the developmental program that produce variations on the archetype, yielding the differences between modern birds and long-extinct dinosaurs.

    The idea of the archetype can extend to embryonic growth and development. One could argue that the Creator appears to have developed a core (or archetypal) developmental algorithm that can be modified to yield disparate body plans. From a creation model standpoint, then, the researchers from Harvard and Yale Universities and the Universidad de Chile didn’t reverse the evolutionary process. They unwittingly reverse-engineered a dinosaur-like developmental algorithm from a bird-like developmental program.

    Why Would God Create Using the Same Design Templates?

    There may well be several reasons why a Creator would design living systems around a common set of templates. In my estimation, the most significant reason is discoverability.

    Shared anatomical and physiological features, as well as shared features of embryological development, make it possible to apply what we learn by studying one organism to others. This shared developmental program makes it possible to use our understanding of embryological growth and development to reengineer a bird into a dinosaur-like creature. Discoverability makes it easier to appreciate God’s glory and grandeur, as evinced in biochemical systems by their elegance, sophistication, and ingenuity.

    Discoverability also reflects God’s providence and care for humanity. If not for the shared features, it would be nearly impossible for us to learn enough about the living realm for our benefit. Where would biomedical science be without the ability to learn fundamental aspects about our biology by studying model organisms such as chickens? And where would our efforts to re-create dinosaurs be if not for the biological designs they share with birds?

    Resources for Further Exploration

    Reverse Evolution

    Shared Biological Designs and the Creation Model

    Endnotes
    1. Bhart-Anjan S. Bhullar et al., “A Molecular Mechanism for the Origin of a Key Evolutionary Innovation, the Bird Beak and Palate, Revealed by an Integrative Approach to Major Transitions in Vertebrate History,” Evolution 69, no. 7 (2015): 1665–77, doi:10.1111/evo.12684.
    2. João Francisco Botelho et al., “Skeletal Plasticity in Response to Embryonic Muscular Activity Underlies the Development and Evolution of the Perching Digit of Birds,” Scientific Reports 5 (May 14, 2015): 9840, doi:10.1038/srep09840.
    3. João Francisco Botelho et al., “Molecular Developments of Fibular Reduction in Birds and Its Evolution from Dinosaurs,” Evolution 70, no. 3 (March 2016): 543–54, doi:10.1111/evo.12882.
  • Primate Thanatology and the Case for Human Exceptionalism

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Sep 18, 2019

    I will deliver this people from the power of the grave;
    I will redeem them from death.
    Where, O death, are your plagues?
    Where, O grave, is your destruction?

    Hosea 13:14

    It was the first time someone I knew died. I was in seventh grade. My classmate’s younger brother and two younger sisters perished in a fire that burned his family’s home to the ground. We lived in a small rural town in West Virginia at the time. Everyone knew each other and the impact of that tragedy reverberated throughout the community.

    I was asked to be a pallbearer at the funeral. To this day, I remember watching my friend’s father with a cast on one arm and another on one of his legs, hobble up to each of the little caskets to touch them one last time as he sobbed uncontrollably right before we lifted and carried the caskets to the waiting hearses.

    Death is part of life and our reaction to death is part of what makes us human. But, are humans unique in this regard?

    Funerary Practices

    Human responses to death include funerary practices—ceremonies that play an integral role in the final disposition of the body of the deceased.

    Anthropologists who study human cultures see funerals as providing important scientific insight into human nature. These scientists define funerals as cultural rituals designed to honor, remember, and celebrate the life of those who have died. Funerals provide an opportunity for people to express grief, mourn loss, offer sympathy, and support the bereaved. Funerals also often serve a religious purpose that includes (depending on the faith tradition) praying for the person who has died and helping his or her soul transition to the afterlife (or to reincarnate).

    Funerary Practices and Human Exceptionalism

    For many anthropologists, human funerary practices are an expression of our capacities for:

    • symbolism
    • open-ended generative manipulation of symbols
    • theory of mind
    • complex, hierarchical social interactions

    Though the idea of human exceptionalism is controversial within anthropology today, a growing minority of anthropologists argue that the combination of these qualities sets us apart from other creatures. They make us unique and exceptional.

    As a Christian, I view this set of qualities as scientific descriptors of the image of God. That being the case, from my vantage point, human funerary practices (along with language, music, and art) are part of the body of evidence that we can marshal to make the case that human beings uniquely bear God’s image.

    What about Neanderthals?

    But are human beings really unique and exceptional?

    Didn’t Neanderthals bury their dead? Didn’t these hominins engage in funerary practices just like modern humans do?

    If the answer to these questions is yes, then for some people it undermines the case for human uniqueness and exceptionalism and, along with it, the scientific case for the image of God. If Neanderthal funerary practices flow out of the capacity for symbolism, open-ended generative capacity, etc., then it means that Neanderthals must have been like us. They must have been exceptional, too, and humans don’t stand apart from all other creatures on Earth, as the Scriptures teach.

    Did Neanderthals Bury Their Dead?

    But could these notions about Neanderthal exceptionalism be premature? Although there is widespread belief that Neanderthals buried their dead in a ritualistic manner, and even though this claim is attested in the scientific literature, a growing body of archeological evidence challenges this view.

    Many anthropologists question if Neanderthal burials were in fact ritualistic. (If they weren’t, then it most likely indicates that these hominins didn’t have a concept of the afterlife—a concept that requires symbolism and open-ended generative capacities.) Others go so far as to question if Neanderthals buried their dead at all. (For an in-depth discussion of the scientific challenges to Neanderthal burials, see the Resources section below.)

    Were Neanderthal Burials an Evolutionary Precursor to Human Funerary Practices?

    It is not unreasonable to think that these hominins may well have disposed of corpses and displayed some type of response when members of their group died. Over the centuries, keen observers (including primatologists, most recently) have documented nonhuman primates inspecting, protecting, retrieving, carrying, and dragging the dead bodies of members of their groups.1 In light of these observations, it makes sense to think that Neanderthals may have done something similar.

    While it doesn’t appear that Neanderthals responded to death in the same way we do, it is tempting (within the context of the evolutionary paradigm) to view Neanderthal behavior as an evolutionary stepping-stone to the funerary practices of modern humans.

    But, is this transitional view the best explanation for Neanderthal burials—assuming that these hominins did, indeed, dispose of group members’ corpses? Research in thanatology (the study of dying and death) among nonhuman primates holds the potential to shed light on this question.

    The Nonhuman Primate Response to Death

    Behavioral evolution researchers André Gonçalves and Susana Carvalho recently reviewed studies in primate thanatology—categorizing and interpreting the way these creatures respond to death. In the process, they sought to explain the role the death response plays among various primate groups.

    Figure 1: Monkey Sitting over the Body of a Deceased Relative. Image credit: Shutterstock

    When characterizing the death response of nonhuman primates, Gonçalves and Carvalho group the behaviors of these creatures into two categories: (1) responses to infant deaths and (2) responses to adult deaths.

    In most primate taxa (classified groups), when an infant dies, the mother will carry the dead baby for days, often grooming the corpse and swatting away flies. Eventually, she will abandon it. Depending on the taxon, in some instances young females will carry the infant’s remains for a few days after the mother abandons it. Most other members in the group ignore the corpse. At times, they will actively avoid both mother and corpse when the stench becomes overwhelming.

    Figure 2: Baboon Mother with a Child. Image credit: Shutterstock

    The death of an adult member of the group tends to elicit a much more pervasive response than does the death of an infant. The specific nature of the response depends upon the taxon and also on other factors, such as: (1) the bond between individual members of the group and the deceased; (2) the social status of the deceased; and (3) the group structure of the particular taxon. Typically, the closer the bond between the deceased and the group member, the longer the duration of the death response. The same is true if the deceased is a high-ranking member of the group.

    Often the death response includes vocalizations that connote alarm and distress. Depending on the taxon, survivors may hit and pull at the corpse, as if trying to rouse it. Other times, it appears that survivors hit the corpse out of frustration. Sometimes group members will sniff at the corpse or peer at it. In some taxa, survivors will groom the corpse or stroke it gently while swatting away flies. In other taxa, survivors will stand vigil over the corpse, guarding it from scavengers.

    In some instances, survivors return to the corpse and visit it for days. After the corpse is disposed of, group members may continue to visit the site for quite some time. In other taxa, group members may avoid the death site. Both behaviors indicate that group members understand that an event of great importance to the group took place at the site where a member died.

    Are Humans and Nonhuman Primates Different in Degree? Or Kind?

    It is clear that nonhuman primates have an awareness of death and, for some primate taxa, it seems as if members of the group experience grief. Some anthropologists and primatologists see this behavior as humanlike. It’s easy to see why. We are moved by the anguish and confusion these creatures seem to experience when one of their group members dies.

    For the most part, these scientists would agree that the human response to death is more complex and sophisticated. Yet, they see human behavior as differing only in degree rather than kind when compared to other primates. Accordingly, they interpret primate death awareness as an evolutionary antecedent to the sophisticated funerary practices of modern humans, with Neanderthal behavior part of the trajectory. And for this reason, they maintain that human beings really aren’t unique or exceptional.

    The Trouble with Anthropomorphism

    One problem with this conclusion (even within an evolutionary framework) is that it fails to account for the human tendency toward anthropomorphism. As part of our human nature, we possess theory of mind. We recognize that other human beings have minds like ours. And because of this capability, we know what other people are thinking and feeling. But, we don’t know how to turn this feature on and off. As a result, we also apply theory of mind to animals and inanimate objects, attributing humanlike behaviors and motivations to them, though they don’t actually possess these qualities.

    British ethologist Marian Stamp Dawkins argues in her book Why Animals Matter that scientists studying animal behavior fall victim to the tendency to anthropomorphize just as easily as the rest of us. Too often, researchers interpret experimental results from animal behavioral studies and from observations of animal behavior in captivity and the wild in terms of human behavior. When they do, these researchers ascribe human mental experiences—thoughts and feelings—to animals. Dawkins points out that when investigators operate this way, it leads to untestable hypotheses because we can never truly know what occurs in animal minds. Moreover, Dawkins argues that we tend to prefer anthropomorphic interpretations to other explanations. She states, “Anthropomorphism tends to make people go for the most human-like explanation and ignore the other less exciting ones.”2

    A lack of awareness of our tendency toward anthropomorphism raises questions about the all-too-common view that the death response of nonhuman primates—and Neanderthals—is humanlike and an evolutionary antecedent to modern human funerary practices. This is especially true in light of the explanation offered by Gonçalves and Carvalho for the death response in primates.

    The two investigators argue that the response of mothers to the death of their infants is actually maladaptive (from an evolutionary perspective). Carrying around dead infants and caring for them is energetically costly and hinders the mother’s locomotion. Both consequences render mothers vulnerable to predators. The pair explain this behavior by arguing that the mother’s response to the death of her infant falls on the continuum of caretaking behavior and can be seen as a trade-off. In other words, nonhuman primate mothers who have a strong instinct to care for their offspring will ensure the survival of their infants. But if an infant dies, the instinct is so strong that they will continue to care for it after its death.

    Gonçalves and Carvalho also point out that the death response toward adult members of the group plays a role in establishing new group dynamics. Depending on the primate taxon, the death of members shifts the group’s hierarchical structure. This being the case, it seems reasonable to think that the death response helps group members adjust to the new group structure as survivors take on new positions in the hierarchy.

    Finally, as Dawkins argues, we can’t know what takes place in the minds of animals. Therefore, we can’t legitimately attribute human mental experiences to animals. So, while it may seem to us as if some nonhuman primates experience grief as part of the death response, how do we know that this is actually the case? Evidence for grief often consists of loss of appetite and increased vocalizations. Though these changes occur in response to the death of a group member, there may be other explanations for these behaviors that have nothing to do with grief at all.

    Death Response in Nonhuman Primates and Neanderthals

    Study of primate thanatology also helps us to put Neanderthal burial practices (assuming that these hominins buried dead group members) into context. Often, when anthropologists interpret Neanderthal burials (from an evolutionary perspective), they are comparing these practices to human funerary practices. This comparison makes it seem like Neanderthal burials are part of an evolutionary trajectory toward modern human behavior and capabilities.

    But what if the death response of nonhuman primates is factored into the comparison? When we add a second endpoint, we find that the Neanderthal response to death clusters more closely to the responses displayed by nonhuman primates than to those of modern humans. And as remarkable as the death response of nonhuman primates may be, it is categorically different from modern human funerary practices. To put it another way, modern human funerary practices reflect our capacity for symbolism, open-ended manipulation of symbols, theory of mind, and the like. In contrast, the death response of nonhuman primates and hominins, such as Neanderthals, seems to serve utilitarian purposes. So, it isn’t the presence or absence of the death response that determines our exceptional nature. Instead, it is a death response shaped by our capacity for symbolism and open-ended generativity that sets us apart.

    Modern humans really do seem to stand apart compared to all other creatures in a way that aligns with the biblical claim that human beings uniquely possess and express the image of God.

    RTB’s biblical creation model for human origins, described in Who Was Adam?, views hominins such as Neanderthals as creatures created by God’s divine fiat that possess intelligence and emotional capacity. These animals were able to employ crude tools and even adopt some level of “culture,” much like baboons, gorillas, and chimpanzees. But they were not spiritual beings made in God’s image. That position—and all of the intellectual, relational, and symbolic capabilities that come with it—remains reserved for modern humans alone.

    Resources for Further Exploration

    Did Neanderthals Bury Their Dead?

    Nonhuman Primate Behavior

    Problem-Solving in Animals and Human Exceptionalism

    Endnotes
    1. André Gonçalves and Susana Carvalho, “Death among Primates: A Critical Review of Nonhuman Primate Interactions towards Their Dead and Dying,” Biological Reviews 94, no. 4 (April 4, 2019), doi:10.1111/brv.12512.
    2. Marian Stamp Dawkins, Why Animals Matter: Animal Consciousness, Animal Welfare, and Human Well-Being (New York: Oxford University Press, 2012), 30.
  • Simple Biological Rules Affirm Creation

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Sep 04, 2019

    “Biology is the study of complicated things that give the appearance of having been designed for a purpose.”
    –Richard Dawkins

    To say that biological systems are complicated is an understatement.

    When I was in college, I had some friends who avoided taking courses in the life sciences because of the complexity of biological systems. On the other hand, I found the complexity alluring. It’s what drew me to biochemistry. I love to immerse myself in the seemingly never-ending intricacies of biomolecular systems and try to make sense of them.

    Perhaps nothing exemplifies the daunting complexity of biochemistry more than intermediary metabolism.

    Order in the Midst of Biochemical Complexity

    I remember a conversation I had years ago with a first-year graduate student who worked in the same lab as me when I was a postdoc at the University of Virginia. He was complaining about all the memorization he had to do for the course he was taking on intermediary metabolism. How else was he going to become conversant with all the different metabolic routes in the cell?

    I told him that he was approaching his classwork in the wrong way. Despite the complexity and chemical diversity of the metabolic pathways in the cell, a set of principles exists that dictates the architecture and operation of metabolic routes. I encouraged my lab mate to learn these principles because, once he did, he would be able to use them to write out all of the metabolic routes with minimal memorization.

    These principles make sense of the complexity of intermediary metabolism. Are there similar rules that make sense of biological diversity and complexity?

    Rules Govern Biological Systems

    As it turns out, the insight I offered my lab mate may well have been prescient.

    The idea that a simple set of principles—rules, if you will—accounts for the complexity and diversity of biological systems may be more widespread than life scientists fully appreciate. At least it appears this way based on work carried out recently by researchers from Duke University.1 These investigators discovered a simple rule that predicts the behavior of mutually beneficial symbiotic relationships (mutualism) in ecosystems. Mutualistic interactions play an important and dominant role in ecosystem stability.

    The Duke University scientists’ accomplishment represents a significant milestone. Lingchong You, one of the study’s authors, points out the difficulty of finding rules that govern all biological systems:

    “In a perfect world, you’d be able to follow a simple set of molecular rules to understand how every biological system operated. But, in reality, it’s difficult to establish rules that encompass the immense diversity and complexity of biological systems. Even when we do establish general rules, it’s still challenging to use them to explain and quantify various physical properties.”2

    Yet, You and his collaborators have done just that for mutualism. Their insight moves biology closer to physics and chemistry where simple rules can account for the physical world. Their work holds the potential to open up new vistas in the life sciences that can lead to a deeper, more fundamental understanding of biological systems.

    In fact, the researchers think that simple rules dictating the operation of biological systems may not be an unusual feature of mutualistic interactions but may apply more broadly. They write, “Beyond establishing another simple rule . . . we also demonstrated that one can purposefully seek an appropriate abstraction level where a simple unifying rule emerges over system diversity.”3

    If the Duke University scientists’ insight generally applies to biological systems, it has interesting theological implications. If biological systems do, indeed, conform to a simple set of rules, it becomes more reasonable to think that a Creator played a role in the origin, history, and design of life.

    I’ll explain how in a moment, but first let’s take a look at some details of the Duke University investigators’ work.

    Mutualism and Ecosystem Stability

    Biological organisms often form symbiotic relationships. When these relationships benefit all of the organisms involved, it is called mutualism. These mutualistic relationships are vital to ecosystems and they directly and indirectly benefit humanity. For example, coral reefs depend on mutualistic interactions between coral and algae. In turn, reefs provide habitats for a diverse ensemble of organisms that support human life and flourishing.

    Unfortunately, mutualistic systems can collapse when one or more of the partners experiences stress or disappears from the ecosystem. A disruption in a relationship can lead to the loss of other members of the ecosystem, thereby altering the ecosystem’s composition and opening up niches for invading organisms. Sadly, this type of collapse is happening in coral reefs around the world today.

    Mutualism Can Be Explained by a Simple Rule

    To gain insight into the rules that dictate ecosystem stability and predict collapse (due to a loss of mutualistic relationships), the Duke University researchers sought to develop a framework that would allow them to determine the outcome of mutualistic interactions. For this predictive framework, the scientists wrote 52 mathematical equations, each one describing one of the various forms of mutualism. These equations were based on a simple biological logic: mutualism consists of two or more populations of organisms that, at some cost (C), produce a benefit (B) that reduces the stress (S) the partners experience.

    Mathematical analysis of these equations allowed the researchers to discover a simple inequality that governs the transition from coexistence to collapse. As it turns out, mutualistic interactions remain stable when B > S, and they collapse when this inequality is not observed. Though intuitive, it is still remarkable that this simple relationship dictates the behavior of all types of mutualism.
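
    The inequality can be illustrated with a minimal toy model (my own sketch, not the authors’ 52-equation framework): two obligate mutualists whose only positive growth input is a saturating benefit B supplied by the partner, offset by a lumped stress term S (cost is folded into S here for simplicity). Whether the pair coexists or collapses then tracks whether B exceeds S:

    ```python
    def simulate(B, S, steps=20000, dt=0.01, h=0.5, d=0.2):
        """Euler-integrate a toy two-population mutualism. Each population's
        per-capita growth is a saturating benefit from its partner minus a
        lumped stress term and simple crowding."""
        n1 = n2 = 1.0  # starting densities, above the model's Allee-type threshold
        for _ in range(steps):
            g1 = B * n2 / (n2 + h) - S - d * n1
            g2 = B * n1 / (n1 + h) - S - d * n2
            n1 = max(n1 + n1 * g1 * dt, 0.0)
            n2 = max(n2 + n2 * g2 * dt, 0.0)
        return n1, n2

    for B, S in [(1.0, 0.4), (1.0, 1.2)]:  # benefit > stress, then stress > benefit
        n1, n2 = simulate(B, S)
        outcome = "coexistence" if min(n1, n2) > 1e-3 else "collapse"
        print(f"B={B}, S={S}: populations -> ({n1:.3f}, {n2:.3f}) [{outcome}]")
    ```

    In this sketch the first parameter pair settles at a stable positive equilibrium while the second decays to extinction, mirroring the B > S rule (with the caveat that, in saturating models like this one, populations starting below a threshold density can collapse even when B > S).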

    The researchers learned that determining the value of S is relatively straightforward. On the other hand, quantifying B proves to be a challenge due to the large number of variables such as temperature, nutrient availability, genetic variation, etc., that influence mutualistic interactions. To work around this problem, the researchers developed a machine-learning algorithm that could calculate B using the input of a large number of variables.
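
    The general shape of that workaround might look like the following sketch, which stands in for, and should not be mistaken for, the team’s actual algorithm or input features: a standard regressor is trained on synthetic measurements so that B can be predicted for new conditions.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    # Synthetic training data: temperature, nutrient level, genetic variation.
    X = rng.uniform(size=(500, 3))
    # Hypothetical "measured" benefit, a noisy nonlinear function of conditions.
    y = 2.0 * X[:, 0] * X[:, 1] + 0.5 * np.sin(3.0 * X[:, 2]) + rng.normal(0, 0.05, 500)

    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
    B_estimate = model.predict([[0.7, 0.4, 0.9]])[0]  # benefit under new conditions
    print(f"estimated benefit B: {B_estimate:.2f}")
    ```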

    This work has obvious importance for ecologists as ecosystems all over the planet face collapse. Beyond that, it has important theological implications when we recognize that a simple mathematical equation governs the behavior of mutualistic relationships among organisms.

    Let me explain.

    The Case for a Creator

    From my vantage point, one of the most intriguing aspects of our universe is its intelligibility and our capacity as human beings to make sense of the world around us—quite often, through the use of simple rules we have discovered. Along these lines, it is even more remarkable that the universe and its phenomena can be described using mathematical relationships, which reflects an underlying rationale to the universe itself.

    For most of the history of science, the discovery and exploration of the mathematical nature of the universe has been confined to physics and, to a lesser extent, chemistry. Because of the complexity and diversity of biological systems, many people working in the life sciences have questioned if simple mathematical rules exist in biology and could ever be discovered.

    But the discovery of a simple rule that predicts the behavior of mutualistic relationships in ecosystems suggests that mathematical relationships do describe and govern biological phenomena. And, as the researchers point out, their discovery may turn out to be the rule rather than the exception.

    From my perspective, a universe governed by mathematical relationships suggests that a deep, underlying rationale undergirds nature, which is precisely what I would expect if a Mind was behind the universe. To put it differently, if a Creator was responsible for the universe, as a Christian, I would expect that mathematical relationships would define the universe’s structure and function. In like manner, if the origin and design of living systems originated from a Creator, it would make sense that biological systems would possess an underlying mathematical structure as well—though it might be hard for us to discern these relationships because of the systems’ complexity.

    Figure: The Mathematical Universe. Image credit: Shutterstock.

    The mathematical structure of the universe—and maybe even of biology—makes the world around us intelligible. And intelligibility is precisely what we would expect if the universe and everything in it were the products of a Creator—one who desired to make himself known to us through the creation (Romans 1:20). It is also what we would expect if human beings were made in God’s image (as Scripture describes), with the capacity to discern God’s handiwork in the world around us.

    A Case against Materialism

    But what if humans—including our minds—were cobbled together by evolutionary processes? Why would we expect human beings to be capable of making sense of the world around us? For that matter, why would we expect the universe—including the biological realm—to adhere to mathematical relationships?

    In other words, the mathematical undergirding of nature fits better in a theistic conception of reality than one rooted in materialism. And toward that end, the discovery by the Duke University investigators points to God’s role in the origin and design of life.

    Is There a Biological Anthropic Principle?

    As the Duke University scientists show, the discovery of a simple mathematical relationship describing the behavior of mutualistic interactions in ecosystems suggests that these types of relationships may be more commonplace than most life scientists thought or imagined. (See Biochemical Anthropic Principle in the Resources section.)

    This discovery also suggests that a cornerstone feature of ecosystems—mutualistic relationships—is not the haphazard product of evolutionary history. Instead, scientists observe a process fundamentally dictated and constrained by the laws of nature as revealed in the simple mathematical rule that describes the behavior of these systems. We can infer that mutualism within ecosystems may not be the outworking of chance events—the consequence of a historically contingent evolutionary process. Rather, these relationships appear to be fundamentally prescribed by the design of the universe. In other words, mutualism in ecosystems is inevitable in a universe like ours.

    For me, it is eerie to think that mutualism, which appears to be specified by the laws of nature, is precisely what is needed to maintain stable ecosystems. The universe appears to be structured in a just-right way so that stable ecosystems result. If the universe was any other way, then mutualism wouldn’t exist nor would ecosystems.

    One way to interpret this “coincidence” is to view it as evidence that our universe has been designed for a purpose. And purpose must come from a Mind—namely, God.

    Resources

    The Argument from Math and Beauty

    Designed for Discovery

    The Biochemical Anthropic Principle

    The Design of Intermediary Metabolism

    Endnotes
    1. Feilun Wu et al., “A Unifying Framework for Interpreting and Predicting Mutualistic Systems,” Nature Communications 10 (2019): 242, doi:10.1038/s41467-018-08188-5.
    2. Duke University, “Simple Rules Predict and Explain Biological Mutualism,” ScienceDaily (January 16, 2019), https://www.sciencedaily.com/releases/2019/01/190116110941.htm.
    3. Wu et al., “A Unifying Framework.”
  • ATP Transport Challenges the Evolutionary Origin of Mitochondria

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Aug 21, 2019

    In high school, I spent most Sunday mornings with my family gathered around the TV watching weekly reruns of the old Abbott and Costello movies.

    Image: Bud Abbott and Lou Costello. Image credit: Wikipedia

    One of my favorite routines has the two comedians trying to help a woman get her parallel-parked car out of a tight parking spot. As Costello takes his place behind the wheel, Abbott tells him to “Go ahead and back up.” And of course, confusion and hilarity follow as Costello repeatedly tries to clarify if he is to “go ahead” or “back up,” finally yelling, “Will you please make up your mind!”

    As it turns out, biologists who are trying to account for the origin of mitochondria (through an evolutionary route) are just as confused about directions as Costello. Specifically, they are trying to determine which direction ATP transport occurred in the evolutionary precursors to mitochondria (referred to as pre-mitochondria).

    In an attempt to address this question, a research team from the University of Virginia (UVA) has added to the frustration, raising new challenges for evolutionary explanations for the origin of mitochondria. Their work threatens to drive the scientific community off the evolutionary route into the ditch when it comes to explaining the origin of eukaryotic cells.1

    To fully appreciate the problems this work creates for the endosymbiont hypothesis, a little background is in order. (For those familiar with the evidence for the endosymbiont hypothesis, you may want to skip ahead to The Role of Mitochondria.)

    The Endosymbiont Hypothesis

    Most biologists believe that the endosymbiont hypothesis serves as the best explanation for the origin of complex cells.

    According to this idea, complex cells originated when symbiotic relationships formed among single-celled microbes after free-living bacterial and/or archaeal cells were engulfed by a “host” microbe.

    The “poster children” of the endosymbiont hypothesis are mitochondria. Presumably, the mitochondrion started its evolutionary journey as an endosymbiont. Evolutionary biologists believe that once engulfed by the host cell, this microbe took up permanent residency, growing and dividing inside the host. Over time, the endosymbiont and the host became mutually interdependent, with the endosymbiont providing a metabolic benefit for the host cell (such as a source of ATP). In turn, the host cell provided nutrients to the endosymbiont. Presumably, the endosymbiont gradually evolved into an organelle through a process referred to as genome reduction. This reduction resulted when genes from the endosymbiont’s genome were transferred into the genome of the host organism, generating the mitonuclear genome.

    Image: Endosymbiont Hypothesis. Image credit: Wikipedia

    Evidence for the Endosymbiont Hypothesis

    Much of the evidence for the endosymbiotic origin of mitochondria centers around the similarity between mitochondria and bacteria. These organelles are about the same size and shape as typical bacteria and have a double-membrane structure like that of gram-negative bacteria. These organelles also divide in a way that is reminiscent of bacterial cells.

    Biochemical evidence also exists for the endosymbiont hypothesis. Evolutionary biologists view the presence of the diminutive mitochondrial genome as a vestige of this organelle’s evolutionary history. They see the biochemical similarities between mitochondrial and bacterial genomes as further evidence for the evolutionary origin of these organelles.

    The presence of the unique lipid, cardiolipin, in the mitochondrial inner membrane also serves as evidence for the endosymbiont hypothesis. This important lipid component of bacterial inner membranes is absent in the membranes of eukaryotic cells—except for the inner membranes of mitochondria. In fact, biochemists consider cardiolipin a signature lipid for mitochondria and a vestige of the organelle’s evolutionary history.

    The Role of Mitochondria

    Mitochondria serve cells in a number of ways, including:

    • Calcium storage
    • Calcium signaling
    • Signaling with reactive oxygen species
    • Regulation of cellular metabolism
    • Heat production
    • Apoptosis

    Arguably one of the most important functions of mitochondria relates to their role in energy conversion. This organelle generates ATP molecules by processing the breakdown products of glycolysis through the tricarboxylic acid cycle and the electron transport chain.

    Biochemists refer to ATP as a high-energy compound—it serves as an energy currency for the cell, and most cellular processes are powered by ATP. One way that ATP provides energy is through its conversion to ADP and an inorganic phosphate molecule. This breakdown reaction liberates energy that can be coupled to cellular activities that require energy.
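
    Written out, the reaction looks like this (the free-energy value is the standard textbook figure for cellular-like conditions, not a number from the studies discussed here):

        ATP + H₂O → ADP + Pᵢ   (ΔG°′ ≈ –30.5 kJ/mol)

    The free energy released by this hydrolysis is what the cell couples to energy-requiring processes.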

    Image: The ATP/ADP Reaction Cycle. Image credit: Shutterstock

    ATP Production and Transport

    The enzyme complex ATP synthase, located in the mitochondrial inner membrane, generates ATP from ADP and inorganic phosphate, using a proton gradient generated by the flow of electrons through the electron transport chain. As ATP synthase generates ATP, it deposits this molecule in the innermost region of the mitochondria (called the matrix or the lumen).

    In order for ATP to become available to power cellular processes, it has to be transported out of the lumen and across the mitochondrial inner membrane into the cytoplasm. Unfortunately, the inner mitochondrial membrane is impermeable to ATP (and ADP). In order to overcome this barrier, a protein embedded in the inner membrane called ATP/ADP translocase performs the transport operation. Conveniently, for every molecule of ATP transported out of the lumen, a molecule of ADP is transported from the cytoplasm into the lumen. In turn, this ADP is converted into ATP by ATP synthase.

    Because of the importance of this process, copies of ATP/ADP translocase comprise about 10% of the protein in the inner membrane.

    When this enzyme fails to function properly, mitochondrial myopathies result.

    The Problem ATP Transport Causes for the Endosymbiont Hypothesis

    Two intertwined questions confronting the endosymbiont hypothesis relate to the evolutionary driving force behind symbiogenesis and the nature of pre-mitochondria.

    Traditionally, evolutionary biologists have posited that the host cell was an anaerobe, while the endosymbiont was an aerobic microbe, producing ATP from lactic acid generated by the host cell. (Lactic acid is the breakdown product of glucose in the absence of oxygen).

    But, as cell biologist Franklin Harold points out, this scenario has an inherent flaw. Namely, if the endosymbiont is producing ATP necessary for its survival from host cell nutrients, why would it relinquish some—or even all—of the ATP it produces to the host cell?

    According to Harold, “The trouble is that unless the invaders share their bounty with the host, they will quickly outgrow him; they would be pathogens, not symbionts.”2

    And, the only way they could share their bounty with the host cell is to transport ATP from the engulfed cell’s interior to the host cell’s cytoplasm. While mitochondria accomplish this task with the ATP/ADP translocase, there is no good reason to think that the engulfed cell would do this. Given the role ATP plays as the energy currency in the cell and the energy that is expended to make this molecule, there is no advantage for the engulfed cell to pump ATP from its interior to the exterior environment.

    Harold sums up the problem this way: “Such a carrier would not have been present in the free-living symbiont but must have been acquired in the course of its enslavement; it cannot be called upon to explain the initial benefits of the association.”3

    In other words, currently, there is no evolutionary explanation for why the ATP/ADP translocase in the mitochondrial inner membrane—a protein central to the role of mitochondria in eukaryotic cells—pumps ATP from the lumen to the cytoplasm.

    Two Alternative Models

    This problem has led evolutionary biologists to propose two alternative models to account for the evolutionary driving force behind symbiogenesis: 1) the hydrogen hypothesis; and 2) the oxygen scavenger hypothesis.

    The hydrogen hypothesis argues that the host cell was a methanogenic member of archaea that consumed hydrogen gas and that the symbiont was a hydrogen-generating alphaproteobacterium.

    The oxygen-scavenging model suggests that the engulfed cell was aerobic, and because it used oxygen, it reduced the amount of oxygen in the cytoplasm of the host cell, thought to be an anaerobe.

    Today, most evolutionary biologists prefer the hydrogen hypothesis—in part because the oxygen scavenger model, too, has a fatal flaw. As Harold points out, “This [oxygen scavenger model], too, is dubious, because respiration generates free radicals that are known to be a major source of damage to cellular membranes and genes.”4

    Moving Forward, or Moving Backward?

    To help make headway, two researchers from UVA attempted to reconstruct the evolutionary precursor to mitochondria, dubbed pre-mitochondria.

    Operating within the evolutionary framework, these two investigators reconstructed the putative genome of pre-mitochondria using genes retained in the mitochondrial genome along with nuclear genes thought to have been transferred to the nucleus during symbiogenesis. (Genes that clustered with alphaproteobacterial genes were deemed to be of mitochondrial origin.)

    Based on their reconstruction, they conclude that the original engulfed cell actually used its ATP/ADP translocase to import ATP from the host cell cytoplasm into its interior, exchanging the ATP for an ADP. This is the type of ATP/ADP translocase found in obligate intracellular parasites alive today.

    According to the authors, this means that:

    “Pre-mitochondrion [was] an ‘energy scavenger’ and suggests an energy parasitism between the endosymbiont and its host at the origin of the mitochondria. . . . This is in sharp contrast with the current role of mitochondria as the cell’s energy producer and contradicts the traditional endosymbiotic theory that the symbiosis was driven by the symbiont supplying the host ATP.”5

    The authors speculate that at some point during symbiogenesis the ATP/ADP translocase went ahead and backed up, reversing direction. But, this explanation is little more than a just-so story with no evidential support. Confounding their conjecture is their discovery that the ATP/ADP translocase found in mitochondria is evolutionarily unrelated to the ATP/ADP translocases found in obligate intracellular parasites.

    The fact that the engulfed cell was an obligate intracellular parasite not only brings a halt to the traditional version of the endosymbiont hypothesis, it flattens the tires of both the oxygen scavenger model and hydrogen hypothesis. According to Wang and Wu (the UVA investigators):

    “Our results suggest that mitochondria most likely originated from an obligate intracellular parasite and not from a free-living bacterium. This has important implications for our understanding of the origin of mitochondria. It implies that at the beginning of the endosymbiosis, the bacterial symbiont provided no benefits whatsoever to the host. Therefore we argue that the benefits proposed by various hypotheses (e.g., oxygen scavenger and hydrogen hypotheses) are irrelevant in explaining the establishment of the initial symbiosis.”6

    If the results of the analysis by the UVA researchers stand, it leaves evolutionary biologists with no clear direction when it comes to determining the evolutionary driving force behind the early stages of symbiogenesis or the evolutionary route to mitochondria.

    It seems that the more evolutionary biologists probe the question of mitochondrial origins, the more confusion and uncertainty results. In fact, there is no coherent, compelling evolutionary explanation for the origin of eukaryotic cells—one of the key events in life’s history. The study by the UVA investigators (along with other studies) casts doubt on the most prominent evolutionary explanations for the origin of eukaryotes, justifying skepticism about the grand claim of the evolutionary paradigm: namely, that the origin, design, and history of life can be explained exclusively through evolutionary processes.

    In light of this uncertainty, can the origin of mitochondria, and hence eukaryotic cells, be better explained by a creation model? I think so, but for many scientists this is a road less traveled.

    Resources

    Challenges to the Endosymbiont Hypothesis:

    In Support of a Creation Model for the Origin of Eukaryotic Cells:

    ATP Production and the Case for a Creator:

    Endnotes
    1. Zhang Wang and Martin Wu, “Phylogenomic Reconstruction Indicates Mitochondrial Ancestor Was an Energy Parasite,” PLOS ONE 9, no. 10 (October 15, 2014): e110685, doi:10.1371/journal.pone.0110685.
    2. Franklin M. Harold, In Search of Cell History: The Evolution of Life’s Building Blocks (Chicago, IL: The University of Chicago Press, 2014), 131.
    3. Harold, In Search of Cell History, 131.
    4. Harold, In Search of Cell History, 132.
    5. Wang and Wu, “Phylogenomic Reconstruction.”
    6. Wang and Wu, “Phylogenomic Reconstruction.”
  • Does Information Come from a Mind?

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Aug 14, 2019

    Imagine you’re flying over the desert and you notice a pile of rocks down below. Most likely, you would think little of it. But suppose the rocks were arranged to spell out a message. I bet you would conclude that someone had arranged those rocks to communicate something to you and others who might happen to fly over the desert.

    You reach that conclusion because experience has taught you that messages come from people—or, rather, that information comes from a mind. In that sense, information serves as a marker for the work of intelligent agency.

    Image credit: Shutterstock

    Recently, a skeptic challenged me on this point, arguing that we can identify numerous examples of natural systems that harbor information, but that the information in these systems arose through natural processes—not a mind.

    So, does information truly come from a mind? And can this claim be used to make a case for a Creator’s existence and role in life’s origin and design?

    I think it can. And my reasons are outlined below.

    Information and the Case for a Creator

    In light of the (presumed) relationship between information and minds, I find it provocative that biochemical systems are information systems.

    Two of the most important classes of information-harboring molecules are nucleic acids (DNA and RNA) and proteins. In both cases, the information content of these molecules arises from the nucleotide and amino acid sequences, respectively, that make up these two types of biomolecules.

    The information harbored in the nucleotide sequences of nucleic acids and the amino acid sequences of proteins is digital information. Digital information is represented by a succession of discrete units, just like the ones and zeroes that encode the information manipulated by electronic devices. In this respect, sequences of nucleotides and amino acids form discrete informational units that encode the information in DNA and RNA and in proteins, respectively.
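
    To illustrate this digital character, here is a minimal Python sketch—my own illustrative example, not drawn from any particular study—that encodes a DNA sequence as two bits per base, exactly the way electronic devices represent data as ones and zeroes:

        # Each nucleotide is one of four discrete symbols, so two bits suffice per base.
        BASE_TO_BITS = {"A": "00", "C": "01", "G": "10", "T": "11"}
        BITS_TO_BASE = {bits: base for base, bits in BASE_TO_BITS.items()}

        def dna_to_bits(sequence: str) -> str:
            """Render a DNA sequence as a binary string, two bits per base."""
            return "".join(BASE_TO_BITS[base] for base in sequence.upper())

        def bits_to_dna(bits: str) -> str:
            """Recover the DNA sequence from its binary representation."""
            return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

        sequence = "ATGCGT"
        encoded = dna_to_bits(sequence)  # '001110011011'
        assert bits_to_dna(encoded) == sequence

    The round trip works because, like binary data, a nucleotide sequence is a succession of discrete, unambiguous symbols.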

    But the information in nucleic acids and proteins also has analog characteristics. Analog information varies in an uninterrupted, continuous manner, like the radio waves used for broadcasting. Analog information in nucleic acids and proteins is expressed through the three-dimensional structures adopted by both classes of biomolecules. (For more on the nature of biochemical information, see Resources.)

    If our experience teaches us that information comes from minds, then the fact that key classes of biomolecules harbor both digital and analog information makes it reasonable to conclude that life itself stems from the work of a Mind.

    Is Biochemical Information Really Information?

    Skeptics, such as philosopher Massimo Pigliucci, often dismiss this particular design argument, maintaining that biochemical information is not genuine information. Instead, they argue that when scientists refer to biomolecules as harboring information, they are employing an illustrative analogy—a scientific metaphor—and nothing more. They accuse creationists and intelligent design proponents of misconstruing scientists’ use of analogical language to make the case for a Creator.1

    In light of this criticism, it is worth noting that the case for a Creator doesn’t merely rest on the presence of digital and analog information in biomolecules, but gains added support from work in information theory and bioinformatics.

    For example, information theorist Bernd-Olaf Küppers points out in his classic work Information and the Origin of Life that the structure of the information housed in nucleic acids and proteins closely resembles the hierarchical organization of human language.2 This is what Küppers writes:

    The analogy between human language and the molecular genetic language is quite strict. . . . Thus, central problems of the origin of biological information can adequately be illustrated by examples from human language without the sacrifice of exactitude.3

    Added to this insight is work by a team from the NIH that discovered that the information content of proteins bears the same mathematical structure as human language. Specifically, they discovered that a universal grammar exists that defines the structure of the biochemical information in proteins. (For more details on the NIH team’s work, see Resources.)

    In other words, the discovery that biochemical information shares the same features as human language deepens the analogy between biochemical information and the type of information we create as human designers. And, in doing so, it strengthens the case for a Creator.

    Further Studies that Strengthen the Case for a Creator

    So, too, does other work, such as studies in DNA barcoding. Biologists have been able to identify, catalog, and monitor animal and plant species using relatively short, standardized segments of DNA within genomes. They refer to these sequences as DNA barcodes, which are analogous to the barcodes merchants use to price products and monitor inventory.

    Typically, barcodes harbor information in the form of parallel dark lines on a white background, creating areas of high and low reflectance that can be read by a scanner and interpreted as binary numbers. Barcoding with DNA is possible because this biomolecule, at its essence, is an information-based system. To put it another way, this work demonstrates that the information in DNA is not metaphorical; it is information in the fullest sense. (For more details on DNA barcoding, see DNA Barcodes Used to Inventory Plant Biodiversity in Resources.)
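
    To make the parallel concrete, here is a toy Python sketch of the matching step. The sequences and species names are fabricated placeholders; real barcoding uses longer, standardized gene regions and more sophisticated comparisons:

        # Identify a specimen by matching its barcode region against a reference
        # library, much as a scanner matches a product barcode to an inventory entry.
        reference_library = {
            "ACGTTAGCCA": "Species alpha",
            "ACGTAAGCGA": "Species beta",
            "TTGCCAGCTA": "Species gamma",
        }

        def hamming(a: str, b: str) -> int:
            """Count positions at which two equal-length sequences differ."""
            return sum(x != y for x, y in zip(a, b))

        def identify(query: str) -> str:
            """Return the species whose reference barcode best matches the query."""
            best = min(reference_library, key=lambda ref: hamming(ref, query))
            return reference_library[best]

        print(identify("ACGTTAGCTA"))  # 'Species alpha' (one mismatch)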

    Work in nanotechnology also strengthens the analogy between biochemical information and the information we create as human designers. For example, a number of researchers are exploring DNA as a data storage medium. Again, this work demonstrates that biochemical information is information. (For details on DNA as a data storage medium, see Resources.)

    Finally, researchers have learned that the protein machines that operate on DNA during processes such as transcription, replication, and repair literally operate like a computer system. In fact, the similarity is so strong that this insight has spawned a new area of nanotechnology called DNA computing. In other words, the cell’s machinery manipulates information in the same way human designers manipulate digital information. (For more details, take a look at the article “Biochemical Turing Machines ‘Reboot’ the Watchmaker Argument” in Resources.)

    The bottom line is this: The more we learn about the architecture and manipulation of biochemical information, the stronger the analogy becomes.

    Does Information Come from a Mind?

    Other skeptics challenge this argument in a different way. They assert that information can originate without a mind. For example, a skeptic recently challenged me this way:

    “A volcano can generate information in the rocks it produces. From [the] information we observe, we can work out what it means. Namely, in this example, that the rock came from the volcano. There was no Mind in information generation, but rather minds at work, generating meaning.

    Likewise, a growing tree can generate information through its rings. Humans can also generate information by producing sound waves.

    However, I don’t think that volcanoes have minds, nor do trees—at least not the way we have minds.”

    –Roland W. via Facebook

    I find this to be an interesting point. But, I don’t think this objection undermines the case for a Creator. Ironically, I think it makes the case stronger. Before I explain why, though, I need to bring up an important clarification.

    In Roland’s examples, he conflates two different types of information. When I refer to the analogy between human languages and biochemical information, I am specifically referring to semantic information, which consists of combinations of symbols that communicate meaning. In fact, Roland’s point about humans generating information with sound waves is an example of semantic information, with the sounds serving as combinations of ephemeral symbols.

    The type of information found in volcanic rocks and tree rings is different from the semantic information found in human languages. It is actually algorithmic information, meaning that it consists of a set of instructions. And technically, the rocks and tree rings don’t contain this information—they result from it.

    The reason why we can extract meaning and insight from rocks and tree rings is because of the laws of nature, which correspond to algorithmic information. We can think of these laws as instructions that determine the way the world works. Because we have discovered these laws, and because we have also discovered nature’s algorithms, we can extract insight and meaning from studying rocks and tree rings.

    In fact, Küppers points out that biochemical systems also consist of sets of instructions instantiated within the biomolecules themselves. These instructions direct activities of the biomolecular systems and, hence, the cell’s operations. To put it another way, biochemical information is also algorithmic information.

    From an algorithmic standpoint, the information content relates to the complexity of the instructions. The more complex the instructions, the greater the information content. To illustrate, consider a DNA sequence that consists of alternating nucleotides, AGAGAGAG . . . and so on. The instructions needed to generate this sequence are:

    1. Add an A
    2. Add a G
    3. Repeat steps 1 and 2, x number of times, where x corresponds to the length of the DNA sequence divided by 2

    But what about a DNA sequence that corresponds to a typical gene? In effect, because there is no pattern to that sequence, the set of instructions needed to create that sequence is the sequence itself. In other words, a much greater amount of algorithmic information resides in a gene than in a repetitive DNA sequence.
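
    A quick way to see the difference is to use a general-purpose compressor as a rough (and admittedly imperfect) proxy for algorithmic information: the shorter the description a compressor finds, the simpler the instructions needed to regenerate the sequence. Here is a minimal Python sketch of my own:

        import random
        import zlib

        # Compressed size serves as a crude stand-in for algorithmic information:
        # simple instructions ("repeat AG") yield a very short description.
        repetitive = "AG" * 500  # 'AGAGAG...' (1,000 bases)

        random.seed(0)
        gene_like = "".join(random.choice("ACGT") for _ in range(1000))  # no pattern

        for label, seq in [("repetitive", repetitive), ("gene-like", gene_like)]:
            size = len(zlib.compress(seq.encode()))
            print(f"{label}: {len(seq)} bases -> {size} bytes compressed")

    The repetitive sequence compresses to a few dozen bytes, while the unpatterned, gene-like sequence resists compression—its shortest description is close to the sequence itself.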

    And, of course, our common experience teaches us that information—whether it’s found in a gene, a rock pile, or a tree ring—comes from a Mind.

    Resources

    Endnotes
    1. For example, see Massimo Pigliucci and Maarten Boudry, “Why Machine-Information Metaphors Are Bad for Science and Science Education,” Science & Education 20, no. 5–6 (May 2011): 453–71, doi:10.1007/s11191-010-9267-6.
    2. Bernd-Olaf Küppers, Information and the Origin of Life (Cambridge, MA: MIT Press, 1990), 24–25.
    3. Küppers, Information, 23.
  • New Insights into Endothermy Heat Up the Case for a Creator

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Aug 07, 2019

    I feel cold all the time.

    When I was younger, I was always hot. I needed to be in air conditioning everywhere I went. I could never get the temperature cold enough. But now that I am older, I feel like a frail person who is always chilled, needing to drape myself with a blanket to keep warm.

    Nevertheless, like all human beings, I am still warm-blooded. I am an endotherm, as are all mammals and birds.

    For many biologists, endothermy represents a bit of an enigma. Maintaining a constant body temperature requires an elevated basal metabolic rate. But the energy needed to preserve a constant body temperature doesn’t come cheap. In fact, warm-blooded animals demand 30 times the energy per unit time compared to cold-blooded (ectothermic) creatures.

    Though biologists have tried to account for endothermy, no model has adequately explained why birds and mammals are warm-blooded. The advantages of warm-bloodedness over cold-bloodedness have not seemed to outweigh the costs—until now.

    Recently, a biologist from the University of Nevada, Reno, Michael L. Logan, published a model that helps make sense of this enigma.1 His work reveals an optimal design and elegant rationale for endothermy in birds and mammals—and ectothermy in amphibians and reptiles.

    An Explanation for Endothermy

    For endothermy to exist, the maintenance of constant, elevated body temperatures must confer some significant advantage on animals.

    Logan argues that endothermy maintains mammalian and bird body temperatures close to the thermal optimum for immune system functionality. The operations of the immune system are temperature-dependent. If the temperature is too low or too high, the immune system responds poorly to infectious agents. But an elevated and stable body temperature primes mammalian and bird immune systems to rapidly and effectively respond to pathogens. When birds and mammals acquire a pathogen, their bodies mount a fever response. This slight elevation in temperature places their body temperature at the thermal optimum.

    In other words, the fever response plays a critical role when animals battle infectious agents. And warm-blooded animals have the advantage of possessing body temperatures close to ideal.

    Temperature and Immune System Function

    A body of evidence indicates that the immune system’s components display temperature-dependent changes in activity. As it turns out, fever optimizes immune system function by:

    1. Increasing blood flow because of the vasodilation (blood vessel expansion) associated with fever. This increased blood flow accelerates the movement of immune cells throughout the body, giving them more timely access to pathogens.
    2. Increasing binding of immune system proteins to immune cells, assisting their trafficking to lymph tissue.
    3. Increasing cellular activity, such as proliferation and differentiation of immune cells and phagocytosis.

    Figure: The Human Immune System. Image credit: Shutterstock

    Other studies indicate that some pathogens, such as fungi, lose virulence at higher temperatures, further accounting for elevated body temperatures and the importance of the fever response. Of course, if body temperature becomes too high, it will compromise immune system function, moving it away from the temperature optimum and leading to other complications. So, the fever response must be carefully regulated.

    Here’s the key point: the metabolic costs of endothermy are justified because warm-bloodedness keeps the immune systems of birds and mammals near enough to the temperature optimum that infectious agents can be quickly cleared from their bodies.

    Fever Response in Ectotherms

    Cold-blooded animals (ectotherms) also mount a fever response to infectious agents for the same reason as endotherms. However, the body temperature of ectotherms is set by their surroundings. This limitation means that ectotherms need to regulate their body temperature and mount the fever response through their behavior by moving into spaces with elevated temperatures. Doing so places them at the mercy of environmental changes. This condition means that cold-blooded creatures experience a significant time lag between the onset of infection and the fever response. It also means that, in some cases, ectotherms can’t elevate their body temperature to the immune system optimum if, for example, it is night or overcast.

    Finally, in an attempt to elevate their body temperatures, ectotherms need to be out from under cover, making themselves vulnerable to predators. So, according to Logan’s model, endothermy offers some tangible advantages compared to ectothermy.

    But endothermy comes at a cost. As mentioned, the metabolic cost of endothermy is extensive compared to ectothermy. Pathogen virulence marks another disadvantage. Logan points out that pathogens that infect cold-blooded animals are much less virulent than pathogens that infect warm-blooded creatures.

    Endothermy and Ectothermy Trade-Offs

    So, when it comes to regulation of animal body temperature, a set of trade-offs exists that includes:

    • Metabolic costs
    • Immune system responsiveness and effectiveness
    • Pathogen virulence
    • Vulnerability to predators

    These trade-offs can be managed by two viable strategies: endothermy and ectothermy. Each has advantages and disadvantages. And each is optimized in its own right.

    Regulation of Body Temperature and the Case for a Creator

    Logan seeks to account for the evolutionary origins of endothermy by appealing to the advantages it offers organisms battling pathogens. But examining Logan’s scenario leaves one feeling as if the explanation is little more than an evolutionary just-so story.

    When endothermy presented an enigma for biologists, it would have been hard to argue that it reflected the handiwork of a Creator, particularly in light of its large metabolic cost. But now that scientists understand the trade-offs in play and the optimization associated with the endothermic lifestyle, we can also interpret the optimization of endothermy and ectothermy as evidence for design.

    From my vantage point, optimization signifies the handiwork of a Creator. As I discuss in The Cell’s Design, saying something is optimized is equivalent to saying it is well-designed. The optimization of an engineered system doesn’t just happen. Rather, such systems require forethought, planning, and careful attention to detail. In the same way, the optimized designs of biological systems like endothermy and ectothermy reasonably point to the work of a Creator.

    And I am chill with that.

    Resources

    Endnotes
    1. Michael L. Logan, “Did Pathogens Facilitate the Rise of Endothermy?” Ideas in Ecology and Evolution 12 (June 4, 2019): 1–8, https://ojs.library.queensu.ca/index.php/IEE/article/view/13342.
  • Is SETI an Intelligent Design Research Program?

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Jul 24, 2019

    When I was a little kid, my father was chair of the physics department at West Virginia Institute of Technology (WVIT). We lived in faculty housing on the outskirts of the WVIT campus and, as a result, the college grounds became my playground.

    I have always felt at home on college and university campuses. Perhaps this is one reason I enjoy speaking at university venues. I also love any chance I get to interact with college students. They have inquisitive minds and they won’t hesitate to challenge ideas.

    Skeptical Challenge

    A few years ago I was invited to present a case for a Creator, using evidence from biochemistry, at Cal Poly San Luis Obispo. During the Q&A session, a skeptical student challenged my claims, insisting that intelligent design/creationism isn’t science. In leveling this charge, he was advocating scientism—the view that science is the only way to discover truth; in fact, science equates to truth. Thus, if something isn’t scientific, then it can’t be true. On this basis he rejected my claims.

    You might be surprised by my response. I agreed with my questioner.

    My case for a Creator based on the design of biochemical systems is not science. It is a philosophical and theological argument informed by scientific discovery. In other words, scientific discoveries have metaphysical implications. And, by identifying and articulating those implications, I built a case for God’s existence and role in the origin and design of life.

    Having said this, I do think that design detection is legitimately part of the fabric of science. We can use scientific methodologies to detect the work of intelligent agency. That is, we can develop rigorous scientific evidence for intelligent design. I also think we can ascribe attributes to the intelligent designer from scientific evidence at hand.

    In defense of this view, I (and others who are part of the Intelligent Design Movement, or IDM) have pointed out that there are branches of science that function as intelligent design programs, such as research in archaeology and the Search for Extraterrestrial Intelligence (SETI). We stand to learn much from these disciplines about the science of design detection. (For a detailed discussion, see the Resources section.)

    SETI and Intelligent Design

    Recently, I raised this point in a conversation with another skeptic. He challenged me on that point, noting that Seth Shostak, an astronomer from the SETI Institute, wrote a piece for Space.com repudiating the connection between intelligent design (ID) and SETI, arguing that they don’t equate.

    Figure: Seth Shostak. Image credit: Wikipedia

    According to Shostak,

    “They [intelligent design proponents] point to SETI and say, ‘upon receiving a complex radio signal from space, SETI researchers will claim it as proof that intelligent life resides in the neighborhood of a distant star. Thus, isn’t their search completely analogous to our own line of reasoning—a clear case of complexity implying intelligence and deliberate design? And SETI, they would note, enjoys widespread scientific acceptance.’”1

    Shostak goes on to say, “If we as SETI researchers admit this is so, it sounds as if we’re guilty of promoting a logical double standard. If the ID folks aren’t allowed to claim intelligent design when pointing to DNA, how can we hope to claim intelligent design on the basis of a complex radio signal?”2

    In an attempt to distinguish the SETI Institute from the IDM, Shostak asserts that ID proponents make their case for intelligent design based on the complexity of biological and biochemical systems. But this is not what the SETI Institute does. According to Shostak, “The signals actually sought by today’s SETI searches are not complex, as the ID advocates assume. We’re not looking for intricately coded messages, mathematical series, or even the aliens’ version of ‘I Love Lucy.’”

    Instead of employing complexity as an indicator of intelligent agency, SETI looks for signals that display the property of artificiality. Specifically, SETI searches for a simple signal: narrow-band electromagnetic radiation that forms an endless sinusoidal pattern. According to SETI investigators, this type of signal does not occur naturally. Shostak also points out that the context of the signal is important. If the signal comes from a location in space that couldn’t conceivably harbor life, then SETI researchers would be less likely to conclude that it comes from an intelligent civilization. On the other hand, if the signal comes from a planetary system that appears life-friendly, the signal would be heralded as a successful detection event.
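
    As a rough illustration of what this kind of artificiality looks like in practice—this is my own simplified Python sketch, not the SETI Institute’s actual pipeline—a pure sine tone buried in noise shows up as a sharp, narrow spike in the Fourier spectrum, something broadband natural noise does not produce:

        import numpy as np

        # A pure sinusoid concentrates its power in a single frequency bin,
        # unlike broadband natural noise. (The frequencies here are arbitrary.)
        rng = np.random.default_rng(42)
        sample_rate = 1024  # samples per second
        t = np.arange(sample_rate) / sample_rate
        tone_hz = 137  # hypothetical "artificial" carrier frequency
        signal = 0.5 * np.sin(2 * np.pi * tone_hz * t) + rng.normal(0, 1, t.size)

        spectrum = np.abs(np.fft.rfft(signal))
        freqs = np.fft.rfftfreq(t.size, d=1 / sample_rate)

        # A narrow spike standing far above the noise floor flags artificiality.
        print(f"strongest narrow-band component: {freqs[np.argmax(spectrum)]:.0f} Hz")  # ~137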

    Artificiality and Intelligent Design

    I agree with Shostak. Artificiality, not complexity, is the best indicator of intelligent design. And, it is also important to rule out natural process explanations. I can’t speak for all creationists and ID proponents, but the methodology I use to detect design in biological systems is precisely the same one the SETI Institute employs.

    In my book The Cell’s Design, I propose the use of an ID pattern to detect design. Toward this end, I point out that objects, devices, and systems designed by human beings—intelligent designers—are characterized by certain properties that are distinct from objects and systems generated by natural processes. To put it in Shostak’s terms, human designs display artificiality. And we can use the ID pattern as a way to define what artificiality should look like.

    Here are three ways I adopt this approach:

    1. In The Cell’s Design, I follow natural theologian William Paley’s work. Paley described designs created by human beings as contrivances in which the concept of artificiality was embedded. I explain examples of such artificiality in biochemical systems.
    2. In Origins of Life (a work I coauthored with astronomer Hugh Ross) and Creating Life in the Lab, I point out that natural processes don’t seem to be able to account for the origin of life and, hence, the origin of biochemical systems.
    3. Finally, in Creating Life in the Lab, I show that attempts to create protocells starting with simple molecules and attempts to recapitulate the different stages in the origin-of-life pathway depend upon intelligent agency. This dependence further reinforces the artificiality displayed by biochemical systems.

    Collectively, all three books present a comprehensive case for a Creator’s role in the origin and fundamental design of life, with each component of the overall case for design resting on the artificiality of biochemical systems. So, even though the SETI Institute may want to distance themselves from the IDM, SETI is an intelligent design program. And intelligent design is, indeed, part of the construct of science.

    In other words, scientists from a creation model perspective can make a rigorous scientific case for the role of intelligent agency in the origin and design of biochemical systems, and even assign attributes to the designer. At that point, we can then draw metaphysical conclusions about who that designer might be.

    Resources

    Endnotes
    1. Seth Shostak, “SETI and Intelligent Design,” Space.com (December 1, 2005), https://www.space.com/1826-seti-intelligent-design.html.
    2. Shostak, “SETI and Intelligent Design.”
  • Does Old-Earth Creationism Make God Deceptive?

    by Telerik.Sitefinity.DynamicTypes.Model.Authors.Author | Jul 17, 2019

    “Are [vestigial structures] unequivocal evidence of evolution?

    No. Are they reasonable evidence of evolution? Yes.

    Ditto gene sequences.

    Appearance of evolution is no more a valid deflection [for the overwhelming evidence for evolution] than the appearance of age is a valid dodge of the overwhelming confluence of evidence of antiquity.

    Both are sinking ships. I got off before going under with you on this one.”

    —Hill R. (a former old-earth creationist who now espouses theistic evolution/evolutionary creationism)

    Most people who follow my work at Reasons to Believe know I question the grand claim of the evolutionary paradigm; namely, that evolutionary processes provide the exclusive explanation for the origin, design, and history of life. In light of my skepticism, friends and foes alike often ask me how I deal with (what many people perceive to be) the compelling evidence for the evolutionary history of life, such as vestigial structures and shared genetic features in genomes.

    As part of my response, I point out that this type of evidence for evolution can be accommodated by a creation model, with the shared features reflecting common design, not common descent—particularly now that we know that there is a biological rationale for many vestigial structures and shared genetic features. This response prompted my friend Hill R. to level his objection. In effect, Hill says I am committing the “appearance of evolution” fallacy, which he believes is analogous to the “appearance of age” fallacy committed by young-earth creationists (YECs).

    Hill is not alone in his criticism. Other people who embrace theistic evolution/evolutionary creation (such as my friends at BioLogos) level a similar charge. According to these critics, both appearance of age and appearance of evolution fallacies make God deceptive.

    If biological systems are designed, but God made them appear as if they evolved, then the conclusions we draw when we investigate nature are inherently untrustworthy. This is a problem because, according to Scripture, God reveals himself to us through the record of nature. But if we are misled by nature’s features and, consequently, draw the wrong conclusion, then it makes God deceptive. However, God cannot lie or deceive. It is contrary to his nature.

    So, how do I respond to this theological objection to RTB’s creation model?

    Before I reply, I want to offer a little more background information to make sure that anyone who is unfamiliar with this concern can better appreciate the seriousness of the charge against our creation model. If you don’t need the background explanation, then feel free to skip ahead to A Response to the Appearance of Evolution Challenge.

    Evidence for Evolution: Vestigial Structures

    Evolutionary biologists often point to vestigial structures—such as the pelvis and hind limbs of whales and dolphins (cetaceans)—as compelling evidence for biological evolution. Evolutionary biologists view vestigial structures this way because they are also homologous (structurally similar) structures. Vestigial structures are rudimentary body parts that are smaller and simpler than the corresponding features possessed by the other members of a biological group. As a case in point, the whale pelvis and hind limbs are homologous to the pelvis and hind limbs of all other mammals.

    Figure 1: Whale Pelvis. Image credit: Shutterstock

    Evolutionary biologists believe that vestigial structures were fully functional at one time but degenerated over the course of many generations because the organisms no longer needed them to survive in an ever-changing environment—for example, when the whale ancestor transitioned from land to water. From an evolutionary standpoint, fully functional versions of these structures existed in the ancestral species. The structures’ form and function may be retained (possibly modified) in some of the evolutionary lineages derived from the ancestral species, but if no longer required, the structures become diminished (and even lost) in other lineages.

    Evidence for Evolution: Shared Genetic Features

    Evolutionary biologists also consider shared genetic features found in organisms that naturally group together as compelling evidence for common descent. One feature of particular interest is the identical (or nearly identical) DNA sequence patterns found in genomes. According to this line of reasoning, the shared patterns arose as a result of a series of substitution mutations that occurred in the common ancestor’s genome. Presumably, as the varying evolutionary lineages diverged from the nexus point, they carried with them the altered sequences created by the primordial mutations.

    Synonymous mutations play a significant role in this particular argument for common descent. Because synonymous mutations don’t alter the amino acid sequence of proteins, their effects are considered to be inconsequential. (In a sense, they are analogous to vestigial anatomical features.) So, when the same (or nearly the same) patterns of synonymous mutations are observed in genomes of organisms that cluster together into the same group, most life scientists interpret them as compelling evidence of the organisms’ common evolutionary history.
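
    For readers unfamiliar with the term, the following Python sketch—illustrative only, using a handful of entries from the standard genetic code—shows why such substitutions are called synonymous: the encoded amino acid does not change.

        # A few entries from the standard genetic code, enough for the example.
        CODON_TABLE = {
            "GAA": "Glu", "GAG": "Glu",  # synonymous pair: both encode glutamate
            "GAC": "Asp", "GAT": "Asp",
            "AAA": "Lys",
        }

        def classify(before: str, after: str) -> str:
            """Label a codon substitution as synonymous or nonsynonymous."""
            return "synonymous" if CODON_TABLE[before] == CODON_TABLE[after] else "nonsynonymous"

        print(classify("GAA", "GAG"))  # synonymous: the protein is unchanged
        print(classify("GAA", "AAA"))  # nonsynonymous: Glu -> Lys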

    A Response to the Evidence for Evolution

    As a rejoinder to this evidence, I point out that we continue to uncover evidence that vestigial structures display function. (See Vestigial Structures Are Functional in the Resources section.) Likewise, evidence is beginning to accumulate that synonymous mutations have functional consequences. (See Shared Genetic Features Reflect Design in the Resources section.) Again, if these features have functional utility, then they can reasonably be interpreted as the Creator’s handiwork.

    But, even though these biological features bear function, many critics of the RTB model think that the shared features of these biological systems still bear the hallmarks of an evolutionary history. Therefore, they argue that these features look as if they evolved. And if so, we are guilty of the “appearance of evolution” fallacy.

    Appearance of Age and the Appearance of Evolution

    In 1857, Philip Gosse, a biologist and preacher from England, sought to reconcile the emerging evidence for Earth’s antiquity with Scripture. Gosse was convinced that the earth was old. He was also convinced that Scripture taught that the earth was young. In an attempt to harmonize these disparate stances, he proposed the appearance of age argument in a book titled Omphalos. In this work, Gosse argued that God created Earth in six days, but made it with the appearance of age.

    Figure 2: Philip Henry Gosse, 1855. Image credit: Wikipedia

    This idea persists today, finding its way into responses modern-day YECs make to the scientific evidence for Earth’s and life’s antiquity. For many people (including me), the appearance of age argument is fraught with theological problems, the chief one being that it makes God deceptive. If Earth appears old and measures old, yet is actually young, then we can’t trust anything we learn when we study nature. This problem is not merely epistemological; it is theological because nature is one way that God has chosen to make himself known to us. But if our investigation of nature is unreliable, then it means that God is untrustworthy.

    In other words, on the surface, both the appearance of age and the appearance of evolution arguments made by YECs and old-earth creationists (OECs), respectively, seem to be equally problematic.

    But does the RTB position actually commit the appearance of evolution fallacy? Does it suffer from the same theological problems as the argument first presented by Gosse in Omphalos? Are we being hypocritical when we criticize the appearance of age fallacy, only to commit the appearance of evolution fallacy?

    A Response to the Appearance of Evolution Challenge

    This charge against the RTB creation model neglects to fully represent the reasons I question the evolutionary paradigm.

    First, my skepticism is not theologically motivated but scientifically informed. For example, I point out in an article I recently wrote for Sapientia that a survey of the scientific literature makes it clear that evolutionary theory as currently formulated cannot account for the key transitions in life’s history, including:

    • the origin of life
    • the origin of eukaryotic cells
    • the origin of body plans
    • the origin of human exceptionalism

    Additionally, some predictions that flow out of the evolutionary paradigm have failed (for example, evolutionary convergence turns out to be widespread, contrary to expectation), further justifying my skepticism. (See Scientific Challenges to the Evolutionary Paradigm in the Resources section.)

    In other words, when we interpret shared features as a manifestation of common design (including vestigial structures and shared genetic patterns), it is in the context of scientifically demonstrable limitations of the evolutionary framework to fully account for life’s origin, history, and design. To put it differently, because of the shortcomings of evolutionary theory, we don’t see biological systems as having evolved. Rather, we think they’ve been designed.

    Appearance of Design Fallacy

    Even biologists who are outspoken atheists readily admit that biological and biochemical systems appear to be designed. Why else would Nobel Laureate Francis Crick offer this word of caution to scientists studying biochemical systems: “Biologists must keep in mind that what they see was not designed, but rather evolved.”1 What other reason would evolutionary biologist Richard Dawkins offer for defining biology as “the study of complicated things that give the appearance of having been designed for a purpose”?2

    Biologists can’t escape the use of design language when they describe the architecture and operation of biological systems. In and of itself, this practice highlights the fact that biological systems appear to be designed, not evolved.

    To sidestep the inexorable theological implications that arise when biologists use design language, biologist Colin Pittendrigh coined the term teleonomy in 1958 to describe systems that appear to be purposeful and goal-directed, but aren’t. In contrast with teleology—which interprets purposefulness and goal-directedness as emanating from a Mind—teleonomy views design as the outworking of evolutionary processes. In other words, teleonomy allows biologists to utilize design language—when they describe biological systems—without even a tinge of guilt.

    In fact, the teleonomic interpretation of biological design resides at the heart of the Darwinian revolution. Charles Darwin claimed that natural selection could account for the design of biological systems. In doing so, he supplanted Mind with mechanism. He replaced teleology with teleonomy.

    Prior to Darwin, biology found its grounding in teleology. In fact, Sir Richard Owen—one of England’s premier biologists in the early 1800s—produced a sophisticated theoretical framework to account for shared biological features found in organisms that naturally cluster together (homologous structures). For Owen (and many biologists of his time) homologous structures were physical manifestations of an archetypal design that existed in the Creator’s mind.

    Thus, shared biological features—whether anatomical, physiological, biochemical, or genetic—can be properly viewed as evidence for common design, not common descent. In fact, when Darwin proposed his theory of evolution, he appropriated Owen’s concept of the archetype but then replaced it with a hypothetical common ancestor.

    Interestingly, Owen (and other like-minded biologists) found an explanation for vestigial structures like the pelvis and hind limb bones (found in whales and snakes) in the concept of the archetype. They regarded these structures as necessary to the architectural design of the organism. In short, a model that interprets shared biological characteristics from a design/creation model framework has historical precedence and is based on the obvious design displayed by biological systems.

    Given the historical precedence for interpreting the appearance of design in biology as bona fide design and the inescapable use of design language by biologists, it seems to me that RTB’s critics commit the appearance of design fallacy when they (along with other biologists) claim that things in biology look designed but actually evolved.

    Theories Are Underdetermined by Data

    A final point. One of the frustrating aspects of scientific discovery relates to what’s called the underdetermination thesis.3 Namely, theories are underdetermined by data. This limitation means that two or more theories—which may be radically different from one another—can equally account for the same data. Or, to put it another way, the methodology of science never leads to one unique theory. Because of this shortcoming, other factors—nonscientific ones—influence the acceptance or rejection of a scientific theory, such as a commitment to mechanistic explanations for all of biology.

    As a consequence of the underdetermination thesis, evolutionary models don’t have the market cornered when it comes to offering an interpretation of biological data. Creation models, such as the RTB model—which relies on the concept of common design—also make sense of the biological data. And given the inability of current evolutionary theory to explain key transitions in life’s history, maybe a creation model approach is the better alternative.

    In other words, when we interpret vestigial structures and shared genetic features from a creation model perspective, we are not committing an appearance of age type of fallacy, nor are we making God deceptive. Instead, we are offering a common sense and scientifically robust interpretation of the elegant designs so prevalent throughout the living realm.

    Far from a sinking ship one should abandon, a creation model offers a lifeline to scientific and biblical integrity.

    Resources

    Vestigial Structures Are Functional

    Shared Genetic Features Reflect Design

    Scientific Challenges for the Evolutionary Paradigm

    Archetype Biology

    Endnotes
    1. Francis Crick, What Mad Pursuit: A Personal View of Scientific Discovery (New York: Basic Books, 1988), 138.
    2. Richard Dawkins, The Blind Watchmaker: Why the Evidence for Evolution Reveals a Universe without Design (New York: W. W. Norton, 1996), 4.
    3. Val Dusek, Philosophy of Technology: An Introduction (Malden, MA: Blackwell Publishing, 2006), 12.
