Tomorrow I'll be sporting a red shirt, because, in light of our Prime Minister's comments this week about a radio host's 'gay red top', tomorrow is Gay Red Shirt Day.
Language evolves, and the use of 'gay' is one of those slightly divisive social issues--is it okay to use 'gay' in its more recent sense as a casual, catch-all negative? Should we be worried about the PM using it that way? It's not like he called anyone a fag (oh wait... there's another word that is used to describe gay people and also gets used as an all-purpose negative).
Let's take a brief etymological journey. Gay originally meant happy, with connotations of being liberal and carefree. Riding on that meaning, it was appropriated by a heavily persecuted homosexual community to describe themselves. Only after this did 'gay' gain its new, negative connotation. You could say that people outside the community stole it back and reversed its original meaning--like a child breaking a toy just so another child can't play with it. It does no good to claim that gay (negative) is harmless simply because people using it that way mean no insult directly to LGBTQ people. The insult is woven inextricably in.
In the same spirit that the homosexual community dealt with the negative attitudes coming from outside by redefining itself as 'gay', Gay Red Shirt Day is a positive way to respond to John Key's comment.
You might think that there's not a lot of point in wearing a red shirt this Friday. You might think that it's a bit 'political' for you. You might even be a bit uncomfortable with jumping on a bandwagon. But there's a really good reason why that's exactly what you should do. For people who are LGBTQ--especially young people who are grappling with their sexual orientation or their gender identity--society can be a thoughtless and cruel place. Probably to an extent that those of us who are straight and cis-gendered find difficult to understand. The casual use of 'gay' as a negative in day-to-day life is a small, steady reminder that if you fall outside society's norms then--no--you're not really accepted. Wearing a red shirt is a reminder to those same people that there is also a large part of society who really do accept them, without reservation.
Over at Evolution News and Views, Jonathan McLatchie has responded to my criticism of Casey Luskin's junk DNA chapter in Science and Human Origins. Perhaps in lieu of Luskin, McLatchie steps up, having in the past been the Discovery Institute's spokesperson on matters of junk DNA.
McLatchie notes that my review of chapter 4 consisted of two parts - a set of predictions based on common, fallacious arguments against junk DNA used by ID advocates, and then the review, where I show that Luskin resorted to using most of them. You'd hope that McLatchie would make a genuine effort to understand these criticisms in his response, so that the discussion doesn't stagnate. Let's see.
Amongst my predictions were that Luskin would:
Present a qualitative argument that because new function is found from time to time (e.g. microRNAs) the base of 'junk' is being continually whittled away.
Ignore the quantitative argument that such new discoveries account for a negligible fraction of the human genome, still leaving 90% unexplained.
Ignore the population-genetic arguments for the existence of junk DNA (e.g. the effectiveness of selection in small populations [Lynch 2007] and the mutational load in mammals [Ohno 1972]).

In other words, there is a positive case for junk DNA that is often ignored by ID advocates--this is not a matter of ignorantly assigning the label of 'junk' to every nucleotide that lacks a defined function.
McLatchie claims that "ID proponents are well aware of [junk DNA] literature and do not, as [McBride] claims, conflate 'junk DNA' and 'non-coding-DNA'". If only this were true. Let's look once again at what Luskin says in Chapter 4. Luskin (wrongly) attributes to Francis Collins the view that "non-coding DNA shared by humans and other mammals is supposedly functionless junk". Luskin doesn't say 'most' or 'some' non-coding DNA; he conflates the two. Luskin goes on to say that "[n]umerous studies have found extensive evidence of function for non-coding DNA, showing that it is not genetic “junk” after all". More conflation, plain and simple. Luskin also titles an entire section of his chapter Non-coding DNA: Not really junk after all--the ultimate conflation. Luskin is guilty as charged. I have made no strawman; these are Luskin's words, and they are wrong. I'm glad McLatchie agrees that such a conflation is erroneous, but he needs to give Luskin, not me, the heads-up.
McLatchie goes further to state there is a "frequent Darwinist claim that the majority of [the genome] is without function". I suppose by 'Darwinist' he means evolutionary biologist or something similarly general, but the terms are not synonymous, especially not around a topic like junk DNA. This reflects either carelessness or a poor understanding of the history of junk DNA. Darwinists are those who see evolution as being driven primarily by positive natural selection. Amongst them are the ultra-Darwinists--pan-selectionists who militantly oppose junk DNA because selection shouldn't allow for it (quite similar to the design argument, except selection is acting as a non-intelligent 'designer'). Support for the junk DNA hypothesis did not come from these guys, but from the more pluralist/neutralist workers. I think the distinction is important, because there is and always was controversy around the concept of our genomes carrying a sizeable proportion of essentially unhelpful bases.
Continuing onto transposable elements (TEs), McLatchie refers us to a paper that provides some examples of Alu functions as evidence of such genomic elements being functional after all. On the strength of these Alus he concludes, "The more we learn about transposable elements, the more we discover that they are not junk at all." That there are functional Alus is not surprising. Duplications of TEs are a class of mutation and like any mutation they can be functional. This is not in dispute. The dispute is around how many of those copies that end up spreading through populations are functional, rather than nearly neutral. McLatchie is still not giving us the quantitative part--how much of the 'dark genome' can we explain in this fashion? The answer remains: very little. How little? Larry Moran provides a rather handy summary of what's going on in the genome.
We then move onto transcription. McLatchie has some concerns about my citation of van Bakel et al.--a study that casts doubt on pervasive transcription of the human genome. He claims to find it surprising that I would cite it at all. According to him:
The problem is that the Bakel [sic] paper is based on a fatal methodological flaw.
Bakel et al. use a program called "RepeatMasker," which screens out all the repetitive DNA. But given that about 50% of our genome is comprised of repetitive DNA, the conclusions drawn by the authors seem to be less than convincing. In fact, the official description of RepeatMasker itself states that "On average, almost 50% of a human genomic DNA sequence currently will be masked by the program."
To make matters worse, the researchers proceed to base their results "primarily on analysis of PolyA+ enriched RNA." But we've known since 2005 that, in humans, PolyA- sequences are twice as abundant as PolyA+ transcripts. So the authors not only exclude half the genome from their research, but also completely ignore two thirds of the RNA in what remains.

For context, I cited van Bakel et al. in a sentence where I also cited an opposing point of view, demonstrating the active, ongoing controversy in the area of pervasive transcription. Controversy glossed over by Luskin--and now by McLatchie--in favour of the simplistic claim that pervasive transcription is a fact. Beyond doubts about it occurring, such as those raised by van Bakel, there are also reasons to question that such pervasive transcription indicates pervasive function (e.g. that 90% of yeast transcription initiation events are estimated to be transcriptional noise).
But, in any case, a fatal flaw? It appears that McLatchie thinks that because van Bakel et al. looked at a subset of the transcriptome, their conclusions have no meaning. Yet the entire point of van Bakel et al.'s study was to compare two techniques: tiling arrays and RNA-seq. The techniques are compared on the same subset of transcripts, and yet they find substantially different results. According to van Bakel et al., tiling arrays have poor precision. Unlike with tiling arrays, the majority of transcripts identified using RNA-seq were associated with known genes, rather than being mysterious transcripts from the 'dark genome'. The ENCODE pilot study that both McLatchie and Luskin favourably cite identifies 'pervasive transcription' in the 1% of the genome it investigates, but this includes all the results of three technologies combined, even when transcripts were identified only once, by only one of the three technologies. What do such rare transcripts that lacked repeatability prove about the stable transcription of functional genomic elements? Again, as I emphasised in the chapter review, this is an open, relatively new area, and definitive judgements based on single studies like the ENCODE pilot are simply not acceptable.
On the same topic, McLatchie also cites Kapranov et al.'s (2010) warning that "efforts to elucidate how non-coding RNAs (ncRNAs) regulate genome function will be compromised if that class of RNAs is dismissed as simply 'transcriptional noise.'" This might be fair enough--but far from dismissing potentially functional RNA transcripts as noise, the likes of van Bakel are warning that different techniques are not producing the same results as tiling arrays, and so caution is wise before concluding pervasive, stable transcription occurs. In any case, with the amount of work going on in the area by 2010, it was perhaps an unnecessary warning.
We then venture into pseudogenes. Apparently, I have understated the number of functional cases in pseudogenes. McLatchie exclaims: "Only a couple of known examples? Actually, the number is far larger than that." But exactly how much larger? He doesn't tell us. He mentions some conjecture about a potential function for GULOP, the pseudogene in our broken vitamin C synthesis pathway, yet provides no research linking said function to GULOP.
Finally, McLatchie makes it onto the chromosome fusion argument--which I think I can now describe as infamous. McLatchie reminds us that:
As Luskin has explained previously on ENV, however, even in the event that chromosome 2 did indeed originate via a fusion event, common descent between humans and chimpanzees is not the only hypothesis that can account for this. On an evolutionary model, since no other primates share this fusion, the fusion event must have taken place some time after the supposed divergence of the human and chimpanzee lineage.

Which is fine, provided McLatchie furnishes us with evidence of intelligent design intervention that explains these matters.
Would it surprise you if I said he doesn't?
Ann Gauger has provided another response to my critique of Science and Human Origins, the Discovery Institute book that seeks out a literal Adam and Eve. This time, she addresses my criticisms of Chapter 5--where she attempts to make possible Adam and Eve using population genetics.
I'll begin with a spoiler: Gauger does not address her argument's major, self-defeating shortcoming--that even if we accept her argument, flaws and all, she only creates the possibility of a non-human Adam and Eve: an Adam and Eve who pre-date our genus and the cranial expansion in the hominin lineage, and who would therefore have lacked human intelligence and language capacity.
So, what does she say? Readers of my review might recall that I was unimpressed by her population genetics argument for a two-person bottleneck for several reasons, one of which was that she singled out a one-locus study from the 1990s to make her entire case. She again tries to justify this choice, claiming that "it presented the most difficult challenge to a very small bottleneck in our history as a species", which remains simply untrue. The best challenge comes from whole-genome studies showing large effective population sizes across our evolutionary history, as I pointed out in the review.
She tries to further legitimise her choice by claiming that "[t]he fact that I had not addressed those alternate estimates is one reason why I never claimed to have proved the existence of a two-person bottleneck, but rather questioned the rush to judgment against such a bottleneck on the part of others."
This is not good enough by half. Gauger ignores the mountain of evidence that directly contradicts her claim, and tut-tuts those who would be less selective than she. I shall reiterate: the possibility of a two-person, Adam and Eve bottleneck is simply not an open question in science. There cannot have been such a bottleneck. Not only is there literally zero evidence suggesting the possibility; human genetic diversity outright rejects the claim. Gauger gives us exactly no reason to think otherwise, but still thinks those who accept the obvious conclusions of the data are somehow rushing to "judgement".
Gauger makes much of the wide range of estimates of human effective population sizes found by different authors studying different parts of the genome using different methods:
Published estimates for ancestral population sizes vary from 100,000 to 1,000 -- you can find some of them as references in the paper by Li and Durbin (2012) that McBride cited. Why such a large range of ancestral population sizes? First, the epoch chosen for study matters. The further back in time one goes, the more confounding factors can intervene. Population bottlenecks, changing selection, inbreeding, migratory behavior, strong selection for one gene accompanied by hitchhiking of neighboring genes -- all these affect genome dynamics.

What she ignores is that the factors that muddy our estimates of effective population size, like linkage and selective sweeps, are factors that reduce the estimate rather than inflate it. Recall that effective population size is not the same as the actual (census) population size, which is typically around an order of magnitude higher than the effective size, and this gap is widened by factors like selection and linkage. Further compounding this, estimates of long-term effective population size are expected to be smaller than the average effective population size over shorter time periods. Taken together, there is a bias in our estimates of actual past population sizes, and it is a bias towards underestimation. So, when we make long-term estimates of the human effective population size that vary by two orders of magnitude but bottom out at around 1,000, this lower estimate (and its corresponding census population size of 10,000) is an almost-certain underestimate of the real picture. Even with the variation in different estimates, we are left with no room for a census population size of two--a further four orders of magnitude lower again.
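It is worth spelling out why bottlenecks drag long-term estimates down: long-term effective population size behaves approximately like the harmonic mean of the per-generation sizes, and a harmonic mean is dominated by its smallest terms. A minimal sketch (the population sizes are invented purely for illustration):

```python
# Long-term effective population size approximates the harmonic
# mean of per-generation sizes, so brief crashes dominate.
def long_term_ne(sizes):
    return len(sizes) / sum(1.0 / n for n in sizes)

# 99 generations at N = 100,000 plus a single crash to N = 1,000:
sizes = [100_000] * 99 + [1_000]
print(round(long_term_ne(sizes)))  # 50251 -- one bad generation
                                   # halves the long-term estimate
```

A single one-generation bottleneck cuts the long-term figure in half, which is exactly why these estimates understate, rather than overstate, typical past population sizes.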
She takes one more crack at population genetics:
But more worrying to me are the hidden assumptions in evolutionary models. Population genetics is a theory-laden subject, based entirely on neo-Darwinian assumptions. These assumptions, combined with over-simplifications required by current model building and/or mathematical analysis, can lead to erroneous claims about past genetic history.

What hidden assumptions would those be? In what sense are they hidden? I don't get it--you'd expect at this point that Gauger would lay down some devastating arguments about these hidden assumptions, but she does no such thing. Those three sentences comprise the entirety of her devastating argument against evolutionary models.
You can't have models without assumptions. All models have them, and models are in fact often quite robust to their violation. To be an actual problem that gets us from where we are to a claim of a literal Adam and Eve, Gauger needs to demonstrate non-trivial violations that cause a massive overestimation of effective population size. She also needs to explain where all the variation in modern human populations came from in the time since this bottleneck. Failing that, she has nothing more than some selective hyper-scepticism. And she certainly doesn't have Adam and Eve.
Douglas Axe has provided the second response to my review of Science and Human Origins in a piece entitled "Thou Shalt Not Put Evolutionary Theory to a Test", which has been closely followed by another response from Ann Gauger -- "On Enzymes and Teleology". They deal with similar material--their 2011 paper, which features in Science and Human Origins, so I'll discuss the two together.
Both responses centre on the limits of evolution, and about specification of evolutionary changes. Axe tells us:
McBride also agrees with us that [Darwinian evolution] has significant limitations. In his words: "Evolution is not a process that is capable of producing anything and everything, at all times in all species. It is, conversely, a greatly constrained process." This being so, surely McBride ought to agree with us that the claims of evolutionists need to be evaluated in light of these constraints.

Indeed, I do. The literature is also full of people interested in these questions - for example, the process structuralists of the 1980s and 1990s. However, I don't agree with the Gauger-Axe model for investigating evolutionary processes and their limits. I have already discussed the Gauger and Axe (2011) paper a couple of times - in my review of chapter 2 and in a previous post on the paper in question. I'll try here to clarify our main areas of difference.
So, to recap the paper once more. Gauger and Axe make a straightforward argument: they attempt to evolve one enzyme (Kbl2) to perform the function of another (BioF2). Their results suggest a minimum of seven changes are needed, which they argue is at or beyond the limit of evolution. In other words, if the Kbl gene was duplicated, it would not be at all likely to evolve the function of BioF - certainly not by way of nucleotide substitution. To imbue Kbl with BioF function, Gauger and Axe identified residues that appeared important to the biochemical function of BioF, and then set about creating mutant Kbl genes with substitutions from the homologous BioF residues. They ultimately fail to achieve BioF activity. Because they only test a small subset of possible mutants (restricted to nucleotide substitutions), it is not clear whether this amounts to a putative failure of evolution or, in fact, a failing of intelligent design.
They conclude that this result is problematic for evolution. Gauger summarises their rationale for the study and their conclusion:
The reason for our choice [of Kbl and BioF] was not ignorance. We knew that the enzymes we tested were modern, and that one was not the ancestor of the other. (They are, however, among the most structurally similar members of their family, and share many aspects of their reaction mechanism, but their chemistry itself is different.) We also knew that in order for a Darwinian process to generate the mechanistically and chemically diverse families of enzymes that are present in modern organisms, something like the functional conversion of one of these enzyme to the other must be possible. We reasoned that if these two enzymes could not be reconfigured through a gradual process of mutation and selection, then the Darwinian explanation of gene duplication and gradual divergence to new functions was called into question.
Gauger states I have claimed that their "results mean nothing." I don't think this -- the Gauger-Axe experiment teaches us something about what sort of specified genetic distances can't be easily bridged by sequential point mutation. They also identified a functionally important residue in BioF that hadn't been previously reported. However, Gauger believes that their result should cause us to generally question the ability of duplication and divergence to generate new function and Axe makes the strong claim that "this result has catastrophic implications for Darwinism". These conclusions are unjustified. Even though the two genes in question are a part of a gene family - meaning they share a degree of sequence similarity and are understood to have arisen by duplications - this doesn't mean that functionally evolving one into another is, or should be, feasible. While Gauger tells us that the two enzymes are structurally more similar than other members of the protein family, 'more similar' is a relative thing - two-thirds of the amino acid residues differ between these two genes. A further 54 nucleotides (18 codons) have been inserted or deleted between the two genes, in a minimum of nine separate indel events. These are not recently diverged duplicates, and the claim that Darwinian evolution should be able to bridge this gap with a few point mutations is arbitrary.
There are other reasons why the Gauger-Axe experiment doesn't tell us much about the limits of biological evolution. A major one is the problem of specification. Let me explain what's wrong with this using a common analogy that comes from card games. Imagine four people are playing a game of five-card stud, and in one hand, a player wins with a full house. There is a hierarchy of predictions we could have made about that hand. First, we could have predicted the winning player (the odds of this are 1 in 4). We could have predicted that they get a full house (the odds are 1 in 694). We could have predicted their exact hand (the odds are 1 in 2.5 million). Or we could have predicted everybody's exact hands (the odds are 1 in 1.47 x 10^24). Clearly, as we get more specific in our predictions, the odds of being right tumble away to almost zero.
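These odds can be checked directly from the standard hand counts; a quick sketch:

```python
from math import comb

deck = comb(52, 5)  # 2,598,960 distinct five-card hands

# A full house: a rank for the triple, 3 of its 4 suits, a second
# rank for the pair, and 2 of its 4 suits.
full_houses = 13 * comb(4, 3) * 12 * comb(4, 2)  # 3,744 hands

print(round(deck / full_houses))  # 694: odds of any full house
print(deck)                       # ~2.6 million: odds of one exact hand

# Odds of calling all four players' exact hands in advance:
all_hands = comb(52, 5) * comb(47, 5) * comb(42, 5) * comb(37, 5)
print(all_hands)                  # on the order of 10^24
```

Each step up the hierarchy multiplies the improbability, which is the point of the analogy: specification, not function, is what makes the odds astronomical.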
By making specifications at both ends of their experiment, Gauger and Axe move right up this hierarchy towards the 'massively unlikely' end. In our card game analogy, there is a dramatic worsening of the odds between getting any full house (1 in 694) and getting a particular full house (1 in 2.5 million). In biology, the deck of cards is much larger than 52. The difference in odds between a protein evolving any new function and a protein evolving a specific function will dwarf the difference in poker. Despite this, Gauger and Axe claim not only that "something like the functional conversion" of their chosen genes should be possible under evolution, but that their results have "catastrophic implications for Darwinism" when this specified change doesn't happen with fewer than seven substitutions. It is not good enough to win a hand of poker with a full house; you need Gauger and Axe's full house.
Gauger and Axe want to generalise that there are usually large, unbridgeable gaps between the functions of genes. However, not only do they generalise from a single example, their example is restricted to examining nucleotide substitutions as the cause of functional differences. Rather than a series of substitutions, Xu et al. (2012) find that functional differences are often caused by structural divergences like the gain or loss of exons and introns, or insertions/deletions (such as the nine or more indel events that have occurred since the divergence of Kbl and BioF). Given the prevalence of this mode of evolution, structural changes offer a route by which large functional distances can be bridged. The rate at which pesticide or antibiotic resistance evolves indicates that there are many islands of function within reach of mutation and selection, not only for bacteria but in eukaryotic lineages as well.
Evolution is a population process, and Axe makes it clear how he thinks evolution is supposed to proceed in populations. Any doubt that Axe lacks a solid understanding of contemporary population genetics is cleared up when he tells us:
McBride criticizes me for not mentioning genetic drift in my discussion of human origins, apparently without realizing that the result of Durrett and Schmidt rules drift out. Each and every specific genetic change needed to produce humans from apes would have to have conferred a significant selective advantage in order for humans to have appeared in the available time. Any aspect of the transition that requires two or more mutations to act in combination in order to increase fitness would take way too long (>100 million years).
[Emphasis in original].

So here we have it. Axe doesn't believe that populations harbour variation that may contribute to evolution. Axe doesn't believe that drift is relevant in a six-million-year time frame. Axe doesn't believe that adaptive pairs of mutations can ever arise in less than 100 million years if they are individually non-adaptive. Broadly, Axe doesn't believe that evolution can proceed in any way but via stepwise positive selection. And Axe bases his reasoning on Durrett and Schmidt (2008) - a paper where the scope is limited to the waiting times for pre-specified mutations in populations.
What's wrong with this picture? For one, genetic drift fixes mutations in populations at the effectively neutral mutation rate. For humans, the mutation rate is somewhere between about 1 and 2.5 x 10^-8 mutations per site per generation (John Hawks details the various estimates nicely here). So each newborn has maybe 50 or 100 mutations that weren't present in its parents. The overwhelming majority of these mutations must be neutral or nearly neutral - if they were deleterious our species would quickly evolve to extinction. So, in the span of six million years, we expect (assuming a 20 year generation time and a diploid genome of 6 billion sites) somewhere between 18 million and 45 million effectively neutral nucleotide substitutions, and more if generation times were shorter in our ancestors. Instead of the zero substitutions occurring by drift as Axe predicts, we're looking at tens of millions of changes. These effectively neutral changes can have phenotypic effects, those effects simply won't have had large effects on survival.
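The arithmetic here is easy to verify; a quick sketch using only the figures stated above (the mutation rate, genome size, generation time, and time span are the assumptions given in the text):

```python
# Figures as stated in the text.
mu_low, mu_high = 1e-8, 2.5e-8  # mutations per site per generation
sites = 6e9                     # sites in a diploid human genome
gen_years = 20                  # assumed generation time, in years
span_years = 6e6                # time since the human-chimp split

# New mutations carried by each newborn:
print(round(mu_low * sites), round(mu_high * sites))  # 60 150

# Under neutral theory, substitutions fix at the neutral mutation
# rate regardless of population size, so over the whole span:
generations = span_years / gen_years  # 300,000 generations
low = mu_low * sites * generations
high = mu_high * sites * generations
print(f"{low:.1e} to {high:.1e}")     # 1.8e+07 to 4.5e+07
```

Tens of millions of neutral substitutions fall out of the basic numbers, with no positive selection required for any of them.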
Further, the standing variation in populations is known to be important for adaptive evolution. Jeremy Yoder over at Denim and Tweed has an excellent discussion of this, summarising some recent research. Karasov et al. (2010) also explain how adaptive evolution isn't mutation-limited in Drosophila, demonstrating that we underestimate the potential for adaptation when using estimates of long-term effective population size (which is strongly influenced by occasional population bottlenecks). To rule out a role for standing variation is truly puzzling - although it certainly helps make the odds look worse.

Interestingly, Axe relies heavily on a single paper -- Durrett and Schmidt (2008). It is a nice paper, but like Gauger and Axe's paper it has a clearly limited scope: in the case of Durrett and Schmidt, they refer to waiting times for specific mutations and pairs of mutations to arise in populations of different sizes. In fact, the authors themselves have already tried to clear up intelligent design misunderstandings of their work. They pointed out then that claims based on their paper that pairs of mutations can't arise in less than 216 million years are fallacious, because in the human genome there are 20,000 genes with multiple sites - in other words, we can't look back on a pair of mutations that have happened and calculate the odds of them happening as if they were a pre-specified necessity. Indeed, before those mutations did arise, there was a perfectly viable species that lacked that change.
In the end, Gauger sums up exactly what is wrong with their argument: she claims that "life is inherently teleological". If you believe there is teleology (i.e. mindful intent) behind biology then no doubt you will believe evolutionary directions can be pre-specified. And that's fine so long as you remember you're no longer testing the claims of evolutionary biology.
Over at the Biologic Institute, Gauger has given the first response from an author to my review of Science and Human Origins. Unfortunately, despite being cross-posted there and on ENV, comments on both forums are disabled so I shall make my response to her here.
In her response Gauger tells us:
Mr. McBride doesn’t like using cars as an example of design versus common descent, even though another evolutionary biologist once used it as an analogy for evolution. The car analogy was a throw-away comment in my piece, not intended as a serious model for anything.

So, an apparently flippant analogy is the first issue she decides to defend. The chapter lacked a serious attempt to make the same point, so while the car analogy was brief it was clearly intended to make a point. Here's what she wrote in Chapter 1:
For most biologists, similarity is assumed to confirm that humans and chimps are linked together by common ancestry. This assumption underlies all evolutionary reasoning. But note that similarity of structure or sequence cannot confirm common descent by itself. “Mustang” and “Taurus” cars have strong similarities, too, and you could argue that they evolved from a common ancestor, “Ford.” But the similarities between these cars are the result of common design, not common ancestry.

My point in response - Gauger doesn't actually quote anything I say, so I will have to - was quite straightforward:
Is the Ford example a fair assessment of similarity in biology? In fact, designed objects do not usually form a nested hierarchy in the way that species do under common descent. This is because there are fewer constraints on designed objects than there are on biological ones. When new technology arises--a safer braking system in a car, perhaps, or a faster processor in a mobile phone--it will make it into the top-of-the-line models no matter who makes them, but not into the cheaper ones. When this happens, have all of the more expensive models become more 'closely related'? This lack of independence between lineages means that they do not have single, reliable groupings and do not form a nested hierarchy. There is no biological analogue for this. In biology, for unrelated lines to share common, derived features, those features must evolve independently.

That is to say, we cannot group models of cars in the same way that we can group species. So, a throwaway analogy it may be, but a shoddy one. She continues to tell us that an intelligent designer could very well use common descent as a tool. Yes, of course this is theoretically true, but all of a sudden we no longer have a non-biological analogue. That is to say, there is no evidence that an intelligent designer would do so. So while ID can accommodate this by saying an unknown designer could have used common descent, what this demonstrates is that ID is such a loose "theory" that literally anything - and therefore nothing - is explained by it.
She goes on to ask: "Are gene and species trees neat and tidy, arranged in nested hierarchies, or not? Or do we see evidence of multiple trees with different topologies?" Here, she is hinting that if evolution were true, gene trees should usually have the same topologies as each other and as species trees. Gene trees are topologies built with a single gene - which is to say a tiny bit of the genome. The thing about gene trees is that they are not actually expected to be perfectly congruent with species trees or each other. Gauger acknowledges that there are some "[p]roposed naturalistic causes" for why they might not match up, but she does not do justice to the well-developed literature on the topic (30 years of literature, in fact). Indeed, coalescent theory not only explains the how and why, it makes predictions about the congruence of gene and species trees.
A major cause of incongruence between gene trees and species trees is incomplete lineage sorting (ILS). When cladogenesis happens (that is, one lineage splits into two), the two daughter lineages might each have large populations that carry lots of genetic diversity. So, even though our species tree would show a simple split, the trees for any given gene would in fact show a complex history - the species isn't "born" with a homogeneous genome or a single DNA sequence for any given gene - instead the species inherits all the long histories of all the lineages that contribute to it. So we can see straight off that the story of a gene might not match the story of cladogenesis in species (even though we use genes to try to estimate species' histories). Gene trees work on the basis of coalescence: how far back in time do we need to go before the DNA sequences for a given gene that differ between two species trace back to a single source - i.e. the common ancestral individual whose gene was passed on to the two daughter species? Because the populations of our two study species might be large, some genes might have deep coalescence - their coalescence might actually pre-date the species. When that happens, you end up with incomplete lineage sorting - genes might not only appear older than the species they represent, they might also have the wrong topology (that is, species can appear more closely related in the analysis of single genes than they truly are).
An excellent recent example is the gorilla genome (a similar example with orangutans is discussed in the BioLogos link above, which also explains ILS in much more detail than I do here). When the gorilla genome was published, certain creationists got very excited and misinterpreted the genetic similarities between gorillas, humans and chimpanzees. As it happens, about 30% of the gorilla genome is closer to either the chimpanzee genome or the human genome than the latter two are to each other (even though the latter two are more closely related). According to Jeffrey Tomkins, "[t]hese results continue to clearly support a Genesis-based biblical view of unique created kinds and mankind being created in the image of God."
In fact, Kingman's 30-year-old coalescent theory doesn't just predict that ILS will occur, it quantitatively predicts the amount of ILS that we now find between these three species. Joe Felsenstein casually does the maths in a comment on Panda's Thumb. It is another successful evolutionary prediction. We are left concluding that gene and species trees make sense under evolutionary theory, but under ID there is no explanation for how these phenomena come about (only "the designer could have done it that way"), and ID does not even have an accurate analogy to fall back on.
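Felsenstein's back-of-the-envelope calculation follows from a standard coalescent result: for three species, a gene tree disagrees with the species tree with probability (2/3)·exp(-T/(2N)), where T is the time in generations between the two speciation events and N is the ancestral effective population size. Here is a minimal sketch of that calculation; the divergence time, generation time and population size below are round-number assumptions for illustration, not Felsenstein's exact figures:

```python
import math

def discordance_probability(internode_generations: float, effective_size: float) -> float:
    """Probability that a random gene tree for three species disagrees with
    the species tree, under the standard coalescent: (2/3) * exp(-T / (2N))."""
    return (2.0 / 3.0) * math.exp(-internode_generations / (2.0 * effective_size))

# Illustrative round numbers (assumptions, not measured values):
# ~2 million years between the gorilla split and the human/chimp split,
# a 20-year generation time, and an ancestral effective size of ~50,000.
T = 2_000_000 / 20   # internode time in generations
N = 50_000           # ancestral effective population size

p = discordance_probability(T, N)
print(f"Expected discordant fraction of the genome: {p:.2f}")
```

With these toy numbers the expected discordant fraction comes out at roughly a quarter of the genome - the same order as the ~30% reported for the gorilla genome - and sliding N or T within plausible ranges moves the prediction around that value. The point is that ILS is not an excuse bolted on after the fact; the coalescent predicts roughly how much of it to expect.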
Gauger finally mentions convergent evolution and the horizontal transfer of genes between species. She points to a couple of interesting examples - that insects and mammals have highly similar molecules responsible for their sense of smell, but produced by unrelated genes, and that choanoflagellate nitrogen metabolism is "borrowed" from algae. The first case is evidently the result of convergent evolution based on similar selection pressures and the constraints on the molecules that are able to meet the task. The second is understood to be the result of lateral gene transfer. Gauger, however, concludes:
[P]roteins that have a common function but different topology, or complex biologic problems that are solved independently but arrive at similar outcomes, suggest the reapplication of existing ideas in new contexts. This of course, is precisely the kind of thing that designers do, whether for cars or for creatures.
The conclusion is leaking more oil than her Ford Taurus. The independent systems of olfactory reception in insects and mammals can be interpreted as design only because literally anything can. It offers no positive evidence of design - of intervention - in biology.
Just a note: I have added a brief policy for comments on the blog. In the interests of ensuring civil discourse, I want comments here to be polite and free of personal attacks.
Ideally, what I want is for all discussion here to focus on matters of science and ideas.
This is a pre-emptive policy, and hopefully is completely unnecessary. The reason for it is that I'd like this to be a place where an interested creationist or ID advocate can raise a scientific objection or ask an honest question, and not get an earful of abuse (which is a complaint I've commonly seen elsewhere).
Even though this is bad pedagogy, here are two examples of unacceptable comments on Still Monkeys:
- Claiming scientists or "Darwinists" are deluded, or too stupid to see they've been brainwashed
- Calling someone a creotard or an IDiot
No courtesy will be extended to trolls, but otherwise get as fiery about matters of science as you please.
Following my chapter-by-chapter review (here are parts 1, 2, 3, a prelude to 4, 4 and 5) of the ID book Science and Human Origins, there has been a minor response from ID advocates.
So far, none of the ID advocates have found the time to read the thing, but they haven't let that get in the way of criticising me.
Before reading the review, Ann Gauger has said I misunderstood her. Denyse O'Leary upped the ante with an accusation of dishonesty.
And now David Klinghoffer has bizarrely labelled me "Darwinist hero of the Hour"* and has even more bizarrely accused the other "Darwinists" of being too scared to read the book. Instead, these very scared Darwinists have relied on me - "a previously unknown reviewer" writing at "his blog that no one before ever heard of." My review was "discovered" by some people Klinghoffer had heard of, who have proceeded to "lift up the hitherto obscure McBride on their shoulders". Yes, Klinghoffer actually lays it on that thick. Obscure or not, the review is also a fact-by-fact discussion of the book, a point Klinghoffer omits.
Klinghoffer is so busy telling Darwinists how terrible they are that he forgets book reviews are a common tool for deciding whether a book is worth reading or is, in fact, a steaming heap. It turns out that Science and Human Origins is the latter. The book is an attempt to attack science, not enrich it; the Discovery Institute doesn't deserve any money for publishing it. I had a couple of days where I couldn't decide whether to review it or not, because I didn't want to nominally support their crusade.
My review is as extensive as I had time for: it is more than a third of the length of the original book's main text. There are areas in which I lack any degree of expertise - physical anthropology springs to mind (although, could the chapter's author, Luskin, claim any more expertise?). Obviously, a thousand others could do a more thorough job than I could. Actually, in many cases, that has already been done. Ironically, Klinghoffer accuses Darwinists of being "afraid of directly confronting ID arguments", but he need only check the Panda's Thumb archives for refutations of many of Luskin's current arguments: Luskin simply trots out his mid-2000s arguments that didn't hold water back then. If no one wants to re-read these arguments, who could blame them?
Here's a challenge to the ID crowd: read the book, then read my review and tell me what you think - including where I went wrong, if you reckon I did. Have a think about what Gauger, Axe and Luskin are telling you, and ask why mainstream scientists - Christian and atheist - do not share their views. Consider how a literal Adam and Eve are still possible after trying to reconcile chapters 3 and 5. Ask questions - feel free to ask them here politely and I will answer them politely.
Put simply, let's talk about the science.
* I am not a "Darwinist", but that's a story for another day.
Here ends my review of Science and Human Origins, a Discovery Institute publication that is intended to challenge--amongst other things--the notion that humans share a common ancestor with chimpanzees, and the notion that we could not have descended from a literal Adam and Eve.
Here are parts 1, 2, 3, a prelude to 4, and 4.
Although it concludes the book, Ann Gauger's "The Science of Adam and Eve" in Science and Human Origins was the first chapter I'd seen. Excerpts had been posted suggesting the book made an argument for a population bottleneck that would allow for a real-world, literal Adam and Eve. The idea intrigued me for a couple of reasons -- how could intelligent design remain secular when its leading institution is publishing about a literal Adam and Eve? Will common descent be acceptable, and the argument be instead that we were imbued with humanness at this bottleneck? How will the evidence of our long-term effective population size be treated? This was the genesis for the multi-part review that concludes here.
Although the rest of the book could be justified as building up to this point, Chapter 5 contains the presentation of the only original material in the book (as far as I know Gauger hasn't written about this before). The four chapters leading to this point have repeated old arguments, with minimal engagement with contemporary literature. If I have been scathing when reviewing these chapters, it is because I was expecting the Science from the book's title to have been taken more seriously. Yet not only have I found little concordance between any of the main claims made and the literature, I have found little evidence of engagement with the literature. But the final chapter has the promise of a little more. So get ready for it... everything has been building up to this.
To convince us of the possibility of a literal Adam and Eve, Ann Gauger presents to us doubt over whether a single published paper from the 1990s truly supports a large human population since speciation. In this paper, Francisco Ayala had used the ancient polymorphisms in a 270 base-pair sequence of immune system DNA (exon 2 of HLA-DRB1) to suggest we must have had a population far greater than two for the whole time since our lineage and the chimpanzee lineage separated. Because he found more than four alleles (versions of exon 2) that predated the splitting of the human and chimpanzee lineages, there could not have been a human population bottleneck of two people. In fact, Ayala's calculation supported a human population size of 100,000.
However, there is incongruence between the phylogenetic tree built from exon 2 and those built from its surrounding introns. Because recombination appears to be limited in the genomic region where HLA-DRB1 is found, Gauger argues that factors other than ancestral polymorphism might account for the exon 2 diversity. This limited recombination means that there are recognisable, major haplotypes of HLA-DR, and Gauger argues we should base our estimates of the number of ancient polymorphisms on these. Because these haplotypes contain long introns, molecular dating for them is also more reliable than dating based on exon 2 alone.
What Gauger then argues is that there are five major haplotypes but only three of the haplotypes are "ancient". Because up to four haplotypes could be inherited from two people, the existence of only three leaves the door open for an Adam-and-Eve bottleneck. Unfortunately for Gauger, even if we accept all parts of her argument up to here, we are forced to conclude that this final step is wrong if the book is to be internally consistent.
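The haplotype-counting logic here is simple arithmetic worth making explicit: a diploid individual carries two copies of each autosomal locus, so a founding pair can transmit at most four distinct haplotypes. A minimal sketch of the check (the function names are mine, for illustration only):

```python
def max_founder_alleles(n_founders: int, ploidy: int = 2) -> int:
    """Maximum number of distinct alleles at one autosomal locus that can
    pass through a bottleneck of n_founders individuals."""
    return n_founders * ploidy

def bottleneck_possible(n_ancient_lineages: int, n_founders: int = 2) -> bool:
    """Could all n_ancient_lineages pre-date a bottleneck of n_founders?"""
    return n_ancient_lineages <= max_founder_alleles(n_founders)

# Gauger's count of three "ancient" haplotypes fits through a two-person
# bottleneck; five haplotypes older than the bottleneck would not.
print(bottleneck_possible(3))   # True  - three haplotypes fit under the ceiling of four
print(bottleneck_possible(5))   # False - five do not
```

Three "ancient" haplotypes clear the four-haplotype ceiling, which is exactly why Gauger's count matters; the argument collapses as soon as five haplotypes are shown to pre-date the proposed couple.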
The other two major haplotypes might not be "ancient", but they are still 4 to 6 million years old (Gauger agrees with this). While this does mean they originated in the hominoids, Gauger takes this as evidence they could have come from Adam and Eve. Why is this wrong? Well, if we recall Luskin's chapter, he argued that Homo habilis was seriously non-human. No self-contemplation for the habilines. Yet, H. habilis originated about 2.3 million years ago, and H. erectus did not arrive until about 1.8 million years ago, marking what Luskin accepts to be the start of humanness. Back at 4 million years ago, when the last of the HLA-DR haplotypes originated, our closest relatives were Australopithecus. Anatomically modern humans were a long way away. So we can be sure that the five major haplotypes of HLA-DR all pre-date the genus Homo, which contradicts the claim made by Gauger that:
The argument from population genetics has been that there is too much genetic diversity to pass through a bottleneck of two individuals, as would be the case for Adam and Eve. But that turns out not to be true.
Instead, the argument from population genetics still definitively rules out the possibility of Adam and Eve, if Adam and Eve were human.
So far, I have only mentioned the one line of evidence around ancestral human population sizes as discussed in Gauger's chapter. But there are many more reasons to argue against a human two-person bottleneck. Unfortunately, Gauger does not engage with any of the modern literature that provides evidence of reasonably large human populations over our evolutionary history, instead only taking on Ayala's single-locus argument from the 1990s. I will discuss just one -- a compelling, modern examination of our ancestral population size.
When I first heard about this book, my immediate thought was, "What about Li and Durbin (2011)?" Readers of Gauger's chapter might not know this paper, because she certainly doesn't discuss it. Li and Durbin, however, trace effective population sizes (evolutionarily relevant estimates of the adult breeding population size) over our evolutionary history as a species, using comparisons of entire genomes from across a range of ethnic groups. At no point that correlates to the genus Homo is there anything approaching an Adam-and-Eve population bottleneck. Again, when an earlier chapter has drawn the human/non-human distinction between Homo habilis and Homo erectus (i.e. approximately 2 million years ago), we can be quite well assured that a literal Adam and Eve are unsupported by population genetics. The figure below, from Li and Durbin, suggests an effective population size of about 15,000 at the origin of Homo erectus.
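The order of magnitude of such effective-size estimates can be recovered from first principles: under the neutral expectation, per-site heterozygosity approximates θ = 4·Ne·μ, so genome-wide diversity translates almost directly into a long-term Ne. A back-of-the-envelope sketch, using round textbook-style values (assumptions for illustration, not numbers taken from the paper):

```python
def effective_size(heterozygosity: float, mutation_rate: float) -> float:
    """Long-term effective population size from pi ~= theta = 4 * Ne * mu,
    i.e. Ne = pi / (4 * mu)."""
    return heterozygosity / (4.0 * mutation_rate)

# Assumed round numbers: human per-site heterozygosity of ~0.001, and a
# per-site, per-generation mutation rate of ~1.25e-8.
ne = effective_size(0.001, 1.25e-8)
print(f"Implied long-term Ne: {ne:,.0f}")
```

With these round inputs the implied long-term Ne lands at 20,000: broadly consistent with the tens of thousands in the figure, and orders of magnitude away from a breeding population of two.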
In a section called "Take home message", the second-last in the book, Gauger shifts to suggesting that perhaps "we began from two intelligently designed first parents" and that if this were the case "all this analysis of how many ancient haplotypes we share with chimps doesn’t really matter".
What evidence does she provide for this possibility? She argues:
There certainly are surprising patterns of genetic variation within HLA-DRB1 that suggest unknown processes may be operating. Let me propose that a process exists which generates specific hypervariability within exon 2 and suppresses recombination elsewhere. The process is targeted to generate diversity precisely in the peptide-binding domain. I suggest that intelligent design had to be involved at the beginning, in order to rapidly generate HLA diversity after the foundation of our new species (assuming we came from two first parents). Evidence supporting this idea comes from the fact that HLA-DRB1 diversity has in fact increased very rapidly by anyone’s count, going from a handful of variants to over six hundred alleles in six million years or less. Also, the HLADRB1 variable regions in exon 2 show a patchwork, cross-species relationship to their surrounding DNA sequences, making their origin hard to account for by common descent. Their repeated use of similar motifs from different species may instead indicate common design. I further suggest that this process may be human-specific, since other primates don’t show nearly the same degree of allelic diversity within lineages as humans do.
That is, verbatim, the argument for intelligent design that the book has been building towards: the second exon in a gene in the major histocompatibility complex (MHC) is highly variable, and the diversity in humans is higher than in other primates. Gauger does not consider overdominant selection for immune system diversity in the rapidly expanding human population, as it spread from Africa across the world, which would seem to account for differences between human and other primate diversity at this locus.
She concludes by telling us: "it seems not unreasonable to propose that HLA-DRB1 diversity is the result of a process that generates specific hypervariability and/or gene conversion within exon 2 in order to rapidly generate HLA diversity. The existence of such a process essentially demolishes any population genetics arguments about ancestral population sizes." While this may indeed challenge the population genetics inferences drawn from exon 2, it does not demolish the population genetics argument about ancestral haplotypes, nor those based on loci outside the MHC.
We may have much to learn about the evolution of MHC diversity, but to reject common descent and postulate an intelligent designer who specially created us from an ancestral couple because of this is ludicrously at odds with the balance of evidence.
Closing thoughts on Science and Human Origins
I feel like I have written enough about this book to get away with tacking on a little conclusion here rather than making another post to sum up. Indeed, the review has run to a third of the length of the book itself (not hard: the text of those five chapters, less the references/endnotes, runs shy of 25,000 words).
I have been left wondering why the Discovery Institute, or intelligent design advocates in general, or biblical literalists feel a need to try to accommodate science when they have a belief in a supernatural entity capable of breaking natural laws. In the case of this book, it has left them needing to make all kinds of awkward criticisms of fields in which the authors clearly lack expertise. A lawyer is not the right guy to challenge the world's palaeoanthropologists, nor the world's geneticists. Certainly, he shouldn't be trying to take them all on at once. It ends with him trying to smear the reputation of scientists rather than engaging with their ideas. Accusations that the entire field of palaeoanthropology is driven by personal disputes and that Francis Collins is a bad Christian are simply not compelling reading in a book that is putatively about scientific argument.
By the end of the book I was left with a massive, if fairly obvious, incongruence. The reality is that the overwhelming majority of scientists in each of the fields addressed in this book share a broad consensus that is at odds with what the authors claim. And, despite the breathless accusations of a Darwinian conspiracy, mainstream scientists are a diverse bunch. Like Francis Collins and Francisco Ayala, who are both singled out in this book, many are themselves Christian yet accept the balance of evidence for our evolutionary past. They accommodate their beliefs with an uncompromised view of the science. This is because they have engaged openly with the evidence of their discipline and concluded that evolutionary principles best explain human origins. There is no atheist conspiracy to force evolution on the public; instead, it is all of the diverse and beautiful evidence of the world around us that points to evolution having shaped us and earth's biota. There is no shame in this, and it hardly makes us less human to acknowledge it.
And, of course, there are entirely different world views, such as the one taken by certain other religious folk. They choose to place their faith above scientific evidence. Although I don't agree myself -- I value the conclusions of science ahead of those of personal revelation -- it is still a stronger position than that of intelligent design.
ID tries to straddle some in-between place, where it claims to disprove scientific consensus in a number of different fields, and then attributes the lack of a shifting consensus towards ID to bias and brainwashing. But, as this book amply demonstrates, the real problem is that ID fails to engage with much of the modern literature in those fields.
This book also demonstrates ID's difficulty with separating itself from Christianity in practice. The introduction and four of the five chapters are framed in a Christian context. Concern over how Christian beliefs have been impacted by science, and the role of Christians who accept mainstream science, are at the fore. Even issues like stem-cell research appear, with no context given. The thrust of the whole book is to claim human exceptionalism, disprove our common descent with apes, and search for a real-life Adam and Eve. None of this is part of a secular programme to genuinely investigate the world. This is particularly clear because the authors are happy to create doubt about what they call Darwinism, rather than make positive cases for an alternative to it. This is an obvious echo of the Wedge Strategy on which the Discovery Institute was founded.
Science and Human Origins has to be described first and foremost as being anti-evolution rather than pro-intelligent-design, or pro-science. If it offers solace to those seeking evidence against evolution for their faith, the solace should be as incomplete as the arguments made in the book.
Continuing a chapter-by-chapter review of Gauger, Axe and Luskin's Science and Human Origins, a Discovery Institute publication that is intended to challenge--amongst other things--the notion that humans share a common ancestor with chimpanzees, and the notion that we could not have descended from a literal Adam and Eve.
Yesterday, I changed the format of my review a little. I predicted a few things about what I was expecting from Casey Luskin's effort to dismantle the argument for junk DNA - Chapter 4 - before I had the chance to read it. Let's see how I did. I'll also mention now that this chapter review is going to get technical at times.
Luskin's chapter is entitled "Francis Collins, junk DNA, and chromosomal fusion". Apart from trying to make the case against the existence of junk DNA, Luskin wants to make it clear that he is very disappointed in Collins, who he chooses to frame as "an evangelical Christian who embraces both Darwinian evolution and embryonic stem cell research". I can only imagine that the reader is expected to disapprove of the last point. The interplay between Collins' faith and science is an issue for Luskin as "his emphatic defense of ape/human common ancestry still has wide influence in the faith community." Collins might infect the faithful with science and lead them away from human exceptionalism. Luskin also notes that Collins and "atheist Darwinist Richard Dawkins" say very similar things when they speak about evolution. I think the implication there is pretty clear.
When I was thinking about what to expect from this chapter, I predicted that Luskin would conflate non-coding DNA and junk DNA, and that Luskin would exploit this erroneous conflation by pointing to known functions of non-coding DNA as evidence against junk DNA. Here, Luskin wastes hardly a word. In the midst of his anti-Francis-Collins tirade, Luskin points out "studies have found extensive evidence of function for non-coding DNA showing that it is not genetic “junk” after all." He then starts a section called Non-coding DNA: Not really junk after all, evidently keen to drive home the point. In here he cautions the reader that "even a cursory review of the scientific literature shows it is wildly inappropriate to simply assume that repetitive DNA—or most others types of non-coding DNA—are useless genetic 'junk.'"
I'd like just once to see all these references where all these researchers are saying that if DNA does not code for a protein then it is junk. The whole issue leads me to wonder how much of the relevant literature Luskin has actually read. In any serious, introductory discussion of junk DNA I'd expect to see Ohno's 1972 paper introducing the concept referenced, and Ohno's argument discussed. Luskin does no such thing. He simply wants us to believe that 'Darwinists' everywhere irrationally assert that all non-coding DNA is junk DNA. Well, the only assertions are his, and those of the dozen creationists before him who have done the same. None of the quotes Luskin provides from Francis Collins come close to saying any such thing.
Indeed, knowledge of some of the functions of non-coding DNA predates the term 'junk DNA' in the literature. In other words, the argument for junk DNA has always started from an understanding that non-coding DNA can be functional. It just happens to also be the case that a large swathe of the typical mammalian genome -- and ours is decidedly typical -- shows no evidence of function, and actually could not be evolutionarily conserved by natural selection, for reasons I've explained elsewhere already. The strawman junk DNA proponent who simply assumes the existence of junk does not exist.
Luskin moves on to discussing transposable elements. These are highly repetitive sequences that collectively make up close to half of our genome, and so are at the heart of any discussion about junk DNA. As with non-coding DNA in general, Luskin lists some known functions for transposable elements and other repetitive elements like satellite DNA. In fact, he lists more than an entire page of bullet-pointed functions. Surely, if there's a whole page of bullet points, then transposable elements are functional. Right?
To answer that we need to consider what transposable elements are -- something Luskin clumsily omits from his chapter. Transposable elements get their name from their unusual property: they 'jump' around in the genome. Certain classes of them also make copies of themselves that reinsert at random in the genome. When either of these things happens, it is similar to other types of mutation -- some function might change, or they might happen to land somewhere non-functional and cause no harm. But, like other mutations, variations in these transposable elements between different people are often linked to disease -- in other words, sometimes transposable elements disrupt a genetic function and cause harm and dysfunction.
Because we can examine the genomes of different people and see where new copies of these transposable elements have been made, we get a pretty clear picture of how good they are at self-replication in our genome, and so we also get a pretty clear picture that they've all come from this activity. The most prolific transposable element has more than a million copies in your genome! Even though sometimes a new copy might land somewhere and shift the function of a gene, or later mutate and become part of a new gene (and there are certainly recorded cases of this), the majority of these duplications have no effect. Mostly, they land between genes or inside introns where they change nothing of importance. If that were not the case, most of the duplications of transposable elements would either have to be removed by purifying selection, or would be causing disease.
So, you have large tracts of your DNA that have come about from the replication of genetic elements that can copy themselves. This would lead to an exponential increase in copies of these transposable elements if they continued to do so unchecked. However, because it is rare for their sequences to be performing functions in our genome, they also mutate freely and quickly. The typical transposon accumulates random mutations to the point where it is no longer able to self-replicate. We break them with mutation. Of the roughly 45% of your genome made up of these repeated copies of transposable elements, less than 0.1% consists of functional retrotransposons.
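The dynamic described above - copies multiply while mutation steadily knocks out their ability to jump - can be caricatured in a few lines. This is a deterministic toy model with made-up rates, purely to illustrate the logic, not a parameterised simulation of any real element family:

```python
# Toy model of a transposable element family (illustrative rates only,
# chosen as assumptions for this sketch):
#   - each generation, active copies replicate at per-copy rate p_copy
#   - and are knocked out by mutation at per-copy rate p_dead
p_copy, p_dead = 0.06, 0.05   # copying only slightly outpaces breakage
active, dead = 10.0, 0.0      # start with ten active copies, no dead ones

for _ in range(1000):
    dead += active * p_dead            # newly broken copies pile up as junk
    active *= (1 + p_copy - p_dead)    # net growth of the active family

print(f"active copies: {active:,.0f}; dead (junk) copies: {dead:,.0f}")
```

With these toy rates, dead copies come to outnumber active ones roughly five to one (the ratio tends to p_dead / (p_copy - p_dead)), even while the family keeps expanding. The closer copying sits to breakage, the more the graveyard dominates - the qualitative pattern in real genomes, where almost all transposable element sequence is an inactive relic.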
Just to be clear, let me repeat this: sometimes these duplications fortuitously land somewhere and are functional, conferring some benefit. There are a few well-known examples -- I discussed a really awesome one at the end of a two-part series on junk DNA I've previously written here at Still Monkeys. However, when we consider a) the origin of such sequences, b) that we would expect many copies of such duplications to be retained in populations if they are neutral (i.e. they don't worsen fitness) and c) that we are talking about half of the entire human genome, then the burden of evidence for function falls on the side of the 'no junk' brigade.
Luskin continues with an attempt to provide such evidence. Now is a good point to recall one of the other predictions I made: that Luskin would provide qualitative evidence of function in intergenic and other non-coding DNA, but would fail to provide a quantitative assessment of how this impacts our view of the genome. That is to say, he'd point to newly discovered function where none was known before and claim victory, yet fail to inform his readers that these interesting discoveries actually account for only tiny fractions of the genome. He does not tell his readers that because the fractions of explained DNA are so small, the proportion of purported junk DNA in mammalian genomes has not shifted since the very first assessment by Ohno, now 40 years ago.
An area that Luskin highlights is that of pervasive transcription. When a gene is used, it is transcribed into RNA. Those RNA transcripts might then be translated into proteins (this is what happens for coding sequences) or else they might be directly functional (e.g. regulating genes, forming ribosomes and so forth). Luskin is right to emphasize the diversity of roles played by RNA, as they have been somewhat underappreciated in the past.
Transcription is the first step when a stretch of DNA performs a cellular function. What Luskin latches onto is that several studies have suggested much of our genome gets transcribed into RNA, pointing to the possibility that a far greater proportion of the genome is functional than had been previously considered. Luskin highlights a number of commentaries (although virtually no primary research literature) on this side of the argument, doing his readers a serious disservice. This is, in fact, a highly contentious area and there are very good reasons to temper our excitement over pervasive transcription.
Many of the RNA transcripts that have been detected from areas of the genome that lack recognised functions are low-frequency transcripts. It is widely recognised that, because of the way transcription is initiated, there are many spurious transcripts -- "transcriptional noise" or junk RNA -- that are degraded by cellular processes such as nonsense-mediated decay. In yeast, Struhl (2007) argues that 90% of transcripts are spurious. And while certain techniques have detected high levels of transcription in humans, other techniques that are less error-prone have failed to do so (van Bakel, 2010, 2011), and the issue has been the source of continuing controversy (Clark et al., 2011). With this said, previously unidentified, functional RNA transcripts will continue to be found. Hundreds of non-coding RNAs have been identified as being expressed in specific tissue types, which suggests they may be functional (Mercer et al. 2008), although no function for them has yet been identified. But let's remember to compare those hundreds with the tens of thousands of identified genes in the mammalian genome, which collectively sum to only about 2 or 3% of the genome. In the vastness of the genome there is still little evidence of pervasive function, and there remain good reasons to expect much junk despite the evidence of transcription.
Luskin bothers with none of this massive grey area, none of the controversy. He instead moves on to pseudogenes. Pseudogenes are genes that no longer function in their former capacity because they have been degraded by mutation. Luskin highlights one example from humans used by Francis Collins as evidence of common descent, which Luskin simply calls a vitamin C pseudogene, and describes as being "supposedly functionless". Okay - is Luskin about to prove that this gene (GULOP) is not actually functionless? No -- he moves on to a general discussion about other pseudogenes instead. A pity. GULOP is fairly compelling evidence of our common descent amongst the primates, as Collins has previously said. The haplorrhine primates (that means us and our closer relatives) differ from the other primates in that at some point mutations became fixed in GULO, the last component in the vitamin C synthesis pathway. Most likely, this is because the fruit-based diet of the common ancestor of the haplorrhine primates was enriched with vitamin C, such that the loss of the pathway was not detrimental. The same thing has happened in some fruit-eating birds. We retain the legacy of this pathway in our genome, but none of its function. This non-functioning legacy is shared by the other closely related primates. The single mechanism that plausibly explains the sharing of a once-functioning, but identically broken, gene is that we are related by descent. The only alternative is to insinuate that it is only "supposedly functionless" without a single bit of evidence, which is Luskin's approach.
Some pseudogenes get transcribed as RNA and sometimes act as regulators for genes. Such pseudogenes are not junk, and there are a couple of known examples. We have about 20,000 pseudogenes, so again this is a numbers game. The majority are non-functional, and contribute to our total junk (although they total only about 1% of our genome).
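The numbers game here is easy to make concrete with back-of-the-envelope arithmetic. The figures below (haploid genome size, pseudogene count, an average pseudogene length) are round, illustrative assumptions, not measurements:

```python
# Back-of-the-envelope: how much of the genome do pseudogenes occupy,
# and how much would a handful of functional ones change the picture?
# All figures are rough, illustrative assumptions.

GENOME_BP = 3.2e9          # approximate haploid human genome size (bp)
PSEUDOGENE_COUNT = 20_000  # approximate number of human pseudogenes
AVG_PSEUDOGENE_BP = 1_500  # assumed average pseudogene length (bp)

total_pseudogene_bp = PSEUDOGENE_COUNT * AVG_PSEUDOGENE_BP
fraction = total_pseudogene_bp / GENOME_BP
print(f"Pseudogenes: ~{fraction:.1%} of the genome")  # ~0.9%

# Even if a couple of hundred pseudogenes turn out to be functional
# regulators, ~99% of them remain unexplained by that discovery.
functional = 200
print(f"Functional pseudogenes: {functional / PSEUDOGENE_COUNT:.1%} of the total")
```

Whatever the exact figures, the shape of the argument is the same: a few confirmed functional pseudogenes do not rescue the other twenty thousand from being junk.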
I predicted that Luskin would talk about introns and alternative splicing. Introns break up our genes with non-coding sequences. All of our protein-coding sequences make up less than 2% of our genome, but the introns that break them up make up about a third of it. However, Luskin does not discuss introns at all. This is a major omission.
The rest of the chapter is spent discussing the fusion of two ancestral chromosomes to form human chromosome 2. Luskin argues that this is not proof of common descent, and that there may not even have been a fusion event. He argues that the telomeric DNA at the proposed fusion site in human chromosome 2 is shorter than the telomeres found at the ends of typical chromosomes. Again, this is designed to cast a small shadow of doubt on our common descent with other primates. No positive argument is offered for an alternative model.
This is what also struck me in Luskin's discussion of transposable elements. In a chapter of a book purportedly about intelligent design, I would have appreciated a model under which such selfish replicating elements in our genome could be understood as a rational component of design. Perhaps this would tell us something about the nature of the mysterious designer. For example, a designer who would use such elements would appear to be a hands-off designer, because they would be willing to trade the suffering of dysfunction and disease caused by these elements for occasional adaptation over the long term. An oddity about ID is that its proponents dislike anyone trying to draw inferences about the designer, yet I have never heard any complaints about archaeologists using the designed objects of ancient civilisations to do the same.
I haven't covered any of the population-genetics arguments for why there must be either junk DNA, or at least DNA that can freely accumulate change. These provide a compelling, positive basis from which to understand junk DNA. The reason I haven't discussed them is that Luskin fails to address them, including the original argument made by the coiner of the term 'junk DNA'. I've explained them elsewhere for those who'd like to know more than Luskin is willing to tell them.
Luskin here has continued in the tradition of the other chapters in this book by ignoring all of the best arguments that run contrary to his, while making previously refuted arguments with biased evidence, pretty much in line with what I predicted before reading the chapter. He presents no positive case for a pervasively functional genome, and has only set out to cast doubt on the concept of junk DNA. Even in this, he has comprehensively failed. The book is called Science and Human Origins, but the science is threadbare, and treated unevenly and unfairly.
*** UPDATE ***
A common theme in feedback I've been getting on Luskin's two chapters is that Luskin has been recycling his old material from years ago, all of which has been soundly refuted.
One such case is his argument against chromosomal fusion. In the past he has been much more outspoken about this than he is in the book: Luskin has dropped much of his previous chromosome fusion argument, an unspoken admission that it was without merit. But back in 2009 on Panda's Thumb, Dave Wisker had some fun with the chromosome 2 fusion argument. His four-part series is a must-read, and there are valuable comments there as well (h/t Arthur Hunt).
Carl Zimmer had the cheek to ask for evidence about Luskin's chromosome 2 claims. The extraordinary non-response unfolded in a Facebook thread. Absolute must-read stuff, especially Zimmer's linked content on his blog, The Loom.
Continuing a chapter-by-chapter review of Gauger, Axe and Luskin's Science and Human Origins, a Discovery Institute publication that is intended to challenge--amongst other things--the notion that humans share a common ancestor with chimpanzees, and the notion that we could not have descended from a literal Adam and Eve.
I am going to start with a prediction.
I have yet to read Chapter 4, the obligatory 'junk DNA' chapter, written by Casey Luskin, but I have tried to discuss this topic numerous times with Intelligent Design advocates, as my earlier posts on the topic attest. The responses are always the same. I'm aware Luskin is not a molecular biologist, so I'm expecting him to do a similar, if less thorough, job of presenting the same claims against junk DNA that have been made by Jonathan Wells and other ID advocates.
Therefore I am predicting that this chapter will:
- Conflate junk DNA and non-coding DNA.
- Identify functions of non-coding DNA (introns allow alternative splicing, for example) as evidence that junk DNA doesn't exist.
- Present a qualitative argument that because new function is found from time to time (e.g. microRNAs) the base of 'junk' is being continually whittled away.
- Ignore the quantitative argument that such new discoveries account for a negligible fraction of the human genome, still leaving 90% unexplained.
- Make the argument that because active copies of transposable elements can play genomic roles, we can't discount the importance of any copies of transposable elements.
- Ignore that only a handful of copies of transposable elements are actually active, and that most are defunct.
- Play up pervasive transcription, while ignoring evidence that spurious transcripts are expected to be produced by error.
- Ignore the population-genetic arguments for the existence of junk DNA (e.g. the effectiveness of selection in small populations [Lynch 2007] and the mutational load in mammals [Ohno 1972]).
- Ignore the 'onion test', i.e. if the junk is truly functional, why do some closely related and ecologically similar species have several times more of it than others?
- Broadly, ignore every serious argument that provides support for the inference of junk DNA, and still claim a resounding victory.
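The quantitative argument in the predictions above can itself be sketched with round, assumed numbers: even generously crediting hundreds of newly discovered functional non-coding RNAs with a few kilobases of sequence each, such discoveries barely dent the unexplained ~90% of the genome. The figures below are illustrative assumptions only:

```python
# Illustrative arithmetic only: all figures are rough assumptions.
GENOME_BP = 3.2e9             # approximate haploid human genome size (bp)
KNOWN_FUNCTIONAL_FRAC = 0.10  # generous allowance for coding + known regulatory DNA

NEW_RNAS = 500                # suppose hundreds of new functional ncRNAs...
AVG_RNA_BP = 3_000            # ...each generously credited with 3 kb of sequence

new_bp = NEW_RNAS * AVG_RNA_BP
new_frac = new_bp / GENOME_BP
print(f"New discoveries: ~{new_frac:.2%} of the genome")          # ~0.05%
print(f"Still unexplained: ~{1 - KNOWN_FUNCTIONAL_FRAC - new_frac:.1%}")
```

On these assumptions, five hundred new functional RNAs account for roughly one-twentieth of one percent of the genome, which is why pointing at individual discoveries never answers the quantitative case for junk.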