Mainly for clarification, disambiguation and to match experimental evidence?
My theory is concerned with "macro mutations" and, in its plainest form, simply expands on what is already known. I will use the term DNA subroutine for a gene that can be switched on or off to express the macro mutation in question.
Fact: an organism under stress exhibits considerably higher mutation rates.
Theory: different types of stress will confer different spectra of mutations - i.e. the mutations will favour changes that are more likely to be beneficial for the predicted future requirement that the stress signals.
The mechanism proposed is that, over a window of genetic experience (say a million years), the DNA stores information about which mutations were more appropriate for a given stress and which ones were not. A "bank" of hundreds of thousands of DNA subroutines builds up over time and proves its worth; individual subroutines are either expressed or switched off to save metabolic resources. If a subroutine has been switched off for long enough it can be relegated to "junk DNA" status and is no longer trusted in the field, because it is no longer valid in the new context (or, from an orthodox perspective, it becomes unusable through genetic drift).
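To make the idea concrete, here is a toy sketch of the "subroutine bank" in Python. Everything in it (the class, the stress signatures, the threshold for relegation to junk) is invented purely for illustration of the mechanism described above, not a claim about real genetics.

```python
# Toy sketch of the "subroutine bank" idea; names and thresholds are made up.
JUNK_THRESHOLD = 1_000_000  # generations switched off before relegation to "junk"

class DnaSubroutine:
    def __init__(self, name, stress_signals):
        self.name = name
        self.stress_signals = set(stress_signals)  # stresses it has proved useful for
        self.expressed = False
        self.generations_off = 0
        self.junk = False

    def update(self, current_stresses):
        """Express the subroutine if its stress signature matches, otherwise
        switch it off and count how long it has been idle."""
        if self.junk:
            return
        if self.stress_signals & current_stresses:
            self.expressed = True
            self.generations_off = 0
        else:
            self.expressed = False          # switched off to save metabolic resources
            self.generations_off += 1
            if self.generations_off > JUNK_THRESHOLD:
                self.junk = True            # no longer trusted in the field

# A (tiny) bank; the theory imagines hundreds of thousands of these.
bank = [
    DnaSubroutine("thicker_coat", {"cold"}),
    DnaSubroutine("drought_metabolism", {"heat", "dry"}),
]
for sub in bank:
    sub.update(current_stresses={"cold"})
    print(sub.name, "expressed" if sub.expressed else "off")
```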
A word on micro mutations:
Micro mutations are classified as completely random errors in the duplication of genes. These are strongly evident and well studied. However, the orthodox view is that the only way for these micro mutations to avoid eventually destroying the function of the gene in question is for them to be field tested by natural selection. To put it another way, the whole organism has to die before reproducing to prevent one crucial micro mutation from being copied. I find this argument incomprehensible. It is as if the only way to avoid errors in programming the oxygen intake valve of the space shuttle is to launch it anyway, let it crash, and avoid using those blueprints again. I call it the crash, burn and learn concept. It might be alright for a virus with millions of launches every second, but I don't think it would be quite so good for the Emperor Penguin.
This leads me to believe that something other than natural selection is at play. There must be some sort of error correction or testing mechanism acting on individual DNA subroutines, and if there is, it could just as easily apply to non-expressed genes (or, much more likely, there is a system that expresses these but localises them for testing only, thus suppressing the evolved purpose of the DNA subroutine for that generation at least).
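Sticking with the programming analogy, the contrast can be sketched as follows. This is only an illustrative toy (the mutation model, the "works" check, and the genome structure are all invented); it shows per-subroutine screening rather than whole-organism "crash, burn and learn".

```python
# Toy contrast: test each subroutine in isolation instead of risking the organism.
import random

def mutate(subroutine):
    """Apply a random 'micro mutation' to one value of a subroutine."""
    copy = dict(subroutine)
    key = random.choice(list(copy))
    copy[key] += random.gauss(0, 0.1)
    return copy

def works(subroutine):
    """Stand-in for a local test of a single subroutine, e.g. expressing it
    only in 'laboratory' cells rather than in the whole organism."""
    return all(0.0 <= v <= 1.0 for v in subroutine.values())

genome = {"oxygen_valve": {"flow": 0.5}, "liver": {"rate": 0.7}}

# Per-subroutine screening: keep a mutation only if that part still passes
# its own test; the rest of the genome is never put at risk.
for name, sub in genome.items():
    candidate = mutate(sub)
    if works(candidate):
        genome[name] = candidate

print(genome)
```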
4 comments:
Very nicely disambiguated! This is now a valid theory. In theory you could probably make good progress towards testing it using genome information in publicly-available databases... if not now, in another decade or so.
The disambiguation and clarification have succeeded to the extent that I can only point to two things that seem inherently implausible:
(a) That the bank would consist of 'hundreds of thousands' of subroutines rather than a few clusters of linked subroutines.
(b) That the testing mechanism would not involve expression of the genes, if only in specific classes of cells (I am thinking pre-spermatozoa, whatever they are called).
That the bank would consist of 'hundreds of thousands' of subroutines rather than a few clusters of linked subroutines.
I am not sure whether your judgement is based on something you know about DNA that I don't, or whether hundreds of thousands simply doesn't fit into orthodox models. I can imagine subroutines ranging from "rate" genes that just affect certain organs, to ones that confer resistance or sensitivity to certain elements, and further along to subroutines that shut down some metabolically expensive artefact (a finger muscle, say). I think the sky is the limit to the number. I imagine that switched-off genes would look identical to switched-on ones until we discover what each of them does (which may be never).
That the testing mechanism would not involve expression of the genes, if only in specific classes of cells (I am thinking pre-spermatozoa, whatever they are called).
I accept that you must actually run the subroutine to check that it works. But in my mind there is just NO WAY that random micro mutations are allowed to hijack perfectly good subroutines. What if a micro mutation happened by chance to perfectly optimise your liver at the same time as another micro mutation made you look ugly? Complex systems have to be broken down into simple subroutines and individually tested. There is no other way!
As for the testing mechanism, I think the only practicable way to do this is still to express the genes, but this could be done in 'laboratory' cells localised in a small organ concerned with passing on genetic material. Apropos of which, you would think there would be strong evolutionary pressure to *not* carry around your genetic material in a place where it could conveniently be bitten off by shepherds. What teleological reasons have been postulated for the stupid way male mammals are constructed?
I think you may actually be on to something. Every good programmer knows that important subroutines need to be stress tested. In DNA terms, DNA subroutines need to be checked across a range of possible temperatures. Not only can these laboratory cells be cooled more conveniently, but the temperature variation there is much greater than it is internally (for warm-blooded creatures).
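In programming terms the analogy might look like this; the function and the acceptable range here are entirely hypothetical, just to show what "stress testing across temperatures" means.

```python
# Throwaway illustration of a temperature-sweep stress test.
def subroutine_output(temperature_c):
    """Hypothetical stand-in for whatever a DNA subroutine 'computes'."""
    return 1.0 / (1.0 + abs(temperature_c - 37.0) / 10.0)

# Sweep a wider range than internal body temperature would ever allow,
# the way a programmer stress tests edge cases.
for t in range(-10, 51, 10):
    assert 0.0 < subroutine_output(t) <= 1.0, f"fails at {t} °C"
print("subroutine passed the temperature sweep")
```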
The experimental evidence on the development of unexpressed stretches of 'junk' DNA strongly suggests that there are *no* "other, as yet unproven error-correcting mechanisms" to protect it from change.
From Wikipedia: junk DNA has stretches with conservation properties (they have stayed identical over many millions of years). This is where my hundreds of thousands of subroutines lie :).
And retrotransposons are probably what execute the required adaptive mutations.