A bit of background reading may be necessary here, because Alvin Plantinga’s Evolutionary Argument Against Naturalism (EAAN) is fairly technical and most people do not intuitively understand probability theory, especially Bayesian maths. Suffice it to say that in order for Plantinga’s argument to go through he must show both that humans almost always form true beliefs about the world [P(R)≈1] and that the probability of this happening on the joint hypotheses of metaphysical naturalism and evolutionism is low [P(R|E&N)≈0]. Alas, Plantinga fails to substantiate either of these claims in anything like a rigorous logical fashion. He more or less assumes the truth of the former premise and merely hand-waves his way to the latter. Whenever you see a brilliant logician such as Plantinga eliding the steps to his conclusion instead of outlining a tightly reasoned deductive argument, well, caveat emptor.
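To put those two claims side by side in symbols (my own schematic compression of the argument, not Plantinga’s exact formulation, where R is the proposition that our cognitive faculties are reliable, N is metaphysical naturalism, and E is the standard evolutionary account of our origins):

```latex
P(R) \approx 1 \quad \text{(our belief-forming faculties are reliable)}
\qquad\text{yet}\qquad
P(R \mid E \wedge N) \approx 0 \quad \text{(such reliability would be improbable given E\&N)}
```

The intended payoff is that anyone who accepts E&N thereby acquires a defeater for R, and hence for every belief R produces, including E&N itself; the whole argument therefore stands or falls on that low conditional probability.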
A couple of points must be made here. First, metaphysical materialists cannot assume P(R)≈1, since we believe that all of the (oddly pervasive) talk of gods, spirits, ghosts, magic, chakras, witches, faeries, etc. is so much bunk. People all around the world make up all sorts of wacky beliefs about disembodied minds and the imaginary forces emanating therefrom, and thus P(R) is evidently nowhere near unity. Moreover, since most religions (with a few interesting exceptions) assert that all other religions make up all sorts of untruths about the world, untruths that are integrated into their devotees' worldviews, it seems odd for any religious person to argue that humans almost always form true beliefs about the world. Finally, it should be evident from the overabundance of material at websites such as http://snopes.com/ and http://nizkor.org/features/fallacies/ that we humans are indeed quite prone to all manner of irrational thinking, not least of which is an inborn tendency to attribute agency where none exists. Daniel Dennett and Pascal Boyer (among others) have written extensively and convincingly on this latter point, and I commend their work to anyone interested in mapping out the bounds of human rationality.
Secondly, while the probability P(R|E&N) is nowhere near unity, neither is it nearly so low as to allow Plantinga's argument to go through. The crucial question here is whether we would expect naturalistic evolutionary mechanisms to select for true beliefs over false ones. This question is not nearly so simple as it sounds (or as Plantinga's treatment suggests), but it should be fairly obvious that it is generally far easier to program a neural network to solve problems of circumstantial adaptation by providing it with adaptive goals and good data than by providing it with maladaptive goals and bad data, as Plantinga suggests. Indeed, if E&N are both true, we should expect that adaptive goals (e.g. craving food, avoiding pain) came first and that the neurological wiring which allows for holding propositional beliefs (and hence the possibility of R) came along some time later.
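To make that intuition concrete, here is a minimal toy simulation (my own illustration, with made-up payoffs and accuracy figures, not anything Plantinga discusses): two foragers share the same adaptive goal of eating food and avoiding poison, but one forager's beliefs about which is which track reality far more often than the other's.

```python
import random

def lifetime_fitness(belief_accuracy, trials=10_000, seed=42):
    """Net 'calories' accumulated by a forager whose belief about each
    encountered item is correct with probability belief_accuracy."""
    rng = random.Random(seed)
    fitness = 0
    for _ in range(trials):
        is_food = rng.random() < 0.5                  # the world: half food, half poison
        correct = rng.random() < belief_accuracy      # does belief track reality this time?
        believes_food = is_food if correct else not is_food
        if believes_food:                             # shared adaptive goal: eat whatever seems like food
            fitness += 10 if is_food else -20         # nourishment vs. getting poisoned
    return fitness

print("mostly-true beliefs: ", lifetime_fitness(0.9))   # reliable believer thrives
print("mostly-false beliefs:", lifetime_fitness(0.3))   # unreliable believer starves (or worse)
```

The point is not that selection cares about truth as such; it is that once an organism has adaptive goals, belief-forming machinery that tracks the environment is by far the cheapest way to serve them, which is exactly why P(R|E&N) is nowhere near zero.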