As a supporter of the ideals of open access publishing, I believe this was an important finding. Clearly there are problems with the peer review process at these journals, and they need to be addressed.
What I find strange is that the conclusions drawn from this experiment fail basic logic. The experiment had no controls: the spoof article was never submitted to closed access journals, so it is impossible to conclude that the acceptance of poor scientific manuscripts is specific to open access journals. Nor was this stunt a test of the open access ideology or business model; it tested only the peer review process of these particular journals. The open access journals that accepted the article undoubtedly failed the most basic requirement of scientific publishing, but Science magazine has also mistakenly accepted flawed papers. I found a more balanced assessment of what this experiment actually shows at National Geographic.