Critique and discussion of research is extremely important to the scientific process, so I appreciate that goal of this commentary.
That being said, after reading both the original paper and the commentary, I find that a large portion of the commentary focuses on saying that China et al. didn't study the effects of aversion training... which is true, because the China et al. study never intended to study aversion training (the commentary does acknowledge this). The study was examining the differences between training methods for 'come' and 'sit'. While I'm not arguing that aversion training shouldn't be part of the discussion of e-collar bans (it's the only legal use of e-collar training here in Norway, actually), I'm a little skeptical of using an e-collar technique not explored in the original study as justification for the rest of the commentary. If there are valid issues with the original study, the existence of aversion training with e-collars should be a moot point, IMO, or a side note rather than the introduction and justification.
...if delivery of shock was not noticeable in the dogs' reactions (implied if the data extraction were truly conducted under blind conditions), then the authors' conclusion that "dog training with these devices causes unnecessary suffering" (p. 9) is inappropriate.
They are right that China et al. did not produce evidence in their study that e-collar training causes harm, and they have a point that the wording used was perhaps more emotionally charged than necessary. However, the passage quoted above fails to mention that China et al. based that claim not on their study alone - the data from which showed no significant advantage to e-collar training over other methods - but on multiple studies that explored potentially harmful effects of e-collar training, combined with their own conclusion that e-collar training showed no increased efficacy. Full quote from China et al. below:
Given the better target behavior response parameters associated with a reward-focused training programme, and the finding that the use of an E-collar did not create a greater deterrent for disobedience; we conclude that an E-collar is unnecessary for effective recall training. Given the additional potential risks to the animal's well-being associated with use of an E-collar (7), we conclude that dog training with these devices causes unnecessary suffering, due to the increased risk of a dog's well-being is compromised through their use, without good evidence of improved outcomes.
I just found it disingenuous to present the claim that way without acknowledging the researchers' reference to other studies on e-collar use. They may have had valid grounds to argue that China et al. appeared biased in their language, or that making claims about the ethics of e-collar use is outside the scope of the study, but it weakens their argument when they don't take the full context into account.
But the part about the industry-standard correction (the lowest level that the dog responds to) compromising training outcomes - that bothered me. Uncritically citing a study from the 1960s as their evidence that "punishment is most effective when it is delivered at the maximum acceptable level of intensity (Azrin, 1960)" had me raising an eyebrow. Yes, single-event learning with intense negative experiences is EXTREMELY powerful, but that doesn't make it ideal for most learning situations, nor does it account for the risks to the animal's mental well-being of using such extreme methods indiscriminately (the 1960 study on pigeons is available here, if anyone's curious: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1403961/pdf/jeabehav00201-0033.pdf ). As an e-collar trainer, @3GSD4IPO, I'd appreciate your input on this one, and on whether we need to throw out any studies that use less than maximum intensity with their e-collars because otherwise e-collar training is just ineffective(??). That doesn't seem right to me at all.
I also rolled my eyes hard at the 'frog in boiling water' analogy. If this were a pop-sci piece I'd be less annoyed, but a thoroughly debunked myth has no place in a scientific journal, even as allegory. I can accept that this is a personal pet peeve with no bearing on the validity of the arguments, though.
Now for the part I'm on board with: China et al. showed no evidence that the original unwanted behavior of these dogs was successfully eliminated long-term, and that absolutely bears more research. It was out of scope for the original study, but it's exactly the kind of data we need if we want to thoroughly compare and contrast various training methods. I can't speak to the discussion of the study design, statistics, error correction, etc., though - it's been far too long since I've actively engaged with these parts of research for anything I half-remember to be reliable. But I agree that designing good data sampling and correcting for possible errors is an extremely difficult part of these kinds of experiments, where so many factors can't ethically or practically be controlled by the researcher. This is why we should never rely on a single study to draw sweeping conclusions, especially with something as complex and hard to quantify as animal behavior and animal-human interaction. I wish they had gone into more detail about these flaws and their impact on the results - that's where the focus of their argument should have been, in my view.
I have no qualms with their conclusion; I just feel like a lot of their points are skewed and exaggerated to make their argument look stronger, and they only succeed in weakening the more valid criticisms. That doesn't mean the China et al. article is flawless and above reproach by any means, and I see it as just one piece of a large body of research that needs to be critically examined and weighed before drawing any overarching conclusions about the efficacy and ethics of training tools/techniques. In the end, I'm not convinced that China et al. SHOULDN'T be used, in conjunction with other research, as part of determining e-collar legislation.