Meta-analysis of energy psychology studies finds these methods have a real effect!

[Image: Hierarchy of Evidence, 2015]

(by Robert Schwarz, PsyD, DCEP)

Last week, another milestone in the scientific study of energy psychology was reached when a meta-analysis of EFT studies was published. The title of the article is “The efficacy of acupoint stimulation in the treatment of psychological distress: A meta-analysis.” A meta-analysis is an analysis of multiple studies. The authors, Gilomen and Lee (2015), looked at all of the major studies using energy psychology protocols with acupoint stimulation (i.e., EFT and TFT).

The bottom line: the study found that EFT as a treatment does have a moderate effect size when compared with controls. What makes this finding even more delicious from our point of view is that it was published in a very conservative journal, the Journal of Behavior Therapy and Experimental Psychiatry, and authored by two people who have no allegiance to energy methods.

I would go so far as to say that this is almost the equivalent of the French Winemakers Association publishing an article titled, “California wines are almost as good as French wines”.

The article is deeply wonky and goes into a great deal of explanation about meta-analysis. It raises numerous questions about energy psychology treatment, which I will address in a moment. For those of us in the field who think scientific discourse has an important place, this article is a major milestone. Why?

  1. It is a meta-analysis, which cannot even be considered until enough studies in an area have been completed. Energy Psychology has now reached that point!!
  2. It continues the trend of increasingly rigorous scientific study of EP protocols. My position has always been that if EFT or TFT outcomes were merely placebo or some other non-specific effect, the results would diminish as methodological rigor increased. This has not occurred in research on EP.
  3. The results of this meta-analysis continue to support the efficacy of energy psychology protocols. Feinstein (2012) reported that 15 out of 16 individual randomized controlled studies that reported effect size found large effect sizes. Gilomen and Lee’s meta-analysis found that the average effect size was moderate. The authors comment on this difference but do not really discuss it. The two reviews use different statistics: Feinstein used Cohen’s d, while Gilomen and Lee use Hedges’ g, a newer and more stringent measure. Most psychotherapy outcome research has used Cohen’s d. (A small sketch of the two calculations appears after this list.)
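For readers who want to see the distinction concretely, here is a minimal sketch of the two statistics. A caveat: conventions vary across sources; this follows the common formulation in which Hedges’ g is Cohen’s d multiplied by a small-sample correction factor, and the means, standard deviations, and sample sizes are entirely hypothetical.

```python
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Standardized mean difference using the pooled standard deviation."""
    s_pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / s_pooled

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Cohen's d scaled by the usual small-sample correction factor J."""
    d = cohens_d(m1, s1, n1, m2, s2, n2)
    j = 1 - 3 / (4 * (n1 + n2) - 9)  # J < 1, so g is pulled toward zero
    return j * d

# Hypothetical anxiety scores for a small trial, 12 participants per arm:
# treated group mean 18 (SD 6.0), control group mean 24 (SD 6.5)
print(cohens_d(18.0, 6.0, 12, 24.0, 6.5, 12))  # about -0.96 ("large")
print(hedges_g(18.0, 6.0, 12, 24.0, 6.5, 12))  # about -0.93, slightly smaller
```

With samples this small, the correction itself shaves only a few percent off the estimate; whether the choice of statistic alone explains the large-vs-moderate difference is taken up in the comments below.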

Gilomen and Lee (2015) raise important questions and concerns. They note that most of the active control groups are not full-fledged stand-alone treatments. This is true. However, this is beginning to change, as in Karatzias et al. (2011), who compared EFT to EMDR. They suggest that perhaps there should be a study to see if EP adds something to a proven treatment like prolonged exposure (PE). They are half right. The next major study should be a head-to-head comparison between EP and PE (how funny is that?) in the treatment of PTSD in veterans.

Gilomen and Lee (2015) also raise the issue that, to date, no study has been able to isolate the effect of tapping on acupuncture points from everything else in the protocol. This is a fair argument, up to a point. There is some evidence to suggest that acupoint stimulation does add therapeutic effect to therapy (Fox & Malinowski, 2013).

Furthermore, there is the curious fact that EP treatment appears to work for so many things. From where does this robustness come? It makes great theoretical sense that it comes from working with the energy system of the body.

It is curious that this criticism continues to be aimed only at EP. Has there been a call for dismantling studies of EMDR, CBT or ACT? Perhaps there has and I just don’t know about it. For example, does anyone really know the active ingredient of CBT?

All in all, Gilomen and Lee (2015) have done energy psychology a great service. ACEP has said all along that we welcome and support scientific investigation. It is no small matter that this meta-analysis was published in the Journal of Behavior Therapy and Experimental Psychiatry.

Vive la différence!

Robert Schwarz, PsyD, DCEP

Author, Tools for Transforming Trauma

ACEP Executive Director

Want to learn more about the latest science that supports energy psychology and explore new body-mind healing methods? Join us at the 18th International Energy Psychology Conference, June 2-5, 2016. Learn more.

To learn more about research on energy psychology techniques, visit energypsych.org/research.

References

Feinstein, D. (2012). Acupoint stimulation in treating psychological disorders: Evidence of efficacy. Review of General Psychology. Advance online publication. doi:10.1037/a0028602

Fox & Malinowski (2013). Improvement in study-related emotions in undergraduates following Emotional Freedom Techniques (EFT): A single-blind controlled study. Energy Psychology: Theory, Research, and Treatment, 5(2), 15–26.

Gilomen, S., & Lee, C. W. (2015). The efficacy of acupoint stimulation in the treatment of psychological distress: A meta-analysis. Journal of Behavior Therapy and Experimental Psychiatry, 48, 140–148.

Karatzias, T., Power, K., Brown, K., McGoldrick, T., Begum, M., Young, J., Loughran, P., Chouliara, Z., & Adams, S. (2011). A controlled comparison of the effectiveness and efficiency of two psychological therapies for posttraumatic stress disorder: Eye movement desensitization and reprocessing vs. emotional freedom techniques. Journal of Nervous & Mental Disease, 199(6), 372–378.

Comments

  1. Research results are consistent: energy psychology interventions significantly reduce symptoms. Universities need to allow research on energy psychology with large numbers of people, in order to provide services to them.

  2. David Feinstein says:

    Thank you for these balanced reflections about this important study, Bob. I just commented about it on Fred’s discussion group in reply to a post by John Freedom. I’m copying my comments here. Best wishes! David
    * * *
    Thank you, John, for these reflections on Gilomen and Lee’s meta-analysis of EP studies. The article is based on Gilomen’s 2013 Murdoch University master’s thesis of the same title: “The Efficacy of Acupoint Stimulation in the Treatment of Psychological Distress: A Meta-Analysis” (212 pages). Statistically, it is a highly sophisticated piece of work. In terms of the authors’ objectivity vs. pseudoskepticism, the second author, Christopher Lee (who, I assume, was Gilomen’s thesis advisor), identifies himself as a PTSD specialist and associates himself with EMDR, Dialectical Behavior Therapy, and Schema Therapy. In 2009 he gave a talk to the 10th European Conference of EMDR with the intriguing title “Understanding EMDR: A History of Practise Guiding Science.” There is every reason to believe that Gilomen and Lee’s article is a sincere and informed analysis of EP studies up to 2013. The study’s conclusion that the evidence does not conclusively demonstrate that acupoint tapping is the active ingredient in the demonstrated effects of EFT/TFT protocols is disappointing but probably fair as far as it goes. Had the authors been able to consider the dismantling studies that have been conducted since that time, they might have reached a different conclusion.

    I personally had to swallow hard when I saw that they used Hedges’ g to calculate the effect sizes instead of Cohen’s d (the measure I used, which many consider the standard), and particularly when they provided a reasonable justification for that choice. This difference in the choice of statistical method no doubt explains why my calculations showed strong effect sizes while theirs showed only moderate ones. Hedges’ g is more stringent when the number of subjects is small; this doesn’t necessarily mean that the effect sizes were not large, but the statistic doesn’t let you readily draw that conclusion without a relatively large N.

    This difference in the stringency of the statistic employed is actually emblematic of a larger issue. If you read the 18 studies that are the basis of Gilomen and Lee’s meta-analysis with a fair but critical eye, you will probably come away with a sense that a superior clinical method has been identified and that it has substantial backing. Beyond any rhetoric of enthusiastic authors that might have crept into the write-ups, the facts seem to suggest that a powerful clinical innovation has emerged. All the more so when you are a practitioner and see in the studies that others are also getting unusually strong outcomes that match your own. Yet a critical analysis of the data does not support that impression. Which do you believe?

    The question is not unique to Energy Psychology (it is just much worse with EP due to the paradigm clash), but evidence thresholds before acceptance are an issue with any new clinical innovation. When an intervention is first introduced into practice, there is no scientific evidence of its efficacy. If it is successful and begins to be used by others, social forces (such as the desire for credibility or insurance reimbursement) require that it begin to be scrutinized. The early studies are not usually as sophisticated as the studies conducted once the clinical community is taking it seriously, and an escalating set of standards can actually be applied. By the time a method has met the most rigorous of these (if any really do), it is no longer an innovative practice. It has been around a long time, has secured the attention of conventional funding sources and research institutions, and using it or not using it is a matter of personal taste, selecting from among various establishment-approved practices, hardly an innovative leap. Until then, the clinician has to choose how much evidence is personally required, and what kinds, before taking the leap.

    I tried to address the implications of this for EP in my rebuttal to Gary Bakker’s insistence that EP be held to extremely high research standards before being taken seriously, in the “research debate” special issue of the Energy Psychology journal (2012). In my rebuttal, I wrote:
    * * *
    Clinical innovation begins when a therapist, based on instinct, chance, or theoretical calculation, does something atypical with a client and it appears to have a beneficial effect. The novel operation is then repeated or applied with additional clients. If successes are observed, informal or formal reports about these early outcomes bring the method to the attention of other clinicians. Behavioral therapy and cognitive therapy, for instance, were first introduced in books presenting multiple case histories, not in peer-reviewed papers providing evidence from controlled studies (Mollon, 2007). Eventually, however, there is enough reason to believe that the approach merits systematic investigation. This might begin with standardized pre- and post-treatment assessments. If these findings are encouraging, the investigation is typically extended to larger numbers. Next, the more stringent requirements of RCTs, such as well-defined control conditions and randomization, are introduced. For some, this progression, if consistently showing positive outcomes, is an adequate demonstration that the approach has merit. But the bar can be raised ever higher.

    The ways of controlling for extraneous variables are, in fact, almost endless. Among the possible considerations for an RCT, beyond proper randomization and comparison conditions, are large sample sizes, sophisticated methods for defining treatment populations, use of structured diagnostic interviews or other measures that are more objective than self-reports, assessment tools that meet the most rigorous requirements of validity and reliability, assurance that treatment manuals have been strictly adhered to by those administering the therapy, application of ever more exacting statistical methods, scrutinizing the components of a treatment to assess their active and inactive ingredients and possible interactive effects, and controlling for any bias or allegiances of those doing the treatment or interpreting the data, etc.

    These escalating requirements almost ensure that the only settings for research that meet the highest standards are likely to be well-funded, established institutions, the very voices of conventional paradigms and prevailing economic interests. The scientific rationale for this trade-off is that it controls for extraneous variables. However, a point of diminishing returns is eventually reached wherein increasingly stringent research standards do not significantly increase the confidence with which a study’s findings can be interpreted. In fact, one of the fallacies in the thinking of those who adhere to exorbitant criteria in evaluating clinical innovation (the “pseudoskeptics”) is to assume that the possibility of extraneous influences in an experiment explains unexpected findings (that is, findings that don’t conform to the prevailing paradigm) so those findings can be discounted (Feinstein, 2009).

  3. Isn’t it lovely to have official confirmation of what we knew already: that energy psychology works.

  4. How many years do you think it will take for a Galileo-type paradigm shift?

    • Great question, Robert. The good news is that things are moving in the right direction, and at an accelerating pace, I think. Bob

  5. But does removing symptoms address the real cause beneath? Not in my experience. With various spiritual modalities, including EFT, I managed to bury things deeper into my body despite the removal of symptoms; the buried issues re-emerged later, which enabled me to deal with the real cause using esoteric healing.

  6. I love the reference to the California wine commentary. That shift signaled not merely the acceptance of California wines but the beginning of an oenophile’s paradigm shift that heralded a new wave of appreciation for the up-and-coming new kid on the block. A perfect analogy. Thank you, Bob, for this excellent synopsis!

  7. Hi Folks,

    Dr. Craig Weiner has also written a very thoughtful blog post on this topic. I recommend it.

    http://www.efttappingtraining.com/eft-research-paper/efficacy-of-acupoint-stimulation-treatment-of-psychological-distress-meta-analysis/

  8. Here is some great information from Dr. Amy Gaesser:

    Technical explanation regarding the use of Hedges’ g:
    Both Cohen’s d and Hedges’ g pool variances on the assumption of equal population variances, but g pools using n − 1 for each sample instead of n, which provides a better estimate, especially for smaller sample sizes. The bias is reduced using g. By bias, they are not referring to the researcher’s bias negatively influencing the analysis; rather, it is a bias naturally inherent in statistical analyses involving small sample sizes. The formula for Hedges’ g includes a mathematical adjustment that reduces this computational bias. There is still debate about when to use Cohen’s d versus Hedges’ g. Hedges’ g is starting to be used in research for studies with small sample sizes, and is not a unique application by Gilomen and Lee to their meta-analysis of EFT. (A compact statement of the formulas appears below.)
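As a hedged illustration of the adjustment described above (one common formulation; sources differ slightly in how the pooled standard deviation is defined):

$$
d = \frac{\bar{x}_1 - \bar{x}_2}{s_p}, \qquad
s_p = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}, \qquad
g \approx \left(1 - \frac{3}{4(n_1 + n_2) - 9}\right) d
$$

For two groups of 10, the correction factor is 1 − 3/71 ≈ 0.958, so g lands roughly 4% closer to zero than d; the factor approaches 1 as the samples grow.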

    For those wanting a quick elevator explanation of the Gilomen & Lee study’s use of Hedges’ g, one could say:
    Advances in statistical analysis suggest that Hedges’ g is a more accurate measure of effect size because it is less biased than Cohen’s d when small sample sizes are involved. The Gilomen and Lee meta-analysis provides excellent news for EFT research because it shows that, even using the more conservative Hedges’ g, EFT outcomes demonstrate a significant difference with a moderate effect size, similar to many mainstream interventions in present use. Bottom line: the Gilomen and Lee study indicates that EFT is demonstrating itself as an efficacious treatment, with results at least as good as present mainstream treatments.

    Amy H. Gaesser, PhD, NCC
    Assistant Professor, School Counseling
    Counseling and Development Program
    Department of Educational Studies
    Purdue University

    • Margaret Hux says:

      Re: Hedges’ g vs. Cohen’s d: I know that I’m responding to this much later than the original debate, so I’m not sure people will see my comments here. I have just obtained the full text and reviewed this interesting and important study. I think the issue of effect size is not clear, and it may be worth a question to the authors to clarify.

      I don’t believe that weighting studies by n − 1 rather than by n would downgrade a clinical effect from large to medium; it would have a very small impact on the effect size, so I don’t think this accounts for the difference between the two methods.

      Correct me if I have misinterpreted their full report, but they state that after 18 studies met the criteria for inclusion, adequate quality, etc., they identified two articles as being more than 2 standard deviations away from the mean observed effect and removed these as outliers. These two particularly large effect sizes (as Hedges’ g; see their Figure 2) were Church and Pina (2012), with a huge effect size, and Church et al. (2012), with an effect size just over the cutpoint. I am not sure that removal of outliers, when they are high-quality studies that meet all criteria, is standard expected practice.

      THEN they test for publication bias, a method that looks at the dispersion of all of the studies and assumes that, if no null papers were suppressed, there should be an even spread of papers above and below the mean effect size. This is shown in Figure 2. In this figure there are two papers far to the left, the two that were removed as outliers, and the method assumes there should be two papers far to the right, with standard treatment WAY more effective, so two papers of similar magnitude to those on the left (the “outliers”) are imputed.

      And am I correct in assuming that their final estimate (medium effect size) is then calculated with the two greatest effect-size studies removed as outliers, while also assuming that two highly negative papers must exist and adding them into the estimate? This would indeed downgrade the overall estimate, and it is a really incorrect method: either leave in the large positive studies and assume negative balancing studies must exist, or remove the outliers and do not include them in the check for publication bias. (A toy numerical illustration of this double-counting appears below.)
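To make the double-counting concern concrete, here is a minimal sketch using entirely invented effect sizes (not the study’s actual data); the point is the direction of the bias, not the particular numbers.

```python
import numpy as np

# Hypothetical Hedges' g values for 18 studies (positive = favors EFT).
# Invented numbers purely to illustrate the logic, not the real data.
g = np.array([0.4, 0.5, 0.6, 0.45, 0.55, 0.35, 0.7, 0.5, 0.6,
              0.4, 0.65, 0.55, 0.3, 0.5, 0.45, 0.6, 2.8, 2.0])

mean_all = g.mean()

# Step 1 (as described above): drop effects more than 2 SD from the mean
keep = np.abs(g - g.mean()) <= 2 * g.std()
trimmed, dropped = g[keep], g[~keep]

# Step 2: a trim-and-fill-style symmetry check then imputes mirror-image
# "missing" studies on the far side of the already-trimmed mean
mirrored = 2 * trimmed.mean() - dropped
double_counted = np.concatenate([trimmed, mirrored])

print(f"all 18 studies:          {mean_all:.2f}")               # ~0.72
print(f"outliers removed:        {trimmed.mean():.2f}")         # ~0.51
print(f"removed AND mirrored in: {double_counted.mean():.2f}")  # ~0.30
```

Under these invented numbers, the pooled estimate drops from about 0.72 to 0.51 after trimming, and to about 0.30 once the removed studies are also mirrored in: exactly the double penalty described above.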

      This seems worth a question to the authors to clarify. It seems to me this would account for the difference in the estimates, and perhaps the estimate should even be corrected or retracted?

      Comments??

      Best, Marg Hux.
      Health Economics Research, IMS Health, Toronto, Ontario

  9. John Freedom says:

    There has been some discussion re: the need for dismantling studies. EFT has faced healthy skepticism and a dose of criticism, as people wonder whether acupoint stimulation (or tapping) is really an important component of the therapy, or if results can be explained simply by its exposure and cognitive aspects rather than the tapping. In order to determine whether meridian tapping is indeed a valuable component of EFT’s success, Baker, Carrington, & Putilin (2009) recommended that researchers conduct “dismantling studies” to separate tapping from the cognitive and exposure portions of the protocol.

    In the first study that set out to explore the necessity of acupoint tapping, Waite and Holder (2003) compared three tapping conditions (EFT points, sham points, and a doll) to a non-tapping condition. However, the researchers inadvertently used EFT points, because they asked participants to tap with their fingertips, which contain meridian treatment points (Baker, Carrington & Putilin, 2009; Pasahow, 2010). Participants in all three tapping groups showed significant improvements; the non-tapping group did not. Waite and Holder concluded that EFT owed its success to distraction and desensitization. However, they did not take the fingertip acupoints into consideration in drawing this conclusion. Moreover, the assessments used were not valid or reliable, and the manualized form of EFT was not used. Perhaps because of these issues, the study is an outlier when compared to other EFT studies.

    Two studies, not specifically designed as dismantling studies, began to assess the components of EFT. Both compared EFT to a similar treatment that used diaphragmatic breathing instead of acupoint tapping. In 2003, Wells et al. conducted the first of these studies, with participants being treated for phobias; on measures of phobic response, results were superior for the EFT group. In 2010, Baker and Siegel conducted a partial replication of this study in order to control for expectancy effects and other non-specific variables. Assessors were blinded to treatment condition and a no-treatment control group was used. The results showed that EFT itself was responsible for the positive outcomes of the EFT group.

    In 2013, Fox conducted a dismantling study comparing EFT to a control condition that used the cognitive and exposure portions of EFT along with mindful breathing. The participants in this randomized controlled trial were university students. On most measures, the tapping group did significantly better than the control group.

    In 2015, Reynolds conducted a dismantling study. In this case, EFT was compared to a control condition which included the cognitive and exposure portions of EFT but had participants tap on sham locations (using the open right hand to tap on the left forearm). Participants were teachers from two demographically matched school districts in the same county. They were assessed for burnout risk using the Maslach Burnout Inventory (MBI; Maslach et al., 1996), which includes three measures of burnout. On all three measures, EFT was found to be superior to the sham tapping employed in the control group.

    In 2014, Rogers and Sears conducted a dismantling study using a convenience sample of 56 university students randomized into an EFT group and a control group. The control group used an identical protocol but with sham tapping points. A stress test was administered before and after a single tapping session. The group that had tapped on actual acupressure points exhibited a significantly greater reduction in stress.

    The Fox study was published in the Energy Psychology journal in November 2013. The Reynolds and the Rogers and Sears studies will be published in the May and November 2015 issues, respectively.

    John Freedom and Sarah Murphy

  10. The issue is not only whether such techniques work, but by how much they work, or can be expected to work.

    Ewing GW, Grakov IG. A Comparison of the Aims and Objectives of the Human Brain Project with Grakov’s Mathematical Model of the Autonomic Nervous System (Strannik technology). Submitted for Review 29th May 2015

    Ewing GW. A Framework for a Mathematical Model of the Autonomic Nervous System and Physiological Systems using the NeuroRegulation of Blood Glucose as an Example. J Comput Sci Syst Biol 2015; 8(2): 59-73.

    Ewing GW, Grakov IG. A Further Review of the Genetic and Phenotypic Nature of Diabetes Mellitus. Case Reports in Clinical Medicine 2013;2(9):538-553.

    Ewing GW. Virtual Scanning: a New Medical Paradigm? Journal of Computer Science and Systems Biology 2013;6:93-98.

    Ewing GW. The ‘Biology of Systems’ or the ‘Systems of Biology’: Looking at Diabetes from the Systemic Perspective. International Journal of Systems Biology 2013;4(1):45-56.

    Ewing GW, Grakov IG. Fashion or Science? How can orthodox biomedicine explain the body’s function and regulation? N.Am.J.Med.Sci. 2012;4(2):57-61.

    Ewing GW, Parvez SH. The influence of Pathologies and EEG frequencies upon sense perception and coordination in Developmental Dyslexia. A Unified Theory of Developmental Dyslexia. N.Am.J.Med.Sci. 2012;4(3):109-116.

    Ewing GW. A Theoretical Framework for Photosensitivity: Evidence of Systemic Regulation. Journal of Computer Science and System Biology 2009;2(6):287-297.

    There is much more about this extensive line of research on LinkedIn or ResearchGate.

    All medical techniques, contemporary and alternative, are based upon how the autonomic nervous system is regulated: how the brain receives sensory input, how this is compared with memories, how this influences the neural networks and the regulated stability of the autonomic nervous system and physiological systems, and how this is manifested as changes to cellular and molecular biology.

    Graham Ewing

