


Exploring Psychotherapy Scientifically and Experientially

James R. Iberg, Ph.D.
2009 McDaniel Avenue
Evanston, IL 60201-2124

The Mutual Need for Bias Control: Personal Knowledge and Science

A Study Design Which Offers a Good Compromise: the two-level factorial experiment

Identify the activities to be tested and the levels you wish to test

Satisfy yourself that you stand to gain something for your trouble

Choose your outcome variable(s)

Design the factorial experiment

Identify potential participants and get informed consent

Conduct the experiment and collect the data

Analyze your results

Conclusion: We can do research that is good science and experientially meaningful



In this paper I will describe a fundamental dilemma for psychotherapy research, and propose one way to deal constructively with that dilemma.

The Mutual Need for Bias Control

There is a basic underlying tension, perhaps even a conflict, for clinicians when it comes to research. One reason to do research is to learn about “how things work” in the sense of identifying causal relationships in general for the average person. This can have benefits for society, as evidenced by vaccines which prevent diseases that once killed thousands of people every year. I will refer to the search for principles which apply to people in general as the “traditional science” approach.

For the psychotherapist, however, traditional science often offers limited help, since the goal of therapy is to help the particular individual client, not the “average client.” The clinician must go beyond what helps on the average, to the specific help to which the real client actually responds productively. As Gendlin put it, “A client’s inwardly arising life-forward direction is more precise and more finely organized than any generalized values. The steps that come in focusing and experiential therapy are much more finely differentiated than generalizations” (Gendlin, 1996, p.264). I will refer to the search for specific steps and understandings tailored to the individual as the “experiential approach.”

A dilemma for clinicians is this: as one becomes increasingly adept at helping particular clients, skepticism grows about the value of the traditional science approach's general knowledge, which almost always fits the specific individual less well. Yet clinicians stand to benefit from the continuing discovery of the relative benefits and by-products of various psychotherapeutic procedures. General knowledge promises to bring the clinician more efficiently to the point where the therapeutic relationship must take the individual beyond the generalities.

A parallel dilemma may affect traditional scientists. Some are highly skeptical of the individual’s ability to achieve correct understandings through introspection. The basis for this skepticism rests on real limits to the experiential method: an individual suffering from vitamin deficiency might never (or not in time) discover the dietary source of the problem through introspection. Similarly, the cause of the bubonic plague was long misunderstood. Throngs of individuals, many of whom were no doubt highly introspective and sensitive to their inner experience, failed to correctly identify the cause of the disease. Even physical pain may elude correct introspective analysis because of (among other things) the phenomenon of “referred pain,” in which the subjective experience of pain is located somewhere other than its actual origin (e.g., angina pain felt in the arm). Thus those steeped in the scientific approach and its power to overcome subjective limitations may hold a bias against the experiential approach. Yet the scientist ought to welcome means to deal with “side effects” and other ways that scientific findings do not produce the promised benefit for certain individuals. Doing yet another study and relying on its generalizations is far too inefficient for the individuals suffering side effects or failing to achieve general benefits.

The two approaches are different and each has its own advantages and disadvantages (see Table 1).

Table 1. Two Approaches to Discovery: Respective Strengths and Shortcomings

Traditional Science Approach

Strengths:

•  offers well developed analytical methods and rules of evidence

•  builds a cumulative knowledge base about what works for people in general

•  wins credibility with a broad audience, beyond those personally familiar with the subject

•  protects against distortions based on individual peculiarities

Shortcomings:

•  rules of evidence and methods are obscure and arcane to most people (and thus subject to interpersonal power dynamics, because only some have the requisite expertise)

•  individuals studied are reduced, at least in some ways, to categories

•  individuals for whom things work differently than the typical person may be given wrong advice

Experiential Approach

Strengths:

•  highly specific individualized knowledge is gained

•  individuals constantly provide corrective feedback to any generalizations

•  respects individual autonomy, authority, and creativity

•  people are on equal footing as “researchers”

Shortcomings:

•  personal truths may be mistaken for general principles when they are not

•  the details of experiencing may lack the larger contexts necessary to discover operative general rules

•  requires firsthand experience to know – generalizing to others must be done cautiously

Progress in helping people will likely be promoted if we capitalize on the respective advantages and minimize the respective disadvantages of the two approaches, rather than identifying with one or the other.

What I propose is the development of research projects which utilize methods of traditional science and yet remain experientially meaningful for the people involved. To accomplish
this, we need bias controls for each approach.

A Study Design Which Offers a Good Compromise

The bias control for clinicians involves tolerating procedures necessary for research which have little if any immediate experiential benefit (although I believe they need not feel “wrong” experientially). The bias control for scientists involves tolerating sometimes less than ideal research designs and methods in order to stay closer to experientially meaningful procedures.

For several years I have been using a research design with groups of graduate students and with my own clients. I would like to describe this design in hopes of intriguing clinicians into doing research on their work. The design is the “two-level factorial experiment,” which has scientific respectability (Box, Hunter, & Hunter, 1978). It can be applied to study the actual practices of those who use it, whereas much scientific research involves “laboratory” studies or “clinical trials” totally separate from the everyday processes meant to be informed by the research (Seligman, 1995). For example, in my Client Centered and Experiential Psychotherapy classes, during the eleven week trimester, I design a study involving the therapist activities students are learning, and we gather client questionnaires after each session. By the end of the trimester, they get the results of the study so that they can learn from data provided by their own clients about the effects of some of their therapist activities – direct and relatively immediate feedback on their own experiences, rather than reading about a study done years ago with specially trained therapists working somewhere else with randomly assigned clients.

Two-level factorial experiments can be applied in individualized ways in a variety of settings. In about eight to twelve weeks of weekly sessions, one can gather the data necessary to begin to discover the effects of specified activities on participants’ experiences. With coordination, several small studies done this way can generate findings with generalizable meaning, relying on the principle of replication rather than clinical trials. What is involved in doing a factorial study?

Identify the activities to be tested and the levels you wish to test

The typical procedure for therapists using this design is to start by asking “what do I do often with clients that is a key ingredient to my success?” Identify two or three specific therapist activities for which you would like to document effects. Simplest is to identify something that you would feel comfortable doing in some sessions and not doing in others. Suppose a therapist identified using a guided relaxation exercise as a key ingredient to more successful sessions. Factorial studies suit this well, because in some sessions during the study, guided relaxation can be used, while in other sessions it is not used (assuming this feels okay to the therapist and clients give informed consent). Suppose another key activity is guiding the client to “clear a space” (Gendlin, 1981). Then you have two factors for an experiment.¹

I want to emphasize the flexibility and adaptability of factorial studies. What you study can be the things that you do, about which you are really interested in learning.

Satisfy yourself that you stand to gain something for your trouble

The effects of the key things you do may seem obvious: they almost have to be, at least to you, otherwise you wouldn’t think they were key things. After all, good clinicians are adept at paying attention to client responses to what the therapist does. Why bother with an experiment if you already know the good effects of what you do? First, you can expand your understanding of the effects of your activities. An experiment will enable you to detect things more subtle than the effects you already know about.

To make an analogy, it might be obvious, if you pick two apples, one from each of two trees, which apple is bigger. You don’t need an experiment to confirm that. However, it might be difficult to tell, by “eyeballing,” whether one of the two trees would usually yield bigger apples. With an experiment, you would be able to detect this with a high level of confidence. When it comes to picking an apple for lunch, you might be satisfied to choose based on the actual size of the apples you have in your hands. But if you had to decide which tree your apple supply would come from for the whole season, knowing which one produces bigger apples on average would tell you which tree to choose so that you get bigger (or smaller) apples on more days than you would by choosing the other tree.

By doing an experiment, you might discover that there are benefits in addition to the ones you already knew about. You might also discover unintended and unwanted effects of which you were previously unaware. Such discoveries could be valuable for improving and refining your activities. You might fashion a variation of your key activity which preserves the benefits, but reduces or eliminates an unwanted effect. Your attentiveness to and awareness of the effects of what you do are likely to be sharpened.

Second, you can help others learn about the effective things you do. Even the activity-effect connections which are obvious to you may not be obvious to other clinicians. By the careful demonstration of activity-effect relationships accomplished with good studies, what you do may become worth learning for someone who doesn’t already do it.

Third, you can strengthen the public reputation of your approach. Part of what can build public awareness of and respect for an approach is numerous studies which document the “efficacy” of its methods (Task Force on Promotion and Dissemination of Psychological Procedures, 1995; Chambless and Hollon, 1998; DeRubeis and Crits-Christoph, 1998).

Choose your outcome variable(s)

You next specify exactly what will show the effects achieved by your activities.

You might choose client experiences of sessions, as indicated on a questionnaire after the session, or as indicated by ratings of the session on a psychological measure such as The Experiencing Scale (Klein, Mathieu-Coughlan, & Kiesler, 1986), or as indicated by therapist ratings.

Generally, more “objective” measures are desirable to help protect against the possibility that your hopes for certain results could influence your assessment of the effects. However, if the measure that seems most meaningful to you would be your own ratings, I say use them, and worry later about finding additional measures to strengthen your case. With small scale factorial studies, you don’t have to get everything perfect in the first one: you can afford to do more than one study.

Design the factorial experiment

Generate the activity combinations. Next, create all possible combinations of the activities you wish to test. Consider a study I designed for a small group of experienced focusers at the 1997 Focusing International Conference in Pforzheim, Germany. Ahead of time, via e-mail, participants considered several possible activities to study. We settled on two activities: 1) whether or not, near the end of the session, the listener/guide gave an empathic response which summarized the highlights of the focuser’s process, and 2) whether or not, near the end of the session, the listener/guide suggested that the focuser ask her/his felt sense if there was a next action step which felt right. The four possible combinations of these two activities are shown in Table 2.

Table 2. Combinations of two factors, each at two levels

Number   Overall reflection given   Suggest asking for an action step

1        no                         no
2        yes                        no
3        no                         yes
4        yes                        yes


Each row in Table 2 defines one of the four distinct combinations of these two listener/guide activities. Each distinct combination was used in one session by each listener, so each listener did four different kinds of session, each with a different focuser. The order in which these four distinct session types were conducted was randomized, to control for things which might happen over time whose effects might otherwise be confounded with the effects of the therapist activities.
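The mechanics described above, enumerating all 2^k activity combinations and randomizing the order in which the session types are run, can be sketched in a few lines of Python. The factor names below are illustrative labels for the Table 2 factors, not part of the original study materials:

```python
import itertools
import random

# Two factors, each at two levels, following Table 2 (names are illustrative).
factors = {
    "overall_reflection": ["no", "yes"],
    "suggest_action_step": ["no", "yes"],
}

# All 2^k distinct activity combinations -- the rows of Table 2.
# Adding a third two-level factor would double this to eight combinations.
combinations = list(itertools.product(*factors.values()))

# Randomize the run order, so that effects of time (fatigue, familiarity,
# life events) are not confounded with the effects of the activities.
run_order = combinations[:]
random.shuffle(run_order)

for i, combo in enumerate(run_order, start=1):
    settings = ", ".join(f"{name}={level}" for name, level in zip(factors, combo))
    print(f"Session {i}: {settings}")
```

Printing the randomized schedule ahead of time, one line per session, is one way to prepare the kind of pre-session materials discussed later in the paper.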

Determine how many people you need to use and how many sessions it will take. This depends on the breadth of the claims you wish to make about the representativeness of your results, how many activities you wish to test, and the magnitude of the effects you want to be able to detect.

You can do meaningful studies with only one client: then you learn about the effects of your activities for that particular client. If you wanted to do the experiment laid out in Table 2 with one client, in twelve sessions (three cycles of the basic experiment) you would likely be able to identify some effects which are “statistically significant” in the sense that they are not likely to be chance occurrences, but are real differences in how these activities affect the experience of that client. More cycles through the experiment (for two therapist activities the number of sessions increases by four with each cycle) would enable you to detect even more subtle effects (smaller in magnitude) for that client.

If you were to do the same experiment with three different clients, the design would likely generate significant findings about the effects in as few as four sessions for each client, and almost certainly in eight. In this case, findings are more generalizable to your typical client, since effects detected are not based on only one client’s experience.

In the sample study (Table 2), five focusing guides each completed the four sessions, which gives us a good basis for claiming the effects we found should generalize to other similar focusing guides. The guides also worked with different focusers in each session (with one exception), so generalizability to other focusers should be good. In this arrangement, only one session each was required from each of 20 focusers to complete the study.

We could also learn about more than two therapist activities in one experiment by increasing the number of sessions. To add a third therapist activity in our example would generate eight distinct session combinations (once through the combinations in Table 2 for the “lower” level of the third variable, and once through for the “higher” level of the third variable). Testing more than three therapist activities in one study is theoretically possible, but could easily become too confusing to be practical in implementation.

Identify potential participants and get informed consent

You probably know people who would be happy to participate in your study. These would be the people to ask first, in my view. Participation should be voluntary, and there should be a written “informed consent” form which people read and sign about the nature of the study and what they can expect by participating. You need not tell them specifically what you are testing, but you should tell them how participation may affect sessions and what they get out of them in a general sense. This form should also make it clear that the person is free to withdraw from the study at any time and what, if any, consequences ensue for doing so. One thing to say which will be true if you design an experiment in the spirit I intend is that the activities tested are part of your usual repertoire: you will not be doing anything extreme or unusual, but you will be doing things more systematically than usual, so that you can tell which effects are attributable to which activities. If you think there are distinct benefits or risks with participation, you should mention those.

Conduct the experiment and collect the data

It pays off to have things well organized in advance. You can prepare materials so that the activity combinations that go with each session, in the order in which the sessions are to be conducted, are clearly printed on paper and arranged to minimize confusion and mistakes in conducting the experiment. I use a “check-list” for therapists before each session to help avoid oversights and mistakes, since this is a more structured way of proceeding than usual, and there are many details to keep straight. Once materials are organized, the actual conducting of the sessions isn’t difficult, and your mind can be free to do your best clinical application of the activities.

I recommend recording the data from outcome measures on computer as it comes in, rather than waiting to record it all at the end, or writing it down by hand and then having to enter it into the computer later. By recording as you go, you eliminate re-work, the job does not get too big, and you can check for errors along the way. With post-session questionnaires, you also get immediate feedback about your sessions, which study therapists have found interesting and clinically useful.

Analyze your results

I would like to illustrate typical results by summarizing two recent factorial studies.

First, the study outlined in Table 2. The outcome measure which showed effects attributable to the two activities tested was an index variable called “General Therapeutic Value.” This index variable was suggested by a Principal Components factor analysis of over two hundred session reports collected over a three-year period. The first factor had four questions which loaded heavily on it. The index variable used for analysis of the experiment was created by summing the standardized scores on those four questionnaire items (Cohen, 1990). The items were 1) an overall rating of the session on a seven point scale, from “very poor” to “perfect,” 2) the focuser’s rating of how well the listener understood him/her, ranging from “misunderstood” to “exactly” on a five point scale, 3) a rating of how helpful the listener was, ranging from “not at all helpful” to “completely helpful” on a six point scale, and 4) a rating of how this session compared to all other therapeutic conversations the person has ever had, on a seven point scale ranging from “Terrible” to “Superlative.” Thus, if the focuser felt very well understood by the listener, felt the listener was very helpful, made a high rating of the overall quality of the session, and indicated that the session compared very favorably to all other “therapeutic interactions,” he/she would receive a high score for the “General Therapeutic Value” of the session.
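The construction of the index variable, summing standardized (z-scored) item responses, can be sketched as follows. The ratings below are invented for illustration only; they are not data from the study:

```python
import numpy as np

# Hypothetical post-session ratings: each row is one session, and the four
# columns stand for the four questionnaire items (overall rating, felt
# understood, listener helpfulness, comparison to other conversations).
items = np.array([
    [6, 4, 5, 6],
    [3, 2, 3, 2],
    [5, 5, 4, 5],
    [4, 3, 3, 4],
], dtype=float)

# Standardize each item (z-score by column), then sum across the items
# to form the index variable, as in Cohen (1990). Standardizing first
# keeps items with wider rating scales from dominating the sum.
z = (items - items.mean(axis=0)) / items.std(axis=0)
general_therapeutic_value = z.sum(axis=1)
```

By construction the index has mean zero across sessions, so positive scores mark sessions rated above this group's average on the four items taken together.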

The experiment showed a significant two way interaction (p < .015) between the two guiding activities tested (see Table 2). Figure 1 illustrates this effect graphically.

The effects of one of these guiding activities depend on the presence or absence of the other guiding activity. If a summary reflection is given, suggesting to ask if there is an action step gets higher General Therapeutic Value scores than omitting this suggestion. However, if no summary reflection is given, suggesting to ask if there is an action step gets lower General Therapeutic Value scores than omitting the suggestion. The two combinations of either using both a summary reflection and an action step, or using neither of these guiding activities get better ratings than the two combinations in which either activity is used without the other one (a more extensive report on this study is available from the author).
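A two-way interaction of this kind can be read off the four cell means of the 2x2 design: compute the simple effect of one activity at each level of the other, and take the difference. The cell means below are hypothetical numbers chosen only to mirror the direction of the reported effect, not the study's actual values:

```python
import numpy as np

# Hypothetical mean General Therapeutic Value per cell of the 2x2 design:
# rows = overall reflection (no, yes); cols = action-step suggestion (no, yes).
cell_means = np.array([
    [0.4, -0.5],   # no reflection: the suggestion lowers the score
    [-0.3, 0.6],   # reflection given: the suggestion raises the score
])

# Simple effect of suggesting an action step, at each level of reflection.
effect_without_reflection = cell_means[0, 1] - cell_means[0, 0]
effect_with_reflection = cell_means[1, 1] - cell_means[1, 0]

# The interaction is the difference between the simple effects: a value far
# from zero means the effect of one activity depends on the other activity.
interaction = effect_with_reflection - effect_without_reflection
```

Whether such a contrast is statistically significant would of course be judged against its standard error (e.g., via analysis of variance), which this sketch omits.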

In a recent study with a group of eleven students, we studied two different therapist activities. In this case, each student had two practice clients in the study, each of whom experienced all four activity combinations.

The study involved the four kinds of sessions described in Table 3, which were each done with each client in a random order. Thus each client had four sessions and each therapist had eight (two of each activity combination).

Table 3. Winter 1997 Student Factorial Experiment

Number   Focusing questions asked   Average frequency of empathy

1        once                       every 1-2 minutes (fewer)
2        twice                      every 1-2 minutes (fewer)
3        once                       every 30 seconds (many)
4        twice                      every 30 seconds (many)


The students had either one or two opportunities to ask focusing questions (Gendlin, 1981, 1996) in the sessions: either only once, at about fifteen minutes into the session, or twice, first at fifteen minutes and again at about thirty minutes into the session.

The frequency of empathic responses was either “fewer,” about once every minute or two, or “many,” about once every 30 seconds. These frequencies refer to the average frequencies that the students were to hold to throughout the session, with the understanding that there would be variability in the frequency over time depending on what was actually happening. Students each had a clock which they placed next to or behind the client, so that they could have it in view as an aid to approximating each frequency of response. The clock also was useful for determining the approximate time to ask the focusing questions, with the exact time determined by student judgment.

Clients and students each filled out a post-session questionnaire after each of the four sessions, to measure the effects of these therapist activities. Principal Components Factor analysis of these questionnaires yielded several relatively independent “client rating outcomes” and “therapist rating outcomes.” For example, the first client rating factor was made up primarily of five questions: 1) an overall rating of the session on a seven point scale, from “very poor” to “perfect,” 2) a rating of the extent to which the “doorway” to feelings opened in the session, on a five point scale from “Not at all” to “Extensively. Deep feelings opened up and moved me in unexpected ways,” 3) a rating of the extent to which the client felt able to talk about what was valuable to discuss, on a six point scale from “Not at all” to “Completely,” 4) a rating of how this session compared to all other therapeutic conversations the person has ever had, on a seven point scale ranging from “Terrible” to “Superlative,” 5) a rating of the most intense emotion felt in the session, ranging on a five point scale from “not at all intense” to “extremely intense.”

The outcome variable is a weighted average of the responses on these questions, called “Useful Feeling Developments,” a name meant to capture the meaning of these five items taken together. This variable is scaled to have a mean of zero and a standard deviation of one. Thus positive scores indicate more Useful Feeling Developments and negative scores indicate fewer, with zero approximating the average level for the whole group of clients.
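A weighted composite rescaled to mean zero and standard deviation one can be sketched as below. Both the responses and the item weights are invented for illustration; the study's actual weights came from the factor analysis:

```python
import numpy as np

# Hypothetical client responses on the five items (rows = sessions).
responses = np.array([
    [6, 4, 5, 6, 4],
    [3, 2, 3, 2, 2],
    [5, 5, 4, 5, 3],
    [4, 3, 3, 4, 5],
], dtype=float)

# Illustrative weights for the five items (e.g., from factor loadings).
weights = np.array([0.30, 0.25, 0.20, 0.15, 0.10])

# Weighted average per session, then rescaled so the group mean is 0 and
# the standard deviation is 1: positive scores then mean more Useful
# Feeling Developments than the group average, negative scores fewer.
raw = responses @ weights
useful_feeling_developments = (raw - raw.mean()) / raw.std()
```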

The data analysis² revealed a two-way interaction effect of the therapist activities on Useful Feeling Developments (p < .02). That is to say, the level of Useful Feeling Developments depended on the combined effects of the two therapist activities. Figure 2 illustrates this result. The higher levels of Useful Feeling Developments occurred either when focusing questions were asked twice along with fewer empathic responses, or when focusing questions were asked only once along with many empathic responses.


Thus we learned that the effects of Focusing guiding and frequency of empathy depend on each other. The better results (as indicated on this outcome) occurred for sessions in which there were either two focusing questions with an empathic response about once a minute, or one focusing question with more frequent empathic responding, about once every 30 seconds. The lower frequency of empathy is enhanced by a second focusing question. On the other hand, with a 30 second average empathic response time (high frequency), a second focusing question may be “too much” activity from the therapist, and Useful Feeling Developments drop off. Either using one focusing question with empathic responses about every thirty seconds, or two focusing questions with the empathic responses about once a minute, would be expected to do about equally well in terms of Useful Feeling Developments for student therapists like those involved here.

Two other results are pertinent and help clarify a basis on which one might differentiate between one or the other of these two combinations of empathic frequency and focusing questions. The first is another client outcome rating factor, called “Directedness.” It is made up of four client-rated items: 1) the mark on a seven point “ruler” for describing the tone of the session, extending from “Friendly, ‘getting to know you,’” to “Evaluative;” 2) the mark on a seven point ruler extending from “I had a sense of independence” to “I had a sense of being guided or controlled;” 3) the mark on a seven point ruler to describe the focus of the talk as relatively more “interpersonal” or more “task-oriented;” and 4) the rating of the relative influences of the therapist and the client on the course of the session, ranging on a five point scale from 20% therapist/80% client to 80% therapist/20% client.

There was a simple main effect for the frequency of empathy on Directedness. With more frequent empathic responses (every 30 seconds), clients’ ratings indicated higher levels of Directedness (p < .01) than in the sessions with empathic responses once every minute or so. The level of focusing questions had no effect on Directedness.
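In contrast to the interactions above, a main effect shows up simply as a difference in group means between the two levels of one factor, averaged over the other factor. A minimal sketch with hypothetical Directedness scores, chosen only to run in the reported direction (higher Directedness with 30-second empathy):

```python
import numpy as np

# Hypothetical Directedness scores, one per session, tagged with the
# empathy frequency used in that session ("30s" = every 30 seconds,
# "1-2min" = every minute or two). Values are illustrative only.
frequency = np.array(["30s", "1-2min", "30s", "1-2min", "30s", "1-2min"])
directedness = np.array([0.8, -0.4, 0.6, -0.2, 0.7, -0.6])

# Main effect of empathy frequency: difference between the two group means.
main_effect = (directedness[frequency == "30s"].mean()
               - directedness[frequency == "1-2min"].mean())
```

Again, judging whether such a difference is significant at a given p-level requires an error estimate from the full analysis of variance, which this fragment leaves out.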

One of the therapist rating variables also showed an effect for frequency of empathy. I called it “Experiencing Quality,” as it is made up mainly of the following four items from the therapists’ questionnaires: 1) the mark on a ten point scale to describe the “client’s explication of the complexity of their situation, both internal and external,” which ranged from “minimal” to “full articulation;” 2) the mark on a ten point scale to describe the extent to which the client experienced the formation and opening up of immediate, viscerally felt feelings, ranging from “no occurrences of bodily feeling” to “several bodily-feelings formed and most opened;” 3) the mark on a 3 point scale to assess client congruence, ranging from “not at all: a lot seemed to be going on which the client seemed unaware of or unable to express,” to “a great deal: seemed highly self-aware and able to express complicated experiencing;” and 4) the mark on a 3 point scale to assess the extent to which the client seemed anchored in his/her “ok” place, and able to work on feelings from there, ranging from “not very much: client seemed identified with some feelings and at odds with others,” to “very much: seemed accepting of wide ranging feelings.”

There was a simple main effect for frequency of empathic response on therapist ratings of Experiencing Quality. The sessions with the higher frequency of empathic responses were the ones which got higher therapist-ratings of Experiencing Quality (p < .02).

Thus which combination of focusing questions and frequency of empathic response to recommend depends on values. If you feel strongly that non-directiveness is to be preserved, these results suggest that two focusing questions combined with empathic responses about once a minute may be the better choice. If you feel that achieving higher therapist ratings of Experiencing Quality is more important, then one focusing question with an empathic response about every thirty seconds may be the better choice.

Note the counter-intuitive finding that the higher Useful Feeling Development sessions with better Experiencing Quality are those with the lower level of focusing questions. Focusing Oriented therapists might think they would produce deeper experiencing by asking more focusing questions. But this hearkens back to wisdom voiced by many experienced focusing teachers regarding the fundamental importance of establishing a solid foundation in simple listening before proceeding to guiding (e.g., in parts of the discussion in the Focusing Discussion group on the Internet in 1997; and Gendlin, 1981, 1996).

Also counter-intuitive is the finding that the higher Useful Feeling Development sessions rated by the clients as less Directed were those with the lower frequency of empathic response and the higher level of focusing questions. The latter finding is inconsistent with the “classical” client-centered notion that non-directiveness can best be preserved by doing nothing but empathic responding, and lots of it.

If you refer back to Table 3, you will see that the findings in any case recommend session types 2 or 3 over types 1 or 4. Session type 1 will produce lower levels of Useful Feeling Developments and lower Experiencing Quality. Session type 4 will produce lower levels of Useful Feeling Developments and higher Directedness ratings.

Getting somewhat counter-intuitive results is, I believe, a good sign. We can, with studies like this, further differentiate how we understand the complex interaction of a therapeutic encounter. We have evidence here that empathy and focusing questions are interactive in their effects, and we have specified some of how that interactivity looks. Keep in mind that this experiment tested a limited set of the possible combinations of focusing interventions and empathic responding. In particular, both levels of empathic responding are actually quite frequent. An empathic response every minute is pretty close tracking and checking out of the therapist’s understanding. Some experienced therapists respond much less frequently than this (in one study conducted by Post-Pinchek, 1997, an experienced therapist first established her “baseline” empathic response rate to help decide what levels to test, and it was about one response every five minutes. This is a different region of the “activity space” than what is reported on here, and we wouldn’t expect the same results). Other studies have documented advantages to a relatively high frequency of empathic response for relatively inexperienced therapists with volunteer clients (Iberg, 1991, 1996).


Conclusion: We can do research that is good science and experientially meaningful

Scientific research that is experientially sound for participants is possible. I have tried to show how factorial experiments can be used to accomplish this. I am fascinated by the potential of such studies to realize advantages from both the scientific approach and the experiential approach. Both "pure scientists" and "pure clinicians" have to accommodate a bit to the other's perspective to make this possible.

Therapists may be concerned that doing such a study constrains the activity of the therapist too much. Naturally, in some ways constraints are undesirable. For example, it is generally preferable, as the therapist, to give a focusing question when your felt sense tells you to give one, not according to the clock. But by being more systematic for, say, four sessions in the studies reported above, we learned something about how things work and interact with each other in general, and the results challenge some preconceptions.

Tolerating the constraints involved in a study is analogous to taking lessons at something you already do well. When you first give yourself over to your teacher, your behavior is constrained by the teacher’s instructions, which help you become aware of how things have been functioning implicitly. Temporarily, your performance may suffer. But this temporary loss is well worth it if the end result is that you improve aspects of your form so that your new behavior consistently outperforms your former behavior. Gendlin (1993) has emphasized the importance of "restored implicit governing": if we gain new awareness but don’t follow through to changed action, the exercise remains only "academic." Similarly, if doing a study results only in a complicated paper in a scientific journal, that exercise would fail to meet the experiential criteria I am advocating. To meet this standard, participants must experience some change which has action implications for their own behavior.

A student’s comments from the study just reported exemplify such action implications:

I did not find a frequency of response that I preferred. The two practice clients that I worked with were each compatible with a different frequency of responses. For me, this was a beneficial situation. The fact that the two clients differed in the number of responses they wanted to receive in a given time period helped me to feel as if I had experience with … both frequencies as being effective and ineffective. Working with these clients, I felt I learned to watch them in order to determine which frequency they found to be more comfortable. I also learned to be comfortable using both the high and low frequency responses myself … I think that it will be beneficial to me in the future that I feel comfortable with both response times and I am pleased that I had the experience with both high and low frequencies in this class.

When participants (therapists and clients) expand their personal repertoires as the result of participating in a scientific study, the benefits to the advocate of the experiential approach can outweigh the costs of tolerating the structuring required to do science.

Scientists, on the other hand, may worry that small studies like this, done without carefully standardized procedures and random assignment of subjects, make meaningful generalization impossible. The accommodation required of the scientist is to let go of the ideal of the definitive, methodologically airtight study (at least for some of the studies that are done). I propose a patient approach in which we come to broad generalizations only over time, as certain results keep recurring across studies.

This "many small studies" approach to conducting research, although it may appear decentralized at best and disorganized at worst, is consistent with several of the recommendations made by Gendlin (1986) for what should come after traditional psychotherapy research. Identifying key therapist activities is one way of "defining ongoing psychotherapy processes." We can test activities for their effects, rather than testing methods of therapy against each other (p.131). The spirit of doing factorial studies in various places is consistent with Gendlin’s recommendation to explore, then verify (p.133). Gendlin encouraged researchers, as I do here, to define outcome variables that the investigator really believes will catch the differences that your therapist activity makes. You can do factorial studies in the settings in which you actually work: one study described here was with participants at a conference, not in the traditional "office" setting (Gendlin’s recommendation 15, p.134).

Small-scale, low-budget studies make it possible to listen to participants' experiences of what did and did not feel right about how the study was set up, and to put these learnings, along with the scientific findings, to work in the design of the next study, which can be done sooner rather than years later.

Scientists (or the scientists in us) stand to discover that this smaller scale of research involves much less expense, can be fun and interesting to do, and that there are definite satisfactions in research activities which produce immediate benefits and learnings for the people kind enough to participate. It is possible to participate in this kind of research not only "for the good of others," but also for direct personal benefits. Participants are likely to hold the scientist in even higher regard when this is the result.

On a broader level, regarding the credibility of psychotherapy, there is a continuing challenge to establish the "efficacy" (Task Force on Promotion and Dissemination of Psychological Procedures, 1995) of psychotherapy procedures, or to "empirically support" them (Chambless and Hollon, 1998). I believe this objective will be well served when many clinicians are involved in experientially meaningful research, complementing and humanizing those special and relatively rare studies which meet the clinical-trials gold standard of traditional science.


References

Box, G. E. P., Hunter, W. G., and Hunter, J. S. (1978). Statistics for Experimenters. New York: Wiley.

Chambless, D. L., and Hollon, S. D. (1998). Defining empirically supported therapies. Journal of Consulting and Clinical Psychology, 66, 7-18.

DeRubeis, R. J., and Crits-Christoph, P. (1998). Empirically supported individual and group psychological treatments for adult mental disorders. Journal of Consulting and Clinical Psychology, 66, 37-52.

Gendlin, E. T. (1962). Experiencing and the creation of meaning. New York: The Free Press of Glencoe.

Gendlin, E. T. (1968). The experiential response. In E. Hammer (Ed.). Use of interpretation in treatment. New York: Grune & Stratton.

Gendlin, E. T. (1986). What comes after traditional psychotherapy research? American Psychologist, 41, 131-136.

Gendlin, E. T. (1993). Words can say how they work. In Crease, R. P. (Ed.), Proceedings, Heidegger Conference. Stony Brook: State University of New York.

Gendlin, E. T. (1996). Focusing-Oriented Psychotherapy: A Manual of the Experiential Method. New York: The Guilford Press.

Iberg, J. R. (1991) Applying statistical control theory to bring together clinical supervision and psychotherapy research. Journal of Consulting and Clinical Psychology, 59, 575-586.

Iberg, J. R. (1996). Using statistical experiments with post-session client questionnaires as a student-centered approach to teaching the effects of therapist activities in psychotherapy. In Hutterer, R., Pawlowsky, G., Schmid, P. F., and Stipsits, R. (Eds.), Client-Centered and Experiential Psychotherapy: A Paradigm in Motion. Frankfurt am Main: Peter Lang.

Klein, M., Mathieu-Coughlan, P., and Kiesler, D. (1986). The experiencing scales. In L. S. Greenberg and W. M. Pinsof (Eds.), The Psychotherapeutic Process: A Research Handbook. New York: Guilford Press.

Post-Pinchek, G. E. (1997). The effect of therapist self-disclosure, reframing, and empathy on session quality ratings. Illinois School of Professional Psychology, Chicago.

Seligman, M. E. P. (1995). The effectiveness of psychotherapy: The Consumer Reports study. American Psychologist, 50, 965-974.

Task Force on Promotion and Dissemination of Psychological Procedures (1995). Training in and dissemination of empirically-validated psychological treatments: report and recommendations. The Clinical Psychologist (a publication of the Division of Clinical Psychology, Division 12, American Psychological Association), 48(1), 3-24.

1 It is also possible to define therapist activities that are not all-or-nothing for factorial studies. For example, one variable I have been exploring with my students is the frequency of empathic response, using two distinct levels in a given study, such as one response about every minute versus one response every thirty seconds. Other levels have also been used (see the "Analyze your results" section). In two-level studies, you choose two distinct levels and test them against each other. As you learn things over many studies, the levels you use are expected to change as you gradually "zero in" on the levels which get better results.
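A two-level comparison like the one in this footnote can be sketched very simply. The per-session quality ratings below are hypothetical, invented only to show the shape of the analysis (a full factorial analysis would follow Box, Hunter, and Hunter, 1978); a two-sample t statistic is one plausible way to compare the two frequency levels.

```python
# Comparing two levels of one therapist activity (hypothetical data):
# post-session quality ratings under two empathic-response frequencies.
from statistics import mean, stdev

every_minute = [6.0, 5.5, 6.5, 5.0]   # ~1 empathic response per minute
every_30_sec = [7.0, 6.5, 7.5, 6.0]   # ~1 empathic response per 30 seconds

diff = mean(every_30_sec) - mean(every_minute)

# Standard error of the difference for two equal-sized groups.
n = len(every_minute)
se = ((stdev(every_minute) ** 2 + stdev(every_30_sec) ** 2) / n) ** 0.5
t = diff / se

print(f"difference in means: {diff:+.2f}")
print(f"t statistic:         {t:.2f}")
```

The point of "zeroing in" over many studies is that the two levels themselves are revisable: if the 30-second level keeps winning, a next study might compare it against an even higher frequency, gradually mapping the region of the activity space where results are best.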

2 For a technical report, email Jim Iberg at ibergjr@aol.com


Send mail to experiential-researchers@focusing.org with questions or comments about this web site.
Copyright © 2000 Network for Research on Experiential Psychotherapies
Last modified: March 20, 2001