Experimenting with a Design Experiment

The design experiment is an experimental research method that aims to help design and further develop new (policy) instruments. For the development of a set of guidelines for the facilitation of citizens' initiatives by local governments, we are experimenting with this method. It offers good opportunities for modeling interventions by testing their instrumental validity – the usefulness for the intended practical purposes. At the same time, design experiments are also useful for evaluating the empirical validity of theoretical arguments and for the further development of these arguments in the light of empirical evidence (by using, e.g., the technique of pattern matching). We describe how we have applied this methodology in two cases and discuss our research approach. We encountered some unexpected difficulties, especially in the cooperation with professionals and citizens. These difficulties complicate the valid attribution of causal effects to the use of the new instrument. However, our preliminary conclusion is that design experiments are useful in our field of study.


Introduction
Experimentation as a research method in political science and the science of public administration was out of vogue for a long time. As King, Keohane and Verba observe: "political science is rarely experimental. We do not usually have the opportunity to manipulate the explanatory variables; we just observe them" (1994: 185). In public administration the situation does not seem to be much different (Houston and Delevan 1994; Perry 1994).
Morton and Williams claim that 'political scientists often rule out experimentation as a useful method for many interesting research questions […] However, experimentation is increasing dramatically in political science' (Morton and Williams 2008). Indeed, the number of scientific articles based on experimental research in our discipline's journals has grown over the recent decades. In such articles experiments are often used to test scientific hypotheses in laboratory settings where both the intervention and the contexts are stringently controlled and manipulated. On the other hand, in the world of politics and public administration 'experiments' and 'pilot projects' abound: new institutions (such as the direct election of mayors, or new electoral rules) and new modes of citizen participation (interactive governance and, more recently, citizens' initiatives) are put to a practical test outside strictly manipulated laboratory settings before implementation on a wider scale. Sometimes, but not always, such 'experiments' are followed by evaluative research. But even if such laudable efforts at evaluation are made, more often than not they fail to provide a sound evidence base for determining the instrumental validity of the instrument – for determining whether it works.
In our research project 'Citizens Making their Neighborhood', we use the 'design experiment' methodology, following the ideas of Gerry Stoker and Peter John (Stoker and John 2009). They argue that design experiments are a useful tool in the context of applied research in political science and public administration. John et al. (2011) demonstrate the usefulness of experimental research designs (including design experiments) in developing successful policy interventions aimed at changing civic behavior. In our research project we follow in their footsteps by using a design experiment to develop successful governmental strategies to facilitate so-called citizens' initiatives (from here on referred to as CIs). CIs can be considered a form of citizen participation in which citizens collectively engage in activities aimed at providing local 'public goods or services' (for example the promotion of livability and safety in their street, neighborhood or town). A distinguishing feature of such initiatives is that citizens both decide on the aims of their activities and play the main role in the actual implementation. The role of the municipality (and other relevant local agencies) is mostly limited to providing support and facilitation. Many Dutch municipalities – partly in the light of the need for budget cutbacks – want to stimulate such CIs, but are looking for guidance and advice on how they can play a facilitative role in such a way as to maximize the chances of success of these initiatives. The success of such initiatives may be defined in terms of a variety of criteria, such as:
• the attainment of citizens' objectives
• improved quality of the neighborhood
• empowerment of citizens
• increased trust in government and politics
Stimulating and facilitating CIs calls for new patterns of interaction between citizens and governments, and requires new know-how on the side of local officials and their organizations. This is now widely recognized, as is attested by a host of reports, brochures and case descriptions with lists of do's and don'ts in the case of CIs, sometimes even advertised as 'best practices'. But a theory-based, validated tool to establish which interventions contribute to successful CIs is as yet unavailable. Therefore, the project 'Citizens Making their Neighborhood'1 was initiated. The aim of this project is to develop a set of guidelines that public sector professionals can use in facilitating and supporting CIs.
These users include local actors on two levels of action2:
• Frontline workers in municipalities and other local organizations (e.g. housing corporations or welfare organizations) who support and facilitate participation of residents in the context of urban renewal and neighborhood policies.
• Managers and (elected or appointed) officials who bear strategic responsibilities and are also responsible for securing the openness and responsiveness of their organization in the case of new modes of citizen self-governance, including CIs.
In this contribution we focus on the development of guidelines for the frontline workers, with whom we worked together in our research project. In our interactions with them we were aware of the fact that for many years they were expected to look at residents as either subjects (to abide by governmental rules and regulations), clients (when consuming public services) or citizens (voicing their needs and demands through elections and other non-electoral channels for public participation). However, many professionals are neither accustomed nor professionally equipped to deal with self-governing community members, who initiate improvements in the quality of their neighborhood or other parts of their lives. The guidelines to be developed should therefore help these officials in mapping the motivations, possibilities, problems and frustrations of residents involved in initiatives. This requires that the frontline workers develop an adequate understanding of the dynamics of voluntary civic action. That is precisely the reason why our main theoretical frame of orientation is found in the civic voluntarism model formulated by Verba and his associates (1995) and the CLEAR model (Lowndes et al. 2005) that is based on this model.

The development of guidelines for the professionals in the context of a design experiment implies a two-stage process. First, in the early stages, an instrument has to be designed. Typically this is done by the researchers on the basis of their theoretical knowledge, but the initial design might also be (partly) based on the professional knowledge and experience of practitioners or the everyday knowledge of citizens. Second, the researchers develop theoretical expectations (hypotheses) about the anticipated effects of the actual use of this set of guidelines (the experiment) and confront these expectations with data collected about the actual process and its outcomes. This second stage is in fact identical to performing a formative evaluation in which the effectiveness of a policy intervention is investigated. Subsequently, if the results of this initial evaluation point to opportunities for improving results, the initial instrument is adapted and the improved instrument is re-tested.
In this contribution we first elaborate on the reasons why we have selected the design experiment as a research method. We do so by comparing it to other methods. We subsequently describe how we conducted our design experiment, setting out our choices and their implications for its various stages. Finally, we reflect on the implications of our choices in the light of a number of (partly unexpected) problems.

Criteria for selection of a proper research design
We considered a number of criteria in selecting a proper research design. First (1), the research design would have to be theory-based. Our set of guidelines should be based upon well-founded hypotheses about the expected effects of particular interventions. Thus, theory would provide a sound basis for the initial design of the instrument. Theory, moreover, provides the main benchmarks (expected results) against which the observations of actual results are to be measured. If observed results fall short of these theoretical expectations, the theory (or an adapted version thereof) has to provide the basis for recommendations for the further development of the instrument. Second (2), the research design should allow us to make valid causal inferences (internal validity). The question is: does the instrument (here the set of guidelines) under study indeed contribute to the observed results in the context where the intervention was made? Third (3), the design has to allow for opportunities to intervene (adopt our guidelines for professional behavior) and to control key parameters of the research situation. CIs are typically characterized by continuous interactions between a variety of different actors over longer periods of time. Preferably, the intervention and its effects should be studied in settings that reflect such complex conditions. Fourth (4), the research should preferably allow us to generalize conclusions to a wider population of cases (external validity). Ideally, the adopted design should score high on all four criteria.
In the next section we will discuss classical experiments, action research and design experiments in the light of these criteria.3 This comparison will bring out the strengths and weaknesses of the various alternatives.

Classical experiments
In classical experiments an intervention (traditionally called 'treatment') is performed on one group of subjects, whilst a control group does not undergo the treatment. Random assignment of the subjects to the experimental and the control group decides who receives the treatment and who does not (randomized controlled trial). A pre-test is administered in both the experimental and the control group, in which the situation with regard to the dependent variable is measured. A similar test is used to measure the situation after the treatment. If the change between the pre- and post-tests differs between the two groups, this difference can be attributed to the treatment.
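The pre-test/post-test control-group logic described above amounts to a difference-in-differences computation. The sketch below is our own illustration, not part of any study discussed here; the group sizes, score scale and simulated treatment effect are all invented.

```python
import random

def difference_in_differences(treated, control):
    """Estimate the treatment effect from (pre, post) score pairs.

    The effect is the average change in the experimental group minus
    the average change in the control group.
    """
    def mean_change(pairs):
        return sum(post - pre for pre, post in pairs) / len(pairs)
    return mean_change(treated) - mean_change(control)

# Simulate a randomized controlled trial: the treatment adds roughly
# 2 points to the post-test score; the control group only drifts.
random.seed(1)
treated = []
control = []
for _ in range(200):
    pre = random.gauss(50, 5)
    treated.append((pre, pre + 2 + random.gauss(0, 1)))
    pre = random.gauss(50, 5)
    control.append((pre, pre + random.gauss(0, 1)))

print(round(difference_in_differences(treated, control), 1))
```

Because assignment is random, any systematic difference between the two groups' changes can be attributed to the treatment, which is exactly the inferential strength the text describes.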
Obviously, experiments require a firm theoretical basis to identify both the treatment variable and the dependent variable and to formulate the anticipated effects of the former on the latter (1). Therefore the experiment meets our first criterion.
The power of experiments to make causal inferences (criterion 2) is so strong that this method is called the 'gold standard' in research: these trials 'produce a more warrantable form of knowledge' than other methods can provide, and for that reason experiments are considered a superior tool for establishing causal relationships (Babbie 2004; John, Cotterill et al. 2011). The basic conditions for establishing causality are met: the cause and the effect co-vary; the cause precedes the effect; and the effect cannot be spurious (because of the randomized control).
With regard to our third criterion (3), the classical experiment takes place in a laboratory setting, which allows for a considerable degree of control over the conditions in which the intervention is made and over the intervention itself. Finally, the generalization of results is straightforward, because the subjects in the treatment and the control group are typically individuals, randomly selected from a well-defined population (4).
Nevertheless, classical experiments require rather specific and exacting conditions. First of all, the number of subjects in the two groups has to be large enough to even out all variance in relevant characteristics. As Ray Pawson and Nick Tilley (Pawson and Tilley 1998) argue, however, if the subjects are not individuals but collectivities, it may be difficult (and practically impossible because of the costs) to find an appropriate number of cases; the possible variance might be very high because of the complexity of the systems, and the population to select from small. Furthermore, although an experiment allows for strict controls on the intervention and conditions, this is typically achieved through a strictly controlled laboratory setting. In the context of CIs it will be difficult if not impossible to realize either of these exacting conditions: the number of CIs that can be selected for experimental purposes is likely to be limited, and imposing experimental controls in a laboratory setting is impossible.

Action research
The goal of action research is to acquire knowledge for the good of disadvantaged groups in society. "The disadvantaged subjects define their problems, define the remedies desired, and take the lead in designing the research that will help them realize their aims […]. Implicit in this approach is the belief that research functions not only as a means of knowledge production but also as a tool for the education and the development of consciousness as well as mobilization for action" (Babbie 2012: 313).
This approach is unique to action research. Just like classical experiments, action research actively and consciously interferes in reality. This deliberate interventionism differs from other forms of social science research, in which researchers scrupulously try to avoid influencing the process under investigation. Unlike most social scientists, action researchers are "… focused on the input of policy makers and the refinements of an intervention. They offer an iterative methodology that relies on cycles of direct interaction with the decision makers and on refining how policy makers intervene" (Stoker and John 2009: 362).
Because of their desire for relevance and their political preoccupations, action researchers are very much practically oriented. Some action researchers even consider theoretical relevance an obstacle to practical relevance: building on established theories might prevent researchers from asking 'questions that really matter' (Swanborn 1984) in the eyes of their target groups. In the light of our first criterion we conclude that action research is a-theoretical if not plainly anti-theoretical.
In methodological terms it is hard to characterize action research. Neither Babbie, nor Swanborn, nor John et al. provide a discussion of how researchers operating in this tradition secure the internal validity of their results (criterion 2). Action research has a questionable methodological reputation, because many of its advocates are fervent adherents of the dictum 'relevance rather than rigor'. Too often the prescriptions of action research are not based on solid empirical evidence. Even Susman and Evered (1978), in general sympathetic towards action research, admit that 'when action research is tested against the criteria of positivist science, action research is found not to meet its critical tests'. However, they claim that another type of knowledge can be generated, which is relevant in terms of generating good organizational knowledge. Action research focuses on specific, real-life, real-time problems and situations, and has an interventionist orientation, aimed at innovation and problem-solving (Susman and Evered 1978). Therefore action research at least to some extent meets our third criterion. Finally, the orientation of action research toward the specific case, and its ideologically driven identification with a particular target group, not only stand in the way of a careful consideration of theory and method, but might also prevent action researchers from considering the generalization of their findings (criterion 4).
All in all, we conclude that traditional action research has been so heavily dominated by political preoccupations that methodological problems regarding questions of internal and external validity were of little concern.

Design experiment
Design experiment methodology combines the action researchers' concern with relevance with an ambition for theoretical and methodological rigor. Historically, design experiments have been the province of the artificial and design sciences: artificial intelligence, architecture, engineering and medicine (Gorard, Roberts et al. 2004: 578). Notice that the word 'design' in the name of this methodology does not refer to the research design; it refers to the blueprint of a new instrument that is to be developed during the research process. In the social sciences the method was initially used in educational science to help determine what learning strategies and teaching methods might be effective in education. The methodology 'allows the educational researcher to study learning in context, while systematically designing and producing usable and effective classroom artifacts and interventions' (Gorard, Roberts et al. 2004: 579). As Stoker and John (2009) observe, this methodology has so far hardly been used in political science and public administration, while other forms of experimental research are steadily increasing in popularity. The sparse use of design methodology is remarkable. As we have stated in the introduction, 'experiments' and 'pilots' abound in the domain of politics and public administration: there are numerous innovations in public participation, policy programs or institutional reforms that could be studied with the help of design methodology. The design experiment claims to provide evidence about 'what works' in the early stages of the development of a policy intervention (Stoker and John 2009). The aim of design experiments is to provide answers to design issues. To begin with, this implies that in answering its core questions design methodology takes its starting point in available theories. Therefore Stoker and John are right in claiming that '[d]esign experiments are not a-theoretical. […] Design experiments work with one or several theories. They develop a practical understanding of what could work, which can derive from social science theory and/or the intentions of those carrying out the intervention' (Stoker and John 2009: 360). The theory may serve to provide a priori ideas about certain interventions and their effects. Besides, the experience and professionalism of the officials and researchers involved can suggest new applications of the theory, or can provide new ideas about the mechanisms at work.
While the instrument is implemented, the process and the intermediary results are closely monitored and evaluated. On that basis the researcher may give advice on how to improve the instrument. This iterative process of design, trial, error, redesign, trial, etc. is not only practically relevant; it also provides the opportunity to evaluate the empirical validity of interesting theoretical hypotheses in the context of a field experiment. Our first criterion – a theoretically based method – is clearly met. One of the most challenging tasks in performing design experiments is to provide a rigorous analysis that allows for maximum feasible internal validity (criterion 2). This is daunting because field experiments are typically conducted in the context of one or only a few cases. Design experiments share with classical experiments the desire to study the effects (based on a before-after design) of theoretically interesting interventions. But design experiments, for the reasons explained above, can neither rely on the comparison of an experimental and a control group (composed of randomly selected units of observation), nor is it always possible to use multivariate statistics to demonstrate causal effects and to control for spuriousness.
Instead, design experiments rely on a different strategy for establishing causality. In classical experiments the causal process is considered a black box, and the operation of causal mechanisms is assumed rather than observed. Like other qualitative methods of research, design experiments are based on the identification of causal mechanisms and hypothesize a causal process that is specified in terms of sequences of expected events. On this basis, the researcher(s) collect data on the occurrence of the predicted events; these data units can also be called causal-process observations or CPOs (Collier 2011). This amounts to a careful reconstruction of actually observed sequences of events in the context of one or more experimental situations, which are matched with theoretically expected patterns in sequences of events. Therefore this technique is also referred to as pattern matching. Pattern matching is part of the method of process tracing, 'an analytic tool for drawing descriptive and causal inferences from diagnostic pieces of evidence' (Collier 2011).
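The core of pattern matching can be sketched in a few lines of code: the question is whether the theoretically expected sequence of events occurs, in order, within the (usually longer) record of observed events. The event descriptions below are invented for illustration and do not come from the cases discussed in this article.

```python
def pattern_matches(expected, observed):
    """True if `expected` occurs as an ordered subsequence of `observed`."""
    remaining = iter(observed)
    # `event in remaining` consumes the iterator, so order is enforced:
    # each expected event must appear after the previously matched one.
    return all(event in remaining for event in expected)

# Hypothetical causal-process observations for a citizens' initiative:
expected = [
    "facilitator maps skill deficits",
    "training offered to initiators",
    "citizens take on new tasks",
]
observed = [
    "first neighborhood meeting",
    "facilitator maps skill deficits",
    "budget approved",
    "training offered to initiators",
    "citizens take on new tasks",
]

print(pattern_matches(expected, observed))  # → True
```

Real process tracing is of course a matter of interpretation rather than exact string matching; the sketch only makes the underlying logic of matching observed against expected sequences concrete.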
In addition to the theoretical plausibility of the anticipated effect(s), there are two additional pieces of empirical evidence in design experiments that provide support for the hypothesis that X has contributed to Y (here, e.g., the success of a CI). First, there should be a change in the scores on Y after the treatment. Moreover, there should be evidence that theoretically plausible causal mechanisms have set in motion sequences of events that have contributed to the observed change in Y. Thus DE methodology offers the possibility to observe the effects of the use of the instrument over time, and to observe the process taking place after the use of the instrument that may lead to the expected change in Y.
A final concern is the need for external validation of design experiments (criterion 4). In this respect, there are two validity threats. First, design experiments may not be externally valid because of the disturbances introduced in the process by the presence and interventions of the researchers (so-called Hawthorne effects). Second, the external validity of one or a limited number of design experiments is of course a priori limited. Therefore, protagonists of design experiment methodology (Stoker and John 2009) emphasize the need for external validation of the initially developed instrument in a wider number of environments.
Our design experiment step by step

On the basis of the above arguments we concluded that, for the aims of our research, the design experiment promised to be an effective research instrument. But making such a decision in the abstract is one thing; implementing such a relatively new methodology in practice is another. Therefore, in the second part of this article we describe our experiences so far in conducting our design experiment.
According to the recent literature (Gorard, Roberts et al. 2004; Stoker and Greasley 2005; Stoker and John 2009; John, Cotterill et al. 2011), a design experiment implies the following steps in the research process. Theory always comes first: relevant theoretical ideas form the basis for the development of the possible intervention(s) that should lead to a desired outcome. Practitioners' experiences can be used as well, for example to translate initial theoretical ideas into workable interventions. Second, these interventions have to be implemented in a small number of relevant real-life situations. Third, intensive and detailed recordings of the project and the intervention(s) are indispensable to allow conclusions about the causal process that links the intervention to the potential effects. If the process or the (intermediate) effects are not in line with the expectations, there may be grounds for adaptations, through rounds of iterations: cycles of designs and redesigns. Finally, in order to enhance the external validity, the researchers can also perform a study with appropriate statistical power to confirm the effect of the use of the new instrument. In the remainder of this paper we describe how we went through these different steps in conducting our (still ongoing) design experiment.

Theory first
The aim of this research was to develop an evidence-based set of guidelines for facilitating successful CIs. However one defines the success of such initiatives (see the introduction), a necessary ingredient for success is the (continued) involvement of the citizens who initiate and develop these projects. Therefore, we considered stimulating citizens' (continued) involvement as the key factor in defining the success of the facilitation of CIs. In order to develop an initial set of guidelines we started our research project by studying the available theoretical and empirical knowledge. The civic voluntarism model (Verba, Schlozman et al. 1995) and the related CLEAR model developed by Lowndes et al. (Lowndes, Pratchett et al. 2006) provided us with a theoretical basis for identifying the main factors explaining the likelihood of citizens' involvement in CIs. In short, both Verba and Lowndes state that, first, one must be able to participate in order to do so: time, money and civic skills are needed for people to participate. Second, without motivation no one participates: only if an individual wants to participate will he or she do so. Furthermore, help from people and organizations contributes to the chances that people participate; networks enable participation. Mobilisation also plays a, sometimes underestimated, role: when people are asked to participate, the odds that they do rise considerably. Finally, a response from the governmental side is very stimulating. Lowndes abbreviated these five factors as follows: Can do, Like to, Enabled to, Asked to and Responded to. Together they form the acronym 'CLEAR'.
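The five CLEAR factors can be summarized as a simple lookup table. The sketch below is our own illustration of the model's diagnostic use; the 0–10 scoring scale and the threshold for flagging a weak factor are invented assumptions, not part of the framework of Lowndes et al.

```python
# The CLEAR factors and what each one captures (paraphrased from the text).
CLEAR = {
    "Can do":       "resources: time, money and civic skills",
    "Like to":      "motivation: wanting to participate",
    "Enabled to":   "networks and organizations that help",
    "Asked to":     "mobilisation: being invited to take part",
    "Responded to": "a stimulating response from the governmental side",
}

def weak_factors(scores, threshold=5):
    """Flag factors judged weak on an assumed 0-10 scale."""
    return [factor for factor, score in scores.items() if score < threshold]

# Hypothetical diagnosis of one initiative:
diagnosis = {"Can do": 7, "Like to": 8, "Enabled to": 4,
             "Asked to": 6, "Responded to": 3}
print(weak_factors(diagnosis))  # the factors needing remedial action
```

In this invented example, 'Enabled to' and 'Responded to' would be flagged, pointing the facilitator toward network-building and governmental responsiveness as the areas for remedial action.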
Knowing the main factors explaining citizen involvement also provides the basis for considering how professional 'facilitators' might undertake remedial action when problems occur (cf. Lowndes et al. 2006).
On the basis of this theoretical analysis and a subsequent secondary analysis of 21 case studies of CIs (Bakker et al. 2012), we drafted a seven-page memorandum with a set of guidelines for professionals involved in CIs. The memorandum can be considered a first version of an 'intervention theory'. One of the main principles underlying this memorandum was the presumption that 'facilitators' should not take policies, planning procedures and routines as their starting point, but should instead begin with the motivations, ambitions and capabilities of the citizens. In this way the CLEAR framework (Lowndes, Pratchett et al. 2006) was made into a useful tool for analyzing the starting situation and finding interventions to support CIs. Our theoretical framework and the memorandum basically served two purposes during the research. On the one hand, they provided the concepts and categories that helped us in systematically monitoring the CIs and the effects of the interventions by the professionals. On the other hand, they enabled us to provide structured feedback to the facilitators and formed the basis for suggestions for adjusting their intervention strategies when necessary. As such this document also served as the first version of the set of guidelines for facilitators of CIs that was the envisioned 'end product' of our design research. In an appendix we present the checklist which we used to present our analyses and our recommendations. This list is a summary of the tool, and serves here as an illustration of how we worked.
The framework and its underlying 'intervention theory' imply a wide range of hypotheses. These hypotheses link attributes of the facilitators (orientations and actions) to the key dependent variable (citizens' continued involvement in CIs). The general idea underlying these hypotheses is that through their attitudes, orientations and actions 'facilitators' can affect the various CLEAR factors and, via these, influence the likelihood of citizens' involvement in CIs. In the context of this contribution it is impossible to provide a complete list of the hypotheses implied in the model. Instead we will use two hypotheses for illustrative purposes in subsequent sections. These hypotheses relate to (one aspect – citizens' skills – of) the 'Can do' factor and to the 'Responded to' factor from the framework. The hypotheses read as follows:
Can do: To the extent that facilitators have an accurate picture of the deficits in the skills that are needed in the group of citizens involved in the CI and supplement any deficits therein, citizens' (continued) involvement in CIs is more likely.
Responded to: To the extent that facilitators respond to citizens' needs and demands for support in realizing a CI, citizens' (continued) involvement in CIs is more likely.
Experimenting with the set of guidelines will demonstrate whether these theoretical ideas can be confirmed.

Selection of cases
To perform a design experiment, there 'has to be a small number of settings where the experiment is carried out or, in many cases, just one setting' (Stoker and John 2009: 366). A number of considerations of both a methodological and a practical nature were important for the selection of our cases. The cases should satisfy the criteria in our definition of CIs. In addition, the projects had to be located in one of the three municipalities participating in the research consortium involved in our research (see footnote 1). Moreover, the projects should neither be too short-lived (in order to study changes over time) nor extend beyond the three-year period of the research project. Next to these demands, the willingness of the citizens and professionals actually involved in the CIs to cooperate in our research was of crucial importance. Especially the professionals' cooperation could make or break the research. Since, after the application of these criteria, there was hardly an abundance of cases to choose from, we were happy to find two appropriate cases we could work with. The two selected citizens' initiatives that serve as our experimental cases are the realization of a neighborhood museum in the city of Enschede and the 'Street Name Project' in Hengelo. In both of these cases we found professionals who were not only willing to cooperate, but also very interested in the research.
Due to practical contingencies we had to begin our fieldwork for both of these projects before the first version of our set of guidelines was fully developed. In both our cases the initiating citizens, on the basis of their own legitimate considerations, determined the starting moment. From a researcher's perspective it would have been better if the groups had started a couple of months later, so that we would have been able to test a prototype of our instrument at the outset of the process. However, for obvious reasons we as researchers were not in a position to determine the starting moment.

Collaboration with policy-makers, practitioners and participants
In the literature on design experiments in education, the roles of teacher and researcher are usually combined in one person. Stoker and John (2009) think that in public administration applications it is very difficult to imagine researchers taking over the role of (for example) a social worker or a police officer. Indeed, in neither of our cases was taking over the role of the supporting officials an option. It is the official, not the researcher, who already has a relationship with the participating citizens, who knows the neighborhood and who is formally responsible for the interventions. A good working relationship and a mutual understanding between the researcher and the professional(s) involved were therefore indispensable. In both cities we worked closely together with the front-line officials concerned with the CIs. We also involved their managers and policy officers to gain support for our recommendations on how the experiment was to be implemented.
The role of the researcher in our project is dual: we made recommendations as to how the front-line professionals were to apply the set of guidelines to each situation, and we observed how the process of realization of the initiatives proceeded.4 In the course of the development of the projects, we first explained the intervention theory memorandum and proposed to the professionals to hold two meetings for every round of iterations: one in which we gave feedback on what we had observed up to that point, and a second in which we applied the set of guidelines to the situation at that moment. In doing so, we took the CLEAR factors one by one and answered three questions:
• What is needed in the case of this CI (with respect to, e.g., the 'can do' and the 'responded to' factors)?
• What is already available (which skills, motivations, networks, etcetera, are available within the group of citizens)?
• And finally: what aspects are problematic (what is still required?) and how and by whom can any of the problems be solved?
On the basis of the responses to these questions we formulated recommendations for possible interventions, and we asked the professionals to implement these.
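The three-question walk along the CLEAR factors can be sketched as a simple diagnostic routine. This is purely an illustrative reconstruction of the procedure described above, not an instrument the authors actually coded; all names (`FactorDiagnosis`, `diagnose`) and the example values are hypothetical.

```python
from dataclasses import dataclass, field

# The five CLEAR factors; the text refers to 'can do' and 'responded to'.
CLEAR_FACTORS = ["can do", "like to", "enabled to", "asked to", "responded to"]

@dataclass
class FactorDiagnosis:
    factor: str
    needed: list = field(default_factory=list)       # what this CI requires
    available: list = field(default_factory=list)    # skills, motivations, networks present
    problematic: list = field(default_factory=list)  # gaps still to be resolved

def diagnose(answers: dict) -> list:
    """Walk the CLEAR factors one by one; unmet needs become candidate interventions."""
    report = []
    for factor in CLEAR_FACTORS:
        a = answers.get(factor, {})
        needed = a.get("needed", [])
        available = a.get("available", [])
        report.append(FactorDiagnosis(
            factor=factor,
            needed=needed,
            available=available,
            problematic=[item for item in needed if item not in available],
        ))
    return report

# Hypothetical coding of the neighborhood-museum case discussed later in the text:
answers = {"can do": {"needed": ["draft budget estimate"], "available": []}}
gaps = [d for d in diagnose(answers) if d.problematic]
```

The point of the sketch is only that the guidelines work as a gap analysis: each CLEAR factor yields a list of unmet needs, and each unmet need is a candidate recommendation to the facilitator.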
The cooperation with the participants (the initiators and other active citizens in the projects) was of another nature. When the researchers first met the participants and introduced themselves and the research project, most participants reacted positively, some were indifferent and a few were critical. Considerable efforts were made to explain the position of the researchers and the importance of their attending and observing the meetings. We promised confidentiality and explained our reticence (the observer never commented on the main process during meetings). Notwithstanding our efforts, there was some awkwardness on both sides. Some participants indicated they felt uneasy with the presence of an observer; others did not mind, or were very curious about the research and the researcher. For the researcher it was occasionally uncomfortable to act only as an observer, and to refrain from reacting to the things that happened and the questions that arose. This felt especially awkward at moments when the researcher held information that could have been of good use during the meetings.

Data collection
For the purpose of causal attribution in the design experiment we first needed evidence about the possible effects that CIs might have had on a number of key variables. Here it was especially important to monitor the CLEAR factors (e.g. the resources of, and the responsiveness towards, the participants), because these factors were critical for the (continued) involvement of citizens in the initiatives (the key element of CI success).
To establish these effects we first used data about changes in these key variables collected by pretest. In addition, we wanted to monitor whether any changes in these key variables could be demonstrated to be the result of the CI and its facilitation. For that purpose we used observation reports of meetings and additional material, such as policy documents and interviews with managing officials.
Observation reports were written by the researchers based on notes taken during the meetings. In the case of the neighborhood museum, there were monthly or sometimes fortnightly meetings with all the volunteers and the community worker. For the street name project, professionals and volunteers met in various settings, at irregular intervals. We sat in on and observed as many meetings as possible. In addition, we tracked the e-mails that were sent between participants and read the documents and minutes (where available) the participants drew up themselves.
For reasons already mentioned, we were in both cases unable to conduct a genuine pretest. Instead we conducted interviews with participants at an early stage of each project. Using a standardized questionnaire with additional open questions, we asked all volunteers about their backgrounds, their attitudes towards the municipality and semi-governmental organizations, their supporting officials, as well as their goals and their expectations about support and facilitation. We repeated the interviews with the same individuals near the end of the research period. On the basis of the observation reports, the additional materials and the combined results of the pre- and post-tests, we were in a position to trace patterns through which interventions by facilitators were (or were not) followed by expected changes in relevant CLEAR factors and contributed to the continued engagement or disengagement of citizens (a key element of CI success).

Applying the set of guidelines: making interventions
We agreed with the facilitators to apply the set of guidelines in two 'rounds'. In the first round, we used our preliminary instrument primarily to analyze the CI process. On the basis of our observations we made a number of recommendations for each case. In presenting these to the professional(s) we explained how we had reached our conclusions, asked the professionals whether they agreed with the analysis, and asked them whether they felt it would be feasible to implement the recommendations in the subsequent round (a period of five or six months). Based on these discussions we sometimes adapted our conclusions and advice. But in most instances the professionals agreed with both our diagnoses and the interventions we proposed.
The meetings with the professionals were carefully prepared within our research team in order to minimize the risk that professionals might be offended by our conclusions or recommendations. We wanted to avoid the impression that we were 'breathing down their necks', being judgmental or feeding back information to the professionals' supervisors. Clearly, that could have inflicted major damage upon our good working relations with the professionals. For that reason, we phrased all our feedback carefully in terms of observed facts and meticulously explained the theoretical rationale for our recommendations, avoiding a critical tone. For example, in the neighborhood museum case we formulated the following advice: "For the progress of this initiative a draft budget estimate is needed. The participants in this initiative apparently lack the particular skill to make this draft. Something should be done to remedy this. Therefore, we recommend you to intervene. What might help to this avail? Are you willing and able to draft a budget estimate yourself?" Although the professionals might sometimes still have interpreted this as a hidden reproach ("Why didn't you help them right away?"), our relations with them remained friendly and cooperative.
What we had not anticipated, however, was that in the daily practice of the professionals our recommendations might get "lost". One of the professionals in the street name project frankly admitted during the second round of interviews that, during the six months after the first talk with him, he had not consulted the documents containing our recommendations. As a result, only two out of five of the recommended interventions were implemented in this project.
In the second case, the professional involved in the neighborhood museum implemented only one of the seven recommended interventions. This recommendation (see our example above) concerned the draft of the annual budget estimate for the group. The other six recommendations were not implemented, for various reasons. As a result, we had only limited input for the redesign of our initial set of guidelines. In the next section we will demonstrate how our design experiment provided both confirmations and falsifications of the (intervention) hypotheses based on the CLEAR model.

Evaluating the validity of hypotheses and changing the instrument
In the absence of a strictly experimental design, the technique of pattern matching may be an important tool for testing causal hypotheses in the context of a design experiment. In this section we explain how we used this technique in our study. We focus on the previously discussed example of the 'can do' factor (skills) and the related first hypothesis. Based on the hypothesis that a lack of certain skills amongst participants will jeopardize citizens' continued involvement in a CI, we discussed with the facilitator the observed inability of the CI initiators to draft a budget and their lack of knowledge about their financial and legal liabilities. We suggested that he should intervene and help the group in these respects, by doing it himself. The "intervention hypothesis" underlying this advice was that such support would allow the CI to proceed and boost the motivation and confidence of the participants, leading to a continuation of their involvement in the initiative.
Intervention hypotheses based on the CLEAR model imply a particular sequence of events. If the 'can do' factor is falling short: the group will fail to perform crucial tasks (a) → leading to a blockage of the process (b) → after the lack of skills has been identified as the cause of this (c) → the professional is advised to intervene (d) → the professional heeds this advice (e) → the crucial task is performed (f) → the CI proceeds (g) → citizens retain their motivation and increase their sense of personal competence (h) → citizens remain involved in the CI (i).
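The logic of pattern matching against such a sequence can be sketched as follows. This is a minimal illustration of the technique, assuming events in a case record have been coded with the labels (a)–(i) used above; the function and the coded event lists are our own hypothetical constructions, not data from the study.

```python
# Theoretically expected course of events for the 'can do' intervention
# hypothesis, using the labels (a)..(i) from the text.
EXPECTED = ["a", "b", "c", "d", "e", "f", "g", "h", "i"]

def pattern_matches(observed: list, expected: list = EXPECTED) -> bool:
    """True if every expected event occurs, in the expected order, somewhere
    in the observed record. Extraneous observed events (everyday noise in
    the case) are ignored; the expected sequence must appear as a subsequence."""
    it = iter(observed)
    # Membership tests against an iterator consume it, which enforces order.
    return all(step in it for step in expected)

# Hypothetical coding of a case in which the full chain was observed
# (with a repeated symptom as noise):
full_chain = ["a", "a", "b", "c", "d", "e", "f", "g", "h", "i"]

# Hypothetical coding of a case in which the advice was not heeded,
# so the chain breaks after (d) and the pattern does not match:
broken_chain = ["a", "b", "c", "d"]
```

A non-match is as informative as a match: it localizes where the hypothesized causal chain broke (here, at step (e)), which is exactly the kind of evidence used below to confirm or revise the guidelines.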
The case of the neighborhood museum allowed us to observe whether this hypothesis and its implied sequence of events (pattern) matched the actual pattern of occurrences. We observed that, in spite of all the good intentions and attempts of the CI participants, months went by debating financial issues and trying to solve them. Members lacked an overview of the revenues they might have, and did not agree on spending priorities (a). Moreover, it was unclear who was responsible for budgetary decisions, and what implications this might have for the legal liabilities of the board members (a). Because of this, the budget and issues of legal liability and financial responsibility were a recurrent and major item on the agenda of the group meetings for an extended period of time, leading to considerable frustration in the group (b). Using the CLEAR framework, we observed that crucial skills were lacking, shared this conclusion with the facilitator (c) and advised him to provide support (d). Subsequently, the professional provided assistance and (after trying to coach the 'treasurer' of the group) made the draft estimate himself (e). The draft budget he prepared was discussed and agreed upon without much debate in the next CI meeting (f). This allowed the CI participants to continue their initiative and to concentrate on substantive issues rather than on tedious financial and legal concerns (g). This boosted members' motivation to proceed, and the new developments that followed increased their sense of personal competence (h), which contributed to their continued involvement in the CI (i).
Our theoretical framework implies many similar expectations. In a previous section we provided a second example of such a hypothesis, pertaining to the 'responded to' factor. Facilitators can play an important role in stimulating the continued involvement of volunteers in CIs by being responsive to their requests and needs for support. Here our guidelines imply that facilitators should be open to requests for support from citizens and take action when such requests are made. If this advice is heeded: the group will be supported to perform crucial tasks (a) → citizens will therefore be satisfied or become even more motivated and will increase their sense of personal competence (b) → citizens remain involved in the CI (c).
In our case studies we found numerous instances that corroborate this advice. But, surprisingly, we also found a number of cases where this was not so. On the basis of this evidence, we were forced to reconsider our initial guidelines, leading to a reformulation of our instrument. In working with the guidelines, we noticed that in at least two situations -notwithstanding the authentic efforts of professionals- there was considerable frustration amongst the citizens about the responsiveness of their facilitator. In these cases the facilitators acted in line with the ideas in the instrument and did their best to be open to requests for support and to provide support when such requests were made. To their surprise, however, the expected benign results did not follow on the requests for assistance; instead the participants showed quite some resistance. Only on the basis of interviews with participants did we begin to realize that explicit requests and demands are formulated (or not formulated), and should be understood, in the context of citizens' expectations regarding the role of the facilitator. On the one hand, this implies that if participants explicitly ask for support, they oftentimes not only expect some kind of response but may also expect a particular type of response. In the case of the neighborhood museum, for example, the facilitator, upon requests for assistance, tended to take the lead and make decisions himself, to the frustration of the citizens, who were merely expecting his advice on how to do particular things themselves. On the other hand, expectations with regard to the role of the facilitators may not be stated explicitly. Our initial guidelines stated: if facilitators are not asked for support they should refrain from action (even more so because citizens might be irritated by facilitators who impose themselves; see the previous case).
In the street name project, however, citizens considered it the self-evident, "natural" role of the facilitator to take a lot of responsibility in the implementation of the plans they had initiated. The professionals had asked them to come up with new initiatives; why should they also execute them? - according to one of the members during our interviews. At the same time the facilitators were vainly waiting for the citizens' explicitly formulated requests for support. On the basis of these observations in the first round of our two experiments, we added a key element to the set of guidelines: facilitators should not only be open and responsive towards explicitly formulated requests for support, but also be aware of the (implicit) expectations that citizens have regarding their role and the style and types of support provided to the CI participants. On this basis we amended the initial guideline and advised the facilitators to explicitly discuss such expectations with citizens.
These two examples demonstrate how the technique of pattern matching can be used both for evaluating the instrumental validity of policy instruments (here our guidelines for facilitators) and for establishing evidence for the empirical validity of theoretical hypotheses (implied in the CLEAR model). In the above we have tentatively outlined some expected patterns (sequences of events) and, for illustrative purposes, provided some (sketchy) evidence of empirically observed patterns. Data from our pre-tests and post-tests (e.g. sets of standardized survey items about citizens' sense of personal competence), in combination with interviews and other qualitative data, can and will be used for a more rigorous and systematic form of pattern matching.

Pitfalls and learning points
'Practice makes perfect', or so they say. We are convinced that the design experiment is as yet by no means a fully developed research methodology; many of its dos and don'ts are still to be worked out. The experiences of scholars who actually use this new methodology provide an important mechanism through which the methodological quality of design experiments can be improved. It is for this reason that we reflect on some of the lessons we learned in experimenting with the design experiment as a research tool in political science and public administration.
A first important lesson we drew from our project is that controlling the experimental environment may be even more difficult in the context of a politico-administrative design experiment than in an educational one. In educational settings, one can rely on a relatively stable environment in terms of timing (rosters), locations (classrooms in a particular building) and groups of students. In CIs, however, the researcher has no control over who initiates a project, why, when, about what, how and for how long; all this is in the hands of the initiating citizens. The relevant interactions of the initiators amongst themselves and with other actors can occur pretty much anywhere and at any moment, in changing settings. As we already saw in our project, this created problems in organizing timely pretests at the outset. But there were other problems as well. The street name project, for example, started out with three volunteers. One of them did not show up any more after the first encounter. The second volunteer announced after four meetings that he was forced to stop due to health problems. Finally, the third volunteer left unannounced to hibernate in Spain. Although the last person returned and still plays a major role in the process, for a considerable period it was uncertain whether our carefully selected case was still "alive". These complications came on top of some of the well-known problems of design experiments in education, like having to work with busy professionals whose priorities may shift over time and who may change jobs without being immediately replaced. Although we only had the opportunity to study two situations closely, we believe that in these respects our cases were not atypical. We suspect that such instability may very well be part of the natural context of CIs.
Secondly, we may have underestimated the complexities created by the multiple prescriptions implied in our instrument. Rather than one singular treatment, our set of guidelines was a multifaceted heuristic device for the professional, who had to use it to develop a coherent facilitation strategy that typically comprised several interventions. In a context where multiple interventions are likely to be made, the attribution of causal effects (a major challenge in any case) is further complicated. The design experiment method might be less demanding if the instrument to be developed involved a smaller range of interventions.
In hindsight, we could have improved our research by involving the professionals even more closely, and at an earlier stage. So far, we have made the analyses of the appropriate interventions ourselves, following the initial version of the set of guidelines, and we formulated the recommended interventions ourselves. We now know that at least one of the professionals was not fully aware of the importance (with respect to the research process) of actually following up on the recommendations; clearly, no observations on the effects of the instrument can be made if interventions are not carried out. In addition, we could have made better use of the expertise of the professionals in making adjustments to the tool.
To improve the instrument, newer and better versions should be tested in sequence. Had we had a more advanced version to start with -one that could have been used by the professionals with little or no help and recommendations- a further improved version could have been used in the second round of iterations. We would then have had a better opportunity to make more advanced improvements before the end of our research project. Up until now we have performed two rounds of testing and (partial) improvement; hopefully we will succeed in doing yet another round before the end of the project period, which would bring us a step further in the development of an effective tool.

Conclusions
Conducting a design experiment in politico-administrative settings is relatively new and, as a result, proved somewhat adventurous. Applying this method in two cases, we encountered some unexpected difficulties.
On balance, we are convinced that -notwithstanding such problems- the design experiment method has so far helped us to develop a theoretical basis for the design of a practical tool for the facilitation of citizens' initiatives. This list of guidelines is not fully developed yet, but the experiences so far are encouraging. The professionals involved in the project have already indicated in what ways the instrument is helpful to them. Moreover, the first effects we have been able to establish provide confirmation for the basic principles underlying the instrument and essentially confirm the instrumental validity of the set of guidelines. When we have collected and fully analyzed additional data (e.g. the post-tests that still have to be administered), we will be able to make full use of our data and will be well positioned to further improve the instrument.
An important analytical tool in establishing the impact of particular types of facilitation (X) on the success of citizens' initiatives (Y) was the technique of pattern matching. This technique provides an alternative to traditional experimental and quasi-experimental techniques for causal analysis in the context of small-n research. It is based on the identification of key theoretical mechanisms that causally link causes (X) and effects (Y) and the specification of a theoretically expected course of events. This expected course of events is then compared with the empirically observed course of events. This implied the need to formulate rather precise expectations about the anticipated effects of the use of the set of guidelines. On the basis of these anticipated effects we were not only in a position to test but, where necessary, also to alter the hypotheses (and redesign the instrument). In due course, this also offered opportunities to explore new insights into the dynamics of the participatory processes in CIs, such as the role of expectations with regard to the supporting officials and how types of motivation play a role in starting and quitting participation in a CI.
The external validity of small-n designs (including design experiments) is known to be more problematic than that of large-n research. Of course, in our experiment on two cases this is no different. This design experiment can be considered a first step; once our instrument is fully developed, the possibility opens up of performing a (quasi-)experimental study with statistical controls.
So far, we have not found a better way to address our specific research objectives. With all its flaws, the design experiment method allows for the development of new instruments, since it allows us to judge their effectiveness and make adjustments. This helps us also to test and adjust theoretical ideas, not in the laboratory, but in a real-life environment.