evoluation.de
 
Textsammlung
Jan Hense  15 Dec 2004 - 21:46  Textsammlung   

Methodological Questions

Jan Hense  15 Nov 2004 - 08:03  Stakeholder  Textsammlung   

Original Message -------- Subject: Re: Looking for methodologies to identify/choose stake holders Date: Sun, 14 Nov 2004 12:38:50 -0800 From: Avichal Jha Reply-To: American Evaluation Association Discussion List To: EVALTALK@BAMA.UA.EDU

Hi Jonny,

Michael Patton's "snowball" sampling technique comes to mind. You can find a discussion of different techniques in "Utilization-Focused Evaluation," published by Sage. I believe the 3rd is the most recent edition. Carol Weiss also has a great discussion on involving stakeholders in "Evaluation: Methods for Studying Programs and Policies."

What the discussion boils down to is context: What are you evaluating? The evaluand itself should suggest at least a limited group of stakeholders, i.e., those who asked for the evaluation. When evaluating policy, however, this may not hold; there, the relevant context becomes that of the policy. As long as you have a single stakeholder in mind, ask that stakeholder who the other stakeholders might be. This process, repeated with each new stakeholder, will "snowball" into a much larger sample.
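For readers who want to experiment, the snowball procedure described above amounts to a breadth-first traversal of a nomination network. The following Python sketch is purely illustrative; the stakeholder roles and the `nominations` mapping are invented for the example, not taken from Patton or Weiss.

```python
from collections import deque

def snowball_sample(seed, nominations, max_rounds=3):
    """Collect stakeholders by repeatedly asking each newly found
    stakeholder whom else they consider a stakeholder."""
    found = {seed}
    frontier = deque([(seed, 0)])
    while frontier:
        person, round_no = frontier.popleft()
        if round_no >= max_rounds:
            continue  # stop expanding beyond the round limit
        for nominee in nominations.get(person, []):
            if nominee not in found:
                found.add(nominee)
                frontier.append((nominee, round_no + 1))
    return found

# Hypothetical nomination data: whom each stakeholder points to.
nominations = {
    "funder": ["program director"],
    "program director": ["staff", "participants"],
    "staff": ["participants", "community partners"],
}

print(sorted(snowball_sample("funder", nominations)))
```

Capping the number of rounds (`max_rounds`) is one pragmatic stopping rule; in practice the snowball is usually stopped once a new round yields no new names.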

This is just one of the ways that Patton and others have discussed. I hope it helps (although my gut feeling is that this is more useful for program evaluation than policy analysis). As I suggested, if you haven't already looked at Patton and Weiss, I think you'll find their work very helpful.

Best of luck, Avi


Avichal Jha, M.A. Doctoral Student Evaluation and Applied Methods Claremont Graduate University avichal.jha@cgu.edu



Original Message----- From: American Evaluation Association Discussion List To: EVALTALK@BAMA.UA.EDU Sent: 11/14/2004 10:20 AM Subject: Looking for methodologies to identify/choose stake holders

We all agree that it is important to involve stakeholders in various phases of the evaluation life cycle. But how do we identify the population of relevant stakeholders and choose among them? My sense is that we tend to use the "I will know them when I see them" method. (It's what I do.) But are there more deliberate and systematic ways to go about it? Has anyone tried to develop a methodology? If anyone has relevant references, please send them my way. Thanks.

Jonny Jonathan A. Morell, Ph.D. Senior Policy Analyst

Street address: 3520 Green Court Suite 300, Ann Arbor Michigan 48105 Mail address: PO Box 134001 Ann Arbor, Michigan 48113-4001

Desk 734 302-4668 Reception 734 302-4600 Fax:734 302-4991 Email jonny.morell@altarum.org

Jan Hense  5 Nov 2004 - 08:41  Home  Textsammlung   

Was ist Evaluation? / What is evaluation?

"Once upon a time there was a word. And the word was evaluation. And the word was good. Teachers used the word in a particular way. Later on, other people used the word in a different way. After a while, nobody knew for sure what the word meant. But they all knew it was a good word. Evaluation was a thing to be cherished. But what kind of a good thing was it? More important, what kind of a good thing is it?" (Popham, 1993, p. 1)

"Evaluation - more than any science - is what people say it is; and people are saying it is many different things." (Glass & Ellet, 1980, p. 211)

"Research is aimed at truth. Evaluation is aimed at action." (attributed to M. Q. Patton; source unknown to me). The correct wording is: "Research aims to produce knowledge and truth. Useful evaluation supports action." (Patton, 1997, p. 24)

Jan Hense  29 Oct 2004 - 10:04  Formative Evaluation  Summative Evaluation  Textsammlung   

"Evaluation may be done to provide feedback to people who are trying to improve something (formative evaluation); or to provide information for decision-makers who are wondering whether to fund, terminate, or purchase something (summative evaluation)." (Scriven, 1980, pp. 6-7)

Problems with this pair of terms

The term formative evaluation (though not the concept) goes back to Scriven (1972) and, together with its counterpart summative evaluation, is probably the most prominent pair of terms in the evaluation literature. It is nevertheless a problematic term: it is imprecisely defined, theoretically inconsistent, and accordingly often arbitrary in practical use (see, for example, the contributions by Patton, Chen, and Wholey in Evaluation Practice, 1996, Vol. 17, No. 2).

Recommendations for using formative and summative

Since this pair of terms, with its strong intuitive appeal, will certainly persist despite these problems, the following usage seems sensible to me:

The terms formative/summative are used exclusively to designate intended purposes of an evaluation, as Scriven's quotation above suggests. All other addenda advanced by Scriven and his adherents are dropped. These include:

Jan Hense  17 Sep 2004 - 08:55  History of Evaluation  Textsammlung

"From the ambitions of the academic disciplines, from the convulsive reforms of the educational system, from the battle-ground of the War on Poverty, from the ashes of the Great Society, from the reprisals of an indignant taxpaying public, there has emerged evaluation." (Glass, 1976, p. 9)

Jan Hense  16 Sep 2004 - 06:54  Assessment  Textsammlung   

Original Message -------- Subject: Re: Evaluation, Assessment, and Rubrics Date: Wed, 15 Sep 2004 16:31:10 -0700 From: Richard Hake Reply-To: American Evaluation Association Discussion List To: EVALTALK@BAMA.UA.EDU

In her POD post of 14 Sep 2004 10:00:14-0700 titled "Evaluation, Assessment, and Rubrics," Leora Baron wrote:

I am looking for two items that my fellow POD'ers may be able to provide: First, a definition distinguishing between evaluation and assessment; and second, an online location that has a good description and illustration of rubrics.

I. ASSESSMENT vs EVALUATION

If one:

(1) goes to the powerful but little used POD search engine,

(2) types into the "Since" slot "2003" (without the quotes), and into the "Subject" slot,

(a) "assessment" (without the quotes), s(he) will obtain 90 hits,

(b) "evaluation" (without the quotes), s(he) will obtain 168 hits,

(c) "assessment vs evaluation" (without the quotes) s(he) will obtain 10 hits.

My own take on "assessment vs evaluation" can be found in Hake (2004a). From the perspective of the physics education reform effort [Hake (2002a,b)], I find it useful to make NO distinction between "assessment" and "evaluation," but to make a 4-quadrant discrimination [cf. Stokes (1997)] of types of assessment/evaluation, with formative vs summative on one axis and public vs private on an orthogonal axis.

The non-distinction between "assessment" and "evaluation" is contrary to the preferences of: (a) Steve Ehrmann (2004), (b) most of those contributing to the POD thread "Assessment vs Evaluation," (c) Mark Davenport (2004), and (d) the "Glossary of Program Evaluation Terms" at Western Michigan University (Michael Scriven's new location).

II. RUBRICS

If you mean by "rubric": "a technique, custom, form, or thing established or settled (as by authority)" (definition #4 in Webster's Third New International Dictionary Unabridged), then it all depends on what one is attempting to assess/evaluate.

IF it's student learning, and not "affective" impact as might be assessed by student evaluations of teaching (SETs):

(a) Peggy Maki's (2004) recent book might be useful, but I have not seen it. In a POD post of 22 Jul 2004 15:09:54-0400, Barbara Cambridge, Director of the Carnegie Academy Campus Program wrote: "Peggy Maki's new book on assessment is excellent. It is jointly published by Stylus and AAHE."

(b) You might consider pre/post testing using valid and consistently reliable tests developed by disciplinary experts in education research [Hake (2004b,c)]. As indicated in Hake (2004b), this is becoming more and more popular in astronomy, economics, biology, chemistry, computer science, and engineering. In many cases it has been stimulated by the pre/post testing effort in physics education research, initiated by the landmark work of Halloun & Hestenes (1985a,b).

Richard Hake, Emeritus Professor of Physics, Indiana University 24245 Hatteras Street, Woodland Hills, CA 91367

REFERENCES Davenport, M.A. 2004. "Re: Assessment vs Evaluation," ASSESS post of 13 Aug 2004 12:08:46-0400; online at .

Ehrmann, S. 2004. "Re: Evaluation, Assessment, and Rubrics." POD post of 14 Sep 2004 14:31:48-0700; online at .

Hake, R.R. 2002a. "Lessons from the physics education reform effort," Ecology and Society 5(2): 28; online at . Ecology and Society (formerly Conservation Ecology) is a free "peer-reviewed journal of integrative science and fundamental policy research" with about 11,000 subscribers in about 108 countries.

Hake, R.R. 2002b. "Assessment of Physics Teaching Methods," Proceedings of the UNESCO-ASPEN Workshop on Active Learning in Physics, Univ. of Peradeniya, Sri Lanka, 2-4 Dec. 2002; also online as ref. 29 at .

Hake, R.R. 2004a. "Re: Assessment vs Evaluation," online at . In this post I misinterpreted Mark Davenport's position - he DOES distinguish between assessment and evaluation [Davenport (2004)].

Hake, R.R. 2004b. "Re: Measuring Content Knowledge," online at . Post of 14 Mar 2004 16:29:47-0800 to ASSESS, Chemed-L, EvalTalk, Physhare, Phys-L, PhysLrnR, POD, and STLHE-L.

Hake, R.R. 2004c. "Re: Measuring Content Knowledge," online at . Post of 15 Mar 2004 14:29:59-0800 to ASSESS, EvalTalk, Phys-L, PhysLrnR, and POD.

Halloun, I. & D. Hestenes. 1985a. "The initial knowledge state of college physics students." Am. J. Phys. 53:1043-1055; online at . Contains the landmark "Mechanics Diagnostic" test, precursor to the much-used "Force Concept Inventory" [Hestenes et al. (1992)].

Halloun, I. & D. Hestenes. 1985b. "Common sense concepts about motion." Am. J. Phys. 53:1056-1065; online at .

Halloun, I., R.R. Hake, E.P. Mosca, & D. Hestenes. 1995. Force Concept Inventory (Revised, 1995); online (password protected) at . (Available in English, Spanish, German, Malaysian, Chinese, Finnish, French, Turkish, and Swedish.)

Hestenes, D., M. Wells, & G. Swackhamer, 1992. "Force Concept Inventory." Phys. Teach. 30: 141-158; online (except for the test itself) at . For the 1995 versions see Halloun et al. (1995).

Maki, P. 2004. "Assessing for Learning: Building a Sustainable Commitment Across the Institution." Stylus. Maki is the former Director of Assessment of the AAHE.

Stokes, D. E. 1997. "Pasteur's quadrant: Basic science and technological innovation." Brookings Institution Press.


EVALTALK - American Evaluation Association (AEA) Discussion List. See also

   the website:  http://www.eval.org

To unsubscribe from EVALTALK, send e-mail to listserv@bama.ua.edu

   with only the following in the body: UNSUBSCRIBE EVALTALK

To get a summary of commands, send e-mail to listserv@bama.ua.edu

   with only the following in the body: INFO REFCARD

To use the archives, go to this web site: http://bama.ua.edu/archives/evaltalk.html For other problems, contact a list owner at kbolland@sw.ua.edu or carolyn.sullins@wmich.edu

Jan Hense  9 Sep 2004 - 16:15  Evaluability  Textsammlung

The goal of an evaluability assessment is to increase the likelihood that the evaluation will be timely, relevant, and responsive (i.e., matched to the information needs). It is thus a strategy for cost efficiency, since the resources available for evaluation are to be used optimally.

An evaluability assessment should yield the following information, on which the subsequent evaluation can build:

Data sources for an evaluability assessment are

Literature: Wholey (1979), Trevisan & Huang (2003)

Criticism of the original concept of evaluability, from the perspective of theory-based evaluation:

In later revisions, Wholey (1987) takes up newer developments; formulating the program theory now also belongs to the evaluability assessment.

Jan Hense  26 Aug 2004 - 07:40  Impact  Internal Validity  Textsammlung

Original Message -------- Subject: history threats Date: Wed, 25 Aug 2004 13:57:31 -0400 From: Diana Silver Reply-To: American Evaluation Association Discussion List To: EVALTALK@BAMA.UA.EDU

I am looking for cases I can cite in which evaluators of a program, using a quasi-experimental design, have noted history threats in attempting to

Jan Hense  16 Aug 2004 - 14:51  Evaluation Model  Textsammlung

This section collects material on various evaluation approaches, models, and theories, and their respective protagonists.

Jan Hense  13 Aug 2004 - 08:45  Assessment  Evaluation  Textsammlung   

Message-ID:

 Date: Thu, 12 Aug 2004 21:25:36 -0700
 Sender: American Evaluation Association Discussion List 
 From: Richard Hake 
 Subject: Re: Assessment vs Evaluation
 To: EVALTALK@BAMA.UA.EDU

In his ASSESS post of 10 Aug 2004 titled "Assessment vs Evaluation" Mark Davenport wrote:

"I often read in the literature and hear on the conference circuit people using the terms 'assessment' and 'evaluation' interchangeably, as if they were synonyms. Even more confusing, I have found the word assessment is used to define evaluation, and vice versa . . . . Personally, I don't think we need two terms to explain identical concepts (unless they occur in two completely unrelated fields wherein the risk of confusion is minimal). Certainly academic and student affairs assessment are related enough that we can share terms. . . . I have documented my thoughts in an internal white paper to my constituents and would be happy to share it if you will drop me a note privately."

I hope Mark will place his white paper on the web so as to increase the readership and decrease mailing expenses. His post stimulated a 12-post (as of 12 Aug 2004 16:20:00-0700) ASSESS thread accessible at .

A similar thread (4 posts) titled "distinction between evaluation and assessment" was initiated by Jeanne Hubelbank (2003) on EvalTalk and is accessible at the EvalTalk archives. One post in this thread led me to a "Glossary of Program Evaluation Terms" at Western Michigan University (Michael Scriven's new location), where these definitions are given:

Assessment: "The act of determining the standing of an object on some variable of interest, for example, testing students, and reporting scores."

Evaluation: "Systematic investigation of the worth or merit of an object; e.g., a program, project, or instructional material."

Nevertheless, I'm with Mark Davenport in preferring to make no distinction between "assessment" and "evaluation." In a post titled "Re: A taxonomy" [Hake (2003a)], I proposed an assessment taxonomy for consideration and comment that is best presented in quadrant form [cf. Stokes (1997)]:

                        +Y
                      PUBLIC
                        |
                        |
               Scientific Research
                        |
<--FORMATIVE ASSESSMENT | SUMMATIVE ASSESSMENT --> +X
                       0|
                        |
    Action Research     |  Institutional Research
                        |
                        |
                      PRIVATE

Fig. 1. Quadrant representation of various types of assessment/evaluation.

For educational research, the X-axis represents a continuum from pure FORMATIVE to pure SUMMATIVE assessment of either teaching or learning. NO DISTINCTION IS MADE BETWEEN "ASSESSMENT" AND "EVALUATION." The Y-axis represents a continuum from complete privacy to complete public disclosure of results.

The locations of various types of research in terms of the type of assessment they offer are shown as:

"Scientific Research" [see e.g. Shavelson & Towne (2002)]: upper two quadrants - always public, and anywhere in the continuum between formative and summative.

"Action Research" [see e.g. Feldman & Minstrell (2000) and Bransford et al. (2000)]: lower left quadrant - usually private to some degree, and usually formative to some degree.

"Institutional Research": lower right quadrant - usually private to some degree, and usually summative to some degree, although it could approach the formative for those who study and attempt to improve institutional practice.
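Hake's two-axis scheme can be made mechanical with a toy classifier. The numeric coding below (negative = formative/private, positive = summative/public) is my own illustrative assumption, not part of his post.

```python
def classify(formative_summative, private_public):
    """Map a position on the two axes of Fig. 1 to Hake's labels.

    formative_summative: X-axis, negative = formative, positive = summative.
    private_public:      Y-axis, negative = private, positive = public.
    """
    if private_public >= 0:
        # Upper half: public, anywhere between formative and summative.
        return "Scientific Research"
    if formative_summative < 0:
        # Lower left: private and formative.
        return "Action Research"
    # Lower right: private and summative.
    return "Institutional Research"

print(classify(-0.5, 0.9))   # Scientific Research
print(classify(-0.8, -0.3))  # Action Research
print(classify(0.6, -0.7))   # Institutional Research
```

A hard boundary is of course a simplification: the original point of the figure is that research types occupy regions of a continuum, not crisp cells.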

In Leamnson's (2003) terms:

(a) "classroom research" can be either "scientific" or "action" research.

(b) "institutional research" is generally NOT formative from the standpoint of classroom teachers.

In my opinion, the science education use of pre/post testing [for reviews see Hake (2002; 2004a,b,c)] is usually formative for both action and scientific research, since the object is to improve classroom teaching and learning, NOT to rate instructors or courses.
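Concretely, pre/post results in this literature are often summarized with the average normalized gain, <g> = (post% - pre%) / (100 - pre%), the measure popularized by Hake for percentage scores on concept inventories. The class means in the sketch below are invented for illustration.

```python
def normalized_gain(pre_mean, post_mean, max_score=100.0):
    """Average normalized gain <g>: the fraction of the possible
    improvement (ceiling minus pre-test mean) actually achieved."""
    if pre_mean >= max_score:
        raise ValueError("pre-test mean is already at the ceiling")
    return (post_mean - pre_mean) / (max_score - pre_mean)

# Invented class means on a 100-point concept inventory.
print(round(normalized_gain(40.0, 70.0), 2))  # 0.5
```

Because the raw gain is divided by the room left for improvement, classes with different pre-test levels become comparable, which is the usual rationale for using <g> in the cross-course comparisons mentioned above.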

Richard Hake, Emeritus Professor of Physics, Indiana University 24245 Hatteras Street, Woodland Hills, CA 91367

REFERENCES Bransford, J.D., A.L. Brown, R.R. Cocking, eds. 2000. How People Learn: Mind, Brain, Experience, and School: Expanded Edition. Nat. Acad. Press; online at , pages 199-200. This is an update of the earlier 1999 edition.

Feldman, A. & J. Minstrell. 2000. "Action research as a research methodology for the study of the teaching and learning of science," in E. Kelly & R. Lesh, eds., "Handbook of Research Design in Mathematics and Science Education." Lawrence Erlbaum; online at (72kB).

Hake, R.R. 2002. "Lessons from the physics education reform effort," Ecology and Society 5(2): 28; online at . Ecology and Society (formerly Conservation Ecology) is a free "peer-reviewed journal of integrative science and fundamental policy research" with about 11,000 subscribers in about 108 countries.

Hake, R.R. 2003a. "Re: A taxonomy"; online at . Post of 9 Jul 2003 12:47:42-0700 to STLHE-L, PhysLnrR, EvalTalk, and POD. See also Hake (2003b).

Hake, R.R. 2003b. "Re: A taxonomy"; online at . Post of 12 Jul 2003 13:07:54-0700 to ASSESS, EvalTalk, PhysLrnR, STLHE-L, and POD.

Hake, R.R. 2004a. "Re: Measuring Content Knowledge," online at . Post of 14 Mar 2004 16:29:47-0800 to ASSESS, Biopi-L, Chemed-L, EvalTalk, Phys-L, PhysLrnR, Physhare, STLHE-L, and POD. See also Hake (2004b).

Hake, R.R. 2004b. "Re: Measuring Content Knowledge," online at . Post of 15 Mar 2004 14:29:59-0800 to ASSESS, EvalTalk, Phys-L, PhysLrnR, and POD.

Hake, R.R. 2004c. "Design-Based Research: A Primer for Physics Education Researchers," submitted to the "American Journal of Physics" on 10 June 2004; online as reference 34 at , or download directly as a 310kB pdf by clicking on .

Hubelbank, J. 2003. "distinction between evaluation and assessment." EvalTalk post of 13 Nov 2003 10:52:00-0500; online at . The encyclopedic URL indicates that one must subscribe to EvalTalk to access its archives, but it takes only a few minutes to subscribe by following the simple directions at / "Join or leave the list (or change settings)" where "/" means "click on." If you're busy, then subscribe using the "NOMAIL" option under "Miscellaneous." Then, as a subscriber, you may access the archives and/or post messages at any time, while receiving NO MAIL from the list!

Leamnson, R. 2003. "A Taxonomy," STLHE-L/POD post of 9 Jul 2003 10:32:02-0400; online at .

Shavelson, R.J. & L. Towne. 2002. "Scientific Research in Education," National Academy Press; online at .

Stokes, D.E. 1997. "Pasteur's Quadrant: Basic Science and Technological Innovation." Brookings Institution Press.



 
Copyright
Creative Commons License