Evaluation in Social Work Practice: An Overview

EVALUATION DEFINED

In general, evaluation can be defined as activities that systematically examine programme input, process, output, or outcome. The key qualification is "systematic": this excludes activities that are sporadic in nature. An old man in a home for the elderly, after a visit by a group of youth volunteers, remarked, "These kids are really enthusiastic." This fleeting comment cannot be counted as evaluation. However, if a worker systematically collects such comments for the assessment of the programme, then it becomes an evaluation. One important determinant of whether such systematic activities will take place is the purpose behind them. In essence, evaluation is a purposeful activity.

 

PURPOSE OF EVALUATION

There are many reasons for undertaking evaluation. With respect to social services, they can be categorized into three main orientations, namely service delivery (practice orientation), social service administration (administrative orientation), and the social work profession (professional orientation).

 

SERVICE DELIVERY

In this orientation the basic purpose is to enhance the quality of services. As Solomon (1975) pointed out, evaluation provides information concerning the various aspects of a programme, so that ineffective aspects can be discarded and practitioners are saved from acting on ineffective programmes. In other words, effort can be economized.

From the micro perspective, evaluation helps practitioners to modify programmes and to make better choices among programme alternatives that have different effects.

Evaluation conducted under this orientation is usually "formative" in nature, as contrasted with "summative" evaluation, which assesses the overall effectiveness of a programme; the key variables studied are process and outcome variables.

 

SOCIAL SERVICE ADMINISTRATION

Planning

From the macro perspective, evaluation informs social service planning. Weiss (1974) considered that "evaluation has a utilitarian purpose. Its function is to provide evidence of the outcome of programmes so planners can make wise decisions about these programmes in the future. To the extent that a programme is achieving its goals, evaluative evidence supports continuation, expansion and increased allocation of resources."

This is basically a rational decision-making model: it assumes that planners will take all the available facts and information into account in order to make decisions. Though contrary to the spirit of scientific enquiry, evaluation in reality is much influenced, if not dictated, by political considerations. As Meld (1974) pointed out, "Present political demands determine the pattern in which information is used - rather than the information determining the pattern of the political demands."

The evaluator may be value-free in his study; however, his employer and the consumers of evaluation are not. An evaluation report that goes counter to the political trend will likely be shelved and subsequently disappear from the scene. Epstein et al. (1973), answering the question "Evaluation for whom?", pointed out that "it would be unrealistic ... to assume that all consumers of evaluation have equivalent values and to ignore the socio-political context in which evaluation takes place!"

 

Monitoring/Accountability

It is a commonly held view that the kind of "consumer" is the prime determinant of evaluation. The predominant consumer of evaluation is usually the funding source. Accountability to the funding source is the usual frame of reference for evaluation, and may become a form of political control. The decision whether or not to continue financial support is usually based on the evaluation. The evaluation procedure thus becomes a tool of the funding source to ensure that practitioners do not violate its expectations.

Of course, evaluation is performed not only to meet the accountability requests of the funding source, but also those of service recipients and the social work profession.

 

SOCIAL WORK PROFESSION

A professional's performance is accountable to the whole profession. Evaluation can also contribute to the building up of professional knowledge, status, and morale.

The building up of knowledge in turn contributes to the quality of service. How much truth is there in the statement that interest groups help young people to become mature and responsible members of society? What elements in these group activities are conducive to such development?

There is an infinite number of unanswered questions in the social work field. Yet answers to these questions constitute the knowledge base for which social work is continuously striving.

Perhaps it would be idealistic to think of evaluation as a process that will eventually prove or disprove certain assumptions in service provision, let alone to settle whether it is necessary to prove or disprove certain principles or theories in social work. However, evaluation can act as a lamp that reduces the darkness of uncertainty, though no one can guarantee that, given light, one will surely find the right direction.

Nevertheless, the lamppost on which this "lamp" rests gives support to the profession. Professional status depends partly on the knowledge base on which a profession relies. Furthermore, what matters is not only the knowledge base the professionals think they have, but what other people consider the professionals to have. To those who are cynical towards social work, no programme that fails the test of evaluation will appear effective; to those who are indiscriminately positive, every programme will appear effective regardless of the test. Evaluation can thus demonstrate to the cynics the effectiveness they doubt, and indicate to the indiscriminately positive the unworthiness of programmes that fail the test.

Solomon (1975) considered that evaluation can provide personal satisfaction and security to practitioners. Professional morale depends heavily on professional status and on the assurance that one's efforts have been worthwhile. With low professional status and little assurance, morale will inevitably be low, and consequently the loss of manpower from the profession will be high.

 

OBJECTIVES AND FOCUS OF EVALUATION

When an evaluator attempts to examine a programme, after the basic purposes of the evaluation have been clarified, the next step is to identify its objectives -- whether the evaluation attempts to assess:

a) the efforts that the staff have put into the programme;

b) the effectiveness of the programme in achieving its objectives; or

c) the efficiency, in terms of the results obtained relative to the amount of effort made.

For different evaluation objectives, we would have a different set of variables to examine, that is, a different focus.

If we attempt to measure efforts, then we focus on the input variables, for example the number of man-hours spent, the amount of money or other resources devoted to the programme, etc.

If the evaluation objective is to assess the effectiveness of the programmes, then the outcome variables are examined. Outcomes may be the existence and intensity of the changes produced, and the impact and coverage of such changes. Examples of outcomes are the percentage of drug addicts who successfully completed the treatment programme and did not return to drug use within two years, the percentage of unemployed persons obtaining employment after receiving job counselling and placement service, the number of cases in which marital conflicts have been resolved, etc.

Output variables are frequently confused with outcome variables. Output variables are used primarily in service monitoring, i.e. to show that work has been done; however, they tell us very little about the effectiveness of the programme. Examples are the number of cases served, the number of youth members, etc.

If efficiency is the objective, then we are concerned with the outcomes produced relative to the input invested.
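To make these foci concrete, the sketch below uses Python with entirely hypothetical figures; the programme names and all numbers are invented for illustration. It shows how input, output, and outcome are distinct measures, and how efficiency relates outcome to input.

```python
# A minimal sketch of the evaluation foci, with invented figures.
programmes = {
    "programme_a": {
        "input_man_hours": 1200,       # effort: staff time invested
        "output_cases_served": 300,    # output: work done (monitoring only)
        "outcome_cases_resolved": 90,  # outcome: changes actually produced
    },
    "programme_b": {
        "input_man_hours": 800,
        "output_cases_served": 260,
        "outcome_cases_resolved": 78,
    },
}

for name, p in programmes.items():
    # Effectiveness looks at outcomes alone; efficiency relates outcome to input.
    efficiency = p["outcome_cases_resolved"] / p["input_man_hours"]
    print(f"{name}: {efficiency:.3f} resolved cases per man-hour")
```

Note that in these invented figures programme_a serves more cases (output) than programme_b, yet programme_b resolves more cases per man-hour; this is the sense in which output statistics alone say little about efficiency.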

Input and outcome are the usual foci of evaluation. Process variables are usually considered to be the independent variables, and their importance is shown by their presence or absence. To practitioners, the process is the interaction among the workers' skills, techniques, and the situational variables. Thus, process variables are usually utilized in the interpretation of evaluation results and contribute to the planning of future actions. However, effectiveness (summative) evaluation studies are often attacked on the ground that four conditions should be met before such evaluation is useful:

1) the programme has gone through its developmental stage, i.e. it is mature;

2) the objectives of the programme are sufficiently specified to allow for definable outcomes;

3) the programme is well enough defined to determine whether it is present or absent in a given setting; and

4) some basis for observing or estimating the state of outcomes in the absence of the programme is available for comparison with programme outcomes.

If an evaluation study concludes that a programme is effective, its results will be useful only if the input and process variables can be described concisely and precisely. Thus, although efficiency may not be the prime objective, the description of the input and process variables is always important when interpreting the implications of the outcomes.

 

FORMS OF EVALUATION

There are many forms of evaluation, not all of which can be described individually in this overview. Here, only the most common forms are discussed briefly.

One family of evaluation designs is characterized by its explanatory nature: experimental designs, quasi-experiments, and pre-experimental designs.

Social scientists mainly advocate the use of rigorous experimental designs in evaluation; only reluctantly would they settle for a less rigorous quasi-experiment. Campbell (1969) considered that "True experiments should almost always be preferred to quasi-experiments where both are available."

However, Campbell also considered that "Occasionally are the threats to external validity so much greater for the true experiment that one would prefer a quasi-experiment." Therefore, it is the balance struck between internal and external validity that determines the choice among this family of evaluation designs.
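As a rough illustration of the logic of a true experiment, the sketch below (in Python, with invented participant IDs and outcome scores) shows the core comparison: random assignment to programme and control groups, followed by a comparison of mean outcomes. In a quasi-experiment the groups would be pre-existing rather than randomly assigned, which is what weakens internal validity.

```python
# A minimal sketch of the comparison at the heart of a true experiment.
# All participant IDs and outcome scores are invented for illustration.
import random

random.seed(42)  # reproducibility of this illustration

participants = list(range(40))
random.shuffle(participants)
treatment, control = participants[:20], participants[20:]
in_treatment = set(treatment)

# Hypothetical post-programme scores on some outcome scale.
outcome = {pid: random.gauss(60 if pid in in_treatment else 52, 10)
           for pid in participants}

mean_t = sum(outcome[p] for p in treatment) / len(treatment)
mean_c = sum(outcome[p] for p in control) / len(control)
print(f"treatment mean = {mean_t:.1f}, control mean = {mean_c:.1f}, "
      f"difference = {mean_t - mean_c:.1f}")
```

Because assignment is random, a sizeable difference between the two means can be attributed to the programme rather than to pre-existing differences between the groups.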

Another form of evaluation that has received considerable attention is cost-benefit analysis. Cost-benefit analysis is traditionally concerned with the question of the economic efficiency of resource utilization. A programme alternative is viewed as a change from the status quo. Four possible consequences of such a change form the basis of the cost-benefit analysis:

1) incremental or additional costs;

2) benefits forgone elsewhere in moving resources from their existing use to the projected activity (i.e. opportunity costs);

3) incremental or additional benefits; and

4) cost savings.

Some researchers consider the task of identifying the benefits too difficult, let alone quantifying such benefits in money terms. "Shadow prices" have been used to combat this difficulty. Another "solution" to this problem is cost-effectiveness analysis, whose main objective is to identify the most appropriate programme that can achieve a particular objective at minimal cost. In fact, cost-benefit and cost-effectiveness analyses are not new forms of evaluation design, but modified forms in which the cost element is included in the research analysis.
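The sketch below (in Python, with hypothetical dollar figures) shows how the four consequences listed above combine into a single net-benefit figure, and how cost-effectiveness analysis sidesteps the monetary quantification of benefits by comparing cost per unit of outcome instead; all names and numbers are invented.

```python
# A minimal sketch of cost-benefit and cost-effectiveness calculations.

def net_benefit(incremental_costs, opportunity_costs,
                incremental_benefits, cost_savings):
    """Net benefit of a programme alternative relative to the status quo."""
    return (incremental_benefits + cost_savings) - (incremental_costs + opportunity_costs)

# Cost-benefit view: all four consequences expressed in money terms.
print(net_benefit(incremental_costs=500_000, opportunity_costs=120_000,
                  incremental_benefits=700_000, cost_savings=80_000))   # 160000

# Cost-effectiveness view: benefits resist monetary quantification, so
# compare programmes on the cost of achieving one unit of outcome.
programmes = {"programme_a": (500_000, 90), "programme_b": (350_000, 70)}
for name, (cost, successful_cases) in programmes.items():
    print(f"{name}: ${cost / successful_cases:,.0f} per successful case")
```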

 

MODELS OF EVALUATION

Though there are various forms of evaluation design, they are closely related to the purposes, objectives, and foci of the evaluation. Many writers have proposed models of evaluation to circumscribe the various social science concepts employed in different forms of evaluation.

 

Schulberg & Baker (1968)    Weiss (1974)              Washington (1975)
Goal Attainment Model       Traditional Model         -
System Model                -                         Accountability System
-                           Social Experimentation    Impact Model
-                           -                         Behavioural Model

The Behavioural Model of Evaluation (BME) proposed by Washington (1975) borrows heavily from behavioural psychology. Washington considered the BME similar to the Goal Attainment Model except for its emphases on:

  1. evaluation based on information about behaviours which are observable and measurable;
  2. the presence or absence of such behaviours; and
  3. the intensity and/or frequency of such behaviours.

However, it is obvious that these concepts can be borrowed by the Impact Model or any other form of evaluation. As observable and measurable behaviours are good indicators for evaluation, the BME can be considered a technique employable in most forms of evaluation.
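A minimal sketch of the three BME emphases, in Python with an invented observation log (all behaviour names are hypothetical), might record target behaviours and report their presence and frequency:

```python
# A minimal sketch of BME-style recording: observable behaviours, their
# presence or absence, and their frequency. All records are invented.
from collections import Counter

observations = [
    "initiates_conversation", "helps_peer", "initiates_conversation",
    "completes_task", "helps_peer", "initiates_conversation",
]
target_behaviours = {"initiates_conversation", "helps_peer", "attends_punctually"}

frequency = Counter(observations)
for behaviour in sorted(target_behaviours):
    present = frequency[behaviour] > 0   # presence or absence
    print(f"{behaviour}: present={present}, frequency={frequency[behaviour]}")
```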

 

OTHER ISSUES

Who should be the evaluator

1) External/internal evaluator: Funding sources tend to discount evaluations performed by internal evaluators, considering that they may be biased. On the other hand, external evaluators may not be welcomed because they may lack the essential knowledge of what actually happens and so cannot provide a "complete" picture. Furthermore, no one is willing to employ a research team that has a reputation for giving negative results.

2) Clients' evaluation: Clients are usually not well informed about alternative services or about alternative and better ways of providing a service. Clients tend to show appreciation after they have received some service, and most people tend to give a positive evaluation out of courtesy. Clients' evaluation is most useful for comparison purposes, such as comparing one service unit with another, or comparing past and present performance.

 

Goal displacement

One major issue in service evaluation and monitoring is its possible impact on the behaviour of the service providers, whose efforts may subsequently be directed not towards the goal of service provision but towards meeting the monitoring standards. An emphasis on output will lead service providers to focus on the quantity of service provided instead of its quality. For example, if we focus our attention on the number of participants, we will organize more mass programmes to attract more participants.

Cost-effectiveness of programme evaluation

We have to spend time and effort to conduct programme evaluation. The more elaborate the evaluation process, the more information we can obtain, yet the more expensive it will be. The key issue is how to strike the right balance. If we spent 100% of our effort providing service without evaluating it, there would be serious doubts about our effectiveness and our long-term ability to learn from experience. At the other extreme, if we spent 50% of our time on evaluation, it would be highly unlikely that the benefits of evaluation would compensate for the loss of resources for providing services. Obviously, the answer lies between the two extremes. The most practical approach is simply to start conducting evaluation using whatever time we can squeeze out for doing so, and subsequently to evaluate the cost-effectiveness of conducting the evaluation itself and fine-tune the process.

 

 

The Current Service Monitoring and Evaluation in Hong Kong

Service monitoring prior to 1999

Prior to 1999, the major method adopted in Hong Kong was basically input and output monitoring. For each service unit, the number of personnel was prescribed; in an outreaching social work team, for example, there were 1/3 Social Work Officer, 4 Assistant Social Work Officers, and 6 Social Work Assistants, etc. This system had long been considered rigid by the NGOs. Output statistics were also collected, such as the number of members in a youth centre, the number of cases handled by a social worker, etc. Again, this kind of monitoring can tell us very little about the extent to which public money has been well spent, i.e. whether results have been achieved.

 

The Service Performance Monitoring System

The SPMS was introduced in 1999 and consists of two major components: monitoring according to the Funding and Service Agreement (FSA), and the Service Quality Standards (SQSs).

The FSA is more or less like a contract, spelling out what is expected from the service provider and the responsibilities of the Social Welfare Department. The Essential Service Requirements and the Output Standards are the major specifications of requirements. The Output Standards, despite the name, consist of both output and outcome indicators, and the tendency is to use more and more outcome indicators instead of output indicators.

 

The SQSs

The term "Service Quality Standard" can be quite confusing. It is not measuring the quality of service per se. It spells out the mechanisms and processes that should be in the service provision system that will help to assure that the quality of service can be maintained and continuously improved.

When the system was first introduced, there were 19 SQSs and 79 criteria. In 2001, following a review conducted by HKU, these were reduced to 16 SQSs and 54 criteria (refer to the SWD web-site for details).

To reduce the workload possibly associated with their implementation, the SQSs were implemented in phases: in 1999-2000, 5 SQSs were implemented; in 2000-2001, another 5; and in 2001-2002, the remaining 6.

When the SQSs were first introduced, over-documentation was widely observed, and complaints were often received about the workload the SQSs introduced. With experience and an emphasis on training, documentation has been substantially reduced.

 

The Assessment Process

Periodically (mainly quarterly), service units submit the service statistics required by the FSA to the SWD. Annually, each service unit submits a self-assessment report according to the FSA and the SQSs. As originally planned, external assessment of each service unit would be conducted by the SWD once every 3 years.

In the review conducted by HKU in 2001, it was recommended that only a sample of service units within each NGO be externally assessed, and that only a sample of the SQSs be assessed instead of all 16. These recommendations were adopted starting from the Phase 3 implementation in 2001-2002.