Reducing Threats to the Implementation of a Competency-Based Performance Assessment System

  • Toni Bargagliotti, DNSc, RN

    Toni Bargagliotti, DNSc, RN is Dean and Professor at The University of Memphis Loewenberg School of Nursing.

  • Marjorie Luttrell, PhD, RN

    Marjorie Luttrell, PhD, RN is an Associate Professor at The University of Memphis Loewenberg School of Nursing.

  • Carrie B. Lenburg, EdD, RN, FAAN

    Dr. Lenburg, Loewenberg Chair of Excellence in the School of Nursing, University of Memphis from 1997-1999, worked with the nursing faculty to convert the BSN program to the competency outcomes and performance assessment model and methods. She also is a consultant to the nursing faculty of the University of Colorado Health Science Center, helping to integrate the model into its four degree programs (BSN, MSN, ND, and PhD) and into all UC-SON Internet courses, and an ongoing consultant to the newly developing BSN program at King College (Bristol, TN), which is implementing the COPA Model from the outset. From 1973-1991 she coordinated the development, implementation and evaluation of the New York Regents College External Degree Nursing Program.

Abstract

Performance evaluation in nursing, both in service and education, has been an activity that evokes feelings of apprehension and fear of failure. The service sector is mandated by the Joint Commission to evaluate the competency of clinicians for safe practice, and schools of nursing have a similar mandate to graduate students who are safe clinicians. The transition from process evaluation to a competency-based assessment system in both service and education sectors is threatening to the stakeholders: managers, staff, faculty, and students. This article explores the various dimensions of threats to stakeholders, their investments, and strategies that can be used to protect those investments.

Reducing Threats to the Implementation of a Competency-Based Assessment System

One area in nursing with high dissonance between belief and actual practice is the evaluation of clinical abilities among clinicians and nursing students. The beliefs and values held by the parties involved are that all practicing nurses are, of course, competent by reason of licensure, employment, and perhaps professional certification. Similarly, in nursing education, students, faculty, and the profession share an almost unshakable belief that a clinical grade represents the observed clinical competence of students in a corresponding area.

The reality is that those in nursing education and clinical practice have consistently used repetitive process evaluation over time to make judgments about practice abilities. In this process-oriented formative evaluation system, cumulative and retrospective reports about ongoing clinical incidents - both positive and negative - generally prevail as sufficient evidence of actual competence. Consequently, no area of nursing education can reach a flash point for students or faculty as quickly as making clinical evaluation more stringent. In nursing practice, managers who evaluate personnel performance often experience similar responses (Nolan, 1998). For this reason, implementing any system of more responsible clinical evaluation precipitates anxiety and resistance among those who are subject to it, and thus the issues that can be anticipated whenever the specter of failure is raised must be considered seriously.

In spite of potential threats, resistance, and lack of understanding of what changes to make and how to implement them successfully, in recent years a growing number of nursing organizations, education programs, regulatory and advisory bodies, and accreditation authorities have put increased emphasis on the objective measurement of outcomes in academic and practice settings (American Association of Colleges of Nursing, 1998; Joint Commission on Accreditation of Healthcare Organizations, 1996; Lenburg, 1991, 1999a; National League for Nursing Accrediting Commission, 1997; Orchard, 1994). The emphasis on outcomes derives from public sentiment that how something was done is not as important as whether or not it was effective. Since accountability for outcomes is no longer optional, nurses in both practice and education are struggling to learn, create, and implement newly conceived competency-based systems for contemporary practice.

Competency-based performance evaluation is defined as a criterion-referenced, summative evaluation process that assesses a participant's actual ability to meet a predetermined set of performance standards under controlled conditions and protocols (Lenburg, 1979, 1992-1995, 1998). This approach differs conceptually from the traditional check-off method. The performance standards used in a competency-based system focus on knowledge and principles essential to effective implementation of required skills.


In contrast, the typical check-off lists used in clinical evaluation focus on the sequential steps to be done as required by a particular agency or course, i.e., the particular way a skill should be performed. They may or may not specify mandatory elements. Potentially, multiple ways (steps) could be used to implement a given skill, but the essential principles would be the same. Thus, the performance standards in a competency-based system are those actions or responsibilities deemed to be critical for practice and quality care, rather than the steps in doing them. Work by Lenburg (1979, 1992-1995, 1999b) and by Luttrell, Lenburg, Scherubel, Jacob, and Koch (1999) provides both a context and a model that are potentially useful in finding ways through the minefield of competency development, implementation, and evaluation. Even the most carefully developed programs, however, are fraught with contentions, disagreements, and issues that need to be confronted early on if they are to be resolved without major disruption (Lenburg, 1990, 1991; Lenburg & Mitchell, 1991).

Successfully changing a potentially difficult and controversial system, like performance assessment, requires understanding not only the changes, but also their influence on, and significance for, those most affected by them. This requires knowing who the stakeholders are, what their stake is, and what value systems are operative in the context of proposed changes. Clearly understanding the importance of these factors helps clinical evaluators to construct a system that protects the investments of the stakeholders and reduces conflicts inherent in the system. Failure to focus adequately on their concerns may well lead to failure of the whole process.

The purpose of this article is to identify and reduce threats inherent in changing from a process-oriented evaluation system to a competency-based performance assessment system. This article will describe the various stakeholders in the evaluation process, present approaches for protecting the investments of students, faculty, staff and nurse managers, address interactions among these various constituents and the risks of system failure, and conclude by suggesting future directions.

Who Are the Stakeholders?

Superficially, clinical evaluation appears to be a private process only between those who are directly involved: faculty and students on the education side and managers and staff on the service side. In education, while the primary stakeholders are faculty and students, other constituencies they represent also become involved. Faculty believe that the broader constituencies they represent include:

  1. nursing faculty,
  2. university faculty,
  3. university administration,
  4. practicing nurses,
  5. the state board of nursing,
  6. the nursing profession at large,
  7. health care agencies,
  8. patients,
  9. taxpayers, and
  10. the public.

Faculty and the academic community hold that faculty have the sole right and responsibility to determine competence by assigning grades. As faculty in a professional school, nursing faculty carry out the important responsibility of gatekeeping because they are the predominant persons who observe and make decisions about students' actual nursing abilities prior to graduation. Accordingly, an important faculty constituency is the state boards of nursing, which represent the consuming public. Faculty believe that another constituency they represent is practicing nurses, who espouse the value that students should be adequately prepared before graduation.

Alternatively, students believe they are the primary stakeholders representing their singular interest. Although they believe they stand alone in the case of a failing grade, many students immediately call for help from a number of constituencies including:

  1. other students,
  2. college/university administrators,
  3. sympathetic faculty,
  4. sympathetic staff nurses,
  5. local politicians,
  6. the state board of nursing,
  7. professional nursing accrediting bodies, and at times
  8. attorneys.

Notably, these two primary participants - students and faculty - have overlapping constituencies. Herein lies the reason that a clinical evaluation system potentially generates intense and divergent emotions. Although both students and faculty may indicate a level of dissatisfaction with current process-oriented evaluation, change of any magnitude typically is highly threatening and strongly resisted because it represents yet another unknown and potential threat to self. Overcoming this resistance requires designing a cohesive and objective competency-based system that proactively protects their respective investments.

A parallel situation exists in the service setting with nurse managers and/or service-based educators who evaluate the competency of nursing personnel. In the service setting, evaluators (managers or educators) believe their constituencies are:

  1. the employing agency,
  2. nursing service administrators,
  3. physicians practicing in the specific unit,
  4. human resources staff,
  5. patients,
  6. payers for health care,
  7. the state boards of nursing,
  8. the broader nursing profession, and
  9. the public.

Management has the inherent right to evaluate employees and to hold them accountable for performance. Managers also have a critically important gatekeeping function to protect patients and the public from unsafe clinicians.

Similar to students, staff nurses being evaluated believe they are primary stakeholders with a singular interest. However, in the case of an unsatisfactory evaluation, staff bring forward other advocating constituencies, such as:

  1. other staff nurses,
  2. shift supervisors,
  3. physicians,
  4. nursing service administrators,
  5. human resources staff,
  6. state nursing associations,
  7. collective bargaining agents where applicable,
  8. patients,
  9. the public, and
  10. even attorneys.

Nurse managers and staff nurses quickly discover that these are overlapping constituent groups who may pose potential conflicts.

Protecting the Student's Investment

An important investment that often is forgotten in discussions of clinical evaluation is the student's major investment in becoming a competent clinician. Students enter nursing school believing that they will be able to practice nursing successfully following graduation. The investment and value placed on this goal make it imperative to keep clinical competence in the foreground, not the background, of serious discussions about changing evaluation methods.

Students have a major stake in how they will be evaluated and in the results. While a passing clinical grade is never an issue, a clinical failure immediately results in a range of serious consequences. These include the financial and time costs to repeat the course, the personal and professional embarrassment of an unsatisfactory academic grade on a transcript, and potential disqualification from the major or the program. These are not minor concerns. Moreover, basing a semester grade on observed performance at one specified evaluation time precipitates major student anxiety about having a bad clinical day on the day of a specific performance examination. Addressing these important student concerns requires safeguarding, rather than ignoring the student's investment while simultaneously enforcing essential standards of practice.


Students' anxiety is rightfully increased by any evaluation system with which they have had little or no experience. Although students may welcome the change from process-oriented evaluation to competency-based performance examinations, it is still important to implement such changes skillfully and in ways that minimize their stress and anxiety. Using different evaluation methods in high-risk situations without adequate pre-orientation is more likely to measure the student's ability to control anxiety than to demonstrate his/her clinical competence. Protecting the student's personal investment in success is most easily accomplished by making competency-based evaluation a common practice in all courses throughout the curriculum, using consistent and objective protocols. Such an evaluation system can be most easily introduced in lower risk situations such as classroom and laboratory settings. Here, evaluation pertains to competencies such as critical thinking, problem solving, planning, analyzing, or designing, as verified through written papers, presentations, projects, and various psychomotor and health assessment skills. All of these abilities, nonetheless, are based on predetermined critical elements that define competence. In this way, students come to expect an evaluation standard that is consistently applied to all students and situations during each testing time, which increases learning capacity and confidence.

Another method to reduce student anxiety, consistent with competency-based education, is to inform students from the beginning of the course of the specific clinical outcomes and skills, and the level of competence, they will be expected to achieve. When students understand what they need to learn, at what level of ability, and through what means, they are more likely to achieve performance expectations and become competent. The truth of an old saying is evident: Chance favors the prepared mind. During each clinical experience, students practice the various skills required for the course and thereby prepare for one or more competency performance examinations (CPEs). Also during this learning time, faculty are available to guide them, based on assessment (evaluation) of their progress through the use of anecdotal notes and conferences. For example, each day students assess their patients, and when questions or uncertainties occur, faculty assist them with the particular skill. At the end of the semester, however, students are expected to perform such abilities without difficulty, according to the standard set by the faculty and published in the course syllabus.

The students' investment in success can be further protected by assuring them that one or more objective clinical performance assessments will be conducted only under predetermined and carefully controlled circumstances, using predictable patient care situations or simulations. Student anxiety is substantially reduced when the detailed outline of all CPE requirements used to document course outcomes and a passing course grade is included in the course syllabus distributed at the beginning of the course. A CPE generally is not designed to measure how well students can cope with the myriad changes that could occur in patient care situations. Rather, it is the validation of the specific clinical skills and level of competence, stated as specific critical elements, that are required for that episode or course.

During the learning component of a clinical course, faculty plan multiple and varied patient experiences to maximize student learning in providing or managing patient care in challenging and complex situations. They are available to direct, guide, teach, and evaluate student progress. When students have questions or need help, faculty are there to coach and assist them in performing particular skills. However, at the end of the course, students are expected to accomplish the clinical skill without difficulty or assistance, according to the standard set by the faculty and published in the course syllabus. Faculty are responsible for ensuring that CPEs are conducted in the most stable and controlled environment possible. To do otherwise would be akin to administering the final course examination in the midst of a student cafeteria or during a fire drill.

Third, the psychological risks associated with using strict CPEs can be reduced substantially by directly confronting one of their most anxiety-producing aspects. An effective performance-based system accommodates students who have a bad clinical day by allowing for a re-test period according to carefully controlled protocols and conditions. Not all students will need a re-test; a planned mechanism and corresponding policies, however, need to be in place for this eventuality. Proactively planning for an adequate re-test system promotes a fair, objective, and consistent method to validate competence, not just the idiosyncratic circumstances of a particular day.

Designing an evaluation system that protects the investment of students also requires recognizing the investment of their constituencies who are quickly introduced into the system when the student fails. As experienced faculty know, a failing clinical grade suddenly brings significant others into the process, including college and university administrators, faculty on grade appeals committees, and even local politicians. Protecting student constituent investments requires understanding their stake in the situation and proactively planning strategies to deal with them with a minimum of disruption and negative fallout.

Protecting the Faculty's Investment

The investment of faculty is both personal and professional. They bring their own student history, practice history, personal belief systems, and faculty educational experiences into the evaluation process. Clearly, these factors differ for every faculty member. Students readily allude to low inter-rater reliability among faculty in the traditional evaluation process whenever they experience a clinical failure. Professionally, nursing faculty exercise a strategically important gatekeeping function for the school and the profession of nursing.

The lack of evaluator consistency also is a problem faculty encounter with the constituencies they believe they represent. They are astounded when other nursing faculty, other university faculty on grade appeals committees, nursing deans and directors, university administrators, politicians, and others do not immediately accept faculty judgment as being infallible. The faculty's astonishment is related in part to their belief that these individuals or groups are their own constituencies, not the students'.

Consequently, failing clinical evaluations often become highly polarized problems in which dissenters just do not understand the complex issues at stake. Unconsciously, faculty take on the role of Sisyphus, the Greek mythological figure condemned by Zeus to roll a stone uphill eternally, only to have it always roll back down upon him. Faculty believe the stone they are continually trying to roll uphill represents the standards of the profession and the protection of the public. They feel thwarted in this effort by dissenters who do not understand the importance of evaluation outcomes or the faculty's responsibility to ensure actual competence for practice.


Implementing a competency-based evaluation system requires protecting both individual and collective faculty investments. The personal values and experiences faculty have accumulated over time are important enough to be protected. Indeed, it is the richness and depth of these factors that help them to be potentially outstanding teachers. The teaching component of their role is the time in which all of these many values can and should be imparted to students. Formally recognizing and strengthening these important contributions to the teaching mission of the school is important for faculty development, motivation and satisfaction. It is essential to minimize threats to these valuable contributions.

To implement a competency-based assessment system, faculty have to agree to acknowledge and put aside many of their individual biases and traditional practices. Instead, they need to learn the multiple safeguards and benefits of a holistic system, like the Lenburg COPA Model (Lenburg, 1979, 1992-1995, 1999b, and other articles in this issue; Luttrell et al., 1999). The model requires identified competency outcomes and structured performance assessment methods, including interrelated content, logistics, and policies. Implementing such a cohesive, comprehensive, and structured model is a major paradigmatic shift for faculty because it appears to challenge their values and their ability to be fair and impartial evaluators. Moreover, the competency-based clinical performance examination (CPE) approach strips faculty of another cherished defense: dependence on cumulative anecdotal notes, however subjective or incomplete they may be. In the performance assessment model suggested here, anecdotal notes are used, but only to promote learning, not to determine actual competence following the learning period. Clinical faculty still need anecdotal notes to monitor the learning experiences of each student and to provide guidance for ongoing and progressive learning, helping students prepare for summative CPEs and subsequently for actual competent practice.

The second faculty investment that must be protected is their professional stake in the system and their important gatekeeping functions. Implementing performance-based examinations requires that they come to consensus on all aspects of objective clinical assessment, including content, methods, logistics and policies. Realistically, this means that the most stringent and the most lenient faculty evaluators have to agree and actually use the same metric, the same protocols, and the same competence performance assessment methods. Historically, faculty at these polar opposites, usually present in every program, have balanced each other out. Once the faculty have made the commitment to use a comprehensive and systematic model to document actual clinical competence, however, they are bound as professionals and educators to use objective and consistent standards for development, implementation and evaluation of the process. Faculty who continue to do as they always have done, or take independent actions to modify aspects of the agreed upon comprehensive process, perpetuate the problems of individual prejudices, subjectivity and inconsistency that undermine a valid evaluation system.

Protecting the Staff Nurse's Investment

The staff nurse's investment in remaining a competent clinician is often forgotten in discussions of clinical evaluation. Staff nurses enter into employment agreements believing they will be successful in practice. They bring with them the investment of their nursing education and often years of successful nursing practice experience. The investment and value placed on them make it imperative to keep clinical competence in the foreground of serious discussions about changing evaluation methods.

Staff nurses often bring years of successful practice experience with them to an evaluation process. They have a successful history with patients, nursing and physician colleagues, other nursing administrators, and other agency staff. Rightfully, they consider their professional reputation that has been carefully built over time to be at stake in any evaluative process. Basing an annual evaluation on the recollection of isolated events over time, one untoward critical incident, and/or other clinical events may precipitate higher levels of staff anxiety than necessary.

Staff nurses or clinicians in any employment setting have a personal set of performance standards they have developed over time as a result of their professional education, continuing education, professional literature, and practice. Melded within their individualized practice standards are the mores and cultures of former practice settings as well as those of their current professional practice culture. These individualized practice standards are continually affected by environmental changes such as new equipment, time constraints, staffing patterns, patient acuity levels, and demands from payer and accreditation agencies.


Within the same practice settings, individual clinicians do not always incorporate the practice standards and resource constraints in the same way. Consequently, in any practice setting many different ways of competently accomplishing the same patient outcomes may be used. Protecting the staff nurse's investment in competence requires designing performance evaluation systems that accommodate these intra-individual differences.

In addition to these intra-individual differences, evaluation systems need to take into account the continuing rapid changes occurring in the practice environment. These changes require clinical judgments and decisions that often must be made without all of the data that will later become available. Staff nurses and clinicians need to be confident that their decisions will be evaluated based on the clinical data and the human and material resources that were available at the time the practice decision had to be made. For these and other reasons, competency-based evaluations that use some form of controlled patient care situations or simulations are more effective and pragmatic in many environments.

Within practice settings, nurses will have far more confidence in a system that uses equivalent evaluation standards for all clinicians in a particular setting or at a particular level. While differing informal expectations of nurses may be established based on level of experience and expertise, nursing personnel need to be confident that the standards upon which they are evaluated are clearly explicated and fully known to them in advance, without subjective embellishment. The complexity of the practice environment clearly invites staff nurse and clinician participation in the development of competency-based performance evaluation standards. Developing well-accepted performance standards that are both valid and reliable requires input from nurses and clinicians with a wide range of experience and preparation, as well as from others with expertise in assessment and agency considerations.

Protecting the Nurse Manager's Investment

Nurse managers bring to the practice setting and the evaluation process their own personal practice history, values, and judgments. Consequently, nurse managers may differ widely in their personal perspectives on staff competence. Nurse managers practice in settings with other nurse managers who may well have differing expectations for staff performance. What may be fully acceptable to one nurse manager may be unacceptable to others in the same agency. The margin of error that may be acceptable in one practice setting may be wholly unacceptable in another because of differences in patient population. Nurse managers are also middle managers who are constantly negotiating the varying demands and expectations of multiple constituencies, including other departments, physicians and other health care providers, payers, other clinical units within nursing, patients, families, and the consuming public. These expectations differ for different nursing units within an agency. While negotiating all of these variances, nurse managers are held accountable for the care provided in their management setting, and thus their investment in evaluation is multifaceted and important to them.


Nurse managers also may experience many of the same problems faculty experience when negative evaluations occur. Like faculty, they are stunned that a negative evaluation of a staff nurse can be a highly polarizing event among shift supervisors, physicians, and other nurse managers who hold widely different views of the practice of the evaluated nurse. Protecting the investments of nurse managers requires their collaborative work to develop common expectations that will be used consistently in a competency-based evaluation system. This requires consensus on content, process, logistics, and policies, as well as a high level of cooperation, mutual trust, and a shared commitment to successful implementation of the entire competency evaluation process.

Constituent Interactions

The academic system accords faculty the right and responsibility to assign grades; similarly, the employing agency accords managers the right and responsibility to evaluate staff. The perspective of students and staff nurses is that this responsibility is accompanied by awesome power on the part of the evaluator. Indeed, faculty do have the sole right to assign grades based upon a reasonable standard, and managers likewise have the right to evaluate staff based on a reasonable standard. For evaluations (grades or employment evaluations) to be meaningful, however, they cannot, and should not, be able to be overturned for political expediency. Some students and staff nurses attempt to politicize the situation by involving all possible constituencies at their disposal; they involve persons with positional power as a way of balancing their own perceived powerlessness to change a negative evaluation (grade) and outcome.

The litmus test used by all of the student and staff nurse constituencies is whether the evaluation (grade) was fairly earned, from a legal as well as an evaluation perspective. Would another reasonable person (layperson) have reached this same judgment? Because so many constituencies ask this question, faculty and managers have a long-standing tradition of gathering evidence over time through the use of anecdotal notes to assuage concerns about due process. The results of CPEs, however, are not based on historical anecdotal notes, but rather on the actual performance of specific predetermined critical elements that define expected competence for a specific assessment situation.

In many situations, students and staff nurses who appeal a grade or an evaluation have a distinct advantage in this litmus testing process; they present their case to laypersons or other non-nurse health care providers who may not understand the practice or mores of professional nurses or the principles of criterion-referenced assessment, but who hold designated authority. One way to avoid problems and unwarranted direct intervention by the many non-nursing constituencies who may be engaged in the process is to construct and implement CPEs that are written in highly specific and objective language easily understood by students, staff nurses, and stakeholders. For example, a competency test of medication administration depends on whether or not the prescribed drug was administered to the designated person, at the prescribed time, via the designated route. A CPE is neither the time nor the place to ask the person to demonstrate how knowledgeable they are about medications; that is best documented in controlled cognitive-based written examinations or other forms of assessment, especially at the undergraduate level. Negative evaluations based on factors not specifically included in the performance examination protocols almost guarantee that the evaluated person will gain support from their constituencies if they perceive the process to be unfair. Such deviations can be compared to a teacher interrupting students in the midst of a multiple-choice examination to query them about content that is not included on the test.

It is critically important to notify and orient students and staff about the change to a competency-based evaluation system early in the process. This same approach, however, is not as effective with their external constituencies. Long and detailed explanations of a competency-based evaluation system to laypersons probably will not clarify the process for them, and may in fact lead to more misunderstanding. In reality, most of the lay public believes that a competency and performance-based process already is being used to ensure consumer safety and care.

Reducing the Risk of System Failure

As responsible persons in the education and practice arenas are faced with changing to competency-based, outcome-focused expectations for performance, they also are confronted with their own fears, concerns, and anxieties, which often become potential or actual threats to successful implementation (Kupperschmidt & Burns, 1997). Typically, their fears are related to change in general and potential failure in particular. Anxieties also are related to anticipated additional demands on time and energy. Moreover, their concerns may relate to the seeming lack of necessity to change something that is not broken, at least in their eyes.

Faculty and managers typically do not welcome change readily. Change means a time in which the familiar is superseded by the unfamiliar, which often causes negative, even hostile, feelings. Also, when something new is tried, they are likely to fear being unsuccessful, and thus feel insecure and at risk.


Those on both sides of the evaluation coin (faculty and managers, and students and staff) are likely to be afraid of failure, but may try to conceal such anxiety or uncertainty. Those with more experience in coping with change and related feelings of threat to self-confidence tend to be more successful in the process. When key players are an integral part of the decision process and understand the rationale for change, they frequently become excited and motivated to move ahead with a new sense of urgency. They also begin to realize that such competency outcomes and assessment methods are more effective when they are used with more interactive learning strategies that promote competence in the multitude of skills required for contemporary nursing practice. Change presents an opportunity for a new beginning in all of these spheres, in spite of the potential threats.

Implementing major change is difficult as it usually means an increased workload for faculty or managers. Faculty are also trying to meet ongoing demands of teaching, research, and/or service. Managers have multiple responsibilities of coordinating time, people and material resources. Coping with the present responsibilities does not allow much time or energy to respond to multiple additional assignments. Administrators can facilitate the change process and reduce stress and resistance by redistributing work loads to be more equitable for everyone involved and refraining from adding any other programs or projects during such a major change process.

Managers and faculty alike may resist change unless they come to believe that the new product (or result) will be better than the present one. Basically, they ask a legitimate question: Why fix something that does not appear to be broken? If graduates are passing state board examinations or staff are functioning adequately, why change to something that may not have the same or better outcomes? One way to help them deal with their uncertainty or resistance is to engage them in the process of determining the types of skills and extent of competence they believe are essential to meet current and anticipated levels of performance.

The process of reaching consensus on the specific skills and the required critical elements for each of them generally helps to reduce many overt concerns about subsequent competence in performance. Also, pilot testing the CPEs and related processes while keeping the previous evaluation tools in place during the transition period often relieves anxiety about switching to totally different methods and attendant consequences. Faculty and managers need time to complete and refine the many new methods while student clinical grades or staff appraisals are not yet totally or irrevocably affected. Ultimately, clinical faculty who become confident in using the new competency performance methods are more effective in helping students or staff to alleviate their own anxiety and to become more competent and confident practitioners.

All of the above strategies for protecting the investments of faculty and managers require scheduling sufficient time for development and implementation of the multiple components of a comprehensive competency-based assessment system. Frequent workshops designed to help them work through uncertainties and concerns and develop details of content and process are essential to reducing the threats associated with change. Faculty and managers can develop trust and confidence in all components of the competency-based assessment system when they understand and embrace the conceptual framework and psychometric principles that undergird all facets of its design. This tested foundation, when implemented fully, promotes objectivity, consistency, and documented competence of participants.

Discussion and Future Directions

Implementing a new and more rigorous system of validating competence is threatening and potentially problematic to individuals and agencies and thus requires considerable deliberation and planning. Students and teachers in academic programs and staff and managers in service settings typically experience similar kinds of concerns and have similar needs for preparation and protection from negative fallout. Plans to minimize anticipated negative events for the many categories of persons involved are an essential part of making such a paradigmatic change successful. It is incumbent upon those in leadership positions to thoughtfully reflect on the consequences of recommended changes for each level or type of participant to be involved and to determine methods to preserve morale, productivity and active involvement in the process. Using strategies that have worked in the past and creating additional methods based on the experience of others is important, especially during the early stages of development and implementation.

Individuals or groups of persons who are subject to new performance standards and levels of verifiable competence stand to win or lose favor among others, and so their various constituencies and all the diverse stakeholders become involved. If those being assessed do not meet established protocols for demonstrating competence, they face the consequences of jeopardizing a current position or losing a job, or of failing a course or failing out of a degree program on which a career is planned. If the aggregate evaluation outcomes for agencies or other groups are less than required, they, too, face negative fallout, such as loss of reputation and even accreditation. When faced with such consequences, it is no wonder that everyone gets somewhat anxious about various aspects of shifting from traditional to more contemporary competency-based performance assessment methods. Clearly, then, those most likely to experience either negative or positive outcomes of the process need to be involved as soon as possible; their role is to help creatively resolve problems that are likely to occur, as well as to design strategies to use for the unanticipated events that surely will emerge.

While these proposals present challenges at every level, continuing with traditional strategies poses even greater and more negative consequences during these times of rapid change in the health care delivery system nationwide. The need for some form of mandatory and regular validation of continuing competence is so apparent that administrators and leaders in many academic and employment agencies and organizations already have initiated or expanded their efforts to implement effective and objective performance assessment systems. Preparing for the extraordinarily complex and unpredictable demands of the new millennium requires that rigorous but realistic learning and assessment opportunities be accessible for providers to upgrade and verify the broad array of essential practice skills. Such ongoing learning and verification are expected to become the norm for initial or continuing licensure and advanced practice certification. It is no longer enough to have completed a program of study sometime in the past. The demonstration of actual competence, in real time situations, is fast becoming the norm for which every student, staff employee, educator, and administrator must prepare. Achieving and validating continuing competence is one of today's most serious, challenging, and contentious issues. As with major problems of the past, invested leaders and participants across the spectrum of the profession will confront this one as well, with resolve to create effective alternative solutions to fit the circumstances and meet the needs of consumers and diverse members of the profession.

Summary

Successful implementation of a criterion-referenced, competency-based evaluation system requires understanding the perspectives of diverse stakeholders and protecting their significant investments in the outcomes of such systems. Incorporating their realistic concerns and shared values for practice standards enables managers and faculty to design and implement more effective and acceptable learning and competency assessment methods. Building bridges of mutual trust and respect is an essential component of coping effectively with the potential threats while meeting the demands for more competent practitioners in contemporary health care environments.

Authors

Toni Bargagliotti, DNSc, RN
E-mail: tbargagl@memphis.edu

Toni Bargagliotti, DNSc, RN is Dean and Professor at The University of Memphis Loewenberg School of Nursing.

Marjorie Luttrell, PhD, RN
E-mail: mluttrel@memphis.edu

Marjorie Luttrell, PhD, RN is an Associate Professor at The University of Memphis Loewenberg School of Nursing.

Carrie B. Lenburg, EdD, RN, FAAN
E-mail: clenburg@naxs.net

Carrie B. Lenburg, EdD, RN, FAAN is Loewenberg Chair of Excellence at The University of Memphis Loewenberg School of Nursing.

The Loewenberg School of Nursing has completed a process of curricular change to competency outcomes and performance-based evaluation with Dr. Lenburg as our Loewenberg Chair of Excellence. This article relates ways of reducing threats to the implementation of this competency-based performance assessment system.


© 1999 Online Journal of Issues in Nursing
Article published September 30, 1999

References

American Association of Colleges of Nursing. (1998). The essentials of baccalaureate education for professional nursing practice. Washington, D.C.: Author.

Joint Commission on Accreditation of Healthcare Organizations. (1996). Comprehensive accreditation manual for hospitals: The official handbook. Oakbrook, IL: Author.

Kupperschmidt, B.R., & Burns, P. (1997). Curriculum revision isn't just change: It's transition! Journal of Professional Nursing, 13, 90-98.

Lenburg, C.B. (1979). The Clinical performance examination: Development and implementation. New York: Appleton-Century-Crofts.

Lenburg, C.B. (1990). Do external degree programs really work? Nursing Outlook, 36, 234-238.

Lenburg, C.B. (1991). Assessing the goals of nursing education: Issues and approaches to evaluation of outcomes. In M. Garbin (Ed.), Assessing education outcomes. New York: NLN Press.

Lenburg, C.B. (1992-1995). Competency-based outcomes and performance assessment. Unpublished workshop materials for several institutions or organizations, such as Fairleigh Dickinson University, East TN State University, College of Mount St Joseph, the American Association of Critical Care Nurses, and others.

Lenburg, C.B. (1998). Competency-based outcomes and performance assessment: The COPA Model. Unpublished workshop materials: The University of Memphis, and University of Colorado, Health Science Center.

Lenburg, C.B. (1999a). Contemporary issues in nursing education. In B. Cherry & S.R. Jacob (Eds.), Contemporary nursing: Issues, trends and management (pp. 66-97). St. Louis: Mosby.

Lenburg, C.B. (1999b, in process). The competency outcomes and performance assessment model applied to nursing case management systems. In E.L. Cohen & T.G. Cesta (Eds.), Case management: From concept to evaluation (3rd ed.). St. Louis: Mosby.

Lenburg, C.B., & Mitchell, C.A. (1991). Assessment of outcomes: The design and use of real and simulation nursing performance examinations. Nursing and Health Care, 12, 68-74.

Luttrell, M.F., Lenburg, C.B., Scherubel, J.C., Jacob, S.R., & Koch, R.W. (1999). Redesigning a BSN curriculum: Competency outcomes for learning and performance assessment. Nursing and Health Care Perspectives, 20, 134-141.

National League for Nursing Accrediting Commission. (1997). Accreditation manual for post secondary, baccalaureate and higher degree programs in nursing. New York: Author.

Nolan, P. (1998). Competencies drive decision making. Nursing Management, 29 (3), 27-29.

Orchard, C. (1994). The nurse educator and the nursing student: A review of the issue of clinical evaluation procedures. Journal of Nursing Education, 33, 245-255.

Citation: Bargagliotti, T., Luttrell, M., Lenburg, C. (Sept. 30, 1999): "Reducing Threats to the Implementation of a Competency-Based Performance Assessment System". Online Journal of Issues in Nursing. Vol 4, No. 2, Manuscript 4.