The Subtle Art of Evaluation

Dec. 1, 2002

Personnel evaluations are tricky. The problem: how to provide a fair and honest assessment for leadership and, at the same time, inspire someone to improve rather than to head for the exit sign.

Individual perceptions of an evaluation system that affects promotions, duty assignments, and future careers bear directly on decisions to stay with the Air Force or quit. That retention factor has led service officials to refine the performance rating system–several times–to make it more visible and acceptable.

In fact, the Air Force has tried more than half a dozen evaluation systems over the years and made periodic changes in each before abandoning it in favor of a new approach. Today’s program still is something of a work in progress, but officials feel it comes close to accomplishing the twin goals of giving the service an honest assessment of its members and giving those members blueprints for self-improvement.

In 1947, when the Air Force became a separate service, it still used the Army’s officer evaluation process. This was a simple, multiple-choice form that required supervisors to answer 24 questions by picking from among statements most and least descriptive of the subject’s job performance and personal qualifications. The rater then had to show where the officer fit among all those he had rated.

Two years later, the Air Force had developed its own form. It gave the supervisor twice as many factors to rate and half a page for comments. The ratings were weighted and totaled to give the officer an overall score. The form and the instructions were tweaked periodically over the years, but the same basic practice of matching members against a scale of traits persisted.

Over time, raters tended to give too many officers outstanding reports. In 1974, the Air Force tried to eliminate this kind of inflation by limiting the number of top-box ratings a rater could award. The approach worked to a point but was abandoned after complaints that, in effect, it gave units the power to preselect members for promotion.

An In-Depth Review

In 1995, then-Chief of Staff Gen. Ronald R. Fogleman called for an in-depth review of the officer and, later, the enlisted evaluation processes.

The evaluation review found no major problems with the officer system, according to Col. Carolyn Pratt, chief of the Promotion, Evaluation, and Recognition Division at the Air Force Personnel Center. “They determined that the Officer Performance Report was working as intended,” she said. On the enlisted side there were several recommendations, such as the use of written promotion recommendations and elimination of rating expectations or guidelines.

One thing the 1995 study did change for both officer and enlisted evaluations was to make a bullet format mandatory for the narrative portion, the section where raters describe a member’s capabilities. This section had degenerated to long, wordy descriptive passages that often told little about a member’s performance. While the Air Force had encouraged raters to use a series of terse descriptive phrases rather than complete sentences to reduce the fog of verbiage, it was still optional. “We made it mandatory as a result of the ’95 study,” said Pratt.

Officials said shorter, more succinct wording is more likely to catch the attention of promotion boards. “Obviously,” said Pratt, “the rater who can use a better turn of phrase may engender a better picture in the mind of somebody who is evaluating a record. But I will tell you that the records that stand out are the ones that take the shortest amount of time to get to the point. They are more effective than complete sentences where all the i’s are dotted and the t’s are crossed. You can fall asleep in the middle of those.”

Following the 1995 study, the Air Force also decided to put more stress on feedback–the process in which a rater tells a member where the member fell short and how to improve. Raters now are required to show in writing when they complete such counseling both for enlisted members and officers. The counseling is supposed to be done before the formal evaluation.

“There have been growing pains since we initiated feedback forms and made individuals put on the performance report when that feedback was given,” said Pratt. “We don’t want the performance report itself to be the feedback, because the first time you are told how you are doing shouldn’t be when you get your evaluation. It should be ongoing through the entire reporting period.”

Feedback sessions include showing the member both how he or she is doing and how he or she compares with others in the same peer group. This second assessment, officials admit, is the hardest part of the process for some raters. It should also appear on the evaluation form itself.

Key to Promotions

“The promotion boards are looking for discriminators and when they don’t find them, they tend to do it in reverse by looking at what’s not being said,” explained Pratt. “If you’ve got 10 squadron commanders under a wing, it would help to know if this person is considered by the wing commander to be No. 1 of 10.”

She added, “Boards are looking for that type of stratification these days as opposed to the words that just say this individual is a blue-chip officer. That doesn’t tell me anything. ‘OK, I think they’re good but how good is that?’ We’re looking for hard quantification.”

While evaluations are used for a variety of purposes, including selection for assignments, training, and special duties, the most visible and emotionally charged use for officers is in the promotion process. Officer promotion boards consider awards, decorations, professional military education, and other factors, but place the greatest weight on the annual OPR and the promotion recommendation form–a one-time document prepared for each promotion cycle and discarded after that round.

For senior enlisted personnel, the selection process is similar to that for officers, with one major exception–the Weighted Airman Promotion System, a point scoring process. For the middle enlisted grades, scoring well under WAPS is the primary means to promotion.

Those enlisted members competing for senior and chief master sergeant are evaluated using both the WAPS scoring process and a review of their records by selection boards. The boards review performance reports along with other information.

WAPS provides a point score made up of six elements, each with a different maximum value. The theoretical maximum is 460 points. The six elements are:

  • Enlisted Performance Reports covering an airman’s last five years (maximum of 10 reports), with the most recent reports given the most weight–up to 135 points.
  • Fitness examinations–up to 100 points.
  • Skill knowledge tests–up to 100 points.
  • Time in grade–up to 60 points.
  • Time in service–up to 40 points.
  • Decorations–up to 25 points.
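The arithmetic of those caps can be sketched in a few lines of code. This is an illustrative model only–the field names, the clamping behavior, and the example values are assumptions for demonstration, not the Air Force’s official scoring algorithm.

```python
# Hypothetical sketch of how the published WAPS caps combine into a total.
# Factor names and clamping are illustrative assumptions, not official rules.

WAPS_CAPS = {
    "epr": 135,             # Enlisted Performance Reports (last five years)
    "fitness": 100,         # fitness examinations
    "skill_test": 100,      # skill knowledge tests
    "time_in_grade": 60,
    "time_in_service": 40,
    "decorations": 25,
}

def waps_total(scores: dict) -> int:
    """Sum each factor's score, clamped to its cap."""
    return sum(min(scores.get(factor, 0), cap)
               for factor, cap in WAPS_CAPS.items())

# Example: an airman with strong reports and tests but few decorations.
example = {
    "epr": 130, "fitness": 92, "skill_test": 88,
    "time_in_grade": 45, "time_in_service": 30, "decorations": 10,
}
print(waps_total(example))        # 395
print(sum(WAPS_CAPS.values()))    # theoretical maximum: 460
```

Note how the caps encode the relative weighting critics debate: performance reports can contribute at most 135 of 460 points, so when most candidates carry similar EPR scores, the remaining 325 points of spread come from tests, longevity, and decorations.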

The WAPS Debate

The Air Force developed WAPS in the 1970s by studying the elements that selection boards considered, noting what weight they gave to each, and then duplicating the process mathematically. Because the WAPS formula reduces enlisted ratings to a single numerical value and many junior airmen tend to have similar EPR scores, critics of the system say it shifts undue weight to test scores and other factors. Personnel officials disagree.

“I am sure that there are some folks who think that,” said Pratt, “but the WAPS factors were studied very carefully years ago before it was determined that these were the types of factors we needed to look at. They give us the feedback, perhaps not in the same way an evaluation does, but well enough to show the caliber of the individuals.”

Just to be sure the WAPS process still is working as intended, however, the Air Force is taking another look to determine, as Pratt put it, “if in today’s environment we are still looking at the same things the same way, or whether we should make some changes in the formula.” She added, “It has been tweaked slightly over time, but it is not markedly different from what it was when it was first envisioned.”

Whatever its problems, the evaluation process appears to be understood by most members. On surveys, more than three-quarters of those questioned said they understood the systems. Lower percentages rated the process as fair, but officials said the dissatisfaction is less with the process than with the perception of how it is used.

It’s Not the System

“Most of the complaints that we get are not against the system,” said Pratt, “but about a specific situation that the individual finds himself in. Either that or they involve specific raters who may not have seen the individuals the way they see themselves or as they would like to be seen.”

She added, “Of course, we have processes in place to appeal EPRs and OPRs where people believe that the correct information has not been put forward. But the beef normally is not with the system itself but with the people who write the ticket or with the way it was processed.”

The Air Force itself considers the evaluation process a key part of the career system.

Officials admit, though, that not all contenders are going to make it. According to USAF, most officers believe they are in the top 25 percent of the officer force. However, mathematics dictates that not everyone can be at the top. The idea is to make the evaluation system not only fair, but understandable by everyone, especially those who did not receive the top ratings.

The Air Force has added a number of safeguards to help ensure that all contenders do get the fairest shake possible. One is the management level review, made up of senior raters who study promotion recommendations to see that they are properly prepared and send the messages intended. Another is a procedure that protects officers in student status from being disadvantaged by competing with instructors assigned to the same units. A third is a rule that removes promotion recommendation forms after each board so the officer will not be dogged in the future by a less-than-glowing form.

One way the Air Force has tried to make the evaluation and promotion processes more acceptable to members is by making them more visible. “We still keep the boards on a close-hold in the sense that we don’t allow just anybody to walk in and observe them in session,” said Pratt. “We want to keep safeguards in place to make sure they are conducted the same way time and again.”

The service encourages the people who sit on those boards to be open once the board is completed. “They can’t talk about the deliberative process itself and what they did in that process, but they are encouraged to talk about how the process itself worked, how they were briefed, what kind of charge the secretary gave the board before it convened, how scoring was done, and that sort of thing,” said Pratt. “The members who sit on these boards are highly encouraged to go back out to the commands and to talk about these procedures.”

Air Mobility Command recently circulated the board statistics and included reports by two of the board members who happened to be in that command, noted Pratt. “They outlined their experiences,” she said, adding, “anybody who was looking at the board could tell what went on, how they viewed it, what they looked for, and those types of things.”

Ultimately, however, much of the responsibility for ensuring a fair hearing rests with the members themselves.

The Drawdown Effect

Some observers have speculated that the long personnel drawdown of the 1990s eliminated many less-qualified members, making it more difficult to discriminate among those left. Theoretically, those still retained by the service were all outstanding.

Officials discount this contention. They maintain that it was unlikely only top performers survived the cuts.

“We used a variety of programs during the drawdown,” said Pratt. “Three-quarters of them were voluntary programs, which means we had very little control over who chose to leave and, frankly, we lost a lot of high-quality individuals.”

The involuntary selective early retirement boards, which looked at more senior personnel, “considered the age of the individuals more than the quality of their records,” she said. “They trimmed from the top down as opposed to making it a quality cut. Given another couple of years, those folks would have been gone anyway so it was just slightly earlier.”

The Air Force also conducted one involuntary reduction in force action, targeting more junior personnel. However, Pratt said, “Many of those kids were so young it would be tough to tell you whether they would have turned out to be superchargers or not.”

Bruce D. Callander is a contributing editor of Air Force Magazine. He served tours of active duty during World War II and the Korean War and was editor of Air Force Times from 1972 to 1986. His most recent article, “The Jet Generations,” appeared in the October 2002 issue.