It’s review season
Make sure you submit your materials
It’s getting close to the end of the year, and that means different things to different people. Holiday season. Ugly sweater season. Duck season. But at universities, it’s always review season.
The terms assessment and deanlet go hand in hand, each feeding the other in an ouroboros of administrative lust. That passion does not go unanswered, because there are so many things to assess. There’s the annual review of faculty. This of course is separate from the 3-year review and the 5-year review on the road to tenure, with faculty often submitting both annual and milestone reviews within a few months of each other. And then there is the actual Promotion and Tenure review. But it doesn’t stop with tenure. Now, many states have instituted post-tenure review. And then of course there’s the departmental review. And the grad program review. And the 360 review of leaders, in which survey respondents get asked fuzzy questions about leadership. And then the leaders get fuzzy multi-page analyses to see whether there are any gaps between self-assessment and supervisor or peer assessment, so they can adjust their leadership style. (As a note, the main suggestion from my assessment was to use my middle finger less often. Luckily for me, it was only a suggestion.)
There is a whole industry dedicated to approaches for review. When analyzing programs and units, there are various methods of analysis. A popular one is SWOT analysis. You don’t know about SWOT?? This is an acronym for Strengths-Weaknesses-Opportunities-Threats, and as it sounds, you list items in each category that are related to the thing being reviewed. Another is the STAR method for assessing how programs overcame challenges. STAR is an acronym for Situation, Task, Action, Results. In this approach you assess the situation, assess the definition of the task, assess the action taken, and then of course, assess the results.
I have developed a similar approach for defining problems and assessing problem solving. It’s based on simple principles: Formulate the question that needs to be addressed, Understand the problems associated with that question, build Consensus among the team on how to address the problems, and finish with Knowledge Utilization to solve the problem. I call this the FUCKU approach. I think this will be very effective, and I’m ready to employ it at future meetings.
Early faculty review, Henri de Toulouse-Lautrec, 1901

But beyond program assessment, there are also standardized portals for faculty to make sure that all their effort is being captured for review. These programs require all the data already on the standardized CV to be input on yet another standardized form, either online or in another document, so that individual effort can be assessed on the standardized form and then looked at in aggregate by the professional assessors. These tools, as they are unironically called, require extra effort to make sure the data entry is correct; half of the data that is populated by the administration is either incorrect or misplaced. And then of course there’s all the data that is not captured by the administration and needs to be entered by the faculty.
Time consuming? Sure. Twenty years ago, when I was just a wee Assistant Professor, I was completing one of these very same assessment tools (see, they have been around for a long time). One of the questions asked us to assign the proportion of our time spent on the various academic missions. The values I entered were something like this…
Research 75%
Teaching 15%
Service 5%
Filling-in evaluation forms 5%
I partly wanted to see if anyone was reading it, because that last number was clearly an underestimate, especially when you consider all the training required to use the new tool. In retrospect, even though this was before I was a raving and annoyed scientist, I was already a complete smartass, and I’m grateful I’m still here to be annoyed. I don’t think anyone noticed, or if they did, it was before all the deanlets lost their sense of humor.
I’m sure as you’ve read this you’re thinking, “Wow, that’s a lot of reviewing. Is it all necessary, and when does any actual work get done?” These are both great questions. To the former, I will say that in the years between deciding that the last assessment tool wasn’t doing what was intended and implementing the latest and greatest assessment tool that will definitely work this time, we often resorted to simple CV review, and the world did not end. All the information is there. People who know how to read a CV get it. Faculty still get reviewed. Note that I’m not saying faculty shouldn’t be reviewed; there needs to be review so that progress is clear and expectations are communicated. I’m simply proposing that it doesn’t need to be this complicated. And it doesn’t need to enrich the academic assessment-industrial complex, with likely hundreds of thousands of dollars spent to use and customize each tool and a process that fertilizes deanlet existence and growth.
And as for the latter question, when does the actual work get done? We are still making incredible strides in science and medicine. But imagine how much better it could be without unnecessary distractions.

I once paid my middle school kid $20 to copy/paste my info from one long form to the new one, a requirement of the day. My colleagues were envious of my excellent idea, which matched the seriousness with which the task was prescribed. And none of it mattered, except that my kid was as happy with his payment as I was to make it!
Brought back a nightmare memory from a few years ago: one of our deanlets proposed to "soften" the SWOT language with another method called SOAR. (Apparently there is another one called NOISE, but that one is too mean, too.)
I see you all are Watermark Digital Measures, I MEAN "Faculty Success" users too :)