Hi there, it’s Adam. I started this newsletter to provide a no-bullshit, guided approach to solving some of the hardest problems for people and companies. That includes Growth, Product, company building and parenting while working. Subscribe and never miss an issue. If you’re a parent or parenting-curious I’ve got a podcast on fatherhood and company-building - check out Startup Dad. Questions you’d like to see me answer? Ask them here.
Welcome to another 🔥 Hot Take Alert 🔥 where I opine on something that I feel very strongly about and occasionally try to make it a little bit better. I only do this once every few months so you won’t see another one of these for a while.
Past 🔥 Hot Take Alerts 🔥 have included:
🔥 Hot Take Alert #1: PRDs are the worst way to drive product progress
🔥 Hot Take Alert #5: No you shouldn’t do a Spotify Wrapped campaign
I was having a conversation with a friend (Max) recently about performance reviews. Yes, this is a thing that I talk to friends about. I swear I’m still fun at parties. Usually these are friends who work in startups and technology.
His take:
“Performance reviews are like capitalism. Lots of problems but no one has invented a better system yet.”
Having gone through ~20 years of performance reviews as an individual contributor, a “line manager,” and an executive, I wholeheartedly agree with this statement.
Performance reviews are kind of like OKRs - we’ve adopted them in startups and tech companies but we don’t really know why or how we got here.
In today’s newsletter I’ll cover the following:
The history of performance reviews
Why they’re problematic and not that helpful
What else we can do instead of what we’ve always done
The History of Performance Reviews
Like so many things in startups, performance reviews have their origin in military history. They got started during World War I when the U.S. military created a merit-based rating system it could use to flag (and dismiss) poor performers. Heading into World War II they were revamped into a forced-ranking system that identified the top performers who would then be recommended for officer training.
As people exited the military and brought that style of management to corporate America, companies started to adopt performance management and reviews. By the 1940s about 60% of companies were using performance reviews.
This exploded in the 1950s when the government passed the Performance Rating Act (I am not joking, this was really a thing). It established a new system for evaluating all civil servants and created the scale that many of us are familiar with today: ‘outstanding, satisfactory and unsatisfactory.’ This was followed in ‘54 by the Incentive Awards Act (yep, also a real thing), which allowed federal employees to reward their subordinates for outstanding accomplishments.
By the 1960s about 90% of employers in the U.S. had implemented a system based on these two government acts. When inflation exploded in the 1970s, companies needed more objective measurements for merit-based compensation, further cementing the performance management system. Then, in the 1980s, management powerhouse Jack Welch took this a step further by adopting the military’s forced-ranking system at GE. This is the “bell curve” system where the top 10% are rewarded, the bottom 10% are cut, and everyone in the middle just continues to exist. This started the terminology around “A” players, “B” players and “C” players that exists today. Thanks Jack.
The consultants doubled down in the 90s when McKinsey published The War for Talent, about attracting and retaining the best workers. We then entered the 2000s following the dot-com boom and bust, which led to fewer managers, flatter organizations, and a pendulum shift away from performance reviews (or any reviews and feedback at all).
Throughout the next few decades companies experimented with dropping performance reviews altogether, bringing them back, and generally being confused about what they should be doing and optimizing for. After Welch stepped down as GE’s CEO in 2001, the company wound down the forced-ranking system a few years later. Not surprisingly, they had found that it reduced collaboration and increased internal competition (and politicking).
This brings us to today… there are a myriad of performance review processes now: continuous feedback, 360 feedback, project-based reviews, quarterly feedback, etc. There doesn’t seem to be a perfect system, but I do think there are certain elements that can be assembled to create a better one.
Before we get there though let’s explore why the annual performance review process is problematic.
Why Performance Reviews Are Problematic and Not All That Helpful
First, let’s establish one thing: as a manager, providing regular, actionable feedback and coaching to employees is a good thing. It’s just that most managers aren’t good at this and they fall back on the timetables of whatever system their company participates in. I had a manager once who believed that subordinate employees should be afraid of their manager. You’ll be shocked to learn he wasn’t very good at being a manager.
Here are a few of the ways the typical review process is broken:
It takes way too long to complete
It’s lumpy and full of recency bias
It’s infrequent
It looks backwards more than forwards
It’s typically coupled with promotion and career advancement
It takes way too long to complete
The typical review process is annual or biannual (twice per year) and takes weeks of time. In one company I worked for, we observed a noticeable drop in our metrics during the annual review cycle (which also coincided with annual planning) as employees and managers wrote tomes to justify (or defend) a certain rating.
Partially because it is infrequent, but also because it is tied to promotion and career advancement, the review process takes on a great weight for those who participate. Get a less-than-stellar score and it feels like you’ll be dropped into a hole too steep to climb out of; get an amazing, chart-topping score and you’ll probably be moved into a new, stretch area where you’re a lot less comfortable. The pressure is real and palpable; everything seems like a one-way door decision.
Lumpy and full of recency bias
Most companies will do their performance evaluations once or twice per year. This means both the participant and the reviewer have to pull from six (or twelve!) months of performance history. I don’t know about you, but I can barely remember what I had for breakfast this morning. Unless you’ve been diligent about keeping a running document of personal progress (as an individual and a manager), you’re going to default to what you can remember, which is often the most recent memories. Most people aren’t great at actively working against recency bias.
Infrequent
There’s a reason that we do retrospectives at the end of each sprint in product development. The feedback is fresh and can be carried forward into the very next sprint. You can actively work on improving the process in near real-time. With scheduled performance reviews this doesn’t happen as much. There is a tendency to avoid feedback conversations and save everything for the review because that’s where it will be officially documented into “the system.” But if someone is succeeding or struggling it’s best to have the conversation in the moment, because you can be specific and actionable. By the way, if you're having trouble with difficult conversations, I recommend the PESOS framework from a newsletter article I published a few years ago.
Backwards, not forwards
The very origin of the word review is a French word meaning “to see again.” That is the challenge with performance reviews: they are quite literally a look backward at what you have done, with far less emphasis on what you will do going forward. In fact, if review templates do include a forward-looking plan, it’s often the last part of the review, when people are exhausted and have less to say (because they just want to be finished).
Coupled with promotion and career advancement
Most review processes are tied to the promotion process. You document all the achievements and feedback over the last six to twelve months and then make a case for going to the next level in the career ladder. This has a tendency to bias people towards positive feedback only, especially when reviews are done infrequently. There’s a sense that “this is my only chance for a while so I’d better push for that promotion.”
Beyond these five issues there are many more reasons that performance reviews as we’ve always done them aren’t that helpful: different teams have different scales of evaluation, career levels may not be well-defined, leaders can end up grading on a curve, they’re not necessarily outcomes-based, they don’t reward risk-taking, and on and on and on. I’m trying to keep this hot take on the shorter side.
What else we can do instead of what we’ve always done
I don’t want to throw out everything that exists with performance reviews or imply that it’s a bad idea to review the performance of our team members. Swinging the pendulum to no reviews and no feedback would be a far worse outcome for our employees and our companies.
Instead, here are six changes I believe could have a significant impact on the current performance review process and everyone’s participation in it:
Leveling archetypes
Increase frequency
Focus on the future
Decouple from compensation
Customize based on project and employee lifecycle
Specificity
Leveling Archetypes
Most companies will establish a career leveling guide at some point for the various teams in the organization. Reforge has an entire set of artifacts (some of which I created) that help you navigate this process. Start there if you don’t have this already. They all kind of look the same in the end.
But one thing I’ve started to believe in more heavily as of late is the idea of archetypes. I wrote about this for Product Managers here and it was a widely distributed and shared newsletter article. The idea of archetypes as it applies to leveling is that you have competencies and skills that map to the different tracks employees are on. If you’re building features, you’d have an archetype for that. If you’re inventing new adjacencies, you’d have an archetype for that. You could also highlight specific people in the organization (risky, I know) who embody the competencies and behaviors that make someone successful on that particular track.
Another opportunity within leveling is to layer in “this, not that” examples. Very specifically identifying what great looks like for a given competency, and dispelling the myths about what people might think great looks like, can be very helpful. This might look something like…
Mastering the ability to collect customer feedback looks like:
Having multiple, first-party touch points with customers each week, not reading a research report every so often.
A lot of product managers think that they’re doing their job in collecting customer feedback by outsourcing that collection to a research team and reviewing their findings. Nope. Not the job. This clarifies that very specifically.
Further defining mastery is a third way of improving leveling guides. One thing that I see most companies miss is consistency of application. As most grandfathers have said, “even a broken clock is right twice a day.” In practice this means that doing something once does not mean you have mastered it. Seeing consistent and recurrent application leads to mastery. I often coach product managers to understand that consistency is what really matters because that leads to sustainable impact.
Increase Frequency
There is a continuum from continuous feedback to once-per-year. I don’t think you need continuous, but you should be closer to that end of the continuum.
What this means in practice is that you as a manager (or employee!) carve out specific, monthly feedback conversations and work towards a PESOS-style feedback loop for both positive and constructive feedback. If you (manager) observe something you think is great happening, share it with the employee and say “please do more of this.” It can also be helpful to publicly share that feedback (positive = public, constructive = private) because it helps to build that archetype.
If you’re increasing frequency then you don’t need a novel for each feedback discussion. A simple monthly rubric with bullets can suffice: what’s going well, what’s not going well, what needs to change.
My friend Luc Levesque talks about the monthly feedback process in this interview in the NYTimes from a few years back. It doesn’t have to be complicated and you can start today. Even if your company does annual or bi-annual reviews, if you've documented and aligned on monthly feedback with your team members then you’ve got an entire catalog of performance to draw on when it comes time for the company-approved review process.
Focus On The Future
Yes, a look back is important, and we know that those who fail to learn from history are doomed to repeat it. But we also know that in a fast-growing company the future matters more than the past. In the monthly review cadence you work with your team members to lay out the goals for the next month, and in an annual or bi-annual process you can explicitly state goals for the next ~6-12 months. The benefit of doing both is that as the landscape changes you’re capturing it in your monthly look ahead. You’re also adjusting the rudder while the boat is sailing and steering it in the direction you want. By also doing this over a longer time horizon, you’re working with team members to articulate the future that they want for themselves, which often can’t be achieved in a matter of months.
The way I’ve handled this is with a rolling review process. Every month when you set the goals for the next month you can also “forecast” the goals for a quarter or half-year into the future. This helps you balance the long term with the short term but emphasizes future development over rehashing the past.
Decouple from Compensation
This may be one of the hardest of my recommendations to implement. It certainly makes the lives of HR and finance teams more complex. But hard doesn’t mean impossible. Nothing worth doing is ever easy!
First, separate the timing of reviews and compensation discussions. Reviews can happen more frequently, with a lighter touch, and be development-oriented (and focused on the future!). Compensation can be reviewed annually with an emphasis on market conditions, cost of living, and (this is important) overall company performance.
Second, I recommend focusing the reviews on development only. You can shift the evaluation from past performance to future growth and build development plans that chart a path for where the employee wants to be. Don’t mention compensation. Specifically removing comp from the discussion makes the feedback process a tool for personal and professional growth rather than a determinant of pay. After reviews there could be a comp adjustment, but it’s not something that should be expected.
Third, introduce performance bonuses and variable comp. If something goes incredibly well and achieves a major outcome you can give people a bonus. You’re redistributing a portion of the proceeds to the team that made it happen. You can also communicate this broadly – remember those archetypes?!? It models the behavior you want to see.
Customize Based on Project and Employee Lifecycle
Similar to the performance bonus you can reward exceptional results on a project with additional compensation. Separately though, it’s important to recognize where an employee is along the lifecycle of their competence.
When someone is doing really well we have a tendency to promote or advance them to the next level. This is the Peter Principle in action. The problem is that we don’t acknowledge the coaching and support they’ll need at this next level. Whereas previously they may have been 90-100% comfortable with what they’re doing, in the new role they might be 50% comfortable and have a 50% knowledge gap. An infrequent performance review won’t catch and address this for months. Recognizing that in a new role someone needs more hands-on management (and maybe a dose of micromanagement) is important! I cover this extensively in a recent article that focuses on situational leadership.
Specificity
Be specific. Everywhere. Monthly reviews allow you to do this because you have detailed recollection of the various actions and behaviors you want to call out. This is also where real-time feedback lets you capture the exact moment something occurred, while it’s still fresh for everyone.
The way that I handle this is with a combination of asynchronous and synchronous feedback. Let’s use a specific example (see what I did there):
I once had a team member who was leading a product review. We had prepped in advance but it was clear that as they reached the midway point of the review they were spinning their wheels and not landing the points they wanted to land. I let them struggle for a few minutes and then intervened to get us back on track. I interjected more in the remainder of the meeting and we ended in a good place.
Immediately after the meeting I messaged the team member and said, “Hey, do you have a few minutes to talk about that review? I’d like to discuss what went well and what didn’t.” We chatted and covered most of the feedback in real time. But I also know that I can be more of an asynchronous thinker and wanted to gather some other inputs so I asked if they’d be comfortable with me following up a second time to provide additional, written feedback. They were and I did.
Being able to follow up asynchronously gave me the opportunity to gather perspectives beyond my own and provide more inputs to the person I was coaching. Others observed different things than I did, which allowed me to get a lot more specific in my feedback and make it more comprehensive.
Closing Thoughts
I come back to what my friend Max said in the beginning: we know there are problems with performance reviews but we don’t have a better system (yet). I don’t think we need to throw everything out and start from scratch, but I do think that some of the changes I mention above can have a profound effect on the quality of the process and its outcomes.
To recap:
Introduce leveling archetypes and a “this, not that” component to leveling guides.
Conduct more frequent and simpler reviews. Monthly is great and real-time course correction is also beneficial.
Focus on the future. Performance reviews should be about steering performance in the direction that you and the employee want it to go. Just rehashing the past and emptying the tank on what has already happened won’t get you there.
Decouple from compensation. Tying compensation to performance reviews changes the dynamic of the conversation. It focuses more on highlighting the good to increase compensation rather than on the development opportunities that lead to long-term outcomes for the employee and the company.
Customize based on project and employee lifecycle. Practice situational leadership by recognizing where someone is along the competency continuum.
Specificity. Provide specific examples of the behaviors you want to both encourage and discourage.
Following these six improvements can help reduce completion time and recency bias, steer the ship in real time by focusing more on the future, and increase employee engagement in the process.