For a group of well-educated, thoughtful people, we do a terrible job of hiring people in the software industry. We put engineers in front of whiteboards and ask them to reproduce obscure algorithms, have hand-wavy discussions about approaches to problems, try to assess “culture fit,” and talk about whether or not we’d want to hang out with candidates after work. The whole situation reeks of, “These are the best ideas we could come up with.”
The way in which most companies hire technical writers is even worse. Hiring managers request writing samples, ask about familiarity with style guides, and lean heavily on experience. None of these things have any relation to a candidate’s ability to do the job:
- Writing samples are essentially meaningless, except as an indicator of a candidate’s taste. That writing sample you’re evaluating might have taken six writers, 200 person-hours, and countless revisions to get right. Was the candidate even involved in its creation?
- Asking about familiarity with a style guide is akin to asking a carpenter about different brands of electric drills. As long as they know how to operate that type of tool, who cares?
- I’ve met incredible technical writers with no formal experience and horrible technical writers with decades of experience. I’ve met great writers from all different academic and professional backgrounds. What matters is a candidate’s ability to do the job.
The first step, then, is to identify the various qualities that successful employees have demonstrated, qualities like:
- Ability to learn quickly
- Subject matter competence
- Good writing
- Basic familiarity with industry standards
- Positive influences on the team
These qualities might differ within your team or company, but you should have a list of them. Historically, what has led to success in your role?
The second step is to design job interviews that assess these qualities, interviews like:
- Give candidates a (fake) tool and a laptop, and ask them to document a particular task involving that tool. Encourage candidates to ask questions. Do the questions demonstrate an understanding of the task and how one might document it? At a prior company, we wrote a fake command-line interface (CLI) that simulated some basic database interactions.
- Ask candidates a series of assorted, 101-level questions about the subject matter of your industry, such as:
- What are a few ways in which Java and Python differ?
- What are some reasons you might write a shell script?
- When you type google.com into a web browser and hit Enter, what happens?
- How does a router work?
- What are the benefits of using a version control system?
- What are REST APIs, and how do they work?
- What does CSS do?
Look for concise, detailed answers. If candidates use vague language, don’t let it slide. Drill in and see if they self-correct. The goal isn’t for a candidate to bullseye every question. Rather, you’re trying to assess technical foundation and identify any important knowledge gaps.
- Ask candidates to edit or rewrite a poorly written document.
- Have candidates—on a laptop, not a whiteboard—write a short, math-based program in the programming language of their choice.
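To make the fake-tool exercise concrete, here is a minimal sketch of what such a simulated database CLI might look like. The command names (`insert`, `query`) and the in-memory storage are my own assumptions for illustration; any small, self-contained tool with a handful of commands would serve the same purpose.

```python
# A toy "database" CLI for documentation interviews. The commands and
# behavior are invented for illustration, not the actual tool from the post.
import shlex


class FakeDB:
    def __init__(self):
        self.rows = []  # in-memory "table"

    def run(self, line):
        """Execute one command line and return its output as a string."""
        parts = shlex.split(line)
        if not parts:
            return ""
        cmd, args = parts[0], parts[1:]
        if cmd == "insert":
            self.rows.append(args)
            return f"Inserted 1 row ({len(self.rows)} total)."
        if cmd == "query":
            return "\n".join(" ".join(row) for row in self.rows) or "(no rows)"
        return f"Unknown command: {cmd}"


if __name__ == "__main__":
    db = FakeDB()
    print(db.run("insert alice admin"))  # Inserted 1 row (1 total).
    print(db.run("query"))               # alice admin
```

The point of keeping the tool this small is that candidates can't rely on prior knowledge — they have to interrogate it, which is exactly the skill you're measuring.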
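For the short, math-based program, the specific task matters less than its scope. As one hypothetical example of the right size — computing the mean and population standard deviation of a list — a candidate's solution might look like:

```python
# Example scope for a short, math-based interview program: mean and
# population standard deviation. The task itself is just an illustration.
import math


def mean(xs):
    return sum(xs) / len(xs)


def std_dev(xs):
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))


if __name__ == "__main__":
    data = [2, 4, 4, 4, 5, 5, 7, 9]
    print(mean(data))     # 5.0
    print(std_dev(data))  # 2.0
```

A problem this size takes minutes, not hours, yet still shows whether a candidate can read a spec, handle basic arithmetic, and write clean code — which is all you need from a technical writer.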
Most of these interviews have the added benefit of producing gradable artifacts (i.e., documents or programs). Having something tangible come out of an interview helps control unconscious bias. It lets colleagues who didn't sit in on the interview review the work rather than the candidate. It lets you keep your interviews to half a day and still have eight or ten people offer credible opinions on whether you should make a job offer.
The subjective side of the equation matters, too. We all want to work with dependable, enthusiastic, nice people. We don’t want to accidentally hire creeps. Measuring these qualities is hard and requires a skilled, diverse set of interviewers.
Interviewing is a skill like any other, and some incredible employees—my apologies to them—are terrible at it. Maybe they're too trusting or easily fooled. Maybe they're non-confrontational with strangers. Maybe they unconsciously root for every candidate and never want to be the voice of dissent, or maybe they seize on every tiny issue with a candidate and never feel comfortable advocating for a hire. Figuring out who is and isn't a skilled interviewer takes time. Companies should have managers or skilled interviewers shadow people who are new to interviewing, and if they never improve, remove them from the pool.
A diverse set of interviewers can be hard to achieve, especially at small companies, but do your best. A shocking number of candidates have glaring, obvious deficiencies when interacting with one gender or another. A smaller, but no less shocking number reveal troubling racial perspectives during lunch, make inappropriate jokes, or assume that any non-engineer has trouble finding the power button on a laptop. Putting candidates in front of people of various ages, genders, ethnicities, job roles, and backgrounds can help ferret out these issues.
Lastly, when you hire people, you have a responsibility to try to help them succeed. A couple of specifics:
- New hires should have mentors, peers whom they meet with several times a week. Mentors review work and help new hires navigate the organization. A new hire’s mentor and lead should be different people.
- Leads should offer constructive feedback to new hires after 30, 60, and 90 days. Give people a chance to improve. When I was a manager, I generally knew after three months if a new hire was going to work out long-term.
Just like Modern Technical Writing, I suspect some people will view this blog post as specific to the software industry, but it isn't. The notion that we should a) identify traits that make people successful and b) try to measure those traits during interviews isn't specific to software. It's universal—and frankly, pretty obvious. For whatever reason, though, very few companies do it. By failing to do so, companies ignore a huge number of amazing candidates, some of whom, I guarantee, will find work at their competitors.