Roger Peng asks a good question on his blog this morning: How would you know if someone is great at data analysis? He says that while he has worked with some great data analysts, the nature of the work makes it hard to evaluate someone you don't know personally. And as Josh Grant pointed out, this isn't unique to data analysts.
I immediately thought of a database administrator I know. Everyone who works with her knows she’s great at her job, but I doubt anyone who doesn’t know her has ever said “They must have a great DBA!”
Matthew Crawford argues in Shop Class as Soulcraft that white collar work in general is hard to objectively evaluate and that this explains why offices are so political. Employees are judged on their sensitivity and other nebulous attributes because unlike a welder, for example, they can’t be judged directly on their work. He argues that blue collar workers have greater freedom of speech at work because their work can be objectively evaluated.
Colleagues can identify great data analysts, DBAs, and others whose work isn’t on public display. But this isn’t easy to do through a bureaucratic process, and so technical competence is routinely under-compensated in large organizations. On the other hand, reputation spreads more efficiently outside of organizational channels. This may help explain why highly competent people are often more appreciated by their professional communities than by their employers.
Related post: It doesn’t pay to be the computer guy
7 thoughts on “How do you know when someone is great?”
Dead on. I have the same challenges in consulting. Sometimes I'll get the gig on my professional reputation, only to have that reputation not translate into anything useful beyond two weeks; then I have to earn trust and demonstrate technical competency with new people, usually under extreme duress (they wouldn't have hired me if things were going swell). Part of the job is the technical side, trying to kick tail there, while at the same time marketing the work we're doing as actually having value and being, as you say, objective to judge the way blue collar work is. It's really hard.
Thanks for the link!
One place where this issue (I think) really becomes a major problem is in hiring and recruitment. If “white collar” work like software development and data analysis isn’t very objective, there can be a lot of difficulty in hiring people. What if someone is an excellent teammate and coder but fails miserably in whiteboard exercises because they can’t devise algorithms quickly under scrutiny? It also means that employers end up, perhaps unconsciously, comparing applicants to existing employees. This might mean that if an applicant doesn’t seem to be as “with it” as someone already working with the hirer, they could be rejected even if they would be an outstanding coworker.
Reputation helps with this, but it also makes it much harder for new applicants without much experience in a field to gain credibility.
This issue might explain why “objective” measures of schoolteacher performance are so fraught with challenges, both technical and political.
Rather depressing state of affairs for hiring knowledge workers, isn’t it? Not just for hiring managers, but for the market of candidates who are undervalued.
Perhaps StackExchange scores, Github pulls accepted, and other attempts to capture reputation can help mitigate this. But I’ve found myself relying mostly on what I hope are root causes: screening for those passionate about the field, ruthlessly curious, applicably smart, and enthusiastic at opportunities to exchange knowledge. You don’t have to completely understand the work they do to know those people are likely to excel.
(Still, though, I much prefer screening for people who do what I’ve done. So much easier on the intuition!)
At least for predictive data analytics, you really can tell someone’s good if they’re doing well on Kaggle: https://www.kaggle.com/.
We’re building a data analysis meritocracy! Maybe the first “white collar” labor meritocracy?
How does a non-welder distinguish a great welder from a merely competent welder? Competence might be recognized by the cleanness of the welds and by a history of welds not failing and perhaps time required to do a fixed unit of work, but excellence would seem more difficult to recognize. If failures are rare (and not important enough to be thoroughly investigated), it would be difficult to determine that a failure was the result of a welder’s lack of excellence. It seems that an excellent welder might be marginally faster than a ‘good’ welder and produce slightly cleaner welds but be several times more likely to recognize a sub-par part than a merely ‘good’ welder. (I am not a welder; that is just an imaginary example.)
For factory-style work, excellence might be less recognizable. Some white collar work is more factory-style, with established tools and procedures; if the worker cannot improve those tools or procedures (e.g., a manager refuses to allow resources to be “wasted” on such improvements, or a lack of documentation or source code hinders improvement), then excellence can be suppressed. And some blue collar work is more ‘variable’.
(I suspect most of the time an excellent bus driver would be indistinguishable from a merely competent bus driver. Over years of activity, an excellent bus driver might have measurably fewer accidents and noticeably fewer near misses (which may not be reported) and provide more useful information to the maintenance crew (which would also be less generally known). Then there are human interface matters; a driver excellent at driving might be somewhat rude, and so less excellent as a bus driver. An excellent bus driver might also be able to resolve passenger conflicts well; these should rarely occur, but handling them could be an indicator of excellence. First aid skills could be a similar distinction. A bus driver might also have the incidental task of observing and reporting suspicious or clearly criminal activity.)
Variability of tasks also makes evaluation more difficult. In order to evaluate performance, the difficulty of tasks (including the constraints on resources–tools, time, etc.–and methods–affected by laws, policies, interpersonal relations, etc.) must be evaluated as well as the quality of the results. The importance/weight of the levels of quality for different tasks must also be established.
I agree with Rick Bryan completely. School teachers are producing a rather intangible product, only a small part of which can be objectively measured. The politics of the situation (most citizens have a stake in the educational system) puts great pressure on politicians to impose simplistic solutions. The teacher’s colleagues may know more about his or her performance than an evaluating administrator, and certainly more than a multiple choice test (popular in the U.S.) can show. But can colleagues objectively evaluate teaching without being influenced by the teacher’s collegiality? Then again, teacher collegiality is a good thing, strongly promoted to good effect in high-scoring countries like Japan. I don’t think there are any easy solutions, but we Americans keep looking for one.