Monday, April 25, 2016

Why I gave your paper a Strong Accept

See also: Why I gave your paper a Strong Reject

I know this blog is mostly about me complaining about academics, but there's a reason I stay engaged with the research community: I learn stuff. Broadly speaking, I think it's incredibly important for industry both to stay abreast of what's going on in the academic world and to have some measure of influence on it. For those reasons, I serve on a few program committees a year and do other things like help review proposals for Google's Faculty Research Award program.

Apart from learning new things, there are other reasons to stay engaged. One is that I get a chance to meet and often work with some incredible colleagues, either professors (to collaborate with) or students (to host as interns and, in many cases, hire as full-time employees later on).

I also enjoy serving on program committees more than just going to conferences and reading papers that have already been published. I feel like it's part of my job to give back and contribute my expertise (such as it is) to help guide the work happening in the research community. Way too many papers could use a nudge in the right direction by someone who knows what's happening in the real world -- as a professor and grad student, I gained a great deal from my interactions with colleagues in industry.

Whenever I serve on a program committee, I make it a point to champion at least a couple of papers at the PC meeting. My colleagues can attest to times I've (perhaps literally) pounded my fist on the table and argued that we need to accept some paper. So to go along with my recent post on why I tend to mark papers as reject, here are some of the reasons that make me excited to give out a Strong Accept.

(Disclaimer: This blog represents my personal opinion. My employer and my dog have nothing to do with it. Well, the dog might have swayed me a little.)

The paper is perfect and flawless. Hah! Just kidding! This never happens. No paper is ever perfect -- far from it. Indeed, I often champion papers with significant flaws in the presentation, the ideas, or the evaluation. What I try to do is decide whether the problems can be fixed through shepherding. Not everything can be fixed, mind you. Minor wording changes or a slight shift in focus are fixable. Major new experiments or a total overhaul of the system design are not. When I champion a paper, I only do so if I'm willing to be on the hook to shepherd it, should it come to that at the PC meeting (and it often does).

Somebody needs to stand up for good papers. Arguably, no paper would ever get accepted unless some PC member were willing to go to bat for it. Sadly, it's a lot easier for the PC to find flaws in a paper (hence leading to rejection) than it is to stand up for a paper and argue for acceptance -- despite the paper's flaws. Every PC meeting I go to, someone says, "This is the best paper in my pile, and we should take it -- that's why I gave it a weak accept." Weak accept!?!? WEAK!?! If that's the best you can do, you have no business being on a program committee. Stand up for something.

In an effort to balance this out, I try to take a stand for a couple of papers every time I go to a PC meeting, even though I might not be successful in convincing others that those papers should be accepted. Way better than only giving out milquetoast scores like "weak accept" or -- worse -- the cop-out "borderline".

The paper got me excited. This is probably the #1 reason I give out Strong Accepts. Usually, by the end of the first page, I'm already getting excited about the rest of the paper. The problem sounds compelling. The approach is downright sexy. The summary of results sounds pretty sweet. All right, so I'm jazzed about this one. Sometimes it's a big letdown when I get into the meat and find out that the approach ain't all it was cracked up to be in the intro. But when I get turned on by a paper, I'll let the small stuff slide for sure.

It's hard to predict when a paper will get me hot under the collar. Sometimes it's because the problem is close to stuff I work on, and I naturally gravitate to those kinds of papers. Other times it's a problem I really wish I had solved. Much of the time, it's because the intro and motivation are just really eloquent and convincing. The quality of writing matters a lot here.

I learned a lot reading the paper. Ultimately, a paper is all about what the reader takes away from it. A paper on a topic slightly out of my area that does a fine job explaining the problem and the solution is a beautiful thing. Deciding how much "tutorial" material to fit into a paper can be challenging, especially if you're assuming that the reviewers are already experts in the topic at hand. But more often than not, the PC members reading your paper might not know as much about the area as you expect. Good exposition is usually worth the space. The experts will skim it anyway, and you might sell the paper to a non-expert like me.

There's a real-world evaluation. This is not a requirement, and indeed it's somewhat rare, but if a paper evaluates its approach on anything approximating a real-world scale (or dataset), it's winning major brownie points in my book. Purely artificial, lab-based evaluations are more common, and less compelling. If the paper includes a real-life deployment or a retrospective on what the authors learned through the experience, even better. Even papers without that many "new ideas" can get accepted if they have a strong and interesting evaluation (cough cough).

The paper looks at a new problem, or has a new take on an old problem. Creativity -- either in terms of the problem you're working on, or how you approach that problem -- counts for a great deal. I care much more about a creative approach to solving a new and interesting (or old and hard-to-crack) problem than a paper that is thoroughly evaluated along every possible axis. Way too many papers are merely incremental deltas on top of previous work. I'm not that interested in reading the Nth paper on time synchronization or multi-hop routing, unless you are doing things really differently from how they've been done before. (If the area is well-trodden, it's also unlikely you'll convince me you have a solution that the hundreds of other papers on the same topic have failed to uncover.) Being bold and striking out in a new research direction might be risky, but it's also more likely to catch my attention after I've reviewed 20 papers on less exciting topics.


1 comment:

  1. Nice post, especially the point about picking some papers to champion. It's far too easy to find reasons to reject papers. As a side note, I think we should be accepting more papers in general, as I think paper-as-final-outcome---in any venue, regardless of imprimatur---is a horrible metric for impact, given the randomness of the process.

    I still remember a PC I was on where I reviewed a paper on BGP.

    In the meeting, my argument for strong accept (which echoes your comment above) was simple: I learned something. The chair was astute enough to observe that if I learned something about BGP from reading the paper, then other people very likely would, too.

