The NIPS review form asks reviewers to score each paper on a 10-point scale:

- **10:** Top 5% of accepted NIPS papers, a seminal paper for the ages. *I will consider not reviewing for NIPS again if this is rejected.*
- **9:** Top 15% of accepted NIPS papers, an excellent paper, a strong accept. *I will fight for acceptance.*
- **8:** Top 50% of accepted NIPS papers, a very good paper, a clear accept. *I vote and argue for acceptance.*
- **7:** Good paper, accept. *I vote for acceptance, although would not be upset if it were rejected.*
- **6:** Marginally above the acceptance threshold. *I tend to vote for accepting it, but leaving it out of the program would be no great loss.*
- **5:** Marginally below the acceptance threshold. *I tend to vote for rejecting it, but having it in the program would not be that bad.*
- **4:** An OK paper, but not good enough. A rejection. *I vote for rejecting it, although would not be upset if it were accepted.*
- **3:** A clear rejection. *I vote and argue for rejection.*
- **2:** A strong rejection. I’m surprised it was submitted to this conference. *I will fight for rejection.*
- **1:** Trivial or wrong or known. I’m surprised anybody wrote such a paper. *I will consider not reviewing for NIPS again if this is accepted.*
Reviewers also rate each paper on four criteria:

- Quality
- Clarity
- Originality
- Significance
Observed decisions on the papers reviewed by both committees:

| | Committee 1: Accept | Committee 1: Reject |
| --- | --- | --- |
| **Committee 2: Accept** | 22 | 22 |
| **Committee 2: Reject** | 21 | 101 |
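A quick check of the marginal accept rates implied by this table: both committees accept roughly one paper in four, which motivates the random baseline below. A minimal Python sketch (the variable names are mine, not from the experiment's released code):

```python
# Observed joint decisions, keyed by (Committee 2, Committee 1).
counts = {
    ("accept", "accept"): 22,
    ("accept", "reject"): 22,
    ("reject", "accept"): 21,
    ("reject", "reject"): 101,
}

total = sum(counts.values())  # 166 papers reviewed by both committees

c1_accepts = counts[("accept", "accept")] + counts[("reject", "accept")]
c2_accepts = counts[("accept", "accept")] + counts[("accept", "reject")]

print(f"papers reviewed twice  : {total}")
print(f"Committee 1 accept rate: {c1_accepts / total:.3f}")  # ~0.259
print(f"Committee 2 accept rate: {c2_accepts / total:.3f}")  # ~0.265
```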
- Public reaction after the experiment is documented here.
- Open Data Science (see the Heidelberg Meeting).
- NIPS was run in a very open way: code and blog posts are all available!
- The reaction was triggered by this blog post.
Expected counts if each committee had instead accepted a quarter of the 166 papers independently at random:

| | Committee 1: Accept | Committee 1: Reject |
| --- | --- | --- |
| **Committee 2: Accept** | 10.4 (1 in 16) | 31.1 (3 in 16) |
| **Committee 2: Reject** | 31.1 (3 in 16) | 93.4 (9 in 16) |
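Where those expectations come from, as a sketch: assuming a per-committee accept probability of 1/4 (which matches the quoted "1 in 16" fractions), the joint probabilities are 1/16, 3/16, 3/16, and 9/16, scaled by the 166 papers.

```python
total = 166       # papers reviewed by both committees
p_accept = 1 / 4  # assumed per-committee accept probability

probs = {
    ("accept", "accept"): p_accept ** 2,              # 1/16
    ("accept", "reject"): p_accept * (1 - p_accept),  # 3/16
    ("reject", "accept"): (1 - p_accept) * p_accept,  # 3/16
    ("reject", "reject"): (1 - p_accept) ** 2,        # 9/16
}

for cell, p in probs.items():
    print(cell, f"{p * total:.1f}")  # 10.4, 31.1, 31.1, 93.4
```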
For comparison, the observed counts against the (rounded) expectations under the random baseline:

| | Committee 1: Accept | Committee 1: Reject |
| --- | --- | --- |
| **Committee 2: Accept** | 22 observed / 10 expected | 22 observed / 31 expected |
| **Committee 2: Reject** | 21 observed / 31 expected | 101 observed / 93 expected |
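One way to read the comparison is the overturn rate: of the papers Committee 1 accepted, what fraction did Committee 2 reject? A sketch of that summary statistic (my own framing, not one from the original analysis):

```python
# (Committee 2, Committee 1) decision counts: observed and random-baseline.
observed = {("accept", "accept"): 22, ("accept", "reject"): 22,
            ("reject", "accept"): 21, ("reject", "reject"): 101}
expected = {("accept", "accept"): 10.4, ("accept", "reject"): 31.1,
            ("reject", "accept"): 31.1, ("reject", "reject"): 93.4}

def overturn_rate(table):
    """Fraction of Committee 1's accepts that Committee 2 rejected."""
    c1_accepts = table[("accept", "accept")] + table[("reject", "accept")]
    return table[("reject", "accept")] / c1_accepts

print(f"observed       : {overturn_rate(observed):.2f}")  # ~0.49
print(f"random baseline: {overturn_rate(expected):.2f}")  # 0.75
```

Roughly half of one committee's accepts are overturned by the other, against three quarters under the random baseline: far better than chance, but far from consistent.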
A simple model for the score \(y_{i,j}\) that reviewer \(j\) gives paper \(i\): a per-paper quality \(f_i\), plus a per-reviewer bias \(b_j\), plus noise,

\[ y_{i,j} = f_i + b_j + \epsilon_{i,j} \]
\[ f_i \sim \mathcal{N}\left(0,\alpha_f\right) \quad b_j \sim \mathcal{N}\left(0,\alpha_b\right) \quad \epsilon_{i,j} \sim \mathcal{N}\left(0,\sigma^2\right) \]

The fitted variances:

\[ \alpha_f = 1.28 \qquad \alpha_b = 0.24 \qquad \sigma^2 = 1.27 \]
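These fitted variances imply a correlation of \(\alpha_f / (\alpha_f + \alpha_b + \sigma^2) = 1.28 / 2.79 \approx 0.46\) between two independent reviews of the same paper. A short simulation checking that figure (my own check, not the original analysis code):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha_f, alpha_b, sigma2 = 1.28, 0.24, 1.27  # fitted variances from above

n = 100_000  # simulated papers, each scored by two distinct reviewers
f = rng.normal(0.0, np.sqrt(alpha_f), size=(n, 1))   # per-paper quality f_i
b = rng.normal(0.0, np.sqrt(alpha_b), size=(n, 2))   # per-reviewer bias b_j
eps = rng.normal(0.0, np.sqrt(sigma2), size=(n, 2))  # per-review noise
y = f + b + eps                                      # scores y_{i,j}

print("analytic :", alpha_f / (alpha_f + alpha_b + sigma2))  # ~0.459
print("simulated:", np.corrcoef(y[:, 0], y[:, 1])[0, 1])     # ~0.46
```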