They do give the formula they use: roughly, the fraction of people who didn't get refunds. Someone is eligible for a refund if the prediction was at least 51% and they didn't get in, or if the prediction was less than 50% and they were accepted.

What's wrong with this picture? Suppose everyone who was eligible for a refund got one, and consider the people whom they predict have a 60% chance of acceptance. That means 40% of them should not be accepted. But if all of them are accepted, the formula counts every one of those predictions as correct, though they clearly were not: a calibrated 60% prediction cannot come true for everyone. Conversely, if 60% of them are accepted, which is exactly what the prediction calls for, the formula reports only a 60% accuracy rate. And if they predict 50%, the formula counts the prediction as accurate whether all of them or none of them are accepted.
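To make the flaw concrete, here is a small simulation of the refund metric (a sketch; the function name is mine, and I'm assuming every eligible person actually claims a refund):

```python
import random

def refund_accuracy(predictions, outcomes):
    """The service's metric: the fraction of people NOT eligible for a refund.
    Eligible = predicted at least 51% but rejected, or under 50% but accepted."""
    refunds = sum(1 for p, got_in in zip(predictions, outcomes)
                  if (p >= 0.51 and not got_in) or (p < 0.50 and got_in))
    return 1 - refunds / len(predictions)

random.seed(0)
n = 10_000

# Everyone is predicted 60%, and, as a perfectly calibrated predictor
# would have it, about 60% are accepted: the metric reports only ~60%.
preds = [0.60] * n
outcomes = [random.random() < 0.60 for _ in range(n)]
print(refund_accuracy(preds, outcomes))    # ~0.60

# Same 60% predictions, but everyone is accepted: a badly
# miscalibrated prediction that the metric scores as perfect.
print(refund_accuracy(preds, [True] * n))  # 1.0

# Predict exactly 50% for everyone: no one is ever eligible for a
# refund, no matter what happens.
print(refund_accuracy([0.50] * n, outcomes))  # 1.0
```

So a calibrated predictor looks mediocre while an uninformative or wrong one looks flawless, which is the inversion described above.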

So either we are in the very unlikely scenario where rounding the prediction to zero or one yields a very good predictor, or, more likely, not many people claim the refunds they are entitled to. When you make a claim of accuracy that doesn't match the service you provide, you end up making no claim of accuracy at all.

More to the point: this scheme could be a money-maker even if the company totally ignored each student's application materials, like the lazy desert weatherman who gets paid for his accuracy and always predicts a sunny day.

Very nice post!

They should use a quadratic penalty scheme.
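A quadratic penalty, often called the Brier score, charges (p - outcome)^2 for each prediction. It is a proper scoring rule: your expected penalty is minimized by reporting the true probability, so neither a blanket 50% nor a rounded-to-0-or-1 prediction looks artificially good. A minimal sketch, assuming a true acceptance chance of 60%:

```python
def expected_brier(p, q):
    """Expected quadratic penalty for reporting probability p when the
    true acceptance probability is q: q*(p-1)^2 + (1-q)*p^2."""
    return q * (p - 1) ** 2 + (1 - q) * p ** 2

q = 0.60  # true chance of acceptance (assumed for illustration)
penalties = {p / 100: expected_brier(p / 100, q) for p in range(101)}
best = min(penalties, key=penalties.get)
print(best)  # 0.6 -- honest reporting minimizes the expected penalty
```

Reporting 0.5 or rounding to 1 both carry a strictly higher expected penalty than the honest 0.6, which is exactly the property the refund formula lacks.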

This is an interesting problem.

What is a quadratic penalty scheme? Is it provably good?

I was thinking about this problem once with regard to online medical records. I figured it might one day be possible to go online and discover the probability that you'd acquire some medical condition. The question is: how do we know whether the system performs well?

One simple approach is to give everyone the same probability: if p percent of people eventually develop condition y, then everyone has a p percent chance of getting y. While this approach seems "correct" in some sense, it is certainly not optimal.

In the case of the college service, you need a penalty system that gives full refunds for exactly this type of blanket prediction. Many penalty systems could work, but I'm not clear what the right choice is.

I suspect that the big problem with measuring the accuracy of this service is that some students follow its advice. If the report comes back, "Your chances of becoming a Harvard freshman are 0.001 percent," the student may well decide to save his money and apply to the local community college. Since the student was advised not to apply and ultimately didn't get in, this presumably counts as a successful prediction. They ought to be counting only those students who actually applied and were rejected.

Once we solve that problem, then ideally we would like to know the actual percentage of students accepted for each percentage predicted. For example, taking the subset of students who got a 60-percent prediction, the service would score perfect accuracy if 60 percent of them were admitted. Making the corresponding calculation for each percentage from 0 to 100 then gives you a graph of accuracy as a function of predicted success. Reducing that graph to a single scalar score is not trivial, but simply taking the average of the accuracies at all percentages is a reasonable way to start.
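The calculation described above can be sketched roughly as follows (the function name and the per-bucket scoring are mine, assuming predictions arrive as integer percentages):

```python
from collections import defaultdict

def calibration_by_percentage(predictions, outcomes):
    """Group applicants by predicted percentage and compare each group's
    predicted rate with its observed acceptance rate."""
    groups = defaultdict(list)
    for pct, got_in in zip(predictions, outcomes):
        groups[pct].append(got_in)
    report = {}
    for pct, results in sorted(groups.items()):
        observed = 100 * sum(results) / len(results)
        # One simple per-bucket accuracy: 100 minus the gap between
        # the predicted and observed acceptance rates.
        report[pct] = {"observed": observed,
                       "accuracy": 100 - abs(pct - observed)}
    return report

preds = [60, 60, 60, 60, 60, 30, 30, 30, 30, 30]
outs  = [True, True, True, False, False,    # 3/5 = 60% admitted
         True, False, False, False, False]  # 1/5 = 20% admitted
report = calibration_by_percentage(preds, outs)
print(report[60]["accuracy"])  # 100.0 -- predicted 60%, observed 60%
print(report[30]["accuracy"])  # 90.0  -- predicted 30%, observed 20%
```

Averaging the per-bucket accuracies then gives the single scalar score suggested above; weighting each bucket by its size would be one natural refinement.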

The actual scoring algorithm the service uses is not ideal, but on the other hand it doesn't strike me as being obviously unfair or designed to make the service look good. They leave themselves no middle ground for doubtful cases. In effect, they apply a threshold at 50 percent and turn each prediction into a simple binary value. Given that this algorithm is the basis of their guarantee, maybe they should also communicate results to the applicant in the same form. Don't tell the poor baffled kid, "Your chances are 54 percent." Just say, "Yes: Apply" or "No: Forget about it." If they can actually achieve 98 percent accuracy on that basis, I'm pretty impressed.