Thought this was cool: Machine Learning that Matters
It’s an excellent critique of the way ML research is currently practiced, along with some things we can and should be doing to improve it. Specifically, Kiri advocates that we focus more on problems that have impact outside the field of ML itself. If the clever learning methods that we invent are not gaining penetration into “domain fields” and are not being used to solve real problems, do they really matter at all?
I’d like to write a long and glowing review of her paper, but I don’t have time right now. But it’s an excellently written and very accessible paper — I encourage everybody to read it. One bit that I would like to call out, though, is her set of proposed “Machine Learning Impact Challenges”:
- Discovery of a new physical law leading to a published, refereed scientific article.
- Improvement of 500 USCF/FIDE chess rating points over a class B level start.
- Improvement in planning performance of 100 fold in two different domains.
- Investment earnings of $1M in one year.
- Outperforming a hand-built NLP system on a task such as translation.
- Outperforming all hand-built medical diagnosis systems with an ML solution that is deployed and regularly used at at least two institutions.
from Ars Experientia: http://cs.unm.edu/~terran/academic_blog/?p=100