At the intersection of technology, finance and the Pacific Rim.

Wednesday, September 23, 2009

Netflix Prize

The Netflix Prize was awarded this week. The $1 million award went to the team that improved Netflix's movie-recommendation algorithm by 10%, and it represents a landmark (and I do not use the word "landmark" lightly) in how development works. Note also the "human engineering" at work: understanding how people think and behave when devising systems. Wired Magazine writes:

the Netflix Prize competition has proffered hard proof of a basic crowdsourcing concept: Better solutions come from unorganized people who are allowed to organize organically. But something else happened that wasn’t entirely expected: Teams that had it basically wrong — but for a few good ideas — made the difference when combined with teams that had it basically right, but couldn’t close the deal on their own.
Ironically, the most outlying approaches — the ones farthest away from the mainstream way to solve a given problem — proved most helpful towards the end of the contest, as the teams neared the summit.

For instance, BellKor’s Pragmatic Chaos (methodology here) credits some of its success to slicing the data by what they called “frequency.” As it turns out, people who rate a whole slew of movies at one time tend to be rating movies they saw a long time ago. The data showed that people employ different criteria to rate movies they saw a long time ago, as opposed to ones they saw recently — and that in addition, some movies age better than others, skewing either up or down over time. (Finally, someone has explained why Snakes On A Plane seemed more fun at the time than it does now.)

By tracking the number of movies rated on a given day as an indicator of how long it had been since a given viewer had seen a movie, and by tracking how memory affected particular movie ratings, Pragmatic Theory (later part of the winning team) was able to gain a slight edge, even though this particular algorithm isn’t particularly good at predicting which movies people will like when run on its own.
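The "frequency" idea described above can be sketched as a simple feature computation. Everything here is hypothetical for illustration (the data layout, field names, and sample values are mine, not the team's actual code): the feature is just how many ratings a user entered on a given day, which serves as a proxy for batch-rating movies seen long ago.

```python
from collections import Counter

# Hypothetical ratings log: (user_id, movie_id, rating, date) tuples.
ratings = [
    ("u1", "m1", 4, "2005-03-01"),
    ("u1", "m2", 2, "2005-03-01"),
    ("u1", "m3", 3, "2005-03-01"),
    ("u2", "m1", 5, "2005-03-02"),
]

# "Frequency": how many ratings each user entered on each day.
# A high count suggests a batch session rating movies seen long ago,
# which the teams found correlates with different rating behavior.
freq = Counter((user, date) for user, _, _, date in ratings)

def frequency_feature(user, date):
    """Number of ratings this user made on this day."""
    return freq[(user, date)]
```

On the sample data, `frequency_feature("u1", "2005-03-01")` is 3, flagging that session as a likely batch of long-ago viewings.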

Another example: According to Joe Sill of The Ensemble, Big Chaos (the Austrians who also became part of the winning team) discovered that viewers in general tend to rate movies differently on Fridays versus Mondays, and certain users are in good moods on Sundays, and so on. The team essentially devised a three-dimensional model that incorporated time into the relationship between people and movies.

Taken on its own, the fact that a viewer rated a given movie on a Monday is a horrible indicator of what other movies they’ll want to rent — a crucial part of Netflix’s business (it says its recommendations are better indicators of what people will rent than their “most popular” lists). But combined with hundreds of other algorithms from other minds, each weighted with precision, and combined and recombined, that otherwise inconsequential fact takes on huge importance.
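The blending idea — many individually weak predictors combined with carefully learned weights — can be sketched as a least-squares blend on held-out data. The numbers and "model outputs" below are invented for illustration; the winning teams' actual blending was far more elaborate:

```python
import numpy as np

# Held-out predictions from three hypothetical models (rows = ratings,
# columns = models), plus the true ratings for those rows.
preds = np.array([
    [3.1, 3.4, 2.9],
    [4.0, 3.8, 4.2],
    [2.2, 2.5, 2.0],
    [4.8, 4.5, 5.0],
])
actual = np.array([3.0, 4.1, 2.3, 4.9])

# Learn one weight per model by least squares on the held-out set.
weights, *_ = np.linalg.lstsq(preds, actual, rcond=None)

def blend(model_outputs):
    """Weighted combination of the individual models' predictions."""
    return model_outputs @ weights

# Even a model that is weak on its own can earn a nonzero weight
# if it contributes information the other models lack.
blended = blend(preds)
```

Because the least-squares fit minimizes squared error over all linear combinations, the blend can never do worse on the fitting set than the best single model alone — which is exactly why an "inconsequential" signal like Monday ratings still earns a small weight.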

