If you think that's a viable proposition feel free to start it. I'm not convinced that the economics add up, nor that recognized industry experts want to spend the majority of their time reading a huge amount of content and rating it for pennies an hour.
I don't think Google have too much faith in their algorithm - they know it's flawed. But it's the least worst algorithm anyone has come up with, and adding human tweaks leaves it subject to subjective bias.
>If you think that's a viable proposition feel free to start it.
Yeah, I don't think it's a bad idea. I don't have the funding to start it, of course, and VCs crap their pants at the thought of anything that has overhead, so it's probably a non-starter.
>nor that recognized industry experts want to spend the majority of their time reading a huge amount of content and rating it for pennies an hour.
The recognized experts would be paid more than pennies an hour, and they wouldn't need to spend the majority of their time reviewing content. They'd be "super-reviewers", so their opinions would carry a lot of weight. It'd be a way for them to make some extra money without much overhead, something they'd do occasionally for an hour here and there. Honestly, the main thing we'd want from these people is information about the cutting edge: things that are trending that we haven't picked up yet, things that are new and thus don't have many consensus markers but are still worth attention, and information about how the content is perceived within the industry. That classification could inform a variety of axes that would make good search parameters. We'd need to recruit opposing industry leaders so that the index didn't become representative of a single viewpoint.
Normal reviewers, in a position analogous to news reporters, are more affordable, more consistent, and can classify the majority of content for a sector just fine. Maybe Yahoo! could take their niche content mills and reassign the staff to rate pages in their index; then maybe their search would get somewhere.
Then you'd have MTurk-style reviewers who provide the bulk of the content rankings and just send back a few basic pieces of info. These are the people who would be working for $2-$3/hr or less, at their convenience.
All of this sits on top of a more traditional automated ranking algorithm that uses consensus markers and computer-perceptible quality markers to rank content. There's no obligation that every page be sampled and reviewed by a human.
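To make the tiered idea concrete, here's a minimal sketch of how the scores might combine: an automated base score, blended with human ratings whose weight depends on the reviewer tier. The tier names, weights, and blend ratio are all illustrative assumptions on my part, not a real system.

```python
# Hypothetical weight per reviewer tier: super-reviewers (industry
# experts) count far more than staff reviewers, who count more than
# crowd workers. These numbers are made up for illustration.
TIER_WEIGHTS = {"super": 50.0, "staff": 5.0, "crowd": 1.0}

def rank_score(automated_score, reviews):
    """Combine an automated score (0-1) with human ratings (0-1).

    `reviews` is a list of (tier, rating) pairs; pages with no human
    review fall back to the automated score alone.
    """
    if not reviews:
        return automated_score
    total_weight = sum(TIER_WEIGHTS[tier] for tier, _ in reviews)
    human_score = sum(TIER_WEIGHTS[tier] * rating
                      for tier, rating in reviews) / total_weight
    # Blend: human judgment is the highly-weighted input, the
    # algorithm is just the baseline.
    return 0.3 * automated_score + 0.7 * human_score

# A page the algorithm likes (0.9) but one expert flags as low
# quality (0.2) gets pulled down despite a favorable crowd rating.
print(rank_score(0.9, [("super", 0.2), ("crowd", 0.8)]))
```

The point of the weighted average is exactly what's described above: a single super-reviewer's opinion dominates a handful of crowd ratings, while unreviewed pages still get ranked by the automated score.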
It'd be great if we could get good traffic data too; we'd be able to see where people are actually going instead of just what they put links back to.
>But it's the least worst algorithm anyone has come up with, and adding human tweaks leaves it subject to subjective bias.
The bias is there regardless; it's just filtered through different parameters. This is inescapable. In general, not just in algorithm and computer design, we need less faith in cold systems and more faith in human intervention and judgment.
Yes, you have to be aware that any process is subject to gaming, manipulation, or bias, but I think giving human opinion and circumstantial judgment sufficient room as very highly-weighted inputs prevents most of the egregious failures caused by runaway systems.