Now that the university challenge is complete, we have also found that Foldit players were able to surpass the top score of the university-only grand prize competition in just 3 days.
I'm interested in your feedback on the Grand Challenge competition, specifically with respect to determining the parameters of any further challenge competition of that sort. Obviously, making sure that everyone is included in the competition should be a must. But what about the prizes? My thought was that cash alone is likely not a good prize format, but perhaps a donation to the charity of choice for the winning team? Or a paid trip to visit the Foldit team in Seattle? Or perhaps keeping Foldit strictly voluntary, with no prizes at all, is the right thing to do? I welcome your feedback below.
Hi Zoran,
I'm part of Team Purdue, which was one of the prize-winning university teams. On behalf of our team, we actually thought it would have been fantastic if we had a chance to just get together, meet the other top teams, and have an open talk about the strategies that helped us win the challenge. I think we could learn a lot from this and push the bar higher in the next event, just like the meetings in CASP. So maybe this could be in a conference format of some sort… like an informal session at RosettaCon (if appropriate)? This is just my quick thought. But meeting the actual Foldit team would have been really nice as well!
Or, at least heavily slanted toward those with real protein backgrounds (not the wanna-be experts).
I would have no incentive to even make a serious attempt at a money contest since there are experts here playing the game who WILL win every time. Why would I waste the time?
Just look at this feedback as an example. http://fold.it/portal/node/989003 Your "regular" player would never stand a chance against this type of player, except maybe as dumb luck and the odds would be so low that you would have to have money contests every week for that to have a chance of happening.
I guess I just have to respectfully disagree with the idea that non-experts would stand no chance. If anything, actual evidence seems to suggest the contrary:
Example 1: According to Figure S3 of the supplemental section of the Nature paper published earlier this year, most of the top players have little biochem background.
Example 2: It took only a few days for AD to exceed the top score from the University Challenge, for which many of the participants are "experts" whose field of research directly involves protein folding.
==> Now, I do understand your concern that technical knowledge could be an unfair advantage. To prevent the possibility that experts would easily pick up the vast majority of prizes, one compromise might be to limit the number of prizes that each individual can win.
@infjamc - Your examples are of the general population of players, and heavily skewed, because the top players are those who play all the time and have learned to use the tools to their advantage. I'll bet the university players were mostly new to the game, with little experience and without an arsenal of highly developed scripts to help them along.
I don't know if prot-bustr has formal training on this topic or not, but I expect there aren't more than a handful of players who could have pointed out that there was a missing whatever-it's-called bond in the current puzzle.
If you start playing this for money, the trained experts will start coming out of the woodwork and the "regular" players (@vixxy - those without biochem or protein folding training) will get pushed down the leaderboard. Those who play this for fun, without the formal training, will not be able to compete once the protein experts learn the game.
The whole money thing is disgusting - it really has no place if this is to stay a game.
We've actually analyzed the value of background expertise, for the Nature paper and later. I can definitively say that a background in proteomics or biochemistry is a very poor predictor of success in Foldit, often showing negative correlation. Our conclusion is that a degree in a biochemistry field does not self-select for good Foldit players. We postulate that a PhD in biochemistry has little to do with spatial reasoning abilities, which is really what Foldit is about. Clearly, our top 20-30 players are generally not professionally involved in proteomics, with perhaps one or two exceptions.
The university challenge confirmed this as well. The university teams that competed were not a few undergrads casually competing; we had teams from most of the highly acclaimed research labs, consisting exclusively of graduate students and postdocs (the people who actually do all the real work in biochemistry labs). These teams stayed up nights competing; they were obviously incentivized by the award. Still, the largely non-expert Foldit community (though highly skilled in protein prediction through Foldit) needed only 3 days of voluntary folding to outperform the university research labs. The university experiment certainly shows that experts will compete with proper incentives. It also shows that they are not likely to do well.
It is very likely that if the prized competition is open to all, it would attract many more non-experts into the game than experts.
"I don't know if prot-bustr has formal training on this topic or not, but I expect there aren't more than a handful of players who could have pointed out that the was a missing what-ever-its called bond in the current puzzle."
Point taken. Still, knowledge of disulfide bonds alone does not necessarily translate to success. In addition to what Zoran has already mentioned, here's a personal anecdote: Having taken courses in bioengineering and even participated in Rosetta 3.x development during my undergrad years, I've had quite a few chances to explain some of the more technical aspects of protein folding when asked by my teammates. Still, there are five of them who consistently fold better than I do even without the specialized "insider information." To me, this is an indication that (a) there is quite a bit of difference in skill level between the Top 10 and the 11-20 ranges, and (b) specialized knowledge can help, but is not sufficient.
==> Now, regarding your concern that "those who play this for fun without the formal training will not be able to compete once the protein experts learn the game": While I don't like the idea of commercializing protein folding either, I could tolerate it given proper implementation. One possibility might be to segregate professional players from amateur players, in a manner similar to professional sports; in Foldit terms, this could mean that participating in puzzles with tangible prizes would disqualify a player from playing, or being ranked in, the vast majority of puzzles, which don't come with prizes.
My concern is that Foldit will fold - funding can be fickle and scientists move on to other areas of research interest.
If humans can get to a quality starting fold quickly, can that ability be commercialised by Foldit, so that puzzles, tool development, and the fabulous people who keep it all up and running are paid for? Then we would get lots of great puzzles and better tools, and the science benefits should also increase, of course. Ideally we would get more feedback on the results as well.
Zoran states that experts do not fold better (note that the uni teams had access to all of the public recipes and could, of course, have written their own), but folders can certainly learn as much as they feel is useful to them - I learnt about disulphide bonds recently and wasted several hours trying to get them to bond in a Foldit threaded-start puzzle :-).
An annual competition would be great. I like Vixy's idea of personal solo prizes, and of team prizes going to an international charity. I also think the game scores should be reset each year, or maybe after the final CASP results are known every two years, so that new people have a chance to get to the top, with a leaderboard being maintained over multiple years.
Also, it would be great to see more acknowledgement on the Foldit site of the individual folders who spent many, many hours on CASP. And an explanation of what the results mean in plain English. lol
I don't see why people can't be paid to fold if anyone is prepared to pay for them to do so. The game scoring is a huge incentive, but often people talk about the highest score not being the best. More feedback is required on that one.
The value to science should be the basis for any prizes.