In 2008, Google launched an ideation challenge called 'Project 10 to the 100'. The company openly asked 'What would help most?' and received more than 150,000 ideas from people all over the globe. As Google's 'Project 10 to the 100' and many other real-life examples show (e.g. Cisco's 'I-Prize' or Nokia's 'Tune Remake'), the number of submissions to crowdsourcing contests can be stunning. However, the majority of submitted ideas are usually of very low quality. According to Sturgeon's (1958) law, 90% of everything is crap. Firms are often overloaded by the 'noise' generated on crowdsourcing platforms: they face the problem of not being able to filter and select the best ideas, or only being able to do so with substantial effort. Recently, scholars have proposed integrating crowdsourcing communities into the evaluation process as a promising way to identify high-quality ideas. In this exploratory study, Georg Terlecki-Zaniewicz analyzed eleven crowdsourcing platforms and developed a framework that makes community-based evaluation of crowdsourced ideas both more efficient and more accurate.