Summary by Ioana

CrowdDB: Answering Queries with Crowdsourcing - Michael J. Franklin, Donald Kossmann, Tim Kraska, Sukriti Ramesh, Reynold Xin

The paper presents a relational query system (CrowdDB) that uses crowdsourcing to answer queries that otherwise cannot be answered, due to incomplete data or subjective comparisons. The system extends SQL DDL and DML with a number of operators through which human input is obtained, by generating small tasks for crowdsourcing platforms (see the CrowdSQL sketches at the end of this section). The experiments presented in the paper were run in October 2010, spanned over 25,000 HITs on AMT, and measured the response time and the quality of the answers.

The main findings are:
- the HIT group size influences performance (the response time of the same query decreases as the size of the HIT group increases);
- there is a tradeoff between throughput (HITs completed per unit of time) and the completion percentage;
- (an obvious one) increasing the payment for a task increases the quality of the answer (in the example, from 1 cent to 4 cents).

Questions:
1. How would a quality feature (quality of the responses) be integrated in CrowdDB?
2. How reliable are the obtained responses? Does the availability of the operators influence the response?

Summary by Patrick

CrowdDB is an attempt to answer queries using crowdsourcing. The idea is to use crowdsourcing for problems that are either too expensive for a machine to solve or too hard to even express in a machine-solvable form. Amazon Mechanical Turk is an online system that lets humans solve HITs (Human Intelligence Tasks) for a pre-defined payment. The authors propose a DBMS in which queries are converted into user interfaces through which turkers can fill in missing data. The experiments seem fairly decent.

Summary by Zahid

CrowdDB: Answering Queries with Crowdsourcing
Michael J. Franklin, Donald Kossmann, Tim Kraska, Sukriti Ramesh, Reynold Xin

Summary

Not all queries are answerable by a machine; some require human interaction or feedback, because the database holds no answers for them. To answer such queries, the authors built CrowdDB, which obtains human input through crowdsourcing. The traditional closed-world assumption of query processing does not hold for human input, so human-oriented queries need new operators that integrate the crowdsourced data. High performance and good results depend on worker training, worker affinity, motivation, and location, and all of these factors come at considerable cost. A traditional database answers a query using only the information it has; if it has no information relevant to the query, it returns an empty answer. That is why human intervention is needed to understand such queries and answer them.

The main contributions of the paper are as follows. The authors propose simple SQL schema and query extensions that enable the integration of crowdsourced data and processing (sketched below). They present the design of CrowdDB, with new crowdsourced query operators and plan generation techniques that combine crowdsourced and traditional query operators. They also describe methods to automatically generate effective user interfaces for crowdsourced tasks.
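
To make the schema extensions mentioned in the summaries concrete, here is a minimal CrowdSQL DDL sketch. It follows the CROWD annotations described in the paper (CROWD marks a column, or a whole table, whose missing values may be obtained from workers); the exact keyword placement and the Department/Professor schema are one reading of the paper's examples, so treat them as illustrative rather than verbatim.

    -- A regular table with one crowdsourced column: missing url
    -- values are obtained from workers at query time.
    CREATE TABLE Department (
      university STRING,
      name STRING,
      url CROWD STRING,
      phone STRING,
      PRIMARY KEY (university, name)
    );

    -- A CROWD table: entire missing tuples may be crowdsourced.
    CREATE CROWD TABLE Professor (
      name STRING PRIMARY KEY,
      email STRING UNIQUE,
      university STRING,
      department STRING
    );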
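
The query-side extensions can be sketched in the same spirit. CROWDEQUAL (written ~=) asks workers whether two values should be considered equal, which covers fuzzy or subjective matching, while CROWDORDER asks workers to rank tuples by a subjective criterion. The queries below are adapted from the paper's examples; table and column names are illustrative.

    -- CROWDEQUAL: let workers decide whether a stored profession
    -- matches the requested one.
    SELECT profession
    FROM Professor
    WHERE profession ~= "CS Research";

    -- CROWDORDER: let workers rank pictures by a subjective quality.
    SELECT p
    FROM Picture
    WHERE subject = "Golden Gate Bridge"
    ORDER BY CROWDORDER(p, "Which picture visualizes better %subject?");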