Summary by Ioana

CrowdDB: Answering Queries with Crowdsourcing - Michael J. Franklin, Donald Kossmann, Tim Kraska, Sukriti Ramesh, Reynold Xin

The paper presents a relational query system (CrowdDB) that uses crowdsourcing to answer queries that otherwise could not be answered, due to incomplete data or the need for subjective comparisons. The system extends SQL DDL and DML with a number of operators through which human input is obtained, by generating small tasks for crowdsourcing platforms. The experiments presented in the paper were run in October 2010 over 25,000 HITs on AMT and measure response time and answer quality. The main findings are:
- the HIT group size influences performance (the response time of the same query decreases when the size of the HIT groups increases);
- there is a trade-off between throughput (HITs completed per unit of time) and the completion percentage;
- an (unsurprising) finding that increasing the payment per task increases the quality of the answers (in the example, from 1 cent to 4 cents).

Questions:
1. How could a quality feature (quality of responses) be integrated into CrowdDB?
2. How reliable are the obtained responses? Does the availability of the operators influence the response?

Summary by Patrick

CrowdDB is an attempt to answer queries using crowdsourcing. The idea is to use crowdsourcing for problems that are either too expensive for a machine to solve or too hard to even express as a machine task. Amazon Mechanical Turk is an online system that lets humans solve HITs (Human Intelligence Tasks) for a predefined payment. The authors propose a DBMS where queries can be converted into a user interface through which turkers can fill in missing data. The experiments seem fairly decent.

Summary by Adnan

Paper 1: CrowdDB

Not all queries fetch the correct or required result. When searching for information across database systems, the role of the query is very important; sometimes the system is not intelligent enough to fetch the required information and returns an error or an empty result. For example, using the name "I.B.M." in a query to find matches against I.B.M. might not return plausible matches such as IBM or International Business Machines. This depends on the matching algorithm used, but human intervention can still help find the exact or closest piece of information. Similarly, computer systems might not be good at finding a required picture by comparison even though they are good with numbers, while for humans the opposite is the case. Adding some human effort can therefore help obtain the required information, and the paper builds on these points to propose a more effective way of finding information. CrowdDB extends traditional query processing with human input. AMT (Amazon Mechanical Turk) handles many kinds of tasks, including complex ones such as translating or editing text. Human effort can take the form of finding new data and/or comparing data on the basis of a user-provided query. CrowdDB provides physical data independence for the crowd: it decides which parts of a query have to be handled by the crowd and which can be answered directly by the database. The paper also explains the challenges faced and proposes initial solutions for them. AMT has established its own Turk platform with its own terminology, such as HIT, Assignment and HIT group, describing an assigned job; the requester has certain operations over these, such as creating HITs and approving Assignments.
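To make the SQL DDL and DML extensions and the "I.B.M." matching example above more concrete, here is a minimal sketch of what a CrowdSQL-style schema and query could look like. It is not quoted from the paper: the table and column names are invented, and the CROWD column marker and the ~= (crowdsourced equality) comparison are assumptions meant to approximate the paper's syntax.

    -- A column marked CROWD has its missing values filled in by crowd
    -- workers; querying unknown values generates HITs on AMT (assumed syntax).
    CREATE TABLE company (
        name    VARCHAR PRIMARY KEY,
        hq_city CROWD VARCHAR
    );

    -- Entity matching left to human judgement: ~= stands for a crowdsourced
    -- equality test, so 'I.B.M.' can be matched to 'IBM' or
    -- 'International Business Machines' by workers (assumed operator).
    SELECT name, hq_city
    FROM   company
    WHERE  name ~= 'I.B.M.';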
The CrowdDB design considerations include Performance and Variability, Task Design and Ambiguity, Affinity and Learning, the Relatively Small Worker Pool, and Open vs. Closed World. The paper also explains the CrowdDB architecture in detail, followed by how crowdsourced data is placed into the SQL queries. Users and Turk workers get different user interfaces for their respective operations. The paper presents simple SQL DDL and DML extensions for crowdsourcing. Finally, it argues that combining the power of databases with human computation can help achieve quality results.

CrowdDB: Answering Queries with Crowdsourcing - Michael J. Franklin, Donald Kossmann, Tim Kraska, Sukriti Ramesh, Reynold Xin

Summary

Not all queries are answerable by a machine; some need human interaction or feedback to be answered because the answers are not in the database. To answer such queries, the authors built CrowdDB, which uses human input obtained through crowdsourcing. The traditional closed-world assumption of query processing does not hold for human input, and for human-oriented queries new operators are needed to integrate the crowdsourced data. High performance and good results depend on worker training, worker affinity, motivation and location, and all of these factors come at a significant cost. Traditional databases try to answer a query using only the information they already have, and if there is no information for the query they return no answer; that is why human intervention is needed to understand the query and try to answer it. The main contributions of the paper are as follows. The authors propose simple SQL schema and query extensions that enable the integration of crowdsourced data and processing. They present the design of CrowdDB, with new crowdsourced query operators and plan generation techniques that combine crowdsourced and traditional query operators. They also describe methods to automatically generate effective user interfaces for crowdsourced tasks.
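The "subjective comparison" and crowdsourced query operators mentioned in the summaries can be illustrated with one more hedged sketch, this time for crowdsourced ordering. The CROWDORDER function, the picture table and the question template are illustrative assumptions in the spirit of the paper's examples, not an exact quote.

    -- Subjective ranking: workers compare pictures pairwise and the result
    -- is ordered according to their votes (illustrative syntax).
    SELECT p
    FROM   picture
    WHERE  subject = 'Golden Gate Bridge'
    ORDER BY CROWDORDER(p, 'Which picture shows the %subject% better?');

In both sketches the query stays declarative; the plan generator decides which parts are answered from stored data and which are turned into small crowdsourcing tasks.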