1. How do web search tools make it more efficient to find information?
    Search tools constantly explore the web even when no one is asking a question, visiting old and new sites and recording the relevant keywords and links on each page. The search engine makes copies of this information and builds an index that can be searched quickly when a query comes in. It does not search through every web page the moment you submit a question; that would be far slower. Looking a term up in a sorted index can also use fast techniques like binary search.
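    The idea of indexing ahead of time can be sketched as a tiny inverted index, where a query becomes a fast lookup instead of a crawl of every page (the page names and text here are invented for illustration):

```python
# Hypothetical mini-corpus standing in for crawled web pages.
pages = {
    "page1": "cats and dogs",
    "page2": "dogs chase cats",
    "page3": "birds sing",
}

# Build the index ahead of time: word -> set of pages containing it.
index = {}
for url, text in pages.items():
    for word in text.split():
        index.setdefault(word, set()).add(url)

# A query is now a quick dictionary lookup, not a scan of every page.
def search(word):
    return sorted(index.get(word, set()))

print(search("cats"))  # ['page1', 'page2']
```

    The expensive work (visiting pages, extracting words) happens once, before any query arrives; answering the query itself is cheap.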
  2. When you type a word or phrase into the Google search engine, what is the search algorithm that is being used? Explain in your own words the process used by Google’s search engine.
    The algorithm first has to interpret the query so it knows what to look for. Then it scans its index for pages with matching keywords or phrases and compiles every page that might be useful to you. Next it ranks those pages by usefulness, checking whether the words appear multiple times or in a row, evaluating the source, and so on. Finally, it adjusts the order of the results, partly based on the potential to sell more advertising, among other factors.
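    The match-then-rank steps above can be sketched with a toy scorer that counts how often the query terms appear on each page and sorts best-first. Real engines weigh many more signals (links, phrase order, source quality, advertising); the pages and scoring rule here are assumptions for illustration only:

```python
# Invented mini-corpus for the sketch.
pages = {
    "page_a": "python tutorial for python beginners",
    "page_b": "advanced python internals",
    "page_c": "cooking recipes",
}

def rank(query):
    terms = query.lower().split()
    scores = {}
    for url, text in pages.items():
        words = text.lower().split()
        # Simple relevance signal: total occurrences of the query terms.
        score = sum(words.count(t) for t in terms)
        if score > 0:
            scores[url] = score
    # Highest-scoring pages first.
    return sorted(scores, key=scores.get, reverse=True)

print(rank("python"))  # ['page_a', 'page_b']
```

    Here "page_a" outranks "page_b" only because the word appears twice; a real ranker would combine dozens of such signals.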
  3. What is a captcha? How have the collective efforts of Internet users contributed to analyzing images through captchas?
    A captcha is a set of distorted letters that a user must retype to prove they are human rather than a spam bot trying to, for example, post links that inflate a site's PageRank. More constructively, systems like reCAPTCHA have pooled the answers of millions of users to help decipher scanned words that software could not read. Spam companies even hire people to solve captchas so their bots can keep spreading spam.
  4. “The architecture of human knowledge has changed as a result of search.” Do you agree? Explain your reasoning.

    I agree. People now depend on the Internet for every little tidbit of information. They also assume that the information is always correct. This changes how we access information (through computers and phones instead of books and encyclopedias) and how we perceive it (less emphasis on the source and more emphasis on its position in the search results).

  5. What are the differences between Figures 4.10 and Figure 4.11 in the book? Why are there differences even though they are both a Google search results page?
    Some of the results in 4.11 are in Chinese. The same thing is searched, but different results appear, because the first image uses the English version of Google while the second uses the Chinese version.
  6. How do you think mobile computing might have influenced web searches?
    I think it made having the top link even more important, because a phone screen shows only a couple of links at a time. Many people are less likely to scroll down if they think the information they need is already right in front of them.
  7. Would you retain your search history or delete it? Why?
    I guess I would retain it, but it depends on the case. Deleted search history can still be recovered by the company if need be, so deleting evidence of anything illegal probably wouldn't do much for me. But deleting gift ideas from a computer that my family also uses is a good idea, so my gift stays a surprise.
  8. Should a researcher place absolute trust in a search engine? Why or why not?
    Of course not! As the chapter says, search is easily manipulated, and is sometimes manipulated even unintentionally. This means misinformation can easily sit at the top of the page, so it's best to cross-reference any information that really matters to you.
  9. The authors claim “search is a new form of control over information” (p. 111) and “search is power” (p. 145). Why might it be important to talk about the social implications of searching on the Internet?
    Based on what (mis)information is most easily available, you are influencing the general opinion of the public, regardless of reality. This means if some evil company took over – or even if Google had a change of heart – they could totally pollute our search results with inaccurate propaganda. This puts information in a precarious place, and gives power to any clever manipulators out there.
  10. How have search trends been used to predict information? What are the positive and negative impacts of using trends to make predictions?
    Search trends are analyzed, and the algorithm assumes that if many people suddenly start searching for the same thing and clicking on the same few results, other people will want that information too. This can change which sites are given priority and higher PageRank. The algorithm will usually be right, which means trendy information that many people want becomes easier to find. However, when the algorithm does get it wrong, misinformation or simply irrelevant information becomes too accessible.
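    The spike-detection idea behind trend prediction can be sketched by comparing today's query counts against a recent baseline and flagging sudden jumps. The query log and the "3x baseline" threshold are invented assumptions, not how any real engine works:

```python
from collections import Counter

# Hypothetical query volumes: a recent baseline vs. today.
baseline = Counter({"weather": 50, "flu symptoms": 5, "news": 40})
today = Counter({"weather": 55, "flu symptoms": 60, "news": 42})

def trending(baseline, today, factor=3):
    # A query "trends" when today's volume is several times its baseline.
    return sorted(q for q in today if today[q] >= factor * baseline.get(q, 1))

print(trending(baseline, today))  # ['flu symptoms']
```

    A sudden jump like "flu symptoms" is useful to surface early, but the same mechanism can amplify a mistaken or manipulated spike just as quickly.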