Saturday, February 23, 2013

Google Monopoly In Ruins - More Evidence That Google Blocks Content - People Are Awakening To The Fact That Google Places You Into A Closed Box Of Ignorance, Self-Delusion And Narcissism

google is in pain, google is closed source, google monopoly of information is destroyed

Google penalty

From Wikipedia, the free encyclopedia
Google penalty is a general term that refers to a negative impact on a website's search ranking based on updates to Google's search algorithms. The penalty can be an unfortunate by-product, or an intentional penalization of various black-hat SEO techniques.
Notable cases of Google penalties occurred in 2009, when many websites reported a drop of more than 50 places in search rankings based on the same search term; in 2011, when Overstock.com was specifically penalized for poor SEO techniques;[1] and in 2012 with the release of Google's Penguin algorithm.

Source: https://en.wikipedia.org/wiki/Google_penalty

Google Panda

From Wikipedia, the free encyclopedia
Google Panda is a change to Google's search results ranking algorithm that was first released in February 2011. The change aimed to lower the rank of "low-quality sites" or "thin sites",[1] and return higher-quality sites near the top of the search results. CNET reported a surge in the rankings of news websites and social networking sites, and a drop in rankings for sites containing large amounts of advertising.[2] This change reportedly affected the rankings of almost 12 percent of all search results.[3]

Soon after the Panda rollout, many websites, including Google's webmaster forum, became filled with complaints of scrapers/copyright infringers getting better rankings than sites with original content. At one point, Google publicly asked for data points[4] to help detect scrapers better.

Google's Panda has received several updates since the original rollout in February 2011, and the effect went global in April 2011. To help affected publishers, Google published an advisory on its blog,[5] giving some direction for self-evaluation of a website's quality. Google has provided a list of 23 bullet points on its blog answering the question of "What counts as a high-quality site?" that is supposed to help webmasters "step into Google's mindset".[6]

The Panda process

Google Panda was built through an algorithm update that used artificial intelligence in a more sophisticated and scalable way than previously possible.[7] Human quality testers rated thousands of websites based on measures of quality, including design, trustworthiness, speed and whether or not they would return to the website. Google's new Panda machine-learning algorithm was then used to look for similarities between websites people found to be high quality and low quality.
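The process described above is, in essence, supervised machine learning: the human ratings become training labels, and a model generalizes them to sites the raters never saw. Below is a minimal Python sketch of that idea, assuming scikit-learn; the feature names and data are invented for illustration, since Google's actual signals and model are not public.

    # Toy Panda-style pipeline: fit a classifier on human quality ratings,
    # then score an unseen site. All features and numbers are invented.
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical per-site features:
    # [design_score, trust_score, load_time_seconds, return_visit_rate]
    rated_sites = [
        [0.9, 0.8, 1.2, 0.70],   # rated high quality by human testers
        [0.2, 0.1, 6.5, 0.05],   # rated low quality
        [0.8, 0.9, 0.9, 0.60],
        [0.3, 0.2, 4.0, 0.10],
    ]
    human_labels = [1, 0, 1, 0]  # 1 = high quality, 0 = low quality

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(rated_sites, human_labels)

    # Score a site the human raters never looked at.
    unseen_site = [[0.4, 0.3, 3.5, 0.2]]
    print(model.predict(unseen_site))        # predicted class
    print(model.predict_proba(unseen_site))  # per-class confidence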
Many new ranking factors have been introduced to the Google algorithm as a result, while older ranking factors like PageRank have been downgraded in importance. Google Panda is updated from time to time, and the algorithm is run by Google on a regular basis. On April 24, 2012, the Google Penguin update was released, affecting a further 3.1% of all English-language search queries and highlighting the ongoing volatility of search rankings.
On September 18, 2012, a Panda update was confirmed by the company on its official Twitter account, which announced, "Panda refresh is rolling out—expect some flux over the next few days. Fewer than 0.7% of queries noticeably affected."[8]
Another Panda update began rolling out on January 22, 2013, affecting about 1.2% of English queries.[9]

Significant differences between Panda and previous algorithms

Google Panda affects the ranking of an entire site or a specific section rather than just the individual pages on a site.[10]
In March 2012, Google updated Panda and stated that it was deploying an "over-optimization penalty" in order to level the playing field.[11]

Panda Recovery

Google says it only takes a few poor-quality or duplicate-content pages to hold down traffic on an otherwise solid site. Google recommends either removing those pages, blocking them from being indexed by Google, or rewriting them.[12] However, Matt Cutts, the head of Google's webspam team, warns that rewriting duplicate content so that it is original may not be enough to recover from Panda; the rewrites must be of sufficiently high quality. High-quality content brings "additional value" to the web. Content that is generic, non-specific, and not substantially different from what is already out there should not be expected to rank well: "Those other sites are not bringing additional value. While they’re not duplicates they bring nothing new to the table."[13]
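For the blocking option, the standard mechanisms are a robots.txt rule or a per-page noindex meta tag. The Python sketch below, using hypothetical /tag/ and /print/ paths, checks a robots.txt rule set with the standard-library urllib.robotparser; it illustrates the general crawling mechanism, not Google's internal handling.

    # Check that thin/duplicate sections are blocked from crawling while
    # the rest of the site stays reachable. Paths are hypothetical.
    # The per-page alternative is a tag in the HTML head:
    #   <meta name="robots" content="noindex">
    from urllib.robotparser import RobotFileParser

    rules = [
        "User-agent: Googlebot",
        "Disallow: /tag/",
        "Disallow: /print/",
    ]

    parser = RobotFileParser()
    parser.parse(rules)

    print(parser.can_fetch("Googlebot", "/tag/widgets"))   # False: blocked
    print(parser.can_fetch("Googlebot", "/articles/new"))  # True: crawlable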

Source: https://en.wikipedia.org/wiki/Google_Panda

Google Penguin

From Wikipedia, the free encyclopedia
Google Penguin is a code name[1] for a Google algorithm update that was first announced on April 24, 2012. The update is aimed at decreasing the search engine rankings of websites that violate Google’s Webmaster Guidelines[2] by using what are now declared black-hat SEO techniques, such as keyword stuffing,[3] cloaking,[4] participating in link schemes,[5] deliberately creating duplicate content,[6] and others.
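As a rough illustration of how keyword stuffing lends itself to mechanical detection, the Python sketch below computes the keyword density of a snippet of page text; the 20% flag threshold is an invented example, not a known Google value.

    # Flag words whose share of the total text is implausibly high.
    import re
    from collections import Counter

    def keyword_density(text):
        """Return each word's share of the total word count."""
        words = re.findall(r"[a-z']+", text.lower())
        counts = Counter(words)
        return {word: count / len(words) for word, count in counts.items()}

    stuffed = ("cheap shoes cheap shoes buy cheap shoes online "
               "cheap shoes best cheap shoes")
    for word, share in sorted(keyword_density(stuffed).items(),
                              key=lambda kv: -kv[1]):
        flag = "  <- suspicious" if share > 0.20 else ""
        print(f"{word}: {share:.0%}{flag}")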

Naming the algorithm update

The Penguin update went live on April 24, 2012. However, Google did not announce an official name for it until two days later.[1]

Penguin’s effect on Google search results

By Google’s estimates,[7] Penguin affects approximately 3.1% of search queries in English, about 3% of queries in languages like German, Chinese, and Arabic, and an even bigger percentage of them in "highly spammed" languages. On May 25, 2012, Google unveiled another Penguin update, called Penguin 1.1. This update, according to Matt Cutts, was supposed to affect less than one-tenth of a percent of English searches. The guiding principle for the update was to penalise websites using manipulative techniques to achieve high rankings. Penguin 3 was released on October 5, 2012 and affected 0.3% of queries.[8]

The differences between Penguin and previous updates

Before Penguin, Google released a series of algorithm updates called Panda[9] with the first appearing in February 2011. Panda aimed at downranking websites that provided poor user experience. The algorithm follows the logic by which Google’s human quality raters[10] determine a website’s quality.
In January 2012, the so-called page layout algorithm update[11] was released, which targeted websites with little content above the fold.[12]
The strategic goal that Panda, Penguin, and the page layout update share is to display higher-quality websites at the top of Google’s search results. However, sites that were downranked as the result of these updates have different sets of characteristics. The main target of Google Penguin is spamdexing (including link bombing).

Google’s Penguin feedback form

Two days after the Penguin update was released, Google prepared a feedback form[13][14] designed for two categories of users: those who want to report web spam that still ranks highly after the search algorithm change, and those who think that their site was unfairly hit by the update. Google also offers a reconsideration form through Google Webmaster Tools. Of the roughly 700,000 messages Google sent to site owners, Matt Cutts explained that over 600,000 concerned black-hat practices and fewer than 25,000 concerned unnatural links.

Source: https://en.wikipedia.org/wiki/Google_Penguin

SafeSearch

From Wikipedia, the free encyclopedia
SafeSearch is a feature of Google Search that acts as an automated filter of pornography and potentially offensive content.
A 2003 report by Harvard Law School's Berkman Center for Internet & Society stated that SafeSearch excluded many innocuous websites from search-result listings, including ones created by the White House, IBM, the American Library Association and Liz Claiborne.[1] On the other hand, many pornographic images slip through the filter, even when "innocent" search terms are entered. Blacklisting certain search terms is hindered by homographs (e.g. "beaver"),[2] blacklisting certain URLs is rendered ineffective by the changing URLs of porn sites, and software to tag images with copious amounts of flesh tones as porn is problematic because there are a variety of skin tones and pictures of babies tend to have a lot of flesh tones.[3] Google's ability to filter porn has been an important factor in its relationship with the People's Republic of China.[4]
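The flesh-tone heuristic mentioned above can be sketched in a few lines of Python (assuming the Pillow imaging library; the RGB rule is a rough, invented approximation). It also makes the paragraph's point concrete: a fixed pixel-color rule inevitably misfires across the range of real skin tones and on innocent photos such as baby pictures.

    # Estimate what fraction of an image's pixels fall in a crude
    # "skin tone" color range. Thresholds are illustrative only.
    from PIL import Image

    def looks_like_skin(r, g, b):
        # Reddish pixels with moderate green/blue components.
        return r > 95 and g > 40 and b > 20 and r > g and r > b

    def flesh_tone_ratio(path):
        image = Image.open(path).convert("RGB")
        pixels = list(image.getdata())
        skin = sum(1 for r, g, b in pixels if looks_like_skin(r, g, b))
        return skin / len(pixels)

    # A filter might flag images above some arbitrary cut-off, e.g.:
    # if flesh_tone_ratio("photo.jpg") > 0.4: mark_as_explicit()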
On 11 November 2009 Google introduced SafeSearch Lock,[5] which allows users with Google accounts to lock on the "Strict" mode of SafeSearch in Google's Web, image and video searches. Once configured, the user can log out of their Google account and the setting will stick to prevent any change to the filtering level.
There are alternative search sites which provide an equivalent to the Google.com homepage, but with SafeSearch enabled by default.[6]
On 12 December 2012 Google removed the option to turn off the filter entirely, forcing users to enter more specific search queries to get adult content.[7][8][9]

References

  1. Benjamin Edelman (April 14, 2003). "Empirical Analysis of Google SafeSearch". Harvard University. Retrieved February 3, 2013.
  2. "Canada's The Beaver magazine renamed to end porn mix-up". AFP. January 12, 2010. Retrieved February 3, 2013.
  3. Paul Festa (July 2, 2001). "Porn sneaks past search filters". CNET News. Retrieved February 3, 2013.
  4. Owen Fletcher (September 6, 2009). "Google Porn Filter Gained China's Thumbs-up". PC World. Retrieved February 3, 2013.
  5. Pete Lidwell (November 11, 2009). "Locking SafeSearch". Google Official Blog. Retrieved February 3, 2013.
  6. Dino Grandoni (December 12, 2012). "Google Porn Just Got More Difficult To Search For". Huffington Post. Retrieved February 3, 2013.
  7. Casey Newton (December 12, 2012). "Google tweaks image search to make porn harder to find". CNET News. Retrieved February 3, 2013.
  8. Matthew Panzarino (December 12, 2012). "Google tweaks image search algorithm and SafeSearch option to show less explicit content". TNW. Retrieved February 3, 2013.
  9. Josh Wolford (December 16, 2012). "Google No Longer Allows You to Disable SafeSearch, and That Makes Google Search Worse". WebProNews. Retrieved February 3, 2013.
Source: https://en.wikipedia.org/wiki/SafeSearch

Redirect viruses have recently been exploiting the blekko name. Users report a problem they call the blekko virus, which redirects them away from Google search; they describe it as a malicious browser hijacker built to redirect their browser traffic to other sites. Malware has been seen spreading through Blekko's paid results. Blekko claims to be a legitimate website/computer program. In addition, it advertises fake products that could steal your personal information.

Blekko is able to block you from resetting your browser settings, to make sure blekko.com remains your browser's home page. It also causes pop-up windows to appear containing phishing sites and falsified information, and it adds favorites and bookmarks without the user's knowledge. When a user is infected with the Blekko Redirect virus, even searches through Google, Firefox, or Bing are redirected to Blekko.com. If you think you are infected with the Blekko Redirect virus, we recommend performing a full system scan with a reputable antimalware program to ensure that Blekko Redirect is removed.
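As a purely illustrative diagnostic (not a removal tool), the Python sketch below checks one common hijack vector, the system hosts file, for entries that silently remap search domains; the file paths and domain list are assumptions chosen for the example.

    # Scan the hosts file for non-loopback mappings of search domains,
    # a classic sign of a browser/DNS hijack.
    from pathlib import Path

    HOSTS_PATHS = [
        Path("/etc/hosts"),                              # Linux/macOS
        Path(r"C:\Windows\System32\drivers\etc\hosts"),  # Windows
    ]
    SEARCH_DOMAINS = {"google.com", "www.google.com",
                      "bing.com", "www.bing.com"}

    def suspicious_entries():
        for hosts in HOSTS_PATHS:
            if not hosts.exists():
                continue
            for raw in hosts.read_text(errors="ignore").splitlines():
                line = raw.split("#", 1)[0].strip()  # drop comments
                parts = line.split()
                # Any mapping of a search domain away from loopback is a red flag.
                if len(parts) >= 2 and parts[0] not in ("127.0.0.1", "::1"):
                    if any(name.lower() in SEARCH_DOMAINS for name in parts[1:]):
                        yield hosts, line

    for path, entry in suspicious_entries():
        print(f"{path}: {entry}")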
