Thursday, May 30, 2013

SEO Bee Tree | "Hilltop Algorithm"

Source : seoisrael.com
Category : SEO Bee Tree

At the end of 2003, the Google search engine started using a new algorithm for ranking its search results - the Hilltop algorithm. Opinions vary as to exactly when this new algorithm was put to use (whether during the Florida Google Dance or before), but there is general agreement that it is already in use.

The Meaning of the Hilltop Algorithm

The Hilltop algorithm was created by Krishna Bharat from California between 1999 and 2000. In January 2001, he patented a related algorithm in partnership with Google. The patented algorithm is different from the one presented in the original article, and it is not clear which of the two is actually in use. Today, Krishna works for Google. Although Google never officially announced that they are using either of these algorithms, it seems that they couldn't overlook their advantages.

In the past, Google was interested in the number and quality of incoming links to a page, not in where those links came from; today the game has changed significantly.

There is a new webpage status called Expert Documents. When an expert document links to a webpage, this serves as a "vote of confidence" for the target page, and as a result that page's rank increases in the search results. Different search words will have different expert documents.

A page that didn't receive a link (a vote of confidence) from at least two expert documents will not receive a score from the algorithm at all! It may still appear in the search results because of other factors (e.g. PageRank and on-page factors), but its position will clearly suffer.

The key difference between the original article and the registered patent is in the way an expert document is selected.

Pre-defined expert document list

If you read the article written by Krishna Bharat (you can read the original Hilltop article here), it looks like a search based on the new algorithm consists of three steps:

Locating expert documents: Constructing a pre-defined list of expert documents from available webpages. This list is general and relatively permanent. Next, affiliated sites are removed from the list (see below).

Correlating expert documents with the search query: When a search is executed, the query is run against the expert document list, creating a topic-specific sub-list of the full expert documents list.

Assigning a LocalScore to the search: A LocalScore is given to every page that comes up in the search, according to the anchor text of incoming links from pages included in the expert document sub-list created in the previous step. A page with fewer than two links from expert documents there receives no score whatsoever from the algorithm.
Hilltop algorithm
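The scoring step can be sketched in a few lines of Python. This is a toy illustration, not the patented formula: the shape of expert_links and the point values are assumptions made for the example; the key rule it does capture is that fewer than two expert votes means no score at all.

```python
def local_score(target_url, expert_links):
    # expert_links: (expert_doc_id, anchor_text, linked_url) tuples drawn
    # from the query-specific expert sub-list built in step 2.
    votes = [(doc, anchor) for doc, anchor, url in expert_links if url == target_url]
    # Hilltop's key rule: fewer than two distinct expert "votes" -> no score.
    if len({doc for doc, _ in votes}) < 2:
        return 0
    # Illustrative scoring only: one point per vote, plus a bonus for a
    # non-empty anchor (the real patent weighs anchors against the query).
    return sum(1 + (1 if anchor else 0) for _, anchor in votes)
```

For example, a page linked by two expert documents with relevant anchor text scores points, while a page linked by only one expert document scores zero regardless of that link's quality.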

Expert documents are defined as pages on a certain topic that were created specifically to direct visitors to information on that topic, meaning they contain links to other sites that are not affiliated with them. Examples of such pages are index pages, institutional websites (.org, .gov) and university websites (.edu). This means it is no longer enough to receive links from commercial sites as before. You need links from authoritative pages, which are naturally harder to come by.

The importance of correct registration in indexes and other expert websites has increased greatly, while the importance of links from "simple" sites and pages has decreased. An additional result is the greater importance of the anchor text of links on expert documents that point to your site.

Search and ReRank

If you look at the patent that Krishna Bharat registered in cooperation with Google (which can be called the Hilltop Patent), you can see that under the new algorithm the search consists of three steps:

Initial search: Performing an initial search on all search words (similar to the original pre-algorithm search).

Filtering affiliated pages: Removing affiliated pages (see below).

Assigning a LocalScore: Assigning a LocalScore to pages on the list according to incoming links from pages that are also found on the list. The basic assumption is that the listed pages are the most relevant to this search, therefore only links from these pages will be counted.

Affiliated Pages

A clean-up of the expert pages includes removing pages that share the same domain (domain.com, domain.co.il, two.domain.com, www.domain.com) or pages located in the same IP group (the first three octets of the IP address are identical, e.g. 212.125.23.XXX).
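This affiliation test can be sketched in Python. It is a minimal illustration assuming hostnames and IPs are already known; the list of generic domain suffixes to ignore is an assumption for the example:

```python
def same_ip_group(ip_a, ip_b):
    # Hosts whose first three IP octets match are treated as affiliated.
    return ip_a.split(".")[:3] == ip_b.split(".")[:3]

def shared_domain_token(host_a, host_b):
    # "domain.com", "two.domain.com" and "domain.co.il" all share "domain".
    generic = {"www", "com", "co", "org", "net", "il", "uk", "edu", "gov"}
    tokens_a = set(host_a.lower().split(".")) - generic
    tokens_b = set(host_b.lower().split(".")) - generic
    return bool(tokens_a & tokens_b)

def affiliated(page_a, page_b):
    # Each page is a (hostname, ip_address) pair.
    host_a, ip_a = page_a
    host_b, ip_b = page_b
    return shared_domain_token(host_a, host_b) or same_ip_group(ip_a, ip_b)
```

So two.domain.com and www.domain.co.il count as affiliated via the shared "domain" token, and two unrelated hosts on 212.125.23.XXX count as affiliated via the IP group.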

What about marginal search terms?

That depends on which Hilltop version you believe in. If you go by the second version, it looks like all it takes is an initial list that is big enough (after page filtering). If there are not enough websites, the algorithm will not be activated.

If you believe in the first version, then when a query does not appear among the popular search terms for which expert documents were prepared (or found in real time), Google falls back to the old search method (that is, without Hilltop).

How do you deal with Hilltop?

First, keep doing what you were doing so far - optimizing your site and building links for PR improvement. The significance of these factors may have decreased, but they are still very significant.

The key to dealing with Hilltop is recognizing the websites and webpages that are defined as experts in your website's field. How do you find them? That depends on which version you believe in. If we are dealing with a pre-defined list, then DMOZ will be one of the experts, and so will Yahoo!. Other indexes may appear to be experts, but are not necessarily so.

According to the second version, you need to perform a search with certain keywords, and try to get links from the webpages that appear in the search results. In addition, you should check the incoming links to pages that appear at the top of the search results, and try to get the same links.

Tuesday, May 28, 2013

SEO Bee Tree | "How to Recover from Penguin 2.0"

Source :  searchengineguide.com
Category : SEO Bee Tree

Penguin 2.0 affects approximately 2.3% of regular US-English queries, and many SEOs have been waiting to see whether the traffic to their websites will be negatively affected.
Recover from Penguin

While many black-hat practitioners that were affected by Penguin 1.0 can only hope their rankings don't plunge further into the depths of obscurity, the rest of us are crossing our fingers and hoping that we made it through relatively unscathed.

However, if your website was affected by Penguin 2.0, there are a few things you can do to ensure that your website gets back on the straight and narrow. And remember: the majority of positive comebacks after Google algorithm updates are from websites genuinely wanting to provide the best possible website for users instead of those just looking to skirt by until the next Google Zoo animal is released.

If you were affected by Penguin 2.0, read on. I've done my best to guess what Google has changed, based on information from Matt Cutts prior to the launch of Penguin 2.0, as well as what Google aimed to solve with Penguin 1.0. As the SEO community performs tests and learns more information about specifics of Penguin 2.0, the collective knowledge of the update may change. For now, if you've been hit and are looking for likely answers, here's what you need to do to recover your rankings.

1. Learn From Your Mistakes

The previous Google algorithm updates have already told us what Google is looking for: great content, natural link profiles (both inbound and outbound), and an organic, natural link velocity that steadily increases month by month. If you were penalized by previous Panda or Penguin updates, then it's time to make the changes necessary to get back into Google's good graces.

If Panda and Penguin 1.0 were slaps on the hand, then consider 2.0 the wake-up call you need to get your website completely back on track. The next updates will be even more intuitive and stringent, so making those big content-focused changes now can prevent even further lost traffic in the future.

2. Stop Shady Practices

A crucial step toward recovering from Penguin 2.0 is to stop all "shady" practices as determined by Google. This includes unnatural link building, and spammy or keyword-stuffed onsite content.

Besides looking for these types of black hat practices, Google is getting better at recognizing when sites are trying to be deceitful, especially when it comes to targeting typically-spammy search queries, participating in link swapping schemes, and publishing content that contains keyword stuffing or unnatural links. The easiest step toward recovering from Penguin 2.0 is to stop these sorts of practices altogether.

3. Pick up The Pieces

The bridge between destruction and normalcy is the recovery phase, which may take weeks or months, depending on the website and the level of black hat SEO practices Google believes it was involved in.

If you believe that your website was penalized because of shady link building practices (which is likely if you were hit by Penguin 2.0), you'll need to identify what links could be causing you harm, and then attempt to have them removed. Any links you fail to get removed, be sure to disavow.

To identify what links could be causing you harm, use a tool like Open Site Explorer or Majestic SEO. Alternatively, have a professional perform a link profile audit to identify harmful links for you. Here's a step-by-step walkthrough I wrote that describes how to audit your own link profile.

You can disavow links through Google Webmaster Tools. This tool should be used with caution and only after personally reaching out to these websites to get the links removed. I recommend disavowing all harmful links (even ones that have been successfully removed); it can't hurt.
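A small helper can assemble the file in the format the disavow tool accepts: one full URL per line, or a "domain:" line to disavow every link from a host, with "#" lines as comments. The function name and inputs here are illustrative:

```python
def build_disavow_file(bad_urls, bad_domains):
    # Google's disavow file format: "# comment" lines, "domain:example.com"
    # lines to disavow a whole host, and full URLs for individual links.
    lines = ["# Links we asked webmasters to remove, without success"]
    lines += [f"domain:{d}" for d in sorted(bad_domains)]
    lines += sorted(bad_urls)
    return "\n".join(lines) + "\n"
```

Save the returned text as a plain-text file and upload it through the disavow tool in Google Webmaster Tools.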

Besides attempting to clean up your external link profile, you should also do a comprehensive audit of all pages on your website and make a list of what needs to be done internally. Work on publishing only high-quality content that provides value to the reader and includes only natural anchor text and useful internal linking. Make sure you are correctly set up for Google Authorship, and never use any keyword stuffing or unnatural language that may make Google think you are attempting to make your content appear to be something it's not.

4. Put Best Practices in Place

Once you begin to clean up the damage, it's time to set a plan for moving forward. This can include a recovery plan, depending on how extensive your website is, which may include targeted goals for rewriting all website content or the steady removal of any lingering inbound links acquired via black-hat methods.

After these goals are set, the emphasis should be on strictly implementing best practices for all future websites and content creation. All content and website links should:

  •     Use natural language
  •     Use natural anchor text, and only when it provides value
  •     Focus on providing the highest quality content
  •     Create value for humans instead of the search engines
  •     Contain NO black hat SEO practices

The SEO team should work closely with the content developers to flip the focus and message back onto providing quality over quantity, as well as a natural variety across content platforms. For instance, twenty poorly written articles about buying dresses online for an e-commerce clothing website aren't going to fare as well as four well-written, seasonal guide blog posts, an e-book on style or fashion, and a robust social media profile all about fashion and style.

For many SEOs who worked in the era when anything went, these algorithm updates are an unsettling change that has been difficult to adapt to. However, when it comes to SEO, websites should take on an "adapt or die" mentality, as search engines are becoming more intuitive as to both what a user is searching for online and the quality of the websites that want to provide it.

Monday, May 27, 2013

SEO Bee Tree | "Google Penguin 2.0 Update is Live"

Source : http://searchenginewatch.com
Category : SEO Bee Tree

Webmasters have been watching for Penguin 2.0 to hit the Google search results since Google's Distinguished Engineer Matt Cutts first announced that there would be the next generation of Penguin in March. Cutts officially announced that Penguin 2.0 is rolling out late Wednesday afternoon on "This Week in Google".

"It's gonna have a pretty big impact on web spam," Cutts said on the show. "It's a brand new generation of algorithms. The previous iteration of Penguin would essentinally only look at the home page of a site. The newer generation of Penguin goes much deeper and has a really big impact in certain small areas."

In a new blog post, Cutts added more details on Penguin 2.0, saying that the rollout is now complete and affects 2.3 percent of English-U.S. queries, and that it affects non-English queries as well. Cutts wrote:

We started rolling out the next generation of the Penguin webspam algorithm this afternoon (May 22, 2013), and the rollout is now complete. About 2.3% of English-US queries are affected to the degree that a regular user might notice. The change has also finished rolling out for other languages world-wide. The scope of Penguin varies by language, e.g. languages with more webspam will see more impact.

This is the fourth Penguin-related launch Google has done, but because this is an updated algorithm (not just a data refresh), we’ve been referring to this change as Penguin 2.0 internally. For more information on what SEOs should expect in the coming months, see the video that we recently released.

Webmasters first got a hint that the next generation of Penguin was imminent when back on May 10 Cutts said on Twitter, “we do expect to roll out Penguin 2.0 (next generation of Penguin) sometime in the next few weeks though.”

Matt Cutts Tweets About Google Penguin

Then in a Google Webmaster Help video, Cutts went into more detail on what Penguin 2.0 would bring, along with what new changes webmasters can expect over the coming months with regards to Google search results.

He detailed that the new Penguin would specifically target black hat spam, but would have a significantly larger impact on spam than the original Penguin and subsequent Penguin updates have had.

Google's initial Penguin update originally rolled out in April 2012, and was followed by two data refreshes of the algorithm last year – in May and October.

Twitter is full of people commenting on the new Penguin 2.0, and there should be more information in the coming hours and days as webmasters compare SERPs that have been affected and what kinds of spam specifically got targeted by this new update.

Friday, May 24, 2013

SEO Bee Tree | "How Search Engines Operate: A Beginner's Guide"

Source : seomoz.org
Category : SEO Bee Tree

Search engines have two major functions - crawling & building an index, and providing answers by calculating relevancy & serving results.

Imagine the World Wide Web as a network of stops in a big city subway system.

Each stop is its own unique document (usually a web page, but sometimes a PDF, JPG or other file). The search engines need a way to “crawl” the entire city and find all the stops along the way, so they use the best path available – links.


 “The link structure of the web serves to bind all of the pages together.”

Through links, search engines' automated robots, called "crawlers" or "spiders," can reach the many billions of interconnected documents.

Once the engines find these pages, they next decipher the code from them and store selected pieces on massive hard drives, to be recalled later when needed for a search query. To accomplish the monumental task of holding billions of pages that can be accessed in a fraction of a second, the search engines have constructed datacenters all over the world.
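The crawl-and-index idea above can be sketched with an in-memory link graph standing in for real page fetches. This is a toy illustration of following links outward from a starting "stop," not how production crawlers are built:

```python
from collections import deque

def crawl(start, link_graph):
    # link_graph maps each "stop" (URL) to the links found on that page,
    # a stand-in for fetching and parsing real documents.
    index, queue, seen = {}, deque([start]), {start}
    while queue:
        url = queue.popleft()
        # Store a selected piece of the document, as the engines do;
        # here we simply record the page's outgoing links.
        index[url] = link_graph.get(url, [])
        for link in index[url]:
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return index
```

Starting from one page, the crawler reaches every document connected to it by links, which is exactly why pages with no incoming links are invisible to the engines.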

These monstrous storage facilities hold thousands of machines processing large quantities of information. After all, when a person performs a search at any of the major engines, they demand results instantaneously – even a 1 or 2 second delay can cause dissatisfaction, so the engines work hard to provide answers as fast as possible.

Search engines are answer machines. When a person looks for something online, it requires the search engines to scour their corpus of billions of documents and do two things – first, return only those results that are relevant or useful to the searcher’s query, and second, rank those results in order of perceived usefulness. It is both “relevance” and “importance” that the process of SEO is meant to influence.

To a search engine, relevance means more than simply finding a page with the right words. In the early days of the web, search engines didn’t go much further than this simplistic step, and their results suffered as a consequence. Thus, through evolution, smart engineers at the engines devised better ways to find valuable results that searchers would appreciate and enjoy. Today, hundreds of factors influence relevance, many of which we’ll discuss throughout this guide.

How Do Search Engines Determine Importance?

Currently, the major engines typically interpret importance as popularity – the more popular a site, page or document, the more valuable the information contained therein must be. This assumption has proven fairly successful in practice, as the engines have continued to increase users’ satisfaction by using metrics that interpret popularity.


Popularity and relevance aren’t determined manually. Instead, the engines craft careful, mathematical equations – algorithms – to sort the wheat from the chaff and to then rank the wheat in order of tastiness (or however it is that farmers determine wheat’s value).
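The best-known such popularity algorithm is PageRank, which scores a page by the rank of the pages linking to it. A minimal power-iteration sketch follows; the damping factor and iteration count are conventional textbook defaults, not Google's actual values:

```python
def pagerank(links, damping=0.85, iters=50):
    # links: {page: [pages it links to]}; every page appears as a key.
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        # Each page keeps a "teleport" share, then receives shares
        # from the pages that link to it.
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            targets = outs if outs else pages  # dangling pages spread evenly
            share = damping * rank[p] / len(targets)
            for q in targets:
                new[q] += share
        rank = new
    return rank
```

In a graph where page "a" is linked by two pages and page "c" by none, "a" ends up with the higher score: popularity measured through links, with no human judgment involved.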

SEO Bee Tree | "Top 18 SEO Tips and Tricks for Beginners"

Source :  webdeveloper
Category : SEO Bee Tree

It’s not easy to say what the top 18 best SEO tips really are, especially with all the so-called “experts” claiming to know the latest SEO tricks and “secrets Google doesn’t want you to know about.” It would take you days, even weeks to sort through this mess. So we thought we’d make it easy for beginners at SEO and condense all the clutter into what we call our Top 18 SEO tips and tricks.

Keep in mind, though, that we did not list these in any particular order. #1 will be just as important as #18 and so on. Our goal is not to say which tips are more or less valuable, but rather, to give beginners a starting point in their SEO efforts. These are all important and they apply to any industry or organization.

1.  Don’t Give Up
I know I mentioned at the start that these top 18 SEO tricks and tips are in no particular order, but I'm going to have to take that back, because this one is of prime importance.
I can’t count the number of times I’ve seen beautiful websites go to waste because their owners just gave up on the SEO game. Too much work, too much time, too much money, so they say.

2.  Read the Top Online SEO forums
This is probably the simplest SEO tip on this top 18 list. It doesn’t take any special software or expert knowledge. All you have to do is read.
Although the top SEO experts tend to stay away from the forums (they’re too busy running their own firms!), there are still plenty of knowledgeable people who are more than happy to help you succeed in your SEO efforts. There is an absolute goldmine of free SEO tips and tricks available on SEO forums. If you’re paying attention, you’ll be able to recognize the experts from the novices. Once you do, pay extra attention to each of their posts, as they’ve been through it all and back.

3.  Focus on Link Diversity
Recently, research out of SEOmoz confirmed what many top SEOs already knew: it’s not just how many links you receive, it’s from how many different websites. Google loves to see link diversity in your backlink profile, so give them what they want to see and try to score links from as many different sites as possible.
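Measuring link diversity amounts to counting unique referring hosts rather than raw link volume. A minimal sketch, with the function name chosen for the example:

```python
from urllib.parse import urlparse

def linking_domains(backlink_urls):
    # Diversity metric: count distinct referring hosts, not total links.
    return len({urlparse(u).netloc for u in backlink_urls})
```

By this measure, three links from one site count as a single referring domain, while three links from three different sites count as three, which is the profile Google prefers to see.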

4. Bing SEO = Google SEO
A common mistake SEO bloggers make is they think they have to optimize their website for Google and Bing separately, somehow thinking that these two search engines have drastically different ranking algorithms. The truth is that yes, their algorithms are different, but it’s no use trying to use two separate strategies to rank your sites on both. What works for one will probably work for the other, and most of their algorithm differences are things you can’t control anyway (keywords in your domain name, for example).

5. SEO Comes Before Web Design
Web design is inherently cool. SEO…not so much.
And although we’re the first to admit that web design is extremely important, it’s a good idea to take care of your SEO before you work on your web design, or at least, your web designer should have a professional SEO at his side to consult him.
This simple SEO tip will save you hours of headaches later down the line, as it’s an absolute pain to go back and have to change your core web design to make it search-engine friendly. Much better to fix this at the start.

6. SEO is Always Tough for Beginners
It can be a rough first few weeks and months for beginners who want to learn SEO. There’s a lot of new terminology to memorize (what the heck is Joomla?), lots of outdated information (do meta keywords still matter?), and the competition is getting fiercer by the day. But here’s the good news: every SEO expert today was a beginner at some point in the past, and they reached guru status by a constant focus on improving their SEO game. No one is born with SEO knowledge; it doesn’t “come naturally” to anyone since SEO itself was invented years after each of us was born.

7. SEO Copywriting is Everything
Link-building gets all the attention, but just remember that the only reliable information Google has about your site is the content you write on it. Think of your blog posts as little spider webs, each of them serving to “catch” another segment in your target audience. Let’s say your site is about cars. There are sports car fans, luxury car fans, convertible fans, monster truck fans, etc. It would be impossible to attract all of these different types of fans with a site just about “cars”–that’s too generic. But writing a blog post about sports cars, another post about monster trucks, etc, will help you capture all of these different types of fans.

8.  Google SEO is a Slow Process
This is one of the simpler SEO tips to understand, although one of the most frustrating. No matter how well you take heed of Google SEO tips and tricks, the process is going to be slow. No one is able to rank a site #1 overnight unless the keyword is completely useless. No one. And this makes sense for Google, too, as it prevents the SERPs from fluctuating too much. You have to prove yourself for more than just a week for Google to reward you with a top 10 ranking. And if we’re talking about the top keywords, it very well might take you or your SEO company an entire year. This goes for Google as well as Yahoo and Bing, and for any CMS: WordPress, Joomla, whatever. No one is exempt from this trust-building phase.

9. Start With Local SEO
It can be tempting to shoot for the stars and set your goals on keywords like “Accountant” or “Auto Insurance.” But before you go after these blockbuster keywords, it makes a lot more sense to target easier local SEO keywords like “Houston Accountant” or “Miami Auto Insurance,” depending on where your business is located.

10.  Free SEO is Still SEO
There’s a misconception out there that SEO has to cost an arm and a leg. And while it’s true that the very best SEO services will cost you (mainly because it costs the SEO company to provide it!), there are plenty of free SEO optimization techniques you can apply to your blog or corporate website. You can:
  • Shore up your on-page SEO
  • Analyze your competition with free SEO tools
  • Research keywords with the Google Keyword Tool
  • Contact other bloggers for guest blogging opportunities
  • Write an epic, pillar article for your blog that will attract links and visitors for years
11.  Social Media
Social media plays an important role in building a good reputation with the search engines. Websites such as Technorati, Digg, StumbleUpon, Facebook, Google Buzz, Twitter and Flickr all come into play here. All you have to do is create an account on those sites, connect your site to them and start sharing your articles to see a good increase in ranking.

12.  Internal Linking and External Linking to your website
We should clearly understand the balance between internal and external linking. Inter-link your website's pages to each other to enjoy the flow of PageRank juice, and use the nofollow tag on your external links.

13.  Use Rich Keywords according to your website
Keywords are words that reveal the internal structure of the website. Use as many keywords as you need to describe your website, just as "facebook" denotes www.facebook.com.

14. Link Building is another factor
In reality, Google and Alexa determine a site's reputation and rank by how it is linked with other websites. You can also buy links from some companies to enjoy good PR.

15. Submit your website's sitemap to the popular search engines
A sitemap is a list of your website's posts and articles; submitting it to Google, Yahoo and Bing will bring all your posts into their search results as they crawl them.
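A sitemap like the one described here can be generated as XML in the sitemaps.org format that Google, Yahoo and Bing all read; the function name is an assumption for the example:

```python
from xml.sax.saxutils import escape

def make_sitemap(urls):
    # Minimal sitemap in the sitemaps.org XML protocol: one <url> entry
    # with a <loc> element per page (optional tags like <lastmod> omitted).
    entries = "\n".join(
        f"  <url><loc>{escape(u)}</loc></url>" for u in urls
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        f"{entries}\n</urlset>\n"
    )
```

Save the output as sitemap.xml at your site's root and submit its URL through each engine's webmaster tools.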

16.  Ads and Advertisements
You can also advertise your website with Google AdWords, Facebook Ads, etc. on a paid basis to bring more targeted traffic and good SEO results to your website.

17.  Use proper Meta Tags on your website or blog
Meta tags are special HTML tags that provide information about a web page. Unlike normal HTML tags, meta tags do not affect how the page is displayed. Instead, they provide information such as who created the page, a description of the site, RSS feeds, what the page is about, and which keywords represent the page's content. Many search engines use this information when building their indices.
Example (the content values here are illustrative):
<meta name="description" content="SEO Bee Tree - SEO news, tips and tricks">
<meta name="keywords" content="SEO, search engines, link building">

18.  Never rename your webpages unless your website is new
For established and popular websites, renaming your webpages will kill your rank in the search engines; you are essentially starting from scratch in terms of SEO. So if you are redesigning your site, remember to keep the old file names.

Note: If you implement the above tips and tricks on your website, you will get a good Google PageRank and a good Alexa rank within a short period of time. Thanks!