The web buzzed yesterday with talk about Google Buzz. A lot of people I know already have it. If you haven’t, check out this write-up by RWW.
Google Buzz has a feature called ‘image viewer’. In the demo it was shown that the image viewer fetches images from the external sources Picasa and Flickr and displays them in full size within Google Buzz. Like others, I was concerned that Google might be pulling images from ANY webpage instead of just Picasa and Flickr. That would hurt websites with a lot of images, as Google would ‘steal’ their traffic. I tested this with a few different websites and found that it does not happen for websites other than Picasa and Flickr.
Another thing I noticed is that there are two ways to add links to Google Buzz. One of them is better than the other because it gives you leverage with the anchor text. Surprisingly, I did not see the ‘nofollow’ attribute on Google profiles for links shared using Google Buzz. I haven’t looked deeply into it, but if this is true it will be helpful for SEO. Check out the video below for a demo of how to add the link.
What does it change for SEO and webmasters?
Duplicate Content Issues – We have seen multi-level drop-down menus and option boxes cause Google to crawl and index duplicate content by following URL patterns based on GET strings. One advanced optimization technique SEOs use for workflows is to add static links on the pages to the next steps, while using a drop-down menu for a better customer experience. For instance, the URLs for the steps may look like /step1.html -> /step2.html -> /step3.html. However, the drop-down counterpart might generate URLs like /step1.html?v=widget1 -> /step2.html?v=widget1&x=widget2 -> /step3.html?v=widget1&x=widget2&y=submit.
These patterns create multiple URLs for the same content. Solutions may involve marking the optional parameters in Google Webmaster Tools (which only helps with Google) or perhaps using POST instead of GET.
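One defensive approach is to normalize such URLs server-side so every variant resolves to a single canonical address. Here is a minimal sketch using only Python’s standard library; the parameter names v, x, and y are taken from the hypothetical example above:

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

# Hypothetical widget parameters that do not change the page content
OPTIONAL_PARAMS = {"v", "x", "y"}

def canonicalize(url):
    """Strip optional query parameters so every URL variant maps to one canonical URL."""
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in OPTIONAL_PARAMS]
    return urlunparse(parts._replace(query=urlencode(kept)))

print(canonicalize("http://example.com/step2.html?v=widget1&x=widget2"))
# -> http://example.com/step2.html
```

Parameters that do change the content (pagination, for instance) are kept, so only the known-optional ones collapse into the canonical form.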
Last week I went to lunch with a friend at a nice Mexican restaurant in San Francisco called Mercedes Restaurant. It was easy to find on Google and had great reviews. But I noticed that the word Mercedes in the title of the homepage was in regular typeface on Google’s search result pages instead of being bold. It took me a few seconds to realize that it was actually misspelled as Mercdes. After looking at the source of their website, I found an H1 tag on the page that was misspelled the same way as the title. As a side note, the H1 tag is for some reason not visible on the page.
Both the title and the H1 tag are important elements for SEO and for creating brand awareness. To make things worse, both of these tags are used globally on every page of the website. Google and other search engines help them by assuming (based on click-through rate, I guess) that their website is what people are looking for when they search for mercedes restaurant San Francisco.
This is one of several examples where website owners don’t pay attention to detail in their website copy and depend solely on the search engines to make the connection between the misspelled keywords and the actual website.
If you have a small website with 10-20 pages, you should follow these steps:
Run an automated spell checker like this on your website.
The tool will help you, but you will have to go through the results manually.
Specifically check the Title and Meta Description (they are visible when viewing the source of the page).
Go to the main search engines, search for your brand name/main keyword, and make sure the results show your website as you intend.
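As a starting point for step 3, a small script can pull the title and meta description out of a page’s source so you can run them through a spell checker. This is a minimal sketch using only Python’s standard library; the sample HTML is illustrative:

```python
from html.parser import HTMLParser

class HeadTagExtractor(HTMLParser):
    """Collect the <title> text and the meta description from a page's HTML."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
        self.description = ""

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True
        elif tag == "meta":
            a = dict(attrs)
            if a.get("name", "").lower() == "description":
                self.description = a.get("content", "")

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

html = ('<html><head><title>Mercedes Restaurant</title>'
        '<meta name="description" content="Mexican food in San Francisco">'
        '</head></html>')
p = HeadTagExtractor()
p.feed(html)
print(p.title, "|", p.description)
```

Feed it the fetched page source, then paste the extracted strings into your spell checker of choice.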
XML Sitemaps are a great way to provide a complete list of your webpages to Google for efficient crawling and discovery. Submitting your sitemap through Google’s Webmaster Tools does not guarantee that all your pages will be indexed, but it increases the chances of Google discovering and indexing more of your website’s content. XML Sitemaps also let you tell search engines how often you update a page or section of the website and the relative priority of your pages.
Back in 2007, the major search engines (Google, Yahoo, Ask, MSN) agreed on a common protocol for discovering a website’s sitemap via robots.txt. Enabling autodiscovery for your sitemaps makes it easier to manage the addition of new sitemaps across all the search engines.
Sitemap autodiscovery requires only a simple one-line addition to the robots.txt file and is well documented on sitemaps.org.
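For reference, the autodiscovery entry is a single Sitemap directive in robots.txt; the URL below is a placeholder, so substitute the absolute URL of your own sitemap:

```
Sitemap: http://www.example.com/sitemap.xml
```

The directive can appear anywhere in the file and you can list multiple Sitemap lines, one per sitemap file.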
Ever wondered where to begin when optimizing the HTML meta tags for your website? Do you think meta keywords help in gaining rankings? Do you think meta descriptions are useless because Google does not use them for ranking? Here is a quick snapshot of how to optimize the head tags on a webpage.
The Title tag is your best friend. Not only does it help your pages rank well, it also helps catch the user’s attention on Search Engine Result Pages (SERPs). Once you have done your homework with keywords, put the main keyword(s) first in the title. The brand name is important, but if it doesn’t contain the keyword(s), add it after the keywords. Limit the length of the title tag to 60-65 characters, including spaces and dividers. Use dividers to separate the keywords and the brand name. Once you have achieved decent rankings, gather keyword data for the page to tweak and test variations of the keyword(s) to increase CTR or improve rankings further.
The Meta Description is not used by the major search engines to calculate rankings. That does not mean meta descriptions are not helpful in getting traffic. In most cases the meta description is shown as the snippet below the title on SERPs, and the snippet is a major factor in whether a search engine user decides to click the URL. Meta descriptions should be less than 160 characters and should describe the content of the page as accurately as possible. Adding information that is not on the page may get you clicks, but it will also increase the bounce rate and reduce time on site. It will hurt brand reputation and may also affect rankings on SERPs. Add keywords to the meta description not for rankings, but to gain more visibility and a higher CTR: search engines bold the matching keywords, which helps users decide which link to click.
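The two length limits above (60-65 characters for the title, 160 for the description) are easy to check automatically. A quick sketch, where the limits and sample strings are illustrative:

```python
TITLE_MAX = 65        # characters, including spaces and dividers
DESCRIPTION_MAX = 160

def check_lengths(title, description):
    """Return a list of warnings for head tags that exceed the suggested limits."""
    warnings = []
    if len(title) > TITLE_MAX:
        warnings.append(f"title is {len(title)} chars (limit {TITLE_MAX})")
    if len(description) > DESCRIPTION_MAX:
        warnings.append(f"description is {len(description)} chars (limit {DESCRIPTION_MAX})")
    return warnings

print(check_lengths(
    "Mexican Restaurant in San Francisco | Mercedes Restaurant",
    "Authentic Mexican food in the Mission district of San Francisco."))
# -> []
```

Running it over every page of a site catches titles and descriptions that will get truncated on the SERPs before they cost you clicks.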
Meta Keywords used to be a ranking factor in the past. They are no longer used by any major search engine to generate rankings, nor are they displayed anywhere on the SERPs. Yahoo! is the only search engine that uses meta keywords to discover content, but it does not use them for rankings. Since Yahoo search is going away, there won’t be any reason to add this tag after that. We recommend not using the meta keywords tag because it has no SEO value at this time. One negative impact of adding a meta keywords tag filled with your best keywords is that it becomes very easy for your competitors to grab your keywords and start optimizing their own websites.
Rel=Canonical is a very new tag (a link element, strictly speaking) which is honored by all four major search engines. It is a great asset for dynamically generated websites, as they are prone to producing duplicate URLs for the same content. We recommend adding it to your website, large or small. However, don’t rely on it alone; continue to use 301 redirects where needed. Testing over a period of 2 months, we found rel=canonical does not always work as advertised.
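As an example, a dynamically generated page can point back to its clean URL like this (example.com and the query parameters are placeholders, echoing the step URLs discussed earlier):

```html
<!-- Inside the <head> of http://www.example.com/step2.html?v=widget1&x=widget2 -->
<link rel="canonical" href="http://www.example.com/step2.html" />
```

Every parameterized variant of the page carries the same canonical href, so the search engines consolidate them into one URL.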
The Meta Robots tag is not required, but it is good to have. Use <META NAME="ROBOTS" CONTENT="NOODP, INDEX, FOLLOW"> on the pages you would like search engines to crawl and index. The INDEX attribute tells search engines to include the page in their index, FOLLOW tells them to follow the links on the page, and NOODP tells them not to use snippets from the Open Directory Project. If your page’s snippet is not the same as your meta description, it is generally being picked from DMOZ. We have come across a few CMS systems in the past that had “NOINDEX, NOFOLLOW” as the default for all pages. If you are using a third-party CMS, make sure this is not the case.
A friend sent me a link to a podcast (mp3 file) of a radio show by Michael Savage. Forward to 19 minutes into the show and you’ll hear Joseph Farah, owner/editor of WorldNetDaily, claiming that Google, Bing and other search engines are burying his website’s pages. His website writes “anti Obama” stories and seeking Obama’s [...]
We get this question a lot. Many webmasters and marketing managers are confused about getting links from paid directories. Does Google see it as a paid link? Why do so many people add links to paid directories? Here is the answer from Google’s Matt Cutts in a video: Think before choosing a [...]
The greatest human on earth, Chuck Norris, has personal rules. No one knew until now that they were written for SEOs and webmasters. His rules are encrypted; Think Mantra has deciphered them just for you: I will develop myself to the maximum of my potential in all ways – Chuck Norris talks about working hard to optimize [...]
A few days ago, the head of Google’s webspam team, Matt Cutts, dropped a bomb by clarifying how Google calculates PageRank flow on pages containing links with the nofollow attribute. The nofollow attribute was originally introduced in 2005 to curb blog comment spam. The idea was to add the nofollow attribute to each user comment link. [...]