Wednesday, February 17, 2016

Making an optimized website for SEO

Now that you know what SEO is and which main factors Google takes into account when ranking a website, you need to learn what to do so that your page has a real chance of ranking high in the SERPs.
In this chapter we will discuss how to optimize the main ranking factors, as well as the major SEO problems that arise when optimizing a website and their possible solutions.
We divide the topics of this chapter into 4 main areas:
  1. Accessibility
  2. Indexability
  3. Content
  4. Meta tags

1. Accessibility

The first step in optimizing the SEO of a website is to let search engines access our content. That is, you must check whether the site is visible to search engines and, above all, how they are viewing the page.
For various reasons, which will be explained later, it may happen that search engines cannot read a site correctly, and being readable is a prerequisite for ranking.
Aspects to consider for good accessibility:
  • robots.txt file
  • Robots meta tag
  • HTTP status codes
  • Sitemap
  • Web structure
  • JavaScript and CSS
  • Page load speed

The robots.txt file

The robots.txt file is used to prevent search engines from accessing and indexing certain parts of a website. It is very useful for keeping pages that you do not want to appear in Google's search results out of the index. WordPress, for example, uses it to block access to its administrator files; its robots.txt looks like this:
User-agent: *
Disallow: /wp-admin
WATCH OUT: Be very careful not to block search engine access to your entire website without realizing it, as in this example:
User-agent: *
Disallow: /
We must check that the robots.txt file is not blocking any important part of our website. We can do so by visiting the /robots.txt URL on our domain directly, or through Google Webmaster Tools under "Crawl" > "robots.txt Tester".
The robots.txt file can also be used to indicate where our sitemap is located, by adding a Sitemap line at the end of the document.
Therefore, a complete robots.txt example for WordPress would look like this (using example.com as a placeholder for your own domain):
User-agent: *
Disallow: /wp-admin
Sitemap: http://www.example.com/sitemap.xml
If you want to go into more detail about this file, we recommend visiting the website of the robots.txt standard.

Robots meta tag

The "robots" meta tag is used to tell search engine robots whether or not they can index the page and whether they should follow the links it contains.
When analyzing a page, you should check whether any meta tag is mistakenly blocking the robots' access. This is how the tag looks in the HTML code:
<meta name="robots" content="noindex, nofollow">
On the other hand, these meta tags are very useful for preventing Google from indexing pages that do not interest you, such as pagination or filter pages, while still following their links so it can keep crawling our website. In this case the tag would read:
<meta name="robots" content="noindex, follow">
We can check the meta tags by right-clicking on the page and selecting "View page source".
Or, going a step further, with the Screaming Frog tool we can see at a glance which pages across the whole website have this tag implemented. You can see it in the "Directives" tab, in the "Meta Robots 1" field. Once you have located all the pages carrying this tag, you just have to remove it from them.

HTTP status codes

If any URL returns an error status code (404, 502, etc.), users and search engines will not be able to access that page. To identify these URLs we also recommend using Screaming Frog, because it quickly shows the status of all the URLs on your site.
TIP: every time you run a new crawl, export the Screaming Frog results to a CSV, so you can gather them all in a single Excel file later.


Sitemap

The sitemap is an XML file that contains a list of the site's pages together with some additional information, such as how often each page changes its content, when it was last updated, and so on.
A small excerpt from a sitemap would be (using example.com as a placeholder URL):
<url>
<loc>http://www.example.com/</loc>
<changefreq>daily</changefreq>
<priority>1.0</priority>
</url>
Important points to check about the sitemap:
  • It follows the protocol, otherwise Google will not process it properly
  • It is uploaded to Google Webmaster Tools
  • It is up to date. When you update your website, make sure all the new pages are in your sitemap
  • All the pages listed in the sitemap are being indexed by Google
If the website does not have a sitemap yet, we create one by following four steps:
  1. Generate an Excel file with all the pages we want to be indexed; we can reuse the Excel created when checking the HTTP response codes
  2. Create the sitemap. For this we recommend the Sitemap Generators tool (simple and very complete)
  3. Compare the pages in the Excel with those in the sitemap and remove the ones we do not want to be indexed
  4. Upload the sitemap through Google Webmaster Tools
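As a reference, a minimal but complete sitemap file follows the structure below. This is only a sketch: example.com, the date and the frequency and priority values are placeholders to replace with your own.
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
<url>
<loc>http://www.example.com/</loc>
<lastmod>2016-02-17</lastmod>
<changefreq>daily</changefreq>
<priority>1.0</priority>
</url>
</urlset>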

Web structure

If a website's structure is too deep, it will be harder for Google to reach all of its pages. It is therefore recommended that the structure be no more than 3 levels deep (not counting the home page), since Google's robot has a limited time to crawl a site, and the more levels it has to go through, the less time it will have left to reach the deepest pages.
That is why it is always better to build a horizontal web structure rather than a vertical one.

Vertical web structure [diagram]

Horizontal web structure [diagram]
Our advice is to draw an outline of the entire website where you can easily see the levels it has, from the home page down to the deepest page, and calculate how many clicks it takes to reach each one.
Find out at what level each page sits and whether it has links pointing to it, once again using Screaming Frog.

JavaScript and CSS

Although in recent years Google has become much smarter at reading these technologies, we must be careful, because JavaScript can hide part of our content and CSS can jumble it, showing it in a different order from the one Google sees.
There are two methods to check how Google reads a page:
  • Plugins
  • The "cache:" command
Plugins
Plugins such as Web Developer or Disable-HTML help us see how a search engine "crawls" the page. Open one of these tools and disable JavaScript. We do this because all the drop-down menus, links and texts must be readable by Google.
Then disable the CSS as well, because we want to see the actual order of the content, and CSS can change it completely.
Command "cache"
Another way to find out how Google sees a site is using the "cache" command
Enter "cache:" in the search box and click "Printer friendly version". Google will show you a photo where you can learn how to read a website and when was the last time I agreed to it.
Of course, for the command "cache" function properly our pages must already be indexed in Google's index.
Once Google indexes a page for the first time, it determines how often it will come back to check for updates. This depends on the authority and relevance of the domain the page belongs to and on how often the page is updated.
Whether you use a plugin or the "cache:" command, make sure the following points are met:
  • All the menu links are visible.
  • All the links on the page are clickable.
  • No text disappears once CSS and JavaScript are disabled.
  • The most important links are at the top of the page.

Page load speed

Google's robot has a limited time to crawl our site: the less time each page takes to load, the more pages it will manage to crawl.
You should also bear in mind that a very slow page can make your bounce rate shoot up, so loading speed becomes a vital factor, not only for ranking but also for a good user experience.
To check the loading speed of your website we recommend Google PageSpeed; there you can see which problems slow your site down, along with the tips Google offers to tackle them. Focus on the ones with high and medium priority.


2. Indexability

Once Google's robot has accessed a page, the next step is to index it. Indexed pages are included in an index where they are sorted according to their content, their authority and their relevance, so that it is easier and faster for Google to access them.

How do I check whether Google has indexed my website correctly?

The first thing you have to do to see whether Google has indexed your website correctly is to search with the "site:" command; Google will return the approximate number of pages of your website that it has indexed:
[Screenshot: the "site:" command in Google]
If you have linked your website to Google Webmaster Tools, you can also check the real number of indexed pages by going to Google Index > Index Status.
If you know (roughly) how many pages your website actually has, this figure will let you compare the number of pages Google has indexed with the number of pages that really exist. Three scenarios can occur:
  1. The two numbers are very similar. It means everything is in order.
  2. The number in the Google search is lower, which means Google is not indexing many of the pages. This happens because it cannot access all of them; to solve it, review the accessibility part of this chapter.
  3. The number in the Google search is higher, which means your website has a duplicate content problem. If there are more pages indexed than actually exist, the reason is almost certainly duplicate content, or Google is indexing pages that you do not want indexed.

Duplicate Content

Having duplicate content means that the same content is accessible through several URLs. This is a very common problem, often unintentional, and it can also have negative effects on Google rankings.
These are the main causes of duplicate content:
  • "Canonicalization" of the page
  • Parameters in the URL
  • Pagination
  • "Canonicalization" of the page
This is the most common cause of duplicate content and occurs when your home page has more than one URL, for example (using example.com as a placeholder domain):
example.com
www.example.com
example.com/index.html
www.example.com/index.html
Each of these points to the same page with the same content. If you do not tell Google which one is the correct version, it will not know which one to rank, and it might rank a version you do not want.
Solution: there are 3 options:
  1. Set up a redirect (typically a 301) on the server to make sure only one version of the page is shown to users.
  2. Define which subdomain we want as the main one ("www" or "non-www") in Google Webmaster Tools, by setting the preferred domain.
  3. Add a "rel=canonical" tag in each version, pointing to the one considered correct (see the sketch below).
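As a minimal sketch of option 3, each duplicate version of the home page would carry a tag like this in its <head>, assuming (as a placeholder) that http://www.example.com/ is the version chosen as canonical:
<link rel="canonical" href="http://www.example.com/" />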
  • Parameters in the URL
There are many different parameters, especially in e-commerce: product filters (colour, size, rating, etc.), ordering options (lower price, relevance, higher price, grid view, etc.) and user sessions. The problem is that many of these parameters do not change the content of the page, yet they create many URLs for the same content.
Imagine, for example, a product-listing URL with three parameters: colour, minimum price and maximum price.
The solution is to add a "rel=canonical" tag pointing to the original page, which avoids any confusion on Google's part about which page is the original (see the sketch below).
Another possible solution is to tell Google, through Google Webmaster Tools > Crawl > URL Parameters, which parameters it should ignore when indexing the pages of a website.
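As a rough sketch, a filtered URL such as http://www.example.com/boots?colour=black&price-min=10&price-max=50 (a hypothetical example) would include in its <head> a canonical tag pointing to the clean, unfiltered version of the page:
<link rel="canonical" href="http://www.example.com/boots/" />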
  • Pagination
When an article, a product list, or tag and category pages have more than one page, duplicate content issues can arise even though the pages hold different content, because they are all focused on the same topic. This is a huge problem on e-commerce sites, where a single category can contain hundreds of items.
Currently the rel=next and rel=prev tags let search engines know which pages belong to the same category or publication, so the full ranking potential can be concentrated on the first page.
How to use the NEXT and PREV tags (the example.com URLs below are placeholders):
1. Add the rel=next tag to the code of the first page:
  • <link rel="next" href="http://www.example.com/page-2.html" />
2. Add both the rel=prev and rel=next tags to every page except the first and the last:
  • <link rel="prev" href="http://www.example.com/page-1.html" />
  • <link rel="next" href="http://www.example.com/page-3.html" />
3. Add the rel=prev tag to the last page:
  • <link rel="prev" href="http://www.example.com/page-4.html" />
Another solution is to locate the pagination parameter in the URL and add it in Google Webmaster Tools so that those pages are not indexed.


Keyword cannibalization

Keyword cannibalization occurs when several pages of a website compete for the same keywords. This confuses the search engine, which cannot tell which of them is the most relevant for that keyword.
This problem is very common in e-commerce, because several versions of the same product "attack" exactly the same keywords. For example, if you sell a book in paperback, hardcover and digital editions, you will have 3 pages with virtually the same content.
Solution: create a main product page, from which the pages for the different formats can be reached, and add to each of them a canonical tag pointing to that main page (see the sketch below). Ideally, each keyword should always be targeted by a single page, to avoid any cannibalization problem.
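As a minimal sketch, each format page (paperback, hardcover, digital) would include a tag like this in its <head>, where the URL of the main product page is a placeholder:
<link rel="canonical" href="http://www.example.com/book-title/" />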

3. Content

In recent years it has become quite clear that, for Google, content is king. So let us offer it a great throne.
Content is the most important part of a website: no matter how well optimized it is at the SEO level, if it is not relevant to the searches users make, it will never appear in the top positions.
To analyze the content of your website properly you have a few tools at your disposal, but the most useful approach is to view the page with JavaScript and CSS disabled, as explained above. This way you will see what content Google is actually reading and in what order it appears.
When analyzing the content of your pages, ask yourself several questions that will guide you through the process:
  • Does the page have enough content? There is no standard measure of how much is "enough", but it should contain at least 300 words.
  • Is the content relevant? It should be useful to the reader; just ask yourself whether you would read it yourself. Be honest.
  • Are the important keywords in the first paragraphs? In addition to these, use related terms, because Google is very effective at relating terms.

A page will never rank for something it does not contain.
  • Does it have keyword stuffing? If the page "sins" by having too many keywords, Google will not be amused. There is no exact number that defines the perfect keyword density, but Google's advice is to be as natural as possible.
  • Does it have spelling mistakes?
  • Is it easy to read? If reading it does not feel tedious, it will be fine. Paragraphs should not be too long, the font should not be too small, and it is advisable to include images or videos that reinforce the text. Always remember to think about the audience you are writing for.
  • Can Google read the text of the page? We have to prevent the text from being embedded in Flash, images or JavaScript. You can check this by viewing the text-only version of your website: use the Google "cache:" command on your domain and select that version.
  • Is the content well laid out? Does it have its corresponding H1, H2, etc. tags, are the images properly placed, and so on?
  • Is it linkable? If you do not give users a way to share it, they most likely will not. Include buttons for sharing on social networks in visible places on the page that do not interfere with the display of the content, whether it is a video, a photo or text.
  • Is it current? The more up to date your content is, the more frequently Google will crawl your website and the better the user experience will be.
You can create an Excel file with all the pages, their texts and the keywords you want them to rank for; that will make it easier to see where you should reduce or increase the number of keywords on each page.

4. Meta tags

Meta tags are used to convey information to search engines about what a page is about, when they have to sort it and display it in their results. These are the most important ones to bear in mind:


Title

The title tag is the most important of the meta tags. It is the first thing that appears in Google's search results.
When optimizing the title, keep in mind that:
  • The tag must go inside the <head> </head> section of the code.
  • Each page should have a unique title.
  • It should not exceed 70 characters; otherwise it will appear cut off.
  • It should be descriptive of the page's content.
  • It must contain the keyword we are optimizing the page for.
We should never stuff the title with keywords: this makes users wary and makes Google think we are trying to deceive it.
Another aspect to consider is where to place the "brand", i.e. the name of the website. It is usually placed at the end, to give more weight to the keywords, separated from them by a hyphen or a vertical bar.
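For example, a title following these guidelines might look like this (the keyword and brand name are placeholders):
<title>Running shoes for beginners - ExampleShop</title>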


Meta description

Although it is not a decisive ranking factor, the meta description considerably affects the click-through rate in the search results.
For the meta description we follow the same principles as for the title, except that its length should not exceed 155 characters. For both titles and meta descriptions we must avoid duplication; we can check this in Google Webmaster Tools > Search Appearance > HTML Improvements.
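As a sketch, the tag also goes in the <head>, and the description text here is just a placeholder:
<meta name="description" content="Compare the best running shoes for beginners, with prices, reviews and buying tips." />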

Meta keywords

At one time the meta keywords were a very important ranking factor, but Google discovered how easy it was to manipulate search results with them, so it eliminated them as a ranking factor.

H1, H2, H3... tags

The H1, H2, etc. tags are very important for a good information structure and a good user experience, since they define the hierarchy of the content, which improves SEO. We should give the most weight to the H1, because it usually sits at the top of the content, and the higher up a keyword appears, the more importance Google gives it.
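A minimal sketch of such a hierarchy, with placeholder text:
<h1>Running shoes for beginners</h1>
<h2>How to choose the right size</h2>
<h3>Measuring your foot at home</h3>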

Tag "alt" image

The "alt" tag images are added directly in the code from the image itself.
<Img src = "" alt = "keyword molona" />
This label has to be descriptive in relation to the image and content of that image , because it is what you read Google to trace and one of the factors used to position it in Google Images.


You now know how to optimize a page for SEO and which factors to work on if you want to appear in the top positions of the search results. Now you are surely wondering: which keywords will my website rank best for?
We do not know exactly what those keywords are, but we can help you find out in the next lesson!

Final Words

If you need any kind of information about Blogger, contact us.