When it was announced back in 2014, Magento 2 looked like a promising release thanks to improvements in performance and scalability, higher-quality code, easier installation, and simplified integrations. Magento 2 has since launched and been introduced on Magento’s official blog. A recent analysis reveals several common SEO mistakes in new Magento 2 stores, which is helpful for anyone launching their own Magento 2 store.
Robots.txt blocking layered navigation parameters
Generally, layered navigation and other filtering and sorting parameters are URLs you want to keep out of the index. However, the common approach of blocking them with a robots.txt disallow is not the best solution. The reason is that disallowed URLs can still end up indexed even though they cannot be crawled, so search engines never see what is on them. Instead of relying on a robots.txt disallow, a better approach is to apply a meta robots “noindex, follow” tag on URLs with those parameters. One important thing to remember: do not apply both the robots.txt disallow and the meta noindex, because a crawler blocked from the URL can never see the noindex tag.
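As an illustration, a minimal sketch of the recommended tag in the head of a filtered URL (the ?color= parameter is just an example, not a specific Magento setting) would be:

<meta name="robots" content="noindex, follow">

whereas a robots.txt rule such as Disallow: /*?color= is the approach to avoid for these URLs, since it only blocks crawling, not indexing.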
Robots.txt NOT blocking the site search results
As mentioned in the previous point, a robots.txt disallow is usually not the right tool, but site search results are the exception: they are exactly the kind of URLs you do want to disallow.
Google’s Panda algorithm punishes websites that allow huge amounts of internal search result pages to be indexed, because Google has put a lot of effort into keeping search results out of its own search results.
Nevertheless, some Magento 2 websites still do not disallow them, and some even link to their own site search results from the homepage, for example through brand logos that point to a search query parameter.
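A minimal robots.txt sketch for this, assuming Magento 2’s default search result path (/catalogsearch/), might look like:

User-agent: *
Disallow: /catalogsearch/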
Missing availability in schema.org microdata markup for product offers
Microdata markup is the technique that lets Google and other major search engines pick up structured information from your content, such as price, availability, and reviews. It can help you reach a higher CTR through rich snippets and supports automatic updates for Google Merchant Center. From the point of view of both SEO and PPC, it is something worth implementing, and availability is one of the Offer properties that often gets left out.
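A minimal sketch of product Offer microdata, with placeholder product, price, and availability values, could look like this:

<div itemscope itemtype="http://schema.org/Product">
  <span itemprop="name">Example Product</span>
  <div itemprop="offers" itemscope itemtype="http://schema.org/Offer">
    <span itemprop="price" content="49.99">$49.99</span>
    <meta itemprop="priceCurrency" content="USD">
    <link itemprop="availability" href="http://schema.org/InStock">
  </div>
</div>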
Pointing layered URLs back to category with rel canonical
When rel canonical was released, Google intended it as a way to consolidate duplicate content. Pointing layered navigation URLs back to the category page does not really fit that purpose, because many of those URLs are not duplicates: the layered filters actually change the content of the page.
What you really want is to place a meta robots “noindex, follow” tag on those layered URLs to keep them out of the index, while still letting link juice flow through them, via your navigation and product listings, to the other pages you need to rank.
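To contrast the two approaches on a filtered URL such as /shoes?color=blue (a hypothetical path and domain), the pattern described above as a mistake is:

<link rel="canonical" href="https://www.example.com/shoes">

while the recommended alternative is the same “noindex, follow” meta robots tag shown earlier.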
Indexing both http and https versions of the website
Having both the http and https versions of the same URL indexed is a typical example of content duplication, and exactly the case where rel canonical should be used to consolidate the duplicates into a preferred version. Since https is considered a ranking factor, the http version of the URL should have a rel canonical pointing to the https version.
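As a sketch, the head of the http page (example.com used as a placeholder domain) would carry:

<link rel="canonical" href="https://www.example.com/some-page">

so that the https version is treated as the preferred one.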
Homepage title “Home page”
The homepage is usually the page with the most link equity and the one that can rank for your most important keywords. A title of just “Home page” tells neither search engines nor users anything about what your site offers.
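As a sketch, a more descriptive homepage title (store name and keywords are placeholders) might look like:

<title>Example Store | Men's and Women's Running Shoes</title>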
Source & image: Inchoo.net