
4 Common Mistakes E-commerce Websites Make Using JavaScript



Guest Justyna Jarosz
Posted

Despite the resources they can invest in web development, large e-commerce websites still struggle with SEO-friendly ways of using JavaScript.

 

Even though 98% of all websites use JavaScript, it’s still common for Google to have problems indexing JavaScript-powered pages. While it’s fine to use JavaScript on your website, remember that it requires extra computing resources to be processed into HTML code that bots can understand.

 

At the same time, new JavaScript frameworks and technologies are constantly emerging. To give your JavaScript pages the best chance of being indexed, you’ll need to learn how to optimize them for the sake of your website’s visibility in the SERPs.

 

[HEADING=1]Why is unoptimized JavaScript dangerous for your e-commerce?[/HEADING]

 

By leaving JavaScript unoptimized, you risk your content not getting crawled and indexed by Google. And in the e-commerce industry, that translates to losing significant revenue, because your products become impossible to find via search engines.

 

It’s likely that your e-commerce website uses dynamic elements that users enjoy, such as product carousels or tabbed product descriptions. Very often, this JavaScript-generated content is not accessible to bots: Googlebot cannot click or scroll, so it may never reach those dynamic elements.

 

 

Consider how many of your e-commerce website users visit the site via mobile devices. JavaScript is slower to load, and the longer it takes, the worse your website’s performance and user experience become. If Google realizes that loading your JavaScript resources takes too long, it may skip them when rendering your website in the future.

 

[HEADING=1]Top 4 JavaScript SEO mistakes on e-commerce websites[/HEADING]

 

Now, let’s look at the top mistakes e-commerce websites make with JavaScript, along with examples of sites that commit them and sites that avoid them.

 

[HEADING=2]1. Page navigation relying on JavaScript[/HEADING]

 

Crawlers don’t behave the way users do on a website ‒ they can’t scroll or click to see your products. Bots must follow links throughout your website structure to fully understand and access all your important pages. If your navigation relies only on JavaScript, bots may see products only on the first page of pagination.
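To make the difference concrete, here’s a minimal, hypothetical markup sketch (none of it is taken from Nike.com or Douglas.de): a “load more” control wired only to a JavaScript click handler gives bots nothing to follow, while a plain link to the next paginated URL does.

[CODE]
<!-- Hypothetical markup, for illustration only -->

<!-- Bots can't follow this: the next products are loaded purely by a click handler -->
<button onclick="loadNextPage()">Show more products</button>

<!-- Bots can follow this: a real link pointing to the next paginated URL -->
<a href="/sneakers?page=2">Show more products</a>
[/CODE]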

 

[HEADING=3]Guilty: Nike.com[/HEADING]

 

Nike.com uses infinite scrolling to load more products on its category pages. Because of that, Nike risks the content loaded by scrolling not getting indexed.

 

[Screenshot: Nike.com category page using infinite scrolling to load more products]

 

For the sake of testing, I entered one of their category pages and scrolled down to pick a product that only loads after scrolling. Then I used the “site:” command to check if that product’s URL was indexed in Google. As you can see in the screenshot below, the URL is nowhere to be found on Google:

 

[Screenshot: “site:” search for the scroll-loaded product URL returning no results on Google]
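You can reproduce this check for any page by pasting the URL after the “site:” operator in Google (the address below is a placeholder, not an actual Nike.com URL):

[CODE]
site:example.com/category/product-loaded-by-scrolling
[/CODE]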

 

Of course, Google can still reach your products through sitemaps. However, when Googlebot has to find your content in any way other than through links, it’s harder for it to understand your site structure and the dependencies between your pages.

 

To make it even more apparent, think about all the products that are visible only when you scroll for them on Nike.com. If there’s no link for bots to follow, they will see only 24 products on a given category page. Of course, for the sake of users, Nike can’t serve all of its products on one viewport. But still, there are better ways of optimizing infinite scrolling to be both comfortable for users and accessible for bots.

 

[Screenshot: Nike.com category page showing only the first 24 products]
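One common pattern is to pair the scroll-triggered loading with real paginated URLs: each batch of products also exists at a regular ?page=N address, that address is kept as a plain link in the markup, and the browser URL is updated as the user scrolls. The sketch below shows the idea; the endpoint and element names are hypothetical, and this is not Nike.com’s actual code.

[CODE]
// Sketch of crawler-friendly infinite scrolling; endpoint and element names are hypothetical.
// Each batch of products also exists at a regular paginated URL such as /sneakers?page=3,
// and the page keeps a plain <a href="/sneakers?page=3"> link in the markup for bots to follow.
let currentPage = 1;

async function loadNextBatch() {
  currentPage += 1;
  const response = await fetch(`/sneakers?page=${currentPage}&fragment=true`);
  const html = await response.text();
  document.querySelector('#product-grid').insertAdjacentHTML('beforeend', html);
  // Reflect the newly loaded batch in the address bar so each page stays addressable.
  history.pushState({ page: currentPage }, '', `/sneakers?page=${currentPage}`);
}

// Load the next batch when the user scrolls near the bottom of the product grid.
const sentinel = document.querySelector('#load-more-sentinel');
new IntersectionObserver((entries) => {
  if (entries[0].isIntersecting) loadNextBatch();
}).observe(sentinel);
[/CODE]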

[HEADING=3]Winner: Douglas.de[/HEADING]

 

Unlike Nike, Douglas.de uses a more SEO-friendly way of serving its content on category pages.

 

They provide bots with page navigation based on links to enable crawling and indexing of the next paginated pages. As you can see in the source code below, there’s a link to the second page of pagination included:

 

[Screenshot: Douglas.de source code containing a link to the second page of pagination]

 

Moreover, the paginated navigation may be even more user-friendly than infinite scrolling. The numbered list of category pages may be easier to follow and navigate, especially on large e-commerce websites. Just think how long the viewport would be on Douglas.de if they used infinite scrolling on the page below:

 

[Screenshot: Douglas.de category page with numbered, link-based pagination]

[HEADING=2]2. Generating links to product carousels with JavaScript[/HEADING]

 

Product carousels with related items are an essential e-commerce feature, and they are equally important from the user and the business perspectives. They can help businesses increase their revenue by serving related products that users may potentially be interested in. But if those sections over-rely on JavaScript, they may lead to crawling and indexing issues.

 

[HEADING=3]Guilty: Otto.de[/HEADING]

 

I analyzed one of Otto.de’s product pages to check whether it includes JavaScript-generated elements. I used the What Would JavaScript Do (WWJD) tool, which shows screenshots of what a page looks like with JavaScript enabled and disabled.

 

Test results clearly show that Otto.de relies on JavaScript to serve related and recommended product carousels on its website. And from the screenshot below, it’s clear that those sections are invisible with JavaScript disabled:

 

[Screenshot: WWJD comparison of an Otto.de product page with JavaScript enabled and disabled, carousels missing without JavaScript]
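To picture what a JavaScript-injected carousel looks like under the hood, here is a simplified sketch with a hypothetical endpoint and markup (not Otto.de’s actual code). The links exist in the DOM only after the script runs, so they are missing from the raw HTML that bots receive when rendering is skipped.

[CODE]
// Simplified sketch of a JavaScript-injected carousel; the endpoint and element IDs
// are hypothetical, not Otto.de's actual code. The <a> elements exist only after
// this script runs, so they are absent from the initial HTML response.
fetch('/api/recommendations?product=12345')
  .then((response) => response.json())
  .then((items) => {
    const carousel = document.querySelector('#related-products');
    carousel.innerHTML = items
      .map((item) => `<a href="${item.url}">${item.name}</a>`)
      .join('');
  });
[/CODE]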

 

How may this affect the website’s indexing? If Googlebot lacks the resources to render the JavaScript-injected links, the product carousels can’t be found and indexed.

 

Let’s check if that’s the case here. Again, I used the “site:” command and typed the title of one of Otto.de’s product carousels:

 

[Screenshot: “site:” search for the Otto.de carousel title returning no results]

 

As you can see, Google couldn’t find that product carousel in its index. And the fact that Google can’t see that element means that reaching the additional products it links to becomes harder. Also, if you prevent crawlers from reaching your product carousels, you make it more difficult for them to understand the relationships between your pages.

 

[HEADING=3]Winner: Target.com[/HEADING]

 

In the case of Target.com’s product page, I used the Quick JavaScript Switcher extension to disable all JavaScript-generated elements. I paid particular attention to the “More to consider” and “Similar items” carousels and how they look with JavaScript enabled and disabled.

 

As shown below, disabling JavaScript changed the way the product carousels look for users. But has anything changed from the bots' perspective?

 

[Screenshot: Target.com product carousels with JavaScript enabled and disabled]

 

To find out, check what the HTML version of the page looks like to bots by analyzing its cached version.

 

To check the cached version of Target.com’s page above, I typed “cache:https://www.target.com/p/9-39-...”, the URL of the analyzed page, into Google. I also took a look at the text-only version of the page.

 

[Screenshot: Google cache view of the Target.com product page]

 

When you scroll through the cached page, you’ll see that the links to related products are there as well. If you can see them here, it means bots don’t struggle to find them, either.

 

However, keep in mind that the links to the exact products you can see in the cache may differ from the ones on the live version of the page. It’s normal for the products in the carousels to rotate, so you don’t need to worry about discrepancies in specific links.

 

But what exactly does Target.com do differently? They take advantage of dynamic rendering: the initial HTML, including the links to the products in the carousels, is served as static HTML that bots can process.
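For context, dynamic rendering usually boils down to detecting bot user agents and serving them prerendered HTML, while regular visitors get the client-side app. The sketch below shows the general idea with Express; it is an illustration under assumptions (getPrerenderedHtml stands in for whatever prerendering service or cache you use), not Target.com’s actual setup.

[CODE]
// Generic sketch of dynamic rendering, not Target.com's actual implementation.
// Bots receive prerendered static HTML; regular users get the normal client-side app.
const express = require('express');
const app = express();

const BOT_PATTERN = /googlebot|bingbot|yandexbot|duckduckbot/i;

// Placeholder: in a real setup this would call a prerendering service or read from a cache.
async function getPrerenderedHtml(url) {
  return `<!doctype html><html><body><!-- prerendered HTML for ${url} --></body></html>`;
}

app.use(async (req, res, next) => {
  const userAgent = req.headers['user-agent'] || '';
  if (BOT_PATTERN.test(userAgent)) {
    const html = await getPrerenderedHtml(req.originalUrl);
    return res.send(html);
  }
  next(); // Everyone else gets the regular JavaScript-driven page.
});

app.listen(3000);
[/CODE]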

 

However, you must remember that dynamic rendering adds an extra layer of complexity that may quickly get out of hand with a large website. I recently wrote an article about dynamic rendering that’s a must-read if you are considering this solution.

 

Also, the fact that crawlers can access the product carousels doesn’t guarantee these products will get indexed. However, it will significantly help them move through the site structure and understand the dependencies between your pages.

 

[HEADING=2]3. Blocking important JavaScript files in robots.txt[/HEADING]

 

Blocking JavaScript for crawlers in robots.txt by mistake may lead to severe indexing issues. If Google can’t access and process your important resources, how is it supposed to index your content?

 

[HEADING=3]Guilty: Jdl-brakes.com[/HEADING]

 

It’s impossible to fully evaluate a website without a proper site crawl. But even a quick look at its robots.txt file can reveal whether any critical content is blocked.

 

This is the case with Jdl-brakes.com’s robots.txt file. As you can see below, it blocks the /js/ path with the Disallow directive, which makes all internally hosted JavaScript files (or at least the important ones) invisible to all search engine bots.
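In plain robots.txt terms, the damaging pattern boils down to something like this (a simplified sketch; the live file contains more rules):

[CODE]
# Simplified sketch: every internally hosted JavaScript file under /js/
# becomes off-limits to every crawler.
User-agent: *
Disallow: /js/
[/CODE]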

 

 

[Screenshot: Jdl-brakes.com robots.txt file disallowing the /js/ path]

 

This misuse of the Disallow directive may result in rendering problems across your entire website.

 

To check if that applies in this case, I used Google’s Mobile-Friendly Test. This tool can help you diagnose rendering issues by giving you insight into the rendered source code and a screenshot of the rendered page on mobile.

 

I headed to the “More info” section to check if any page resources couldn’t be loaded. Using one of the product pages on Jdl-brakes.com as an example, you can see that it needs a specific JavaScript file to be fully rendered. Unfortunately, that can’t happen because the whole /js/ folder is blocked in its robots.txt.

 

[Screenshot: Mobile-Friendly Test “More info” section listing a JavaScript file that couldn’t be loaded]

 

But let’s find out if those rendering problems affected the website’s indexing. I used the “site:” command to check if the main content (product description) of the analyzed page is indexed on Google. As you can see, no results were found:

 

[Screenshot: “site:” search for the product description returning no results]

 

This is an interesting case where Google could reach the website's main content but didn’t index it. Why? Because Jdl-brakes.com blocks its JavaScript, Google can’t properly see the layout of the page. And even though crawlers can access the main content, it’s impossible for them to understand where that content belongs in the page’s layout.

 

Let’s take a look at the Screenshot tab in the Mobile-Friendly Test. This is how crawlers see the page’s layout when Jdl-brakes.com blocks their access to CSS and JavaScript resources. It looks pretty different from what you can see in your browser, right?

 

[Screenshot: Mobile-Friendly Test rendering of the page without its CSS and JavaScript]

 

The layout is essential for Google to understand the context of your page. If you’d like to know more about this crossroads of web technology and layout, I highly recommend looking into a new field of technical SEO called rendering SEO.

 

[HEADING=3]Winner: Lidl.de[/HEADING]

 

Lidl.de proves that a well-organized robots.txt file can help you control your website’s crawling. The crucial thing is to use the Disallow directive deliberately.

 

Although Lidl.de blocks a single JavaScript file with the directive Disallow: /cc.js*, this doesn’t seem to affect the website’s rendering process. The important thing to note is that the rule targets only that one file and doesn’t affect any other URL paths on the website. As a result, all the other JavaScript and CSS resources they use should remain accessible to crawlers.

 

[Screenshot: Lidl.de robots.txt file with the Disallow: /cc.js* rule]

 

With a large e-commerce website, it’s easy to lose track of all the directives you’ve added. Always make the paths you block from crawling as specific as possible by including as many URL fragments as you can. That will help you avoid blocking crucial pages by mistake.
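As a rough illustration (these rules are examples, not copied from either site’s live file), compare a narrowly scoped rule in the spirit of Lidl.de’s with a broad one that can silently break rendering:

[CODE]
# Narrow rule, similar in spirit to Lidl.de's: only one non-critical script is blocked.
User-agent: *
Disallow: /cc.js*

# Broad rule like Jdl-brakes.com's: blocks every file under /js/, including
# resources Google needs to render the page.
# Disallow: /js/
[/CODE]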

 

[HEADING=2]4. JavaScript removing main content from a website[/HEADING]

 

If you use unoptimized JavaScript to serve the main content on your website, such as product descriptions, you block crawlers from seeing the most important information on your pages. As a result, your potential customers looking for specific details about your products may not find such content on Google.

 

[HEADING=3]Guilty: Walmart.com[/HEADING]

 

Using the Quick JavaScript Switcher extension, you can easily disable all JavaScript-generated elements on a page. That’s what I did in the case of one of Walmart.com’s product pages:

 

[Screenshot: Walmart.com product page with JavaScript disabled, product description missing]

 

As you can see above, the product description section disappeared with JavaScript disabled. I decided to use the “site:” command to check if Google could index this content, so I copied a fragment of the product description I saw on the page with JavaScript enabled and searched for it. However, Google didn’t show the exact product page I was looking for.

 

[Screenshot: “site:” search for the description fragment not returning the Walmart.com product page]

 

Will users go out of their way to find that particular product on Walmart.com? They might, but they can just as easily head to any other store selling the same item.

 

The example of Walmart.com shows that main content that depends on JavaScript to load makes it more difficult for crawlers to find and display your valuable information. However, that doesn’t mean Walmart should eliminate all JavaScript-generated elements from its website.

 

To fix this problem, Walmart has two solutions:

 


  1. Implementing dynamic rendering (prerendering), which is, in most cases, the easiest option from an implementation standpoint.
     
     

  2. Implementing server-side rendering. This solution solves the problems we’re observing on Walmart.com without serving different content to Google and users (as happens with dynamic rendering). In most cases, server-side rendering also helps with web performance on lower-end devices, as all of your JavaScript is rendered by your servers before it reaches the client’s device (see the sketch below).
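Below is a minimal server-side rendering sketch that assumes a React-based stack (which may or may not match Walmart’s); ProductPage and fetchProduct are hypothetical stand-ins. The point is simply that the product description ends up in the HTML the server sends, so bots and users receive the same markup.

[CODE]
// Minimal server-side rendering sketch assuming a React stack (not necessarily Walmart's).
// ProductPage and fetchProduct are hypothetical stand-ins.
const express = require('express');
const React = require('react');
const { renderToString } = require('react-dom/server');
const ProductPage = require('./ProductPage');   // hypothetical product page component
const { fetchProduct } = require('./catalog');  // hypothetical data-access helper

const app = express();

app.get('/product/:id', async (req, res) => {
  const product = await fetchProduct(req.params.id);
  // The description is rendered into HTML on the server, so crawlers and users
  // get the same markup in the initial response.
  const markup = renderToString(React.createElement(ProductPage, { product }));
  res.send(`<!doctype html><html><body><div id="root">${markup}</div></body></html>`);
});

app.listen(3000);
[/CODE]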
     

 

Let’s have a look at a JavaScript implementation that’s done right.

 

[HEADING=3]Winner: IKEA.com[/HEADING]

 

IKEA proves that you can present your main content in a way that is accessible for bots and interactive for users.

 

When browsing IKEA.com’s product pages, you’ll find the product descriptions served behind clickable panels. When you click one, the description dynamically appears on the right-hand side of the viewport.

 

Although users need to click to see the product details, IKEA serves that crucial part of its pages even with JavaScript turned off:

 

[Screenshot: IKEA.com product description still visible with JavaScript disabled]
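The general pattern looks something like the sketch below (illustrative markup only, not IKEA.com’s actual code): the description text ships in the initial HTML, and JavaScript merely toggles the panel’s visibility.

[CODE]
<!-- Illustrative markup only, not IKEA.com's actual code. The description is present
     in the initial HTML; JavaScript only toggles whether the panel is shown. -->
<button id="show-details">Product details</button>

<aside id="description-panel" hidden>
  <h2>Product details</h2>
  <p>Full product description served in the initial HTML ...</p>
</aside>

<script>
  document.querySelector('#show-details').addEventListener('click', () => {
    const panel = document.querySelector('#description-panel');
    panel.hidden = !panel.hidden; // the content already exists in the HTML, only visibility changes
  });
</script>
[/CODE]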

 

This way of presenting crucial content should make both users and bots happy. From the crawlers’ perspective, serving product descriptions that don’t rely on JavaScript makes them easy to access. Consequently, the content can be found on Google.

 

[HEADING=1]Wrapping up[/HEADING]

 

JavaScript doesn’t have to cause issues if you know how to use it properly. As an absolute must-do, you need to follow indexing best practices. That will help you avoid the basic JavaScript SEO mistakes that can significantly hinder your website’s visibility on Google.

 

Take care of your indexing pipeline and check if:

  • your page navigation is based on links that bots can follow,
  • the links in your product carousels are present in the static HTML,
  • your robots.txt file doesn’t block important JavaScript resources,
  • your main content, such as product descriptions, is accessible with JavaScript disabled.

If my article got you interested in JS SEO, find more details in Tomek Rudzki’s article about the 6 steps to diagnose and solve JavaScript SEO issues.

 

