
I was very fortunate and humbled this year to speak at BrightonSEO on the main stage as part of the advanced SEO track, alongside the supremely knowledgeable and awesome Jamie Alberico and Aysun Akarsu.

Below is the YouTube recording of my talk, as well as my talk transcribed. You can find a copy of my slides over on Slideshare.

BrightonSEO Talk Transcript

What I’m going to talk about today is really something we presented at Boston last year, and that is the concept of Edge SEO. The concept of Edge SEO is really just about doing SEO through CDNs, bypassing the actual server stack of our tech stack, not really circumventing DevOps, but finding that edge-case solution where necessary.

So, what a CDN actually is, as I’m sure most of you will be aware, is essentially a network of multiple cloud servers. You put your site behind it, and it serves a cached response of your website closer to the user. CDNs come with many out-of-the-box benefits, including speed, bandwidth optimization for your server, content optimization and uptime availability, especially if your actual server is made of wet tissue paper and falls over immediately. You get some security benefits, such as WAF and DDoS mitigation, which I most commonly see used by people reverse proxying WordPress blogs, as long as you’re on a good platform, and then the Edge SEO stuff.

So, a lot of people ask me why we actually need to do Edge SEO. The simple fact of the matter is the challenges we all face day to day: development queues which are heavily backlogged, the lack of business buy-in to investing and finding an ROI where there necessarily isn’t a direct correlation, and platforms out there which, let’s be honest, just don’t allow us to actually get the things we want.

So, how this actually works is very simple. Your browser makes a request to your server, and obviously, sitting in the middle is the CDN, or the cloud technology as we want to call it. All we’re doing is modifying the response a little bit in the middle. So, the user, the browser and Google are getting what we want, but at the same time, we’re not having to invest in the DevOps.
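To make that a bit more concrete, here is a minimal sketch of the pattern in Cloudflare Worker service-worker syntax. It is illustrative only: the worker receives the request, fetches the origin response, and gets a chance to rewrite the HTML or headers before anything reaches the browser or Googlebot.

```javascript
// Minimal edge-modification pattern: fetch the origin response, then
// optionally rewrite it before it is returned to the requester.
addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  // The request passes through to the origin server as normal.
  const originResponse = await fetch(request);

  // Only HTML responses are candidates for modification; everything else
  // (images, CSS, JS) is passed straight through untouched.
  const contentType = originResponse.headers.get('content-type') || '';
  if (!contentType.includes('text/html')) return originResponse;

  // The body or headers can be rewritten here before being returned.
  const html = await originResponse.text();
  return new Response(html, originResponse);
}
```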

An example of this is implementing hreflang in this way. On the left, you’ve got the browser response. So, to Google, to the user, to third-party validation tools, the hreflang is present. But if you take a direct server response, we’ve actually not implemented hreflang directly on the server, on the website. That might be because we physically can’t, the platform doesn’t allow it, or the development queue to do it is eight months long, but we need to get the hreflang on there. That gives us a workaround that is validated by Google, works with Bing, and allows us to actually implement international SEO.
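A rough sketch of how that hreflang injection can look at the worker level is below. The URL-to-hreflang mapping is entirely made up for illustration; in reality you would generate it from your own URL structure.

```javascript
// Sketch: injecting hreflang annotations at the edge when the origin/CMS
// can't output them. The mapping below is a hypothetical example.
const HREFLANG_MAP = {
  '/': [
    { hreflang: 'en-gb', href: 'https://example.com/' },
    { hreflang: 'en-us', href: 'https://example.com/us/' },
    { hreflang: 'x-default', href: 'https://example.com/' },
  ],
};

addEventListener('fetch', event => {
  event.respondWith(addHreflang(event.request));
});

async function addHreflang(request) {
  const response = await fetch(request);
  const contentType = response.headers.get('content-type') || '';
  if (!contentType.includes('text/html')) return response;

  const { pathname } = new URL(request.url);
  const entries = HREFLANG_MAP[pathname];
  if (!entries) return response;

  // Build the <link rel="alternate"> tags and inject them before </head>.
  const tags = entries
    .map(e => `<link rel="alternate" hreflang="${e.hreflang}" href="${e.href}" />`)
    .join('\n');
  const html = await response.text();
  return new Response(html.replace('</head>', `${tags}\n</head>`), response);
}
```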

So, when I spoke about this last year, Cloudflare was very much the only name producing this, with what they call the Cloudflare Worker. Now, these are different from a service worker, because a service worker does its job great, but it just executes in the browser, handles those wonderful things like push notifications that we all love, and has various triggers.

The space has grown a hell of a lot in about a year. Later this year Akamai are releasing their own form of Edge Worker. Fastly are also building one in WebAssembly (Wasm), and AWS have also got solutions with Lambda. In the middle, we’ve got applications: we have Sloth, which is the one we built, Spark, which is Chris Green’s, and Logflare, which is in the Cloudflare apps. These basically mean you don’t have to be able to write performant JavaScript to make use of this technology. On the other side, we’ve already got Distilled ODN and A/B Rankings.

So, we’ve got Distilled and A/B Rankings, which allow us to actually make changes as well. Not necessarily with a CDN, but again it’s a bypass of the DevOps stack, so it falls in the Edge category.

Putting this in a table format, this means we can actually do many things which we might not necessarily have been able to do before. For example, log file collection through Salesforce Commerce Cloud is a nightmare, but soon we will be able to potentially do it through all four major CDNs by just…
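As a rough idea of what that pseudo log collection can look like at the worker level, here is a minimal sketch. The logging endpoint and record fields are assumptions for illustration, not any particular tool’s API.

```javascript
// Sketch: pseudo log-file collection at the edge for platforms that don't
// expose server logs. The log endpoint below is hypothetical.
const LOG_ENDPOINT = 'https://logs.example.com/collect';

addEventListener('fetch', event => {
  event.respondWith(handle(event));
});

async function handle(event) {
  const request = event.request;
  const response = await fetch(request);

  // Build a standard-log-style record from the incoming request.
  const record = {
    timestamp: new Date().toISOString(),
    url: request.url,
    method: request.method,
    status: response.status,
    userAgent: request.headers.get('user-agent') || '',
    ip: request.headers.get('cf-connecting-ip') || '',
  };

  // Ship it asynchronously so logging never delays the visitor's response.
  event.waitUntil(fetch(LOG_ENDPOINT, {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify(record),
  }));

  return response;
}
```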

Worker use cases differ a hell of a lot, and it’s basically situation dependent. The common ones I come across are redirects, so things like platforms that don’t support them, where you need to be able to handle a migration properly; A/B testing; overriding meta data; hreflang; and dynamic pre-rendering of JavaScript, which I’ll come onto in a little bit.

Some examples of this in action: the standard one, using Distilled, is a little JavaScript rewrite at the very bottom of the HTML page, which means you can have a dynamic title tag pulled through automatically onto the tab. Again, redirects: it’s just a little code, it’s very simple to do, and it helps if you are running into limits in your .htaccess file, or you just don’t have a means of implementing them.
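For example, a redirect rule at the worker level can be little more than a lookup table, as in this sketch (the paths here are made up):

```javascript
// Sketch: serving redirects from the edge when .htaccess limits or platform
// restrictions make server-side rules impractical.
const REDIRECTS = {
  '/old-category/': '/new-category/',
  '/summer-sale-2018/': '/sale/',
};

addEventListener('fetch', event => {
  event.respondWith(handleRedirects(event.request));
});

async function handleRedirects(request) {
  const url = new URL(request.url);
  const target = REDIRECTS[url.pathname];
  if (target) {
    // 301 so search engines consolidate signals to the new URL.
    return Response.redirect(`${url.origin}${target}`, 301);
  }
  return fetch(request); // no rule matched; pass through to the origin
}
```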

The pseudo log file collection element is interesting, because the two main culprit platforms are Salesforce Commerce Cloud and Shopify. When you collect the logs in this way, you essentially get a standard log, but as opposed to taking it from the server, you’re taking it from the initial request as Googlebot hits the site. The issue, however, is with Shopify, which was one of the main platforms we devised the solution for: they have gone grey cloud with Cloudflare at the moment, meaning any Edge technology with Shopify is not currently possible, simply because no traffic passes through the Cloudflare network, and it’s been that way since about January this year. So, hopefully with Akamai and other methods we can resolve this, but at the moment we’re stuck with Shopify. A/B testing on the Edge, then: without being able to do the actual live changes, if you just want to do redirects based on traffic, we can do it via the Edge with a 50/50 split rule, or just split traffic easily enough. That can easily be done through things like the Sloth back end; I’m pretty sure it can be done through Spark as well, or just by implementing it with JavaScript directly onto the CDN.
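Here is a minimal sketch of what that kind of 50/50 split can look like in a worker; the cookie name and variant paths are placeholders.

```javascript
// Sketch: a 50/50 split at the edge, using a cookie so each visitor stays in
// the same bucket across visits.
addEventListener('fetch', event => {
  event.respondWith(splitTraffic(event.request));
});

async function splitTraffic(request) {
  const cookies = request.headers.get('cookie') || '';
  let bucket = cookies.includes('ab_bucket=b') ? 'b'
             : cookies.includes('ab_bucket=a') ? 'a'
             : null;

  // First visit: assign a bucket at random, 50/50.
  const isNewVisitor = bucket === null;
  if (isNewVisitor) bucket = Math.random() < 0.5 ? 'a' : 'b';

  // Variant "b" of this page is served from an alternate path on the origin.
  const url = new URL(request.url);
  if (bucket === 'b' && url.pathname === '/landing/') {
    url.pathname = '/landing-b/';
  }

  const originResponse = await fetch(new Request(url.toString(), request));
  const response = new Response(originResponse.body, originResponse);
  if (isNewVisitor) {
    response.headers.append('set-cookie', `ab_bucket=${bucket}; Path=/`);
  }
  return response;
}
```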

A fun thing that we’ve started using at the moment is the pre-rendering of JavaScript in a dynamic fashion. We know that JavaScript pre-rendering can be expensive, it can be complicated, and it can be fraught with many other issues. A way around this, and I’m being honest, this next diagram is pretty much for developers, is essentially using Cloudflare Workers to take a cached version of a URL based on a sitemap, serving that cached version from a Google Cloud function to the actual search engine itself, and then allowing client-side rendering for actual general users.

In simple terms, when the request comes in, we first need to identify where the request comes from, whether that is a search engine or just a general user. As with dynamic rendering, if it is a search engine, we check the cache we’ve got stored in Google Cloud. But what if there is no cache available?

If it’s just the general user, we give them a client-side rendered version of the page.

If there’s no cache, we first of all trigger pre-rendering via the worker. Now, the important thing here is that most of the time we actually have to cut it short, because if we’re asking Google to render a page and it’s taking seven seconds to load, that’s ultimately negative: we’re showing Google that the page is slow to load, and it might impact rankings performance. If we can return after one second instead, that needs to trigger something else. So, when we cut it short, we return a 503.

Now, the interesting thing is we’ve tried with some crawl delays as well, and also Retry-After headers. When we work it that way, we can effectively say, “Here’s a 503, come back in 10 seconds.” Sometimes it happens, sometimes it doesn’t, but by returning that 503, we’ve not actually said to Google, “This isn’t pre-rendered, this is slow.” That would be negative.
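Pulling those pieces together, a sketch of the whole flow might look like the following. The bot pattern, cache bucket URL and render endpoint are all assumptions for illustration.

```javascript
// Sketch of the dynamic-rendering flow described above: bots get a cached
// pre-rendered snapshot; if none exists yet, we trigger rendering and return
// a 503 with Retry-After rather than making Google wait on a slow render.
const BOT_PATTERN = /googlebot|bingbot|yandex|baiduspider|duckduckbot/i;
const CACHE_BASE = 'https://storage.googleapis.com/prerender-cache-example';
const RENDER_TRIGGER = 'https://render.example.com/enqueue';

addEventListener('fetch', event => {
  event.respondWith(handle(event));
});

async function handle(event) {
  const request = event.request;
  const userAgent = request.headers.get('user-agent') || '';

  // General users get the normal client-side rendered page.
  if (!BOT_PATTERN.test(userAgent)) {
    return fetch(request);
  }

  // Search engines: look for a pre-rendered snapshot of this URL.
  const { pathname } = new URL(request.url);
  const cached = await fetch(`${CACHE_BASE}${pathname}.html`);
  if (cached.ok) {
    return new Response(cached.body, {
      status: 200,
      headers: { 'content-type': 'text/html; charset=utf-8' },
    });
  }

  // No snapshot yet: queue the render without blocking, and tell the bot to
  // come back shortly instead of serving a slow, half-rendered page.
  event.waitUntil(fetch(RENDER_TRIGGER, {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify({ path: pathname }),
  }));
  return new Response('Rendering in progress', {
    status: 503,
    headers: { 'retry-after': '10' },
  });
}
```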

To solve caching issues, we essentially have to look at it in batches, on a chronological basis. With this technology, for most sites a lot of core pages don’t change that frequently, so we do a crawl or upload a sitemap. This can be an automated crawl process, where the URLs are stored and cached every 24 or 48 hours, and for more frequently changing pages, obviously, that cadence can be set to run more often. If you are using the sitemap, obviously there are more restrictions, so we have to make sure we’re updating the sitemap, we have to make sure the XML is updated, and we have to make sure there are no issues around it. And if we crawl the site again, we have to have a little bit of human interaction in there, just to basically make sure the process is occurring and happening naturally.
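A batch re-cache job along those lines could be as simple as the sketch below: read the XML sitemap, then queue every URL for a fresh pre-render on a 24 or 48 hour cycle. The sitemap URL and render endpoint are hypothetical.

```javascript
// Sketch: a scheduled job that re-queues sitemap URLs for pre-rendering.
const SITEMAP_URL = 'https://www.example.com/sitemap.xml';
const RENDER_TRIGGER = 'https://render.example.com/enqueue';

async function recacheFromSitemap() {
  const xml = await (await fetch(SITEMAP_URL)).text();

  // Pull every <loc> entry out of the sitemap.
  const urls = [...xml.matchAll(/<loc>(.*?)<\/loc>/g)].map(m => m[1]);

  // Queue each URL for a fresh pre-render; the cache then serves these
  // snapshots until the next scheduled run.
  for (const url of urls) {
    await fetch(RENDER_TRIGGER, {
      method: 'POST',
      headers: { 'content-type': 'application/json' },
      body: JSON.stringify({ path: new URL(url).pathname }),
    });
  }
}

recacheFromSitemap();
```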

So, aside from the actual technical elements, one of the things we’ve noticed in the past year of implementing this with clients, and working it on different bases, is that this is a new solution to existing problems, and it doesn’t marry up with or improve existing processes, both in terms of marketing teams and in terms of development deployment. Being honest, most developers aren’t even aware this technology exists. So, on the pros, we need to understand, first of all, that the workers themselves are coded in JavaScript; Akamai’s, I believe, are going to be finalized and in JavaScript by November, when Edge Worker is released; Fastly’s is in Wasm. So we need to understand the languages, and whether we’re able to actually control and verify the output. Looking at Cloudflare and Akamai, it’s a simple one-click deployment and rollback, which again brings its own issues, which I’m going to come onto, around access and the actual stability of the infrastructure.

The reason I stress that “zero DevOps required” isn’t a circumvention of developers is that developers need to be involved in these processes, because, as described in the cons, there is a very small potential to introduce new front-end bugs to systems. So, if we’re going behind developers’ backs making various pushes to the site or to the source, we need to make sure that what we’ve tested is working on a staging environment before pushing it to production. There is a possibility of adding some additional latency, but it’s between 10 and 50 milliseconds. What we’ve actually found is, if you’ve got a naturally slow website anyway, it’s going to be closer to 50; if you’ve got a fast website, it will be closer to 10, if not negligible.

This is also why internal processes are important, because this basically introduces a whole plethora of risks and potential mismanagement. And let’s be honest, if someone goes into the back end of Cloudflare and doesn’t know what they’re doing, they could pretty much take a site down. So, this is where you need to put processes in place around responsibility and accountability for how this is handled within a business; that can be part of the change management process. Again, involving developers: if a deployment cycle is weekly, fit in around that schedule as opposed to making new processes around it. Through testing, we might have to develop new debugging processes, because obviously this is a new implementation method and might not be picked up in traditional ways, depending on what’s being implemented, because, being honest, there are multiple things you can do with this that we haven’t even explored yet. There’s the security aspect as well, because we are changing the request between the server and the browser response.

Now, the Edge can be used to implement things such as CORS policies, cross-site scripting policies and so on, and to block things that way, but still, we’re adding in that extra element, and we need to be conscious of involving security teams, for our own sake, and make sure that they know this is happening. And then there’s also a compliance aspect, because of legal requirements you might have, especially with GDPR, around data passing through, versus outside, the European Union, but traffic has to pass through the CDN to make this work.
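As a small illustration of that point, security policies can be set in the same worker pattern; the header values below are placeholders, not recommendations.

```javascript
// Sketch: applying CORS and basic security headers consistently at the edge.
addEventListener('fetch', event => {
  event.respondWith(addSecurityHeaders(event.request));
});

async function addSecurityHeaders(request) {
  const originResponse = await fetch(request);
  const response = new Response(originResponse.body, originResponse);

  // CORS: only allow cross-origin requests from a known front end.
  response.headers.set('access-control-allow-origin', 'https://www.example.com');

  // Basic XSS / framing protections applied site-wide.
  response.headers.set('x-content-type-options', 'nosniff');
  response.headers.set('x-frame-options', 'SAMEORIGIN');
  response.headers.set('content-security-policy', "default-src 'self'");

  return response;
}
```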

Going back to the earlier point about restricting access to the CDN: I’ve found in most businesses, sometimes it’s tight and there’s actually no way of getting control of it whatsoever, which is good. Otherwise, anyone with logins for the Cloudflare back end can do harm. Especially when you’re implementing changes through the CDN to a live site, this needs to be locked down or have a process around it, with responsible individuals and accountability, so that not everyone can just go, “Oh, I want to change a title tag. Let’s just do it through Cloudflare. Let’s just do it through Akamai,” and then bypass a defined process.

Access to the Cloudflare back end affects everything else as well, not just the Edge itself. It also affects things like SSL: if you’re passing your SSL through Cloudflare, someone in there can immediately break the SSL on your site. Similarly, I’ve seen mistakes before. Cloudflare gives you the ability to go in and add a WAF rule and effectively just block a country from being able to access your website, which I have seen people do because they don’t want to ship items to America, so they just block America in Cloudflare and wonder why the site is not ranking there anymore.

So, that’s really where we are in terms of the Edge now, and where we can go in the future essentially comes down to what we are able to do in terms of processes. The technologies are there, and most websites run on CDNs, whether it be Cloudflare or Akamai, and there are rumors Incapsula are also producing things like this, but essentially we’re limited by our own processes. For most websites out there, you’ll never ever need to do this.

You’ll have development capability, you’ll have WordPress plugins, you’ll have modules on Magento; absolutely fine, no need. But this is for the edge cases, where being able to implement hreflang could potentially reverse a trend and, over a year, save jobs and increase revenue.
