
Google Search Console to Alert Users About Site Hacks and Malware by @MattGSouthern

Google added a ‘Security Issues’ tab in Search Console that will report on harmful activities like site hacks and malware.
Within this tab, Search Console will notify webmasters about anything on their site that prompts Google Chrome to display a warning to visitors.

We hope you don’t need to use a Security Issues tab 🔐 in the new Search console, but if you do – this tool helps you find & fix 🛠 hacking & malware on your website https://t.co/CfiFsXXShg pic.twitter.com/mcPXM8ub9G
— Google Webmasters (@googlewmc) January 30, 2019

Possible security issues may include:
Hacks
Malware
Harmful downloads
Uncommon downloads
Deceptive pages
Unclear mobile billing
Search Console will also provide information about how to fix specific security issues.
When the issues are resolved, webmasters can request a review from Google’s team.
Alternatively, if there are no security issues detected, the tab will display a message saying “No issues detected” along with a reassuring green checkmark.
The tab can be found by logging into Search Console and navigating to Security & Manual Actions > Security Issues.
If you’re logged into Search Console right now you can click this link to visit the new Security Issues section directly.


How to 3x Your Blog Traffic with Technical SEO by @seo_travel

Blogging has become an industry in itself, with many people now making full-time careers from it and companies using blogs as a key way of attracting new business.
As the influence of blogs has increased, the space has naturally become more competitive and it is harder to stand out from the crowd. Bloggers are investing huge amounts of time and money in their content, so simply sitting down and publishing some words you wrote in your kitchen is unlikely to cut it these days.
There are too many ways of making a blog successful to cover in one post, but there is a more manageable set of things you can do to improve your blog’s performance in search specifically – things a surprising number of bloggers overlook.
If you have a blog that has been built off the back of a great brand and fantastic social media presence, but you haven’t paid too much attention to SEO, then this post is for you.
I’m going to share exactly what we did to more than triple a travel blog’s search traffic over a 12-month period and take them from the tens of thousands of visits per month to the hundreds of thousands.

Our work has focused on technical SEO activity rather than content production or off-site work.
It’s important to highlight that this blog already had a very good presence and lots of things going for it, so I want to break down the starting points in a bit more detail before we get into the nitty-gritty of what accelerated the growth.
Links
The site already had a very good link profile, with a wide variety of links on strong, top-tier publications like CNN and the Independent, along with lots of links on other blogs they had built relationships with.
I’m not going to go into much detail on how to do this as it warrants its own post, but the key approaches were:
Guest Writing: Writing posts for other blogs or getting featured via interviews etc. This is very easy for bloggers with non-commercial sites and is a very scalable way to develop a good link profile.
PR: Building relationships with journalists or pitching stories to big publications that can gain you links and mentions on very powerful sites.
Content
The site has been around a long time, so it had accumulated lots of content that was well written, edited, and targeted with SEO in mind.
As a result, a lot of it was already ranking well and bringing traffic to the site.
If you’re just getting started on your blogging journey then populating the site with really good, quality content should be a high priority for you.

So, as I highlighted at the start, the key part of our activity that took the site from tens of thousands of visits per month to hundreds of thousands was technical SEO work.
I’m going to break down all the key elements we addressed below, so if you have a blog in a similar position to the one described above, you can implement these actions to help unleash your blog’s traffic, too.
I’ve prioritized these in the order I believe had the biggest impact (largest first), but this is obviously up for debate – we can’t be sure what influence each individual action had, as they were all implemented on the same timeline.
Indexation Issues
A common issue for blogs, especially those that have been around a long time, is having lots of URLs indexed by Google that are not genuine pages and offer no value to users.
These included regular offenders for WordPress sites, such as:
Category pages.
Tag pages.
Author pages.
Archive pages.
We crawled the site, identified the patterns behind the key offenders in this area, and either noindexed them or updated Search Console to stop Google from crawling them.
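For anyone replicating this on their own blog, the end result on those archive-style pages is simply a robots meta tag in the page head. Most SEO plugins can add this for you, so treat the snippet below as an illustration rather than a required hand-edit:

<meta name="robots" content="noindex, follow">

The “follow” part keeps the internal links on those pages crawlable even though the pages themselves stay out of the index.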
Thin Content
The site had a huge number of pages with extremely thin content.
These were basically category pages with a small intro added, clearly created with SEO in mind to target long-tail phrases.
However, it was done to such a degree that the pages were of extremely low quality and added very little value for a user landing on them.
The potential upside of this kind of page wasn’t enough to warrant the time required to add content to them, so these were either removed or noindexed.
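For the pages we removed entirely, it helps to return a 410 (Gone) status so Google knows the removal is deliberate rather than an error. A minimal sketch, assuming an Apache server and a hypothetical URL:

# .htaccess – this thin page has been removed on purpose
Redirect gone /category/cheap-city-breaks/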
Page Speed
When we started work, the site’s page speed was extremely poor due to web fonts, large images, a lack of caching, and various other issues.
We used some plugins to help improve this, which isn’t the dream solution (building a site more efficiently from the ground up is preferable).
But for bloggers on a tight budget and with limited resources and knowledge, you can still make some significant steps forward.
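As one example of the kind of change a caching plugin typically makes under the hood, telling browsers to hold on to static assets saves repeat visitors a lot of downloading. A sketch, assuming Apache with mod_expires enabled:

# .htaccess – let browsers cache images for a month
ExpiresActive On
ExpiresByType image/jpeg "access plus 1 month"
ExpiresByType image/png "access plus 1 month"

Compressing oversized images and trimming unused fonts are similarly plugin-friendly fixes that don’t require custom development.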
Cannibalization
For some of the site’s key money phrases, there were multiple pages present that were targeting the same topic.
Google was chopping and changing which of the pages ranked, so it was clearly unsure which was the best choice. This is usually a good sign of content cannibalization and suggests you should merge those pages into one top-quality page.
We did just that, and soon saw the ranking page settle down, with ranking performance jumping forward significantly and staying there consistently.
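For reference, consolidating usually means merging the content into the strongest page and 301-redirecting the weaker URLs into it, so their links and history aren’t wasted. A minimal sketch with hypothetical URLs, assuming Apache:

# .htaccess – point the merged-away post at the surviving page
Redirect 301 /best-travel-backpacks-2017/ https://example.com/best-travel-backpacks/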
XML Sitemap
The site had a variety of sitemaps submitted in Search Console, many of which listed URLs that we did not want crawled, let alone indexed.
We trimmed these so the sitemaps only listed URLs with good-quality content, making it much clearer what should be indexed and which content was most important on the site.
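If you’re tidying up your own sitemaps, the target is a file that lists only canonical, index-worthy URLs and nothing else. A stripped-down example with hypothetical URLs:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/two-weeks-in-japan-itinerary/</loc>
    <lastmod>2019-01-15</lastmod>
  </url>
  <url>
    <loc>https://example.com/best-travel-backpacks/</loc>
    <lastmod>2018-11-02</lastmod>
  </url>
</urlset>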
Aggressive Ads
Advertising is the way most bloggers make their money, so telling them to cut it down is not a popular conversation.
However, if you go overboard on your advertising, it can become ineffective and even harm your overall performance, so you get less traffic, fewer conversions, and therefore fewer pennies in your piggy bank.
Finding the balance is key, and Google’s recent updates have been shown to hurt sites where excessive advertising takes precedence over unique, quality content.
Page Structure
An issue we see regularly with blogs and websites in general is that header tags are used for styling rather than structure.
H1, H2, and H3 tags should be used to clearly illustrate the structure of your page so Google can map it on to the phrases it would expect to see mentioned on the topic being covered.
If your site’s headers are being used for styling then get on to your developer and get it changed so you use these elements in a more tactical and optimized way.
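As a quick illustration of what “structure, not styling” means in practice, a hypothetical travel post might be outlined like this, with the visual treatment handled entirely in CSS:

<h1>Two Weeks in Japan: The Complete Itinerary</h1>
  <h2>Week One: Tokyo</h2>
    <h3>Where to Stay</h3>
    <h3>What to Eat</h3>
  <h2>Week Two: Kyoto and Osaka</h2>

There is one H1 describing the page as a whole, and each lower level sits inside the section above it.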
Internal Linking
We worked closely with the client to clean up internal links and improve how they were being used. This included:
Fixing any dead internal links that were linking to broken pages.
Fixing internal links that took users and bots through redirect chains, so the link pointed directly to the correct destination.
Adding more links to important pages throughout the site.
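If you want to sanity-check your own internal links, a short script can flag broken targets and redirect chains before you fix them at the source. A minimal sketch in Python, assuming the requests library is installed and using a hypothetical list of URLs (in practice you would export these from a site crawl):

import requests

# Hypothetical internal link targets – export the real list from a crawler.
internal_links = [
    "https://example.com/two-weeks-in-japan-itinerary/",
    "https://example.com/old-packing-list/",
]

for url in internal_links:
    response = requests.get(url, allow_redirects=True, timeout=10)
    if response.status_code == 404:
        print(f"Broken link target: {url}")
    elif response.history:
        # response.history lists every redirect hop followed before the final URL.
        chain = " -> ".join(hop.url for hop in response.history)
        print(f"Redirect chain: {chain} -> {response.url}")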
Link Updates
As I mentioned initially, the site had some excellent links that it had established over the years through natural approaches and some more direct efforts.
Some of these more direct efforts involved getting optimized anchor text pointing to key money pages, and this had been a bit overzealous at times. We believed it was potentially holding the site back from ranking for phrases in that area, and for that page in particular.
Fortunately, the owners still had contact with many of the sites where this was in place, so we advised which to contact to have their anchor text updated to make it either branded or more generic (e.g., click here).

There were other elements involved too, but those above were the key issues that were wide reaching and causing significant performance issues.
It’s rare to find one silver bullet with technical SEO, but if you chip away at the wide variety of issues that can impact you then you can see some serious improvements.
The theory of marginal gains certainly applies here, and I’d advise any blogger who is well established to pay close attention to these kinds of issues if they haven’t already.
We also haven’t yet implemented all of our recommendations. One key outstanding item is implementing “hub pages” that guide people to all the key content on a topic.
In travel, this is very much destination focused, and there is a lot of search interest to gain if you create high quality pages for those hubs. This is the key focus to move on to next to help accelerate this site’s progress further, and there is a huge amount of potential in it once implemented.
So if you’re a blogger with lots of great content and links, but you haven’t yet paid any attention to your technical SEO, do it now!
Make sure you aren’t leaving significant amounts of traffic on the table – you may be sitting on huge growth potential. Time to kick into gear!
Image Credits
Featured Image: Created by author, January 2019
In-post Images: Created by author, January 2019


404 Sitemaps To Remove Them In The New Google Search Console

Google’s John Mueller was asked on Twitter if there is a way in the new Google Search Console to remove old Sitemap files. He said you can 404 them to remove them. Maybe Google will add a button to delete old Sitemap files there, but for now, 404ing them will eventually get them removed.
Here is the Q&A on that topic:

Yep, I’m sure we’ll continue to have ways to remove sitemap files from there. If the sitemap file URL (“sitemap.xml” or whatever you use) starts returning 404s, we’ll stop checking it over time automatically too.
— 🍌 John 🍌 (@JohnMu) January 30, 2019
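In practice, the simplest way to 404 an old sitemap is to delete the file from the server. If the URL is generated dynamically (by a plugin, for example), a rewrite rule can force the 404 instead – a sketch assuming Apache with mod_rewrite and a hypothetical filename:

# .htaccess – make the retired sitemap URL return a 404
RewriteEngine On
RewriteRule ^old-sitemap\.xml$ - [R=404,L]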
Forum discussion at Twitter.

Most SEOs Want Fetch As Google Even With URL Inspection Tool

Google told us the Fetch as Google tool is going away and being replaced by the URL Inspection Tool. But some SEOs – well, it seems like the majority of SEOs – want Google to port over the Fetch as Google tool as well. A Twitter poll with 88 responses shows about 75% want both the Fetch as Google and URL Inspection tools.

#SEO do you think the URL Inspection tool is a fitting replacement for the Fetch and Render tool in old GSC? Or would you want both the tool and the Fetch Google Report? Which do you need?
— Kristine Schachinger (@schachin) January 30, 2019
I think the poll was worded in a way that led the results to be higher here. Who here wants more versus less?
But personally, I’d be fine with just the URL Inspection tool. I think it covers most, if not all, of what I’d personally use the Fetch as Google tool for. If you disagree, maybe Google will read the comments here, but if not, use the feedback link in the new Google Search Console sidebar to let them know you really want it.
Forum discussion at Twitter.

How to Address Security Risks with Robots.txt Files by @s_watts_seo

The robot exclusion standard is nearly 25 years old, but the security risks created by improper use of the standard are not widely understood.
Confusion remains about the purpose of the robot exclusion standard.
Read on to learn how to properly use it in order to avoid security risks and keep your sensitive data protected.
What Is the Robots Exclusion Standard & What Is a Robots.txt File?
The robots.txt file is used to tell web crawlers and other well-meaning robots a few things about the structure of a website. It is openly accessible and can also be read and understood quickly and easily by humans.
The robots.txt file can tell crawlers where to find the XML sitemap file(s), how fast the site can be crawled, and (most famously) which webpages and directories not to crawl.
Before a good robot crawls a webpage, it first checks for the existence of a robots.txt file and, if one exists, usually respects the directives found within.
The robots.txt file is one of the first things new SEO practitioners learn about. It seems easy to use and powerful. This set of conditions, unfortunately, results in well-intentioned but high-risk use of the file.
In order to tell a robot not to crawl a webpage or directory, the robots exclusion standard relies on “Disallow” declarations – in which a robot is “not allowed” to access the page(s).
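Putting those pieces together, a typical (hypothetical) robots.txt might look like this:

# Apply to all well-behaved crawlers; keep them out of the admin area
User-agent: *
Disallow: /wp-admin/

# Non-standard directive, but some crawlers honor it as a crawl-rate hint
Crawl-delay: 10

# Tell crawlers where to find the XML sitemap
Sitemap: https://example.com/sitemap.xml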
The Robots.txt Security Risk
The robots.txt file isn’t a hard directive; it is merely a suggestion. Good robots like Googlebot respect the directives in the file.
Bad robots, though, may completely ignore it or worse. In fact, some nefarious robots and penetration test robots specifically look for robots.txt files for the very purpose of visiting the disallowed site sections.
If a villainous actor – whether human or robot – is trying to find private or confidential information on a website, the robots.txt file’s disallow list can serve as a map. It is the first, most obvious place to look.
In this way, if a site administrator thinks they are using the robots.txt file to secure their content and keep pages private, they are likely doing the exact opposite.
There are also many cases in which the files excluded via the robots exclusion standard are not truly confidential, but it is still not desirable for a competitor to find them.
For instance, robots.txt files can contain details about A/B test URL patterns or sections of the website which are new and under development.
In these cases, it might not be a true security risk, but still, there are risks involved in mentioning these sensitive areas in an accessible document.
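To make the risk concrete, consider a hypothetical robots.txt like the one below. Every Disallow line doubles as an invitation to go and have a look:

User-agent: *
Disallow: /staging-redesign/
Disallow: /internal-reports/
Disallow: /checkout-ab-test-v2/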
Best Practices for Reducing the Risks of Robots.txt Files
There are a few best practices for reducing the risks posed by robots.txt files.
1. Understand What Robots.txt Is for – and What It Isn’t For
The robots exclusion standard will not help to remove a URL from a search engine’s index, and it won’t stop a search engine from adding a URL to its index.
Search engines can add URLs to their index even if they’ve been instructed not to crawl them. Crawling and indexing are distinct activities, and the robots.txt file does nothing to stop the indexing of URLs.
2. Be Careful When Using Both Noindex and Robots.txt Disallow at the Same Time
It is an exceedingly rare case in which a page should have both a noindex tag and a robots.txt disallow directive. In fact, such a use case might not actually exist. Remember that a disallowed page is never fetched, so the crawler never sees the noindex tag – the URL can still end up indexed based on links alone.
Google used to show this message in the results for these pages, rather than a description: “A description for this result is not available because of this site’s robots.txt”.
Lately, this seems to have changed to “No information is available for this page” instead.
3. Use Noindex, Not Disallow, for Pages That Need to Be Private yet Publicly Accessible
By doing this you can ensure that if a good crawler finds a URL that shouldn’t be indexed, it will not be indexed.
For content with this required level of security, it is OK for a crawler to visit the URL but not OK for the crawler to index the content.
For pages that should be private and not publicly accessible, password protection or IP whitelisting are the best solutions.
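For the publicly accessible but not indexed case, there are two common ways to send the noindex signal – a robots meta tag for HTML pages, or an X-Robots-Tag response header for files such as PDFs. A sketch with a hypothetical filename, assuming Apache with mod_headers for the header version:

<!-- In the page's head: crawlers may visit, but must not index -->
<meta name="robots" content="noindex">

# .htaccess – the same signal for a non-HTML file
<Files "internal-media-kit.pdf">
  Header set X-Robots-Tag "noindex"
</Files>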
4. Disallow Directories, Not Specific Pages
By listing specific pages to disallow, you are simply making it that much easier for bad actors to find the pages you don’t want them to find.
If you disallow a directory instead, the nefarious person or robot might still be able to find the ‘hidden’ pages within it via brute force or the inurl: search operator, but the exact map of the pages won’t be laid out for them.
Be sure to include an index page, a redirect, or a 404 at the directory index level to ensure your files aren’t incidentally exposed via an “index of” page. If you create an index page for the directory level, certainly do not include links to the private content!
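A hypothetical before-and-after, taken from the same User-agent group:

# Risky: hands over an exact list of the files you care about
Disallow: /downloads/partner-pricing.pdf
Disallow: /downloads/unreleased-ebook.pdf

# Better: hide the whole directory without naming its contents
Disallow: /downloads/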
5. Set up a Honeypot for IP Blacklisting
If you want to take your security to the next level, consider setting up a honeypot using your robots.txt file. Include a disallow directive in robots.txt that sounds appealing to bad guys, like “Disallow: /secure/logins.html”.
Then, set up IP logging on the disallowed resource. Any IP address that attempts to load “logins.html” can then be blacklisted from accessing any portion of your website moving forward.
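One way to handle the logging without touching application code is to scan the server’s access log for hits on the decoy path. A minimal sketch in Python, assuming the common Apache/Nginx log format and a hypothetical log location:

# List the IP addresses that requested the honeypot path.
honeypot_path = "/secure/logins.html"  # matches the decoy Disallow rule above
offenders = set()

with open("/var/log/apache2/access.log") as log_file:  # path varies by host
    for line in log_file:
        fields = line.split()
        # Common log format: client IP is field 0, requested path is field 6.
        if len(fields) > 6 and fields[6].startswith(honeypot_path):
            offenders.add(fields[0])

for ip in sorted(offenders):
    print(ip)  # feed these into your firewall or blocklist of choice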
Conclusion
The robots.txt file is a critical SEO tool for instructing good robots on how to behave, but treating it as if it were somehow a security protocol is misguided and dangerous.
If you have webpages that should be publicly accessible but not appear in search results, the best approach is to use a noindex robots tag on the pages themselves (or X-Robots-Tag header response).
Simply adding a list of URLs intended to be private to a robots.txt file is one of the worst ways of trying to keep URLs hidden, and in most cases it results in exactly the opposite of the intended outcome.

