Tuesday, January 22, 2013

How to Protect Yourself from Duplicate Content


Duplicate content happens when the same or slightly different content (text and/or images) appears on more than one web page on the Internet. It can be internal, when it appears within a single website (different pages share the same content, or the same article is reachable at several different URLs), or external, when other websites copy or steal your original content, with or without your consent.

Duplicate content can negatively affect the experience of search engine users looking for information on the Internet. They won't be satisfied if, instead of useful and interesting articles, the results are full of links to different websites that all carry the same or similar information. To provide the best possible service, search engines constantly work on improving their algorithms to better detect duplicates on the Web, and even though they won't directly ban plagiarist websites, they will certainly penalize such activity by lowering page rankings and overall website authority. Here's how you can protect your content:

Copyright your content. One of the first steps in preventing others from stealing your website's original content is letting them know that your work is under copyright protection and must be properly cited if used. You can add a simple statement in the web page footer, but it is better to be clear and up front, using a disclaimer or adding a "Protected by …" badge like those available on various duplicate content detection sites.

Perform internal audits. Since duplicate content can also appear within the same website, intentionally or not, a great way to make sure your website is properly optimized is to audit it internally for duplicates. It often happens that category pages and full article pages contain very similar, if not identical, information, and the same can be true for product description pages that differ only in a few product characteristics. Before you start fighting external content scrapers, make sure your website is clean from within. Use robots.txt to control which pages should be crawled and indexed, and Google Webmaster Tools or Yahoo! Site Explorer to stay up to date with any additional problems.
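To make the robots.txt step concrete, here is a minimal sketch of how such rules might look and how you can verify them with Python's standard library; the rules, paths, and example.com URLs are hypothetical placeholders for whatever thin category or print pages duplicate your full articles, not a recommendation for any particular site.

# Minimal sketch: hypothetical robots.txt rules blocking thin duplicate pages.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Disallow: /category/
Disallow: /print/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

pages = [
    "https://example.com/articles/duplicate-content-guide",   # full article
    "https://example.com/category/seo",                       # thin category page
    "https://example.com/print/duplicate-content-guide",      # printer-friendly copy
]
for url in pages:
    status = "crawlable" if parser.can_fetch("*", url) else "blocked by robots.txt"
    print(url, "->", status)

Running a check like this against your own list of URLs is a quick way to confirm that the pages you meant to keep out of the index really are excluded.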

Actively check and monitor for duplicates. The second your new article is published on your website, particularly if it is an authoritative site with unique, creative articles people want to read and share, plenty of other blogs and websites will try to copy or rework it so they can republish it as their own original content. Because of this, it is essential to constantly check and monitor the popular search engines' results for duplicates that may harm your website's SEO efforts. Search Google for unique strings of your original text, or set Google Alerts for them, and check and monitor for copies of your web page URLs on the Internet using available tools and software such as PlagSpotter or CopyScape.
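As a rough sketch of the "unique strings" idea, the snippet below picks a few long sentences from an article and wraps them in quotes so they can be pasted into a search engine or turned into Google Alerts; the file name my_article.txt is a placeholder for wherever your published text lives.

import re

def candidate_search_strings(article_text, min_words=8, max_words=12):
    """Pick a few long, distinctive sentences and quote them for
    exact-match searches or Google Alerts."""
    sentences = re.split(r"(?<=[.!?])\s+", article_text)
    picks = []
    for sentence in sentences:
        words = sentence.split()
        if len(words) >= min_words:
            picks.append('"' + " ".join(words[:max_words]) + '"')
    return picks[:3]

# "my_article.txt" is a placeholder file containing your published article.
with open("my_article.txt", encoding="utf-8") as f:
    for query in candidate_search_strings(f.read()):
        print(query)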

Tuesday, January 15, 2013

What is Duplicate Content and How It Affects SEO


Duplicate content is a major issue on the Internet today, a problem most websites face. Every time the same or very similar content appears in more than one place (web page, URL) on the Internet, whether on the same website or on a different one, a duplicate is created. These duplicates ruin the user experience when searching the Web for information. When looking up certain keywords, users are interested only in relevant and useful links; they don't want to see a list of websites with the same text reappearing over and over. Since search engines exist to provide the best service for their users, it's only logical that duplicate content will negatively affect a website's SEO.

One of the most common ways duplicate content is created on the Internet is by copying and pasting other websites' original content, or by taking parts of several different texts and slightly reworking them so they can be presented as unique and original. If you submit your own article to several different websites, you'll also create duplicates. In addition, duplicate content can result from poor web development techniques, bad link structure, and bad SEO decisions.
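Careless URL handling is a typical example of how bad link structure duplicates a single article: the same page answers at mixed-case hosts, with and without a trailing slash, and with tracking parameters attached. The sketch below (the URLs and parameter names are hypothetical) shows how such variants can be collapsed onto one canonical address.

from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Hypothetical tracking parameters that create extra URL variants.
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "ref", "sessionid"}

def canonicalize(url):
    """Collapse common URL variants (host case, trailing slash, tracking
    parameters) onto one canonical form."""
    scheme, netloc, path, query, _ = urlsplit(url)
    netloc = netloc.lower()
    path = path.rstrip("/") or "/"
    query = urlencode([(key, value) for key, value in parse_qsl(query)
                       if key not in TRACKING_PARAMS])
    return urlunsplit((scheme, netloc, path, query, ""))

variants = [
    "https://Example.com/blog/duplicate-content/",
    "https://example.com/blog/duplicate-content?utm_source=feed",
    "https://example.com/blog/duplicate-content",
]
print({canonicalize(url) for url in variants})   # collapses to a single URL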

As mentioned before, search engines want to deliver quality results to their users, and they want to do it fast so their users stay satisfied. But given the complexity of the algorithms involved, crawling, analyzing, and indexing all those pages takes time and valuable resources, and once you add the duplicates it becomes very clear why Google and the other popular search engines shift the pressure onto webmasters and SEOs, motivating them to eliminate their duplicate pages.

So how does duplicate content affect SEO?

Google and the other search engines probably won't ban your website over a few duplicates, but there are other ways the duplicate content issue will affect your website's SEO. When the search engines identify duplicates, they will list only the original page in their search results, and the decision about which version is the original will depend on the age of the page, its PageRank, the website's authority, the number of incoming links, and so on. This means your website won't be listed in the results if you are publishing copy-pasted texts, and since the purpose of SEO is getting listed and ranked higher in the result pages, the negative effect of duplicate content on SEO is more than obvious.

Duplicates also hurt the way PageRank is distributed, because incoming links are diffused across the duplicate URLs instead of consolidating on a single page. Lower PageRank means lower website ranking and bad SEO. In addition, search engines find and index only a limited number of pages from each website, depending on its authority, and if many of those pages are duplicates, the additional pages you publish will be indexed much more slowly.
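As a deliberately simplified illustration (toy arithmetic, not Google's actual PageRank formula), the snippet below shows why splitting the same article across several URLs dilutes the value of its incoming links.

# Toy arithmetic, not Google's actual PageRank formula.
inbound_links = 12      # external links pointing at the article
duplicate_urls = 3      # the same article published under three URLs

consolidated = inbound_links                    # one URL collects all the link value
per_duplicate = inbound_links / duplicate_urls  # each duplicate gets only a share

print(f"single URL: {consolidated} links, each duplicate: {per_duplicate:.0f} links")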

Duplicate content has the same negative effect on the SEO of commercial websites. They often have several different pages for the same product, differing only in a single characteristic, and if the descriptions are exactly the same or only slightly different across all the product pages, they are creating duplicates that will also hurt their SEO efforts.

Wednesday, January 9, 2013

Plagiarism Detection: PlagScan vs. PlagSpotter


Plagiarism, or stealing other people's language and ideas, is a problem that has existed forever, but the development of the Internet over the last two decades has given it a completely new dimension, providing quick access to thousands of free, publicly available resources on any given subject. Instead of learning, developing critical thinking, and creating greater value, many students and authors now choose to copy, or often just slightly rework, others' original works, mixing several different sources. Plagiarism is a fact of life today; the question is how to stay protected.

There are different tools and services people use these days to detect plagiarism and protect their original work. School and university professors can, for example, use the PlagScan professional and academic plagiarism detection service to analyze their students' essays, theses, and dissertations and see whether parts of the work have been plagiarized. They can copy and paste the text, upload a whole paper, or add as many documents as they need, and PlagScan will check the content and deliver a detailed report in PDF, plain text, and docx format via email. The report shows the percentage of duplicated work, the sources of the original content, and highlights potentially plagiarized phrases, so reviewers can quickly assess, during regular proofreading, whether a particular match is plagiarism or just an acceptable citation.

Webmasters who often buy content for their websites and want to make sure their authors deliver only unique writing can also benefit from this plagiarism detection software. There are different types of accounts, single and power user, depending on the volume of checked documents, as well as a special version for registered organizations, where each user has an individual sub-account through which they can upload documents directly to the organization's account.

PlagScan is a paid service with an internal credit point system, but first-time users can also register for a free test account to evaluate the quality of the service. Since PlagScan is confident its users will be satisfied with the product, it additionally offers a full refund within two weeks of the initial purchase.

PlagScan checks uploaded content by comparing it to the user's own database and to its own global document base, and for Web documents it uses Yahoo's search index. But authors who write for the Internet, bloggers, and webmasters who want to protect their website content from theft and automatically monitor their web pages for plagiarism may be better served by the PlagSpotter duplicate content checking and monitoring tool.

This online tool instantly scans the entered URLs for duplicate content on the Internet and reports the percentage of matched content, together with a list of the external URLs that contain the matching text. Based on this data, SEOs and bloggers can take the necessary steps to report the detected plagiarism to the popular search engines, which can penalize and even block the offending websites if they don't remove the duplicates.
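If you want a back-of-the-envelope version of that "percentage of matched content" figure, the sketch below compares the text of your page with a suspected copy; it is only a rough approximation based on a generic string-similarity measure, not PlagSpotter's algorithm, and the two URLs are placeholders.

import difflib
import re
import urllib.request

def page_text(url):
    """Fetch a page and crudely strip HTML tags to get comparable text."""
    html = urllib.request.urlopen(url).read().decode("utf-8", "ignore")
    text = re.sub(r"<[^>]+>", " ", html)
    return re.sub(r"\s+", " ", text).strip()

def match_percentage(original_url, suspect_url):
    original, suspect = page_text(original_url), page_text(suspect_url)
    return 100 * difflib.SequenceMatcher(None, original, suspect).ratio()

# Placeholder URLs: your article and a page you suspect has copied it.
pct = match_percentage("https://example.com/my-article",
                       "https://example.net/suspect-copy")
print(f"{pct:.1f}% matched content")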

There's a free version of PlagSpotter that lets you enter only one URL at a time, and different paid packages that automatically monitor the URLs you've selected for duplicate content, on a daily or weekly basis, and send email notifications with the percentage of duplication. For additional protection, site owners can embed the "Protected by PlagSpotter" badge on their website to warn others against stealing their content. The paid plans also come with a free 7-day trial.