Tuesday, October 19, 2004

 

Spoof the Referrer

1.
Adding hotlink protection for resources in plog

By bruce
From http://blog.9zi.com/post/1/928

File downloads and large images account for the biggest share of a site's traffic. Many people like to embed images hosted on other sites, so the hosting site's bandwidth surges even though the page views are not its own.

Whatever the motivation, all sorts of anti-hotlinking measures have appeared.


How hotlink protection works:

The HTTP standard defines a dedicated header field, Referer, that records the referring page.

First, it lets a site trace the address of the page a visitor arrived from.

Second, for a resource file, it identifies the web page that embeds and displays it.

All hotlink protection methods are therefore based on this Referer field.

Two approaches are common on the net:

The first uses Apache's FilesMatch container. Add the following to httpd.conf:
SetEnvIfNoCase Referer "^http://host1\.vhost\.com/" local_ref=1
SetEnvIfNoCase Referer "^http://host2\.vhost/" local_ref=1 # access via the virtual host name
SetEnvIfNoCase Referer "^http://202\.112\.20\.108/" local_ref=1 # access via the IP address
<FilesMatch "\.(?i:gif|png|jpg|jpeg)$"> # extensions matched case-insensitively
Order Allow,Deny
Allow from env=local_ref
</FilesMatch>
This is a convenient way to keep resource files from being referenced by any URL that is not explicitly allowed.

The second uses rewrite rules. It requires Apache's mod_rewrite and per-directory .htaccess overrides to be enabled.
Add an .htaccess file in the virtual host's document root describing the redirect: every image request whose Referer is not a local address is redirected to a warning image.
RewriteEngine On
RewriteCond %{HTTP_REFERER} !^$
RewriteCond %{HTTP_REFERER} !^http://(www\.)?myhost\.net/.*$ [NC]
RewriteCond %{REQUEST_URI} !abc\.gif$ # exempt the warning image itself, or the redirect would loop
RewriteRule \.(png|jpeg|gif|jpg)$ http://myhost.net/abc.gif [R,L]
One advantage is that each virtual host can carry its own set of rules.

While adding hotlink protection to plog, I discovered a problem, and what turned out to be a decent approach.
plog manages all of its resources itself and serves them dynamically through resserver.php; a single entry point like this makes it convenient to add permission checks.
It also makes both methods above unusable, because Apache no longer serves the resource files directly; PHP reads them from disk instead.

So the only option is to intervene in the code: before the resource file is read and output, add a check like the following.

$referer = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '';
$selfurl = $_SERVER['HTTP_HOST'];
// Block the request unless our own host name appears somewhere in the Referer.
// Compare with === false: strpos() may return 0 for a match at the start,
// which a loose == false test would misread as "not found".
if (strpos($referer, $selfurl) === false)
{
    echo 'Hotlinking is not allowed!';
    exit(1);
}
This is a bit lazy: it merely checks whether the referring address contains our host name. But the principle is exactly that: decide whether the Referer is an address on our own site.

On the downloading side, we often hit the mirror image of this: a site refuses the download and complains about hotlinking. The simplest way to fetch such a file is to change the Referer.

In FlashGet, for example, just enter the download address itself in the "Referer" field below the URL.
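The same trick works from any HTTP client that lets you set headers. As a minimal sketch of what FlashGet's "Referer" field does, in Python (both URLs are placeholders, not a real protected file):

# Fetch a hotlink-protected file while presenting the download address itself
# as the Referer, mimicking the FlashGet trick described above.
import urllib.request

req = urllib.request.Request(
    "http://example.com/files/movie.zip",
    headers={"Referer": "http://example.com/files/movie.zip"},  # the download address itself
)
with urllib.request.urlopen(req) as resp:
    data = resp.read()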

2.
A clever Apache configuration for protecting a site's images

Source unknown

Every site owner strives to polish their site and make it look cooler and more attractive, most commonly with images, logos, and Flash. But this brings a problem: the prettier and more appealing the site, the more likely its images and Flash will be quietly stolen by other sites. Below, we discuss how to keep a site's images from being misappropriated.

The problem to solve

Simply put, there are two distinct kinds of theft:
1. Using the HTML IMG tag to embed this site's images in another site's pages.
2. Downloading images from this site and republishing them on another site.

With the first kind, a legitimate site's images are used to decorate other sites, and the damage is considerable: visitors to the offending site actually fetch the images from the legitimate site, so the legitimate site's log files fill with those requests and its bandwidth is consumed by them, while it gets nothing in return. This kind of theft can be fully prevented by technical means.

The second kind is more insidious. Visitors view the stolen images directly on the offending site, and the legitimate site's copyright is infringed without compensation; the theft may never even be discovered. Because of how the Web works, this kind of theft cannot actually be prevented, but it can be made considerably harder.

Completely eliminating both kinds of theft is unrealistic, but technical measures can make stealing very difficult. Under Apache, configuration can restrict the hotlinking of a site's images.

Identifying the files to protect

As a site administrator you would most like to protect every document on the site, but technically that is unrealistic, so here we only discuss protecting image files.

The first step is to identify the files that need protection; only then can the identified files be protected further. Add the following to the Apache configuration file:

<FilesMatch "\.(gif|jpg)$">

[protection directives go here]

</FilesMatch>

This container can be wrapped in a <Directory> or <VirtualHost> container, or stand alone outside any enclosing container, in which case it protects matching files across the whole site; it can even live in an .htaccess file. Placing the container in different locations changes the scope of the protection.

The Referer HTTP header field

When a user requests a page from a web server, the browser's HTTP request carries information known as HTTP request headers. These headers describe the request: the requesting host's browser version, the user's language and operating system, the document requested, and so on, transmitted as name/value pairs.

Among these, the Referer field is the key to preventing image theft. Referer gives the URL of the page the client came from. For example, if a user visits page A and then follows a link from page A to page B, the request for page B includes a Referer field saying, in effect, "this request came from page A." If a request does not originate from a page, because the user typed page A's URL directly into the browser's address bar, the request carries no Referer field. How does this help against hotlinking? The Referer field tells us whether a request for an image comes from one of our own pages or from some other site.
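To make this concrete, a browser's request for an image embedded in one of our own pages might look like this on the wire (an illustrative request; the host and paths are examples):

GET /images/logo.gif HTTP/1.1
Host: my.apache.org
Referer: http://my.apache.org/pageA.html
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0)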

Tagging images with SetEnvIf

As a simple example, suppose the site to protect has its home page at http://my.apache.org and we want to refuse every request that does not originate from this site (for example, allowing only images embedded in this site's own pages). We can use an environment variable as a flag that is set whenever the condition is met:
SetEnvIfNoCase Referer "^http://my\.apache\.org/" local_ref=1

When Apache handles a request, it checks the Referer field of the HTTP request header; if the request originates from this site (that is, the referring page's URL is on our own domain), the environment variable local_ref is set to 1.

The string in double quotes is a regular expression; the environment variable is set only when the request matches it. This article does not cover regular expressions in detail; it is enough to understand that the SetEnvIf* directives take a regular expression as an argument.

The "NoCase" part of SetEnvIfNoCase means the regular expression is matched case-insensitively: 'http://my.apache.org/', 'http://My.Apache.Org/', and 'http://MY.APACHE.ORG/' all match.

Using the environment variable in access control

Apache's Order, Allow, and Deny directives implement per-document access control based on environment variables. When using them, the first thing to consider is how the order of Allow and Deny affects the result; they should be used like this:
Order Allow,Deny

This tells Apache to process the Allow directives relevant to the request first, then the Deny directives. The default policy of this ordering is Deny, so unless something explicitly allows the request, it is refused, and any illegitimate access fails.

Then, to make the local-referer flag take effect, add to Apache's configuration file httpd.conf:

Order Allow,Deny
Allow from env=local_ref

Now a request is allowed only when the local_ref variable is defined; all other requests are refused, because they fail the Allow condition.

Note: do not wrap these directives in a <Limit> container in .htaccess or httpd.conf. The container is unnecessary here, unless you have special requirements such as treating GET and POST requests differently.

Putting the pieces together, the Apache configuration contains:

SetEnvIfNoCase Referer "^http://my\.apache\.org/" local_ref=1
<FilesMatch "\.(gif|jpg)$">
Order Allow,Deny
Allow from env=local_ref
</FilesMatch>

This can live either in the server configuration file httpd.conf or in an .htaccess file; the effect is the same: within the scope of these directives, only images referenced from this site's own pages can be accessed.

Watermarking images

The method above cannot stop image hotlinking completely, because a determined thief can forge the Referer value and defeat these settings. Hotlinking cannot be prevented entirely, but the measures above make it much harder.

There is one more way to protect images: watermark every image on the site. Watermarking a digital image means embedding a special signature code in it, which can later be verified and detected. A digital watermark does not degrade image quality, and even when only a cropped portion of the image remains, the watermark can survive. After an image is re-edited, printed, and scanned again, the watermark can still be detected. Watermarking is therefore an excellent technique for protecting images from theft.
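To make the idea of embedding and verifying a signature tangible, here is a toy sketch in Python using the Pillow imaging library (file names are hypothetical). Unlike the robust frequency-domain watermarks described above, this naive least-significant-bit version will not survive editing, printing, or scanning; it only illustrates the embed/verify cycle.

# Toy watermark: hide a signature string in the red channel's low bits, then
# verify it later. Requires a lossless format such as PNG; real digital
# watermarks use robust techniques that survive cropping and rescanning.
from PIL import Image

SIGNATURE = b"my.apache.org"

def embed(src, dst, sig=SIGNATURE):
    img = Image.open(src).convert("RGB")
    px = img.load()
    w = img.size[0]
    bits = [int(b) for byte in sig for b in format(byte, "08b")]
    for i, bit in enumerate(bits):
        x, y = i % w, i // w
        r, g, b = px[x, y]
        px[x, y] = ((r & ~1) | bit, g, b)  # overwrite the red LSB with one signature bit
    img.save(dst)

def verify(path, sig=SIGNATURE):
    img = Image.open(path).convert("RGB")
    px = img.load()
    w = img.size[0]
    n = len(sig) * 8
    bits = "".join(str(px[i % w, i // w][0] & 1) for i in range(n))
    return bytes(int(bits[i:i+8], 2) for i in range(0, n, 8)) == sig

embed("photo.png", "photo_marked.png")
print(verify("photo_marked.png"))  # True: the signature is present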

Logging theft attempts

If you want to know whether your site's artwork is being stolen, you can use the same detection and environment variables to log suspicious requests. For example, adding the following to httpd.conf records every request with an illegitimate Referer header in /usr/local/web/apache/logs/poachers_log:

SetEnvIfNoCase Referer "!^http://my\.apache\.org/" not_local_ref=1
SetEnvIfNoCase Request_URI "\.(gif|jpg)" is_image=1
RewriteEngine On
RewriteCond %{ENV:not_local_ref} =1
RewriteCond %{ENV:is_image} =1
RewriteRule .* - [Last,Env=poach_attempt:1]
CustomLog logs/poachers_log CLF env=poach_attempt

The first two lines set flags for the conditions (an image file without a proper local Referer), the RewriteCond lines test whether those flags are set, the RewriteRule sets a third flag, and the last line logs such requests to a dedicated file.

This has been a brief look at how Apache configuration can limit the hotlinking of a site's images. It is offered to start the conversation; I hope readers will share better techniques of their own.

3.
Spoofing the Referrer using HttpWebRequest
By Dave Wanta on VB.NET
From http://www.developerfusion.co.uk/show/4672/

I noticed the article the other day on your website about "Spoofing the Referer During a Web Request". Immediately after reading it, I wondered whether you can do this using ASP.NET. The answer is a resounding "YES, of course!". This works because the HTTP standard allows the client to dictate the HTTP_REFERER variable.

Here is the code:


Imports System.Net
Imports System.IO
Imports System.Text

Function FetchURL(SomeURL As String, Referer As String) As String
    Dim WebResp As HttpWebResponse
    Dim HTTPGetRequest As HttpWebRequest
    Dim sr As StreamReader
    Dim myString As String
    HTTPGetRequest = DirectCast(WebRequest.Create(SomeURL), HttpWebRequest)
    HTTPGetRequest.KeepAlive = False
    ' The Referer header is entirely client-supplied, so it can be set to anything
    HTTPGetRequest.Referer = Referer
    WebResp = DirectCast(HTTPGetRequest.GetResponse(), HttpWebResponse)
    sr = New StreamReader(WebResp.GetResponseStream(), Encoding.ASCII)
    myString = sr.ReadToEnd()
    sr.Close()
    WebResp.Close()
    Return myString
End Function
You can then call this using the following:

Dim PageString As String
PageString = FetchURL("http://www.google.com/","http://www.microsoft.com")

4.
Proposal on referrer spam: Background and blacklists

From http://underscorebleach.net/jotsheet/2005/01/referrer-spam-proposal

Referrer (or referer) spam has become a serious problem in the blogosphere. We need an intelligent way to eliminate this growing nuisance. I've thought about and researched this for the past few days, and below I offer a proposal for a technological solution to this problem. It requires programming, and I am not a programmer, so I welcome suggestions, corrections, and improvements to this proposal.
I hope that this blog entry can serve as something of a starting point for information about referrer spam as well as a sandbox for exchanging ideas about methods of curbing or eliminating it.
Table of Contents
Background
Doesn't rel="nofollow" solve the problem?
Recommended webmaster practices
The .htaccess arms race is unwinnable
Technical characteristics of referral spam
Idea #1: Filter referrer URLs against Jay Allen's MT-Blacklist
Idea #2: Filter referrer IPs against spam blacklists
Conclusion
Addenda
Other resources
Background
I will not go into much detail about the definition, origins, or purposes of referral spam; please refer to other sites for that. I will mention that spammers are not stupid, and their activities always have a purpose. Spamming consumes the spammers' own resources, and as long as bloggers continue to publish records of referrers, it will remain profitable and worthwhile for referral spammers to continue their endeavors.
Doesn't rel="nofollow" solve the problem?
As you may have heard, an illustrious coalition of blogging and search engine companies recently announced support for a new HTML attribute designed primarily to combat comment spam. Potentially, it's even more effective against referral spam. The attribute is called rel="nofollow", and many bloggers are already praising it as the silver bullet the Web's been waiting for.
The idea is actually quite simple; the hard part was getting the major players (Google, Yahoo, MSN, etc.) to agree on it. Basically, if a link is tagged with the rel="nofollow" attribute, it won't contribute to that site's PageRank. ("PageRank" is a Google-specific term, but I'm using it in the generic sense here.) Blogging tools such as Movable Type have implemented this standard by inserting the nofollow attribute in links in comments and TrackBacks. This link would not boost my PageRank even a smidgen:
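For example, a link carrying the attribute looks like this in HTML (an illustrative stand-in for the live example link that originally appeared here):

<a href="http://www.example.com/" rel="nofollow">some site</a>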
That means comment spammers and referral spammers won't get rewarded for their nefarious activities on websites that implement nofollow. So, is the problem solved? Maybe. Partially. But ultimately? Not in my view. Here's why:
1. nofollow will never reach 100% adoption, so there will always be some incentive (even if it's decreased) to spam.
2. Spammers have shown that they do not care whether their techniques are effective in specific so long as they are effective in general. I have never published my referrer logs and referrer spammers have no real reason to hit my site, yet they do. They are targeting the blogosphere, not my site. Thus, as long as the blogosphere remains even partially vulnerable to referral spamming, it will continue. (Mark Pilgrim agrees with me here.)
3. The resources required to fight spam, especially referral spam, so far outstrip the resources required to create it that nofollow is not a strong enough disincentive.
To expand upon point #3, consider just how easy it is to create referral spam. It's far easier than comment spam—there are myriad tools in MT, WP, and other publishing systems to combat this nuisance, so comment spam is not nearly as simple an enterprise as it used to be. It's also simpler to create than spam e-mail (and most e-mail users are protected by at least some sort of spam filter; logfiles are not). Referral spam is one HTTP request. The client need not acknowledge the response. It need not send anything but a simple packet with formatted text.
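To see just how cheap that one request is, here is a hedged sketch in Python (host names are placeholders): a complete act of referral spam is a handful of lines, and the sender never even reads the reply.

# Referral spam really is one formatted packet: connect, send a GET with a
# forged Referer, and leave without acknowledging the response.
import socket

s = socket.create_connection(("victim-blog.example.com", 80))
s.sendall(b"GET / HTTP/1.0\r\n"
          b"Host: victim-blog.example.com\r\n"
          b"Referer: http://spammer.example.com/\r\n\r\n")
s.close()  # the response is irrelevant; the referrer is already in the log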
Here, why don't you spam Google for fun: go to wannaBrowser, enter Google.com as the Location, and enter anything (how about "This is referral spam!") as the Referrer. Voila! You sent referral spam to Google. Amazing.
Moral of the story: Since rel="nofollow" is not a panacea, bloggers are still going to get referrer spam.
Recommended webmaster practices
Referer spam is a problem because spammers can improve their sites' Google PageRank by getting listed on popular sites through spoofing of the HTTP_REFERER field in an HTTP request. (Jay Allen has suggested in the past that referral spammers want clickthroughs, but in an e-mail exchange, now agrees that they probably do it for the PageRank. Still, clickthroughs could be part of the equation, given that so many of these spamming sites are shut down quickly by their hosts.)
Best practice #1: Don't publish your referrers
If bloggers (and other website maintainers) did not publish this information, spammers would not bother to send these spoofed requests to blogs—it would be pointless. (For a humorous example, check out a blog entry on this very subject that's actually being targeted by pr0n site referral spammers.) Therefore, I propose that bloggers discontinue this practice. Others agree. I, for one, have never clicked on a link published in a blog's "Sites referring to me" (or similar) section. I think many bloggers simply believe this is a neat feature and have not evaluated its detrimental effect on the blogosphere as a whole.
Best practice #2: If you must publish referrers, include the page in robots.txt
If you're married to the idea of publishing referrers, you might want to try dasBlog 1.7, which looks to have built-in support for a referral spam blacklist. Also, take note on this great idea from Dave Winer (of Radio UserLand fame):
Winer says, "A couple of weeks ago we finally figured out why porn sites add themselves to referer pages on high page-rank sites: to improve their placement in search engines. Last night at dinner Andrew Grumet came up with the solution. In robots.txt specifically tell Googlebot and its relatives to not index the Referers page. Then the spammers won't get the page-rank they seek."
Grumet's idea is echoed in a recommendation for b2evolution users. Of course, this works only if you publish your referrers separately from the rest of the site's content. If they're embedded, robots.txt can't help.
Best practice #3: Rob spammers of PageRank with rel="nofollow"
With the introduction several weeks ago of rel="nofollow", you can also rob the spammers of PageRank at the link level, not just the page level via robots.txt. All links in the referrer section of your website that point to external websites should carry the rel="nofollow" attribute, without question.
Best practice #4: Gather a cleaner list of referrers using JavaScript and beacon images
As detailed by Marcel Bartels, referrer statistics gathered from beacon images loaded via JavaScript document.write statements are far more trustworthy than what the raw web server logs will contain. You may choose to disregard the referrers section of your server logs altogether and rely wholly on beacon images for referrer stats.
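A minimal sketch of the technique (the beacon path is hypothetical; the point is that only real browsers execute the script and load the image, so the beacon's hits are far cleaner than raw logs):

<script type="text/javascript">
// Spam bots rarely execute JavaScript, so referrers gathered via the beacon
// are far more trustworthy. "/beacon.gif" is a hypothetical stats endpoint
// that logs its query string.
document.write('<img src="/beacon.gif?ref=' + escape(document.referrer)
               + '" width="1" height="1" alt="" />');
</script>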
The .htaccess arms race is unwinnable
Referrer spammers are becoming more clever. They're registering odd- or innocuous-sounding domains that redirect to the "mothership"—sites with names like houseofsevengables.com (not teen-fetish-sex.com) that are difficult for a human to distinguish from a legitimate website. It's especially difficult because bloggers like to pick odd-sounding domains for their websites anyway. (For some fascinating speculation about referrer spam, see Nikke Lindqvist's post, "Referral spammers - why are they doing it and what should we do about them?")
In response to the ever-growing problem, many bloggers, including me, have begun fighting an unwinnable war with the referer spammers at the .htaccess level with mod_rewrite. (Some have even taken steps to automate this, such as with Referrer Spam Fucker 3000 or homebrew scripts).
But it's not working. Take a look at the following:
My current .htaccess rules to block referral spamming websites. There are 58 separate lines of RewriteCond.
The Referring Sites section of the Analog report for underscorebleach.net for 13 January 2005.
There are legitimate referrers in the file, but many illegitimate sites as well. Can you tell at a glance which are which? Not without visiting them... and that's a problem. The .htaccess grows, and the spam still comes in. And more RewriteCond's = greater chance of false positives.
Moral of the story: The arms race of .htaccess blocking is unwinnable.
Technical characteristics of referral spam
I started to look at the individual HTTP request made by the referral spammers. Here's an example:
216.204.237.7 - - [13/Jan/2005:01:58:00 -0800] "GET /mt/mt-spameater.cgi?entry_id=764 HTTP/1.0" 200 5472 "http://www.paramountseedfarms.org/" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.2; .NET CLR 1.1.4322)"
The spammer has taken pains to make the request look legitimate. The user_agent string looks very much like MSIE. (Interestingly, the request is in HTTP/1.0. Perhaps one could write a rule to exclude logfile entries in HTTP/1.0 with referrers; normally, spiders that use HTTP/1.0 do not pass a referrer. Someone more knowledgeable than I am would need to verify this.)
Also, it's not as if all of the referral spamming is coming from the same IP or set of IPs. Someone is commanding a large set of zombies here. (About a year ago, juju.org tackled the referral spamming problem with some nice directions and found that the spam was coming from a single IP, but I believe things have gotten more complex since then.) I trawled through the logfile for 13 January 2005 for eight random referrer spams and found eight different IPs.
BUT what is special about the request from the referrer spammer is that his IP is probably blacklisted somewhere. Now, as long as we follow through on the recommendation above to stop publishing referrers, there's no need to try and block the request in real-time. Besides, this would be a waste of resources and would hurt the 99% of users who are legitimate. However, we can query blacklists, such as through Distributed Sender Blackhole List (DSBL), at logfile analysis time to filter out the referral spam.
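A sketch of such a lookup at analysis time, following the usual DNSBL convention of resolving the reversed IP under the blacklist's zone (the zone name follows the DSBL service mentioned above; treat it as an assumption):

# DNSBL check: reverse the octets, prepend to the zone, and resolve.
# Any A record means the IP is listed; NXDOMAIN means it is not.
import socket

def is_blacklisted(ip, zone="list.dsbl.org"):
    query = ".".join(reversed(ip.split("."))) + "." + zone
    try:
        socket.gethostbyname(query)
        return True
    except socket.gaierror:
        return False

print(is_blacklisted("216.204.237.7"))  # the IP from the sample log line above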
Also see Nikke Lindqvists's technical analysis of referral spam.
Idea #1: Filter referrer URLs against Jay Allen's MT-Blacklist
Previously on this site, I have criticized MT-Blacklist. That doesn't mean I don't think Jay Allen has done great work, and the current blacklist is a masterpiece—when used properly.
In this situation, I believe the blacklist could be a powerful, efficient weapon against referrer spam. See the current master blacklist file and compare it against my .htaccess rules, for example. Sites like houseofsevengables.com and canadianlabels.net are listed in the master blacklist file.
Therefore, if a logfile analysis program were to filter referrers against this list, it would certainly help root out spam. Also, the master blacklist is a simple text file that can be downloaded from a website (and easily mirrored). It seems to me that this idea could be implemented easily. In fact, Omar Shahine has already written a .NET class to filter URLs against the MT-Blacklist.
The master blacklist isn't perfect, however, and a quick check of the file against the referrers that got through on 13 January 2005 shows that few or none of them were listed. That's why we should also consider Idea #2.
Another interesting development to note in this area is the Manila Referrer Spam Blacklist (MRSB). It seems to still be in the experimental stage at this point, but its XML-RPC approach is interesting. It would be fairly trivial to write plugins for popular blogging software allowing users to contribute spamming URLs to the MRSB database. The trick, I believe, would be in the vetting process. Right now I don't see that one exists (or I just don't understand it).
UPDATE 1/21/05: The idea is starting to catch fire. (Perhaps I originally posted this entry at one of those rare times when a few people are thinking about the same problem and arrive at the same type of solution via multiple paths.) In any case, Tony at juju.org has developed the derefspam.pl Perl script to filter log files against Jay Allen's blacklist. In a similar vein, Rod at Groovy Mother wrote a patch for AWStats to do the same thing. Mark McLaughlin has followed suit for Shaun Inman's Shortstat. Great work!
UPDATE 2/1/05: Peter Wood has extended this idea to mod_security, writing the Perl script blacklist_to_modsec to combine Jay Allen's blacklist with web server-level spam control. This goes "beyond the blog," baby. Nice.
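In the same spirit, here is a minimal sketch of the filtering step those scripts perform, assuming the blacklist is a plain text file with one pattern per line (file names are hypothetical):

# Idea #1 in miniature: drop combined-format log lines whose referrer matches
# any pattern from a blacklist file.
import re

with open("blacklist.txt") as f:
    patterns = [re.compile(line.strip()) for line in f if line.strip()]

def clean(log_lines):
    for line in log_lines:
        m = re.search(r'"([^"]*)" "[^"]*"$', line)  # referrer field of a combined log line
        referrer = m.group(1) if m else ""
        if not any(p.search(referrer) for p in patterns):
            yield line

with open("access_log") as src, open("access_log.clean", "w") as dst:
    dst.writelines(clean(src))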
Idea #2: Filter referrer IPs against spam blacklists
I recently implemented Brad Choate's MT-DSBL, a plugin that checks a commenter's IP against the blacklist maintained at DSBL (a service that keeps a list of open relays). I believe the general idea of combatting comment spam by harnessing the DSBL or DNS-based blackhole lists could also be used to ferret out referral spam.
I queried eight randomly selected referrer spamming IPs against OpenRBL.org. This website queries 28 blacklists and returns a Positive/Negative score, with a "positive" indicating that the IP is listed on the given blacklist.
#   IP address        Positive/Negative
1   66.237.84.20      4/24
2   213.172.36.62     0/28
3   61.9.0.99         1/27
4   68.47.42.60       7/21
5   203.162.3.77      1/27
6   193.188.105.16    2/26
7   213.56.68.29      7/21
8   200.242.249.70    7/21
The above table's scores are as of evening, 13 January 2005 (CST). They may be different if you check an IP's blacklist presence now.
My proposal for log file filtering of referrer spamming is rather simple:
For a request with a referrer, query the IP against a blacklist. This might be DSBL or another list. I'm certainly not the best one to decide.
If the IP is blacklisted (or has a high score among a multitude of blacklists), refrain from listing that referring URL in any section of a site's Web stats.
Once a given site has been identified as a referral spam hostname (e.g. houseofsevengables.com, as mentioned above), do not bother querying the blacklist again for any IPs with this hostname in the HTTP request. This is simply for efficiency's sake.
Once an IP has been identified as a referral spamming IP, do not bother querying the blacklist again. Again, efficiency's sake.
UPDATE 1/16/05: Also, Chris Wage has written up a great set of directions for using the mod_access_rbl module in Apache to match IPs against DNSBLs. While this won't catch 100% of referral spamming IPs (see the variation in scores above), it should cut down on the number that get through. You might wonder what sort of effect this method would have on site performance. Here's Chris' response:
Response time is affected, but not much for normal usage. The query responses are cached by [your] local nameserver on the same network, so the most someone would notice is a slight delay on the first load of the page.
Conclusion
Referral spam will not go away until bloggers make it a useless enterprise for spammers. Spammers are not stupid, and they will gradually stop the practice if they see that their efforts have no return.
In the meantime, I propose the above methods for filtering the referrer stats of websites. This is performed at logfile analysis time, not when the HTTP request is made. It seems to me that Ideas #1 and #2 could be combined, with #1 more efficient for client and server and #2 more likely to be up-to-date in real-time.
I welcome all comments. I am certainly no expert in these matters, but in searching the Web, I have found a lack of discussion in this area.
Addenda
It's also been suggested that Web stat scripts could check the referrer's website for a link back to one's website. If no link is found, the script would assume that the site is a bogus, spam URL. I see two problems with this approach:
Blog indexes change quickly. What's on the index page at 2:00 p.m. might be gone at 2:30 p.m. This can be because the blogger deletes the link to your site or because the index "rolls over" and displays only the past 10 entries.
Spammers could quickly adapt. They could simply link to every site they spam.
If you do use the .htaccess method to combat referrer spam, I suggest wannaBrowser to test your rewrite rules. It's the simplest way to see whether you're properly blocking spam URLs. The htaccess blocking generator will help you write the rules. SixDifferentWays has a pretty complete post on battling the spam this way. Ed Costello's article is also good.
Other resources
CentreBlog's background on referrer spam
Tao of Mac's "Referrer Spam Should Be a Crime"


