Hello, Michael Barrett here.
Over the few short years that PayPal has been in existence, we’ve been very successful: we now have 78 million active customers globally – more than the population of many countries. We also have an online system that’s designed to make it easy to move money around the globe. And, we have a team of the usual paranoid information security types like me who do our utmost to keep our core online engine safe and secure.
Given all of that, it’s no particular surprise that our brand and our customers were early targets of phishing, and continue to be targets to this day. I have written occasionally on the question of how companies can help their customers protect themselves online, especially from phishing. Perhaps the most detailed of these was in a white paper that I co-wrote, and it gives a pretty good level of detail on our thinking on an entire strategy for addressing phishing: https://www.thepaypalblog.com/2008/04/a-practical-app/
Because of our status, we’re also often used as an example of whatever particular idea an individual security researcher is arguing for. Alas, that happened recently when Randy Abrams, Director of Technical Education at ESET, published a blog entry under the sensational title, “PayPal admits to phishing users.” (For the usual infosec crowd who reads thesecuritypractice.com, you all know that ESET is a good and well-recognized anti-virus solution vendor.) You can find the blog entry here: http://www.eset.com/threat-center/blog/2009/12/03/paypal-admits-to-phishing-users#comments
The reason that I’m commenting on this is as follows. In our white paper, we didn’t cover the topic of links in e-mails, but we’ve researched it extensively and have come to certain conclusions about this.
First, we doubt the effectiveness of removing all links in e-mails as a way to eradicate phishing. Every single Internet user on the planet would have to understand that no PayPal e-mails contain links. The whole point of phishing as a crime is that it exploits customer cognitive dissonance around how the Internet and links work.
Thus, even if we didn’t send e-mails with links in them, there would be nothing to prevent criminals from sending phishmails with links, and some small percentage of users would almost certainly fall for those e-mails.
Second, the reason that links are so popular in e-mails is that they simplify things for the recipients. In our own analysis, we found that there were indeed a certain number of links that we could remove from our e-mails, and we have done so. But there were others that would be very problematic – sending someone a reference to a transaction, for example. (“Log on, go to your transaction history page, scroll forward three pages, go to the transaction three quarters of the way down the page …”) We have to balance our risk prevention techniques with our customer experience. It’s a delicate balance, and one that we take very seriously.
Instead of removing all links in e-mails, we have embarked on a program to standardize legitimate PayPal e-mails. The basic rule that we’re trying to enforce is that links in legitimate PayPal e-mails will always go through paypal.com, and they will always be HTTPS links. We believe that this is defensible territory and that customers can be trained to look for it, although we don’t actively promote this much, since most users are confused by the difference between a link’s destination URL and its visible text.
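To make the rule concrete, here is a minimal sketch of what checking a link against it might look like. The function name and the use of Python’s `urllib` are my own illustration, not PayPal’s implementation, and passing this check does not by itself prove an e-mail is genuine:

```python
from urllib.parse import urlparse

def looks_like_legitimate_link(url: str) -> bool:
    """Check a link against the stated rule: HTTPS, and the host is
    paypal.com or a subdomain of it. Illustrative only."""
    parsed = urlparse(url)
    if parsed.scheme != "https":
        return False
    host = (parsed.hostname or "").lower()
    # Match the exact domain or a true subdomain; a deceptive host like
    # paypal.com.evil.example does NOT end with ".paypal.com" and fails.
    return host == "paypal.com" or host.endswith(".paypal.com")

print(looks_like_legitimate_link("https://www.paypal.com/activity"))   # True
print(looks_like_legitimate_link("http://www.paypal.com/activity"))    # False: not HTTPS
print(looks_like_legitimate_link("https://paypal.com.evil.example/"))  # False: wrong domain
```

Note that the suffix check has to anchor on the dot-separated domain boundary; a naive substring test would be fooled by exactly the kind of look-alike hosts phishers register.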
We have indeed done a number of things around consumer education. Our first rule – which Mr. Abrams indeed followed – “forward uncertain PayPal e-mails to [email protected]” is generally a very good one. We also have another one which is effective – “if you’re unsure of the legitimacy of an e-mail, close the e-mail, open a new browser and go to the website concerned (https://www.paypal.com in our case) and just log on there.”
Finally, I will make an admission – we had an error in the system which processes e-mails to [email protected]. It mis-categorized a legitimate e-mail as a spoof one. Of course, we don’t like to make errors. I will explicitly note this though – no harm was done. The infosec crowd very well understands the “crossover error rate” concept. I would much rather that our spoof e-mail processing veers towards extremely low false negative rates, even if it errs occasionally as in this case towards a false positive or two. The consequences of false positives are simply that we look like twits, and those who have the proverbial ax to grind can exploit that. The consequences of false negatives could seriously threaten the online safety of our customers, and so we go to some lengths to ensure that doesn’t happen.
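The “crossover error rate” referred to above is the operating point at which a classifier’s false positive and false negative rates are equal; tuning the decision threshold to one side of that point trades one kind of error for the other. The toy sketch below uses entirely invented scores (not data from any real system) just to show the trade-off:

```python
# Toy classifier scores (higher = more likely spoof). All values are
# invented purely to illustrate the trade-off.
spoof_scores = [0.9, 0.8, 0.7, 0.6, 0.4]    # actual spoof e-mails
legit_scores = [0.1, 0.2, 0.3, 0.5, 0.65]   # actual legitimate e-mails

def rates(threshold):
    """An e-mail scoring >= threshold is flagged as spoof.
    Returns (false positive rate, false negative rate)."""
    fn = sum(s < threshold for s in spoof_scores) / len(spoof_scores)   # missed spoofs
    fp = sum(s >= threshold for s in legit_scores) / len(legit_scores)  # legit mail flagged
    return fp, fn

for t in (0.3, 0.5, 0.7):
    fp, fn = rates(t)
    print(f"threshold={t}: false positives={fp:.0%}, false negatives={fn:.0%}")
```

With a low threshold, no spoofs slip through but some legitimate mail gets flagged – the situation described in the post; with a high threshold, the errors reverse.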
Hi Michael,
You state "Because of our status, we’re also often used as an example of whatever particular idea an individual security researcher is arguing for."
The fact is that many in the security community advocate not having links in e-mail from financial entities and many other online services.
Yes, a small percentage will still click on the link. There isn't a 100% solution, but if you can make a significant impact then it is often worth it. That's where cost comes into play.
HTTPS is encryption, not security. The bad guys sometimes use HTTPS. It is a false sense of security if a user is taught to believe that HTTPS=Legitimate.
"(Log on, go to your transaction history page, scroll forward three pages, go to the transaction three quarters of the way down the page …)"
I suspect that most users looking at transaction details are looking for very recent ones that appear near the top of their page in most cases. There are other solutions, such as giving transactions a unique identifier and allowing a search function. Yeah, security is often a bit less convenient, but sometimes it's the right thing to do.
I agree that the false positive on the email was a fairly harmless error. Completely harmless in my case, but it could be troubling and confusing to a technically naive user. I too am far less worried about the false positive rate than the false negative rate, but it does illustrate the challenge of teaching users to spot a real phish. You have a highly sophisticated system written by experts that uses rules to identify real and fake phishes. If this system makes a few mistakes, imagine how many mistakes normal users are going to make. That's why I advocate teaching behavior over identification. The simple behavior is: don't click on links to financial institutions or other online accounts. A blanket rule that a legitimate PayPal email never contains a link reinforces the behavioral education and makes it much simpler for the normal user to understand.
Posted by: Randy Abrams | December 05, 2009 at 09:29 AM
Very interesting!
You say "we’ve researched it extensively…" Can you say more about what research means in this context? If I recall correctly, your phishing paper presents a rigorous analysis of the problem, but doesn't present a lot of empirical data.
Did you do controlled user studies to see if your conclusions held, at least in test situations?
Posted by: Adam | December 05, 2009 at 10:03 AM
A correction...
You state:
"We have indeed done a number of things around consumer education. Our first rule – which Mr. Abrams indeed followed –“forward uncertain PayPal e-mails to [email protected]” is generally a very good one."
I did not follow this advice. I did not submit a suspicious email. I submitted a known legitimate email to PayPal and commented that it had problems.
Posted by: Randy Abrams | December 07, 2009 at 11:41 AM