Mozilla went on the record over the weekend with their specific proposal for how to implement Do-Not-Track (DNT) mechanisms via an HTTP header (with the goal of enabling users to opt out of Online Behavioral Advertising (OBA) at internet scale, if they wish). Here are the relevant links:
Mike Hanson's blog post is the best analysis on the topic I've seen so far, but it still fails to call out anti-fraud tracking explicitly as a behavior DNT should not interfere with. He lists advertising, personalization, and metrics as functions DNT should avoid interfering with, but given the nature of the Mozilla DNT proposal, sites still have to implement an interoperable OBA opt-out mechanism to avoid even that level of interference. Will every browser vendor have their own interface we have to interoperate with to avoid having all "3rd-party tracking" -- which of course doesn't actually mean 3rd party, but simply means "from domains other than the one in the location bar" -- blocked by their browser?
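To make concrete what that domain-based definition catches, here is a rough sketch in Python (my own illustration, not any browser's actual code) of the naive test implied above: a request is treated as "3rd-party" purely because its host differs from the one in the location bar.

```python
# Illustrative only: the naive "third-party" test described above. The example
# hostnames (bank.example, antifraud.example) are made up for this sketch.
from urllib.parse import urlsplit

def is_third_party(page_url: str, request_url: str) -> bool:
    """A request is 'third-party' if its host differs from the page's host."""
    return urlsplit(page_url).hostname != urlsplit(request_url).hostname

# An anti-fraud beacon served from a partner's domain fails this test just like
# an OBA tracker would, even though it protects the first-party relationship.
print(is_third_party("https://bank.example/login", "https://antifraud.example/beacon"))  # True
print(is_third_party("https://bank.example/login", "https://bank.example/beacon"))       # False
```

By this test, anti-fraud mechanisms and OBA trackers are indistinguishable, which is exactly the problem with treating "different domain" as a proxy for "behavioral tracking."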
I continue to be disappointed by the Do-Not-Track discussion as captured by the media and blogosphere, and by its disregard for the security considerations of this new header. Our concern is not about some geeky, low-level afterthought. We are talking about a fundamental negative consequence for internet security: an industry-wide knee-jerk reaction to the FTC is currently poised to remove mission-critical consumer protection mechanisms (commonly used by sites that require user authentication, including but not limited to financial services and commerce) if security considerations remain in the shadows of this policy debate much longer. The internet community must come together and explicitly carve out anti-fraud tracking as a behavior to sustain, if not embolden, in the post-DNT world.
Carving out first-party tracking is a reasonable step that I don't intend to criticize, but it's a blunt instrument that is insufficient for the task. Anti-fraud tracking is not always conducted exclusively via first-party cookies, nor is OBA negated exclusively by blocking 3rd-party cookies (as Mike Hanson points out in his post). Whatever DNT turns out to be, it must be based on a use-and-obligations model, i.e. a mechanism that triggers the required obligation for the specific use being addressed -- OBA in this case. I'm pleased to see Mozilla headed in that direction, but there is still a long way to go before we've got this right.
The core issue is introduced in this sentence from Mike Hanson's post: "I propose that the user's intent can be captured in a simple rule: If the Do-Not-Track header is present, and the site has a "tracking opt-out" mechanism, the mechanism should be activated. If the site does not have an explicit opt-out mechanism, the user should experience only content from their first-party relationship with the page being viewed."
I interpret the intent of that to mean:
1. Sites must implement an interface between the HTTP "DNT" header and their current opt-out mechanisms.
2. This implementation must be interoperable across all browsers and sites.
3. If anything goes wrong and the browser cannot interpret the confirmation from the site that it has implemented its OBA opt-out mechanism in compliance with the DNT header setting in the browser, then the browser will initiate the "more harmful than helpful" behavior of blocking all but obvious domain-based, first-party content and tracking mechanisms.
Step 3 highlights the nest of issues we need to resolve if we want to ensure the post-DNT-web is better at protecting personal information from unauthorized disclosure than the web we have today.
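To ground the discussion, here is a rough sketch, in Python, of how a site might wire the proposed DNT header into an existing OBA opt-out (steps 1 and 2 above). The opt-out cookie, the acknowledgement header, and the toy WSGI app are all assumptions made for this sketch; none of it is part of any agreed standard, which is precisely the interoperability gap that step 3 exposes.

```python
# Illustrative sketch only: a minimal WSGI app bridging the proposed "DNT" request
# header to a site's (hypothetical) OBA opt-out. The "oba_optout" cookie and the
# "X-OBA-Opt-Out" acknowledgement header are invented for this example.
from wsgiref.simple_server import make_server

def app(environ, start_response):
    headers = [("Content-Type", "text/plain")]
    if environ.get("HTTP_DNT") == "1":          # "DNT: 1" arrives as HTTP_DNT in WSGI
        # Step 1: trigger the site's existing opt-out mechanism.
        headers.append(("Set-Cookie", "oba_optout=1; Path=/; HttpOnly"))
        # Step 2: acknowledge to the browser that the opt-out was honored, so it
        # need not fall back to blocking all non-first-party content (step 3).
        headers.append(("X-OBA-Opt-Out", "activated"))
        body = b"Behavioral-advertising opt-out activated.\n"
    else:
        body = b"No DNT header received; default behavior.\n"
    # Note: anti-fraud tracking (the carve-out argued for above) is a separate
    # use and would continue regardless of the DNT signal.
    start_response("200 OK", headers)
    return [body]

if __name__ == "__main__":
    make_server("", 8080, app).serve_forever()
```

Every piece of that exchange -- the header value, the acknowledgement, what "activated" actually obligates the site to do -- would have to be standardized before browsers and sites could rely on it.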
Because this is a tractable problem, we are not against the DNT movement per se; we are against DNT gone wrong. What we need now is an all-hands standardization effort, inclusive of the many proposals and the various influencers, under a collaborative framework with a proven track record of facilitating timely compromise on thorny standardization issues, to bang out a rough-consensus set of policies and protocols for DNT. This is not a call to stop the DNT effort, but to elevate DNT out of the noise of media coverage and into serious analysis and standardization work... before it's too late to get it right.
..back in May (abstract below). Since then, we’ve been waving our paper around and pursuing the action items outlined therein with some modest success, which we'll discuss as things develop.
Abstract
Web-based malware and attacks are proliferating rapidly on the Internet. New web security mechanisms are also rapidly growing in number, although in an incoherent fashion. In this position paper, we give a brief overview of the ravaged web security landscape, and the various seemingly piece-wise approaches being taken to mitigate the threats. We then propose that with some cooperation, we can likely architect approaches that are more easily wielded and provide extensibility for the future. We provide thoughts on where and how to begin coordinating the work.
Also, we held a successful BoF at IETF-78 Maastricht last month, named "HASMAT" (for "HTTP Application Security Minus Authentication & Transport"), the result being that a new IETF Working Group will be formed (which may be named HASMAT or something else), into which the HSTS spec will land. Once that happens, the spec will likely become a working group item, and thus the filename and URI for that new version will change (just fyi). We'll post about that when it occurs.
Many (most?) of us, when accessing a "secure" web site, have at one time or another been presented with a browser dialog indicating something isn't quite right with the site's certificate, and offering the ability to ignore the issue and proceed. Sometimes proceeding is not a good idea because it may lead to a man-in-the-middle attack.
Collin Jackson and Adam Barth present the details of such situations, as well as a means to remediate them, in their paper ForceHTTPS: Protecting High-Security Web Sites from Network Attacks. Essentially, ForceHTTPS enables a server to signal to browsers that it wishes to be interacted with only over secure transport, e.g. TLS/SSL. Part of the idea here is that if the user enters, say, "http://www.paypal.com", the browser will rewrite it as "httpS://www.paypal.com/" and initiate the connection over secure transport. Another aspect is that if there are any certificate errors upon secure transport establishment, the connection will simply fail and the user won't be presented with the opportunity to "click through" warnings.
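As a rough illustration of those two behaviors, here is a toy client in Python (my own sketch under stated assumptions, not the ForceHTTPS browser code): upgrade plain http:// URLs to https:// for hosts that have asked for it, and hard-fail on any certificate error rather than offering a click-through.

```python
# Toy illustration only: upgrade-to-HTTPS plus hard-fail on certificate errors.
# FORCED_HOSTS stands in for whatever state the browser keeps about opted-in sites.
import ssl
import urllib.request
from urllib.parse import urlsplit, urlunsplit

FORCED_HOSTS = {"www.paypal.com"}   # hosts that requested secure-transport-only

def fetch(url: str) -> bytes:
    parts = urlsplit(url)
    if parts.scheme == "http" and parts.hostname in FORCED_HOSTS:
        parts = parts._replace(scheme="https")   # rewrite to secure transport
    ctx = ssl.create_default_context()           # full chain + hostname verification
    # Any certificate problem raises ssl.SSLError here; there is no "proceed anyway".
    return urllib.request.urlopen(urlunsplit(parts), context=ctx).read()

# fetch("http://www.paypal.com/") is silently upgraded to https:// before connecting.
```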
Now, working with Collin and Adam, we've produced a refinement in the form of an HTTP header field specification entitled Strict Transport Security (STS). We're talking with the W3C about standardizing it there. There are already two Firefox extensions implementing it: Giorgio Maone's NoScript and Sid Stamm's ForceTLS. Both Giorgio and Sid have blog entries concerning STS, too:
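On the server side, deploying STS amounts to emitting one response header over TLS. Here is a minimal sketch in Python (the max-age value, the includeSubDomains choice, and the toy WSGI app are example assumptions, not PayPal's configuration):

```python
# Minimal sketch of a server emitting the Strict-Transport-Security header.
# The one-year max-age and includeSubDomains directive are example choices; in
# practice the header is only meaningful on a response delivered over TLS.
from wsgiref.simple_server import make_server

def app(environ, start_response):
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Strict-Transport-Security", "max-age=31536000; includeSubDomains"),
    ])
    return [b"This host asks to be reached only over secure transport.\n"]

if __name__ == "__main__":
    make_server("", 8443, app).serve_forever()
```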
Additionally, Google Chrome has STS functionality implemented and it is working its way through the development channel process.
We (PayPal) are excited by the positive feedback we're receiving and the implementation work. We're looking forward to having our customers' security improved.
I read a rather worrying criticism of Firefox 3.0 on “Risks” the other day, which made me realize that perhaps there isn’t common agreement within the infosec industry about the threat landscape and how we should prioritize our responses to it. Specifically, the complaint against Firefox 3.0 is that the user experience has been deliberately crafted to make it hard to accept self-signed certificates. The argument is that there are times when simply establishing an encrypted tunnel (i.e. an SSL session) is all that’s needed.
I certainly wouldn’t argue that encryption is unnecessary, just that the threat has changed. While our old “friend” Mallory isn’t particularly busy these days, it’s pretty clear that he’d be having a field day if he could easily penetrate communications across the Internet. The attacker, however, is no longer limited to passive eavesdropping. Modern attacks use active DNS spoofing, active MITM attacks and the like, on public networks. The main threats these days are against the weakest link in the chain – the end user. That’s why phishing is such a popular method of e-crime – it’s simple and it works. It relies completely on the gullibility of users in clicking on links in e-mails apparently from organizations with whom they have a relationship.
However, it’s equally clear that almost everyone who wants to communicate securely using a browser can afford an SSL certificate from CAs such as GoDaddy, Thawte, etc. The cost of single certificates from these sources can only be described as nominal.
My company is a major target of phishing, and as such we’ve spent quite a bit of time researching what anti-phishing approaches work. We published a whitepaper on this topic (which can be found on the company blog at www.thepaypalblog.com), which explains this in detail. However, a few relevant conclusions are that: 1) the vast majority of users simply want to be protected, 2) there’s no single “silver bullet”, and 3) what we describe as “safer browsers”, such as IE 7 and Firefox 3.0, are a significant part of the solution based on their improvements in user-visible security indicators and secure-by-default behaviors.
I conflated two or three separate ideas in that last sentence, and I should explain them. The general logic is that most users should never be presented with a security dialog that gives them a choice – if they are, there’s typically at least a 50:50 chance that the wrong decision will be made. Instead, the browser should make the decision for them. However, in the case of self-signed certificates it’s almost impossible to see how any technology can disambiguate between legitimate uses and criminal ones.
When viewed through this lens, the changes to the Firefox user experience for self-signed certificates make perfect sense. It’s not that self-signed certificates are impossible to use – but for most users, the experience will be such that they won’t accept them. In the unsafe world in which we live, that will be the right choice. For organizations which wish to use self-signed certs internally, it is still technically possible – but it will require either explicit user training, or deployment of pre-installed certificates on PCs.
I should also add that the major security features which have been added to the most recent browser versions (and which we believe are necessary in order to be considered ‘safer’) are exactly those which impact this area. That is: support for Extended Validation certificates, which make it clear to end users whose web site they’re on; and support for spoof-site blacklists, so that users can’t easily reach spoof-sites.
While I’m personally a great supporter of the “Risks” list, I think it’s important that the infosec industry speaks with good consensus about risks. In this case, I believe that the criticism of Firefox 3.0 was simply misguided and ill-informed. This is not helpful.