I frequently find myself attached to some public wifi hotspot trying to get work done, and while I try to make most of my connections via secure methods (e.g., all my email takes place over encrypted connections), most of my web surfing takes place in cleartext. Occasionally, I’ll read some weblog post about the various hosted VPN services and think that I should just use one of them, but never really get around to it. This week, I finally bit the bullet… but rather than subscribing to one of the services, I just set up my own VPN server at home to use.

I have a Linux machine in my home network, and I flirted with the idea of installing OpenVPN on it and using that as my server, but due to a few weird complexities in where that machine sits on my network, that wasn’t the most appetizing idea to me. It was then that I wondered whether someone had built a VMware virtual appliance with OpenVPN support, and it turns out that PhoneHome was just the ticket I was looking for. On my home Windows 2003 Server box, I started that puppy up in VMware Player; it took about a half-hour’s worth of tweaking to get it set up just perfectly for me, and another half-hour to get my home firewall (well, really a Cisco router with a detailed set of access rules) set up to play nicely with the server. Now, I have an easy-to-run, easy-to-connect-to VPN server that allows me to have a secure connection no matter where I am, and that just rocks.

One of the things I was worried about was that the VPN would massively slow down my network connection; between the bottleneck of encrypting all the tunneled traffic and the bottleneck of my home internet connection, I was pretty sure I’d be less than impressed with the speed of an always-on VPN. Surprisingly, though, the connection is pretty damn fast — I appear to have the full speed of my home T1 available to me.

[image: speed test over VPN]

If anyone’s interested, I’m happy to share details of the changes I made to the PhoneHome VMware appliance, and any other info you might want.

So if you’re even tangentially exposed to news about the internet (or listen to NPR’s All Things Considered!), you might have heard about a major, major weakness that was discovered not too long ago in the security behind the way that hostnames are turned into IP addresses, a weakness that could easily lead to all kinds of hacks, exploitations, and general insecurity on the ‘net. Most of the folks responsible for the DNS servers — the bits of software that are affected by this — were quickly briefed about the flaw and given a chance to respond, and nearly all of them just as quickly released patches to their software to make the hacks much harder to accomplish. Apple was certainly part of the former group (having been briefed on May 5th), but was not part of the latter group; by July 8th, all other operating system vendors had patched the vulnerability, while it took Apple until July 31st to roll the patch out the door. And within a few days, folks were noticing that Apple’s patch only handled half of the issue, remedying Mac servers while ignoring Mac client (i.e., desktop) machines.

Interested in which other Apple platforms were both affected by the DNS flaw and unpatched, I did a little playing with my Airport Extreme wifi router today — I figured it was a good platform to test, since it sits in this particular flaw’s threat landscape more as a server (a device being asked to resolve DNS names) than as a client (a device doing the asking). Additionally, there’s currently a lot of concern that even with all the patches released over the past month, it’s exactly devices like these — home routers that do DNS resolution alongside network address translation — that are going to prove the hardest to secure. Fortunately, after doing a bit of network sniffing, I can report that the Airport Extreme currently does exhibit patched behavior, properly choosing random ports from which to send its DNS queries. (And thus, that ISS post I linked to isn’t exactly correct when it says that no NAT vendor is performing source port randomization — clearly, Apple is doing the right thing when it comes to the Airport Extreme.)
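For anyone who wants to reproduce that check, here's roughly what the sniffing boils down to, as a hedged sketch: capture the router's outbound DNS queries (you need a vantage point that can actually see the router's upstream traffic, such as a hub or port mirror between the Airport and the modem) and see how scattered the source ports are. The interface name and packet count below are placeholders.

```python
# Sketch: summarize the source ports a router uses for its outbound DNS
# queries. The capture interface and packet count are placeholders; tcpdump
# needs to run somewhere it can see the router's upstream traffic.
import re
import subprocess

CAPTURE_CMD = ["tcpdump", "-l", "-n", "-c", "50", "-i", "en1", "udp dst port 53"]

# tcpdump's one-line summaries look roughly like:
#   12:34:56.789 IP 10.0.1.1.51234 > 4.2.2.1.53: 12345+ A? example.com. (31)
QUERY_RE = re.compile(r"IP \S+\.(\d+) > \S+\.53:")

ports = []
proc = subprocess.Popen(CAPTURE_CMD, stdout=subprocess.PIPE, text=True)
for line in proc.stdout:
    match = QUERY_RE.search(line)
    if match:
        ports.append(int(match.group(1)))
proc.wait()

if ports:
    print(f"{len(ports)} queries seen, {len(set(ports))} distinct source ports")
    print(f"port range: {min(ports)} to {max(ports)}")
    # A vulnerable resolver reuses one port (or a tiny range); a patched one
    # should show ports scattered widely across the ephemeral range.
```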

The whole techie-bringing-San-Fran-to-its-knees story keeps getting better and better — it turns out that the passwords that Terry Childs gave to San Francisco mayor Gavin Newsom only allowed access to the network from a single hidden computer in the Hall of Justice. Better still, Childs locked well over a thousand modems in filing cabinets throughout various city agencies, all of which are connected to the system and which might be capable of allowing him (or others) access to the network to wreak more havoc. Was anyone overseeing the design and implementation of this network?

One of my favorite stories over the past week has been San Francisco losing control of its agency-wide network (SkyNet, anyone?) to a “rogue” employee who had designed the system and then locked everyone but himself out of administering it. The whole thing smacks of a bad plot for the next Die Hard movie (“Die Harderest, where Bruce Willis has to interrupt his shuffleboard game to save the Transamerica Building!”), or at least of the third installment of the Camel Club. Well, alas, the drama is over — yesterday, Mayor Gavin Newsom visited the employee’s holding cell and was able to coax the administrative login information out of him, and now the city is understandably going to look into how one lone person was able to singlehandedly control the pipes through which the majority of San Francisco’s inter-agency traffic flows.

For those who are all excited about Twitabit, the service that promises to queue up Twitter postings if the service is down, consider these two factlets:

  1. Twitabit asks you to type in your Twitter password — as in, you’re on a page on twitabit.com and asked to type in your password to another site entirely;
  2. Twitabit appears to have not one word of a privacy policy, or any other text that’ll help you understand why on Earth you should trust them with your password to another site.

Ummmm, no thanks. No thanks at all.

As the current big man on campus, the iPhone seems to have drawn some security scrutiny its way, leading to what looks like the first real malware for the device. John Gruber, who has turned into a 24/7 defender of All That Is iPhone, might have to eat a few of his words.

Over the past half-decade, there’s no denying that for all the amazing things the internet has brought us, it’s also been the source of quite a bit of annoying crap in everyone’s lives. From spam (\/|@GR@, anyone?) to phishing attempts to search results polluted with splogs to malware, a lot of people are out there exploiting the inherent trust model on which the fundamental internet protocols were based. What we’re all left with is an email system in which more than 9 out of 10 messages sent are spam, and with commerce websites and other online communities that have no way to trust their users other than to force us to create entire new identities on each of them if we want to use them. I don’t think it’s that wild a guess to say that many of us spend as much time each day dealing with all of these issues as we did performing their analogs in the non-internet-enabled world (driving to stores, writing letters, making phone calls), but today, the stakes are a lot higher — our family photos, bank accounts, and credit card numbers are all out there waiting for someone to exploit a hole in the armor and scurry off with them.

It’s because of this that I’m so happy to see an initiative like OpenID succeeding. A few years ago, the idea of OpenID was floated by the inestimable Brad Fitzpatrick (the father of LiveJournal, now a Six Apart property) as a way for people to carry around virtual identity cards on the net, and to securely use those credentials as a way of demonstrating to others on the internet who they really are. Between then and now, OpenID’s development has taken place out in the open, on mailing lists and wikis and web forums, and the result is a technology that Microsoft adopted last week and AOL has been quietly rolling out to its online service and instant messenger users for a few months now. That’s a great adoption rate, and I’d like to think that it’s because it’s a technology that’s sorely needed on today’s web. I’m not naive enough to think that it’s a salve to cure all the net’s wounds — for example, there’s still work to be done to make sure that anonymous ID providers don’t become the way spammers and miscreants get around the system — but I’m hopeful enough to recognize that OpenID might be one of the more important building blocks to us all being able to trust our online interactions just a bit more.

I admit to not paying much attention to the whole fracas around the Boston Police Department shutting down parts of the city to “disarm” what turned out to be guerrilla art marketing geegaws, but thankfully, a bunch of others have been doing so… and they’re thus now in a position to point out the overt idiocy of the Boston Police and prosecutorial machinery. First stop is Teresa Nielsen Hayden’s post, which puts this event in the context of another genius move by the BPD, the 2006 “bomb scare” arrest of a man who was protesting by reenacting the famous Abu Ghraib photo outside an Army recruiting center. Then comes Bruce Schneier, who reminds us that the only terrorizing that was done came at the hands of the BPD, not the artists; the devices were up for over three weeks in Boston, and over ten weeks in other cities, and all of a sudden the BPD decided that it had to panic and go apeshit. And finally, there’s Wired’s John Browne with a look at the laws involved, concluding that the only way the Boston prosecutors will be able to fulfill their promise to throw the book at the artists is if they demonstrate both that the artists intended to instill fear and that anyone would reasonably believe the devices to constitute some threat… something that the whole up-for-many-weeks-without-incident thing probably contradicts. (some via the inestimable Rafe)

Shannon and I are in London for the holidays, so in an effort to clear off some of the tabs in my browser, here are some of the things I’ve been hoarding in my bookmarks.

  • The guy behind DallasFood.org did an amazing job over the past month figuring out the sham behind Noka chocolates, and published a ten-part series reporting his results. It’s an impressive bit of investigation, really.
  • Security expert Bruce Schneier finally weighed in on the Automated Targeting System, the U.S. government system that assigns each of us a score which pretends to predict the terror threat we pose. Unsurprisingly, he finds it a waste of money, time, and effort.
  • For those of you considering buying a .Mac account, you might want to read John Siracusa’s rant — it’s written from the perspective of a developer thinking about implementing some of the synchronization features of .Mac, but he also goes into some detail about his disappointment with the service.
  • Anil’s obit of James Brown is a must-read. So go read it.

Knowing a few people with security clearances, I’ve heard a bit about the oft-proposed idea to allow folks holding such clearances to avoid the screening mess at domestic airports. It’s always sounded like a fine idea to me… that is, it sounded like a good idea until I read today’s column by Bruce Schneier on what a bad idea it would be. And the thing is, he’s totally right — in order to fulfill this goal, the government would have to:

  • create an easily-portable ID that identified people with formal, government-sponsored security clearances;
  • set up a centralized database with records of all these people (something which, for a variety of reasons, doesn’t already exist);
  • implement methods through which security screeners could check an ID to make sure that the clearance is both valid and current;
  • train TSA screeners on how to handle the new system and procedures.

In Schneier’s own words:

This issue is no different than searching airplane pilots, something that regularly elicits howls of laughter among amateur security watchers. What they don’t realize is that the issue is not whether we should trust pilots, airplane maintenance technicians or people with clearances. The issue is whether we should trust people who are dressed as pilots, wear airplane-maintenance-tech IDs or claim to have clearances.

Wow. Last night, Shannon and I excitedly made our flight reservations for a Christmas in London; this morning, we awoke to news that the entire world of British air travel has been rocked. My sister and her kids flew back from London to New York last week for a vacation, but unfortunately my brother-in-law was scheduled to fly back tomorrow… he managed to get onto a flight this afternoon, and is now sitting in what he describes as an empty Heathrow terminal waiting to find out if his flight will get out at all. What a nightmare, for trans-Atlantic travelers and for airlines on the whole.

There was quite a bit of teeth gnashing across the web throughout the evening yesterday as TypePad, LiveJournal, and all the other hosted Six Apart websites went dark; we learned late in the night that the cause was a “sophisticated distributed denial of service attack” against the sites. Digging a little deeper, though, it doesn’t look like this is a particularly accurate description of what happened — but instead of this being a case of the folks at Six Apart trying to cover up some internal issue, it instead looks like they’re being far too gracious in not revealing more about another company, Blue Security, which appears to have been responsible for the whole disaster. An explanation of this requires a slight bit of background.

Blue Security is a company which has recently garnered a little bit of notoriety on the ‘net due to its unorthodox method of attempting to control the problem of spam email. Last summer, PC World published a reasonably good summary of Blue Security’s antispam efforts; a charitable way of describing the method would be to say it attempts to bury spammers in unsubscription requests, but a more accurate description would be that the service performs outright denial-of-service attacks on spammers, and does so by convincing people to install an application (Blue Frog) on their computers which launches and participates in the attacks. Without a doubt, Blue Security’s system has generated controversy from the perspective of both unsolicited emailers and regular ‘net citizens alike, so it’s not all that surprising that the spammers recently began fighting back. One of the methods used against Blue Security has been a more traditional denial-of-service attack against the company’s main web server, www.bluesecurity.com, an attack which was effective enough to knock that web server offline for most of yesterday.

OK, so why is any of this information — about a company completely unrelated to Six Apart — important background? Because according to a post on the North American Network Operators Group mailing list, at some point yesterday the people at Blue Security decided that the best way to deal with the attack was to point the hostname www.bluesecurity.com to their TypePad-hosted weblog, bluesecurity.blogs.com. This effectively meant that the target of the attack shifted off of Blue Security’s own network and onto that of Six Apart, and did so as the direct result of a decision made by the folks at Blue Security. (The best analogy I can think of is that it’d be like you dealing with a water main break in your basement by hooking a big hose up to the leaking joint and redirecting the water into your neighbor’s basement instead.) Soon thereafter, the Six Apart network (understandably) buckled under that weight and fell off the ‘net, and over four hours passed before packets began to flow again. (And given that the www.bluesecurity.com hostname was still pointed at TypePad for most of today, I’d imagine that the only way those packets began to flow was as the result of some creative filtering at the edge of Six Apart’s network.) Judging from the outage, it’s unlikely that Blue Security gave them any warning — although who knows whether a warning would’ve prevented the basement from filling up with water all the same.
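For anyone who wants to see how this kind of repointing looks from the outside, a quick lookup of the hostname tells the story. Here's a small sketch using the third-party dnspython package (the records will, of course, have changed by the time you read this):

```python
# Sketch: show where a hostname currently points, following any CNAME.
# Requires the third-party dnspython package (pip install dnspython).
import dns.exception
import dns.resolver

def trace(name):
    try:
        for rr in dns.resolver.resolve(name, "CNAME"):
            target = rr.target.to_text()
            print(f"{name} is a CNAME for {target}")
            name = target
    except dns.resolver.NoAnswer:
        pass  # no CNAME at this name; fall through to the address lookup
    for rr in dns.resolver.resolve(name, "A"):
        print(f"{name} resolves to {rr.address}")

for host in ("www.bluesecurity.com", "bluesecurity.blogs.com"):
    try:
        trace(host)
    except dns.exception.DNSException as exc:
        print(f"{host}: lookup failed ({exc})")
```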

So, returning to my original point: saying that Six Apart’s services were taken down as the result of a “sophisticated distributed denial of service attack” is an incredibly gracious statement that only addresses about 10% of the whole story. The other 90% of that story is that Blue Security, a company with already-shady practices, decided to solve its problems by dumping them onto Six Apart’s doorstep, something I’m pretty damn sure isn’t part of the TypePad service agreement. I know that ultimately, the denial-of-service attack came from the spammers themselves, but it was specifically redirected to the Six Apart network by Blue Security, and I hope that they get taken to the cleaners for this one.

(I’ve just begun experimenting with the social bookmarking/commenting site Digg; as I’m clearly in favor of more people understanding how the outage came to occur, feel free to Digg this post.)

Update: Computer Business Review Online has picked up the story, and has some other details. Netcraft also has a post on the DDoS, and News.com picked up the bit from them, but there’s not much more in either bit.

Hmmmm — I wonder how many of these credit card holders are going to call their card issuers to find out whether their accounts have been compromised. “Hi, I’ve made a bunch of online porn purchases over the past few years, but I just heard that the company which billed me went and released information about millions of credit cards onto the internet… am I affected?”

I run a web-based email application on my domain, and it’s coming up on time for me to renew the SSL certificate that keeps people’s email sessions secure. For the past four years, I’ve used Thawte to issue the certificate, mostly out of inertia, but looking at their offerings today, I noticed that the price for my type of certificate has somehow increased 20% since the last time I renewed (from $299 to $349 for a two-year certificate). Given that I can’t imagine the actual cost to Thawte of issuing a certificate has increased one cent during that time period, it’s time for me to do a little comparison shopping.

In the past, I’ve stumbled across a few alternatives to Thawte (and Verisign, the questionably-trustworthy company which owns Thawte) when it comes to issuing SSL certificates. There’s InstantSSL, which is currently offering a two-year cert for $100, but which only issues chained-root certs (requiring the installation of additional layers of trust in order to get the whole thing recognized by a web browser as truly secure). It’s a bit cumbersome, and there are a few webservers out there that don’t support chained certificates, so if you’re interested in this route you’ll want to make sure your server supports them. (The certs issued by GoDaddy and DigiCert suffer from the same issue.)
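If you end up comparison shopping here too, one quick way to see whether a site using a chained cert has been set up correctly (that is, whether the server actually hands out the intermediate certificates a browser needs) is to attempt a normally-verified TLS handshake against it. Here's a sketch using the Python standard library; the hostname is a placeholder:

```python
# Sketch: check whether a server presents a certificate chain that a stock
# client will verify. Missing intermediates (a common stumbling block with
# chained-root certs) show up here as a verification failure.
import socket
import ssl

def check_chain(hostname, port=443):
    context = ssl.create_default_context()  # system CA bundle, full verification
    try:
        with socket.create_connection((hostname, port), timeout=10) as sock:
            with context.wrap_socket(sock, server_hostname=hostname) as tls:
                cert = tls.getpeercert()
                issuer = dict(pair[0] for pair in cert["issuer"])
                print(f"{hostname}: chain verifies; issued by {issuer.get('organizationName')}")
    except ssl.SSLCertVerificationError as exc:
        # "unable to get local issuer certificate" here often means the server
        # isn't sending its intermediate certificates.
        print(f"{hostname}: verification failed ({exc.verify_message})")

check_chain("www.example.com")  # placeholder hostname
```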

RapidSSL looks like a very reasonable alternative ($70 for one year, $121 for two years), and they’re running a free one-year promotion right now for people switching from Thawte. Their certs are single-root, support up to 256-bit encryption, and appear to be well-supported, so they might be getting my business soon.

I have a few weeks to mull all this over; does anyone have any other specific recommendations (or warnings of companies to avoid)?

A group of online scammers managed to set up a website, pretending to be part of Mountain America Credit Union, that collected the credit card information of MACU users who were tricked into visiting the site. This, by itself, isn’t all that frightening — there are probably hundreds of sites out there that try to do the same thing. In this case, though, the scammers managed to get a secure certificate for the site (the component that then puts the little locked icon in a user’s browser interface), something they did by tricking Geotrust, one of the companies that provides those certificates. (The process of granting those certificates is supposed to involve due diligence on the part of the company, wherein they make sure that the people asking are who they say they are, and that they represent the entity they claim to represent.) Similarly, the scammers managed to convince ChoicePoint that they were legitimate, lending more evidence to unsuspecting consumers that they were actually giving their financial information to their bank. (Of course, we’re talking about the same ChoicePoint that gave the personal information of hundreds of thousands of people to criminals, had an enormous fine levied against it, and had serial future audits imposed on its continued business practices.) The remarkably adept internet security organization SANS has a detailed review of the incident, something that’s worth a read.

The mechanisms of trust that exist on today’s internet are all based on private actors — companies like Verisign, Geotrust, and ChoicePoint — which are supposed to go through strict processes to make sure that people are who they say they are. (For example, when I got a security certificate for a webserver I run on my domain, queso.com, I had to fax my business articles to the company granting the certificate, and provide them with financial information that they could then use to link me back to my company.) We’re learning more and more, though, that we can’t even trust those private actors, something that undermines everything we think of as transactional security on the web.

During my pediatrics residency, I built a pretty sizable content management system to host educational and curricular material for my department, a system that’s remained in operation for quite a while now. Over the past few months, though, two senior pediatricians (and regular content contributors) let me know that they were unable to log into the system from home; from their descriptions, they were presented with the login form, entered their information and submitted it, and were immediately returned to the same form without any errors. On its face, it made no sense to me, and made even less sense given the fact that there are a hundred or more regular users who weren’t having any problems logging in. The fact that two people were having the same problem, though, made it clear that something was breaking, so I started taking everything apart to see where the problem was rooted. (This was of particular interest to me, since I use the same authentication scheme in a few other web apps of mine, some of which contain patients’ protected health information.)

Looking at the mechanism I had built, the system takes four pieces of information — the username, password, client IP address, and date and time of last site access — and combines them into a series of cookies sent back to the user’s browser in the following manner:

  • the username is put in its own cookie;
  • the username, password, and client IP address are combined and put through a one-way hash function to create an authentication token, a token that’s put into a second cookie;
  • finally, the date and time of the last site access is put into a third cookie.

To me, all four pieces of information represent the minimum set needed for reasonable site security. The username and password are obvious, since without them, anyone could gain access to the site. The client IP address is also important for web-based applications; it’s the insurance that prevents people from being able to use packet sniffers, grab someone else’s cookie as it crosses the network, and then use it to authenticate themselves without even having to know the password (a type of replay attack known as session hijacking). (This isn’t perfect, given the widespread use of networks hidden behind Network Address Translation as well as the feasibility of source IP address spoofing, but it’s a pretty high bar to set.) And finally, incorporating the date and time of a user’s last access allows me to implement a site timeout, preventing someone from scavenging a user’s old cookies and using them to access the site at a later time.
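To make that concrete, here's a minimal sketch of the scheme as described above. The function names, the HMAC-SHA-256 construction, and the server-side secret are my own illustrative choices, not necessarily what the original system used:

```python
# Sketch of the cookie scheme described above: the username in one cookie,
# a one-way hash of (username, password, client IP) in a second, and the
# last-access timestamp in a third. The hash construction and the
# server-side secret are illustrative assumptions.
import hashlib
import hmac
import time

SERVER_SECRET = b"change-me"   # placeholder; keep the real one out of source control
SESSION_TIMEOUT = 30 * 60      # thirty minutes, in seconds

def make_token(username, password, client_ip):
    material = f"{username}|{password}|{client_ip}".encode()
    return hmac.new(SERVER_SECRET, material, hashlib.sha256).hexdigest()

def issue_cookies(username, password, client_ip):
    return {
        "user": username,
        "token": make_token(username, password, client_ip),
        "last_access": str(int(time.time())),
    }

def check_cookies(cookies, stored_password, client_ip):
    expected = make_token(cookies["user"], stored_password, client_ip)
    fresh = time.time() - int(cookies["last_access"]) < SESSION_TIMEOUT
    # compare_digest avoids leaking token bytes through timing differences
    return fresh and hmac.compare_digest(cookies["token"], expected)
```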

Looking at that system, I struggled to find the bit that might be preventing these two users from being able to log in at home. I already had a check to see if the user’s browser allowed cookies, so I knew that couldn’t be the problem. These same two users were able to log into the site using browsers at the hospital, so I knew that there wasn’t some issue with their user database entries. That left me with a bunch of weird ideas (like that their home browsers were performing odd text transformations between when they typed their login information and when the browser submitted it to the server, or that their browsers were somehow modifying the client IP address that was being seen by my application). None of that made any sense to me, until I got a late-night email from one of the two affected users containing an interesting data point. He related that he was continuing to have problems, and then was able to log in successfully by switching from AOL’s built-in web browser to Internet Explorer. (He has a broadband connection, and mentioned that his normal way of surfing the web is to log onto his AOL-over-broadband account and use the built-in AOL browser.) When the other affected user verified the same behavior for me, I was able to figure out what was going on.

It turns out that when someone surfs the web using the browser built into AOL’s desktop software, their requests don’t go directly from their desktop to the web servers. Instead, AOL has a series of proxy machines that sit on their network, and most client requests go through these machines. (This means that the web browser sends its request to a proxy server, which then requests the information from the distant web server, receives it back, and finally passes it on to the client.) The maddening thing is that during a single web surfing session, the traffic from a single client might go through dozens of different proxy servers, and this means that to one web server, that single client might appear to be coming from dozens of different IP addresses. And since the client IP address is a static part of my authentication token, the changing IP address makes every token invalid, so the user is logged out of their session and returned to the login page.

Thinking about this, it hit me that there are precious few ways that an authentication scheme could play well with AOL’s method of providing web access. For example:

  • The scheme could just do away with a reliance on the client’s IP address; this, though, would mean that the site would be entirely susceptible to session hijacking.
  • The scheme could use a looser IP address check, checking only to make sure the client was in the same range of IP addresses from request to request; this would likewise open the site up to (a more limited scope of) session hijacking, and would be a completely arbitrary implementation of the idea that proxy requests will always take place within some generic range of IP addresses. (Of note, it appears this is how the popular web forum software phpBB has decided to deal with this same problem, only checking the first 24 bits of the IP address; a sketch of that looser check appears after this list.)
  • The scheme could replace its checks of the client IP address with checks of other random HTTP headers (like the User-Agent, the Accept-Charset, etc.); to me, though, any competent hacker wouldn’t just play back the cookie header, he would play back all the headers from the request, and would easily defeat this check without even knowing it.
  • Lastly, the scheme could get rid of the client IP address check but demand encryption of all its traffic (using secure HTTP); this would work great and prevent network capture of the cookies, but would require an HTTPS server and would demand that the people running the app spend money annually to get a security certificate, all just to work around AOL’s decision on how the web should work.
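The looser, phpBB-style comparison from the second option is simple enough to sketch: treat two requests as coming from the same client if their IPv4 addresses fall within the same /24 network.

```python
# Sketch: the "looser" IP check, comparing only the /24 network of the
# address seen at login with the address seen on each later request.
from ipaddress import ip_address, ip_network

def same_slash_24(ip_at_login, ip_now):
    net = ip_network(f"{ip_at_login}/24", strict=False)  # the /24 containing the login IP
    return ip_address(ip_now) in net

print(same_slash_24("205.188.116.5", "205.188.116.200"))  # True
print(same_slash_24("205.188.116.5", "64.12.96.1"))       # False
```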

In the end, I added a preference to my scheme that allows any single application to decide on one of two behaviors, either completely rejecting clients that are coming through AOL proxy servers (not shockingly, the way that many others have decided to deal with the problem), or allowing them by lessening the security bar for them and them alone. I check whether a given client is coming from AOL via a two-pronged test: first, I check to see if the User-Agent string contains “AOL”, and if it does, I check to see if the client IP address is within the known blocks of AOL proxy servers. If the client is found to be an AOL proxy server, then (depending on the chosen behavior) I either return the user to the login page with a message that explains why his browser can’t connect to my app, or I build my authentication token without the client IP address and then pass the user into the application.
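Here's a sketch of that two-pronged test and the resulting choice of token. The proxy network blocks listed are placeholders only; the real list (AOL published its proxy ranges at the time) has to be kept up to date.

```python
# Sketch of the two-pronged AOL-proxy test described above. The network
# blocks here are placeholders; a real deployment has to maintain the
# published list of AOL proxy ranges.
from ipaddress import ip_address, ip_network

AOL_PROXY_BLOCKS = [ip_network(block) for block in (
    "195.93.0.0/16",    # placeholder ranges, for illustration only
    "205.188.0.0/16",
)]

def is_aol_proxy(user_agent, client_ip):
    if "AOL" not in (user_agent or ""):
        return False
    addr = ip_address(client_ip)
    return any(addr in block for block in AOL_PROXY_BLOCKS)

def token_inputs(username, password, client_ip, user_agent, allow_aol=True):
    """Return the fields that go into the authentication token."""
    if is_aol_proxy(user_agent, client_ip):
        if not allow_aol:
            raise PermissionError("clients behind AOL proxies are not accepted")
        return (username, password)            # weaker token: no client IP
    return (username, password, client_ip)     # the normal, stricter token
```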

Finding myself in a situation where users were inexplicably unable to access one of my web apps was reasonably irritating, sure, but the end explanation was way more irritating. Now, I have to maintain a list of known AOL proxy servers in all my apps, and potentially, I have to get involved in teaching users how to bypass the AOL browser for access to any apps that require the stronger level of security. Of course, it’s also helped me understand the places where my authentication scheme can stand to be improved, and that’s not all that bad… but it still makes me want to punish AOL somehow.

Does anyone remember ChoicePoint, the data warehousing company that gave criminals access to the personal data of over 150,000 U.S. consumers back in 2004? When the story broke about a year ago, I made note of how ChoicePoint itself actually had been part and parcel of the problem, and lamented the way in which the media was portraying ChoicePoint as a victim rather than as a participant in the destruction of privacy. In light of that, I’m superbly happy to see that the Federal Trade Commission agreed with me today, fining ChoicePoint $10 million and noting that the firm had failed to tighten its internal security despite specific federal warnings going back as far as 2001. The firm also has to pay $5 million into a consumer redress fund, establish comprehensive information security programs, and submit to biennial security audits through the year 2026. (Of course, ChoicePoint netted $147 million in 2004, so part of me would have loved to see even steeper fines; that would have been as clear a message as possible that putting American consumers’ personal data at risk is a corporate practice that will effectively lead to the end of your corporation.)

You’d have to be living in a cave to not have heard news last week about a Windows security flaw that’s already being talked about as one of the worst, and most dangerous, ever found. (The executive version: there’s a flaw in a part of Windows devoted to interpreting image files that lets those image files contain actual program code which can do Very Bad Things to a computer. And the worst part is that all someone has to do to trick the computer into running that program code is get that computer to display the trojan-horse image — like getting the user to surf to a web page, or even just read an email. Microsoft’s security bulletin is here.) While I’m not usually prone to Microsoft bashing, it’s a pretty pathetic statement that the bug was found last Tuesday, and the danger of the bug was validated the very next day, but we’re now six days later and don’t have a patch from the folks in Redmond. And sadder still, a patch has been written by someone totally unaffiliated with Microsoft, Ilfak Guilfanov. (The well-respected Windows security expert Steve Gibson explains how Ilfak’s patch works here.) If I were administering a slew of Windows machines, I’d have to think long and hard about not distributing Ilfak’s patch as soon as possible, and then uninstalling it once Microsoft gets around to issuing something more official.

Update: now that the folks at SANS (possibly the most knowledgeable and well-respected computer security experts in existence) are recommending using Ilfak Guilfanov’s patch, I think that sysadmins who choose not to use it are asking for their networks to get compromised. They’ve also produced an MSI installer that is suitable for unattended installation via policy files, something that should make most admins of large Windows sites pretty happy.

A ruling came out of the Florida courts yesterday that’s managed to pique my interest a bit. In the case, a group of accused drunk drivers requested access to the program code for the breathalyzer that was used to document their blood alcohol levels; the court agreed with their request, and ordered the state to provide them with the code. The kicker is that the manufacturer of the breathalyzer claims the source code as a trade secret and is refusing to surrender it to the state, meaning that all of the drunk driving convictions obtained by using the device can now be called into question (and potentially overturned).

To me, this makes perfect sense. If a tool is going to be used to document some fact that’s used to make decisions about right and wrong — criminal and legal — then that tool better be as transparent as possible so that experts can be sure it works the way it’s advertised. In medicine, we would never make clinical decisions based on experimental or unverified test results; in fact, there’s an entire certification process through which new laboratory tests must be put before they can be used to make clinical decisions, and that process forces the people who develop and manufacture the tests to open their processes up to independent experts for verification. Why should the criminal justice system treat tools used to gather evidence in a different manner? (This is all the more important in the Florida case, as the breathalyzer in question has a questionable accuracy record (PDF), and was even subject to a recent software recall.) Conversely, why would a police department feel comfortable using a tool that operates in a completely hidden, unverifiable way?

It makes me happy when rigorous scientific standards find their way into places they logically belong.

Can anyone give me a single reason why so many websites ask users, when making a purchase or creating an account, to type their email addresses into the form twice?

[image: cnn really wants your email address]

Is there anything inherently more secure, reliable, or useful in forcing users to type the same thing over again?

[image: ebay also wastes your time]

I fully understand why most forms ask for a password twice; when users can’t see what they’re typing (because most password fields obscure any input behind little dots or asterisks), a good way to increase the likelihood that they type what they intend is to have them type it twice.

[image: more repetition at southwest.com]

But when email addresses are displayed right there for users to proofread as they type them, it’s incredibly annoying to have to pointlessly type them in a second time. Hell, the forms might as well ask you to type your name in twice, too.

[image: and even more at the dallas morning news]

(Second pet peeve: why do websites offer a “reset my password” function that fails to start the process off with an email that makes the user confirm that they want their password reset? It’s amazingly shortsighted; it lets people who want to mess with others just reset their passwords at will.)
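For what it's worth, the better flow isn't complicated: a reset request should do nothing except mail the account holder a single-use link, and only a visit to that link should actually change anything. A rough sketch, where send_email and the in-memory stores are stand-ins rather than a real mailer or database:

```python
# Sketch: a password-reset flow that requires email confirmation before
# anything changes. send_email() and the dict-based stores are stand-ins.
import secrets
import time

RESET_TTL = 60 * 60       # reset tokens are good for one hour
pending_resets = {}       # token -> (username, issued_at)

def send_email(address, subject, body):
    print(f"To: {address}\nSubject: {subject}\n\n{body}")  # stand-in mailer

def request_reset(username, email_address):
    token = secrets.token_urlsafe(32)
    pending_resets[token] = (username, time.time())
    send_email(email_address, "Confirm your password reset",
               f"Visit https://example.com/reset/{token} to choose a new password.\n"
               "If you didn't ask for this, ignore this message and nothing changes.")

def confirm_reset(token, new_password_hash, accounts):
    entry = pending_resets.pop(token, None)
    if entry is None or time.time() - entry[1] > RESET_TTL:
        return False                      # unknown or expired token; do nothing
    accounts[entry[0]] = new_password_hash
    return True
```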

For those who didn’t know, the folks at EasyDNS have been the targets of intermittent denial-of-service attacks for the past few weeks, and this morning brought a renewed round against their servers. Just an FYI, which could help explain why you might be getting occasional “host could not be found” errors in your travels around the web today.

[image: openid]

For those who care about such things, I’ve added OpenID support to the site, meaning that you can authenticate and leave comments using any OpenID identity that you might have. (For example, if you’re one of the eight and a quarter million LiveJournal users, you have an OpenID — and there are rumors that TypeKey will become an OpenID service very soon, too.) I’m still not at the point where I’m willing to mandate that people somehow authenticate in order to be able to comment, but the ease of using OpenID makes it look like that’s getting closer and closer to being a possibility.

For the even fewer that care about the details: since this site runs on Movable Type 3.2, it was pretty simple to install Mark Paschal’s OpenID comments plugin. After that, I had to add a single line to my individual archive template, add a few new styles to my stylesheet, update the JavaScript file that governs display of various elements of the comment box, and republish all the archives. (I ended up isolating a bug or two along the way, but Mark was kind enough to reply to my pestering emails letting me know that he’s fixed all the same bugs and added a few things here and there in a soon-to-be-released update. That being said, I’m happy to help anyone who’s interested in setting up OpenID support before then, so just drop me a line if that’s you.)

Slate continues its reasoned look at the current state of airport security, this time in a piece penned by Christopher Hitchens. The pullquote that sets the tone of the piece:

The time elapsed between Sept. 11, 2001, and today’s writing (1,364 days) is only slightly less than the time between Pearl Harbor and the unconditional surrender of Japan (1,365 days). And airport security is still a silly farce that subjects the law-abiding to collective punishment while presenting almost no deterrent to a determined suicide-killer.

Shannon and I have been traveling a lot (wedding planning, birthdays, etc.), and waiting to get through security at the Philly airport about a month ago, I surmised that it’s only a matter of time before we’re all standing in long lines leading to the metal detectors, stripped down to our underwear, shuffling along and hoping that we don’t get selected for random colonoscopy. At least we’ll all have a little more motivation to stay in shape…

Oh, great — there’s word on the IP mailing list that there’s now an eBay phishing scam that actually uses redirecting links which originate on eBay’s own servers, making it that much harder for lay people to know that they’re being taken for a ride.

To explain a little bit more: various web services have occasionally made use of scripts that redirect users to other locations. That is to say, the user visits a URL on website A, and a script running at that URL on website A does some bit of processing and then sends the user on to website B. They do this for any number of reasons; Yahoo does it to gather statistics on how many people use the entries in their directories, Movable Type does it to try to prevent comment spammers from gaining too much worth in search engine listings, and Google does it for a bit of both reasons. (You can hover over those three “does it” links to see that they all originate on the servers of the respective web services; you can click on them to see that they all take you back to this website.) Unfortunately, the nefarious elements of the web — spammers, multilevel marketers, and outright thieves — have taken advantage of these redirection services to try to make their scams look more legitimate; they bank on the fact that more people are likely to click on a google.com link than an im-a-scam-artist.info link. Some of the redirection services are designed so that it’s nearly impossible to take advantage of them in this manner (e.g., Movable Type); others are completely open by design, and any user can change the URL to change the site that sits as the final destination of the redirection. It’s that latter group that’s open to exploitation by thieves and miscreants, and that has been a source of much consternation to IT security people for the past few years.

Well, we learned today that eBay is running its own open redirector, which means that those emails you get saying that you urgently need to go and “correct” your eBay password and billing information might have links with actual ebay.com addresses in them. This is obviously a cause for concern, and a sound reason to remember the advice that until the world figures out a good solution to problems just like this, it’s best to avoid clicking on any email links claiming to be from businesses that need to help you verify your account status, payment options, or any other financial information.
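One common way to run a redirector without leaving it wide open is to refuse any destination the site didn't sign itself: the redirect URL carries a keyed hash computed with a server-side secret, so a scammer can't mint links to arbitrary destinations. A sketch (the secret and parameter names are illustrative, not any particular site's implementation):

```python
# Sketch: a redirector that only follows destinations it signed itself.
# The secret and the query-string layout are illustrative; real code
# would also URL-encode the destination.
import hashlib
import hmac

REDIRECT_SECRET = b"change-me"   # placeholder server-side secret

def sign_destination(url):
    sig = hmac.new(REDIRECT_SECRET, url.encode(), hashlib.sha256).hexdigest()
    return f"/redirect?to={url}&sig={sig}"

def resolve_redirect(url, sig):
    expected = hmac.new(REDIRECT_SECRET, url.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("unsigned or tampered destination; refusing to redirect")
    return url   # safe to issue the HTTP 302 to this URL

link = sign_destination("http://example.com/listing/123")
print(link)
print(resolve_redirect("http://example.com/listing/123", link.split("sig=")[1]))
```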

I’m so freaking sick of today’s headlines claiming that “hackers” somehow broke into ChoicePoint’s (obscenely comprehensive) consumer databases and obtained information which allowed them to then steal people’s identities. This is a story that’s been discussed on Dave Farber’s Interesting People mailing list since yesterday, and the truth of the matter — reported correctly only by MSNBC thus far — is that a group of criminals managed to create fake businesses and then set up entirely valid accounts with ChoicePoint in the name of those businesses, and then obtained the information about consumers via those accounts.

Notice the difference? If it’s reported that nefarious hackers broke into ChoicePoint and stole the data, then ChoicePoint comes out looking like a victim. On the other hand, if it’s reported that the failure was in ChoicePoint’s internal mechanisms for verifying the validity of an account application, the existence of the company behind that application, and the right of that company to obtain credit information, then ChoicePoint is revealed as a remarkably large part of the problem. Add to that the fact that ChoicePoint is only notifying consumers in the one state that requires them to (hell, there isn’t even a note about it on the company’s news release page), and doing so four months after they sold consumer data to criminals, and the story truly does take on a different character.

It’s funny — the American airport security loophole that Andy Bowers revealed in Slate today actually occurred to me last time Shannon and I flew back from Philly, but I immediately assumed that there was something I was missing and stopped trying to figure it out. Apparently, I wasn’t missing anything! Of course, now it won’t surprise me if the TSA does away with Internet check-in while they figure out how to negate this.

While checking in for his flight from London’s Gatwick Airport to Dallas-Fort Worth, Cory Doctorow found himself asked for a list of the names and addresses of every single person with whom he’d be staying in the U.S., a request which was explained as the result of some unnamed security regulation. He asked for escalating levels of detail about the unusual request, to much confusion, and eventually was told that his Platinum AAdvantage cardholder status absolved him of any requirement to provide the list. (That last part is the oddest to me — could there really be TSA directives that are as specific as making exceptions for people who are members of the elite frequent-flyer programs? If so, can AAirpass members expect to have a certain amount of suspicious information ignored given their contribution to the business of air flight?)

It frightens me how much about air travel is now dictated by some functionary’s proclamation that an odd rule or occurrence is the result of heightened security. (My own, way less-significant, example: last month, Shannon and I were unable to check in online for the return leg of a flight for which online check-in for the first leg hadn’t been a problem. When I called to ask why, I was told that the representative didn’t have a definite answer, but that it was very likely to be security-related. It was clear that that statement ended the conversation, and ended any inquiry into whether there could actually be a problem with the online check-in system.) It’s all just so silly; I hope that, at a minimum, John Gilmore’s case ends up forcing a greater deal of transparency upon the security-related apparatus that has grown so prominent over the past four years.

Another equally troubling question is this: How could someone of my fundamental incapacity have come so close to heading the department of the United States government charged with protecting our country from acts of terrorism? Is anyone else horrified by this? Is anyone besides me even slightly bothered?

The McSweeney’s not-so-cloaked take on Bernard Kerik’s failed nomination is just priceless.

It’s disappointing to see an information security organization as good as SANS get an issue about information security so painfully wrong. In its weekly NewsBites newsletter (issue 48, not available in the online archive at the time of writing), the following entry appears as a link to an eWeek article:

—Spammers Exploit Anti-Spam Technology - DomainKeys
(29 November 2004)
Spammers have begun using DomainKeys to make their fake messages appear legitimate. DomainKeys was one of the more promising technologies designed to eliminate forging, but spammers appear to have co-opted it.
http://www.eweek.com/print_article2/0,2533,a=139951,00.asp

What’s the problem with this? That in this case, DomainKeys is actually doing its job, not somehow being subverted. Much like Sender Policy Framework, Yahoo’s DomainKeys technology is not an antispam solution, but an antiforgery solution. As it’s described on that Yahoo! page (and by Ars Technica in a review), DomainKeys provides a way for email recipients to see whether or not a piece of email comes from the sender it claims to have come from. In other words, DomainKeys only helps assess whether or not an email really did come from billg@microsoft.com; it specifically makes no claims about helping users figure out whether or not his product will actually make your penis grow five inches overnight.
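To put it another way, a DomainKeys check (or a check with its modern descendant, DKIM) answers exactly one question: does the signature verify against the sending domain's published key? That's the whole job, shown here as a sketch using the third-party dkimpy package (the filename is a placeholder):

```python
# Sketch: verifying a message's domain signature with the third-party
# dkimpy package (pip install dkimpy). A passing check means only that the
# message really was signed by the domain in its signature header; it says
# nothing at all about whether the message is spam.
import dkim

with open("message.eml", "rb") as fh:   # placeholder filename
    raw_message = fh.read()

if dkim.verify(raw_message):
    print("signature verifies: the signing domain really did send this")
else:
    print("signature missing or invalid: the claimed sender is unproven")
```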

So when SANS says that “spammers appear to have co-opted” DomainKeys, we should all be ecstatic — that means that email users and administrators gain the ability to know for certain when email comes from certain mail servers and domains, and can block those servers and domains with absolute confidence that it’s the right thing to do. Shame on SANS (and Dennis Fisher at eWeek) for not knowing the difference.