A little bit ago, I wrote a piece about how a new start-up, Bit.ly, was ignoring the wishes of web content producers by creating cached copies of pages that are explicitly marked (by those same content producers) with headers directing that they not be cached. So here we are, three weeks later, and it crossed my mind that maybe Bit.ly had fixed the problem… and disappointingly, they appear to still not give a flying crap. (That’s their cached version of this page, a page that couldn’t make itself any clearer that it’s not to be cached.)

I hate to push this to the next level, but is it time to drop Amazon a DMCA notice saying that the page is copyrighted (as all works are, once they’re fixed in a tangible medium) and is being hosted on Amazon’s network?

(And one other thing: how annoying is it that when Bit.ly’s caching engine makes its page requests, it doesn’t send any user agent string, so it’s literally impossible for a website owner to identify the Bit.ly bot programmatically? They appear to be running the caching engine off of an Amazon EC2 instance, as well, so there’s not even a way to watch for a known IP address — it’ll change as they move around the EC2 cloud. Never mind pissing in the pool; the Bit.ly folks are out-and-out taking a dump in the pool.)
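For the record, behaving well here isn’t hard. A rough sketch, in Python, of what a polite archiver ought to do (the bot name and URL are made up): announce itself with a User-Agent, and skip any page that asks not to be cached.

    import urllib.request

    def fetch_for_archive(url: str):
        """Fetch a page for archiving, unless it asks not to be cached."""
        req = urllib.request.Request(url, headers={
            # identify the bot so site owners can recognize (and block) it
            "User-Agent": "ExampleArchiveBot/1.0 (+http://example.com/bot)",
        })
        with urllib.request.urlopen(req) as resp:
            cache_control = (resp.headers.get("Cache-Control") or "").lower()
            if "no-store" in cache_control or "no-cache" in cache_control:
                return None   # the author has asked not to be cached; honor it
            return resp.read()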

If you, like me, let Firefox 3 create a new profile for you when you installed it over the past week or two, you might want to revisit the step you’ll need to take to fix Firefox’s annoying I’ll-forget-all-your-cookies bug feature. It didn’t hit me until today that the reason my bank site couldn’t remember my login name was because Firefox was dropping the relevant cookie…

It’s amazing that we sit here, four years later, and the same broken behavior is accepted by the Mozilla folks.


There’s been quite a bit of press today about bit.ly, a new service from the folks at switchAbit; it’s a service that adds page caching, click-through counting, and a bunch of semantic data analysis atop a URL-shortening service that’s very much like TinyURL and others (and others!). Reading the unveiling announcement, the part that interested me most was the page caching part — they bill it as a service to help prevent link rot (i.e., when a page you’ve linked to or bookmarked then goes away), which would be a great service to those folks who rely on linked content remaining available. (And since they store their cached content on Amazon’s S3 network, robustness and uptime should be great as well.)

That being said, having worked with (and on) a bunch of caching services in the past, I also know that caching is a feature that many developers implement haphazardly, and in a way that isn’t exactly adherent to either specs or the wishes of the page authors. So I set out to test how bit.ly handles page caching, and I can report here that the service does a great job of caching the text of pages, a bad job of caching the non-text contents of pages, and a disappointingly abhorrent job of respecting the wishes of web authors who ask for services like this to not cache their pages.
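For reference, “asking not to be cached” isn’t subtle; a page just sends response headers along these lines (the exact combination varies, but these are the standard directives):

    Cache-Control: no-cache, no-store, must-revalidate
    Pragma: no-cache
    Expires: 0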

I’m overall pretty happy about the release of Firefox 3, but am I the only one who’s seeing sporadic issues on Macs? On my iMac (2.1 GHz PowerPC G5), it’s painfully slow, so slow that I’ve started using Safari as my primary web browser. And on my MacBook Pro (Intel Core 2 Duo), I’ve had at least a dozen instances in the past three or four days where, during the loading of a tab, I get the spinning rainbow beachball of death and have to force-quit the app.

Ugh.

This might be my favorite news article correction ever, from my old school paper, the Columbia Spectator:

CORRECTION: This submission misstates that one Dalai Lama admitted to having sex with hundreds of men and women while knowing that he had AIDS. Additionally, the submission misstates that many monks participated in the dismemberment of female bodies. In fact, there is no factual evidence to substantiate either of these claims. Spectator regrets the error.

I mean, that’s just awesome. Nice work there, editors…

This Mozilla bug report thread might very well be the best thread I’ve ever read. There are a lot of developers who truly want to help track down a bug that someone’s reporting, but since they’re unable to replicate the bug, they ask the reporter to test out a few specific other builds, and he totally freaks out, SCREAMS IN ALL CAPITAL LETTERS, and makes various and sundry claims of them ruining his computer. It’s just awesome, a perfect encapsulation of how the internet makes some people go a little batshit insane.

Talk about a fuckup of gargantuan proportions: last night, the co-founder of the webhosting company Dreamhost launched a script to trigger a billing cycle for the 2007 end-of-year, but mistakenly used December 31st, 2008 as the run date, meaning that all accounts had their bills run for the entire year of 2008. And that means that if accounts were set to automatically pay, people’s credit cards were charged and bank accounts were debited for one or two years’ worth of charges, leaving a slew of customers with overdraft and over-credit charges from their banks, not to mention other planned transactions that now can’t take place given hundreds or thousands of dollars in unanticipated charges. (I received over $600 worth of bills from them via email, bills that weren’t auto-paid only because the credit cards they have on file for me are thankfully expired.)
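For what it’s worth, this is exactly the kind of mistake a two-line sanity check can catch. A purely hypothetical sketch, with nothing to do with Dreamhost’s actual billing code:

    from datetime import date

    def run_billing_cycle(as_of: date) -> None:
        """Bill accounts up to as_of, refusing any date that hasn't happened yet."""
        if as_of > date.today():
            raise ValueError(f"billing date {as_of} is in the future; refusing to run")
        # ... generate invoices and charge stored payment methods up to as_of ...

    # run_billing_cycle(date(2008, 12, 31))  # raises instead of billing a year ahead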

From the sounds of things in the Dreamhost Status comments and discussion forum, despite the fact that the webhosting company is already trying to work through the mistaken charges and reverse them, it’s going to lose a bunch of business over the fiasco — even with reversing the charges, it sounds like most of the folks who’ve been assessed overdraft fees aren’t going to be able to avoid them, at least not without quite a bit of effort and fighting with their individual banks (something for which I’m sure the customers will be oh-so-grateful to Dreamhost). And the scope of the overall problem is made clearest by the fact that Dreamhost’s account management control panel has been down all morning, probably because every single customer is trying to get more information about why their accounts were charged and what they should do about it.

Ironically, the most recent Dreamhost newsletter, written by the same co-founder of the company in his trademark (and tiresome) jokey style, had the following item in it:

4. New Office!
 
Another important thing I’ve been doing instead of writing newsletters is looking out the window of our NEW OFFICE:
 
http://blog.dreamhost.com/2007/12/21/were-so-high-right-now-you-dont-even-know
 
If your next web hosting bill from us is mysteriously tripled, now you know why.

Talk about bad timing on Josh’s part… or perhaps, talk about a good lesson in the error of joking about things that could easily become the catalysts that drive customers into the arms of your competitors.

OK, it’s now been about a week since my first installation of Mac OS X 10.5 (aka Leopard), and after the first round of notes, I have a few other observations to throw out there.

  • I get the distinct feeling that my MacBook Pro’s battery life is a bit shorter under Leopard than it was under Tiger — there are a bunch of posts to various Mac-related websites saying the same thing, so there appears to be power in numbers here, and not just a funny feeling on my part. That sucks; over the next few weeks, I’ll get the sense of whether it’s a real issue or not.
  • The new version of Mail.app seems to have some sort of seriously screwed-up issue with my IMAP account configuration. Last night, my MacBook Pro sounded like an airliner spooling up to take off, and I discovered that Mail.app was re-downloading nearly 3 GB of mail that it already had in its cache, for reasons that I can’t even begin to fathom. Right now, I’m sitting here five minutes after I asked Mail.app to quit, and can still see four processes in the program’s Activity window that don’t seem to want to give way; clicking the stop icon on one of them just led to a spinning rainbow beach ball of doom, and I’ll have to force-quit the app entirely. All in all, a functioning Mail.app is a must-have for me, and if I can’t figure this one out, it’ll mean either moving to another mail application entirely, or downgrading to Tiger.
  • Speaking of unstable, I’ve now had to force-quit Finder itself a half-dozen times under Leopard. The first two times, I connected to shared drives, and then noticed that the drives didn’t appear in the “Shared” section of Finder’s sidebar, despite me clearly being connected to them; that meant that in order to disconnect from the shares, I had to manually issue the unmount commands from the terminal prompt. Force-quitting Finder restored their listings in the sidebar, and all related functionality. Then, another time, I created a new Spotlight search template and checked the box to have it saved to the sidebar, but it didn’t show up; again, force-quitting the Finder fixed the issue. Both of these issues have recurred a few more times, which is pretty annoying. And notably, all the other functionality of the Finder has remained intact during these periods — it’s not like I needed to force-quit the Finder to restore all functionality, just the “Shared” and “Search For” lists in the sidebar.
  • And speaking of Spotlight, Leopard’s new implementation might be better in a lot of ways, but it’s pretty broken in a bunch of others. The biggest problem I’ve run into is that Leopard’s Spotlight seems to operate with a bunch of internal, undocumented filters in place that hide whole classes of potential search results, something that others have also noticed. As an example, if you use the basic Spotlight search interface to look for a file that lives in any of the system-type directories (e.g., the system-wide or user-specific Preferences folders), you won’t find the file — those are now excluded from the general search results. The only way to actually get Spotlight to show them to you is to use the advanced interface, enable the “System Files” choice in the criteria drop-down, and set it to “include” — and this setting only applies to your current search. You can create a template for this that you can then access for future searches, but you can’t ever access that template through the basic, upper-right-hand-of-your-screen search interface; instead, it’s a two-step process where you have to manually start your search from the template and then choose it (rather than “This Mac”) from the “Search:” options that are displayed at the top of your search result pane. This is just inane.
  • Finally, Leopard lost all knowledge of the printers that I had set up on all my machines… and given that all three machines are in a work environment with multiple printers and setups, this was quite a pain in the ass.

I received the oddest phone call yesterday, a robocall from DirecTV (from whom we currently receive our television service). It went more or less exactly like this:

Hello, my name is Diane, and I’m with DirecTV. From time to time, we like to call our customers with information about our latest promotions and specials, but we cannot call you with these, as you’re on our do-not-call list. We’d like to offer you the opportunity to update your status with us; press 1 if you want to remove your listing on our do-not-call list, or press 3 if you want to stay on the list.

Does anyone else find this the slightest bit weird — receiving a call from a company which acknowledges that they shouldn’t be allowed to call you, and asking if you still want that to be the case? In any event, the phone call is in explicit violation of DirecTV’s own “Do Not Call Policy”, which in part reads:

DIRECTV’s Outbound Telesales Department is a department within DIRECTV that engages in telemarketing to existing DIRECTV customers. The Outbound Telesales Department will not call any DIRECTV customer who has communicated his or her desire not to be called.

Given that DirecTV was fined $5.35 million back in 2005 for violating the federal do-not-call registry, you’d think that the company would be exquisitely sensitive to the ways in which it decides to make marketing telephone calls. After receiving the call yesterday, I thought that perhaps DirecTV was being clever — regardless of whether I want calls from them or not, by calling me they couldn’t be violating the do-not-call law because I’m an established customer of theirs. Turns out that I was wrong, though — according to the FTC (see question #9), they must adhere to the wishes of any established customers who don’t want to receive marketing calls, or they face an $11,000 fine per call. Looks like it’s time to file a complaint.

Two updates: first, it looks like I wasn’t the only one to get the phone call; pity for them they stirred the Consumerist beast. Second, it looks like there’s a bug with the FTC do-not-call registry complaint form; if you, like pretty much every American, have a phone number that’ll expire off the registry soon and you update your listing, you’ll be unable to file any complaints for 31 days because the FTC system thinks yours is a totally new listing. That’s stupid.

So, I guess that the dozens of times I’ve been at my local Home Depot and seen a “saw not working” sign on the panel saw, there’s an even-odds chance that the employees just didn’t want to be bothered to help their customers… fabulous.

Wow — the entire Windows Genuine Advantage system is currently down, meaning that every single copy of Windows XP and Vista that tries to authenticate as legitimate is failing the authentication. For users of Vista, this means that the operating system then assumes that it’s a pirated installation and turns off functionality (like the Aero interface, DirectX, and a few other things) — all because Microsoft’s own server infrastructure died. The MS Forums appear to be down right now, but there are reports that the company has promised a fix by Tuesday (wow — three days!), and that users who managed to post to the WGA forum are rightfully outraged by what’s happening.

I’m a person who generally thinks that there’s too much bashing of Microsoft out there, but I have to say that when the company’s anti-piracy features start disabling functionality on legitimately-purchased copies of Windows Vista, all because of an outage on Microsoft’s own end, then every cent of lost business and increased customer support costs is richly deserved.

If the two-day Skype outage from last week was the result of a flaw in Skype’s own software, why did the company only release an updated Windows version of its client? What about the Mac and Linux users — does the robustness of the software on those platforms not matter?

Now that pop-under ads have made a resurgence on the web — and nefarious webheads have managed to figure out how to make them happen even with Firefox or IE locked down pretty tightly — I have an idea that I’d love to see implemented. It’s rooted in the basic problem that by the time a user closes a web browser window and sees all the accumulated pop-under ads, he or she has no clue which website was the cause, and as a result, no idea which website should be the target of unabashed loathing. Simply put, the idea is that any web browser window should have a feature which shows the user the exact website address being viewed in the window that spawned the popup. That way, it would be clear as a bell which website was responsible for accepting ad content (or worse, purposely programming content) which behaves this egregiously, and it’d be much easier for users to then avoid those websites — voting with our pageviews, as it were.

Who’s with me?

Ah, the iPhoners now get to see what the difference is between a product Apple controls in every way and a product for which it relies on AT&T to provide some level of service. I can’t fathom why a company with as reasonably great a record as Apple wanted to jump into bed with a company as awful as AT&T… it’s just weird.

Update: Gizmodo gets on the bandwagon and describes the amazingly wide gap between the iPhone-buying experience at an AT&T store and at an Apple store. Guess which store’s staff sucked awfully, treated customers like intruders, and did everything to not give information or assistance to the people who wanted to give them money?

Seriously, I’ve been killing myself trying to figure out why the speakers on my Mac Mini at the office have been popping and crackling at me for the past few days; alas, it looks like my upgrade to OS X 10.4.10 is to blame. What could they have possibly changed in the OS to cause this? It’s one of the odder bugs related to a system upgrade I’ve ever experienced…

Update (7/3/2007): it looks like Apple has fixed the bug; go grab Audio Update 2007-001.

This morning, I was greeted by a Firefox alert letting me know that version 2.0.0.4 is available for installation. I decided to read the release notes, and found this gem among the info applicable to all systems:

When trying to print web pages with text areas, if the text area contains a misspelled word and spell checking is enabled, all the following content of the text area will not be printed. You can right-click in the text area and uncheck “Spell check this field” to turn off spell checking temporarily while you print.

Are you kidding me? I mean, we’re not talking about an edge case here — spellcheck is turned on in textarea fields by default, and there are a hell of a lot of people who print various forms they’re about to submit on the web as a form of recordkeeping. The folks at Mozilla decided that this rookie crap isn’t the sort of thing that should be fixed before releasing the browser upgrade? (And better still, it appears the bug was fixed back in December of 2006, but that nobody has deigned to include the fix in the current Firefox builds.) Awesome.

Given all the Motorola RAZR 2 hoopla I’ve seen on the various gadget and tech weblogs over the past day, I feel compelled to mention that my hopes for the phone would be exactly nil, given how large and stinking a piece of poop the RAZR is. It is, by far, the worst phone I’ve ever used, and I’ve probably had a dozen or more cellphones in the past decade — I typically now just leave it at home and forward it to my work phone, and there isn’t a day I decide to take it with me that I don’t end up wanting to throw it off the 14th Street Bridge.

That is all.

Today, I figured I’d do a one-month check-in on the fact that Google Maps is lost when it comes to mapping Washington, DC, and the verdict is: still totally, completely horked! It’s horked in a different way now, though; the link from my original post works, but other ones don’t work worth a damn at all. (And while neither of those is a link to our house’s address, our house is one of the addresses that’s unmappable… meaning that all the various bookmarks for directions we’ve sent people over the past year still don’t work at all.) There’ve been no further replies from the folks at Google, either; Matt Cutts replied to that prior post of mine in the comments and followed up with me by email a few days later, but he now appears to be going on a one-month work hiatus and doesn’t look to be receiving email.

I seriously can’t believe that the folks at Google don’t care about the bug in their address parsing routines, but the truth appears to be evident in the fact that they remain broken.

Update: I just got an email reply from Matt Cutts (too quickly for it to be due to this post!), and in working through some examples, it looks like the breakage might be specific only to the various C Streets in Washington, DC — addresses on C Street SE (and the other four quadrants) don’t work, and addresses on all other one-letter streets appear to map fine. He’s going to bug the mapping folks again, so we’ll see what happens!

If you live or spend any amount of time in Washington, DC, you might have noticed a problem recently: Google Maps essentially no longer works here. Sometime in mid-February, it appears that the folks behind the previously-amazing mapping service updated the address parser that it uses, and at this point the parser doesn’t have any clue how to understand the one-letter streets and quadrant system that’s used throughout the District of Columbia.

Take this map link, which is supposed to show 500 E Street SE (the address of our local police station). You don’t have to be eagle-eyed to see that that’s not the address the map shows; here’s a MapQuest view of the distance between Google’s mapped location and the true one, nearly five miles away. Try to use Google Maps to locate any address on a lettered street in the District, and you’ll get the same result.
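The address format itself isn’t exotic, either. Here’s a hypothetical sketch of the kind of pattern a parser needs to recognize; it obviously isn’t Google’s code, and it only covers the one-letter streets at issue here:

    import re

    DC_ADDRESS = re.compile(
        r"^(?P<number>\d+)\s+"
        r"(?P<street>[A-Z]\s+(?:Street|St\.?))\s+"
        r"(?P<quadrant>NW|NE|SW|SE)$",
        re.IGNORECASE,
    )

    m = DC_ADDRESS.match("500 E Street SE")
    print(m.groupdict() if m else "no match")
    # -> {'number': '500', 'street': 'E Street', 'quadrant': 'SE'}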

I’ve avoided posting about this for a little bit in the hope that Google would get around to fixing it… but there are a half-dozen posts or threads in the Maps troubleshooting group, dating as far back as the last week in February, that have gone completely unanswered by Google. Similarly, I’ve personally had email correspondence with “The Google Team” which reassures me that “they’re aware of the issue” but neglects to mention anything about whether they care about the issue, despite me pressing the question and getting a similarly cookie-cutter reply. Since our house is on an essentially-unmappable street, none of the map links I’ve sent people over the past year work anymore, and Shannon and I have pretty much stopped using Google Maps for any of our regular direction-finding for trips out and about on weekends.

I know Google is a huge company now, and that it’s hard for them to reply to the concerns of individual users, but when a change they made causes one of their larger products to stop working entirely in a reasonably large and well-traveled city, you’d think that they’d hop right onto fixing that. So far as I can tell, though, you’d be thinking wrong.

While adding a bunch of scheduled meetings to my 2007 calendar today, I came across a fascinating little bug in Microsoft Entourage, a bug that’s related to the US decision to shift the start and end of Daylight Saving Time around a bit. (The move was part of the Energy Policy Act of 2005, and is ostensibly temporary until the government can study the changes and determine if they truly do result in energy savings.) Because Entourage predates the DST changes by quite a bit, it gets confused between March 11th and March 31st of 2007, and between October 29th and November 4th of 2007 — the former because the start of DST moved earlier, from April 1st to March 11th, and the latter because the end of DST moved later, from October 29th to November 5th. The result of the bug is that the calendar shows all event times as an hour later than those which were entered (for example, a start time of 8:30 AM in the event detail dialog box shows up as 9:30 AM on the calendar).
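If you want to see the affected windows for yourself, here’s a quick sketch using nothing but Python’s standard library. It’s purely illustrative (it’s certainly not how Entourage computes anything), and the exact boundary days depend on whether you count the changeover Sundays themselves:

    from datetime import date, timedelta

    def nth_weekday(year, month, weekday, n):
        """Date of the nth given weekday (Mon=0..Sun=6) in a month."""
        d = date(year, month, 1)
        offset = (weekday - d.weekday()) % 7
        return d + timedelta(days=offset + 7 * (n - 1))

    def last_weekday(year, month, weekday):
        """Date of the last given weekday in a month."""
        d = date(year, 12, 31) if month == 12 else date(year, month + 1, 1) - timedelta(days=1)
        offset = (d.weekday() - weekday) % 7
        return d - timedelta(days=offset)

    year = 2007
    old_start = nth_weekday(year, 4, 6, 1)    # first Sunday in April (old rule)
    new_start = nth_weekday(year, 3, 6, 2)    # second Sunday in March (new rule)
    old_end   = last_weekday(year, 10, 6)     # last Sunday in October (old rule)
    new_end   = nth_weekday(year, 11, 6, 1)   # first Sunday in November (new rule)

    print(new_start, "to", old_start)   # spring window where the old and new rules disagree
    print(old_end, "to", new_end)       # fall window where the old and new rules disagree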

I’m sure that Microsoft will get around to fixing this sometime before mid-March of 2007, but until then, be aware of the bug when you’re scheduling events with Entourage!

Wouldn’t you know that my Linux box — with an uptime of 200+ days — would choose this morning to puke and die, the same morning that was my very first as an attending on the clinical pediatric oncology service of my new hospital. As it played out, the server died at around 4:10 this morning, I left the house at around 6:30 without realizing it, and it wasn’t until Shannon sent me a message at around 10:00 that I had the “oh, f*@$” moment. It was a weird crash, but it’s back up and running (as this posting will attest!). Sorry about that!

A group of online scammers managed to set up a website, pretending to be part of Mountain America Credit Union, that collected the credit card information of MACU users who were tricked into visiting the site. This, by itself, isn’t all that frightening — there are probably hundreds of sites out there that try to do the same thing. In this case, though, the scammers managed to get a secure certificate for the site (the component that then puts the little locked icon in a user’s browser interface), something they did by tricking Geotrust, one of the companies that provides those certificates. (The process of granting those certificates is supposed to involve due diligence on the part of the company, wherein they make sure that the people asking are who they say they are, and that they represent the entity they claim to represent.) Similarly, the scammers managed to convince ChoicePoint that they were legitimate, giving unsuspecting consumers even more reason to believe that they were actually providing their financial information to their bank. (Of course, we’re talking about the same ChoicePoint that gave the personal information of hundreds of thousands of people to criminals, and that consequently both had an enormous fine levied against it and had ongoing audits imposed on its business practices.) The remarkably-adept internet security organization SANS has a detailed review of the incident, something that’s worth a read.

The mechanisms of trust that exist on today’s internet are all based on private actors — companies like Verisign, Geotrust, and ChoicePoint — which are supposed to go through strict processes to make sure that people are who they say they are. (For example, when I got a security certificate for a webserver I run on my domain, queso.com, I had to fax my business articles to the company granting the certificate, and provide them with financial information that they could then use to link me back to my company.) We’re learning more and more, though, that we can’t even trust those private actors, something that undermines everything we think of as transactional security on the web.
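And to be clear about how little that locked icon actually tells you: all it reflects is what the issuing authority was willing to vouch for, which you can inspect yourself. A small sketch using Python’s standard library, with example.com standing in for whatever host you’re curious about:

    import socket, ssl

    def peer_cert(host: str, port: int = 443) -> dict:
        """Return the certificate the server presents for a TLS connection."""
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.getpeercert()

    cert = peer_cert("example.com")
    print(cert["subject"])   # who the certificate says the site is
    print(cert["issuer"])    # which authority vouched for that claim

If the authority can be talked into vouching for the wrong people, everything downstream of that lock icon is built on sand.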

During my pediatrics residency, I built a pretty sizable content management system to host educational and curricular material for my department, a system that’s remained in operation for quite a while now. Over the past few months, though, two senior pediatricians (and regular content contributors) let me know that they were unable to log into the system from home; from their descriptions, they were presented with the login form, entered their information and submitted it, and were immediately returned to the same form without any errors. On its face, it made no sense to me, and made even less sense given the fact that there are a hundred or more regular users who weren’t having any problems logging in. The fact that two people were having the same problem, though, made it clear that something was breaking, so I started taking everything apart to see where the problem was rooted. (This was of particular interest to me, since I use the same authentication scheme in a few other web apps of mine, some of which contain patients’ protected health information.)

Looking at the mechanism I had built, the system takes four pieces of information — the username, password, client IP address, and date and time of last site access — and combines them into a series of cookies sent back to the user’s browser in the following manner:

  • the username is put in its own cookie;
  • the username, password, and client IP address are combined and put through a one-way hash function to create an authentication token, a token that’s put into a second cookie;
  • finally, the date and time of the last site access is put into a third cookie.

To me, all four pieces of information represent the minimum set needed for reasonable site security. The username and password are obvious, since without them, anyone could gain access to the site. The client IP address is also important for web-based applications; it’s the insurance that prevents people from being able to use packet sniffers, grab someone else’s cookie as it crosses the network, and then use it to authenticate themselves without even having to know the password (a type of replay attack known as session hijacking). (This isn’t perfect, given the widespread use of networks hidden behind Network Address Translation as well as the feasibility of source IP address spoofing, but it’s a pretty high bar to set.) And finally, incorporating the date and time of a user’s last access allows me to implement a site timeout, preventing someone from scavenging a user’s old cookies and using them to access the site at a later time.
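In rough terms, the scheme boils down to something like this. It’s a minimal sketch rather than the production code, and the cookie names and choice of hash function here are just for illustration:

    import hashlib
    from datetime import datetime, timezone

    def build_auth_cookies(username: str, password: str, client_ip: str) -> dict:
        """Build the three cookies described above."""
        token = hashlib.sha1(f"{username}:{password}:{client_ip}".encode()).hexdigest()
        return {
            "auth_user": username,                                     # cookie 1: the username
            "auth_token": token,                                       # cookie 2: one-way hash of username + password + IP
            "auth_last_seen": datetime.now(timezone.utc).isoformat(),  # cookie 3: last site access
        }

    def check_auth_cookies(cookies: dict, stored_password: str, client_ip: str,
                           timeout_minutes: int = 30) -> bool:
        """Recompute the token and enforce the idle timeout."""
        expected = hashlib.sha1(
            f"{cookies['auth_user']}:{stored_password}:{client_ip}".encode()
        ).hexdigest()
        if cookies.get("auth_token") != expected:
            return False   # wrong password, or the request came from a different IP
        last_seen = datetime.fromisoformat(cookies["auth_last_seen"])
        age = datetime.now(timezone.utc) - last_seen
        return age.total_seconds() < timeout_minutes * 60

Note that the client IP address is baked right into the hash, which is exactly the detail that ends up mattering below.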

Looking at that system, I struggled to find the bit that might be preventing these two users from being able to log in at home. I already had a check to see if the user’s browser allowed cookies, so I knew that couldn’t be the problem. These same two users were able to log into the site using browsers at the hospital, so I knew that there wasn’t some issue with their user database entries. That left me with a bunch of weird ideas (like that their home browsers were performing odd text transformations between when they typed their login information and when the browser submitted it to the server, or that their browsers were somehow modifying the client IP address that was being seen by my application). None of that made any sense to me, until I got a late-night email from one of the two affected users containing an interesting data point. He related that he was continuing to have problems, and then was able to log in successfully by switching from AOL’s built-in web browser to Internet Explorer. (He has a broadband connection, and mentioned that his normal way of surfing the web is to log onto his AOL-over-broadband account and use the built-in AOL browser.) When the other affected user verified the same behavior for me, I was able to figure out what was going on.

It turns out that when someone surfs the web using the browser built into AOL’s desktop software, their requests don’t go directly from their desktop to the web servers. Instead, AOL has a series of proxy machines that sit on their network, and most client requests go through these machines. (This means that the web browser sends its request to a proxy server, which then requests the information from the distant web server, receives it back, and finally passes it on to the client.) The maddening thing is that during a single web surfing session, the traffic from a single client might go through dozens of different proxy servers, and this means that to one web server, that single client might appear to be coming from dozens of different IP addresses. And remembering that the client IP address is a static part of my authentication token, the changing IP address makes every token invalid, so the user is logged out of their session and returned to the login page.

Thinking about this, it hit me that there are precious few ways that an authentication scheme could play well with AOL’s method of providing web access. For example:

  • The scheme could just do away with a reliance on the client’s IP address; this, though, would mean that the site would be entirely susceptible to session hijacking.
  • The scheme could use a looser IP address check, checking only to make sure the client was in the same range of IP addresses from request to request; this would likewise open the site up to (a more limited scope of) session hijacking, and would be a completely arbitrary implementation of the idea that proxy requests will always take place within some generic range of IP addresses. (Of note, it appears this is how the popular web forum software phpBB has decided to deal with this same problem, only checking the first 24 bits of the IP address; see the sketch after this list.)
  • The scheme could replace its checks of the client IP address with checks of other random HTTP headers (like the User-Agent, the Accept-Charset, etc.); to me, though, any competent hacker wouldn’t just replay the cookie header, he would replay all the headers from the request, and would easily defeat this check without even knowing it.
  • Lastly, the scheme could get rid of the client IP address check but demand encryption of all its traffic (using secure HTTP); this would work great and prevent network capture of the cookies, but would require an HTTPS server and would demand that the people running the app spend money annually to get a security certificate, all just to work around AOL’s decision on how the web should work.
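To illustrate that second option, the “looser” check is nothing more than comparing the first 24 bits of two addresses (the addresses here are arbitrary documentation-range examples):

    import ipaddress

    def same_slash_24(ip_a: str, ip_b: str) -> bool:
        """True if both addresses fall in the same /24 network."""
        return ipaddress.ip_address(ip_b) in ipaddress.ip_network(f"{ip_a}/24", strict=False)

    print(same_slash_24("192.0.2.10", "192.0.2.200"))    # True  -- same /24
    print(same_slash_24("192.0.2.10", "198.51.100.7"))   # False -- different /24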

In the end, I added a preference to my scheme that allows any single application to decide on one of two behaviors, either completely rejecting clients that are coming through AOL proxy servers (not shockingly, the way that many others have decided to deal with the problem), or allowing them by lessening the security bar for them and them alone. I check whether a given client is coming from AOL via a two-pronged test: first, I check to see if the User-Agent string contains “AOL”, and if it does, I check to see if the client IP address is within the known blocks of AOL proxy servers. If the client is found to be an AOL proxy server, then (depending on the chosen behavior) I either return the user to the login page with a message that explains why his browser can’t connect to my app, or I build my authentication token without the client IP address and then pass the user into the application.
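In sketch form, that check looks something like the following; the network block listed is a documentation-range placeholder rather than a real AOL proxy range, since the real list has to be maintained by hand from published ranges:

    import hashlib
    import ipaddress

    # placeholder only -- 198.51.100.0/24 is a documentation range, not an AOL block
    AOL_PROXY_NETWORKS = [ipaddress.ip_network("198.51.100.0/24")]

    def is_aol_proxy(user_agent: str, client_ip: str) -> bool:
        """Two-pronged test: 'AOL' in the User-Agent, and the IP in a known proxy block."""
        if "AOL" not in (user_agent or ""):
            return False
        addr = ipaddress.ip_address(client_ip)
        return any(addr in net for net in AOL_PROXY_NETWORKS)

    def build_token(username: str, password: str, client_ip: str,
                    user_agent: str, allow_aol: bool = True):
        """Build the auth token, dropping the IP from the hash for AOL proxy clients."""
        if is_aol_proxy(user_agent, client_ip):
            if not allow_aol:
                return None    # caller bounces the user back to the login page
            client_ip = ""     # weaker token: no IP pinned into the hash
        return hashlib.sha1(f"{username}:{password}:{client_ip}".encode()).hexdigest()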

Finding myself in a situation where users were inexplicably unable to access one of my web apps was reasonably irritating, sure, but the end explanation was way more irritating. Now, I have to maintain a list of known AOL proxy servers in all my apps, and potentially, I have to get involved in teaching users how to bypass the AOL browser for access to any apps that require the stronger level of security. Of course, it’s also helped me understand the places where my authentication scheme can stand to be improved, and that’s not all that bad… but it still makes me want to punish AOL somehow.

Back in October, I wrote about some Bank of America customer deciding that he would use my Gmail account’s address as the destination for all of his online banking notices, and about how the BoA reps painstakingly claimed to not be able to do anything to deal with the error. The story ended OK, though — I gave them a second chance by calling back a few days later, and ended up getting a competent manager who found the right accountholder and then called him to ask him to correct his error. For two weeks or so, the notices stopped — but then they started right back up again, with the same last four digits of the account number. The realization that the same person put the wrong email address into his BoA account preferences a second time made my brain hurt, so I just put it on the back burner and hoped that it would sort itself out (ha, ha). Alas, they kept coming, so today, I called BoA again.

In contrast to that first phone call back in October, this time the company performed admirably. The first-tier rep understood how annoying this is and got me to his manager quickly (saying that he didn’t have the authority to browse the account database or cold-call customers). The manager spent a few minutes looking up every accountholder with the same first initial and last name as me (which corresponds to the format of the Gmail account), and in about four minutes, she had him. She promised that as soon as we hung up, she’d again contact him, and she’d also leave a detailed note in my account so that if this happens again, it won’t even take this long to handle.

As frustrating as bad customer service is, good customer service can be even more gratifying.

Oh, hallelujah — the Internet Explorer dev team has finally decided to fix a bug that’s almost always the cause of excruciating pain to me when I stumble over it, the famous <select> element that doesn’t allow accurate placement within a page’s layers. (For my reference as much as anyone else’s, I tend towards Joe King’s bug workaround, since I can implement it almost exclusively in Javascript, making it easy to peel out of the page when it’s no longer needed.)

About a month ago, I started getting mail to my Gmail account from Bank of America that contained a bunch of information about bank account deposits, withdrawals, and balances. Trouble is, it isn’t my bank account; all the emails just say something to the effect of “This is an alert for the account with the last four digits XXXX,” and then tell me to log into my online banking account to see details about the alert. The emails come to the tune of one or two a day, and have nothing in them to indicate how I can let Bank of America know that some accountholder put the wrong email address into their preferences.

Tonight, I called BoA’s online banking customer service department and explained the issue to them. The woman put me on hold for a few minutes, and then came back to tell me that there’s nothing they can do about it, and that I should “just ignore the emails.” I was a little incredulous, and asked if they really don’t have a way to search their database by email address, figure out the accountholder, and contact them to let them know their error, and she said that that was all true — the only way they can search their records is by account number. I asked for her supervisor, who came to the phone and repeated their inability to do anything at all. I reiterated that I had the last four digits of the account number, and she said that there was still nothing they could do. She recommended that I just delete the emails, and hope that the owner of the bank account comes to realize his or her error.

Now, being a database programmer, I know that she’s wrong, and that there’s certainly someone within the BoA system who has the ability to search their database by email address. (For example, if an investigator from the Department of Homeland Security called them and told them that they had intercepted a suspicious email, would they really send the DHS rep packing?) What makes me sad is that they’re just plain unwilling to try. When I explained that we have five accounts with BoA, it didn’t make any difference; when I explained that it was hard to justify continuing to use a bank that was so unwilling to try to do the right thing, it made an equal amount of zero difference. So now I’m forced to resort to reporting the emails to Gmail as spam (which they really are, since they’re unsolicited email that I’ve tried to put a stop to by contacting the originator), and writing a letter to the (un-emailable, un-faxable) escalation department at BoA to see if anyone there realizes the stupidity of this. And when we eventually leave Boston, we’ll see whether BoA retains our business…
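And just to underline how trivial the “impossible” lookup is: against any sane schema, it’s a single query. A toy sketch, obviously nothing like BoA’s actual database:

    import sqlite3

    # toy in-memory schema; the point is just that "find the account holder by
    # alert e-mail" is one query
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE accounts (account_number TEXT, holder_name TEXT, alert_email TEXT)")
    db.execute("INSERT INTO accounts VALUES ('....1234', 'J. Doe', 'someone@example.com')")

    rows = db.execute(
        "SELECT account_number, holder_name FROM accounts WHERE alert_email = ?",
        ("someone@example.com",),
    ).fetchall()
    print(rows)   # [('....1234', 'J. Doe')]

That’s it. That’s the whole “search.”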

Warning: this might be one of my geekiest entries in a while. If you don’t care one whit about such things as internet protocols and development bugs, you’ll either want to ignore it, or read it and mock me.

After realizing that the two torrents I posted over the past 24 hours didn’t work at all, I started digging into the BitTorrent tracker I use (BlogTorrent) to see what the problem could be. After a lot of excavation, it turns out that BlogTorrent (and Broadcast Machine, its more mature cousin) is a little deaf to one of its configuration parameters, a parameter that’s rarely used and safely ignored in a lot of setups, but that’s nonetheless both important and mandatory on network setups like mine. I posted a bug on the SourceForge page for the tracker, so we’ll see what happens.

All this being said, I have to admit that I didn’t find it very easy to debug the entire BitTorrent conversation, which made it much harder to work out where the problem was located. BitTorrent itself uses a dead simple protocol, but I’m underwhelmed by the information about peers, seeding, and the like that’s available from BlogTorrent and the standard BitTorrent clients. I had to do a lot of webserver log-watching and netstat dumps to figure out what was broken; I’d love to find a tracker which provides enough data so that I don’t need to repeat the performance.
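What I really wanted was a way to just ask the tracker what it was telling clients. Here’s a rough sketch of that kind of probe, using only the standard library; the peer ID is a placeholder, and you need the torrent’s 20-byte info hash in hand. The response typically includes the announce interval and the peer list, and often seeder and leecher counts as well.

    import urllib.parse, urllib.request

    def bdecode(data: bytes, i: int = 0):
        """Minimal bencode decoder; returns (value, next_index)."""
        c = data[i:i+1]
        if c == b'i':                        # integer: i<digits>e
            e = data.index(b'e', i)
            return int(data[i+1:e]), e + 1
        if c == b'l':                        # list: l<items>e
            i += 1
            out = []
            while data[i:i+1] != b'e':
                v, i = bdecode(data, i)
                out.append(v)
            return out, i + 1
        if c == b'd':                        # dict: d<pairs>e
            i += 1
            out = {}
            while data[i:i+1] != b'e':
                k, i = bdecode(data, i)
                v, i = bdecode(data, i)
                out[k] = v
            return out, i + 1
        colon = data.index(b':', i)          # string: <length>:<bytes>
        length = int(data[i:colon])
        start = colon + 1
        return data[start:start+length], start + length

    def announce(tracker_url: str, info_hash: bytes,
                 peer_id: bytes = b'-XX0001-123456789012', port: int = 6881) -> dict:
        """Send a minimal announce request and return the decoded tracker response."""
        params = {
            'info_hash': info_hash,   # 20 raw bytes: SHA-1 of the torrent's bencoded info dict
            'peer_id': peer_id,
            'port': port,
            'uploaded': 0,
            'downloaded': 0,
            'left': 1,                # pretend we still need data so peers are returned
            'compact': 1,
        }
        url = tracker_url + '?' + urllib.parse.urlencode(params)
        with urllib.request.urlopen(url) as resp:
            value, _ = bdecode(resp.read())
        return value

    # usage: announce("http://tracker.example.com/announce", info_hash_bytes)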