On the Soapbox

Welcome to McDonaldΓÇÖs; would you like some mangling with your encoding?

Saturday, May 7, 2011
Keywords: Technology

As a programmer, I see the world a bit differently, which is why the error on this McDonald's receipt caught my attention:

This, of course, is what happens when U+2019 (the right single quotation mark) is encoded in UTF-8 and then decoded using the IBM "OEM" code page (CP437). My guess is that the machine used to print the receipt is too simple (and/or old) to recognize UTF-8. However, if you look at the receipt's date, it does print apostrophes correctly, with the desired directionality. Which brings us to their second mistake: the right single quotation mark is semantically incorrect* here, because for possessives, the correct punctuation is an apostrophe (U+0027). Since the apostrophe falls within the 7-bit ASCII range, it is encoded identically in UTF-8 and CP437 (and many other encodings) and is generally immune to encoding mangling. This is why the apostrophe prints correctly, but the right single quotation mark does not.
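For anyone who wants to reproduce the mangling, here is a minimal sketch (Python 3; the greeting string is just an example) of the round trip described above: the three UTF-8 bytes of U+2019 come out as ΓÇÖ under CP437, while the plain ASCII apostrophe survives untouched.

```python
# Sketch of the mangling described above.
greeting = "McDonald\u2019s"                         # right single quotation mark
print(greeting.encode("utf-8").decode("cp437"))      # -> McDonaldΓÇÖs

# The 7-bit ASCII apostrophe is encoded identically in UTF-8 and CP437,
# so it survives the same round trip unchanged:
print("McDonald's".encode("utf-8").decode("cp437"))  # -> McDonald's
```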

So first, McDonald's used a semantically incorrect* punctuation mark (quotation instead of apostrophe) and then they sent that character, encoded in UTF-8, to a machine that apparently only understands CP437. You would think that one of the biggest companies in the world would be more technically competent than this.

I suspect, however, that another major company may share in part of this blame: Microsoft. Specifically, their word processor auto-"corrects" apostrophes into curly single quotation marks (U+2019). While this "correction" is semantically correct if the apostrophes were being used as quotes, it is not* when the apostrophes are actually used as, well, apostrophes. It is plausible that someone used Word to draft possible greeting texts for the receipts and then copied and pasted them into the receipt system.

________________
* Yes, I know that the Unicode Consortium sanctions the use of U+2019 as an apostrophe. No, I do not agree with their position on that.

This entry was edited on 2011/05/07 at 02:06:11 GMT -0500.

Two tales of "smart" software gone awry

Friday, July 3, 2009
Keywords: Technology

Don't mess with my data

In an era when our lives are increasingly defined by and stored as data, it's important to keep that data safe. Backups are not always enough, because before you can use a backup to recover when something goes wrong, you first need to know that something has gone wrong. While the catastrophic death of a hard drive is hard to miss, subtler forms of corruption can be much more difficult to detect.

A couple of years ago, one of the sticks of memory in my desktop computer developed a defect. It was a very, very small defect: a single bit (not even a full byte) had gone bad. While one bit does not sound like much, it can still be problematic, especially when dealing with compressed data, in which even one bit of corruption has the potential to ruin all of the data that comes after it. But since this was just one bit that had gone bad, it was not readily apparent that there was a problem. The effect of a bit of bad memory depends on what that section of memory is currently being used for, which means that the effect of this bad bit could change frequently as memory got shuffled around and reallocated to various tasks. Most of the time, there was no discernible effect. Sometimes, there might be minor glitches in software that could easily be misattributed to a programming bug. And sometimes, such as when that piece of memory was used as part of a file buffer, it could corrupt a file on the disk, which, depending on the type of the file, could easily go unnoticed.
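To see why compressed data is so fragile, here is a small illustration (a sketch, not anything from the incident itself): flip a single bit in a zlib-compressed buffer and decompression typically fails outright; at best, everything after the damaged point comes out garbled.

```python
# Flip one bit in a compressed stream and watch the damage propagate.
import zlib

original = b"The quick brown fox jumps over the lazy dog. " * 100
damaged = bytearray(zlib.compress(original))
damaged[10] ^= 0x01                      # a single flipped bit mid-stream
try:
    zlib.decompress(bytes(damaged))
except zlib.error as exc:                # invalid code or failed checksum
    print("decompression failed:", exc)
```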

Fortunately, I had a habit of hashing important files and periodically checking those hashes, as a sort of trip wire to alert me to potential problems with data integrity. When I started getting hash mismatches, I knew that something was wrong. I had originally suspected a faulty disk, but when I considered that I had also run into a small handful of minor but odd software glitches, I decided to test my memory, which was how I discovered the defect. Without my practice of hashing important files, it would almost certainly have taken me months longer to realize that there was a problem.
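For the curious, here is a minimal sketch of this kind of trip wire (the manifest filename and the "music" directory below are just placeholders): it records SHA-256 hashes of a directory tree and reports any file whose contents later change.

```python
# A minimal file-integrity trip wire: build a hash manifest, then re-check it.
import hashlib, json, pathlib

MANIFEST = pathlib.Path("hashes.json")   # hypothetical manifest location

def sha256(path):
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def snapshot(root):
    return {str(p): sha256(p) for p in pathlib.Path(root).rglob("*") if p.is_file()}

def check(root):
    old = json.loads(MANIFEST.read_text())
    for path, digest in snapshot(root).items():
        if path in old and old[path] != digest:
            print("HASH MISMATCH:", path)

# First run:  MANIFEST.write_text(json.dumps(snapshot("music")))
# Later runs: check("music")
```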

This afternoon, when I checked the hashes of my music files, I was a little alarmed to see 20 hash mismatches. I quickly noticed that only two folders were affected: every file within those two folders failed the check, while every file outside of them was fine; this obviously did not look like random data corruption. I recalled that, while testing Windows 7 RC on my desktop computer, I had played these two folders of music (which were located on my laptop; this was done over the network) using Windows Media Player (I usually use Winamp to play my music). Comparing the altered files with my backup revealed that WMP had altered the files' metadata (stuff like tags, ratings, etc.).

Now, I understand that WMP is just trying to be "smart" and "help" me organize my music, but I already have my music organized just the way I want it using the file system hierarchy, thank you very much. And most importantly, I didn't ask for its help. I didn't tag the music, I didn't click any of those silly rating stars, I didn't do anything beyond dropping a couple of folders into WMP and hitting the play button. The expectation should be that when a user opens/reads/plays some file, it should be a read-only operation: in other words, "don't mess with my data!" Only when the user is explicitly editing the file or its associated metadata should programs open a file in anything other than read-only mode.

Among the many problems raised by this "smart" behavior is excess disk activity. The metadata that WMP altered was located at the start of each file, which meant that any change to the size of the metadata would require rereading and rewriting the entire file to disk. That is an expensive operation when done locally, and a downright idiotic thing to do unnecessarily over a network, where, on top of the disk cost, you also incur the even more expensive cost of shooting the entire file over the network and back again.* This "smart" behavior also altered the timestamp of each file, which ruins the ability to search for and identify files based on timestamps. And, of course, it erodes my ability to detect file integrity problems by altering files and thus raising false alarms.
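To make the cost concrete, here is a hypothetical sketch (not what WMP actually does internally) of what rewriting a file with a grown leading tag entails: because a file system cannot insert bytes at the front of a file, every byte after the tag has to be read and written out again, and the file's modification timestamp changes as a side effect.

```python
# Growing a tag at the start of a file forces a full rewrite of the file.
import os, shutil, tempfile

def grow_leading_tag(path, old_tag_size, new_tag):
    directory = os.path.dirname(os.path.abspath(path))
    with open(path, "rb") as src, tempfile.NamedTemporaryFile(
            "wb", delete=False, dir=directory) as dst:
        dst.write(new_tag)                 # the enlarged metadata block
        src.seek(old_tag_size)             # skip the old tag...
        shutil.copyfileobj(src, dst)       # ...and re-copy every remaining byte
        temp_name = dst.name
    os.replace(temp_name, path)            # new file, new modification time
```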

An unwanted wake-up call

While testing Windows 7 RC on my old desktop, I ran into a problem where the system would not stay in hibernation. I would hibernate the system, and within minutes, it would boot itself back up. I quickly discovered that this was a problem with Wake-on-LAN (WOL). The default setting for the network driver in Windows XP was to wake only on a magic packet, while the default for the driver in Windows 7 was to wake on any directed packet, which was problematic since the brain-dead router provided by the ISP was probing machines on the network every few minutes. This problem was easily diagnosed and fixed by setting the WOL to respond only to magic packets.
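For context, a "magic packet" is nothing more than six 0xFF bytes followed by the target's MAC address repeated sixteen times, usually broadcast over UDP; the sketch below (with a made-up MAC address) shows roughly what such a packet looks like, which is why waking only on magic packets is far more restrictive than waking on any directed packet.

```python
# Build and broadcast a Wake-on-LAN magic packet (the MAC address is made up).
import socket

def send_magic_packet(mac="00:11:22:33:44:55", port=9):
    mac_bytes = bytes.fromhex(mac.replace(":", ""))
    packet = b"\xff" * 6 + mac_bytes * 16          # the entire "magic" format
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, ("255.255.255.255", port))
```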

That night, shortly after I had gone to bed, my desktop booted itself up. I crawled out of bed, disabled the WOL completely, and hibernated the system again; having just dealt with the system inappropriately waking from WOL, I had incorrectly assumed that this was somehow related to that. Then it happened again the next night. Increasingly frustrated at the system waking up, I disconnected the network cable the third night and double-checked to make sure that I did not have any system wakeups scheduled in the system BIOS and that Windows Update was indeed set to my usual setting of manual. I also checked the operating system's logs, which did log the wakeup, but unhelpfully noted that the source of the wakeup was "Unknown". Still, it happened again the next night. This time, I noticed my bedside clock when the system woke up: it was half past the hour. That's an unusually round time for the system to be waking up randomly, I thought. I examined the operating system's logs, and while they unhelpfully indicated that the source was "Unknown", they did contain one very important piece of information: the system woke up at exactly half past the hour (well, it was off by 0.3 seconds, but that's close enough). These wakeups are almost certainly scheduled.

Having already ruled out Windows Update and BIOS-scheduled wakeups, I turned to the Task Scheduler service, which I always have disabled on Windows XP (in Windows 7, it is not possible to disable it, in part because it now plays a far more prominent role than it did back in the days of NT 5.x). I was shocked to see the enormous list of items in the task scheduler; it took a long time for me to wade through all of them. Some were triggered events along the lines of "do A if B happens", and some were scheduled events along the lines of "do A at 10 AM every day". Almost every one of the scheduled events was an instance of Windows trying to be "smart" and "helpful" and performing periodic self-diagnostics, checks, updates, etc. I got trigger-happy and purged well over half of the items in the task scheduler, including virtually all of the scheduled events; almost none of them were necessary or even particularly useful. Among those purged were two events (or was it just one? I don't recall the specifics) that were scheduled for the time when the computer was waking up. After purging the task scheduler, I never experienced the late-night wakeup problem again.

It should be noted, however, that Microsoft did not intend for the system to be woken up for these mundane tasks: while events have the option of waking the computer from sleep or hibernation, none of these events were configured to exercise that option: they were configured to fire only if the computer was awake. So I had encountered what appears to be a bug with the task scheduler (well, this is pre-release software, after all). Nevertheless, this highlights the pitfalls of a "smart" system that operates on autopilot. I can understand the usefulness for the average computer user who would appreciate the system looking after itself, but I have always preferred keeping the reins in my hands.

________________
* Some readers may wonder why WMP does not simply make use of file system streams. After all, storing metadata is one of the purposes of file system streams. The problem with out-of-file storage is that it requires file system support, which means that such metadata would be lost if a file is transmitted over the Internet, burned to a CD, or copied to a USB drive that uses FAT (many devices can't afford the overhead of anything but the simplest of file systems, which is why FAT is still so common).
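As a brief illustration, here is a sketch (assuming Windows, an NTFS volume, a hypothetical "rating" stream, and a hypothetical song.mp3) of what storing metadata in an alternate data stream looks like; copy that file to a FAT drive or send it over the Internet and the stream, along with the metadata inside it, is silently dropped.

```python
# NTFS alternate data streams: metadata stored beside, not inside, the file.
# Assumes Windows and NTFS; "rating" is an arbitrary stream name.
with open(r"song.mp3:rating", "w", encoding="utf-8") as stream:
    stream.write("5 stars")

with open(r"song.mp3:rating", encoding="utf-8") as stream:
    print(stream.read())    # -> 5 stars (only as long as the file stays on NTFS)
```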

Why Google's Chrome Ads are Worrisome

Saturday, May 9, 2009
Keywords: Technology

Google has announced that they will begin airing TV ads for their Chrome web browser. I think that this is the first time since the original Netscape-IE browser wars a decade ago that there has been a browser TV ad.

And all this coming from a company that prided itself on having grown by word-of-mouth; unlike Yahoo!, Excite or MSN, Google did not advertise itself until it had already become established, and even then, its use of advertisements has been very limited. So Google's aggressive marketing of Chrome stands out, both because it has been so long since browsers were advertised on TV and because it is in such stark contrast to how Google normally operates.

So what happened to the viral word-of-mouth of a truly great and exciting product? What happened to the grassroots nature of the so-called "Web 2.0"? A TV ad for an Internet product is blatant astroturfing. When Mozilla took out a full-page ad in the New York Times for the Firefox 1.0 release, it was funded by user donations—it was as grassroots as you could get. Could it be that Chrome is not living up to Google's hype?

Chrome's multi-process model was supposed to eliminate the problems of memory leakage and instability, but it only served as a poor cover-up for Chrome's high resource usage and instability. Despite the initial statements otherwise, Chrome never actually had per-tab processes, and tabs would often get grouped into the same process, and processes would often get reused. The end result was a leaky Chrome whose multi-process model did very little except contribute to the already-high resource footprint. Aside from the fancy UI effects, Chrome turned out to be all looks and no substance. After having seen just how poorly Chrome performed on my old 800 MHz laptop (I use it as a tool to stress-test application performance; Firefox 3, BTW, passed with flying colors), I can't help but wonder if these campaigns are an attempt to compensate for Chrome's lack of shine. When I was a gullible, naïve little kid, I was told that companies that advertise are those who need to compensate for poor products; while this is, of course, not true in many cases, I can't help but wonder if Chrome fits this profile.

PS: Another, more conspiratorial possibility is that Google is simply desperate to dominate the browser market so that it can tighten its grip on how people access the Internet. In that case, this is particularly worrying because, of Google's competitors, only IE and Safari (which shares the same layout engine as Chrome) are independently funded. Both Mozilla and Opera depend on Google for funding, and Google's attempts to muscle Chrome into the market do not bode well for them or for the browser market as a whole.

PPS: Performance issues aside, another reason why Chrome has had trouble catching on is the lack of extensibility and an extensibility community. I could not live without the QuickDrag extension, for example, and while Mozilla's own stats indicate that many users do not use addons, among the people who matter—the tech-savvy people who strongly influence the product choices of their less tech-literate family and friends—addons are very important, and the loss of even a single favorite addon can be a deal-breaker.

This entry was edited on 2009/05/09 at 03:32:15 GMT -0400.

From Windows Me to Windows 7: Running 6.1.7000 on an Inspiron 2500

Sunday, February 8, 2009
Keywords: Technology

I recently decided, on a complete whim, to install the Windows 7 public beta on my old laptop (named Badlands). Although I no longer use Badlands as much as I used to, I still use it for testing, to make sure that things that I write use the CPU and other system resources efficiently (since inefficiencies are easier to spot when you are running on older, less forgiving hardware) and sometimes to see, out of curiosity, how modern apps fare on legacy hardware (notably, I was surprised to discover just how poorly Chrome performed and how unusable it was, despite all of Google's hubbub about their attention to performance; Firefox, on the other hand, performed superbly, with the exception of a hungry throbber).

So I got the wild idea to try out Windows 7 (which I had previously been playing with in a VPC and on a dual-boot partition of a live machine) on Badlands to see just how well it'll work, and to my great surprise, things actually worked out quite well... but first, a little bit of background about Badlands.

Badlands is a Dell Inspiron 2500 that I got back in 2001. It's so old that the operating system that was installed when I first got the machine was Windows Me. Here are the specs:

  • CPU: 800 MHz Celeron (180nm Coppermine core) - In addition to having a low clock speed, it's an outdated design with an IPC (instructions per clock) that's fairly low compared to that of modern processors
  • Memory: 512 MB of PC100 SDRAM - The memory capacity was upgraded a couple of times through the life of the machine; PC100 memory has only a small fraction of the memory bandwidth/speed of newer memory technologies
  • Video: Onboard 82815 graphics - Just 4 MB of video memory, and no support for 32-bit color mode; don't even think about 3D
  • HDD: 40 GB, 4200 RPM - The original hard drive died years ago, and this is the replacement.

I wasn't expecting Windows 7 to even install--I was fully prepared to see a message informing me that the installation cannot proceed due to inadequate hardware. But it did proceed, although there was an early sign of the troubles that lay ahead: the installer was running in 8-bit color mode, which told me that Windows probably did not have a driver for the graphics chipset. With the exception of everything looking extremely ugly (since the installer was designed for at least 16-bit color mode), the installation itself was smooth and uneventful.

After install, I was greeted by a cramped, unusable 640x480 display. At least it was using 16-bit color now, but 640x480 looked horrible, not only because there was too little screen space to do much of anything, but because it was not the native 1024x768 resolution, and we all know how ugly LCDs look when they are not running at their native resolution. Also, the sound driver was missing. And my wireless adapter refused to work. Oh, and the Ethernet driver was missing too. How fun.

Without any sort of network connectivity, I was forced to dust off my USB key. Intel does not provide NT 6.x drivers for the 82815 video chipset. The last driver that they provide is for Windows 2000 (it can also work on Windows XP, but since XP already includes the driver, the only people downloading it were Windows 2000 users), and it dates back to early 2002. As expected, the driver's installer refused to run, so I extracted the contents of the driver and installed it manually through the Windows device manager. I was not expecting this to work, and it did not. After some Googling, I found that someone had gotten 82815 graphics to work under Vista by using an older driver. So I downloaded an older driver from 2001 from Dell's website, and, after a bit of wrangling to get it installed, it worked (much to my surprise). So now I was running at a much more comfortable native 1024x768 resolution in 24-bit color mode. Things looked pretty (well, as pretty as they can be without Aero).

Next, I downloaded Intel's Ethernet drivers and manually installed them through the device manager (I never liked the "helpful" applications that Intel installers like to put on your system). Finally, network connectivity!

The next goal was to get wireless networking to work. I verified and re-verified that the drivers that were installed were correct. I even ran the diagnostics (not that I expected any sort of automatic diagnostics to be of any help, but I was desperate and out of options). I eventually gave up and decided to ignore the issue for the time being.

Next was sound. This was easy, since Windows Update found a driver for the laptop's integrated sound system (no, it didn't find any new drivers for the wireless; I had already checked). I told Windows Update to install the driver, and the instant the sound system came on, the adapter started connecting to my network. As it turns out, the lack of a sound driver was somehow interfering with the function of the wireless adapter. I don't know how or why (nor have I ever encountered anything quite like it), but it was good to have the wireless adapter finally working.

Now with all of the hardware issues taken care of, it was time to actually use the system. I started installing software, including the latest Firefox trunk nightly. After the initial hurdles of resolving the hardware issues, everything else worked without a hitch. Of course, it wasn't as responsive as XP. But it was comparable. And, to my pleasant surprise, it was definitely usable. Not bad, considering that this is hardware that was never intended to support or run Windows 7 and that I was not expecting the installation to even succeed.

Overall, I was very impressed by how well Windows 7 performed on this old system. No, it was not the smoothest experience, but it was acceptable (and surprisingly, it was a bit better, with respect to perceived performance and responsiveness, than Ubuntu 8.04, which I briefly used on Badlands several months ago). The fact that an old video driver from 2001, designed for an older generation of Windows (with a very different driver model), worked on Windows 7 is also a testament to one of the most important things that makes Windows dominant: its attention to backwards compatibility and to the overall Windows ecosystem. You can still run a number of old DOS-era programs on the 32-bit version of Windows 7; is there any other modern OS where this would be possible without recompilation? The only reason Mac OS X is able to "innovate" is because Apple has no qualms about saying "screw you" to its supporting ecosystem--which is also easier for them to do because their small market share means that their ecosystem is tiny to begin with. As much as I love open source, it must be said that one of the major reasons why Linux still remains a niche is because Windows tends to and cultivates its ecosystem (and heaping the blame on Microsoft as some sort of oppressor serves as nothing more than a means to shrug off responsibility).

PS: In case anyone is curious, the WEI for this setup was 1.2/CPU, 1.4/RAM, 1.0/Gfx, and 3.5/HDD.

Edit: Added WEI scores and fixed typo: I meant to say "without recompilation", not "with".

This entry was edited on 2009/03/15 at 19:49:17 GMT -0500.

Will Yahoo! Messenger use XMPP?

Friday, June 13, 2008
Keywords: Technology, Jabber

From the Yahoo! press release announcing that they are outsourcing search advertising to Google:

In addition, Yahoo! and Google agreed to enable interoperability between their respective instant messaging services, bringing easier and broader communication to users.

Gee, doesn't this sound familiar? Here's to hoping that this attempt at interoperability works better than the Google-AIM one and that this brings another player to XMPP (Yahoo! seems like it would be more supportive of open standards like XMPP than AOL-TimeWarner).

This entry was edited on 2008/06/13 at 01:18:38 GMT -0400.

What makes Firefox 3 great?

Monday, May 12, 2008
Keywords: Technology

Last night, the code for Firefox 3 RC1 was handed off for building, and as I type this, the first candidate builds for RC1 are being spun. I've been using the nightly builds on my test machine (actually, a VPC) for a while now. At first, it was just meant as a test installation so that I could get a feel for what the upcoming browser would be, but now I find that I'm doing more browsing on that test install in a VPC than I am on my Firefox 2 installs on the real computer. That I'm using the Firefox 3 nightlies more than Firefox 2 despite it being run in a virtual machine is a testament, I think, to how great the new version is. So what makes it so great?

  1. Places: the new SQLite-based storage of bookmarks and history is much faster and allows for cool new things like the new location bar. I must admit, like many users, I hated the new location bar at first. It took a bit of getting used to and some adjustment in how I used the location bar, but now, I find it to be utterly indispensable, and it is the primary reason why I am using Firefox 3 more than Firefox 2.
  2. Firefox 3 is noticeably faster and more responsive.
  3. As a result of improvements such as the use of jemalloc and a new garbage collector, Firefox 3 uses less memory.
  4. The new graphics backend offers various benefits, such as the smooth scaling of images.
  5. Firefox 3 strives to appear more native, so it fits in better with the OS that it is running on. Of special interest to me, Firefox 3 looks better than Firefox 2 on Windows Classic.
  6. The download manager has been improved. First, it no longer uses RDF and thus doesn't suffer from slowdowns when the list gets too long (the use of RDF was a perfect illustration of how too many people today are mis-using XML for things for which XML is an insanely bad idea). Second, it allows download resumption. Third, it shows a general status indicator (# of downloads and est. time remaining) in the browser's status bar so that you can keep the manager closed and still keep track.
  7. The remember-password prompt has been redesigned so that you can choose to remember the password after you have successfully logged in.

Sensible Product Naming

Monday, May 12, 2008
Keywords: Technology

Back in 2004, when the Firebird browser ran into naming difficulties and a search for a new name was initiated, I quietly wished for what I knew was a nearly-impossible outcome: that AOL would relinquish all claims to the Netscape name and donate that name to the Mozilla Foundation, so that it could be used for the name of their new browser. After all, Netscape is a browser's name, and if AOL was no longer in the browser business, why would it keep the name?

There were several reasons I wished that Firebird would be rebranded as Netscape. First, the Netscape logo has always been pretty. The elegant N on a starry background and a ship's steering wheel superimposed with the constellations were beautiful and evocative of the idea of "exploring" the Internet--I remember using Netscape 1.0 and how much that branding imagery colored my initial experience of browsing the web. Second, it would be poetic, for Firebird was originally named Phoenix, and if it could be named Netscape, then that would allow it to truly live up to the intent of its name. And most importantly, it was a name that made sense. "Netscape Navigator" gave some hint at what the product does: it navigates the net.

Back in 2003, when Mozilla announced that Firebird would become their new flagship product, they also announced that the final product name would be "Mozilla Browser", and that Firebird was just the project's temporary codename. Branding discussions from that time talked about the need to reinforce the "Mozilla" name (since Mozilla's first objective has always been the Gecko platform) and the need for clarity about what the product does. The "Mozilla Firebird" name doesn't give anyone any clue whatsoever as to the nature of the product. That does not matter for people who are already familiar with the product, but given that Mozilla is the underdog trying to claw its way up, a name like that made little sense at the time, and even today, it still makes little sense.

But I suppose this was all in line with modern marketing styles, where things are given names that have absolutely nothing to do with the function or purpose of the product. Who would have guessed that "Song" was an airline? Outside of the tech-savvy minority, who the heck has any clue what "Twitter", "dodgeball", or "del.icio.us" are? On the other hand, "Facebook" and "MySpace" have names that at least hint at what it is that they do. Similarly, Microsoft and Google are consistent with their naming: "Microsoft Word", "Windows Media Player", "Google Earth", and "Gmail/Google Mail" are all examples of products whose use of a generic product name helps shift the emphasis to the parent brand name and clarifies what the product itself is. "Mozilla Browser", "Mozilla Navigator", and "Netscape Navigator" are names that would follow that same pattern of sensible product names.

But alas, to my horror, it was announced in 2004 that the official product name for the Firebird project would be "Mozilla Firefox". This was wrong in so many ways. The initial reaction upon hearing that name from many people then (and still today) is, "what the heck is a firefox?!" The name was appropriate for a code name or for an inside joke, but not for a product name. It offered no clue as to what it did. It adds an extra step to the evangelism and marketing of the product because you must first explain to someone that "Firefox" is a web browser. I was also disappointed because I saw Phoenix as the rebirth of the Netscape lineage, and now the final name had nothing to do with Netscape or Phoenix/Firebird, and the metaphor was, sadly, lost.

Epilogue: With the initial success of Firefox, AOL resurrected the Netscape brand and released a couple of browsers based on Firefox bearing the Netscape name, but these releases played second-fiddle to Firefox, and Netscape slid further into obscurity. Earlier this year, the Netscape brand was closed for good. RIP...

Why Microsoft should not buy Yahoo!

Friday, February 1, 2008
Keywords: Technology

Microsoft made an unsolicited $44.6 billion bid to buy Yahoo! today. This was not unexpected, as there have been discussions, rumors and speculation of a Microsoft buyout of Yahoo! since 2005, but it is very surprising that Microsoft would actually pursue such a course of action.

At $44.6 billion, this would be by far the most expensive acquisition in Microsoft's history and is a 62% premium over the current stock price. Is a sinking company like Yahoo! worth this price? The biggest problem with this acquisition is that neither company has much to offer the other. Does Microsoft really need two AJAX e-mail services that emulate a desktop app interface? Does Microsoft really need two search engines? Or two map services? Or two instant messaging systems (both of which have already been interoperable with each other for some time now)? Technologically, Yahoo! has very little to offer Microsoft--much of what Yahoo! has, Microsoft has as well. There would be very little, if anything, to be gained from merging Microsoft and Yahoo! technologies and software, and such a move would also be extremely costly and complicated. What about talent? Microsoft has no lack of good talent, and if stories like this one about the Vista development process are any indication, Microsoft's problem isn't a lack of talent, but an organization that impedes the effectiveness of talent, in which case, how would a larger overall company and a Yahoo! demoted from independence to corporate division help with this problem? This leaves us with Yahoo!'s user base of around 130 million users per month (as of a year ago, in December 2006). Is each Yahoo! user really worth over $340? Especially since Yahoo! is a company in stagnation or even decline? How would Microsoft grow Yahoo!'s user base? And vice versa?

Ultimately, this is a terrible idea. Steve Ballmer and Microsoft's board are yahoos for proposing what could very well turn into the next AOL-Netscape. On the other hand, at a 62% premium for a fading company, Yahoo!'s shareholders would be stupid to not accept such a lucrative offer. So unless Microsoft gains some sense and pulls out of this one, this disaster is a fait accompli.

This entry was edited on 2008/02/01 at 10:42:09 GMT -0500.

Finally, AIM+XMPP

Friday, January 18, 2008
Keywords: Technology, Jabber

It's about time! Unfortunately, federation hasn't been enabled yet. Why this matters.

Where did WGA Notifications go?

Monday, July 16, 2007
Keywords: Technology

Last Thursday, as I was checking something on Microsoft Update, I noticed that I was no longer being offered WGA Notifications in the critical updates list. That's very odd, I thought. I didn't tell MU to hide the item, nor have I installed it. I switched to another computer and fired up Microsoft Update. Still nothing. In the past, Microsoft has withheld certain updates from certain locales (in fact, WGAN itself was rolled out at different times in different regions), so I wondered if, for some reason, Microsoft had disabled the offering of WGAN to certain groups of users. To check for that, I searched the update catalog for WGAN by name and by the KB number. Nothing. It wasn't all that long ago when I last saw WGAN being listed in the update catalog. Of course, people can still find it at its MSKB article and in the Microsoft Download Center, but it's through Automatic Updates and Microsoft Update that Microsoft shovels WGAN onto people's computers, and it's no longer there.

Of course, I don't care much about WGAN, nor do I care much about whether or not Microsoft is shoveling it onto people's computers. But I was interested to see what the reaction was online, so I looked around. Nothing. Nothing? Yes, I meant nothing. Not even on the once-contentious talk page of the WGA Wikipedia entry. Ever since Thursday, I've searched Google Blog Search and Technorati to see what people had to say about it. And here's the amusing thing: all my searches for recent posts about WGA Notifications turned up only rants and tirades against how evil WGAN is, about how evil Windows is, about how evil Microsoft is (though sadly, none of them recognized Jobs' much more pronounced monopolistic ambitions), etc., etc. Not that I have much love for WGAN either, but it struck me as rather odd (and amusing) that all these people ranting about how annoyed they are at WGAN never picked up on the fact that for the past several days, WGAN was missing from its primary distribution channel!

So what happened? I have no idea. Perhaps they temporarily pulled it in preparation for an update? But it is very unlike them to retract downloads prior to an update. Or have they finally recognized the cost of the bad PR and are quietly shelving it? Or is this just a temporary glitch in Microsoft Update? Guess we'll find out in time...

I really should use the MSKB more...

Saturday, July 7, 2007
Keywords: Technology

Normally, when I run into some problem or some sort of quirky behavior with some piece of software, I'll just curse at it, get frustrated, get annoyed, and then accept that it's just how it is. Unless the problem is fairly disruptive, blatantly a bug, or obviously something that I can fix myself, I usually don't give it much thought after the obligatory round of grumbling.

But every now and then, I do get frustrated enough at the minor quirks that I end up trying to fix them, and if it's a problem with Windows, then that means a trip to the Microsoft Knowledge Base (MSKB). It's a treasure trove, and over the years, I have almost always found a KB article that addresses the exact issue that I experienced. Yet, despite this, I have never gotten into the habit of using the MSKB. Whenever I encounter a minor quirk, I tend to automatically accept it as just a part of life, and it often doesn't even occur to me that there may be a fix for this minor thing; worst of all, this dismissal happens automatically, without me even realizing it.

Anyway, to save power and to reduce the amount of heat produced, I often put my laptop in a sleep mode if I'm not using it for an extended period of time. The problem with this is that when the laptop comes out of sleep, the network connection sometimes behaves strangely for a short while (less than a minute), and once I bring the laptop out of sleep, I can't put it back to sleep immediately; I have to wait a little bit before I can do that. And all this time, it never occurred to me that this is a problem that could be fixed; I had just thought that this was a natural and normal part of sleeping and waking the laptop and that perhaps the OS needs to perform some tasks after being reactivated from sleep. Well, as it turns out, KB308467 addresses this exact issue. It offers an explanation of why it happens and how to work around it. And now, I can sleep and wake my laptop without any quirky network behavior, and the process is now virtually instantaneous: it takes about a second to put it to sleep and about a second to wake it up. Neat, huh?

And the problem of some optical drives mysteriously entering PIO mode? Although I've had this happen to me only once, I've seen this happen to other people back when I used to moderate an optical drive forum. We never thought much of it since it was a relatively rare thing. Turns out there's a fix for that too from the MSKB...

This entry was edited on 2007/07/07 at 14:01:14 GMT -0400.

Google, a.k.a., Microsoft v2

Tuesday, July 3, 2007
Keywords: Technology

During the Microsoft antitrust battle of the 1990's, I recall Microsoft making the point that the computer industry was volatile and that monopolies are always at risk of naturally collapsing. Their best example was IBM, which was itself involved in a prolonged antitrust battle in the 1980's. Except for the resources wasted in the courtroom, nothing came out of those antitrust proceedings, but nothing needed to: IBM's monopoly had naturally collapsed, largely thanks to Microsoft.

Microsoft at the time of its own antitrust trial was fond of portraying its relationship with IBM as one of a young, relatively small upstart negotiating its way through a world dominated by a large, mature, and well-established corporation. And although the relationship between the two in the early 90's was terrible, the two never really directly competed. IBM did release OS/2, but OS/2's prominence was very limited and, for the most part, the two never directly battled. Direct competition came not from Microsoft, but from companies like Compaq, who were making IBM clones. After all, IBM was primarily a hardware company, and Microsoft was primarily a software company. But if they did not directly compete, why do people credit Microsoft with IBM's demise, and why did Microsoft view itself as a sort of rival to IBM?

Although the end of IBM's reign came directly from the rise of the ironically-termed "IBM-compatible PC", what had really happened was a platform shift. The hardware was no longer the platform of primary importance; that role had shifted to the operating system. In other words, it was no longer important to buy a computer bearing the IBM brand. Any computer with any brand would do, as long as it ran DOS or Windows, because it was on DOS and Windows that everyone's applications ran. Contrast that with Apple computers, from the Apple II to today's Macs, where the hardware and operating system are vertically integrated, and thus getting the operating system that ran the software you wanted necessarily also meant getting hardware of a certain brand. The divorce of the hardware from the operating system made competition in the hardware market possible, destroying IBM's monopoly power and paving the way for the rapid-paced evolution of hardware and uptake of the PC in the 1990's (this, by the way, is the primary reason why I am so thankful that Apple ended up being marginalized; tight-fisted vertical integration of hardware and software has long been Apple's MO, and had they been at the helm, all the fast-paced innovation of the 90's would have been largely muted, and even today, Apple's "innovations" are largely aesthetic while all the real work of more powerful hardware development falls upon the legions of generally little-known hardware manufacturers made possible by the lack of hardware-software vertical integration).

Microsoft was quite cognizant of how IBM fell from its pedestal, and this was why they were so fearful of Netscape and why they launched an all-out effort to destroy it. In hindsight, Microsoft never needed heavy-handed tactics to destroy Netscape, since, despite all the hoopla over the antitrust violations, 90% of Netscape's demise could be attributed to the fact that Navigator 4 was by far the worst, most unstable, most buggy, and most atrocious browser to have ever been widely released. Microsoft's illegal actions only hastened the death that Netscape brought upon itself. But had Netscape actually produced a decent product, and had the conditions in the 90's been right (keep in mind that broadband was rare and many people were still not connected at all), Netscape could have posed a threat to Microsoft, because if applications started to move online, then it would be Netscape's browser that would be the most important platform, and the choice of operating system would be marginalized, much like how the choice of computer brand was marginalized. Microsoft knew this, and it knew that it could not repeat IBM's mistake, so it tried to protect itself through vertical integration, so that when the day came when online applications supplanted offline applications, they would be running atop Microsoft's platform, not Netscape's.

Fast-forward a decade, and we are now only beginning to see the first glimmer of the world of truly functional web applications, in a very immature state in the form of AJAX and Flash. Although online applications are still far from supplanting offline applications, for a large segment of the population, especially among newcomers and casual users, the applications that matter most are e-mail, chat, web browsing, word processing, and maybe some multimedia playback, all of which are applications that are independent of the operating system (some, like web-based e-mail, have been OS-independent for a long time, and some, like word processing, only broke free from the confines of the OS fairly recently with the launch of Google Docs). Through the standardization of web browsers, the browser has become less important, marginalizing the safety net that Microsoft had hoped to win with Internet Explorer, and with the prevalence of broadband and a growing number of people comfortable with the online world, this is the beginning of the end for Microsoft's monopoly. The recent uptick in Apple sales is in part due to the iPod halo effect and Apple's effective cult indoctrination brainwashing marketing department, but it is also greatly helped by the fact that as more and more of the applications that people care about are located online (either because of applications moving online or a shift in the things that people care about), the importance of the operating system is reduced, thus naturally destroying Microsoft's monopoly power (note that monopolies are not necessarily bad; it's monopoly power that is bad).

By now, it should be apparent why Microsoft's CEO, Steve Ballmer, made a private remark a couple of years ago (that was later leaked to the public) about wanting to "fucking kill" Google. A young, relatively small upstart is threatening to destroy a large, mature, and well-established corporation's world by pulling the rug out from under it and shifting the platform away. Of course, the analogy isn't perfect. Although Google is the primary gateway to the Internet (just as Windows is the primary "gateway" to desktop applications), Google is not a monopoly, in part because there are far fewer network effects conducive to the formation of a natural monopoly. In layman's terms, Google is a quasi-monopoly of choice, since people use Google because they choose to and they can easily switch to another search engine to find the same sites, while Microsoft is a monopoly of necessity, since certain programs run only on Windows. And if Google were to become a monopoly, it would be one with very limited monopoly powers.

Finally, what about the browser? The thing that Microsoft had so long feared? W3C standardization has helped reduce the browser to just a commodity product whose choice is becoming less and less important. Gecko, which was built from scratch from the ruins of Netscape, still clings to the notion of the browser as a platform, and as such, it is the most powerful and robust browser engine, making Gecko almost like an OS (after all, the Firefox browser, Thunderbird e-mail client, and SeaMonkey communications suite are all rendered, independently of the OS, by the Gecko engine--the UI and controls are all handled by Gecko, much like how Windows apps are rendered by the WinAPI). Although there is a chance that the model of the browser as a quasi-OS may bear fruit in the future, I have my doubts about whether this model will take off.

This entry was edited on 2007/07/03 at 15:29:23 GMT -0400.

Hey, Verizon, turn off your &#^@% wildcard!

Saturday, June 30, 2007
Keywords: Technology, Rant

I made a typo this evening and was confused when I found myself staring at a red page with Verizon's logo and a search box. It took me a few seconds to realize that some time today, Verizon had set up a DNS wildcard that was redirecting incorrect web addresses to their search portal, complete with a "helpful" message telling me that the address I entered was invalid and that perhaps I should search for what I was looking for via their search engine.

First, I wish that companies that try to pull this sort of unethical perfidy would stop trying to claim that they are providing "services" aimed at "helping" their users. That is pure, unadulterated bullshit. Modern browsers, by default, will do a search for what you typed into the address bar if a DNS error is received. There is, therefore, nothing to be gained from loading up Verizon's search page. In fact, there is much that is lost. First, if you type "szdmfewo.com" into the address bar, Internet Explorer will automatically search for that term once it receives the DNS error. Verizon does not even do that. It just takes you to a search page without even so much as pre-filling the search form. Thus, Verizon's "helpful" scheme is actually less useful, and even confusing, for novices who are so used to the automatic search that they use the address bar as a sort of search box. For intermediate users, browser-based error search respects user choice. For example, you can configure it to use your favorite search engine. Verizon does this group of users a disservice by taking that choice away from them. And for advanced users, the very idea of this sort of DNS hijacking through the improper use of DNS wildcards is anathema. We may have programs or scripts that rely on being able to correctly detect DNS errors in order to function. We prefer seeing error pages when we do something wrong instead of some glossy hand-holding mechanism. And more than anything else, we bristle at the very notion of someone else butting in, reducing our choices, and changing things in a way that breaks the specifications under which the Internet functions. In other words, I'm fucking pissed.
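To illustrate the breakage for scripts, here is a minimal sketch (the domain name below is made up) of code that relies on DNS lookups failing for nonexistent names; behind a wildcarding resolver, the lookup "succeeds" and returns the ISP's portal address instead of raising an error.

```python
# Detecting whether a domain exists -- broken by DNS wildcarding.
import socket

def domain_exists(name):
    try:
        socket.gethostbyname(name)
        return True
    except socket.gaierror:      # the DNS error a wildcard hides
        return False

# With an honest resolver this prints False; behind a wildcarding resolver
# it prints True, because every bogus name resolves to the search portal.
print(domain_exists("this-domain-should-not-exist-zxqv.com"))
```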

To add insult to injury, Verizon has carefully hidden away all information about how to contact them for feedback. I was finally able to find a way to e-mail them--via an online form that limited the message to 70 bytes. Perhaps Verizon is aware of the torrents of unanimous protests and complaints that followed when VeriSign and Earthlink tried DNS wildcarding. Verizon does offer a way to "opt out" of this system through a DNS server that they provide. However, these instructions are buried and are understandable only by people with a fair amount of technical proficiency. Their solution is not very elegant, either. The routers that Verizon provides its DSL customers get their DNS server addresses via DHCP, which means that the addresses cannot be changed on the router. Thus, in order to use the "opt out" server, each individual device on the network must be changed to use a hard-coded DNS server, which reduces the usefulness of DHCP on the network, eliminates the benefits of the router acting as a local network DNS cache, and is just a bloody pain to do. Also, Verizon provides only one opt-out DNS server, which means that people who opt out of this "service" of theirs will lose DNS redundancy.

As for the search engine itself, it is actually a meta-engine created by InfoSpace. It queries Google, Yahoo!, Microsoft, and Ask and aggregates their results. So the big search engines end up doing all the work while InfoSpace and Verizon, like leeches, reap all the advertising profit (no doubt the reason behind this "service"). InfoSpace, by the way, is a shady company responsible for such fine flotsam as Dogpile and Zoo.com. Envious of the advertising revenue of the major search engines but can't compete with them? Easy: just shove your leech engine down the unwilling throats of your customers, disrespecting and disrupting the Internet experience of your customers, and profit not from having a good product or serving your customers, but by abusing power and leeching. Despicable. Utterly despicable.

This entry was edited on 2007/06/30 at 01:45:08 GMT -0400.

Security Advisory #935423: The story of a vulnerability

Tuesday, April 3, 2007
Keywords: Technology

On December 20, 2006, a security company named Determina privately reported to Microsoft a vulnerability in how Windows handles animated cursors. According to Microsoft, they immediately began to "investigate" the issue in December. Not much was heard about this bug until last Wednesday when McAfee reported to Microsoft about a new attack using a previously unknown method. Microsoft then launched their incident response process and issued a security advisory the next day.

On the surface, a bug in how Windows handles animated cursors does not seem to be a very serious problem. (Edit: The following passage has been updated.) However, it is possible in CSS to specify different mouse cursors--for example, if the site owner wishes for the mouse to remain an arrow instead of turning into a hand when hovering over a link. It is thus possible to embed animated cursors on a web page or in an e-mail. Because mouse cursors are rendered by the operating system and not by the browser (the browser effectively tells the operating system to change the cursor), this is an operating system bug that affects all browsers that support the changing of mouse cursors through CSS (which is a part of the official CSS specification). What this means is that if you just visit an infected website, you will be immediately infected without the need for you to do or click anything. Or, if you open, read, or even preview an infected e-mail, you will be immediately infected even if you don't open any attachments in the e-mail. In other words, malware that exploits this vulnerability is extremely virulent. Infected websites could either be created by hackers, or they could be legitimate websites that have been discreetly compromised and hacked to deliver a payload without the website owner's knowledge (which means that Microsoft's recommended temporary solution of "not visiting untrusted websites" is not very practical, because it is possible for any "good", but poorly-secured, website to become an unwitting infection vector). Once infected, anything is possible, depending on the payload delivered by the virus--including the possibility of a complete takeover of one's computer and the compromise of all data on it.

The severity of the problem grew significantly worse on Friday and over the weekend: a spam campaign was launched with e-mails that either contained a malicious payload or linked to websites that delivered a malicious payload. Later in the weekend, a tool was made public that allowed anyone to attach any payload of their choosing to an infection vehicle that exploited this vulnerability. In response, various security groups raised their "alert level", including SANS, which hadn't raised its alert level in almost a year. Recognizing the severity of the problem, Microsoft announced on Sunday that they had a fix and that they would be releasing it on Tuesday. Microsoft normally releases its security updates all at once, on the second Tuesday of each month (known as "Patch Tuesday"), and it is only in rare and severe circumstances, like this one, that Microsoft will deviate from the schedule and release a security update early and separately from the rest.

What is most interesting about this story is how Microsoft responded. This incident never would have happened, and goodness-knows-how-many computers would never have been compromised (it will be impossible to measure how many computers were infected because of the large number and wide diversity of different payloads that exploited this vulnerability, though it should be safe to assume that the number will likely be very high), if Microsoft had just fixed the problem in December instead of just "investigating" it for over a quarter of a year. Microsoft clearly didn't think very much about this problem until it was too late. Like most other vulnerabilities, this one uses a buffer overflow, and these are generally very easy to fix. In fact, on Friday, mnin.org was able to locate the exact location of this particular bug, and eEye had created an unofficial "quick-and-dirty" fix for the problem on Thursday. On the other hand, Microsoft, with their vast resources and intimate knowledge of their own code, took more than three months to "investigate" the problem. I suppose this is one of the things that makes Mozilla software more secure. Firefox is not devoid of security vulnerabilities--there have been so many that I've lost count (though still not as many severe ones as Internet Explorer), but after observing how they have handled their security vulnerabilities, it becomes clear that they take an approach that might be described as excessively paranoid. Each vulnerability, no matter how obscure, is treated with great urgency, and most security flaws are patched within a few days of their initial reporting, even if no attacks exploiting the vulnerability exist and even if the vulnerability has never been publicly disclosed. This is in contrast to Microsoft's practice of quietly sitting on known security vulnerabilities as long as no attacks that exploit them exist and as long as a particular vulnerability has not yet been publicly disclosed, in the gamble that perhaps nothing will ever come of it. Well, this time, Microsoft lost the gamble in a very grand way, and average computer users are paying for it with hijacked systems and compromised data. (Edit: Unfortunately, in this case, because the bug is an OS-level bug, it will affect all browsers, even those not made by Microsoft, but it does serve to illustrate the different approach that Microsoft takes to such bugs.)

Update: Interestingly, a similar vulnerability was reported and fixed in 2005. See Determina's notes.

This entry was edited on 2007/04/03 at 16:05:35 GMT -0400.

And this is why we don't use Internet Explorer...

Saturday, March 17, 2007
Keywords: Technology

Read this recently at http://adblockplus.org/blog/speaking-of-ie-security

It could just as well read out your mail or change your mail password. It could also go into your banking account if you happen to be logged in. Information on this vulnerability has been published April last year and still unpatched in both Internet Explorer 6.0 and 7.0.

It's no secret that IE is a bad browser, but I honestly didn't know that it was this bad until just now. I had thought that with a fully-updated IE7, I'd only be vulnerable to relatively new zero-days, not something this old. With open exploits like this along with more and more computers being infected with trojans and keyloggers, it is no wonder that the official Gmail support forums are peppered with sob stories about people who lost everything when their mail accounts were "hacked".

Fun with Vista

Sunday, March 4, 2007
Keywords: Technology

After much prodding and insisting, I was finally convinced to install a trial copy of the new Windows Vista in a Virtual PC. After using it for about an hour, here are my impressions...

The Good:

  • Nice aesthetics. The default look is definitely much better than the immature cartoony look of XP. Not that this is relevant since I never use the default look and instead prefer a custom-tweaked classic theme.
  • Much better sounds; they're gentler and less jarring. However, it is very easy to copy the sounds from Vista back into XP, so this is by no means an exclusive feature.
  • Nicer fonts. Once again, these can be easily copied back into XP, and in fact, I have been using Vista fonts in XP for some months now. The downside is that the new Vista fonts pretty much require ClearType, which I am still ambivalent about using.
  • The new user directory structure rocks and is much more Unix-like. "Documents and Settings" was just too fucking cumbersome, if you ask me.
  • User Account Control is a nice, much-needed security feature and should hopefully reduce the number of compromised machines joining botnets.

The Bad:

  • User Account Control is moronically implemented. It doesn't always trigger when it should. There are numerous cases (such as using Notepad to edit HOSTS or copying system files using batch files) where the system will just flatly deny access to something instead of popping up the UAC dialog. This is because the UAC dialog trigger is controlled by a set of APIs, and the offending software has to request it. A more sensible approach would be to act more passively and pop up the UAC dialog any time UAC denies permission to anything. That would make UAC much more robust and would mean that software makers don't have to update their software to make UAC requests. Of course, this wouldn't be a problem if UAC weren't active for administrative accounts at all. Right now, as an administrator, there are some things that I simply cannot do, because UAC is active even for administrative accounts and the non-passive implementation of the UAC dialog means that I am never even given a chance to approve an action and override the permission denial. This forces me to disable UAC just to be able to perform certain administrative tasks. This, of course, poses additional problems, since UAC is a system-wide setting and disabling it affects all users, not just my administrative account. How lovely. Microsoft could've solved this simply by making the UAC dialog pop up instead of denying permission. Or better yet, Microsoft could just disable UAC for all administrative accounts and make the default first account a standard user account instead of an administrator account. What is the point of making the first user an administrator if UAC is going to so severely cripple administrative accounts?
  • This whole 3D graphics thing has been taken too far. How do I know this? When Minesweeper complains about the lack of 3D graphics acceleration. Excuse me, why in bloody hell does a simple 2D game like Minesweeper need 3D graphics? The end result? Minesweeper is excruciatingly slow and only marginally responsive. The tiles are also much bigger (and uglier, too); not good for a timed game where speed is essential.
  • The bloat and the size. A clean install of Vista takes over 5GB of space.

The Ugly:

  • Vista on the whole feels inconsistent and cobbled together. I suppose this is to be expected for such a large piece of software. For example, at one point in the wizard to set up Windows Mail, it asks me to press the back button to return to the previous page. But when I looked down in the lower right, there were the usual Next and Cancel buttons that one expects to see in a wizard dialog, but no back button. It took me a while to realize that the back button was nowhere near the Next button: it was in the upper left, and instead of a text button, it was a graphical Internet Explorer-style back button, sitting all by itself (the Next button was still in the lower right). There are countless examples of these inexplicable inconsistencies all around, and they make the OS seem confusing at best and amateurish at worst.
  • I understand the need to make the desktop icons 48 pixels by default. It really helps the older folks reading on high-resolution screens. But did you know that you can set the icon size to 48 pixels in XP as well? In fact, you can set it to 16, 32 (the default in XP), or 48 pixels. But in Vista, regardless of whether the setting is on 16, 32, or 48 pixels, the desktop icons remain at 48 pixels, which wastes far too much space. Making the default 48 is okay by me; making it the default and removing the ability to set it back to 32 is just plain immoral. I have absolutely no respect for software that violates the First Rule of user interface design: the end-user is always right. Edit: Okay, so I finally found where to change this setting, but this brings up a new rant: why aren't these settings all grouped into one place (or at least accessible from one place)? Each new version of Windows seems to have increasingly scatterbrained controls, and Vista is certainly no exception! And if they are going to change the way the desktop icon sizes are controlled, then why did they not remove the old pixel-based icon size settings (which now do absolutely nothing)? Amateurs!
  • While Microsoft has taken steps to make the control panel more sensible (for example, grouping once-disparate network settings together in one window), everything still feels scattered. The network controls now involve so many separate dialogs that could've been merged together that they feel more confusing than ever before. And remember UAC? I tried looking for its settings in the Security Center, which explains UAC and which also displays the current UAC status. But it doesn't have anything to control UAC and doesn't indicate where the UAC controls are. Ultimately, I had to search the help manual to find where they actually were.
  • The network status indicator in the system tray no longer opens the network status dialog (IP address, connection time, packets in/out, etc.). In fact, even if you right-click on it, you don't get a menu item for the network status. That requires a trip to the control panel.

Conclusion:

I tend to be fairly conservative when it comes to software, but despite that, my initial reaction to Vista has so far been much more sour and unpleasant than my initial reaction to any other major new software to date. Perhaps I need to use it for more than just an hour. Also, while most people who criticize Vista then go and hug Mac OS X, I am not one of them. I am comparing Vista against XP, not against OS X, which, for the record, I dislike even more than Vista.

In any case, I actually had more rants than what is posted in this entry, but I've forgotten what some of them were (yes, I have a very bad memory); I'll update this post as I remember them.

This entry was edited on 2007/03/04 at 13:57:04 GMT -0500.

Does "at example dot com" work?

Wednesday, February 14, 2007
Keywords: Technology

It seems to me that as more and more people obfuscate their e-mail addresses in the form of "johndoe at example dot com", this isn't going to be a very effective method any more. For example, this Google search for "at gmail dot com" (with the quotes) yields over a million results, most of which can be readily parsed straight from the search results page. So for people who use this sort of obfuscation, does this prevent you from getting on spam lists?
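
To see just how little work this leaves for a harvester, here is a rough sketch (my own illustration, not any spammer's actual code) of undoing the obfuscation:

    // Toy de-obfuscator: turns "johndoe at example dot com" back into a normal address.
    function deobfuscate(text) {
        return text.replace(/\s+at\s+/gi, "@").replace(/\s+dot\s+/gi, ".");
    }
    deobfuscate("johndoe at example dot com");   // "johndoe@example.com"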

Jabberizing AIM

Monday, January 15, 2007
Keywords: Technology, Jabber

This is a follow-up to this post from September.

When Google purchased a 5% stake in AOL in late 2005, it was announced that Google Talk and AOL Instant Messenger would become interoperable. However, a year passed, and there was no interoperability in sight, so many people, including me, began to think that it would never happen. After all, why would AOL open up what is arguably its most prized asset for a company with a measly 5% stake?

Well, according to Internet rumor mills on both Google's end and AOL's end, it's looking like the interoperability is going to happen this year. Better late than never, I guess.

What is most exciting about this, however, is what it will mean for the future of Jabber/XMPP if the largest IM network adopts Jabber/XMPP. And that is a big "if" because it's possible for AOL to achieve interoperability without actually adopting Jabber. The easiest way to achieve interoperability would be to set up a Jabber transport that acts as a sort of crude proxy: Jabber users would still need to register for an AIM account, and the transport would act as a liaison that hooks the AIM account to the Jabber account. A transport would be a superficial solution, and one that nullifies a number of the main benefits of Jabber/XMPP, namely the unification of e-mail and IM addresses. The upside of transports is that they are easy to implement; third-party Jabber transports that allow Jabber accounts to communicate with AIM, MSN, Yahoo!, etc. have existed for years and have been deployed by many organizations that use them as a secure means for people on the internal network to connect to these external networks.

The other way to achieve interoperability is to make the AIM network speak XMPP. The AIM network is already "bilingual", in that one can communicate on the network using either TOC or Oscar, so adding XMPP support would simply involve adding a third parallel protocol. I am hoping that this is the solution that they are seeking because it would do much to advance Jabber/XMPP and because of the elegance of having "native" XMPP support. I think that this might be the case because of the wording of the Google rumor (though the language is ambiguous enough that it could just as easily be read the other way as well), because of how long it has taken (if they were just setting up a transport instead of doing "true" interoperability, they could have done it in a matter of weeks), and because of how nicely this fits with recent developments in the AIM network. With the launch of the @aim.com e-mail service, AOL has been getting people to equate buddy list screennames with e-mail addresses, pushing the idea that one's screenname is now example@aim.com instead of just example. This is the first step to paving the way for a Jabber-like paradigm. And with AOL pushing a custom domain service, they now have people on the AIM network whose screennames are not in the form example@aim.com, but are instead in the form of name@example.com. This sort of change would make a true Jabber implementation almost a necessity.

In any case, this is all just speculation. Here's to hoping that the interoperability happens the right way and that Jabber/XMPP will get the boost that it needs in 2007.

Winning the Spam Whack-a-Mole

Saturday, December 9, 2006
Keywords: Technology

This is an interesting article from the New York Times about the recent escalation in spam. Volume has increased, and now with the pervasiveness of image spam, filtering is starting to break down. The end of the article was the most interesting:

Some antispam veterans are not optimistic about the future of the spam battle. "As an industry I think we are losing," Mr. Peterson of Ironport said. "The bad guys are simply outrunning most of the technology out there today."

It's about time people realized this. Filtering as a way to combat spam was necessarily doomed to failure. This is because to expect filtering to eliminate spam is much like expecting cold relief medicine to forever eliminate the common cold. Antispam filtering addresses only the symptoms of spam. Furthermore, filtering works by telling the difference between spam and non-spam, which is ultimately an artificial intelligence problem. Since artificial intelligence does not exist (and even if it did exist, it would be costly to implement on a scale large enough to handle spam), filtering relies on a hodgepodge of heuristics that work only because spammers have not done a very good job of disguising spam from non-spam. It is only a matter of time before spammers put enough effort into blending spam with non-spam that these heuristics will break down completely.
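
To make the point concrete, here is a toy sketch (purely my own illustration) of the kind of keyword-and-pattern scoring that much of this filtering boils down to, and of how easily it is sidestepped once a spammer bothers to dress the message up as ordinary prose:

    // A caricature of heuristic spam scoring; flag the message if the score crosses some threshold.
    function spamScore(message) {
        var score = 0;
        if (/viagra|v1agra|free pills/i.test(message)) score += 5;  // keyword list
        if (/!{3,}/.test(message)) score += 2;                      // excessive punctuation
        if (/[A-Z]{10,}/.test(message)) score += 2;                 // shouting
        return score;
    }
    spamScore("FREE PILLS!!! Buy v1agra now");                                        // caught
    spamScore("I thought you might like the offer we discussed; details attached.");  // missed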

The Times article also fails to recognize that spam is not limited to just e-mail. This blog, for example, is bombarded with thousands of comment spams each month. Forums, hosting providers, instant messaging networks, etc. are all targets of spam as well. Even server logs have become the target of referer spam (I get thousands of those each month, too). Even if there were a way to implement effective filtering for e-mail, spammers would just move to another medium. And what will be the cost--in terms of bandwidth, computing resources, and false positives--of continuing this endless arms race in pursuit of a solution?

It is easy for people who look at statistics saying that 90% of all e-mail is spam to despair and to proclaim that spam will destroy the Internet, but such a point of view is missing a critical point: spammers are very few in number, and they are empowered to hog the stage only because of their ability to commandeer vast resources for themselves.

Dismantling botnets is the key to dealing with spam, and from my perspective, it is the only way to "win" and to "save" the Internet. Yet, there was no mention of dealing with the botnet problem in the Times article, and recent articles about combating botnets all deal with superficial solutions like tracking botnets and shutting down the command and control for botnets. However, such solutions are themselves doomed to failure because they too deal with botnets only on a superficial level. The reason why botnets get so little attention is because dismantling them ultimately requires tighter security at the level of individual computers, and it is difficult to get Joe Sixpack to properly secure his computer against botnet hijacking, so instead, the publicity and attention is put on mitigating the problems post hijacking. Why are there so many sensational articles written about spam and botnets but so few about how easy it is for the average computer user to get his/her computer hacked and taken over by a botnet? So how do we solve the botnet problem?

  1. User education: This is difficult and most likely will be limited in its effect, but it won't hurt to try. For starters, ISPs and major computer makers could include a prominent flyer in the products that they sell listing things not to do (instead of burying this information deep inside a manual that most people will never read). A national awareness advertising campaign would help a lot, too (given how much money is already being spent combating the damages caused by botnets, this would be relatively cheap).
  2. Better OS security: Hopefully, Vista will alleviate this problem. However, for the large existing base of XP users, not too much more could be done. Microsoft's WGA encourages people to avoid updates from Microsoft, thus leaving unprotected many computers in poorer regions like Eastern Europe and China, where many computers fail WGA; but the damage from that has already been done, and loosening up on WGA now will only help the future Vista user base (though a loosening up now could help in the future if Vista proves to be just as vulnerable as XP). Speaking of WGA, why not implement something similar, but for security? A WGA-like system that checks to see if the OS has all the latest security patches and then nags the user when that isn't the case?
  3. Network monitoring: There are some networks that monitor traffic coming out of a computer on the network for signs of infection and scan computers on the network for known vulnerabilities. If an infected computer is found to be sending botnet-like traffic or if a computer is found to have an unpatched security hole, then the computer is blocked from the network and the owner notified. This is probably the single most promising solution because, by notifying the owner of the problem, it raises the awareness of the botnet problem for average users who are otherwise oblivious to it, and if a quarantine is used, then it will also ensure that that particular computer remains disconnected from the botnet. Such a system would be automated and could be implemented without action by the end user (unless, of course, the end user is found to be infected and is blocked). Unfortunately, very few networks of importance (i.e., the major ISPs) implement such a solution even though most of the botnet computers in the US are located on the networks of one of the major consumer ISPs.

Note that government legislation is missing from my list of solutions. Contrary to its favorable description in the Times article, the CAN-SPAM Act was, quite frankly, a useless piece of legislation that did nothing except increase regulatory bureaucracy and give the illusion that something was being done about spam. Almost all of the things hawked in spam are already covered by various anti-fraud and other criminal laws, and similarly, hacking into and commandeering someone's computer with neither their knowledge nor their permission is already illegal, so any additional botnet legislation would be superfluous. If government were to get involved, the role that it would play would be one of addressing the externality problems of botnets. ISPs currently have little incentive to implement the sort of network health monitoring I suggested above because they would bear the cost while everyone else would reap the benefits. Similarly, a user education campaign that reduces the size of botnets will help everyone who is connected to the Internet and is thus a positive externality. A government subsidy would thus be appropriate here to deal with these sorts of externalities.

Northwest Airlines is Stupid

Saturday, December 9, 2006
Keywords: Technology

This is a follow-up to this post...

Got an e-mail today about my Northwest Airlines miles. The problem? The e-mail came from worldperks.miles@mpmvp.com. WTF is the mpmvp.com domain name?! I visit the domain in the browser, and I get an SSL certificate warning because the certificate was signed for nwa.mpmvp.com and not for mpmvp.com or www.mpmvp.com (the correct solution would have been to redirect mpmvp.com and www.mpmvp.com to nwa.mpmvp.com; this is the first sign that the IT "professionals" who set this up are utterly incompetent). The site looks just like the NWA website. Okay, I understand that companies like NWA often enter into various affiliate programs with places like points.com and it may be tempting to set up the joint website on a separate domain to avoid the hassles of dealing with the NWA DNS hostmaster. But this is BS because it is trivially easy to set up a points.nwa.com zone and then delegate it (NS records) in DNS to the people at points.com so that it can be administered entirely independently of NWA's DNS. It is these utterly incompetent and appallingly stupid setups that make user education about security that much harder. Oh, and there was no SPF record either. Where do they find these idiots?!
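
For what it's worth, the delegation I am talking about amounts to a couple of records in the nwa.com zone (an illustrative sketch only; the nameserver names are made up):

    ; In the nwa.com zone: hand points.nwa.com to the points.com people to run themselves.
    points.nwa.com.    IN  NS  ns1.points.com.
    points.nwa.com.    IN  NS  ns2.points.com.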

Ms. Dewey

Saturday, December 9, 2006
Keywords: Technology

WTF?! This is the latest addition to Microsoft's Windows Live line of products. It's Clippy reborn... except much more obnoxious... and much more useless.

Dumb JavaScript "Obfuscation"

Thursday, December 7, 2006
Keywords: Technology

Saw this hilarious post in my RSS reader this morning. What makes all this even funnier is that this so-called "obfuscation" was just encoding the script into hex, so that ridiculous table could've been done away with by just using parseInt(str,16). If someone's gonna do something dumb, at least make it an elegant 1-liner. :P
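
For the curious, the decoding really is a near one-liner (a quick sketch; the function name is mine):

    // Undo the hex "obfuscation": split into byte pairs and let parseInt(pair, 16) do the work.
    function hexDecode(hex) {
        return hex.match(/../g).map(function (pair) {
            return String.fromCharCode(parseInt(pair, 16));
        }).join("");
    }
    hexDecode("68656c6c6f");   // "hello"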

Rant/peeve of the day: E-mailing Photos

Monday, November 27, 2006
Keywords: Technology

At the risk of sounding like a zealous e-mail nutcase, I am going to put my foot down and firmly declare my belief that sending large binary attachments (e.g., photos, big PowerPoint presentations, MP3s, etc.) by e-mail is simply immoral and wrong. They are, dare I say it, sinful. Here are the three reasons why attachments in e-mails are evil:

First, e-mail was designed as a way to send messages, not parcels. It was designed from the start as a way to send plain text. It was not designed as a way to send parcels of binary data because there are a number of other, more suitable, ways to accomplish that, through methods like UUCP (though it too was 6-bit), FTP, and later, HTTP, SFTP, and various P2P protocols. If one ever looks at the source of an e-mail containing a binary attachment (e.g., the "Show Original" option in Gmail or the "Message Source" option in Outlook Express), one will quickly see just how text-oriented e-mail is. There is simply no way to send binary data using SMTP (although the new 8BITMIME handling can potentially alleviate this, it is rarely ever used). As a result, binary data must somehow be encoded as plain text if it is to be transmitted via SMTP. The most common encoding used today, Base64 (other encoding methods--e.g., Uuencode--work in a similar way and suffer from the same problems), converts 8-bit binary data (base-256) into 6-bit plain text (base-64; 26 capital letters, 26 lowercase letters, 10 digits, and two other characters make up the 64 characters). This means that any binary data transmitted via SMTP automatically incurs a 33% storage and bandwidth penalty in addition to a processing penalty because of the need to encode and decode between 8-bit and 6-bit data.
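
The arithmetic behind that 33% figure is simple enough to check (a trivial sketch):

    // Base64 maps every 3 bytes (24 bits) of input onto 4 output characters.
    var inputBytes  = 300000;                        // size of the original binary attachment
    var outputChars = Math.ceil(inputBytes / 3) * 4; // 400000 characters once encoded
    // That is a one-third penalty in storage and bandwidth, before line-wrapping
    // and MIME headers are even counted.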

Second, in addition to this overhead inefficiency, there is now a certain inelegance added to e-mail. There now needs to be a way to tell the difference between regular text data and binary data that has been re-encoded into a block of text. This is done rather inelegantly by randomly generating a string of text, checking to make sure that this random string does not already exist somewhere in the encoded e-mail, and then inserting this random bit of text as needed to serve as a sort of ad-hoc boundary between the text and attachment sections of e-mails. Coupled with the encoding and decoding, this adds a certain degree of complexity to e-mail handling software, making the e-mail handling process more error-prone and adding more hurdles to the process of writing custom e-mail handling tools. Although modern webmail interfaces and e-mail programs are now so good at handling attachments that all the underlying grotesqueness is obfuscated and hidden far away from the user, this was not the case a decade ago when it was not too uncommon to encounter problems sending, receiving, and decoding attachments (been there, done that, got the t-shirt). Just because e-mail attachments work smoothly nowadays does not change the fact that underneath the veneer, it is still an ugly bastardization.
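
A rough sketch of that boundary dance (my own simplification; real MIME generators are fussier about the details):

    // Pick a random boundary string and make sure it does not already appear in the message;
    // the boundary then separates the text part from the Base64-encoded attachment part.
    function makeBoundary(message) {
        var boundary;
        do {
            boundary = "----=_Part_" + Math.floor(Math.random() * 1e9).toString(36);
        } while (message.indexOf(boundary) !== -1);   // regenerate on the (unlikely) collision
        return boundary;
    }

The headers then declare the multipart content type along with the chosen boundary, and each section of the message is introduced by a line consisting of two hyphens followed by the boundary.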

The final argument against e-mail attachments is one of infrastructure. Unlike HTTP, FTP, etc., e-mail is not a way to directly send data between two computers. For example, when someone at gmail.com e-mails someone at hmc.edu, the e-mail first travels from the user's computer to Gmail's "server". It then travels from Gmail's "server" to a server run by the Postini company, which then scans the e-mail for viruses and also determines if the e-mail is spam (it used to be that this filtering was handled by HMC's own server, but it was eventually outsourced to a commercial company, presumably because this filtering process was overloading the system). After processing, the Postini server then sends the e-mail on to yet another server at HMC that the students can connect to in order to retrieve their e-mail. That is, unless they have an e-mail forward set up, in which case, that e-mail is transmitted once again to yet another chain of mail servers. So in this scenario, in the process of getting from one person's computer to another's, an e-mail message passes through at least four (and maybe five or six if there is a forward) different servers! Aggravating this problem is the fact that SMTP retransmission has no pipelining. An SMTP server must fully receive the entire e-mail, save it to memory or to disk, perform any necessary processing (spam checking, virus scanning, or digital signature verification in the case of DomainKeys, all of which are somewhat taxing, which is why more and more organizations are outsourcing e-mail processing), and then finally retransmit it. In contrast, when a file passes through a bunch of routers when going from one computer to another, there is pipelining because each router does not have to wait for all the packets to arrive before sending them off to the next. While this feature of SMTP (which is what gives e-mail the robustness needed for reliable communication) is not very problematic when dealing with small messages of a few kilobytes, multi-megabyte files are not well-suited to this form of message transmission and will result in high latencies and even delays. In the worst case, some SMTP servers will simply fail when the message size becomes too great for them to efficiently process.

Trying to shoehorn the ability to send files into a system that was never designed for such use introduces a significant inefficiency in the packaging of the message, introduces ugliness into the structure of the message, and involves the use of a system of message transmission that is far from ideal for the transmission of large data. The problem, unfortunately, is that none of the other ways to transmit data is as accessible. A peer-to-peer method, such as using the file sending function of various instant messaging systems, is very efficient, but will work only if both people are online at the same time. HTTP, FTP, SFTP, etc. will work, but require that people either run their own servers or have easy and quick access to one. Unfortunately, while companies seem perfectly happy to promote and offer an inefficient system like e-mail for transmitting large amounts of data, they often put a lot of restrictions on any proper file storage and transmission services that are offered (and usually fail to offer easy, user-friendly ways for people to upload and manage data). This is probably because people who would abuse the system by transmitting things like warez would never use e-mail, since it is so inefficient and unsuitable; e-mail providers are therefore generally not worried about nefarious uses of large file transfers through SMTP the way they are about such uses through other means. This is unfortunate, because it effectively forces people to resort to SMTP.

So make this your New Year's Resolution: Try to send files via a proper medium, if possible. Oh, and while you are at it, please set the default in your webmail or your mail software to use plain text by default instead of formatted HTML mail (my three e-mail pet peeves are attachments, HTML mail, and people forgetting to use BCC for multi-recipient messages).

Together, we can help purify the Internet... either that, or at least hold out like a bunch of Luddites, but the former sounds better. ;)

Comment spam update

Thursday, November 16, 2006
Keywords: Technology, Spam

On November 11, I switched on my new anti-spam system (it had been running in a test mode for a little while before that). Since then, there have been 492 comments* received by this blog...

  • Legitimate comments: 9 (1.8%)
    • Incorrectly filtered: 0 (0%)
    • Not filtered: 9 (100%)
  • Spam comments: 483* (98.2%)
    • Caught by filter: 476 (98.6%)
    • Not caught by filter: 7 (1.4%)

I like that there are no false positives (the old system of content filtering and post age lockout was very problematic in this area). The 7 false negatives were rather unexpected, though. This means that spammers are now going as far as integrating scripting engines in their bots (either that, or those 7 were entered by a human and not by an automated bot). Oh well. A 98.6% catch rate isn't too bad.

________________
* This is an understated number because some of the spam bots are so poorly written that they can't even fill in the right blanks; those attempts that fail basic validation are not logged by the new system and are not included in this count.

Rant/peeve of the day: Excessive Domains

Saturday, November 11, 2006
Keywords: Technology

Did you know that TIME has a blog? Can you guess what the address is? blog.time.com? Nope. time.com/blog? Nope. It's time-blog.com. Instead of the usual microsoft.com/windows website, Microsoft is running ads for Windows Mobile that point users to windowsmobile.com. Chevron's ads for their alternative energy initiative point users to willyoujoinus.com instead of something like alternatives.chevron.com (and this isn't an attempt to disassociate the brand name from the campaign because the URL in the print ad appears right next to a big Chevron logo). And there was a recent pharmaceutical ad that I saw that pointed people to a domain that looked like askabout[drugname].com.

Ideally, Internet addresses that belong to the Example Corporation should be in the form of division-name.example.com or example.com/product-name or example.com/campaign-name or service-name.example.com or region.example.com (for the company's regional divisions) or something else that is located squarely within the example.com domain. But instead, companies are now using promotion-name.com or company-service.com or companyusa.com. Instead of adding new addresses to the company's main domain, they are using a new domain name for every new website that they put up.

This is annoying and bad for two reasons. First, this is semantically impure and defeats the hierarchical and organizational structure of DNS and URIs. This is the organizational equivalent of dumping all your paperwork in a single box instead of filing it into different folders and drawers. But this is only a minor problem that probably only purists like me would angrily shake their fists at.

The second problem is one of security and trust. A recent article discusses how scammers are registering sites that try to fool users into inputting their login information into a fake look-alike site. These scammers would register domains like citibank-secure.com, citibank-update.com, citibank-login.com, etc.; these are addresses that look like they belong to and are affiliated with CitiBank. The obvious solution would be to educate users that anything.citibank.com and citibank.com/anything are the only legitimate addresses because they reside within the CitiBank domain and are thus under the control and jurisdiction of CitiBank. This is where the issue of trust comes in because citibank-login.com resides within the domain of .com and is thus under the jurisdiction of, effectively, nobody. Unfortunately, few people realize that, because of how DNS works, any guy off the street can create a website at citibank-something.com while only CitiBank can create a website within the citibank.com domain. This lack of understanding is strongly reinforced when people are trained to accept that sites like time-blog.com are legitimate sites owned by TIME (why on earth blog.time.com couldn't be used is beyond me; if anything, "blog dot time dot com" is easier to say and remember than "time hyphen blog dot com") and that companyname-service.com is a legitimate site owned by companyname.com; this attempt at user education then becomes futile (to be sure, it was mostly futile to begin with, but now it's even more so). Oh well. Companies have long been shooting themselves in the foot in terms of security.

PS: There are a number of companies that do it right, however. AMD's ad campaign directs people to amd.com/lessmoney. Similarly, Xerox, IBM, Intel, and Computer Associates do the same thing with the websites for their ad campaigns. While some of Microsoft's campaigns use improper domains, most of their print ads direct people to addresses like microsoft.com/peopleready. Unlike TIME, The Economist's blog is located within their own domain at economist.com/debate/freeexchange (though they could've picked a shorter name). Finally, while Google uses gmail.com for the domain of their e-mail service because it's shorter to remember and type for the @ part of their e-mail addresses, the actual login and webmail interface is located at mail.google.com. These offending companies should take a cue from the companies that handle domain usage properly and do things right.

PPS: Now that people are accessing websites via search engines, the need for short, memorable domain names isn't nearly as important, which makes the use of a separate catchy domain name fairly pointless. Not to mention that in many cases, these separate domains can be just as hard to remember, if not more so. For example, when you sit down at the computer several minutes after hearing a pharmaceutical ad, could you remember if it was askabout[drugname].com, learnabout[drugname].com, or tellmeabout[drugname].com that was mentioned in the ad? Furthermore, that drug company's rivals could register "learnabout" and "tellmeabout" in an attempt to capture confused visitors who didn't remember the name quite right. Of course, this wouldn't be a problem if the first company made proper use of its domain names because there is no memory ambiguity in companyname.com/drugname.

Edit: Another addition to the Wall of Shame: verizon.net vs. verizonwireless.com vs. vzwshop.com.

This entry was edited on 2006/11/11 at 14:07:27 GMT -0500.

Blog Spam Numbers

Friday, November 10, 2006
Keywords: Technology

I'm starting to log the number of blog comment-spam attacks that are launched against this blog. Can you guess how many attempts1 there were over the past 12 hours?

Answer: 66.

So that's about 5 per hour. And it extrapolates to nearly one thousand blog spams per week. There is also a very healthy IP address diversity; there are only a few IP addresses that launch more than one attempt; most of the addresses are unique. These IP addresses also span the globe and come from every continent (to my surprise, even Africa, where there aren't many computers). These are the trademarks of a botnet. And if this is what a small, obscure and low-traffic site like mine gets, I hesitate to imagine what the big blogs experience.

BTW, this is a very good article that people (especially laypeople) should definitely read because it is one of the few articles that actually paints a fairly accurate picture (vs. the inaccurate crap that comes out of the mainstream media) of what computer security nowadays is really all about.

________________
1 None were successful, of course. :)

Rant/peeve of the day: "Security" Questions

Thursday, November 2, 2006
Keywords: Technology

It seems like more and more sites are using security questions. Forgot your password? Not to worry, we'll let you log in if you can answer the question, "What was your mother's maiden name?" or "Where were you born?" or "What is the name of your dog?". So on one hand, people are being instructed to create better passwords that do not contain any personal Google-able information, and on the other hand, more and more sites are offering these weak "back-door" logins that ask questions whose answers a Google search may reveal. But that isn't what bothers me; what bothers me is that most of these places require that you provide these "security" questions (what a misnomer!). Great, so now my relatives (who will know my mother's maiden name, the city where I was born and that I have never owned a dog) can, if they wanted to snoop, request a password reset? At least some sites are courteous enough to let you specify no security question or your own custom question (in which case, I use "What is your password?"). My solution has been to create a secondary secure password for use exclusively as security question answers and to use that as the security question answer regardless of what the actual question is. But it would be much easier if the idiots operating these sites respected user choice more (i.e., make the SQ optional) so that this wouldn't be a problem in the first place.

n.b.: Some places don't use the security question as a back door, but instead as a challenge that must be answered in addition to the password before one could log in. These are, I think, legitimate uses of security questions, but it is still rather patronizing because, while most people are idiots about creating secure passwords and thus such secondary authentication is necessary for them, for people who do practice "safe passwording", this is yet another nuisance.

This entry was edited on 2006/11/02 at 19:26:27 GMT -0500.

New York Times on Security

Monday, October 23, 2006
Keywords: Technology

This NYT article was linked to on Slashdot. It's a good article, definitely worth a read for non-technical people simply because the average person knows so little about computer security. But as a technical person, I have some bones to pick with this article.

  1. Shopping is probably one of the safest things that you can do when surfing from a public hotspot using your own computer. That's because almost all e-commerce websites require SSL for logins and credit card input, so the sensitive traffic is encrypted. There are some sites that don't use encryption during transactions (in which case, using your credit card on such a site in public would be very stupid), but people should not patronize such places even if they are not in public because the failure of an e-commerce site to provide (and require) SSL is a signal that they are probably not too careful with data security in other ways as well. With IE7 and Firefox both coloring the address bar for secure sites, instructing people on detecting SSL should be easy.
  2. All the major services like Google, Yahoo!, and Windows Live require SSL for logins so that passwords can't be stolen. In addition, Google is pretty good about providing optional whole-session SSL for a number of their services including Gmail and Reader so that all traffic--not just your login information--is encrypted.
  3. E-mail passwords for SMTP/POP3/IMAP are still generally insecure, but a lot of people are using webmail these days (which generally have secure logins), and the use of the proper e-mail protocols over SSL is increasing (e.g., Gmail requiring SSL for POP3/SMTP).
  4. The best form of security is still better password control, which the article does not evangelize. People shouldn't use the same password everywhere. I use a weak easy-to-type password for unimportant accounts or accounts without encrypted logins (like my IMDb account). A much stronger password with mixed case, numbers, and non-alphanumeric characters is used for accounts with encrypted logins and sensitive personal information (financial sites, shopping sites with stored credit cards, Gmail, etc.). Finally, there is a password for accounts with sensitive info but no secure logins (I try to avoid having such accounts whenever possible, and I wouldn't access such accounts from a public place). (I also have a fourth password for administrative things like logging into my computer or SSHing into my home network from the outside, but there is really no reason why I couldn't use my other strong encrypted password for this.) This is a much more effective way to limit the scope of security lapses for the average user than instructing him or her on the use of VPN or SSH tunnels, and three different passwords shouldn't be that hard for people to remember. And it is not hard to create secure passwords with non-alphanumeric characters that are still easy to remember; e.g., x*6=42=>x=7 or pass(42%)!=T are memorable, secure passwords with letters, numbers, and symbols.
  5. The article broadly advocates VPNs without discussing the other ways to ensure security. VPNs are rather specialized in their purpose and are generally not necessary. Oh well, what did you expect from the mainstream media?

I guess in a nutshell, the article is good because it highlights the security problems that most people are not aware of, but it then goes into a typical mainstream media overhype and proposed overcorrection. This problem is not new, and a nice solution--SSL--has existed for eons.

This entry was edited on 2006/10/23 at 21:37:30 GMT -0400.

Firefox 2

Tuesday, October 17, 2006
Keywords: Technology

I have been using Firefox 2 as the default browser on my regular machines (vs. running it on a VPC test box) for nearly a month now, ever since the first candidate for RC1 was spun (yes, that's a release candidate of a release candidate), and now that it looks like RC3 is in all likelihood going to be the final version, I thought that I might comment on Firefox 2.

  1. It has a much better memory footprint. It also seems faster and snappier, too.
  2. Session saving and undo close tabs are now built-in features. This is great because I used to get these features from an extension. Unfortunately, the only extension to reliably provide these features was a horrible memory leaker and was somewhat processor-inefficient. Being able to dump this extension alone was worth the upgrade, and probably contributed to #1 above.
  3. New tab management. I often have lots of tabs open, and I often overrun the tab bar, so the overflow scrolling and the drop-down list are extremely useful. The close button on each tab is annoying (since I close tabs by middle-clicking) and the wider minimum tab width is wasteful, but both of those settings can be changed in about:config (see the sketch after this list).
  4. Speaking of about:config, there is a new hidden setting that lets you disable compatibility checking for extensions. There are many Firefox 1.5 extensions that will not install in Firefox 2.0 because the author had set the maxVersion in the extension to 1.5 even though the extension really is compatible with 2.0. This configuration setting will allow me to install these extensions without using the NTT or manually editing the maxVersion code in each extension.
  5. There is a handy button to restart Firefox after installing an add-on. The new session saving will also automatically kick in during such a restart to restore all of your tabs and even what you have filled into forms after the restart. Makes installing stuff much less painful.
  6. Built-in spell check. No more copying-and-pasting into Word to check for typos.
  7. Much better RSS handling; live bookmarks was lame.
  8. Various minor bug fixes, such as the improved password auto-fill handling.
  9. I personally love the look of the new theme and the little visual tweaks that were made here and there. The old tabs looked rather ugly on Windows Classic, and this is the first Firefox where I did not have to manually re-skin the tabs in the default theme to make them look decent in Windows Classic. Now combined with ClassicFox, Firefox looks quite stunning on Windows Classic. But of course, that is a matter of personal taste.
  10. Personally, I do not care much for some of the other major features added in Firefox 2. Phishing protection is useful for Joe Sixpack, but not for me (I have it disabled). I liked the old-fashioned auto-complete instead of search suggestions in the search box (which I have also disabled), and microsummaries (live titles) just seems like a novelty.
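
For the tab settings mentioned in #3, the changes amount to something like this in user.js (pref names as I recall them from the 2.0 betas; verify them in about:config before relying on this sketch):

    // Restore the old tab-bar behavior.
    user_pref("browser.tabs.closeButtons", 3);   // a single close button at the end of the tab bar
    user_pref("browser.tabs.tabMinWidth", 0);    // let tabs shrink instead of overflowing so soon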

I think this is the first time since the Firebird days that I was actually excited for a new release (well, okay, I'm not really excited about it because I have already been running it for a while, but you get the idea).

Firefox in Windows Classic

Monday, October 16, 2006
Keywords: Technology

Firefox drop markers in Windows Classic; before and after:
ClassicFox Screenshot #1

Firefox menus in Windows Classic; before and after:
ClassicFox Screenshot #2

ClassicFox extension for Firefox 1.5 and 2.0. For those people who use [the proper and far superior] "classic" Windows instead of XP's Play-Doh Luna theme.

Score one for Jabber!

Sunday, October 15, 2006
Keywords: Technology

It looks like LiveJournal launched a new IM service a few days ago, based on Jabber/XMPP. Hooray! (If you're asking, "Why should I care about Jabber?", read this post.)

The Electric Car, Part II

Sunday, October 15, 2006
Keywords: Technology, Politics

This is a follow-up to a post that I made back in July.

As I wrote in July and as I will write now, I am not a fan of conspiracy theories. As a result, I approached the documentary Who Killed the Electric Car? with a fair amount of skepticism, but since I had not seen the film when I first wrote about it in July, I held back on passing judgment. Well, I finally got a chance to watch it last night...

  1. This film takes a surprisingly balanced view in that, in addition to presenting its side of the story, it takes the time to explore and address a number of the counter-arguments as well.
  2. The film seems to be more documentary in nature than some of the other politically-motivated "documentaries" in the sense that I did not get the feeling that it was frothing at the mouth with anger. It was fairly rational, and does not try to take the conspiracy too far (unlike Loose Change, which made a number of claims that bordered on the ridiculous).
  3. Lingering objection: If electric cars were really that great, why did they not take off in environmentally-friendly countries? The film indicated that Toyota made electric vehicles, but they are not an American company. Why did they not introduce such vehicles in Japan? Japan's consumer base is more rational and adoring of new technology. They do not have a powerful oil industry, and their government, in certain respects, is less corrupt than ours. The same could be asked about Europe.
  4. Lingering question: Is the film representative of EV1 drivers? Were most EV1 drivers really as satisfied as those portrayed or did these people represent a minority of those who tried out the EV1? I have no reason to suspect that the latter is the case, but I would be interested in knowing the answer to this.
  5. Lingering objection: This still does not address the problem that our electrical infrastructure is in no way suited to handle the sort of strain that electrical vehicles would produce on a large scale. Granted, a hydrogen infrastructure would be even more costly, and the cost of upgrading the electrical infrastructure could easily be pushed off to the utilities who would stand to profit in the long run from this.

Overall, I think that the film is surprisingly good and presents the case without exhibiting much of a tin-foil-hat syndrome. Go watch it.

Of Spam, Malware, and Kernels

Saturday, October 14, 2006
Keywords: Technology

Part I: Fun with Malware

According to my server logs, it would appear that one of the images from my gallery has become the background image of a number of different MySpace profiles (these are total strangers who probably found the image through an image search). So I thought that I might as well amuse myself by looking at these log entries. For example, while there is a significant number of people who visit my blog using alternative browsers (though they are still a minority), virtually all of the people who access the MySpace profiles that embed my file as a background are still using Internet Explorer.*

This glimpse into browsing habits of the "normal" world is not particularly interesting, except for a few entries that caught my eye. These entries had a user-agent string of Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322; SpamBlockerUtility 4.8.0). How odd, I thought, that someone would have a spam blocker add-on installed for a web browser. As I expected, Google search results indicate that this "SpamBlockerUtility" add-on that this particular person had is indeed malware. I then went to SBU's website for the heck of it and quickly discovered that this is the Mark Foley** of software in terms of hypocrisy. It describes spam as "harmful and irritating" (it is funny to see a malware company say that) and claims that by blocking spam, it can save bandwidth (right, because client-side spam software can now magically block spam on the server side; and what about the adware bandwidth?). But the fun doesn't stop there! They also bundle such useful and relevant things as a thousand different emoticons to make your e-mails "cool" (no doubt a bundling agreement with one of the malware companies that specializes in those emoticon toolbars). Their website even has a section on helping users with installation problems: it instructs people on how to log in as an administrator. Anyway, I had a great laugh looking through their website. Unfortunately, as evidenced by these log entries (and by the even larger number of IE systems that have "FunWebProducts" installed), there really are people who fall for these things.

Part II: Kernels, Anti-Virus, and the European Union

As reported by Slashdot, Microsoft, under pressure from European antitrust officials, is opening kernel-level access for third-party anti-virus packages, like McAfee and Norton. The Washington Post article frames this as an issue of anti-trust, which is incorrect (and the officials in Europe are equally confused about this matter). The heart of the matter is that Microsoft locked down kernel access in Vista, and now the makers of Norton and McAfee are complaining that this is an attempt to lock them out of providing anti-virus for Windows.

  1. There are other (better) third-party anti-virus makers who have made their products Vista-compatible without needing to get through the kernel lockdown. Most importantly, even Microsoft's own anti-virus package does not require or get kernel-level access. So how exactly is this an antitrust issue?
  2. The whole point of the kernel lockdown was to make the system more secure by limiting the amount of system access that any piece of software could have. Granting anti-virus software an exemption is the digital equivalent of allowing law enforcement officers to freely break the law.
  3. Norton AntiVirus does not exactly have a great track record and can sometimes cause more problems than it solves. Kernel access? Bad idea.
  4. Real computer security does not come from anti-virus. It never has, and it never will. Real computer security is accomplished through educating the user. Anti-virus is snake oil: at best, it is a band-aid; at worst, it is poison.

________________
* And people wonder why most geeks have so little respect for MySpace...

  1. One should not use a photo as a background image on a site, at least not without first editing it to soften the contrast; these MySpace users appear to lack a basic sense of aesthetics.
  2. These MySpace users seem to be unaware of etiquette regarding things such as hotlinking.
  3. Not only is hotlinking poor etiquette, but embedded hotlinking allows the host server (in this case, me) to log and spy on the visits to their site. For example, I could tell how frequently a particular visitor (uniquely identified by IP and UA) reloads/revisits the profile; how is that for creepy? Well, this is, after all, how graphical counters work.
  4. Internet Explorer?
  5. Installing a spam blocker for a web browser. Riiiight.

** Sorry, I couldn't resist; it's the fad these days, and I guess I am sometimes a slave to fashion.

Google + YouTube = Disaster

Monday, October 9, 2006
Keywords: Technology

It all started out as a rumor. Which the Wall Street Journal then picked up. Eventually, everyone was talking about it. I, however, did not believe in this rumor because it made absolutely no sense:

  1. Google is buying a pile of liability. YouTube is flooded with illegal material that infringes copyright. YouTube has not been sued yet because they are a small independent operation with no money. If Google owns YouTube, then there is suddenly a lot of money that could be won through infringement suits. This alone makes Google's buyout of YouTube incredibly stupid.
  2. Google will eventually be forced to clamp down on copyright infringement on YouTube, thus sanitizing it to something similar to Google Video. Illegal videos are extremely common and popular (popular enough to draw even Bill Gates into a bit of piracy). Any clampdown will effectively destroy the attractiveness of YouTube and decimate its user base.
  3. Google already has a decent video service of its own. Once the YouTube user base is destroyed, Google would just have a shell of a company that is no better than its own video service.
  4. There is no way that an unprofitable YouTube could be worth over $1.6 billion USD.

As it turns out, Google really did bite. Well, my criticisms still stand, and I am dismayed at this decision. This also reinforces my belief that Google really does not have much in the way of a grand scheme. If Google really did have a plan, there would not have been the recent decree to their employees telling them to stop launching products and to instead focus on doing things like integrating existing ones into some sort of coherent strategy. Google is a company built on accidental success (Google's CEO: "We throw it against the wall and see what sticks.") and driven by only a vague silhouette of a strategy, and this YouTube deal (which is so reminiscent of the sort of reckless drunk-on-success deals of the dot-com bubble) certainly fits that characterization.

This entry was edited on 2006/10/09 at 20:21:36 GMT -0400.

Digg and the Fallacy of Web 2.0

Friday, October 6, 2006
Keywords: Technology

I rarely visit Digg, but I do glance at it every now and then. One of the items on the front page last night when I decided to visit on a whim was about cars running on water. It had over 300 diggs by then, and as of this morning, the count was over 700. It's quite a high number of diggs for something that is pure quackery that should never have even made it to the front page! In contrast, when AOL offered free (well, sorta free) domain names through their My eAddress service, that got only 17 diggs and did not make it to the front page.

There is a lot of confusion over what exactly this new "Web 2.0" buzzword is all about, and the most widely accepted notion of Web 2.0 is that there is a new paradigm of user-generated content*, like blogs, "democratic" news (Digg), inane YouTube videos, etc. But there is a problem with this, as Digg has illustrated: the quality of the content of Web 2.0 is only as good as the quality of the collective, and unfortunately, this world is brimming with idiots. Of course, even places with editorial oversight like Slashdot are far from perfect, as people may remember from the time when they reported on a compression scheme that could compress arbitrary random data (which, by the way, is patently ridiculous and simply impossible--a simple pigeonhole argument shows that there are more possible inputs of a given length than there are shorter outputs, so no lossless scheme can shrink every input), but those who criticize the failure of Slashdot's editorial board will likely have a heart attack when they see what sort of things make it to Digg's front page each and every day.

Needless to say, I have little confidence in the wonders of Web 2.0. And as distasteful and politically incorrect as this may sound (esp. coming out of a libertarian like myself), there is little wonder why our Founding Fathers did not advocate the direct election of Presidents.

________________
* Ironically, this is what the web was about at the beginning, at least before the dot-com gold rush; so we're actually going from Web 1.1 back to Web 1.0, but apparently, nobody likes the sound of that.

Dell, Gateway, and RAID-0

Friday, October 6, 2006
Keywords: Technology

I recently noticed that both Dell and Gateway are offering RAID-0 hard drive setups. What was interesting about this was that there were only three types of hard drive options that Dell and Gateway offered: single drive, RAID-0, and RAID-1. There was no option to have a second drive without RAID.

While it is good to see companies offering RAID-1 to mainstream customers, it is surprising to see them offer RAID-0. First, nobody should ever be running hard drives in a RAID-0 setup. Well, okay, there are some situations where RAID-0 makes sense, but they are rather special and they certainly do not apply to Joe Sixpack buying from Dell or Gateway. These are people who would be unable to make much use of (or even notice) the performance boost from RAID-0, and these are also people who tend to be very lax about data backup. In other words, the average computer users that these companies are selling RAID-0 to represent a group that is probably the least likely to benefit from RAID-0 and is also probably the most vulnerable to the greatly increased risk of catastrophic data loss posed by RAID-0.

And it is not the offering of RAID-0 that is troubling, but rather how they are offering it. First, both companies label RAID-0 as "performance", and in the case of Dell, they even have a page touting the performance advantages of RAID-0. Okay, there is nothing wrong with that since RAID-0 is faster. Second, they do not post any information whatsoever about the risks of RAID-0. So not only are there no warnings presented when a user chooses RAID-0, but if a user actually takes the unusual step of reading what RAID-0 is all about on the Dell website, they will not find a single word talking about the downsides of RAID-0. If a consumer chooses RAID-0 with the knowledge of the huge risk that is being taken, that is fine: it is their choice. But that is not the case if a consumer chooses RAID-0 because s/he has been told incomplete information (and before free-market advocates can criticize that sentence, I would like to remind everyone that one of the conditions necessary for free markets is symmetric information, and this is a violation of that). Third, RAID-0 also sits at the most attractive price point. A single 500GB hard drive costs more than two 250GB drives, so the user is presented with the option to get 500GB worth of total capacity for less money and for improved performance. Who could resist? Of course, that two lower-density drives could cost less than a single high-density drive is nothing new, but in the past, people who took the two-drive option were just given two hard drives, without getting them chained together into a reckless RAID-0 setup. Furthermore, that non-RAID option no longer exists at either Dell or Gateway: if you want to take advantage of the lower cost of two hard drives, you are now being forced to take the RAID-0 option. Finally, this setup is rather prominently marketed and is being presented as a mainstream option instead of a "for people who know what they are doing" option. In the past, Dell would offer people a choice over the number of hard drives and then, as a separate advanced option, they offered the user a chance to chain them together in RAID. Not any more: the RAID is built straight into the main hard drive selection. In fact, it is so mainstream that there is one Gateway offer where there was a free upgrade to the 500GB RAID-0 setup.

So why would they do this? I have a theory: over the years, Joe Sixpack has had it drilled into his dull little mind that "C:" is the hard drive and that "D:" is the cupholder CD/DVD drive. Imagine the confusion when Joe Sixpack now has a computer where "D:" is a second hard drive and the CD/DVD drive is now "E:". Furthermore, would Joe Sixpack really know how to use a second hard drive? In all likelihood, he will install programs to their default locations (on C) and he will save his files in "My Documents" or on the Desktop, which is also on C. And eventually, the C drive may fill up while D remains empty (preemptive note to Mac/Unix people: NTFS is perfectly capable of Unix-style drive-in-a-folder mounting; in fact, I use it extensively, but even mounting cannot completely solve this problem because inevitably, Joe Sixpack will be wondering why one of his folders is full and why the others are not, or vice-versa). Now that the new Intel chipsets make it possible to do RAID without installing extra hardware, companies have realized that they can wipe away the confusion that two hard drives will cause average computer users while even touting a bit of a performance boost. Of course, the cost of this blissful ignorance is data insecurity, especially since studies show that members of the Sixpack family rarely back up the digital assets that they value the most: photos, personal documents, etc.

Fun with JavaScript

Wednesday, October 4, 2006
Keywords: Technology

I wrote a Firefox extension yesterday. It is not anything earth-shatteringly great. Just a simple function that I had wanted, implemented as a very simple, small, and lightweight extension. There were only a few dozen lines of code, and it was more of a two-hour learning exercise.

Now what was interesting about this experience was that I got to work with something that I hadn't touched since 1998: JavaScript, and this post is mostly about that experience. But before I begin, I should do a quick backgrounder. A large number of people do not know that most Mozilla products are kinda like elaborate webpages. At the core is the Gecko rendering engine, written in C++ and natively compiled. This handles all the grunt work and the things that are specific to the operating system, like dealing with graphics (GDI+, OpenGL, etc.), the file system, etc. In the case of Firefox (or Thunderbird, Sunbird, Songbird, Seamonkey/Mozilla, etc., but not Camino), the browser itself is written in a markup language called XUL. Like XHTML, it is an XML-based markup language, and it works an awful lot like HTML but with different tags. And Firefox is just one really big and elaborate collection of XUL, so Firefox is in a way akin to a big collection of HTML webpages. And just like HTML, the style and layout in XUL is controlled by CSS. This, by the way, is what makes up Firefox skins: they're just a collection of CSS files and graphics, and it's the CSS files that specify where things should be placed, how much spacing there is between various items, what colors to use, what images to use, etc. The brilliant thing about this setup is that it's cross-platform. Just as a webpage will look the same in Windows as it does in Linux, the XUL user interface is equally cross-platform. And since they already have Gecko to render webpages, why not save themselves the trouble and have Gecko render the user interface too? It is also brilliant in that it provides for a fairly clean separation of the front-end user interface from the back-end, and XUL+CSS makes changing the UI and custom skinning much easier than most other skinning systems because a lot of people are already familiar with HTML+CSS.

While XUL+CSS is, in my opinion, ingenious, the Mozilla way of doing things does have one soft spot. A webpage isn't just about layout and design. Function is what matters, and something's gotta happen when a user clicks on a button or selects an option or else all you've got is a pretty picture. This is where the JavaScript comes in. Yes, the Firefox user interface is powered by JavaScript. Click "OK" in the options dialog, and you're calling up a JavaScript function that saves the options that you selected. From a practical perspective, I can see why Mozilla went with using JavaScript. It fits this "webpage" style of software, it means that they can just reuse the JavaScript interpreter already in Gecko instead of scratching out a new language, it--along with XUL and CSS--makes working on Firefox and Firefox extensions relatively easy (at least compared to say, working on an IE extension; you just need a Notepad and WinZip; no compilers needed!), and it means that they can rely on the great pool of existing JavaScript programmers. But hold on a minute... JavaScript programmers?

As the name would suggest, JavaScript is a scripting language that borrows some of the style of Java (though the two are unrelated). With the rise of Firefox and of AJAX, people are starting to take JavaScript a bit more seriously instead of snickering at the very concept of a "JavaScript programmer", but that doesn't change the fact that the language is, fundamentally, a tacky curiosity. Putting aside JavaScript's following of Java's moronic dogma towards the OO style, the fact that JavaScript was originally conceived as a lightweight scripting language designed for trivial frills is evident. This is a language where, even today, changing the third byte of a string requires blowing up the string and manually piecing it back together. Ugh.

Needless to say, my experience with coding up the extension didn't go quite as smoothly as I had hoped. I hadn't coded JavaScript since 1998, and I needed to check the language reference, but that's okay. Every programmer needs to consult a language reference. The frustration was just dealing with the quirks and limitations of the language itself. For example, the ability to access a character in a string using [ ] but not being able to modify it that way is a rather annoying limitation of the language, and a very counter-intuitive one at that. While there have been efforts made over the years to mend JavaScript's shortcomings, the need to maintain backwards compatibility means that JavaScript can never be truly and fully reformed. JavaScript is, IMHO, a bastard language that should have been aborted at birth, but unfortunately, it has survived. Fortunately for me, my extension was simple and short, so I only had to deal with JavaScript for a couple of hours, and since I do not intend on becoming a "JavaScript programmer", I don't think I will have many more run-ins with this language. I do, however, have some newfound respect for the poor souls who do have to do major amounts of coding in JavaScript. At least people working on the Firefox interface using XUL+CSS+JS are lucky: they don't have to deal with the fact that IE, Opera, Safari, and Mozilla all handle JavaScript differently. It's a wonder how people who do AJAX haven't jumped off a cliff yet.
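
For what it is worth, Python happens to share this particular limitation (its strings are also immutable), so here is a rough sketch of the same slice-and-rebuild dance that the JavaScript code has to go through just to change one character:

    s = "abcdefg"

    # Reading a character by index works fine...
    print(s[2])            # -> c

    # ...but assigning to it does not: strings are immutable.
    try:
        s[2] = "X"
    except TypeError as e:
        print(e)           # 'str' object does not support item assignment

    # The only option is to blow up the string and piece it back together.
    s = s[:2] + "X" + s[3:]
    print(s)               # -> abXdefg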

The Google Talk Tragedy

Monday, September 25, 2006
Keywords: Technology

It has been over a year since Google Talk was released. It has, unfortunately, not caught on. While this does not bode well for Google, the real loser here is Jabber and the entire Internet community as a whole.

What the heck is Jabber, and why should I care?

To describe Jabber, one needs to first understand the essence of the Internet. I am sure that most people have met someone who thinks that www.msn.com is "the Internet". There are many who think that the Internet is some monolithic and centralized thing, which it is not. The Internet is like a road network. Like a road network, it is composed of smaller networks and road segments that happen to be connected to each other (a country's road network contains many different local city networks and even private driveways). There is no central authority (the Federal government may have control over Interstates, but it has no control over a local residential street); instead, the Internet is an interconnected collection of smaller independent networks.

This decentralized nature of the Internet is mirrored in how e-mail works. Like the Internet, there is no central e-mail authority and there is no centralized magic hand that makes e-mails go where they are supposed to go. When you e-mail bill@example.com, the e-mail server that you are connected to asks example.com's name server what its mail exchange (MX) server is. example.com's name server might reply by saying that its MX server is located at mx.example.com. Your mail server then hands the mail over to mx.example.com, and mx.example.com then takes it from there: if mx.example.com recognizes the user bill, it will place the mail in his inbox; otherwise, it will report that the e-mail address is invalid. If example.com does not run its own mail server and instead outsources its e-mail handling to a mail service (for example, Google Apps for Your Domain), then its name server would report that its MX server is located at aspmx.l.google.com and the mail would be sent there instead. Like the Internet, there is no central authority in mail. Mail works by millions of decentralized mail servers passing messages to one another, and these mail servers know which server to pass messages along to based on what MX servers are listed by the domain after the @ in the e-mail address.
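
For the curious, that first MX lookup step is easy to sketch. The snippet below uses the third-party dnspython package (not part of the standard library), and example.com is just a placeholder domain:

    # A minimal sketch of the MX lookup that a sending mail server performs.
    # Requires the third-party "dnspython" package (pip install dnspython).
    import dns.resolver

    domain = "example.com"  # the part after the @ in the address

    answers = dns.resolver.resolve(domain, "MX")
    for record in sorted(answers, key=lambda r: r.preference):
        # Lower preference values are tried first.
        print(record.preference, record.exchange)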

Jabber is an instant messaging system that works using the same principles as e-mail. Jabber screennames take the form of user@domain-name and as a result, Jabber screennames are usually identical to someone's e-mail address. When you send bill@example.com an instant message using Jabber, the server that you are connected to asks example.com's name server what its Jabber server is. example.com's name server might reply by saying that its Jabber server is xmpp.example.com. Your server then hands the instant message over to xmpp.example.com, and xmpp.example.com then takes it from there: if xmpp.example.com recognizes the user bill and this user is online, he will receive the message. If example.com does not run its own Jabber server and instead outsources to a third party (for example, Google Apps for Your Domain), then its name server would report that its Jabber server is xmpp-server.l.google.com and the message would be sent there instead. Does this seem awfully similar to how e-mail works?
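
It is indeed the same dance. The one technical difference is that a Jabber (XMPP) server finds its peer with a DNS SRV lookup rather than an MX lookup, but the shape of the query is nearly identical; here is a sketch mirroring the MX example above, again using dnspython with a placeholder domain:

    # Server-to-server XMPP discovery: ask DNS for the _xmpp-server SRV record.
    # Uses the same third-party "dnspython" package as the MX sketch above.
    import dns.resolver

    domain = "example.com"

    answers = dns.resolver.resolve("_xmpp-server._tcp." + domain, "SRV")
    for record in sorted(answers, key=lambda r: (r.priority, -r.weight)):
        # Connect to the listed host and port, lowest priority number first.
        print(record.priority, record.weight, record.target, record.port)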

In contrast, existing proprietary instant messaging systems are closed. If we look at AOL's new @aim.com e-mail service for AOL Instant Messenger users, we can easily see the difference between a closed system (AOL Instant Messenger) and an open system (@aim.com e-mail). When using a closed system like AOL Instant Messenger, you can send instant messages only to other users @aim.com and you can receive instant messages only from other users @aim.com. When using an open system like e-mail, you can use your @aim.com e-mail address to e-mail people @gmail.com, @hotmail.com, @verizon.net, etc. And you can receive e-mails from those places as well. Could you imagine the enormous outcry there would be if AOL suddenly said that people with @aol.com and @aim.com addresses can only e-mail each other and not receive e-mails from the outside? This is because people are used to e-mail being an open system and few have even considered the possibility of instant messaging working in the same open, federated fashion as e-mail. If e-mail had turned out the same way that IM did and if you had friends on six different e-mail networks, then you would need six different e-mail accounts just to get in touch with them! It makes no sense for e-mail to operate in this fashion, and for instant messaging to operate like this is equally absurd.

Quick summary: The majesty of Jabber...

  1. A Jabber address is similar to (and can be the same as) an e-mail address. In the future, as voice and video are added and as the grossly obsolete phone number system is finally stamped out, people will be able to communicate with each other using a single unified contact address for e-mail, chat, voice, and video.
  2. Jabber is a decentralized, open and free system. There is no central Jabber authority.
    • It is not proprietary.
    • People can communicate with each other regardless of what network they are on and who is providing their Jabber service.
    • The Jabber system is more reliable. Server outages will only affect the people who rely on that particular server and not the entire instant messaging world.
    • Individual server operators can have more control over local operations and can customize and innovate on the service that they offer while still maintaining compatibility with the rest of the network. For example, when Google introduced voice calling over Jabber, people using Google's service could do voice chats with one another. Other Jabber servers can choose whether or not to adopt this new feature, but they can still communicate with Google's servers using the traditional text method.
    • Companies can run their own Jabber server and provide all their employees with a Jabber address. If they wish, they could create a secure, isolated Jabber network for internal use only, much like some internal-use-only e-mail systems.

Google rides in on a white horse...

Jabber was born in 1998, but it existed in the shadows of obscurity. Some companies adopted it for internal use and some ISPs such as EarthLink set up Jabber servers for their customers, but for the most part, it never really took off in a field dominated by Microsoft, Yahoo!, and AOL. A year ago, when Google joined the IM wars with Google Talk, it did things the non-evil Google way and used Jabber instead of proprietary technology. Finally, a major IM provider has adopted Jabber!

...and then it stumbles and falls off the horse...

The problem with Google Talk is that I love Google Talk. When it came out, I abandoned the other IM services in a heartbeat, switched to Google Talk, and never looked back. Putting aside the fact that Google has done such a wonderful thing by supporting Jabber, I loved Google Talk because it was an IM client that seemed to have been written for computer nerds. The interface was beautifully spartan. Text was used in place of pictures wherever possible. There were no flashy colors. Emoticons were properly displayed as text, and early versions did not feature those hideous annoyances known as buddy icons. It was, in my opinion, everything that instant messaging software should be, which meant that it was doomed to failure because people like me are a small minority in a sea of people who love colors, cutesy fonts, graphical smilies, buddy icons, etc.; for the vast majority of IM users, Google Talk was completely unacceptable. And so despite the backing of the coolest name in technology and despite the popularity of the Gmail service that Google Talk was linked to, it has faltered and failed to take the world of instant messaging by storm. And that does not bode well for Jabber and for the future of open instant messaging. Google fumbled, and it tragically squandered a golden opportunity.

...but there is still hope...

When Google bought a 5% stake in AOL last year, it also announced that Google Talk and AOL Instant Messenger would one day be able to talk to one another. The easiest way for this to happen is for AOL to offer a Jabber transport which would be like a gateway for Jabber servers to communicate with AOL's servers. AOL's introduction of @aim.com e-mail addresses and their marketing campaign trying to get people to think of the AIM screenname as screenname@aim.com will help greatly in AOL's conversion to Jabber. So if AOL does implement Google Talk interoperability (it will soon be a year since the making of that announcement, and there has been no progress update) and if it does it in the easiest way available to them (which is to Jabberize their network), then there may still be hope for Jabber yet.

Domain Registrar Product Rave

Tuesday, September 19, 2006
Keywords: Technology

A number of years ago, when I registered a domain at GKG, I was quite pleased with what they had to offer. Domains were only $10 per year, which was less than the best-known budget registrar at the time, Dotster (which is the domain registrar that I was using before GKG). In addition, they offered free e-mail forwarding and e-mail privacy in WHOIS (instead of listing my real e-mail in WHOIS, they listed a @whois.gkg.net e-mail that then forwards to my real e-mail). And for just $5 more per year, I could get a POP3 mailbox for my domain. This was just a couple of years after the end of the Network Solutions monopoly on domain registrations, so competition was just starting to warm up, and at the time, this was a pretty nice package.

So for nearly half a decade, GKG has served me well. But they've been stagnant while the rest of the domain registration industry has undergone massive changes due to intense competition. Network Solutions, once the holder of a government-sanctioned domain registration monopoly, now has a market share of less than 9%, and GoDaddy, helped by their infamous Super Bowl ads, is now the top registrar, but only with a 17.5% market share. It's a diverse market with hundreds of ICANN-accredited registrars, and of the 77 registrars with over 100,000 domains, GKG is only the 56th largest.* In addition to lacking the sort of colorful marketing of registrars like GoDaddy, the package that GKG offers with each domain has remained the same over all these years. Their price for a domain registration has dropped slightly to $9 to match GoDaddy's price, and the price of the POP3 mailbox has increased to $8 (not that it matters much, as I now use Google Apps for Your Domain for domain e-mail services; see my previous blog entry for more on that).

This brings me to the 6th largest registrar, Schlund+Partner, better known as 1&1 Internet. According to Netcraft, its $6/yr registration ($3 less than GoDaddy's or GKG's price) is the lowest regular non-promotional price of all the major registrars. And it offers basic DNS hosting/management,** a free 1GB mailbox with POP3 and IMAP support, e-mail forwarding/catchall, and free private WHOIS listings (which cost an extra $8 at GKG or $5 at GoDaddy) so that in addition to hiding my e-mail address from the WHOIS database, it also hides my name, address, and phone number. So for $6, I can get what GKG would've charged me $25 for.*** One of my experimental domains is about to expire in a few months, so instead of renewing it with the existing registrar, I'm now in the process of transferring it to 1&1. All my other domains won't expire until late 2007, so I won't bother with transferring them until later.

________________
* And for those who care, DreamHost is 58th, but at its current rate of growth, it should overtake GKG very soon.

** It's not very advanced, but it does save me the trouble of getting a ZoneEdit account for sites that have fairly simple DNS needs.

*** Of course, I don't actually choose to go with the $25 package from GKG because GAFYD is now providing e-mail and while private registrations are nice to have, they are not necessary.

The Domain Wars

Monday, September 18, 2006
Keywords: Technology

Back in November 2005, Microsoft announced a cool new service called Windows Live Custom Domains. With this service, if you owned the domain name example.com, you could now create Hotmail (a.k.a. Windows Live Mail) accounts of the form user@example.com. You provide the domain name, and Microsoft provides the server and infrastructure necessary for an e-mail service at that domain name. It's useful for small companies who can't afford to run their own mail server but who want to give their employees professional-looking e-mail addresses at the company domain. It's also useful for owners of personal domains who want e-mails at their own domain name. While this sort of service is hardly new, there was one important difference: this is free. Microsoft had become the first of the major e-mail providers to offer a free domain e-mail service.

A few months later, in February, Google launched Gmail for Your Domain (which was expanded on and renamed to Google Apps for Your Domain last month). Google's service is similar to Microsoft's. If you own example.com, you could now have a Gmail account at user@example.com and you could sign into Google Talk using user@example.com. The differences between the two are the same as the differences between Gmail and Hotmail: there's more space (Google offers 2 GB vs. Microsoft's 250 MB), a better webmail interface (though that's subjective), and most importantly, secure SMTP/POP3 access, which allows Gmail--and thus Google Apps for Your Domain--to be used with proper e-mail software (e.g., Outlook, Outlook Express, Thunderbird, Eudora, Apple Mail, etc.).

Recently, AOL has joined the fray as well with the My eAddress service. Now this is where things start to get interesting. While Microsoft and Google are BYOD (Bring Your Own Domain) services where you provide a domain name that you own, AOL will register a domain of your choice for you. Apparently, for free. Which means that you won't have to pay for a domain registration, which will save you about $9 per year. Like Google, AOL is offering 2 GB of space. And like Google, AOL's e-mail service can be used with proper e-mail software like Outlook Express, Thunderbird, etc. However, while Google offers support for e-mail software through the POP3 protocol, AOL is offering it through the arguably better and more fully-featured IMAP4 protocol. And if AOL is offering all this for free, it raises the question: why the heck are they doing this?

In the case of Microsoft, it is a rich company trying to become the gatekeeper of the Internet by pouring money and resources into establishing their new Windows Live brand and strategy. Furthermore, without POP3 or IMAP support and with a much smaller storage limit, Microsoft isn't offering very much to begin with. In the case of Google, they are hoping to offer the Google Apps for Your Domain service as a premium service--a "business solution" for small companies and organizations. Google is currently in a beta stage, so the service is free for now, and they have promised to keep the accounts created during this beta period free. But what about AOL? Their offering is the grandest: free domain name and IMAP support, but do they have a strategy that merits this extravagance? They could offer the service for free now and then make it a premium pay service later on. And since AOL is doing the domain registration for you, they will technically own the domain, which will give them a sort of blackmailing power over you if they decide to make it a pay service (pay up, or forever lose these e-mail addresses), whereas if this was a BYOD service and you owned your own domain, you could easily jump ship and set your domain's MX records to point to any other service offering e-mails for domains or even to your own servers and thus preserve all those e-mail addresses. However, AOL has stated that they are working on allowing people to use their own domains and that BYOD is not allowed at the moment only because the service is so new that they haven't gotten around to supporting that yet. So if AOL follows through and offers BYOD support, then, if you opt to use your own domain, there will be nothing to prevent you from switching service providers the second they decide to do something unpleasant with their service (granted, the typical AOL user would not be tech-savvy enough to know how to do this). Additionally, unlike Google, AOL has not stated intentions to make it a pay service, so a course reversal there would bring bad PR for them at a time when they're spending so much time and money trying to heal the AOL brand image. So I think that AOL is caught up in a me-too fever and is outdoing everyone else simply for the sake of outdoing everyone else and not for any rational business purposes, which certainly won't be a first for AOL. ;)

Finally, where is the last of the Big Four? Yahoo! has been offering services like this for many years now, but it has always been a premium service. According to their website, they will charge you either $35/yr for a single address at your domain, or $120/yr for up to 10 addresses at your domain (in contrast, Microsoft's free service offers 40 addresses per domain, AOL offers 100, and Google offers anywhere from a minimum of 25 to the thousands, depending on how many you request at signup). The prices are lower if you provide your own domain. So it would appear that this crazy domain services fever hasn't reached Sunnyvale yet.

This entry was edited on 2006/09/18 at 17:31:35 GMT -0400.

Fun with Google Trends

Saturday, August 26, 2006
Keywords: Technology

Oh my, this is such a fun toy.

Perl vs. Python. Now you know which language rules supreme! Note that Python has an unfair advantage in that its trendline also includes people looking for snakes and Monty Python. Unfortunately, the direction of the trends is a bit worrisome. Here's to Parrot/Perl6!

Slashdot vs. Digg. This is unfortunate. Digg users are often ill-informed, immature, and very knee-jerk. Not that Slashdot's mob is much better, but at least the mob doesn't rule with such impunity at Slashdot.

Windows vs. Linux vs. FreeBSD vs. Mac. Ick. Especially how the Mac is creeping up...

iPod vs. Porn. iPods may have surpassed booze in popularity on college campuses according to a recent survey, but at least they haven't surpassed porn... yet. Am I the only one who's worried about this unhealthy obsession that our society has with iPods?

Bush vs. Kerry. It's interesting to see the huge drop-off after elections.

Wordpress vs. LiveJournal. The trends are moving in the right direction... *ducks*

Wordpress vs. LiveJournal vs. Blog vs. MySpace. On the other hand, once you toss MySpace into the picture, you get a much more terrifying and sobering trend. This is disgusting. Look at how it even outstrips blogs in general.

Firebird vs. Firefox vs. Perl vs. Apache. Witness the name-change from Firebird to Firefox (the Phoenix->Firebird name change can't be mapped because it's out of the time frame and because Phoenix is also a major city name) and Firefox displacing the greatest success of open source, Apache (I think Apache's the only major open-source product to have well over a 50% share of its market, no?).

iPod vs. MySpace vs. Porn vs. Bush vs. Microsoft. And finally, the big show-down. *shudders* That MySpace trendline is really creepy.

This entry was edited on 2006/08/26 at 13:50:10 GMT -0400.

HTTPanties

Wednesday, August 23, 2006
Keywords: Technology

Someone directed my attention to these products listed at ThinkGeek; there are even "action shots". I think it's hilarious; I've never thought of HTTP response codes in that way before. But now that I look back at the list of HTTP response codes with my mind in the gutter, there are actually quite a few of them that are open to devious readings, including:

202 Accepted
300 Multiple Choices (think about that one...)
400 Bad Request
401 Unauthorized
402 Payment Required
405 Method Not Allowed
406 Not Acceptable
414 Request-URI Too Long (see page 69 of the HTTP/1.1 specs)
416 Requested Range Not Satisfiable
417 Expectation Failed
502 Bad Gateway
503 Service Unavailable

Speaking of the kinky, this Wall Street Journal op-ed about the "fertility gap" is worth reading.

This entry was edited on 2006/08/23 at 18:26:44 GMT -0400.

Intel's Botched Marketing

Wednesday, August 2, 2006
Keywords: Technology

In an ideal world, marketing serves to inform the consumer so that the consumer can make a well-informed decision. Of course, in the pursuit of sales, marketing these days rarely informs and often brainwashes, but there are still some cases, such as computer chip marketing, where there isn't as much of a disconnect between increasing sales and being genuinely informative, and in such situations, failure to inform could hurt sales...

For over a month now, Intel has been advertising in print media the new server/workstation processors that it launched in late June. These chips were based on the same new architecture (NGMA) behind the new Core 2 Duo chips launched last week, meaning that they deliver dramatically better performance while consuming less power (and thus generating less heat*). Intel, wishing to inform the public of these great chips, loudly proclaimed in its ads that these chips perform better while consuming less juice--a spectacular 80% gain in performance per watt, all of which are true claims that many people have now verified. So... what's the problem?

Intel introduced its "Xeon" brand for server/workstation processors back in 1998, in the days of Pentium II. While Intel's other brands have undergone name changes, such as the well-known Pentium to Pentium II to Pentium III to Pentium 4 progression, the very recent Core (Duo) to Core 2 (Duo) progression, and the progression of Itanium to Itanium 2 for high-end "big iron" servers, Xeon has remained Xeon. So through nearly a decade of architectural overhauls, instead of progressing the name from Xeon to Xeon II (the switch to Netburst) to Xeon III (the switch to dual-core) to Xeon IV (the switch to NGMA), the name of the new chip that Intel hopes will rescue it from its arch-rival's increasingly-popular Opteron is "Xeon". This means that Intel's ad proclaiming the wonders of these new chips had no way of identifying these chips except by calling them the "new Xeons".

All this poses several problems. First, in late May, Intel had released a "new" line of Xeons: the 5000 series (which referred to the product numbers, like 5030 or 5060). These chips were only a minor revision of its existing line, and offered only a modest reduction in power consumption while maintaining roughly the same performance. Just a month after the introduction of the 5000 series, Intel released the real new Xeons that are the topic of this discussion. These chips were not a minor revision of the existing product line; they were a complete redesign and offered enormous reductions in power consumption while dramatically boosting performance. For people not familiar with the world of Intel microarchitectures (and most are not), this is rather confusing, as there were two "new" product lines introduced within a month of each other, and only one of them was worth any attention.

Second, the new NGMA-based processors were given product numbers in the 5100's instead of something that would have more clearly identified them as a new product, like something in the 6000's, especially since the 5000's were just released a month ago--this idiotic product numbering falsely suggests that the 5000 series were the new chips and the 5100 series were just a minor revision of the 5000 series. To make matters worse, Intel's ads never even specified the range of product numbers that its new NGMA chips would occupy. The ads didn't say "new Xeon 5100 series"; they just said "new Xeons".

Third, the new 51xx Xeons have lower clock speeds (GHz) than the older 50xx Xeons. However, since the new 51xx Xeon can do about 50-100% more work per GHz than the old 50xx Xeons, a 51xx Xeon clocked at 2.0 GHz can easily outperform a 50xx Xeon clocked at 3.0 GHz. Unfortunately, Intel, which has marketed chips based solely on GHz for many years, has yet to fully adapt its marketing to informing people that performance isn't measured in GHz, but rather in GHz times IPC (instructions per cycle), which dampens its marketing push for its new products (all of which have much higher IPCs). They've made attempts to break away from the old GHz way of measuring performance, but these attempts have been incoherent at best.
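
A quick back-of-the-envelope comparison makes the point. The IPC figures below are round numbers assumed purely for illustration, not Intel's published data:

    # Illustrative only: the IPC values below are assumed round numbers,
    # not Intel's published figures.
    def relative_performance(clock_ghz, ipc):
        # Rough throughput is clock speed times instructions per cycle.
        return clock_ghz * ipc

    old_xeon = relative_performance(clock_ghz=3.73, ipc=1.0)  # older 50xx-style chip
    new_xeon = relative_performance(clock_ghz=3.00, ipc=1.6)  # NGMA-style 51xx chip

    print(round(old_xeon, 2), round(new_xeon, 2))  # 3.73 vs. 4.8: the "slower" 3.0 GHz part wins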

Now imagine that you are a graphics artist who does professional image editing, and thus, you need a powerful workstation. However, since you're an artist and not a computer scientist, you have no idea what all this fuss about new microarchitectures is about. You go to Dell to order a Dell Precision workstation featuring Xeon chips. In the configuration screen, you see a list of processors to choose from. They all bear the same Xeon brand name. Their model numbers are all similar (and how would you know that there is such a huge difference in that second digit of the model number?). And Dell's configuration page doesn't offer any explanation of what's going on. The only thing that really sticks out are their clock speeds. So what would you do? There is a 3.73 GHz processor and a 3.0 GHz processor at the same price, so of course, you pick the 3.73 GHz one, oblivious to the fact that this processor is actually slower and will run up your power bill faster. And based on a story that I've heard through a friend, something very similar to this really did happen recently.

Intel has spent years building up the Xeon brand name, so it's understandable that they wish to keep it. But why can't they update the name to reflect the changes in the processor? They do this with all their other product lines: last week, they released the new "Core 2 Duo" processors, even though the difference between "Core 2" and "Core" is very small compared with the differences between "new" Xeon and Xeon. And they don't seem to have qualms about doing numerical branding with their other business/server chip, the Itanium, which was recently updated to Itanium 2. Numerical name updates would allow them to differentiate their chips without abandoning the brand name. A "Xeon II" brand for these new chips would not only help eliminate the vast amounts of confusion surrounding the new Xeons (even I needed to consult Intel's specification charts to get everything straight at first), but it would highlight that these are indeed radically new chips, free of the performance and power problems that plagued the older generation, and it would help generate the sort of product awareness that Intel needs to retake the market share that its old power-guzzling* Xeons have been hemorrhaging for over a year. But alas, for whatever strange reason, Intel did not do this, and the product awareness that Intel had hoped to generate for its new Xeons has degenerated into confusion.

________________
* In server environments, power consumption is extremely important as 24/7 operation means that electrical costs can often match or exceed the purchase cost of a server in just a few years. Lower power consumption also means that the chips generate less heat so that less money has to be spent to cool the server rooms.

This entry was edited on 2006/08/02 at 09:11:50 GMT -0400.

The Electric Car

Wednesday, July 26, 2006
Keywords: Technology, Politics

There is a new documentary movie this summer titled Who Killed the Electric Car? Although I have not seen it, I did see the trailer for it, and I have been reading about it in the news media (e.g., at CNN). It's a big conspiracy theory movie, and I'm not sure I'm sold on their claims. Of course, I should reserve full judgment until after seeing the actual film, but here are some of my preliminary concerns:

1) As we have witnessed in recent weeks, our power infrastructure is severely strained. In California, the problem is generation capacity. And throughout the country, but especially in the east, the problem is transmission capacity. Forget about the hassles and logistics of adding new power plants; just upgrading the existing $1-trillion electrical transmission infrastructure with millions of miles of wiring to handle the enormous extra load that electrical cars would generate would not be trivial in either time or money.

2) Batteries are imperfect devices. How efficient are these batteries that are used, and how long will they last?

3) While Americans have not exactly been the greenest people on this planet, there are other wealthy industrialized nations that are much more environmentally conscious. Why hasn't there been much in the way of electrical car development in Europe or Japan?

4) While electrical cars may be more efficient and environment-friendly (yes, there is pollution associated with electrical generation, but it will be concentrated and easier to deal with) than gasoline cars, the real standard that should be used is whether or not they are that much better in terms of efficiency and practicality than the other green alternatives, like hybrids or hydrogen fuel cells. Hybrids are nice in that they achieve a large efficiency gain without any infrastructure requirements.

5) Did they really kill the EV1 because of some evil conspiracy, or was it killed out of purely economic concerns, such as the worry that not enough people would buy it to justify manufacturing and support costs?

In the end, I still think that the best solution is massive gasoline taxes to address the issue of the unpriced petrol externality. I think we may finally be getting to the point where Americans are finally starting to let go of the absurd notion that cheap gasoline is some sort of basic human right, which would make European-style gas taxes possible. And once that's in place, the market will take care of the rest. In the meantime, this is an interesting--albeit a bit off-topic--article from the July issue of the Scientific American.

Emperor Jobs Strikes Again

Tuesday, July 25, 2006
Keywords: Technology

It is often popular to describe Bill Gates and Microsoft as evil and to cast Gates in the image of a Borg leader or Microsoft in the image of the Evil Empire. Yet, in the process of poking fun at Darth Gates, people often forget the cloaked figure quietly looming in the background--Steve Jobs.

As I have detailed in an older post, Jobs and Apple are far from angelic, and by many measures are much worse than Microsoft. They exert tight controls over their software and hardware, they overprice their products, they are litigation-happy (just ask the person who wrote a book about Jobs titled iCon; do you see Bush suing the scores of liberal authors who write books about him?), they are deceptive (unfair benchmarking, and have you seen those recent deceptive anti-PC ads about the hassles of PC drivers?), and they try to leverage anti-competitive market power whenever possible (e.g., iPod interoperability).

More recently, however, they have been leading Intel around on a leash. Dell has been a loyal Intel-only customer for as long as they have existed. Apple, on the other hand, was waging anti-Intel marketing campaigns just a few years ago. Dell sells huge volumes of Intel chips and places Intel's beloved "Intel Inside" stickers on its products. Apple sells relatively low volumes of Intel chips and refuses to place "Intel Inside" stickers on any of their products. Apple even launched a controversial ad early this year--without Intel's approval--describing PCs with Intel chips as boring little boxes, which angered Intel's PC partners. So how has Intel rewarded Apple for their troublesome behavior? By allowing Apple to be the first to launch Intel's Yonah (Core Duo) chips early this year. That was over half a year ago. Intel will officially launch its new Conroe (Core 2 Duo) chips later this week. These chips have been shipping to various computer manufacturers and retail outlets for some time now, so they should have these chips in stock and ready come the launch date on July 27. However, it has recently been revealed that Intel has forbidden them to sell any of the chips with the exception of the ultra-high-end $1000 versions until August 7. This is in stark contrast with Intel's plan earlier this month, which was for a general availability launch on July 23. How odd, you might say, for Intel to put off general shipment of its most anticipated chip ever for nearly two weeks even though everything should be in place. Well, August 7 is also the date of Steve Jobs' keynote at Apple's developer conference, WWDC, where he is widely expected to announce the new Mac Pro computers with Intel's new Conroe (or Woodcrest) chips. Of course, this may very well be nothing more than just a coincidence. But given history, it seems unlikely. So in order to give Jobs the honor of being the first to launch Conroe-based computers and in order to help that megalomaniac inflate his already overblown ego, Intel has ordered all of its loyal retail partners to hold off on selling the new chips until the Apple launch.

I should note that not all the fault falls on Apple. Intel's CEO needs to grow a backbone; anyone who saw a video of the Core Duo launch early this year would be struck by how timidly Intel's CEO acted next to Jobs, presenting the new Intel chips to him like an obedient fetching puppy. Intel should recognize that Apple is one of their smallest customers and that it is not wise to snub its larger customers (who are also more cooperative--though that may just be the problem) in order to favor Apple. Perhaps Intel should learn from IBM (the previous chipmaker to be caught in an abusive relationship with Apple)...

In the end, most people do not see Apple as evil, simply because, unlike Microsoft, Apple has not been successful and strong enough to exert its power. But as Apple grows in popularity and market share, people will begin to see why Microsoft's triumph over Apple may have been a godsend. ;)

Edit: This is a good "what-if" read...

This entry was edited on 2006/08/23 at 00:15:16 GMT -0400.

Microsoft's Contribution to Spam

Wednesday, July 19, 2006
Keywords: Technology

While investigating the comment spam problem, I did a random sample of IP addresses and found that every one of those from my sample is under the jurisdiction of APNIC (i.e., they are physically located in the Asia-Pacific region). My first thought was, "okay, so all the spam is coming from Asia; that's no big surprise." In a region where the rule of law is weak at best and where shady businesses such as counterfeiting are the norm, this was no surprise. But what intrigued me was the variety of addresses. The same spammer would have access to dozens of varied IP addresses across different blocks. This shouldn't be surprising. In this day and age, spammers have become more sophisticated; they no longer use their own machines to do their dirty work. They will infect other machines and use these "zombie botnets" to send spam. Not only does this increase their available bandwidth and capacity, it also makes shutting them down much more difficult as it defeats the tactic used a few years ago of blocking select IP addresses.

The question is thus no longer so much a question of why Asians are spammers (after all, how many Asians have even heard of Texas Hold 'Em--the subject of a recent burst of spam?) but a question of why so many Asian machines are compromised and under the yoke of a spammer (who may not necessarily be Asian). Which brings us to Microsoft. In Asia, estimates place the number of pirated Windows installations somewhere around 90% of the installed base; it is virtually impossible to buy a computer with a legitimate copy of Windows in China (I know from experience). This is not surprising given the relatively high price of Windows and given Microsoft's weak token efforts to stop piracy there (they are more focused on richer countries; they know that people in poorer countries can't afford Windows and Gates has admitted that piracy is effective in protecting Windows' market share against free operating systems like Linux in such price-sensitive markets). Although Microsoft unofficially and quietly condones piracy in places such as Asia and Russia, their official condemnation of such activity means that the copies of Windows in that region are relatively insecure. Updates such as SP2 won't install, and thanks to their pushing things like WGA through automatic updates, it is common practice for Automatic Updates to be turned off. The result is a massive population of unpatched, insecure systems in Asia. Coupled with the relatively impotent ISPs and network-level security, this leads to an army of compromised machines used by criminals to send spam and launch DDOS attacks.

In the meantime, I've finished hacking up new anti-spam measures for this blog; let's hope they hold...

Smarter Spammers

Tuesday, July 18, 2006
Keywords: kBlog, Technology

Sigh.

Blog spam used to be only a minor nuisance. From the very beginning, there were attempts at comment spam, as indicated by my server logs. Fortunately, incompetent spamming software coupled with a bit of security by obscurity (since I'm like the only person using the kBlog blogging platform) shielded me. Of course, that didn't hold for long, since not all spamming bots are so incompetently written...

But even then, it was easy to deal with, since all I needed to do was filter by technical heuristics, such as the use of HTTP/1.0 (commonly used by bots/scripts, but not by real browsers), whether redirects are properly followed, and whether auxiliary files like CSS and images are accessed (as a real browser would, but most bots would not). Well, at least, these filtering heuristics used to work.
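
For the curious, a filter along those lines boils down to a handful of checks against each request. Here is a rough sketch of the idea; the request fields and the specific checks are invented for illustration and are not kBlog's actual code:

    # A hypothetical sketch of request-level spam heuristics, not kBlog's actual code.
    # "request" is assumed to be a dict built from the server's access log.
    def looks_like_a_bot(request, fetched_paths):
        # Real browsers have spoken HTTP/1.1 for years; many bots still use 1.0.
        if request.get("http_version") == "HTTP/1.0":
            return True
        # Real browsers send a plausible User-Agent string.
        if not request.get("user_agent"):
            return True
        # Real browsers pull in the page's CSS and images; most bots do not.
        if not any(p.endswith((".css", ".png", ".gif")) for p in fetched_paths):
            return True
        return False

    # Example: a bare HTTP/1.0 request with no auxiliary fetches gets flagged.
    print(looks_like_a_bot({"http_version": "HTTP/1.0"}, fetched_paths=[]))  # True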

These bots are now smart enough to emulate real browsers in every way, from the use of HTTP/1.1 to the downloading of images and CSS files. Also, in the past few days, I've been hammered by comment spammers (they used to come by only occasionally). The spam would come in bursts, and during these bursts, the rate of attempts could be as high as one per second. This leaves me in the undesirable position of being forced to address comment spamming through content filtering. And we all know what a hornet's nest that is...

The Downward Slide of the Slide Rule

Sunday, April 16, 2006
Keywords: Technology

The May 2006 issue of the Scientific American has an interesting piece on the history of the slide rule. At least, it was interesting for me because up to this point, I hadn't the faintest idea how a slide rule worked or how one might actually use a slide rule.

Invented a long time ago in the 1600's, the slide rule is surprisingly simple in concept: it basically uses the cool properties of logarithms to reduce "hard" problems like division and roots into simple addition and subtraction arithmetic. The article even had a cut-out do-it-yourself slide rule to practice with, though I was not quite ready to mutilate my copy. :P
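
The trick is easy to demonstrate with a few lines of arithmetic: taking logarithms turns division into subtraction and a square root into halving, which is exactly what sliding two log-scaled rulers past each other does mechanically. A small sketch:

    import math

    a, b = 84.0, 3.5

    # Division becomes subtraction of logarithms...
    quotient = 10 ** (math.log10(a) - math.log10(b))
    print(quotient)                # ~24.0 (within floating-point error)

    # ...and a square root becomes halving a logarithm.
    root = 10 ** (math.log10(a) / 2)
    print(root, math.sqrt(a))      # both ~9.165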

Although there is no doubt that computers (and digital calculators in particular, although I do so much calculation on my general computer these days that I haven't touched my calculator in years) are much faster, more efficient, more precise, and more capable, the article does bring up a good point: there was something valuable in this lost art. How many people today are familiar with logs, much less the properties of logs? And with the ease of number crunching today, gone is the incentive to spend time thinking about a problem in order to find clever shortcuts and simplifications. I suppose something analogous in computing is that software has grown increasingly inefficient as processing power increases because more and more programmers no longer bother to really think about the logic and then find and take the most efficient solutions (something that I am sometimes guilty of as well, especially with my older projects). Or in economics, this is analogous to the recent study that showed that modern economists are now so far removed from the basic principles of economics that the vast majority of them cannot even correctly answer a simple introductory-level opportunity cost problem.

Of course, this is not to say that I am going to go buy a slide rule tomorrow and dump my many computers; I am by no means a Luddite, and in the greater scheme of things, the death of the slide rule has been a hugely positive thing. But it is interesting in a fascinating sort of way to look at and think about the unintended effects of such progress.

This entry was edited on 2006/04/18 at 00:56:13 GMT -0400.

The Lure of Phishing

Tuesday, April 4, 2006
Keywords: Technology

Life is full of little coincidences. Not long after reading an article about the rise of "phishing" and its increasing sophistication, I received a phishing e-mail.

Normally, this would not be something worth writing about. Indeed, I have gotten a number of phishing e-mails before in e-mail accounts that were subject to spam. But this time, it was very different. As the owner of a number of domain names, I have the luxury of using a unique address with every site that I deal with (e.g., something that looks like amazon@example.com or paypal@example.com), and I have a special obscure domain name that I use just for this purpose. This way, e-mails that claim to be from, for example, PayPal that are not sent to the e-mail address that I use solely for PayPal are easy to reject (it also allows me to figure out which sites sell or leak out e-mail addresses to spammers, which was the original purpose of such a scheme). Anyway, this phishing e-mail passed this first test: it came from an online store where I had bought an item a couple of years ago, and it used the correct name and e-mail address.
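
The filtering rule that this scheme enables is almost trivially simple. Here is a hedged sketch of the concept; the domain and the alias-to-sender mapping are invented examples, not my actual setup:

    # Hypothetical illustration of the per-site address check; the domain and
    # the alias-to-sender mapping are invented examples, not a real setup.
    expected_sender_for = {
        "paypal@obscure-example.net": "paypal.com",
        "amazon@obscure-example.net": "amazon.com",
    }

    def suspicious(to_address, from_domain):
        expected = expected_sender_for.get(to_address)
        # Mail claiming to be from a site but not addressed to that site's
        # dedicated alias fails the first test and is easy to reject.
        return expected is None or expected != from_domain

    print(suspicious("paypal@obscure-example.net", "paypal.com"))  # False: plausible
    print(suspicious("random@obscure-example.net", "paypal.com"))  # True: reject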

The second test is an examination of the e-mail headers to see if the IP address that my mail server received the e-mail from makes sense. This e-mail passed this test as well: the IP addresses of the transmitting server belonged to the service provider that was hosting the servers for the company that supposedly sent the e-mail.

The third test is the believability of the content. The e-mail was plain-text, which helped its legitimacy (because it's easy to hide things in HTML, most phishers use HTML e-mails), and the story that it told was plausible. The e-mail claimed that the company's servers have been hacked and that this e-mail was being sent to inform me of that. The language was formal and correct. The alarm bells finally rang when it then requested users to log on and verify some of the information in their database. Not only is this a typical phishing lure, it also makes no sense if one stops to think about it: what exactly would this verification accomplish in respect to this security breach?

Anyway, things seemed fishy enough that I reported the e-mail to anti-phishing sites and CC'ed a copy of my report to the company's customer service. I suspect that their site has indeed been breached (thus, ironically, rendering the story true), and that was how the perpetrators were able to get the right e-mail address and also send the e-mail from the right server. A few hours later, I received a reply from the company, confirming that my suspicions were correct, that the e-mail was illegitimate, and that they are now looking to address the problem.

This particular experience was similar to a recent incident in Florida where bank sites were hacked and used in a phishing scheme. By hacking the company that they are trying to masquerade as, it allows the criminals to clear many of the hurdles and present a hook and lure that is much more convincing and tempting.

I suppose that I am fortunate to be sufficiently tech-savvy that I can easily avoid such Internet hazards, but there are so many people who I could picture falling for this particular trap: a few of my friends, my relatives, Joe Sixpack, etc. With its high efficacy, it's no wonder that phishing is growing so fast.

This entry was edited on 2006/04/04 at 23:57:11 GMT -0400.

30 Years of [Freedom from] Apple

Friday, March 31, 2006
Keywords: Technology

In honor of Apple Computer's 30th birthday, I would like to present myself as a target for flaming and write about why I am glad that Microsoft won the great PC war. And in case you are wondering if this is an early April Fool's joke, I assure you, I am dead serious.

To Apple's credit, their products are very well designed--even sexy--and OS X's marriage of UNIX geekiness with a slick interface is far better than any attempt made by the Linux people. But that is the extent of my love for the company. Yet, most geeks, Slashdot readers, and computer scientists all love Apple, so why not me?

Hail the Apple Monopoly

Let us consider an alternate universe where Apple triumphed over Microsoft. What would such a universe really look like? First, Apple would be a monopoly just as Microsoft is now. Microsoft's Windows monopoly is like a natural monopoly because most of the world's software was designed for Windows. If Apple's platform triumphed over Microsoft's, then these same forces would necessitate an Apple platform monopoly. Of course, a platform monopoly does not necessarily translate into a monopoly with market power: Linux is an open platform and as a result, many companies produce different flavors of Linux and all Linux applications are compatible with all of these flavors--at least theoretically (it gets messy in practice). Would Apple support such an open platform? No. By all accounts, Apple is just as tight-fisted as Microsoft when it comes to such things. Don't believe me? Look at the emerging iTunes monopoly for online music sales. Apple has resisted all calls to open up the iTunes/iPod standard (much to the detriment of Linux users, who, for some reason, still root for Apple), claiming that this is their prerogative. Gee, doesn't that sound an awful lot like Microsoft? But what about Darwin (the OS X core), you ask? While Apple's Darwin is nominally open-source (probably because Apple adapted it from FreeBSD), it is almost entirely internal to Apple and with the recent move to Intel chips, Apple is planning to close it off. As long as Apple ran on its own architecture, it could control the hardware and so it did not need to worry so much about the software, but as soon as Apple lost control of the hardware architecture by moving to Intel, it showed its true colors and clamped down. What about bundling? Let us not forget that Apple bundles just as many (if not more) toys in its operating system, from a web browser to a media player to calendaring software, etc. In the end, this alternate universe would still be dominated by a large monopolist with the same tight, closed grip, except that instead of people comparing Bill Gates to Darth Vader or the Borg, Steve Jobs would be the target of such mockery.

Emperor Jobs vs. Darth Gates

Unfortunately, Jobs would probably be less tolerant of such comparisons than Gates. When iCon, a biography of Jobs that he did not like, was written, Steve stirred controversy by personally banning the sale of all books by that publisher in Apple stores. Despite the large number of books written about Gates, he has not been known to go ballistic like that (on the other hand, if it was Steve Ballmer...). It comes as no surprise that most biographies describe Gates as a relatively quiet, thoughtful person who is fairly easy to get along with, while describing Jobs as an overbearing control freak who alienated many of the people who have worked with him (I mean, how many people get ousted from their own company?). And while many people dismiss it as just an expensive marketing campaign, it's hard to ignore the fact that Gates is by far the more philanthropic of the two. It may be worth noting that, long before the anti-trust case, Gates had promised to donate almost everything.

So who would you rather have as the overlord of personal computing, Steve Jobs or Bill Gates? If the decision is between Steve the megalomaniac or Bill the guy who pulled all-nighters playing bridge, I'd pick the latter.

Trigger-Happy Lawyers

Apple is also very trigger-happy when it comes to lawsuits, much more so than Microsoft. Are you a Mac enthusiast who posts about the latest rumored Apple product? We'll see you in court! Are you trying to get OS X to run on a regular Intel PC? Oh look, a pretty cease-and-desist letter. In contrast, Microsoft does not oppose Wine and makes no effort to silence people who talk about the myriad of ways to bypass Windows XP's anti-piracy features. Perhaps most telling of all is the long Apple-Microsoft lawsuit of the 90's in which Apple unsuccessfully tried to sue Microsoft for stealing the look and feel of the Macintosh. Fortunately, Microsoft won the case; if they had lost, the legal precedent that would have been set would be far worse than that of today's innovation-stifling software patents. It is ironic that the geek community's love affair with Apple seems to turn a blind eye to this long-forgotten case. Perhaps the greatest--and most chilling--irony, however, is that the Macintosh was itself not entirely original and that it was more or less "copied" from work done by Xerox PARC much in the same way Windows was "copied" from the Mac.

Pricing

Microsoft's platform dominance certainly plays a role in maintaining its monopoly, but Apple's pricing helps a lot, too. While many complain about Microsoft's monopoly pricing, few pay attention to the fact that Apple's software prices are comparable to Microsoft's, and if you figure in the sorts of small incremental changes to the OS that Apple sells as an upgrade versus what Microsoft offers as a free service pack for XP, Apple could even be considered to be more expensive. Most notable, of course, is the fact that Apple computers themselves have always been more expensive than comparable PCs.

Hardware, Innovation, and Competition

Until the recent move to Intel, Apple's tight control over everything extended to its hardware; even commodity components like DVD drives were subject to the long dictatorial arm of Apple (I know this from experience tinkering with firmwares for such drives for Apple machines). The pace of innovation in computer hardware has far outpaced that of software, resulting in both low prices and very impressive computer hardware. Would PC CPU technology be where it is today without the competition between AMD and Intel? Would graphics card technology be where it is today without the duel between ATi and nVidia? This was all a byproduct of IBM's fateful decision to use proprietary technology for only one chip that was relatively easy to reverse engineer. The PC hardware platform may be dominant much in the same way that Windows is dominant, but unlike Windows or OS X, it is an open and free platform, and the wonders of that are numerous. I hesitate to imagine what the world of computing would be like if the IBM-Microsoft wagon got bumped off the road by the Macintosh. In such a scenario, by tightly controlling the hardware instead of allowing the sort of free-for-all that became the PC industry (essentially extending the closedness of the OS platform down to the hardware), Apple dominance would have almost certainly stunted hardware competition and innovation.

Open-Source Lip Service

Just a quick little aside here: As I mentioned above, Apple's commitment to open-source is mostly superficial, at least in the case of Darwin (look at Google if you want to find a company that really supports open-source). This is even true in the case of the Safari web browser (whose rendering engine was built from KDE's KHTML project): relations with the KHTML developers soured after they complained that Apple was not very good about sharing the work that it did. The open-source-loving Slashdot crowd loves Apple, yet what exactly has Apple done for the open-source community beyond the ceremonial nod?

But Microsoft is still Microsoft...

Of course, this is not to say that Microsoft is good. Microsoft has committed many sins of its own and it is by no means saintly. But I am not talking about absolutes, either: I am not comparing Microsoft to an idealized perfect tech company; I am comparing it to Apple. I have shown and argued that in many ways, Apple is just as bad as, if not worse than, Microsoft. If anything, Steve Jobs is a much more tight-fisted and scary person than Bill Gates could ever hope to be. As such, to the extent that Microsoft's dominance has saved the world from the spectre of Apple's dominance, I am happy for it, though ideally, a Google-like company would have been preferable. Why, then, does the tech community fawn over Apple so much? Well, as I noted, Apple has a finesse and flair for style coupled with good marketing. Second, Apple is the underdog, and our society loves rooting for the underdog. Finally, most people do not realize that Apple's practices are strikingly similar to those of Microsoft, mostly because, as the underdog, these aspects of Apple do not draw much attention (it's a bit like security through obscurity).

So while the tech community celebrates Apple's 30th birthday, I will quietly thank Microsoft for putting Apple where it is today, take pride in being one of the few remaining iPod holdouts, and cling onto the hope that one day Google will take over the world and free us from Microsoft. :)

This entry was edited on 2006/03/31 at 11:00:12 GMT -0500.

Security Through Obscurity

Sunday, March 26, 2006
Keywords: Technology

A recent report shows that snails that are left-"handed" have an advantage in encounters with right-"handed" crabs, whose claws are not as adept at opening the shells of left-"handed" snails (because of the orientation of the spiral). This reminds me of fencing, where fencing against lefties is more difficult because while lefties have plenty of experience fencing against opposite-handed opponents, righties do not face many opposite-handed opponents and thus do not have that sort of experience to draw from. Indeed, one of the tougher opponents in the class I took was a leftie. Anyway, one might describe this as an example of security through obscurity in the real world.

On that note, people who post comments on my blog will find that there is no spam protection. No e-mail address verification, no Turing tests to prove that you are human, no logins, and heck, I did not even bother to implement any simple heuristics under the hood (e.g., no looking for HTTP/1.0 requests, bad user agent strings, etc.). Despite this, I have yet to see a single piece of comment spam, even though, according to server logs, I have indeed been visited by bots and numerous attempts have been made. So why have all these spambots failed to infest my blog with comment spam despite my neglect to implement any sort of security? Apparently, these automated spambots are designed to target the common blogging platforms, and when they encountered my blog, they seemed to have a hard time supplying valid entry IDs (even though they are hard-coded in a hidden field in the form). So by writing my own home-grown blogging platform, I have been spared comment spam through obscurity, which was very unexpected: I never realized until now that these spambots were so poorly programmed (which is good: we like spammers to remain incompetent).
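
Just to illustrate the kind of check these spambots stumble over, here is a rough sketch in Python; it is purely illustrative--the field name, the list of valid IDs, and the function are all made up for this example, not lifted from my actual software:

    # Hypothetical sketch: accept a comment only if the hidden entry_id field
    # submitted with the form matches an entry that actually exists.
    VALID_ENTRY_IDS = {"2006-03-26-obscurity", "2006-03-13-antivirus"}  # made-up IDs

    def accept_comment(form_fields: dict) -> bool:
        # Generic spambots tend to submit a missing or bogus value here, even
        # though the correct value is sitting in a hidden <input> on the page.
        return form_fields.get("entry_id", "") in VALID_ENTRY_IDS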

Of course, this is similar to the Mac/Linux/UNIX security situation. While there are some architectural features that make these operating systems a little better than Windows XP SP2 with respect to security (and from the looks of it, most of these technical advantages will disappear with Vista), I strongly believe, for a number of reasons*, that the biggest contributing factor is the obscurity, which results in fewer attempts at breaches, fewer people searching for ways to breach security, and the creation of far fewer viruses/worms/trojans/etc. As such, I have always found it amusing that these communities try to convert users with the security carrot.

________________
* Despite the bad rap on Windows, the NT architecture is actually a fairly robust one--just look at how NTFS compares with other file systems. The security model is also very good on NT systems. The problems lie with the implementation: bugs and having many users run as the superuser. Having administered Linux servers in the past, I am also fully aware of the many vulnerabilities that other systems suffer. I could go into more depth, but that is a story for another post.

This entry was edited on 2006/03/26 at 13:31:24 GMT -0500.

Abstinence-Based Virus Security

Monday, March 13, 2006
Keywords: Technology

As reported by C|Net and Slashdot, a recent goof in the virus definitions for McAfee anti-virus resulted in a massive number of false positives and the quarantine or deletion of critical files used by software such as Microsoft Excel, Java, GTK-based applications, Sendmail, et al.

This gaffe neatly illustrates one of the problems with anti-virus software and one of the reasons why, for the past decade, I have not used one (and except for the one time when I accidentally executed a virus that I was examining, I have remained infection-free while using Windows). Ultimately, the key to computer security is user education and continuing to update software as security holes are patched up--and yes, Linux/UNIX need security patches too. These solutions are neither easy nor perfect, and as such, it makes perfect sense to have a safety-net solution in the form of anti-virus software.

While this safety-net concept makes sense and is a good idea on paper, it suffers in execution. First, there are many who eschew user education in favor of relying on anti-virus, which is certainly understandable given the difficulty of educating the average computer user. Of course, this would not be such a problem if anti-virus were effective, but it often is not. Anti-virus is ultimately a reactive tool. New viruses and worms are initially unhampered by anti-virus tools because anti-viral definitions are updated to cope with the new virus only after the virus has been discovered and examined. While this works for most people, for those who are hit by a virus before their virus definitions are updated (and before they download and install the new definitions), this would be akin to building flood levees after the flood had already hit. Furthermore, well-written viruses and malware can often disable anti-virus software, and viral mutations require that anti-virus software receive frequent updates (which not all users do). Finally, anti-virus software often carries undesirable side-effects. While events of the magnitude of the recent goof with McAfee are rare, stability and software compatibility issues are quite common (although this applies to anti-virus in general, it is especially true with Norton AntiVirus); ever wondered why some software requires that anti-virus be disabled during installation? In addition, there is also a noticeable crimp on system performance as anti-virus software scans files as they are loaded into memory and as it performs its routine drive scans. The ironic end result of these side effects is that anti-virus software can sometimes cause more problems than it solves.

The Case for BBP

Saturday, March 11, 2006
Keywords: Technology, BBP

Background / Introduction

Many years ago, I had my first encounter with online communities when I was the resident Perl guru at Hypermart's community support newsgroup, teaching newcomers how to use Perl for CGI. It was a lot of fun, but Hypermart soon went through a string of purchases--as the company being purchased--and everyone left.

Fast-forward some number of years, and I found myself once again engaged in yet another online community, where I would disassemble and modify firmwares and produce software to automate this process. But instead of a newsgroup that I would read in Outlook Express, I was on the staff of two web-based forums running vBulletin and phpBB (I have since retired from the firmware community).

How times have changed. In the early days of the Internet, online communities meant NNTP (i.e., newsgroups). To clarify, many people (including me) so strongly associate NNTP with Usenet that private NNTP is forgotten. Hypermart's newsgroups were an example of private NNTP: their newsgroup server hosted only their newsgroups and had no Usenet content. In this respect, private NNTP was probably the closest precursor to modern web-based forums: when you visit a web forums site, you only get those forums attached to the corresponding site and not every forum in the world (in contrast to Usenet). Today, private NNTP is all but extinct and web forums dominate.

The problems of NNTP and web forums

Whether or not this de facto death of private NNTP is good is debatable. Let's look at why NNTP failed:

  1. Spam: This problem has two dimensions; the first is the difficulty of controlling spam posts because of the limits of the moderation system and the second is the exposing of everyone's e-mail addresses to harvesters.
  2. The lack of a web interface: Aside from the obvious convenience, not everyone has newsgroup reading software at their disposal, and at public terminals, it is usually not feasible or wise to configure the software. Web interfaces do exist, such as nntp.perl.org, but they are not very robust or useful. In a perfect world where browsers have easy-to-use integrated NNTP clients that can store server and user information on a temporary basis (very much like how FTP is handled by web browsers), such web interfaces would not be necessary, but we do not live in a perfect world.
  3. No server virtualization (i.e., hosting multiple hostnames on a single IP, like Apache's indispensable VirtualHosts)
  4. Lack of features: Granting users privileges, read-first stickies, fancy formatting (mostly because it would allow hyperlinking), editing posts ex post facto, etc.
  5. Relative difficulty of setup: Setting up a web forum is easy: Apache, PHP, and MySQL are included in most Linux distros (and are very easy to find, download, and install for Windows). Setting up phpBB is fairly quick and simple. Most importantly, it's possible to get a high-quality setup without paying a single red cent for the software.

Of course, web forums are not perfect either, and most of these problems are manifested in the interface:

  1. Marking new posts: It's relatively easy in a newsgroup reader to see which posts/threads you have not read. It's easy to manually mark posts as read or unread. Most importantly, newsgroup readers do not suffer from the granularity problems that web forums suffer from, where every thread in a forum is marked read once you've loaded the latest thread listing and every post in a thread is marked read once you've loaded that thread.
  2. Slower, clunkier browsing: Ever get tired of clicking "next page" in a web forum to browse through both lists of threads and the posts in each thread?
  3. Web forums lack the more robust sorting and organization that is possible with proper NNTP clients.
  4. Processor-intensive tasks such as sorting and searching are now handled by the server, which severely crimps scalability.
  5. Threading: Although most web forums are capable of displaying posts in a threaded view, most opt for a linear display. Why? It's hard to pick out which are the new posts in a threaded display. With newsgroup software, this is not a problem.
  6. Inconsistent interface across different forums (though with newsgroup readers, the inconsistency lies across different reader software, but in this case, it's the user who ultimately chooses the interface).
  7. Inconsistent data representation (which makes the jobs of search engines nice and fun).
  8. Crossposting can be done only by making separate posts (though enough people frown on crossposting that this is more or less moot).

It would seem that while NNTP had its shortcomings, its replacement by web-based forums has wrought problems of its own. However, the problems associated with web forums are more or less unavoidable, and given the problems that come with having to use a separate client, web forums are necessary.

The best of both worlds

It is my belief that the best solution to these sorts of problems is a new protocol that would allow the use of client software alongside a web-based interface. For example, while people can use Gmail's web interface, they could also access e-mail through Gmail's secure POP3/SMTP interface. While people can read blogs and write blog entries via web interfaces, Atom allows them to read and write blog entries using a client, and the growing popularity of syndication aggregators is indicative of the success of this idea. This is a best-of-both-worlds approach that caters both to the technically savvy and to those who prefer the efficiency and robustness of client software while maintaining accessibility for people on the go and for the "Hotmail generation" (i.e., the many people who have never experienced e-mail outside the webmail context and who are not aware of an Internet outside the web browser).

BBP, the Bulletin Board Protocol

For lack of a better name, I hereby propose the Bulletin Board Protocol. It should do roughly the same things as NNTP while addressing the issues with NNTP that can be addressed. It should also be a protocol that could work well with existing web forum formats (much like how RSS or Atom can easily fit the typical blog model).

The most important element, and the element that would differentiate BBP from the proposals of IETF's nntpext working group would be the two versions of BBP. There should be a "BBP/XML" format that rides atop HTTP. In this respect, BBP/XML would be very similar to Atom or RSS. There are several benefits of doing this:

  1. Application protocols that piggyback on HTTP are unaffected by the increasing number of network setups that firewall ports left and right as an over-reaction to security concerns.
  2. Secure access could be handled trivially by using HTTPS instead of HTTP.
  3. Building the "server" software would be made easier and installing such software would be made easier because it will just be another application that works alongside Apache or another web server.
  4. Using XML would allow utilization of the large array of XML and SOAP resources out there, which would be nice for the development of client software or anything else that makes use of BBP/XML.

Of course, it is usually the case that there is a tradeoff between performance and ease of development, implementation, and deployment. As such, BBP could also be implemented as a true Internet protocol, which would eliminate the performance overhead of piggybacking on HTTP and allow for optimizations like persistent and stateful connections. A "true" Internet protocol would also qualify BBP for a URL (e.g., bbp://user:pass@example.com/forum/thread/post). Finally, for Internet purists like me, it simply feels good to have a true protocol instead of yet another protocol shoehorned into HTTP. But in this day of 3PR (i.e., Perl, Python, PHP, Ruby), .NET, Java, etc., it seems that people are willing to make this ease-performance tradeoff, and as such, I would expect BBP/XML to dominate, and "true" BBP support should thus be optional.
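
To make the BBP/XML idea a bit more concrete, here is a hypothetical sketch (in Python) of a client fetching a thread listing. Since the protocol is nothing more than an idea at this point, the URL layout and the element names are invented out of thin air and should not be read as a specification:

    # Hypothetical BBP/XML client sketch. The "threads.xml" path and the
    # <thread>/<title> element names are placeholders, not a real spec.
    import urllib.request
    import xml.etree.ElementTree as ET

    def list_threads(base_url, forum):
        # BBP/XML rides on top of HTTP, so a plain GET does the job; secure
        # access would simply mean using an https:// base_url instead.
        with urllib.request.urlopen("%s/%s/threads.xml" % (base_url, forum)) as response:
            root = ET.fromstring(response.read())
        # Each <thread> element carries an id attribute and a <title> child,
        # much like an <item> in an RSS feed.
        return [(t.get("id"), t.findtext("title")) for t in root.iter("thread")]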

Implementation

Since this is a new idea that crossed my mind just last night, I have not ironed out the specifics of this protocol (I'll do that at a later date), but I do have a general idea of how such a protocol may be adopted if this idea ever gains traction. Writing a library of PHP functions would help existing web bulletin software adopt the XML version of this protocol; the goal would be for BBP/XML to be tacked on to existing web bulletin software with the same sort of ease that RSS and Atom were tacked on to blogging software. I suppose a proof-of-concept client could be implemented by modifying the NNTP portion of Thunderbird or another open-source newsreader. Hopefully, once content is available in this format, there would be a rush of people developing real clients, as was the case with Atom or RSS.

To this end, BBP should be as compatible as possible with the current form and paradigm of web-based forums so that it could be implemented not so much as something new, but rather just a newly appended feature.

Credits

I must credit Mr. Milat for the inspiration from his lament over the demise of NNTP.

Search in Windows Live

Wednesday, March 8, 2006
Keywords: Technology

Microsoft has been making noise about defeating Google at its own game, and today, they unveiled a new search on their Windows Live service. It's not a new search engine. I compared the results with MSN search, and they're identical. It's simply MSN search wrapped in a sexier interface. I suppose they think that a new jazzed-up interface built using AJAX will help them unseat Google. Well, will it work?

  • It does not work with Microsoft's IE 7b2. I thought that it'd be fair to get my first impression of this new toy using Microsoft's latest version of IE, but that didn't seem to be such a hot idea. Every search, whether it was done from www.live.com or search.live.com, kicked me back to the www.live.com home page.
  • It does not work with Opera. But few people use Opera anyway.
  • In Firefox, middle-clicking does not seem to work, so I can't open search links in new tabs unless I laboriously right-click to get a context menu and select from the menu. For someone who is now addicted to search result parallelism, where I open up a bunch of relevant search results in tabs so that they can load while I'm reading and so that I don't have to keep hitting the back button, this is a pain.
  • Working with search results in the traditional form involves clicking on a result, and then hitting the back button to return to the results so that you can visit more results. This does not work in Firefox. Hitting back takes me back to the main page, where I have to do the search all over again. Wow, talk about totally wrecking the search interface!
  • In IE6, hitting the back button is not as destructive. You do not have to do the search again, but there is a noticeable and annoying lag as the search results are re-loaded into the AJAX pane. What's worse is that you are taken back to the top of the search results. While this may not have been a problem with traditional search results where at least you'll be on the same page of results, when all the results are on one page and the only way to navigate through the results is by scrolling (somewhat slowly as the data is being loaded into the AJAX pane), this is a nightmare.
  • Did I mention that scrolling was slow? Unless the user somehow thinks to use the page up and page down buttons, scrolling by the weird scrollbar widget (that took some getting used to) or by the mouse wheel is relatively slow (and not that much better even with the keyboard). In traditional paged views with traditional scrollbars, you can instantly jump to an absolute position; not true here.
  • Have you seen the Windows Live homepage at a resolution less than 1024x768 (or in a browser window that was not fully maximized at that resolution), even in IE? Nothing like lines of text running into each other to give off a nice professional first impression.

I think that the users on Slashdot summarized this well: this is a textbook example of how not to use AJAX to build a user interface. I wonder if this was what Microsoft had in mind when they talked about a Google killer.

Also, since this is not a new search engine, I wonder if Microsoft is doing this simply for the publicity. They launched the new MSN search with much fanfare, but it was poorly received. Perhaps they think that by re-launching the search in a new interface, they might get a fresh round of publicity? If so, I'm not sure if this blunder is the publicity that they want.

This entry was edited on 2006/03/08 at 18:13:24 GMT -0500.

And people wonder why AOL is dying...

Thursday, March 2, 2006
Keywords: Technology, Ranting

You'd think that with a brand name with such wide recognition, AOL could use it to attract new broadband users to their broadband service. Verizon, Earthlink, and others have all managed to successfully convert their previously dialup-heavy services into broadband. Well, there has always been the overpricing issue, but I think that the problem may lie with their technology.

Take, for example, their most popular service: AIM. I haven't been using the official AIM client in eons; I have long since converted to the Windows version of Gaim. Up until right before the latest version, AIM Triton, AIM had always had a less-than-spectacular interface. You'd think that in this day and age, a company as big as AOL could make a better interface that doesn't look like it was hastily ported from Windows 3.1. Well, they did make a new interface, which they officially rolled out of beta last December as Triton, and for the first time today, I tried out AIM Triton. Admittedly, it looks a bit better, but that's not saying much, and that is about as far as my praise will go. The installation size is much bigger, and BTW, thanks for littering my system with "Try AOL" icons, the attempts to make AOL Explorer the default browser, and the attempts to set AOL as my default homepage. And why on Earth do I now have AOL Explorer on my computer? It's just Internet Explorer wrapped in an AOL interface, except the interface is sluggish and the entire browser is, mysteriously, excruciatingly slow, unresponsive, and CPU-hungry, which is made worse by the fact that it auto-starts when I sign into AIM. Oh, and I apparently can't uninstall it without uninstalling AIM. And then there's the buddy list, more colorful and flashy than ever, with distracting animated ads that can sometimes eat up quite a few CPU cycles. Did I mention the AOL anti-spyware software (which I had no idea was even included) that suddenly popped up a window asking me if I wanted to scan my system when I was in the middle of something else?

In the end, I think that this Washington Post review of Triton sums up the problems surprisingly well. Another example that might be worth looking at is Netscape 8, which is based on Firefox. It successfully turned a clean, friendly browser into a bloated, flashy, and distracting monstrosity complete with unwanted plugs for AOL. If Triton, Netscape 6/7/8, and even AOL's old IM software are any indicator of AOL's software quality in general, I am not surprised that they are sinking. They should learn from Google and the early incarnations of Yahoo! (before they too started down the dark path of clutter) and recognize that there is merit in the cliché of the customer always being right. Google Talk, for example, is fast, responsive, small in size, free of distracting animated ads, free of useless and distracting clutter, and, in a word, clean. Even Gaim, whose performance suffers somewhat on Windows because its native environment is Linux, is faster, lighter, and more responsive than Triton. Google's homepage is refreshingly spartan while AOL's is visually noisy, contains Flash, and is slow to load. Finally, when you install a Google product, it doesn't try to take over your machine and dump a dozen icons pointing you to other Google products. In the end, we all know which one is considered more lovable and which one has been more successful. It's time for AOL to recognize that marketing is not always the best way to do long-term brand investment, especially when this results in marketing doing the software engineering; brand investment comes first and foremost from product quality.

I'll stick with Gaim and Google Talk (only because Gaim has yet to implement voice chat). Oh, and thank goodness for virtualization, which allows me to try software like Triton without polluting my real systems. :)

This entry was edited on 2006/03/03 at 11:40:28 GMT -0500.

Fun with Gaaagle

Tuesday, February 28, 2006
Keywords: Technology, China, Politics

Speaking of evil Chinese governments and technology, I got an e-mail today (by way of my contact form) from some guy telling me to visit a site named Gaaagle (if you let it sit for a couple of minutes, you will be taken to this page).

I have already expressed my views on this controversy twice, and if you were to guess that I will not have many nice things to say about Gaaagle, then you would have guessed correctly. I won't rehash what I've said before; you can click on the links in the previous sentence for that. Is there something that is glaringly missing on Gaaagle, especially the page that you are taken to after a couple of minutes? You see lots of mocking images, parodies, and cartoons. You see accusations of greed. You see an outpouring of anger. What you do not see is a rational discussion. Is this how debates are to be carried out in this day and age, by seeing who can shout the loudest and make the cleverest Norman Rockwell defacement? There are no arguments. No presentations of facts. Nothing that addresses the arguments put forth by the other side. Most notably, I have yet to see a single response in the past month to the paramount question of what exactly would be gained by Google pulling out. In fact, this is how the entire debate has been carried out, in almost every online community, since the first day of this controversy. Every anti-Google/Yahoo/Microsoft argument has been along the lines of "The CCP is evil, and thus these companies are just as evil." Every response to arguments has been along those lines. It's like listening to a broken record. Sure, they'll throw in some red herrings every now and then to spice things up, like the accusations of Chinese torture (yes, it's bad, but remind me again how that has anything to do with this?). This is especially true with the Free Tibet people. Although I personally support Tibetan independence, the sort of methods used by these people are not only comically ineffective, but even counter-productive at times (explain to me again how carrying out this protest like a bunch of hippies is going to win you any sort of broad support?).

Google faced an imperfect choice, and I believe that the choice that they made will make the situation better (or at least be neutral in effect), and if these people are so intractable in their narrow black-and-white "if you're not with us, you're against us" view of the world, then all that's left to do is to nod and smile.

PS: Speaking of discourse filled with outrage but little substance, doesn't this sorta remind you of the Democrats? Sigh. If only Clark had won the nomination in '04 instead of Kerry.

Slashdot Ignorance Strikes Again!

Tuesday, February 28, 2006
Keywords: Technology, China

I like Slashdot; I really do. It makes keeping up with technology easy by gathering all the interesting headlines all in one place. That having been said, the tendency of Slashdot towards sensationalism, knee-jerk reactions, fan-boyism, and ignorance is quite annoying (to its credit, Slashdot is actually much better in this regard than other places, like Digg).

So I saw this Slashdot headline in my RSS aggregator today: China Prepares to Launch Alternate Internet. My first reaction was, "Oh no, what are those commies dreaming up this time?" I read the blurb, the comments that were modded up, and then the articles. Admittedly, the articles were vague, and I think that the translator should have been fired, but it seems that Slashdotters had no idea what they were talking about.

First, most people thought that China was going to set up its own DNS system to handle domain names with Chinese characters (e.g., 刘锴.net). Since the existing .com and .net registries already allow internationalized domain names (IDNs)--that exact system was implemented some time ago--this would certainly be a major conflict. After reading the article, it seems that all that the Chinese government is doing is setting up three new TLDs whose linguistic translations are .cn, .com, and .net (e.g., 刘锴。网络), so there is no overlap or conflict whatsoever with the existing .com and .net setup, contrary to what most misinformed Slashdotters think. Just to make sure, I picked out a random Chinese-based domain registrar, and sure enough, these were just new TLDs that are listed alongside existing TLDs.
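
As an aside, this is how an IDN stays compatible with the plain-ASCII DNS in the first place: the Unicode labels are converted into ASCII "xn--" Punycode labels. A quick Python illustration using the example name from above (the exact encoded output is omitted here because I have not verified it by hand):

    # Quick illustration of IDN encoding: Unicode labels become ASCII "xn--"
    # Punycode labels that ordinary DNS servers can carry.
    name = "刘锴.net"
    ascii_form = name.encode("idna").decode("ascii")
    print(ascii_form)                                  # prints something like "xn--....net"
    print(ascii_form.encode("ascii").decode("idna"))   # round-trips back to the Unicode form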

Of course, adding new TLDs without getting ICANN's blessing is not quite kosher, but ICANN's power is not legally binding, and since these involve adding new namespaces that other countries couldn't care less about, it doesn't really matter that much. Furthermore, to label this as an "alternate Internet" is really misleading. Screwing ICANN isn't quite the same as screwing the IANA; remember, this is only DNS that we're talking about; the network is still interconnected (and firewalled).

Finally, as expected, the "Chinese-government-is-evil" card was played. Not that I disagree--I think that it is "evil" and that it shouldn't have bypassed the ICANN like this--but this ignores two important problems. First, it is not clear how exactly this could be used to thwart freedoms. Yes, the government has control over the registrations under these new TLDs, but that was already the case with the original .cn, and all the other TLDs in the world are unaffected. Also, if they wanted to censor access via DNS, they could do so without any of this. Second, there are some legitimate benefits. It widens the cramped DNS namespace a little bit, and it is also convenient to not have to switch the keyboard input between the Latin and Chinese character sets, which is genuinely confusing for some people (including me at first).

This entry was edited on 2009/03/08 at 17:40:59 GMT -0500.

In Defense of Google

Wednesday, February 15, 2006
Keywords: Technology, Politics, China

I have already written about this topic back in January. Google made a statement in January about this, and today, Google posted its Congressional testimony on this matter. The testimony is definitely worth a read.

Do no evil? But censorship is evil!
As Google states in its testimony and as I can attest from experience, the censorship was already going on before this started. The government tries to filter requests as they are sent to Google's U.S. servers, and accessibility to the U.S.-based google.com is slow and spotty. Most importantly, even when the search results are not censored, access to most "undesirable" websites is blocked anyway. Offering a new google.cn service in addition to google.com and giving users the choice between fast but censored searches on google.cn or government-crippled but uncensored searches on google.com is not evil (especially since many day-to-day searches are on uncensored non-taboo subjects). Those in China who really care about politics often are aware of how to use proxies to bypass China's Internet security (which is what I did when I visited), and those people were never affected before and will continue to remain unaffected. In the end, offering choice is not evil. Google has not taken anything away from the users, and while implementing de jure censoring on content that was already censored de facto does not stand on the highest of principles, it has no real effect, good or bad, in reality. And remember, these are just search results.

They are making a quick buck over there!
And this is wrong because...? They have employees to pay, servers to run, etc. They are a business, and businesses are supposed to make money. It is not ethical for businesses to make money by doing evil, but if they are not doing evil, then there should be no reason why they cannot pursue some profit. So the argument about making money works only in conjunction with being evil; it does not stand on its own. Considering Google's support of open source, open standards, encouragement of employees to drive green vehicles, etc., Google certainly strikes me as less evil than other money-seeking entities.

IBM helped the Nazis kill the Jews, just like how Google and others are now helping China!
Confirming Godwin's Law, House Rep. Lantos compared this to IBM's punch card technology helping the Nazis exterminate the Jews by facilitating logistics. When in doubt, sensationalize. There are differences here, however. First, filtering is fairly easy and can be crudely implemented without any sort of special technology. This would be akin to the Nazis having bought screwdrivers from the United States; they could make screwdrivers themselves fairly easily. Rep. Christopher Smith at one point expressed dismay that American technology is being used by the Chinese government for their nefarious deeds, demonstrating a poor understanding of the issue; the Chinese have their own filters that they will happily apply if Americans do not use their own. Second, the Great Firewall of China is already quite adept at filtering, so this would be akin to the United States supplying the Nazis with excess screwdrivers when the Nazis already had enough of their own. More importantly, one must ask what the alternative is. Not doing business in China? In that case, Chinese companies will quickly fill that gap, and I would much rather have an American company with headquarters safely outside of China censoring search results than a Chinese company under the nose of the Chinese government doing it.

[added] But Google's actions are endorsing and legitimizing the CCP!
This was an interesting objection raised in one of the comments to this blog entry. I doubt that Google complying with the laws constitutes any real political effect beyond the touch of romantic symbolism that activists hold so dear. Furthermore, it is a mistake to confuse doing something in response to circumstances with doing something because one truly believes in it, and we must not forget that the real political weight lies with the Western governments' legitimization of China.

Will someone please think of the children?
House Rep. Lantos asked Google today, "I'm asking you a direct question (about families)--I don't want your philosophy." This was after Lantos had asked Yahoo! about the well-being of the family of the journalist whose name Yahoo! handed over. Google has done no such thing (and by keeping Gmail and other services out of China, it is avoiding such a possibility), and no family has ever been hurt by image searches of Tian'anmen showing rosy pictures instead of tanks. That Lantos asked Google and Microsoft a question that was appropriate only for Yahoo! demonstrates either a lack of understanding of the issue, or, more likely, a desire to politically capitalize off of the sensationalism. Listening to some of the remarks made by Congress today, it seems that this has turned into a three-ring circus and that some people are using it for political gain.

But none of this changes the fact that the Chinese government is evil and totalitarian!
I agree! The problem is not the moral compass of these companies; it is the evil regime in China (I think we would all rejoice the day when it finally falls). But in the meantime, whether we like it or not, when in China, you have to obey Chinese laws. Americans would balk if other people came to the United States and ignored American laws. If Congress has such an aversion to China, then perhaps it should be considering diplomatic solutions. Is the American government prepared to back companies up if they do business in China, refuse to obey Chinese laws, and are faced with an angry Chinese government? Unless Congress can somehow give American companies some sort of teeth with which to resist the requirements of the Chinese government, then it is in no moral position to criticize companies for things that are out of their power.

In the end, critics attack the censorship, but they fail to offer any insight as to how that censorship can be dealt with. There is nothing that these companies can do that can change the political reality in China, and when an absolute "non-evil" is not possible, then one has to accept the lesser of evils. Understandably, people are not comfortable with that notion, but perhaps this analogy would help. Normally, shooting your pet would be an immoral and "evil" thing to do. What if your pet is ill and will die soon? Ideally, you would take it to a vet, but what if that was not possible? Is shooting it to put it out of its misery still immoral? This is what I mean by choosing the lesser of two evils. It may very well be that, because search engine technologies have matured and are converging, the contrast between the two evils is not as stark as in this analogy, but this principle is still applicable.

This entry was edited on 2006/02/16 at 13:53:16 GMT -0500.

Yahoo!: Incentives & Trying Out New Mail

Thursday, February 9, 2006
Keywords: Technology

Yahoo!'s search incentives

A writer at C|Net has reported that Yahoo! is considering offering incentives to people who use Yahoo! as their primary search engine. Here is an excerpt of a survey that Yahoo! sent to some of its users:

Yahoo! is considering launching a program to reward people who make Yahoo! their primary search engine. Yahoo! Mail users will be given early access to this program. You will receive a monthly reward if you make Yahoo! your primary search engine. This means that most of the searching you do each month must be on Yahoo! Search. To ensure users receive credit for all searches conducted on Yahoo!, you may need to log in or use a search box specifically designed for this program (e.g., a Yahoo! rewards toolbar).

I thought that Yahoo! had given up on the search engine wars and was going to concentrate on services instead? Whatever the case, I think that this is an interesting idea; we are simply starting to see some of the same marketing tactics of the traditional economy being adopted by the new economy. There is one thing that strikes me as a sticky issue, though: there is no foolproof way for them to be sure that you use Yahoo! for "most" of your searches. They can tell how many times you use Yahoo! search per month and then grant you rewards if you pass a certain threshold, but there is no way whatsoever that they can be sure that, at the same time, you used Yahoo! more than you used Google and thus "most" of the time. Since they will have to do this rewards program based on the raw number of searches, this invites other problems. According to Google's handy search history tool, I performed 26 unique searches yesterday. I would imagine that for casual users, that number would be lower. For people who do a lot of searching, it would thus be easy to do the minimum necessary for Yahoo! and then use Google for the rest. In the extreme case, I could picture myself rigging up a simple program that will send a number of random search requests to Yahoo! each day, thus fulfilling the requirement without me actually touching the Yahoo! search engine; obviously, this would be undesirable for Yahoo!, but countering it is no trivial matter.
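
To illustrate just how little effort such a rigged-up program would take, here is a throwaway sketch in Python; the word list and query count are arbitrary, the "p" query parameter is Yahoo!'s as far as I recall, and whether unauthenticated requests like these would even be credited toward a reward is an open question:

    # Purely illustrative: pad a raw search count with a handful of junk queries.
    import random
    import urllib.parse
    import urllib.request

    WORDS = ["weather", "recipes", "news", "movies", "maps", "lyrics"]

    def make_noise(num_queries=10):
        for _ in range(num_queries):
            query = " ".join(random.sample(WORDS, 2))
            url = "http://search.yahoo.com/search?" + urllib.parse.urlencode({"p": query})
            urllib.request.urlopen(url).read()  # fetch and discard the results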

Anyway, enough about that. So what kinds of rewards are they thinking about offering?

  • No Yahoo! Mail ads: This is worthwhile, but then again, it is only because Yahoo! uses graphical ads and, even worse, animated Flash ads. Google ads are not only more pleasing to the eye, but they have actually been useful on occasion.
  • Unlimited Yahoo! mail storage: If you are anywhere close to using up the 2.6+ (and growing) gigabytes of storage that Gmail offers, please raise your hand...
  • Outlook access to Yahoo! Mail: Ahem. Ya know, Gmail offers this for free.
  • Five free music downloads each month
  • Discounted music subscription
  • Donations to charity
  • PC to phone calling credit
  • Netflix discount
  • Discounted Yahoo! Personals
  • Frequent Flyer Miles: Yep, they threw in the kitchen sink.

Not too bad; I guess I could see myself springing for some of these offers, if all it took was for me to do some searching...

Taking Yahoo! Mail Beta out for a spin

In other news, I finally got invited to try out Yahoo! Mail Beta. This is the new mail product that people have been buzzing about for a while now as the Gmail-killer, so I have been quite anxious to see what all the fuss is about. Like Gmail and many other products, it uses asynchronous JavaScript (AJAX) for smoother user interaction. Unlike Gmail, which tried to redefine the mail experience by introducing many features that average users were uncomfortable with (e.g., replacing folders with labeling and getting rid of the delete button), Yahoo! Mail emulates the Outlook Express interface so that people will get something that they are familiar with. I have to admit, the whole thing looks very slick and shiny, with a tabbed interface to switch between composition drafts and your mailboxes and even a spiffy drag-and-drop interface. The Gmail interface took a bit of getting used to, so for Yahoo! to offer something more traditional is advantageous. The integrated RSS reader is a pretty cool feature that I wish Gmail had. Unfortunately, Yahoo! reported an error when I tried to add some feeds; well, it is a beta (if Google could just integrate Google Reader into Gmail, that would be wonderful).

Yahoo! Mail Beta Screenshot

That is about as far as my love affair with this beta will go, however. My biggest gripe is that the slick interface comes at a hefty cost: responsiveness. The interface seemed very sluggish and unresponsive, and any form of rich text scrolling is annoyingly slow. After clicking the inbox, I had to wait a couple of seconds for it to show up. Mind you, I am using a fast and stable broadband connection and a decent modern processor (2.8 GHz P4 with 1.5 GB RAM). When loading the inbox, the usage meter on one of my virtual processors (HT) shot up to 100% for the duration of the load. Not only does Gmail load the inbox in less than a second, but it also barely registers a blip on my CPU monitor. I then tried Yahoo! Mail Beta on my old 800 MHz laptop, and the process of loading and starting Yahoo! Mail Beta not only caused the CPU usage meter to hit 100%, but it remained at 100% (rendering the computer unusable) for the entire duration of the startup, which took a remarkable 57 seconds (vs. 7 seconds for Gmail on this same laptop; startup times were 15 seconds vs. 2 seconds for my P4). This is e-mail, not Adobe Photoshop for goodness sakes! Gmail's minimalist interface may lack slickness, but it is fast, responsive, and efficient, and when you are using e-mail day in and day out, what matters the most? That you have nicely shaded tabs or that you have a nice, responsive interface? For users who like a traditional interface, Gmail offers free and secure POP3/SMTP access, which will allow you to use software like Outlook Express with Gmail; Yahoo! wants you to pay for this. Gmail also allows people to specify different "From:" addresses, so that I could use my Gmail account to both send and receive mail for kailiu.com. Lacking this feature, Yahoo! would allow you to receive this sort of mail, but not to send it. Oh, and the ads in Yahoo! Mail Beta are neither useful nor unintrusive (can we say Flash animation?). So while I am truly very, very impressed as a programmer with the fact that they were able to build such a slick and familiar interface using JavaScript, Yahoo! Mail Beta is simply unusable. It is great eye candy, but that is about it. Gmail took risks when it abandoned a number of traditions of e-mail interfaces, but now I see that they were right in doing what they did: they were able to build an interface that was intuitive in its own way and that was suitable for the web medium. By clinging to old interface styles, Yahoo! stuck with an interface that is not really suitable for the web, and while they managed to pull off the technical feat, the result does not really work.
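
For the sake of comparison, this is roughly all it takes to reach a Gmail inbox from client software, sketched here with Python's standard poplib; the server name and port are the ones Gmail publishes for POP access as far as I recall, and the credentials are obviously placeholders:

    # Minimal sketch of POP3-over-SSL access to Gmail; POP access must be
    # enabled in the account settings, and the credentials below are fake.
    import poplib

    conn = poplib.POP3_SSL("pop.gmail.com", 995)
    conn.user("username@gmail.com")      # placeholder address
    conn.pass_("password-goes-here")     # placeholder password
    message_count, mailbox_size = conn.stat()
    print(message_count, "messages,", mailbox_size, "bytes")
    conn.quit()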

This entry was edited on 2006/02/12 at 17:05:48 GMT -0500.

Google vs. Verizon

Wednesday, February 8, 2006
Keywords: Technology, Economics

Here's yet another article about the recent lust for content taxation that seems to be infecting various broadband ISPs. Read the article.

As if asymmetrical up/down bandwidths aren't enough, they are incensed that content providers think they have the right to use their pipes for free. Excuse me, their pipes? Their arrogance is extremely infuriating. Of course, Verizon owns the pipes, but the millions of Verizon customers pay a hefty monthly toll, and as a result, these customers should expect to have the content that they want come through those pipes without meddling; by no means is Verizon giving away its pipes for free, contrary to what they would like for the public to think. If Verizon feels that its fees are not enough to cover the costs of maintaining those pipes, then perhaps it should charge those people who requested the data, which it won't do because it can more easily get away with squeezing money out of various Internet companies than out of its customer base. Google is not enjoying a free lunch, either. It has to pay a hefty price for its bandwidth and for its connection to the Internet, though those payments do not necessarily go to Verizon. If someone makes a phone call, the caller is paying his local phone company for the right to use the lines to make outgoing calls and then the callee is paying her local phone company for the right to use the lines to receive incoming calls. If this sort of thing goes through, it would be akin to forcing the caller to pay twice--once on his end and again on her end for the call, even though the callee is still paying on the receiving end, for a total of three charges.

In the end, Verizon and the other ISPs are doing this because they feel that they are in a position to dig into the profits of Google et al. and also because they fear that as Internet voice calls and other services become more common, they will lose out. Bundling their own (mediocre) services with their connection service is akin to Microsoft bundling their own software with Windows. While vertical integration can often be a good thing, this sort of alliance between content provider and connection provider is dangerous as the latter is monopolistic in nature. It produces a conflict of interest that has led to them toying with the idea of charging rival content services, which would be a gross abuse of market power, much akin to Microsoft charging rival companies for their software to run on Windows.

This is a perfect example of when a small amount of government regulation is necessary (simply affirming the principle of network neutrality would suffice). Although there may be some token competition, the cost of switching ISPs and the impracticality of multiple companies laying multiple lines make it such that broadband ISPs wield near-monopolistic powers (and in some places, they do wield monopolistic powers). They are natural monopolies, which governments have the obligation to regulate in order to protect the free market, and if nothing is done to rein in abuses of power, then the Internet--one of the finest specimens of free market economics--will suffer. Much like the railroads of long ago, the Internet is the essential connecting fiber that binds our New Economy together, and we can ill afford the 21st-century equivalent of railroad robber barons.

Side note: I was greatly disappointed by the article, as it mostly dwells on arguments given by John Thorne, the Verizon executive, and the prominent placement of biased language such as "free lunch" will not help the average reader of this mainstream newspaper fully grasp both sides of this issue. That the arguments against the Verizon plan are presented very briefly and almost in passing at the very end is also unfortunate.

This entry was edited on 2006/02/08 at 12:50:23 GMT -0500.

Google and the Great Firewall of China

Friday, January 27, 2006
Keywords: Technology, Politics, China

I have to admit that I was pretty surprised at how very negatively and intensely most of the tech community is reacting to Google's censorship in China. So anyway, here's my take on this whole thing as a Chinese-American...

Background: .com vs. .cn

Google's actions simply involve the establishment of a new google.cn domain. Searches on this new .cn domain are censored. They are not on the .com domain. So anyone who wants to get the uncensored results can just use the .com domain. There is a Chinese language interface for google.com (there's even a Klingon language interface), and that was where I was taken when I tried to use Google search from China last summer: the uncensored google.com domain served from a server in the States using a Chinese language interface (because it detected that I was visiting from a Chinese IP address). Even gmail.com (a service that Google does not intend to officially introduce in China for some time) worked. None of these .com services are affected, as they represent servers not located in China. The downside was that these services were often slow and were sometimes completely inaccessible (considering that I could get fairly decent speeds when I SSH'ed into a private server in the US, I suspect some sort of government foul play). Not that these uncensored results did me much good, since I couldn't access a number of sites (without setting up a SSH tunnel :p), and believe me, there are a LOT of sites (even cnn.com!) that are affected by the vaunted "Great Firewall of China," which certainly lives up to its name.

In the end, nothing has changed. Google has simply added some servers in China and are being forced to comply with the standard set of government restrictions for those .cn servers only (i.e., if google.com is still accessible, then people can still get uncensored results). And before people bash Google too much, let's not forget about all the other companies doing business in China who are being forced to obey these local laws, and unlike the other search providers in China, Google openly discloses the censorship when displaying results.

But it's the principle of the matter!

Many claim that Google does have an option, and that's to not do anything. Not entering the Chinese market will certainly hurt Google's bottom line, but Google's mantra of "do no evil" seems to suggest that doing the right thing should trump the pursuit of treasure. Despite being a free-market economist, I do admire and strongly believe in this "do no evil" mantra, but there is one very important point that I think people are missing: what is the evil that is being done? What would things be like if Google did absolutely nothing? Does it make the Chinese more free? No. Would Google's refusal to officially enter the Chinese market inspire the Chinese? Considering that Google's presence in China is so small (gee, I wonder why?) that most Chinese are not aware of it, no. Does Google's entrance into the Chinese market help the Chinese government in any way? Considering that the Chinese government couldn't care less if there's one more search provider in China, no. Does this action by Google hurt the Chinese in any way? No (remember, there's always google.com, which is unaffected!). Does this action by Google affect users outside of China in any way, shape, or form? No. Does this action by Google serve as an endorsement and statement of support for the ways of the Chinese government? Only if you want to read it that way; remember, censorship in China is mandatory, not voluntary, and Google's official statement contains no statements that can be construed as support for the policies of the Chinese government. Of course, just because a law exists doesn't mean that it should be honored; it is the duty of people to resist unjust laws. But what can Google do? Google is in no position to offer any sort of challenge to Chinese laws; only the Chinese people are in such a position. So, um, where's the "evil"? Ruining the environment is an evil that is not easily justified by profit. Installing spyware is an evil that is not easily justified by profit. But, I ask again, where is the evil in setting up restricted servers in China? There was a photo on a news website showing supporters of the Free Tibet movement holding signs and protesting Google's move. Despite being sympathetic to Tibetans, I have to wonder if these people ever considered for just one second exactly what kind of harm Google has done to their movement. Anyway, to sum it up, if there is no true "evil" involved, then why shouldn't Google try to firm up its bottom line?

General Thoughts: China

China is slowly becoming more and more democratic. I was struck by how willing people were when it came to criticizing the government. Hop into any random taxi cab, strike up a conversation about government, and out comes a string of harsh words directed at the government. I find it odd that foreign news services don't seem to be able to pick up on this. In any case, the liberalization of China is a gradual process fueled by growing affluence and growing influence from the outside world (I'd imagine that the Internet helps). A bold (and foolish) gesture of defiance from Google is not going to do nearly as much good for democracy as the gradual improvement of China's information networks. Every Chinese person knows about censorship, and they even joke about what can or can't be said. Censorship isn't working, and it's only a matter of time before the dam breaks. By offering services in China, Google is contributing to the water behind that dam. In the end, censorship in China is not Google's problem, and there's nothing that any foreign entity can do about it; it's ultimately a problem with the Chinese government that only the people of China can do anything about.

General Thoughts: Google

I have always been impressed with Google's track record. Resisting the DOJ's ridiculous crusade against pornography (before someone compares this to the China scenario, remember that challenging the Chinese government and challenging the US government are two very different ballgames), being forward and upfront about controversial points that less honorable companies would've tried to hide, supporting open source, setting up strict guidelines for its software installers, supporting open chat standards, etc. are all examples of Google's "do no evil" policy, and my faith in them has yet to be shaken. Besides, I would much rather have the Chinese be introduced to the wonders of the Internet by way of Google instead of by way of Microsoft. ;)

This entry was edited on 2006/02/10 at 01:36:05 GMT -0500.

Potpourri (Random Stuff)

Wednesday, January 25, 2006
Keywords: Economics, Politics, Technology, Potpourri

"Dark Matter" in Economics

As mentioned in the latest issue of The Economist, there is a recently-published economic theory about something called economic "dark matter", which tries to explain why, despite having a huge negative account balance (i.e., our debt to the rest of the world), the US has a net positive flow of capital returns, which suggests a positive account balance. The idea here is that we are underestimating our true foreign account balance, much like how "dark matter" in physics serves as a fudge to account for what appears to be an underestimation of the amount of matter in the universe.

In depth: http://www.rgemonitor.com/blog/setser/113810

Ignoring the Facts

There's an interesting article about how people, once they have made up their minds on an issue, will tune out things that contradict that view, hampering rational judgment and discourse. This comes as no surprise. For example, I've noticed this in the debate about abortion, and even in personal interactions (i.e., how one's perceptions of others' actions are very strongly colored by how one already views other people). It's just interesting to see a scientific confirmation of this.

On that note, I wonder if this is how religions work: there are some who tend to attribute positive things that happen to them to God while glossing over the many neutral or negative events. And to be fair, I've also spent quite a bit of time wondering how much of this "filtering" colors the views of atheists.

Google Reader

There's a shocking lack of good RSS readers for Windows. Sage is nice, except that the interface is a bit awkward (probably because it's a Firefox extension). Thunderbird displays the whole page instead of just the content from the feed (plus, I don't use Thunderbird anyway). Opera's reader was okay, except that I don't use Opera. And all the other readers are bloated, slow, .NET-based (eewww), and/or clumsy in implementation. I was so tempted to just write my own. But I thought that it might be worthwhile to try some web-based readers, so I first tried Bloglines, but the interface was clumsy at best. And then I discovered Google Reader, and I'm impressed. A well-written software reader would still be better, but this comes close enough.

Google Sitemaps

Although the Google Sitemaps tool has been around for some months now, I didn't know that it existed until today. I'm going to try it out tonight; it looks like it could be pretty useful.

This entry was edited on 2006/01/25 at 17:36:13 GMT -0500.

Trying Out Internet Explorer 7 Beta 2

Monday, January 23, 2006
Keywords: Technology

I'm typing this entry using IE7b2 (the 5299 build that was leaked a couple of days ago)...

  • It seems that the CSS issues I would've liked to see fixed (support for the opacity syntax, :before, and PRE blocks inappropriately stretching the width of parent blocks) still haven't been fixed since IE7b1. Grrr.
  • It seems that certain other CSS issues have been fixed since IE7b1, like support for the parent>child selector, which breaks many CSS filters (including the one that I use to work around IE's other bugs). Dammit!
  • Microsoft turned on ClearType in IE7b2... even though I have ClearType turned off for Windows... and there's no option in IE7b2 to turn ClearType off in IE. ClearType doesn't look very good on flat-screen CRTs; what happened to letting the user control stuff like this?!
  • There's no menu bar. There's a kind of iconic drop-down toolbar that replaces it. It's an interesting concept with some merit, but for someone who isn't used to it, it's pretty distracting and inconvenient. I'm just trying to picture the Average Joe who is used to working with a menu bar suddenly having this foreign interface thrust in front of him. Oh, and you need to dig through the menus to find the option to change this; no more easy access via a right-click.
  • IE7b1 included Google Search in its quick-search box in the upper-right corner. IE7b2 does not include it (only MSN Search), and there doesn't seem to be an easy way (at the moment) to add it.
  • The new icons are very attractive, though.
  • The RSS reader looks nice (something that's missing in Firefox), but this was already there in Beta 1.
  • Microsoft sent me a CD of Beta 1, but it wouldn't install because the Genuine verifier was somehow fubared even though my OS is genuine (I ended up having to download a crack for Beta 1). Looks like this problem is fixed with Beta 2, as it installs just fine without a crack. I think that this fix and the new icons are the only things I like about Beta 2 over Beta 1. How pathetic is that?

Verdict? I think I liked Beta 1 more than Beta 2 (I still like Firefox more, though). And I'm not looking forward to finding new ways to work around IE's buggy CSS.

This entry was edited on 2006/01/23 at 16:15:33 GMT -0500.

Wireless Everywhere

Monday, January 23, 2006
Keywords: Technology

A little chat I had tonight...

vivienm: *hugs wireless internet in class*
Kai: wireless is everywhere these days
Kai: every single *budget* motel that we stayed at during our trip to Florida had free wireless
Kai: I run network stumbler, and I can pick up over a dozen different signals
Kai: it's ridiculous
Kai: the weekly brick&mortar ads now dedicate a page or two each week to wifi crap
Kai: it's mainstream
Kai: *sigh* remember back in the day, like 5 years ago, when this stuff was cool and exclusive?
vivienm: my laptop picks up... 5 wireless networks here
vivienm: including mine
vivienm: *shrug* I remember when the equipment was horribly pricy
vivienm: and when it wasn't built into laptops :P
Kai: yah
Kai: $120/card when I got mine, and that was on sale
Kai: now cards are dirt cheap and $120 gets you a premium high-end base station
vivienm: I got my current 802.11g router (which isn't used for routing) for like $10 after MIR
Kai: $10 CDN?!

Wow, 5 years! It seems like just yesterday to me. And the number of networks that I could pick up at this particular location went from zero to over twelve in less than four years.

This entry was edited on 2006/01/23 at 03:54:21 GMT -0500.

Time/Date Format

Monday, January 23, 2006
Keywords: kBlog, Technology

I'm genuinely curious about this: what the heck were they thinking in RFCs 822 and 2822 when they set the format for date-time? Why does the format look like "Sun, 20 Oct 2002 23:47:15 GMT" instead of "2002/10/20 23:47:15 GMT"? Okay, the day-of-week is optional, thank goodness, but was it really necessary to force the use of named months instead of numerical months? Why expend that extra effort (albeit not that much, but I guess it could add up if you're working with a lot of data and this was 1982) converting from a numerical value to text and back again? Not to mention the burden added to the programmer. Imagine doing that in C. Not that it's difficult or time-consuming, but it's just one more annoying thing to have to take care of.

I just added support for "conditional GETs" to kBlog after noticing that Apache was logging script errors for the 304 response code, which meant that kBlog had to parse the If-Modified-Since line in the request header. Great. So it's either load an extra module (Date::Parse) or manually parse the string and feed it to POSIX::mktime. Neither is difficult, thanks to the power of Perl (this blog entry probably took just as long to write), but it's the principle of the matter: what good comes out of this inefficiency in the specification?
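
For the curious, the manual route is only a few lines. This is a sketch for illustration, not kBlog's actual code, and it feeds the pieces to timegm from the core Time::Local module rather than to POSIX::mktime, since mktime would interpret them as local time rather than GMT:

    use strict;
    use warnings;
    use Time::Local qw(timegm);

    # Hypothetical helper, not kBlog's actual code: turn an RFC 822 date such as
    # "Sun, 20 Oct 2002 23:47:15 GMT" into a Unix timestamp. The optional
    # day-of-week is skipped over, and the named month has to be mapped back to
    # a number by hand, which is exactly the busywork I'm complaining about.
    my %months = (Jan => 0, Feb => 1, Mar => 2, Apr => 3, May => 4,  Jun => 5,
                  Jul => 6, Aug => 7, Sep => 8, Oct => 9, Nov => 10, Dec => 11);

    sub parse_rfc822_gmt {
        my ($str) = @_;
        $str =~ /(\d{1,2}) ([A-Za-z]{3}) (\d{4}) (\d{2}):(\d{2}):(\d{2}) GMT/
            or return undef;
        my ($mday, $mon_name, $year, $hour, $min, $sec) = ($1, $2, $3, $4, $5, $6);
        return undef unless exists $months{$mon_name};
        # timegm treats the fields as GMT; POSIX::mktime would treat them as
        # local time and need a timezone correction afterwards.
        return timegm($sec, $min, $hour, $mday, $months{$mon_name}, $year - 1900);
    }

    # Usage sketch: send a 304 if the file hasn't changed since the client's copy
    # (here $mtime would be the entry file's modification time).
    # my $since = parse_rfc822_gmt($ENV{HTTP_IF_MODIFIED_SINCE} || '');
    # print "Status: 304 Not Modified\r\n\r\n" if defined $since && $mtime <= $since;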

This entry was edited on 2006/01/23 at 02:10:29 GMT -0500.

Using XML for kBlog

Thursday, January 19, 2006
Keywords: kBlog, Technology

I remember a buzz a year or so ago about the use of XML as a data storage format, and how XML files would be ideal to take the place of a database in certain situations. And so, I decided to explore that possibility with the kBlog beta.

Since my needs were pretty simple, I didn't use a full-blown XML parser; the XML::Simple module sufficed. XML works, but it's not very efficient: there's the character escaping, and there's the fact that in order to parse XML, you need to look for matching start and end tags, and so on. The XML::Simple module may not have been the best choice either, but in the end, XML itself didn't seem all that suitable. There are many other encodings that are more efficient to parse, and I think I'll switch to an encoding of my own for data storage when I do version 1.0, mostly so that kBlog would then depend only on the fairly common POSIX module (though a minor performance improvement wouldn't hurt, either). The RSS feed is simple enough that it is already being generated without XML::Simple; generally speaking, XML is easy to create, but annoying to parse without a parser.
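
For reference, the kind of usage I'm talking about is nothing fancier than this (a rough sketch, not the actual kBlog code; the file name and element names are made up):

    use strict;
    use warnings;
    use XML::Simple;

    # A rough sketch with a made-up file name and element names, not the actual
    # kBlog data format. XML::Simple slurps a small XML file into nested Perl
    # hashes/arrays and writes them back out again.
    my $xs = XML::Simple->new(KeyAttr => [], ForceArray => ['entry'], NoAttr => 1);

    # Suppose entries.xml looks like:
    #   <blog>
    #     <entry><title>Hello</title><body>First post.</body></entry>
    #   </blog>
    my $data = $xs->XMLin('entries.xml');
    print $_->{title}, "\n" for @{ $data->{entry} };

    # Add an entry and write the whole thing back to disk.
    push @{ $data->{entry} }, { title => 'New post', body => 'More text.' };
    $xs->XMLout($data, OutputFile => 'entries.xml', RootName => 'blog');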

I guess the big upside to XML is human readability. It's easy to read and to edit (well, except for all the character escaping that needs to go on, which cancels out a lot of that benefit, actually), but readability aside, XML just isn't efficient to parse. The other upside is that XML is codified as a standard. I just wish someone would codify a data transport/storage standard that's geared towards machine readability and parsing efficiency instead of human readability.
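
To illustrate the lopsidedness (a throwaway sketch, not anything from kBlog, with invented field names): writing out a feed item by hand is just string concatenation plus an escaping pass, whereas reading the same thing back in means hunting for matching tags and undoing the escapes.

    use strict;
    use warnings;

    # Throwaway illustration: emitting an RSS <item> by hand is just
    # string-building plus escaping a handful of special characters.
    sub xml_escape {
        my ($s) = @_;
        $s =~ s/&/&amp;/g;     # must come first, or we'd re-escape the others
        $s =~ s/</&lt;/g;
        $s =~ s/>/&gt;/g;
        $s =~ s/"/&quot;/g;
        return $s;
    }

    my %entry = (title => q{Isn't this "fun" & <easy>?},
                 link  => 'http://example.com/blog/1');
    print "<item>\n",
          '  <title>', xml_escape($entry{title}), "</title>\n",
          '  <link>',  xml_escape($entry{link}),  "</link>\n",
          "</item>\n";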

Personally, I think XML is a bit overhyped. It's just a transport/storage format, and there are people who talk it up as if it's somehow The Next Big Thing™.

Fun with CSS (or not)

Thursday, January 19, 2006
Keywords: kBlog, Technology

One of the things that I was looking forward to trying out was the use of CSS for layout. Prior to this, I'd used CSS for formatting and tables for layout. It seemed to be the hip new thing to do, and everyone was doing it, so this should be fun, right?

The Good

The upside to doing this is the divorce of layout from the actual HTML. It was really refreshing to see how clean the HTML code looks compared to the code that I had for my previous website. What I used to do with lots of nested tables I can now do with a few DIVs. It's easier to tweak and adjust my layouts, too. The nicest thing is that for a script-generated site like this, having lighter HTML means that the script generating the page is also much cleaner and easier to look at. And as a bonus, the site is also somewhat usable when CSS is disabled (in Firefox, go to View > Page Style to see what it looks like with all the CSS off), so when I'm browsing with a text browser like Lynx (which I sometimes do, actually), things still work.

The Bad

And then there's Microsoft. Years ago, when the browser wars were raging between Netscape 4 and Internet Explorer, I was rooting for IE. This was before Gecko, of course. It was such a relief to have a browser that would render what you tell it to render instead of seemingly randomly deciding to make something either 10 pixels wider or narrower (which was what Netscape 4 was doing). I hated the web design process back then because I spent significantly more time tweaking the code to work around the various bugs in Netscape 4 than I spent working on the design itself, so I was not sorry to see Microsoft win the browser wars. But times have changed, and ever since mid-2003, I've been using Firefox (or Firebird, as it was called back then) as my default browser. So when it came time to do the layout, it was only natural for me to do all the preliminary work in my default browser, following the W3C specifications. That was a dumb thing to do, because as soon as I started to test the layout in IE, things got nasty, either because of spotty CSS implementation or because of the countless bugs in IE. I eventually ended up spending more time banging my head against the wall, cursing Microsoft, and working around the IE-related problems than I spent on the actual layout itself. Déjà vu, with an ironic role reversal.

The Ugly

Whoever at the W3C decided that it would be a good idea to officially deprecate the use of tables in favor of CSS for layout should be shot. There are certainly many delightful benefits (as mentioned above), but there are also severe shortcomings to using CSS for layout. The whole notion of floating CSS boxes to create columns is really a hack. Just as tables were meant for tabular data and not for layout, CSS boxes were meant to format parts of a page and not to serve as a layout skeleton, and the liberal use of float to shoehorn them into that role is just as unwieldy as the use of tables. It also doesn't help that IE is a bit buggy with respect to all this. The lack of stretchable dimensions (or at least the ability to define them with mathematical expressions without resorting to JavaScript) is annoying as well (no, percentages don't count, because you run into rounding problems with them, and they don't help when sibling elements are fixed in size). Positioning is also a royal pain in the ass. You can either go with the flow, float the element, or take an absolute position with respect to the canvas. There's no way to take an absolute position with respect to something that is relatively positioned (like a parent element). This makes absolute positioning pretty much worthless unless you use it for the entire page (which is not only tedious and difficult to maintain in the long run, but also misses the point). In the end, I found that tables are MUCH easier to work with for layout. They may be ugly and bulky, but at least they are suitable. I do love the benefits of CSS layouts and would love to see the day when CSS layouts completely displace table layouts, but that will require enhancements to CSS to allow it to perform that role. It seems ridiculous that they would deprecate something without first ensuring that the replacement is suitable.

Oh, and another rant against the CSS box model: there are benefits to setting the width and height to the inner content width and height. This is especially true if the sizes of child elements are known, making it easy to set the parent's width and height. But in layouts, dimensions are often constrained not by child elements but by sibling and parent elements, which is why Microsoft's way of doing dimensions in IE5.5 and earlier made a lot of sense: the dimensions of an object represent the content dimensions plus the padding and border. For example, if you have a parent div that is 400 pixels wide and you want to fit two boxes with a margin space of 20 pixels in between, setting the width was easy: (400-20)/2. But with the official CSS box model, you have to toss the border and padding into that equation, making it slightly inconvenient (especially if you are tweaking the border and padding to see what looks best). This is especially troublesome when working with percentages. A child that has any sort of border or padding can never use a dimension of 100% (making it impossible to get a child with a border and/or padding to match the size of the parent if the parent's size is unfixed). A sensible solution would be to introduce an alternate measure of dimension, like outer-width and outer-height, which could be set in lieu of width and height. Another solution would be to accept mathematical expressions in CSS, such as width: 100% - 8px.