Thursday, May 21, 2009
Earlier this morning someone mentioned in passing that an advantage of Linux is that it can't get viruses. I think we all know that's not really true, but it seems to be a common misconception.
The first virus I ever heard about was actually a Unix virus, presented at a Usenix conference. Andrew Hume stuffed some self-replicating code into the padding of Z-format executables and passed them around Bell Labs during, if I recall correctly, a wc performance war that was taking place inside the Unix Systems Lab. It didn't do much other than attach itself to other Z-format executables, but it was a pretty decent proof of concept.
Tuesday, May 19, 2009
"Unguessable" URLs
Yesterday I saw a tweet go by saying that some websites provide security by generating "unguessable" URLs. It didn't say what they're trying to protect; if it's something low-value, that's probably not a problem. But it does raise the question of what's meant by "unguessable" and whether the people writing the code have any real understanding of randomness and how to get it. It's reminiscent, to some extent, of people who post to Usenet that they've got an "unbreakable" crypto algorithm and it turns out to be XOR-based. So, given the constraints of URL syntax (characters, length, and so on), how likely is it that someone who has seen a few examples of these unguessable URLs would really be unable to start generating guesses and getting hits?
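Just to make the guessing question concrete: everything depends on where the randomness comes from. Here's a toy sketch (the 20-character token, its alphabet, the time-based seed, and the one-day search window are all my own assumptions for illustration, not anything the tweet described) of how a token drawn from a PRNG seeded with the clock falls to someone who simply replays candidate seeds:

    import random
    import time

    ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789"

    def weak_token(seed, length=20):
        # A deterministic PRNG seeded from the clock: the token's entropy is
        # only as large as the set of plausible seeds, not its length.
        rng = random.Random(seed)
        return "".join(rng.choice(ALPHABET) for _ in range(length))

    issued_at = int(time.time())
    observed = weak_token(issued_at)   # the "unguessable" URL component

    # An attacker who knows roughly when the URL was issued just replays
    # every candidate seed in, say, a one-day window.
    for candidate in range(issued_at - 86400, issued_at + 1):
        if weak_token(candidate) == observed:
            print("seed recovered:", candidate, "token:", weak_token(candidate))
            break

The token looks like 36^20 possibilities, but the real search space is the 86,401 candidate seeds, which a laptop can walk through in seconds.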
In 2001 Michal Zalewski published a really fascinating paper, "Strange Attractors and TCP/IP Sequence Number Analysis," in which he visualized the supposed-to-be-random TCP initial sequence numbers (ISNs) generated by a number of different operating systems. Kevin Mitnick had exploited poor ISN randomness to hijack TCP sessions and break into some systems, and the combination of the high visibility of Mitnick's actions and Zalewski's striking visualizations motivated a number of OS vendors to improve their ISN generation. A year later Zalewski published a follow-up paper visualizing the revised generators. There were still problems. Randomness is hard. Relying on "unguessable" URLs to protect content or transactions seems unduly risky to me.
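For contrast, here's a minimal sketch of what getting it right looks like: pull the token from the operating system's CSPRNG, in this case via Python's secrets module (the URL layout and the 32-byte token length are my own choices for illustration):

    import secrets

    def unguessable_url(base="https://example.com/d/"):
        # 32 bytes from the OS CSPRNG, encoded as URL-safe base64:
        # roughly 43 characters carrying 256 bits of entropy.
        return base + secrets.token_urlsafe(32)

    print(unguessable_url())

With that much entropy, seeing a few issued URLs tells you nothing useful about the next one; the trouble, as the ISN story shows, is that generators are routinely much weaker than they look.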
Saturday, May 16, 2009
Cloud computing did NOT cause Google's outage
This strikes me as just wrong. The issue here isn't that there's an inherent problem with cloud computing that led to the outage. Indeed, cloud and distributed computing add robustness in the face of many kinds of failures, since processes and services can migrate or be distributed onto unaffected nodes. This was a routing failure, one that would have been a problem whether the services were running on a single server or many servers or virtualized servers or distributed servers or ... .
There are certainly reasons to be careful about storing data on servers you don't control (see, for example, this), but I really don't think concerns about robustness should be among them.