Vanity Searches and a Little History

I mentioned in a previous post that Google has gone to the effort of making some very early USENET archives available. I admit that I was never much of a poster on USENET; rather, I would read through newsgroups and search for answers to the burning questions of the day.

A few days ago I did a vanity search in that archive to see what I’d find. (A vanity search is when you search for instances of your name on the web — try it sometime — it’s fun.) I found this posting that dates back to August 1994. It’s the earliest post that I’ve located under my name. You can read the post by clicking through, but the signature block brought back memories:

Jeffrey Kay "Net Surfer"
------------------------------------------------------------------------
Internet: jkay@k2.com UUCP: uunet!kappa!jkay
------------------------------------------------------------------------
"In the middle of every difficulty lies opportunity" -- Albert Einstein

Interesting memories surround this. One thing you’ll notice is that I’ve had this e-mail address for at least seven years. You’ll also notice the “net surfer” reference — sort of goofy in retrospect, but surfing the net in 1994 was a new and interesting thing to most people. I also had a UUCP address that I suspect very few of you will recognize.

Around that time, direct Internet connections into workplaces were very rare and no one had connections at home. Being the developer that I was, I had been goofing around with different operating systems and around 1990 purchased a copy of Coherent, produced by the Mark Williams Company. It was a Unix variant, based on Unix v7. I had been doing mostly DOS development at that point, so Coherent was an incredible opportunity to learn. It was cheap and ran on just about any x86 PC. I took a 286 PC, loaded it up, and in very little time had a running Unix system in my basement.

Connectivity was the next order of business. Coherent included UUCP, so I opened a low volume UUCP account at UUNET (then an independent company). For $300 a year ($25 per month), I received two hours of UUCP connectivity. I named the Coherent box “kappa” and its UUCP address was uunet!kappa. My user account was jkay and hence the complete e-mail address uunet!kappa!jkay. Twice a day my Coherent box would dial out to a local UUNET number and upload outgoing e-mail and download any new messages. This was my setup for almost four years. It was a little painful to set up, but I was the only kid on my block who had a real e-mail address of his own, delivered into his own computer.
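Those bang-path addresses were source routes: each host in the path forwarded the mail on to the next one, and the last component named the user on the final host. A toy sketch (modern Python, obviously nothing Coherent ever shipped) of how such an address decomposes:

```python
def parse_bang_path(address):
    """Split a UUCP bang-path address into its forwarding hops and user.

    In 'uunet!kappa!jkay', uunet forwards the message to kappa,
    and jkay is the user account on the final host.
    """
    *hops, user = address.split("!")
    return hops, user

hops, user = parse_bang_path("uunet!kappa!jkay")
print(hops)  # ['uunet', 'kappa']
print(user)  # jkay
```

The route had to be spelled out by the sender; there was no global name resolution, which is exactly why a well-known relay like uunet at the front of the path was so valuable.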

Coherent was an incredible system and a great way to get familiar with Unix. I built some software using the GNU compilers that were available and learned a great deal about Unix system administration and Unix in general. After a while I moved on to Linux using a Yggdrasil distribution of a 0.x kernel, but Coherent was a great opportunity to learn for a mere $99. Mark Williams shut down in 1995 — a eulogy is posted here.

I’ll post some future comments about the subject of that USENET posting — the Apple Newton — and PDAs in general.

Fidelity of Collaboration

In the spirit of Google reposting early rants and raves of the Internet, I thought I’d post some thoughts about communications.

My first foray into collaboration systems was around 1982, when I built a system called 000sys (arcane name, I don’t recall the history of the name exactly) at the University of Virginia. It was built on an HP2000 computer system in BASIC and supported single-threaded conversations, instant messaging, “pages” that people moderated, and file sharing. It was a remarkable system for its time and had hundreds of users at UVa. Since the system wasn’t accessible externally, only UVa users could access it.

It’s truly remarkable that in the almost 20 years since I built that system, very little has changed in the way we use computer systems to collaborate. The technology has advanced, but the metaphors are the same — lists, instant messages, e-mail, pages. What I look forward to most in this millennium is some way to break out of that mold and really substantially change the way we collaborate using computers.

With respect to this, I’ve noticed a significant trend regarding technology and the transmission of content. It seems that the level of technology is inversely proportional to the fidelity of the transmission. Consider the range of technology from face-to-face conversation (lowest tech) to Instant Messaging (highest tech), and how the fidelity of the transmission falls accordingly.

  • Face to face (F2F) — low tech, highest fidelity. Includes facial expressions, voice inflections, and gestures as well as the words themselves.
  • Telephone — higher tech than F2F, somewhat lower fidelity. Loses facial expressions and gestures but maintains vocal inflections and words.
  • E-Mail — higher tech than telephone, still lower fidelity. Loses vocal inflections, but allows the communicator to at least put together well thought out paragraphs.
  • Instant Messaging — higher tech than e-mail, lowest fidelity yet. Loses well thought out paragraphs of information, relies heavily on the typing ability of the communicators to transmit messages.
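
The inverse relationship in the list above can be expressed as a toy ranking (the numeric levels are my own illustrative values, not measurements of any kind):

```python
# Each channel gets an arbitrary 1-4 score for technology level and for
# fidelity, per the ranking above: higher tech pairs with lower fidelity.
channels = [
    # (channel, tech_level, fidelity)
    ("face to face", 1, 4),
    ("telephone", 2, 3),
    ("e-mail", 3, 2),
    ("instant messaging", 4, 1),
]

# Check that fidelity strictly decreases as technology level increases.
ordered = sorted(channels, key=lambda c: c[1])
fidelities = [fid for _, _, fid in ordered]
assert all(a > b for a, b in zip(fidelities, fidelities[1:]))
print("fidelity falls as technology rises")
```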

Hopefully we’ll figure out how to use new technology to increase the fidelity of our communications and collaboration, not continually reduce it. I don’t know if anyone else has coined this law — if not, perhaps someone will be generous enough to name it Kay’s Law of Collaboration, thereby eternally having the entire universe assume that it was Alan Kay that stated this :-).

Digital Rights Management and Copy Protection

In a previous life, I developed Digital Rights Management (DRM) software. With the rise (and decline) of Napster and other forms of P2P file sharing, DRM is suddenly in vogue again. It seems that DRM, along with its cousin copy protection, is on a five-year cycle. About every five years digital information copying becomes a real industry worry, companies implement a DRM or copy protection solution, that solution fails, and then DRM goes away for another five years.

This issue first became important when the PC was a new idea on the market. I recall purchasing software that had some form of copy protection included. One mechanism required a hardware “dongle” to be plugged into the serial or parallel port of the computer. Others required the original disk (with hidden files) to be inserted to start the software. Yet another scheme used a laser hole in the distribution medium to ensure that a particular sector of the floppy couldn’t be written to or read from. Overall, these schemes generally failed. Why? Because five minutes after a scheme was invented, someone developed a means to bypass it. The bypasses ranged from the obvious to the sublime — I even recall traipsing through the assembly code of some application to remove the calls to the routine that talked to the laser hole on a diskette.

Later, copy protection schemes required answering some question from the installation guide (these even exist today). I’m not sure that anyone could develop a more perverse mechanism than that. I actually learned all of the capitals of the republics of the then-Soviet Union so that I could play my copy of Welltris without having to drag the book around.

Today this sort of thing has become fashionable again as the recording industry seeks to protect its profits by preventing people from copying music files. This was a big enough issue that it took down the king of MP3 sharing systems — Napster. (Sometime in the future I’ll post a retrospective on the legal defense of Napster.) Napster was an easy target, with its big central database used for searching, but tools based on completely decentralized searches (Gnutella, LimeWire, etc.) will be harder to take down.

And so we come upon the latest installment of Digital Rights Management. The recording industry, seeking to ensure that music doesn’t get shared, is going to put anti-copying alterations on CDs and is attempting to release music to the net by selling licenses to it. Even Windows Media Player includes license management to protect the redistribution of music and videos.

There’s a real problem with all of this however. First of all it’s a losing battle. The minute one scheme comes out, someone comes up with a way to break it. Generally speaking, it’s not that complicated to attack DRM because ultimately the unprotected goods have to be sent to your computer. From there, it’s only a matter of capturing the unprotected data. The five year cycle of DRM occurs because of this.

Second, it’s a bad thing in general. Today most content is protected by copyright. The beauty of copyright is that after a certain period of time, the content passes into the public domain. The founding fathers of the US did this (both for copyright as well as for patents) so that our culture would be enriched over time. However, DRM could actually undo virtually all of that.

DRM offers the opportunity to specify the terms and conditions for use of the content — a license. This is how software is distributed today — under license. If music and books were distributed this way, the works would never pass into the public domain. The terms and conditions would be spelled out and the buyer would have to agree. Only those who agreed to the license would ever be able to use those works. Imagine if the license also included a prohibition on criticizing the content (for example, you would never see a bad movie review). That may seem unrealistic to you now, but it’s wholly possible.

While the debate rages over MP3 and music sharing, the momentum is still against DRM schemes. Recently the recording industry was criticized for attempting to copy protect a recently released CD. In fact, I believe that we are at the point where the recording industry, represented by the RIAA, needs to recognize that a certain amount of music is going to be shared. To account for this, the RIAA membership should lower the price of music CDs. I believe that the high price of CDs will continue to drive music lovers away from the record stores and onto the Internet. While bandwidth is still at a premium for many people, this is the time to get people used to paying 10 to 25 cents for a song they can download. For a quarter, it’s hardly worth waiting hours to download via LimeWire; high quality audio could be made available via high performance servers that would counter the low performance and unpredictable quality of the peer networks.

Why Great Companies Fail

Clayton Christensen, who developed the seminal theory on disruptive technology, examines why companies fail and why theory trumps data. From the article:

Businesses get blindsided because they focus on their best, most profitable customers and ignore other potential markets or customers seeking lower-cost products. This narrow view, Christensen says, ignores the fact that every market is characterized by three distinct change trajectories:

  • Performance improvement that customers can readily use (that is, it matches their own changing needs).
  • Technology advances driven by sustaining technological improvements.
  • New performance introduced by a disruptive technology, which typically begins at a lower level of performance, but rapidly improves until it meets the majority of customers’ needs.

I think this is an interesting market observation. Markets change by improving the performance of existing solutions, improving existing technology, or introducing disruptive technology. If you don’t change with them, you stand a good chance of tanking. This is what led to the end of the mini-computing era — PCs were a disruptive technology that ended the life of mini-computing companies. There were no longer significant performance improvements or sustained technological improvements that could prevent PCs from taking hold of the market.

You can find the full article on CNet News.