Masqq posted a list of “tools that follow the Unix philosophy”.
As I skimmed the list, a thought struck me: “I used to work this way. Why did I stop?”
It only took a couple years for me to transition from doing “everything” in a terminal to leaning heavily on web apps. There were several factors at play, but I think Google Reader is what got me. Checking email from multiple locations was a solved problem with IMAP, but there wasn’t a great way to keep multiple feed readers in sync. I could’ve used a text-based feed reader on one machine and accessed it with a remote shell, but switching to Reader was just so simple.
Making it easier to do a task from “anywhere” is web apps’ killer feature. For example, it’s so much easier to open a document in Google Docs from wherever I am than to, e.g., maintain a git repository of md/tex/txt files with clones on all of the computers I use. I say this as a person who uses git literally every day.
Fortunately, in the case of my text files, there are already solutions that are easier than a distributed RCS and don’t have the drawbacks of relying on a Google service. I use Syncthing, an open-source, decentralized tool that otherwise works like Dropbox, to keep my notes/TODO lists/everything else in sync between computers.
But how do we keep feed readers, browsers, &c in sync?
One potential solution would be to rewrite our programs to work on top of something like Syncthing. Most programs already persist state between sessions using the file system; instead of only reading these files on startup and writing them on shutdown, programs could monitor them for changes and update their in-memory state accordingly. This definitely wouldn’t be a trivial change. Are you going to alert the user about merge conflicts? Are you going to make the refresh frequency and merging behavior user-configurable? This adds a lot of complexity to an application!
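To make the idea concrete, here’s a minimal sketch of what “watch your own state file” might look like for a feed reader. Everything in it is hypothetical: the file name, the shape of the state, and the union-based merge are stand-ins for whatever a real application would persist. Syncthing itself never enters the picture; it just replaces the file on disk, and the app notices.

```python
import json
import os
import time

# Hypothetical state file living inside a Syncthing-synced folder.
STATE_PATH = "reader-state.json"

def load_state(path):
    """Read the persisted state (here, just a list of read article IDs)."""
    with open(path) as f:
        return json.load(f)

def merge(local, remote):
    # Naive merge for a feed reader: an article read on either machine
    # counts as read, so the union of both sets is always safe. Every
    # other field of real state needs its own policy, and genuine
    # conflicts need to be surfaced to the user somehow.
    return {"read": sorted(set(local.get("read", [])) | set(remote.get("read", [])))}

def watch(path, state, interval=2.0):
    # Poll the file's mtime; when Syncthing drops in a newer copy from
    # another machine, merge it into the running state. A production
    # version would use inotify/FSEvents rather than polling.
    last_mtime = os.path.getmtime(path)
    while True:
        time.sleep(interval)
        mtime = os.path.getmtime(path)
        if mtime != last_mtime:
            last_mtime = mtime
            state = merge(state, load_state(path))
            print("state refreshed:", state)

if __name__ == "__main__":
    watch(STATE_PATH, load_state(STATE_PATH))
```

Even this toy punts on the hard parts: the merge only works because “read” flags are add-only, and it ignores the `.sync-conflict` copies Syncthing creates when two machines modify the same file before syncing.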
Other approaches are possible, and I’d believe that there are simpler ones that haven’t occurred to me. But even if there aren’t, I don’t want this to be construed as an argument that “distributed” is inherently more complex than “centralized”. Rather, the complexity in one case is readily apparent while the complexity of the other is hidden from view. Think about all of the work that goes into achieving the nines of uptime necessary to make centralized services feel reliable. Google’s and Amazon’s infrastructure is definitely not “simple”; it’s just another person’s problem. Except it becomes your problem when Google or Amazon decides to renegotiate the terms of the deal.
There’s a metaphor about centralization of power here. It may seem “simpler” to delegate decision making to someone else, but you’re really just removing a bit of complexity from your own day-to-day life at the cost of creating more complexity somewhere else. In some cases perhaps that’s fine, but it shouldn’t be done lightly, and it should never be taken for granted.