Gitlog on port 9418
This post is written in response to the fascinating post by Solderpunk:
Solderpunk - Low budget p2p content distribution with git
It’s a long post, but riveting. I just wanted it to keep going and going. There is so much to think about, even though it’s such a simple idea.
I won’t talk much about the idea here, because Solderpunk explains it so well (just go and read the post). But, it’s basically a proposal to start thinking about how to use git as a content-distribution protocol.
The concept, as I understand it, needs to be grasped against the background of ‘sustainable computing’, or whatever you want to call it. ‘Sustainable’ not only in the environmental sense (although, that is of course key), but also in the personal and social senses.
Anyway, like I said, just read the post.
Here, I just wanted to document the process of my attempt to participate in this little idea-experiment. My further thoughts/reflections on the process are at the end, in section 4 (skip ahead if you already know how to do all of this).
At the moment, I have a version of my gemlog as a ‘gitlog’. To see it:
git clone git://spool-five.com/spool-five-gemlog
I am not particularly computer-literate. I’ve been using GNU/Linux for over a year now, and that has certainly done a lot to educate me, but I was curious about how feasible it would be for someone with average knowledge to implement a little test ‘gitlog’ (or whatever it will be called in the future).
It turned out to be pretty simple. I consulted the git manual, and that was about it. I did need to turn to the git manual, though, because when I searched for things like “serve files with the git protocol”, I mostly found articles about the usual uses of git: sharing and collaborating on code. Solderpunk is right in that regard; it’s a very simple idea, but not many people seem to have caught onto it yet. Anyway, these were my sources:
In the documentation, there is a discussion of the several possible protocols for serving git directories (local, http, ssh, etc.). Here’s what it says about the git protocol:
Finally, we have the Git protocol. This is a special daemon that comes packaged with Git; it listens on a dedicated port (9418) that provides a service similar to the SSH protocol, but with absolutely no authentication. In order for a repository to be served over the Git protocol, you must create a git-daemon-export-ok file — the daemon won’t serve a repository without that file in it — but, other than that, there is no security. Either the Git repository is available for everyone to clone, or it isn’t. This means that there is generally no pushing over this protocol. You can enable push access but, given the lack of authentication, anyone on the internet who finds your project’s URL could push to that project. Suffice it to say that this is rare.
To test it out, I used my gemlog directory. Here’s what I did to set it up:
1. Create a bare git repository
First, you need to create a bare copy of the git repository you want to serve.
In my case, I just initialised a git repository in my separate ‘gemlog’ directory. (I also added a .gitignore file for the non-gemlog files that were in that directory.)
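In case it’s useful, the initialisation looked roughly like this (the directory name and the .gitignore entry are just placeholders for my own setup):
cd gemlog
git init
echo "drafts/" > .gitignore   # placeholder: whatever non-gemlog files live in the directory
git add .
git commit -m "initial gitlog commit"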
Then, I made the git-bare/shareable repo:
git clone --bare gemlog spool-five-gemlog.git
I added the ‘spool-five’ prefix to the bare git directory. I actually came back and did this later. I think something like this is necessary under Solderpunk’s model, since otherwise the pulled git repo is just a folder named ‘gemlog’ (or ‘gitlog’, or whatever). If you have multiple people’s repos called ‘gemlog’, not only would they conflict during the pull, but you wouldn’t have a way of distinguishing them. This is where the ‘metadata’ model that Solderpunk talked about would come in, I suppose. For now, I’ve just manually named it as something unique.
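(Worth noting, though I don’t think it solves the identification question: on the pulling side you can always clone into a directory name of your own choosing, e.g.:)
git clone git://spool-five.com/spool-five-gemlog some-unique-local-name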
2. Put the bare repository on a server
I just used rsync:
rsync -av spool-five-gemlog.git git@server:/path/to/public/directory
I also then added that as a remote source for the gemlog git directory, so I could just commit/push to it from now on.
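Roughly like this, assuming the remote is called ‘origin’ and the default branch is ‘master’ (adjust both to your setup; the path is the same placeholder as in the rsync above):
git remote add origin git@server:/path/to/public/directory/spool-five-gemlog.git
git push -u origin master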
3. Serve the files with the git daemon
This part was the trickiest for me. I had to fiddle around with the syntax a bit. Really, though, it’s just one, short line.
First, you have to allow the git daemon to export your files. You do this by simply creating an empty ‘git-daemon-export-ok’ file inside the (bare) repository you want to serve.
touch /repo/directory/git-daemon-export-ok
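Concretely, the file goes inside the bare repo itself; with the paths I use in the next step, that works out to something like:
touch /home/git/public/spool-five-gemlog.git/git-daemon-export-ok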
Then, just serve the files with something like the following command:
git daemon --reuseaddr --base-path=/srv/git/ /srv/git/
This was the part that took a bit of fiddling. The way I have it now seems to work. I have my git directory in a directory called ‘public’, within the ‘git’ user’s path. So my exact command was:
git daemon --reuseaddr --base-path=/home/git/public/
Then, if I run the following command from anywhere, it fetches the git repository:
git clone git://spool-five.com/spool-five-gemlog
That’s it!
3.1 Make it more permanent
To make it more permanent, you can set up the git daemon as a systemd service (or with whatever init system your machine uses).
Create the file /etc/systemd/system/git-daemon.service and place the following in it:
[Unit]
Description=Start Git Daemon
[Service]
ExecStart=/usr/bin/git daemon --reuseaddr --base-path=/srv/git/ /srv/git/
Restart=always
RestartSec=500ms
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=git-daemon
User=git
Group=git
[Install]
WantedBy=multi-user.target
I took this straight from the git docs which I linked above. The main parts which will be different are the ExecStart command, and the User/Group.
Then, just do the usual ‘systemctl enable git-daemon’ (to have it start at startup), and ‘systemctl start git-daemon’ (to start it now).
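Concretely (you may need sudo, and a daemon-reload if systemd hasn’t yet picked up the new unit file):
sudo systemctl daemon-reload
sudo systemctl enable git-daemon
sudo systemctl start git-daemon
sudo systemctl status git-daemon   # check that it's running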
4. Questions/considerations:
Linking?
A very open question. There’s probably a good solution, but it’s not for me to figure out.
Protocols?
I’ve used the git protocol in this example, but I’m not sure it’s the best one. It seemed the simplest to me (for just sharing text files, with no collaboration, etc.), but it’s the only one I tried out. I wonder which one Solderpunk had in mind? He does mention just using local git clones (for example, in the ‘sneakernet’ model) and also mentions the benefit of the git protocol for older computers with no crypto support. But, since I don’t know the ins and outs of TLS, etc., I really don’t know what’s right here.
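Just to illustrate the options, here is what cloning the same repo over the different transports would look like. Only the git:// one actually works on my server; the HTTP, SSH and local paths below are made-up examples, and each would need extra setup (a web server in front, an SSH account, a physical copy):
git clone git://spool-five.com/spool-five-gemlog              # git protocol, port 9418, no encryption
git clone https://spool-five.com/spool-five-gemlog.git        # hypothetical: 'smart' HTTP, needs a web server
git clone git@spool-five.com:public/spool-five-gemlog.git     # hypothetical: SSH, needs an account/key
git clone /media/usb/spool-five-gemlog.git                    # local clone, i.e. the 'sneakernet' model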
Authentication?
I played around with ‘signing’ some commits. It’s pretty simple and, as Solderpunk pointed out, built right into git from the get-go. The only question I have here is what to do with the public key for verification. I don’t know if one of the main keyservers is the best option. I guess there could be some kind of ‘smolnet’ keyserver for ‘gitlogs’, but then that would be something centralised/maintained. I’m a bit lost on this part.
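For what it’s worth, the signing itself is just a config option and a flag (this assumes a GPG key already exists; the key ID below is a placeholder):
git config user.signingkey ABCD1234        # placeholder key ID
git commit -S -m "a signed gitlog post"    # sign this commit
git log --show-signature -1                # verify the signature on the latest commit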
Structure/format?
This maybe isn’t a pressing question. Obviously, a strength here is that the repo/text files can have any kind of structure at all. People could be creative with it. One thing I did wonder about though is the old ’text wrapping’ question. Is it better to have gopher-style hard wrapping or gemini-style soft wrapping? I really have no idea! Also, I guess there could be some kind of ‘meta’ pages, with information about the ‘gitlog’, etc.
Engagement tools?
Would there be a dedicated client for this kind of thing? Or, just whatever text editor is at hand? One thing that really drew me to this proposal was the concept of using your own tools/methods to interact with the text (searching, building local databases, etc.)
Weightiness?
This one I had to think about for a while, because my initial reaction to setting up a ‘gitlog’ was one of ‘uneasiness’ at the thought of a blog being permanent in a certain sense.
Artists, software developers, musicians, and so on are all used to people having their work stored locally. But for bloggers/phloggers/gemloggers, or really any social media users, we’ve been kind of integrated into a system that (in appearance at least) simulates ‘ephemerality’. Posts, comments, pictures, are all just bits of information that echo into a void, bounce around for a while, then fade out. There is something comforting about this. It gives a sense of ‘freedom’ and ‘lightness’; I can say whatever I want, because it will eventually be lost in an endless stream of bits. Coming from that kind of environment, the thought of your writing/confessions existing on someone’s USB stick twenty years from now feels unnerving and a bit too weighty.
In a similar vein, Solderpunk writes:
If you change your mind about something you wrote ten years ago and want to change it, you can do so - but everybody “subscribed” to your repository will be notified of this fact and will be able to see both the before and after versions. This kind of publishing is, by necessity, radically long-lasting and radically transparent in a way that people aren’t used to and many may not be ready for.
However, as Solderpunk points out, there are also some problems with that kind of thinking. Below are my own thoughts on it.
Firstly, the simulated ‘ephemerality’ which underpins a lot of online social discourse doesn’t exist at the concrete level. Or rather, it only exists in a certain sense (for the end-user). I would say that a lot of the meaningful substance of online social interactions is indeed ephemeral and fragile. It can be easily ‘erased’ or ‘forgotten’ (when a server goes down, when you’re censored, when you opt out of a service and lose access to contact information, etc.). But the ‘monetizable’ aspects of social interactions get captured within processes which circulate them around for longer periods of time (basically, ad/tracking networks). They are anything but ephemeral.
The trend of social media is actually toward permanently preserving this level of ‘social’ interaction. For example, when Twitter/Facebook first launched, their timelines were structured in ‘real time’. Tweets/Facebook posts would pop up as they were written, and then disappear. There was something genuinely ephemeral about this process. Not only were you ‘forgetting’ past posts/threads, but so too were Facebook/Twitter (to a degree). Over time, however, those companies took it upon themselves to capture/store/index the content, and to arrange it in a particular mode. To the extent that social media as a whole trends in similar directions, the internet becomes more and more ‘weighty’, even if a simulated effect of ephemerality still persists. The ‘weightiness’ in these cases is a monetary weightiness. The data is structured so that it is economically valuable (both in terms of providing information about markets/consumers, and in providing information/strategies for how to capture and hold users’ attention). Genuine social connections and interactions do remain ephemeral, though, because you, the user/client, do not have any of the necessary access/resources/tools to store and organise the data in a way that reflects your personal understanding of meaningfulness or social connection.
Secondly, even if ‘gitlogs’ became a thing, their ‘weightiness’ has to be taken in a certain sense. Degrees of ephemerality and anonymity would still exist. In fact, it could be even more ‘ephemeral’ than mainstream social media, because the capture/store/index tools would be decentralised and fragmented. Even if your writings/musings exist at multiple points on a decentralised network, it’s not as if everyone will be constantly reading them and tracking them and violating your sense of privacy.
Still, though, I think there would undoubtedly be a ‘weightiness’, and it is one that we should spend time reflecting on. I don’t think it has to be unnerving (despite my initial, misguided reaction). For you, the end-user (and content creator), it would exist at a more genuine, social level. You might have to slow down a bit more, think more about what you say. This, in my opinion, is not a bad thing. It just feels scary, because we’ve been indoctrinated (via social media) to act otherwise.
So, from the perspective of end-users, social media is ephemeral in a technical sense: you have no technical control over, or say in, your personal connections and interactions. A git-distributed system is the exact opposite. This isn’t a bad thing.
You have more power over your online social interactions, but also more responsibility.
Finally, the responsibilities that are encouraged, as I see it, are not only the responsibilities of discourse (writing/thinking in a meaningful, more permanent way), but also things like digital consciousness (everyone becomes a part-archivist), security consciousness, and, most importantly of all, environmental consciousness. At the end of the day, the true power of this format is in the ability to ‘disconnect’ at the network level, while still remaining ‘connected’ socially. In my opinion, that freedom in itself immensely outweighs the simulated freedom of being able to say whatever you want (whenever you want) in a YouTube comment section. It’s worth thinking more about.