Today we’re going to mix things up a little bit with something not so enterprise-y. This is from a side-project of mine. As is always the case with side-projects, you need to strike the right balance between „Oh, look at this shiny new technology!“ and „I wanna get things done!“. For this specific project, I’m more on the side of getting things done. Also, this is not a concrete How-To—just an account of my travels.
Disclaimer: This thing turned out way more rambling and anecdotal than anticipated.
A little context up front
There are those „Choose your own adventure“-books. Basically, you read a page about, say, you being stranded on a lonely island. At the end of the page, you are presented with a couple of options: If you want to walk along the beach, turn to page 13. If you want to investigate the forest, turn to page 21. If you want to take a nap, turn to page 35. You read the next page, the story continues according to your decision, then new options arise. And so on. There are a couple of different endings, some good, some bad. You re-start at the beginning, trying out new decisions. Fun!
I want to enable people to create this kind of story themselves. Digitally, of course. And with the twist of adding some state and scripting into the mix. That means you would be able to implement an inventory, a combat-system, or what have you.
How does it feel?
I dig open platforms. And since this is a non-profit side-project, I don’t have to shoe-horn some crude business-plan into it. Yay!
The game itself will be compiled into a single HTML-file, with some embedded JS and CSS. Just one file, which makes it easily distributable (host it on Dropbox, attach it to an Email, put it on a USB-Stick), very portable (a browser—as the required runtime—is surely available on every target system), playable offline and even somewhat hackable (it’s just text, no proprietary, binary file-formats).
I won’t go into detail about how the „game-engine“ works, because that’s not the story I want to tell today. Maybe some other time. Also, this story is not about the games. It’s about the editor that makes the games. I just used the game-format to introduce you to the properties I like. Ha!
So, naturally, I started building the editor using web-technologies. How does it hold up against my targeted properties?
Distributability of the editor is not as crucial as with the games. There will be one canonical source for the editor. It doesn’t have to be distributable among peers. A URL will do just fine.
Portability however is rather important. But, thankfully, using the browser as the runtime, we have that aspect covered.
And last but not least, my favourite: Offline-ification. There is no server involved. The editor persists games locally inside the browser. That means, apart from initially loading the „app“, the editor works offline. The internet is just the delivery mechanism. It could be made even offliner using Service Workers. Then, after the very first time you load the editor, no more requests to the internet would be necessary. But I’ll leave that as an exercise for the reader (or will come back some other time, with another article).
Note that I am not telling you which framework I used (vanilla.js), because, again, that is not essential to the story.
Every now and then, Timo (my one and only user) proposes some new functionality. Some tweaks to improve usability. And he is right and I am grateful for the input. But the thing is, more and more, it feels like I am re-implementing things that have nothing to do with the core business-logic of the editor.
I am dreaming of text-files in a real file-system, and of a real text-editor, and of some little command-line utility to do the actual magic of compiling all the single files into one game, ready to be played in a browser. And best of all, apart from that small utility, I would not be implementing any of it!
So here we are: In this case, the web seems a tad restricting. As a user, it is pleasantly easy to get started („Go to this URL and you’re on!“), but as a developer I am jumping through hoops to make it all work-ish. I want to shift this balance a bit in the developer’s favour.
A CLI—with an ecosystem
Awaking from the dream of the last chapter, I start thinking about native applications.
A command-line utility. Just like git. Or ImageMagick. Oh, what an illustrious party! It sounds great—but would it be feasible? Can I pull it off? Can my one and only user handle the change in usability?
Without the browser as the runtime, how can we handle portability? I don’t want to ask Timo to install a JVM, or node, or ruby or some other interpreter or runtime. I want a single self-contained binary he can execute. No installation, just download and double-click. Simple! And because he is using Windows, while I am developing on Linux, I need to cross-compile binaries.
In comparison to the web, where offline-izing an „app“ is something noteworthy, a native application would be „special“ for requiring network access.
Obviously, a command-line utility is not nearly as approachable as a web-gui. But, with the core functionality neatly tucked away inside it, I can then further integrate with the GUI a user would actually be interacting with. For example, a Sublime Text plugin for accessing the CLI’s functionality from within the text-editor. Separation of concerns FTW!
Where do you want to go today?
I am not here to learn a new language, but since the algorithms and data structures involved are rather straightforward, I would feel comfortable using a language I don’t know yet. The language is just a tool. Nevertheless—in the spirit of getting things done—it would still have to be some kind of high-level language. I won’t drop down to assembler.
Also, I am not interested in the act of actually cross-compiling binaries. I need to do it, sure, but it is just a means to an end. That means I will favour solutions with a simple cross-compiling toolchain.
Note: The following is not super up-to-date any more. I initially checked the scene around the end of 2015. The tools might have changed in the meantime.
I found a couple of options for node:
enclosejs looks more suited for a CLI, but it requires buying a license and does not support cross-compiling out of the box.
nexe looks similar to enclosejs, only free. But again, the cross-compiling toolchain seems like too big of a hassle (manually setting up Visual Studio 2010).
So far, so meh… I would rather be using ruby anyways, so let’s see what’s available over there:
Travelling Ruby provides a pre-compiled ruby interpreter and bundles it with your application-code and used gems. The process, however, is rather manual, with custom Rakefiles, wrapper scripts and stuff. It does not feel slick. Maybe it would, after initially setting the toolchain up. But even then, I would „own“ the toolchain and be responsible for maintaining it.
mruby-cli. mruby is a lightweight ruby-implementation that can be embedded into other applications. mruby-cli provides a C-wrapper that embeds mruby and then calls your application’s code. It uses docker and provides images containing the cross-compilation toolchain. So far, so good. I started developing with mruby-cli. The lightweight aspect means that most of ruby’s standard-library is not directly available in mruby, though. Some functionality can be included manually, which is fine. But for example hunting down a gem for regular expressions got somewhat frustrating. It felt like they were all more on the „proof of concept“-side of the maturity-spectrum.
I get that compiling a dynamic scripting language (be it node or ruby) into one binary is not trivial. Doubly so, if native extensions are involved. Triply so, if the whole toolchain is currently more of a proof-of-concept.
Then, I remembered someone (maybe on Ruby Rogues, maybe at a conference) mentioning the Go programming language and that it does cross-compiling out of the box. A quick search reveals that yes, it is only a matter of setting two environment-variables before invoking the compiler. Neat!
Go is a high-enough-level language, I assume it has a reasonable standard-library, and the cross-compiling story sounds exquisite. There’s also a package for creating CLIs. So why not?
As always in software engineering, I am making a trade-off here. I am ditching the languages I know for a new one. That means I will probably write code that is … not optimal. There will be a lot of stack-overflow-driven development.
I am willing to do this, because I want the cross-compiling process to be as simple as possible. I am not a cross-compiling kind of guy and this project shall not be my initiation to that world. Because I want to get things done. I can handle issues with the programming. I know when I am taking shortcuts, and I have a feeling for what the repercussions could be. But I lack expertise for handling issues with cross-compiling.
After implementing the first few features in Go, I start missing ruby’s syntax, the standard-library, and its overall object-orientedness. I don’t like the look of Go code. I don’t like the boilerplate. And yes, this is obviously unfair, because I am not really giving it a chance and possibly just doing it wrong. Besides, that’s part of the trade-off I made, so shut up, self!
And to be honest, I don’t even have qualms writing that hacky code. It works! The hackery doesn’t matter. That reminds me of Nickolas Means’ talk ‚The Original Skunkworks‘.
State of the Union
The basic features of the CLI are done. The code is a mess, but Timo does not care. The usability is—at the moment and at first glance—definitely worse than before. You need to open a terminal and type in the right incantation (similar to git on the command-line).
Apart from that, many new possibilities are within reach. Working with plain files on a real filesystem makes using git a breeze. Also, text-files are way more hackable from the outside than giant JSON-objects persisted inside localStorage.
One aspect I especially like about this „architecture“ is its modularity. You can use your trusted text-editor. Your favourite VCS—or none. The „interface“ is just a convention of organizing text files in directories.
Obviously, the CLI needs to be more integrated. While it is definitely cool to be able to swap out components at will, Timo does not care about that. He just wants to write his stories. And that’s perfectly fine. So a next step will be to set up the „canonical way“ of using the CLI. That’s where the Sublime Text plugin comes into play.
The tool I am developing started out as a „web-app“. But along the way I realized that I wanted to do things that are not easily doable on the web platform. I needed to go native.
Now, I am using Go to compile binaries for different platforms. The code is very hacky, because I don’t know Go. But its cross-compilation abilities are way more usable than those of languages I have experience with. That’s unfortunate, but I am willing to make this compromise, because I’d rather fight with the code than with the act of cross-compiling.
The core is a small CLI-application working on text files. The rest is very pluggable. The user must choose how to edit these text files, for example. As most users will be non-technical, that last „must“ needs to be converted into a „can“. There has to be an officially approved and supported way to be productive.
The priorities and decisions made in this story are all tailored to this concrete hobby project. While they might not be applicable to your concrete project, maybe the general abstract thoughts, the process, or the linked resources will.
Thank you for your time 🙂