A data-driven approach to static sites
My old static site setup was complex. I didn't really understand how it worked, and it took half an hour to compile. Why? I used Hakyll (which is great) and compiled Pandoc from source (which I don't recommend). I've seriously enjoyed playing with TCR (test && commit || revert). TCR nudges you to minimize the time from having your code in one working state to having it in another working state. I wanted something similar, but with the flexibility of playing around with raw HTML and raw CSS. I also wanted something that's really simple to escape from: take the repo, delete everything that isn't HTML and CSS, and you have an archive. Built to last.
Tell me, how does it work? I write straight into an EDN file. (If anyone wants to learn structural editing, writing a blog post in EDN is a great exercise.) Then the build system picks up the data, dispatching on metadata in the EDN file. Finally, an HTML file is produced.
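To make that concrete, a page source might look something like this. This is a hypothetical sketch, not the actual file format of this site — the key names and structure here are my invention:

```clojure
;; data-driven-site.edn -- hypothetical example; all key names are invented.
;; The build system would read this file, dispatch on :page/kind,
;; and render the body (hiccup-style HTML-as-data) to an HTML file.
{:page/uri  "/data-driven-site/"
 :page/kind :page.kind/article   ; metadata the build dispatches on
 :page/body [:article
             [:h1 "A data-driven approach to static sites"]
             [:p "I write straight into an EDN file."]]}
```

Because the whole page is plain data, "escaping" really is just rendering once and keeping the HTML.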
I didn't like that my old site isolated me from HTML and CSS. That's the worst way to learn. Writing HTML/CSS first, then opting into more structure when required, is an alternative. Bonus: deploy times are really fast when Netlify's build is a pure file copy operation.
But ... writing in Markdown or Org-mode is kind of nice. Sure! I'm thinking of providing a :teod.subcons/transformers-transformer with functions for loading Markdown or Org-mode files. Then I can keep control of each page and embed content where I want.
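One way such a transformer could work is as a plain function from data to hiccup-style HTML data. The sketch below is an assumption about the shape of that idea — none of these names come from the real build, and the Markdown parsing is left to whatever library you prefer:

```clojure
;; Hypothetical sketch. A "transformer" here is just a function that takes
;; an EDN map describing embedded content and returns HTML-as-data.
(def transformers
  {:embed/markdown
   (fn [{:keys [path]}]
     ;; In a real build, parse the Markdown file into hiccup with a
     ;; Markdown library of your choice; slurp is a placeholder.
     [:div.markdown (slurp path)])})

;; A page body could then reference embedded content as data, e.g.:
;;
;; [:section
;;  [:h2 "Notes"]
;;  #:embed{:transformer :embed/markdown :path "notes/tcr.md"}]
;;
;; and the build would walk the body, replacing such maps with the
;; result of calling the matching transformer.
```

The appeal is that embedding stays declarative: each page keeps control of where external content lands, and the transformer chain stays inspectable as ordinary data and functions.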
Sounds interesting? Just steal this code and try it. Or write your own static site mechanism; it's quite fun.
Future work. Error handling is ... not. Don't make your transformer chain too long. Also, I'd like to experiment with "embedding" other kinds of content. Markdown, Org-mode, Vega JSON, etc.
Nods of thanks go to Magnar Sveen and Jack Rusher for simple approaches to HTML with Clojure, and to Oddmund Strømme for discussions about TCR and other things.
View this page on web: https://subcons.teod.eu/data-driven-site/