Tag: Web Development

Glitch Adds Support for Generated Sites 

I’m a big fan of Glitch for quickly and easily spinning up a website or little toy app. Right now I have a little API for generating image tags for assets in my Cloudinary account, and a number of Next.js apps for things like generating tonal color palettes or telling me how many days I’ve been in Covid “quarantine.”

Today, the team rolled out some new starter projects with new and improved support for generated static sites. Up to now, you could host fully static, hand-crafted HTML sites for free, but any kind of code (including a build step for the HTML) would require running a server, which would either go to sleep periodically or cost money. With these changes, Glitch will let you run build tools like Eleventy (for blogs and websites) or Vite (for JS & CSS assets) to generate your site, but will still host it 24-7 for free.


Where The Web Fonts Go

Get those fonts out of Git

Self-hosting web fonts can be easy; just add the font files somewhere in your site’s directory structure and reference them from your CSS. But if your site’s source code is stored in a GitHub repo, and you want your code to be public (or just forget to make it private), you may accidentally be violating the fonts’ license terms! Roel Nieskens called GitHub “the web’s largest font piracy site” due to web developers storing font files in publicly-viewable repos:

Let’s use the Github search API and see if we can find the most ubiquitous commercial font on the planet: Helvetica. And yep, more than 100,000 copies are findable on Github

What if you search for MyFonts’ products on Github? That’s exactly what I did. I skipped generic names that could result in false positives: names like Black, Latin or Text and fed the rest to the Github search API. The result? Of the deduped list of 29,951 fonts, 7,617 were present on Github – that’s a quarter of the entire MyFonts collection. Of their fonts labeled “bestseller”, 39 out of 49 can be found on Github, as well as 28 of the 30 labeled “top webfont”.

For a while now, I’ve kept my site’s source code private (even though I’d prefer it be public) so that I can store fonts there — it’s just so simple and straightforward to keep fonts and other assets with my code, and by keeping the repo private I can stay in compliance with all my font licenses.

But beyond that, having fonts in a Git repo is an anti-pattern because font files are relatively big binaries, which Git is not super-efficient at tracking or storing. And, because Git remembers everything, every font file I’ve ever used in any version of the site will remain part of the repo forever. Any time I (or Netlify’s build servers) clone a fresh copy, it’ll have to pull down a megabyte or so of font files, only a fraction of which it actually needs.

IMHO, the best idea is to not store web fonts in Git if you don’t have to, but where should they go instead?

My friend Stephen Nixon — who made the excellent typefaces Recursive and Name Sans — wrote up a nice post explaining why and how he securely hosts web fonts on AWS S3:

With the S3 Buckets feature of Amazon Web Services (AWS), this is relatively easy & very inexpensive – unless you are making a hugely-popular website, perhaps. You can (and should) configure it to only work on specific web domains, so you don’t break your licensing or end up paying for other people to use your font hosting!
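
For what it’s worth, the domain restriction Stephen describes is usually done with a CORS rule on the bucket, since browsers treat web fonts as CORS-restricted resources. A minimal sketch of such a rule, assuming your site lives at example.com, might look like this in the S3 console’s JSON format:

[
  {
    "AllowedMethods": ["GET"],
    "AllowedOrigins": ["https://example.com", "https://www.example.com"],
    "AllowedHeaders": ["*"],
    "MaxAgeSeconds": 3000
  }
]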

S3 is great — one of very few internet things that is fast, cheap, and good for most use cases, and it’s been that way for more than a decade. Amazon offers a powerful web control panel for working with S3 buckets and data, and there are also many excellent third-party and open source apps that can upload to S3. My favorites are Transmit, Panic’s venerable file-transfer client for macOS, and s3cmd, a Python-based open source command line tool.
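
For instance, pushing a local fonts directory up to a bucket with s3cmd is a one-liner (the bucket name here is made up):

# Sync ./fonts/ to the bucket, making the uploaded objects publicly readable
s3cmd sync --acl-public ./fonts/ s3://my-font-bucket/fonts/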

For me, the main drawback to S3 is that it can be annoying to serve fonts or other files over SSL. All S3 buckets have default s3.amazonaws.com URLs that can be accessed over HTTP or HTTPS, which is great. But S3’s static website hosting features (which you may not need for this, but idk) are only available over regular HTTP, and if you want to leverage those or use a custom domain you’ll have to set up CloudFront, Amazon’s CDN service, which is extremely powerful but also complicated and rather expensive.

Another drawback to S3, less important for small projects but still worth thinking about, is that without CloudFront all your data is served from your chosen AWS datacenter, not from Amazon’s CDN. Some users may see latency or slower downloads, which is exactly what you don’t want with larger assets like fonts. Slow font downloads can block page rendering or exacerbate problems like FOUT.

So, for the fonts on this site, I decided to use DigitalOcean Spaces, an “object” (aka file) storage service that’s patterned after S3, and compatible with S3’s API so that apps like Transmit will work with it. It’s a lot simpler, both in the product itself (nice web UI, easy-to-understand settings) and in its pricing model (a flat $5/month fee), and it has a built-in CDN that can integrate with DigitalOcean’s DNS servers to effortlessly configure custom domains and SSL certificates.

DigitalOcean’s control panel makes it easy to set up and configure Spaces, including custom domains, SSL, and CORS rules

I keep all my fonts in the same directory of the same Spaces bucket, which I manage using Transmit:

My web fonts in their directory on my Spaces-powered CDN

Each subdirectory is named after the fonts’ CSS font-family name, so that my “API” for using the fonts is consistent. To enable the Söhne fonts, I add a link to fonts/soehne/index.css, and then I can use font-family: soehne, … in my CSS. Nice and simple.
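
In practice, a page opts into a family with one <link> tag and one font-family declaration, something like this (the CDN hostname is a placeholder, not my real one):

<!-- In the page's <head>: load the family's stylesheet from the CDN -->
<link rel="stylesheet" href="https://fonts.example-cdn.com/fonts/soehne/index.css" />

/* In the site's CSS: the family name matches the directory name */
body {
  font-family: soehne, system-ui, sans-serif;
}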

Because these directory names and URLs follow a nice, regular structure, I can lightly automate adding these links in my Hugo templates, providing a list of family slugs that are turned into <link> tags. These are hard-coded, but could just as easily be set as front matter data on a page or post.
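
As a rough sketch of the idea (not my exact partial; the slugs and CDN hostname here are just for illustration), a Hugo template can loop over a list of slugs like so:

{{/* Turn a list of font-family slugs into stylesheet links */}}
{{ $families := slice "soehne" "recursive" }}
{{ range $families }}
<link rel="stylesheet" href="https://fonts.example-cdn.com/fonts/{{ . }}/index.css">
{{ end }}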

CDN-hosted web fonts, integrated into my site’s Hugo templates

Now that these fonts are up in The Cloud, I can easily reference them in test pages and experiments without having to copy them over from another project.


Parcel Post

Recapturing the magic of early web development with modern tools

I dunno about you, but I’ve been missing the old days when we could try out some new web technique or think through some code by just opening up an editor, making a fresh index.html, and getting to work.

I’m generally a fan of frameworks that let you get to work with a bare minimum of boilerplate code or setup, and I’m particularly fond of tools that leverage the filesystem and/or the native syntax of the web, so that web development feels like it did back when uploading PHP scripts to an FTP site felt magical.

This is a rare feeling these days; in order to give developers the ability to build powerful, scalable web apps, it feels like we’ve neglected or even forgotten how to make web pages. I miss the simplicity and immediacy — the feeling of magic — that made web development so fun when I was starting out.

Next.js has some of this magic. It’s a React-based app framework that uses file and directory names to set up URL routes; given a file named pages/about/index.js, Next will create a web page whose URL is /about. This isn’t quite the old web I loved in the 2000s, because React is involved. That file isn’t a web page; it’s a JavaScript file that exports a component, and there are things that are stupidly hard to do without layering on ever more libraries and boilerplate. But what’s nice about Next is that once you install it and its dependencies, you can just create a couple of files, run next dev, and you’re off to the races.
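
For reference, that page file can be as small as this (a sketch using Next’s pages-directory convention):

// pages/about/index.js: Next serves this component at the /about URL
export default function About() {
  return <h1>About this site</h1>;
}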


This weekend I wanted to play around with Chroma.js, a library for manipulating colors and scales. I started out trying it in CodePen and Glitch — both great tools for trying things out — but found myself wanting to write code in my favorite editor, not a browser.

Parcel made it possible for me to have my cake and eat it too — to write code like I was building a totally local, static web page, but enjoy all the benefits of modern build tools.

Parcel’s website describes it as “a compiler for all your code, regardless of the language or toolchain… (it) takes all of your files and dependencies, transforms them, and merges them together into a smaller set of output files that can be used to run your code.” All of which is true, but I think obscures the most important part: Parcel does all of this with little or no setup, configuration, or boilerplate code.

This may seem remarkable in different ways depending on your experience with the modern JavaScript world.

If you’re familiar with compiled languages or frameworks, or other bundler tools like Webpack, Parcel’s big pitch is that it can simplify your life. Whenever I use Webpack it usually takes me dozens of minutes to write (or rather copy-paste) a configuration file and install packages to make my code run. Even for an experienced JS programmer who’s used to this pain, Parcel is a valuable time-saver.

But what’s really great about Parcel is that it’s a Webpack-like tool that can be used without prior knowledge of Webpack-like tools, that uses your own code to configure itself.

Take an HTML document like this:

<!-- index.html -->
<html>
  <head>
    <title>A throwaway web page experiment</title>
    <link href="./styles.css" rel="stylesheet" />
  </head>
  <body>
    <h1>Time to code!</h1>
    <div id="vue-app"></div>
    <script src="./app.js"></script>
  </body>
</html>

In a bygone era, with all your HTML, JavaScript, and CSS code hand-crafted as static files, you could just load this into a browser and go. In fact, let me tell you a secret: that way of making web pages still works. The modern web platform still supports simple ways of working; it just doesn’t make it easy to bring in preprocessors when you want them.

But Parcel does! Once it’s installed, just run this command:

parcel index.html
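
(If you haven’t installed it yet: Parcel 1.x ships on npm as the parcel-bundler package, so a local install for a throwaway project looks roughly like this.)

npm install --save-dev parcel-bundler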

Reading your HTML, Parcel will see that it depends on two other assets — styles.css and app.js — and build those, preprocessing them according to their file extensions. It’ll (re-)build your HTML too, replacing references to these source files with the built assets it generates.

What’s more, these don’t have to be plain CSS or JS files. If you want to use (say) Sass and TypeScript, you could do this and it will Just Work:

<!-- index.html -->
<html>
  <head>
    <title>A throwaway web page experiment</title>
    <link href="./styles.scss" rel="stylesheet" />
  </head>
  <body>
    <h1>Time to code!</h1>
    <div id="vue-app"></div>
    <script src="./app.ts"></script>
  </body>
</html>

Beyond that, Parcel brings a web server and hot reloading to the party — you give it some files, it gives you a local development URL, and that URL will auto-magically refresh as you edit code. Hot reloading has been a revolution in how I approach web design — beyond just reloading pages, seeing code or style changes applied seamlessly in the browser makes designing in the browser feel immediate and delightful. Hot reloading with Webpack usually requires a framework or complicated setup; in Parcel that too Just Works.


So what’s the catch? Well, Parcel may make the JS ecosystem much simpler and easier to use, but it is still part of that ecosystem. Simple things tend to work very simply, but if you push the limits of what Parcel is good at it can require some know-how to get back on track.

For my color theming experiment I wanted to use a couple of my favorite libraries: Tailwind CSS to apply styles to a web page, and Vue to set up data-driven templates. But it turns out the current release of Tailwind, v2.0, requires PostCSS 8. Parcel 1.x doesn’t work with PostCSS 8, so I needed to switch to a nightly build of Parcel 2, which isn’t out yet.

Parcel 2, meanwhile, doesn’t support single-file components with the current version of Vue — for those I had to upgrade to the beta of Vue 3. For my “simple” web page to hack on, I had to use pre-release, bleeding-edge versions of two JavaScript tools just to get things to work.

BTW, this is the NPM incantation to install the stack I ended up using:

npm install --save parcel@nightly vue@next \
  tailwindcss@latest postcss@latest

Now, I did have another option: stick with versions of these libraries that work together, and only use the features those versions support. Tailwind 1.x is nice, as are plain, non-single-file Vue components. I’m the one who chose to live dangerously.

And even with the JS dependency whack-a-mole, it was and is nice to set up a project by just writing code and having it work. It’s nice because I don’t feel like I wasted an hour setting up a throwaway project, and the steps to get going with some code are simple enough to keep in my head.


No More Masters

It’s time to change your Git repo’s default branch.

In my book Git for Humans, published in 2016, I made a lot of references to master — naturally, as it’s been the default branch name in Git for a long time. Like many people, I simply accepted that master meant “master copy” and didn’t look at it too closely.

But now it’s 2020, things are changing, and there are other, better names for our primary Git branches that don’t indirectly invoke slavery.

My work friend Una made the practical argument for renaming (to main, which the community seems to have adopted as the new standard):

For me, having a master branch is like realizing a cute geometric pattern on some old part of my house is made of swastikas, or that the old statue outside the main library in my hometown is actually a Confederate monument that had stood there for 115 years. Removing symbols of racism isn’t nearly enough, but that doesn’t mean don’t remove them.

People have to live in a Git-based world, and Git does not make that easy. Folks are talking about renaming branches like there’s just a box you can check. For new projects, it’s almost that easy — in fact, GitHub has announced they will change the default for everyone later this year. But existing projects need a bit more work, as I’ll explain.

Like I said, my book mentions master a lot. (Like, a lot a lot.) It seems likely that within the next few years this will seem like a really stale choice, so I am talking to the awesome team at A Book Apart about updating Git for Humans’ text and examples to favor main as the default branch name.

They’re pretty busy, and there’s still a pandemic happening, so no ETA on when a book update might ship. But it’s in the works!

For now let’s start with what you need to do to start new projects out on a main branch if your tools don’t yet treat that as a default.

Naming your first branch

One of the great things about Git is that it doesn’t really require your main branch to be named master (or anything else). You can choose any name you want, and you can change names at any time as long as you’re willing to do some work.

When you start a new repo, Git is hard-coded to set the first branch’s name to master. But that branch doesn’t technically exist until you make your first commit. So here’s what you do to set your preferred name on a brand-new repo:

git init # if you haven't done this yet
git checkout -b main

Until GitHub finishes changing the default primary branch name, you’ll need to go into your repo settings there to tell it that main is your primary branch; you’ll find instructions for how to do this later in this post.

“Renaming” an existing branch

Behind the scenes, everything in a Git repo is immutable. When you make commits, it only looks like you’re changing files and directories in your project — really, you’re just adding new versions of things on top of the old ones.

A branch, meanwhile, is really just a movable label pointing at a commit. So “renaming” one, especially once it’s been shared, comes down to creating a new label and (optionally) getting rid of the old one, which is basically the same thing.

Here we’ll replace a master branch with a new one called main, pointing to the current head commit:

git checkout master # if you're not already there
git checkout -b main

Alternatively, you can use git branch to ask Git to create a new branch pointing at the same commit master is on:

git branch main master

Whichever way you do it, your master branch will be left intact, and you’ll have a new main branch that’s identical to it.

To make this new main branch available to your collaborators, push it to GitHub:

git push -u origin main

Updating your primary branch in GitHub and other tools

Next, let’s tell your tools that there’s a new primary branch in town.

GitHub

Open your repo page on GitHub while signed in, and click on the Settings tab, then click Branches in the left-hand navigation. Once you’re there, on the right-hand side you’ll see a drop-down that lets you change the name of your default branch.

GitHub repo settings page, showing default branch option

Once this is set, new pull requests will automatically be set up to merge into main, and git clones from GitHub will also check out main by default.

Netlify

If you (like me) use Netlify to host your websites and JAMstack apps, and use their GitHub integration to automatically publish changes after you push them, you’ll need to go into your Netlify site settings to select a new production branch. This is under Build & deploy > Deploy contexts.

Netlify Build & Deploy settings page showing default production branch setting

What about other integrations?

The value of changing your primary branch name right now is inversely proportional to the amount of time and effort you have to put into it. Eventually, we want a name like main to be the new default for every project, no marginal effort required. For small or relatively simple projects, it’s low-cost enough to do now, or soon, while it’s front of mind.

The master branch is a load-bearing element. Many systems and workflows depend on it, and the more tools you have hooked into your repo, the more work it will take to change names without ruining someone’s day. If you have complex integrations tied into Git, you should approach this with the same care you’d approach any other infrastructure change.

What you want to avoid is a situation like the one in this tweet by Bryan Liles:

Moving to main signals that we want to be inclusive. It’s meant as a welcome mat for underrepresented folks who may collaborate with us, now or in the future. But moving away from master breaks things now. Adopting a new branch name really is a cosmetic change, and though I think it’s the right thing to do, as long as you do eventually make the change, it’s OK to take your time and get it right.

Deprecating your master branch

This last section, and I cannot stress this enough, is for people who have carefully considered the impact of changing from master to main, and are ready to burn their ships and never look back.

Sadly, Git doesn’t have any such thing as “branch redirects,” and though GitHub has some special features to “protect” branches from receiving pushes, vanilla Git does not. Once you’ve decided to get rid of master, you may want to make it so that syncing with it fails, with a note explaining what to do instead.

So you may want to replace your old master with an “orphan” branch, which (as the name implies) is a branch/commit with no parent, completely detached from the rest of your Git repo’s history.

We’ll name this new orphan branch no-masters. To start, we call on git checkout --orphan, which asks Git to start a new branch but intentionally forget everything about your former head commit. It’s as if you were starting over with a brand-new repo.

git checkout --orphan no-masters

You’ll end up with a branch that contains all the files and folders from your project, but staged as if they were new additions.

Next, we’ll remove all the content from this branch. Using git rm (as opposed to regular ol’ rm) will only delete files and folders that are checked into Git, leaving behind ignored content.

git rm -fr .

Depending on your tech stack, this may leave behind some stuff that had previously been hidden by .gitignore, all of which will now show up when you run git status. So we’ll restore the old .gitignore file from main to make sure those files aren’t accidentally committed or deleted:

git checkout main .gitignore

Finally, let’s leave a note explaining why this branch is empty. We’ll add and commit a README.md Markdown file with the following text:

# This branch is deprecated

This project's primary branch is now called `main`.

You should `git checkout main` and `git pull origin main` from now on.

Then commit these changes:

git add .gitignore README.md
# … output deleted …
git commit -m 'Deprecation message for `master` branch'

Because this is an orphaned branch, if you run git log you’ll only see this commit, none of the history before it:

git log --oneline
> cd2b2c2 (HEAD -> no-masters) Deprecation message for `master` branch

OK, now for the scary part — replacing master with this content. Which means deleting your old master branch:

git branch -D master

This will delete master locally, allowing you to create a new master branch that points to this new, empty-except-for-deprecation-message commit.

git branch master no-masters

If you then git checkout master, you’ll see the deprecation message.

git checkout master
git log --oneline
> cd2b2c2 (HEAD -> master) Deprecation message for `master` branch

Whew. Okay. One last step: pushing this master branch to GitHub. Because this is a new, orphaned branch, you will need to force-push. This may (hell, probably will) break any integrations you have hooked up to master, so you may want to wait until your team and infrastructure are fully migrated over to main before you do this.

git push -f origin master

Ahhhhhhhhh, so nice to have that done. Here’s the deprecation message as shown on one of my GitHub repos:

Screenshot of master branch deprecation message on GitHub web interface

Because the server’s master now points to this orphaned commit, Git will raise an error whenever you or someone on your team tries to pull from it:

git pull origin master
From <your-repo-url-here>
 * branch            master     -> FETCH_HEAD
fatal: refusing to merge unrelated histories

If only it were this easy to break free from history in real life.


Blogging is a Pain in the Ass

There’s a reason we all moved to platforms like Twitter and Medium
From left: me, my blog (Photo: Eric Felton via Flickr)

A project of mine to start a “simple” WordPress blog is now on what feels like its ninth or tenth week of total bikeshedding.

I tweeted about this the other day:

I mean, seriously, think about all that goes into making a personal blog on one’s own domain. You have to sort out hosting, you have to set up a domain and get everything wired up properly, you have to keep your blogging software up to date.

If you’re like me, and you want your site’s design and typography to be unique and perfect, you have to design a theme, and make some very low-stakes, distraction-prone decisions about the kind of blog you want and how that should be reflected in its design.

After all that, you can (and should) post things. But there’s a chicken-and-egg problem: to design a theme, you need content (sample or real), but maybe to be inspired to write good posts, you need your theme. I’d wager that one secret to Medium’s success is that every post is formatted in a near-perfect vanilla blog theme from the beginning, and you can even see this exquisite formatting being applied as you write.


For setting up my own self-hosted blog, I brought some extra toil on myself in that—having worked as what’s now called a Frontend Developer™—I’m used to developing web sites a certain way.

I have a local development setup for this blog; that’s Frontend Developer jargon meaning that, by convoluted means, I’m able to run my WordPress blog on my own computer, so I can make changes to it that won’t affect the live copy of the blog until I’m ready to push them there. Because it’s 2018, and nothing is allowed to be easy anymore, this local environment consists of a Git repo, three different Docker containers, a whole JavaScript build toolchain mostly for doing CSS, and some ridiculous bits of HTTP proxy plumbing so that changes to my CSS automatically refresh my blog design in the browser as I work.
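
To give a flavor of what that involves, here’s a deliberately stripped-down sketch, not my actual config (which adds a proxy container and the CSS toolchain): the heart of a setup like this is a Compose file wiring WordPress to a database.

# docker-compose.yml: a minimal local WordPress environment
version: "3"
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
      MYSQL_DATABASE: wordpress
  wordpress:
    image: wordpress:latest
    depends_on:
      - db
    ports:
      - "8080:80"
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_NAME: wordpress
      WORDPRESS_DB_USER: root
      WORDPRESS_DB_PASSWORD: example
    volumes:
      # Mount themes and plugins from the repo so changes show up immediately
      - ./wp-content:/var/www/html/wp-content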

90% of the free time I’ve put toward launching this blog has gone toward getting this Rube Goldberg-esque collection of containers, volume mounts, ports, compilers, whiz-bangs and geegaws to work consistently. It’s all really cool, especially if your aim is to produce a white paper on deploying enterprise-grade web applications and not a simple personal blog.

My Integrated Development Environment™ for working on a simple WordPress blog, in Amazon’s web-based IDE, Cloud9. I remember making web pages in Notepad — now I’m using a web page to code web pages

The payoff for doing all this is that I can check out my WordPress code on any machine (including a cloud machine at AWS), run one simple script, and be up and running in about 60 seconds, complete with that CSS hot reloading thing which is, no kidding, very cool.

But, at the same time, all of that is table stakes — it doesn’t get me a blog, it just gets me the ability to create and customize a blog. Which is fine if it all takes a couple days or weeks, but after a couple of months of tinkering it makes me long for the simplicity of Medium.

I probably should just find or make a WordPress theme that looks like Medium TBH ¯\_(ツ)_/¯

—Me, in a tweet

When in doubt, if you’d just as soon post to Medium but don’t trust Medium with long term stewardship of your words, you can’t go wrong making a personal blog that looks like Medium. That’s more or less what I am doing now, to get out of design decision paralysis.

But in the meantime, to get myself out of this endless cycle of tinkering with Frontend Development™ and DevOps™ Tooling® and bumbersnatch and fiddle-dee-dee—to get the posts I want to write out of my drafts folder and out into the world — I’m back here (sigh) on Medium.

(Fortunately I had the foresight to set up a custom domain name for myself before Medium deprecated custom domains last November.)