It uses Twitter’s streaming API to listen for tweets mentioning @yourvikingname, and also follows the @RoyalAlberta account. When a tweet comes in, we apply a series of filters (following a user, for example, also delivers tweets that merely mention that user, while we’re only interested in tweets that user wrote), then decide whether to respond. The bot retweets anything from @RoyalAlberta that includes a particular hashtag.
But the main function is to respond to users mentioning @yourvikingname. As long as the tweet isn’t a retweet or a reply, and we haven’t recently responded to the same user, the bot uses the Twitter API to fetch the author’s profile, then uses data from that profile to produce an image.
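The filtering logic can be sketched roughly like this. The handle constants, the hashtag, and the exact decision rules are illustrative assumptions; the tweet fields (`entities`, `retweeted_status`, `in_reply_to_status_id`) are the standard v1.1 tweet object shape.

```javascript
// Illustrative sketch of the incoming-tweet filter. BOT_HANDLE,
// SOURCE_ACCOUNT, and MAGIC_HASHTAG are made-up placeholders.
const BOT_HANDLE = 'yourvikingname';
const SOURCE_ACCOUNT = 'RoyalAlberta';
const MAGIC_HASHTAG = 'somehashtag';

function classify(tweet) {
  // Following an account also delivers tweets that merely mention it,
  // so check the author explicitly rather than trusting the stream rule.
  const authoredBySource =
    tweet.user.screen_name.toLowerCase() === SOURCE_ACCOUNT.toLowerCase();
  const hasHashtag = (tweet.entities.hashtags || [])
    .some((h) => h.text.toLowerCase() === MAGIC_HASHTAG.toLowerCase());
  if (authoredBySource && hasHashtag) return 'retweet';

  const mentionsBot = (tweet.entities.user_mentions || [])
    .some((m) => m.screen_name.toLowerCase() === BOT_HANDLE.toLowerCase());
  const isRetweet = Boolean(tweet.retweeted_status);
  const isReply = tweet.in_reply_to_status_id != null;
  if (mentionsBot && !isRetweet && !isReply) return 'respond';

  return 'ignore';
}
```

A per-user rate check (the “haven’t recently responded” part) would sit on top of this before actually replying.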
I compiled a long list of mappings from place names to demonyms (countries, provinces, and cities; and just for fun, because it was easy, I also added some celestial bodies and fictional places). If the user’s location matches one of these, we add the demonym to the image. Failing a match, we attempt to invent one. For reasonable place names this works very well; for the other things people decide to write in their “location” field, it does not! Such is the risk of trying to interpret and transform the contents of a freeform text field.
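The lookup-with-fallback shape is simple; here’s a minimal sketch. The table is a tiny excerpt standing in for the real list, and the suffix rules are a guess at the kind of heuristic involved, not the bot’s actual rules.

```javascript
// Tiny illustrative excerpt of the place-name → demonym table.
const DEMONYMS = {
  canada: 'Canadian',
  alberta: 'Albertan',
  edmonton: 'Edmontonian',
  mars: 'Martian',
};

function demonymFor(location) {
  const key = location.trim().toLowerCase();
  if (DEMONYMS[key]) return DEMONYMS[key];
  // No match: invent one. Fine for "Winnipeg"; less fine for whatever
  // else ends up in a freeform location field.
  if (key.endsWith('a')) return capitalize(key) + 'n';
  if (key.endsWith('e')) return capitalize(key.slice(0, -1)) + 'ian';
  return capitalize(key) + 'ian';
}

function capitalize(s) {
  return s.charAt(0).toUpperCase() + s.slice(1);
}
```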
The user’s ID is used as a seed for a random number generator, and we then choose a Viking name from a list at random. We also get the user’s profile image, and add that to our output.
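Seeding the generator with the user’s ID means the same user always gets the same name. A sketch, using mulberry32 (a common tiny seedable PRNG) as a stand-in for whatever generator the bot actually uses, with an obviously illustrative name list:

```javascript
// mulberry32: a small seedable PRNG, here as a stand-in generator.
function mulberry32(seed) {
  return function () {
    seed |= 0;
    seed = (seed + 0x6d2b79f5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

const VIKING_NAMES = ['Ragnar', 'Freydis', 'Leif', 'Astrid', 'Bjorn'];

// Same user ID in, same Viking name out, every time.
function vikingNameFor(userId) {
  const rand = mulberry32(Number(userId));
  return VIKING_NAMES[Math.floor(rand() * VIKING_NAMES.length)];
}
```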
The image is generated in a headless Chrome browser – it’s just a simple HTML page which is populated at runtime with the chosen text and imagery. For development that HTML can be opened in a desktop browser. The production browser is run via Puppeteer, which injects the data and then takes a screenshot.
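The injection step amounts to running a small function inside the page (via Puppeteer’s `page.evaluate`) before taking the screenshot. Something along these lines, where the element IDs and the shape of `data` are assumptions rather than the bot’s real markup:

```javascript
// Illustrative sketch of the runtime data injection. In production a
// function like this runs inside the page via page.evaluate, just before
// page.screenshot; in development the same HTML opens in a normal browser.
function populate(document, data) {
  document.getElementById('viking-name').textContent = data.name;
  document.getElementById('demonym').textContent = data.demonym || '';
  document.getElementById('avatar').src = data.avatarUrl;
}
```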
We upload this to Twitter and reply to the user with that media attached.
The queue of tweets to respond to, the generated media IDs, and the per-user locks are all stored in Redis, so the bot can be restarted whenever we like without losing state.
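The per-user lock is essentially a key with a TTL: acquire it the first time you respond to a user, and refuse while it’s live. In Redis that’s a `SET key value NX EX <ttl>`; here’s an in-memory sketch of the same semantics (the TTL and behaviour are illustrative, not the bot’s actual policy):

```javascript
// In-memory stand-in for a Redis SET ... NX EX <ttl> per-user lock.
const LOCK_TTL_MS = 60 * 60 * 1000; // e.g. at most one response per user per hour
const locks = new Map(); // userId → expiry timestamp (ms)

// Returns true if we acquired the lock, i.e. we may respond to this user.
function tryLockUser(userId, now = Date.now()) {
  const expires = locks.get(userId);
  if (expires !== undefined && expires > now) return false; // still locked
  locks.set(userId, now + LOCK_TTL_MS);
  return true;
}
```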
There are a couple of handy testing scripts to go along with it. One takes a user’s Twitter handle (or some made-up data resembling a Twitter user), generates an image, and emits it to standard output for easy inspection. Another takes a tweet ID, fetches that tweet from the Twitter API, then runs it through the same logic as if it had just arrived via the streaming API. I wrote that one to backfill missed responses after the bot was restricted for a few hours.
Something I learnt during development is how to get the highest possible image quality on Twitter. As far as I know, every (non-animated) image uploaded to Twitter ends up as a JPEG, and an upload seems to get re-encoded even if it was already a JPEG. You’d think it’d be best, then, to upload a lossless PNG so that the JPEG encoding (and therefore its artefacts) happens only once. But I found that PNGs I uploaded were re-encoded to JPEG at 75% quality, and given the red background colour of the imagery we’re uploading, the artefacts were very noticeable to my eye – unfortunate. I then tried uploading a JPEG at 100% quality: Twitter re-encodes this at 85% quality rather than 75%, and the output looks much better. Very odd.