Deleting Old Git Branches — PowerShell Edition

I did a post on this earlier (I’ll link it once it’s uploaded, perhaps), but it relied on being in a Unix-like environment. Since I’m doing Windows development right now, I wanted an easy way to do this in PowerShell. StackOverflow certainly has answers, but I cobbled together my own version below that works with the toolkit I have.

git branch --merged | rg -v "(^\*|master)" | %{ git branch -d $_.trim() }

ripgrep has been in my toolkit for years at this point, so I used it to filter out master (and the current branch, which git marks with *). The last stage of the pipeline is PowerShell-specific and was the piece I was missing, since I was used to reaching for xargs. Tada! Works like magic.
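If the one-liner reads like line noise: %{ } is just PowerShell’s shorthand for ForEach-Object, so the same pipeline can be spelled out long-form (equivalent, just wordier):

git branch --merged |
  rg -v "(^\*|master)" |
  ForEach-Object { git branch -d $_.Trim() }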

Generating Stored Procs for DbUp

Retrieving the Procs

I recently tasked myself with setting up DbUp to add database migrations to a .NET project at work. It was straightforward enough to generate the table definitions with DataGrip, but I needed to get my hands dirty to create the stored procedures. To get the lay of the land, I checked out my favorite schema in SQL Server, INFORMATION_SCHEMA.

SELECT * FROM INFORMATION_SCHEMA.ROUTINES

The columns I care about here are ROUTINE_NAME and ROUTINE_TYPE, the latter of which I want to make sure is always “PROCEDURE” — which it is in my case. (yay) ROUTINE_DEFINITION is also worth paying attention to, but it’s capped at 4000 characters, so I need to query the sys catalog views to make sure I get the full procs.
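If you want to double-check the routine types up front, a quick filter over the same view does it (a minimal sanity check):

SELECT ROUTINE_NAME, ROUTINE_TYPE
FROM INFORMATION_SCHEMA.ROUTINES
WHERE ROUTINE_TYPE <> 'PROCEDURE'

An empty result means everything is a proc. With that confirmed, the query below pulls the full definitions I need.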

SELECT 'NEW PROC STARTS HERE', o.name, sm.definition FROM sys.sql_modules sm
INNER JOIN sys.objects o
ON sm.object_id = o.object_id AND o.TYPE = 'P'

I take this output in DataGrip and download it as a TSV, which I’ll parse later into the set of SQL scripts that create the procs in my database.

Creating Temporary Proc Files

At this point it’s worth noting that if I had a SQL GUI that let me select all the procs and download them as a set of scripts, I would totally just do that. So I want to reiterate what I’m trying to accomplish so I don’t go down a rabbit hole:

  • Each proc needs to be its own SQL script named after the proc
  • I want to parse this TSV without altering it
  • I shouldn’t have to do this again

That last point is worth discussing — it’s true I won’t have to do this again for this project (since future procs will live in source control with DbUp handling change management); however, if this were worth turning into a generic tool, I’d want to make the parsing code re-runnable. I used to work at a company that would’ve definitely benefited from such a process, but alas, they never encouraged me to go down this path and clean up their bad practices.

Exploratory Parsing

To kick things off, I searched for my NEW PROC STARTS HERE text to figure out what kind of format I had to deal with. In short, it looked a bit like this:

NEW PROC STARTS HERE	SelectSomeStuff	"create procedure

The only thing I could count on was that the proc actually began right after that first double-quote. From there, newlines would show up that were part of the proc itself, and we’d only know we were at the end when we hit the next NEW PROC STARTS HERE delimiter.

The Script

Here’s what I cobbled together using the guidelines above.

# Holds a single stored procedure as it's parsed out of the TSV.
class ProcFile
  attr_accessor :name, :contents

  def initialize
    @contents = ""
  end

  def write_to_file
    File.open("#{@name}.sql", "w") do |f|
      f.write(@contents)
    end
  end
end

proc_start = "NEW PROC STARTS HERE"

new_proc = ProcFile.new
File.readlines("procs.tsv").each do |line|
  line.sub!("CREATE PROCEDURE", "CREATE OR ALTER PROCEDURE")

  if line.include?(proc_start)
    # Delimiter row: NEW PROC STARTS HERE\tProcName\t"This Is The Proc
    r = /^NEW PROC STARTS HERE\t(.+)\t"(.+)$/

    # Flush the previous proc before starting the next one.
    new_proc.write_to_file unless new_proc.contents.empty?

    match_data = r.match(line)

    new_proc = ProcFile.new
    new_proc.name = match_data[1]
    # The capture stops before the trailing newline, so add it back.
    new_proc.contents = match_data[2] + "\n"
  else
    # A lone double-quote is the TSV field closing, not part of the proc.
    new_proc.contents += line unless line =~ /^"$/
  end
end

# write final file
new_proc.write_to_file
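Assuming you’ve saved the script next to the TSV as something like generate_procs.rb (the name is arbitrary), running it is just:

$ ruby generate_procs.rb

and it drops one .sql file per proc into the current directory.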

Translating to a DbUp-Friendly Structure

Here’s the syntax for the CLI tool I wrote to create new database migrations from scratch:

$ customtool generate migration --name CreateNewTable

And this would create something like 20200214200931_CreateNewTable.sql, put it in the Migrations directory, and everyone would rejoice. But unlike table definitions and the other kinds of migrations I run, I want to treat my procedures more like code, where the whole thing gets re-run anytime I know it changed. Therefore, my tool needs a new syntax.

$ customtool generate procedure --name SelectSomeStuff

This command instead deposits generated procs in the Procedures directory. Order no longer matters, so there’s no timestamp component to the filename. And when I want to run them, I just do:

$ customtool deploy procedures

Caveats

It should be obvious, but I want to note this here — the script I wrote mostly worked. It tripped up when the SQL wasn’t capitalized or spaced consistently across all the scripts. To test the stored procedures I converted, I ran them locally and assumed that any conversion issues would show up as syntax errors rather than still being valid SQL. You’re always better off doing a healthy dose of testing. Even though this is still in progress for me, I plan to run the CREATE OR ALTER scripts in the lower environments so we can verify everything works as intended now, rather than worry later that running all the procs introduced some small bug.

It’s also worth noting that this is exactly why stored procedures aren’t popular in frameworks like Ruby on Rails. They’re so difficult to test and do change management on! I really only encountered them when I entered .NET land, and then I was horrified by how often the engineers I worked with thought they were a good idea. Yes, you can get performance gains from them. But you’re almost always better off doing something a bit more testable, just so you can sleep at night.

Addendum: System.CommandLine

Wasn’t going to write much here, except to say that I’m using the new System.CommandLine library to write my CLI tool. It’s still in pre-release, but the functionality it provides is more than enough to build my tool without the headache of either parsing the input myself (i.e., using no library at all) or learning a confusing library API, which was my experience with Command Line Parser.

Upgrading PHP

WordPress now has a health check you can run against your site to make sure it has all the latest bells and whistles. You can find it under Tools > Site Health — it was new to me, but then again I haven’t used WordPress seriously in years.

My health check had some straightforward items, including removing themes I wasn’t using. But there were two I knew I’d have to get my hands dirty for: upgrading PHP to at least 7.3, and installing the imagick PHP extension. I had to bumble around the Internet to figure out all the steps, so I figured I’d detail my findings here to make sure I understood what I did.

Installing PHP

I decided to go straight to PHP 7.4 since it’s the latest and this WordPress blog doesn’t have a lot of customization. On my DigitalOcean host, Ubuntu 18.04 ships with PHP 7.2 by default. So the first thing was to SSH into the box and start pulling the relevant packages.

$ apt-get install software-properties-common
$ add-apt-repository ppa:ondrej/php
$ apt-get update
$ apt-get install php7.4

The software-properties-common package was already installed; it provides the add-apt-repository command used on the next line to add the Personal Package Archive (PPA). It looks like Ondřej Surý maintains the PPA for PHP — seems odd for one person, but I saw multiple sources cite this repo, so I went ahead with it. Then I ran a standard apt-get update and installed PHP 7.4.

For a sanity check, I ran php --version and was surprised to see it was already on 7.4! But alas, this wasn’t enough for WordPress to start using it, so next I had to figure out how to get Apache off of PHP 7.2.

Loading PHP Via Apache

This part was cool because I learned more about how Apache works! The /etc/apache2/mods-available directory holds the list of mods available for Apache to use, including php7.2.load and the newly installed php7.4.load. My gut told me I had to enable PHP 7.4 and disable 7.2, so that’s exactly what I did.

$ a2dismod php7.2
$ a2enmod php7.4
$ systemctl restart apache2
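To double-check which module Apache actually loaded after the restart, apachectl can list the active mods — only the 7.4 one should show up:

$ apachectl -M | grep php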

Loading Remaining WordPress Libraries

There was a DigitalOcean tutorial that suggested I install the following commonly-used WordPress PHP extensions.

$ apt-get install php-curl php-gd php-mbstring php-xml php-xmlrpc php-soap php-intl php-zip

Of course that wasn’t enough. After making the Apache changes above and restarting, WordPress told me I needed to install the MySQL extension.

$ apt-get install php-mysql

This worked! Now that I had WordPress running on 7.4, I went ahead with the remaining imagick extension.

$ apt-get install php-imagick
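To confirm the extensions are actually visible to PHP (on the CLI, at least), php -m lists the loaded modules:

$ php -m | grep -E "imagick|mysqli"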

That’s it!

Execute Program

I recently started getting into Execute Program — it comes free with a subscription to Destroy All Software, and I was feeling a bit down about work. (More on that later.) I’m making my way through the JavaScript Arrays course as a mix of refresher and new material. Once I’m done (or close to done), I’ll attempt TypeScript, since there are some things I want to build in React using it and it’ll be useful at work.

Speaking of work and why I was feeling down: I had a pretty bad week last week. I was really struggling with the codebase I was working in and wasn’t quickly learning from my mistakes. This week has started off better; I attribute that both to a better attitude and my amazing co-workers. Everyone is really supportive of each other and that goes a long way towards getting myself out of a rut. On top of that, over the weekend I reflected on where I was mentally and I think that’s helping me figure out how to ground my problem-solving.

So that’s why I’m continuing to do the JavaScript learning. It’s useful. It’s fun. It’s productive. Of course I’d love to create my fantasy football API or build a React project in TypeScript but those projects take a lot of energy and right now I only have so many spoons to give.

I’ll catch y’all later. Hoping to make this a more regular thing.

Rails + Puma + Capistrano + Nginx

About a month ago I decided I wanted to get a website going for Cerulean Labs, my catch-all organization that has supported game dev, mentoring meetups, and other random group projects. It would be good to have a site that lets members of the organization coordinate meetups, share projects, and review important info like community guidelines.

I specifically chose Rails because I haven’t developed on it since Rails 3 and I miss working in Ruby. Puma is the default app server out of the box, and Capistrano takes care of deploys. Lastly, Nginx sits in front as a reverse proxy.

Installation

Below are the components I had to install on the host machine. The order is all out of whack — I bounced around as I figured out what still needed to be set up. If you’re looking for specific guides, check out the bottom of this post.

SSL Certificate

I used Let’s Encrypt to get my certificate. First, I set up the dependencies:

$ sudo add-apt-repository ppa:certbot/certbot
$ sudo apt-get update
$ sudo apt-get install python-certbot-nginx

Installing the cert:

$ sudo certbot --nginx -d www.ceruleanlabs.com

Testing renewal:

$ sudo certbot renew --dry-run

It’ll prompt you along the way, but the questions are straightforward.

Nginx Go-Tos

I used the following commands to check the syntax of my config files, see whether Nginx was running, and restart it whenever I made a change.

$ sudo nginx -t
$ sudo service nginx status
$ sudo service nginx restart

Nginx Config

When you first look at /etc/nginx/sites-available/default it looks something like this:

server {
    root /var/www/html;

    index index.html index.htm index.nginx-debian.html;
    server_name www.ceruleanlabs.com; # managed by Certbot


    location / {
            try_files $uri $uri/ =404;
    }

    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/www.ceruleanlabs.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/www.ceruleanlabs.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = www.ceruleanlabs.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80 ;
    listen [::]:80 ;
    server_name www.ceruleanlabs.com;
    return 404; # managed by Certbot
}

The first server block serves the default Nginx page. The second block redirects plain HTTP requests so your site is always served over HTTPS.

We want to first add an upstream block that points to where our Puma socket will be located. I put this at the very top of the file:

upstream puma {
    server unix:///var/www/ceruleanlabs/shared/tmp/sockets/puma.sock fail_timeout=0;
}

Next we want to rewrite the guts of that server block that is currently serving the default Nginx page. Mine looks something like this, given we’re serving a Rails app via Puma:

server {
    root /var/www/ceruleanlabs/current/public;
    access_log /var/www/ceruleanlabs/current/log/nginx.access.log;
    error_log /var/www/ceruleanlabs/current/log/nginx.error.log info;
    server_name www.ceruleanlabs.com;

    location ^~ /assets/ {
        gzip_static on;
        expires max;
        add_header Cache-Control public;
    }

    try_files $uri/index.html $uri @puma;
    location @puma {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;

        proxy_pass http://puma;
    }

    listen [::]:443 ssl; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/www.ceruleanlabs.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/www.ceruleanlabs.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    error_page 500 502 503 504 /500.html;
    client_max_body_size 10M;
    keepalive_timeout 10;
}
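One gotcha: Puma has to bind to the very same socket the upstream block points at. If you’re deploying with the capistrano3-puma gem, the binding in config/deploy.rb looks something like the line below — a sketch, not a guarantee of exactly how my repo is wired; check GitHub for the real config:

set :puma_bind, "unix:///var/www/ceruleanlabs/shared/tmp/sockets/puma.sock"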

Firewall Config

This part was also straightforward, and again I followed the guide at the bottom of this post. I pretty much just ran the following commands:

$ sudo ufw allow "Nginx Full"
$ sudo ufw allow "OpenSSH"
$ sudo ufw delete allow "Nginx HTTP"

$ sudo ufw status

Ruby Setup

$ curl -fsSL https://github.com/rbenv/rbenv-installer/raw/master/bin/rbenv-installer | bash

I then followed the rbenv guide for installing it into my shell so it would be available when running commands remotely via Capistrano. You can set the shell for that user with chsh -s /bin/bash — using Bash, for example — with a ~/.bash_profile that looks like this:

export PATH="$HOME/.rbenv/bin:$PATH"
eval "$(rbenv init -)"
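With rbenv wired into the shell, installing and activating a Ruby is the usual pair of commands — the version below is just an example; use whatever the app’s .ruby-version expects:

$ rbenv install 2.6.5
$ rbenv global 2.6.5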

JavaScript Runtime

We need a runtime to compile the JavaScript assets. I ended up just installing Node on the host machine, but you can instead add something like mini_racer to your Gemfile.
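If you go the gem route, it’s the same one-liner Rails already stubs out (commented) in a generated Gemfile:

gem 'mini_racer', platforms: :ruby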

Deploys

$ cap production deploy
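Most of the wiring that makes that one command work lives in config/deploy.rb. Here’s a trimmed-down sketch of the relevant settings — the values mirror the Nginx config above, and the real file is in the repo linked below:

set :application, "ceruleanlabs"
set :repo_url, "git@github.com:ceruleanlabs/ceruleanlabs.com.git"
set :deploy_to, "/var/www/ceruleanlabs"
set :linked_dirs, %w[log tmp/pids tmp/sockets]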

Additional Setup

This is all open source, so you can of course check out the rest of the app configuration on GitHub: https://github.com/ceruleanlabs/ceruleanlabs.com. The remaining changes mostly centered around configuring the Rails app with the correct database credentials, libraries, etc. I should’ve written my steps down better, haha, but you’ll find the rest of what’s configured in the repo.


Useful Docs and Guides

Step 2: CircleCI and S3 Buckets

The second part took longer than usual. Part of this was because I was wrestling with DNS — I don’t think DNS changes ever go quickly — but the other part was because I was hit with another bout of depression. #yaydepression

For the second part I configured CircleCI to build and deploy this blog to an S3 bucket, which I then pointed at the slidingdown.com domain by switching to Route 53 as my new DNS provider.

CircleCI

This part was pretty fun. I first set up Docker locally to make sure the site containerized correctly, which helped me debug a JS dependency for Middleman — something I quickly fixed with mini_racer. Next I figured out how to get it building on CircleCI. CircleCI lets you build one project for free, which is why I’m using it for my blog, a private repo.

One useful piece of information was understanding how to persist data in workflows. The diagram in the article really helped, and I was able to cobble the rest together from the documentation.
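For reference, the mechanism boils down to one job persisting files into a workspace and a downstream job attaching it. Here’s a stripped-down sketch of that shape — the job names, images, and build directory are illustrative, not my exact config:

version: 2
jobs:
  build:
    docker:
      - image: circleci/ruby:2.6
    steps:
      - checkout
      - run: bundle install && bundle exec middleman build
      - persist_to_workspace:
          root: .
          paths:
            - build
  deploy:
    docker:
      - image: circleci/python:3.7
    steps:
      - attach_workspace:
          at: .
      - run: sudo pip install awscli && aws s3 sync build s3://slidingdown.com --delete

workflows:
  version: 2
  build_and_deploy:
    jobs:
      - build
      - deploy:
          requires:
            - build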

AWS Configuration

The S3 configuration was really straightforward once I got the hang of the naming conventions and how permissions should be set up. I followed Amazon’s guide for setting up a static site on a custom domain. And if you’re wary of copypasta’ing the policy into your bucket’s config, you can get the same results using the AWS Policy Generator.
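For the curious, the policy from that guide is the standard public-read one — it lets anyone GET objects out of the bucket and nothing else (swap in your own bucket name):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::slidingdown.com/*"
    }
  ]
}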

Getting CircleCI authenticated was also really easy — I created an IAM user specifically for CircleCI and gave it S3 permissions.

The most annoying part was figuring out that I needed to switch from my existing DNS provider to Route 53. I was trying to set up my own CNAME record and it didn’t take — S3 balked at my configuration. So I made the switch and everything worked magically.

Putting it all together

And now CI/CD for my blog works! The only nice-to-have I still want is SSL support. It’s not strictly necessary since this is a static site, but it’s where the web is heading, and Firefox gets angry without it. There might be an option using CloudFront that I can explore.

Raspberry Pi — I remember you!

Before we begin, fun fact: Middleman doesn’t like generating articles with exclamation points in the title. I’m guessing it has a problem stripping them out when generating the filename. Maybe this is a good opportunity for a pull request!


I’m following Gordon Turner’s Raspberry Pi guide to start step one of putting an information radiator in my home.

Hardware

I’m using an old monitor, an old Raspberry Pi 1, and an 8 GB memory card from who knows when. Nothing fancy here; why invest in hardware when I haven’t written any software yet?

Turning It On

I guess there’s one piece of hardware I invested in that was a nice-to-have — a Wemo Mini Smart Plug. Right now I have it hooked up to an IFTTT recipe that lets me press a button on my iPad to toggle the Pi on and off. Eventually I’ll have it hooked up to Alexa or put on a schedule so I can see it in the morning and otherwise have the whole thing turned off.

Setting Up the Pi

I followed the Raspberry Pi setup steps to get Raspbian Stretch flashed onto the memory card. (Etcher Pro looks _so_ cool, btw.) Next was to follow the directions as written! I especially liked the instructions for SSHing into the Pi, since that let me copypasta the scripts easily from my MacBook Pro. After following those instructions, everything worked out!
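For anyone following along: with a stock Raspbian image, the SSH hop uses the default pi user — assuming SSH is enabled and you haven’t changed the hostname:

$ ssh pi@raspberrypi.local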

Next Steps

Next I’ll want to design a website — it’ll have to be pretty lightweight so the Pi can easily render it. I think that’ll be a fun challenge! I also want to get my domains in order: I’ve been having trouble with GitHub Pages and custom domains, so I’ll probably bite the bullet and host on AWS or something. We’ll see what kind of architecture I need.

That’s it for now! This is the first day of a week-long “sabbatical” for me. I’m using the time to get caught up on grad work, relax from burnout, and innovate. So far so good!

Be Still

I wrote a Firefox extension called Be Still that blocks traffic to the social media websites I need to get myself off of. Right now it’s not published (you can still check it out on GitHub) and it only shows an image, but I’d like to make it more mental-health focused going forward. I’d also like to support Chrome, Safari, and Edge.
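For a sense of how little code this takes: a Firefox extension can intercept requests with the webRequest API, given the webRequest and webRequestBlocking permissions in its manifest. This is a sketch of the general technique, not necessarily how Be Still is wired up — the GitHub repo has the real thing:

// background.js — cancel requests to distracting hosts
browser.webRequest.onBeforeRequest.addListener(
  () => ({ cancel: true }), // or return {redirectUrl: ...} to show something calmer
  { urls: ["*://*.twitter.com/*", "*://*.facebook.com/*"] },
  ["blocking"]
);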

But right now it’s something. Ironically enough, it was difficult to develop the extension because my mind was on fire the whole time. I still don’t have a great system for planning out work — sometimes I cope fine and my brain can keep everything organized; other times it can barely focus on one thing. I’m hoping this extension is a step in a direction that supports people like me, and maybe it can give me ideas for other tools I might need.

I haven’t written in this blog recently — I was a bit held up trying to get my Ghost blog ported. Instead, I think I should just keep writing these posts and not worry about importing the past. If there’s ever a time for me to need writing, it’s now.