• jq is a Swiss Army knife for working with JSON. It is especially handy for processing the piped output of CLI tools, such as curl-ing JSON APIs or the aws and az CLIs.
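
    For instance, a trivial curl-to-jq pipe (the GitHub API is used here purely for illustration; any JSON-returning endpoint works):

    # fetch a repo's metadata and pull out a single field
    curl -s https://api.github.com/repos/stedolan/jq | jq '.stargazers_count'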

    I wanted to get a nice list of public IP addresses of my EC2 instances, together with instance names. I could have used boto for this, but the combination of the AWS CLI and jq turned out to be a simple and effective one-liner (split here for better wrapping).

    aws ec2 describe-instances | jq '.Reservations[].Instances[] |
      {(.Tags[] | select(.Key == "Name") | .Value): .PublicIpAddress}' |
      jq -s add
    

    produces:

    {
      "foo": "54.131.121.177",
      "bar": "52.75.8.58",
      "baz": "34.228.156.28"
    }
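
    The same result can also be had in a single jq invocation by building {key, value} pairs and passing them to from_entries (a sketch, assuming every instance has a Name tag and a public IP):

    # one jq call: collect {key, value} objects into an array, then
    # from_entries turns them into a single {name: ip} object
    aws ec2 describe-instances | jq '[.Reservations[].Instances[] |
      {key: (.Tags[] | select(.Key == "Name") | .Value), value: .PublicIpAddress}] |
      from_entries'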
    
  • Tue, Sep 26, 2017

    – Hey, we need to do a deployment.

    Five developers swarm in to participate in the process. The fun begins with importing CSVs into Azure tables, a trivial task that we’ve yet to automate. Then off we go to deploy the application. Each deployment is a special snowflake - some services get updated, some don’t… We set the dials and hit the button. After all, “Those progress bars ain’t gonna watch themselves” (© Stan). All seems to be going well for 15 minutes…

    …until someone realizes that – apparently – something else had to be deployed first.

    – uh, can’t you cancel it?

    uh, NOPE.

    so we wait for this deployment to complete, because canceling a running deployment is bad luck (trust me). Then we deploy the prerequisite (it’s an ARM template, FYI). Finally, we’re ready to deploy the original application, and so we push the button and twiddle thumbs…

    until 20 minutes later:

    – hey, did we change that variable?

    guess what ensues? Correct: all sorts of good times.

    fast forward, and the redeployment of the prerequisite is done. We’re into the 2nd hour of this extravaganza now, by the way. We go back to re-re-deploy the apps (3rd time if you’re keeping count). This is it, and then we’re done, right?

    riiight.

    As is tradition, post-deployment there are three scripts that must be run manually on a snowflake box, as a final sacrifice of tears to the great pool of entropy. This involves, uh, pasting the actual scripts into that thing over there, complete with executable paths and all. I don’t know but I’ve been told, this will take like 3 hours to run.

    And it would indeed… if only someone hadn’t

    restarted the remote-script-runner-service-thingamajig because it was being slow.

    Pop quiz: what happens now? I hope you said “those script processes are now disowned”, because they are, they bloody are. They are running, but apparently doing either nothing or something useless. Logs? What if I told you: there are no logs.

    Long story short (not really): I get on chat with the server admin. He checks the box for me, does admin things. We have no visibility into what the scripts are doing. We leave them alone for the time being.

    And a few hours later, they are still humming along…


    There are some takeaways here: automate every step you can (yes, even the “trivial” CSV import), make deployment order and configuration explicit instead of tribal knowledge, and give long-running scripts logs you can actually read.
