Today the iorama.studio team released v1.1 of Looom, which now includes a delete tool!

They also took a great first stab at documenting how to use the app. This is super nice, because there are a lot of features and buttons that are ambiguous or relatively hidden. I’ve spent about 10 hours playing with the app, and I naturally discovered features just by experimenting — but I also learned a few new things. Check out the Looom User Guide for more details on how everything works.

One of the things I’ve been waiting to try out is the SVG export. For some reason, you’re not able to access the SVG files locally on the iPad for a simple AirDrop export. I had to physically connect my iPad to my Mac with a USB-C cable to extract the SVG files.

While this flow isn’t ideal, an in-app export feature is apparently in the works.

Unfortunately, the source files for the video I posted last night were deleted from the app when I upgraded from version 1.0 to 1.1 😢

In any case, I was able to pull a few of the animated SVGs out of the iPad, and I’ve posted them below.

I also noticed some path/clipping bugs in the exported SVGs. This is illustrated in the orange ball animation — I posted the SVG and video side by side — where you can see that an entire thread appears to be missing from the SVG.

In the early days of Chrome extensions (January 2010), Jesse wrote an extension called Chromr. The general idea was simple and awesome — every time you open a new tab in Chrome, the page is a full-screen photo pulled from Flickr’s interestingness API.

I worked with him to improve the styling, and the project was renamed More Interestingness and moved to our shared GitHub organization.

I think More Interestingness is my all-time favorite Chrome extension — it has led to many great conversations with coworkers and friends. Unfortunately, on June 27, 2014, Flickr made their API accept only HTTPS/SSL-encrypted requests. This change silently broke the extension, and I’ve spent the summer staring at blank new-tab windows in Chrome 🙁
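The extension itself is JavaScript, but everything hinges on a single REST call to Flickr’s flickr.interestingness.getList endpoint, which since that change has to be made over HTTPS. As a rough sketch (not the extension’s actual code), here’s what the equivalent request looks like in Python, with a placeholder API key:

import json
import urllib2  # Python 2 to match the rest of this post; use urllib.request on Python 3

# Placeholder -- substitute your own Flickr API key
FLICKR_API_KEY = 'YOUR_FLICKR_API_KEY'

# Since June 2014 this endpoint only answers over HTTPS
url = ('https://api.flickr.com/services/rest/'
       '?method=flickr.interestingness.getList'
       '&api_key=%s&format=json&nojsoncallback=1' % FLICKR_API_KEY)

photos = json.load(urllib2.urlopen(url))['photos']['photo']
for p in photos[:5]:
    # Build a direct image URL from the fields Flickr returns for each photo
    print 'https://farm%s.staticflickr.com/%s/%s_%s_b.jpg' % (
        p['farm'], p['server'], p['id'], p['secret'])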

Today it came up in conversation, and I decided to take a look and fix things. Since the last updates in May of 2012 a lot has changed in the Chrome extension environment, but after a few updates to the manifest.json file I was able to get things back in working condition.

more interestingness

You can download the More Interestingness chrome extension from the Chrome WebStore: https://chrome.google.com/webstore/detail/more-interestingness/ngddmdmkjnnefgggjnnnepijkcighifa

Over the last year I’ve grown quite fond of spot instances on EC2. The fact that you can spin up a relatively large cluster for almost no money to play around with new technology and tools is amazing.

I’ve been playing with CoreOS and the various cloud_config options for the last few hours, and I was getting sick of clicking through the EC2 console every time I wanted to spin up a new cluster based on my latest cloud_config. So I made a quick (read: hacky/janky) script to spawn CoreOS clusters on EC2 as spot instances.

spawn_coreos_cluster.py
#!/usr/bin/env python

import argparse
import os
import time

from boto import ec2
from boto.ec2.blockdevicemapping import BlockDeviceMapping, BlockDeviceType



if not (os.environ.get('AWS_ACCESS_KEY') and os.environ.get('AWS_SECRET_KEY')):
    err = 'No AWS credentials present in the environment, try again...'
    raise SystemExit(err)


# Instance type and maximum spot bid (USD per hour)
INSTANCE_TYPE = 'c3.xlarge'
INSTANCE_BID = '0.05'

# CoreOS AMI (AMI IDs are region-specific)
COREOS_AMI = 'ami-31222974'

AWS_ACCESS_KEY = os.environ.get('AWS_ACCESS_KEY')
AWS_SECRET_KEY = os.environ.get('AWS_SECRET_KEY')
EC2_KEY_NAME = 'jake'
SECURITY_GROUPS = ['sg-1234']


def parse_args():
    ap = argparse.ArgumentParser(description='Spawn a CoreOS cluster')
    ap.add_argument('-r', '--region',
                    default='us-west-1',
                    help='Which EC2 region should the cluster run in?')
    ap.add_argument('-n', '--node-count',
                    type=int,
                    default=3,
                    help='How many nodes should be in the cluster?')
    args = ap.parse_args()

    return args


def _get_cloudconfig():
    # Read the cloud-config user data that sits next to this script
    base_path = os.path.dirname(os.path.realpath(__file__))
    with open(os.path.join(base_path, 'cloud_config.yml')) as cloud_config:
        return cloud_config.read()

def spawn_cluster(count, region):
    conn = ec2.connect_to_region(region,
                                 aws_access_key_id=AWS_ACCESS_KEY,
                                 aws_secret_access_key=AWS_SECRET_KEY)

    # Map both ephemeral (instance-store) disks so cloud-config can stripe them later
    mapping = BlockDeviceMapping()
    eph0 = BlockDeviceType(ephemeral_name='ephemeral0')
    eph1 = BlockDeviceType(ephemeral_name='ephemeral1')
    mapping['/dev/xvdb'] = eph0
    mapping['/dev/xvdc'] = eph1

    instance_params = {
        'count': count,
        'key_name': EC2_KEY_NAME,
        'user_data': _get_cloudconfig(),
        'instance_type': INSTANCE_TYPE,
        'block_device_map': mapping,
        'security_group_ids': SECURITY_GROUPS
    }

    spot_reqs = conn.request_spot_instances(INSTANCE_BID, COREOS_AMI, **instance_params)
    for req in spot_reqs:
        req.add_tags({'Name': 'coreos-cluster', 'coreos': 'true'})  # tag values must be strings
    spot_ids = [s.id for s in spot_reqs]

    # Poll until every spot request has been fulfilled with an instance
    for _ in xrange(50):
        print 'Waiting for instances to spawn...'
        spot_reqs = conn.get_all_spot_instance_requests(request_ids=spot_ids)
        instance_ids = [s.instance_id for s in spot_reqs if s.instance_id is not None]
        if len(instance_ids) == len(spot_reqs):
            print 'Instances all spawned'
            print '====================='
            for i in conn.get_only_instances(instance_ids=instance_ids):
                print 'CoreOS Node:'
                print '    - spot req id: %s' % i.spot_instance_request_id
                print '    - instance id: %s' % i.id
                print '    - Public IP: %s' % i.ip_address
                print '    - Public DNS: %s' % i.public_dns_name
            break

        time.sleep(10)
    else:
        print 'Timed out waiting for the spot requests to be fulfilled'


if __name__ == '__main__':
    args = parse_args()
    spawn_cluster(args.node_count, args.region)
cloud_config.yml
#cloud-config
coreos:
  etcd:
    discovery: https://discovery.etcd.io/fancy
    addr: $public_ipv4:4001
    peer-addr: $private_ipv4:7001
  fleet:
    public-ip: $public_ipv4
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start

    - name: format-ephemeral.service
      command: start
      content: |
        [Unit]
        Description=Stripes the ephemeral instance disks into one btrfs volume
        [Service]
        Type=oneshot
        RemainAfterExit=yes
        ExecStart=/usr/sbin/wipefs -f /dev/xvdb /dev/xvdc
        ExecStart=/usr/sbin/mkfs.btrfs -f -d raid0 /dev/xvdb /dev/xvdc

    - name: var-lib-docker.mount
      command: start
      content: |
        [Unit]
        Description=Mount ephemeral to /var/lib/docker
        Requires=format-ephemeral.service
        After=format-ephemeral.service
        Before=docker.service
        [Mount]
        What=/dev/xvdb
        Where=/var/lib/docker
        Type=btrfs
STDOUT
➜  core  ./spawn_coreos_cluster.py -n 3
Waiting for instances to spawn...
Waiting for instances to spawn...

Instances all spawned
=====================
CoreOS Node:
    - spot req id: sir-03rt1m
    - instance id: i-ead754
    - Public IP: 54.183.220.1
    - Public DNS: ec2-54-183-220-1.us-west-1.compute.amazonaws.com
CoreOS Node:
    - spot req id: sir-03rw5q
    - instance id: i-cfd053
    - Public IP: 54.183.178.2
    - Public DNS: ec2-54-183-178-2.us-west-1.compute.amazonaws.com
CoreOS Node:
    - spot req id: sir-03rwp8
    - instance id: i-45d053
    - Public IP: 54.183.218.3
    - Public DNS: ec2-54-183-218-3.us-west-1.compute.amazonaws.com

I’ve posted everything as a gist here: https://gist.github.com/jakedahn/374e2e54fdcef711bf2a

Earlier this year I made a trivial web application in ~15 minutes that uses Twilio to send me a text message every evening at 8pm asking “How did you create value today?”
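The sending side of something like this is tiny. Here’s a minimal sketch of the nightly SMS, assuming the twilio Python helper library and placeholder credentials and phone numbers:

import os

from twilio.rest import TwilioRestClient  # newer releases expose this as twilio.rest.Client

# Credentials come from the environment; the numbers below are placeholders
client = TwilioRestClient(os.environ['TWILIO_ACCOUNT_SID'],
                          os.environ['TWILIO_AUTH_TOKEN'])

client.messages.create(
    to='+15555550100',       # your phone
    from_='+15555550142',    # your Twilio number
    body='How did you create value today?')

Run that from a nightly cron entry (or any other scheduler) at 8pm, and replies come back through whatever webhook the Twilio number points at.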

The goal of this was to have some sort of a diary to look back on in a few years. That’s always been my favorite aspect of writing/blogging – the ability to look back and see what past-jake was thinking/doing.

I found that the practice of answering this question had some interesting side effects on how I think and work during the day. After about a month of responding daily, I noticed that my decisions and productivity were generally better, because I had started to ask “how much value does this add?” about everything I do throughout the day.

While this idea is vague, I think it is interesting and wanted to share.

Setting up an apt-cacher is easy, and so is injecting the apt_proxy attribute into cloud-config so you can use it from your instances:

#cloud-config
apt_proxy: http://192.168.1.42:3142