Wednesday, 22 April 2015

Resonant Rise 3 Java Settings to reduce lag

I've begun a new YouTube series covering Resonant Rise 3 (3.2.5.3-RC-MAIN).
In the process I discovered it lags.
HORRIBLY.

As my Linux server decided to sulk and fail to boot, I'm using a late-model Mac Mini as a server.
The clients are late-model iMacs.
This is OBVIOUSLY SUB-OPTIMAL.
I'm organising a replacement Linux server which I'll use in our upcoming videos.

So I did a metric shit ton of digging and finally found a series of settings for the server and client that reduce that lag considerably.
So...
Yeah...
You keep getting:

"Can't keep up! Did the system time change"

messages?

Then do this.

On your server's LaunchServer.sh:

#!/bin/bash
set -x
export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.7.0_71.jdk/Contents/Home
export PATH=${JAVA_HOME}/bin:${PATH}
opts="-server -XX:+TieredCompilation \
-XX:-DontCompileHugeMethods \
-XX:+UseCodeCacheFlushing \
-XX:ReservedCodeCacheSize=256m \
-XX:+UseBiasedLocking \
-XX:BiasedLockingStartupDelay=0 \
-XX:NewRatio=3 \
-XX:+UseParNewGC \
-XX:+UseConcMarkSweepGC \
-XX:+DisableExplicitGC \
-XX:+CMSIncrementalMode \
-XX:+CMSIncrementalPacing \
-XX:+CMSParallelRemarkEnabled \
-XX:+UseCompressedOops \
-XX:CMSInitiatingOccupancyFraction=30 \
-XX:+UseCMSInitiatingOccupancyOnly"
java -Xmx2G -XX:MaxPermSize=256M $opts -jar forge-1.7.10-10.13.2.1291-universal.jar nogui

(Slashes added for readability)

Of course this assumes you're using Java 1.7 update 71.
I tried JDK 1.8 and it gave me a whole bunch of grief.
So I decided to switch back to JDK 1.7 and see if that helped.
It did.
Suddenly the Mac Mini used all 4 cores instead of just 2 and the client side (iMac) sped up considerably.
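
If you want to double-check which JVM the launch script is actually picking up, and that all four cores are getting used, a quick sanity check on the server (a sketch, assuming JAVA_HOME is exported as in the script above) is:

${JAVA_HOME}/bin/java -version   # should report 1.7.0_71
top -o cpu                       # the java process should be spread across all cores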

Ok. Now the client side.
In ATLauncher, open 'Settings' and:

For the Java path use: /Library/Java/JavaVirtualMachines/jdk1.7.0_67.jdk/Contents/Home
(Again assuming you have 1.7 update 67 - change to suit)

And for the Java parameters:

-server -XX:+TieredCompilation \
-XX:CompileThreshold=1500 \
-XX:-DontCompileHugeMethods \
-XX:+UseCodeCacheFlushing \
-XX:ReservedCodeCacheSize=256m \
-XX:+UseBiasedLocking \
-XX:BiasedLockingStartupDelay=0 \
-XX:NewRatio=3 \
-XX:+UseParNewGC \
-XX:+UseConcMarkSweepGC \
-XX:+DisableExplicitGC \
-XX:+CMSIncrementalMode \
-XX:+CMSIncrementalPacing \
-XX:+CMSParallelRemarkEnabled \
-XX:+UseCompressedOops \
-XX:CMSInitiatingOccupancyFraction=30 \
-XX:+UseCMSInitiatingOccupancyOnly

(Slashes added for readability)

MANY THANKS TO https://plus.google.com/+JohnPaulAlcala/posts/FUKJ3QhZJ8w !

YMMV.

I'm still hunting for "The Perfect Seed" for videos, and will get back to making videos as soon as my current IRL workload decreases.

Wednesday, 1 April 2015

Where does chrome store open tabs information?

Recently I had a problem with an upgrade from Mavericks to Yosemite.
The upgrade failed.
Catastrophically.
But I had backed up all the main folders I might need in case of such a failure.

For reference, these are:

- /private
- /Library
- /Applications
- /Users/my_home_folder

I had to burn the machine and install Yosemite from scratch.

Now one of the things I wanted to recover was the current tabs I had open in Chrome.
The reason being that some of them were very, very interesting and I hadn't bookmarked them.

I did a little bit of research, but not much turned up.
So I figured it out myself.

Let's say you have a folder ~/OldMachineBackup and it has the folders mentioned above in it.

So to find out what tabs you had open:

cd ~/OldMachineBackup
strings Application\ Support/Google/Chrome/Default/Current\ Tabs | egrep '^http' | sort | uniq

And there you have them. A list of http addresses.
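
If that file turns out to be empty, Chrome (at least around that version) keeps a few sibling files in the same profile folder - "Last Tabs", "Current Session" and "Last Session" - that the same trick should work on. A rough sketch, assuming the same backup layout as above:

cd ~/OldMachineBackup/Application\ Support/Google/Chrome/Default
for f in "Current Tabs" "Last Tabs" "Current Session" "Last Session"; do
  [ -f "$f" ] || continue
  echo "== $f =="
  strings "$f" | egrep '^http' | sort | uniq
done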

Wednesday, 18 March 2015

Upgrading the docker service from 1.3 to 1.5 on Ubuntu 14.04

Recently I had to upgrade a production server to ensure it was running the latest version of docker (1.5.0 at time of writing).

The current install was 1.3.1 and we wanted all docker servers to be identical.

First up, assume you have done a sudo -i to ensure all commands are run as root.

Also, some commands are prefixed with '$'. This is to distinguish the command from its output.

And further, I have used bogus IPv4 addresses for certain URLs.

For reference, I used 10.0.0.3 as the docker server IP.

# First, some useful commands.
# apt-file search is handy for finding which package provides a file
# (or just use packages.ubuntu.com). You may need to
# 'apt-get install apt-file && apt-file update' first.

apt-file search nslookup

# -------------------------------------------------------------------
# Install tree and dnsutils (nslookup and friends)
# -------------------------------------------------------------------

apt-get install tree dnsutils

# -------------------------------------------------------------------
# Kill all containers
# -------------------------------------------------------------------
docker ps -a | egrep 'ls-api' | awk '{ print $NF; }' | xargs docker kill
docker ps -a | egrep 'ls-api' | awk '{ print $NF; }' | xargs docker rm
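
# Note: the 'ls-api' filter above is specific to my containers. To stop
# and remove *everything* instead, something like this should also work:
#   docker ps -aq | xargs docker kill
#   docker ps -aq | xargs docker rm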

# -------------------------------------------------------------------
# Kill all images
# -------------------------------------------------------------------
docker images -q --filter "dangling=true" | xargs docker rmi
docker images | grep -v REPOSITORY | awk '{ print $3; }' | \
  sort | uniq | xargs docker rmi

# -------------------------------------------------------------------
# Stop docker
# -------------------------------------------------------------------
service docker stop

# -------------------------------------------------------------------
# Uninstall docker.io (you may need to do 'aptitude search docker' to
# see what is installed; if you have 'lxc-docker' rather than
# 'docker.io' installed, remove that instead)
# -------------------------------------------------------------------
apt-get remove docker.io

# -------------------------------------------------------------------
# Clean up any remaining files you don't want to keep from:
# /var/lib/docker 
# /etc/init.d 
# /etc/default/docker
# /var/log/docker* 
# Your choice.
# -------------------------------------------------------------------

# -------------------------------------------------------------------
# Install AUFS support
# -------------------------------------------------------------------
apt-get update
apt-get install linux-image-extra-`uname -r`

# -------------------------------------------------------------------
# Fix keys
# -------------------------------------------------------------------
apt-get update
apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 \
  --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9

# -------------------------------------------------------------------
# Update your apt sources
# Note: First check your /etc/apt/sources.list.d/docker.list to see 
#       if it already has this
# -------------------------------------------------------------------
sh -c "echo deb http://get.docker.com/ubuntu docker main > /etc/apt/sources.list.d/docker.list"

# -------------------------------------------------------------------
# Install latest docker 
# -------------------------------------------------------------------
apt-get update
apt-get install lxc-docker

# Note: During install, you may get a message about 
#       /etc/init/docker.conf being present.
#       If so, choose Y to overwrite your old one with the new one.

# -------------------------------------------------------------------
# Change /etc/default/docker:
# -------------------------------------------------------------------

# Use DOCKER_OPTS to modify the daemon startup options.
#DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4"
DOCKER_OPTS="-H tcp://0.0.0.0:2376 \
  -H unix:///var/run/docker.sock \
  --dns 10.0.0.1 \                      <<- Your local DNS
  --dns 10.0.0.2 \                      <<- services
  --insecure-registry 10.0.0.4:5000"    <<- If you have a private repo

# -------------------------------------------------------------------
# Look at the upstart jobs:
# -------------------------------------------------------------------
service --status-all
 [ + ]  apparmor
 [ ? ]  console-setup
 [ + ]  cron
 [ - ]  docker
 [ - ]  grub-common
 [ ? ]  killprocs
 [ ? ]  kmod
 [ ? ]  networking
 [ + ]  ntp
 [ ? ]  ondemand
 [ ? ]  open-vm-tools
 [ + ]  postfix
 [ - ]  procps
 [ + ]  puppet
 [ + ]  rabbitmq-server
 [ ? ]  rc.local
 [ + ]  resolvconf
 [ - ]  rsync
 [ + ]  rsyslog
 [ ? ]  sendsigs
 [ + ]  snmpd
 [ - ]  ssh
 [ - ]  sudo
 [ + ]  udev
 [ ? ]  umountfs
 [ ? ]  umountnfs.sh
 [ ? ]  umountroot
 [ - ]  unattended-upgrades
 [ - ]  urandom

# -------------------------------------------------------------------
# Restart the service to enable AUFS support
# -------------------------------------------------------------------
service docker restart
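
# -------------------------------------------------------------------
# Optional: confirm the daemon picked up DOCKER_OPTS
# (a quick sketch - the grep shows the daemon's full command line, and
#  -H talks to it over the TCP socket we enabled above)
# -------------------------------------------------------------------
ps aux | grep [d]ocker
docker -H tcp://127.0.0.1:2376 version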

# -------------------------------------------------------------------
# Get info
# -------------------------------------------------------------------
$ docker info
Containers: 0
Images: 0
Storage Driver: aufs        <-- AUFS
 Root Dir: /mnt/docker/aufs <-- AUFS
 Backing Filesystem: extfs
 Dirs: 0
Execution Driver: native-0.2
Kernel Version: 3.13.0-24-generic
Operating System: Ubuntu 14.04.1 LTS
CPUs: 16
Total Memory: 15.67 GiB
Name: dock-prod-001
ID: AAAA:BBBB:CCCC:DDDD:EEEE:FFFF:GGGG:HHHH:0000:1111:2222:3333

# Your ID will be different of course 

# -------------------------------------------------------------------
# Start Seagull to have a pretty interface to containers
# See https://registry.hub.docker.com/u/tobegit3hub/seagull/
# -------------------------------------------------------------------
docker run -d \
  -p 10086:10086 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --name=Seagull \
  tobegit3hub/seagull

# Browse via your desktop to http://10.0.0.3:10086/containers

# -------------------------------------------------------------------
# Install Elastic
# See https://registry.hub.docker.com/_/elasticsearch/
# The '_' in the URL means it's an official image
# -------------------------------------------------------------------
docker run -d \
  -p 9200:9200 -p 9300:9300 \
  -v /some/path/to/elastic_search_data:/data \
  --name=Elastic \
  elasticsearch \
  elasticsearch -Des.config=/data/elasticsearch.yml
# Note: The first 'elasticsearch' is the image, and
#       the second 'elasticsearch' is the command plus options

# Browse to http://10.0.0.3:9200 and http://10.0.0.3:9200/_search
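
# Or poke it from the command line (a rough sketch using the standard
# ES 1.x REST API - index a test document, give it a second, then search)
curl -XPUT 'http://10.0.0.3:9200/test/doc/1' -d '{"hello":"world"}'
curl 'http://10.0.0.3:9200/test/_search?q=hello:world'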

# -------------------------------------------------------------------
# Install Redis
# See https://registry.hub.docker.com/u/library/redis/
# -------------------------------------------------------------------
docker run -d \
  -p 6379:6379 \
  -v /some/path/to/redis_data:/data \
  --name=Redis \
  redis redis-server \
  --appendonly yes
# Note: The 'redis' is the image, and
#       the 'redis-server' is the command plus any options

# -------------------------------------------------------------------
# Install redis command line tools
# So you can interact with the Redis container
# -------------------------------------------------------------------
apt-get update
apt-get install redis-tools
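
# Note: redis-cli defaults to 127.0.0.1:6379. From another box you can
# point it at the docker host instead (a quick sketch):
#   redis-cli -h 10.0.0.3 -p 6379 ping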

# -------------------------------------------------------------------
# Test
# -------------------------------------------------------------------
# First go to the redis_data folder that was mapped in when we started the container
$ cd redis_data
# Let's look at the data store
$ ls -l appendonly.aof
total 0
-rw-r--r-- 1 deploy docker 0 Mar 18 09:24 appendonly.aof
# Empty, so let's add a key
$ redis-cli
127.0.0.1:6379> keys *
(empty list or set)
127.0.0.1:6379> set hello world
OK
127.0.0.1:6379> keys *
1) "hello"
127.0.0.1:6379> get hello
"world"
127.0.0.1:6379> quit
# Now let's look at it
$ ls -l appendonly.aof
total 4
-rw-r--r-- 1 deploy docker 58 Mar 18 09:33 appendonly.aof
# Ooo. Changed, so let's see if we can view that file
$ file appendonly.aof
appendonly.aof: ASCII text, with CRLF line terminators
# Yup. So let's look at it:
$ cat appendonly.aof
*2
$6
SELECT
$1
0
*3
$3
set
$5
hello
$5
world

# -------------------------------------------------------------------
# Look at the containers:
# -------------------------------------------------------------------
$ docker ps -a
CONTAINER ID IMAGE                      COMMAND              CREATED        STATUS        PORTS                                          NAMES
c758b7dc90e5 redis:latest               "/entrypoint.sh redi 4 minutes ago  Up 4 minutes  0.0.0.0:6379->6379/tcp                         Redis
310ccb182746 elasticsearch:latest       "elasticsearch -Des. 8 minutes ago  Up 8 minutes  0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp Elastic
e78470ddebca tobegit3hub/seagull:latest "./seagull"          30 minutes ago Up 30 minutes 0.0.0.0:10086->10086/tcp                       Seagull

# -------------------------------------------------------------------
# Start doing your own deployments!
# -------------------------------------------------------------------

Thursday, 12 March 2015

Thank you Dr Mark Courtney and Dr Paul Murphy and all the staff at John Flynn Hospital

Ok. I survived my latest surgery. Many thanks to the staff of John Flynn Private Hospital and Dr Mark Courtney and Dr Paul Murphy for making it memorable.

I say memorable in that my handbag and overnight bag got lost. I'm not complaining mind you.

Seriously.

6 hours of searching by security staff eventually found them. Keys, cards, prescriptions, etc. - basically two-thirds of my life recovered. Thanks guys!

In the mean time I had to wear paper clothing and be restricted to my room. Which was awesome I have to say. Fantastic views. But no underwear. Sucks to be me. Dr Courtney came round to see how I was faring at 7:30pm. Way, way, seriously way, after his visiting times.

Now that's dedication to doing the right thing.

Awesome!

He was an angel and sent me home rather than keeping me overnight because:

1) The cyst (huge bugger as it was) was easy to remove and
2) I didn't bleed like a stuck pig and
3) He understood I had to pay for the room out of my own pocket and
4) ALIENS! No. Not really. Just people. Humans. Good humans.
5) Now where did I put number 6?
6) Oh! Here it is!

I shook his hand warmly and gave him a hug. He positively beamed happiness.

And a big shout-out to Dr Paul Murphy.
We had met before and I kept calling him 'Paul'.
Kinda odd in a professional kind of way.
I kept expecting him to say

I didn't spend 3 years in "evil Anaesthesia College" to be called "Paul" thank you very much. It's Dr Murphy if you please.
Sorry about that.

Oh. I have to mention. While in pre-op I overheard the birth of two babies. AWESOME. Made me smile. And many of the staff too, I have to say. Cool.

I now have a HUGE plaster on my neck and my neck hurts like... Like... Buggery... But I have antibiotics, pan forte and FINALLY have this damn thing out of my neck. Two years of coughing myself to distraction every morning for 2 hours. Gone. Clicking when I swallow. Gone. Glands the size of golf balls. Gone. Finally. Gone.

Thank you Dr Mark Courtney and Dr Paul Murphy and all the staff at John Flynn Hospital for making it go away. Thank you.

And a massive shout-out to security for ransacking every ward and every locker for my bags.

Thank you.

Dr Courtney.
Dr Murphy.
ALL THE STAFF AT JOHN FLYNN PRIVATE HOSPITAL!
ALL OF YOU.
SPECIAL SHOUT-OUT TO SECURITY - AWESOME JOB DUDES.


Wednesday, 4 March 2015

Docker: Find what the container port is from inside the container!

Ok. Service registration and discovery inside a docker container can be fiddly sometimes.
And I wanted to have dynamic port numbers when starting a container so I could 'register' the service in Redis.

So how do you do it?

I first fiddled with using socat inside the startup script for the service, which worked but was ugly.

FYI: I ran up a vagrant ubuntu VM and installed docker 1.5 to test this.

So here's the way to do it in ruby.

# ---------------------------------------------------------------------------
# Find out what our container port is
# ---------------------------------------------------------------------------

SVC_NAME = 'api-dummy_1'

require 'socket'
require 'net/http'
require 'json'

# Create the socket to the docker host
sock = Net::BufferedIO.new(UNIXSocket.new('/var/run/docker.sock'))

# Go grab all the containers details
request = Net::HTTP::Get.new('/containers/json')
request.exec(sock, '1.1', '/containers/json')
begin
  response = Net::HTTPResponse.read_new(sock)
end while response.kind_of?(Net::HTTPContinue)
response.reading_body(sock, request.response_body_permitted?) { }

# Parse and loop over it trying to find our name
data = JSON.parse(response.body)
puts "Data received: #{data}"
data.each do |container|
  puts "Looking at: #{container}"
  if container['Names'].include? "/#{SVC_NAME}"
    container_port = container['Ports'][0]['PublicPort']
    puts "CONTAINER_PORT: #{container_port}"
    ENV['SVC_PORT'] = container_port.to_s
    break
  end
end

Obviously you'd have to handle the case where the name can't be found...
And you should really check the Ports array more carefully.

The socket call returns an array something like this:

[
{
  "Command":"/bin/sh -c 'bundle exec foreman start'",
  "Created":1425423198,
  "Id":"5b11471046a04b64fffc2866d4eb67568221fb8c3445a326557182208559e460",
  "Image":"my_repo:5000/something/api-dummy_1:latest",
  "Names":["/api-dummy_1"],
  "Ports":[{"IP":"0.0.0.0","PrivatePort":5000,"PublicPort":49172,"Type":"tcp"}],
  "Status":"Up 1 seconds"
},
{
  ...another one...
}
]

You have to map /var/run/docker.sock into the container when you run it, of course.
Something like this:

#!/bin/bash
export SVC_NAME=api-dummy_1
export DOCKER_HOST=tcp://127.0.0.1:2378
export REPO=10.0.0.1 # Whatever
docker pull ${REPO}:5000/something/${SVC_NAME}
docker kill ${SVC_NAME}
docker rm ${SVC_NAME}
docker run -d --env RAILS_ENV=production \
              --env HOST_IP=10.0.0.2 \
              --env SVC_NAME=api-dummy_1 \
              --env REDIS=10.0.0.3 \
              --name ${SVC_NAME} \
              -p :5000 \
              -v /var/run/docker.sock:/var/run/docker.sock \
              ${REPO}:5000/something/${SVC_NAME}

Names and IPs to be changed of course.

Still fiddly, but it works.

YMMV.

Update:

I just realised that docker provides a HOSTNAME environment variable, which is essentially the container id. That lets you call /containers/#{ENV['HOSTNAME']}/json to get the configuration for that specific id, instead of looping over all the containers.

The configuration returned is slightly different. See https://docs.docker.com/reference/api/docker_remote_api_v1.15/#inspect-a-container for details
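
For a quick eyeball of that same endpoint without any Ruby, curl can talk to the socket directly, provided it's new enough (--unix-socket arrived in curl 7.40). A sketch, run from inside a container that has the socket mapped in:

curl --unix-socket /var/run/docker.sock \
  http://localhost/containers/$HOSTNAME/json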

Enjoy.

Wednesday, 28 January 2015

Diamonds For Nothing a parody for Minecrafters

Now look at them diggy-diggies that's the way you do it
You play the pick-axe on the MC-YT
That ain't diggin' that's the way you do it
Diamonds for nothin' and chickens for free
Now that ain't diggin' that's the way you do it
Lemme tell ya them miners ain't dumb
Maybe get a arrow on your right arm
Maybe get a arrow on the back of your head

We gotta install these coal-powered ovens
Custom jukebox deliveries
We gotta move these iron bars
We gotta move these chestie's

See the little noob with the skin and the hat
Yeah buddy that's his own hat
That little miner got his own jet pack
That little miner he's a millionaire

We gotta install these coal-powered ovens
Custom jukebox deliveries
We gotta move these iron bars
We gotta move these chestie's

I shoulda learned to play the game
I shoulda learned to play them picks
Look at that miner, she got it stickin' in the camera
Man we could have some fun
And he's up there, what's that? Indie noises?
Bangin' on the creepers like a simoneeze
That ain't workin' that's the way you do it
Diamonds for nothin' and chickens for free

We gotta install these coal-powered ovens
Custom jukebox deliveries
We gotta move these iron bars
We gotta move these chestie's

Now that ain't diggin' that's the way you do it
You play the pick on the MC-YT
That ain't diggin' that's the way you do it
Diamonds for nothin' and chickens for free
Diamonds for nothin' and chickens for free

I want my
I want my
I want my MC-YT!

Thursday, 8 January 2015

Docker: Creating ruby 2.2.0, mysql and rails images

For the purposes of this post, I'm assuming you have a private repository with the address 10.0.0.1:5000.

In this example, you will notice a distinct similarity in the build scripts.
Hmm.
Funny that.
I created a tool sleet which is like fleet but for single server installs.
So that is the essence of those build scripts.
If I have time somewhere in my aging schedule and downright ludicrous deadlines, I'll open source it.

This creates 4 images in your private repo.

- debian which is an instance of jessie with various compilers etc installed.
- ruby-2.2.0 compiled from source
- debian-mysql-ruby-2.2.0 with the MySQL client build dependencies (libmysqlclient-dev)
- rails-2.2.0 from all of the above

The purpose was to get an image that can be used for `rails-api` micro services that builds and deploys ultra-fast.
This is done by pre-installing the most commonly used gems into the `rails-2.2.0` image.

Then when a build is done that inherits from the `rails-2.2.0` image, the `bundle install` simply uses the local gems and doesn't have to go off to any external gem sources.
The bundle output shows `Using` instead of `Installing`, the latter of which involves snarfing gems off the web and perhaps compiling things.


First up, here's the tree you'll be creating:

.
├── debian
│   └── jessie
│       ├── Dockerfile
│       └── build.sh
├── mysql
│   └── 2.2.0
│       ├── Dockerfile
│       └── build.sh
├── rails
│   └── 2.2.0
│       ├── Dockerfile
│       ├── build
│       │   └── Gemfile
│       └── build.sh
└── ruby
    └── 2.2.0
        ├── Dockerfile
        └── build.sh

Here are the files:

debian/jessie/Dockerfile

FROM debian:jessie

MAINTAINER Your Name 

ENV REFRESHED_AT 2015-01-08

RUN apt-get update \
  && apt-get install -y --no-install-recommends \
    git curl procps \
    autoconf build-essential \
    libbz2-dev libcurl4-openssl-dev libffi-dev libssl-dev libreadline-dev libyaml-dev \
    zlib1g-dev --no-install-recommends \
  && rm -rf /var/lib/apt/lists/*

debian/jessie/build.sh

#!/bin/sh

IMAGE=debian

export DOCKER_HOST=tcp://192.168.59.103:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=${HOME}/.boot2docker/certs/boot2docker-vm
IMAGE_REPO=10.0.0.1:5000

echo "------------------------------------------------------------------------"
echo " ENVIRONMENT"
echo "      DOCKER_HOST     ${DOCKER_HOST}"
echo "      IMAGE           ${IMAGE}"
echo "========================================================================"
echo ""

echo "------------------------------------------------------------------------"
echo " ENVIRONMENT"
echo "      DOCKER_HOST     ${DOCKER_HOST}"
echo "      IMAGE           ${IMAGE}"
echo "========================================================================"
echo ""

echo "------------------------------------------------------------------------"
echo " DELETING OLD IMAGES"
docker rmi ${IMAGE_REPO}/images/${IMAGE} ${IMAGE}
echo "========================================================================"
echo ""

set -e

echo "------------------------------------------------------------------------"
echo "BUILD"
docker build -t="${IMAGE}" .
echo "========================================================================"
echo ""

echo "------------------------------------------------------------------------"
echo "IMAGES"
docker images | egrep "^${IMAGE}"
echo "========================================================================"
echo ""

echo "------------------------------------------------------------------------"
echo "HISTORY"
docker history ${IMAGE}
echo "========================================================================"
echo ""

echo "------------------------------------------------------------------------"
echo " TAG AND PUSH"
docker images | grep "^${IMAGE}"
sha=`docker images | grep "^${IMAGE}" | awk '{ print $3; }' | cut -f1`
echo "sha=${sha}"
docker tag ${sha} ${IMAGE_REPO}/images/${IMAGE}
docker push       ${IMAGE_REPO}/images/${IMAGE}
echo "========================================================================"
echo ""

ruby/2.2.0/Dockerfile

FROM 10.0.0.1:5000/images/debian

MAINTAINER Your Name 

ENV REFRESHED_AT 2015-01-08

ENV RUBY_MAJOR 2.2
ENV RUBY_VERSION 2.2.0

RUN apt-get update \
  && apt-get install -y bison ruby \
  && rm -rf /var/lib/apt/lists/* \
  && mkdir -p /usr/src/ruby \
  && curl -SL "http://cache.ruby-lang.org/pub/ruby/$RUBY_MAJOR/ruby-$RUBY_VERSION.tar.bz2" | tar -xjC /usr/src/ruby --strip-components=1 \
  && cd /usr/src/ruby \
  && autoconf \
  && ./configure --disable-install-doc \
  && make -j"$(nproc)" \
  && apt-get purge -y --auto-remove bison ruby \
  && make install \
  && rm -r /usr/src/ruby

# skip installing gem documentation
RUN echo 'gem: --no-rdoc --no-ri' >> "$HOME/.gemrc"

# install things globally, for great justice
ENV GEM_HOME /usr/local/bundle
ENV PATH $GEM_HOME/bin:$PATH
RUN gem install bundler \
  && bundle config --global path "$GEM_HOME" \
  && bundle config --global bin "$GEM_HOME/bin"

# don't create ".bundle" in all our apps
ENV BUNDLE_APP_CONFIG $GEM_HOME

CMD [ "irb" ]

ruby/2.2.0/build.sh

#!/bin/bash

IMAGE=ruby-2.2.0

export DOCKER_HOST=tcp://192.168.59.103:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=${HOME}/.boot2docker/certs/boot2docker-vm
IMAGE_REPO=10.0.0.1:5000

echo "------------------------------------------------------------------------"
echo " ENVIRONMENT"
echo "      DOCKER_HOST     ${DOCKER_HOST}"
echo "      IMAGE           ${IMAGE}"
echo "========================================================================"
echo ""

echo "------------------------------------------------------------------------"
echo " DELETING OLD IMAGES"
docker rmi ${IMAGE_REPO}/images/${IMAGE} ${IMAGE}
echo "========================================================================"
echo ""

set -e

echo "------------------------------------------------------------------------"
echo " BUILD"
docker build -t="${IMAGE}" .
echo "========================================================================"
echo ""

echo "------------------------------------------------------------------------"
echo "IMAGES"
docker images | egrep "^${IMAGE}"
echo "========================================================================"
echo ""

echo "------------------------------------------------------------------------"
echo "HISTORY"
docker history ${IMAGE}
echo "========================================================================"
echo ""

echo "------------------------------------------------------------------------"
echo " TAG AND PUSH"
docker images | grep "^${IMAGE}"
sha=`docker images | grep "^${IMAGE}" | awk '{ print $3; }' | cut -f1`
echo "sha=${sha}"
docker tag ${sha} ${IMAGE_REPO}/images/${IMAGE}
docker push       ${IMAGE_REPO}/images/${IMAGE}
echo "========================================================================"
echo ""

mysql/2.2.0/Dockerfile

FROM 10.0.0.1:5000/images/ruby-2.2.0

MAINTAINER Your Name 

ENV REFRESHED_AT 2015-01-08

ENV RUBY_MAJOR 2.2
ENV RUBY_VERSION 2.2.0

RUN buildDeps='libmysqlclient-dev'; \
      set -x \
      && apt-get update && apt-get install -y $buildDeps --no-install-recommends

CMD [ "irb" ]

mysql/2.2.0/build.sh

#!/bin/bash

IMAGE=debian-mysql-ruby-2.2.0

export DOCKER_HOST=tcp://192.168.59.103:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=${HOME}/.boot2docker/certs/boot2docker-vm
IMAGE_REPO=10.0.0.1:5000

echo "------------------------------------------------------------------------"
echo " ENVIRONMENT"
echo "      DOCKER_HOST     ${DOCKER_HOST}"
echo "      IMAGE           ${IMAGE}"
echo "========================================================================"
echo ""

echo "------------------------------------------------------------------------"
echo " DELETING OLD IMAGES"
docker rmi ${IMAGE_REPO}/images/${IMAGE} ${IMAGE}
echo "========================================================================"
echo ""

set -e

echo "------------------------------------------------------------------------"
echo " BUILD"
docker build -t="${IMAGE}" .
echo "========================================================================"
echo ""

echo "------------------------------------------------------------------------"
echo "IMAGES"
docker images | egrep "^${IMAGE}"
echo "========================================================================"
echo ""

echo "------------------------------------------------------------------------"
echo "HISTORY"
docker history ${IMAGE}
echo "========================================================================"
echo ""

echo "------------------------------------------------------------------------"
echo " TAG AND PUSH"
docker images | grep "^${IMAGE}"
sha=`docker images | grep "^${IMAGE}" | awk '{ print $3; }' | cut -f1`
echo "sha=${sha}"
docker tag ${sha} ${IMAGE_REPO}/images/${IMAGE}
docker push       ${IMAGE_REPO}/images/${IMAGE}
echo "========================================================================"
echo ""

rails/2.2.0/Dockerfile

FROM 10.0.0.1:5000/images/debian-mysql-ruby-2.2.0

MAINTAINER Your Name 

ENV REFRESHED_AT 2015-01-08

ENV HOME /build
ENV RUBY_MAJOR 2.2
ENV RUBY_VERSION 2.2.0
ENV GEM_HOME /usr/local/bundle
ENV PATH $GEM_HOME/bin:$PATH

ADD ./build /build

WORKDIR /build

RUN bundle install --jobs 8

CMD [ "irb" ]
rails/2.2.0/build/Gemfile

source 'http://your.private.gem.server:9900'

# ---------------------------------------------------------------------------
# Always these
# ---------------------------------------------------------------------------
gem 'rails', '4.2.0'
gem 'mysql2'

gem 'thin'
gem 'foreman'

gem 'jsonapi-resources' # and/or roar

gem 'db_populate', git: 'https://github.com/ffmike/db-populate.git'

gem 'rack-cors'
gem 'activeresource'

gem 'jbuilder', '~> 2.0'
gem 'typhoeus'
gem 'etcd'
gem 'hutch'
gem 'elasticsearch'
gem 'searchkick'

gem 'therubyracer'
gem 'oj'

# ---------------------------------------------------------------------------
# Unused
# ---------------------------------------------------------------------------
# gem 'jquery-rails'
# gem 'turbolinks'
# gem 'bcrypt', '~> 3.1.7' # Use ActiveModel has_secure_password
# gem 'sass-rails', '~> 4.0.3'
# gem 'uglifier', '>= 1.3.0'
# gem 'coffee-rails', '~> 4.0.0'

rails/2.2.0/build.sh

#!/bin/bash

IMAGE=rails-2.2.0

export DOCKER_HOST=tcp://192.168.59.103:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=${HOME}/.boot2docker/certs/boot2docker-vm
IMAGE_REPO=10.0.0.1:5000

echo "------------------------------------------------------------------------"
echo " ENVIRONMENT"
echo "      DOCKER_HOST     ${DOCKER_HOST}"
echo "      IMAGE           ${IMAGE}"
echo "========================================================================"
echo ""

echo "------------------------------------------------------------------------"
echo " DELETING OLD IMAGES"
docker rmi ${IMAGE_REPO}/images/${IMAGE} ${IMAGE}
echo "========================================================================"
echo ""

set -e

echo "------------------------------------------------------------------------"
echo " BUILD"
docker build -t="${IMAGE}" .
echo "========================================================================"
echo ""

echo "------------------------------------------------------------------------"
echo "IMAGES"
docker images | egrep "^${IMAGE}"
echo "========================================================================"
echo ""

echo "------------------------------------------------------------------------"
echo "HISTORY"
docker history ${IMAGE}
echo "========================================================================"
echo ""

echo "------------------------------------------------------------------------"
echo " TAG AND PUSH"
docker images | grep "^${IMAGE}"
sha=`docker images | grep "^${IMAGE}" | awk '{ print $3; }' | cut -f1`
echo "sha=${sha}"
docker tag ${sha} ${IMAGE_REPO}/images/${IMAGE}
docker push       ${IMAGE_REPO}/images/${IMAGE}
echo "========================================================================"
echo ""

Enjoy. YMMV.

Docker: Run/Administer an instance of elasticsearch on your boot2docker vm

First ssh into your boot2docker vm, then run these commands:

docker pull dockerfile/elasticsearch
mkdir elastic_search
cat > elastic_search/elasticsearch.yml <<-EOF
path:
  logs: /data/log
  data: /data/data

http.cors.enabled: true
EOF
docker run -d -p 9200:9200 -p 9300:9300 \
  -v ${HOME}/elastic_search:/data \
  --name ElasticSearch \
  dockerfile/elasticsearch \
  /elasticsearch/bin/elasticsearch \
  -Des.config=/data/elasticsearch.yml
docker@boot2docker:~$ docker ps -a
CONTAINER ID  IMAGE COMMAND  CREATED  STATUS  PORTS  NAMES
[whatever]    dockerfile/elasticsearch:latest "/elasticsearch/bin/ 24 seconds ago Up 23 seconds 0.0.0.0:9200->9200/tcp,0.0.0.0:9300->9300/tcp ElasticSearch

Now from your Mac browse to http://192.168.59.103:9200 and you should get a status result.

Now run elasticsearch-head on your Mac (not the vm):

cd /some/tools/folder/of/your/choice
git clone git://github.com/mobz/elasticsearch-head.git
cd elasticsearch-head
npm install
grunt server

Now browse to http://localhost:9100 and:
1) set the connect box to http://192.168.59.103:9200 and
2) click connect

You now have ES running on your boot2docker instance and can access it from your Mac.

Docker: Private Registry push yields "Error: Invalid registry endpoint" and "insecure-registry"

For some time I've been using boot2docker on my Mac.
This was version 1.3.1.
I also have a private registry which for the purposes of this post I've called 10.0.0.1:5000.

Some time ago I accidentally brew updated and got docker 1.4.1.
So I couldn't access the running boot2docker daemon.
For some reason that seemed quite reasonable at the time, I didn't restart the boot2docker vm.
Go figure.

So I had to do this kind of thing:

(
export DOCKER_HOST=tcp://192.168.59.103:2376
export DOCKER_CERT_PATH=${HOME}/.boot2docker/certs/boot2docker-vm
export DOCKER_TLS_VERIFY=1
/usr/local/Cellar/docker/1.3.2/bin/docker push 10.0.0.1:5000/some_image
)

Then today I accidentally started Kitematic.
Which upgraded the boot2docker vm and restarted it.
And blammo I could no longer push to the private repository.

FATAL[0002] Error: Invalid registry endpoint https://10.0.0.1:5000/v1/: \
  Get https://10.0.0.1:5000/v1/_ping: EOF. \
  If this private registry supports only HTTP or HTTPS with an unknown CA certificate, \
  please add `--insecure-registry 10.0.0.1:5000` to the daemon's arguments. \
  In the case of HTTPS, if you have access to the registry's CA certificate, \
  no need for the flag; simply place the CA certificate at \
  /etc/docker/certs.d/10.0.0.1:5000/ca.crt

Bugger.
Dope slap.
Noob.

So the fix is (there's a scripted sketch of the same steps after this list):

1) ssh into your boot2docker vm and
2) sudo vi /var/lib/boot2docker/profile and
3) Add 'EXTRA_ARGS="--insecure-registry 10.0.0.1:5000"' to it, :wq and
4) sudo /etc/init.d/docker restart
5) And just use the /usr/local/bin/docker and not the Cellar version.
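
Or, as a rough two-liner from the Mac side (a sketch, assuming boot2docker ssh will pass a command through and the profile doesn't already set EXTRA_ARGS):

boot2docker ssh "echo 'EXTRA_ARGS=\"--insecure-registry 10.0.0.1:5000\"' | sudo tee -a /var/lib/boot2docker/profile"
boot2docker ssh "sudo /etc/init.d/docker restart"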

YMMV.

Docker: Delete "dangling" images

After using docker for a while, you'll notice a distinct slowdown in the performance of your host.

Some digging revealed that "dangling" images are the culprit.

Put a little simplistically, these are untagged images - layers left behind by old builds and pulls that nothing refers to any more.

I dare you to go onto your docker host (even boot2docker) and do this:

docker images -q --filter "dangling=true"

My guess is that you'll see hundreds (or thousands if you've been busy) of those.

Here's the trick to getting rid of those large incontinent beasts that are leaving great piles of steaming images in your host:

docker images -q --filter "dangling=true" | xargs docker rmi

You may see a few false positives, but "hey!" at least the trash has been taken out.

Might be worth cronning it...
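
A minimal sketch of such a cron entry, assuming GNU xargs (the -r stops it running docker rmi when there's nothing to remove) and a user that's allowed to talk to the docker daemon:

0 3 * * * docker images -q --filter "dangling=true" | xargs -r docker rmi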

YMMV.