Monday, December 13, 2010

[LIFE] notable programs in "The Matrix" trilogy

I just participated in a real-life discussion on this topic, so, for what it's worth, I'm posting my final interpretation of the notable programs in "The Matrix" trilogy. This is just my interpretation; I'm not claiming it's better than any of the others out there -- there are certainly more thoroughly researched ones. I'm not even sure why I'm posting it. In any case, it's based purely on the content of the movies. Here goes:

  • Seraph: an authentication module. verifies that connected users are who they claim to be (by fighting them).
  • The Oracle: a random number generator. nothing she says is supposed to "make sense" but random elements are required to make the other (deterministic) programs in the matrix work correctly. her advice is not of any use to the protagonists (being human, they have sufficient built-in randomness). her advice is also erroneously taken to be genuine oracular output by the "smith" virus.
  • The Keymaker: a security module. generates security tokens (or "keys," which are portrayed as actual physical keys). once the protagonists have the keymaker, they can presumably enter even the most secure parts of the matrix using forged security tokens.
  • The Merovingian: a system administrator, probably of higher rank than the "agents." messes around in his spare time using the excess capacity of the matrix. fails in one of his primary duties (maintaining the security of the keymaker).
  • The Twins: powerful investigative tools available to the system administrator. the twins are possibly universal debuggers of some kind, or at least they have a very-low-level interface to the matrix. if one twin is maliciously modified, it can repair itself by overwriting the damaged parts with "good" copies from the other twin. to win, both twins must be disabled simultaneously.

What a dumbass post; I never thought I'd sink to this level, but here I am all the same.

Wednesday, December 8, 2010

[LIFE] how to be an ass in social networking

There are a number of excellent how-to guides on the Internet, so, in the spirit of brotherhood, I thought I'd add my own bit to the sum total of all human knowledge. Bit by bit, I'll be elaborating on the skills I've developed in my particular areas of expertise. Today's topic is "how to be an ass in social networking." It's actually a rather simple two-step process that anyone can master overnight.

Step 1 -- Preparation: Before you embark on the specific method described in step 2, it's worth asking yourself "why do I want to be an ass in social networking?" Maybe you don't! There is nothing wrong with not wanting to be an ass in social networking; it doesn't mean you're inferior to the rest of us. In fact, some[wtf? who?] would argue that you are more likely to be a kind and thoughtful person if you refrain from being an ass in social networking. In any case, if you are going to try and be an ass in social networking, it should be because of a genuine, honest, heartfelt desire to be an ass. Simply being an ass to blend in with the people you respect doesn't cut it in social networking.

Step 2 -- Get Real: The very best way to be an ass in social networking is simple, effective, and takes close to zero effort. Simply tell people your "brutally" honest opinion on any matter. It is a truth universally acknowledged that anyone's frank opinion on any matter is guaranteed to cause offense. You have managed to avoid causing offense thus far only because you have subconsciously and unknowingly squelched your honest opinion on virtually everything simply to avoid being an ass. All you have to do is take conscious control of your opinion and eliminate the "filter" that converts your honest opinions into socially acceptable ones. Almost everyone who uses this method quickly discovers that they are a "natural" at it.

So, guys, hope you enjoyed my post; it's been "how to be an ass in social networking" by zerosum42.

Monday, December 6, 2010

[LIFE] 2010y12m06d

Ok, so I've started a blog. It's a place for me to post tech tips, and you can recognize those as the posts whose titles begin with [TECH]. It's also a place for me to rant about life in general, and you can recognize those as the posts whose titles begin with [LIFE]. I guess I should introduce myself. Here's my first haiku:
I'm a programmer.
Bet you're not.
Muhahahahaha.
Later.

[TECH] amazon route 53

Amazon Route 53 provides cheap, scalable DNS. unfortunately, it has no GUI, and the XML interface is too verbose to use by hand. fortunately, it's easy to write a script for DNS updates, and it's even easier to grab an existing script (see below -- some of it is truncated, but it seems to cut & paste correctly).

the script expects the AWS-provided 'dnscurl.pl' to be available in the current directory, and '.aws-secrets' to contain working credentials; to set these up, follow the AWS-provided instructions in the Route 53 documentation. note that the script passes '--keyname my-aws-account' to dnscurl.pl, so set up your credentials under that name (or change it in the script). you do not need to "create a hosted zone"; the script below creates zones as necessary.

the script also expects a 'zones' subdirectory. each zone (remember, a "zone" corresponds to a domain name) gets a subdirectory under 'zones'. inside each zone subdirectory, each record type gets a subdirectory named A, MX, or CNAME, and each record type directory holds arbitrarily-named files containing the record data. an example should clarify the format; here are the files involved in one zone:
$ find zones -type f | egrep zerosum42.com
zones/zerosum42.com/rrs
zones/zerosum42.com/CNAME/1
zones/zerosum42.com/CNAME/2
zones/zerosum42.com/id
zones/zerosum42.com/A/1
zones/zerosum42.com/A/2
zones/zerosum42.com/MX/1
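to set up a new zone from scratch, something along these lines should do the trick ('example.com' here is just a stand-in for your own domain; you only need subdirectories for the record types you actually use):
$ mkdir -p zones/example.com/{A,MX,CNAME}
you don't create the id and rrs files yourself -- the script maintains those, as explained next.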
the id and rrs files are automatically maintained (id contains a zone identifier intelligible to AWS, and rrs contains the records confirmed by AWS the last time the script was run). the remaining files contain DNS data in a straightforward (though non-standard) format:
$ echo zones/*/*/* | tr ' ' '\n' | egrep zerosum42.com | ( while read NAME; do echo "====================> $NAME"; cat "$NAME"; done )
====================> zones/zerosum42.com/A/1
zerosum42.com 600
204.236.154.100
====================> zones/zerosum42.com/A/2
myhomepc.zerosum42.com 1
127.0.0.1
====================> zones/zerosum42.com/CNAME/1
www.zerosum42.com 600
zerosum42.com
====================> zones/zerosum42.com/CNAME/2
testing.zerosum42.com 600
zerosum42.com
====================> zones/zerosum42.com/MX/1
zerosum42.com 600
aspmx.l.google.com            10
alt1.aspmx.l.google.com       20
alt2.aspmx.l.google.com       20
aspmx2.googlemail.com         30
aspmx3.googlemail.com         30
aspmx4.googlemail.com         30
aspmx5.googlemail.com         30
basically, the first line contains a DNS "key" and a TTL (in seconds), while subsequent lines contain DNS "values," one per line. note that the script is very sensitive to extra blank lines in the input; be careful. miscellaneous notes (a quick worked example follows the list):
  • a TTL of 600 seconds is 10 minutes.
  • trailing dot: example.com. vs example.com -- the trailing dot is usually required by DNS, but it is supplied by the script; don't put it in yourself or there will be two dots and things will get very confused. the script knows about record types and basically -always- supplies the dot.
  • the only record types currently supported are A, MX, and CNAME. to get more, edit the script.
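for instance, to add a hypothetical MX record file by hand (the file name '1' is arbitrary -- the script picks up everything in the MX directory):
$ printf 'example.com 600\naspmx.l.google.com 10\nalt1.aspmx.l.google.com 20\n' > zones/example.com/MX/1
MX values go in as "mailserver priority," one per line; the script flips each into the "priority mailserver" order that route 53 expects.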
that's pretty much it. to invoke the script, pass the names of the zones to be updated as separate parameters. to update all zones, use the following shortcut (assuming your shell is bash or similar):
$ ls zones | xargs ./push.sh
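once a push goes through, you can sanity-check the records by querying one of the name servers the script prints at the end (it also saves them to zones/<zone>/dns). assuming you have dig installed -- the server name below is made up, so use one from your own listing:
$ dig @ns-000.awsdns-00.com zerosum42.com A +short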
here's the actual script. share and enjoy. oh yeah, I forgot to mention the software prerequisites ... you need the following ubuntu packages: curl, xmlstarlet, and pwgen (the script uses pwgen to generate a CallerReference when creating a zone). xmlstarlet is only used to pretty-print output for display; you can probably chop those parts out of the script without too much trouble if you don't want xmlstarlet for some reason. curl is a dependency of the AWS-provided 'dnscurl.pl' that actually talks to AWS, so that part is harder to work around if for some reason you can't get curl.
#!/bin/bash
# push.sh
# copyright (c) 2010 by andrei borac (zerosum42 AT gmail DOT com)
# this code is hereby PUBLIC DOMAIN, but there is NO WARRANTY

set -o errexit
set -o nounset
set -o pipefail

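# temporary files: request body, response body, and extracted record sets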
TMPINP=/tmp/aws-route53-inp-$$
TMPOUT=/tmp/aws-route53-out-$$
TMPRRS=/tmp/aws-route53-rrs-$$

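# thin wrappers around dnscurl.pl; each call leaves the raw API response in "$TMPOUT"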
function dnscurl_get()
{
  echo "dnscurl_get('$1')"
  ./dnscurl.pl --keyname my-aws-account -- -H "Content-Type: text/xml; charset=UTF-8" https://route53.amazonaws.com/2010-10-01/"$1" > "$TMPOUT"
}

function dnscurl_post()
{
  echo "dnscurl_post('$1')"
  ./dnscurl.pl --keyname my-aws-account -- -H "Content-Type: text/xml; charset=UTF-8" -X POST --upload-file "$TMPINP" https://route53.amazonaws.com/2010-10-01/"$1" > "$TMPOUT"
}

function dnscurl_delete()
{
  echo "dnscurl_delete('$1')"
  ./dnscurl.pl --keyname my-aws-account -- -H "Content-Type: text/xml; charset=UTF-8" -X DELETE https://route53.amazonaws.com/2010-10-01/"$1" > "$TMPOUT"
}

###
# obtains current record sets
###
function dnscurl_obtain_rrs()
{
  dnscurl_get 'hostedzone/'"$1"'/rrset?maxitems=100'
  ### ENABLE BELOW FOR DEBUGGING:
  #echo "<========== ENTER EXISTING RECORD SETS (RAW)"
  #cat "$TMPOUT"
  #echo
  #echo "<========== LEAVE EXISTING RECORD SETS (RAW)"
  cat "$TMPOUT" | sed -e 's!<ResourceRecordSet>!'"\n"'<ResourceRecordSet>!g' | sed -e 's!</ResourceRecordSet>.*!</ResourceRecordSet>!' | ( egrep '^<ResourceRecordSet>' || true ) | ( egrep -v '<Type>(NS|SOA)</Type>' || true ) | cat > "$TMPRRS"
  RRSZ="`stat -c %s "$TMPRRS"`"
  if (($RRSZ<8))
  then
    rm "$TMPRRS"
  fi
  echo "<========== ENTER ROUTE53 CURRENT RECORD SETS"
  if [ -f "$TMPRRS" ]
  then
    ( echo '<SET>'; cat "$TMPRRS"; echo '</SET>' ) | xmlstarlet fo -o | ( egrep -v 'ResourceRecord(|s|Set)>' || true )
  fi
  echo "<========== LEAVE ROUTE53 CURRENT RECORD SETS"
}

# process each zone named on the command line
for ZONE in "$@"
do
  ###
  # if the zone does not exist (no 'id' file), create it
  ###
  
  if [ ! -f zones/"$ZONE"/id ]
  then
    echo '
<CreateHostedZoneRequest xmlns="https://route53.amazonaws.com/doc/2010-10-01/">
  <Name>'"$ZONE"'.</Name>
  <CallerReference>'`pwgen -s 16 1`'</CallerReference>
  <HostedZoneConfig>
    <Comment>aws-route-53-push</Comment>
  </HostedZoneConfig>
</CreateHostedZoneRequest>
' > "$TMPINP"
    dnscurl_post hostedzone
    cat "$TMPOUT" | egrep -o '<Id>/hostedzone/[0-9A-Za-z]*</Id>' | sed -e 's:[^/]*/[^/]*/::' -e 's:<.*::' > zones/"$ZONE"/id
  fi
  
  ###
  # read ZONEID from 'id' file
  ###
  
  ZONEID="`cat zones/"$ZONE"/id`"
  
  ###
  # determine current record sets
  ###
  
  dnscurl_obtain_rrs "$ZONEID"
  
  ###
  # update records
  ###
  
  (
    echo '
<ChangeResourceRecordSetsRequest xmlns="https://route53.amazonaws.com/doc/2010-10-01/">
  <ChangeBatch>
    <Comment>aws-route53-push</Comment>
    <Changes>'
    
    ###
    # delete before create
    ###
    
    if [ -f "$TMPRRS" ]
    then
      (
        while read -r LINE
        do
          echo
          echo '<Change><Action>DELETE</Action>'"$LINE"'</Change>'
        done
      ) < "$TMPRRS"
    fi
    
    ###
    # create after delete
    ###
    
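    # one CREATE change per A record file: first line is "name ttl", remaining lines are addresses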
    if [ -d zones/"$ZONE"/A ]
    then
      for EACH in zones/"$ZONE"/A/*
      do
        (
          read FQDN TTLS
          
          echo '
      <Change>
        <Action>CREATE</Action>
        <ResourceRecordSet>
          <Name>'"$FQDN"'.</Name>
          <Type>A</Type>
          <TTL>'"$TTLS"'</TTL>
          <ResourceRecords>'
          
          while read QUAD
          do
            echo '
            <ResourceRecord>
              <Value>'"$QUAD"'</Value>
            </ResourceRecord>'
          done
          
          echo '
          </ResourceRecords>
        </ResourceRecordSet>
      </Change>'
        ) < "$EACH"
      done
    fi
    
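    # one CREATE change per MX record file: first line is "name ttl", remaining lines are "mailserver priority" (flipped to "priority mailserver" for route 53)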
    if [ -d zones/"$ZONE"/MX ]
    then
      for EACH in zones/"$ZONE"/MX/*
      do
        (
          read FQDN TTLS
          
          echo '
      <Change>
        <Action>CREATE</Action>
        <ResourceRecordSet>
          <Name>'"$FQDN"'.</Name>
          <Type>MX</Type>
          <TTL>'"$TTLS"'</TTL>
          <ResourceRecords>'
          
          while read MAIL PRIO
          do
            echo '
            <ResourceRecord>
              <Value>'"$PRIO"' '"$MAIL"'</Value>
            </ResourceRecord>'
          done
          
          echo '
          </ResourceRecords>
        </ResourceRecordSet>
      </Change>'
        ) < "$EACH"
      done
    fi
    
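    # one CREATE change per CNAME record file: first line is "name ttl", remaining lines are targets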
    if [ -d zones/"$ZONE"/CNAME ]
    then
      for EACH in zones/"$ZONE"/CNAME/*
      do
        (
          read WCDN TTLS
          
          echo '
      <Change>
        <Action>CREATE</Action>
        <ResourceRecordSet>
          <Name>'"$WCDN"'.</Name>
          <Type>CNAME</Type>
          <TTL>'"$TTLS"'</TTL>
          <ResourceRecords>'
          
          while read DEST
          do
            echo '
            <ResourceRecord>
              <Value>'"$DEST"'</Value>
            </ResourceRecord>'
          done
          
          echo '
          </ResourceRecords>
        </ResourceRecordSet>
      </Change>'
        ) < "$EACH"
      done
    fi
    
    echo '
    </Changes>
  </ChangeBatch>
</ChangeResourceRecordSetsRequest>'
  ) > "$TMPINP"
  dnscurl_post 'hostedzone/'"$ZONEID"'/rrset'
  cat "$TMPOUT"
  echo
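  # pull the change id out of the response, then poll until route 53 reports the change INSYNC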
  CHID="`cat "$TMPOUT" | egrep -o '<Id>/change/[0-9a-zA-Z]*</Id>' | sed -e 's:[^/]*/[^/]*/::' -e 's:<.*::'`"
  while ! egrep -q '<Status>INSYNC</Status>' "$TMPOUT"
  do
    echo "waiting for CHID='$CHID' ..."
    dnscurl_get 'change/'"$CHID"
    cat "$TMPOUT"
    echo
    sleep 1
  done
  
  ###
  # finally, fetch again current record sets and list dns servers
  ###
  
  dnscurl_obtain_rrs "$ZONEID"
  cp "$TMPRRS" zones/"$ZONE"/rrs
  dnscurl_get 'hostedzone/'"$ZONEID" &> /dev/null
  echo "<========== ENTER DNS SERVER LISTING"
  cat "$TMPOUT" | egrep -o '<NameServer>[0-9A-Za-z.-]*</NameServer>' | sed -e 's:[^>]*>::' -e 's:<.*::' | tee zones/"$ZONE"/dns2
  mv zones/"$ZONE"/dns2 zones/"$ZONE"/dns
  echo "<========== LEAVE DNS SERVER LISTING"
done

I think it's not bad for a day's work.