IT Intervention

Intro

I recently visited a company to talk with them about how I might be able to help them with their IT and development needs. They’re an international operation and they produce components used in industrial applications.

Their website is a rich e-commerce platform and it provides informational resources to their clients. They also have a substantial set of applications that folks within the company use to provide content, product information, etc. for the public-facing site. So I was surprised to find that they employed just a handful of developers.

The developers appear to be very disciplined. They’ve adopted a version control system and a development-staging-production model of deployment. The production server is hosted in a top-tier, actively managed data-center and the hosting company actively maintains a duplicate server that they can quickly spin-up should the main server ever have a problem that prevents it from operating properly. What I’ve seen of the various pieces of the system seems relatively responsive.

They admit that the system is overly complex and that they’re not actively monitoring it. They also admit that their chief mechanism for determining when the system is in trouble is based on the complaints they receive from their customers.

Are there some red flags? Yes. Does it look terribly different from most of the other companies I’ve come across? No, not at all.

Trajectory

First off, I agree with them on the problems they know they have. The developers agree that the system has a lot of “moving parts” and that they’re relying on their users to let them know when the system is having a problem.

Their understanding of these problems will likely move them toward an active monitoring system and, possibly, some under-the-hood reorganization that will tend to incrementally simplify the system.

Shortcomings

My goal is to help this company move toward a less uncertain future. I see a number of potential issues, the least of which is the backlog of wishlist items for which they’ve been attempting to locate additional developer resources. I’ll be making every effort to help them see the shortcomings of their current system and address them.

Platform

It’s pretty old-school. I’m not trying to start a flame war and, honestly, it’s reasonably responsive, at least on the user-facing side, so I’ll refrain from naming it. With that said, I firmly believe that Ruby and/or Python become force multipliers in the hands of talented developers when compared with most of the popular web-development languages that came before them. A move in this direction would greatly reduce the amount of developer time required for the more complex tasks and allow them to hire developers more easily as the need arises.

I’ll backpedal a bit on this and concede that they can and should make strategic shifts in this direction and not attempt a ground-up rewrite.

Automated Testing

Automated testing is a powerful tool for taming the problem of system complexity. While the developers are doing some manual, checklist-style testing that centers around recent development efforts, their users have effectively become their de facto testing apparatus.

There is no question that there’s room for improvement here. The developers conceded that their choice of platform makes automated testing more difficult than it might be for others, and I absolutely agree, but it’s far from impossible. As users become used to, and dependent upon, high levels of availability for the applications they use, this will only grow in importance.

Currently, the whole system is front-end code tied directly to database-access APIs. There are a number of ways (Selenium, Capybara, Splinter, etc.) to test the front-end code, and an effective minimal strategy might be simply to author new tests as problems arise. Another approach to improving the testability of the system would be to separate the front end from the data layer via simple APIs. Those APIs are easy to test on their own and, by decoupling the front-end code from the database APIs and substituting APIs that are far easier to mock in a testing environment, the front end becomes instantly more testable as well.
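To make that concrete, here’s a minimal sketch of the kind of front-end smoke test I have in mind, using Selenium’s Python bindings. The URL and selectors are placeholders, not this client’s actual site:

    # smoke_test_products.py - a minimal front-end check (placeholder URL and selectors)
    from selenium import webdriver

    driver = webdriver.Firefox()
    try:
        driver.get("https://www.example.com/products")
        # Did the page render at all?
        assert "Products" in driver.title
        # Did the catalogue actually list something?
        product_links = driver.find_elements_by_css_selector(".product a")
        assert len(product_links) > 0
    finally:
        driver.quit()

A handful of checks like this, written whenever a customer reports a problem, quickly grows into a regression suite that runs without anyone clicking through the site by hand.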

Availability

This is where the potential to go off the rails is tremendous. Without active monitoring, the developers (who double as administrators) must first work to properly identify the issue when users report problems. But that is just the tip of what is really a massive iceberg of risk.

The rest of the story centers around recovery and the ghoulish what-ifs that keep people like me up at night. The biggest problems facing this company when things go really wrong are its dependencies: on a third-party hosting company, on the knowledge of a small number of key personnel, and on properly maintained documentation.

Mitigating the risk of significant downtime and loss of data during disaster recovery are:

  • a backup of the production server which is maintained in parity by the 3rd-party hosting company
  • a staging server which is kept in the same state as the production servers
  • backups of data and code
  • documentation
  • developers with end-to-end knowledge of the entire system

The problem with backup servers (both production and staging) is that they’re rarely tested. In the case of the hot spare maintained at the hosting company, it is only rumored to exist. The staging server has never seen a hit from a user outside the company’s internal network. That it could be made to stand in for the production server should the need arise is arguably true, but the amount of time and effort required to make it function and perform to the expectations of the production system’s users is an open question.

The problem with documentation is that it is never up-to-date. There are a variety of reasons for this, but chief among them are: it’s rarely needed, it’s updated infrequently, it represents the state of a changing system at a single point in time, and it is rarely so complete as to be an authoritative reference for a ground-up restoration of a broken system.

The problem with depending on developers with end-to-end knowledge of a system is that, from time to time, they leave. They get new jobs, get sick, die, and go on vacations. They also forget. Even the smart ones. And when they do, they rely on the documentation.

The problem with backups in systems such as these is that they require manual intervention to be of any use. And that means developers and documentation.

The “long tail” scenario here is this: while the efforts this company has made to prevent catastrophe are effective and have almost certainly worked in the past, they are by no means infallible, and when things go bad, a full recovery can easily require weeks or even months. The probability of disaster has been managed to some extent, but, in terms of lost sales, consumed IT resources, and hamstrung staff who depend on the proper functioning of the system, the potential for loss in the event of a disaster remains high.

A targeted attack by ne’er-do-wells, inadvertently destructive code, and plain old negligence are just three of a long list of problems that could cripple this company’s ability to do business. Add to that the possibility of staff unavailability due to illness, vacation, or simply poaching by a competitor, and a simple outage can become a serious problem for company shareholders.

Solutions

As I mentioned above, this company could get a lot of mileage out of some modest investments in active monitoring and automated testing.

Monitoring. For monitoring, we like Nagios. There are several other options in this area too, but the main idea is to get as complete a picture of a problem with the system as quickly as possible. In the best of circumstances, a good monitoring solution will alert the responsible parties to an issue before users have had occasion to notice. This is low-hanging fruit.
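Nagios checks are just small programs that print one line and exit with a status code (0 = OK, 1 = WARNING, 2 = CRITICAL). As a rough sketch, assuming a hypothetical health-check URL on the production site, a custom check can be as small as this:

    # check_site_health.py - a bare-bones Nagios-style check (hypothetical /health URL)
    import sys
    import urllib2

    URL = "https://www.example.com/health"

    try:
        response = urllib2.urlopen(URL, timeout=10)
        if response.getcode() == 200:
            print("OK - %s responded with 200" % URL)
            sys.exit(0)
        print("WARNING - %s responded with %d" % (URL, response.getcode()))
        sys.exit(1)
    except Exception as exc:
        print("CRITICAL - %s unreachable: %s" % (URL, exc))
        sys.exit(2)

Point Nagios at something like that and the on-call developer hears about a dead site minutes after it dies, not when the first customer emails.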

Automated testing. This would be of tremendous help to the developers, both in the day-to-day course of their development cycle and as a recovery tool to ensure that the systems are functioning as they should in a post-incident scenario.

Platform. From my soapbox, I’ll sing the praises of open source as loudly as I can. I think they could benefit greatly from a switch of operating system, database, and development platform (on the back-end, for a highly testable data layer, at the very least).

Automated Deployment. This is the A-1, prime solution of which this company is in dire need. Tools such as Ansible, Puppet and Chef allow for scripted configuration of servers, and they handily convert 12-hour installation procedures into just minutes of hands-off automated provisioning goodness. For one of our clients, I’ve used these tools to create server-creation procedures that involve a single command. They can similarly be used to develop tools to snapshot, archive and restore entire warehouses of back-end data. Put more simply, using automated deployment tools, it is possible to create systems which can be built from the ground up in minutes by developers who are brand new to the environment. We’re in the process of handing over just such a system right now, and I can say unequivocally that, without such tools, the acclimation of the new developer would be far more expensive for our client and fraught with many more problems than we’re seeing now.

Summary

As companies become more dependent on their IT infrastructure to conduct their day-to-day operations and as users become more used to a highly available networked world, it is increasingly important that IT departments stay abreast of current tools and technologies available to mitigate risks associated with failures in their IT systems, be they failures of hardware, software, vendors or IT personnel.

If this company’s situation sounds like your own, give us a call. We live for this stuff.

Django with the Flow

We’ll start by getting the obvious out of the way: if you’re writing anything other than Python code, then you’re doin’ it wrong. With the obligatory “My language is better than yours” arrogant-developer requirement met, I suppose now we can move on to other stuff.

Most of my previous work had been in Rails, but I had been itching to get at Django and just hadn’t found the time. That changed though when LightCastle decided to take our website off of Play and port it to some other framework. Dan and I decided to have a competition to see who could rewrite the site faster; he’d try in Lift (with Scala) and I’d try in Django (with Python). Another coworker considered taking a shot at it in Chicago Boss, but left the competition because it was obvious that I’d win. And really, who can fault a man for a graceful bow-out? Dan ended up being too busy to do much with the competition, so I essentially won by default.

Getting Started

At first I was a little turned off from Django because I just wasn’t catching on to it. The routes were weird. The project structure was stupid. And converting the layouts was a pain. But the more I worked with it and got the hang of it the more I really started to like it. Once it started making sense, I began to see why Django’s developers made the decisions they did.

In case you want to try following along, the code can be found here.

The easiest way to install Django is through pip. (You can also install from source, but I like having greater support for dependencies and general compatibility.) Since I had never installed the framework before, that was my first order of business. A quick call of ‘sudo apt-get install python-pip’ got me the python package installer, and a ‘sudo pip install Django’ had me almost up and running. A subsequent ‘sudo apt-get install python-pip python-dev build-essential’ sealed the deal for all the required dependencies.

Application Vs. Project

Probably the biggest difference between Django and Rails that I noticed right off the bat was the project layout. It turns out that you can rename your directories to suit your needs, but I didn’t like the default created by django-admin.py, which makes a root-level directory with a name of your choosing and then nests another directory by the same name inside it. Essentially, all you need for a functioning Django app is a settings file, so you have a lot more freedom in your project layout than I first thought.

The short explanation of Django’s approach to development is that it organizes things by *applications* and *projects*. A project is an overarching concept. So if you’re creating a website for Boeing, the website would be the application, while the project would be something like “Boeing”. The thought is that a project can potentially comprise many applications, from a blogging platform to a content management system. Separating things this way makes it easier to port an application from one project to another if you need to, the reasoning goes.

For our project, we ran “django-admin.py startproject lightcastle”, which caused the framework to create an umbrella ‘lightcastle’ directory, with another ‘lightcastle’ directory inside it. A file called manage.py is also created in the top-level ‘lightcastle’ directory, which is used for things like running an interactive shell pre-loaded with your code or starting a local server.
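For reference, the layout startproject left me with looked roughly like this (the exact files vary a little by Django version):

    lightcastle/              <- project-level directory
        manage.py
        lightcastle/          <- the nested directory of the same name
            __init__.py
            settings.py
            urls.py
            wsgi.py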

Inside my top-level ‘lightcastle’ directory (also called the project-level directory), I ran ‘python manage.py startapp website’ to create a website application. This creates several files inside a directory it makes called ‘website’: __init__.py, models.py, tests.py and views.py. Admittedly, I didn’t work in the tests.py file at all (Bad developer!!).

A Few Basics

The init files that are created inside the directories allow you to import files in the same directory as the init file as modules into other files. So if you have police_officer.py in a directory that has __init__.py in it, you can import police_officer or any of its individual methods easily. Typically the init files are empty, but you can include code to initialize stuff inside the packages when they’re imported. If you create subdirectories that will have code you want to use elsewhere, you’ll need an __init__.py file present.
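In other words, something like this (the package and class names here are made up purely for illustration):

    # Directory layout:
    #   precinct/
    #       __init__.py          (usually empty; marks 'precinct' as a package)
    #       police_officer.py    (defines, say, a PoliceOfficer class)

    from precinct import police_officer                  # import the whole module
    from precinct.police_officer import PoliceOfficer    # or just one name from it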

The models.py file is where you define your database fields. We didn’t use a database for our project, so I didn’t have anything in this file.

The views.py file was a little tricky for me; it’s not the equivalent to the views directory in Rails as I was expecting. Instead, views.py is where controller logic goes. Actual “views” go in a templates directory. Views have .html extensions, which seems a bit obvious when you think about what a view does: present information. The templates also use a bit of django magic to insert content, set variables or load other html and css files.
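So a Django “view” ends up looking a lot like a Rails controller action. A minimal, hypothetical sketch:

    # website/views.py - a minimal, hypothetical example
    from django.shortcuts import render

    def home(request):
        # controller-style logic lives here; home.html (in a templates
        # directory) is the actual presentation
        return render(request, "home.html", {"page_title": "LightCastle"})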

Another big difference between Rails and Django is the routing. One thing to remember about Django is that there’s typically very little ‘magic.’ The routing file in Django, called urls.py, is essentially a few custom python functions that take a regex as the first argument (that describes the url path), followed by the name of a handler method. There really isn’t a whole lot of special “Django syntax”, which is a big plus in my book compared to Rails.

Another thing I didn’t like much about Django is that there are usually two routing files for every application in your project: the application-level urls.py file, and the project-level urls.py file. Frankly, I skipped ever using the application-level urls.py file because I had no reason to have one. Essentially your urls are routed through the project-level file first, and then sent to the application-level routing file.

For our website, I just used the project-level routing file because it was easiest and quickest. As I fooled around with the urls.py file, I eventually came to really enjoy it. Each route in the file generally points to a views.py controller method that decides which template to use before rendering the “context” as an HTTP response. You can also pass template files in directly as the second argument by calling TemplateView.as_view(template_name=”some_file.html”). That was handy for keeping the views.py file pretty empty, though I probably could have created a small function in there to set a Template and Context.
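Putting those pieces together, a project-level urls.py of that era looks roughly like this (the routes are hypothetical, and the patterns() call is Django 1.x-style syntax):

    # lightcastle/urls.py - sketch of project-level routing, Django 1.x style
    from django.conf.urls import patterns, url
    from django.views.generic import TemplateView

    urlpatterns = patterns('',
        # a regex describing the path, then the handler in website/views.py
        url(r'^$', 'website.views.home'),
        # or hand a template straight to a generic view and skip views.py entirely
        url(r'^about/$', TemplateView.as_view(template_name="about.html")),
    )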

Conclusion?

I suppose that’s it for my first thoughts on Django. Ultimately, I really enjoyed the framework and look forward to messing with it some more. Keep a lookout for future blog posts about it, as I might end up adding my own Django tutorial to the myriad already available online. I might do one on the WSGI Apache module, too, since it was a bit of a pain to mess with, and I’ll probably include a portion on deploying Django too (turns out capistrano, which is Ruby-based, works pretty darn well).

The Future of Ruby, RubyNation 2013

Ruby is growing up.  Never was this more evident than at the recent 2013 RubyNation conference in Maryland.

Two things delighted me.  The first was the number of women presenting and attending the conference. The second was a renewed respect and appreciation of classic software design practices.

When I started with Ruby and Rails so many years ago, this is what I heard: “Design Patterns are already applied and built into Rails; you should focus on learning the Rails conventions.” And that message was dispassionately delivered by pasty, overweight men drinking lousy coffee and wearing Birkenstocks. But that “golden path” is not for everyone, and it’s led many of us through some dark and scary woods. I can understand the desire to program without knowing all the core academic Object Oriented stuff, but it is exactly that stuff that I enjoy the most. I’m thrilled to see it coming back around as an accepted practice. It turns out it is O.K. to look before you leap.

And this grand new vision of thinking first is not chanted down from pompous, mid-life-crisis, balding, Charlie Brown-looking drones. It’s coming from the women! Sandi Metz, author of Practical Object-Oriented Design in Ruby, provided an exceptional presentation on the writing of good tests. With a focus on well-defined interfaces between classes and the introduction of a little ‘ceremony’, it becomes possible to develop great, easily tested software. Tests can remain simple and clean and lead to better design throughout your application. I will be owning her book.

Not to be outdone on the front of thoughtful web development, the white man did represent well, through the likes of Jason Clark. Jason, from New Relic, provided a detailed overview of the application of the Event Pattern – a modified version of the Observer design pattern from the original Gang of Four book. Jason’s talk focused on real-world problems faced at New Relic and how they overcame them through some thoughtful design. The Event Pattern provided them a means to decouple their classes and clean up their code, making it more easily tested, modified, and extended. He followed up with details on the use of ActiveSupport::Notifications – the new (and well thought out) eventing system within Rails.

“Academia and Hacking”, by Emily Stolfo, was the most innovative and surprising talk I encountered at the conference. Emily is currently teaching a Ruby on Rails class at Columbia University. For those of you with a CS degree, you have some idea how innovative that is in and of itself. Her “5 Hacker Habits” address a gap in our present education experience. She’s worth following.

I’m sorry to say I missed Kerri Miller’s presentation on code metrics (I’m forced to blame Josh Adams’ presentation on robotics, which was way too much fun). My good friend Kendal felt she provided one of the best talks at the conference, so I found her video on Confreaks. Kerri is a funny, insightful, opinionated and confident presenter, with great thoughts on how we can make effective use of code metrics. She and Sandi Metz really stand out from the crowd as effective, capable technical leaders.

There were many other great presentations. Russ Olson gave a great introductory talk on insight and intuition, and how to make the best use of them. Both Dave Copeland and Andy Piszka offered thorough, well-examined approaches to scaling large Ruby applications iteratively. And Dave Bock did an excellent job hosting a series of lightning talks that were very enlightening and fun.

So what is the future of Ruby?  I think it’s a future where people spend less time “meta-programming” and where the community reforms itself around core principles rather than core people. Keep an eye on groups like Rails Girls DC; they are the drivers of the Ruby community now, and they will have a powerful impact on it.

Five Principles to Good Documentation Writing (Good Documentation Series)

The last time you guys heard from me, I was griping about the lack of good documentation out there. So in an effort not to be that guy who complains about stuff all the time, here are my tips for writing good documentation.

Five Principles

1) Know the documentation’s purpose (and audience). Is it going to be used by developers? Is it for QA folks to know the process by which to test something? Is it just a quick-reference of commonly used shortcuts? Is it a basic introduction to the concepts behind the software? Is it the end-all-be-all source of knowledge for it? Whatever the case, knowing the audience you’re writing for and why they’re reading it will take you a long way toward making relevant docs.

2) Lay out how data flows through the software (diagrams are handy here). Think “Step A retrieves data from an endpoint, which causes step B to parse the data and then return it so that step C can do X with it.” In other words, spill its guts all over the place. This’ll help those who want to delve deeper into the application to do so, and also makes it easier to build on top of it. Not to mention that debugging becomes easier when things are laid out clearly and simply (one might even make a good argument that clearly == simply). When you know how data flows through the application, finding the clogs is a cinch.

3) Provide lots of code examples. Often when learning, there is no better way to figure something out than by seeing it in action. That said, don’t make the mistake of providing only examples as your documentation. This leaves readers scratching their heads with questions like “what does this actually DO?” You might have a usage example, but unless one can deduce a lot of information from that usage example, the documentation will be very limited in its effectiveness; users should not have to look at your source code to figure out how things work. (I’ll sketch a quick example of this after the list.)

4) Document the project on multiple levels. Personally, I like when there are three tiers of documentation: a high-level overview, a mid-level instructional level that will answer about 70% of the questions I might have, and a nitty-gritty detailed blueprint of the system. Organizing documentation like this gives readers a quick way to find anything they’re looking for. Cutting down search times will win you friends (and influence people).

5) Take notes while you develop. This will drastically cut down on the time it takes to create documentation. It doesn’t take long to type a couple lines on how something works as you’re writing it. Not only does it help with creating the documentation, but it can also help you identify flaws in logic or over-complications in the design of the software.
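As promised under principle 3, here’s a small, entirely hypothetical illustration of pairing a usage example with a plain-English explanation, so the reader never has to open the source to know what they’ll get back:

    def normalize_phone(raw, country="US"):
        """Return `raw` reduced to digits, with a country prefix for US numbers.

        Example:
            >>> normalize_phone("(434) 555-0100")
            '14345550100'

        Non-digit characters are stripped; anything that isn't a 10-digit
        US number is returned as bare digits, unprefixed.
        """
        digits = "".join(ch for ch in raw if ch.isdigit())
        if country == "US" and len(digits) == 10:
            return "1" + digits
        return digits

The example shows *what* you get; the prose around it says *why* and for which inputs it holds. Both halves matter.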

On Tiered Documentation

Your high-level overview should give readers an idea of what the product is capable of. It should get them thinking about all of the stuff they *could* do with the application. I like Ruby on Rails in this regard. Its Getting Started page is a good example of a high-level overview. It gives you a few commands that will allow you to have a skeleton site up in minutes. It also goes over some of the behind-the-scenes stuff that is going on, but it presents them as optional “more information” sidebars. That way it doesn’t bog you down in technical details. After all, you’re just looking for how to use the stupid thing first, right? Once you can get something functional with it, then you can start exploring the innards of the framework.

Mid-level documentation should be the meat of your docs. It should provide the average user with most of the things they’ll ever need to know to use your API or library. At this point you’re getting into more of the behind-the-scenes workings of the code. In short, this level of documentation should answer the question “If I were picking this up with no prior knowledge, is this enough to answer all but the most detailed questions someone might have?”

Low-level documentation is kind of like the open source of documentation. You’re spilling the beans on exactly how everything works. Again, it all needs to be readable. You don’t want readers to get lost in a muck of techni-speak. A decent example of this is my blog post on the SAX parser I wrote in Python. It’s a pretty short script, so I give the entire code. I explain it from the beginning on through, going through why variables are used and when they’re changed. Dan Funk created a great little diagram that helps a lot with the understanding, which he created based on a reading of the documentation. (Disclaimer: While that post is reasonably easy to follow, there are several things I’d like to change about it at some point!) The post begins by telling you the three functions available from the new class, and what they do. Then it gets into the details of how it does those things.

In short, when writing docs with a tiered approach, you should think of whittling the application down bit by bit. This way, you progressively add more detailed information to the reader’s understanding, building on what they already know. The only caveat is that they have to have a solid understanding of what you’ve presented to them already, hence the need for clear and concise documentation.

One Last Thing

Above all, good documentation should be intuitive; otherwise its usefulness is severely diminished. After all, what good is documentation that you can’t find, navigate or understand? If your documentation isn’t findable, navigable and understandable, consider revamping it.

Raspberry Pi, First 30 minutes

The $35 board that will likely cost you sleepless nights, encourage you to dismantle your spouse’s favourite hardware, and cost you $400 in additional (awesome) add-ons.

Like many people, I rushed out and purchased a Raspberry Pi when I heard all the hubbub and started seeing the amazing projects come out online.  Then it sat on my desk for 3 weeks.  Then it got moved to my backpack and I carried it around for another three weeks …  and it was about to go into my “never to be completed” box-o-stuff.  To be honest, it’s actually a “basement-o-stuff”, and walking through it is to walk through my museum of lost hopes and broken dreams.

But not this time.  No sir.  My good friends Riley Chandler and Josh Brown sat down with me last night, we got my first Raspbian SD disk burned, and I booted up my Pi for the first time.  Then we played some Python games and I had a blast!

Here is, start to finish, my recipe for getting the Raspberry Pi up and running, along with some tips I learned as I fumbled around last night:

Step 1:  Shopping List

To digress … Riley showed me this awesome piece of equipment: a USB keyboard that works great for his XBMC build.

I went to Staples, which is neither financially prudent nor does it get you the coolest stuff.  I should get stuff online, but then I spend way too much money on things I don’t need – vis-à-vis the image to the right.

  1. Wireless keyboard and mouse (didn’t end up using them, used my wife’s instead; cost me $40)
  2. SanDisk SDHC card (8 GB), $8.99 – I forgave it for advertising that it is Waterproof, because it was cheap, and it clearly indicated a speed rating of 4, which is what the Pi calls for.
  3. Surge protector w/ 2 USB charging ports ($19.99) – need some extra ports anyway, and this fit the bill with a 5V/1A USB port rating.

Step 2: Download Raspbian

I downloaded Raspbian from the download center of the Raspberry Pi site.  Not only is this THE recommended distro, but it’s also Debian-based, so I don’t have to think hard about packages, etc., my having grown up on Debian-based systems.

I unzipped the download, which netted me a disk image:
2013-02-09-wheezy-raspbian.img

Step 3: Write the Raspbian Image to my SD Card

This bit of advice – how to quickly write to the SD card from a Linux distribution – is out there in a bunch of places.  I found this page particularly helpful: http://www.embeddedarm.com/support/faqs.php?item=10

My linux laptop has an SD Card slot, and is running Ubuntu 13.04.

  1. Insert your SD Card.
  2. run fdisk
    > dan@ook:~/Downloads$ fdisk -l
    Disk /dev/mmcblk0: 7948 MB, 7948206080 bytes
    81 heads, 10 sectors/track, 19165 cylinders, total 15523840 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
            Device Boot      Start         End      Blocks   Id  System
    /dev/mmcblk0p1            8192    15523839     7757824    b  W95 FAT32
  3. From this delightfully arcane bit of output I deduced that my device is located at /dev/mmcblk0 – the “p1” is the partition, and I don’t want to write inside the partition.  I want to write over the full drive.
  4. I run the dd command (a convert and copy command)
    > dan@ook:~/Downloads$ sudo dd if=/home/dan/Downloads/2013-02-09-wheezy-raspbian.img of=/dev/mmcblk0
  5. I become patient, cat-like, an embodiment of the Zen Buddha, as I wait for the quiet-minded dd command to finish.  It took a while.  Minutes crawled by, nothing output, was it hung?!?!  No.
  6. This is the face of the dd command. Notice the disdain and anger in its eyes for my lack of competence.

    I skipped Step 6, went to Step 7 and watched as my Raspberry Pi, also not one for over-communication, stared plaintively back at me with one red light.  I then googled and re-ran the dd command about 15 times, each time risking doing something horrific to my other drives, till I realized what I’ve already told you in step 3.  So, for you, there is no Step 6.

  7. I put the SD Card into my Raspberry Pi, hooked up all my other components (monitor, keyboard, sound, power supply, etc.) and booted it up, following the excellent instructions provided by the Raspberry Pi getting started guide.
  8. I watched, delighted, as a giant Raspberry showed up on my monitor.  I danced.  I drank beer.
  9. I drank beer.
  10. I drank beer and googled around for how to now break my Raspberry Pi in some new and unique fashion.  I then spent $45 on a kickstarter project for the BrickPi.

After that Riley showed me his XBMC Raspberry Pi, and we and our kids watched some Despicable Me in HD on our projector.  It was delightful.

I’m hoping to get some additional components this weekend and perhaps take apart our remote doorbell to see if I can’t connect the two in some way, so that we get a message in our Campfire Chat room when someone rings the doorbell at our office while we are still sitting around in our underwear at home.  Once I get that done, I’ll post that as well.

Campfire Chat – for Devs, SAs, and the People who put up with them

My first experience with 37signals’ Campfire product was on a fast-moving development project with a hard deadline and a gazillion geographically distant people working around the clock.  We had a daily standup call, but it was essential that we all knew how the development effort was progressing throughout the day (and night).

Using Campfire specifically for project communication

A Campfire account was established for the project, and we set up a “main room” where the devs, scrum master, system administrator and project manager gathered.  There were also “team rooms” set up that allowed the various sub teams to hack through their parts of the code with a smaller audience. Campfire’s persistent chat gave us the ability to “scroll” back to see what we had missed while we were away eating and sleeping.  The upload file function allowed the latest scrubbed DB dumps to be shared among all of us quickly and efficiently.  We didn’t use the conference call feature often, but it was still handy to have in the event of a problem.

One feature that quickly proved both useful and annoying was Campfire sounds.  (Check out this Campfire Cheat Sheet for sound and emoticon codes)  The fun – who doesn’t like to hear “Push it” being played when the code is being deployed?  The not so fun – there are folks who really can’t stand to hear “nyan cat”.   You may genuinely freak people out if you play “horror” at 2:30 am during a code push.  The useful – you need to get the PMs attention? Play the crickets sound, and he’ll check the campfire room to see what’s up.

We integrated GitHub and Jenkins CI to automatically post to our Campfire room.  On pull requests we would hear a cheery “Great Job” sound effect and see some text explaining the changes.  Commits were also noted in the Campfire room, and then we would know to be on the lookout for the results of the tests on the latest code from Jenkins.  If the build succeeded, we’d hear a rimshot.  If we heard a sad trombone, we’d know the build was broken and would start troubleshooting so that we could get a fix in.

We also set up Nagios to report problems to the room.  It’s useful for everyone to know that the reason the code push failed wasn’t that there was some problem with the code, but that the target system was unavailable.
We also set up cron jobs that sent messages to the room to remind us of regularly scheduled events, like “timesheets are due” and “it’s time for scrum, here’s the dial-in number”.
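Those cron reminders were just small scripts hitting Campfire’s HTTP API. A rough sketch of the idea, using the requests library (the subdomain, room ID, and token below are placeholders):

    # campfire_remind.py - post a reminder to a Campfire room (placeholder credentials)
    import json
    import requests

    SUBDOMAIN = "yourcompany"     # yourcompany.campfirenow.com
    ROOM_ID = "123456"
    API_TOKEN = "your-api-token"  # Campfire auth: token as username, "X" as password

    def speak(message):
        url = "https://%s.campfirenow.com/room/%s/speak.json" % (SUBDOMAIN, ROOM_ID)
        payload = {"message": {"type": "TextMessage", "body": message}}
        requests.post(url, data=json.dumps(payload), auth=(API_TOKEN, "X"),
                      headers={"Content-Type": "application/json"})

    if __name__ == "__main__":
        speak("Timesheets are due!")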

Using Campfire for team communication

In the case of our team at LightCastle, we are currently all local and we have an office where we enjoy collaborating in person.  But most of the time we have at least a few team members who are working remotely, and we have an unwritten policy that if you are sick or irritable, you should just keep yourself to yourself.

Initially we were all communicating through email and individual Skype sessions.  This was frustrating for teammates who were outside the discussion, though, because they’d be unaware of progress and setbacks on the project until someone remembered to share what had changed.  Surprisingly, it was a little bit of a sell to get everyone to use Campfire at first.  But once everyone realized that they could share and discuss at length pictures of cats, weird stuff from Reddit, and what they were eating for lunch INSTANTLY with all who were logged in, it was a done deal.  Collaboration flourished.  When a member of the team speaks up in the room about a problem they are working through, they get faster and better feedback.  Instead of simply pairing with another individual to solve the problem, anyone logged in can offer a suggestion.  If a server needs to go down for emergency maintenance, a quick message to Campfire and everyone knows.  New memes are distributed with ease.


There have been drawbacks.  We like pranks, and Campfire has been the method for perpetuating one of our favorites: See if you can make someone’s machine blast sounds at inopportune times.  Two of us were having a conference call with a customer and other developers, when someone in our LC Campfire room typed “/play loggins”.  I scrambled to hit the mute button on my laptop, and there was silence on the call.  Finally someone on the call spoke up and asked, “Are we all just going to pretend we didn’t hear that?”  If one of us realizes that a team mate is working out of a library or coffee shop, it’s almost ensured that a cacophony of Campfire sounds will ensue.

To sum it up, Campfire is fun.  And, unlike IRC, there is a low barrier to entry. We can easily invite the technical and non-technical into a common community that enables great communication and a strong sense of team across many boundaries – both physical and mental.

Prezi vs Inkscape’s Sozi

I draw a lot of diagrams in Inkscape, and I’ve often fantasized about turning my diagrams into dynamic animated presentations.  I’ve seen several presentations lately using the excellent Prezi software, and as I was remarking on its coolness, a friend pointed me at Sozi – a plugin for Inkscape that allows you to turn any Inkscape diagram into an animated SVG perfect for presentations.  Since then I’ve collaborated on a presentation using Prezi, and I have some comparisons I would like to make.

The Sozi diagram I created is shown below (click it to see the presentation, progress the slides with the right arrow key).  I put it together in order to kick off an Open Source meetup group we formed at the beginning of the year.  I wanted something light and inclusive, so the talk focused on how to convey the concept of Open Source to the uninitiated.

presentation

I LOVED drawing the diagram. It was a fairly simple endeavor.  Inkscape is a tool I know and love.  It is a powerful, fully fledged vector drawing program that rivals the very best in commercial software.  Sozi, the plug-in, feels more like alpha code – very early development – an excellent and thorough proof of concept, and you must approach it in that light.  It was particularly helpful to have viewed this Sozi Tutorial Video, presently one of the best places to get an overview of how to use Sozi.  If there was ever an Open Source project deserving of a good UI developer’s love and attention, it would be Sozi.  Clean it up, tighten it, and you have a killer application.  In the meantime, with patience, you can still produce stellar results (like the one above, if I do say so myself).  And I believe it was worth the effort.

Another huge benefit of Sozi is that it generates a presentation you can share with ANYBODY.  It produces standards-compliant SVG that works just fine when I open it in Firefox or Chrome (and I suspect IE and Safari, but I’m unwilling to test it) without any additional plugins, security warnings, etc., etc.

Prezi, on the other hand, is a polished piece of work that stands on its own and is well focused on the task at hand: creating a presentation.  Because of this, getting started happens fast.  Very fast.  I was able to jump into the middle of another person’s work, pick up where they left off, and with a bit of good-natured fumbling around I was able to do all the things I wanted to do.  I didn’t have to go watch a video or click around blindly cussing at myself – it all fell into place.

Where Prezi falls short is that it *is* a stand-alone application – it isn’t built on top of a powerful graphics tool like Inkscape, so you can’t create your artwork within it.  You have to use it just like you would use PowerPoint – that is, create the artwork elsewhere, then finagle the creation into the confines of the presentation software.

I think, perhaps, there are two distinct types of users here – and these tools will evolve over time to meet the needs of those two groups.  If you live and die by the effectiveness of your presentations (as I do), then you need something like Inkscape and Sozi – which, while not 3D-modeled non-linear video editing, offers up a set of tools that will well outstrip the mainstream.  If instead you just want a step beyond PowerPoint, just to up the ante on your 1990s-era competition, then Prezi is the ticket.  Prezi’s sleek, well-defined and intuitive interface provides for very effective presentations that can take you just past the expectations of most audiences.

Ultimately, the software doesn’t make the presentation.  But that wasn’t the purpose of this post.  I’ll leave that broader subject to the master,  Mr. Edward Tufte.
