The last time you guys heard from me, I was griping about the lack of good documentation out there. So in an effort not to be that guy who complains about stuff all the time, here are my tips for writing good documentation.
1) Know the documentation’s purpose (and audience). Is it going to be used by developers? Is it for QA folks to know the process by which to test something? Is it just a quick reference of commonly used shortcuts? Is it a basic introduction to the concepts behind the software? Is it the end-all-be-all source of knowledge for it? Whatever the case, knowing the audience you’re writing for and why they’re reading it will take you a long way toward making relevant docs.
2) Lay out how data flows through the software (diagrams are handy here). Think “Step A retrieves data from an endpoint, which causes step B to parse the data and then return it so that step C can do X with it.” In other words, spill its guts all over the place. This’ll help those who want to delve deeper into the application to do so, and also makes it easier to build on top of it. Not to mention that debugging becomes easier when things are laid out clearly and simply (one might even make a good argument that clearly == simply). When you know how data flows through the application, finding the clogs is a cinch.
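To make that concrete, even a short module can document its own data flow. Here’s a hypothetical sketch (the function names, endpoint, and record fields are all invented for illustration) spelling out steps A, B, and C in code and comments:

```python
import json
import urllib.request

# Data flow: fetch() pulls raw JSON from an endpoint (step A),
# parse() turns it into a list of records (step B), and
# summarize() does something useful with those records (step C).

def fetch(url):
    """Step A: retrieve the raw payload from the endpoint."""
    with urllib.request.urlopen(url) as resp:
        return resp.read()

def parse(raw):
    """Step B: decode the raw payload into Python objects."""
    return json.loads(raw)

def summarize(records):
    """Step C: reduce the records to a single number."""
    return sum(r["value"] for r in records)
```

With the flow spelled out like that, finding the clog is just a matter of checking which step’s output looks wrong.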
3) Provide lots of code examples. Often when learning, there is no better way to figure something out than by seeing it in action. That said, don’t make the mistake of just providing examples as your documentation. This leaves readers scratching their heads with questions like “what does this actually DO?” You might have a usage example, but unless one can deduce a lot of information from that usage example, the documentation will be very limited in its effectiveness — users should not have to look at your source code to figure out how things work.
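For instance, a docstring that pairs a usage example with a plain-English description covers both needs. The function below is invented for illustration, but the pattern applies anywhere:

```python
import re

def slugify(title):
    """Convert a post title into a URL-safe slug.

    Lowercases the title, then replaces each run of non-alphanumeric
    characters with a single hyphen -- so the reader knows *what* it
    does, not just what calling it looks like.

    Example:
        >>> slugify("Writing Good Documentation!")
        'writing-good-documentation'
    """
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
```

The example shows the call in action; the prose above it answers the “what does this actually DO?” question without a trip to the source code.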
4) Document the project on multiple levels. Personally, I like when there are three tiers of documentation: a high-level overview, a mid-level instructional level that will answer about 70% of the questions I might have, and a nitty-gritty detailed blueprint of the system. Organizing documentation like this gives readers a quick way to find anything they’re looking for. Cutting down search times will win you friends (and influence people).
5) Take notes while you develop. This will drastically cut down on the time it takes to create documentation. It doesn’t take long to type a couple lines on how something works as you’re writing it. Not only does it help with creating the documentation, but it can also help you identify flaws in logic or over-complications in the design of the software.
On Tiered Documentation
Your high-level overview should give readers an idea of what the product is capable of. It should get them thinking about all of the stuff they *could* do with the application. I like Ruby on Rails in this regard. Its Getting Started page is a good example of a high-level overview. It gives you a few commands that will allow you to have a skeleton site up in minutes. It also goes over some of the behind-the-scenes stuff that is going on, but it presents them as optional “more information” sidebars. That way it doesn’t bog you down in technical details. After all, you’re just looking for how to use the stupid thing first, right? Once you can get something functional with it, then you can start exploring the innards of the framework.
Mid-level documentation should be the meat of your docs. It should provide the average user with most of the things they’ll ever need to know to use your API/library. At this point you’re getting into more of the behind-the-scenes workings of the code. In short, this level of documentation should answer the question “If I were picking this up with no prior knowledge, is this enough to answer all but the most detailed questions someone might have?”
Low-level documentation is kind of like the open-source of documentation. You’re spilling the beans on exactly how everything works. Again, it all needs to be readable. You don’t want readers to get lost in a muck of techni-speak. A decent example of this is my blog post on the SAX parser I wrote in Python. It’s a pretty short script, so I give the entire code. I explain it from beginning to end, going through why variables are used and when they’re changed. Dan Funk made a great little diagram, based on a reading of the documentation, that helps a lot with understanding. (Disclaimer: While that post is reasonably easy to follow, there are several things I’d like to change about it at some point!) The post begins by telling you the three functions available from the new class, and what they do. Then it gets into the details of how it does those things.
In short, when writing docs with a tiered approach, you should think of whittling the application down bit by bit. This way, you progressively add more detailed information to the reader’s understanding, building on what they already know. The only caveat is that they have to have a solid understanding of what you’ve presented to them already, hence the need for clear and concise documentation.
One Last Thing
Above all, good documentation should be intuitive; otherwise its usefulness is severely diminished. After all, what good is documentation that you can’t find, navigate, or understand? If your documentation isn’t findable, navigable, and understandable, consider revamping it.
Like many people, I rushed out and purchased a Raspberry Pi when I heard all the hubbub and started seeing the amazing projects come out online. Then it sat on my desk for three weeks. Then it got moved to my backpack and I carried it around for another three weeks … and it was about to go into my “never to be completed” box-o-stuff. To be honest, it’s actually a “basement-o-stuff”, and walking through it is to walk through my museum of lost hopes and broken dreams.
But not this time. No sir. My good friends Riley Chandler and Josh Brown sat down with me last night, we got my first Raspbian SD card burned, and I booted up my Pi for the first time. Then we played some Python games and I had a blast!
Here is, start to finish, my recipe for getting the Raspberry Pi up and running, along with some tips I learned as I fumbled around last night:
Step 1: Shopping List
I went to Staples, which is neither financially prudent nor likely to get you the coolest stuff. I should buy this stuff online, but then I spend way too much money on things I don’t need.
- Wireless Keyboard and mouse (Didn’t end up using them, used wife’s instead, cost me $40)
- SanDisk SDHC Card (8 GB) $8.99 — I forgave it for advertising that it is Waterproof, because it was cheap, and it clearly indicated a Class 4 speed rating, which is what the Pi calls for.
- Surge Protector w/ 2 USB Charging Ports ($19.99) – I needed some extra outlets anyway, and this fit the bill with a 5V/1A USB port rating.
Step 2: Download Raspbian
I downloaded Raspbian from the download center of the Raspberry Pi site. Not only is this THE recommended distro, but it’s also Debian-based, so I don’t have to think hard about packages, etc., my having grown up on Debian-based systems.
I unzipped the download, which netted me a disk image.
Step 3: Write the Raspbian Disk Image to my SD Card
This bit of advice – how to quickly write to the SD card from a Linux distribution – is out there in a bunch of places, and I found one page particularly helpful.
My Linux laptop has an SD card slot and is running Ubuntu 13.04.
- Insert your SD Card.
- Run fdisk to find the device:
> dan@ook:~/Downloads$ fdisk -l
Disk /dev/mmcblk0: 7948 MB, 7948206080 bytes
81 heads, 10 sectors/track, 19165 cylinders, total 15523840 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

        Device Boot      Start         End      Blocks   Id  System
/dev/mmcblk0p1            8192    15523839     7757824    b  W95 FAT32
- From this delightfully arcane bit of output I deduced that my device is located at /dev/mmcblk0 – the “p1” is the partition, and I don’t want to write inside the partition. I want to write over the full drive.
- I ran the dd command (a convert-and-copy utility):
> dan@ook:~/Downloads$ sudo dd if=/home/dan/Downloads/2013-02-09-wheezy-raspbian.img of=/dev/mmcblk0
- I become patient, cat-like, an embodiment of the Zen Buddha, as I wait for the quiet-minded dd command to finish. It took a while. Minutes crawled by with no output. Was it hung?!? No.
I skipped Step 6, went to Step 7, and watched as my Raspberry Pi, also not one for over-communication, stared plaintively back at me with one red light. I then googled and re-ran the dd command about 15 times, each time risking doing something horrific to my other drives, till I realized what I’ve already told you in Step 3. So, for you, there is no Step 6.
- I put the SD card into my Raspberry Pi, hooked up all my other components (monitor, keyboard, sound, power supply, etc.), and booted it up, following the excellent instructions in the Raspberry Pi getting started guide.
- I watched, delighted, as a giant Raspberry showed up on my monitor. I danced. I drank beer.
- I drank beer.
- I drank beer and googled around for how to now break my Raspberry Pi in some new and unique fashion. I then spent $45 on a Kickstarter project for the BrickPi.
After that Riley showed me his XBMC Raspberry Pi, and we and our kids watched some Despicable Me in HD on our projector. It was delightful.
I’m hoping to get some additional components this weekend and perhaps take apart our remote door bell and see if I can’t connect the two in some way so that we get a message in our Campfire Chat room when someone rings the doorbell at our office while we are still sitting around in our underwear at home. Once I get that done, I’ll post that as well.
My first experience with 37signals’ Campfire product was on a fast-moving development project with a hard deadline and a gazillion geographically distant people working around the clock. We had a daily standup call, but it was essential that we all knew how the development effort was progressing throughout the day (and night).
Using Campfire specifically for project communication
A Campfire account was established for the project, and we set up a “main room” where the devs, scrum master, system administrator and project manager gathered. There were also “team rooms” set up that allowed the various sub teams to hack through their parts of the code with a smaller audience. Campfire’s persistent chat gave us the ability to “scroll” back to see what we had missed while we were away eating and sleeping. The upload file function allowed the latest scrubbed DB dumps to be shared among all of us quickly and efficiently. We didn’t use the conference call feature often, but it was still handy to have in the event of a problem.
One feature that quickly proved both useful and annoying was Campfire sounds. (Check out this Campfire Cheat Sheet for sound and emoticon codes.) The fun – who doesn’t like to hear “Push It” being played when the code is being deployed? The not-so-fun – there are folks who really can’t stand to hear “nyan cat”. You may genuinely freak people out if you play “horror” at 2:30 am during a code push. The useful – need to get the PM’s attention? Play the crickets sound, and he’ll check the Campfire room to see what’s up.
We integrated Github and Jenkins CI to automatically post to our Campfire room. On pull requests we would hear a cheery “Great Job” sound effect and see some text explaining the changes. Commits were also noted in the Campfire room, and then we would know to be on the lookout for the results of the tests on the latest code from Jenkins. If the build succeeded, we’d hear a rimshot. If we heard a sad trombone, we’d know the build was broken and would start troubleshooting so that we could get a fix in.
We also set up Nagios to report problems to the room. It’s useful for everyone to know that the reason the code push failed wasn’t that there was some problem with the code, but that the target system was unavailable.
We also set up cron jobs that sent messages to the room to remind us of regularly scheduled events, like “timesheets are due” and “it’s time for scrum, here’s the dial in number”.
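A reminder like that is just one HTTP POST to Campfire’s speak endpoint. Here’s a rough Python sketch of what one of those cron-driven scripts might look like — the account URL, room ID, and token are placeholders, and this reflects my understanding of the Campfire API (the API token as the basic-auth username, a JSON message body), so treat it as a starting point rather than gospel:

```python
import base64
import json
import urllib.request

API_TOKEN = "your-api-token-here"  # placeholder -- use your real Campfire token
ROOM_URL = "https://example.campfirenow.com/room/12345/speak.json"  # placeholder

def build_payload(body, msg_type="TextMessage"):
    """Build the JSON body; use msg_type='SoundMessage' to play a sound instead."""
    return json.dumps({"message": {"type": msg_type, "body": body}}).encode()

def speak(body, msg_type="TextMessage"):
    """POST a message to the Campfire room."""
    req = urllib.request.Request(ROOM_URL, data=build_payload(body, msg_type),
                                 headers={"Content-Type": "application/json"})
    # Campfire authenticates with the API token as the basic-auth username.
    auth = base64.b64encode((API_TOKEN + ":X").encode()).decode()
    req.add_header("Authorization", "Basic " + auth)
    return urllib.request.urlopen(req)

# In the crontab, something like:
#   0 9 * * 1-5  /usr/bin/python3 /opt/scripts/remind.py
# with remind.py calling: speak("Timesheets are due!")
```

The same speak() call with msg_type="SoundMessage" and a body like "crickets" is how the sound pranks get delivered, for better or worse.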
Using Campfire for team communication
In the case of our team at LightCastle, we are currently all local and we have an office where we enjoy collaborating in person. But most of the time we have at least a few team members who are working remotely, and we have an unwritten policy that if you are sick or irritable, you should just keep yourself to yourself.
Initially we were all communicating through email and individual Skype sessions. This was frustrating for teammates who were outside the discussion, though, because they’d be unaware of the progress and setbacks of the project until someone remembered to share what had changed. Surprisingly, it was a little bit of a sell to get everyone to use Campfire at first. But once everyone realized that they could share and discuss at length pictures of cats, weird stuff from Reddit, and what they were eating for lunch INSTANTLY with all who were logged in, it was a done deal. Collaboration flourished. When a member of the team speaks up in the room about a problem they are working through, they get faster and better feedback. Instead of simply pairing with another individual to solve the problem, anyone logged in can offer a suggestion. If a server needs to go down for emergency maintenance, a quick message to Campfire and everyone knows. New memes are distributed with ease.
There have been drawbacks. We like pranks, and Campfire has been the method for perpetuating one of our favorites: see if you can make someone’s machine blast sounds at inopportune times. Two of us were having a conference call with a customer and other developers, when someone in our LC Campfire room typed “/play loggins”. I scrambled to hit the mute button on my laptop, and there was silence on the call. Finally someone on the call spoke up and asked, “Are we all just going to pretend we didn’t hear that?” If one of us realizes that a teammate is working out of a library or coffee shop, it’s almost ensured that a cacophony of Campfire sounds will ensue.
To sum it up, Campfire is fun. And, unlike IRC, there is a low barrier to entry. We can easily invite the technical and non-technical into a common community that enables great communication and a strong sense of team across many boundaries – both physical and mental.
I draw a lot of diagrams in Inkscape, and I’ve often fantasized about turning my diagrams into dynamic animated presentations. I’ve seen several presentations lately using the excellent Prezi software, and as I was remarking on its coolness a friend pointed me at Sozi – a plugin for Inkscape that allows you to turn any Inkscape diagram into an animated SVG perfect for presentations. Since then I have collaborated on a presentation using Prezi, and I have some comparisons I would like to make.
The Sozi diagram I created is shown below (click it to see the presentation, progress the slides with the right arrow key). I put it together in order to kick off an Open Source meetup group we formed at the beginning of the year. I wanted something light and inclusive, so the talk focused on how to convey the concept of Open Source to the uninitiated.
I LOVED drawing the diagram. It was a fairly simple endeavor. Inkscape is a tool I know and love. It is a powerful, fully-fledged vector drawing program that rivals the very best in commercial software. Sozi, the plug-in, feels more like alpha code – very early development – an excellent and thorough proof of concept. And you must approach it in this light. It was particularly helpful to have viewed this Sozi Tutorial Video, presently one of the best places to get an overview of how to use Sozi. If there was ever an Open Source project deserving of a good UI developer’s love and attention, it would be Sozi. Clean it up, tighten it, and you have a killer application. In the meantime, with patience, you can still produce stellar results (like the one above, if I do say so myself). And I believe it was worth the effort.
Another huge benefit of Sozi is that it generates a presentation you can share with ANYBODY. It produces standards-compliant SVG that works just fine when I open it in Firefox or Chrome (and I suspect IE and Safari, but I’m unwilling to test it) without any additional plugins, security warnings, etc.
Prezi, on the other hand, is a polished piece of work that stands on its own and is well focused on the task at hand: creating a presentation. Because of this, getting started happens fast. Very fast. I was able to jump into the middle of another person’s work, pick up where they left off, and with a bit of good-natured fumbling around I was able to do all the things I wanted to do. I didn’t have to go watch a video or click around blindly cussing at myself – it all fell into place.
Where Prezi falls short is that it /is/ a stand-alone application – it isn’t built on top of a powerful graphics tool like Inkscape, so you can’t create your artwork within it. You have to use it just like you would use PowerPoint – that is, create the artwork elsewhere, then finagle the creation into the confines of the presentation software.
I think, perhaps, there are two distinct types of users here – and these tools will evolve over time to meet the needs of those two groups. If you live and die by the effectiveness of your presentations (as I do), then you need something like Inkscape and Sozi – which, while not 3D-modeled non-linear video editing, offers up a set of tools that will well outstrip the mainstream. If instead you just want a step beyond PowerPoint, just to up the ante on your 1990s-era competition, then Prezi is the ticket. Prezi’s sleek, well-defined, and intuitive interface provides for very effective presentations that can take you just past the expectations of most audiences.
Ultimately, the software doesn’t make the presentation. But that wasn’t the purpose of this post. I’ll leave that broader subject to the master, Mr. Edward Tufte.
I spent time today talking to Senator Mark Warner about entrepreneurship in the Shenandoah Valley of Virginia. It wasn’t just me and Mark throwing back tequila shots and shooting the breeze, but it was pretty darn cozy. I am elated Senator Warner took the time to organize this series of talks (get on the wagon, Bob Goodlatte). In my humble opinion, Mark Warner is a capable speaker, with direct and honest answers. But the best part of the meeting was the opportunity for me to talk to, and hear from, other area entrepreneurs.
As to Mark Warner’s comments, he discussed how start-up enterprises can help accelerate job creation. Mark cited a 2010 study by the Ewing Marion Kauffman Foundation that shows a connection between successful start-ups and jobs. The thrust of the message is that existing companies don’t make any appreciable difference in job growth. Which is interesting, but also a bit obvious. New business, on average, creates jobs. Particularly if you don’t count the ones that fail. But as a bullet point in the introduction to a talk on entrepreneurship, it’s a good statistic. Senator Warner also noted that job growth tends to happen around university towns and metropolitan areas – not in rural areas. The Senator commented that if the internet can create jobs anywhere, why can’t it create great tech jobs in western and southern Virginia? Why indeed. He also spoke about keeping good talent here in the States – providing mechanisms to allow graduate students from other countries to stay, rather than forcing them back to their native countries.
Senator Warner’s final comments, and many of the questions he answered, focused on the initial capital needed to get a start-up business off the ground. That’s right. Money. He spoke in detail about the SCC and pending legislation that will affect Virginia businesses. This was all well beyond my capacity to follow. I have trouble even typing the word “ligisa…” without being distracted by my cat walking by or by dust particles floating in the air — look, it’s a rainbow! But I was interested to see the buzz around crowd sourcing: the potential power of letting individuals help foster and invest in business ideas. Mark talked about the success of organizations like Kickstarter (which just helped get the RedBeard brewery here in Staunton off the ground) and Kiva. He also spoke about how to legis… leg… gleal… LEGislate such organizations when it comes to expectations of a “RETURN ON YOUR INVESTMENT”, which seems to be the big tipping point for the US government – where you go from “yea, yea, whatever” to “hey, that sounds like money”. Anyway. If you care about crowd sourcing and aren’t hyper-excited about new ways to legislate, then be alert.
Back down to little Staunton, Virginia, the small community where I make my butter. There are a few things about start-ups here in this valley that interest me. The first is this LightCastle business – which we started out of our basement, and which is slowly and patiently growing with the help of our community. The second is the Staunton Creative Community Fund – an organization that helps small start-up businesses get their feet on the ground when they can’t get loans the typical way. And yet another is giv2giv, where people are donating their time and efforts to build out a non-profit organization with a truly original and powerful idea. But based on what I heard today, my understanding of the JOBS Act, and the SCC’s forthcoming legislation, none of the start-up companies I care about are on the radar. A radar that apparently only pings when it encounters numbers larger than 1 million. Mark Warner hit the nail on the head when he said that the only real progress to be made at this level is LOCAL. And local is EXACTLY where I wanted to be all along.
Note: This is the first part of an ongoing ‘writing good documentation’ series.
Let’s face it, good documentation is hard to find. If you’ve been programming for more than a few days, you’ve probably been frustrated by trying to read the instructionals on how to use X application/framework/library/whatever. I can’t begin to tell you how many times I’ve tried some new majigger only to find that its documentation was lacking.
And we might as well go ahead and acknowledge that writing documentation isn’t exactly thrilling. That said, how many times have you written some code, then had to look at it six months later for some reason and thought, “the hell does this do?” Happened to me the other day, and I had to trace through my own code again to figure it out. The thing about documentation is that it’s going to make *everybody’s* job easier, so why not take the extra hour or two to hammer out some decent docs?
But, They Might REPLACE ME!!
Might as well get the obvious out of the way. If you’re putting all your special knowledge down on paper, of course your boss is going to want to get rid of you! After all, there’s no need to keep you around any more, right? Maybe that’s a relatively reasonable assumption on the surface. But if you’re writing docs that make your code easier for everyone to use, it really makes you more valuable to your employer. Ultimately, they’re worried about the bottom line. So if you’re saving your bosses some dough by making things easier for everyone to use, it gives them that much more reason to keep you around. Besides, there will inevitably be some case where the docs just don’t bestow expertise. That’s when you go to the documentation *writer*. They clearly know what they’re doing (or at least they *presumably* know what they’re doing).
Think of it like a vehicle’s manual. You can search the web or your owner’s manual to find out how to do some of the simpler things. But if something is seriously wrong with it, you don’t do it yourself: you take it to the mechanic. They’re vastly more familiar with a vehicle’s moving parts than you could be in your few hours of Google searches. The same goes for documentation. You can write some of the best documentation around, but you’re not going to remove the need for the original developer.
Here’s the thing that doesn’t make sense: if “programmers are lazy,” why do we still hate to write documentation? It’s easy to think of it as taking on more work. But writing documentation is really about reducing your workload by decentralizing the knowledge of how something works. You’re putting solid documentation in a place where everyone can access it and better understand the program. This way, you’re distributing the knowledge over a larger sample of people. A better way to look at it is as creating a script for humans. Think about it: it makes a task easily repeatable (by others), and breaks the task into simpler problems/tasks that are easy to do individually. When you think about it that way, how can you *not* want to write documentation?
Making It Incremental
One thing that a coworker of mine does that I like is taking notes as he’s developing. This way, not only is he solidifying the logic in his brain, he’s also creating a skeleton for more official docs. Once you have a set of barebones notes, writing actual documentation becomes a whole lot easier. Plus, when you can explain something thoroughly, you know you’ve really got it. Not to mention the fact that it really makes things easier when you face a similar problem on some other server/project. Your notes get rid of that “What did we do the last time this problem came up? I can’t remember” problem. Instead of wasting the brain power on remembering the solution to some obscure problem, you’re spending the brain power on ctrl-f’ing your notes for the right keywords. That’s a pretty good trade-off, I’d say.
That covers some of the advantages of writing good documentation. In the next installment of the series, I’ll start talking about how to actually create good documentation.
Matplotlib is one of those libraries that everybody loves. And I mean everybody. Google loves it. #python loves it. And #matplotlib definitely loves it.
I found the library as I was looking for something to graph employee performance data for a client. I’ve already posted about that adventure, so I’ll keep this blog post just to Matplotlib, which was by far the most popular recommendation I saw. On my Ubuntu system, all it took was a quick “sudo apt-get install python-matplotlib”, and it installed all the required dependencies (of which there are many). Be warned though, it takes a while. After that, all you need is a couple of imports and you’re set. I love when things are simple.
Once I got into using the library, though, I found it to be a bit more complicated. A few times I found that the documentation didn’t help much, and had to visit #matplotlib. That said, once I started fiddling around with it and looked at a few examples on the webpage I started to get the hang of it.
For our client, we just needed a few simple bar graphs of things like revenues per employee for the month and for the year. For the most part, I copy/pasted a basic bar graph example from the matplotlib samples and edited it to my needs, then wrapped it in a graph() function to make it callable. Here’s the whole bit of code handling the graphing of data.
import numpy
import matplotlib.pyplot as plot

def graph(x_keys, bar_values, number, y_label, graph_title, save_name, has_dollars, old_records_save_name):
    ind = numpy.arange(number)  # the x locations for the groups
    width = 0.35                # the width of the bars
    fig = plot.figure()
    ax = fig.add_subplot(111)
    new_bars = []
    rects1 = ax.bar(ind, bar_values, width, color='#6699CC')
    ax.set_ylabel(y_label)
    ax.set_title(graph_title)
    ax.set_xticks(ind + (width / 2.0))
    ax.set_xticklabels(x_keys)
    ax.margins(.05)
    ax.set_xlim(-.5, number)
    if has_dollars == "yes":
        for item in ax.get_yticks():
            new_bars.append("$" + str(item))
        ax.set_yticklabels(new_bars)
    def autolabel(rects):
        # label each bar with its height, centered horizontally over the bar
        for rect in rects:
            height = rect.get_height()
            ax.text(rect.get_x() + (rect.get_width() / 2.0), .8 * height,
                    '%d' % int(height), ha='center', va='bottom')
    autolabel(rects1)
    t = ax.title
    t.set_y(1.07)
    fig = plot.gcf()
    fig.set_size_inches(4, 4)
    fig.tight_layout()
    plot.subplots_adjust(wspace=0.05)
    plot.savefig(save_name, format="png")
    plot.savefig(old_records_save_name, format="png")
Parameters that I gave it were x_keys, bar_values, number, y_label, graph_title, save_name, has_dollars, and old_records_save_name. Each time you call the function, it saves a month-to-date and a year-to-date graph (old_records_save_name is the name the program uses to save the files as past records; it saves files according to month, so that when the month changes over, the old files are left in place and a new one is created, preserving the old records). Most of the parameters are self-explanatory: x_keys is used to place the values on the x-axis, bar_values holds the values to be graphed, number is used to help set the x-axis values, y_label is the label for the y-axis, graph_title is the graph’s title, save_name is the file’s save name, and has_dollars is a flag to see if the graph needs to display a “$” sign for the values it’s graphing.
The new_bars variable is a list I had to create to allow $ signs to be used for y-tick labels, which you see go into effect at the line that starts with if has_dollars == "yes". There, I call the get_yticks function so that I can get all the values that are being used for labels on the y-ticks. Then I cycle through them, prepend a $ sign to each value, and save them all in the list. Finally I call set_yticklabels with the new values as the parameter. It’s not too complicated, but it required a little snooping around to see how the y-tick labels were set.
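Stripped down to just that trick, the idea looks like this (the sample numbers are made up for illustration; I force the Agg backend so it runs without a display, and pin the ticks before relabeling, which newer matplotlib versions want):

```python
import matplotlib
matplotlib.use("Agg")  # draw off-screen; no display required
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.bar([0, 1, 2], [1200, 900, 1500], 0.35, color="#6699CC")

# Read back the auto-chosen tick values, prepend a "$" to each, and reapply.
ticks = ax.get_yticks()
labels = ["$" + str(int(t)) for t in ticks]
ax.set_yticks(ticks)  # pin the ticks so the labels line up with them
ax.set_yticklabels(labels)
fig.savefig("dollars.png", format="png")
```

Same pattern, just without the rest of the graph() plumbing around it.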
I also had to add a margin to get the bars off of the axes (because it looked ugly), and setting the xlim also helped with that. In the autolabel function, you’ll notice the rect.get_x()+(rect.get_width()/2.0) portion of code. This just sets the labels to be in the middle of each bar; otherwise they’re offset a bit. Finally, I added the fig.tight_layout() call and the fig.set_size_inches(4,4) call. These just helped to size the image and keep it clean-looking. tight_layout() is a built-in function that comes with the newer versions of matplotlib, and hot damn, I like it. It really made cleaning up the graphs easy.
All in all, I’d have to recommend matplotlib to anybody looking for a way to graph data. It has some pretty fancy features. The only downside is that the documentation was a little tricky at times, though I think somebody in #matplotlib mentioned that they’re working on that.