How we built News Mixer, part 3: our agile process

This post is last in a three-part series on News Mixer — the final project of my masters program for hacker-journalists at the Medill School of Journalism. It’s adapted (more or less verbatim) from my part of our final presentation. Visit our team blog at crunchberry.org to read the story of the project from its conception to birth, and to (soon) read our report and watch a video of our final presentation.

When you made software back in the day, first you spent the better part of a year or so filling a fatty 3-ring binder with detailed specifications. Then you threw that binder over the cubicle wall to the awkward guys on the programming team.

They’d spend a year building software to the specification, and after two years, you’d have a product that no one wanted, because while you were working, the world changed. Maybe Microsoft beat you to market, or maybe Google took over. Either way, you can’t dodge the iceberg.

IMG_3605 by nautical2k

Agile software development is different. With agile, we plan less up front, and correct our course along the way. We design a little, we build a little, we test a little, and then we look up to see if we’re still on course. In practice, we worked in one-week cycles, called “iterations,” and kept a strict schedule.

How we met
Every morning, we scrummed. A scrum is a five-minute meeting where everyone stands up and tells the team what they did yesterday and what they're going to do today.

And at the end of the work week, we all met for an hour to review the work done during the iteration, and to present it to our stakeholders, in this case, Annette Schulte at the Gazette and our instructors Rich Gordon and Jeremy Gilbert.

Design, develop, test, repeat!
In the following iteration, our consumer insights team tested what we built with our panel in Cedar Rapids, and their input fed into upcoming designs and development.

And we managed this process with free and open-source tools. For a couple hundred bucks in hosting costs and some elbow grease, we had version control for our code, a blog to promote ourselves (using WordPress), a task-tracking system with a wiki for knowledge management, and a suite of collaboration tools. All of it was open source (or, in the case of the Google tool suite, based heavily on open-source software), and all of it was free like speech and free like beer.

That’s all for now! Hungry for more on agile? Check out my posts about our agile process on the Crunchberry blog, and read Agile Software Development: Principles, Patterns, and Practices by Robert C. Martin; The Pragmatic Programmer by Andy Hunt and Dave Thomas; and Getting Real by the folks at 37signals.

How we built News Mixer, part 2: the trouble with Facebook Connect

This post is second in a three-part series on News Mixer — the final project of my masters program for hacker-journalists at the Medill School of Journalism. It’s adapted (more or less verbatim) from my part of our final presentation. Visit our team blog at crunchberry.org to read the story of the project from its conception to birth, and to (soon) read our report and watch a video of our final presentation.

Facebook Connect launched last week to much fanfare. Put simply, it’s a tool that gives Facebook users a way to log in to News Mixer, or any web site, without having to first set up a username and password and all the usual crap you’re forced to do when you want to use a web site. Just click the blue button, and you’re in.

Log in with Facebook Connect on News Mixer
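Under the hood, Connect-era Facebook identified a logged-in user to your site with signed cookies: you check that a hash of the sorted cookie parameters plus your application secret matches the signature Facebook set. Here's a minimal sketch of that verification scheme; the parameter names and secret here are invented for illustration, and the exact hashing details are an assumption, so check Facebook's developer docs rather than trusting this:

```python
import hashlib

def verify_fb_signature(params, sig, app_secret):
    """Check a Facebook Connect-style signature: an md5 of the
    sorted key=value pairs concatenated with the app secret."""
    payload = "".join(f"{k}={params[k]}" for k in sorted(params))
    expected = hashlib.md5((payload + app_secret).encode("utf-8")).hexdigest()
    return expected == sig

# Example: sign some fake cookie params ourselves, then verify them.
secret = "hypothetical-app-secret"
cookie_params = {"user": "12345", "expires": "0", "session_key": "abc"}
good_sig = hashlib.md5(
    ("".join(f"{k}={cookie_params[k]}" for k in sorted(cookie_params))
     + secret).encode("utf-8")
).hexdigest()

print(verify_fb_signature(cookie_params, good_sig, secret))         # True
print(verify_fb_signature(cookie_params, "bad-signature", secret))  # False
```

The upshot: your server never sees the user's Facebook password, only proof that Facebook vouches for them.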

Besides reducing barriers to entry, Connect lets us do some pretty neat stuff. Comments used to happen in the darkness — they were buried at the bottom of an article, where only the trolls dwell. But when you make a comment on News Mixer, your Facebook friends can see it — and it’s our hope that this will bring your friends into the conversation.

More identity => less jackassery
In addition, by putting your name and face next to what you say, and showing your words to all your friends, we also hope that you’ll put a little more thought into that comment you’re writing.

But at what cost?
The thing is, we can find out a lot more about you than just your friends list. Connect reveals to us all sorts of information about our users. We could wish you happy birthday or tell you when one of your favorite bands is coming to town. Or we could help you find an apartment when you change from married to “it’s complicated.”

You see, whenever you use a Facebook Connect site, or any Facebook application for that matter, you effectively make the application your “friend.” Anything you reveal to your loved ones, we know too.

Facebook’s terms of service do tell us that we’re not allowed to store this data, but this is almost impossible for them to police. Facebook does allow users to restrict the information revealed to applications, but the reality is most people have no idea how much privacy they’re giving up with each witty remark.

But I promise, we’re being good! Other sites might be creepy, but we’re not. The only data News Mixer looks at is your name and your friend list, and we don’t store anything.

That’s it for part two! Can’t wait and hungry for more? Check out Facebook Connect in action at News Mixer!

How we built News Mixer, part 1: free and open-source software

This post is first in a three-part series on News Mixer — the final project of my masters program for hacker-journalists at the Medill School of Journalism. It’s adapted (more or less verbatim) from my part of our final presentation. Visit our team blog at crunchberry.org to read the story of the project from its conception to birth, and to (soon) read our report and watch a video of our final presentation.

We could not have built News Mixer without free and open-source software. For those of you who aren’t familiar with the term, this is how the Free Software Foundation describes it:

“Free software” is a matter of liberty, not price. To understand the concept, you should think of “free” as in “free speech,” not as in “free beer.”

Free software is a matter of the users’ freedom to run, copy, distribute, study, change and improve the software.

The Free Software Definition, Free Software Foundation

Now, journalists in the room might be surprised to hear a nerd talking like this, but the truth is that we’re remarkably similar, journalists and technologists — free software and free speech are the backbone of the web. The Internet runs on free software — from the data center to your desktop.

LAMP =
Linux (operating system) +
Apache (web server) +
MySQL (database) +
Python (teh codez)

I won’t dwell too long on the super-nerdy stuff, but for those interested, News Mixer runs on a LAMP stack, sort of the standard for developing in the open-source ecosystem. Notably missing from the list are non-free technologies you may have heard of, like Java or Microsoft’s .NET.

The biggest tech choice we made was to use Django. It’s a free and open-source web development framework put together by some very clever folks at The Lawrence Journal-World. For those of you in the know, it’s a framework similar to ASP.NET or the very popular Ruby on Rails, but with a bevy of journalism-friendly features. Django is how we built real, live software so freakin’ fast.
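The core convenience a framework like Django buys you is mapping a URL pattern to a Python function that returns a response. The real News Mixer code is on Google Code; purely to illustrate the pattern, here's a framework-free toy dispatcher (the URL patterns and view names are made up, not News Mixer's actual routes):

```python
import re

# Toy "views": in a real framework, these would render templates
# and return HTTP responses instead of strings.
def article_view(article_id):
    return f"Article #{article_id}"

def comments_view(article_id):
    return f"Comments for article #{article_id}"

# Each pattern maps to a view; named regex groups become keyword args,
# which is essentially how Django's URL routing works.
URLPATTERNS = [
    (r"^/articles/(?P<article_id>\d+)/$", article_view),
    (r"^/articles/(?P<article_id>\d+)/comments/$", comments_view),
]

def dispatch(path):
    """Find the first pattern matching the path and call its view."""
    for pattern, view in URLPATTERNS:
        match = re.match(pattern, path)
        if match:
            return view(**match.groupdict())
    return "404 Not Found"

print(dispatch("/articles/42/"))           # Article #42
print(dispatch("/articles/42/comments/"))  # Comments for article #42
print(dispatch("/nope/"))                  # 404 Not Found
```

Django layers models, templates, an admin interface, and a lot more on top of this basic idea, which is why it saved us so much time.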

And you can have your very own News Mixer, gratis, right now, because News Mixer is also free and open source. We’ve released our source code under the GNU General Public License, and it’s available for download right now on Google Code. So, please, stand on our shoulders! We’re all hoping that folks will take what we’ve done, and run with it.

That’s it for part one! Can’t wait and hungry for more? Check out the Crunchberry blog, or my other posts on using free and open source software to practice journalism.

enviroVOTE: Tune in tonight to track the environmintiness of the elections

This morning, Ryan Mark and I launched enviroVOTE!

Conceived last Monday, and built in a three-day coding sprint that ended in the wee hours this morning, the site tracks the environmental impact of the elections by comparing winning candidates against endorsements from environmental groups.

enviroVOTE

The numbers

Amy Gahran got the scoop with her E-Media Tidbits post:

The site’s home page features a meter bar currently set to zero. That will change as election results come in tonight. You can also view races by state, with links to specific eco-group endorsements given to specific candidates. …

But the analysis goes deeper than that. Below the meter bar is a percentage figure. That’s where Envirovote gauges the level of enviromintiness of the 2008 elections. Boyer defines enviromintiness as “The freshness of the breath of the nation. Technically, this is the percent change in the eco-friendliness of this year’s elections compared to the last applicable elections for the same seats.”

We calculate the eco-friendliness of a candidate based on how many environmental endorsements they’ve received compared to their race-mates. Most of the endorsement data, as well as the candidate and race information, was lovingly sucked through the tubes from Project Vote Smart. Other data was pulled from Wikipedia and the environmental groups’ websites.
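The calculation described above can be sketched in a few lines of Python. This is an illustration of the idea, not the actual enviroVOTE code (which is what you should read for the real details); the function and field names here are invented:

```python
def eco_friendliness(candidate_endorsements):
    """Score each candidate in a race by their share of all the
    environmental endorsements handed out in that race."""
    total = sum(candidate_endorsements.values())
    if total == 0:
        return {name: 0.0 for name in candidate_endorsements}
    return {name: n / total for name, n in candidate_endorsements.items()}

def enviromintiness(this_year_score, last_year_score):
    """Percent change in eco-friendliness versus the last
    applicable elections for the same seats."""
    if last_year_score == 0:
        return 0.0
    return 100.0 * (this_year_score - last_year_score) / last_year_score

# A hypothetical two-way race: A got 3 endorsements, B got 1.
race = {"Candidate A": 3, "Candidate B": 1}
scores = eco_friendliness(race)
print(scores["Candidate A"])       # 0.75
print(enviromintiness(0.75, 0.5))  # 50.0
```

So if this year's winners are collectively greener than the last crop, the enviromintiness comes out positive and the breath of the nation gets fresher.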

The awesomeness to come

The enviro-meter hasn’t moved yet, but very soon it’ll show the environmental impact of today’s election.  We’ll post the results as they come in tonight, and if America made environminty choices, those bars are gonna start turning green!

So, what are you waiting for?

Check out enviroVOTE tonight, as the polls come in!  And for the play-by-play, follow us on Twitter!

NYT’s new Visualization Lab: They bring the data, you mix the charts

As announced on their excellent Open blog, the Times rolled out a neat tool yesterday:

The New York Times Visualization Lab… allows readers to create compelling interactive charts, graphs, maps and other types of graphical presentations from data made available by Times editors. NYTimes.com readers can comment on the visualizations, share them with others in the form of widgets and images, and create topic hubs where people can collect visualizations and discuss specific subjects.

It’s based on the technology developed by the folks at Many Eyes (about which I’ve blogged before). In this implementation you can’t upload your own data. Instead, the data you’re able to visualize is provided by the Times editors.

Still learning a bit

The interface is pretty kludgy, and the initial data sets don’t quite work with the canned visualizations (NYT folks: if you’re watching, see below for my bug report), but they should be able to work that stuff out.

England and Wales

My other complaint is that the data is more like what I’d look for in an atlas than I’d expect from a newspaper. Party Affiliation By Religious Tradition, National League HR per AB Leaders 2006-2008, and Sarah Palin’s Speech at the RNC are fun as a start, but don’t realize the potential of this system.

I sure hope data sets discovered while researching New York Times stories get uploaded to the lab. They’ve got to have some FOIAed federal data on their desktops. That kind of stuff is begging for citizen journalism.

Or, do it yourself

If you love this, you’ll want to take a swing at making your own charts over at the full-featured Many Eyes site. I’ve been playing with the Illinois State Board of Education’s schools report card data:

(The Times did make one huge improvement… their embedded charts have a *way* better color scheme.)

Nathan at FlowingData weighed in on the Lab last night:

I said the API was a good step forward. The Visualization Lab is more than a step. … I’m looking forward to seeing how well Times readers take to this new way of interacting.

Agreed. I’m really excited about this. It ain’t perfect, but it’s an exciting development for online news, especially if they start uploading lots of source materials and make it a bit easier to use. The big question is: Will people use it?