Megan Taylor

front-end dev, volunteacher, news & data junkie, bibliophile, Flyers fan, sci-fi geek and kitteh servant


Learning ActionScript 3.0

When I set out to learn a new programming language, I usually take baby steps:

  • Read as much as possible about the language
  • Find the experts online and see what they’re saying/doing
  • Find and work through beginner tutorials
  • Come up with an idea to build something on my own

It usually takes a good 3 months or so before I get to that last step.

I didn’t get that luxury with AS3. A few weeks ago, I was assigned to rebuild The Miami Herald’s 60 Seconds project, so I started watching the AS3 tutorials at Lynda.com.
The current project is written in AS2, with all the bits and pieces hard-coded internally. My mission was to rebuild it in AS3 and make it load its information from an XML file so that it could be updated easily.

I started out with a series of classes: one to load the XML, one to parse it, one to define the thumbnails, etc. These classes were refined and rewritten until I got the thumbnails loading onto the screen, much as they do in the original version.
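
For the curious, here’s the rough shape of that load-and-parse step. This is only a simplified sketch, not the actual project code; the file name, XML structure and layout numbers are all made up for illustration.

    package {
        import flash.display.Loader;
        import flash.display.Sprite;
        import flash.events.Event;
        import flash.net.URLLoader;
        import flash.net.URLRequest;

        // Assumes XML shaped like: <videos><video thumb="thumb1.jpg" title="..."/></videos>
        public class VideoGallery extends Sprite {
            private var xmlLoader:URLLoader = new URLLoader();

            public function VideoGallery() {
                xmlLoader.addEventListener(Event.COMPLETE, onXMLLoaded);
                xmlLoader.load(new URLRequest("videos.xml")); // hypothetical file name
            }

            private function onXMLLoaded(e:Event):void {
                var xml:XML = new XML(xmlLoader.data);
                var i:int = 0;
                // E4X makes walking the XML painless: loop over each <video> node
                for each (var video:XML in xml.video) {
                    var thumb:Loader = new Loader();
                    thumb.load(new URLRequest(String(video.@thumb)));
                    thumb.x = i * 110; // space the thumbnails out in a row
                    addChild(thumb);
                    i++;
                }
            }
        }
    }

In the real thing the thumbnails get their own class, with labels and click handling, but the flow is the same: load, parse, build.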

It’s taken me 3 weeks to get that far. Google is my best friend. The next few steps:

  • fix the interface so that when more videos are added, the screen scrolls left and right to show them
  • make clicking a thumbnail bring up a large version of the video, with a description and other details pulled from the XML
  • add commenting, feedback and rating functionality

Right now, I can’t even begin to figure out how that’s going to get done. But it will, and I’ll learn a lot from the experience.

Check my Del.icio.us bookmarks for AS3 resources.


Suggestions for changes at SOJo

This week I’ve been thinking about restructuring some areas of this site, as well as getting into a more stable posting schedule.

The first area of concern is the sidebar of this blog. I’ve already started messing with a few things, for example, the blogroll. I had the blogroll pulling automatically from a folder in Google Reader, but I think it’s more serviceable to have links to things I’ve read or bookmarked recently, instead of a list of sites that may or may not have been updated in months. What do you think?

What items are actually useful in a blog sidebar? What should go higher or lower? What do you look for?

I’m also going to change the postings from Delicious. I’ve been having problems with their auto-posting service for my bookmarks, and I’d rather have real content on here and put bookmarks in the sidebar. Besides, you can always grab the feed from my Delicious page or add me to your network.

My Twitter account is basically my “lifestream,” and I don’t want to duplicate that too much here. But I still want to provide easy access to all that information. Maybe a separate page that displays that?

I also need to update the Clips section. I want to provide a little more context, maybe break it up into sections for text, video, programming, etc.

I’d love any suggestions, and you’ll notice a few changes as I figure out what I want to do this week.


City of Memory

This is such a beautiful package.

“City of Memory is an online community map of personal stories and memories organized on a physical geographical map of New York City.”

People can add their own stories, including video, audio and photos.

The project is “Funded by the National Endowment for the Arts and The Rockefeller Foundation.”


Bandwagon of the summer: News APIs

In May, The New York Times announced its intention to build an Application Programming Interface (API) for its data. MediaBistro quoted Aron Pilhofer:

The goal, according to Aron Pilhofer, editor of interactive news, is to “make the NYT programmable. Everything we produce should be organized data.”

More details, if they can be called that:

Once the API is complete, the Times’ internal developers will use it to build platforms to organize all the structured data such as events listings, restaurant reviews, recipes, etc. They will offer a key to programmers, developers and others who are interested in mashing-up various data sets on the site. “The plan is definitely to open [the code] up,” Frons said. “How far we don’t know.”

I haven’t heard anything since then, although the article mentioned that something would be ready “in a matter of weeks.”

Today I spent some time reading the API documentation for National Public Radio.

That’s right, NPR has an API. (mmm, I love my alphabet soup.)

NPR’s API provides a flexible, powerful way to access your favorite NPR content, including audio from most NPR programs dating back to 1995 as well as text, images and other web-only content from NPR and NPR member stations. This archive consists of over 250,000 stories that are grouped into more than 5,000 different aggregations.

You can get results from Topics, Music Genres, Programs, Bios, Music Artists, Columns and Series in XML, RSS, MediaRSS, JSON and Atom, or through HTML and JavaScript widgets.

Now, I’m a bit of an NPR junkie, so I’m thinking of ways to access all this information for my personal use. And I can see how it could be useful as an internal product for NPR.
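
As a taste of what that personal use might look like, here’s a toy Flash widget that pulls story titles. Fair warning: the endpoint and parameter names below are my assumptions from skimming the docs, so check the API documentation before borrowing any of this.

    package {
        import flash.display.Sprite;
        import flash.events.Event;
        import flash.net.URLLoader;
        import flash.net.URLRequest;

        // Toy widget: fetch an NPR query as XML and trace out the story titles.
        // The URL and the "id" and "apiKey" parameters are assumptions from my
        // reading of the docs; substitute real values from your own API key.
        public class NPRTitles extends Sprite {
            private var loader:URLLoader = new URLLoader();

            public function NPRTitles() {
                var url:String = "http://api.npr.org/query?id=1001&apiKey=YOUR_KEY";
                loader.addEventListener(Event.COMPLETE, onLoaded);
                loader.load(new URLRequest(url));
            }

            private function onLoaded(e:Event):void {
                var xml:XML = new XML(loader.data);
                // the ".." operator digs through the document for every <story> node
                for each (var story:XML in xml..story) {
                    trace(story.title);
                }
            }
        }
    }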

But how would another news organization use this? Oh wait, they can’t:

The API is for personal, non-commercial use, or for noncommercial online use by a nonprofit corporation which is exempt from federal income taxes under Section 501(c)(3) of the Internal Revenue Code.

This one doesn’t make sense either:

Content from the API must be used for non-promotional, internet-based purposes only. Uses can include desktop gadgets, blog posts and widgets, but must not include e-newsletters.

And way down at the bottom of the page is a huge block of text describing excluded content. Boooo.

Check out these blog posts from Inside NPR.org, where they explain some of their decisions.

I think this was a great first step, but if you’re gonna jump on the bandwagon, make sure you don’t miss and land on the hitch.


Further, really understand what purpose this bandwagon has. If you’re going to free your data, free it! Let people and news organizations use it (always with a link back) for all kinds of crazy things. Remember kids, sharing is caring!


New project: Borrowers Betrayed

A week ago, I was assigned the task of building the story package for a series on mortgage fraud. This had been in the works at The Miami Herald for quite some time, and the investigative team was finally ready.

When we found out that Congress was working on legislation relevant to the series, the package was fast-tracked. I had one week to build this thing.

It launched yesterday morning, and if I do say so myself, it’s wicked cool. We have profiles and documentation for 4 major offenders, a Flash graphic, a couple of static graphics, a slide show and a video, in addition to all the stories.

I even got a credit line in the footer!

I learned a lot about coding fast: quick and dirty sounds good, but it pays to take just a few extra minutes to do it right. It was also a good team experience. It’s so much harder to put things together when no one knows what anyone else is doing that it almost justifies meetings! (Except that’s why we have instant messenger and Twitter.)

And guys, I forgive you the millions of revisions and changes. Everything turned out great.

Check out how they did the story.

So what’s next? I have a bunch of different projects on my plate, but I’ll give you a few hints: Video, Flash, ActionScript 3, XML, Twitter, database, Django, Python. Not another word! You can’t drag it out of me!


Journalism job trends

Ever since I made my relationship with journalism official – I finally committed on paper as a junior in college – I’ve been trolling JournalismJobs.com. That obsession only grew when I graduated 2 months ago.

I keep an eye out for opportunities for myself and people I know, but also for trends: what skills are wanted, what kinds of jobs are open, where papers are hiring.

The first two things I noticed were that the average years of experience desired had gone up, and that more upper-echelon jobs were open. Years of experience went from 2-3 to 5-and-up over the past year or so. For someone just out of college, that’s not good news. I also see a lot more ____ Editor jobs – not counting the ubiquitous “Web” or “online” editor position (usually a cut-and-paste job!) – and sports writing positions. Why are there so many sports positions open when sports is one of the most popular beats in the newsroom?

More interesting than the job titles are the job descriptions. Lists of skills and vague descriptions of expected duties tell us almost as much about the state of journalism as the recent spate of layoffs.

My favorite job description is the search for “computer jesus”. These are the job descriptions that list 100 programming languages plus multimedia skills. Yeah, right. Am I running the entire news site and producing content all by myself?

Then there’s the “we don’t know what we want you to do but we’re supposed to hire an online person” job description. This one, from The Times-News in Idaho, actually made me want to cry:

Must have visual design skills and be knowledgeable on Internet concepts and the latest developments on the Web. Must be proficient in PHP, HTML, Javascript, XML, Macromedia Flash, Dreamweaver and Photoshop. Writing skills are a plus. (emphasis added)

Writing skills are a plus? Are you serious? Hiring a journalist – you’re doing it wrong.

I realize that a lot of these are written by people who really don’t know enough to narrow down what they want. And I’m not trying to put those people down. But between this post on putting together a Web team and this one on journalism job salaries, I thought there was a place for a little something on the chaotic state of journalism job descriptions.


Miami Herald’s updated Health section

Well, my first project is live! The Health section of the Miami Herald’s Web site has been redesigned.

My contribution is that slick-looking sidebar on the right. I had some help from Stephanie Rosenblatt with the graphics, and of course she put together the Doctor Sleuth. (They’re using Caspio, and I’ve been too busy for training!) The tabs on the results pages are mine, though.

There are more projects on the table for the Health section, so hopefully I’ll get to be more involved over the next few weeks.

I finished working on a little PHP script today, with Rob Barry’s help, that queries, parses and geocodes some data. Hopefully we’ll have that in the DataSleuth system soon.


Internship, week 2

So last week I got one of my projects to the “show it to the boss” point. Supposedly it’s going live tomorrow. I will link then.

My story has been postponed until “official action has been taken,” whatever that means. Oh, well.

I have 2 other projects to finish this week, plus a couple of long-term data projects, and the grapevine tells me I’m getting a new assignment today. This is good, because I’m used to high-pressure deadlines and that hasn’t been the case so far.

Over the weekend I purchased Outlaw Journalist: The Life and Times of Hunter S. Thompson by UF’s very own Bill McKeen, as well as The Definitive Guide to Django: Web Development Done Right, by Adrian Holovaty and Jacob Kaplan-Moss.

I can’t wait for these to come in. I really want to continue to learn different programming languages and frameworks. My internet access at home right now consists of finding an open wireless network on my street and sitting outside with the mosquitoes, so some books will be really helpful.

If anyone wants to recommend other books or online resources, please do!


Internship: week the first

I gave my impressions from the first day or so of work, but a full (sort of) week has given me more time to get acquainted with my new job.

I’ve worked on several projects, though none of them are quite ready to go live yet. I’ll link to them when they launch. But so far the work has been pretty easy and well within my skills. I was surprised at how much Flash I remember, even though I haven’t touched the program in over a year.

I’m also working on a story for next week! I pitched this one myself, and while it’s nothing big, I’m happy to be writing. My greatest fear is being pigeonholed into the programming room.

I’m supposed to see about some database work in the next week or so, which will be something new to add to my arsenal. I know how databases work and how to work with them, but I’ve never actually built one.

On the side, I’m continuing to work through Django tutorials and plan on buying some books soon. I’m also in the market for a job after my internship is over.

I’ve got a couple of posts coming up that should be more stimulating, but I’ve been too busy to really organize my thoughts yet. Here’s hoping I can get one or two out next week.


IRE Conference – Day 2

This morning I met with my IRE mentor, Steve Doig, who teaches computer-assisted reporting (CAR) at Arizona State University. We talked about some of the work I’d done, people in the industry to learn from, and ways to stay on top of projects at different newspapers.

I love mentorship programs because I get a basically captive audience for my pro-online and data visualization ranting. I guess it’s also a networking shortcut.

I spent a frustrating hour and a half tracking down an internet connection so I could clear out the ::gasp:: 1000+ items that have accumulated in Google Reader after 3 days of neglect.

Then I went to a session called Cutting Edge Digital Journalism from Around the World.

The session was led by Rosental Alves, University of Texas; Sandra Crucianelli, Knight Center for Journalism in the Americas; and Fernando Rodrigues, Brazilian Association for Investigative Journalism.

One of the things that surprised me was the idea that in Central/South America, CAR/investigative reporting/databases are viewed as “a gringo thing.”

Rodrigues showed off a database he worked on of Brazilian politicians, called “25,000 politicians and their personal assets.” Politicians have to submit a certain amount of information in order to run for office, including a listing of assets. It took 2 years to track down all this information because the records were not organized and were available only in hard copy. Eventually, the database could provide a view of who the politicians were and what they owned.

The database was published online and stories were written for the newspaper (Folha) as well. Readers started to call in and report inconsistencies. Other newspapers started to use the database for their own stories.

Crucianelli presented a way to monitor government documents online in 4 different countries (El Salvador, Panama, Honduras and Nicaragua), all of which had recently changed their public-information access laws.

She found that Panama had the best online access to government documents. El Salvador had the worst access.

At noon, Matt Waite presented PolitiFact. Sexy, sexy PolitiFact. He gave a tour of all the features of the site and showed us a little of the back end: the Django admin setup.

I followed Matt and Aron to a session with Knight grant winner David Cohn, talking about Spot.Us.

Spot.Us is supposed to be an answer to the question: How will we fund reporting that keeps communities informed?

The answer is based on a premise of citizen journalism: writing is not the only means of participation.

On Spot.Us, anyone can create a story idea. Reporters can pitch stories based on those ideas to their communities, and people in the community commit money toward the pitches. Then the reporters cover the stories; some of the money goes to pay editors. The stories can be republished for free, or published exclusively if the original donor is refunded.

And that’s it for me today. I’ll be in for some afternoon sessions tomorrow.