Meanwhile, Inside the Wired-Out Infrastructure of SXSW …
"I get that question a lot," he says, shaking his head over a cold beer at the Dog & Duck Pub.
Sure – but what's the answer?
Wilcox sighs. "Well, one of the charms of SXSW is that it takes place all across Austin, and that means there are tons of businesses and people doing all kinds of programs. And that particular program was conceived by, um, an outside party – it may have been a law firm, I don’t recall exactly – and they were doing a promotion and thought it would be, I guess, clever to pay homeless individuals to be hotspots. So that’s definitely not part of SXSW technical operations. However, it is a by-product of the fact that this event takes place all over the city, with people doing all kinds of things."
The event – which is what Wilcox keeps calling it because, we suppose, it’s simpler than continually saying South By Southwest or The Humongous Media Thing That Eats Austin’s Brain For Two Weeks Each Spring – does take place all over the city, and hundreds of thousands of people from all over the world come to experience it.
And you know what’s not quite ready for that to happen?
The city itself.
At least, it’s not ready until Wilcox and his teams of tech specialists make it ready.
“This year we’re planning on using about twelve miles of Ethernet cable as we build out the networks to support the event,” says Wilcox. “We build a variety of different things, but, in essence, what we do is: We go into all the locations where SXSW activities will be taking place – the Austin Convention Center, the Hilton, Palmer Auditorium, the Omni, the Stephen F. Austin, the AT&T Conference Center, all these places – and we bring in a variety of bandwidth through different vendors, using methodologies that range from copper to fiber to microwave, and we build out these elaborate show networks to support all the attendees coming to SXSW.”
Austin Chronicle: Copper, fiber, microwave – why a variety?
Scott Wilcox: Doing temporary bandwidth is really challenging. So you look at all the possible ways of delivery, depending on the location and the amount of time you have to order it in advance. We’re fortunate to be working with a great vendor called RightRound – they specialize in temporary Internet connectivity and high-density wireless and streaming for conferences and festivals.
AC: OK, you’re laying twelve miles of physical cable, adding it to the city’s existing infrastructure. So, excuse my ignorance here, is this on the streets or is it strung between telephone poles, or … ?
Wilcox: Sometimes we put antennas on the tops of buildings and run the cables into their networks. And then we go in, and – we’ll use, this year, about 500 computers and different types of network devices that require connectivity – so we’ll go into these locations, and because they’re not normally built out to accommodate the density of users we have attending SXSW, specifically the Interactive part, we have to augment the networks already in place. So we bring the bandwidth to the location, and we have to wire out the networks, put in additional access points for wireless usage, and so on. So it’s a combination of things across the city, in the approximately 200 venues we’re using this year.
AC: 200 venues. Jesus. And all that cable in these buildings or at these temporary event structures – what happens to it when SXSW’s over?
Wilcox: We bundle it up and store it for next year. Like, twelve miles of it, you know? We’re talking pallets and pallets of cable.
AC: And of course that’s only part of what y’all do.
Wilcox: Yeah, so we have multiple teams of people, different people who specialize in different areas, ranging from social media to video streaming to database operations, networking and routing, mobile, mobile apps. Kirk O’Brien leads our IT and tech production and Pro Media areas. Pro Media is the program where we do a lot of video streaming and live video shoots and audio recording; it’s an in-house operation where we use about a hundred different camera operators, producers, directors, and Internet engineers to stream the programs we’re doing live. And there’s Melissa Golding, who heads our digital-content area, which includes social media and everything we publish on sxsw.com and our other websites. And there’s Justin Bankston, who heads up our software division, and he’s in charge of all the custom applications we’re running – currently 25 custom applications that help us run the event, either for attendees or for internal use, comprising thousands and thousands of lines of custom code.
AC: Alright, and with all this complex shit you have to deal with, what’s the worst technical nightmare you’ve faced in the last five years? Like, during the event?
Wilcox: [laughs] Uh … [laughs] It’s so hard to pick out just one. Typically, to run such a large-scale temporary infrastructure, there are dozens and dozens of mini-crises that occur across the systems, so the goal is to solve them as quickly as possible – before many people notice. And I think we’ve been really successful in doing that. But the real-time experience of being inside while it’s happening is that there’s always something breaking somewhere. So the goal is about rapid response, about understanding what’s happening and fixing it quickly. From the early hours of the morning until late at night, you’re running this dynamic infrastructure that’s being used by hundreds of thousands of people. It’s all about creating the best event. And the best technology is technology that gets out of the way and allows people to do the things they would normally do.
AC: Well, sure, and you guys do a fine job – as has been seen year after year. However, sir: Is there a specific nightmare that you can share?
Wilcox: Well, maybe the year that all of our digital communications went down at once in the middle of the music festival and nobody could use any of their walkie-talkies? That was kind of rough. I think that was three years ago. And there was a moment just last year, where there was a race condition in some of the software we were using to power the SXSW schedules on the website, and it appeared to us as if there was someone in a Virginia airport adding thousands of events to their schedule. Those are the snarled kind of problems where you’re trying to find out what’s causing particular incidents. There was also the year that the main database server we use to power all the schedules and operations had a kernel panic and completely shut down, and a number of years where servers would lose their power supplies and we’d have to dynamically switch. One of the nice things now that the cloud exists is that it helps with scalability and dynamic recovery. So we spend a lot of time thinking about “What if this fails?” or “What if this happens?” and try to come up with the best plan for rapid recovery – which is quite a bit different from running, say, a typical office network, where you might have weeks to stabilize things. We only have hours, and when something unexpected happens, we only have minutes – or it becomes a large problem.
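For readers wondering what a “race condition” actually looks like: it’s what happens when two requests touch the same data at the same time and neither knows about the other. The sketch below is not SXSW’s actual code – the `Schedule` class and its fields are invented for illustration – but it shows the classic check-then-act hazard and the lock that guards against it:

```python
import threading

class Schedule:
    """Toy per-attendee schedule. Hypothetical; not SXSW's actual code."""

    def __init__(self):
        self._events = []
        self._lock = threading.Lock()

    def add(self, event_id):
        # Without the lock, two simultaneous requests can both pass the
        # "not in" check before either appends -- the check-then-act race
        # that can make one user appear to add thousands of events.
        with self._lock:
            if event_id not in self._events:
                self._events.append(event_id)

sched = Schedule()
# Fifty concurrent attempts to add the same session, as if a flaky
# client kept retrying from an airport wifi connection.
threads = [threading.Thread(target=sched.add, args=("session-42",))
           for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(sched._events))  # 1 -- the lock keeps the adds from duplicating
```

Remove the `with self._lock:` line and the duplicate-add window opens up; whether it bites depends on timing, which is exactly why these bugs are so hard to reproduce after the fact.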