When the New York Times published a story about the prolific Kronos Quartet, we were floored by the sophisticated presentation that revealed the subtle communication between performers. Using the kind of 3D motion capture systems often found in high-end animated movies (think Gollum, Avatar, etc.), Senior Graphics Editor Graham Roberts and the Times team filmed a performance and turned that footage into a gorgeous visualization rendered in flowing particle streams. We caught up with Graham to find out more about this intriguing project, Inside the Quartet.
This is a crazy project. How did it come about?
Like most projects we do, it was in a sense an iteration on something we did previously, in this case a project called Connecting Music and Gesture, which aimed to demystify conducting. For that project we put renowned conductor Alan Gilbert in a motion capture suit to represent his movements in a new, engaging way … make the invisible visible, etc. Daniel Wakin, our culture editor, conducted (no pun intended) the interview for that project. Some time after, Dan came to us with the idea that the Quartet could use a similar kind of treatment to give people an inside look at the subtle kinds of communication that musicians use to make it all work. To see if this made sense, we met up with a casual quartet composed of some musicians that Dan knew, to watch them play and discuss some of the issues of communication with them. What we found was really interesting: there really was a whole world of subtle interactions that would be worth trying to reveal.
I felt that to get a general audience interested in this, we would want to work with a really well known group, and I couldn't think of anyone better than Kronos Quartet, known for their openness to experimentation and for being in existence for more than 40 years. I reached out to them, and to my delight they had seen the Connecting Music and Gesture project and liked it very much. They were up for the experiment; now we would just have to find a time in their busy schedule to make it work! Getting everyone together on a particular date was one of the major challenges of this.
Clearly there was a sophisticated team in place. Can you illuminate some details about the production process, team and timeline?
There were several components to make this all work the day of the shoot:
- First, I knew this had to be in an environment that we could control, where we could get the highest-quality audio recording, including multi-tracked isolated audio signals without using isolation booths. I had some experience outside of work at a studio called Dubway in downtown Manhattan, which I knew had a relatively large live room with high ceilings and a top-notch staff. I reached out to them explaining that we wanted to record Kronos Quartet for an experimental project, and they were very accommodating. We were trying to time the day around when Kronos Quartet would be nearby on tour and we would be ready as well; there was a lot of scheduling and re-scheduling. Kronos brought their own sound engineer, who worked with Dubway's engineer.
- Next, the point cloud data was recorded with an array of Microsoft Kinects by a 3-person team called OpenShades. They hacked a bunch of old laptops we had, using a dedicated laptop for each Kinect, and later helped us sync this data in time and space.
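The interview doesn't detail how OpenShades synced the five streams, but a minimal sketch of the underlying idea, nearest-timestamp alignment assuming a measured per-laptop clock offset, might look like this (the function names and data layout are illustrative, not from the actual OpenShades pipeline):

```python
# Hypothetical sketch: each Kinect's laptop stamps frames with its own
# clock, so we subtract a measured per-machine clock offset and then match
# each reference frame to the nearest-in-time frame of every other stream.

def nearest_frame(frames, t):
    """Return the (timestamp, frame) pair closest in time to t."""
    return min(frames, key=lambda tf: abs(tf[0] - t))

def align_streams(reference, others, offsets):
    """Align every other stream to the reference stream's timeline.

    `reference` is a list of (timestamp, frame) pairs from one Kinect;
    `others` maps a stream name to its own (timestamp, frame) list;
    `offsets` maps a stream name to that laptop's clock offset in seconds.
    """
    aligned = []
    for t, frame in reference:
        row = {"ref": frame}
        for name, frames in others.items():
            # Shift the stream onto the reference clock, then pick the
            # frame nearest in time to this reference frame.
            corrected = [(ts - offsets[name], f) for ts, f in frames]
            row[name] = nearest_frame(corrected, t)[1]
        aligned.append((t, row))
    return aligned
```

Spatial alignment (registering the five point clouds into one coordinate frame) is a separate calibration step not shown here.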
- Two of our best videographers, Leslye Davis and Catherine Spangler, shot video of the performances and interviews, and later helped organize all of this footage.
- Dan Wakin, our culture editor, conducted the interview.
- and myself, to make sure everything was coming together, and that we were covering everything we needed.
We had a ton of material, and originally we had been hoping to publish in time for a Kronos performance at Carnegie Hall a mere month away. It was clear that there were far too many considerations to make this happen. So the project became something I thought a lot about and made incremental progress on between many other things that were cropping up, which is not unusual in this environment. I needed to learn how to work with the data from the Kinects, for one. Jeremy White in graphics and I worked on this particle data. One of Jeremy's important contributions was to find a way to cut the musicians' particles out from everything else. The particle data was a huge mess, including all of the background of the studio, mic stands, music stands, etc. He programmatically created polygonal zones around each musician that allowed us to separate the particles that belonged to musicians from all of the other particles, and, critically, to group the particles belonging to a particular musician, so that I could then affect them as a group based on what that musician was playing.
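A minimal sketch of this kind of zone-based grouping might look like the following. The zone shapes, musician names, and data layout are assumptions for illustration; the real pipeline worked inside Maya on far larger particle sets:

```python
# Hypothetical sketch: classify raw particles by musician using 2D
# polygonal zones on the floor plane (x, z), one zone per player.

def point_in_polygon(x, z, polygon):
    """Ray-casting test: is the (x, z) point inside the polygon?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, z1 = polygon[i]
        x2, z2 = polygon[(i + 1) % n]
        # Does a horizontal ray from the point cross this edge?
        if (z1 > z) != (z2 > z):
            x_cross = x1 + (z - z1) * (x2 - x1) / (z2 - z1)
            if x < x_cross:
                inside = not inside
    return inside

def group_particles(particles, zones):
    """Assign each (x, y, z) particle to a musician's zone, or drop it.

    `zones` maps a musician name to a list of (x, z) polygon vertices.
    Particles outside every zone (mic stands, walls, ...) are discarded.
    """
    groups = {name: [] for name in zones}
    for x, y, z in particles:
        for name, polygon in zones.items():
            if point_in_polygon(x, z, polygon):
                groups[name].append((x, y, z))
                break
    return groups
```

Once grouped this way, each musician's particles can be animated as a unit, which is the property the project needed.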
And then of course there was all of the 3D work to make the dot cloud look a certain way, all of the camera angles, etc. And then connecting each cloud to the correct audio signal, so the clouds would appear and disappear along with that musician's contributions to the overall sound.
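One way to sketch that appear-and-disappear mapping, an amplitude envelope per musician driving the opacity of that musician's cloud, is shown below. The sample rate, frame rate, and silence threshold are illustrative assumptions; the project did this step with a plug-in rather than custom code:

```python
# Hypothetical sketch: compute a per-video-frame RMS envelope from a
# musician's isolated audio samples and turn it into a 0..1 opacity
# for that musician's particle cloud.

import math

def rms_envelope(samples, sample_rate=48000, fps=30):
    """RMS level of each video-frame-sized window of audio samples."""
    window = sample_rate // fps
    levels = []
    for start in range(0, len(samples), window):
        chunk = samples[start:start + window]
        levels.append(math.sqrt(sum(s * s for s in chunk) / len(chunk)))
    return levels

def opacity_curve(levels, floor=0.02):
    """Normalize RMS levels to 0..1 opacities; silence fades the cloud out."""
    peak = max(levels) or 1.0
    return [0.0 if lvl < floor else lvl / peak for lvl in levels]
```

When a musician rests, their envelope drops below the floor and the cloud vanishes; when they re-enter, it fades back in proportionally to how loudly they play.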
For the real geeks, what specific technology is behind this?
As mentioned, we recorded the point cloud using an array of five Microsoft Kinects, each connected to a dedicated laptop. This data was converted to the .prt binary particle format and loaded into Autodesk Maya using an open source library called Partio. The Maya particle engine then rendered each point as a tiny sphere, which I felt added more depth and "resolution" to the aesthetic. The isolated audio signals for each musician were processed using the Trapcode Sound Keys plug-in in After Effects.
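As a rough illustration of the very first step, turning a Kinect depth frame into 3D points, here is a standard pinhole back-projection sketch. The intrinsics are placeholder values, not the project's actual calibration, and real Kinect tooling handles this internally:

```python
# Hypothetical sketch: back-project each depth pixel through a pinhole
# camera model to get a 3D point. fx/fy are focal lengths in pixels and
# (cx, cy) is the principal point; the defaults are illustrative.

def depth_to_points(depth, fx=571.0, fy=571.0, cx=320.0, cy=240.0):
    """Back-project a 2D grid of depth values (meters) into (x, y, z) points.

    `depth` is a list of rows; a value of 0 means "no reading" and is skipped.
    """
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:
                continue  # Kinect reports 0 where it has no depth estimate
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points
```

Each frame of such points, one set per Kinect, is what would then be merged, filtered, and written out as .prt particles for Maya.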
We love the look of the webpage, how do you decide when to break the mold and build a custom layout?
For a standalone multimedia project like this, we almost always create something custom. Each project has specific demands, and it usually makes sense to design for those from the get-go if you can. I created a rough mockup of the page in Illustrator, with the idea that it should be as simple as possible, and tied together by some of my favorite quotes from the Kronos interviews. At first I had a picture of the Quartet at top, but we realized that it needed something else to clue people in to the idea that they were going to see something a little different. So we scrapped the photo and replaced it with the animated dot-cloud cover. This, I think, helped entice people to get to the videos, since there is a fair amount of copy. Necessary copy, to give readers some perspective on the day and a little background into what we were trying to do, but I was also concerned that the videos would be pushed down too far. Shan Carter further refined the design, and is the one who built the page.
What was your inspiration for the project?
An acquaintance, James George, has done some amazing filmmaking with this kind of approach. He created an interesting documentary called Clouds that actually works on the Oculus Rift. He developed a system called rgbd toolkit, which pairs a DSLR with a Kinect. I didn't end up using this system, but it was one of the first places I was made aware of the possibilities. To me it's almost like filmmaking in reverse: you record the space in 3D, and then choose all of the camera angles, and the aesthetic, later. Also, having played in quartets myself (cello), I was already interested in the subject matter. I liked the idea of creating this sort of synesthesia effect, where you would literally see, through appearance and disappearance, how each musician was contributing to the whole. Even as a musician who can read a musical score, I found this was really a great way to understand who was doing what.
What did you learn during the process of crafting this story?
On a practical, dorky level I learned a lot about handling particle data. More generally, I learned that keeping it simple can sometimes be the most effective way to communicate. We had a lot of complicated ideas about using this data ... like doing something in WebGL, letting people rotate around, etc. I think this would have ultimately been very limiting though. The simple idea of letting people see, in a sense, what they were hearing, and pairing this with carefully considered editing of a great interview from true masters of the medium was, in the end, I think the best way to engage our audience and give them some insights into this music.
And for fun: if you had to write a fortune cookie, what would it say?
You will eat a dry tasteless cookie in the near future at an embarrassingly non-authentic Chinese-food restaurant.
Thanks, Graham! Always a pleasure, and we can't wait to see what you'll do next! Check out more of Graham's work at www.grahaphics.com