Friday, September 25, 2009

#137 Teach your children well

This show is about two experiences that were bouncing around in my head and collided. First, a podcast about technology and second, conversations with educators led me to consider the necessity of providing some kind of multimedia literacy as part of the higher education experience. Sort of like directed play in the schoolyard to avoid bullying.

My thought is not only how to produce content that hits the intended mark, but also how to present the impact and responsibilities, both personal and to others, that being a broadcaster entails.

There was a time when it was rare and seemingly ridiculous to require students to attend school with a computer. Not so ridiculous now. I think the time is coming to accept the responsibilities that a private broadcaster carries. I don't have any solutions to offer, but I think we should be thinking about this.

Here are some links to people and places mentioned in the show.

Thursday, September 10, 2009

#136 Something from Nothing

Listening to this show and trying to reach from what I know to what the least informed person knows is very difficult. It's hard to record a session of me talking off the cuff, just using notes, because I end up using a lot of shorthand without explaining things. I need a glossary.

I don't have one, but I have included brief explanations of some of the video codewords I've mentioned in passing in this show. If you follow the links you'll find more thorough descriptions. It can get pretty thick, but it really helps to know these things.

This episode begins with a recent revelation about my goals and future direction as a filmmaking professional. I've also included details I've gleaned from blogs, podcasts and presentations at a recent meeting of the Boston FCP User's Group.


This has been a busy summer of media events. I've attended Podcamp Boston 4, Podcasters Across Borders in Kingston, Ontario, the Boston Media Makers get-together, which meets in Jamaica Plain the first Sunday of every month, and the annual Avid Summer BBQ.

I wanted to attend Podcamp Montreal, which takes place in a couple weeks, but I think I need to stay at home. There is also a Podcamp New Hampshire, taking place in Portsmouth in November.

It was at the Boston Final Cut Pro User Group meeting that took place in August that I saw Philip Hodgetts present an overview of the new features in FCP 7. I've also included information I've gleaned from Apple's FCS site and the Digital Production Buzz. You can also find video tutorials online that demonstrate the new features in all the applications within the suite.

One of the warm-up presentations that I thought was particularly noteworthy was for Mocha for FCP from Imagineer Systems. Check out the link above for videos that explain image tracking and rotoscoping in a way that will quickly make sense. Here's my abbreviated version:

Imagine you're watching a movie and there's a scene at a football stadium. There's a huge video screen that shows instant replays and short commercials. The people who made that movie didn't record the information on that screen when they shot the footage of the stadium; they inserted their own footage on the screen in the editing suite. Maybe an advertisement for a product that they're getting paid to place in the movie.

In a still image you can select the space inside the frame of the jumbotron, remove it and insert whatever image you choose. In a moving image, the shape and position of the screen change from frame to frame if the camera is moving.

You accomplish this difficult task by marking places in the frame that are always visible (usually bright white points) and then making sure they remain in place as the camera pans across or zooms out. Once you've got the location of the anchor points, you create a mask that fits inside the screen area of the jumbotron and then make sure that mask is linked to the anchor points being tracked. That is called rotoscoping. Then you drop in your video and make it look like it was always there.

For sure, that's a gross simplification, but I hope it gets the idea across.
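
If you want to poke at the tracking half of that idea yourself, here's a toy sketch using OpenCV's point tracker. To be clear, this is my own illustration, not how Mocha actually works (Mocha tracks whole planes, not loose points), and the clip name and corner coordinates are made up:

```python
# Toy point tracker: follow the jumbotron's corners from frame to frame.
import cv2
import numpy as np

cap = cv2.VideoCapture("stadium.mov")  # hypothetical source clip
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# Four corners of the jumbotron in the first frame, picked by hand.
points = np.array([[[620, 80]], [[900, 85]], [[895, 260]], [[615, 250]]],
                  dtype=np.float32)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Lucas-Kanade optical flow finds where each anchor point moved to.
    points, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None)
    # 'points' now holds the mask corners for this frame; a compositor
    # would warp the replacement footage to fit inside them.
    prev_gray = gray
```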


The new version of Final Cut Studio (no number, though it should be #3) includes new feature updates for all the products (except DVD Studio Pro), but Philip was there to cover just Final Cut Pro 7.

Because I clumsily referred to ProRes 422 in passing, give me a moment to explain what an intermediate codec is and what 4:2:2 color space and 4:4:4:4 refer to.

A color space is a limited range of color that can be viewed out of the entire spectrum of color. Humans can see a wide swath of color between ultraviolet and infrared (violet to red). Some insects and animals can see beyond that range. Mechanical devices, like monitors and cameras, capture and display color in a variety of color spaces. HSB (Hue, Saturation and Brightness) is one space, RGB (Red, Green and Blue) is another.

Video cameras generally use a color space called YCC, which is derived from RGB. The Y is the luma value and the two Cs are the chroma, or color, values. Those are the three values in a camera that shoots 4:2:2.
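
For the curious, the trip from RGB to YCC is just a weighted mix of the three color values. Here's a quick sketch using the full-range BT.601 weights, which is one common flavor:

```python
# Convert one RGB pixel (0-255 values) to Y'CbCr, full-range BT.601 style.
def rgb_to_ycbcr(r, g, b):
    y  =  0.299 * r + 0.587 * g + 0.114 * b           # luma: weighted brightness
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b  # blue-difference chroma
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b  # red-difference chroma
    return y, cb, cr

print(rgb_to_ycbcr(255, 0, 0))  # a pure red pixel -> roughly (76, 85, 255)
```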

Our eyes are more sensitive to luma than chroma, so in a 4:2:2 color space there's twice as much luma, or light, information as there is color. Web and DVD video use a 4:2:0 space and the DV standard uses 4:1:1.
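
To make that concrete, here's a back-of-the-envelope sketch of how much each scheme stores per frame (the frame size is just an example, and I'm assuming 8 bits per sample):

```python
# Rough per-frame storage for different chroma subsampling schemes.
# Tuples are samples per 4x2 block of pixels: (luma, Cb, Cr).
SCHEMES = {"4:4:4": (8, 8, 8), "4:2:2": (8, 4, 4),
           "4:2:0": (8, 2, 2), "4:1:1": (8, 2, 2)}

width, height = 1440, 1080   # HDV frame size, as an example
pixels = width * height

for name, block in SCHEMES.items():
    samples_per_pixel = sum(block) / 8             # 8 pixels per 4x2 block
    megabytes = pixels * samples_per_pixel / 1e6   # 1 byte per sample
    print(f"{name}: {samples_per_pixel:.2f} samples/pixel, "
          f"{megabytes:.1f} MB per uncompressed frame")
```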

Think of the three areas of information captured by a 4:2:2 camera as distinct channels of light or color, like channels in Photoshop. There is a fourth channel of visual information that cameras won't capture because it is created in post-production: the alpha channel.

Alpha channels, used in Photoshop, After Effects and Final Cut Pro, are an additional layer of information that can be used to remove areas of the frame so that something else can be seen through them. Or an alpha channel can act as a selection area for a moving object in the frame so an effect or filter can be applied to it.
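
Here's a tiny sketch of what an alpha channel actually does where two layers meet, using the classic "over" formula:

```python
# The classic "over" operation: the alpha channel decides how much of the
# foreground covers the background at each pixel.
def composite_over(fg, bg, alpha):
    """alpha runs from 0.0 (fully transparent) to 1.0 (fully opaque)."""
    return fg * alpha + bg * (1.0 - alpha)

# A 50%-transparent white title pixel over a dark background pixel:
print(composite_over(255, 40, 0.5))  # -> 147.5, a mid-gray blend
```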

Hold that thought for a moment and let me move on to codecs. Among other things, a codec is a software program that compresses a moving digital image. There are a variety of codecs that compress video as a camera records it and decompress it as a DVD player plays it. It's a lossy process, which means digital information is lost when it's compressed. The greater the compression, the more minutes of video can be squeezed into a gigabyte of storage space.
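
A quick worked example of that trade-off, using ballpark bitrates for HDV and ProRes 422:

```python
# How many minutes of video fit in one gigabyte at a given bitrate?
def minutes_per_gb(mbit_per_sec):
    gigabyte_bits = 8 * 1000 ** 3   # 1 GB expressed in bits
    return gigabyte_bits / (mbit_per_sec * 1_000_000) / 60

print(f"HDV (~25 Mbit/s):         {minutes_per_gb(25):.1f} min/GB")
print(f"ProRes 422 (~147 Mbit/s): {minutes_per_gb(147):.1f} min/GB")
```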

Still with me?

There are a lot of codecs out there and the variety is necessary because of how you're using them. Camera codecs need to compress data a certain way to retain the most information while fitting on the storage medium: tape, drive or solid-state card. Cameras are capturing video for one purpose only, to store it. You aren't using it or cutting it into pieces yet, so the codec can squeeze it really tight.

Video in a codec designed for camera capture, particularly HDV, is not a pleasure to cut. It's doable, but it has problems that I'm not going to get into. If you're producing a feature-length movie or TV video you want to work in a codec that will give you more freedom to edit. That's what an intermediate codec is.

One codec for capture, another for playback (sometimes the same one) and one in between for the edit. ProRes is an intermediate codec.

You capture the video from the camera as you normally would, then select the footage in FCP and convert it to a ProRes codec.
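
FCP handles that conversion with its own tools, but if you want to see the same idea outside FCP, here's a sketch that shells out to ffmpeg (assuming you have ffmpeg installed; the file names are made up):

```python
# Transcode a camera-original clip to ProRes 422 using ffmpeg's encoder.
# (FCP does this internally; this is just the same idea from the command line.)
import subprocess

subprocess.run([
    "ffmpeg",
    "-i", "capture.m2t",       # hypothetical HDV camera original
    "-c:v", "prores_ks",       # ffmpeg's ProRes encoder
    "-profile:v", "standard",  # the ProRes 422 profile
    "-c:a", "pcm_s16le",       # uncompressed audio, typical for editing
    "edit_ready.mov",
], check=True)
```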

Hang in there, I'm coming to the end.

When you convert footage captured by a camera using a codec with a 4:1:1 or 4:2:0 color space to, say, ProRes 4:2:2, does that mean you're getting better quality out of the footage you shot?

No!

Footage captured in every DV or HDV camera is being compressed on the fly using whatever color space the camera uses. That compression, being lossy, discards anything that doesn't fit. So when you convert the footage to a higher-resolution color space, it's got a bunch of space it isn't using. When you shake it, you can hear it rattle.
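
You can see the rattle for yourself in a few lines: throw away every other chroma sample the way a camera does, then stretch the survivors back out, and the original color detail doesn't come back:

```python
# Subsample a row of chroma values 2:1 (like 4:2:2), then upsample it again.
chroma = [10, 200, 10, 200, 10, 200, 10, 200]   # alternating color detail

kept = chroma[::2]                               # camera keeps every other sample
restored = [v for v in kept for _ in range(2)]   # "convert up" by repeating

print(chroma)    # [10, 200, 10, 200, 10, 200, 10, 200]
print(restored)  # [10, 10, 10, 10, 10, 10, 10, 10] -- the detail is gone
```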

So why would you convert it to the 4:2:2 space, or for that matter, 4:4:4?

One reason is that these intermediate codecs have other characteristics that make it more efficient for your editing software to edit, render and export the video. More importantly though, and this IS the reason you would use ProRes 4444 (4:4:4:4), anything else you add, a still image, motion graphics from After Effects or Motion, or 3D animation from Maya or 3D Studio Max, will be added at its full, most likely higher, color and image resolution.

These additional elements, even something as simple as title text, have to be massaged by various filters and often moved in and out of other programs to make them feel as real as the video footage. Having a space that allows you to work with the maximum color resolution and pixel depth offers the kind of control the people with the big bucks are looking for.

You and I are just lucky that we don't have to have big bucks to get into this party.

As hard as that was to read, and I congratulate you if you got this far, it was no picnic figuring out how to say it. And be careful, don't use this in your research paper.

I'm not going to put links to all this stuff. I've put out the bare bones. If you need to know more you can look it up for yourself.

I hope it's been useful.

You can find pricing for educational software at Journey Ed and Academic Superstore. I've used them both and they're fast.

Mike Jones at Digital Basin has a good review of the suite upgrade, including what's missing. And check out the Film School Drinking Game, which I found in the same article. It's an education in itself.

Finally, if you're on the fence about getting the Snow Leopard update, Leo Laporte and his gang of usual suspects provide a definitive thrashing of the pros and cons in #156 of MacBreak Weekly.
 
Creative Commons License
This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.