Aug 16, 2013
 

This post is the third, and final, in a series of posts on mathematical camera representation.  The following are links to the earlier two entries in this series:

  1. Camera Representation Part 1: Homogenous Coordinate Systems and the Simplest Camera Imaginable
  2. Camera Representation Part 2: Moving the Camera and the Image

This post builds on the model developed in the previous two posts by adding two final concepts:  the ability to handle non-square pixels in an image and the ability to handle skewed images.

For the rest of this discussion, the form of the solution for finding the projection matrix will remain the same as in Part 2.  That is, the 3 \times 4 projection matrix \mathbf{P} can be found by combining the 3 \times 3 camera rotation matrix \mathbf{R}, the translation 3-vector \mathbf{t}, and the 3 \times 3 upper-triangular intrinsic camera parameter matrix \mathbf{K} as

\mathbf{P}= \mathbf{K} \left[ \mathbf{R} | \mathbf{t} \right].

The intrinsic camera parameter matrix \mathbf{K} defined in Part 2 will be updated to take into account non-square pixels and skew.  It hopefully makes sense that \mathbf{K} is where these changes take place, since pixel dimensions and image skew are intrinsic to the camera and do not relate to the camera's extrinsic location in the world.
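As a brief refresher on how the pieces fit together, here is a minimal sketch in Python with NumPy (the values of \mathbf{K}, \mathbf{R}, and \mathbf{t} are placeholders for illustration, not from any real camera) that assembles \mathbf{P} and projects a homogeneous world point:

```python
import numpy as np

# Placeholder intrinsics and extrinsics, for illustration only.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                        # camera axes aligned with the world axes
t = np.array([[0.0], [0.0], [5.0]])  # translation expressed in the camera frame

P = K @ np.hstack([R, t])            # the 3 x 4 projection matrix

X = np.array([1.0, 0.5, 10.0, 1.0])  # a homogeneous world point
x = P @ X                            # the homogeneous image point
print(x[:2] / x[2])                  # pixel coordinates after dividing by the third entry
```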

Non-Square Pixels

Many digital cameras have rectangular (non-square) pixels.  Because the pixels are rectangular, the camera model must scale the image by different amounts along the x- and y-axes.  We now update the intrinsic camera parameter matrix \mathbf{K} to be:

\mathbf{K}= \left[ \begin{array}{ccc} \alpha_x & 0 & x_0\\ 0 & \alpha_y & y_0\\ 0 & 0 & 1 \end{array} \right].

Here, \alpha_x= f m_x and \alpha_y= f m_y, where f is the focal length from the previous posts, m_x is the number of pixels per unit distance in x, and m_y is the number of pixels per unit distance in y.  The principal point (x_0, y_0) is now measured in pixels.
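For concreteness, here is a small sketch (again Python/NumPy; the focal length, pixel densities, and principal point are made-up values, not taken from a real sensor) showing how the entries of \mathbf{K} are built from f, m_x, and m_y:

```python
import numpy as np

# Made-up values for illustration only:
f = 0.008                    # focal length in world units (8 mm)
m_x, m_y = 125000, 100000    # pixels per unit distance along x and y (rectangular pixels)
x_0, y_0 = 320, 240          # principal point, in pixels

alpha_x = f * m_x            # focal length expressed in x-pixel units (1000 here)
alpha_y = f * m_y            # focal length expressed in y-pixel units (800 here)

K = np.array([[alpha_x,     0.0, x_0],
              [    0.0, alpha_y, y_0],
              [    0.0,     0.0, 1.0]])
```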

Skew

The final parameter we will add to our model is the skew parameter s.  The skew parameter models how the x- and y-axes are aligned in the image plane.  In most cases, the axes are perpendicular and s=0.  If the x- and y-axes are not perpendicular, then s \neq 0.

Incorporating the skew parameter into the intrinsic camera parameter matrix, we get

\mathbf{K}= \left[ \begin{array}{ccc} \alpha_x & s & x_0\\ 0 & \alpha_y & y_0\\ 0 & 0 & 1 \end{array} \right].
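To see numerically what the skew term does, here is a short sketch (Python/NumPy, with illustrative parameter values) that pushes a point expressed in camera coordinates through two versions of \mathbf{K}; the only difference in the result is that a non-zero s shifts the projected x-coordinate by s \cdot y / z:

```python
import numpy as np

def make_K(alpha_x, alpha_y, s, x_0, y_0):
    # Intrinsic camera parameter matrix with skew s.
    return np.array([[alpha_x,       s, x_0],
                     [    0.0, alpha_y, y_0],
                     [    0.0,     0.0, 1.0]])

X_cam = np.array([0.2, 0.4, 2.0])   # a point in camera coordinates (x, y, z)

for s in (0.0, 5.0):                # no skew vs. an exaggerated skew
    K = make_K(1000.0, 800.0, s, 320.0, 240.0)
    x = K @ X_cam                   # homogeneous image point
    print(s, x[:2] / x[2])          # the x-coordinate shifts by s * (y / z) = 0.2 * s
```

With these particular numbers, the skewed projection lands exactly one pixel to the right of the unskewed one.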

Final Note on Degrees of Freedom

The camera projection matrix \mathbf{P} is a homogeneous transform, which means that two projection matrices are equivalent if the only difference between them is a non-zero scaling coefficient.  That is, \mathbf{P}_1 and \mathbf{P}_2 describe the same camera if \mathbf{P}_2= c \mathbf{P}_1, where c is a non-zero constant.  Practically, this means that a projection matrix has 11 degrees of freedom despite having 12 elements.

Going into a bit more depth, we can expand out our projection matrix as

\mathbf{P}= \mathbf{K} \left[ \mathbf{R} | \mathbf{t} \right]= \left[ \begin{array}{ccc} \alpha_x & s & x_0\\ 0 & \alpha_y & y_0\\ 0 & 0 & 1 \end{array} \right] \left[ \begin{array}{cccc} r_{11} & r_{12} & r_{13} & t_x\\ r_{21} & r_{22} & r_{23} & t_y\\r_{31} & r_{32} & r_{33} & t_z \end{array} \right].

We can now count our degrees of freedom:

  • \mathbf{K} has 5 degrees of freedom: \alpha_x, \alpha_y, s, x_0, and y_0.  Although \mathbf{K} is written with 6 non-zero elements, the projection is homogeneous and only defined up to scale, so the bottom-right element can be fixed at 1, leaving five independent parameters.
  • \mathbf{R} defines a rotation matrix, and therefore only has 3 degrees of freedom (roll, pitch, and yaw).
  • \mathbf{t} has 3 degrees of freedom since it defines a translation in 3-dimensional space which links the camera position with the world origin.

Thus, by simple addition, the camera projection matrix \mathbf{P} has 11 degrees of freedom.
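As a quick numerical sanity check of the up-to-scale property (a sketch using an arbitrary, made-up projection matrix), multiplying \mathbf{P} by a non-zero constant changes the homogeneous image point but not the pixel coordinates obtained after dividing by the third entry:

```python
import numpy as np

# An arbitrary 3 x 4 projection matrix, made up for illustration.
P = np.array([[800.0,   0.0, 320.0, 1600.0],
              [  0.0, 800.0, 240.0, 1200.0],
              [  0.0,   0.0,   1.0,    5.0]])

X = np.array([1.0, 0.5, 10.0, 1.0])  # a homogeneous world point

def to_pixels(x):
    return x[:2] / x[2]              # dehomogenize

print(to_pixels(P @ X))              # some pixel location
print(to_pixels((3.0 * P) @ X))      # exactly the same pixel location
```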

And with that, we are finished with our discussion of the mathematical camera model.  I hope that you have found this useful!

Jul 24, 2013
 

Sallie Mae, the too-big-to-fail, government-backed lender, recently put out a study on how Americans pay for their college educations.  Since the cost of an undergraduate education continues to increase, it is important to understand how students manage to pay for it.  Here is a link to the short writeup, and here is a link to the full report.


How Americans Pay for College 2013. Taken from the short article "How America Pays for College 2013: A national study by Sallie Mae and Ipsos" on the Sallie Mae, Inc. website. Used for educational purposes.

Above is a nice pie chart from the article breaking down the percentages of where the average college student's funding originates.  I find fault with this chart in one area though:  grants and scholarships do NOT pay for college.  Grants and scholarships are not a funding source.  In my work with my College Price Comparison Tool and as a university professor, I have found that grants and scholarships are simply fancy words for the discount a college or university gives a student off of its advertised sticker tuition price for the first year.  This discount is primarily based on that student's (and that student's parents') ability to pay through savings, income, and borrowing.  The sticker tuition price has no more to do with how much people actually pay for college than the sticker price on a vehicle at a used car lot.  And counting the amount one can talk a smarmy car dealer down as "funding" for the vehicle is simply wrong.  Therefore, a more accurate set of percentages can be found by excluding the dubious category of "grants and scholarships."

With that in mind, here is a chart showing more accurate percentages of where the average college student's funding originates:

Funding sources for the average American student's college education.

Looks a bit different, right?  The big thing I notice is that, in the average case, parents, friends, and relatives will put themselves on the hook for about 60% of the average student's education.  But for the rest, the student is on their own.

As always, find the total price of a four-year college education for every college on your list, before even visiting or applying, by visiting Birdseye College Price Comparison.

Jun 18, 2013
 

I recently finished reading Clayton Christensen's book The Innovator's Dilemma (TID), which talks about technological innovation and how companies can effectively manage it.  TID was originally written in 1997, and I read a (slightly) updated version published in 2011.  I found this book fascinating, especially the way that Christensen breaks innovation down into two distinct categories: sustaining innovation and disruptive innovation.  Each category requires different strategies to manage productively, and Christensen supports his arguments with historical studies of innovation in many different industries, such as hard drives, excavators, steel, computers, motorcycles, and more.

Sustaining innovation consists of technology and process updates that give a company's existing customers more of what they want.  This is in contrast to disruptive innovation, which initially has no known customers, unknown applications, and capabilities that existing customers do not value.  But once the market for the disruptive innovation is found and grows, it will eventually win over the company's existing customers, often driving the company out of business.  TID points out that it is very odd that companies so often have problems managing disruptive innovation: sustaining innovation can be incredibly complex and expensive, but leading companies will almost always spend the resources necessary to see it through and bring the sought-after innovation to market.  Disruptive innovation is the opposite--it is often cheap and quite simple compared to what an industry-leading company already produces, but it is usually of "inferior" quality to existing customers.  Since existing customers do not want the disruptive technology, a market must be found for it before money can be made.  Large, successful companies historically do not perform this search--Christensen believes this is because established companies see the search for an unknown market as risky when they have existing customers they can continue to please through their established processes.

I found the ideas presented in TID particularly fascinating in terms of the higher education industry because of my work with the Birdseye College Cost Comparison site and because of the time I spent down in the trenches as a university professor.  The university business looks, to me, very similar to the large, established firms discussed in TID that, over and over again, got driven from their markets by smaller upstarts peddling disruptive technology.  The similarities in brief:  universities are established, they offer exceptionally complicated products in terms of accredited degree programs in many disciplines, and they are completely reliant on high-end customers feeding them fat margins.  Almost all of their development caters to these high-end customers.  To illustrate reliance on high margins, consider that many private, non-profit universities derive a large portion of their net budget, often a majority, from room and board charged to students for ever-more palatial "dorms."  Really, these dorms are more similar to luxury apartments than the two-to-a-tiny-room block houses of old.  Universities are essentially getting a significant portion of revenue from an expensive, high-margin add-on to their main product, at a time when news services regularly report that cost is a great concern.

After reading TID, I do not know what disruptive innovation will arise.  In fact, TID makes the point that this cannot be known.  But I can speculate based on what I know of the market and what cheap, readily-available technology currently exists.  Video courses might be one option:  with the expansion of low-cost, fast Internet connections and the affordability of streaming and receiving video, entire video lecture series and accompanying material can be created and distributed.  The rise of Wikipedia offers another option:  a vast quantity of small, discrete, digital, educational elements (videos, interactive programs, web pages, homework assignments, etc.) created by a diverse group of experts that can be mixed and matched to create educational packages tailored to meet knowledge or skill goals for individual customers.  Or it could be something else entirely.  Whatever product arises, it will initially be something that existing customers (students going to university right now) neither want nor need.  These products will also not have many features the current higher-education market demands.  Perhaps no one in the new market is concerned about homework, tests, football games, lectures, grades, personalized feedback, dorms, accreditation, degrees, certification, etc.  The lack of some or all of these features will make the new product look "inferior" when compared to the established market, but the new product will have features that appeal to a different, initially unknowable set of people--the new market.

I have no idea what this low-cost, higher-education market is or what products to serve it will look like.  TID makes a compelling case that markets for disruptive innovation cannot be known before they arrive.  That said, I do not think that companies currently providing free college courses over the Internet, such as Coursera or Udacity, have figured out who the customers in this new market are or what they want.  At least not yet.  Both companies seem too locked into the current university model, expending a great deal of effort trying to emulate features of the university system, trying to become equivalent to a university.  I am not sure this is serving them well.

But whatever the new higher-education market turns out to be, it will be largely ignored by existing universities when it arrives because its margins are too low and the products too simple.  Years later, seemingly overnight, the new market's products will be sufficient for most then-current university students, and there will be a massive shift away from the current university model to the new model.

We live in interesting times.  Wherever this evolution eventually takes us, it is an exciting time for higher education.  I cannot wait to see what develops!

Jun 11, 2013
 

Northern Spark is an art festival-type event that incorporates art installations, interactive art, music, theater, and dance performances, art creation, and more.  It occurs from dusk to dawn and (at least over the past two years) takes over several blocks of a Twin Cities neighborhood, so the night features prominently in all the work presented.  This year, Northern Spark took place from sunset on June 8 until sunrise on June 9, 2013 in the Lowertown neighborhood of St. Paul.  Most of the installations/performances/etc. were set up in and around the grounds of the Union Depot train station and the street immediately outside it.

I was and continue to be surprised at the technology used in many of the installations and performances. I realize that this focus on the medium and not the message is not a very "fine art" way to think, but many projects incorporated projected images and video, cameras, LED lights, and other custom electronics which I find simply fascinating.  Now, a lot of this may have to do with the fact that it is dark and therefore many projects had to have a built-in light source simply to be visible.  But it still gives a vastly different feel to the work than you get even in a very contemporary art museum.

An outdoor installation, Strange Attractor, used a camera to cause a giant LED light board to react to light patterns it detected.  I was never quite able to suss out what patterns caused it to respond, but it was cool seeing computer vision used in an art project!

Another interactive installation, can you listen to the same river twice?, on the shore of the Mississippi River utilized an underwater microphone wired up to headsets that were all attached to organically-shaped reclining benches lit up with LEDs.  The effect was very relaxing.

An installation piece on the train tracks, The World Is Rated X, utilized a camera on each end of an enormous, cage-like contraption to project large images recorded by those cameras on the opposite sides.

Yet another installation piece, Rooftop Routine, continuously projected a movie of women hula-hooping on a rooftop onto the side of a large building, high in the air.

The Siege Engines group had a trebuchet shaped like the Foshay tower (a "Foshaybuchet," natch) that launched throwlights at a target all night.  This one was neat because they enlisted the public to help build the throwlights.  I'm not sure this went as planned, since I saw most people just walking off with the throwlights, and very few of them were actually thrown at girders and such.

A large, space-themed group installation, the astronaut spirit academy, incorporated a giant projected image of the rotating Earth.

The local Bell Museum had an inflatable planetarium in which they ran shows about stars, galaxies, and the universe all night.

Some of the really creative uses of technology I saw came from performance based groups:

The Forever Young silent dance party was quite interesting: a faux living room was set up with a DJ who played music for participants to come in and dance to.  But outside observers could not hear the music, as it was broadcast only to headphone sets worn by those dancing.  It was akin to watching a live version of a music video with the sound turned off.

One cool performance piece at Northern Spark was Instant Cinema: Teleportation.  In this piece, a trio of musicians improvised all night to live video being projected on top of them from a pair of videographers wandering the entire event, broadcasting back their live video.

Finally, my favorite technology-enabled piece was the late-night Gossip Orchestra, in which 20 excellent musicians, all sporting different instruments, sat in a circle.  Audience participants would then act as conductors, turning musicians on and off by enabling or disabling LED spotlights that lit up a musician when they were to play.  The conductors could also adjust the color of the LED spotlights to affect the mood of the music.  The musicians were all exceptionally good; they really did change and flow with the different "directors," and the music they created was phenomenal.

Many of the projects at Northern Spark demonstrate the types of cool art that are made possible by creative application of technology.  I am continuously surprised by this event, and encourage you to check it out next year!

Jun 04, 2013
 

This past weekend (June 1 and 2, 2013) was the Hack for MN hackathon.  This hackathon was part of the National Day of Civic Hacking White House initiative. This particular event was hosted at DevJam in south Minneapolis.  The idea behind this hackathon was to develop software over two days to solve community problems.

I had a lot of fun at the hackathon.  If these events continue, I will definitely take part again. The people running the event were very organized and exceptionally nice.  The DevJam location was amazing.  And they got more than enough food for everyone throughout the weekend.  It was very, very enjoyable.

I had never done a hackathon before and was not too sure what to expect.  Before the event started, about 20 ideas had been submitted and posted on the event website.  After some opening remarks, everyone there split off to form groups around one of those ideas.  The initial groups did not necessarily stick together, and I eventually ended up on a project we called DataPark.  The purpose of DataPark was to build a tool that could be used by neighbors and city planners to analyze the effect of road redesigns, new construction, or other development on parking within neighborhoods--a major source of contention in many city planning meetings!  We certainly didn't end up with a finished project, but I got to work with some great guys and some decent progress was made.

There was clearly a lot of government and non-profit interest in this event.  Representatives from the Minneapolis, St. Paul, and state IT groups were there, as well as various other people who worked with local data.  At a couple of points during the day, official-type people gave short talks about the data their groups were making available.  It is pretty cool that there is government interest in and support for sharing data that can be used for non-official software projects.

At the end of the work day Sunday, every group gave a presentation about their project and the progress they had made.  A lot of projects really did not get far out of the planning stages, but there were some interesting projects in the mix.  The most impressive one (to me) was MSP Bus, which took the GPS data supplied by Metro Transit and built a working, fully functional web app that tells you how far away each bus is from your nearest bus stops.  It actually works, and it is awesome!  Check it out.

Overall, I think the Hack for MN hackathon was quite a success.  They laid the groundwork for community involvement and productive dialog between the developer community and government.  Some actual work got done.  But most important, and revealing to me, it demonstrated that there are ways that software can be used to help improve city life.

EDIT 6/8:  Mike Altmann, one of the guys I worked with on DataPark, put together a great writeup of the DataPark project.