A while back (when I first wrote this), a couple of articles related to Howard Gardner’s multiple intelligences (MI) theory floated through the Twitters.  First, we have Gardner himself trying to draw a line between his MI theory and “learning styles”.  For those who are reading this and haven’t heard of MI, here’s Gardner’s own summary:

The basic idea is simplicity itself. A belief in a single intelligence assumes that we have one central, all-purpose computer—and it determines how well we perform in every sector of life. In contrast, a belief in multiple intelligences assumes that we have a number of relatively autonomous computers—one that computes linguistic information, another spatial information, another musical information, another information about other people, and so on.

Gardner goes on to explain why MI does not in fact mean the same thing as “learning styles”, and points out that there is no evidence of the benefit of trying to teach to multiple learning styles.

Following that came a link to an article by Daniel Willingham (an older piece, actually) which discusses the problems with Gardner’s MI theory itself.  The biggest takeaway from the article (which matches what I learned studying intro Ed Psych a few years ago) is that the data on intelligence supports multiple, hierarchical intelligences.  There is evidence for separate mathematical and verbal intelligences, plus a controlling general intelligence “g” factor that influences them both.

It is important to bear in mind that the hierarchical model described in the previous section is not a theory, but a pattern of data. It is a description of how test scores are correlated. A theory of intelligence must be consistent with these data; the pattern of data is not itself a theory. For example, the data do not tell us what g is or how it works. The data tell us only that there is some factor that contributes to many intellectual tasks, and if your theory does not include such a factor, it is inconsistent with existing data. Gardner’s theory has that problem.

In other words, not only does Gardner’s theory seem flawed, but Gardner misrepresents the discussion by comparing his theory only to a “one unified intelligence factor” theory.  He’s still trying to make his theory sound good by contrasting it with a model that psychologists rejected something like half a century ago.

Okay, great. So with that summed up, here’s what I’d really like to explore: why do people fall in love with both MI and “learning styles” in the first place?

I know of fantastic, clever and thoughtful teachers who dislike having these theories shot down because they’ve seen something good in applying them.  I think we need to call out that good, maybe find a way to reframe it and hold it up as being valuable and defensible without needing MI or learning styles language.

Both Gardner and Willingham take a stab in this direction.  Gardner gives the following advice that highlights the appeal of a “learning styles” mentality:

1. Individualize your teaching as much as possible. Instead of “one size fits all,” learn as much as you can about each student, and teach each person in ways that they find comfortable and learn effectively. …

2. Pluralize your teaching. Teach important materials in several ways, not just one (e.g. through stories, works of art, diagrams, role play). In this way you can reach students who learn in different ways. Also, by presenting materials in various ways, you convey what it means to understand something well. If you can only teach in one way, your own understanding is likely to be thin.

To me this hits exactly what people want to hear when they’re taught about multiple learningstyleintellwhatevers. And the reason is that it’s great advice. But it’s great for reasons that probably have nothing to do with intelligence models.  Is getting to know your students and connecting learning with their interests a good idea?  Heck yes, of course!  Students learn more from a teacher who actually cares about them.  Students need mentors who invest in their lives.

Is presenting new material in multiple ways a good idea?  Heck yes, of course!  Off the top of my head I am pretty sure this is evidence-based and everything.  If we really want to connect this to cognitive science somehow, we could point out that connecting new material to existing things-that-the-students-know helps them remember it – that’s how the brain wires thoughts together, and the more connections you make, the more likely they are to recall it.  (But notice that you could cut the “brain wiring” bit out of that last sentence and it’d be just as clear, and could still be verified.)

Willingham points out another reason that MI hits the “like this” trigger in our minds:

Great intelligence researchers–Cyril Burt, Raymond Cattell, Louis Thurstone–discussed many human abilities, including aesthetic, athletic, musical, and so on. The difference was that they called them talents or abilities, whereas Gardner has renamed them intelligences. Gardner has pointed out on several occasions that the success of his book turned, in part, on this new label: “I am quite confident that if I had written a book called ‘Seven Talents’ it would not have received the attention that Frames of Mind received.” Educators who embraced the theory might well have been indifferent to a theory outlining different talents–who didn’t know that some kids are good musicians, some are good athletes, and they may not be the same kids?

When contrasting MI with a “unified intelligence” model, it’s not difficult to see why teachers would grab onto MI.  To say that each student can be summed up by a single variable that ranges from “smart” to “er, not smart” stings when you think of the wild variety of skills and talents that students have.  Kids who shine in one area may look incompetent in another – but they DO shine somewhere, and a psychology that seems to ignore that sounds heartless.

There are two things to extract here.  One is that those teachers were right about intelligence – the data supports them.  The problem was that Gardner’s MI went too far, creating vague “intelligences” that seem only to amount to prior knowledge and experience and strongly stating their independence even when the data does not support it.  The layered, hierarchical model that the data supports does show that some kids may be incredibly well-spoken and insightful but still struggle with mathematical reasoning.

The other question here is how much any of this has to do with overall “intelligence”, or whether it all boils down to past experience, domain-specific knowledge and self-efficacy.  Within any one of these intelligence models, it’s possible for someone to have significantly more botanical knowledge than the average.  Does this mean they have a uniquely high “intelligence” in that area, or does it just mean they’ve learned a lot in the domain of botany?  I want to believe (although I don’t know for certain) that psychologists working in psychometrics try to take this into account as they test their models.  But for the teacher looking only at the bare structure of the theory, it may be easy to forget that neither model excludes the possibility of students who excel through past experience.

So.

Let’s keep telling people to use multiple representations – preferably meaningful ones – to teach their subject.  Let’s keep telling teachers to get to know their students and individualize things where they can.  Let’s also stop promoting poor models of the mind.  We don’t need to hold onto flawed theories to be able to keep the good stuff that came from applying them.

For this final Infographics and Data Visualization assignment, we were given the freedom to research and produce an infographic on any topic we wanted.  I floundered on this for a few days, then decided to turn this into a chance to educate myself on India. My wife will be travelling there bringing medical aid to rural communities in a few months, and I realized that I have a very incomplete view of where India is at today.

So the target audience was … myself, mostly, to answer the question: how bad is poverty in India, and how has it changed in recent history?  The end result for me is that I feel like I still have a lot of gaps in my understanding of India’s poverty, but the big picture makes a lot more sense than it did a month ago.

However, not everything I learned found its way into the infographic.  I was running short on time – the assignment had a deadline of Sunday this past week, and while they gave us some extra time to submit I didn’t really want this running into my work week.

Click on the picture above to see the full infographic.

What I think turned out well:

  • It had a decent range of forms of representing data – a little heavy on line graphs, but they fit what’s being shown.
  • I made a choropleth map! Pretty much manually, actually – using Illustrator to do the coloring and Excel to sort the values into color categories. I also managed to wrangle Illustrator into converting a bitmap of the map into a vector graphic that it would let me color properly.
  • Hopefully the highlight of the message – that India has come a long way, but still has a long road ahead – shows up in the GNI graph, where you can see the dramatic improvement in the last decade but contrast that with how little that still adds up to per person.
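
The Excel “color-categorize” step above boils down to simple threshold binning: each state’s value falls into a class, and each class gets a fill color.  Here’s a minimal sketch in Python – the bin edges and palette are made up for illustration, not the ones from the actual map:

```python
# Hypothetical choropleth classing: bin a value into one of five
# classes via upper-edge thresholds, then map the class to a fill color.
BINS = [10, 20, 30, 40]            # upper edges of the first four classes
PALETTE = ["#fee5d9", "#fcae91", "#fb6a4a", "#de2d26", "#a50f15"]

def fill_color(value):
    """Return the fill color for a value using simple threshold bins."""
    for i, edge in enumerate(BINS):
        if value < edge:
            return PALETTE[i]
    return PALETTE[-1]            # anything past the last edge: darkest class

# Example with invented state values:
colors = {state: fill_color(v)
          for state, v in {"A": 7.5, "B": 23.0, "C": 55.1}.items()}
```

From there it’s just a matter of painting each region in Illustrator with its assigned color.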

What got left out:

I had found a decent resource showing the cost of various expenses in India vs other parts of the world, and wanted to incorporate that into the featured GNI graph.  My hope had been to replace the “$3.85/day” metric with a measure of what someone in India could actually buy with that amount of INR earned in a day (e.g. a horizontal line showing how much a loaf of bread or 1 L of milk would cost).  Comparing directly to US $ can be misleading, since spending $10 worth of rupees (based on currency conversion) will actually pay for something on the order of $50 worth of goods (based on costs in INR vs what those goods would cost in North America).  I’d experienced this weirdness before travelling in Uganda – currency valuation is just strange – but this was more extreme than I’d expected.

The biggest reason it got dropped is that I could not figure out whether the World Bank data I was using for GNI had already taken buying power into account.  I didn’t want to accidentally double-count the effect of this difference, and I was low on time to hunt down the details.  If I’d wanted to commit more time to this, though, that would be high on my list of ways to make the graphic more impactful (and meaningful).
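
To make the double-counting worry concrete: the rough observation above ($10 of converted rupees buying about $50 of goods) implies a purchasing-power factor of about 5.  The sketch below uses that assumed factor – it is not taken from the World Bank data – to show how applying it to income that was already PPP-adjusted would inflate the figure fivefold:

```python
# Assumed purchasing-power factor, from the rough $10 -> ~$50 observation.
PPP_FACTOR = 5.0

def to_ppp(nominal_usd_per_day):
    """Convert a nominal (exchange-rate) daily income to PPP dollars."""
    return nominal_usd_per_day * PPP_FACTOR

nominal = 3.85                     # the $/day figure from the graph
adjusted = to_ppp(nominal)         # about 19.25 PPP dollars/day
double_counted = to_ppp(adjusted)  # about 96.25 -- the mistake to avoid
```

Whether `to_ppp` should be applied at all depends entirely on whether the source series is nominal or already PPP-based, which is exactly the detail I couldn’t pin down.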

Things that could use fixing based on feedback I got in the course:

  • Apparently I ought to pay more attention to how my monitor is color-calibrated, because I honestly thought those beige boxes (title, callouts) were more grey!  They were mentioned by a few people as being too strong, taking attention away from the rest of the page.  (Even as a grey that dark, they’d probably be too much though.)
  • One person mentioned that it looked a little too thin on content for a whole page.  This was interesting because while making it I often kept pushing things in closer than my original layout – but then still wondered what to do with empty space in a few spots (most notably around the slope graph).  I possibly should have rearranged things into a more natural layout for the slope graph, with text beside it instead of below.
  • Just say no to vertical text! I gotta admit, I had that in the back of my mind and ignored it because I was too hesitant to break my original grid layout to make room for titles. Which makes no sense because there was plenty of room.  I should pull those y-axis labels up above the graphs.
  • The infographic actually pulls data from two sources which use varying cutoffs for the poverty line – I originally messed up and mislabeled two of them as being $1.25 / day, when they weren’t.  I edited those off so I wasn’t lying – but since the per-state lines were created by the Planning Commission using a more complex metric (that gave a varying line per state) I couldn’t think of a good concise way to relabel it.

I’m tempted to take this feedback and create a v2 of the graphic, but that’ll have to wait until later.  I’ll post again if I get it done.

So this week’s infographics MOOC exercise was another draft for an interactive infographic, this time on the unemployment rate per state in the US.  The assignment was based on another DataBlog post from The Guardian which showed a choropleth map of unemployment using data over the time of Obama’s presidency.

The biggest problem was that the data during that time looks nearly meaningless.  It’s noisy, it has a vague downward trend, but you see this *blip* at the start where everything is jumping upwards.  Those who remember the last five years’ worth of economic history better than I do will know why – the crash kicked in about half a year before Obama’s election.

So step one, I hunted down a wider data set via data.gov.  Using the last 8-10 years’ worth of data gave a much more interesting picture and set the context for what was actually going on in the last four.

Step two this week for me was playing with making the data work in Tableau Public.  I had spotted this tool a few weeks ago, and didn’t try it last week as it does have a bit of a learning curve and I really wanted to practice something in Illustrator.  But this time I decided, what the heck, if making something actually interactive isn’t that much more work than graphing and drawing it up in Illustrator, why not?

The end result was very close to what I wanted – my ideal needs just a couple more features (pop-up or overlay annotations on a line graph, a customized timeline control on the line graph) which may or may not ever show up in Tableau, so I guess I still have some motivation to learn a decent chart library in Processing.

You can see the published interactive at Tableau, but the line graph isn’t working, which is kind of lame since it was made using a technique they demonstrate in their own tutorial.  Hopefully that gets fixed, but in the meantime I just screencasted from the desktop version for the assignment hand-in.

I picked up a copy of the absolutely-beautiful book Generative Design recently and I am loving it.  It’s a perfect resource for where I’m at: exploring generative art and wanting to learn more, but not needing someone to teach me the basics of programming.

It’s been great to find out how many techniques are much simpler to code than I’d expected.  For example, I decided to code my own copy of this noisy motion sketch which creates fantastic wispy-smoke-like patterns.  I’d seen work like this before and assumed some amount of complex simulation was going on.  Turns out it’s just taking Perlin noise and using it as a sort of vector field, defining an angle for particles to move in from each spot.  (Which sounds fancy but it’s basically just a few lines of code.)  Huh!  Easy-peasy, tiny code, fun results.
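
Here’s roughly what that technique looks like, sketched in Python rather than Processing so it stands alone.  The `noise()` below is a simple smoothed value-noise stand-in for real Perlin noise, and the `scale` and `speed` constants are arbitrary choices, not anything from the book:

```python
# Noise-as-vector-field: sample noise at a particle's position, treat
# the value as an angle, and step the particle in that direction.
import math
import random

random.seed(1)
_grid = [[random.random() for _ in range(64)] for _ in range(64)]

def noise(x, y):
    """Smoothly interpolated value noise in [0, 1] (Perlin stand-in)."""
    xi, yi = int(x) % 63, int(y) % 63
    xf, yf = x - int(x), y - int(y)
    u, v = xf * xf * (3 - 2 * xf), yf * yf * (3 - 2 * yf)  # smoothstep fade
    a, b = _grid[yi][xi], _grid[yi][xi + 1]
    c, d = _grid[yi + 1][xi], _grid[yi + 1][xi + 1]
    return (a * (1 - u) + b * u) * (1 - v) + (c * (1 - u) + d * u) * v

def step(px, py, scale=0.05, speed=1.0):
    """Move a particle one step: noise value -> angle -> unit velocity."""
    angle = noise(px * scale, py * scale) * 2 * math.pi
    return px + math.cos(angle) * speed, py + math.sin(angle) * speed

# Trace one particle; drawing each position as a faint dot is what
# produces the wispy smoke-like trails.
x, y = 10.0, 10.0
path = [(x, y)]
for _ in range(100):
    x, y = step(x, y)
    path.append((x, y))
```

Run hundreds of particles with slightly different starting points and low-opacity strokes, and the trails bundle into those smoke patterns.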

I’ve signed up for Alberto Cairo’s massive online open course on infographics and data visualization.  I’d been tempted by other MOOCs before and decided I didn’t have the time to commit to it, but this one caught me at a good time.

The entire course has been very good so far, giving a great perspective on how infographics relate to journalism as a whole as well as how to think of the infographic / datavis gap as a spectrum of functional and artistic intent.  It’s also included some Illustrator tutorials that have finally gotten me over that initial learning curve in designing with vectors.  (Although as you’ll see shortly, I still have a long ways to go.)

The week 3 exercise was to draw up a draft for an interactive infographic based on the data mentioned on The Guardian’s Datablog re: the transparency of international aid agencies around the world.  The Datablog post includes a couple of bar graphs – a good tool for comparing values with precision, but the stacked bars don’t convey a lot and there’s room to tell more.

My draft suggests a few improvements.  Users could be allowed to filter the aid agencies by geography, letting them compare agencies based in the same region more easily.  Radio buttons could also let users choose between seeing the full aggregate score or only the subscores based on the three levels of detail that the transparency report surveyed the agencies on.
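
A rough sketch of how those two controls would act on the underlying data – all agency names, regions and scores here are invented, not taken from the report:

```python
# Two controls from the draft: a region filter, and a choice between
# the aggregate score and one of the three subscore levels.
agencies = [
    {"name": "A", "region": "Europe", "aggregate": 74,
     "subscores": {"org": 80, "country": 70, "activity": 72}},
    {"name": "B", "region": "Africa", "aggregate": 61,
     "subscores": {"org": 65, "country": 58, "activity": 60}},
]

def view(data, region=None, score="aggregate"):
    """Return (name, score) pairs after applying both controls."""
    rows = [a for a in data if region is None or a["region"] == region]
    if score == "aggregate":
        return [(a["name"], a["aggregate"]) for a in rows]
    return [(a["name"], a["subscores"][score]) for a in rows]
```

The real interactive would just re-render the bars from whatever `view(...)` returns as the user toggles the controls.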

The biggest change I’d propose is on the second layer of the infographic – a slope graph to highlight the general trend across agencies of improved transparency.  This was one of the summary points of the original report and is worth highlighting through data.

Here’s the PDF mockup I created; I used fairly simple shapes to represent tabs, dropdowns, etc. rather than spending a lot of time on them, since in a real interactive I would expect to be grabbing UI components from a library for whatever coding / design tool I was using.

aid transparency draft copy

I’ve been on a knitting kick lately. It makes for a good evening activity, as it’s something I can do while sitting on the couch and hanging out with my wife (without the mentally-distant factor that happens if I’m online or gaming).  Plus, if you’re going to be a guy who knits, might as well nerd out on it for maximum unusualness.

The backstory: I worked for a month or so as a coder for a downloadable game about knitting.  I needed to know how to draw knits and purls – which is awfully hard to figure out if you know nothing about knitting.  So, I picked up a beginner book and learned.

That game got cancelled (the design was kind of shaky, and my prototyping skills weren’t enough to come up with something convincing in time), but then I found this book, which strangely found its way onto my bookshelf. (How do you NOT put that title on your bookshelf?)  The projects in there are fantastic, and I think I’ve made half of them by now.

Now I’ve discovered the fun of Ravelry and being able to search through an entire internet’s worth of free patterns intelligently, as well as posting photos to show off a bit.  For example, the weird yarn I found for these ‘Medallion Mitts’ just made the whole enterprise worth taking pictures of.

My last project was actually a pattern I invented, cribbing from the general idea of “knit in the round to make a hat”.  I had tried knitting the fisherman’s watchcap found in Knitting With Balls, but I got impatient and made it a little too short, plus the ribbing came out too wide and kind of goofy.  I’ll leave the rest of the story for now as it should get its own post along with a write-up of the pattern, but you can see the results on Ravelry here.