(Note: I wrote this a month or more ago, it’s been lingering, I know it’s a bit too long but if I pull the ideas apart they’ll lose something. That or it’ll turn into a 4-part series, which is waaaaay too much. Enjoy!)

This past summer I decided to take on learning to draw. I had to make it a project, a goal, which will likely seem absurd to almost every artist who has devoted their life to the visual arts. So many of them were kids who just loved drawing, who grew up loving drawing, and who now struggle to make ends meet through drawing because it’s what they’ve wanted to do all along.

I was and wasn’t one of those kids.

Sure, I totally drew stuff all the time. In school when I’d finished my homework (with time to spare – yes I was *that* kid) it’d be time to draw absurd cartoons, swap them with my friends, create dumb jokes that were hilarious then and would probably just make me shake my head today. But I never, ever tried to draw anything real, anything serious, anything “good”.

Working in both large- and small-scale game development had reminded me of this gap between my doodles and what “artists” can do. Without visual art, drawing, 3D modelling, and sound design, most games are impossible to make.

So I was often reminded that I can’t draw. Which is a lie.

Of COURSE I can draw. I can pick up a pencil and make marks. I can even compare those marks to the shape of what’s in front of me and try to make them resemble each other. But somewhere along the way I had decided, this is not what I do, this is not what I am good at. I didn’t have that many classmates who really loved to draw, and it seemed a distant thing.

The lie of “I CANNOT DO THIS” is pervasive, controlling, and debilitating. I know this because I’ve taught math. Kid after kid after kid who declare “I HATE MATH”, who describe themselves as incapable, powerless in the realm of numbers. Who fail to pick up the pencil and try because they know they won’t succeed. You can put up all the motivational posters you want, try to tell them that not trying is the only sure way to fail, toss up “First Attempt In Learning” acronyms (I really like that one, actually) but this is something deep inside, something a poster is likely never going to reach.

I’ve always been “good at math”.

I knew this about myself when I hit Calc 2 in my first year of Engineering, didn’t know how to self-regulate my effort spent on practice work and failed my first midterm exam. I was MAD. I KNEW I was good at this, had to be good at this, how dare you tell me that I should consider a voluntary withdrawal.  I was mad at myself for letting this happen. I was mad at trig integrals for not having a consistent, well-defined path to solve.  I studied more for that final than anything before, maybe anything since.  I walked away with a B.

The power of “I am good at this” saved me that year.  Although maybe without it, I would’ve seen that I really did need more practice along the way. Maybe those trig integrals would’ve stuck, instead of coming back to bite me in my third and fourth years as I struggled with Fourier transforms.

I’ve taught a lot of kids without the “good at”. I struggle to keep them engaged when curriculum guidelines tell me they should be able to factor a polynomial, solve for x, parse this obfuscated word problem.

So this summer I told myself, I am going to draw. I am going to draw whether I am good at it yet or not. I am not going to let the “not good at this” voice win, I am going to push past it and succeed despite it.

The NeoLucida Kickstarter was a nice boost in that direction. Learning that the great masters of old may in fact have been using technical aids in drawing felt like a playing field being levelled.  I jumped on the initial offering of the NeoLucida within the first 24-hour blitz of pledges. (Not something I usually do, or recommend.)  I found a copy of Hockney’s “Secret Knowledge” in the library and read it cover-to-cover. I decided I would practice drawing now so that when the NeoLucida came, it would be an aid instead of a handicap. So I wouldn’t be cheating. (I knew it wasn’t cheating, and still worried about feeling like it was cheating.)

Reviving an old Processing sketch to try and publish something on Android was another boost toward drawing. I had created one tool for digital drawing that looked neat, and drawing on a touch screen sounded like a natural fit, so I made it run on my new (and first) smartphone. Then I reworked it to incorporate ideas that had come up while teaching middle-school kids to use Scratch to create drawings. They came up with some crazy ideas, most of which looked like scribbles, and I realized that we were translating movement into line, and that computers are incredibly powerful at simulating and inventing new kinds of movement. So my radiating lines became a physics simulation, particles orbiting your touch and leaving traces.
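
Out of curiosity about how small that idea really is, here’s a toy version in Processing. This is a reconstruction of the concept from memory, not the app’s actual code, and every constant here (particle count, pull strength, damping) is a made-up number you’d tune by feel:

```processing
// Toy reconstruction: particles pulled toward the pointer,
// leaving traces. Constants are guesses, not the app's values.
int n = 50;
float[] px = new float[n], py = new float[n];
float[] vx = new float[n], vy = new float[n];

void setup() {
  size(800, 600);
  background(0);
  stroke(255, 40);  // translucent strokes accumulate into trails
  for (int i = 0; i < n; i++) {
    px[i] = random(width);
    py[i] = random(height);
    vx[i] = random(-1, 1);
    vy[i] = random(-1, 1);
  }
}

void draw() {
  for (int i = 0; i < n; i++) {
    // Inverse-square pull toward the mouse (a touch, on Android)
    float dx = mouseX - px[i], dy = mouseY - py[i];
    float d = max(sqrt(dx * dx + dy * dy), 20);
    float a = 500 / (d * d);
    vx[i] += a * dx / d;
    vy[i] += a * dy / d;
    // Mild damping keeps orbits from flinging off-screen
    vx[i] *= 0.995;
    vy[i] *= 0.995;
    float qx = px[i] + vx[i], qy = py[i] + vy[i];
    line(px[i], py[i], qx, qy);
    px[i] = qx;
    py[i] = qy;
  }
}
```

The translucent stroke is what turns motion into drawing: each frame adds a faint segment, and the trails accumulate into the image.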

Developing a drawing app while feeling incapable of drawing sounds crazy, but it was safe. I was comfortable with computing and with generative art. Giving a computer partial control over marks on the screen took away the pressure of precision. But the irony, the hypocrisy was still there.

So I started to draw.

I picked up “Keys to Drawing” by Bert Dodson from the local library, and loved it. It put concrete experience and immediate action first. It didn’t try to abstract reality into shapes. Dodson just told you: draw what you see. The hard part isn’t the drawing, it’s the seeing – letting yourself put to paper exactly what’s in front of you without letting your mind preprocess it into concepts and abstractions first. It had exercises. It was perfect for me, and when I had to return it I went out and bought a copy of that plus his later “Keys to Drawing with Imagination”.

I started working through the exercises. It was kind of safe – this was homework, I was supposed to draw this, it was okay if it was a weird thing to draw – and once I started, I kept drawing and drawing and suddenly I had something that looked GOOD. It worked. Next day I picked the next exercise and drew again.

It took me a while before I realized I could draw with my fountain pen.

I’ve been a bit of a pen geek for years now, lured into buying nice-looking ballpoints and gel pens and whatnot. But when I got my Lamy Vista, I was done buying anything else.  It’s the affordable version of a Really Good Pen – clear plastic body, durable, made for everyday use, not too pretentious, but a for-real fountain pen with a high-quality nib.

Writing with a fountain pen took some getting used to. Fountain pens don’t require the kind of brute force that we’re used to with ballpoint pens.  When the tip touches the paper, ink starts to flow.  If the pen so much as thinks of touching the paper as you move from line to line, from letter to letter, it will leave a mark.  My writing, on the other hand, was shaped by the ballpoint pen.  Signing my name dozens of times at once when I worked as a courier turned my signature into a swift, violent scribble.  You just can’t do that with a fountain pen, it’ll slice things, it’ll tear paper, it’ll jam paper scraps into your nib and muck it up.

The world shifted away from the fountain pen before I was born, but recently enough that I was still raised in its legacy. I grew up learning cursive writing in elementary school as well as “printed” letters.  It wasn’t long into high school before I had dropped cursive entirely, resorting to a semi-connected mess of hastily printed letters that I still use today. When I first learned how to use my fountain pen, I remembered that legacy.  I remembered it every time I failed to lift the pen completely off the paper between lines, leaving connections where there were meant to be gaps.  I saw first-hand why cursive writing had persisted for generations despite it seeming like more work and more pretentiousness than simple printing.  The fountain pen is the hardware it was designed for.

The dominant writing hardware shifted decades ago, back in the 1950s and ’60s, but in education we still see people struggling to choose what writing software to teach.  Cursive is fading, a writing style meant for pens we no longer use, and it’s neither good nor bad that this is so.  It’s a natural consequence of a change of tools, of our shift in media (as Marshall McLuhan would think of it).

But it’s taken us this long to see the change caused by the ballpoint pen.  We mostly don’t even see why it happened.

Makes you wonder where we are in the shift caused by calculators – never mind the computer, LOGO and Papert, the freely-available computer algebra systems.  Maybe Wolfram Alpha will change how we teach in a few more decades.

So. One day I picked up my fountain pen and drew.

I loved it. Somehow the results felt more real – not more realistic, but more “I AM REALLY DRAWING”. I wasted less effort on the insecurities of erasing. I was careful where I placed my pen, as I had now been trained to do when writing with a fountain pen, and when I did choose to leave a mark, the ink and paper responded at my merest touch. No more faint, cautious layers of graphite, trying to define proportions correctly before wrestling darker shades into the image.  The fountain pen insists that where I draw, I DRAW.  Don’t pretend to put a mark there that you can’t really see.  Put it there for REAL.

I came up with some good drawings. Some getting-closer-to-great drawings.  I started getting brave enough to carry a drawing pad with me in public spaces, drawing during my son’s swimming lessons. (Maybe later I’ll be brave enough to put them online, but not today.)

Summer’s ended, and my mind has turned from my learn-to-draw project to finding another classroom, trying to define for myself what I want to be teaching and what I want my teaching to be.  I’m starting to read Invent to Learn, and I’m beginning to understand why I loved teaching middle-school kids how to make things in Scratch.

And now when I see my drawing pad beside my usual laptop parking spot, I hesitate. I feel like I can’t do it. I’m not patient enough. I don’t have time. I’m … not good at it.

Sometimes, change is slow.

And I need to remember that when I teach math to kids who don’t believe, and I give them something they succeed at and they still don’t believe, and I give them another space to succeed in and they still don’t believe, and they walk out with a B and breathe a sigh of relief, grateful merely to have survived, because they still don’t believe.

And I need to remember that when I see the news that somewhere not far away, positive change is being clawed back in the name of tradition, of getting “back to the basics”, of holding on to old software because we’re still only a few decades into calculator use and we can’t remember why we valued cursive, we just know we did and still should, somehow.

A while back (when I first wrote this), a couple of articles related to Howard Gardner’s multiple intelligences (MI) theory floated through the Twitters.  First, we have Gardner himself trying to draw a line between his MI theory and “learning styles”.  For those who are reading this and haven’t heard of MI, here’s Gardner’s own summary:

The basic idea is simplicity itself. A belief in a single intelligence assumes that we have one central, all-purpose computer—and it determines how well we perform in every sector of life. In contrast, a belief in multiple intelligences assumes that we have a number of relatively autonomous computers—one that computes linguistic information, another spatial information, another musical information, another information about other people, and so on.

Gardner goes on to explain why MI does not in fact mean the same thing as “learning styles”, and points out that there is no evidence of the benefit of trying to teach to multiple learning styles.

Following that came a link to an older article (older than Gardner’s piece, in fact) by Daniel Willingham which discusses the problems with Gardner’s MI theory itself.  The biggest takeaway from the article (which matches what I learned studying intro Ed Psych a few years ago) is that the data on intelligences supports multiple, hierarchical intelligences.  There is evidence for separate mathematical and verbal intelligences, plus a controlling general intelligence “g” factor that influences them both.

It is important to bear in mind that the hierarchical model described in the previous section is not a theory, but a pattern of data. It is a description of how test scores are correlated. A theory of intelligence must be consistent with these data; the pattern of data is not itself a theory. For example, the data do not tell us what g is or how it works. The data tell us only that there is some factor that contributes to many intellectual tasks, and if your theory does not include such a factor, it is inconsistent with existing data. Gardner’s theory has that problem.

In other words, not only does Gardner’s theory seem flawed, but Gardner is misrepresenting the discussion by comparing his theory only to a “one unified intelligence factor” theory.  He’s still trying to make his theory sound good by comparing it to a model that psychologists have rejected for something like half a century.

Okay, great. So with that summed up, here’s what I’d really like to explore: why do people fall in love with both MI and “learning styles” in the first place?

I know of fantastic, clever and thoughtful teachers who dislike having these theories shot down because they’ve seen something good in applying them.  I think we need to call out that good, maybe find a way to reframe it and hold it up as being valuable and defensible without needing MI or learning styles language.

Both Gardner and Willingham take a stab in this direction.  Gardner gives the following advice that highlights the appeal of a “learning styles” mentality:

1. Individualize your teaching as much as possible. Instead of “one size fits all,” learn as much as you can about each student, and teach each person in ways that they find comfortable and learn effectively. …

2. Pluralize your teaching. Teach important materials in several ways, not just one (e.g. through stories, works of art, diagrams, role play). In this way you can reach students who learn in different ways. Also, by presenting materials in various ways, you convey what it means to understand something well. If you can only teach in one way, your own understanding is likely to be thin.

To me this hits exactly what people want to hear when they’re taught about multiple learningstyleintellwhatevers. And the reason is that it’s great advice. But it’s great for reasons that probably have nothing to do with intelligence models.  Is getting to know your students and connecting learning with their interests a good idea?  Heck yes of course!  Students learn more from a teacher who actually cares about them.  Students need mentors who invest in their lives.

Is presenting new material in multiple ways a good idea?  Heck yes of course!  Off the top of my head I am pretty sure this is evidence-based and everything.  If we really want to connect this to cognitive science somehow, we could point out that connecting new material to existing things-that-the-students-know helps them remember it, is how the brain wires thoughts together, and the more connections you make the more likely they will recall it.  (But notice that you could just cut the “brain wiring” bit out of that last sentence and it’d be just as clear and could still be verified.)

Willingham points out another reason that MI hits the “like this” trigger in our minds:

Great intelligence researchers–Cyril Burt, Raymond Cattell, Louis Thurstone–discussed many human abilities, including aesthetic, athletic, musical, and so on. The difference was that they called them talents or abilities, whereas Gardner has renamed them intelligences. Gardner has pointed out on several occasions that the success of his book turned, in part, on this new label: “I am quite confident that if I had written a book called ‘Seven Talents’ it would not have received the attention that Frames of Mind received.” Educators who embraced the theory might well have been indifferent to a theory outlining different talents–who didn’t know that some kids are good musicians, some are good athletes, and they may not be the same kids?

When contrasting MI with a “unified intelligence” model, it’s not difficult to see why teachers would grab onto MI.  To say that all students contain a single variable that ranges from “smart” to “er, not smart” stings when you think of the wild variety of skills and talents that students have.  Kids who shine in one area may look incompetent in another – but they DO shine somewhere, and a psychology that seems to ignore that sounds heartless.

There are two things to extract here.  One is that those teachers were right about intelligence – the data supports them.  The problem was that Gardner’s MI went too far, creating vague “intelligences” that seem only to amount to prior knowledge and experience and strongly stating their independence even when the data does not support it.  The layered, hierarchical model that the data supports does show that some kids may be incredibly well-spoken and insightful but still struggle with mathematical reasoning.

The other question here is how much any of this has to do with overall “intelligence”, or whether it all boils down to past experience, domain-specific knowledge and self-efficacy.  Within any one of these intelligence models, it’s possible for someone to have significantly more botanical knowledge than the average.  Does this mean they have a uniquely high “intelligence” in that area, or does it just mean they’ve learned a lot of knowledge in the domain of botany?  I want to believe (although don’t know for certain) that psychologists working in the area of psychometrics try to take this into account as they test their models.  But for the teacher looking only at the bare structure of the theory, it may be easy to forget that neither model excludes the possibility of students who excel through past experience.

So.

Let’s keep telling people to use multiple representations – preferably meaningful ones – to teach their subject.  Let’s keep telling teachers to get to know their students and individualize things where they can.  Let’s also stop promoting poor models of the mind.  We don’t need to hold onto flawed theories to be able to keep the good stuff that came from applying them.

For this final Infographics and Data Visualization assignment, we were given the freedom to research and produce an infographic on any topic we wanted.  I floundered on this for a few days, then decided to turn it into a chance to educate myself about India. My wife will be travelling there in a few months, bringing medical aid to rural communities, and I realized that I have a very incomplete view of where India is at today.

So the target audience was … myself, mostly, to answer the question: how bad is poverty in India, and how has it changed in recent history?  The end result for me is that I feel like I still have a lot of gaps in my understanding of India’s poverty, but the big picture makes a lot more sense than it did a month ago.

However, not everything I learned found its way into the infographic.  I was running short on time – the assignment had a deadline of Sunday this past week, and while they gave us some extra time to submit I didn’t really want this running into my work week.

Click on the picture above to see the full infographic.

What I think turned out well:

  • It had a decent range of forms of representing data – a little heavy on line graphs, but they fit what’s being shown.
  • I made a choropleth map! Pretty much manually, actually, using Illustrator to color and Excel to color-categorize (a step that boils down to simple binning – see the sketch after this list). I also managed to wrangle Illustrator into converting a bitmap of the map into a vector graphic that it would let me color properly.
  • Hopefully the highlight of the message – that India has come a long way, but still has a long road ahead – shows up in the GNI graph, where you can see the dramatic improvement in the last decade but contrast that with how little that still adds up to per person.
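
For the curious, here’s what that Excel color-categorize step amounts to in code. A hypothetical Processing version, with a placeholder palette and placeholder cutoffs rather than the actual classes from my map:

```processing
// Value-to-color binning, the step I did by hand in Excel.
// Palette and bucket edges are illustrative, not the real ones.
color[] palette = { #FEE5D9, #FCAE91, #FB6A4A, #CB181D };
float[] cutoffs = { 10, 20, 30 };  // e.g. poverty rate (%) class edges

color binColor(float value) {
  for (int i = 0; i < cutoffs.length; i++) {
    if (value < cutoffs[i]) return palette[i];
  }
  return palette[palette.length - 1];  // top class
}
```

Each state’s fill is then just binColor(itsValue); the only real work is choosing sensible cutoffs.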

What got left out:

I had found a decent resource showing the cost of various expenses in India vs other parts of the world, and wanted to incorporate that into the featured GNI graph.  My hope had been to replace the “$3.85/day” metric with a measure of what someone in India could actually buy with that amount of INR earned in a day.  (e.g. a horizontal line showing how much a loaf of bread or 1 L of milk would cost.)  Comparing directly to US $ can be misleading, since spending $10 worth of rupees (based on currency conversion) will actually pay for something on the order of $50 worth of goods (based on costs in INR vs what that would cost in North America).  I’d experienced this weirdness before travelling in Uganda – currency valuation is just strange – but this was more extreme than I’d expected.
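
To make that gap concrete, here’s the arithmetic with made-up round numbers, chosen only to match the rough 5x ratio above (these are not real exchange rates or World Bank figures):

```processing
// Illustrative only: placeholder rates matching the rough 5x gap
// described above, not real exchange or PPP conversion figures.
float marketRate  = 60.0;   // hypothetical INR per USD at market rates
float pppRate     = 12.0;   // hypothetical INR per "dollar's worth" of goods
float dailyIncome = 231.0;  // INR earned per day

float nominalUSD = dailyIncome / marketRate;  // ~3.85: the headline number
float buysUSD    = dailyIncome / pppRate;     // ~19.25: what it actually buys

// The trap: if the source data were already PPP-adjusted,
// applying pppRate again would double-count the difference.
println(nominalUSD, buysUSD);
```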

The biggest reason it got dropped is that I could not figure out whether the World Bank data I was using for GNI had already taken buying power into account.  I didn’t want to double-multiply the effects of this difference by accident, and I was low on time to hunt down the details.  If I’d wanted to commit more time to this, though, that would be high on my list of ways to make it more impactful (and meaningful).

Things that could use fixing based on feedback I got in the course:

  • Apparently I ought to pay more attention to how my monitor is color-calibrated, because I honestly thought those beige boxes (title, callouts) were more grey!  They were mentioned by a few people as being too strong, taking attention away from the rest of the page.  (Even as a grey that dark, they’d probably be too much though.)
  • One person mentioned that it looked a little too thin on content for a whole page.  This was interesting because while making it I often kept pushing things in closer than my original layout – but then still wondered what to do with empty space in a few spots (most notably around the slope graph).  Possibly should have rearranged things for a more natural layout for the slope graph, with text beside it instead of below.
  • Just say no to vertical text! I gotta admit, I had that in the back of my mind and ignored it because I was too hesitant to break my original grid layout to make room for titles. Which makes no sense because there was plenty of room.  I should pull those y-axis labels up above the graphs.
  • The infographic actually pulls data from two sources which use varying cutoffs for the poverty line – I originally messed up and mislabeled two of them as being $1.25 / day, when they weren’t.  I edited those off so I wasn’t lying – but since the per-state lines were created by the Planning Commission using a more complex metric (that gave a varying line per state) I couldn’t think of a good concise way to relabel it.

I’m tempted to take this feedback and create a v2 of the graphic, but that’ll have to wait until later.  I’ll post again if I get it done.

So this week’s infographics MOOC exercise was another draft for an interactive infographic, this time on the unemployment rate per state in the US.  The assignment was based on another DataBlog post from The Guardian which showed a choropleth map of unemployment using data over the time of Obama’s presidency.

The biggest problem was that the data during that time looks nearly meaningless.  It’s noisy, it has a vague downward trend, but you see this *blip* at the start where everything is jumping upwards.  Those who remember the last five years’ worth of economic history better than I did will remember why – the insane crash all kicked in about half a year prior to Obama’s election.

So step one, I hunted down a wider data set via data.gov.  Using the last 8-10 years’ worth of data gave a much more interesting picture and set the context for what was actually going on in the last four.

Step two this week for me was playing with making the data work in Tableau Public.  I had spotted this tool a few weeks ago, and didn’t try it last week as it does have a bit of a learning curve and I really wanted to practice something in Illustrator.  But this time I decided, what the heck, if making something actually interactive isn’t that much more work than graphing and drawing it up in Illustrator, why not?

The end result was very close to what I wanted – my ideal needs just a couple more features (pop-up or overlay annotations on a line graph, a customized timeline control on the line graph) which may or may not ever show up in Tableau, so I guess I still have some motivation to learn a decent chart library in Processing.

You can see the published interactive at Tableau, but the line graph isn’t working, which is kind of lame since it was made using a technique they demonstrate in their own tutorial.  Hopefully that gets fixed, but in the meantime I just screencasted from the desktop version for the assignment hand-in.

I picked up a copy of the absolutely beautiful book Generative Design recently and I am loving it.  It’s a perfect resource for where I’m at – exploring generative art and wanting to learn more, but not needing someone to teach me the basics of programming.

It’s been great to find out how many techniques are much simpler to code than I’d expected.  For example, I decided to code my own copy of this noisy motion sketch, which creates fantastic wispy-smoke-like patterns.  I’d seen work like this before and assumed some amount of complex simulation was going on.  Turns out it’s just taking Perlin noise and using it as a sort of vector field, defining an angle for particles to move in at each spot.  (Which sounds fancy, but it’s basically just a few lines of code.)  Huh!  Easy-peasy, tiny code, fun results.
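
In the spirit of “tiny code”, here’s roughly what the technique boils down to. This is a minimal sketch of my own with arbitrary constants, not the code from the book or the sketch linked above:

```processing
// Minimal Perlin-noise flow field: sample noise at each
// particle's position, treat the value as an angle, step along it.
int numParticles = 500;
float[] x = new float[numParticles];
float[] y = new float[numParticles];
float noiseScale = 0.005;  // smaller = smoother, broader swirls
float stepSize = 1.5;

void setup() {
  size(800, 600);
  background(255);
  stroke(0, 10);  // faint strokes build up the wispy look
  for (int i = 0; i < numParticles; i++) {
    x[i] = random(width);
    y[i] = random(height);
  }
}

void draw() {
  for (int i = 0; i < numParticles; i++) {
    // Noise value (0..1) scaled up to a couple of full turns
    float angle = noise(x[i] * noiseScale, y[i] * noiseScale) * TWO_PI * 2;
    float nx = x[i] + cos(angle) * stepSize;
    float ny = y[i] + sin(angle) * stepSize;
    line(x[i], y[i], nx, ny);
    x[i] = nx;
    y[i] = ny;
    // Respawn particles that wander off-screen
    if (nx < 0 || nx > width || ny < 0 || ny > height) {
      x[i] = random(width);
      y[i] = random(height);
    }
  }
}
```

Because neighbouring positions sample neighbouring noise values, the angles vary smoothly across the canvas, and the particles trace out those wispy streamlines on their own.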

I’ve signed up for Alberto Cairo’s massive online open course on infographics and data visualization.  I’d been tempted by other MOOCs before and decided I didn’t have the time to commit to it, but this one caught me at a good time.

The entire course has been very good so far, giving a great perspective on how infographics relate to journalism as a whole as well as how to think of the infographic / datavis gap as a spectrum of functional and artistic intent.  It’s also included some Illustrator tutorials that have finally gotten me over that initial learning curve in designing with vectors.  (Although as you’ll see shortly, I still have a long ways to go.)

The week 3 exercise was to draw up a draft for an interactive infographic based on the data mentioned on The Guardian’s Datablog re: the transparency of international aid agencies around the world.  The Datablog post includes a couple of bar graphs – a good tool for comparing values with precision, but the stacked bars don’t convey a lot and there’s room to tell more.

My draft suggests a few improvements.  Users could be allowed to filter the aid agencies by geography, letting them compare agencies based in the same region more easily.  Radio buttons could also let users choose between seeing the full aggregate score or only the subscores based on the three levels of detail that the transparency report surveyed the agencies on.

The biggest change I’d propose is on the second layer of the infographic – a slope graph to highlight the general trend across agencies of improved transparency.  This was one of the summary points of the original report and is worth highlighting through data.

Here’s the PDF mockup I created; I used fairly simple shapes to represent tabs, dropdowns, etc. rather than spending a lot of time on them, since in a real interactive I would expect to be grabbing UI components from a library for whatever coding / design tool I was using.

(Image: aid transparency infographic draft)

I’ve been on a knitting kick lately. It makes for a good evening activity, as it’s something I can do while sitting on the couch and hanging out with my wife (without the mentally-distant factor that happens if I’m online or gaming).  Plus, if you’re going to be a guy who knits, might as well nerd out on it for maximum unusualness.

The backstory: I worked for a month or so as a coder for a downloadable game about knitting.  I needed to know how to draw knits and purls – which is awfully hard to figure out if you know nothing about knitting.  So, I picked up a beginner book and learned.

That game got cancelled (the design was kind of shaky, and my prototyping skills were insufficient to come up with something convincing in time), but then this book strangely found its way onto my bookshelf. (How do you NOT put that title on your bookshelf?)  The projects in there are fantastic, and I think I’ve made half of them by now.

Now I’ve discovered the fun of Ravelry and being able to search through an entire internet’s worth of free patterns intelligently, as well as posting photos to show off a bit.  For example, the weird yarn I found for these ‘Medallion Mitts’ just made the whole enterprise worth taking pictures of.

My last project was actually a pattern I invented, cribbing from the general deal of “knit in the round to make a hat”.  I had tried knitting the fisherman’s watchcap found in Knitting With Balls, but I got impatient and made it a little too short, plus the ribbing came out too wide and kind of goofy.  I’ll leave the rest of the story for now as it should get its own post along with writing up the pattern, but you can see the results on Ravelry here.