
Posts tagged as “The Exploration of Intelligence”

embodied ai six coming in june …

centaur 0

Hey folks, I have been neck deep in preparations for a couple of workshops - the Advances in Social Robot Navigation one I already mentioned, coming up next week, but also the Embodied AI Workshop #6!

The Embodied AI workshop brings together researchers from computer vision, language, graphics, and robotics to share and discuss the latest advances in embodied intelligent agents. EAI 2025’s overarching theme is Real-World Applications: creating embodied AI solutions that are deployed in real-world environments, ideally in the service of real-world tasks. Embodied AI agents are maturing, and the community should promote work that transfers this research out of simulation and laboratory environments into real-world settings.

Our call for papers ends TOMORROW, Friday, May 16th, AOE (Anywhere on Earth) so please get your paper submissions in!

-the Centaur

[twenty twenty-five day ninety-seven]: the biggest problem with communication is the illusion that it took place


So, yes, it's late and I'm tired, but I couldn't just leave it at that, because the above quote is so good. I ran across this from George Bernard Shaw in a book on mentoring (which I can't access now, due to cat wrangling) and snapped that picture to send to my wife. In case it's hard to read, the quote goes:

The single biggest problem with communication is the illusion that it has taken place.

This was a great quote to send to my wife because our first vow is communication, yet we still run into communication problems a lot. Often, when the two of us think we are on the same page, we have each actually communicated something different using similar-sounding language.

I was struck by how hard it is to get this right, even conceptually, when I was skimming The Geometry of Meaning, a book I recently acquired at a used bookstore, which discusses something like a "semantic transfer function" (again, I can't look up the precise wording right now, as I am cat wrangling). The basic idea is that you can define a function describing how the meaning said by one person is transformed into the meaning heard by another.

If you pay attention to how communication fails, it becomes clear how idealized - how ultimately wrongheaded - that picture of a clean transfer function is. You may have some idea in your head, some reason to communicate it as a speech act, and something you wanted to accomplish inside the hearer's head - but there's no guarantee that what you said is what you meant, much less that what was heard was what was said, or that the interpretation matched what was heard, let alone what was said or meant.
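To make the idea concrete, here's a toy sketch in Python of a semantic transfer function as a chain of lossy mappings - the vocabulary and the mappings below are invented for illustration, not taken from The Geometry of Meaning:

```python
# Toy "semantic transfer function": meaning passes through a chain of lossy
# mappings (meant -> said -> interpreted), and each hop can warp it.

# What the speaker means, keyed to the words they actually say.
encode = {"fix the source drains": "fix the drains in front"}

# What the hearer's background knowledge makes of those same words.
decode = {"fix the drains in front": "fix the outflow drains"}

def transfer(meaning: str) -> str:
    """Return what the hearer ends up understanding."""
    said = encode.get(meaning, meaning)
    return decode.get(said, said)

meant = "fix the source drains"
understood = transfer(meant)
print(understood == meant)  # False: same words, different meanings
```

Even in this cartoon version, the speaker's meaning and the hearer's understanding diverge without either side saying anything "wrong" - which is exactly the drainage story below.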

But even if they took your meaning - even if the semantic transfer function worked perfectly to deliver a message - there is no guarantee that the information delivered will cause the appropriate cognitive move in the hearer's brain. Perhaps we're all familiar with the frustration of trying to communicate an inconveniently true fact to someone who stubbornly won't absorb it because it's politically inconvenient for them, but the matter is worse if your speech was designed to prompt some action - as Loki just found out with one of the kittens, when he tried to communicate "stop messing with me, you're half my size, you little putz" as a speech act to get the kitten to leave him alone. It had the opposite effect, and the kitten knocked itself onto the floor when it tried to engage a sixteen-pound ball of fur and muscle.

So what does that have to do with drainage?

My wife and I have had a number of miscommunications about the cats recently, ones where we realized that we were using the same words to talk about different things, and didn't end up doing things the way each other wanted. But it isn't just us. The cats stayed indoors mostly today, because workmen came by to work on a drainage project. I went out to sync up with the foreman about adding a bit to the next phase of work, and he offhandedly said, "sure, now that we're finished with the front."

"But wait," I said. "What about the drains in the front?"

"What drains in the front?" he asked.

We stared at each other blankly for a moment, then walked around the house. It rapidly became clear that even though we had used the same words to talk about the same job related to the same problem - excess water tearing through the mulch - we had meant two completely different things by it: I had meant fixing the clogged drains of the gutter downspout that were the source of the water, and he had taken that to mean fixing the clogged drains where that water flowed out into the rest of the yard. A rainstorm soon started, and we were both able to look at the problem directly and agree on what needed to be fixed. (The picture below is from later that night, of another drain that was clogged and in need of repair.)

It turns out the things that I wanted fixed - the things that had prompted me to get the job done in the first place - were so trivial that he threw them into the job at no extra cost. And the things that the foreman had focused on fixing, which also needed to be fixed but didn't seem that important from the outside, were actually huge jobs indicative of a major misstep in the original installation of the drainage system.

We resolved it, but only by repeatedly syncing up, listening for issues as we spoke, and checking back with each other - in both directions - when things didn't sound quite right, which let us first notice and then resolve the problem. Which is why I found it so apropos to come across that Shaw quote (which I can look up now that the cats have settled down; it's in The Coaching Habit), as it illustrated everything my wife and I had been noticing about this very problem.

Just because you've said the words doesn't mean they were heard. And just because they're said back to you correctly doesn't mean that the hearer actually heard you. If you spoke to prompt action, then it's important to check back in with the actor and make sure that they're doing what you wanted them to - and even if they're not, it's important to figure out whether the difference is their problem - or is on your end, because you haven't actually understood what was involved in what you asked them to do.

So, yeah. The biggest problem with communication is the illusion that it has taken place - so rather than trust the illusion in your mind, take some time to verify the facts on the ground.

-the Centaur

Pictured: "Shaw!", obstreperous cats, and a malfunctioning drain.

[twenty twenty five day sixty-two]: Seventy-Five Percent of a Project is Worth Less Than Nothing


Recently Internet guru Seth Godin blogged about “Halfway Projects”: you can get value from eating half of a pear, but half a canoe is worth less than no canoe at all. I like that. It’s a great metaphor for project management, where quitting a project just before the finish line doesn’t deliver any of its value—but leaves you with all of the costs.

Now, I misremembered Godin’s example a bit - what he actually said was “half a pear might deliver 85% of its value”. But the principle is sound: half a battery charge might let you do 85% of your work … but half a parachute is definitely worth less than no parachute at all, because it might encourage you to take risks that you shouldn’t.

For project management, though, the idea helps explain my long-standing maxim, “work just a little bit harder than you want to.” Often, when working on a project, we get exhausted and decide to give up - but working just a little bit harder can take us over the finish line. Our instinct to save effort can actually thwart the work we need to do to achieve success.

For example, recently I was working on a machine learning project that just wasn’t working. We’d spent enormous effort on getting the learning system up and running, without good learning results to show for it, and the arbitrarily imposed deadline was coming up to show something impressive, or the project would be axed.

But, if you know anything about machine learning, you know most of the effort goes into data preparation. We had to modify the system to log its data, massage it into a format that was useful for learning, and spend further coding effort to speed it up so it was useful for development (taking the data load from 36 hours to 36 seconds!).
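Our actual pipeline isn't something I can share, but the 36-hours-to-36-seconds speedup came from the standard trick of doing the expensive massaging once and caching the result. A minimal sketch, with a hypothetical cache file and a stand-in preparation step:

```python
import pickle
from pathlib import Path

CACHE = Path("prepared_data.pkl")  # hypothetical cache file name

def prepare(raw_records):
    """Stand-in for the slow massaging step (parsing, joining, reformatting)."""
    return [r.strip().lower() for r in raw_records]

def load_prepared(raw_records):
    """Pay the preparation cost once; afterward, reload from the cache instantly."""
    if CACHE.exists():
        return pickle.loads(CACHE.read_bytes())
    data = prepare(raw_records)
    CACHE.write_bytes(pickle.dumps(data))
    return data
```

The first call eats the full preparation cost; every call after that is just a file read, which is what makes rapid development iteration possible.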

The point is, we only got the data running in mid-February, and were trying to compress months of experimentation into just ten days. Finally, as the deadline approached, I got philosophical: we’d done all the work we needed to do to start learning, and my recommendation was that the team keep working on it, with or without me.

But … I didn’t stop there.

Before the final presentation, I spent time cleaning up the code, checking things in, and getting a few of the most promising programs ready to collect “baselines” - long runs of the system set up for comparisons. And the next morning, I reviewed those baselines to present a report to the team about which one was most promising.

Long story short, one of the simplest models that we tried was actually sort of kinda working. Once I realized we had a scaling issue in the output, a simple tweak made the system get even better. I spent another hour tweaking the graphs to put the human input and the system results onto the same graph, and the good results leapt out into sharp relief.
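The project details aren't mine to share, so here's a generic sketch of the kind of fix that "scaling issue" describes - the numbers are made up, but the point is that a single least-squares linear rescale can line the model's output up with the reference signal before you plot them together:

```python
# Hypothetical signals: the model tracks the shape of the human signal,
# but at the wrong scale and offset.
human = [0.0, 1.0, 2.0, 3.0]   # reference signal
model = [0.1, 0.6, 1.1, 1.6]   # same shape, roughly half the scale

# Fit model -> human with a least-squares line y = a*x + b (closed form).
n = len(model)
mx = sum(model) / n
my = sum(human) / n
a = sum((x - mx) * (y - my) for x, y in zip(model, human)) / \
    sum((x - mx) ** 2 for x in model)
b = my - a * mx
rescaled = [a * x + b for x in model]
print(max(abs(r - h) for r, h in zip(rescaled, human)))  # ~0: the curves overlap
```

Two fitted numbers, and results that looked like a failure suddenly sit right on top of the reference curve - which is roughly what putting both signals on the same graph revealed.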

I could have just decided that the system was a failure - but then I wouldn’t have done that extra work, making it a self-fulfilling prophecy. I call this the “Sunk Cost Fallacy Fallacy”. For those not familiar, the “Sunk Cost Fallacy” kicks in when you keep doing something that isn’t working because of the effort you’ve already spent, even though you have a better option.

But you can’t “decide” that something is a better option because you’re a Decisive Decider™. It actually has to be a better option, or what you’re doing is simply throwing away the effort you’ve spent to date because you want to throw your weight around. No, if you suspect a cost is sunk, there’s no substitute for doing your due diligence: is the project working?

If it isn’t, sure, then quit. But often, that little bit of extra work can unlock the solution to the problem. During my presentation, the team asked natural questions about the simple model that turned out to be the most successful - and those questions made me realize it could be improved. Over the weekend, I applied those fixes - taking the results from merely good to excellent.

Last week, as of Thursday night, I was pretty down on the possibility of success for our project. But I did my due diligence anyway, and by Friday morning, I had a working solution. By Friday afternoon, all the team knew it – and by Sunday evening, I was composing an email outlining our machine learning “recipe” that we can build on going forward.

Quitting just before the finish line wastes all the effort you spent on the project. Before you quit, work a little bit harder than you want to and do your due diligence to check whether it is working. If it isn’t, you can stop with no regrets; if it is, you will have not just saved the value of your project - you will have saved yourself from shooting yourself in the foot.

-The Centaur

Pictured: The project team. Three-quarters of them want to try a new direction, but the old seasoned hand isn't quite so sure.

Unsolved Problems in Social Robot Navigation at RSS 2024


Hey folks! I am proud to announce the Workshop on Unsolved Problems in Social Robot Navigation, held at the Robotics: Science and Systems (RSS) conference in the Netherlands (roboticsconference.org). We are scheduled for 1:30 pm and will have several talks, spotlight papers, a poster session, and discussion.

I'm an organizer for this one, but I'll only be able to attend virtually due to my manager (me) telling me I'm already going to enough conferences this year, which I am. So I will be managing the virtual Zoom, which you can sign up for at our website: https://unsolvedsocialnav.org/

After that, hopefully the next things on my plate will only be Dragon Con, Milford and 24 Hour Comics Day!

-the Centaur

Pictured: Again, from the archives, until I fix the website backend.

[twenty twenty-four day one seven oh]: embodied ai #5


Today is Embodied AI #5, running Tuesday, June 18 from 8:50am to 5:30pm Pacific in conjunction with CVPR 2024's workshop track on Egocentric & Embodied AI.

Here's how you can attend if you're part of the CVPR conference:

  • The physical workshop will be held in meeting room Summit 428.
  • The physical poster session will be held in room Arch 4E posters 50-81.
  • The workshop will also be on Zoom for CVPR virtual attendees.

Remote and in-person attendees are welcome to ask questions via the workshop Slack.

Please join us at Embodied AI #5!

-the Centaur

Pictured: Our logo for the conference.

[twenty twenty-four day one six nine]: t minus one


The Fifth Annual Embodied AI Workshop is tomorrow, from 8:50 am to 5:30 pm in room Summit 428 in the Seattle Convention Center, as part of the CVPR conference!

You can see our whole schedule at https://embodied-ai.org/, but, in brief, we'll have six invited speakers, two panel discussions, two sessions on embodied AI challenges, and a poster session!

Going to crash early now so I can tackle the day tomorrow!

-the Centaur

Pictured: More from the archives, as I ain't crackin' the hood open on this website until EAI#5 is over.

[twenty twenty-four day one six eight]: what ISN’T embodied AI?

two hangry cats

The Embodied AI Workshop is coming up this Tuesday, starting at 8:50am, and I am busy procrastinating on my presentation(s) by trying to finish all the OTHER things which need to be done prior to the workshop.

One of the questions my talk raises is what ISN'T embodied AI. And the simplest way I can describe it is that if you don't have to interact with an environment, it isn't embodied.

Figuring out that the golden object on the left and the void on the right are both cats is a tremendously complex problem, solved by techniques like convolutional neural networks (CNNs) and their variants, such as Inception and ResNet.

But it's a static problem. Recognizing things in the image doesn't change things in the image. But in the real world, you cannot observe things without affecting them.

This is a fundamental principle that goes all the way down to quantum mechanics. Functionally, we can ignore it for certain problems, but we can never make it go away.

So, classical non-interactive learning is an abstraction. If you have a function which goes from image to cat, and the cat can't whap you back for getting up in its bidness, it isn't embodied.
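Here's a minimal sketch of the distinction - the classifier and the "world" are cartoon stand-ins, obviously:

```python
# Static recognition: a pure function from image to label.
# Calling it changes nothing about what it observes.
def classify(image):
    return "cat"  # stand-in for a CNN's prediction

# Embodied interaction: actions feed back into the world's state, so the
# next observation depends on what the agent just did.
class CatWorld:
    def __init__(self):
        self.cat_mood = "content"

    def step(self, action):
        if action == "mess with the cat":
            self.cat_mood = "whapping you"  # acting changed what you'll observe
        return self.cat_mood

world = CatWorld()
print(classify("photo of Loki"))        # the photo is unchanged by being classified
print(world.step("mess with the cat"))  # the world is not
```

The first function can be run a million times with no consequences; the second one carries state that your actions alter, and that feedback loop is the whole game.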

-the Centaur

Pictured: Gabby, God rest his fuzzy little soul, and Loki, his grumpier cousin.

[twenty twenty-four post one six six]: what is embodied AI?

big red stop button for a robot, i think from bosch

So, as I've said, Embodied AI is just around the corner. But what is this workshop about? Embodied AI, of course! It says so on the tin.

But the key thing that makes "embodied AI" different from computer vision is that you must interact with an environment; the key thing that makes "embodied AI" different from robotics is that technically it doesn't need to be a real physical environment, as long as the environment is dynamic and there are consequences for actions.

SO, we will have speakers talking about embodied navigation, manipulation, and vision; generative AI to create environments for embodied agents; augmented reality; humanoid robots; and more.

Okay, now I really am going to crash because I have to fly tomorrow.

Onward!

-the Centaur

Pictured: An e-stop (emergency stop) button from a robot. Looks a little jury-rigged there, Chester.

[twenty twenty-four post one six five]: embodied ai is almost here


Ok, the image is from ICRA, but I am still traveling, and have not fixed the problem on the website backend. BUT, Embodied AI is this coming Tuesday, so please drop in if you are at CVPR!

More later, I had several long days at the customer site and I am going to go crash now.

-the Centaur

[twenty twenty-four day one five three]: con carolinas day two


Back at Con Carolinas for day two (but once again images from the archives while my blog is getting updated in the background).

Today I was on a lively panel about the "Trials and Tribulations of AI" and if there's anything I could take away from that, it would be that "modern AIs do not check their work, so if you use them, you have to."

There's a whole debate on whether they're "really intelligent" and you probably can bet where I come down on that - or maybe you can't; here goes:

  • Yes, modern AIs are "artificial intelligence" - they literally are what that phrase was invented to describe.
  • No, modern AIs are not artificial general intelligence (AGI) - yet - and I can point you to a raft of papers describing either the limitations of these systems or what is needed for full AGI.
  • Yes, they're doing things we would normally describe as intelligent, but ...
  • No, they're doing "thinking on a rocket sled", facing backward, tossing words on the track in a reverse of the Wallace and Gromit track-laying meme, unable to check or correct their own work.

These systems "hallucinate", just like humans are mistaken and make things up, but do so in ways alien to human thought, so if we use them in areas we can't check their work, we must do so with extreme caution.

And then there's the whole RAFT of ethics issues which I will get to another day.

Next up: "Neurodivergence and Writing" at 6:30pm, and "Is THAT even POSSIBLE" at 9:30pm!

Onward!

-the Centaur

Pictured: NOT Con Carolinas - I think this was Cafe Intermezzo.

Journaling: Today's Event: Con Carolinas. Today's Exercise, 30 pushups, planning a walk later today. Today's Drawing: finished one five three yesterday, will tackle one five four after I tackle my fix-my-roof thing.

[twenty twenty-four day one three two]: what?!


There's an ongoing debate over whether human emotions are universal: I, like many researchers, think that solid work done by Ekman back in the day demonstrated this pretty conclusively with tribes that had little Western contact. But some people seem determined to pretend that evidence can be made not to exist once it's been collected, if you just argue loudly enough about how you think it's wrong.

(The evidence is wrong?)

Yet my cat can look surprised, or scared, or angry, or alarmed, or content, or curious. It's fairly well established that some emotions, like the self-conscious ones of shame or pride, have highly variable, culturally-determined expressions (if they have consistent expressions at all). But when animals very different from us can still communicate emotions, it's hard to believe none of it is universal.

(The evidence is wrong? What's wrong with you people?)

-the Centaur

P.S. If you subscribe to the anthropic fallacy fallacy, please do not bother to tell me that I'm falling into the anthropic fallacy, because you're the one trapped in a fallacy - sometimes surprise is just surprise, just like a heart is still a heart when that heart is found in an animal, and not a "deceptively heart-like blood pump."

Pictured: Loki, saying, "What, you expect me to do something? I'm a cat. I was busy, sleeping!"

[twenty twenty four day one two five]: this is it


This is it. Today, the 4th, is the last day to submit papers to the Embodied AI Workshop 2024, and we are not going to extend this deadline because we've gotten enough submissions so far that we, um, don't need to.

One more last time, the CFP:

Call for Papers

We invite high-quality 2-page extended abstracts on embodied AI, especially in areas relevant to the themes of this year's workshop:

  • Open-World AI for Embodied AI
  • Generative AI for Embodied AI
  • Embodied Mobile Manipulation
  • Language Model Planning

as well as themes related to embodied AI in general:

  • Simulation Environments
  • Visual Navigation
  • Rearrangement
  • Embodied Question Answering
  • Embodied Vision & Language

Accepted papers will be presented as posters or spotlight talks at the workshop. https://embodied-ai.org/#call-for-papers

Papers are due TODAY "anywhere on Earth" (as long as it is still today, your time).

Please send us what you've got!

-the Centaur

[twenty twenty-four day one two four]: last call for embodied ai papers!


Hey folks! Today (Saturday May 4th) is the last day to submit papers to the Embodied AI Workshop 2024!

Call for Papers

We invite high-quality 2-page extended abstracts on embodied AI, especially in areas relevant to the themes of this year's workshop:

  • Open-World AI for Embodied AI
  • Generative AI for Embodied AI
  • Embodied Mobile Manipulation
  • Language Model Planning

as well as themes related to embodied AI in general:

  • Simulation Environments
  • Visual Navigation
  • Rearrangement
  • Embodied Question Answering
  • Embodied Vision & Language

Accepted papers will be presented as posters or spotlight talks at the workshop. 

https://embodied-ai.org/#call-for-papers

Please send us what you've got! Just between you and me and the fencepost, if we get about 7+/-2 more submissions, we'll have enough to call it done for the year and won't need to extend the CFP, so we can get on with reviewing the papers and preparing for the workshop. So please submit!

-the Centaur

Pictured: the very nice logo for the Embodied AI Workshop, a joint effort of me, my co-organizer Claudia, and (I think) one of Midjourney or DALL-E. Yes, there's generative AI in there, but it took a good bit of prompting to get the core art, and a lot of work in Photoshop after that to make it usable.

Embodied AI Workshop Call for Papers Still Open!


Our call for papers is still open at https://embodied-ai.org/#call-for-papers through May 4th! We're particularly interested in two-page abstracts on the theme of the workshop:

  • Open-World AI for Embodied AI
  • Generative AI for Embodied AI
  • Embodied Mobile Manipulation
  • Language Model Planning

Submissions are accepted through May 4th AOE (Anywhere on Earth) at https://openreview.net/group?id=thecvf.com/CVPR/2024/Workshop/EAI#tab-recent-activity ...

-the Centaur

[twenty twenty-four post one hundred]: trial runs


Still hanging in there, apparently - we made it to 100 blog posts this year without incident. Taking care of some bidness today, so please enjoy this preview of the t-shirts for the Embodied Artificial Intelligence Workshop. Still trying out suppliers - the printing on this one came out grey rather than white.

Perhaps we should go whole hog and use the logo for the workshop proper, which came out rather nice.

-the Centaur

Pictured: Um, as I said, a prototype t-shirt for EAI#5, and the logo for EAI#5.

[twenty twenty-four day ninety-four]: to choke a horse


What you see there is ONE issue of the journal IEEE Transactions on Intelligent Vehicles. This single issue is two volumes, over two hundred articles, comprising three THOUSAND pages.

I haven't read the issue - it came in the mailbox today - so I can't vouch for the quality of the articles. But, according to the overview article, their acceptance rate is down near 10%, which is pretty selective.

Even so, two hundred articles seems excessive. I don't see how this is serving the community: you can't read two hundred papers, nor skim two hundred abstracts to see what's relevant - at least, not in a timely fashion. Heck, you can't even fully search that many, as some articles might use different terminology for the same thing (e.g., "multi-goal reinforcement learning" for "goal-conditioned reinforcement learning", or even "universal value function approximators" for essentially the same concept).

And the survey paper itself needs a little editing. The title appears to be a bit of a word salad, and the first bullet point duplicates words ("We have received 4,726 submissions have received last year.") I just went over one of my own papers with a colleague, and we found similar errors, so I don't want to sound too harsh, but I still think this needed a round of copyedits - and perhaps needs to be forked into several more specialized journals.

Or ... hey ... it DID arrive on April 1st. You don't think ...

-the Centaur

Pictured: the very real horse-choking tome that is the two volumes of the January 2024 edition of TIV, which is, as far as I can determine, not actually an April Fool's prank, but just a journal that is fricking huge.

Announcing the 5th Annual Embodied AI Workshop


Thank goodness! At last, I'm happy to announce the Fifth Annual Embodied AI Workshop, held this year in Seattle as part of CVPR 2024! This workshop brings together vision researchers and roboticists to explore how having a body affects the problems you need to solve with your mind.

This year's workshop theme is "Open-World Embodied AI" - embodied AI when you cannot fully specify the tasks or their targets at the start of your problem. We have three subthemes:

  • Embodied Mobile Manipulation: Going beyond our traditional manipulation and navigation challenges, this topic focuses on moving objects through space at the same time as moving yourself.
  • Generative AI for Embodied AI: Building datasets for embodied AI is challenging, but we've made a lot of progress using "synthetic" data to expand these datasets.
  • Language Model Planning: Lastly but not leastly, a topic near and dear to my heart: using large language models as a core technology for planning with robotic systems.

The workshop will have six speakers and presentations from six challenges, and perhaps a sponsor or two. Please come join us at CVPR, though we also plan to support hybrid attendance.

Presumably, the workshop location will look something like the above, so we hope to see you there!

-the Centaur

Pictured: the banner for EAI#5, partially done with generative AI guided by my colleague Claudia Perez D'Arpino and Photoshoppery done by me. Also, last year's workshop entrance.

[twenty twenty-four day sixty-one]: the downside is …


... these things take time.

Now that I’m an independent consultant, I have to track my hours - and if you work with a lot of collaborators on a lot of projects like I do, it doesn’t do you much good to only track your billable hours for your clients, because you need to know how much time you spend on time tracking, taxes, your research, conference organization, writing, doing the fricking laundry, and so on.

So, when I decided to start being hard on myself with cleaning up messes as-I-go so I won’t get stressed out when they all start to pile up, I didn’t stop time tracking. And I found that some tasks that I thought took half an hour (blogging every day) took something more like an hour, and some that I thought took only ten minutes (going through the latest bills and such) also took half an hour to an hour.

We’re not realistic about time. We can’t be, not just as humans, but as agents: in an uncertain world where we don’t know how much things will cost, planning CANNOT be performed correctly unless we consistently UNDER-estimate the cost or time that plans will take - what’s called an “admissible heuristic” in artificial intelligence planning language. Overestimation leads us to avoid choices that could be the right answers.
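A tiny A* sketch of why admissibility matters - the graph and heuristic values are invented for illustration. The admissible heuristic never overestimates the true remaining cost and finds the cheapest path; the inflated one overestimates the cost-to-go through the good route and dismisses it:

```python
import heapq

graph = {  # node -> [(neighbor, edge_cost)]
    "start": [("a", 1), ("b", 4)],
    "a": [("goal", 5)],
    "b": [("goal", 1)],
    "goal": [],
}
# True cost-to-go: start=5, a=5, b=1, goal=0. The cheapest path is start-b-goal (5).
h_admissible = {"start": 4, "a": 5, "b": 1, "goal": 0}   # never overestimates
h_inflated   = {"start": 4, "a": 5, "b": 99, "goal": 0}  # overestimates b's cost

def astar(h):
    """Return the cost of the first path that reaches the goal under heuristic h."""
    frontier = [(h["start"], 0, "start")]  # (f = g + h, g, node)
    best_g = {"start": 0}
    while frontier:
        _, g, node = heapq.heappop(frontier)
        if node == "goal":
            return g
        for nbr, cost in graph[node]:
            ng = g + cost
            if ng < best_g.get(nbr, float("inf")):
                best_g[nbr] = ng
                heapq.heappush(frontier, (ng + h[nbr], ng, nbr))
    return None

print(astar(h_admissible))  # 5: the optimal route via b
print(astar(h_inflated))    # 6: the overestimate made the search dismiss b
```

Same algorithm, same graph - the only difference is that the second heuristic overestimates the cost of the right choice, so the planner never takes it. That's the formal version of "overestimation leads us to avoid choices that could be the right answers."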

So we “need” to lie to ourselves, a little bit, about how hard things are.

But it still sucks when we find out that they are pretty fricking hard.

-the Centaur

P.S. This post, and some of the associated research and image harvesting, I expected to take 5 minutes. It took about fifteen. Go figure. Pictured: the "readings" shelves, back from the days when, to get a bunch of papers on something, you had to go to the library and photocopy them, or buy a big old book called "Readings in X" and hope it was current enough and comprehensive enough to have the articles you needed - or attend the conferences themselves and hope you found the gold among all the rocks.

[twenty twenty-four day nineteen]: our precious emotions


It's hard to believe nowadays, but the study of psychology for much of the twentieth century was literally delusional. The first half was dominated by behaviorism, a bad-faith philosophy of psychology - let's not stoop to calling it science - which denied the existence of internal mental states. Since virtually everyone has inner mental life, and it's trivial to design an experiment which relies on internal mental reasoning to produce outcomes, it's almost inconceivable that behaviorism lasted as long as it did; but, it nevertheless contributed a great understanding of stimulus-response relationships to our scientific knowledge. That didn't mean it wasn't wrong, and by the late twentieth century, it had been definitively refuted by cognitive architecture studies which modeled internal mental behavior in enough detail to predict what brain structures were involved with different reasoning phenomena - structures later detected in brain scans.

Cognitive science had its own limits: while researchers such as myself grew up with a very broad definition of cognition as "the processes that the brain does when acting intelligently," many earlier researchers understood the "cognitive" in "cognitive psychology" to mean "logical reasoning". Emotion was not a topic which was well understood, or even well studied, or even thought of as a topic of study: as best I can reconstruct it, the reasoning - such as it was - seems to have been that since emotions are inherently subjective - related to a single subject - the study of emotions would also be subjective. I hope you can see that this is just foolish: there are many things that are inherently subjective, such as what an individual subject remembers, which nonetheless can be objectively studied across many individual subjects, to illuminate solid laws like the laws of recency, primacy, and anchoring.

Now, in the twenty-first century, memory, emotion and consciousness are all active areas of research, and many researchers argue that without emotions we can't reason properly at all, because we become unable to adequately weigh alternatives. But beyond the value contributed by those specific scientific findings is something more important: the general scientific understanding that our inner mental lives are real, that our feelings are important, and that our lives are generally better when we have an affective response to the things that happen to us - in short, that our emotions are what make life worth living.

-the Centaur

[drawing every day 2024 post ten]: moar hands


Still working through the Goldman book, which has the inspirational quote: "I hope you wear this book out from overuse!" And that's what you need when you're practicing!

-the Centaur

P.S. My wife and I were talking about learning skills, and she complained that she hadn't quite gotten what she wanted to out of a recent set of books. It occurred to me that there are two situations in which reading books about a skill doesn't help you:

  • It can be you haven't yet found the right book, course or teacher that breaks it down in the right way (for me in music, for example, it was "Understanding the Fundamentals of Music" which finally helped me understand the harmonic progression, the circle of fifths, and scales, and even then I had to read it twice).
  • It can be because you're not doing enough of the thing to know the right questions to ask, which means you may not recognize the answers when they're given to you.

Both of these are related to Vygotsky's Zone of Proximal Development - you can most easily learn things that are related to what you already know. Without a body of practice at a skill, reading up on it can sometimes turn into armchair quarterbacking and doesn't help you (and can sometimes even hurt you); with a body of practice, it turns into something closer to an athlete watching game footage to improve their own game.

So! Onward with the drawing. Hopefully some of the drawing theory will stick this time.