This edition of Top3ics focuses on getting the most out of podcasts, and so includes a new tweak to my personal content strategy.
I’ve been listening to podcasts for ten years because I love the format. But something’s always nagged me. How much of what I hear do I actually absorb?
A while ago, for example, I curated my Inboxes to focus on learning more about cognitive biases, which led me to the You Are Not So Smart podcast. It’s great - smart, fascinating, witty - but I ended up buying the book both because it’s time to Pay4Content and because I suspected that reading it would help me better internalise what the author has to teach.
My Inbox Curation process is explained in my recent piece for The Mission: Why you need a Personal Content Strategy. This also explains why I write the Top3ics newsletter - to better absorb, every few weeks, the very best content which I read and curate every day as it passes through my PCS process.
But this process misses podcasts. I listen almost every morning on my exercise bike, in the Metro or during a stroll around the park. But at best all I do is make a mental note to do something with what I hear.
But mental notes are useless - they evaporate the moment I reach my desk. How much goes in one ear and out the other?
Hence this edition of Top3ics. Read it and you’ll discover both three great podcast episodes and how to better Listen and Learn by integrating podcasts into your personal content strategy.
Ironically, the podcast I was listening to when I decided to do this actually reinforces the reasoning behind my personal content strategy. In The Search Effect, the You Are Not So Smart (YANSS) podcast covers “how search engines make us feel smarter than we really are”.
It’s well known that most people already think they’re smarter than average. Less well known: regular use of search engines causes them to inflate their sense of knowledge even further, even when they can’t access the Internet.
“when you successfully look something up, it feels like … you kind of mastered that bit of knowledge… erroneously include knowledge stored outside of your own head as your own.”
This echoes and reinforces my need for a personal content strategy, originally designed to ensure I internalise the knowledge I consume, rather than wrongly assume I know something because “I read about it somewhere”. This research shows I’ll also assume I know something because Google + mobile phone = overestimated self-knowledge:
“after having searched the internet, participants rated their ability to answer unrelated questions higher than participants who had been just given the information directly, and not used the internet to look it up”
If you think you know stuff but you actually don’t, how will you integrate that knowledge into your work or life?
Listen also: you’ll find three YANSS episodes on the ‘backfire effect’ cognitive bias in Beware BackFiring when Battling Bullshit.
Liar Liar makes a very good, very uncomfortable point: “what separates honest people from not-honest people is not necessarily character, it’s opportunity”.
Research shows that people rarely lie according to some rational, cost/benefit calculation - if they did, people would lie a lot more. But while people want to be honest, our brains let us fudge it, making “small slips” without really thinking about it.
And here’s the problem:
“the brain reacts very strongly to a first act of lying… [but] as we keep on lying …the brain kind of stops reacting to it”
This explains both the “slippery slope” and why court cases begin with people swearing to be honest - it creates an ‘honesty mindset’ as they go into the trial. This is not what happens with most legal documents, where the signature comes at the end.
So Dan Ariely, interviewed in this episode, redesigned an insurance company’s form, putting the signature at the top of page 1. Honesty - as measured by the mileage the company’s clients reported - rose by 15%. As Dan put it:
We just need to remind people that they want to be honest.
The podcast also explains why it’s the creative people who cheat more - “Cheating, after all, is all about being able to tell a story about why what we want is actually OK” - and has some interesting reflections on the necessity of self-deception, including a very moving and sometimes funny account of Dan’s own decades-long battle with a serious burn injury.
When I started writing this edition, I was halfway through Liar Liar before making a wonderful discovery: Hidden Brain, like many other (but not all) NPR podcasts, has transcripts (here’s Liar Liar’s).
This makes it much easier to integrate podcasts into my content process from now on.
This raises a question, however: once I’ve annotated the transcript, do I actually need to listen to the audio? The next podcast answers that question.
With the audio fresh in my memory and my notes in front of me, I see a difference in how I consumed these two different forms (transcript, audio) of the same content: my notes don’t reflect how much I appreciated the personal story - and hence the perspective - of the interviewee: data scientist Cathy O’Neil.
My annotations didn’t capture, for example, her frustration that the real world didn’t work like maths, where things could be proven true or false, and where mathematicians thanked each other for pointing out their errors. Cathy had her moment during the 2007 financial meltdown, when she discovered that “mathematics … was being used … so that people could go on doing essentially corrupt things but claim some kind of mathematical stamp of approval”. A theoretical physicist by training, I had a similar moment the same year, as described recently and last year.
But I was only reminded of this when I was listening to her voice. I’m guessing that the audio provides a stronger emotional connection between me and her knowledge, which should help embed it in my mind.
Cathy wrote Weapons of Math Destruction, which sets out the serious legal and ethical problems posed by the machine-learning AI algorithms increasingly used by companies, governments and other authorities. These algorithms are usually used to score people: does she get that job? how long should he go to jail? Moreover, they’re secret - subjects often don’t even know that they’re being scored, so the entire process is unaccountable. As a result, they are usually unfair and often illegal.
What makes this worse is that they’re also often wrong. These algorithms are ‘trained’ on existing data, and if that data reflects human prejudices, the algorithm will too. Cathy gave the example of Fox News, where women were systematically prevented from succeeding. A machine learning algorithm using their data to predict which job applicants would go on to succeed would filter out the women. And it takes a trained computer scientist to notice.
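The mechanism is easy to sketch in code. Here’s a deliberately tiny toy model - entirely made-up data, not O’Neil’s actual Fox News case or any real dataset - showing how a naive “success predictor” trained on prejudiced historical records simply reproduces the prejudice:

```python
# Toy sketch (hypothetical data): a naive model trained on biased
# records learns the bias. Illustrative only - not a real hiring system.
from collections import defaultdict

# Historical records of (gender, promoted). In this fictional company,
# women were systematically blocked from promotion, so the data
# encodes that prejudice rather than actual ability.
history = [("m", True), ("m", True), ("m", False),
           ("f", False), ("f", False), ("f", False)]

def train(records):
    """'Learn' the promotion rate per gender from past data."""
    totals, promoted = defaultdict(int), defaultdict(int)
    for gender, was_promoted in records:
        totals[gender] += 1
        promoted[gender] += was_promoted
    return {g: promoted[g] / totals[g] for g in totals}

def passes_screen(model, gender, threshold=0.5):
    """Screen an applicant the way a naive predictor would."""
    return model[gender] >= threshold

model = train(history)
print(passes_screen(model, "m"))  # True  - men pass the screen
print(passes_screen(model, "f"))  # False - women are filtered out
```

Nothing in the code “intends” to discriminate - the bias arrives silently through the training data, which is exactly why it takes a trained eye to notice.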
Cathy O’Neil advocates that companies bring in specialised algorithm auditors, and more generally concludes:
every data science institute…must take ethics seriously and have a class on ethics…. The regulations that already exist around anti-discrimination law, disparate impact, and fair hiring practices have to be enforced in the realm of big data
More Stuff I Think