<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" media="screen" href="/~files/atom.xsl"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:feedpress="https://feed.press/xmlns" xmlns:media="http://search.yahoo.com/mrss/">
  <feedpress:locale>en</feedpress:locale>
  <link rel="hub" href="http://feedpress.superfeedr.com/"/>
  <logo>https://static.feedpress.com/logo/markbao.png</logo>
  <id>https://markbao.com/</id>
  <title>Mark Bao</title>
  <updated>2020-10-13T21:27:56.467Z</updated>
  <generator>Architecture v3.0 alpha</generator>
  <author>
    <name>Mark Bao</name>
    <uri>https://markbao.com/</uri>
  </author>
  <link rel="alternate" href="https://markbao.com/"/>
  <link rel="self" href="https://feeds.markbao.com/feed/"/>
  <subtitle>Mark Bao works at the intersection of technology, strategy, and behavioral science, and writes about artificial intelligence, systems, and behavior.</subtitle>
  <icon>https://markbao.com/static/images/icon.png</icon>
  <rights>Copyright 2014-2019 Mark Bao</rights>
  <entry>
    <title type="html"><![CDATA[Things I learned about being more effective at Effective Altruism Global 2016]]></title>
    <id>https://markbao.com/journal/things-i-learned-at-effective-altruism-global-2016</id>
    <link href="https://markbao.com/journal/things-i-learned-at-effective-altruism-global-2016"/>
    <updated>2016-08-10T19:59:00.000Z</updated>
    <content type="html"><![CDATA[<p><img alt="How to solve (reduce uncertainty while regularly updating your approaches towards) your problems. - Stephen Frey, Planning Under Uncertainty, bit.do/planchecklist" width="100%" src="https://mb-prod.imgix.net/journal/2016/ea-global-2016/how-to-solve-your-problems.jpg"></p>
<p>I went to Effective Altruism Global 2016 this past weekend in Berkeley, CA, and came away with a lot of great thoughts from the sessions and talking with folks. Since it was my first EA Global, I went to a good number of sessions. Here are my key takeaways. (If you also went to EA Global, I encourage you to share your key takeaways, even briefly, and <a href="https://markbao.com/about">email me</a> if you do!)</p>
<p>I’ve listed each one with a one-sentence gist in this table of contents, so you can click into whichever one seems interesting to you. Or, you can just <a href="https://markbao.com/journal/things-i-learned-at-effective-altruism-global-2016#invent-technologies-that-invent-technologies">skip the contents and read the first one</a>.</p>
<ol>
<li><a href="https://markbao.com/journal/things-i-learned-at-effective-altruism-global-2016#invent-technologies-that-invent-technologies">Invent technologies that invent technologies</a> — Developing better tools (research tools or core technology) can be a multiplier on the possibilities of what you can do with research and technology.</li>
<li><a href="https://markbao.com/journal/things-i-learned-at-effective-altruism-global-2016#on-arguments-what-would-change-my-mind">On arguments: you know “what would change my mind?” better than you know “what would change their mind?”</a> — In a disagreement, we’re not good at knowing what would change someone else’s mind, so each person should specialize to the question they’re best at, and then exchange notes.</li>
<li><a href="https://markbao.com/journal/things-i-learned-at-effective-altruism-global-2016#subjective-to-objective-conversions">Making arguments more objective with subjective-to-objective conversions</a> — Using CFAR’s double crux method, we sometimes may be able to convert disagreements about subjective questions into disagreements about slightly more objective questions—which can be answered with data more easily.</li>
<li><a href="https://markbao.com/journal/things-i-learned-at-effective-altruism-global-2016#the-power-of-multipliers">The power of multipliers—people who help get other people into impactful areas of work</a> — People who advocate for others to work in a neglected but impactful area of work can have a huge multiplier impact.</li>
<li><a href="https://markbao.com/journal/things-i-learned-at-effective-altruism-global-2016#outstanding-career-capital">The types of outstanding career capital—of which I need more</a> — Outstanding career capital, like social impact achievements, extensive resources, or cutting-edge expertise, stands out much more than credentials, and we should be intentional about earning it.</li>
<li><a href="https://markbao.com/journal/things-i-learned-at-effective-altruism-global-2016#generating-new-models">Generating new models from just the information in your head</a> — That we’re able to sit down and generate new models and ideas without any outside information suggests that we haven’t explored all of the implications of the information we have in our minds at any given time.</li>
<li><a href="https://markbao.com/journal/things-i-learned-at-effective-altruism-global-2016#find-problems-that-nerd-snipe-you">Find problems that nerd-snipe you</a> — Speaks for itself. Also, being nerd-sniped by a problem might be a signal that you understand a field well enough to recognize what an interesting problem looks like.</li>
<li><a href="https://markbao.com/journal/things-i-learned-at-effective-altruism-global-2016#vicious-rock-paper-scissors">Vicious rock-paper-scissors</a> — Since my top priority at school is doing schoolwork, when I’m burned out on schoolwork, I don’t switch to the second-best thing, which is reading or personal projects, but instead to reading Reddit or something. Malcolm Ocean suggests this could be because it seems wrong to your brain to consciously choose the second-best option.</li>
<li><a href="https://markbao.com/journal/things-i-learned-at-effective-altruism-global-2016#meeting-people-with-intentionality">Meeting people with intentionality</a> — Alton Sun uses a set of questions to ask people at the conference. In general, being intentional about meeting people and asking questions that are shortcuts to what people care about makes for illuminating conversations.</li>
</ol>
<a name="invent-technologies-that-invent-technologies"></a>
<h3>1. Invent technologies that invent technologies</h3>
<p>Certain research discoveries have the potential to be the precursor to many other research discoveries. One thing that can have a huge impact is research tools. The way I think about this: if research is a tool that we use to illuminate some aspects of reality, we can do several things to illuminate more of it. We can have more people illuminating things in parallel. But we can also have people work on the illumination process itself to make the tool better. In some cases, this can lead to huge downstream improvements in how much illumination gets done.</p>
<p>Of course, we can see this in many cases in real life. The one I’m most familiar with is brain imaging, and specifically fMRI (though that one has recently come under fire). When used correctly, the level of illumination that these tools bring to understanding the brain can completely change a field of research or create new ones.</p>
<p>And that’s also the case with technologies that create technologies. Generalizing the concept of making better research tools that help us better illuminate reality, better technologies increase technology capability. What comes to mind here are mobile platforms and the blockchain. While this concept is widely known, it’s good to be reminded about the power of creating technologies that can have a multiplier effect, which can have large downstream effects.</p>
<p>What’s not very well known is the difference in value and risk between working on developing tools (new technologies) vs. using existing tools to make things. If the downstream value of developing tools in a field is, say, <code>10</code>, and the downstream value of using existing tools to make practical things is <code>1</code>, and the risk of developing tools is <code>20x</code> higher than that of working on making practical things (e.g. if the failure probability or difficulty of developing new tools and technologies is higher), it might make sense to just use existing tools to make things.</p>
<p>But there must be neglected fields out there where the risk-adjusted (expected?) value of building tools is pretty high, because if the value of tools is <code>100</code>, and the value of building practical things is <code>1</code>, even if it’s <code>20x</code> harder to make tools than practical things, it’s still worth developing research tools because they have a higher expected value. (I’m reminded of the lack of massive innovation in the financial technology sector.) This is pretty abstract, so let’s get concrete: certain fields are in need of better tools, and in certain fields, the risk-adjusted value of developing a tool might be really high, and maybe higher than building things with tools. If we can figure out, roughly, which fields this is true in, those fields could be good targets for optimization.</p>
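<p><em>A minimal sketch of the back-of-the-envelope arithmetic above, using the hypothetical numbers from the text (the <code>expected_value</code> helper is my own illustration, not an established formula):</em></p>

```python
def expected_value(downstream_value, relative_difficulty):
    # Risk-adjusted expected value: discount the downstream value
    # by how much harder (riskier) the work is to pull off.
    return downstream_value / relative_difficulty

practical = expected_value(1, 1)           # using existing tools: 1.0

# Crowded field: tools are worth 10x, but 20x harder to build.
tools_crowded = expected_value(10, 20)     # 0.5, so practical work wins

# Neglected field: tools are worth 100x, still 20x harder.
tools_neglected = expected_value(100, 20)  # 5.0, so tool-building wins
```

<p><em>The point of the sketch is just that the same 20x difficulty penalty flips the comparison once the downstream value of tools is large enough, which is exactly what makes neglected fields interesting targets.</em></p>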
<p><em>Thanks to Luca Rade for pointing me to Ed Boyden’s talk related to this. <a href="http://library.fora.tv/2016/08/06/engineering_revolutions">Video here</a>.</em></p>
<a name="on-arguments-what-would-change-my-mind"></a>
<h3>2. On arguments: you know “what would change my mind?” better than you know “what would change their mind?”</h3>
<p>An insight from a workshop taught by <a href="http://acritch.com/">Andrew Critch</a> on a technique from <a href="http://rationality.org/">CFAR</a>: when you disagree with someone on something, it’s best to think ‘what would change my mind’ instead of ‘what would change their mind’. Since you don’t have any direct insight into the other person’s mind, it’s better to think about what would change your mind on that thing. When both of you do this, <strong>each of you specializes on the thing that you’re good at</strong>—knowing what would change your mind—and both people can exchange the lists of things that would change their mind on the disagreement at hand. This reduces the talking-past-each-other that comes from trying to convince someone of something that they don’t even care about in the context of the disagreement.</p>
<p>In general, this is the mindset that you don’t understand another person’s mind as well as you might think. This is obvious, but I’ve found that keeping this fact in mind has made me a bit more mindful of the other person and has moved me away from giving advice like ‘You should do &lt;thing> because &lt;reason>’ and toward ‘Have you thought about doing &lt;thing>?’ or ‘Have people suggested you do &lt;thing>? If so, why haven’t you done that yet?’</p>
<a name="subjective-to-objective-conversions"></a>
<h3>3. Making arguments more objective with subjective-to-objective conversions</h3>
<p>The continuation of the above is this: when both of you share the lists of things that would change your respective minds, at times there may be common elements. The workshop leader, Andrew Critch, gave the example of ‘should our organization make fundraising our top priority in 2016?’ with Alice saying ‘yes’ and Bob saying ‘no’.</p>
<p>After Alice and Bob think about “what would change my mind” and compare notes, if Alice thinks ‘if money is <em>not</em> a bottleneck, I’ll change my mind’ and Bob thinks ‘if money <em>is</em> a bottleneck, I’ll change my mind’, then we have what’s called a <em>double crux</em>, a crux shared by both parties. At this point, we’ve narrowed the original question of fundraising down to the question of whether funds are a bottleneck.</p>
<p>One thing not mentioned during the workshop that I noticed is that in this case, we’ve moved from the more subjective question of ‘should we make fundraising the organization’s top priority’ to a <em>slightly</em> more objective question of ‘is money a bottleneck’. The latter question can be more easily answered with data—in a sense, we’ve made the argument at hand more concrete by doing a subjective-to-objective conversion, or at least made it <em>more</em> objective. And since this gets us incrementally closer to objectivity, in some cases one can iterate on this to get progressively more objective. This won’t always be the case when using something like double crux, but when it is, it can be pretty powerful.</p>
<a name="the-power-of-multipliers"></a>
<h3>4. The power of multipliers—people who help get other people into impactful areas of work</h3>
<p>People who didn’t have lots of direct involvement in advancing a field through research, but helped popularize it and get more researchers passionate about it, can have a huge impact. Let’s say that, hypothetically, there is some unknown person who was influential in getting five scientists interested in a field, and those scientists made massive discoveries that moved the field forward. Even though nobody knows the person who influenced those doing the object-level work (the actual research), they had an influence that I would say overshadows that of any one researcher they influenced.</p>
<p>That’s the whole idea behind the career coaching at the excellent <a href="https://80000hours.org/">80,000 Hours</a>, people advocating for more research to be done in neglected fields like AI safety and existential risk, and other people working on advocacy. Despite the lack of prestige in this position, this is an upstream multiplier—kind of like ‘invent technology that invents technologies’—that can have a significant downstream impact.</p>
<p>Of course, there can’t just be a hundred advocates and one researcher in that position, so there has to be some true ratio of advocates to researchers. Maybe we need people to advocate for the neglected field of figuring out the right ratio of people advocating for a neglected field to researchers. Seriously though, meta-meta-research aside, it would be useful to know whether a field needs more advocates or not.</p>
<p><em>Thanks to Zach Schlosser and Daniel Colson for discussing ideas related to this.</em></p>
<a name="outstanding-career-capital"></a>
<h3>5. The types of outstanding career capital—of which I need more</h3>
<p>Ben Todd from <a href="https://80000hours.org">80,000 Hours</a> did a talk on advanced career planning, and he talked about forms of career capital that are most valuable. These are:</p>
<ol>
<li>Impressive social impact achievements (which stand out more than credentials and open the door to meeting high-performing people)</li>
<li>Extensive resources or network</li>
<li>Cutting-edge expertise</li>
</ol>
<p>These are commonsensical, but how much of our time are we really spending on building <em>outstanding</em> career capital like this? I realized I’m not spending enough: while I’ve been focused on doing well academically at school and have more-or-less succeeded on that front, a high GPA is only a slight differentiator. On the other hand, impressive social impact achievements or cutting-edge expertise absolutely stand out more than credentials. My goals have updated in that direction, and I’ll be using this list as a barometer and something to check when I’m planning. More often than not, I find that something qualified as outstanding career capital only after the fact, instead of intentionally doing things that constitute outstanding career capital from the beginning.</p>
<p>This is further exacerbated by the number of ridiculously impactful and impressive people that I’ve met at EA Global—which is one of the reasons I loved being there: not being the smartest person in the room by a long shot is a great motivator.</p>
<a name="generating-new-models"></a>
<h3>6. Generating new models from just the information in your head</h3>
<p>One thing that was mentioned during a workshop by Emily Crotteau was fascinating: you can sit down and generate new models and ideas without any new information—just the information you have in your head. That means that you haven’t fully explored all of the latent information in your brain at any one moment—and by sitting down and building models, you can traverse those nodes in your knowledge graph and expand them. It’s fascinating to think of the depth of information that is already in our minds but that we haven’t explored yet, and it’s also a great reason to get lots of different kinds of information into your mind so you can make more connections as you sit down and expand them.</p>
<p>Or even better: have a conversation about them. For me, sitting down and just thinking and expanding ideas in my mind doesn't work as well as writing them down (Kevin Kelly: “I write in order to think… I don't actually know what I think until I write it. Writing is a way to find out what I think”) or talking about them.</p>
<p>For me, I suppose, I need some sort of writable buffer to record where I've been (writing something, saying something) to feel comfortable and anchored enough to explore adjacencies, and the act of articulation might make ideas and their adjacencies more concrete. And if you're talking to someone else, that's an increase in the variety of information that can be expanded and shared, not to mention that your expansion also triggers expansions in someone else's mind through thin air. Neat!</p>
<a name="find-problems-that-nerd-snipe-you"></a>
<h3>7. Find problems that nerd-snipe you</h3>
<p>This is a simple one. I haven’t heard this one in a while, though most folks in EA/rationality are familiar with it, so I’m throwing it in just in case. I heard this a lot at EA Global, and love the phrase. It’s also neat that the fact that some problem nerd-snipes you probably means that either 1) you understand the field well enough that you know how to interpret a problem and what an interesting problem looks like, or 2) the problem is commonsensical enough that you’re able to understand the gist or importance of it without knowing the field behind it (maybe it uses an <a href="https://markbao.com/journal/analogies-are-like-lossy-compression-for-complex-ideas">analogy to something you’re familiar with</a>). In that vein, the feeling of getting nerd-sniped by a problem might be a signal for the level at which you understand the basic ideas in a field.</p>
<p>The feeling of being nerd-sniped is pretty great. The most recent one I can think of is personal knowledge management systems. I’ve been <a href="https://medium.com/the-personal-knowledge-management-saga">writing about them</a> and <a href="https://mind.software/">building them</a> and can’t stop thinking about them. It’s especially great because I feel like I have the competency to go and build software to attack that problem. I’m looking forward to gaining that level of competency and interest when it comes to real-world AI problems.</p>
<p><em>Thanks to Nate Soares and Patrick LaVictoire from <a href="https://intelligence.org/">MIRI</a> for bringing this up.</em></p>
<a name="vicious-rock-paper-scissors"></a>
<h3>8. Vicious rock-paper-scissors</h3>
<p>One problem that I’ve come across when it comes to productivity is a maladaptive prioritization behavior I have. During the semester, I prioritize school over everything else and dedicate most waking hours to doing well at school. However, my efficiency isn’t as high as I’d like it to be, because when I start to get tired of schoolwork, I don’t do the second-best thing—read a book or work on personal projects—because ‘that’s not what I’m supposed to do with my time,’ especially if I’m not as far as I want to be with schoolwork.</p>
<p>Paradoxically, I then do the thing that is easy to do but not as valuable as reading or doing personal projects, like go on Reddit or clean my room or something like that. Despite the fact that the chain of value (from most value to least) goes like <em>A) schoolwork</em>, <em>B) personal projects</em>, <em>C) reading Reddit</em>, I choose <em>C</em> when I don’t want to do <em>A</em>, instead of choosing <em>B</em> which is the better option.</p>
<p>Malcolm Ocean calls this <a href="http://malcolmocean.com/2014/09/vicious-rock-paper-scissors/">vicious rock-paper-scissors</a>, though the way we talked about this was somewhat different. He suggested the idea that while choosing <em>B) personal projects</em> when not wanting to do schoolwork is the best thing to do, it requires conscious effort to choose to do personal projects, and consciously choosing personal projects when schoolwork isn’t even done yet goes against what I consciously think is the right thing to do. Instead, I less-than-consciously choose to go on Reddit or something since it doesn’t feel like a real “decision”.</p>
<p>Going on Reddit feels more like I’m taking a break from the most important work and less like I’m wasting time doing something that is not the most important work—but obviously, my time would be better spent reading or doing personal projects instead. There’s also the problem where reading Reddit or something can be framed as ‘something I’ll do for a few minutes, and then get back to work,’ unlike working on personal projects, which is a more involved process.</p>
<p>The best way to counteract this, I think, might be to reframe reading or working on personal projects as a form of rejuvenation <em>for the purposes of the most important task of schoolwork.</em> I’m not sure if this will be effective, since willpower and energy are also part of the equation, but the important thing is to try different strategies to get my behavior back to matching what has the highest value. The root cause is over-optimizing on schoolwork as my top priority, to the point where anything other than schoolwork feels like wasting time, which causes the adverse reaction of doing something with even less value because I don’t have to explicitly say ‘I’m not doing schoolwork’.</p>
<p><em>Thanks to <a href="http://malcolmocean.com/">Malcolm Ocean</a> for discussing this.</em></p>
<a name="meeting-people-with-intentionality"></a>
<h3>9. Meeting people with intentionality</h3>
<p>I love meeting people, but I hate networking. (Thanks to Twitter, I now know <a href="https://twitter.com/search?f=tweets&vertical=default&q=i%20love%20meeting%20people%20hate%20networking">this is a common feeling</a>.) As an introvert, I want to be <em>in</em> interesting conversations with interesting people, but I don’t like the process of getting to that point.</p>
<p>Alton Sun published an <a href="https://www.facebook.com/AltonSun/posts/10207087169951532?pnref=story">awesome post</a> about how he meets people at the event. He’s very intentional about the whole process, and one thing I really like is the question of ‘what updates have you made recently?’ (In rationality talk, an ‘update’ is when you change a belief based on new information.) I asked people a similar question of what updates they made as a result of the conference and what led to that update, which was fruitful in getting people to talk about stuff they cared about.</p>
<p>The great thing about EA Global is that 1) you can be quite confident that people are there to have in-depth conversations and aren’t just looking for small talk, which is not always the case with other events, and 2) I feel like people are more open to talking about these sorts of things, like updates, which other groups of people may not be as open to discussing when meeting someone totally new. I think these are characteristics of really open and engaging groups.</p>
<p>In a more general environment, having conversations with intentionality and asking questions like “what have you changed your mind about recently?” or one of my favorites that I <a href="https://www.quora.com/What-is-the-single-most-illuminating-question-I-can-ask-someone">stole from Quora</a>, “What's the most unexpected thing you've learned along the way?”, are like a shortcut to what people actually care about and thus a shortcut to illuminating discussions.</p>
<p><em>Thanks to Alton Sun for the prompts and inspiration.</em></p>]]></content>
    <author>
      <name>Mark Bao</name>
      <uri>https://markbao.com/</uri>
    </author>
  </entry>
  <entry>
    <title type="html"><![CDATA[Analogies are like lossy compression for complex ideas]]></title>
    <id>https://markbao.com/journal/analogies-are-like-lossy-compression-for-complex-ideas</id>
    <link href="https://markbao.com/journal/analogies-are-like-lossy-compression-for-complex-ideas"/>
    <updated>2016-08-01T12:50:00.000Z</updated>
    <content type="html"><![CDATA[<p>Scott H. Young wrote an excellent article <a href="https://www.scotthyoung.com/blog/2014/01/16/blogs-vs-books/">comparing and contrasting books vs. blogs for learning complex material</a>. One point I found interesting was the idea that certain ideas need to be taught in book format since they’re too complex to be broken down into articles. I thought: I’ve been able to understand complex ideas both in article form and in book form. What allows complex ideas to be squeezed down to something as short as an article?</p>
<p>I think of complex ideas as ones that are situated deeply in a knowledge tree. That is, imagining a complex idea as a point on a tree, complex ideas require more branches, or prerequisites, to reach than simple ideas do. To get to that point, you need to go through all the branches, understand them, and then you can understand the idea in question.</p>
<p>But some ideas follow patterns—patterns that people may have seen before in a different situation. Using an analogy is like compressing that knowledge tree into a more accessible format. By using an analogy, one removes the requirement of needing to understand all of those branches on the knowledge tree before being able to understand that complex idea. Instead, that complex idea can be communicated in a more compact way, using an analogy to fill in the points that would otherwise be time-consuming to understand. Analogies co-opt behaviors of patterns that we already understand to illustrate similar behaviors in the new idea.</p>
<h3>An example: quantum computing</h3>
<p>I think quantum computing is a pretty good subject to look for analogies in. Here’s one, from <a href="https://www.reddit.com/r/explainlikeimfive/comments/1quky2/eli5_quantum_computing/cdgutqs">bigb1 on Reddit’s Explain Like I’m Five</a> (edited for clarity):</p>
<blockquote>While a usual computer is like a mouse trying to find a way out of a labyrinth, a quantum computer would be like flushing the labyrinth with water and looking where it comes out.</blockquote>
<p>We’re already used to the pattern of how water moves through a space, and this explanation uses it to illustrate the idea that the power of a quantum computer is in its ability to find a solution instantaneously. Here’s a more detailed one by <a href="https://www.reddit.com/r/explainlikeimfive/comments/1quky2/eli5_quantum_computing/cdh0l3o">Rispetto on the same thread</a> (edited for clarity):</p>
<blockquote>Imagine you're in a maze, starting in the centre. There are a lot of paths going left, right, ahead, and backwards. Your typical computer will go &quot;hmm.. lets take every single route until we find which one leads to the exit.&quot; So it begins the process of going through every route, until it hits a dead end, then starts at the beginning and tries again, avoiding the previous path. It is a slow process. Even though computers can do this very quickly (millions of attempts per second) if the maze is big enough it will still take increasing amounts of time.<br><br>A quantum computer works on an entirely different level. Instead of taking each path separately, it takes them all at once. Logically speaking, one of those paths is the correct one, so therefore it finds it much quicker than a normal computer.</blockquote>
<p>This one expands on the idea of a maze to explain a bit more about the mechanism by which a quantum computer might solve a problem. We’re using the analogy of how we would move through a maze against how a quantum computer would. But these analogies haven’t really brought out <em>how</em> a quantum computer does that, only that it is able to basically do a lot of computations at once, more so than regular computers. (For a rough explanation on <em>how</em> they work, I think <a href="https://www.youtube.com/watch?v=JhHMJCUmq28">this video</a>, which connects quantum computing to regular/classical computing, explains it well.)</p>
<p>We see here that analogies do a pretty amazing job of compressing a complex idea into a more digestible format. But while we’ve gotten the gist of what a quantum computer does, our understanding is imperfect.</p>
<h3>Problems with using analogies</h3>
<p><strong>Analogies are lossy compression for knowledge trees.</strong> When you unpack an analogy, it’s not so much that you’re rebuilding the tree that leads to the thing that the analogy is pointing to. Unpacking an analogy is more like unpacking a ladder that gives you the capability to reach and understand that complex idea, but often skips details that are important to fully understanding that idea—like our example with quantum computing above. That’s obvious—but the gist is that one should cultivate a habit of skepticism and questioning when using analogies. When using an analogy to make sense of something, it’s best to try to be aware of where the analogy may be skipping details and what it is ignoring—like how this analogy of analogies being a ‘ladder,’ not the tree itself, is incomplete and doesn’t impart the full picture.</p>
<p><strong>Analogies can be surreptitiously invalid.</strong> Analogies are attractive because they give you the feeling that you understand something. But when we use an analogy, we have to not only ask the question of ‘does this make sense?’ but also ‘is this analogy valid in this situation?’ It’s easy to accept analogies that seem to allow us to make sense of things, but people often don’t question whether it’s valid to use a particular analogy in that case. For example, one could say that there should be fewer choices for healthcare plans in the health insurance market, tying it to the analogy that people have trouble choosing between many choices in other situations. That helps us make sense of the situation at hand (the number of choices of health insurance plans), but we also have to question whether the analogy to choice <em>in general</em> is valid in this situation. How might health insurance choices be different than other choices? Do people have an increased need, and thus potentially an increased propensity to spend time finding the right decision, when it comes to health insurance choices?</p>
<p>Yet, these aren’t reasons we should avoid using analogies to try to understand complex ideas. It’s much better to use an imperfect analogy to understand a complex idea (and potentially develop a desire to find out more) than to see it as totally intractable and hopeless to understand. We should keep in mind that analogies are lossy compression for complex ideas, and we need to question whether a given analogy is really applicable for a certain situation. But both of these pitfalls are actually great things—analogies give us a lens through which to view something complex and understand it well enough to ask questions about it.</p>
<p><small><em>Oh, and this entire article is an analogy between lossy compression and analogies, right? Damn, that's meta.</em></small></p>]]></content>
    <author>
      <name>Mark Bao</name>
      <uri>https://markbao.com/</uri>
    </author>
  </entry>
  <entry>
    <title type="html"><![CDATA[How do you maintain original thinking and avoid traditional patterns of thought when learning about a new field?]]></title>
    <id>https://markbao.com/journal/maintaining-original-thinking-and-paradigm-blindness</id>
    <link href="https://markbao.com/journal/maintaining-original-thinking-and-paradigm-blindness"/>
    <updated>2016-07-26T20:26:00.000Z</updated>
    <content type="html"><![CDATA[<p><em>These are initial thoughts and a request for comments on paradigm blindness, to be compiled into a more thorough article on the concept in the future.</em></p>
<p>One thing that I've been concerned about lately is how to maintain original thinking when diving into a new field. I think that we are subject to the conscious or unconscious effects of <em>paradigm blindness</em> when we learn about a new field. That is, once we learn how it's done traditionally, it's hard for us to come up with new, original ideas. This could mean that we end up producing incremental ideas that contribute relatively little to a field, rather than great ideas that really change it.</p>
<p>The reasoning is that, when we learn something new, we are also learning the frameworks and patterns of thought that accompany that new information. An example is the dual process theory of cognition popularized by Daniel Kahneman and Amos Tversky of System 1 (fast, instinctive thinking) vs. System 2 (slow, deliberative thinking)—when you learn this concept, you’re also learning to <em>think within this framework</em>.</p>
<p>But I think that the more we buy in to a framework of thinking as the ‘right way to think,’ the harder it is to think in novel, original ways. In other words, the more you think in traditional ways, the more derivative and less original your ideas become. Not only that, but I think this happens subconsciously—since these are patterns of thought, we might not even be aware that we are using them, or we may ignore the possibility that better patterns exist (via confirmation bias).</p>
<p>This phenomenon is known by a number of names, each of which sheds light on a different dimension of it. There’s <em>paradigm blindness</em> (and the related concept <a href="https://en.wikipedia.org/wiki/Einstellung_effect"><em>einstellung</em></a>), where thinking in a particular paradigm (framework) makes you unaware of, and potentially reject, other ways to think about something. There’s the <a href="https://en.wikipedia.org/wiki/Curse_of_knowledge"><em>curse of knowledge</em></a>, where people are unable to think outside of what they know to see how others see things—a common problem with teaching. Finally, there’s the <em>beginner’s mind</em>, which is sort of the reverse of the previous concepts: the idea that “In the beginner's mind there are many possibilities, in the expert's mind there are few”<sup>1</sup>—in essence, experts have learned that some things are possible and others are not; beginners, who lack that experience and its preconceptions, see more opportunities and (I think) sometimes challenge existing thinking.</p>
<p>In general, the idea is this: people who have learned in a traditional way, by learning everything there is to know about a field, may think in <em>traditional paradigms</em> as well. They may have tunnel vision, where they can’t think in new ways. The ideas they come up with tend to be evolutionary, not revolutionary, since they’re based in the patterns of thinking and concepts of the traditional paradigms. The questions I have are: Is this real? And what do we do about it? Here, I’ll be talking about some of the evidence behind paradigm blindness and potential defenses against it, and I’d like to hear from you if you’ve experienced it or have any ideas for defending against it.</p>
<h3>Some evidence for paradigm blindness</h3>
<p>(Not interested in this? Jump to the next section, <a href="https://markbao.com/journal/maintaining-original-thinking-and-paradigm-blindness#defending">defending against paradigm blindness</a>.)</p>
<p>There is little or no empirical research on paradigm blindness, but there is some theoretical support for the idea. The most apparent manifestation of paradigm blindness is its related concept, the <em>curse of knowledge</em>, in teaching. As anyone who has tried to teach something to someone has found, it’s pretty difficult to express complex concepts in a way that’s understandable for beginners. The way that you think of a complex concept is probably more high-level than how a beginner would start out thinking about it, and it’s hard to step outside of your already-solidified frameworks to express a concept in a way that relates easily to a beginner.</p>
<p>More relevant for paradigm blindness is Thomas Kuhn’s application of the concept in the philosophy of science. In <em>The Structure of Scientific Revolutions</em>, Kuhn talks about how scientific revolutions are caused by a shift in paradigms of thinking. But in the meantime:</p>
<blockquote>In 1996, Kuhn observed that as paradigms shape the ways in which scientists (or managers) are trained, they will then find it difficult to challenge those paradigms, because they “are committed to the same rules and standards for scientific practice. That commitment and the apparent consensus it produces are prerequisites for normal science, i.e., for the genesis and continuation of a particular research tradition.”<sup>2</sup></blockquote>
<p>Not only is it difficult to challenge those paradigms because doing so is unpopular; I think it’s also hard because you’ve solidified them as “what you should do,” and subconsciously use those patterns of thought.</p>
<h3>A concrete case</h3>
<p>Let’s say that you want to enter the field of artificial intelligence. This is an interesting case, since the field isn’t as codified as physics or biology (whose foundations are almost certainly true): there is not yet any consensus on the right foundations, and new ideas can still be developed. Would it be a good idea to avoid the traditional frameworks for a while and try to come up with some original ideas before diving into the traditional canon of knowledge? When diving into that knowledge, should you approach it in some particular way—with skepticism, perhaps—to maintain originality? Will this increase the chance that you come up with great ideas?</p>
<p>One related question would be: can people who are not experts in a field, who can think outside the box because they’ve never really been indoctrinated by the traditional set of knowledge, come up with great ideas? Can they come up with what Google[x] director Astro Teller calls <a href="http://www.wired.com/2013/02/moonshots-matter-heres-how-to-make-them-happen/">10x ideas, not 10% ideas</a>? Or, are those who really know a field deeply more likely to come up with great ideas, like this:</p>
<figure><img height="196" src="https://i.imgur.com/fD4wmIB.png"></figure>
<a name="defending"></a>
<h3>Defending against paradigm blindness</h3>
<p><strong>Writing initial thoughts before diving into a field.</strong> I use this approach most of the time: before diving into a field, I write down my own thoughts about it, which I think are somewhat original. This is hard to do, since your initial thoughts on something you know nothing about almost always 1) suck or 2) are common ideas that have been thought of before. But sometimes there’s something decent in there. Maybe as I continue to learn about a field, I can keep updating those ideas and see them in new contexts, which leads to another strategy:</p>
<p><strong>Incorporating reflection while learning traditional knowledge.</strong> I've also thought that you might have your most original thinking when you know just enough to know how to think about a field, but not enough to really have solidified your positions on things. If that's true, the originality might look like the shape of a curve, maybe a bell curve. You start out not being original, learn a bit of the foundation and then you're pretty original, and then you learn too much and solidify too much and then you're not original at all. Maybe in the middle of learning about a field, when you know enough information that it points you into interesting directions with your thinking, you can come up with ideas that are reasonably original and also reasonably interesting.</p>
<p><strong>Seeking out different viewpoints.</strong> For fields that are reasonably developed, different viewpoints might exist. Learning about different approaches to artificial intelligence, for example, allows you to see the differences between them. You might not see the boundary conditions of, say, the model of System 1 vs. System 2 thinking until you learn about <em>another</em> model that seems to have explanatory power that System 1 vs. System 2 doesn’t cover. From there, you might be able to find space in between those models that can lead to an interesting idea – though one that is likely in reaction to (and thus derivative of) those other ideas.</p>
<p><strong>A habit of skepticism.</strong> Being able to continually and habitually ask questions about what you’re learning. What are the shortcomings of a concept? What other concepts might similarly explain this? If this concept turned out to be wrong, what would be the weakest link in the concept that could have caused its invalidity? Cultivating a habit of skepticism might help during the process of learning.</p>
<p>Some other ideas: <a href="https://ask.metafilter.com/298593/How-can-you-maintain-original-thinking-when-learning-a-new-field#4325599">teaching the field to someone else</a>, and <a href="https://ask.metafilter.com/298593/How-can-you-maintain-original-thinking-when-learning-a-new-field#4325609">learning a field in a nonstandard way</a>.</p>
<p>These seem good, but I’m not fully satisfied with this yet. What are your ideas on how to maintain originality and ‘outside-the-box thinking’ when entering a new field? How do you avoid thinking in traditional patterns of thought and falling subject to paradigm blindness?</p>
<p><sup>1</sup>: Suzuki, Shunryu. <em>Zen Mind, Beginner's Mind.</em> Also <a href="https://en.wikipedia.org/wiki/Shoshin">Shoshin</a>.</p>
<p><sup>2</sup>: Fischbacher-Smith, Dennis. <a href="http://sk.sagepub.com/reference/encyclopedia-of-crisis-management/n245.xml">Paradigm Blindness</a>, in Encyclopedia of Crisis Management (Paywall).</p>]]></content>
    <author>
      <name>Mark Bao</name>
      <uri>https://markbao.com/</uri>
    </author>
  </entry>
  <entry>
    <title type="html"><![CDATA[Great books I read in 2015]]></title>
    <id>https://markbao.com/journal/great-books-i-read-in-2015</id>
    <link href="https://markbao.com/journal/great-books-i-read-in-2015"/>
    <updated>2015-12-29T09:47:00.000Z</updated>
    <content type="html"><![CDATA[<p>What defines a great book? For me, one that changes how I think in a fundamental way, or expands my gamut of understanding. Here are the great books I read this year and the books I’m looking forward to next year.</p>
<h3>Contents</h3>
<ul>
<li><strong><a href="https://markbao.com/journal/great-books-i-read-in-2015#excellent">Excellent books</a></strong>
<ul>
<li><a href="https://markbao.com/journal/great-books-i-read-in-2015#mastery">Mastery</a><em> — by Robert Greene</em></li>
<li><a href="https://markbao.com/journal/great-books-i-read-in-2015#elon-musk">Elon Musk</a><em> — by Ashlee Vance</em></li>
<li><a href="https://markbao.com/journal/great-books-i-read-in-2015#thinking-in-systems">Thinking In Systems</a><em> — by Donella Meadows</em></li>
<li><a href="https://markbao.com/journal/great-books-i-read-in-2015#seeking-wisdom">Seeking Wisdom: From Darwin to Munger</a><em> — by Peter Bevelin</em></li>
<li><a href="https://markbao.com/journal/great-books-i-read-in-2015#the-future">The Future: Six Drivers of Global Change</a><em> — by Al Gore</em></li>
</ul>
</li>
<li><strong><a href="https://markbao.com/journal/great-books-i-read-in-2015#great">Great books</a></strong>
<ul>
<li><a href="https://markbao.com/journal/great-books-i-read-in-2015#effective-altruism">The Effective Altruism Handbook</a><em> — edited by Ryan Carey</em></li>
<li><a href="https://markbao.com/journal/great-books-i-read-in-2015#justice">Justice: What's the Right Thing to Do?</a><em> — by Michael J. Sandel</em></li>
<li><a href="https://markbao.com/journal/great-books-i-read-in-2015#how-we-got-to-now">How We Got to Now</a><em> — by Steven Johnson</em></li>
<li><a href="https://markbao.com/journal/great-books-i-read-in-2015#snow-crash">Snow Crash</a><em> — by Neal Stephenson</em></li>
<li><a href="https://markbao.com/journal/great-books-i-read-in-2015#influence">Influence: The Psychology of Persuasion</a><em> — by Robert B. Cialdini</em></li>
<li><a href="https://markbao.com/journal/great-books-i-read-in-2015#smartcuts">Smartcuts</a><em> — by Shane Snow</em></li>
</ul>
</li>
<li><strong><a href="https://markbao.com/journal/great-books-i-read-in-2015#good">Good books</a></strong>
<ul>
<li><a href="https://markbao.com/journal/great-books-i-read-in-2015#misbehaving">Misbehaving: The Making of Behavioral Economics</a><em> — by Richard Thaler</em></li>
<li><a href="https://markbao.com/journal/great-books-i-read-in-2015#how-to-read-a-book">How to Read a Book</a><em> — by Mortimer J. Adler and Charles Van Doren</em></li>
<li><a href="https://markbao.com/journal/great-books-i-read-in-2015#liberal-education">In Defense of a Liberal Education</a><em> — by Fareed Zakaria</em></li>
<li><a href="https://markbao.com/journal/great-books-i-read-in-2015#college">College: What It Was, Is, and Should Be</a><em> — by Andrew Delbanco</em></li>
</ul>
</li>
<li><strong><a href="https://markbao.com/journal/great-books-i-read-in-2015#looking-forward">Books I’m looking forward to in 2016</a></strong></li>
</ul>
<a name="excellent"></a>
<h3>Excellent books</h3>
<a name="mastery"></a>
<h4><a href="https://www.goodreads.com/book/show/13589182-mastery">Mastery — by Robert Greene</a></h4>
<a href="https://www.goodreads.com/book/show/13589182-mastery">
<img align="right" width="200" src="https://mb-prod.imgix.net/journal/2015/books-2015/mastery.jpg">
</a>
<blockquote>The great chess Master Bobby Fischer spoke of being able to think beyond the various moves of his pieces on the chessboard; after a while he could see “fields of forces” that allowed him to anticipate the entire direction of the match… In all of these instances, these practitioners of various skills … were suddenly able to grasp an entire situation through an image or an idea, or a combination of images and ideas. They experienced this power as intuition, or a fingertip feel. (256)</blockquote>
<ul>
<li><strong>Why it’s great:</strong> One of the few books I've found that focuses on <em>long-term</em> skill and personal development for excellence.</li>
<li><strong>Key takeaway:</strong> Mastery is the process of gaining knowledge in the right ways, in a field that you feel closely connected to, while arranging support structures (especially mentors) that increase your chances of gaining that knowledge, then applying what you've learned to projects and experiments, with the ultimate goal of attaining a deep, intuitive understanding of your field from which you make progress. The intuition part is essential: his theory is that once we gain deep knowledge about a field, when we face new problems we can activate the disparate parts of that deep memory to turn up solutions. The <em>process</em> of gaining mastery seems to be an art in and of itself – and Greene talks about a few paths that others have taken to this art.</li>
<li><strong>Review</strong>: The book gives a good framework for developing mastery. It is highly traditional (heavily focused on apprenticeship), but still has good tried-and-true ideas. Greene’s writing style leaves a lot to be desired (too many assumptions), but the framework and mini-biographies in this book are great.</li>
<li><a title="Mark Bao's Review of Mastery by Robert Greene" href="https://www.goodreads.com/review/show/571037162">Full review on Goodreads.</a></li>
</ul>
<a name="elon-musk"></a>
<h4><a href="https://www.goodreads.com/book/show/22543496-elon-musk">Elon Musk: Tesla, SpaceX, and the Quest for a Fantastic Future — by Ashlee Vance</a></h4>
<a href="https://www.goodreads.com/book/show/22543496-elon-musk">
<img align="right" width="200" src="https://mb-prod.imgix.net/journal/2015/books-2015/elon-musk.jpg">
</a>
<blockquote>People who have spent significant time with Musk will attest to his abilities to absorb incredible quantities of information with near-flawless recall. It’s one of his most impressive and intimidating skills and seems to work just as well in the present day as it did when he was a child vacuuming books into his brain. After a couple of years running SpaceX, Musk had turned into an aerospace expert on a level that few technology CEOs ever approach in their respective fields. (Loc 3421)</blockquote>
<ul>
<li><strong>Why it’s great:</strong> An incredibly inspiring biography about Elon Musk that really goes into his background and the stories behind Tesla and SpaceX. Musk’s story drives this book, even if the biography itself is slightly lacking.</li>
<li><strong>Key takeaway:</strong> This book brought up one key question: do you have to be a bit reckless to be good? Musk was reckless in two areas: the risks he took, and the way he manages his companies. The near-death experiences of Tesla and SpaceX detailed in this book are gripping, showing how far to the edge Musk went and how his seemingly reckless behavior saved these companies from failure. And the way he manages those companies is similar to Steve Jobs’ harsh management style. Is it necessary to be that harsh to be successful?</li>
<li><strong>Other notes:</strong> It really drives home how much of a genius Musk is. Consider that in his childhood, he ran out of books to read at the local library and the school library, and sometimes would read for ten hours a day. Dude also has a memory that's not only photographic, but he can wrangle images and numbers and relationships between them in his head. </li>
<li><strong>Review:</strong> Fantastic. It does lack a more ‘inner’ understanding of Musk, though. For example, Vance says that Musk can be a bit cold and non-emotional because he empathizes differently than others – he empathizes with the human species in general, not just individuals. What would Musk think of this characterization? Hard to say. Wish there were more access on this front.</li>
<li><a title="Mark Bao's review of Elon Musk on Goodreads" href="https://www.goodreads.com/review/show/1287452093">Full review on Goodreads.</a></li>
</ul>
<a name="thinking-in-systems"></a>
<h4><a href="https://www.goodreads.com/book/show/3828902-thinking-in-systems">Thinking In Systems — by Donella Meadows</a></h4>
<a href="https://www.goodreads.com/book/show/3828902-thinking-in-systems">
<img align="right" width="200" src="https://mb-prod.imgix.net/journal/2015/books-2015/thinking-in-systems.png">
</a>
<blockquote>The rules of the system define its scope, its boundaries, its degrees of freedom. Thou shalt not kill. Everyone has the right of free speech. Contracts are to be honored … They are high leverage points. Power over the rules is real power. That’s why … the Supreme Court, which interprets and delineates the Constitution — the rules for writing the rules — has even more power than Congress. If you want to understand the deepest malfunctions of systems, pay attention to the rules and to who has power over them. (158)</blockquote>
<ul>
<li><strong>Why it’s great:</strong> A great introduction to system dynamics that changes how you think (or at least, changed how I think) about, well, everything. Everyone should read this to better understand the world around us. Thanks to <a href="http://danshipper.com/">Dan Shipper</a> for the recommendation.</li>
<li><strong>Key takeaway:</strong> We break down a system into its <em>outcomes</em> and the processes that are <em>generative</em> of those outcomes. These processes can be incredibly complex, but we can model them using certain primitives, like stocks and flows, and analyze their behavior using the paradigms of feedback loops, oscillation, delays, and self-organization. Through this, we can understand bad projections, the tragedy of the commons, and similar outcomes. A key idea: the <em>paradigms</em>—the assumptions and foundations, things like ‘property can be owned,’ that we hold that eventually lead to the behavior and goals of a system—are the important leverage point at which systems can be changed. Finally, complex systems are by definition unpredictable – but if we have a better vocabulary and can build better models, we can build better systems.</li>
<li><strong>Other notes:</strong> Dan Shipper recommends Complex Adaptive Systems by John Miller and Scott Page as another book about complex systems.</li>
</ul>
<a name="seeking-wisdom"></a>
<h4><a href="https://www.goodreads.com/book/show/1995421.Seeking_Wisdom">Seeking Wisdom: From Darwin to Munger — by Peter Bevelin</a></h4>
<a href="https://www.goodreads.com/book/show/1995421.Seeking_Wisdom">
<img align="right" width="200" src="https://mb-prod.imgix.net/journal/2015/books-2015/seeking-wisdom.png">
</a>
<blockquote>I think the best question is, “Is there anything I can do to make my whole life and my whole mental process work better?” And I would say that developing the habit of mastering the multiple models which underlie reality is the best thing you can do. It's just so much fun - and it works so well. —Charlie Munger (189)</blockquote>
<ul>
<li><strong>Why it’s great:</strong> A mind-expanding collection of mental models, common misjudgments, and ‘tools for better thinking’.</li>
<li><strong>Key takeaway</strong>: We can gain wisdom through a systematic process, and there are certain building blocks – mental models – that we should aim to build. This book details many mental models, and it also spends a lot of time talking about why and how we make misjudgments. It’s hard to summarize this book, since it’s really a collection of mental models and traps to watch out for, but it’s a valuable jumping-off point. A key focus is on being well-calibrated in your own confidence, which is something that Buffett and Munger apply successfully to their business – they only invest in things that they can really understand.</li>
<li><strong>Other notes:</strong> This book is a favorite of Shane Parrish from Farnam Street, and there’s a fantastic <a title="List of mental models on Farnam Street" href="https://www.farnamstreetblog.com/mental-models/">list of mental models</a> (some of which have posts dedicated to them) on the Farnam Street site.</li>
</ul>
<a name="the-future"></a>
<h4><a href="https://www.goodreads.com/book/show/16054830-the-future">The Future: Six Drivers of Global Change — by Al Gore</a></h4>
<a href="https://www.goodreads.com/book/show/16054830-the-future">
<img align="right" width="200" src="https://mb-prod.imgix.net/journal/2015/books-2015/the-future.png">
</a>

<blockquote>[In a future with an inhospitable Earth,] the story would still be told: in the early decades of the twenty-first century, a generation gifted by those that came before them with the greatest prosperity and most advanced technologies the Earth had ever known broke faith with the future. They thought of themselves and enjoyed the bounty they had received, but cared not for what came after them. Would they forgive us? Or would they curse us with the dying breaths of each generation to come? (L6768)</blockquote>
<ul>
<li><strong>Why it’s great:</strong> An ambitious and wide-ranging look at the huge changes that the future holds – from artificial intelligence to brain mapping to the shift from politics to markets to climate change – and what we need to do to face these changes. It’s dense and boring at points, with frequent connections back to climate change and a center-left bias, but it’s nonetheless eye-opening. Thanks to <a title="Alex Godin" href="http://alex.nyc/">Alex Godin</a> for the recommendation.</li>
<li><strong>Key takeaway:</strong> We are at a crossroads in our species. The confluence of globalization, a hyperconnected world, the decline of the U.S. and the rise of the developing world, massive growth on all fronts, imminent biological breakthroughs that we are not morally ready for, and climate change—all of these things lead us to a future that is wildly unpredictable. With all of these changes, we are dealing with highly unpredictable elements—<em>and yet our ability to make decisions as a species has atrophied</em>. Corporate interests and a broken political system make the U.S. unable to provide global leadership. There are exciting changes ahead, but we need better systems for consensus to be able to face the difficult decisions—the morality of cloning, the automation of work, the destruction of the environment—that we will have to face in the future.</li>
<li><strong>Other notes:</strong> To get a sense of the wide array of topics that Gore talks about in his book, check out the <a title="Extended eBook resources for The Future by Al Gore" href="http://content.randomhouse.com/assets/9780679644309/index.php">extended resources for the book</a> for neat mind-maps of all of the topics. Also, reading this has made me think that I would vote for the Gore that comes across in this book: he strongly believes in capitalism but is against corporate greed, believes in technology but knows its limitations, understands privacy, and of course, knows climate change.</li>
</ul>
<a name="great"></a>
<h3>Great books</h3>
<a name="effective-altruism"></a>
<p><strong><a href="http://effective-altruism.com/ea/hx/effective_altruism_handbook_now_online/">The Effective Altruism Handbook</a></strong><em> — edited by Ryan Carey</em> — This isn’t technically a book, but a collection of essays by people associated with the Effective Altruism movement. The key question is: how can we do the most good? My key shift was to start thinking about ‘the most good’ in terms of expected value. Donating $5,000 to an anti-malaria foundation can save a life, and thus has a higher EV than donating it somewhere the expenditure per life saved might be $50,000 or more. This was important for me in thinking about where I want to spend my energy in the next few years.</p>
<a name="justice"></a>
<p><strong><a href="https://www.goodreads.com/book/show/6452731-justice">Justice: What's the Right Thing to Do?</a></strong><em> — by Michael J. Sandel</em> — An important book that examines different systems of justice and talks about scenarios and the difficulties of each system. Starting with utilitarianism, he then goes through libertarianism, Kant's categorical imperative, Rawls' theory of justice, and virtue ethics to talk about what the right thing to do is – how to maximize justice. The system of justice that he settles on was unconvincing to me, but it’s still an important read that changed how I think. <a title="Mark Bao's review of Justice: What's The Right Thing To Do?" href="https://www.goodreads.com/review/show/1007740985?book_show_action=false">Full review here.</a></p>
<a name="how-we-got-to-now"></a>
<p><strong><a href="https://www.goodreads.com/book/show/20893477-how-we-got-to-now">How We Got to Now: Six Innovations That Made the Modern World</a></strong><em> — by Steven Johnson</em> — A really entertaining and fun read about the innovations that led to where we are today. My favorite one is how the invention of glass enabled so much of what we know about medicine today – I’ll leave the rest for your entertainment. The key idea in this book is that inventions and discoveries are, by nature, networked, and exhibit what Johnson calls the “hummingbird effect”. Each discovery expands what Stuart Kauffman calls the “adjacent possible”, the scope of possibilities now unlocked by that new discovery. I love that. <a title="Mark Bao's review of How We Got To Now: Six Innovations That Made the Modern World" href="https://www.goodreads.com/review/show/1091441764?book_show_action=false">Full review here.</a></p>
<a name="snow-crash"></a>
<p><strong><a href="https://www.goodreads.com/book/show/830.Snow_Crash">Snow Crash</a></strong><em> — by Neal Stephenson</em> — I really don’t read enough fiction. Mike Godwin is right on target when he describes this book as the &quot;manic apotheosis of cyberpunk science fiction,&quot; at least the manic part. What I both loved and disliked about this book was its deep interconnectedness of technology and reality, man and machine, and all the ridiculous Sumerian mythology. It’s silly at times but overall an awesome read, and, so I hear, a must-read sci-fi book. Thanks to Will Johnson for the recommendation. <a title="Mark Bao's review of Snow Crash" href="https://www.goodreads.com/review/show/936865520?book_show_action=false">Full review here.</a></p>
<a name="smartcuts"></a>
<p><strong><a href="https://www.goodreads.com/book/show/20910174-smartcuts">Smartcuts: How Hackers, Innovators, and Icons Accelerate Business</a></strong><em> — by Shane Snow</em> — This book is all about how to work smarter and find better paths to success than the conventional way, and it’s actually good, unlike most books of this ilk. Includes a lot of tools that you can use – fast feedback, ladder switching, platforms (e.g. the concept of multiplication, not the times table), harnessing waves (e.g. Michelle Phan’s understanding of the YouTube trending algorithm) – to implement these ‘smartcuts’ and reach success faster. <a title="Book notes from Smartcuts" href="https://www.penflip.com/markbao/book-notes-smartcuts">Read my book notes</a> for a more complete summary.</p>
<a name="influence"></a>
<p><strong><a href="https://www.goodreads.com/book/show/28815.Influence">Influence: The Psychology of Persuasion</a></strong><em> — by Robert B. Cialdini</em> — This is a classic review of a lot of the findings in social psychology about influence, such as the impact of liking, consistency, reciprocation, social proof, and all that. Though I’m biased since I already knew much of it, it’s still cool to see how it’s all used in the real world. Must-read for people who are new to social psychology.</p>
<a name="good"></a>
<h3>Good books</h3>
<a name="misbehaving"></a>
<p><strong><a href="https://www.goodreads.com/book/show/23316488-misbehaving">Misbehaving: The Making of Behavioral Economics</a></strong><em> — by Richard Thaler</em> — A great history of behavioral economics and how it came to be from one of the most influential behavioral economists. Touches on a lot of the key findings in the process and how <em>homo economicus</em> is a flawed concept. A fun if mostly historical read.</p>
<a name="how-to-read-a-book"></a>
<p><strong><a href="https://www.goodreads.com/book/show/567610.How_to_Read_a_Book">How to Read a Book</a></strong><em> — by Mortimer J. Adler and Charles Van Doren</em> — This was a good read because it laid out the ideal state for reading a book: analytically, deeply, and in concert with other books. It includes a lot of excellent questions to ask yourself and ways to engage with the text that can really improve comprehension, but it’s hard to implement all of these ideas in the real world because it’s time-consuming. So, too, is reading this book, so I suggest checking out one of the <a title="“How to Read a Book” by Mortimer J. Adler &amp; Charles Van Doren" href="https://carmenrodrigueza.wordpress.com/2013/01/24/how-to-read-a-book-by-mortimer-j-adler-charles-van-doren/">many</a> <a title="Vision Room Sums: How To Read A Book" href="http://visionroom.com/sums/Sums-How-to-Read-a-Book.pdf">summaries</a> first.</p>
<a name="liberal-education"></a>
<p><strong><a href="https://www.goodreads.com/book/show/24724590-in-defense-of-a-liberal-education">In Defense of a Liberal Education</a></strong><em> — by Fareed Zakaria</em> — Zakaria makes the argument that the liberal arts, far from being obsolete, are one of the few enduring things in a quickly changing world. The skills of exposition and rhetoric will always be useful, but they’re soft skills that we don’t prioritize highly enough. In summary, a liberal education teaches you how to write (which teaches you how to think), how to speak (which, um, speaks for itself), and how to learn. We need to nurture that.</p>
<a name="college"></a>
<p><strong><a href="https://www.goodreads.com/book/show/13518191-college">College: What It Was, Is, and Should Be</a></strong><em> — by Andrew Delbanco</em> — Goes through the history of higher education and how the modern conceptualization of it in the United States is missing the mark. He makes the common argument that when we focus on vocational training in college, we miss out on the value of college as a place to think about the harder questions of life, ethics, and meaning, questions that science cannot, and usually does not attempt to answer. <a title="Mark Bao's review of College: What It Was, Is and Should Be" href="https://www.goodreads.com/review/show/1280715229?book_show_action=false">Full review here.</a></p>
<a name="looking-forward"></a>
<h3>Books I’m looking forward to in 2016</h3>
<ul>
<li>Make It Stick: The Science of Successful Learning<em> — by Peter C. Brown (thanks Evan Samek)</em></li>
<li>The Information: A History, A Theory, A Flood<em> — by James Gleick</em></li>
<li>Our Final Invention: Artificial Intelligence and the End of the Human Era<em> — by James Barrat</em></li>
<li>Superintelligence: Paths, Dangers, Strategies<em> — by Nick Bostrom</em></li>
<li>The Structure of Scientific Revolutions<em> — by Thomas Kuhn (thanks Dan Shipper)</em></li>
<li>Theoretical Foundations of Artificial General Intelligence<em> — edited by Pei Wang and Ben Goertzel</em></li>
<li>On Intelligence<em> — by Jeff Hawkins</em></li>
<li>Structures: Or Why Things Don't Fall Down<em> — by J.E. Gordon</em></li>
<li>Rationality: From AI to Zombies<em> — by Eliezer Yudkowsky</em></li>
<li>Doing Good Better<em> — by William MacAskill</em></li>
<li>Roguelike<em> — by Sebastian Marshall</em></li>
<li>The Remains of the Day<em> — by Kazuo Ishiguro</em></li>
<li>Impro: Improvisation and the Theatre<em> — by Keith Johnstone</em></li>
<li>Tao Te Ching<em> — by Lao Tzu (thanks Sebastian Marshall)</em></li>
<li>A Brief History of Time<em> — by Stephen Hawking (thanks Peter Boyce)</em></li>
<li>Sapiens: A Brief History of Humankind<em> — by Yuval Harari</em></li>
<li>The Box: How the Shipping Container Made the World Smaller and the World Economy Bigger<em> — by Marc Levinson</em></li>
<li>Nonviolent Communication<em> — by Marshall B. Rosenberg</em></li>
<li>Between the World and Me<em> — by Ta-Nehisi Coates</em></li>
</ul>
<p>Any recommendations? Send 'em to me below.</p>]]></content>
    <author>
      <name>Mark Bao</name>
      <uri>https://markbao.com/</uri>
    </author>
  </entry>
  <entry>
    <title type="html"><![CDATA[Question: What is the use of college?]]></title>
    <id>https://markbao.com/journal/question-what-is-the-use-of-college</id>
    <link href="https://markbao.com/journal/question-what-is-the-use-of-college"/>
    <updated>2015-04-27T12:37:00.000Z</updated>
    <content type="html"><![CDATA[<p>I’ve recently realized why I’m bad at regularly publishing blog posts: it’s because I think I know very little. To publish something, you have to be acutely self-assured in the veracity of what you’re writing, which means either a) it’s something you have deep knowledge and experience in that you can speak with authority on, or b) it’s a personal anecdote, which you’re inherently sure about. Call it rationalism or a crippling need to self-question and avoid overconfidence bias—the result is that I don’t think I have a lot of answers.</p>
<p>What I do have are questions, and from something I read last year from Paul Graham, I think the best approach to work with these questions is to <strong>publicly ask well-framed questions</strong>. In <a href="http://www.paulgraham.com/know.html"><em>How You Know</em></a>, Graham quotes Constance Reid who, in this snippet, talks about mathematician David Hilbert:</p>
<blockquote>Hilbert had no patience with mathematical lectures which filled the students with facts but did not teach them how to frame a problem and solve it. He often used to tell them that “a perfect formulation of a problem is already half its solution.”</blockquote>
<p>Likewise, I think a well-framed question is essential for exploring a problem. So let’s get to it. The question I have—which I’ll add detail to shortly—is:</p>
<h3>What's the use of college?<br>What is college good for?</h3>
<h3>Background</h3>
<p>I ask this question because I took a nontraditional path: I enrolled in college, stayed for my freshman year, had a small success on a product in my winter break, and left college to start a startup and travel. Throughout those years, I learned on my own, reading books and papers, browsing endless Wikipedia articles, and talking to people who had similar interests—taking the same self-learning path that many other nontraditional students did.</p>
<p>Self-learning works for tech. But I was switching into the field of behavioral science and social science—does self-learning still work well for the sciences and for fields that are academia-driven instead of industry-driven? Does it make sense to give traditional education another shot to see if it’s the best place to learn about and gain credibility in a field?</p>
<p>I returned to college to experiment with this question. The hypothesis that I wanted to test is that college continues to be the ideal place to build foundations—knowledge in a specific field, a liberal arts background, a toolkit for critical thinking, and a social foundation in terms of meeting fascinating folks that you would otherwise not be able to meet. Or, is there a better place to do this?</p>
<h3>Getting concrete</h3>
<p>Let’s frame this question a bit more specifically. I have here a cascade of questions that emerge when I think about the question of “What is the use of college?”</p>
<ul>
<li><p>What function does college serve for individuals? How does it benefit people?</p>
<ul>
<li>Is college good for <em>learning</em>? That is, is it better than nontraditional or self-directed learning for building knowledge in a specific field of knowledge?</li>
<li>Is college good for building a foundation of knowledge in the liberal arts? Does it foster a “life of the mind” more than self-learning?</li>
<li>Is college a good place to meet interesting, motivated people? Is it better than meeting other people in a self-directed way, such as by identifying people in a field and specifically talking to them?</li>
<li>How much does the credential that comes from obtaining a degree matter, especially in the sciences?</li>
<li>What major benefits does college have over self-directed learning? What major benefits does self-directed learning have over college?</li>
<li>Is college good for income? — <em>While crucial, it’s not something that I’m concerned with right now, and anyway, research suggests that the answer to this is a resounding</em> yes<em>.</em></li>
</ul></li>
<li>What function does college serve for society? — <em>While interesting, this isn’t something I want to focus on at the moment. William Deresiewicz has a lot to say about this in</em> Excellent Sheep<em>.</em></li>
</ul>
<p>A greater understanding of this might tell us what the strengths and weaknesses of college and other forms of learning are, potentially allowing us to see what works for each and find areas of improvement. Issues abound, however: there is not one “college” but many, and talking about “college” is necessarily a generalization.</p>
<div><img src="https://i.imgur.com/8SFgXTE.jpg" width="100%"></div>
<div>I couldn't find a Post-It.</div>
<h3>Thinking out loud: some initial thoughts</h3>
<h4>Is college good for learning? Is it better than nontraditional or self-directed learning for building knowledge in a specific field of knowledge?</h4>
<p>In my own experience, the benefits of college for learning lie in its ability to create a structure around your learning. That structure—lectures, coursework, exams, and other forms of assessment—creates a motivational structure that pushes people to learn. Yet the downside, in my experience, is that this creates a quandary of motivation that <a href="http://www.ted.com/talks/dan_pink_on_motivation">Daniel Pink touches on</a>: it’s <em>extrinsic</em> motivation—a desire to hit the goal posts set up by someone else and to make the grade—which is a lot less engaging than intrinsic motivation, an internal desire to learn for the sake of learning. In traditional education, at least a part of one’s motivation to learn stems from the need to perform well and pass a course.</p>
<p>And throughout the past 1.5 years of school, I’ve seen something similar: I do generally like to learn about what I’m learning, but even with similar subjects, I’m much more engaged in self-learning. I can read about John Stuart Mill and utilitarianism in class and be pretty engaged, but I can’t put down the book I’m currently reading, <em>Justice</em>—and there is a significant difference in engagement and internal motivation. When I self-learn, I learn because I want to know. When I’m in a traditional learning environment, I learn because I want to know, absolutely, but there’s also the aspect of “I need to get this in my head so I can perform well on the upcoming test and essay.” And yes, that’s the motivational structure that many colleges <em>want</em> to create, and it works well, but the ramifications are not as tidy as simply ‘students learning more’. I’m willing to bet this has an impact on long-term retention as well—and at the least, it leads some (extremely smart) students to go into college seeing it as an intellectual haven but 2 years later optimizing their schedules for the “easiest” courses, not the ones that are the most challenging or useful.</p>
<p>On the other hand, something that college does that is difficult to replicate in a self-learning environment is the impact of learning something you wouldn’t seek out yourself. In a way, our desires of what we want to learn are backward-looking: we apply past notions of <em>what we care about</em> to influence future decisions on <em>what we want to learn about</em>. College forces you to look forward by exposing you to what Rumsfeld calls the <a href="http://en.wikipedia.org/wiki/There_are_known_knowns">“unknown unknowns”</a> of knowledge—things you wouldn’t have sought out but that can be really important. I’ve heard from countless people who, as part of Columbia’s Core Curriculum, were forced to take Literature Humanities or an art history course and ended up loving it.</p>
<p>Yet the inflexibility of college curriculums prevents highly motivated students from getting exactly what they want from college. The semester/term system reduces flexibility in directing exactly when and what to study, and the prerequisite progressions that many colleges implement do the same. Personal experience: I’m interested in taking a game theory course, but the course requires microeconomics and macroeconomics, which in turn require introductory economics, creating a one- to two-year lead time for a single course. Undoubtedly, that course requires previous knowledge, but without other offerings in game theory, that field of knowledge is closed off to everyone other than economics majors. This, combined with registration limits on classes, comprises a set of obstacles and constraints that are not present with self-directed learning.</p>
<p>There are many issues, also, with lecture-based learning. Many self-learners have used textbooks to teach themselves at a faster pace than college courses allow, and it might be that college adds a lot of overhead to learning that is cut out by a lean, efficient self-learning system that incorporates modern techniques (e.g. spaced repetition).</p>
<p><em>Further exploration:</em> Think about how college learning can be more engaging; think about ways to combine nontraditional education with traditional education; talk to pedagogicians (heh) and look at research on what the best learning environment is. Personally: enroll in some Coursera classes and do some self-learning and see if it’s more effective or enjoyable.</p>
<h4>Is college good for building a foundation of knowledge in the liberal arts? Does it foster a “life of the mind” more than self-learning?</h4>
<p>Theoretically, it <em>should</em> be the best place to build a foundation of knowledge in the liberal arts. There seem to be few other places outside of academia where you can explore the Great Books and learn about critical theory and spend months thinking about philosophy, especially in an environment with support, a motivational structure, and discussions with other students going through the same thing. While things like Coursera and peer-teaching programs like those at Brooklyn Brainery go for the same idea, it’s hard for those to replace the college learning experience in the liberal arts, especially when you factor in that college is when a lot of time is <em>dedicated</em> to these efforts and alternative methods are on nights and weekends.</p>
<p>And regarding the life of the mind, college again should theoretically be the best place for this. However, William Deresiewicz notes in his book <em>Excellent Sheep</em> that college is becoming more and more vocational, teaching practical skills like those necessary for finance and consulting careers, and not so much a liberal arts education. And it’s almost inherently this way: the American university system was set up as a combination of the English college and the German research institute, arguably being fully invested in neither (via Alex Miles Younger).</p>
<p><em>Further exploration:</em> Read more about the importance of liberal education. Is it really necessary? What is its role and how can it benefit our lives / society? Fareed Zakaria (<a href="https://www.goodreads.com/book/show/24724590-in-defense-of-a-liberal-education">In Defense of a Liberal Education</a>) and Michael S. Roth (<a href="https://www.goodreads.com/book/show/18723445-beyond-the-university">Beyond the University: Why Liberal Education Matters</a>) have books on this subject.</p>
<h4>Is college a good place to meet interesting, motivated people? Is it better than meeting other people in a self-directed way, such as by identifying people in a field and specifically talking to them?</h4>
<p>This is a fact: there are more people in a specific field (say, behavioral economics) <em>outside</em> of a particular university than <em>inside</em> it. The number of smart people in a field at any one university is only a fraction of the total. We might see that as an advantage for self-learning: why should we restrict ourselves to the fraction of the people in the field who are at our university? Why not make an effort to meet the smart people, <em>everywhere</em>?</p>
<p>The difference lies in two factors: <em>access</em> and <em>depth</em>. Access is obvious: access to a professor at your current college is somewhat more plausible than access to some other professor at some other college. Depth is another thing: while learning outside of college might allow you the entire breadth of folks to connect to and meet, I think the proximity of others and other factors make the possibility that you’ll build a high-depth relationship with someone significantly higher. Despite the advances in FaceTime and related stuff, I still communicate more frequently with people on campus and in the city than those who are elsewhere.</p>
<p>Yet, a strong argument for self-learning is flexibility. In traditional education, you are restricted and locked down—time-wise, to class schedules and deadlines, and location-wise, since you have to attend class at a specific location—in addition to the inflexible nature of the college semester/term system discussed above. This means that you can’t find out that a certain university or city is an epicenter for what you’re interested in (e.g. Carnegie Mellon University and decision science), go and AirBnB your apartment, and spend time there. You can’t take an unorthodox approach like coming up with an idea to explore, like how individuals in different countries conceive of the role of government in their lives, and then book a bunch of dirt-cheap airline tickets and see for yourself. Of course, other constraints are present—money in particular—but the fact is that the time- and location-based inflexibility of college restricts the possibility space of how you can learn.</p>
<p>The importance of college in meeting interesting people cannot be overstated—people say it’s one of the most important parts of college, sometimes even above learning itself. While the Internet makes meeting other amazing people a lot easier, it still seems to me that college offers a tradeoff: you can meet only a small subset of all the smart people in a field—a subset that is still really awesome and smart—but you’ll build much deeper relationships with them. Is that better? Jury’s still out, as far as I know.</p>
<p><em>Further exploration:</em> Think about whether one of these options can incorporate the other, e.g. whether going to college can allow for deeper connections with some people without totally restricting the possibility of meeting others elsewhere—in fact, being a “student” somewhere might actually make this more possible.</p>
<h4>How much does the credential that comes from obtaining a degree matter, especially in the sciences?</h4>
<p>Probably a lot. In other industries, such as in tech startups, credentials take a different form, mostly things like your GitHub repos, past experience in places you worked, what you did there, and talks you gave. But science is still highly dependent on credentials as signals, it seems, and I don’t get the feeling that the lone independent researcher (especially one without a degree) gets much respect (or is really even that possible). I don’t really know much about this, other than that there is a lot of credential inflation and that even a bachelor’s isn’t enough to be taken seriously (though surely more seriously than being degree-less), but it’s one thing to explore.</p>
<h4>What major benefits does college have over self-directed learning? What major benefits does self-directed learning have over college?</h4>
<p>This is a catch-all for the other things that the above questions don’t address, but one other advantage of college that comes to mind is: almost everyone who’s someone has done it. Two things: 1) That doesn’t mean that it’s right; 2) There <em>are</em> people who dropped out of college and made it on their own (or never went in the first place). Those people tend to be outliers, though, and it seems that many of them had some sort of extremely compelling reason—a startup or something else—that pulled them away. I think there’s more to think about here, such as the idea that all of the examples that come to mind (Gates, Jobs, Zuckerberg) got their success from the thing that they dropped out for (Microsoft, Apple, Facebook), but we see few people who dropped out, meandered around, and then built a Fortune 50.</p>
<p>One advantage that self-directed learning might have is that, by being nontraditional, you might come up with nontraditional approaches. I’m reminded of a friend of a friend who came up with a newer, harder way to do a physics calculation, but by relying on this unorthodox method, he was able to best others who used the traditional method. Might traditional education teach us to think traditionally, and might nontraditional education allow us to come up with our own, non-standard ways of seeing things, that might turn out to be better? That’s another open question and is a really big consideration, because as the <a href="http://en.wikipedia.org/wiki/Curse_of_knowledge">curse of knowledge</a> suggests, once you learn and adopt something, it’s really hard to think in a different way.</p>
<h4>Personal notes</h4>
<p>In addition to the above, when I’m at college, I worry about <em>all the experiences I’m not experiencing</em> and all the lives that I’m not living. On one hand, that’s an argument for a self-directed education or an enriched program like <a href="https://minerva.kgi.edu">Minerva</a> or <a href="http://www.uncollege.org">Uncollege</a>: you can gain experience from the real world and have your early twenties be shaped by a wider gamut of experiences. On the other hand, you could make the argument that that’s not exactly what college is for: it’s for enriching the mind through knowledge. I’d say, however, that experiential knowledge is an absolutely essential part of one’s education, and that sort of experience is being hampered by the constraints that college has—heavy workload, rigid curriculum, and binding yourself geographically to the campus and time-wise to the schedule—which I think makes it hard to attain the kind of worldly education that we expect to have in our early twenties. And with the current standard of getting out of college and immediately starting your career, I worry that most of us won’t ever get the chance to gain that kind of education.</p>
<h3>Gist</h3>
<p><em>What’s the use of college? What is college good for?</em> It’s a crucial question as we consider what the right post-secondary education system is for a rapidly changing economy. You’ve probably read a hundred think pieces™ on that, so I won’t go on about it. In any case, this question is personally relevant for me as well as important as we consider the role of college today, the democratization of education that we’ve seen over the past 10 years, and whether traditional and nontraditional forms of education will always be different or if we can combine the benefits of each to modernize education and foster the life of the mind.</p>]]></content>
    <author>
      <name>Mark Bao</name>
      <uri>https://markbao.com/</uri>
    </author>
  </entry>
  <entry>
    <title type="html"><![CDATA[Things vs. experiences: two sides of the same coin]]></title>
    <id>https://markbao.com/journal/things-vs-experiences-two-sides-of-the-same-coin</id>
    <link href="https://markbao.com/journal/things-vs-experiences-two-sides-of-the-same-coin"/>
    <updated>2015-03-29T15:20:00.000Z</updated>
<content type="html"><![CDATA[<p>I’ve had a peculiar experience with minimalism. I’ve spent most of my (short) adult life living out of a suitcase or a backpack, always ready to <em>pack, zip, lock</em> to go to the next destination, whether that was a city or a stage in life. After doing long-term travel for nearly a year, I recently came back to New York, signed a lease, and started accumulating <em>stuff</em>. Stuff, like headphones, blenders, sofas, laundry hampers, flatware, and coffee tables.</p>
<p>Part of me isn’t used to this and—despite the rather liquid furniture market on Craigslist—wants to not be burdened by all this stuff. But while my stint with minimalism was mostly freeing, it was limiting in a different way: I didn’t have access to the <em>stuff</em> that seems superfluous—like a blender—that actually can end up increasing my quality of life.</p>
<p>A common blog post title that I see in the subculture of minimalism goes along the lines of “my experience with minimalism: less stuff equals more experiences.” And this makes a lot of sense: by cutting out a lot of the <em>stuff</em> we’ve accumulated over the years, we can be more free, and focus our spending on experiences, not things.</p>
<p>Yet things and experiences aren’t mutually exclusive. Rather, they seem to me like two sides of the same coin. Stuff isn’t bought to lie around and exist. It’s meant to <em>enable experiences</em>. It doesn’t always get in the way of experiences; instead, it can enable experiences or make them more accessible.</p>
<p>A blender is probably the quintessential “I’m settled down and enjoying the domestic life” thing. But it also enables making, for example, healthy food. Having a blender now means that I can start my day with a kale–raspberry shake, making me eat more greens—something that I couldn’t really do while traveling or avoiding owning things—and makes eating healthier more accessible.</p>
<p>Sometimes, certain things can be more experience-rich than individual experiences. Individual experiences, like travel, happen once, and you gain some benefit from them once. (However, these benefits could be huge, and could multiply over time if your experiences from travel cross-pollinate into, say, gratitude, or other parts of your life.) Things, on the other hand, can enable experiences over and over, like a blender making healthier eating easier day after day. Certain things can pay dividends over time in a way that some experiences can’t. (This is rare, though, and generally experiences matter more.)</p>
<p>So the question shouldn’t be a rejection of things, nor should it be saying that all things are useful. The consumerism that has driven a lot of people to think about minimalism is unhealthy—but the answer seems to be not a rejection of things, but rather <strong>a stronger awareness in considering what kind of experiences something is enabling or making more accessible, and gauging whether those experiences are beneficial or not</strong>.</p>
<p>Three caveats. — Certain things run a larger gamut in terms of what kind of beneficial experiences they can enable. It’s unlikely that a blender will enable unproductive experiences. But a TV can either make watching interesting movies and TED talks more accessible, or it can be a black hole of <em>Breaking Bad</em>. It’s useful to look at past evidence of how the thing in question was actually used, and either make your decision on that evidence or resolve to use it in a more productive way.</p>
<p>Further, it’s easier to have a clear view of which things are necessary when you’re starting out with less and accumulating than when you’re starting out with a lot and needing to cut down. The experience of deciding which apps to delete and thinking “well, it could come in handy one day” is a perfect analogue to our rationalizations about physical items. In these cases, it makes sense to, again, look at historical evidence: have you been regularly using it for its purpose, or just waiting for the day you get around to it? Owning a juicer enables useful experiences, yes, but have you actually used it? If not, then it’s taking up not only physical space, but also the mental space of needing to find the time to use it one day.</p>
<p>Finally, certain things can have strong benefits but are also a huge burden to maintain, like owning a car in a city. It’s important, then, that things don’t just enable beneficial experiences, but that they do so at a cost commensurate with those benefits.</p>]]></content>
    <author>
      <name>Mark Bao</name>
      <uri>https://markbao.com/</uri>
    </author>
  </entry>
  <entry>
    <title type="html"><![CDATA[Time constraints can increase efficiency]]></title>
    <id>https://markbao.com/journal/time-constraints-can-increase-efficiency</id>
    <link href="https://markbao.com/journal/time-constraints-can-increase-efficiency"/>
    <updated>2015-03-29T15:19:00.000Z</updated>
    <content type="html"><![CDATA[<p>In design, constraints can actually be beneficial in the creative process. For instance, designing for a specific size or form factor, such as a small mobile phone, can make you think in ways that bring about new design concepts that would never have emerged without the constraint.</p>
<p>So too are constraints sometimes beneficial in other parts of life. Putting a time constraint (also known as a <a href="https://en.wikipedia.org/wiki/Timeboxing"><em>timebox</em></a>) on a task can make you focus on that task more effectively. Conversely, having a lax timebox can result in <a href="https://en.wikipedia.org/wiki/Parkinson%27s_law">Parkinson’s law</a>, that is, “work expands so as to fill the time available for its completion”.</p>
<p>I’ve been hustling at college for the past few months, and they have easily been among the busiest months of my life. Every day, my calendar was filled from wake to sleep, and I worked to optimize the time I spent eating and the gaps between classes.</p>
<p>At the same time, I still found time to sit down and do a few minutes of journaling on the day, as well as a nightly end-of-day review and a Sunday end-of-week review. I also found time to read the blogs that I wanted to follow, usually during the 15 minutes per day that I allocated to relaxing, and doubled up lunch and dinner time with reading the <em>New York Times</em>.</p>
<p>Since I had the constraint of not having much time, I was able to allocate a timeboxed amount of time to carry out these pretty important activities. I assumed that once the term ended, I would be able to relax and write more intricate journal entries, think about how to improve my review procedure, keep up with the three or four blogs I follow regularly, and read the news a lot more.</p>
<p>Not so. In two weeks, I haven’t written a journal entry or done an end-of-day review, despite the fact that they take 5 minutes a day. I haven’t kept up with those blogs, and I haven’t read the news in a while.</p>
<p>It turns out that I used my time more effectively when I had more constraints than when I had fewer. Put another way, <strong>having constraints actually let me use my time more effectively</strong>.</p>
<p>When I talked to Dan Shipper about balancing college and work, he said that despite school taking up the bulk of his time, he sometimes gets more done in the 2 hours of focused time between classes than when he has a full day free.</p>
<p>It’s easier to sit down and say “okay, I need to get this, this, and this done” when you only have 2 hours. When you have 12, things are fuzzier, and forces such as overestimating the amount of time you have, and micro-practices such as letting yourself get distracted, can add up to make you less effective.</p>
<p>In other words, without constraints, work expands to fill the time, and having more time does not necessarily mean a better output—it might actually decrease output.</p>
<h3>Toward a theory of constraints</h3>
<p>One theory is that having constant constraints and demands helps define the value of the activities that you have little time for, since by contrast they become more important to you since the time to do them is scarce. Another theory might relate to Taleb’s “<a href="https://en.wikipedia.org/wiki/Antifragile">antifragile</a>” concept, where systems actually benefit from uncertainty and stress.</p>
<p>More abstractly, it may be that as a resource such as time becomes more abundant, other forces come into play that introduce inefficiencies and secondary effects not present when the resource is scarce. Constraining the resource might then eliminate enough of those inefficiencies to offset the constraint, break even, or even go as far as to <em>increase</em> efficiency overall. More work should be done to figure out what these inefficiencies are.</p>
<p>In any case, it seems that constraints are not always detrimental, and, at least in the case of time management, can actually be beneficial for efficiency.</p>]]></content>
    <author>
      <name>Mark Bao</name>
      <uri>https://markbao.com/</uri>
    </author>
  </entry>
  <entry>
    <title type="html"><![CDATA[Building Sustainable Habits: Why We Make Excuses and Resist Habit Change]]></title>
    <id>https://markbao.com/journal/building-sustainable-habits-why-we-make-excuses-and-resist-habit-change</id>
    <link href="https://markbao.com/journal/building-sustainable-habits-why-we-make-excuses-and-resist-habit-change"/>
    <updated>2015-03-29T15:18:00.000Z</updated>
    <content type="html"><![CDATA[<p>Why do we have so many goals in our lives that we never do? Why do people know that exercise is good for them, and will make them healthier, but never do it? Answering this question is core to figuring out how to change people’s behaviors and help people execute on the goals and habits they’ve been trying to build.</p>
<p>The answer lies in the idea that we know that it’s good for us, but the instinctive and impulsive part of our mind doesn’t want to carry out the habit because <em>it</em> doesn’t know that it’s good for us. Why is that? And how do we change this and build sustainable habits using as little willpower as possible?</p>
<p>Interactive: Think of a habit or goal you’ve been trying to build, but haven’t gotten around to doing. For many people, the top one is exercising and losing weight; for others, it’s writing more, or reading more, or focusing on work. I’ll be using an example throughout the article, and for me, my main habit that I’m trying to cultivate is exercise. As you read this article, see if you can apply the concepts herein to that habit that you’re working on.</p>
<h3>We know that it’s good for us</h3>
<p>We consciously know that the habit we want to build is good for us and that it will improve our life. We know that exercising every day will allow us to be healthier, feel better, and improve both how long we live and the quality of our life. Those sound like amazing advantages—who wouldn’t like to work towards that? Or, if you’re trying to build a writing habit, you know that writing will allow you to express yourself better, be better at communication and persuasion, and pay dividends in many areas of your life.</p>
<p>So if we know that it’s good for us, why don’t we do it? The fact that logical reasoning alone can’t change our behavior in many cases suggests that there’s something other than logical reasoning that controls our behavior and what we decide to do and pursue. When we want to start every morning with a workout routine, or write for fifteen minutes a day, what’s stopping us?</p>
<p>Figuring out why we <em>know</em> we should be doing a habit (like exercising or reading or focusing at work) but don’t do it is critical to understanding what’s holding us back, and how to break through the glass ceiling that prevents us from building habits.</p>
<h3>We know that it’s good for us, but we don’t do it</h3>
<p>Your conscious side has reasoned out the benefits, and knows, rationally, that exercising is a good idea and will greatly improve your quality of life, or that writing will pay dividends over the entire period of your life by helping you communicate better. But there’s another side that sometimes differs in opinion with the conscious side, and it’s the side that resists change and pushes back on us when we want to build new habits: the instinctive side.</p>
<p>When I get up and contemplate doing a morning run or hitting the gym for an hour to do a workout, the resistance from my instinctive side kicks in. My conscious side knows that it’s a good idea to go for a run or go to the gym and get it done, for all the reasons I already know: better health, feel great during the day, higher mental performance, etc. But my instinctive side says that it’ll be difficult and painful, and I just don’t <em>feel</em> like doing it… it’s just so much work, and I’d rather just go and get my day started.</p>
<p>My conscious side might agree that exercising is a good idea, but my instinctive side resists.</p>
<h3>Knowing is conscious, doing is mostly instinctive</h3>
<p>There are two different systems that we use to think, according to the dual process theory popularized by Daniel Kahneman. System 1 is the fast, instinctive, and automatic method of thinking, and some functions include fast reactions, skills, and other instinctive actions. System 2 is the slow, calculated, logic-based method of thinking, which relies heavily on rationality. It turns out that System 1 is the one that decides most of our actions, though System 2 is consulted from time to time for when a decision requires more thought and deliberation.<a name="return-1" href="http://markbao.com/journal/building-sustainable-habits-why-we-make-excuses-and-resist-habit-change#note-1"><sup>1</sup></a></p>
<p>Our conscious side is the one that <em>knows</em> what the right thing to do is. Using rational explanations and reasoning, we know that exercise is better for us. But our instinctive side is the one that is deciding a lot of the <em>doing</em>. And we’re naturally opposed to pain and difficulty, so the instinctive side is naturally against doing new things and implementing new habits. The way that we’re able to persuade ourselves to go and do new things is by allowing our conscious side to win over the instinctive side when we’re at the point of decision.</p>
<p>When we think about whether we want to carry out a new habit, the thing in our minds that is making excuses to avoid carrying it out and saying “I don’t feel like doing it today” is the instinctive side creating resistance. We are instinctively and impulsively opposed to carrying out the habit.</p>
<p><img width="100%" alt="" src="https://mb-prod.imgix.net/articles/building-sustainable-habits/visualizing-the-gap.png"></p>
<h3>Sit back and think for a minute</h3>
<p>This might help you go from <em>reading</em> the idea that I’m writing about to <em>feeling</em> what I mean. I think it’ll lead to a better understanding of this article: going beyond “just reading” to a real connection and spark in your mind where you’ll really get it.</p>
<p> Sit back and think about the habit you’ve been trying to do, and think about the conscious side. Logically, why is it good to do that habit? What are the benefits? Why do we want to pursue it—for health, or better work, or for learning?</p>
<p>It’s strange that despite those benefits, we still don’t do it.</p>
<p>Then, think about the instinctive side, and be honest with yourself. What were the excuses you gave last time you considered doing that habit and didn’t? </p>
<p>What if I told you to do that habit right now? Pause on reading this article, and exercise, right now. Or meditate, or write, or pursue your dream project, right now. What’s going through your mind? Do you notice your conscious side knowing you <em>should</em> be doing that thing, and the instinctive side resisting against it, and the excuses it’s giving? <a name="return-0" href="http://markbao.com/journal/building-sustainable-habits-why-we-make-excuses-and-resist-habit-change#note-0"><sup>0</sup></a></p>
<p>That’s the conscious side and instinctive side at odds, and that’s why we feel resistance when we try to build new habits.</p>
<p>And in my experience, most of the time, the instinctive side wins.</p>
<h3>Conscious vs. instinctive value systems</h3>
<p>Let’s step back and take a closer look at the two sides.</p>
<p>My theory is that our conscious and instinctive sides differ in what they want because they have different value systems. They value habits and behaviors in different ways.</p>
<p>I believe that <strong>the conscious side assigns value using logical reasoning and rational explanations.</strong> From what we know about health, exercising is the most important thing that we can do to keep in good health, physically and mentally. We’ve read the articles, we’ve had the conversations with friends, and we’ve seen the scientific evidence. It makes sense to us, <em>consciously</em>, that exercising is crucial for our well-being and for living well.</p>
<p><strong>The instinctive side assigns value using past experience and past evidence.</strong> It distills past experience and builds evidence to value a certain behavior. In contrast to the conscious side’s logic-based approach, if our past experience of exercise has shown that it’s difficult and painful and stressful, then we’re going to avoid doing it. We expect it to continue being difficult and painful and stressful, so we are instinctively against doing it since that’s what we expect.</p>
<p>Conscious values are made up of logical evidence, whereas instinctive values are made up of experiential evidence.<a name="return-2" href="http://markbao.com/journal/building-sustainable-habits-why-we-make-excuses-and-resist-habit-change#note-2"><sup>2</sup></a></p>
<p>For some behaviors, the conscious and instinctive values match. This is the case for those core day-to-day things that we do, like brushing your teeth. In these cases, <strong>the conscious side, through reason,</strong> knows that the habit is beneficial for good health, looking good, and social acceptance, and <strong>the instinctive side, through past evidence and experience,</strong> agrees that this is true and that it’s worth doing. The values match.</p>
<p>However, in the cases that it doesn’t match, where our conscious values are at odds with our instinctive values, it results in resistance where the instinctive side doesn’t want to go through with the habit. In some cases, like with exercise, the instinctive side is partially right. Exercise <em>is</em> painful and difficult and stressful. But what the instinctive side doesn’t understand is that it’s for the better. The instinctive side isn’t good at calculating long-term benefit<a name="return-3" href="http://markbao.com/journal/building-sustainable-habits-why-we-make-excuses-and-resist-habit-change#note-3"><sup>3</sup></a>, whereas our conscious side knows that it does have an immense long-term benefit. What we need to do is to impose our conscious values on our instinctive side, to try to override what our instinctive side wants to do, and try to convince our instinctive side using logical reasoning and awareness of long-term benefit from the conscious side.</p>
<h3>Imposing our conscious values on our instinctive side</h3>
<p>At the point of decision, we have two choices. We can decide to put on workout shorts and hit the gym, or otherwise carry out the habit that we consciously want to do. Or we can make excuses, say “I don’t feel like it” and procrastinate it to the eternal tomorrow. Resistance is created between the two sides. The way that we decide to go forward with the habit is by imposing our conscious values on our instinctive side. The way that we decide to skip the habit and say “I’ll do it tomorrow” is by succumbing to our instinctive side and its excuses, and letting it win.</p>
<p>There’s a gap between our conscious value and instinctive value of that habit. It’s what I call a decision gap, and it defines whether or not the conscious and instinctive sides are in accord about the decision to do something. The gap is bigger if the habit is more difficult or unattractive, such as exercising: the more difficult the habit, the more the instinctive side doesn’t want to do it, creating a bigger gap.</p>
<p>We cross that gap and execute on the habit by imposing our conscious values on our instinctive side.</p>
<p>My theory on the way that we do this is through <strong>willpower</strong>. Willpower is what allows us to put in the work to do an action, even though we don’t feel like doing it. We use willpower to force ourselves to ignore what we <em>feel like</em> doing, and instead we do what we <em>should</em> be doing. We use willpower to get ourselves to the gym when we don’t feel like it, and when we do that, we override our instinctive values and what it <em>wants</em> to do, with our conscious values and what we <em>should</em> be doing, and bridge the gap between them, resulting in doing the right action.</p>
<p>If the habit is more difficult, the gap is bigger, and more willpower is needed. A small gap would be doing 3 pushups; a large one would be doing 100 in a minute. You’d need a lot more willpower to be able to convince yourself to try to do 100 pushups in a minute than you’d need to do 3.</p>
<p>So now, we’ve talked about the two different ways of thinking that sometimes create resistance with new habits, why they create resistance, and conceptually, how we overcome that. Time to put it together. How do we start building sustainable habits?</p>
<h3>Building sustainable habits</h3>
<p>We know that the difference between how much we consciously value and instinctively value a habit creates a gap. We bridge that gap using willpower, which is imposing our conscious values on our instinctive values for that habit. And instinctive values are derived from experience and evidence.</p>
<p>My theory on developing sustainable habits is based on two ideas:</p>
<ol>
<li>
<p>The main thing that will allow us to make a habit automatic is by having our instinctive values and conscious values be in accord, and both want to carry out an action. <strong>So, we should work towards matching our instinctive values with our conscious values.</strong> We do so by giving our instinctive side evidence to match and agree with the conscious values.</p>
<p>We can’t rely on willpower forever, so it’s best if we try to close that gap, which requires willpower to bridge. This is the point where a habit becomes habitual, since you find value in it, and carry it out without resistance.</p>
</li>
<li>
<p>In order to work towards building evidence for the instinctive side, <strong>we carry out the habit, which results in developing evidence for the habit, but aim to use as little willpower as possible to do so.</strong> If possible, we have to stop relying on willpower and assume that we have close to zero willpower. If we design it so that it takes very little willpower to carry out an action, extra willpower is a bonus.<a name="return-4" href="http://markbao.com/journal/building-sustainable-habits-why-we-make-excuses-and-resist-habit-change#note-4"><sup>4</sup></a></p>
</li>
</ol>
<p>The idea is to eventually match our instinctive values with our conscious values, so that they can be in accord and make the habit automatic. The way that we do that is to build up evidence for our instinctive side, while using as little willpower as possible to build up that evidence.</p>
<h3>Applying this model to an exercise habit</h3>
<p>This is how I’d do it, in the context of exercising:</p>
<p><strong>Setting the goal.</strong> I’d like to exercise every morning. I’ll narrow the scope of the exercise to focus on pushups; to put a quantitative metric on it, I’ll say that I want to do 100 pushups a day; and to anchor it to my daily routine, I’ll say that I want to get in the habit of doing 100 pushups every morning.</p>
<p>Right now, going from zero to 100 pushups is very, very difficult, and I’m bound to have a lot of resistance to doing 100 pushups, especially right after waking up and needing coffee. The gap there is huge, and it would take a lot of willpower, every morning, to try to achieve that goal. So instead, I’ll apply what we know about willpower and how it takes a lot more to bridge a large gap, and reduce the gap.</p>
<p><strong>Starting out.</strong> The way to use as little willpower as possible is to start out really, really small. I’ll start out with 3 pushups. The gap of required willpower to do just 3 pushups is considerably smaller. My conscious side thinks it’s a good idea to get some exercise, and my instinctive side, while it would still prefer to not do pushups, has far less resistance to just doing three pushups because of the ease of doing so. It only requires a bit of willpower to get myself to drop down and do just those three pushups.</p>
<p>After doing those three pushups, I add a little bit of evidence to my instinctive side. “That wasn’t so bad. I could do that. And I feel good about accomplishing the habit, even though it was small.” And the fact that I did carry out the habit creates momentum for the next time I consider doing the habit, and I’ll be more inclined to do that habit in the future since there’s evidence that I’ve done it in the past.<a name="return-5" href="http://markbao.com/journal/building-sustainable-habits-why-we-make-excuses-and-resist-habit-change#note-5"><sup>5</sup></a></p>
<p>Tomorrow, I might continue doing three pushups. I’m continuing to build evidence on the instinctive side, and it becomes easier and easier to do, and requires less and less willpower. I’m also continuing to <em>make doing the pushups themselves habitual</em>, and get in the habit of doing pushups at all, which is an essential foundation to scale up from.</p>
<p><strong>Scaling up.</strong> Once I’ve made that habit stick and the instinctive side has that evidence, I’m set to scale the intensity up a bit. I’ll scale up to five pushups, continuing to focus on using as little willpower as possible. Now that my instinctive side has more evidence, I can convince myself, using little willpower, to do five pushups. Then I’ll continue building evidence until I’m comfortable with five pushups and they become easy to do, and then I can scale up again. As I build up more evidence, I can do 5, then 8, then 12, then 16, then 20, then 30, then 50.</p>
<p><img width="100%" alt="" src="https://mb-prod.imgix.net/articles/building-sustainable-habits/scaling-up.png"></p>
<p><strong>The important part of scaling up is to be mindful of how much willpower I am using, and to use that as feedback on how much I should scale up and when.</strong> I need to make sure that I’m not overstepping my bounds, and that I’m building the habit with a success rate that’s neither too low nor too high. If I’m not succeeding enough, the gap is too large and I’m using more willpower than necessary, so I need to scale back. If I’m succeeding all the time, I’m not scaling up fast enough. Being at a biased balance<a name="return-6" href="http://markbao.com/journal/building-sustainable-habits-why-we-make-excuses-and-resist-habit-change#note-6"><sup>6</sup></a>, where the goal is only a bit of a stretch, is the optimal position, in my opinion, since you’re making progress on the stretch part while keeping yourself emotionally grounded by having some success.</p>
<p>Contrast this with the usual method: setting an arbitrary habit and trying to will ourselves to do it, like a habit of 50 pushups a day. That’s not sustainable, since we have to will ourselves into doing 50 pushups a day, and I believe the success rate, both short-term and especially long-term, is low, because we’re relying on having a lot of willpower every day, which is not always the case. We might succeed for the first few days or the first week on willpower and inspiration, but the gap is still large, and it still requires a lot of willpower to execute. After the first week, we’re bound to drop it one day when we don’t have time, or just don’t <em>feel</em> like doing it (i.e. don’t have the willpower to do it). Now we’re at risk of dropping the habit altogether since we’ve missed a day, and the willpower gap is still large; we might miss another, and end up thinking a few weeks later, “whatever happened to that habit?” This is the stumbling around that many people do with new habits, and it seems to be a textbook model of failure for New Year’s resolutions.</p>
<p>In this method, at each stage, I’m making sure I’m using as little willpower as possible to reach success at that particular level. And at each stage, I’m simultaneously building up evidence for those levels on the instinctive side. Hopefully, the instinctive side is seeing benefits and continuing to build up evidence, and at some point, I might notice real benefits. Maybe I feel better during the day, or maybe I feel like I’m performing better mentally, or, if I’m doing more intense exercise, maybe the pounds on the scale are decreasing instead of going the other way. These act as additional pieces of evidence that make a big difference, pushing us to keep going and continue the habit.</p>
<p>This is more sustainable. We are building up real value on our instinctive side instead of always trying to use willpower to convince our instinctive side to adopt the conscious viewpoint and do the habit. We can start small, scale up, and constantly be at the point where it’s enough of a stretch for us to make progress, but not so difficult that there’s a big gap and we constantly have to use a lot of willpower to carry out the habit. Over time, we build real evidence for the instinctive side to value the habit and eventually do it automatically.</p>
<p>And eventually, and sustainably, I reach 100 pushups every morning. Not by willing myself to do it for weeks—no, I would have quit way before reaching 100 pushups. Instead, it’s by building up real evidence for doing 3, 5, 8, 12 pushups, and scaling up.<a name="return-7" href="http://markbao.com/journal/building-sustainable-habits-why-we-make-excuses-and-resist-habit-change#note-7"><sup>7</sup></a> And one December 3rd, in Sarajevo, Bosnia, I reached 100 myself.</p>
<h3>Sustainable habit development, summarized</h3>
<ol>
<li>Define the habit or goal. Focus (pushups, not just exercise), and be specific (100 pushups, every morning). Keep in mind that the idea is to build instinctive evidence so that it matches your conscious desire to develop the habit, which leads to the point where the habit is truly habitual, and is done automatically.</li>
<li>Start incredibly small, at the point where you’ll feel almost no resistance to doing it. In my example, this was starting out with just 3 pushups to get yourself into the habit of doing <em>something</em>. This will allow you to sustainably execute the habit, to build instinctive evidence.</li>
<li>Keep doing the habit, while keeping in mind how much willpower that you require to carry out the habit. While you do the habit, the instinctive side is developing evidence for you to continue doing that habit.</li>
<li>Scale up when you’re comfortable with the current level. Aim towards doing what you can do and then some (a ‘stretch’ goal). Scale down when it feels too difficult, but make sure it’s still at a stretch level.</li>
</ol>
<h3>The real end game</h3>
<p>When we try to develop habits, we’re trying to get to the point where we’re doing something habitually—when we’re doing it automatically. The critical point where a habit truly becomes habitual is when your conscious and instinctive values are in line with each other. That’s when you rationally know that it’s a good thing to do, and the instinctive side agrees. That’s when you feel weird when you <em>don’t</em> do it. Like the compulsion you have to brush your teeth, because it would be weird not to. And the compulsion that people who are used to exercising every day feel to do their workout or their run or whatnot. Those people feel weird when they don’t do it. They’re happy to do the habit and they <em>want</em> to—and it doesn’t take much work or self-convincing to do so. It’s automatic.</p>
<p>This means that after you get through the difficult stage of habit formation, you get to the point where you’re automatically doing the habit and it’s paying dividends in your life. You get to the point where exercise feels natural, writing is enjoyable, or eating well is something you do normally—just like how brushing your teeth is just natural and habitual—and it continues to improve your life, automatically, from then on. Amazing how that works, and it really inspires me to get the right habits in place so that they can improve my life.</p>
<p>So let’s get to it. Engineer your habit building to work towards developing a sustainable habit. Keep in mind that the resistance you feel is the instinctive side lacking evidence. Build that evidence, and use the easiest pathway to do so: by using as little willpower as possible. Scale up and adjust. See the results, and feel good about them. The initial investment is immense, but the long-term benefit of building a sustainable habit is creating a habit that will improve the quality of your life, automatically, for the rest of your life.</p>
<p><em>My primary work at the moment is research on habit formation and how we can use technology to assist in it. This is part of my theory of habit formation, which is constantly changing, and I present this as a possible explanation about the role of the two minds in habit formation and the source of resistance.</em></p>
<p><em>I’d love to hear your thoughts, and I’d also love to chat if you’re in the space. Drop me a line at mark@markbao.com.</em></p>
<div>
<div></div>
<div>
<p><a name="note-0" href="http://markbao.com/journal/building-sustainable-habits-why-we-make-excuses-and-resist-habit-change#return-0">0</a> — “In summary, most of what you (your System 2) think and do originates in your System 1, but System 2 takes over when things get difficult, and it normally has the last word.” I agree that System 2 does deliver input when things get difficult, but I feel that whether it can <em>override</em> System 1 depends on whether System 1 can answer the question in the first place. It can offer input and be the final say in situations of complex thinking, like difficult math problems, since System 1 doesn’t have a way to figure out the problem and doesn’t have a say in the issue. But when System 1 and System 2 both believe something, I don’t think System 1 defers to System 2. In that case, System 2 overriding System 1 is not so easy (and requires willpower). Kahneman seems to touch on this later: “One of the tasks of System 2 is to overcome the impulses of System 1. In other words, System 2 is in charge of self-control.”</p>
<p><em>Kahneman, D. (2011). Two Systems – Plot Synopsis. (24-26) In Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.</em></p>
</div>
<div>
<p><a name="note-1" href="http://markbao.com/journal/building-sustainable-habits-why-we-make-excuses-and-resist-habit-change#return-1">1</a> — Sometimes, I think the excuses we’re giving ourselves are post-rationalizations, where we decide instinctively that we don’t want to do something, then we try to rationalize that by saying we have too much work or we don’t “feel like it” that day. It seems that rationalization is not leading to a decision, but the decision leads to rationalization to try to convince ourselves retroactively on why that decision is right.</p>
</div>
<div>
<p><a name="note-2" href="http://markbao.com/journal/building-sustainable-habits-why-we-make-excuses-and-resist-habit-change#return-2">2</a> — One interesting way to look at conscious values and instinctive values is to look at what they’re made up of. Assuming that conscious values are made up of logical evidence, and instinctive values are made up of experiential evidence, the ‘units’ of evidence differ. I think logical evidence is made up of bits of information gained externally, and bits of reasoning which is done through thought internally using external information, which all combine somehow to result in some degree of value. (External information: “Exercise is good for losing weight”; internal information: “I’m overweight, so I need to lose weight, by using exercise.”)</p>
<p>On the other hand, I believe that experiential evidence is made up of singular events and our rationalizations about those events (that was awesome, that was stressful, that was disappointing, etc.) and those similarly combine to result in a degree of value. (Exercise event: “Worked out for half an hour yesterday.” Resulting rationalization/feelings: “It was painful.”)</p>
<p>It’s interesting to think about the ‘bits’ that go into creating a singular metric of how we value something. Is there a linear scale for value, or is it more complicated than that? Now we’re treading the line between psychology and philosophy.</p>
</div>
<div>
<p><a name="note-3" href="http://markbao.com/journal/building-sustainable-habits-why-we-make-excuses-and-resist-habit-change#return-3">3</a> — Kahneman presents the idea of WYSIATI (What You See Is All There Is), which essentially says that the instinctive side uses limited evidence to jump to conclusions, making it difficult for that side to make long-term decisions. I believe that it still tries to make long-term decisions when emotionally charged.</p>
<p><em>Kahneman, D. (2011). Two Systems – What You See Is All There Is (WYSIATI). (85) In Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.</em></p>
</div>
<div>
<p><a name="note-4" href="http://markbao.com/journal/building-sustainable-habits-why-we-make-excuses-and-resist-habit-change#return-4">4</a> — At some point, I’d like to research if <em>matching</em> willpower works even better, meaning matching the amount of willpower that someone actually has (or that we predict that they have). I wonder if that allows someone to perform better since it matches their current inherent motivation to do something. People who are more disciplined (see next note) might be able to apply more willpower to a goal, and might be frustrated with the slow movement of a regular path that assumes less willpower.</p>
</div>
<div>
<p><a name="note-5" href="http://markbao.com/journal/building-sustainable-habits-why-we-make-excuses-and-resist-habit-change#return-5">5</a> — I came up with a preliminary idea of ‘momentum’, which seems to be an interesting way to look at if our minds tend to believe that simply <em>doing</em> something in the past is enough evidence for doing it in the future, and other evidence like your current performance with that habit (such as weight already lost or other dividends already paid) are just bonuses. A habit becoming easier and easier and requiring less willpower to carry out might be partly the result of it being done in the past and you having the momentum to keep carrying out that habit.</p>
<p>After some thinking, I’m inclined to believe that momentum is simply a subset of instinctual evidence (experiential evidence) that contributes to the whole of instinctual value, not something that acts on its own in the decision-making process.</p>
</div>
<div>
<p><a name="note-6" href="http://markbao.com/journal/building-sustainable-habits-why-we-make-excuses-and-resist-habit-change#return-6">6</a> — <em>Biased balance</em> is a concept that I’ve been thinking a lot about lately. The idea is that there are rarely instances where we should be on either extreme of something. Working all the time and having zero time to relax is a recipe for burnout and disaster. Believing something 100% and not being flexible to seeing the other sides of it is a recipe for ignorance. So we should be in balance: enough work, enough play. Enough belief, but also being flexible to seeing other viewpoints. Being in balance means that we can find the best parts of both extremes and apply them.</p>
<p>But in many cases, we should be biased in that balance, to make progress on one side or the other. For me, it’s making sure that I’m working and relaxing, but I’m having the biased balance of focusing on working. It’s also making sure that I’m getting out of my comfort zone and experiencing new things, and also having time to be comfortable and enjoying that, but leaning on the side of getting out of my comfort zone more often than not. A balance allows us to be sustainable in what we do, and avoid burnout; a biased balance allows us to use balance as a good foundation, while making progress towards one side or the other.</p>
</div>
<div>
<p><a name="note-7" href="http://markbao.com/journal/building-sustainable-habits-why-we-make-excuses-and-resist-habit-change#return-7">7</a> — Interestingly, we might see the ability to apply willpower and ‘will’ ourselves to do something as equivalent to discipline. The more discipline you have, the more able you are to use willpower to make yourself do something, and the bigger the gap you can potentially attack. (That is, while most people might scale up from 2 to 4 pushups successfully, highly disciplined people might be able to scale up from 2 to 8, since they have a higher tolerance for using willpower and can deal with larger gaps successfully.)</p>
</div>
</div>]]></content>
    <author>
      <name>Mark Bao</name>
      <uri>https://markbao.com/</uri>
    </author>
  </entry>
  <entry>
    <title type="html"><![CDATA[The void of losing someone you don’t know]]></title>
    <id>https://markbao.com/journal/the-void-of-losing-someone-you-dont-know</id>
    <link href="https://markbao.com/journal/the-void-of-losing-someone-you-dont-know"/>
    <updated>2015-03-29T15:17:00.000Z</updated>
    <content type="html"><![CDATA[<p>I didn’t know Aaron Swartz personally. We never spoke, neither in person nor by email.</p><p>Yet, <a data-mce-href="http://tech.mit.edu/V132/N61/swartz.html" href="http://tech.mit.edu/V132/N61/swartz.html">his suicide today</a> has left a big hole in the world for me.</p><p>I found my own sadness baffling. I didn’t know the guy. Why did I, deep down, feel such a void in the world?</p><p>The reason was: I felt a rare connection to Aaron because of his thoughts and actions. An invisible connection that existed only on an intellectual level, not a social one, through his writing, technology, politics, and his willingness to show humanness.</p><p>His writing and thoughts connected with me, especially his <a data-mce-href="http://www.aaronsw.com/weblog/rawnerve" href="http://www.aaronsw.com/weblog/rawnerve">Raw Nerve</a> series on how to become better at being human. His writing showed me that other people were thinking about the same things I was, in terms of the “backstory” of being human, the inner life. I felt like I was on the same wavelength with another human who was thinking and devoting time to these inner pursuits.</p><p>His code and contributions to software, in Python, RSS, and elsewhere, were inspiring. Relentlessly making progress and thinking about the macro game of software and technology. Same wavelength.</p><p>His JSTOR incident? Not exactly the same wavelength. But fighting for progressive policies in government, liberating information in science and law, using the closer-to-democracy tool of the Internet to do that? Absolutely.</p><p>His <a data-mce-href="http://www.aaronsw.com/weblog/verysick" href="http://www.aaronsw.com/weblog/verysick">writings on depression</a> showed that, like all of us, he was human, and, like all of us, he suffered. But few of us show vulnerability and humanity.
Many of us hide behind facades of <em>“how are you?” “great!”</em>, smiling photos, and upbeat Facebook statuses, preferring not to talk about what really goes on inside our heads.</p><p><strong>Here’s a guy who I felt a deep connection to</strong>, because we were on the same wavelength – through openly showing humanity, a devotion to improving oneself, using technology for change, and changing the macro political environment. There aren’t a lot of people that I feel a multi-faceted intellectual connection with, but Aaron was one of them.</p><p>And despite not knowing him at all, his death left me feeling a void in the world. Because the world lost a brilliant person, but also because the world lost someone whose ideas I believed so much in, whose ability to put those thoughts into action was admirable, whose willingness to show vulnerability and humanness was something I feel like the world desperately needs more of.</p><p><strong>But good often comes from bad.</strong> And the good, in this case, is the realization that we should aim to connect with more people, on a deeper wavelength. We should all be working relentlessly to put our feelings into words and into action, and not be afraid to show that, yes, we are actually human, and yes, we do have things we really believe in but haven’t yet acted upon, and we do have moments where we feel on top of the world and also the moments where we feel absolutely hopeless.</p><p>And we should all be working to make the most of our time in the world, to make sure we don’t squander our most limited resource, and instead maximize it, to connect to and affect more lives in this world.</p><p><strong>We might not all be socially connected, but the work that we do connects us as a community. And our collective work makes history.</strong></p><p>Thanks, Aaron.</p>]]></content>
    <author>
      <name>Mark Bao</name>
      <uri>https://markbao.com/</uri>
    </author>
  </entry>
  <entry>
    <title type="html"><![CDATA[‘Always choose to be happy’ is toxic advice]]></title>
    <id>https://markbao.com/journal/always-choose-to-be-happy-is-toxic-advice</id>
    <link href="https://markbao.com/journal/always-choose-to-be-happy-is-toxic-advice"/>
    <updated>2015-03-29T15:16:00.000Z</updated>
    <content type="html"><![CDATA[<p>I hear about the idea of ‘choosing to be happy’ frequently. When we talk about improving our lives during our short existence, it’s oft-repeated advice.</p><p>Here’s the idea: when you’re not happy, or when you’re not satisfied, or even when you’re depressed, you can make the decision to be happy instead. You have the choice to be happy or sad – and, given the fact that you only have limited time on Earth, which one do you want to pick? Happy, of course.</p><p>So, ‘always choose to be happy.’</p><p>I find this approach to be extremely ineffective. Although it’s nice to acknowledge that you always have the choice to be happy or not when dealing with a situation, I think that there is less value in simply ‘choosing to be happy’ and <strong>more value in choosing to be unhappy and doing something about it</strong>.</p><h3>Choosing to be unhappy</h3><p>In my personal life, changes have often stemmed from my unhappiness with something and making a decision to change it. I’ve made positive changes because I chose to be unhappy (or even angry) about something that needed to change.</p><p>I feel like the idea of ‘choosing to be happy’ is simply a temporary escape, a band-aid that treats the surface, but not the root cause. It solves the symptom of unhappiness, but not the problem itself. That mindset <strong>robs us of the anger and impetus we need</strong> to make a change and attack the root of the problem.</p><p>For example, you might not be happy because you’re out of shape, which is making dating difficult. In that instance, you can choose to reject being unhappy and be happy instead, which allows you to relax and feel not so bad about the problems you’re facing.</p><p>But what does that change? What progress have you made? In this instance, choosing to be happy is only a temporary solution to the symptom, not the actual root cause, of your unhappiness. 
Here, choosing to be happy only solves, “<strong>I’m unhappy</strong> because I’m overweight”, the symptom, not “<strong>I’m overweight</strong>, and need to start exercising and eating better”, the problem.</p><p>Being unhappy is difficult, and it’s far from satisfying. However, I think some of the most important developments in your life can come from being unhappy and choosing to do something about it. Choosing to do something about the root cause of your unhappiness isn’t the same as choosing to solve the symptom of unhappiness itself. Lasting happiness comes from <strong>understanding that root cause and making something happen</strong>, not from numbing the resulting unhappiness by ignoring it.</p><h3>When you’re unhappy, there are three things you can do</h3><ol><li><strong>You can choose to be happy</strong>, but that only solves the symptom temporarily and doesn’t result in any long-term resolution – it just makes you feel better for the moment.</li><li><strong>You can choose to continue being unhappy</strong> and wallow in sadness (which is addictive), but that also will change nothing – and it will continue to make you more and more unhappy.</li><li><strong>You can change something</strong> that actually attacks the cause of your unhappiness, not just the effect of unhappiness itself, and try to eliminate the reason you are unhappy.</li></ol><p>Conquering the root causes of unhappiness is very difficult to do, because it requires so much willpower, and the alternative options – wallowing in sadness, or choosing to be happy for the short-term and treating the symptom – are so much easier to do (and are so much more tempting) than working to cure the true underlying issue.</p><p>But choosing to be unhappy and doing something about it is the only way that you will solve the actual problem. It’s the only way you can make progress in your life, by solving the real problems that are holding you back.</p>]]></content>
    <author>
      <name>Mark Bao</name>
      <uri>https://markbao.com/</uri>
    </author>
  </entry>
  <entry>
    <title type="html"><![CDATA[Repositioning your perspective to achieve goals]]></title>
    <id>https://markbao.com/journal/repositioning-your-perspective-to-achieve-your-goals</id>
    <link href="https://markbao.com/journal/repositioning-your-perspective-to-achieve-your-goals"/>
    <updated>2015-03-29T15:14:00.000Z</updated>
    <content type="html"><![CDATA[<p>Sebastian Marshall wrote a great article about a way to <a href="http://www.sebastianmarshall.com/self-destruction-is-generally-counterproductive">prevent yourself from “giving in” when you’re working towards a goal</a>. Oftentimes, I say “screw it, I finished such-and-such medium-sized project, let’s dig into some steak/these brownies/some dessert… I haven’t in a long time.” Not only is this dangerous, but you eventually lower the criteria for what counts as an “event for celebration”, and it becomes all too easy to give in.</p>
<p>One way to suppress this urge to give in, says Sebastian, is thinking the following: “Self destruction is generally counterproductive.” It’s smart. The idea is that, all things considered, giving in is almost always net negative. So why do it?</p>
<p>With that reminder in mind, indulging goes from appealing to plainly counterproductive—usually, at least. Sometimes those brownies just smell <em>too good</em>.</p>
<p>* * *</p>
<p>It’s important to think about why this works, since that could tell us what makes it so effective, and maybe we could apply this thinking elsewhere. When you reach a stage where you’ve given something up and used your self-control to do so, you eventually tire of doing the correct, but enormously less satisfying, thing. (At least, that’s what dieting tastes like.) Rewarding yourself is now appealing and your self-discipline is weak.</p>
<p>So how does it work? My thought is: getting this reminder triggers a subconscious memory of when you first decided to set the goal, and reminds you of why you did it and what you imagined the end result to be. With this perspective floating in your mind, the urge to do better and be better, because indulging is a net negative, overpowers the nagging thought of the satisfaction of indulgence. <strong>This reminder gets you into the perspective and mindset from when you set your goal.</strong></p>
<p>Another method for reconsidering the decision to indulge (or in this case, to drop the ball) is the well-known <a href="http://lifehacker.com/281626/jerry-seinfelds-productivity-secret">Seinfeld rule of “don’t break the chain”</a> for keeping habits. I think there are a lot of ways this can be triggered.</p>
<p>However, rewards are definitely important, and sometimes indulging is the right thing to do. The problem is that it’s too easy to get into a habit of bad rewards. The idea of repositioning your perspective to see things from a past mindset can help set better, net-positive rewards. It’s powerful because a mindset you had in the past, the one you used to set a goal, is incredibly hard to hold on to. Sometimes, after a short while of inspiration and discipline, it deteriorates, while the urge to defect becomes stronger. Getting back into the original mindset is a good way to hold fast to your original goal.</p>
    <author>
      <name>Mark Bao</name>
      <uri>https://markbao.com/</uri>
    </author>
  </entry>
  <entry>
    <title type="html"><![CDATA[The Optimization–Cognitive Load Tradeoff]]></title>
    <id>https://markbao.com/journal/the-optimization-cognitive-load-tradeoff</id>
    <link href="https://markbao.com/journal/the-optimization-cognitive-load-tradeoff"/>
    <updated>2015-02-24T16:23:00.000Z</updated>
    <content type="html"><![CDATA[<p>Lately, I’ve been exploring what the tradeoffs of optimization are. That is, when we try to optimize what we do in work and life, what are the effects? What actually gets <em>worse</em> when we try to make things better?</p>
<p>One that seems obvious is that <strong>more optimization leads to higher cognitive load</strong>. The more we want to do something well, the more mental effort we’ll have to put into doing that task. This doesn’t just amount to the additional work needed to do something well, but I hypothesize that optimizing also seems to involve a <em>second track</em> of thinking, running alongside the track of the action itself, that is dedicated to observing how we are doing that action and evaluating whether we are doing it well or not.</p>
<p>If we assume that the need to optimize increases cognitive load, either because the desire to do something well involves more mental effort, or because the simultaneous reflexive self-evaluation of our performance requires us to perform two tasks (the main task and the optimization task) at the same time, we can make a few hypotheses:</p>
<p><strong>Optimization may lead to avoidance and procrastination.</strong> — A desire to optimize how well you do a task, which leads to higher cognitive load, may make us avoid that task more. Common sense tells us that we are inclined to avoid tasks that are cognitively demanding. Having a desire to optimize a task may make us procrastinate on it, since we set such a high bar of performance for carrying out the task that it feels daunting.</p>
<p>Take one example: I have to send out a bunch of cold emails. I want to send out the perfect cold emails since these are very important emails, but that makes the task of “send cold emails” more difficult and daunting. As a result, I tend to avoid the task because I added the need to optimize to it. If the task was simply “send cold emails” without an optimization element, I’d be able to send out some damn emails. But if I attempt to optimize the emails and as a result procrastinate to the last minute, rush the emails, and send out crappy ones because of the time pressure, that amounts to a very bad optimization on the whole.</p>
<p><strong>Optimization may undermine performance.</strong> — If optimization takes up a certain level of cognitive resources, then the act of optimization can actually decrease performance. Research has found that increased cognitive load, represented by increased working memory usage, can decrease performance (Ward &amp; Mann, 2000). One group of researchers suggests that “cognitive load has a detrimental effect on goal pursuit by diverting processing resources away from the goal” (Vohs, Kaikati, Kerkhof, &amp; Schmeichel, 2009). As a result, the need to optimize may lead us to ‘overthink it’, or exhaust more cognitive resources, which may lead to us actually doing <em>worse</em> on a task even though we are ostensibly attempting to optimize how well we are doing it.</p>
<p>Perhaps this is what is behind the Buddhist conception of the “beginner’s mind,” the inexperienced, unprejudiced mind that does not try too hard to optimize, which we might consider the source of “beginner’s luck”. Or this might be why (to pull a story from Timothy Gallwey’s <em>The Inner Game of Tennis</em>) a tennis player might be in the zone, but once his opponent comments on how well his backhand is doing today, he reflexively tries to understand what exactly he was doing with his backhand that was working, leading him to lose his streak.</p>
<p>With both of these, we see that there may be a tradeoff between the desire to optimize and the effects of cognitive load. Optimization is a good thing, until we try to optimize too much. Then, it might lead to avoidance of the daunting task, or to decreased performance on the task. Perfectionists—those who optimize to an extreme degree—may actually find themselves doing <em>worse</em> and procrastinating more due to the need to be perfect. What do we do about that?</p>
<p><img alt="Optimization - Cognitive Load Tradeoff Chart" src="https://mb-prod.imgix.net/journal/2015/the-optimization-cognitive-load-tradeoff/optimization-cognitive-load-summary@2x.png"></p>
<h3>Counter-strategies</h3>
<p>Here are three counter-strategies that I think may be interesting to consider:</p>
<p><strong>Increase cognitive capacity or willpower.</strong> — If the problem is that optimization makes us procrastinate, we may be able to continue to optimize at the same level if we come at it from the direction of training ourselves to procrastinate less and focus more. As someone who has been able to do this for short stretches of time, I know it’s possible, but the difficulty is sustaining it long-term. The negative impact of optimization on task performance, however, is more difficult to solve.</p>
<p><strong>Rebalancing.</strong> — The potentially better option is to be aware of when one’s desire to optimize is causing procrastination or decreased task performance. Then, keeping the end goal in mind, we may try to shift the balance of optimization so that we are optimizing less but also freeing up cognitive resources, so we can either reduce procrastination (so we can do the task in the first place) or increase performance (so we can do the task well). Educational psychologists have used cognitive load in conjunction with the concept of the zone of proximal development to advocate “reach” goals, which lie in the space between goals we can easily achieve and goals we are unable to achieve (Schnotz, 2008). We may take the same approach, finding, say, a <em>zone of proximal performance</em>, where we are still optimizing what we are doing but not so much that it becomes cognitively overbearing. Striking a better balance seems to me to be an effective way to hit the sweet spot of optimizing enough but not to the point of excess.</p>
<p><strong>Separate doing the task and evaluating performance.</strong> — We might be able to reduce the detrimental effect of optimization on cognitive load and performance by segregating the processes of “actually performing the task” and “evaluating our performance”, doing them sequentially instead of simultaneously. Taking the example of writing cold emails, I might write a cold email out, promising myself to review it after I’m done, instead of worrying about how to make each part perfect. When I’m done, I can shift into an ‘evaluative’ stage. This way, I can separate the two tracks of performing the task and evaluating how I’m doing, potentially reducing the negative effect of cognitive load on performance by not needing to keep the two tasks in mind at the same time.</p>
<h3>Gist</h3>
<p>As we can see here, optimization is a double-edged sword. I hypothesize that it can be really beneficial when used correctly, but potentially detrimental and counterproductive when taken to excess. In my next post, I’ll talk about another tradeoff: the tradeoff of optimization with contentment and happiness.</p>
<div>
<p>—</p>
<p><em>Thank you to Quinten Farmer and Dan Shipper for reading a draft of this post.</em></p>
<p>References</p>
<p>Schnotz, W. (2008). Why multimedia learning is not always helpful. In J.-F. Rouet (Ed.), <em>Understanding multimedia documents</em> (pp. 17–43). New York; London: Springer. Retrieved from <a href="http://public.eblib.com/choice/publicfullrecord.aspx?p=367576">http://public.eblib.com/choice/publicfullrecord.aspx?p=367576</a><br>Vohs, K. D., Kaikati, A. M., Kerkhof, P., &amp; Schmeichel, B. J. (2009). Self-regulatory resource depletion: A model for understanding the limited nature of goal pursuit. In G. B. Moskowitz &amp; H. Grant (Eds.), <em>The psychology of goals</em>. New York: Guilford Press.<br>Ward, A., &amp; Mann, T. (2000). Don’t mind if I do: Disinhibited eating under cognitive load. <em>Journal of Personality and Social Psychology</em>, 78(4), 753–763. doi:10.1037/0022-3514.78.4.753</p>
</div>]]></content>
    <author>
      <name>Mark Bao</name>
      <uri>https://markbao.com/</uri>
    </author>
  </entry>
  <entry>
    <title type="html"><![CDATA[Request: help on over-optimization. Reward: a story from Thailand]]></title>
    <id>https://markbao.com/journal/my-email-for-the-listserve</id>
    <link href="https://markbao.com/journal/my-email-for-the-listserve"/>
    <updated>2015-01-03T16:22:00.000Z</updated>
    <content type="html"><![CDATA[<p>
<em>The Listserve is an email listserve with about 25,000 subscribers, in which one person every day is selected to email the entire group. A few days ago, the random number generator smiled upon my user ID (or some such). I didn’t know what to write about, and I didn’t want to give some obvious life advice—so I asked for some, and told a story to add some value.</em>
</p>
<p>
<em>Published January 3, 2015, copied here, with a photo for context.</em>
</p>
<p>
<em>—</em>
</p>
<p>
Hey Listserve,
</p>
<p>
I’m Mark Bao. I’d like to ask for some life advice. And tell you a story.
</p>
<p>
1. For most of my life, I’ve been trying to optimize things as much as possible. Optimize the things I’m working on. Make sure that I’m learning exactly the right things, to build the mental structures so I can be different than others. And above all -- make sure I’m working on something that I think will have the most impact on the world -- which right now I think is behavioral science. But lately, such a focus on optimization, and perfectionism, has gotten difficult -- in part because I realize that there’s so much uncertainty and I can’t predict things, and trying to make sure things work out while not knowing everything has been overwhelming. Has anyone else dealt with this? I’d love to chat with you.
</p>
<p>
2. I’m starting a group of people who are interested in psychology, thoughtful topics, life-long learning, and understanding things on a deeper level. If you read Farnam Street or Raptitude or Less Wrong or are interested in understanding behavior and improving personal growth, it would be rad to have you in the group! Just shoot me an email. The goal is to have a collaborative discussion among thoughtful people trying to make the world a better place.
</p>
<p>
And now a story. Northwest Thailand. During my round-the-world trip. T and I decide to take a day hike into a valley, between two mountains, to a waterfall, crossing a river a few times, climbing boulders, walking through idyllic paths through damp forests and brushes teeming with weird bugs we’ve never seen before.
</p>
<p>
We get to the waterfall, and eat our sandwiches in victory.
</p>
<p>
<img src="https://i.imgur.com/x0vzCqF.jpg">
</p>
<p>
3 hours to sundown - just enough time to get back home before things go dark. But when it does go dark... It gets below zero. If you stayed in the valley, things aren’t looking great for you. We had no more food. No water. Hiking in shorts and a t-shirt. No worries, plenty of time to go.
</p>
<p>
Walking back, T spots an upper trail. I’m thrilled -- I hate backtracking and always like to take new trails. We walk up and see a whole new view of the valley, almost reaching the top of the mountain. But...
</p>
<p>
“Hey, T?”
</p>
<p>
“Yeah?”
</p>
<p>
“Did we lose the trail?”
</p>
<p>
We look down. What was the trail now was a few leaves on the ground.
</p>
<p>
“Uh, weird.”
</p>
<p>
1.5 hours to go before sundown. We tried backtracking, trying to find the leaves on the ground we followed before. It was all shrubs and trees and weird bugs.
</p>
<p>
Nothing.
</p>
<p>
1 hour to get out. No trail. Getting dark. No food. No water. And already feeling chilly.
</p>
<p>
Panic. But after a moment: we remembered we crossed the river at the bottom of the valley. So we thought: well, maybe we should try to get down to the river.
</p>
<p>
We found a relatively flat incline with some leaves, got on our butts, and slid down the side, getting scratched, bit by bugs, dodging tree trunks, and trying to control ourselves going down.
</p>
<p>
We didn’t know if that would lead to the right place. We didn’t know how far we went up and if we had enough time to get down.
</p>
<p>
But then we caught a glimpse of the river. We got up, jumped over a bunch of boulders, and ran over to the river, ridiculously happy that we made it down. Followed the river for a while, found the path again -- and found our way back, walking back home just as the sun set.
</p>
<p>
Mark Bao
<br>
New York, NY
</p>]]></content>
    <author>
      <name>Mark Bao</name>
      <uri>https://markbao.com/</uri>
    </author>
  </entry>
  <entry>
    <title type="html"><![CDATA[A technique for starting new habits and maintaining motivation: Attack Doses]]></title>
    <id>https://markbao.com/journal/a-technique-for-starting-new-habits-attack-doses</id>
    <link href="https://markbao.com/journal/a-technique-for-starting-new-habits-attack-doses"/>
    <updated>2015-01-02T16:21:00.000Z</updated>
    <content type="html"><![CDATA[<p>I’ve always had a nagging feeling (since writing <a href="https://markbao.com/journal/building-sustainable-habits-why-we-make-excuses-and-resist-habit-change"><em>Building Sustainable Habits: Why We Make Excuses and Resist Habit Change</em></a>) that building habits using small steps isn’t always the right way to go. There are people who are able to start a new habit, ramp up fast, build a self-reinforcing loop of motivation, and continue to execute over and over—without the need for small steps. So, which one is the right way to build a new habit? Using small steps, or using the fast-track approach of getting motivated and going hard on the new habit?</p>
<p>One argument is that most people won’t be able to do the fast-track approach. We might hear about the cases where it was successful, but there are many more where it didn’t work, and we don’t hear about those cases.</p>
<p>Another argument might be that we can combine both of them to match the different levels of motivation that we have, such as the high motivation we have at the beginning of a habit and the sometimes-declining motivation we have in the middle of one. In this article, I’ll introduce a hybrid approach, which takes advantage of both high motivation and sustainable habit development, and I’ll outline some example habit development plans that incorporate the attack dose technique. While this is wholly anecdotal and strictly a theory, it may be a useful concept to incorporate into your habit development, especially as you implement new goals for 2015.</p>
<h3>Motivation waves</h3>
<p>BJ Fogg, director of the Stanford Persuasive Tech Lab and creator of Tiny Habits, a habit development program, introduced the idea of <a href="https://www.youtube.com/watch?v=fqUSjHjIEFg"><em>motivation waves</em></a>, stating that we have different levels of motivation at different points during the development of a habit.</p>
<p><img width="100%" src="https://mb-prod.imgix.net/journal/2014/habits-attack-doses/fogg-motivation-wave.png"></p>
<p>Fogg argues that during these high motivation points, we have a temporarily higher ability to engage in habit-building tasks. We might have higher willpower during these points, which would allow us to follow through on doing the habit, which we can apply to dual-process habit development.</p>
<p>A recap of dual-process habit development theory: the goal of habit development is to close the gap between “what you want to do” and “what you actually do”. Let’s take exercise: you <em>know</em> that you should exercise to be healthy, maintain a good appearance, increase confidence, and other reasons of that sort. This is the <em>conscious</em> side, which has reasoned out the benefits. But—a lot of people don’t actually do it, because they feel like it’s annoying, painful, don’t have enough energy, and other reasons, which is their <em>instinctual</em> side thinking about why they don’t want to exercise. These two sides are often in opposition. The goal of sustainable habit development is to build <em>instinctual evidence</em>—such as seeing results, weight loss, better health, etc.—so that “how much you know you <em>should</em> exercise” matches “how much you <em>want to</em> exercise”.</p>
<p>When we start a new habit, we’re almost always in a high motivation state: either it’s a new year, or something just gets us Mad As Hell and we have enough energy to change it.</p>
<p>For me, that’s certainly getting up earlier, which has been a challenge for, I dunno, 22 years or so. In my daily writing exercise for today, I noted my frustration for not being able to develop the waking-up-early habit for so long. For me, the key problem with waking up late is going to sleep late. Right now, I’m in a high motivation state, and I have the rare opportunity to use the energy I have to create the new habit of getting up early.</p>
<h3>The attack dose</h3>
<p>I could engage in the sustainable habit model and build the habit using small steps. With 2am as my usual bedtime, I could probably make “going to bed before 1am” a good first step, and wake up at 9am, feeling moderately accomplished, building some instinctual evidence that “waking up early is a good idea”.</p>
<p>Alternatively, I could harness the high motivation wave that I’m on in the beginning of a habit, and use an <strong>attack dose</strong> to take advantage of that high motivation. The key question is: “what’s the most intense thing I’m willing to do right now to act on this habit?”—to which my answer is, go to sleep 4 hours before my average bedtime. If you’re a midnight sleeper, that means going to sleep at 8pm. (I’m a 2am sleeper, so an attack dose would be 10pm for me.) I’ll also have a specific metric for success as usual, like how good I feel after waking up earlier.</p>
<p>I believe the attack dose will allow you to fast-track building instinctual evidence. My attack dose for waking up earlier, which is going to sleep at 10pm and waking up at 6am, would get me up 4 hours earlier, and—presumably—I would feel much better about that; it would let me taste a bit of how good it feels to wake up so early, which can be highly motivating. A normal ‘small steps’ approach, waking up at 9am, would make me feel good, but not as euphorically good as waking up at 6am. <strong>Since we have the initial motivation to do it, we should use that to build more motivation for us to continue building the habit.</strong></p>
<h3>Planned deceleration</h3>
<p>But there’s a caveat: keeping this up will be extremely difficult. Both internal factors (such as fatigue and willpower depletion, <em>especially</em> at night) and external factors (such as events in the evening and urgent things that prevent you from keeping this up) will throw this into chaos, given enough time and the randomness of life. Make no mistake: during the attack dose period, the prospective habit is highly volatile, neither stable nor sustainable. As a result, we need to predict that these things will happen, and plan to relax the habit intensity over time to more sustainable levels. Consider the following:</p>
<p><img width="100%" src="https://mb-prod.imgix.net/journal/2014/habits-attack-doses/attack-doses@2x.png"></p>
<p>For example, I might start with an attack dose of going to sleep at 10pm for 3 nights, and then gradually decelerate to going to sleep at 11pm, and then 12am, and then 1am. I’ll then engage in sustainable habit development, making small steps to get back to the 10pm goal—perhaps taking a few weeks to get back to 10pm, but doing so sustainably, facing some challenges but being able to get through them, and building a strong foundation of instinctual evidence.</p>
<p>The reason we deliberately decelerate the habit is that random events will happen that will disrupt the high bar you’ve set for yourself with the attack dose. Habits, especially in their early days, are highly vulnerable to disruption, so we need to build <em>resilience</em> to disruption, which is one of the goals of using small steps for sustainable habit development. Habit resilience is what allows us to continue developing a habit even when we face challenges, like missing a day; without that resilience, failing to make good on a habit one day may unravel the entire habit.</p>
<p>In mindfulness meditation and in Zen, perhaps the most important lesson is to reserve judgment and be kind to oneself when things go wrong. If, in the middle of meditation, you start thinking about bills or something that happened yesterday or things you’re about to do, and then catch yourself in this state of ‘monkey mind,’ the key is to not berate yourself for failing to keep focused. Rather, the better course of action is to recognize it, accept that it happened, and refocus without judgment.</p>
<p>Resilience is the acceptance of challenges and pressing on regardless. You could be strong-willed and stick to the attack dose and see challenges as just “one-time things” that you can bounce back from—but this depends strongly on willpower, which is unreliable. Instead, a more accessible goal is to lower the bar, so when challenges inevitably crop up, it’s more likely that we are resilient enough to continue the habit, instead of losing it altogether. In other words, <strong>if we expect that there will be challenges, and plan that into habit development, we may be more capable of continuing the habit despite challenges.</strong></p>
<p>This is especially relevant for new year’s resolutions. Many people go hard on a new resolution, such as going to the gym every morning, and expect to keep it up indefinitely. But then they miss a day, then another, and the habit is often lost after a few discouraging failures. Instead of assuming we can keep up an ambitious habit, we have to expect that our motivation will wane and that things will come up to disrupt the habit. We have to <em>plan</em> for them: lower the bar for the habit and expect disruptions, rather than counting on making it to the gym every day at 6am. That is how we build resilience.</p>
<p>When we do so, the habit is easier to do, and we may be more easily motivated to do it, especially compared to the attack dose. A task that we are easily motivated to do creates a safety net for when our motivation is lower: the task is still easy enough to do regardless.</p>
<p>Consider the following hypothetical diagram:</p>
<p><img width="100%" src="https://mb-prod.imgix.net/journal/2014/habits-attack-doses/motivation-resilience@2x.png"></p>
<p>Here, we see that we match our high motivation with high habit intensity in the beginning (the attack dose). Then, we taper down, but our motivation is most likely still high. The reason is that by creating a gap between the (lower) intensity of the habit and our (higher) motivation level, we <em>may</em> be more resilient to drops in motivation that may happen over the course of time. Assuming that missing one day of a habit puts us at high risk of dropping the habit altogether, it seems to be essential to have this safety net.</p>
<h3>The attack dose: pros and cons</h3>
<p>My theory is that the attack dose:</p>
<ul>
<li><strong>Builds more instinctual evidence</strong> for the habit. After waking up at 6am and getting a lot done, I’ll really see the benefit of waking up early. By taking advantage of the higher motivation wave, this builds far more instinctual evidence than the small-steps plan alone would, and that stronger evidence should, in theory, sustain my motivation to continue the habit.</li>
<li><strong>May create self-reinforcing motivation</strong>. By getting big successes early on, your initial high motivation may be even more elevated, potentially boosting future habit performance. Seeing lots of results from attack-dose habit development may contribute a lot more to your motivation than seeing smaller gains from normal small-steps habit development. (This may become a motivation ‘multiplier’ of sorts, causing cascading effects.)</li>
<li><strong>Keeps the instinctual evidence more accessible</strong>. I can probably think of a day a long time ago when I got up early, but that far-away memory doesn’t affect my decision-making much. Rather, immediate, near-term evidence (such as how great it was to wake up early) may be more effective, since you experienced the evidence recently. [We know that activating certain memories using words makes them more accessible, which in turn influences judgment (Forster &amp; Liberman, 2007). It doesn’t seem too much of a jump to hypothesize that <em>experiential</em> activation, that is, doing some action, increases accessibility and influences behavior.]</li>
<li><strong>Shows the contrast between the goal and your initial state</strong>. Imagine going from a few super-productive days waking up at 6am and then scaling down to a schedule close to your previous one. Seeing the contrast between the two may be highly motivating for you to get back to waking up at 6am.</li>
<li><strong>Can be combined with other techniques to increase success</strong>. We can use techniques that may seem too heavy-handed for small-steps habit development, but that allow us to hit the attack dose goals. For waking up early, we can have negative consequences, such as losing money, if we don’t wake up in time. For exercise, we can enlist a personal trainer to make sure that we get through our attack dose, which could be, say, a full 60 minutes of training. For meditation, a difficult habit to start, we can enroll in a meditation class that will guide us through. While these are not necessary for small-steps development, they can increase the potential that we succeed at achieving the attack dose to build motivation.</li>
</ul>
<p>There are risks present, of course:</p>
<ul>
<li><strong>Deceleration may be demotivating.</strong> Someone going through the planned deceleration process, despite knowing that this is what they planned all along, may be demotivated from seeing their goals dwindle during that period.</li>
<li><strong>The habit may be lost during deceleration.</strong> If someone went through hardship to achieve the attack dose, perhaps if the costs outweighed the benefits, they may lose the habit during deceleration and fail to engage in sustainable development.</li>
</ul>
<p>These risks can potentially be sidestepped if we build enough evidence during the attack dose for the importance of the habit, ideally creating enough momentum for the person to either return to the attack-dose level or find a balance between sustainable levels and attack-dose levels. More work is needed on figuring out how to reduce these risks.</p>
<h3>Example implementations</h3>
<section>
<h4>Exercise</h4>
<p><strong>Attack dose during high motivation</strong>: Go to the gym for 60 minutes, 3 times a week</p>
<p><strong>Planned deceleration</strong>: Scale back to 15-minute sessions, 3 days a week, then escalate back to 60 minutes</p>
<p><strong>Building evidence for</strong>: How exercise makes you feel great about yourself and your health, makes you more confident, etc.</p>
<p><strong>Attack dose benefits</strong>: Allows you to build early evidence and taste a bit of how great it feels to exercise, but also plans for future disruptions by scaling back and developing sustainably</p>
<p><strong>Attack dose risks</strong>: Someone may not be physically able to exercise for 60 minutes; scaling back to 15 minutes may make someone lazier over time; someone may not see the results they want during the attack dose and may become less motivated.</p>
<p><strong>Co-techniques</strong> (simultaneous techniques that can increase adoption): External incentives, such as signing up for a class or personal trainer that will make sure that we get through our attack dose.</p>
</section>
<section>
<h4>Meditation</h4>
<p><strong>Attack dose during high motivation</strong>: Meditate for 20 minutes for 5 days</p>
<p><strong>Planned deceleration</strong>: Scale back to meditating for 3 minutes per day, then gradually increase to 30 minutes</p>
<p><strong>Building evidence for</strong>: How beneficial meditation is, awareness of how busy our minds are, and the need to meditate to become more mindful</p>
<p><strong>Attack dose benefits</strong>: Allows you to build early evidence and understand how useful meditation is; 3 minutes is often too little time to see the benefits of meditation apart from “wow, my mind is really chatty”</p>
<p><strong>Attack dose risks</strong>: Starting out on a meditation habit is difficult because the first few sessions are frustrating, and the attack dose can potentially exacerbate this frustration, but this is highly dependent on the individual.</p>
<p><strong>Co-techniques</strong>: Enrolling in a meditation class, or meditating with someone who meditates often, who can convince you that the difficulties and frustrations are normal.</p>
</section>
<section>
<h4>Waking up early</h4>
<p><strong>Attack dose during high motivation</strong>: Wake up 4 hours before average wake-up time (e.g. waking up at 6am if you usually wake up at 10am)</p>
<p><strong>Planned deceleration</strong>: Scale back to waking up one hour earlier, then work up to desired time</p>
<p><strong>Building evidence for</strong>: How great it is to wake up early, feeling less guilty about wasting the day, better work, etc.</p>
<p><strong>Attack dose benefits</strong>: Allows you to build early evidence and really see the benefits of waking up early, and it can also get you out of the cycle of waking up late, going to bed late, waking up late, etc.</p>
<p><strong>Attack dose risks</strong>: Going to sleep earlier than normal may be difficult, even with high motivation (physical limitations).</p>
<p><strong>Co-techniques</strong>: External incentives, such as paying a certain amount of money if you don’t get up at a certain time, can help increase initial adoption. For waking up early, one can combat the physical limitations by using e.g. melatonin to meet the attack dose goals. (These are only for achieving the attack dose, and I believe they are counterproductive in the sustainable development process for building a true internal incentive for the habit.)</p>
</section>
<h3>Gist</h3>
<p>When we start a new habit, we are most likely highly motivated to carry it out. It may be smart to take advantage of this high motivation wave and engage in high-intensity habits to build a lot of instinctual evidence for the habit, and then enter into planned deceleration, shifting gears to sustainable habit development. The net result may be higher motivation and much more instinctual evidence that the habit is worthwhile—evidence that can give us strong motivation to continue building the habit, and evidence that is <em>so much more real</em> because, well, you just proved it was.</p>
<section>
<p><em>Acknowledgements</em></p>
<p>Thank you to <a href="https://twitter.com/dngoo">David Ngo</a>, <a href="https://twitter.com/arjunblj">Arjun Balaji</a>, and <a href="https://twitter.com/conradd">Conrad Barrett</a> for reviewing and discussing drafts of this article.</p>
<p><em>References</em></p>
<p>Forster, J., &amp; Liberman, N. (2007). Knowledge Activation. In A. W. Kruglanski &amp; E. T. Higgins (Eds.), <em>Social Psychology: Handbook of Basic Principles</em> (2nd., pp. 201–231).</p>
</section>]]></content>
    <author>
      <name>Mark Bao</name>
      <uri>https://markbao.com/</uri>
    </author>
  </entry>
</feed>
