🤔💭Justin’s Comments on Yes Or No Philosophy, Part 3 (Main Video Continued)👨🏻‍💻📝

Part 1
Part 2

Elliot Temple has commented on this post here

(These are comments on the longest/"main" video in the Yes/No educational product, specifically on the content starting around the 60 minute mark and continuing until around the 90 minute mark. This is a selective summary/discussion and will omit many points and details; to get those, you’ll have to buy the whole product!)

Part 3 of Video (continued from last post)

Degrees of problem solving

Elliot says there are no degrees of problem solving if we’re being precise. A solution does or doesn’t solve a problem. You can only think otherwise if you aren’t carefully defining your success criteria, yet are still taking those criteria into consideration when evaluating solutions.

Example: if you say you want to make as much money as possible, then you should act in a way which would actually achieve your supposed goal; in other words, you should be spending every minute doing the actions which will make you more money. But people don’t really mean it when they say something like “I want to make as much money as possible.” They mean they’d like to make lots of money subject to a bunch of constraints regarding working hours, interesting work, etc.

Getting Stuck

If you get stuck in solving a problem, you may want to reconsider the problem, and not just focus on the ideas you’re considering as solutions.

One thing you could do is make the problem more demanding so more stuff gets ruled out.

Elliot says being precise about problems can help get epistemology correct.

Summary so far

Elliot gives a summary so far, some highlights:

Amounts of support & positive args are myths.

All args are decisive or false!

Ideas are refuted in the context of solving a problem.

If you have only one yes idea for a problem, act on it.

Part 4 of Video: Decision Charting

(Elliot gives an example of a pet decision chart)

J’s Comment: Decision charting is a good idea. I had a recent decision regarding a computer purchase and had something like a decision chart in mind when making the decision.

I’m not gonna try and do a chart in text 🙂 But my proposed solutions were something like:

Macbook
Macbook Air
Macbook Pro
Windows laptop
Don’t buy anything

And my problems were something like

  1. Lets me have access to a full computer outside the house
  2. Runs OSX
  3. Retina screen

One thing I got kinda stuck on was, I couldn’t decide whether I wanted MAX LIGHTNESS or MORE PERFORMANCE. So while the problems above knocked out the Macbook Air, Windows laptop, and not buying anything, I was indecisive for a while between the Macbook and Macbook Pro.
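
For concreteness, here’s roughly what my laptop decision would look like as a chart, sketched in a bit of Python. The yes/no cells and the little helper function are my own rough judgments and illustration, not anything taken from the video:

```python
# My laptop decision as a rough chart. Each cell is a plain yes/no
# judgment about whether the candidate passes that criterion.

CRITERIA = [
    "full computer outside the house",
    "runs OSX",
    "retina screen",
]

CHART = {
    "Macbook":            {"full computer outside the house": True,  "runs OSX": True,  "retina screen": True},
    "Macbook Air":        {"full computer outside the house": True,  "runs OSX": True,  "retina screen": False},
    "Macbook Pro":        {"full computer outside the house": True,  "runs OSX": True,  "retina screen": True},
    "Windows laptop":     {"full computer outside the house": True,  "runs OSX": False, "retina screen": True},
    "Don't buy anything": {"full computer outside the house": False, "runs OSX": False, "retina screen": False},
}

def surviving_candidates(chart, criteria):
    """Keep only candidates with no 'no' anywhere, i.e. not refuted by any criterion."""
    return [name for name, row in chart.items() if all(row[c] for c in criteria)]

print(surviving_candidates(CHART, CRITERIA))
# -> ['Macbook', 'Macbook Pro'] -- exactly the two I was stuck between
```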


Elliot says Decision Charts are trying to solve a similar problem to things like pro/con lists, giving things a 1-10 score, etc. Except it’s more useful in lots of cases, because saying “no” to a bunch of stuff gives you more useful information than a kinda vague score.

Elliot says you can use decision charts to analyze complex questions like what’s a good economic system. But for this kinda thing you need footnotes to the shorthand expressions for the issues in your chart.

The charts help you organize your conclusions, but you still need to come to those conclusions by thinking, writing arguments, etc.

Part 5 of Video

Elliot says if you have multiple yes ideas, you can try and find flaws with each, brainstorm crit, clarify the ideas, etc. Say you do this, but you’re still stuck.

You can consider various things, like whether you should change your standards to try and solve a different, easier problem.

Time limits

Elliot says that if a time limit comes up in solving a problem, you can reconsider the idea in light of the time limit, and come up with a new plan.

J’s Comment: I did this yesterday. I was planning on making a somewhat elaborate baked penne pesto with sausage, but decided that would take too long and be too much effort for my level of interest at that time. So I did a simpler version of the penne pesto dish that took less time.

Elliot points out that reconsidering the problem in light of time pressure generally leads to doing something easier, because you don’t have time to do something harder right now.

J’s Comment: Just like my pesto example 😀

Temporary Solutions

If you have two things you really want, and they seem to conflict, and you’re running out of time, you can come up with a temporary solution.

Elliot gives an example: if you’re conflicted about whether to drop out of school, you can decide to stay in school for one more week.

Elliot says good ideas are reasonable and flexible. They generally won’t insist you figure out something you’re struggling with RIGHT NOW unless there’s a good reason.

🤔💭Justin’s Comments on Yes Or No Philosophy, Part 2 (Main Video Continued)👨🏻‍💻📝

Note: These are comments on the longest/"main" video in the Yes/No educational product, specifically on the content starting at the 30 minute mark and continuing until around the 60 minute mark. This is a selective summary/discussion and will omit many points and details; to get those, you’ll have to buy the whole product!

Update: Elliot has replied to this post here and here

Part 1 of Video

(continued from previous post which also discussed Part 1)

Arguments

Elliot says all correct arguments are decisive.

There are two types of crit: false crits, and decisive crits which settle the matter by completely refuting an idea.

Elliot notes some people gather a bunch of bad arguments and think the collection of bad arguments is worth something. But it’s not.

J’s Comment: Yeah like, people will do spaghetti arguments in favor of some kinda conspiracy theory and have 20 args. And suppose you quickly refute five, they’ll still be like “BUT SURELY THERE MUST BE SOMETHING HERE!!! I HAVE FIFTEEN MORE!”

And if you refute the fifteen more they’ll just make minor variations or come up with more new bad stuff!

Refutations

Elliot gives an example of someone thinking they’ve discovered a planet and it turns out there’s candy on their lens. The theory that they discovered a new planet is thus totally refuted, not 50% refuted.

Elliot says medium arguments don’t exist.


Part 2 of Video

Negative Arguments

Elliot says that Yes/No only uses negative arguments (criticisms), though positive args which can be rephrased into an equivalent negative argument are okay.

Negative arguments say why an idea doesn’t work. People might think an idea can work overall and merely have a negative side effect; Elliot gives the example of a useful app costing money.

J’s Comment: From an economics perspective, being willing to buy something means you value it more than the money. Like if you buy a slice of pizza for $3, that means you value the pizza more than the $3. People can be really confused about this though. I guess partially because they don’t have a clear idea of what their values are, so they often don’t buy stuff in a non-coerced way. Like, their attitude isn’t “I’ve made judgments about what’s important given the wealth I have, and am happy to pay accordingly.” It’s more like “I HAVE to buy X, and Y, and Z, and now I can’t buy A, and B, and C, and that sucks.”

Context

Elliot says you gotta think about your problems carefully in the context of your life. An app might do what you want it to do in terms of functionality, but cost too much. It can solve problem X but not X + Y. An app that has some functionality solves one problem — an app with the required functionality, that also costs less than $5, is a different problem.

Elliot gives an extended example involving pets. The main point is when you take a problem and then add some constraint onto it, that’s a different problem. And you need to understand what problem you wanna solve.

Positive Arguments

Positive arguments (that cannot be restated as negative arguments) are a myth. They are supposed to support ideas, but that’s false.

Elliot gives some examples re: positive arguments that can be restated as negative arguments and those that cannot.

J’s Comment: rather than restating Elliot’s examples, I’ll give my own to ensure understanding.

“I’ll buy a Mac, cuz I want a computer that runs OSX” could be restated as “I will not buy a Windows PC, because it does not run OSX.” So the argument is valid.

“I will buy a Mac, cuz I want a computer with a screen”, if restated as a negative argument, would be something like “I will not buy a Windows PC, cuz those don’t have screens.” That’s false, so the argument is invalid.


Part 3 of Video

Purpose of ideas

Elliot says ideas have a purpose. So one idea can work for one purpose and not for another. And a refutation of an idea relates to the purpose.

J’s Comment: yeah. “buy a calzone” is a good idea for lunch but not for making money.

More On Context

Elliot says people look at flaws out of context. That can be okay as an approximation but not if you need to be precise. And if a flaw doesn’t prevent an idea from succeeding at its purpose, it doesn’t matter.

Elliot says precision about the purpose and context can help when you are having problems figuring something out. With the support type approach, people are being wishy-washy and refusing to make judgments.

You don’t refute idea X. You refute idea X for purpose Y. There are no out-of-context refutations. Ideas are trying to solve a problem of some sort.

Limited Information

Elliot says limited information is something you can deal with — there’s a best guess to make about how to solve X given not knowing Y.

J’s Comment: And I’d think that in principle, even after specifying some narrow context, you’re always dealing with limited information.

Like suppose you’re deciding what to have for lunch, and one of your considerations is wait time. You won’t know in advance the wait times down to the nanosecond. That’s no problem though! You don’t need infinite precision to come to a judgment regarding what’s too long a wait time.


Where to put the complexity

Elliot makes a subtle point: he says people often have a really simple problem, like “what dog should I get”, and put all the complexity in the solution. But some of the complexity should be in the solution and some should be in the problem. You can write down what problem you’re trying to solve and have a more sophisticated understanding of what the problem is.

J’s Comment: I think one reason it’d be good to have some of the complexity in the problem is it would help narrow down the field from the very outset and thus save on “search costs.” Like if you are considering a dog, then whether you want a cute pet dog, or a big scary dog to protect your house in a rough neighborhood, or a hunting dog, or a dogshow dog, or a dog to help you navigate cuz you’re blind, etc, (whoa there’s a lot of dog contexts!) is going to dramatically narrow the scope of dogs you’ll have to consider. There’s sort of like, pre-built standard lists of dogs that satisfy these functions, and you can just search through those and then apply your other crits to the candidates within that list instead of having to go through the whole universe of dog possibilities. Is that right? And either way, can you elaborate on the benefits to putting some of the complexity in the problem? I’m a bit fuzzy on that point.
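
In the meantime, here’s a rough Python sketch of the search-cost idea as I currently understand it. All of the dog data and the “kid-friendly” criterion are invented for illustration:

```python
# Putting the purpose into the problem lets you start from a small
# pre-built shortlist instead of the whole universe of dogs.

DOGS_BY_PURPOSE = {
    "guide dog":   ["Labrador Retriever", "Golden Retriever", "German Shepherd"],
    "guard dog":   ["Rottweiler", "Doberman", "German Shepherd"],
    "hunting dog": ["Beagle", "Pointer", "Labrador Retriever"],
}

def candidates(purpose, other_crits):
    """Start from the purpose-specific shortlist, then apply the remaining criticisms."""
    shortlist = DOGS_BY_PURPOSE.get(purpose, [])
    return [dog for dog in shortlist if all(crit(dog) for crit in other_crits)]

# Example extra criterion (also invented): must be fine with small kids.
kid_friendly = {"Labrador Retriever", "Golden Retriever", "Beagle"}
print(candidates("guide dog", [lambda d: d in kid_friendly]))
# -> ['Labrador Retriever', 'Golden Retriever']
```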

Philosophical Problem Solving

Elliot describes the philosophical method of problem solving (I’m summarizing heavily):

Problems require you to learn the solution (create knowledge).

You learn by brainstorming ideas and criticizing those ideas (and then coming up with similar ideas that address the crit, or dropping the idea). This process is literally (not figuratively) evolution (since ideas are replicators).

🤔💭Justin’s Comments on Yes Or No Philosophy, Part 1👨🏻‍💻📝

(These are comments on the first 30 minutes of the longest/"main" video in the Yes/No educational product. This is a selective summary/discussion and will omit many points and details; to get those, you’ll have to buy the whole product!)

Elliot describes the standard view, which is that ideas have amounts of goodness. These amounts can be described numerically or with words. Favorable args or evidence increase support, and crits reduce it. But no one knows how to measure an idea’s goodness.

Elliot says people use the idea of criticism reducing idea goodness/support in order to ignore crit. That’s bad!

Elliot mentions that there are various words people use for idea goodness, and specifically mentions authority, which is controversial. Some people reject it and try to think for themselves, but then their method is to look at support!

J’s Comment: a good example of how people can fall into an “intellectual trap” without the right epistemology. People can rightly reject authority but then switch to a method which makes the same sort of epistemological mistake. They might still improve their ideas and understanding, but their efforts could be more successful if they had more philosophical perspective on the issue.

On the issue of words for goodness, some I would not have recognized as “goodness” terms before watching the video were educated guess and myth.

Elliot discusses how the support approach leads to people having different, irreconcilable conclusions due to assigning things different “weights.” The weights are not the process used to determine the truth of the matter in their mind — the weights are an argument technique.

J’s Lengthy Comment: In US law there is frequent use of “balancing tests.” The idea is you consider a list of factors and “weigh” them somehow to come to a conclusion.

So for instance, when considering what procedures are required to deprive someone of life, liberty, or property, a court will supposedly weigh

(1) The importance of the private interest affected.

(2) The risk of erroneous deprivation through the procedures used, and the probable value of any additional or substitute procedural safeguards.

(3) The importance of the state interest involved and the burdens which any additional or substitute procedural safeguards would impose on the state.

Justice Scalia once said of a balancing test:

This process is ordinarily called “balancing,” but the scale analogy is not really appropriate, since the interests on both sides are incommensurate. It is more like judging whether a particular line is longer than a particular rock is heavy.

And I think that’s a very good way to put it. In coming up with an idea of what (for example) procedural due process you need in some circumstance, you can’t take a bunch of criteria and “weigh” their relative importance in order to come up with an idea. How many super important private interests equals a moderately important state interest? There’s no answer.

Elliot says that one reason people like talking in terms of numbers is that it gives them a built-in excuse: even if they give a very high number for their “certainty” that an idea is true, say 99%, they’ve left themselves an out in case they’re wrong. Basically, people don’t like dealing with fallibility, unlikely stuff, etc.

J’s Comment: People might say it’s like 99.999999999999% certain the sun will rise tomorrow. They think talking about the sun rising is pretty safe, but wanna cover their bases in case a giant asteroid hits us and knocks us out of orbit or something wacky like that. But really what’s going on is we have an explanatory model of reality which says events will happen that we call the sun rising under certain conditions. And as long as our explanatory model is true and those conditions hold, then the sun will rise, 100%. And when those conditions don’t hold anymore or our theory turns out to deviate from reality in some relevant respect, then the sun definitely won’t rise.

And also as a side note, I bet there’s modeling for things like the statistical chance of SURPRISE SNEAKY ASTEROID KNOCKING US OUT OF ORBIT, and it has actual numbers, not arbitrary tiny percentage guesses.

Elliot says people think support works cuz they think they use it themselves and attribute lots of successful progress to it. But they’re wrong about how their thinking works.

Elliot talks about the relationship between authority and support. Basically, prestigious people believing an idea adds to its support. Elliot makes the good point that if you aren’t judging the idea itself, you’re left with authority (fame/prestige/academic degrees of speaker, popularity of idea).

J’s Comment: One thing I bring up a lot when talking about prestigious people is … they disagree! You can find people with fancy Harvard and Yale degrees who think all sorts of stuff. So what do you do with that situation? Do you go by number of people? What if more prestigious people (who are numerically fewer) think one thing on some issue, and numerically more but less prestigious people think something else? Do the more prestigious people count more? How much more?

Seems like a big, impossible mess to try and sort that out, just to avoid thinking about issues directly!!!

Here’s an example: CNN ran a whole big hit piece on Sebastian Gorka (who just left the White House) basically saying he’s not considered prestigious enough by other experts in the field.

That’s what authority-based approaches lead to…fighting over credentials instead of ideas.

Elliot says we should reject the whole support model and use yes or no/boolean judgments instead. Support doesn’t work and can’t solve the problem of how to believe good ideas and reject bad ideas.

Under this new approach, we can believe good ideas (“yes” ideas) and reject bad ones (“no” ideas). But we can’t directly compare two ideas we currently think are good. We have to come up with criticisms that will allow us to reject one of the ideas.

J’s Comment:

If people go by authorities, they still have to pick which authorities to go by. They are still making a judgment and still responsible. But it seems much easier for people to not feel responsible when they rely on other people’s thinking. To explicitly and consciously take responsibility for one’s ideas is a big deal and a hard step for many people. So I think this would be an objection many people would have to moving away from support to a YES/NO approach.

Elliot points out that when you decide between a “good” idea and a “great” idea, you’re choosing, you’re picking a side, you’re saying yes to one and no to the other one. So just admit that!

Ideas are “yes” by default, and “no” if you refute them. So all ideas can be categorized this way.
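
Here’s a tiny Python sketch of that default rule, just to make the bookkeeping concrete. The ideas and the one recorded criticism are placeholders I made up:

```python
# "Yes by default, no if refuted": an idea is judged "no" only if we
# have a criticism of it; otherwise it stays "yes".

refutations = {
    "refuted idea": "doesn't actually solve the problem",
}

def judge(idea):
    # Boolean judgment, no scores or weights.
    return "no" if idea in refutations else "yes"

for idea in ["good idea", "great idea", "refuted idea"]:
    print(idea, "->", judge(idea))

# Both the "good" and the "great" idea come out "yes". To choose between
# them you can't compare scores; you need a criticism that turns one of
# them into a "no".
```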

NOTE: Elliot Temple replied here