πŸ€”πŸ’­Justin’s Comments on Yes Or No Philosophy, Part 2 (Main Video Continued)πŸ‘¨πŸ»β€πŸ’»πŸ“

Note: These are comments on the longest/”main” video in the Yes/No educational product, and specifically on the content starting at the 30 minute mark and continuing until around the 60 minute mark. This is a selective summary/discussion of items and will omit many points and details, which you’ll have to buy the whole product to get!

Update: Elliot has replied to this post here and here

Part 1 of Video

(continued from previous post which also discussed Part 1)

Arguments

Elliot says all correct arguments are decisive.

There are two types of crit: false crits, and decisive crits which settle the matter by completely refuting an idea.

Elliot notes some people gather a bunch of bad arguments and think the collection of bad arguments is worth something. But it’s not.

J’s Comment: Yeah like, people will do spaghetti arguments in favor of some kinda conspiracy theory and have 20 args. And suppose you quickly refute five, they’ll still be like “BUT SURELY THERE MUST BE SOMETHING HERE!!! I HAVE FIFTEEN MORE!”

And if you refute the fifteen more they’ll just make minor variations or come up with more new bad stuff!

Refutations

Elliot gives an example of someone thinking they’ve discovered a planet and it turns out there’s candy on their lens. The theory that they discovered a new planet is thus totally refuted, not 50% refuted.

Elliot says medium arguments don’t exist.


Part 2 of Video

Negative Arguments

Elliot says that Yes/No only uses negative arguments (criticisms), though positive args which can be rephrased into an equivalent negative argument are okay.

Negative arguments say why an idea doesn’t work. People sometimes think an idea works but has a negative side effect; Elliot gives the example of a useful app costing money.

J’s Comment: From an economics perspective, being willing to buy something means you value it more than the money. Like if you buy a slice of pizza for $3, that means you value the pizza more than the $3. People can be really confused about this though. I guess partially because they don’t have a clear idea of what their values are, so they often don’t buy stuff in a non-coerced way. Like, their attitude isn’t “I’ve made judgments about what’s important given the wealth I have, and am happy to pay accordingly.” It’s more like “I HAVE to buy X, and Y, and Z, and now I can’t buy A, and B, and C, and that sucks.”

Context

Elliot says you gotta think about your problems carefully in the context of your life. An app might do what you want it to do in terms of functionality, but cost too much. It can solve problem X but not X + Y. Finding an app with some functionality is one problem; finding an app with the required functionality that also costs less than $5 is a different problem.

Elliot gives an extended example involving pets. The main point is when you take a problem and then add some constraint onto it, that’s a different problem. And you need to understand what problem you wanna solve.

Positive Arguments

Positive arguments (that cannot be restated as negative arguments) are a myth. They are supposed to support ideas, but that’s false.

Elliot gives some examples re: positive arguments that can be restated as negative arguments and those that cannot.

J’s Comment: rather than restating Elliot’s examples, I’ll give my own to ensure understanding.

“I’ll buy a Mac, cuz I want a computer that runs OSX” could be restated as “I will not buy a Windows PC, because it does not run OSX.” So the argument is valid.

“I will buy a Mac, cuz I want a computer with a screen”, if restated as a negative argument, would be something like “I will not buy a Windows PC, cuz those don’t have screens.” That’s false, so the argument is invalid.
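To make that restating move concrete, here’s a quick Python sketch of my own (not from the video; the candidate list and properties are made up). It treats a positive argument like “I want a computer with property P” as the negative argument “reject any candidate that lacks P,” and shows how the screen version refutes nothing:

```python
# My own toy sketch (not from the video): restating a positive argument
# ("I want a computer with property P") as the equivalent negative
# argument ("reject any candidate lacking P").

candidates = {
    "Mac": {"runs_osx": True, "has_screen": True},
    "Windows PC": {"runs_osx": False, "has_screen": True},
}

def negative_form(required_property):
    """Return a criticism that refutes candidates lacking the property."""
    def criticism(name):
        if candidates[name][required_property]:
            return None  # not refuted by this criticism
        return f"{name} is refuted: it lacks {required_property}"
    return criticism

crit_osx = negative_form("runs_osx")       # "I want a computer that runs OSX"
for name in candidates:
    print(name, "->", crit_osx(name) or "not refuted by this criticism")

crit_screen = negative_form("has_screen")  # "I want a computer with a screen"
for name in candidates:
    # This one refutes nothing, so it does no work as an argument
    # for buying a Mac in particular.
    print(name, "->", crit_screen(name) or "not refuted by this criticism")
```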


Part 3 of Video

Purpose of ideas

Elliot says ideas have a purpose. So one idea can work for one purpose and not for another. And a refutation of an idea relates to the purpose.

J’s Comment: yeah. “buy a calzone” is a good idea for lunch but not for making money.

More On Context

Elliot says people look at flaws out of context. That can be okay as an approximation but not if you need to be precise. And if a flaw doesn’t prevent an idea from succeeding at its purpose, it doesn’t matter.

Elliot says precision about the purpose and context can help when you are having problems figuring something out. With the support type approach, people are being wishy-washy and refusing to make judgments.

You don’t refute idea X. You refute idea X for purpose Y. There’s no out-of-context refutations. Ideas are trying to solve a problem of some sort.

Limited Information

Elliot says limited information is something you can deal with — there’s a best guess to make about how to solve X given not knowing Y.

J’s Comment: and I’d think in principle, even specifying some narrow context, you’re always dealing with limited information.

Like suppose you’re deciding what to have for lunch, and one of your considerations is wait time. You won’t know in advance the wait times down to the nanosecond. That’s no problem though! You don’t need infinite precision to come to a judgment regarding what’s too long a wait time.


Where to put the complexity

Elliot makes a subtle point: he says people often have a really simple problem, like “what dog should I get?”, and put all the complexity in the solution. But some of the complexity should be in the solution and some should be in the problem. You can write down what problem you’re trying to solve and have a more sophisticated understanding of what the problem is.

J’s Comment: I think one reason it’d be good to have some of the complexity in the problem is it would help narrow down the field from the very outset and thus save on “search costs.” Like if you are considering a dog, then whether you want a cute pet dog, or a big scary dog to protect your house in a rough neighborhood, or a hunting dog, or a dogshow dog, or a dog to help you navigate cuz you’re blind, etc, (whoa there’s a lot of dog contexts!) is going to dramatically narrow the scope of dogs you’ll have to consider. There’s sort of like, pre-built standard lists of dogs that satisfy these functions, and you can just search through those and then apply your other crits to the candidates within that list instead of having to go through the whole universe of dog possibilities. Is that right? And either way, can you elaborate on the benefits to putting some of the complexity in the problem? I’m a bit fuzzy on that point.
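To illustrate the search-cost intuition, here’s a toy Python sketch of my own (the dog data is invented): when the constraint is part of the problem statement, you can filter the candidate pool immediately, before doing any detailed criticism of individual breeds.

```python
# Toy sketch of the "search cost" point (dog data is invented):
# constraints stated in the problem filter the candidate pool up front,
# before you apply your more detailed criticisms.

dogs = [
    {"breed": "Chihuahua", "roles": {"pet"}},
    {"breed": "German Shepherd", "roles": {"guard"}},
    {"breed": "Labrador Retriever", "roles": {"pet", "guide", "hunting"}},
    {"breed": "Beagle", "roles": {"pet", "hunting"}},
]

def candidates_for(role):
    """'What dog should I get?' plus a constraint like 'to guide me'
    narrows the field before any further criticism is needed."""
    return [d["breed"] for d in dogs if role in d["roles"]]

print(candidates_for("guide"))    # ['Labrador Retriever']
print(candidates_for("hunting"))  # ['Labrador Retriever', 'Beagle']
```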

Philosophical Problem Solving

Elliot describes the philosophical method of problem solving (I’m summarizing heavily):

Problems require you to learn the solution (create knowledge).

You learn by brainstorming ideas and criticizing those ideas (and then coming up with similar ideas that address the crit, or dropping the idea). This process is literally (not figuratively) evolution (since ideas are replicators).

πŸ€”πŸ’­Justin’s Comments on Yes Or No Philosophy, Part 1πŸ‘¨πŸ»β€πŸ’»πŸ“

(These are comments on the first 30 minutes of the longest/”main” video in the Yes/No educational product. This is a selective summary/discussion of items and will omit many points and details, which you’ll have to buy the whole product to get!)

Elliot describes the standard view, which is that ideas have amounts of goodness. These amounts can be described numerically or with words. Favorable args or evidence increase support, and crits reduce it. But no one knows how to measure an idea’s goodness.

Elliot says people use the idea of criticism reducing idea goodness/support in order to ignore crit. That’s bad!

Elliot mentions that there’s various words for idea goodness people use and specifically mentions authority, which is controversial. Some people reject it and try to think for themselves, but then their method is to look at support!

J’s Comment: a good example of how people can fall into an “intellectual trap” without the right epistemology. People can rightly reject authority but then switch to a method which makes the same sort of epistemological mistake. They might still improve their ideas and understanding, but their efforts could be more successful if they had more philosophical perspective on the issue.

On the issue of words for goodness, some I would not have recognized as “goodness” terms before watching the video were educated guess and myth.

Elliot discusses how the support approach leads to people having different, irreconcilable conclusions due to assigning things different “weights.” The weights are not the process used to determine the truth of the matter in their mind — the weights are an argument technique.

J’s Lengthy Comment: in US law there is frequent use of “balancing tests.” The idea is you consider a list of factors and “weigh” them somehow to come to a conclusion.

So for instance, when considering what procedures are required to deprive someone of life, liberty, or property, a court will supposedly weigh

(1) The importance of the private interest affected.

(2) The risk of erroneous deprivation through the procedures used, and the probable value of any additional or substitute procedural safeguards.

(3) The importance of the state interest involved and the burdens which any additional or substitute procedural safeguards would impose on the state.

Justice Scalia once said of a balancing test:

This process is ordinarily called “balancing,” but the scale analogy is not really appropriate, since the interests on both sides are incommensurate. It is more like judging whether a particular line is longer than a particular rock is heavy.

And I think that’s a very good way to put it. In coming up with an idea of what (for example) procedural due process you need in some circumstance, you can’t take a bunch of criteria and “weigh” their relative importance in order to come up with an idea. How many super important private interests equals a moderately important state interest? There’s no answer.
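One way to see the arbitrariness in code: any weighted sum of incommensurate factors needs conversion constants, and the conclusion tracks whichever constants you pick. A made-up Python illustration (the numbers and weights are invented, not from any real case):

```python
# Made-up illustration: a "balancing test" as a weighted sum.
# The factors are incommensurate, so the weights are arbitrary
# conversion constants, and the outcome follows the weights chosen
# rather than any independent measurement.

factors = {"private_interest": 7, "risk_of_error": 4, "state_burden": 6}

def balance(w_private, w_risk, w_state):
    score = (w_private * factors["private_interest"]
             + w_risk * factors["risk_of_error"]
             - w_state * factors["state_burden"])
    return "more process required" if score > 0 else "current process is enough"

# Same "facts", opposite conclusions, depending only on the weights:
print(balance(1.0, 1.0, 1.0))   # more process required (score = 5)
print(balance(0.5, 0.5, 1.5))   # current process is enough (score = -3.5)
```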

Elliot says that one reason people like talking in terms of numbers is that a very high “certainty” number gives them a built-in excuse: if they say 99% and they’re wrong, they never claimed certainty. Basically, people don’t like dealing with fallibility, unlikely stuff, etc.

J’s Comment: People might say it’s like 99.999999999999% certain the sun will rise tomorrow. They think talking about the sun rising is pretty safe, but wanna cover their bases in case a giant asteroid hits us and knocks us out of orbit or something wacky like that. But really what’s going on is we have an explanatory model of reality which says events will happen that we call the sun rising under certain conditions. And as long as our explanatory model is true and those conditions hold, then the sun will rise, 100%. And when those conditions don’t hold anymore or our theory turns out to deviate from reality in some relevant respect, then the sun definitely won’t rise.

And also as a side note, I bet there’s modeling for things like the statistical chance of SURPRISE SNEAKY ASTEROID KNOCKING US OUT OF ORBIT, and it has actual numbers, not arbitrary tiny percentage guesses.

Elliot says people think support works because they believe they use it themselves and attribute lots of successful progress to it. But they’re wrong about how their thinking works.

Elliot talks about the relationship between authority and support. Basically, prestigious people believing an idea adds to its support. Elliot makes the good point that if you aren’t judging the idea itself, you’re left with authority (fame/prestige/academic degrees of speaker, popularity of idea).

J’s Comment: One thing I bring up a lot when talking about prestigious people is … they disagree! You can find people with fancy Harvard and Yale degrees who think all sorts of stuff. So what do you do with that situation? Do you go by number of people? What if more prestigious people (who are numerically fewer) think one thing on some issue, and a larger number of less prestigious people think something else? Do the more prestigious people count more? How much more?

Seems like a big, impossible mess to try and sort that out, just to avoid thinking about issues directly!!!

Here’s an example: CNN ran a whole big hit piece on Sebastian Gorka (who just left the White House) basically saying he’s not considered prestigious enough by other experts in the field.

That’s what authority-based approaches lead to…fighting over credentials instead of ideas.

Elliot says we should reject the whole support model and use yes or no/boolean judgments instead. Support doesn’t work and can’t solve the problem of how to believe good ideas and reject bad ideas.

Under this new approach, we can believe good ideas (“yes” ideas) and reject bad ones (“no” ideas). But we can’t directly compare two ideas we currently think are good. We have to come up with criticisms that will allow us to reject one of the ideas.

J’s Comment:

If people go by authorities, they still have to pick which authorities to go by. They are still making a judgment and still responsible. But it seems much easier for people to not feel responsible when they rely on other people’s thinking. To explicitly and consciously take responsibility for one’s ideas is a big deal and a hard step for many people. So I think this would be an objection many people would have to moving away from support to a YES/NO approach.

Elliot points out that when you decide between a “good” idea and a “great” idea, you’re choosing, you’re picking a side, you’re saying yes to one and no to the other one. So just admit that!

Ideas are “yes” by default, and “no” if you refute them. So all ideas can be categorized this way.

NOTE: Elliot Temple replied here

Yes/No “Check Your Understanding” questions and replies

Comments on the Yes or No Philosophy educational philosophy material (BUY IT TODAY!)

What is the standard view about how to judge ideas? What’s wrong with it?

The standard view is that you can judge arguments according to amounts of support the idea has, weight of the evidence, that kind of thing.

There are some problems here:
1. All actual decisions involve choosing to act on one theory and rejecting others. Pretending you are doing otherwise is faking the reality of what’s going on.
2. There are infinitely many theories compatible with any given set of evidence, so in a sense they are all equally “supported” by the evidence.

What is Karl Popper’s view about how to judge ideas? What’s wrong with it?

Popper is thoroughly against authorities in epistemology. He thinks you should judge ideas according to the merits of the idea itself, not the source. He emphasizes how science began with the criticism of myths.

There are some issues with Popper’s views of how to judge ideas though.

One issue is that Popper thinks arguments can be weighty though inconclusive. So he thinks there are medium strength arguments. This contradicts Yes-No epistemology, which says that either an argument decisively refutes an idea, or fails to refute it. So there’s always conclusiveness.

Other things: Popper thinks you can rationally prefer one non-refuted theory over another. But how, without a criticism which refutes one of the ideas?

From the Karl Popper Commentary in Yes/No:

Confirmations shouldn’t count at all, because the purpose of an idea is to solve a problem. A confirmation (a piece of evidence which fits with an idea) neither tells us that an idea solves a problem, nor does it refute a competing idea as unable to solve the problem. So confirmations accomplish nothing.

Question: Wasn’t Popper specifically thinking about situations where a “confirmation” would consist of a “risky prediction” which would refute a competing idea? So like, his focus on it being a confirmation was wrong, but in context it seems like a reasonable point.

Why are all successful criticisms decisive?

Either an argument refutes an idea or it doesn’t. If it doesn’t, the idea isn’t refuted by that argument. If it does, then the idea is totally refuted in the context the idea was addressing.

What can only have a binary evaluation, and what can have amounts? (“Binary” means two-valued, e.g. yes-or-no.)

Key issues in the philosophy of knowledge are binary. Things like whether an idea solves a problem, whether it’s refuted, etc.

Other stuff can have amounts. You can talk about how heavy something is or how original it is. That’s fine.

How are observations, facts, and attributes of ideas used?

They can be referred to in talking about ideas, criticisms, problems, solutions.

Like you could observe that socialism leads to mass death and chaos and use that to criticize socialism.

Or a common example is using observations from a scientific test to refute a theory.

How do you choose between two ideas that both solve the same problem?

If you feel like you could act on one of the ideas and it’s not an important issue (like if you have multiple good ideas for where to go to lunch) you could choose randomly.

If you’re feeling stuck between the yes ideas, then they actually aren’t good enough for acting on in the situation. That’s a criticism! So they’re all refuted, and you need to brainstorm new ideas.

You could also reconsider the problem and try and act on a less ambitious one.

Why shouldn’t you act on a criticized idea?

Because a criticism is a reason the idea won’t work for the problem you want to solve!!

How do decision charts work?

You use them to organize the problems you want an idea to solve and the proposed solutions to those problems. Then for each proposed solution, you fill in yes or no for whether it solves a given problem. You can use this technique to assess whether you have an idea that solves all the problems you wanna solve, no such ideas, or more than one.
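Here’s a minimal sketch of how I picture a decision chart in code (my own illustration; the product presents it as a chart, not code). Rows are proposed solutions, columns are problems/requirements, and each cell is a strict yes or no:

```python
# My own minimal sketch of a decision chart: rows are proposed
# solutions, columns are problems, and each cell is a strict yes/no
# for "does this solution solve that problem?"

problems = ["has required features", "costs under $5", "works offline"]

chart = {
    "App A": {"has required features": True,  "costs under $5": False, "works offline": True},
    "App B": {"has required features": True,  "costs under $5": True,  "works offline": True},
    "App C": {"has required features": False, "costs under $5": True,  "works offline": True},
}

# Non-refuted ideas are the ones with "yes" in every column.
survivors = [idea for idea, row in chart.items() if all(row[p] for p in problems)]

if len(survivors) == 1:
    print("Act on:", survivors[0])
elif not survivors:
    print("Every idea is refuted: brainstorm more ideas or revise the problem.")
else:
    print("Multiple non-refuted ideas:", survivors,
          "- look for a criticism that differentiates them.")
```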

What’s wrong with weighing the evidence?

Sometimes people talk about weight and don’t use actual numbers. So their determination of which arguments are more important and get greater “weight” is based on their own intuitive, pre-existing sense of what’s right. Thus the “weighing of the evidence” just winds up rationalizing decisions that were already made according to some (unstated) arguments.

What’s a library of criticism?

It’s a stock of criticisms you’ve accumulated in your mind, that you can use in assessing some idea you haven’t heard before.

Example: some people are saying maybe govt should regulate the internet as a PUBLIC UTILITY so that companies will stop censoring speech. I have some crits from my STOCK OF CRITS relevant to this idea, like:

1) it violates property rights

2) getting govt involved in some area will just give more power to the other political party to do stuff you don’t like next time they have power (and you will have partially sanctioned this!)

3) govt regulation tends to decrease quality and increase price of a service

4) people should take initiative to solve problems themselves instead of asking for govt force to help them (especially the side that says it’s for freedom and individual responsibility). This could include stuff like trying to start or support businesses that respect free speech

5) public utilities are some of the lamest, least-customer-responsive entities we have, and we shouldn’t try and force more stuff into that model
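For what it’s worth, here’s how I’d picture a library of criticism in code (a rough sketch of my own, not anything from the product; the proposal attributes are invented). The stock criticisms are reusable checks you run against any newly encountered proposal:

```python
# Rough sketch (my own, not from the product): a library of criticism
# as a stock of reusable checks applied to any new proposal.

criticism_library = {
    "violates property rights":
        lambda p: p.get("uses_force_against_private_property", False),
    "hands the same power to the other party later":
        lambda p: p.get("expands_government_power", False),
    "regulation tends to lower quality and raise prices":
        lambda p: p.get("is_government_regulation", False),
}

def applicable_criticisms(proposal):
    """Return the stock criticisms that apply to this proposal."""
    return [name for name, applies in criticism_library.items() if applies(proposal)]

regulate_internet_as_utility = {
    "uses_force_against_private_property": True,
    "expands_government_power": True,
    "is_government_regulation": True,
}

# One applicable criticism is enough to refute the proposal for its
# stated purpose; the count of criticisms doesn't add "weight".
print(applicable_criticisms(regulate_internet_as_utility))
```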

How are ideas organized in a tree with footnotes and summary ideas?

Ideas are connected to other ideas. Like an idea about wanting to buy a camera could have a bunch of reasons for why you want to buy the camera, what it’d be useful for, the fact that you have enough money to buy a camera, etc.

The footnotes can get into a lot of detail and complexity, but you can only deal with so much at one time, so when thinking about some footnotes you think in terms of simple summaries. That is, unless an issue comes up and you need to get into details.

Here’s a simple example of building up some abstractions and turning a concept into a “footnote” to another concept:

Suppose you’re in a supermarket. There’s tomatoes bundled up in packages of 2. You want to buy a bunch of tomatoes. You’re gonna get four packages. How many tomatoes is that?

2 tomatoes + 2 tomatoes + 2 tomatoes + 2 tomatoes = 8

That’s not just true of tomatoes, though. You can add stuff together in general, and drop the tomatoes:

2 + 2 + 2 + 2 = 8
You can bring up tomatoes as an example for this if you want, but you can just think about adding numbers directly.

But you could also think of it this way:

2 x 4 = 8

Multiplication is repeated addition. If you need to think about the underlying addition you can — it’s a footnote to multiplication — but you can just multiply the numbers directly.

But we can represent it another way too:

2^(1) x 2^(2) = 8
(then using exponent product rule)
2^(3) = 8

Exponents represent repeated multiplication. If you need to think about the underlying multiplication you can — it’s a footnote to exponents — but you can just think about exponents of numbers directly.

And now there’s a whole chain of footnotes for the exponent idea, leading all the way down to statements about how many tomatoes 2 to the third power comes out to!
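Here’s the same footnote chain written as code (my own sketch, not from the product): each operation is defined in terms of the one below it, so the footnote is literally available when you need to drop down a level, but you normally just use the top-level idea directly.

```python
# My own sketch of the footnote chain: each level is defined in terms
# of the one below it, so you can drop down a level when needed, but
# normally you just use the top-level operation directly.

def add(*terms):
    """The bottom level: plain addition."""
    total = 0
    for t in terms:
        total += t
    return total

def multiply(a, times):
    """Footnote: multiplication is repeated addition."""
    return add(*([a] * times))

def power(base, exponent):
    """Footnote: exponentiation is repeated multiplication."""
    result = 1
    for _ in range(exponent):
        result = multiply(result, base)
    return result

assert add(2, 2, 2, 2) == 8       # four packages of 2 tomatoes
assert multiply(2, 4) == 8
assert power(2, 3) == 8           # 2^(1) x 2^(2) = 2^(3)
```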

Note: lots of math problems people have are because they learn each bit of math in a disintegrated way and don’t actually have a tree of footnotes in their mind they can refer to when an issue comes up.

What’s important to know when naming solutions and problems?

You want to keep names unambiguous, instead of having a bunch of memes attached to the same word and therefore having a bunch of confusion from that. You can give something a more specific name (like “laissez-faire capitalism”) when a more general name (like “capitalism”) has become ambiguous.

Under what circumstances should you change your mind about a previous judgement?

Ideally you should change an idea’s status from non-refuted to refuted and that’s it. If you screw up, decide an idea is refuted, and miss a footnote to that idea explaining why your alleged refutation doesn’t succeed as a refutation, you can change your judgment of that idea’s status back to non-refuted, no problem. But that shouldn’t happen a ton, and if it does you should examine your methods for judging ideas!!!

What’s an idea? A criticism? A problem?

An idea is anything you think up, including wrong stuff, nonsense, etc. As distinct from, say, knowledge, which is an idea that solves a problem.

A problem is anything we might try to know or do.

A criticism is an explanation of a mistake in an idea. It says why an idea won’t work to solve the problem it’s supposed to.

Why can’t one idea solve a problem better than another?

Because it either solves some problem or it doesn’t. Multiple ideas can solve some problem (like, what to get for lunch according to some criteria, or a job that pays at least $1000/week), but they either do or don’t. When people speak of ideas solving some problems better than others, what they are doing is grouping different problems together, evaluating various ideas against different sub-problems of that problem, and then saying that an idea which solves more of those sub-problems is “better.”

What’s wrong with arguments having an amount of strength?

We come up with ideas to solve some problem we have. An idea either solves that problem, or it doesn’t. A criticism either refutes an idea, or it doesn’t. So on a 0-100 scale all ideas are 100 or 0. So there’s no room for degrees of strength, and talking about strength doesn’t add a ton.