raginrayguns:
there’s a certain dishonest charisma boost you can get from claiming you’re the only one who understands something
Jaynes does this all the time. Says like, a Bayesian statistical analysis is good in this way, other kinds of analysis have this problem, and by the way non-Bayesians don’t understand these issues and don’t perceive the problems with what they’re doing. This is important, and by the way non-Bayesians don’t know it’s important.
from the part of the town-hall debate I saw, Trump might do this too? There’s drug imports, there’s ISIS, and by the way Clinton doesn’t think these are problems, nobody in Washington has a problem with this or sees it as a threat, just me
it’s never really true I don’t think.
I don’t see this trick employed that often — is this a thing? Like, really a thing?
I think this is something that can be true, which I’ll talk about in three different contexts.
First is when it’s temporarily true. That is, someone says “my rivals don’t even consider X!”, those rivals start considering X, and from then on the claim is false, because the rivals are now deliberately doing (or deliberately not doing) things related to X.
And so when you read them as statements at a particular time (“non-Bayesians before 2002” instead of “non-Bayesians”) they tend to be more true.
Second are actual value and priority differences. In the town-hall debate, my perception of Trump’s answer about Syria was that we’re going to take out ISIS, and that’s the main priority. (In a question about how to deal with the humanitarian issue there, his answer was, if I recall correctly, entirely about ISIS, with no mention of reducing Assad’s ability to tyrannize his citizens or anything about supporting refugees.)
My perception of Clinton’s answer, on the other hand, was that Syria was a proxy war between the US and Russia, with basically no mention of stopping ISIS, which if it’s a priority at all is secondary to the issue of deposing Assad.
In that sort of situation, it does seem fair of each of them to say “look, I care about X, which my rival is either ignoring or putting at a third priority or lower.”
Third are differences in model structure. The difference between evidential decision theory and causal decision theory is whether they use conditional probabilities or counterfactual probabilities, and as it turns out counterfactual probabilities are both more powerful and more suited to decision theory.
And here it does seem fair for a CDTer to point out “look, my opponent doesn’t have the necessary language to provide the correct answer.” This typically only works in situations that are mathematical rather than social: EDT as a formalism will always use conditional probabilities, but the human decision theorists behind it can switch from one bit of math to another.
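To make the structural difference concrete, here is a minimal sketch of the standard Newcomb's-problem contrast, which isn't from the post itself. The predictor accuracy (0.99) and payoffs ($1M / $1k) are the usual illustrative numbers; the point is only where the two theories' math diverges, not which answer is right:

```python
# Newcomb's problem: EDT evaluates actions with conditional probabilities,
# CDT with counterfactual ones. Numbers below are illustrative.

ACCURACY = 0.99            # P(predictor's forecast matches your actual choice)
MILLION, THOUSAND = 1_000_000, 1_000

def edt_value(action):
    """EDT conditions on the action: choosing one-box is evidence the
    predictor foresaw one-boxing, so P(box is full | one-box) = ACCURACY."""
    p_full = ACCURACY if action == "one-box" else 1 - ACCURACY
    return p_full * MILLION + (THOUSAND if action == "two-box" else 0)

def cdt_value(action, p_full):
    """CDT uses the counterfactual: the box's contents were causally fixed
    before you act, so P(box is full) is the same whatever you do."""
    return p_full * MILLION + (THOUSAND if action == "two-box" else 0)

edt_choice = max(["one-box", "two-box"], key=edt_value)
# CDT two-boxes for ANY prior belief p_full, since two-boxing adds $1k
# without (causally) changing the box; 0.5 is an arbitrary prior here.
cdt_choice = max(["one-box", "two-box"], key=lambda a: cdt_value(a, 0.5))
```

The two functions differ in exactly one place: whether `p_full` depends on `action`. That one-line difference is the "model structure" gap — EDT's language literally cannot express the causal independence that drives CDT's answer.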
The main time I think this is a trick and bothersome is stealing chaos, as Yudkowsky puts it.