Dataset schema (column, dtype, observed lengths/values):

| column | dtype | observed values |
|---|---|---|
| _id | stringlengths | 0 – 24 |
| slug | stringlengths | 0 – 132 |
| title | stringlengths | 0 – 313 |
| draft | null | |
| shortform | bool | 1 class |
| hideCommentKarma | bool | 1 class |
| af | bool | 2 classes |
| currentUserReviewVote | null | |
| userId | stringlengths | 17 – 24 |
| coauthorStatuses | listlengths | 0 – 18 |
| hasCoauthorPermission | bool | 2 classes |
| rejected | bool | 1 class |
| debate | bool | 2 classes |
| collabEditorDialogue | bool | 2 classes |
| __typename | stringclasses | 1 value |
| url | stringlengths | 0 – 432 |
| postedAt | stringdate | 2007-06-22 22:30:00 – 2025-06-28 01:40:04 |
| createdAt | null | |
| sticky | bool | 2 classes |
| metaSticky | bool | 2 classes |
| stickyPriority | int64 | 2 – 2 |
| status | int64 | 2 – 2 |
| frontpageDate | stringdate | 2018-01-30 00:32:03 – 2025-06-28 02:24:31 |
| meta | bool | 2 classes |
| deletedDraft | bool | 1 class |
| postCategory | stringclasses | 3 values |
| shareWithUsers | sequencelengths | 0 – 23 |
| sharingSettings | float64 | |
| linkSharingKey | null | |
| contents_latest | stringlengths | 17 – 24 |
| commentCount | int64 | 0 – 2k |
| voteCount | int64 | -59 – 922 |
| baseScore | int64 | -10 – 945 |
| unlisted | bool | 1 class |
| score | float64 | -0 – 5.05 |
| lastVisitedAt | null | |
| isFuture | bool | 1 class |
| isRead | bool | 1 class |
| lastCommentedAt | stringdate | 2007-08-06 20:29:51 – 2025-06-28 14:23:54 |
| lastCommentPromotedAt | stringclasses | 21 values |
| canonicalCollectionSlug | stringclasses | 4 values |
| curatedDate | stringclasses | 691 values |
| commentsLocked | bool | 2 classes |
| commentsLockedToAccountsCreatedAfter | stringclasses | 1 value |
| question | bool | 2 classes |
| hiddenRelatedQuestion | bool | 1 class |
| originalPostRelationSourceId | stringclasses | 46 values |
| location | null | |
| googleLocation | null | |
| onlineEvent | bool | 1 class |
| globalEvent | bool | 1 class |
| startTime | null | |
| endTime | null | |
| localStartTime | null | |
| localEndTime | null | |
| eventRegistrationLink | null | |
| joinEventLink | null | |
| facebookLink | stringclasses | 1 value |
| meetupLink | null | |
| website | stringclasses | 1 value |
| contactInfo | stringclasses | 1 value |
| isEvent | bool | 1 class |
| eventImageId | null | |
| eventType | null | |
| types | sequencelengths | 0 – 2 |
| groupId | stringclasses | 106 values |
| reviewedByUserId | stringclasses | 19 values |
| suggestForCuratedUserIds | null | |
| suggestForCuratedUsernames | null | |
| reviewForCuratedUserId | stringclasses | 12 values |
| authorIsUnreviewed | bool | 1 class |
| afDate | stringclasses | 590 values |
| suggestForAlignmentUserIds | sequencelengths | 0 – 4 |
| reviewForAlignmentUserId | stringclasses | 6 values |
| afBaseScore | float64 | -21 – 217 |
| afCommentCount | int64 | 0 – 149 |
| afLastCommentedAt | stringdate | 2007-06-26 21:13:26 – 2025-06-28 01:40:04 |
| afSticky | bool | 2 classes |
| hideAuthor | bool | 2 classes |
| moderationStyle | stringclasses | 4 values |
| ignoreRateLimits | bool | 2 classes |
| submitToFrontpage | bool | 2 classes |
| onlyVisibleToLoggedIn | bool | 1 class |
| onlyVisibleToEstablishedAccounts | bool | 2 classes |
| reviewCount | int64 | 0 – 8 |
| reviewVoteCount | int64 | 0 – 115 |
| positiveReviewVoteCount | int64 | 0 – 98 |
| manifoldReviewMarketId | stringclasses | 900 values |
| annualReviewMarketProbability | float64 | 0.01 – 0.99 |
| annualReviewMarketIsResolved | bool | 2 classes |
| annualReviewMarketYear | float64 | 2.02k – 2.03k |
| annualReviewMarketUrl | stringclasses | 900 values |
| group | float64 | |
| podcastEpisodeId | stringclasses | 396 values |
| forceAllowType3Audio | bool | 1 class |
| nominationCount2019 | int64 | 0 – 6 |
| reviewCount2019 | int64 | 0 – 6 |
| votingSystem | stringclasses | 2 values |
| disableRecommendation | bool | 2 classes |
| coauthors | listlengths | 0 – 18 |
| readTimeMinutes | int64 | 1 – 315 |
| rejectedReason | stringclasses | 12 values |
| customHighlight | float64 | |
| lastPromotedComment | float64 | |
| bestAnswer | float64 | |
| tags | listlengths | 0 – 31 |
| feedId | stringclasses | 45 values |
| totalDialogueResponseCount | int64 | 0 – 0 |
| unreadDebateResponseCount | int64 | 0 – 0 |
| dialogTooltipPreview | stringclasses | 6 values |
| disableSidenotes | bool | 2 classes |
| currentUserVote | null | |
| currentUserExtendedVote | null | |
| extendedScore.agreement | float64 | -6 – 2 |
| extendedScore.approvalVoteCount | float64 | 1 – 922 |
| extendedScore.agreementVoteCount | float64 | 0 – 1 |
| afExtendedScore.agreement | float64 | -6 – 2 |
| afExtendedScore.approvalVoteCount | float64 | 0 – 175 |
| afExtendedScore.agreementVoteCount | float64 | 0 – 1 |
| user._id | stringlengths | 17 – 24 |
| user.slug | stringlengths | 2 – 40 |
| user.createdAt | stringdate | 2009-02-17 05:49:50 – 2025-06-26 13:32:01 |
| user.username | stringlengths | 1 – 64 |
| user.displayName | stringlengths | 1 – 43 |
| user.profileImageId | float64 | |
| user.previousDisplayName | float64 | |
| user.fullName | stringclasses | 979 values |
| user.karma | float64 | -1,560 – 150k |
| user.afKarma | float64 | -63 – 6.7k |
| user.deleted | bool | 1 class |
| user.isAdmin | bool | 2 classes |
| user.htmlBio | stringlengths | 0 – 9.48k |
| user.jobTitle | float64 | |
| user.organization | float64 | |
| user.postCount | float64 | 0 – 1.02k |
| user.commentCount | float64 | 0 – 16.1k |
| user.sequenceCount | float64 | 0 – 40 |
| user.afPostCount | float64 | -4 – 364 |
| user.afCommentCount | float64 | 0 – 1.39k |
| user.spamRiskScore | float64 | 0 – 1 |
| user.tagRevisionCount | float64 | 0 – 3.8k |
| user.reviewedByUserId | stringclasses | 18 values |
| user.__typename | stringclasses | 1 value |
| user.moderationStyle | stringclasses | 4 values |
| user.bannedUserIds | sequencelengths | 0 – 6 |
| user.moderatorAssistance | bool | 2 classes |
| user.groups | sequencelengths | 0 – 289 |
| user.banned | stringclasses | 30 values |
| user.allCommentingDisabled | float64 | |
| socialPreviewData._id | stringlengths | 0 – 24 |
| socialPreviewData.imageUrl | stringlengths | 0 – 149k |
| socialPreviewData.__typename | stringclasses | 1 value |
| contents._id | stringlengths | 17 – 24 |
| contents.htmlHighlight | stringlengths | 0 – 2.31M |
| contents.plaintextDescription | stringlengths | 0 – 2k |
| contents.wordCount | float64 | 0 – 78.7k |
| contents.version | stringclasses | 299 values |
| contents.__typename | stringclasses | 1 value |
| fmCrosspost.isCrosspost | bool | 2 classes |
| fmCrosspost.hostedHere | bool | 2 classes |
| fmCrosspost.foreignPostId | stringlengths | 17 – 17 |
| fmCrosspost.__typename | stringclasses | 1 value |
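The dotted column names above (`user._id`, `contents.wordCount`, `fmCrosspost.__typename`) are what you get when the nested objects of a GraphQL post record are flattened. A minimal sketch of how that flattening works, assuming the rows are available as parsed JSON records; the sample values are taken from the first example row below:

```python
import pandas as pd

# One abbreviated post record as it would come back from the API,
# with nested "user" and "contents" objects.
posts = [
    {
        "_id": "6i3zToomS86oj9bS6",
        "slug": "mysterious-answers-to-mysterious-questions",
        "baseScore": 244,
        "user": {"_id": "nmk3nLpQE89dMRzzN", "slug": "eliezer_yudkowsky", "karma": 150014},
        "contents": {"wordCount": 1078, "version": "2.3.0"},
    }
]

# json_normalize flattens nested objects into dotted columns (default sep=".").
df = pd.json_normalize(posts)
print(df.columns.tolist())
# ['_id', 'slug', 'baseScore', 'user._id', 'user.slug', 'user.karma',
#  'contents.wordCount', 'contents.version']
```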
Row 1
_id: 6i3zToomS86oj9bS6
slug: mysterious-answers-to-mysterious-questions
title: Mysterious Answers to Mysterious Questions
__typename: Post
userId: nmk3nLpQE89dMRzzN
hasCoauthorPermission: true
postedAt: 2007-08-25T22:27:47.000Z
frontpageDate: 2018-01-30T00:32:03.501Z
stickyPriority: 2
status: 2
postCategory: post
canonicalCollectionSlug: rationality
contents_latest: fSz8Qku9ZQ6pfnnTJ
commentCount: 160
voteCount: 191
baseScore: 244
score: 0.00027
lastCommentedAt: 2025-04-24T11:44:59.589Z
reviewedByUserId: XtphY3uYHwruKqDyG
afBaseScore: 24
afCommentCount: 0
afLastCommentedAt: 2007-08-25T22:27:47.000Z
submitToFrontpage: true
podcastEpisodeId: fcr9jdyoSYgbocGvF
votingSystem: namesAttachedReactions
readTimeMinutes: 4
(remaining draft, sharing, event, curation, moderation, and review-market fields are null, false, 0, or [])
tags:
[ { "__typename": "Tag", "_id": "wMPYFGmhcFg4bSb4Z", "adminOnly": false, "afBaseScore": 9, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 22, "canEditUserIds": null, "core": false, "createdAt": "2020-07-30T20:24:30.131Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" }, { "_id": "si6LoAENzqPCmi2Dh", "displayName": "ihatenumbersinusernames7" }, { "_id": "8btiLJDabHgZuiSAB", "displayName": "Ggwp" }, { "_id": "XTBvjvdgenMAbyweJ", "displayName": "김형모" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Map and Territory", "needsReview": false, "noindex": false, "postCount": 75, "score": 22, "shortName": null, "slug": "map-and-territory", "suggestedAsFilter": false, "userId": "HoGziwmhpMGqGeWZy", "voteCount": 5, "wikiOnly": false }, { "__typename": "Tag", "_id": "5f5c37ee1b5cdee568cfb12b", "adminOnly": false, "afBaseScore": null, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [] }, "baseScore": 0, "canEditUserIds": null, "core": false, "createdAt": "2020-09-11T19:58:51.930Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Semantic Stopsign", "needsReview": false, "noindex": false, "postCount": 8, "score": 0, "shortName": null, "slug": "semantic-stopsign", "suggestedAsFilter": false, "userId": "qf77EiaoMw7tH3GSr", "voteCount": 0, "wikiOnly": true }, { "__typename": "Tag", "_id": "y9RtK47xhBT5Db6z8", "adminOnly": false, "afBaseScore": null, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [] }, "baseScore": 0, "canEditUserIds": null, "core": false, "createdAt": "2016-06-30T07:49:22.000Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [] }, "isArbitalImport": true, "isPlaceholderPage": false, "isSubforum": false, "name": "Mind projection fallacy", "needsReview": false, "noindex": false, "postCount": 29, "score": 0, "shortName": null, "slug": "mind-projection-fallacy", "suggestedAsFilter": false, "userId": "nmk3nLpQE89dMRzzN", "voteCount": 0, "wikiOnly": false }, { "__typename": "Tag", "_id": "bmfs4jiLaF6HiiYkC", "adminOnly": false, "afBaseScore": 9, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 19, "canEditUserIds": null, "core": false, "createdAt": "2020-07-29T17:48:27.328Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": 
"qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Reductionism", "needsReview": false, "noindex": false, "postCount": 55, "score": 19, "shortName": null, "slug": "reductionism", "suggestedAsFilter": false, "userId": "HoGziwmhpMGqGeWZy", "voteCount": 2, "wikiOnly": false }, { "__typename": "Tag", "_id": "Ng8Gice9KNkncxqcj", "adminOnly": false, "afBaseScore": 0, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [] }, "baseScore": 1, "canEditUserIds": null, "core": true, "createdAt": "2020-06-14T22:24:17.072Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 100, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "iMqytjy9ns89Fzfyv", "displayName": "miakko" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Rationality", "needsReview": false, "noindex": false, "postCount": 4302, "score": 1, "shortName": null, "slug": "rationality", "suggestedAsFilter": true, "userId": "r38pkCm7wF4M44MDQ", "voteCount": 1, "wikiOnly": false } ]
feedId: null
totalDialogueResponseCount: 0
unreadDebateResponseCount: 0
dialogTooltipPreview: null
disableSidenotes: false
currentUserVote: null
currentUserExtendedVote: null
extendedScore: agreement 0, approvalVoteCount 191, agreementVoteCount 0
afExtendedScore: agreement 0, approvalVoteCount 19, agreementVoteCount 0
user._id: nmk3nLpQE89dMRzzN
user.slug: eliezer_yudkowsky
user.createdAt: 2009-02-23T21:58:56.739Z
user.username: Eliezer_Yudkowsky
user.displayName: Eliezer Yudkowsky
user.profileImageId / user.previousDisplayName / user.fullName: null
user.karma: 150,014
user.afKarma: 1,892
user.deleted: false
user.isAdmin: false
user.postCount: 951
user.commentCount: 7,677
user.sequenceCount: 40
user.afPostCount: 18
user.afCommentCount: 120
user.spamRiskScore: 1
user.tagRevisionCount: 3,803
user.reviewedByUserId: r38pkCm7wF4M44MDQ
user.__typename: User
user.moderationStyle: reign-of-terror
user.bannedUserIds: [ "sBWszXPhPsNNemv4Q", "YBHSPmZEfyyY2E2au" ]
user.moderatorAssistance: true
user.groups: [ "trustLevel1", "alignmentVoters", "alignmentForum", "canModeratePersonal" ]
user.banned: null
user.allCommentingDisabled: null
socialPreviewData._id: 6i3zToomS86oj9bS6
socialPreviewData.imageUrl: https://res.cloudinary.c…ram_1_dr5tbq.svg
socialPreviewData.__typename: SocialPreviewType
contents._id: fSz8Qku9ZQ6pfnnTJ
contents.htmlHighlight:
<p>Imagine looking at your hand, and knowing nothing of cells, nothing of biochemistry, nothing of DNA. You’ve learned some anatomy from dissection, so you know your hand contains muscles; but you don’t know why muscles move instead of lying there like clay. Your hand is just . . . stuff . . . and for some reason it moves under your direction. Is this not magic?</p><blockquote><p>It seemed to me then, and it still seems to me, most probable that the animal body does not act as a thermodynamic engine . . . The influence of animal or vegetable life on matter is infinitely beyond the range of any scientific inquiry hitherto entered on. Its power of directing the motions of moving particles, in the demonstrated daily miracle of our human free-will, and in the growth of generation after generation of plants from a single seed, are infinitely different from any possible result of the fortuitous concourse of atoms[.]<sup>1</sup></p><p>[C]onsciousness teaches every individual that they are, to some extent, subject to the direction of his will. It appears, therefore, that animated creatures have the power of immediately applying, to certain moving particles of matter within their bodies, forces by which the motions of these particles are directed to produce desired mechanical effects.<sup>2</sup></p><p>Modern biologists are coming once more to a firm acceptance of something beyond mere gravitational, chemical, and physical forces; and that unknown thing is a vital principle.<sup>3</sup></p><p>—Lord Kelvin</p></blockquote><p>This was the theory of <i>vitalism</i> ; that the mysterious difference between living matter and non-living matter was explained by an <i>Élan vital</i> or <i>vis vitalis</i>. <i>Élan vital</i> infused living matter and caused it to move as consciously directed. <i>Élan vital</i> participated in chemical transformations which no mere non-living particles could undergo—Wöhler’s later synthesis of urea, a component of urine, was a major blow to the vitalistic theory because it showed that mere <i>chemistry</i> could duplicate a product of biology.</p><p>Calling “Élan vital” an explanation, even a fake explanation like phlogiston, is probably giving it too much credit. It functioned primarily as a curiosity-stopper. You said “Why?” and the answer was “Élan vital!”</p><p>When you say “Élan vital!” it <i>feels</i> like you know why your hand moves. You have a little causal diagram in your head that says:</p><figure class="image"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/v1586123558/MysteriousAnswersToMysteriousQuestions_diagram_1_dr5tbq.svg"></figure><p>But actually you know nothing you didn’t know before. You don’t know, say, whether your hand will generat... </p>
contents.wordCount: 1,078
contents.version: 2.3.0
contents.__typename: Revision
fmCrosspost: isCrosspost false, hostedHere null, foreignPostId null, __typename CrosspostOutput
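The `tags` cell in each row is a JSON array of Tag objects serialized into a string. A short sketch of pulling out the tag names with the standard library; the cell value here is abbreviated to two of the row's tags, and the real cells carry the full objects (`baseScore`, `postCount`, voters, and so on):

```python
import json

# Abbreviated "tags" cell from the row above.
tags_cell = """[
  {"__typename": "Tag", "name": "Map and Territory", "slug": "map-and-territory", "core": false},
  {"__typename": "Tag", "name": "Rationality", "slug": "rationality", "core": true}
]"""

tags = json.loads(tags_cell)
print([t["name"] for t in tags])               # ['Map and Territory', 'Rationality']
print([t["slug"] for t in tags if t["core"]])  # ['rationality']
```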
Row 2
_id: FWMfQKG3RpZx6irjm
slug: semantic-stopsigns
title: Semantic Stopsigns
__typename: Post
userId: nmk3nLpQE89dMRzzN
hasCoauthorPermission: true
postedAt: 2007-08-24T19:29:10.000Z
frontpageDate: 2018-01-30T00:32:03.501Z
stickyPriority: 2
status: 2
postCategory: post
canonicalCollectionSlug: rationality
contents_latest: uTrfTEnxgwdbsCm9E
commentCount: 111
voteCount: 152
baseScore: 161
score: 0.000182
lastCommentedAt: 2025-04-23T23:02:01.288Z
reviewedByUserId: XtphY3uYHwruKqDyG
afBaseScore: 8
afCommentCount: 0
afLastCommentedAt: 2007-08-24T19:29:10.000Z
submitToFrontpage: true
podcastEpisodeId: RJmAHuGzPvkJnehAF
votingSystem: namesAttachedReactions
readTimeMinutes: 3
(remaining draft, sharing, event, curation, moderation, and review-market fields are null, false, 0, or [])
tags:
[ { "__typename": "Tag", "_id": "L3NcKBNTvQaFXwv9u", "adminOnly": false, "afBaseScore": null, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [] }, "baseScore": 0, "canEditUserIds": null, "core": false, "createdAt": "2020-05-13T14:55:12.741Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Paradoxes", "needsReview": false, "noindex": false, "postCount": 76, "score": 0, "shortName": null, "slug": "paradoxes", "suggestedAsFilter": false, "userId": "GusfN9ks3eJxBYo5C", "voteCount": 0, "wikiOnly": false }, { "__typename": "Tag", "_id": "5f5c37ee1b5cdee568cfb12b", "adminOnly": false, "afBaseScore": null, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [] }, "baseScore": 0, "canEditUserIds": null, "core": false, "createdAt": "2020-09-11T19:58:51.930Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Semantic Stopsign", "needsReview": false, "noindex": false, "postCount": 8, "score": 0, "shortName": null, "slug": "semantic-stopsign", "suggestedAsFilter": false, "userId": "qf77EiaoMw7tH3GSr", "voteCount": 0, "wikiOnly": true }, { "__typename": "Tag", "_id": "5hpGj9nDLgokfghvR", "adminOnly": false, "afBaseScore": 3, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 9, "canEditUserIds": null, "core": false, "createdAt": "2020-02-20T02:56:23.333Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Confirmation Bias", "needsReview": false, "noindex": false, "postCount": 39, "score": 9, "shortName": null, "slug": "confirmation-bias", "suggestedAsFilter": false, "userId": "nLbwLhBaQeG6tCNDN", "voteCount": 1, "wikiOnly": false } ]
feedId: null
totalDialogueResponseCount: 0
unreadDebateResponseCount: 0
extendedScore: agreement 0, approvalVoteCount 152, agreementVoteCount 0
afExtendedScore: agreement 0, approvalVoteCount 10, agreementVoteCount 0
user: same author record as Row 1 (Eliezer Yudkowsky, nmk3nLpQE89dMRzzN)
socialPreviewData._id: FWMfQKG3RpZx6irjm
socialPreviewData.__typename: SocialPreviewType
contents._id: uTrfTEnxgwdbsCm9E
contents.htmlHighlight:
<p><i>And the child asked:</i></p><p>Q: Where did this rock come from?</p><p>A: I chipped it off the big boulder, at the center of the village.</p><p>Q: Where did the boulder come from?</p><p>A: It probably rolled off the huge mountain that towers over our village.</p><p>Q: Where did the mountain come from?</p><p>A: The same place as all stone: it is the bones of Ymir, the primordial giant.</p><p>Q: Where did the primordial giant, Ymir, come from?</p><p>A: From the great abyss, Ginnungagap.</p><p>Q: Where did the great abyss, Ginnungagap, come from?</p><p>A: Never ask that question.</p><p>Consider the seeming paradox of the First Cause. Science has traced events back to the Big Bang, but why did the Big Bang happen? It’s all well and good to say that the zero of time begins at the Big Bang—that there is nothing before the Big Bang in the ordinary flow of minutes and hours. But saying this presumes our physical law, which itself appears highly structured; it calls out for explanation. Where did the physical laws come from? You could say that we’re all a computer simulation, but then the computer simulation is running on some other world’s laws of physics—where did <i>those</i> laws of physics come from?</p><p>At this point, some people say, “God!”</p><p>What could possibly make anyone, even a highly religious person, think this even <i>helped</i> answer the paradox of the First Cause? Why wouldn’t you automatically ask, “Where did God come from?” Saying “God is uncaused” or “God created Himself” leaves us in exactly the same position as “Time began with the Big Bang.” We just ask why the whole metasystem exists in the first place, or why some events but not others are allowed to be uncaused.</p><p>My purpose here is not to discuss the seeming paradox of the First Cause, but to ask why anyone would think “God!” <i>could</i> resolve the paradox. Saying “God!” is a way of belonging to a tribe, which gives people a motive to say it as often as possible—some people even say it for questions like “Why did this hurricane strike New Orleans?” Even so, you’d hope people would notice that on the <i>particular</i> puzzle of the First Cause, saying “God!” doesn’t help. It doesn’t make the paradox seem any less paradoxical <i>even if true</i>. How could anyone <i>not</i> notice this?</p><p>Jonathan Wallace suggested that “God!” functions as a semantic stopsign—that it isn’t a propositional assertion, so much as a cognitive traffic signal: do not think past this point.<sup>1</sup> Saying “God!” doesn’t so much resolve the paradox, as put... </p>
contents.wordCount: 874
contents.version: 2.1.0
contents.__typename: Revision
fmCrosspost: isCrosspost false, hostedHere null, foreignPostId null, __typename CrosspostOutput
Row 3
_id: RgkqLqkg8vLhsYpfh
slug: fake-causality
title: Fake Causality
__typename: Post
userId: nmk3nLpQE89dMRzzN
hasCoauthorPermission: true
postedAt: 2007-08-23T18:12:31.000Z
frontpageDate: 2018-01-30T00:32:03.501Z
stickyPriority: 2
status: 2
postCategory: post
canonicalCollectionSlug: rationality
contents_latest: jhWpqoCX9vQmZbJig
commentCount: 88
voteCount: 125
baseScore: 141
score: 0.000162
lastCommentedAt: 2020-05-14T16:15:14.031Z
reviewedByUserId: XtphY3uYHwruKqDyG
afBaseScore: 8
afCommentCount: 0
afLastCommentedAt: null
submitToFrontpage: true
podcastEpisodeId: qps8qjxFN6i5RFCJB
votingSystem: namesAttachedReactions
readTimeMinutes: 5
(remaining draft, sharing, event, curation, moderation, and review-market fields are null, false, 0, or [])
tags:
[ { "__typename": "Tag", "_id": "cq69M9ceLNA35ShTR", "adminOnly": false, "afBaseScore": 9, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 20, "canEditUserIds": null, "core": false, "createdAt": "2020-05-12T15:56:45.599Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" }, { "_id": "jChjgR5sGftjozvK5", "displayName": "Mathan K" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Causality", "needsReview": false, "noindex": false, "postCount": 152, "score": 20, "shortName": null, "slug": "causality", "suggestedAsFilter": false, "userId": "pgi5MqvGrtvQozEH8", "voteCount": 3, "wikiOnly": false } ]
feedId: null
totalDialogueResponseCount: 0
unreadDebateResponseCount: 0
extendedScore: agreement 0, approvalVoteCount 125, agreementVoteCount 0
afExtendedScore: agreement 0, approvalVoteCount 14, agreementVoteCount 0
user: same author record as Row 1 (Eliezer Yudkowsky, nmk3nLpQE89dMRzzN)
socialPreviewData._id: RgkqLqkg8vLhsYpfh
socialPreviewData.imageUrl: https://39669.cdn.cke-cs…b7270952b46c.png
socialPreviewData.__typename: SocialPreviewType
contents._id: jhWpqoCX9vQmZbJig
contents.htmlHighlight:
<p>Phlogiston was the eighteenth century’s answer to the Elemental Fire of the Greek alchemists. Ignite wood, and let it burn. What is the orangey-bright “fire” stuff? Why does the wood transform into ash? To both questions, the eighteenth-century chemists answered, “phlogiston.”</p><p>. . . and that was it, you see, that was their answer: “Phlogiston.”</p><p>Phlogiston escaped from burning substances as visible fire. As the phlogiston escaped, the burning substances lost phlogiston and so became ash, the “true material.” Flames in enclosed containers went out because the air became saturated with phlogiston, and so could not hold any more. Charcoal left little residue upon burning because it was nearly pure phlogiston.</p><p>Of course, one didn’t use phlogiston theory to <i>predict</i> the outcome of a chemical transformation. You looked at the result first, then you used phlogiston theory to <i>explain</i> it. It’s not that phlogiston theorists predicted a flame would extinguish in a closed container; rather they lit a flame in a container, watched it go out, and then said, “The air must have become saturated with phlogiston.” You couldn’t even use phlogiston theory to say what you ought <i>not</i> to see; it could explain everything.</p><p>This was an earlier age of science. For a long time, no one realized there was a problem. Fake explanations don’t <i>feel</i> fake. That’s what makes them dangerous.</p><p>Modern research suggests that humans think about cause and effect using something like the directed acyclic graphs (DAGs) of Bayes nets. Because it rained, the sidewalk is wet; because the sidewalk is wet, it is slippery:</p><figure class="image image_resized" style="width:69.03%"><img src="https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/8f32ffc1c38cd1e27bb9704309d0b2bf863bb7270952b46c.png" srcset="https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/8f32ffc1c38cd1e27bb9704309d0b2bf863bb7270952b46c.png/w_147 147w, https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/8f32ffc1c38cd1e27bb9704309d0b2bf863bb7270952b46c.png/w_227 227w, https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/8f32ffc1c38cd1e27bb9704309d0b2bf863bb7270952b46c.png/w_307 307w, https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/8f32ffc1c38cd1e27bb9704309d0b2bf863bb7270952b46c.png/w_387 387w, https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/8f32ffc1c38cd1e27bb9704309d0b2bf863bb7270952b46c.png/w_467 467w"></figure><p>From this we can infer—or, in a Bayes net, rigorously calculate in probabilities—that when the sidewalk is slippery, it probably rained; but if we already know that the sidewalk is wet, learning that the sidewalk is slippery tells us nothing more about whether it rained.</p><p>Why is fire hot and bright when it burns?</p><figure class="image image_resized" style="width:58.03%"><img src="https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/8ff736bb4279ee44938bc54a05d6849665b3433c7bc69ab2.png" srcset="https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/8ff736bb4279ee44938bc54a05d6849665b3433c7bc69ab2.png/w_152 152w, https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/8ff736bb4279ee44938bc54a05d6849665b3433c7bc69ab2.png/w_232 232w, https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/8ff736bb4279ee44938bc54a05d6849665b3433c7bc69ab2.png/w_312 312w, https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/8ff736bb4279ee44938bc54a05d6849665b3433c7bc69ab2.png/w_392 392w"></figure><p>It <i>feels</i> like an explanation. 
It’s <i>represented</i> using the same cognitive data format. But the human mind does not automatically detect when a cause has an unconstraining arrow to its effect. Worse, thanks to hindsight bias, it may feel like the cause constrains the effect, when it was merely <a href="https://www.lesswrong.com/rationality/conservation-of-expected-evidence">fitted</a> to the effect.</p><p>Interestingly, our modern understanding of probabilistic reasoning about causality can describe precisely what the phlogiston theorists were doing wrong. One of the primary inspirations... </p>
contents.wordCount: 1,221
contents.version: 2.3.0
contents.__typename: Revision
fmCrosspost: isCrosspost false, hostedHere null, foreignPostId null, __typename CrosspostOutput
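The Fake Causality excerpt above turns on a Rain → Sidewalk-wet → Slippery chain: slipperiness alone is evidence of rain, but once you already know the sidewalk is wet, learning it is slippery tells you nothing more. A minimal sketch of that screening-off, with made-up probabilities (the 0.3, 0.95/0.10, and 0.80/0.05 numbers are illustrative assumptions, not from the post):

```python
# Rain -> Wet -> Slippery chain with illustrative (made-up) probabilities.
P_RAIN = 0.3
P_WET = {True: 0.95, False: 0.10}    # P(wet | rain)
P_SLIP = {True: 0.80, False: 0.05}   # P(slippery | wet)

def joint(rain, wet, slip):
    """P(rain, wet, slip) under the chain factorization."""
    p = P_RAIN if rain else 1 - P_RAIN
    p *= P_WET[rain] if wet else 1 - P_WET[rain]
    p *= P_SLIP[wet] if slip else 1 - P_SLIP[wet]
    return p

def p_rain(wet=None, slip=None):
    """P(rain | evidence), summing out unobserved variables."""
    num = den = 0.0
    for r in (True, False):
        for w in (True, False) if wet is None else (wet,):
            for s in (True, False) if slip is None else (slip,):
                p = joint(r, w, s)
                den += p
                if r:
                    num += p
    return num / den

print(f"P(rain | slippery)      = {p_rain(slip=True):.3f}")   # above the 0.3 prior
print(f"P(rain | wet)           = {p_rain(wet=True):.3f}")
print(f"P(rain | wet, slippery) = {p_rain(wet=True, slip=True):.3f}")  # unchanged: wet screens off slippery
```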
Row 4
_id: 4Bwr6s9dofvqPWakn
slug: science-as-attire
title: Science as Attire
__typename: Post
userId: nmk3nLpQE89dMRzzN
hasCoauthorPermission: true
postedAt: 2007-08-23T05:10:21.000Z
frontpageDate: 2018-01-30T00:32:03.501Z
stickyPriority: 2
status: 2
postCategory: post
canonicalCollectionSlug: rationality
contents_latest: mCh7hjHJLRFoNhqqW
commentCount: 88
voteCount: 152
baseScore: 172
score: 0.000193
lastCommentedAt: 2022-05-07T02:10:58.946Z
reviewedByUserId: XtphY3uYHwruKqDyG
afBaseScore: 6
afCommentCount: 0
afLastCommentedAt: 2007-08-23T05:10:21.000Z
submitToFrontpage: true
podcastEpisodeId: 9abh5d2hkqximJCQp
votingSystem: namesAttachedReactions
readTimeMinutes: 2
(remaining draft, sharing, event, curation, moderation, and review-market fields are null, false, 0, or [])
tags:
[ { "__typename": "Tag", "_id": "Q6P8jLn8hH7kbuXRr", "adminOnly": false, "afBaseScore": 9, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 19, "canEditUserIds": null, "core": false, "createdAt": "2020-05-19T19:49:05.946Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Signaling", "needsReview": false, "noindex": false, "postCount": 85, "score": 19, "shortName": null, "slug": "signaling", "suggestedAsFilter": false, "userId": "r38pkCm7wF4M44MDQ", "voteCount": 2, "wikiOnly": false }, { "__typename": "Tag", "_id": "ZpG9rheyAkgCoEQea", "adminOnly": false, "afBaseScore": 3, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 9, "canEditUserIds": null, "core": false, "createdAt": "2020-07-10T11:53:33.735Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Practice & Philosophy of Science", "needsReview": false, "noindex": false, "postCount": 262, "score": 9, "shortName": null, "slug": "practice-and-philosophy-of-science", "suggestedAsFilter": false, "userId": "qxJ28GN72aiJu96iF", "voteCount": 1, "wikiOnly": false }, { "__typename": "Tag", "_id": "DdgSyQoZXjj3KnF4N", "adminOnly": false, "afBaseScore": 9, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 19, "canEditUserIds": null, "core": false, "createdAt": "2020-07-13T15:43:11.661Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Tribalism", "needsReview": false, "noindex": false, "postCount": 68, "score": 19, "shortName": null, "slug": "tribalism", "suggestedAsFilter": false, "userId": "qxJ28GN72aiJu96iF", "voteCount": 2, "wikiOnly": false } ]
feedId: null
totalDialogueResponseCount: 0
unreadDebateResponseCount: 0
extendedScore: agreement 0, approvalVoteCount 152, agreementVoteCount 0
afExtendedScore: agreement 0, approvalVoteCount 12, agreementVoteCount 0
user: same author record as Row 1 (Eliezer Yudkowsky, nmk3nLpQE89dMRzzN)
socialPreviewData._id: 4Bwr6s9dofvqPWakn
socialPreviewData.imageUrl: https://www.lesswrong.co…allerstorm_2.jpg
socialPreviewData.__typename: SocialPreviewType
contents._id: mCh7hjHJLRFoNhqqW
contents.htmlHighlight:
<p>The preview for the <i>X-Men</i> movie has a voice-over saying: “In every human being . . . there is the genetic code . . . for mutation.” Apparently you can acquire all sorts of neat abilities by mutation. The mutant Storm, for example, has the ability to throw lightning bolts.</p><figure class="image image_resized" style="width:37.2%"><img src="https://www.lesswrong.com/static/imported/2007/08/22/smallerstorm_2.jpg"></figure><p>I beg you, dear reader, to consider the biological machinery necessary to generate electricity; the biological adaptations necessary to avoid being harmed by electricity; and the cognitive circuitry required for finely tuned control of lightning bolts. If we actually observed any organism acquiring these abilities <i>in one generation</i>, as the result of <i>mutation</i>, it would outright falsify the neo-Darwinian model of natural selection. It would be worse than finding rabbit fossils in the pre-Cambrian. If evolutionary theory could <i>actually</i> stretch to cover Storm, it would be able to explain anything, and we all know what that would imply.</p><p>The <i>X-Men</i> comics use terms like “evolution,” “mutation,” and “genetic code,” purely to place themselves in what they conceive to be the <i>literary genre</i> of science. The part that scares me is wondering how many people, especially in the media, understand science <i>only</i> as a literary genre.</p><p>I encounter people who very definitely believe in evolution, who sneer at the folly of creationists. And yet they have no idea of what the theory of evolutionary biology permits and prohibits. They’ll talk about “the next step in the evolution of humanity,” as if natural selection got here by following a plan. Or even worse, they’ll talk about something completely outside the domain of evolutionary biology, like an improved design for computer chips, or corporations splitting, or humans uploading themselves into computers, and they’ll call <i>that</i> “evolution.” If evolutionary biology could cover that, it could cover anything.</p><p>Probably an actual majority of the people who <i>believe in</i> evolution use the phrase “because of evolution” because they want to be part of the scientific in-crowd—belief as scientific attire, like wearing a lab coat. If the scientific in-crowd instead used the phrase “because of intelligent design,” they would just as cheerfully use that instead—it would make no difference to their anticipation-controllers. Saying “because of evolution” instead of “because of intelligent design” does not, <i>for them</i>, prohibit Storm. Its only purpose, for them, is to identify with a tribe... </p>
contents.wordCount: 598
contents.version: 2.1.0
contents.__typename: Revision
fmCrosspost: isCrosspost false, hostedHere null, foreignPostId null, __typename CrosspostOutput
Row 5
_id: NMoLJuDJEms7Ku9XS
slug: guessing-the-teacher-s-password
title: Guessing the Teacher's Password
__typename: Post
userId: nmk3nLpQE89dMRzzN
hasCoauthorPermission: true
postedAt: 2007-08-22T03:40:48.000Z
frontpageDate: 2018-01-30T00:32:03.501Z
stickyPriority: 2
status: 2
postCategory: post
canonicalCollectionSlug: rationality
contents_latest: DfB3LGGRX8KjTjaaF
commentCount: 100
voteCount: 258
baseScore: 304
score: 0.000335
lastCommentedAt: 2025-03-06T01:24:03.690Z
reviewedByUserId: XtphY3uYHwruKqDyG
afBaseScore: 14
afCommentCount: 0
afLastCommentedAt: 2007-08-22T03:40:48.000Z
submitToFrontpage: true
podcastEpisodeId: zqZYaSiWZgodAbKkL
votingSystem: namesAttachedReactions
readTimeMinutes: 4
(remaining draft, sharing, event, curation, moderation, and review-market fields are null, false, 0, or [])
tags:
[ { "__typename": "Tag", "_id": "SJFsFfFhE6m2ThAYJ", "adminOnly": false, "afBaseScore": 3, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 9, "canEditUserIds": null, "core": false, "createdAt": "2020-07-13T16:19:09.687Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Anticipated Experiences", "needsReview": false, "noindex": false, "postCount": 49, "score": 9, "shortName": null, "slug": "anticipated-experiences", "suggestedAsFilter": false, "userId": "qxJ28GN72aiJu96iF", "voteCount": 1, "wikiOnly": false }, { "__typename": "Tag", "_id": "5f5c37ee1b5cdee568cfb20d", "adminOnly": false, "afBaseScore": null, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [] }, "baseScore": 0, "canEditUserIds": null, "core": false, "createdAt": "2020-09-11T19:58:52.330Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Guessing The Teacher's Password", "needsReview": false, "noindex": false, "postCount": 1, "score": 0, "shortName": null, "slug": "guessing-the-teachers-password", "suggestedAsFilter": false, "userId": "9c2mQkLQq6gQSksMs", "voteCount": 0, "wikiOnly": true }, { "__typename": "Tag", "_id": "BzghQYM9GnkMHxZKb", "adminOnly": false, "afBaseScore": null, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [] }, "baseScore": 0, "canEditUserIds": null, "core": false, "createdAt": "2020-07-14T22:37:34.105Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Problem-solving (skills and techniques)", "needsReview": false, "noindex": false, "postCount": 22, "score": 0, "shortName": null, "slug": "problem-solving-skills-and-techniques", "suggestedAsFilter": false, "userId": "gXeEWGjTWyqgrQTzR", "voteCount": 0, "wikiOnly": false }, { "__typename": "Tag", "_id": "fH8jPjHF2R27sRTTG", "adminOnly": false, "afBaseScore": 3, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 9, "canEditUserIds": null, "core": false, "createdAt": "2020-07-12T11:04:34.644Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Education", "needsReview": false, "noindex": false, "postCount": 263, "score": 9, "shortName": null, "slug": "education", "suggestedAsFilter": false, "userId": "qxJ28GN72aiJu96iF", "voteCount": 1, "wikiOnly": false } ]
feedId: null
totalDialogueResponseCount: 0
unreadDebateResponseCount: 0
extendedScore: agreement 0, approvalVoteCount 258, agreementVoteCount 0
afExtendedScore: agreement 0, approvalVoteCount 25, agreementVoteCount 0
user: same author record as Row 1 (Eliezer Yudkowsky, nmk3nLpQE89dMRzzN)
socialPreviewData._id: NMoLJuDJEms7Ku9XS
socialPreviewData.__typename: SocialPreviewType
contents._id: DfB3LGGRX8KjTjaaF
contents.htmlHighlight:
<p>When I was young, I read popular physics books such as Richard Feynman’s <i>QED: The Strange Theory of Light and Matter</i>. I knew that light was waves, sound was waves, matter was waves. I took pride in my scientific literacy, when I was nine years old.</p><p>When I was older, and I began to read the <i>Feynman Lectures on Physics</i>, I ran across a gem called “the wave equation.” I could follow the equation’s derivation, but, <a href="http://www.math.utah.edu/~pa/math/polya.html">looking back</a>, I couldn’t see its truth at a glance. So I thought about the wave equation for three days, on and off, until I saw that it was embarrassingly obvious. And when I finally understood, I realized that the whole time I had accepted the honest assurance of physicists that light was waves, sound was waves, matter was waves, I had not had the vaguest idea of what the word “wave” meant to a physicist.</p><p>There is an instinctive tendency to think that if a physicist says “light is made of waves,” and the teacher says “What is light made of?” and the student says “Waves!”, then the student has made a true statement. That’s only fair, right? We accept “waves” as a correct answer from the physicist; wouldn’t it be unfair to reject it from the student? Surely, the answer “Waves!” is either <i>true</i> or <i>false</i>, right?</p><p>Which is one more bad habit to <a href="http://lesswrong.com/lw/i2/two_more_things_to_unlearn_from_school/">unlearn from school</a>. Words do not have intrinsic definitions. If I hear the syllables “bea-ver” and think of a large rodent, that is a fact about my own state of mind, not a fact about the syllables “bea-ver.” The sequence of syllables “made of waves” (or “because of heat conduction”) is not a <i>hypothesis</i>; it is a pattern of vibrations traveling through the air, or ink on paper. It can <i>associate</i> to a hypothesis in someone’s mind, but it is not, of itself, right or wrong. But in school, the teacher hands you a gold star for <i>saying</i> “made of waves,” which must be the correct answer because the teacher heard a physicist emit the same sound-vibrations. Since verbal behavior (spoken or written) is what gets the gold star, students begin to think that verbal behavior has a truth-value. After all, either light is made of waves, or it isn’t, right?</p><p>And this leads into an even worse habit. Suppose the teacher asks you why the far side of a metal plate feels warmer than the side next to the radiator. If you say “I don’t know,” you have <i>no</i> chance of getting a gold star—it won’t even count as class participation. But, during the curre... </p>
contents.wordCount: 1,067
contents.version: 2.1.0
contents.__typename: Revision
fmCrosspost: isCrosspost false, hostedHere null, foreignPostId null, __typename CrosspostOutput
Row 6
_id: fysgqk4CjAwhBgNYT
slug: fake-explanations
title: Fake Explanations
__typename: Post
userId: nmk3nLpQE89dMRzzN
hasCoauthorPermission: true
postedAt: 2007-08-20T21:13:35.000Z
frontpageDate: 2018-01-30T00:32:03.501Z
stickyPriority: 2
status: 2
postCategory: post
canonicalCollectionSlug: rationality
contents_latest: A7Ps5Jd4B5LyPoscq
commentCount: 97
voteCount: 164
baseScore: 183
score: 0.000206
lastCommentedAt: 2025-03-11T02:28:05.184Z
reviewedByUserId: XtphY3uYHwruKqDyG
afBaseScore: 14
afCommentCount: 0
afLastCommentedAt: 2007-08-20T21:13:35.000Z
submitToFrontpage: true
podcastEpisodeId: ojwA8wXBf74Wcd6pr
votingSystem: namesAttachedReactions
readTimeMinutes: 2
(remaining draft, sharing, event, curation, moderation, and review-market fields are null, false, 0, or [])
tags:
[ { "__typename": "Tag", "_id": "SJFsFfFhE6m2ThAYJ", "adminOnly": false, "afBaseScore": 3, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 9, "canEditUserIds": null, "core": false, "createdAt": "2020-07-13T16:19:09.687Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Anticipated Experiences", "needsReview": false, "noindex": false, "postCount": 49, "score": 9, "shortName": null, "slug": "anticipated-experiences", "suggestedAsFilter": false, "userId": "qxJ28GN72aiJu96iF", "voteCount": 1, "wikiOnly": false }, { "__typename": "Tag", "_id": "5f5c37ee1b5cdee568cfb12b", "adminOnly": false, "afBaseScore": null, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [] }, "baseScore": 0, "canEditUserIds": null, "core": false, "createdAt": "2020-09-11T19:58:51.930Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Semantic Stopsign", "needsReview": false, "noindex": false, "postCount": 8, "score": 0, "shortName": null, "slug": "semantic-stopsign", "suggestedAsFilter": false, "userId": "qf77EiaoMw7tH3GSr", "voteCount": 0, "wikiOnly": true }, { "__typename": "Tag", "_id": "Ng8Gice9KNkncxqcj", "adminOnly": false, "afBaseScore": 0, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [] }, "baseScore": 1, "canEditUserIds": null, "core": true, "createdAt": "2020-06-14T22:24:17.072Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 100, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "iMqytjy9ns89Fzfyv", "displayName": "miakko" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Rationality", "needsReview": false, "noindex": false, "postCount": 4302, "score": 1, "shortName": null, "slug": "rationality", "suggestedAsFilter": true, "userId": "r38pkCm7wF4M44MDQ", "voteCount": 1, "wikiOnly": false } ]
feedId: null
totalDialogueResponseCount: 0
unreadDebateResponseCount: 0
extendedScore: agreement 0, approvalVoteCount 164, agreementVoteCount 0
afExtendedScore: agreement 0, approvalVoteCount 18, agreementVoteCount 0
user: same author record as Row 1 (Eliezer Yudkowsky, nmk3nLpQE89dMRzzN)
socialPreviewData._id: fysgqk4CjAwhBgNYT
socialPreviewData.__typename: SocialPreviewType
contents._id: A7Ps5Jd4B5LyPoscq
contents.htmlHighlight:
<p>Once upon a time, there was an instructor who taught physics students. One day the instructor called them into the classroom and showed them a wide, square plate of metal, next to a hot radiator. The students each put their hand on the plate and found the side next to the radiator cool, and the distant side warm. And the instructor said, <i>Why do you think this happens?</i> Some students guessed convection of air currents, and others guessed strange metals in the plate. They devised many creative explanations, none stooping so low as to say “I don’t know” or “This seems impossible.”</p><p>And the answer was that before the students entered the room, the instructor turned the plate around.<sup>1</sup></p><p>Consider the student who frantically stammers, “Eh, maybe because of the heat conduction and so?” I ask: Is this answer a proper belief? The words are easily enough professed—said in a loud, emphatic voice. But do the words actually control anticipation?</p><p>Ponder that innocent little phrase, “because of,” which comes before “heat conduction.” Ponder some of the <i>other</i> things we could put after it. We could say, for example, “Because of phlogiston,” or “Because of magic.”</p><p>“Magic!” you cry. “That’s not a <i>scientific</i> explanation!” Indeed, the phrases “because of heat conduction” and “because of magic” are readily recognized as belonging to different <i>literary genres.</i> “Heat conduction” is something that Spock might say on <i>Star Trek</i>, whereas “magic” would be said by Giles in <i>Buffy the Vampire Slayer</i>.</p><p>However, as Bayesians, we take no notice of literary genres. For us, the substance of a model is the control it exerts on anticipation. If you say “heat conduction,” what experience does that lead you to <i>anticipate</i>? Under normal circumstances, it leads you to anticipate that, if you put your hand on the side of the plate near the radiator, that side will feel warmer than the opposite side. If “because of heat conduction” can also explain the radiator-adjacent side feeling <i>cooler</i>, then it can explain pretty much <i>anything.</i></p><p>And as we all know by this point (I do hope), if you are equally good at explaining any outcome, you have zero knowledge. “Because of heat conduction,” used in such fashion, is a disguised hypothesis of maximum entropy. It is anticipation-isomorphic to saying “magic.” It feels like an explanation, but it’s not.</p><p>Suppose that instead of guessing, we measured the heat of the metal plate at v... </p>
Once upon a time, there was an instructor who taught physics students. One day the instructor called them into the classroom and showed them a wide, square plate of metal, next to a hot radiator. The students each put their hand on the plate and found the side next to the radiator cool, and the distant side warm. And the instructor said, Why do you think this happens? Some students guessed convection of air currents, and others guessed strange metals in the plate. They devised many creative explanations, none stooping so low as to say “I don’t know” or “This seems impossible.” And the answer was that before the students entered the room, the instructor turned the plate around.1 Consider the student who frantically stammers, “Eh, maybe because of the heat conduction and so?” I ask: Is this answer a proper belief? The words are easily enough professed—said in a loud, emphatic voice. But do the words actually control anticipation? Ponder that innocent little phrase, “because of,” which comes before “heat conduction.” Ponder some of the other things we could put after it. We could say, for example, “Because of phlogiston,” or “Because of magic.” “Magic!” you cry. “That’s not a scientific explanation!” Indeed, the phrases “because of heat conduction” and “because of magic” are readily recognized as belonging to different literary genres. “Heat conduction” is something that Spock might say on Star Trek, whereas “magic” would be said by Giles in Buffy the Vampire Slayer. However, as Bayesians, we take no notice of literary genres. For us, the substance of a model is the control it exerts on anticipation. If you say “heat conduction,” what experience does that lead you to anticipate? Under normal circumstances, it leads you to anticipate that, if you put your hand on the side of the plate near the radiator, that side will feel warmer than the opposite side. If “because of heat conduction” can also explain the radiator-adjacent side feeling cooler, then it can explain p
584
2.1.0
Revision
false
null
null
CrosspostOutput
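The post above argues that an "explanation" equally good at predicting any outcome carries zero knowledge. A minimal sketch of that point, added for illustration and not part of the original post; all probabilities (0.95, 0.05, the uniform 0.5 baseline) are made-up assumptions:

```python
# Why a hypothesis that "explains" every outcome is a disguised
# maximum-entropy hypothesis: its likelihood ratio is always 1.
import math

outcomes = ["near side warmer", "near side cooler"]

# An honest use of "heat conduction" concentrates probability on one outcome.
honest = {"near side warmer": 0.95, "near side cooler": 0.05}

# "Magic" (or heat-conduction-as-password) spreads probability evenly.
password = {o: 0.5 for o in outcomes}

observed = "near side cooler"  # the instructor turned the plate around

# Likelihood ratio against the uniform baseline measures how much the
# hypothesis actually anticipated the observation.
for name, hypothesis in [("honest conduction", honest), ("password/magic", password)]:
    lr = hypothesis[observed] / 0.5
    print(f"{name}: likelihood ratio {lr:.2f}, {math.log2(lr):+.2f} bits")
```

The honest hypothesis takes a real hit when surprised, which is the price of saying something; the password hypothesis yields exactly 0 bits whichever outcome occurs.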
XKcawbsB6Tj5e2QRK
is-molecular-nanotechnology-scientific
Is Molecular Nanotechnology "Scientific"?
null
false
false
false
null
nmk3nLpQE89dMRzzN
null
true
false
false
false
Post
null
2007-08-20T04:11:56.000Z
null
false
false
2
2
null
false
false
post
[]
null
null
5c639214bcb4ac6367c12c08
49
46
40
false
0.000043
null
false
false
2021-11-08T00:44:14.263Z
null
null
null
null
null
false
false
null
null
null
false
false
null
null
null
null
null
null
null
null
null
null
false
null
null
null
null
XtphY3uYHwruKqDyG
null
null
null
false
null
[]
null
4
0
null
false
false
null
null
true
false
false
0
0
0
null
null
null
null
null
null
null
false
0
0
namesAttachedReactions
false
[]
3
null
null
null
null
[ { "__typename": "Tag", "_id": "XJjvxWB68GYpts93N", "adminOnly": false, "afBaseScore": 3, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 9, "canEditUserIds": null, "core": false, "createdAt": "2020-07-10T12:26:51.104Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Nanotechnology", "needsReview": false, "noindex": false, "postCount": 36, "score": 9, "shortName": null, "slug": "nanotechnology", "suggestedAsFilter": false, "userId": "qxJ28GN72aiJu96iF", "voteCount": 1, "wikiOnly": false }, { "__typename": "Tag", "_id": "ZpG9rheyAkgCoEQea", "adminOnly": false, "afBaseScore": 3, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 9, "canEditUserIds": null, "core": false, "createdAt": "2020-07-10T11:53:33.735Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Practice & Philosophy of Science", "needsReview": false, "noindex": false, "postCount": 262, "score": 9, "shortName": null, "slug": "practice-and-philosophy-of-science", "suggestedAsFilter": false, "userId": "qxJ28GN72aiJu96iF", "voteCount": 1, "wikiOnly": false } ]
null
0
0
null
false
null
null
0
46
0
0
6
0
nmk3nLpQE89dMRzzN
eliezer_yudkowsky
2009-02-23T21:58:56.739Z
Eliezer_Yudkowsky
Eliezer Yudkowsky
null
null
null
150,014
1,892
false
false
null
null
951
7,677
40
18
120
1
3,803
r38pkCm7wF4M44MDQ
User
reign-of-terror
[ "sBWszXPhPsNNemv4Q", "YBHSPmZEfyyY2E2au" ]
true
[ "trustLevel1", "alignmentVoters", "alignmentForum", "canModeratePersonal" ]
null
null
XKcawbsB6Tj5e2QRK
SocialPreviewType
5c639214bcb4ac6367c12c08
<p><strong>Prerequisite / Read this first:</strong>&nbsp; <a href="/lw/in/scientific_evidence_legal_evidence_rational/">Scientific Evidence, Legal Evidence, Rational Evidence</a> </p><p>Consider the statement &quot;It is physically possible to construct <a href="http://www.e-drexler.com/d/06/00/Nanosystems/toc.html">diamondoid nanomachines</a> which <a href="http://www.nanomedicine.com/NMI.htm">repair</a> biological cells.&quot;&nbsp; Some people will tell you that molecular nanotechnology is &quot;pseudoscience&quot; because it has not been verified by experiment - no one has ever seen a nanofactory, so how can believing in their possibility be <u>scientific</u>?</p><p>Drexler, I think, would reply that his extrapolations of diamondoid nanomachines are based on standard physics, which is to say, scientific generalizations. Therefore, if you say that nanomachines <em>cannot</em> work, you must be inventing new physics.&nbsp; Or to put it more sharply:&nbsp; If you say that a <a href="http://www.wag.caltech.edu/gallery/gallery_nanotec.html">simulation of a molecular gear</a> is inaccurate, if you claim that atoms thus configured would behave differently from depicted, then either you know a flaw in the simulation algorithm or you're inventing your own laws of physics.</p><a id="more"></a><p>My own sympathies, I confess, are with Drexler.&nbsp; And not just because you could apply the same argument of &quot;I've never seen it, therefore it can't happen&quot; to my own field of Artificial Intelligence.</p><p>What about the Wright Brothers' attempt to build a non-biological heavier-than-air powered flying machine?&nbsp; Was that &quot;pseudoscience&quot;?&nbsp; No one had ever seen one before.&nbsp; Wasn't &quot;all flying machines crash&quot; a generalization true over all previous observations?&nbsp; Wouldn't it be scientific to extend this generalization to predict future experiments?</p><p>&quot;Flying machines crash&quot; is a qualitative, imprecise, verbal, surface-level generalization.&nbsp; If you have a quantitative theory of aerodynamics which can calculate <em>precisely</em> how previous flying machines crashed, that same theory of aerodynamics would predict the Wright Flyer will fly (and how high, at what speed).&nbsp; Deep quantitative generalizations take strict precedence over verbal surface generalizations.&nbsp; Only deep laws possess the <a href="/lw/hr/universal_law/">absolute universality and stability</a> of physics.&nbsp; Perhaps there are no new quarks under the Sun, but on higher levels of organization, new things happen all the time.</p><p>&quot;No one has ever seen a non-biological nanomachine&quot; is a verbalish surface-level generalization, which can hardly overrule the precise physical models used to simulate a molecul... </p>
Prerequisite / Read this first:  Scientific Evidence, Legal Evidence, Rational Evidence Consider the statement "It is physically possible to construct diamondoid nanomachines which repair biological cells."  Some people will tell you that molecular nanotechnology is "pseudoscience" because it has not been verified by experiment - no one has ever seen a nanofactory, so how can believing in their possibility be scientific? Drexler, I think, would reply that his extrapolations of diamondoid nanomachines are based on standard physics, which is to say, scientific generalizations. Therefore, if you say that nanomachines cannot work, you must be inventing new physics.  Or to put it more sharply:  If you say that a simulation of a molecular gear is inaccurate, if you claim that atoms thus configured would behave differently from depicted, then either you know a flaw in the simulation algorithm or you're inventing your own laws of physics. My own sympathies, I confess, are with Drexler.  And not just because you could apply the same argument of "I've never seen it, therefore it can't happen" to my own field of Artificial Intelligence. What about the Wright Brothers' attempt to build a non-biological heavier-than-air powered flying machine?  Was that "pseudoscience"?  No one had ever seen one before.  Wasn't "all flying machines crash" a generalization true over all previous observations?  Wouldn't it be scientific to extend this generalization to predict future experiments? "Flying machines crash" is a qualitative, imprecise, verbal, surface-level generalization.  If you have a quantitative theory of aerodynamics which can calculate precisely how previous flying machines crashed, that same theory of aerodynamics would predict the Wright Flyer will fly (and how high, at what speed).  Deep quantitative generalizations take strict precedence over verbal surface generalizations.  Only deep laws possess the absolute universality and stability of physics.  Perhaps there are n
667
1.0.0
Revision
false
null
null
CrosspostOutput
fhojYBGGiYAFcryHZ
scientific-evidence-legal-evidence-rational-evidence
Scientific Evidence, Legal Evidence, Rational Evidence
null
false
false
false
null
nmk3nLpQE89dMRzzN
null
true
false
false
false
Post
null
2007-08-19T05:36:12.000Z
null
false
false
2
2
2018-01-30T00:32:03.501Z
false
false
post
[]
null
null
3TTMvZ6cDDiZehq2M
18
122
138
false
0.000158
null
false
false
2022-03-07T15:32:42.691Z
null
rationality
null
null
null
false
false
null
null
null
false
false
null
null
null
null
null
null
null
null
null
null
false
null
null
null
null
XtphY3uYHwruKqDyG
null
null
null
false
null
[]
null
16
0
null
false
false
null
null
true
false
false
0
0
0
null
null
null
null
null
null
F7mZjnZMCCeLZG6Ea
false
0
0
namesAttachedReactions
false
[]
4
null
null
null
null
[ { "__typename": "Tag", "_id": "AHK82ypfxF45rqh9D", "adminOnly": false, "afBaseScore": null, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [] }, "baseScore": 0, "canEditUserIds": null, "core": false, "createdAt": "2020-12-23T20:53:12.566Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Distinctions", "needsReview": false, "noindex": false, "postCount": 108, "score": 0, "shortName": null, "slug": "distinctions", "suggestedAsFilter": false, "userId": "sKAL2jzfkYkDbQmx9", "voteCount": 0, "wikiOnly": false }, { "__typename": "Tag", "_id": "wGGAjTfXZBatQkft5", "adminOnly": false, "afBaseScore": 7, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "sKAL2jzfkYkDbQmx9", "displayName": "Yoav Ravid" } ] }, "baseScore": 17, "canEditUserIds": null, "core": false, "createdAt": "2020-08-09T09:26:08.406Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "sKAL2jzfkYkDbQmx9", "displayName": "Yoav Ravid" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Law and Legal systems", "needsReview": false, "noindex": false, "postCount": 101, "score": 17, "shortName": null, "slug": "law-and-legal-systems", "suggestedAsFilter": false, "userId": "QBvPFLFyZyuHcBwFm", "voteCount": 2, "wikiOnly": false }, { "__typename": "Tag", "_id": "ZpG9rheyAkgCoEQea", "adminOnly": false, "afBaseScore": 3, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 9, "canEditUserIds": null, "core": false, "createdAt": "2020-07-10T11:53:33.735Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Practice & Philosophy of Science", "needsReview": false, "noindex": false, "postCount": 262, "score": 9, "shortName": null, "slug": "practice-and-philosophy-of-science", "suggestedAsFilter": false, "userId": "qxJ28GN72aiJu96iF", "voteCount": 1, "wikiOnly": false }, { "__typename": "Tag", "_id": "32DdRimdM7sB5wmKu", "adminOnly": false, "afBaseScore": 3, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 9, "canEditUserIds": null, "core": false, "createdAt": "2020-08-02T08:13:33.288Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Empiricism", "needsReview": false, "noindex": 
false, "postCount": 45, "score": 9, "shortName": null, "slug": "empiricism", "suggestedAsFilter": false, "userId": "sKAL2jzfkYkDbQmx9", "voteCount": 1, "wikiOnly": false }, { "__typename": "Tag", "_id": "Ng8Gice9KNkncxqcj", "adminOnly": false, "afBaseScore": 0, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [] }, "baseScore": 1, "canEditUserIds": null, "core": true, "createdAt": "2020-06-14T22:24:17.072Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 100, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "iMqytjy9ns89Fzfyv", "displayName": "miakko" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Rationality", "needsReview": false, "noindex": false, "postCount": 4302, "score": 1, "shortName": null, "slug": "rationality", "suggestedAsFilter": true, "userId": "r38pkCm7wF4M44MDQ", "voteCount": 1, "wikiOnly": false } ]
null
0
0
null
false
null
null
0
122
0
0
15
0
nmk3nLpQE89dMRzzN
eliezer_yudkowsky
2009-02-23T21:58:56.739Z
Eliezer_Yudkowsky
Eliezer Yudkowsky
null
null
null
150,014
1,892
false
false
null
null
951
7,677
40
18
120
1
3,803
r38pkCm7wF4M44MDQ
User
reign-of-terror
[ "sBWszXPhPsNNemv4Q", "YBHSPmZEfyyY2E2au" ]
true
[ "trustLevel1", "alignmentVoters", "alignmentForum", "canModeratePersonal" ]
null
null
fhojYBGGiYAFcryHZ
SocialPreviewType
3TTMvZ6cDDiZehq2M
<p>Suppose that your good friend, the police commissioner, tells you in strictest confidence that the crime kingpin of your city is Wulky Wilkinsen. As a rationalist, are you licensed to believe this statement? Put it this way: if you go ahead and insult Wulky, I’d call you foolhardy. Since it is prudent to act as if Wulky has a substantially higher-than-default probability of being a crime boss, the police commissioner’s statement must have been strong Bayesian evidence.</p><p>Our legal system will not imprison Wulky on the basis of the police commissioner’s statement. It is not admissible as <i>legal evidence</i>. Maybe if you locked up every person accused of being a crime boss by a police commissioner, you’d <i>initially</i> catch a lot of crime bosses, and relatively few people the commissioner just didn’t like. But unrestrained power attracts corruption like honey attracts flies: over time, you’d catch fewer and fewer real crime bosses (who would go to greater lengths to ensure anonymity), and more and more innocent victims.</p><p>This does not mean that the police commissioner’s statement is not rational evidence. It still has a lopsided likelihood ratio, and you’d still be a fool to insult Wulky. But on a <i>social</i> level, in pursuit of a social goal, we deliberately define “legal evidence” to include only particular kinds of evidence, such as the police commissioner’s own observations on the night of April 4th. All legal evidence should ideally be rational evidence, but not the other way around. We impose special, strong, additional standards before we anoint rational evidence as “legal evidence.”</p><p>As I write this sentence at 8:33 p.m., Pacific time, on August 18th, 2007, I am wearing white socks. As a rationalist, are you licensed to believe the previous statement? Yes. Could I testify to it in court? Yes. Is it a <i>scientific</i> statement? No, because there is no experiment you can perform yourself to verify it. Science is made up of <i>generalizations</i> which apply to many particular instances, so that you can run new real-world experiments which test the generalization, and thereby verify for yourself that the generalization is true, without having to trust anyone’s authority. Science is the <i>publicly reproducible</i> knowledge of humankind.</p><p>Like a court system, science as a social process is made up of fallible humans. We want a protected pool of beliefs that are <i>especially</i> reliable. And we want ... </p>
Suppose that your good friend, the police commissioner, tells you in strictest confidence that the crime kingpin of your city is Wulky Wilkinsen. As a rationalist, are you licensed to believe this statement? Put it this way: if you go ahead and insult Wulky, I’d call you foolhardy. Since it is prudent to act as if Wulky has a substantially higher-than-default probability of being a crime boss, the police commissioner’s statement must have been strong Bayesian evidence. Our legal system will not imprison Wulky on the basis of the police commissioner’s statement. It is not admissible as legal evidence. Maybe if you locked up every person accused of being a crime boss by a police commissioner, you’d initially catch a lot of crime bosses, and relatively few people the commissioner just didn’t like. But unrestrained power attracts corruption like honey attracts flies: over time, you’d catch fewer and fewer real crime bosses (who would go to greater lengths to ensure anonymity), and more and more innocent victims. This does not mean that the police commissioner’s statement is not rational evidence. It still has a lopsided likelihood ratio, and you’d still be a fool to insult Wulky. But on a social level, in pursuit of a social goal, we deliberately define “legal evidence” to include only particular kinds of evidence, such as the police commissioner’s own observations on the night of April 4th. All legal evidence should ideally be rational evidence, but not the other way around. We impose special, strong, additional standards before we anoint rational evidence as “legal evidence.” As I write this sentence at 8:33 p.m., Pacific time, on August 18th, 2007, I am wearing white socks. As a rationalist, are you licensed to believe the previous statement? Yes. Could I testify to it in court? Yes. Is it a scientific statement? No, because there is no experiment you can perform yourself to verify it. Science is made up of generalizations which apply to many particular instances,
899
2.1.0
Revision
false
null
null
CrosspostOutput
WnheMGAka4fL99eae
hindsight-devalues-science
Hindsight Devalues Science
null
false
false
false
null
nmk3nLpQE89dMRzzN
null
true
false
false
false
Post
null
2007-08-17T19:39:42.000Z
null
false
false
2
2
2018-01-30T00:32:03.501Z
false
false
post
[]
null
null
jWDHaAqSedgDb2xH5
44
220
249
false
0.000275
null
false
false
2024-04-23T17:41:46.144Z
null
rationality
null
null
null
false
false
null
null
null
false
false
null
null
null
null
null
null
null
null
null
null
false
null
null
null
null
XtphY3uYHwruKqDyG
null
null
null
false
null
[]
null
14
0
null
false
false
null
null
true
false
false
0
0
0
null
null
null
null
null
null
zWvSBMDLJHLjWLAMS
false
0
0
namesAttachedReactions
false
[]
2
null
null
null
null
[ { "__typename": "Tag", "_id": "5f5c37ee1b5cdee568cfb127", "adminOnly": false, "afBaseScore": null, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [] }, "baseScore": 0, "canEditUserIds": null, "core": false, "createdAt": "2020-09-11T19:58:51.920Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Hindsight Bias", "needsReview": false, "noindex": false, "postCount": 14, "score": 0, "shortName": null, "slug": "hindsight-bias", "suggestedAsFilter": false, "userId": "qf77EiaoMw7tH3GSr", "voteCount": 0, "wikiOnly": false }, { "__typename": "Tag", "_id": "gjfAJRD54uPiDRmn7", "adminOnly": false, "afBaseScore": 0, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [] }, "baseScore": 1, "canEditUserIds": null, "core": false, "createdAt": "2016-05-26T04:30:15.000Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "jKbRsstM3MyEGNzsm", "displayName": "Jeffrey Eppler" } ] }, "isArbitalImport": true, "isPlaceholderPage": false, "isSubforum": false, "name": "Fallacies", "needsReview": false, "noindex": false, "postCount": 92, "score": 1, "shortName": null, "slug": "fallacies", "suggestedAsFilter": false, "userId": "nmk3nLpQE89dMRzzN", "voteCount": 1, "wikiOnly": false }, { "__typename": "Tag", "_id": "4R8JYu4QF2FqzJxE5", "adminOnly": false, "afBaseScore": 9, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 19, "canEditUserIds": null, "core": false, "createdAt": "2020-05-13T15:40:30.194Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Heuristics & Biases", "needsReview": false, "noindex": false, "postCount": 272, "score": 19, "shortName": null, "slug": "heuristics-and-biases", "suggestedAsFilter": false, "userId": "BpBzKEueak7J8vHNi", "voteCount": 2, "wikiOnly": false } ]
null
0
0
null
false
null
null
0
220
0
0
27
0
nmk3nLpQE89dMRzzN
eliezer_yudkowsky
2009-02-23T21:58:56.739Z
Eliezer_Yudkowsky
Eliezer Yudkowsky
null
null
null
150,014
1,892
false
false
null
null
951
7,677
40
18
120
1
3,803
r38pkCm7wF4M44MDQ
User
reign-of-terror
[ "sBWszXPhPsNNemv4Q", "YBHSPmZEfyyY2E2au" ]
true
[ "trustLevel1", "alignmentVoters", "alignmentForum", "canModeratePersonal" ]
null
null
WnheMGAka4fL99eae
SocialPreviewType
jWDHaAqSedgDb2xH5
<p>This essay is closely based on an <a href="https://web.archive.org/web/20170801042830/http://csml.som.ohio-state.edu:80/Music829C/hindsight.bias.html">excerpt</a> from Meyers’s <i>Exploring Social Psychology</i>; the excerpt is worth reading in its entirety.</p><p>Cullen Murphy, editor of <i>The Atlantic</i>, said that the social sciences turn up “no ideas or conclusions that can’t be found in [any] encyclopedia of quotations . . . Day after day social scientists go out into the world. Day after day they discover that people’s behavior is pretty much what you’d expect.”</p><p>Of course, the “expectation” is all <a href="http://lesswrong.com/lw/il/hindsight_bias/">hindsight</a>. (Hindsight bias: Subjects who know the actual answer to a question assign much higher probabilities they “would have” guessed for that answer, compared to subjects who must guess without knowing the answer.)</p><p>The historian Arthur Schlesinger, Jr. dismissed scientific studies of World War II soldiers’ experiences as “ponderous demonstrations” of common sense. For example:</p><ol><li>Better educated soldiers suffered more adjustment problems than less educated soldiers. (Intellectuals were less prepared for battle stresses than street-smart people.)&nbsp;</li><li>Southern soldiers coped better with the hot South Sea Island climate than Northern soldiers. (Southerners are more accustomed to hot weather.)&nbsp;</li><li>White privates were more eager to be promoted to noncommissioned officers than Black privates. (Years of oppression take a toll on achievement motivation.)&nbsp;</li><li>Southern Blacks preferred Southern to Northern White officers. (Southern officers were more experienced and skilled in interacting with Blacks.)&nbsp;</li><li>As long as the fighting continued, soldiers were more eager to return home than after the war ended. (During the fighting, soldiers knew they were in mortal danger.)</li></ol><p>How many of these findings do you think you <i>could have</i> predicted in advance? Three out of five? Four out of five? Are there any cases where you would have predicted the opposite—where your model takes a hit? Take a moment to think before continuing . . .</p><p>&nbsp;</p><p>&nbsp;</p><p>. . .</p><p>&nbsp;</p><p>&nbsp;</p><p>In this demonstration (from Paul Lazarsfeld by way of Meyers), all of the findings above are the <i>opposite</i> of what was actually found.<sup>1</sup> How many times did you think your model took a hit? How many times did you admit you would have been wrong? That’s how good your model really was. The measure of your strength as a rationalist is your ability to be more confused by fiction than by reality.</p><p>Unless, of course, I reversed the results again. What do you think?</p><p>Do your t... </p>
This essay is closely based on an excerpt from Meyers’s Exploring Social Psychology; the excerpt is worth reading in its entirety. Cullen Murphy, editor of The Atlantic, said that the social sciences turn up “no ideas or conclusions that can’t be found in [any] encyclopedia of quotations . . . Day after day social scientists go out into the world. Day after day they discover that people’s behavior is pretty much what you’d expect.” Of course, the “expectation” is all hindsight. (Hindsight bias: Subjects who know the actual answer to a question assign much higher probabilities they “would have” guessed for that answer, compared to subjects who must guess without knowing the answer.) The historian Arthur Schlesinger, Jr. dismissed scientific studies of World War II soldiers’ experiences as “ponderous demonstrations” of common sense. For example: 1. Better educated soldiers suffered more adjustment problems than less educated soldiers. (Intellectuals were less prepared for battle stresses than street-smart people.)  2. Southern soldiers coped better with the hot South Sea Island climate than Northern soldiers. (Southerners are more accustomed to hot weather.)  3. White privates were more eager to be promoted to noncommissioned officers than Black privates. (Years of oppression take a toll on achievement motivation.)  4. Southern Blacks preferred Southern to Northern White officers. (Southern officers were more experienced and skilled in interacting with Blacks.)  5. As long as the fighting continued, soldiers were more eager to return home than after the war ended. (During the fighting, soldiers knew they were in mortal danger.) How many of these findings do you think you could have predicted in advance? Three out of five? Four out of five? Are there any cases where you would have predicted the opposite—where your model takes a hit? Take a moment to think before continuing . . .     . . .     In this demonstration (from Paul Lazarsfeld by way of Meyers)
607
2.1.0
Revision
false
null
null
CrosspostOutput
fkM9XsNvXdYH6PPAx
hindsight-bias
Hindsight bias
null
false
false
false
null
nmk3nLpQE89dMRzzN
null
true
false
false
false
Post
null
2007-08-16T21:58:45.000Z
null
false
false
2
2
2018-01-30T00:32:03.501Z
false
false
post
[]
null
null
5c63924bbcb4ac6367c13f80
25
66
74
false
0.00009
null
false
false
2015-04-02T22:30:09.578Z
null
null
null
null
null
false
false
null
null
null
false
false
null
null
null
null
null
null
null
null
null
null
false
null
null
null
null
XtphY3uYHwruKqDyG
null
null
null
false
null
[]
null
4
0
2007-08-16T21:58:45.000Z
false
false
null
null
true
false
false
0
0
0
null
null
null
null
null
null
null
false
0
0
namesAttachedReactions
false
[]
3
null
null
null
null
[ { "__typename": "Tag", "_id": "5f5c37ee1b5cdee568cfb127", "adminOnly": false, "afBaseScore": null, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [] }, "baseScore": 0, "canEditUserIds": null, "core": false, "createdAt": "2020-09-11T19:58:51.920Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Hindsight Bias", "needsReview": false, "noindex": false, "postCount": 14, "score": 0, "shortName": null, "slug": "hindsight-bias", "suggestedAsFilter": false, "userId": "qf77EiaoMw7tH3GSr", "voteCount": 0, "wikiOnly": false }, { "__typename": "Tag", "_id": "4R8JYu4QF2FqzJxE5", "adminOnly": false, "afBaseScore": 9, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 19, "canEditUserIds": null, "core": false, "createdAt": "2020-05-13T15:40:30.194Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Heuristics & Biases", "needsReview": false, "noindex": false, "postCount": 272, "score": 19, "shortName": null, "slug": "heuristics-and-biases", "suggestedAsFilter": false, "userId": "BpBzKEueak7J8vHNi", "voteCount": 2, "wikiOnly": false }, { "__typename": "Tag", "_id": "Ng8Gice9KNkncxqcj", "adminOnly": false, "afBaseScore": 0, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [] }, "baseScore": 1, "canEditUserIds": null, "core": true, "createdAt": "2020-06-14T22:24:17.072Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 100, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "iMqytjy9ns89Fzfyv", "displayName": "miakko" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Rationality", "needsReview": false, "noindex": false, "postCount": 4302, "score": 1, "shortName": null, "slug": "rationality", "suggestedAsFilter": true, "userId": "r38pkCm7wF4M44MDQ", "voteCount": 1, "wikiOnly": false } ]
null
0
0
null
false
null
null
0
66
0
0
8
0
nmk3nLpQE89dMRzzN
eliezer_yudkowsky
2009-02-23T21:58:56.739Z
Eliezer_Yudkowsky
Eliezer Yudkowsky
null
null
null
150,014
1,892
false
false
null
null
951
7,677
40
18
120
1
3,803
r38pkCm7wF4M44MDQ
User
reign-of-terror
[ "sBWszXPhPsNNemv4Q", "YBHSPmZEfyyY2E2au" ]
true
[ "trustLevel1", "alignmentVoters", "alignmentForum", "canModeratePersonal" ]
null
null
fkM9XsNvXdYH6PPAx
SocialPreviewType
5c63924bbcb4ac6367c13f80
<p><em>Hindsight bias</em> is when people who know the answer vastly overestimate its <em>predictability</em> or <em>obviousness,</em> compared to the estimates of subjects who must guess without advance knowledge.&nbsp; Hindsight bias is sometimes called the <em>I-knew-it-all-along effect</em>.</p><p>Fischhoff and Beyth (1975) presented students with historical accounts of unfamiliar incidents, such as a conflict between the Gurkhas and the British in 1814.&nbsp; Given the account as background knowledge, five groups of students were asked what they would have predicted as the probability for each of four outcomes: British victory, Gurkha victory, stalemate with a peace settlement, or stalemate with no peace settlement.&nbsp; Four experimental groups were respectively told that these four outcomes were the historical outcome.&nbsp; The fifth, control group was not told any historical outcome.&nbsp; In every case, a group told an outcome assigned substantially higher probability to that outcome, than did any other group or the control group.</p><p><a id="more"></a></p><p>Hindsight bias matters in legal cases, where a judge or jury must determine whether a defendant was legally negligent in failing to foresee a hazard (Sanchirico 2003). In an experiment based on an actual legal case, Kamin and Rachlinski (1995) asked two groups to estimate the probability of flood damage caused by blockage of a city-owned drawbridge. The control group was told only the background information known to the city when it decided not to hire a bridge watcher. The experimental group was given this information, plus the fact that a flood had actually occurred. Instructions stated the city was negligent if the foreseeable probability of flooding was greater than 10%. 76% of the control group concluded the flood was so unlikely that no precautions were necessary; 57% of the experimental group concluded the flood was so likely that failure to take precautions was legally negligent. A third experimental group was told the outcome and also explicitly instructed to avoid hindsight bias, which made no difference: 56% concluded the city was legally negligent.</p><p>Viewing history through the lens of hindsight, we vastly underestimate the cost of effective safety precautions.&nbsp; In 1986, the <em>Challenger</em> exploded for reasons traced to an O-ring losing flexibility at low temperature.&nbsp; There were warning signs of a problem with the O-rings.&nbsp; But preventing the <em>Challe</em>... </p>
Hindsight bias is when people who know the answer vastly overestimate its predictability or obviousness, compared to the estimates of subjects who must guess without advance knowledge.  Hindsight bias is sometimes called the I-knew-it-all-along effect. Fischhoff and Beyth (1975) presented students with historical accounts of unfamiliar incidents, such as a conflict between the Gurkhas and the British in 1814.  Given the account as background knowledge, five groups of students were asked what they would have predicted as the probability for each of four outcomes: British victory, Gurkha victory, stalemate with a peace settlement, or stalemate with no peace settlement.  Four experimental groups were respectively told that these four outcomes were the historical outcome.  The fifth, control group was not told any historical outcome.  In every case, a group told an outcome assigned substantially higher probability to that outcome, than did any other group or the control group. Hindsight bias matters in legal cases, where a judge or jury must determine whether a defendant was legally negligent in failing to foresee a hazard (Sanchirico 2003). In an experiment based on an actual legal case, Kamin and Rachlinski (1995) asked two groups to estimate the probability of flood damage caused by blockage of a city-owned drawbridge. The control group was told only the background information known to the city when it decided not to hire a bridge watcher. The experimental group was given this information, plus the fact that a flood had actually occurred. Instructions stated the city was negligent if the foreseeable probability of flooding was greater than 10%. 76% of the control group concluded the flood was so unlikely that no precautions were necessary; 57% of the experimental group concluded the flood was so likely that failure to take precautions was legally negligent. A third experimental group was told the outcome and also explicitly instructed to avoid hindsight bias, which
739
1.0.0
Revision
false
null
null
CrosspostOutput
WN73eiLQkuDtSC8Ag
one-argument-against-an-army
One Argument Against An Army
null
false
false
false
null
nmk3nLpQE89dMRzzN
null
true
false
false
false
Post
null
2007-08-15T18:39:43.000Z
null
false
false
2
2
2018-01-30T00:32:03.501Z
false
false
post
[]
null
null
uZYQ2GhZHRxrvPLgS
37
91
109
false
0.000127
null
false
false
2024-11-25T09:45:33.341Z
null
rationality
null
null
null
false
false
null
null
null
false
false
null
null
null
null
null
null
null
null
null
null
false
null
null
null
null
XtphY3uYHwruKqDyG
null
null
null
false
null
[]
null
5
0
null
false
false
null
null
true
false
false
0
0
0
null
null
null
null
null
null
oJyfLwurxfvjF7q97
false
0
0
namesAttachedReactions
false
[]
2
null
null
null
null
[ { "__typename": "Tag", "_id": "LDTSbmXtokYAsEq8e", "adminOnly": false, "afBaseScore": 3, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 9, "canEditUserIds": null, "core": false, "createdAt": "2020-05-24T07:47:20.152Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Motivated Reasoning", "needsReview": false, "noindex": false, "postCount": 73, "score": 9, "shortName": null, "slug": "motivated-reasoning", "suggestedAsFilter": false, "userId": "qxJ28GN72aiJu96iF", "voteCount": 1, "wikiOnly": false }, { "__typename": "Tag", "_id": "ZzxvopS4BwLuQy42n", "adminOnly": false, "afBaseScore": 9, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 19, "canEditUserIds": null, "core": false, "createdAt": "2020-05-24T18:04:01.718Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Rationalization", "needsReview": false, "noindex": false, "postCount": 83, "score": 19, "shortName": null, "slug": "rationalization", "suggestedAsFilter": false, "userId": "nLbwLhBaQeG6tCNDN", "voteCount": 2, "wikiOnly": false } ]
null
0
0
null
false
null
null
0
91
0
0
13
0
nmk3nLpQE89dMRzzN
eliezer_yudkowsky
2009-02-23T21:58:56.739Z
Eliezer_Yudkowsky
Eliezer Yudkowsky
null
null
null
150,014
1,892
false
false
null
null
951
7,677
40
18
120
1
3,803
r38pkCm7wF4M44MDQ
User
reign-of-terror
[ "sBWszXPhPsNNemv4Q", "YBHSPmZEfyyY2E2au" ]
true
[ "trustLevel1", "alignmentVoters", "alignmentForum", "canModeratePersonal" ]
null
null
WN73eiLQkuDtSC8Ag
SocialPreviewType
uZYQ2GhZHRxrvPLgS
<p>I talked about a style of reasoning in which not a single contrary argument is allowed, with the result that every non-supporting observation has to be argued away. Here I suggest that when people encounter a contrary argument, they prevent themselves from downshifting their confidence by <em>rehearsing</em> already-known support. </p> <p>Suppose the country of Freedonia is debating whether its neighbor, Sylvania, is responsible for a recent rash of meteor strikes on its cities. There are several pieces of evidence suggesting this: the meteors struck cities close to the Sylvanian border; there was unusual activity in the Sylvanian stock markets <em>before</em> the strikes; and the Sylvanian ambassador Trentino was heard muttering about &#x201C;heavenly vengeance.&#x201D;</p> <p>Someone comes to you and says: &#x201C;I don&#x2019;t think Sylvania is responsible for the meteor strikes. They have trade with us of billions of dinars annually.&#x201D; &#x201C;Well,&#x201D; you reply, &#x201C;the meteors struck cities close to Sylvania, there was suspicious activity in their stock market, and their ambassador spoke of heavenly vengeance afterward.&#x201D; Since these three arguments outweigh the first, you <em>keep</em> your belief that Sylvania is responsible&#x2014;you believe rather than disbelieve, qualitatively. Clearly, the balance of evidence weighs against Sylvania.</p> <p>Then another comes to you and says: &#x201C;I don&#x2019;t think Sylvania is responsible for the meteor strikes. Directing an asteroid strike is really hard. Sylvania doesn&#x2019;t even have a space program.&#x201D; You reply, &#x201C;But the meteors struck cities close to Sylvania, and their investors knew it, and the ambassador came right out and admitted it!&#x201D; Again, these three arguments outweigh the first (by three arguments against one argument), so you keep your belief that Sylvania is responsible.</p> <p>Indeed, your convictions are <em>strengthened.</em> On two separate occasions now, you have evaluated the balance of evidence, and both times the balance was tilted against Sylvania by a ratio of 3 to 1.</p> <p>You encounter further arguments by the pro-Sylvania traitors&#x2014;again, and again, and a hundred times again&#x2014;but each time the new argument is handily defeated by 3 to 1. And on every occasion, you feel yourself becoming more confident that Sylvania was indeed responsible, shifting your prior according to the f... </p>
I talked about a style of reasoning in which not a single contrary argument is allowed, with the result that every non-supporting observation has to be argued away. Here I suggest that when people encounter a contrary argument, they prevent themselves from downshifting their confidence by rehearsing already-known support. Suppose the country of Freedonia is debating whether its neighbor, Sylvania, is responsible for a recent rash of meteor strikes on its cities. There are several pieces of evidence suggesting this: the meteors struck cities close to the Sylvanian border; there was unusual activity in the Sylvanian stock markets before the strikes; and the Sylvanian ambassador Trentino was heard muttering about “heavenly vengeance.” Someone comes to you and says: “I don’t think Sylvania is responsible for the meteor strikes. They have trade with us of billions of dinars annually.” “Well,” you reply, “the meteors struck cities close to Sylvania, there was suspicious activity in their stock market, and their ambassador spoke of heavenly vengeance afterward.” Since these three arguments outweigh the first, you keep your belief that Sylvania is responsible—you believe rather than disbelieve, qualitatively. Clearly, the balance of evidence weighs against Sylvania. Then another comes to you and says: “I don’t think Sylvania is responsible for the meteor strikes. Directing an asteroid strike is really hard. Sylvania doesn’t even have a space program.” You reply, “But the meteors struck cities close to Sylvania, and their investors knew it, and the ambassador came right out and admitted it!” Again, these three arguments outweigh the first (by three arguments against one argument), so you keep your belief that Sylvania is responsible. Indeed, your convictions are strengthened. On two separate occasions now, you have evaluated the balance of evidence, and both times the balance was tilted against Sylvania by a ratio of 3 to 1. You encounter further arguments by the pro-Sy
600
2.0.0
Revision
false
null
null
CrosspostOutput
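The post above describes confidence inflating because already-counted support is rehearsed against each new counterargument. A minimal log-odds sketch of that error, added as an illustration; the prior and all likelihood ratios below are hypothetical numbers, not from the post:

```python
# In log-odds, each piece of evidence may be added exactly once.
# Rehearsing old support against every new counterargument double-counts it.
import math

def log_odds(p: float) -> float:
    return math.log(p / (1 - p))

def prob(lo: float) -> float:
    return 1 / (1 + math.exp(-lo))

prior = log_odds(0.5)  # illustrative prior on "Sylvania did it"

# Hypothetical log likelihood ratios: positive favors guilt.
pro = [math.log(3), math.log(2), math.log(2)]  # border strikes, stocks, ambassador
con = [math.log(1 / 4), math.log(1 / 10)]      # trade ties, no space program

# Correct accounting: every item counted once, whatever order it arrives in.
posterior = prior + sum(pro) + sum(con)
print(f"P(guilt), each argument counted once: {prob(posterior):.2f}")

# The fallacy: re-adding the three pro arguments each time a new
# counterargument arrives, so confidence rises with no new evidence.
fallacious = prior
for c in con:
    fallacious += c + sum(pro)  # rehearsed support counted again
print(f"P(guilt), rehearsing old support:     {prob(fallacious):.2f}")
```

With these numbers the honest posterior comes out around 0.23, while the rehearsal pattern climbs toward 0.78, despite both having seen exactly the same five arguments.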
627DZcvme7nLDrbZu
update-yourself-incrementally
Update Yourself Incrementally
null
false
false
false
null
nmk3nLpQE89dMRzzN
null
true
false
false
false
Post
null
2007-08-14T14:56:33.000Z
null
false
false
2
2
2018-01-30T00:32:03.501Z
false
false
post
[]
null
null
eiggX3eFgYi6vx3zW
29
100
116
false
0.000134
null
false
false
2021-01-07T23:04:56.502Z
null
rationality
null
null
null
false
false
null
null
null
false
false
null
null
null
null
null
null
null
null
null
null
false
null
null
null
null
XtphY3uYHwruKqDyG
null
null
null
false
null
[]
null
10
0
2007-08-14T14:56:33.000Z
false
false
null
null
true
false
false
0
0
0
null
null
null
null
null
null
zpuyY9rydY3kQGEKg
false
0
0
namesAttachedReactions
false
[]
4
null
null
null
null
[ { "__typename": "Tag", "_id": "hLp77TQsRkooioj86", "adminOnly": false, "afBaseScore": 3, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 9, "canEditUserIds": null, "core": false, "createdAt": "2020-08-18T22:02:39.454Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Probabilistic Reasoning", "needsReview": false, "noindex": false, "postCount": 57, "score": 9, "shortName": null, "slug": "probabilistic-reasoning", "suggestedAsFilter": false, "userId": "r38pkCm7wF4M44MDQ", "voteCount": 1, "wikiOnly": false }, { "__typename": "Tag", "_id": "tZsfB6WfpRy6kFb6q", "adminOnly": false, "afBaseScore": 9, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 20, "canEditUserIds": null, "core": false, "createdAt": "2020-04-24T20:32:21.413Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" }, { "_id": "4RnpNbKfsiCHDHLcu", "displayName": "haussteiner" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Conservation of Expected Evidence", "needsReview": false, "noindex": false, "postCount": 21, "score": 20, "shortName": null, "slug": "conservation-of-expected-evidence", "suggestedAsFilter": false, "userId": "nLbwLhBaQeG6tCNDN", "voteCount": 3, "wikiOnly": false }, { "__typename": "Tag", "_id": "mQbxDKHxPcKKRG4mb", "adminOnly": false, "afBaseScore": null, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [] }, "baseScore": 0, "canEditUserIds": null, "core": false, "createdAt": "2020-08-04T04:39:14.172Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Changing Your Mind", "needsReview": false, "noindex": false, "postCount": 29, "score": 0, "shortName": null, "slug": "changing-your-mind", "suggestedAsFilter": false, "userId": "r38pkCm7wF4M44MDQ", "voteCount": 0, "wikiOnly": false }, { "__typename": "Tag", "_id": "ZzxvopS4BwLuQy42n", "adminOnly": false, "afBaseScore": 9, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 19, "canEditUserIds": null, "core": false, "createdAt": "2020-05-24T18:04:01.718Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": 
"qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Rationalization", "needsReview": false, "noindex": false, "postCount": 83, "score": 19, "shortName": null, "slug": "rationalization", "suggestedAsFilter": false, "userId": "nLbwLhBaQeG6tCNDN", "voteCount": 2, "wikiOnly": false }, { "__typename": "Tag", "_id": "Ng8Gice9KNkncxqcj", "adminOnly": false, "afBaseScore": 0, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [] }, "baseScore": 1, "canEditUserIds": null, "core": true, "createdAt": "2020-06-14T22:24:17.072Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 100, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "iMqytjy9ns89Fzfyv", "displayName": "miakko" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Rationality", "needsReview": false, "noindex": false, "postCount": 4302, "score": 1, "shortName": null, "slug": "rationality", "suggestedAsFilter": true, "userId": "r38pkCm7wF4M44MDQ", "voteCount": 1, "wikiOnly": false } ]
null
0
0
null
false
null
null
0
100
0
0
13
0
nmk3nLpQE89dMRzzN
eliezer_yudkowsky
2009-02-23T21:58:56.739Z
Eliezer_Yudkowsky
Eliezer Yudkowsky
null
null
null
150,014
1,892
false
false
null
null
951
7,677
40
18
120
1
3,803
r38pkCm7wF4M44MDQ
User
reign-of-terror
[ "sBWszXPhPsNNemv4Q", "YBHSPmZEfyyY2E2au" ]
true
[ "trustLevel1", "alignmentVoters", "alignmentForum", "canModeratePersonal" ]
null
null
627DZcvme7nLDrbZu
SocialPreviewType
eiggX3eFgYi6vx3zW
<p><a href="https://www.lesswrong.com/lw/gw/politics_is_the_mindkiller/">Politics is the mind-killer</a>.&nbsp; Debate is war, <a href="https://www.lesswrong.com/lw/gz/policy_debates_should_not_appear_onesided/">arguments are soldiers</a>.&nbsp; There is the temptation to search for ways to <a href="https://www.lesswrong.com/lw/i4/belief_in_belief/">interpret every possible experimental result</a> to confirm your theory, like securing a citadel against every possible line of attack.&nbsp; This you cannot do.&nbsp; It is mathematically impossible. <a href="https://www.lesswrong.com/lw/ii/conservation_of_expected_evidence/">For every expectation of evidence, there is an equal and opposite expectation of counterevidence.</a></p><p>But it’s okay if your cherished belief isn’t <i>perfectly</i> defended. If the hypothesis is that the coin comes up heads 95% of the time, then one time in twenty you will expect to see what looks like contrary evidence. This is okay. It’s normal. It’s even expected, so long as you’ve got nineteen supporting observations for every contrary one. A probabilistic model can <a href="https://www.lesswrong.com/lw/ig/i_defy_the_data/">take a hit or two</a>, and still survive, so long as the hits don't <i>keep on</i> coming in.<a href="#fn2x25"><sup>2</sup></a></p><p>Yet it is widely believed, especially in the court of public opinion, that a true theory can have <i>no</i> failures and a false theory <i>no</i> successes.</p><p>You find people holding up a single piece of what they conceive to be evidence, and claiming that their theory can “explain” it, as though this were all the support that any theory needed. Apparently a false theory can have <i>no</i> supporting evidence; it is impossible for a false theory to fit even a single event. Thus, a single piece of confirming evidence is all that any theory needs.</p><p>It is only slightly less foolish to hold up a single piece of <i>probabilistic</i> counterevidence as disproof, as though it were impossible for a correct theory to have even a <i>slight</i> argument against it. But this is how humans have argued for ages and ages, trying to defeat all enemy arguments, while denying the enemy even a single shred of support. People want their debates to be one-sided; they are accustomed to a world in which their preferred theories have not one iota of antisupport. Thus, allowing a single item of probabilistic counterevidence would be the end of the world.</p><p>I just know someone in the audience out there is going to say, “But you <i>can’t</i> concede even a single point if you want to win debates in the real world! If you concede that any counterarguments exist, the Enemy will harp on them over and over—you can’t let the Enemy do that! You’ll <i>lose!</i> What could be more viscerally terrifying than <i>that?</i>”</p><p>Whatever. Rationality is not for winning debates, it is for deciding which side to j... </p>
Politics is the mind-killer.  Debate is war, arguments are soldiers.  There is the temptation to search for ways to interpret every possible experimental result to confirm your theory, like securing a citadel against every possible line of attack.  This you cannot do.  It is mathematically impossible. For every expectation of evidence, there is an equal and opposite expectation of counterevidence. But it’s okay if your cherished belief isn’t perfectly defended. If the hypothesis is that the coin comes up heads 95% of the time, then one time in twenty you will expect to see what looks like contrary evidence. This is okay. It’s normal. It’s even expected, so long as you’ve got nineteen supporting observations for every contrary one. A probabilistic model can take a hit or two, and still survive, so long as the hits don't keep on coming in.2 Yet it is widely believed, especially in the court of public opinion, that a true theory can have no failures and a false theory no successes. You find people holding up a single piece of what they conceive to be evidence, and claiming that their theory can “explain” it, as though this were all the support that any theory needed. Apparently a false theory can have no supporting evidence; it is impossible for a false theory to fit even a single event. Thus, a single piece of confirming evidence is all that any theory needs. It is only slightly less foolish to hold up a single piece of probabilistic counterevidence as disproof, as though it were impossible for a correct theory to have even a slight argument against it. But this is how humans have argued for ages and ages, trying to defeat all enemy arguments, while denying the enemy even a single shred of support. People want their debates to be one-sided; they are accustomed to a world in which their preferred theories have not one iota of antisupport. Thus, allowing a single item of probabilistic counterevidence would be the end of the world. I just know someone in the audienc
974
2.2.0
Revision
false
null
null
CrosspostOutput
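The post above notes that a hypothesis predicting 95% heads should survive the occasional tail, so long as tails arrive at roughly the predicted one-in-twenty rate. A minimal simulation of that incremental updating, added for illustration; the flip count, seed, and the 0.5 prior are arbitrary assumptions:

```python
# A probabilistic model "takes a hit" on each contrary observation but
# survives, because supporting observations outnumber it 19 to 1.
import random

random.seed(0)
p_biased = 0.5            # prior P(coin is 95%-heads) vs. P(coin is fair)
bias, fair = 0.95, 0.50

for _ in range(100):
    heads = random.random() < 0.95  # simulate a world where the coin is biased
    # One Bayes update per flip; a tail is counterevidence, not a refutation.
    like_biased = bias if heads else 1 - bias
    like_fair = fair if heads else 1 - fair
    p_biased = (p_biased * like_biased /
                (p_biased * like_biased + (1 - p_biased) * like_fair))

print(f"P(biased) after 100 flips: {p_biased:.4f}")
# Each tail multiplies the odds by 0.05/0.50 = 1/10; each head by 0.95/0.50 = 1.9.
# At ~19 heads per tail, the product still climbs toward certainty.
```

The hits only become fatal when they keep coming in faster than the hypothesis predicts, which is exactly the post's criterion for abandoning a belief.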
jiBFC7DcCrZjGmZnJ
conservation-of-expected-evidence
Conservation of Expected Evidence
null
false
false
false
null
nmk3nLpQE89dMRzzN
null
true
false
false
false
Post
null
2007-08-13T15:55:26.000Z
null
false
false
2
2
2018-01-30T00:32:03.501Z
false
false
post
[]
null
null
EynYqvkWrvugEXqRf
82
241
293
false
0.000323
null
false
false
2024-11-14T16:47:54.323Z
null
rationality
null
null
null
false
false
null
null
null
false
false
null
null
null
null
null
null
null
null
null
null
false
null
null
null
null
XtphY3uYHwruKqDyG
null
null
null
false
null
[]
null
23
0
null
false
false
null
null
true
false
false
0
0
0
null
null
null
null
null
null
v4kWJwFqqefibE8GS
false
0
0
namesAttachedReactions
false
[]
3
null
null
null
null
[ { "__typename": "Tag", "_id": "tZsfB6WfpRy6kFb6q", "adminOnly": false, "afBaseScore": 9, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 20, "canEditUserIds": null, "core": false, "createdAt": "2020-04-24T20:32:21.413Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" }, { "_id": "4RnpNbKfsiCHDHLcu", "displayName": "haussteiner" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Conservation of Expected Evidence", "needsReview": false, "noindex": false, "postCount": 21, "score": 20, "shortName": null, "slug": "conservation-of-expected-evidence", "suggestedAsFilter": false, "userId": "nLbwLhBaQeG6tCNDN", "voteCount": 3, "wikiOnly": false }, { "__typename": "Tag", "_id": "LhX3F2SvGDarZCuh6", "adminOnly": false, "afBaseScore": 9, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 20, "canEditUserIds": null, "core": false, "createdAt": "2020-04-29T02:02:50.973Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" }, { "_id": "dRpSdnsGWhYrNj7ki", "displayName": "AbsurdVoid" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Bayes' Theorem", "needsReview": false, "noindex": false, "postCount": 182, "score": 20, "shortName": null, "slug": "bayes-theorem", "suggestedAsFilter": false, "userId": "nLbwLhBaQeG6tCNDN", "voteCount": 3, "wikiOnly": false }, { "__typename": "Tag", "_id": "Ng8Gice9KNkncxqcj", "adminOnly": false, "afBaseScore": 0, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [] }, "baseScore": 1, "canEditUserIds": null, "core": true, "createdAt": "2020-06-14T22:24:17.072Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 100, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "iMqytjy9ns89Fzfyv", "displayName": "miakko" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Rationality", "needsReview": false, "noindex": false, "postCount": 4302, "score": 1, "shortName": null, "slug": "rationality", "suggestedAsFilter": true, "userId": "r38pkCm7wF4M44MDQ", "voteCount": 1, "wikiOnly": false } ]
null
0
0
null
false
null
null
0
241
0
0
31
0
nmk3nLpQE89dMRzzN
eliezer_yudkowsky
2009-02-23T21:58:56.739Z
Eliezer_Yudkowsky
Eliezer Yudkowsky
null
null
null
150,014
1,892
false
false
null
null
951
7,677
40
18
120
1
3,803
r38pkCm7wF4M44MDQ
User
reign-of-terror
[ "sBWszXPhPsNNemv4Q", "YBHSPmZEfyyY2E2au" ]
true
[ "trustLevel1", "alignmentVoters", "alignmentForum", "canModeratePersonal" ]
null
null
jiBFC7DcCrZjGmZnJ
SocialPreviewType
EynYqvkWrvugEXqRf
<p>Friedrich Spee von Langenfeld, a priest who heard the confessions of condemned witches, wrote in 1631 the <i>Cautio Criminalis</i> (“prudence in criminal cases”), in which he bitingly described the decision tree for condemning accused witches: If the witch had led an evil and improper life, she was guilty; if she had led a good and proper life, this too was a proof, for witches dissemble and try to appear especially virtuous. After the woman was put in prison: if she was afraid, this proved her guilt; if she was not afraid, this proved her guilt, for witches characteristically pretend innocence and wear a bold front. Or on hearing of a denunciation of witchcraft against her, she might seek flight or remain; if she ran, that proved her guilt; if she remained, the devil had detained her so she could not get away.</p><p>Spee acted as confessor to many witches; he was thus in a position to observe <i>every</i> branch of the accusation tree, that no matter <i>what</i> the accused witch said or did, it was held as proof against her. In any individual case, you would only hear one branch of the dilemma. It is for this reason that scientists write down their experimental predictions in advance.</p><p>But <i>you can’t have it both ways</i> —as a matter of probability theory, not mere fairness. The rule that “absence of evidence <i>is</i> evidence of absence” is a special case of a more general law, which I would name Conservation of Expected Evidence: the <i>expectation</i> of the posterior probability, after viewing the evidence, must equal the prior probability.</p><blockquote><p><strong>P(H) = P(H)</strong><br><strong>P(H) = P(H,E) + P(H,~E)</strong><br><strong>P(H) = P(H|E)*P(E) + P(H|~E)*P(~E)</strong></p></blockquote><p><i>Therefore,</i> for every expectation of evidence, there is an equal and opposite expectation of counterevidence.</p><p>If you expect a strong probability of seeing weak evidence in one direction, it must be balanced by a weak expectation of seeing strong evidence in the other direction. If you’re very confident in your theory, and therefore anticipate seeing an outcome that matches your hypothesis, this can only provide a very small increment to your belief (it is already close to 1); but the unexpected failure of your prediction would (and must) deal your confidence a huge blow. On <i>average</i>, you must expect to be <i>exactly</i> as confident as when you started out. Equivalently, the mere <i>expectation</i> of encountering evidence—before you’ve actually seen it—should not shift your prior beliefs.</p><p>So if you claim that “no... </p>
Friedrich Spee von Langenfeld, a priest who heard the confessions of condemned witches, wrote in 1631 the Cautio Criminalis (“prudence in criminal cases”), in which he bitingly described the decision tree for condemning accused witches: If the witch had led an evil and improper life, she was guilty; if she had led a good and proper life, this too was a proof, for witches dissemble and try to appear especially virtuous. After the woman was put in prison: if she was afraid, this proved her guilt; if she was not afraid, this proved her guilt, for witches characteristically pretend innocence and wear a bold front. Or on hearing of a denunciation of witchcraft against her, she might seek flight or remain; if she ran, that proved her guilt; if she remained, the devil had detained her so she could not get away. Spee acted as confessor to many witches; he was thus in a position to observe every branch of the accusation tree, that no matter what the accused witch said or did, it was held as proof against her. In any individual case, you would only hear one branch of the dilemma. It is for this reason that scientists write down their experimental predictions in advance. But you can’t have it both ways —as a matter of probability theory, not mere fairness. The rule that “absence of evidence is evidence of absence” is a special case of a more general law, which I would name Conservation of Expected Evidence: the expectation of the posterior probability, after viewing the evidence, must equal the prior probability. > P(H) = P(H) > P(H) = P(H,E) + P(H,~E) > P(H) = P(H|E)*P(E) + P(H|~E)*P(~E) Therefore, for every expectation of evidence, there is an equal and opposite expectation of counterevidence. If you expect a strong probability of seeing weak evidence in one direction, it must be balanced by a weak expectation of seeing strong evidence in the other direction. If you’re very confident in your theory, and therefore anticipate seeing an outcome that matches your hypothesis
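The conservation law stated in this post body can be checked numerically. The following is a minimal sketch (not from the post; all probability values are illustrative) showing that the probability-weighted average of the two possible posteriors recovers the prior exactly:

```python
# Numerical check of Conservation of Expected Evidence.
# All probability values here are illustrative assumptions.

p_h = 0.3               # prior P(H)
p_e_given_h = 0.8       # likelihood P(E|H)
p_e_given_not_h = 0.4   # likelihood P(E|~H)

# Marginal P(E) = P(E|H)P(H) + P(E|~H)P(~H)
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Posteriors by Bayes' theorem, one for each possible observation
p_h_given_e = p_e_given_h * p_h / p_e
p_h_given_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)

# Expected posterior, weighted by how likely each observation is
expected_posterior = p_h_given_e * p_e + p_h_given_not_e * (1 - p_e)

assert abs(expected_posterior - p_h) < 1e-12
print(p_h_given_e, p_h_given_not_e, expected_posterior)  # ~0.462, 0.125, 0.3
```

With these numbers, seeing E nudges the belief up only slightly, while seeing ~E would knock it down hard; on average the two movements cancel, which is the post's point.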
740
2.1.0
Revision
false
null
null
CrosspostOutput
mnS2WYLCGJP2kQkRn
absence-of-evidence-is-evidence-of-absence
Absence of Evidence Is Evidence of Absence
null
false
false
false
null
nmk3nLpQE89dMRzzN
null
true
false
false
false
Post
null
2007-08-12T20:34:16.000Z
null
false
false
2
2
2018-01-30T00:32:03.501Z
false
false
post
[]
null
null
6xHJbP2Cv5mhB9xEz
119
153
176
false
0.000198
null
false
false
2025-01-01T19:23:27.051Z
null
rationality
null
null
null
false
false
null
null
null
false
false
null
null
null
null
null
null
null
null
null
null
false
null
null
null
null
XtphY3uYHwruKqDyG
null
null
null
false
null
[]
null
8
0
null
false
false
null
null
true
false
false
0
0
0
null
null
null
null
null
null
EsB8F65EdTdS4CsFM
false
0
0
namesAttachedReactions
false
[]
3
null
null
null
null
[ { "__typename": "Tag", "_id": "tZsfB6WfpRy6kFb6q", "adminOnly": false, "afBaseScore": 9, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 20, "canEditUserIds": null, "core": false, "createdAt": "2020-04-24T20:32:21.413Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" }, { "_id": "4RnpNbKfsiCHDHLcu", "displayName": "haussteiner" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Conservation of Expected Evidence", "needsReview": false, "noindex": false, "postCount": 21, "score": 20, "shortName": null, "slug": "conservation-of-expected-evidence", "suggestedAsFilter": false, "userId": "nLbwLhBaQeG6tCNDN", "voteCount": 3, "wikiOnly": false } ]
null
0
0
null
false
null
null
0
153
0
0
17
0
nmk3nLpQE89dMRzzN
eliezer_yudkowsky
2009-02-23T21:58:56.739Z
Eliezer_Yudkowsky
Eliezer Yudkowsky
null
null
null
150,014
1,892
false
false
null
null
951
7,677
40
18
120
1
3,803
r38pkCm7wF4M44MDQ
User
reign-of-terror
[ "sBWszXPhPsNNemv4Q", "YBHSPmZEfyyY2E2au" ]
true
[ "trustLevel1", "alignmentVoters", "alignmentForum", "canModeratePersonal" ]
null
null
mnS2WYLCGJP2kQkRn
SocialPreviewType
6xHJbP2Cv5mhB9xEz
<p>From Robyn Dawes’s <i>Rational Choice in an Uncertain World</i>:</p><blockquote><p>In fact, this post-hoc fitting of evidence to hypothesis was involved in a most grievous chapter in United States history: the internment of Japanese-Americans at the beginning of the Second World War. When California governor Earl Warren testified before a congressional hearing in San Francisco on February 21, 1942, a questioner pointed out that there had been no sabotage or any other type of espionage by the Japanese-Americans up to that time. Warren responded, “I take the view that this lack [of subversive activity] is the most ominous sign in our whole situation. It convinces me more than perhaps any other factor that the sabotage we are to get, the Fifth Column activities are to get, are timed just like Pearl Harbor was timed . . . I believe we are just being lulled into a false sense of security.”</p></blockquote><p>Consider Warren’s argument from a Bayesian perspective. When we see evidence, hypotheses that assigned a <i>higher</i> likelihood to that evidence gain probability, at the expense of hypotheses that assigned a <i>lower</i> likelihood to the evidence. This is a phenomenon of <i>relative</i> likelihoods and <i>relative</i> probabilities. You can assign a high likelihood to the evidence and still lose probability mass to some other hypothesis, if that other hypothesis assigns a likelihood that is even higher.</p><p>Warren seems to be arguing that, given that we see no sabotage, this <i>confirms</i> that a Fifth Column exists. You could argue that a Fifth Column <i>might</i> delay its sabotage. But the likelihood is still higher that the <i>absence</i> of a Fifth Column would perform an absence of sabotage.</p><p>Let E stand for the observation of sabotage, and ¬E for the observation of no sabotage. The symbol H1 stands for the hypothesis of a Japanese-American Fifth Column, and H2 for the hypothesis that no Fifth Column exists. The <i>conditional probability</i> P(E | H), or “E given H,” is how confidently we’d expect to see the evidence E if we assumed the hypothesis H were true.</p><p>Whatever the likelihood that a Fifth Column would do no sabotage, the probability P(¬E | H1), it won’t be as large as the likelihood that there’s no sabotage <i>given that there’s no Fifth Column</i>, the probability P(¬E | H2). So observing a lack of sabotage increases the probability that no Fifth Column exists.</p><p>A lack of sabotage doesn’t <i>prove</i> that no Fifth Column exists. Absence of <i>proof</i> is not <i>proof</i> of ... </p>
From Robyn Dawes’s Rational Choice in an Uncertain World: > In fact, this post-hoc fitting of evidence to hypothesis was involved in a most grievous chapter in United States history: the internment of Japanese-Americans at the beginning of the Second World War. When California governor Earl Warren testified before a congressional hearing in San Francisco on February 21, 1942, a questioner pointed out that there had been no sabotage or any other type of espionage by the Japanese-Americans up to that time. Warren responded, “I take the view that this lack [of subversive activity] is the most ominous sign in our whole situation. It convinces me more than perhaps any other factor that the sabotage we are to get, the Fifth Column activities are to get, are timed just like Pearl Harbor was timed . . . I believe we are just being lulled into a false sense of security.” Consider Warren’s argument from a Bayesian perspective. When we see evidence, hypotheses that assigned a higher likelihood to that evidence gain probability, at the expense of hypotheses that assigned a lower likelihood to the evidence. This is a phenomenon of relative likelihoods and relative probabilities. You can assign a high likelihood to the evidence and still lose probability mass to some other hypothesis, if that other hypothesis assigns a likelihood that is even higher. Warren seems to be arguing that, given that we see no sabotage, this confirms that a Fifth Column exists. You could argue that a Fifth Column might delay its sabotage. But the likelihood is still higher that the absence of a Fifth Column would perform an absence of sabotage. Let E stand for the observation of sabotage, and ¬E for the observation of no sabotage. The symbol H1 stands for the hypothesis of a Japanese-American Fifth Column, and H2 for the hypothesis that no Fifth Column exists. The conditional probability P(E | H), or “E given H,” is how confidently we’d expect to see the evidence E if we assumed the hypothesis H wer
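The likelihood comparison in this post body can be made concrete with a two-hypothesis Bayes update. A minimal sketch, with illustrative numbers of my own choosing (the post assigns none): as long as P(¬E | H2) exceeds P(¬E | H1), observing no sabotage must move probability mass from H1 to H2.

```python
# Two-hypothesis Bayes update for the Warren example.
# H1: a Fifth Column exists; H2: it does not. Numbers are illustrative.

p_h1 = 0.5                 # prior on H1
p_not_e_given_h1 = 0.3     # a Fifth Column might still delay its sabotage
p_not_e_given_h2 = 0.95    # no Fifth Column -> almost surely no sabotage

# Marginal probability of observing no sabotage
p_not_e = p_not_e_given_h1 * p_h1 + p_not_e_given_h2 * (1 - p_h1)

# Posterior on H1 after observing ~E
p_h1_given_not_e = p_not_e_given_h1 * p_h1 / p_not_e

print(p_h1_given_not_e)  # 0.24 < 0.5: H1 loses probability mass, as it must
```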
753
2.1.0
Revision
false
null
null
CrosspostOutput
vrHRcEDMjZcx5Yfru
i-defy-the-data
I Defy the Data!
null
false
false
false
null
nmk3nLpQE89dMRzzN
null
true
false
false
false
Post
null
2007-08-11T21:33:19.000Z
null
false
false
2
2
2018-01-30T00:32:03.501Z
false
false
post
[]
null
null
5c6392afbcb4ac6367c16208
12
82
104
false
0.000122
null
false
false
2017-06-17T04:13:32.027Z
null
null
null
null
null
false
false
null
null
null
false
false
null
null
null
null
null
null
null
null
null
null
false
null
null
null
null
XtphY3uYHwruKqDyG
null
null
null
false
null
[]
null
9
0
null
false
false
null
null
true
false
false
0
0
0
null
null
null
null
null
null
null
false
0
0
namesAttachedReactions
false
[]
3
null
null
null
null
[ { "__typename": "Tag", "_id": "5f5c37ee1b5cdee568cfb1cf", "adminOnly": false, "afBaseScore": null, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [] }, "baseScore": 0, "canEditUserIds": null, "core": false, "createdAt": "2020-09-11T19:58:52.226Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Defying The Data", "needsReview": false, "noindex": false, "postCount": 1, "score": 0, "shortName": null, "slug": "defying-the-data", "suggestedAsFilter": false, "userId": "cn4SiEmqWbu7K9em5", "voteCount": 0, "wikiOnly": true }, { "__typename": "Tag", "_id": "ZpG9rheyAkgCoEQea", "adminOnly": false, "afBaseScore": 3, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 9, "canEditUserIds": null, "core": false, "createdAt": "2020-07-10T11:53:33.735Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Practice & Philosophy of Science", "needsReview": false, "noindex": false, "postCount": 262, "score": 9, "shortName": null, "slug": "practice-and-philosophy-of-science", "suggestedAsFilter": false, "userId": "qxJ28GN72aiJu96iF", "voteCount": 1, "wikiOnly": false }, { "__typename": "Tag", "_id": "NSMKfa8emSbGNXRKD", "adminOnly": false, "afBaseScore": 0, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [] }, "baseScore": 0, "canEditUserIds": null, "core": false, "createdAt": "2020-05-15T23:11:08.425Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Religion", "needsReview": false, "noindex": false, "postCount": 218, "score": 0, "shortName": null, "slug": "religion", "suggestedAsFilter": false, "userId": "qgdGA4ZEyW7zNdK84", "voteCount": 0, "wikiOnly": false }, { "__typename": "Tag", "_id": "vg4LDxjdwHLotCm8w", "adminOnly": false, "afBaseScore": 9, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 19, "canEditUserIds": null, "core": false, "createdAt": "2020-06-02T20:55:24.286Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Replication Crisis", "needsReview": false, "noindex": false, "postCount": 66, "score": 19, "shortName": null, "slug": "replication-crisis", "suggestedAsFilter": false, "userId": 
"nLbwLhBaQeG6tCNDN", "voteCount": 2, "wikiOnly": false } ]
null
0
0
null
false
null
null
0
82
0
0
15
0
nmk3nLpQE89dMRzzN
eliezer_yudkowsky
2009-02-23T21:58:56.739Z
Eliezer_Yudkowsky
Eliezer Yudkowsky
null
null
null
150,014
1,892
false
false
null
null
951
7,677
40
18
120
1
3,803
r38pkCm7wF4M44MDQ
User
reign-of-terror
[ "sBWszXPhPsNNemv4Q", "YBHSPmZEfyyY2E2au" ]
true
[ "trustLevel1", "alignmentVoters", "alignmentForum", "canModeratePersonal" ]
null
null
vrHRcEDMjZcx5Yfru
SocialPreviewType
5c6392afbcb4ac6367c16208
<p>One of the great weaknesses of Science is this mistaken idea that if an experiment contradicts the dominant theory, we should throw out the theory instead of the experiment.</p><p>Experiments <em>can</em> go awry.&nbsp; They can contain design flaws. They can be deliberately corrupted.&nbsp; They can be unconsciously corrupted.&nbsp; They can be selectively reported.&nbsp; Most of all, 1 time in 20 they can be &quot;statistically significant&quot; by sheer coincidence, and there are a lot of experiments out there.</p><p>Unfortunately, Science has this notion that you can never go against an honestly obtained experimental result.&nbsp; So, when someone obtains an experimental result that contradicts the standard model, researchers are faced with a dilemma for resolving their cognitive dissonance: they either have to <em>immediately</em> throw away the standard model, or else <em>attack the experiment</em> - accuse the researchers of dishonesty, or flawed design, or conflict of interest...<br /> </p><p>Someone once presented me with a new study on the effects of intercessory prayer (that is, people praying for patients who are not told about the prayer), which showed 50% of the prayed-for patients achieving success at <a href="http://www.reproductivemedicine.com/Features/2001/2001Sep.htm">in-vitro fertilization</a>, versus 25% of the control group.&nbsp; I <em>liked</em> this claim.&nbsp; It had a nice large effect size.&nbsp; Claims of blatant impossible effects are much more pleasant to deal with than claims of small impossible effects that are &quot;statistically significant&quot;.</p><p>So I cheerfully said:&nbsp; &quot;I defy the data.&quot;</p><a id="more"></a><p>My original phrasing was actually &quot;I deny the data&quot;.&nbsp; Nonetheless I said it outright, without apology, and with deliberate insolence.&nbsp; I am keeping my theory; your experiment is wrong.</p><p>If an experimental result contradicts the Standard Model, this is an <em>important</em> fact.&nbsp; It needs to be openly acknowledged.&nbsp; An experiment that makes traditionalists <em>want</em> to discard the data - or even an experiment that makes traditionalists very skeptical of the data - should be a high priority&nbsp; for replication.&nbsp; An experiment <em>worth defying</em> should command attention!</p><p>But it is not socially acceptable to say, &quot;The hell with your experimental falsification, I'm keeping my theory.&quot;&nbsp; So the data has to be defied covertly - by character assassination of the researchers, by sly innuendos, by dire hints of controversy.&nbsp; The data ha... </p>
One of the great weaknesses of Science is this mistaken idea that if an experiment contradicts the dominant theory, we should throw out the theory instead of the experiment. Experiments can go awry.  They can contain design flaws. They can be deliberately corrupted.  They can be unconsciously corrupted.  They can be selectively reported.  Most of all, 1 time in 20 they can be "statistically significant" by sheer coincidence, and there are a lot of experiments out there. Unfortunately, Science has this notion that you can never go against an honestly obtained experimental result.  So, when someone obtains an experimental result that contradicts the standard model, researchers are faced with a dilemma for resolving their cognitive dissonance: they either have to immediately throw away the standard model, or else attack the experiment - accuse the researchers of dishonesty, or flawed design, or conflict of interest... Someone once presented me with a new study on the effects of intercessory prayer (that is, people praying for patients who are not told about the prayer), which showed 50% of the prayed-for patients achieving success at in-vitro fertilization, versus 25% of the control group.  I liked this claim.  It had a nice large effect size.  Claims of blatant impossible effects are much more pleasant to deal with than claims of small impossible effects that are "statistically significant". So I cheerfully said:  "I defy the data." My original phrasing was actually "I deny the data".  Nonetheless I said it outright, without apology, and with deliberate insolence.  I am keeping my theory; your experiment is wrong. If an experimental result contradicts the Standard Model, this is an important fact.  It needs to be openly acknowledged.  An experiment that makes traditionalists want to discard the data - or even an experiment that makes traditionalists very skeptical of the data - should be a high priority  for replication.  An experiment worth defying should comm
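The "1 time in 20" remark in this post body is easy to simulate. A minimal sketch, assuming a correctly calibrated test (under a true null hypothesis, p-values are uniform on [0, 1]): roughly 5% of null experiments clear the p < 0.05 bar by sheer coincidence.

```python
# Simulation of the base rate of coincidental "significance".
# Setup is hypothetical; under a true null, p-values are uniform.
import random

random.seed(0)
alpha = 0.05
n_experiments = 100_000

# Draw one p-value per null experiment and count the false positives.
false_positives = sum(random.random() < alpha for _ in range(n_experiments))

print(false_positives / n_experiments)  # close to 0.05, i.e. 1 in 20
```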
763
1.0.0
Revision
false
null
null
CrosspostOutput
5JDkW4MYXit2CquLs
your-strength-as-a-rationalist
Your Strength as a Rationalist
null
false
false
false
null
nmk3nLpQE89dMRzzN
null
true
false
false
false
Post
null
2007-08-11T00:21:20.000Z
null
false
false
2
2
2018-01-30T00:32:03.501Z
false
false
post
[]
null
null
Yr6nAqbHCNjAQ5ozE
123
246
298
false
0.000327
null
false
false
2024-04-19T16:26:25.345Z
null
rationality
null
null
null
false
false
null
null
null
false
false
null
null
null
null
null
null
null
null
null
null
false
null
null
null
null
XtphY3uYHwruKqDyG
null
null
null
false
null
[]
null
19
0
null
false
false
null
null
true
false
false
0
0
0
null
null
null
null
null
null
EsECWq2CprAJiRbLc
false
0
0
namesAttachedReactions
false
[]
3
null
null
null
null
[ { "__typename": "Tag", "_id": "5gcpKG2XEAZGj5DEf", "adminOnly": false, "afBaseScore": 9, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 19, "canEditUserIds": null, "core": false, "createdAt": "2020-07-15T19:10:11.841Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Noticing", "needsReview": false, "noindex": false, "postCount": 35, "score": 19, "shortName": null, "slug": "noticing", "suggestedAsFilter": false, "userId": "gXeEWGjTWyqgrQTzR", "voteCount": 2, "wikiOnly": false }, { "__typename": "Tag", "_id": "ZzxvopS4BwLuQy42n", "adminOnly": false, "afBaseScore": 9, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 19, "canEditUserIds": null, "core": false, "createdAt": "2020-05-24T18:04:01.718Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Rationalization", "needsReview": false, "noindex": false, "postCount": 83, "score": 19, "shortName": null, "slug": "rationalization", "suggestedAsFilter": false, "userId": "nLbwLhBaQeG6tCNDN", "voteCount": 2, "wikiOnly": false }, { "__typename": "Tag", "_id": "Ng8Gice9KNkncxqcj", "adminOnly": false, "afBaseScore": 0, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [] }, "baseScore": 1, "canEditUserIds": null, "core": true, "createdAt": "2020-06-14T22:24:17.072Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 100, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "iMqytjy9ns89Fzfyv", "displayName": "miakko" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Rationality", "needsReview": false, "noindex": false, "postCount": 4302, "score": 1, "shortName": null, "slug": "rationality", "suggestedAsFilter": true, "userId": "r38pkCm7wF4M44MDQ", "voteCount": 1, "wikiOnly": false } ]
null
0
0
null
false
null
null
0
246
0
0
25
0
nmk3nLpQE89dMRzzN
eliezer_yudkowsky
2009-02-23T21:58:56.739Z
Eliezer_Yudkowsky
Eliezer Yudkowsky
null
null
null
150,014
1,892
false
false
null
null
951
7,677
40
18
120
1
3,803
r38pkCm7wF4M44MDQ
User
reign-of-terror
[ "sBWszXPhPsNNemv4Q", "YBHSPmZEfyyY2E2au" ]
true
[ "trustLevel1", "alignmentVoters", "alignmentForum", "canModeratePersonal" ]
null
null
5JDkW4MYXit2CquLs
SocialPreviewType
Yr6nAqbHCNjAQ5ozE
<p>The following happened to me in an IRC chatroom, long enough ago that I was still hanging around in IRC chatrooms. Time has fuzzed the memory and my report may be imprecise.</p><p>So there I was, in an IRC chatroom, when someone reports that a friend of his needs medical advice. His friend says that he’s been having sudden chest pains, so he called an ambulance, and the ambulance showed up, but the paramedics told him it was nothing, and left, and now the chest pains are getting worse. What should his friend do?</p><p>I was confused by this story. I remembered reading about homeless people in New York who would call ambulances just to be taken someplace warm, and how the paramedics always had to take them to the emergency room, even on the 27th iteration. Because if they didn’t, the ambulance company could be sued for lots and lots of money. Likewise, emergency rooms are legally obligated to treat anyone, regardless of ability to pay.<sup>1</sup> So I didn’t quite understand how the described events could have happened. <i>Anyone</i> reporting sudden chest pains should have been hauled off by an ambulance instantly.</p><p>And this is where I fell down as a rationalist. I remembered several occasions where my doctor would completely fail to panic at the report of symptoms that seemed, to me, very alarming. And the Medical Establishment was always right. Every single time. I had chest pains myself, at one point, and the doctor patiently explained to me that I was describing chest muscle pain, not a heart attack. So I said into the IRC channel, “Well, if the paramedics told your friend it was nothing, it must <i>really be</i> nothing—they’d have hauled him off if there was the tiniest chance of serious trouble.”</p><p>Thus I managed to explain the story within my existing model, though the fit still felt a little forced . . .</p><p>Later on, the fellow comes back into the IRC chatroom and says his friend made the whole thing up. Evidently this was not one of his more reliable friends.</p><p>I should have realized, perhaps, that an unknown acquaintance of an acquaintance in an IRC channel might be less reliable than a published journal article. Alas, belief is easier than disbelief; we believe instinctively, but disbelief requires a conscious effort.<sup>2</sup></p><p>So instead, by dint of mighty straining, I forced my model of reality to explain an anomaly that <i>never actually happened.</i> And I <i>knew</i> how embarrassing this was. I <i>knew</i> that the usef... </p>
The following happened to me in an IRC chatroom, long enough ago that I was still hanging around in IRC chatrooms. Time has fuzzed the memory and my report may be imprecise. So there I was, in an IRC chatroom, when someone reports that a friend of his needs medical advice. His friend says that he’s been having sudden chest pains, so he called an ambulance, and the ambulance showed up, but the paramedics told him it was nothing, and left, and now the chest pains are getting worse. What should his friend do? I was confused by this story. I remembered reading about homeless people in New York who would call ambulances just to be taken someplace warm, and how the paramedics always had to take them to the emergency room, even on the 27th iteration. Because if they didn’t, the ambulance company could be sued for lots and lots of money. Likewise, emergency rooms are legally obligated to treat anyone, regardless of ability to pay.1 So I didn’t quite understand how the described events could have happened. Anyone reporting sudden chest pains should have been hauled off by an ambulance instantly. And this is where I fell down as a rationalist. I remembered several occasions where my doctor would completely fail to panic at the report of symptoms that seemed, to me, very alarming. And the Medical Establishment was always right. Every single time. I had chest pains myself, at one point, and the doctor patiently explained to me that I was describing chest muscle pain, not a heart attack. So I said into the IRC channel, “Well, if the paramedics told your friend it was nothing, it must really be nothing—they’d have hauled him off if there was the tiniest chance of serious trouble.” Thus I managed to explain the story within my existing model, though the fit still felt a little forced . . . Later on, the fellow comes back into the IRC chatroom and says his friend made the whole thing up. Evidently this was not one of his more reliable friends. I should have realized, perhaps,
734
2.3.0
Revision
false
null
null
CrosspostOutput
dLzZWNGD23zqNLvt3
the-apocalypse-bet
The Apocalypse Bet
null
false
false
false
null
nmk3nLpQE89dMRzzN
null
true
false
false
false
Post
null
2007-08-09T17:23:33.000Z
null
false
false
2
2
null
false
false
post
[]
null
null
5c63921ebcb4ac6367c1326a
51
36
50
false
0.000054
null
false
false
2013-08-09T23:42:06.394Z
null
null
null
null
null
false
false
null
null
null
false
false
null
null
null
null
null
null
null
null
null
null
false
null
null
null
null
XtphY3uYHwruKqDyG
null
null
null
false
null
[]
null
3
0
2007-08-09T17:23:33.000Z
false
false
null
null
true
false
false
0
0
0
null
null
null
null
null
null
null
false
0
0
namesAttachedReactions
false
[]
1
null
null
null
null
[ { "__typename": "Tag", "_id": "E8PHMuf7tsr8teXAe", "adminOnly": false, "afBaseScore": 3, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 9, "canEditUserIds": null, "core": false, "createdAt": "2020-04-25T21:27:40.796Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Betting", "needsReview": false, "noindex": false, "postCount": 95, "score": 9, "shortName": null, "slug": "betting", "suggestedAsFilter": false, "userId": "dRGmZYGDzf5LFNjtz", "voteCount": 1, "wikiOnly": false }, { "__typename": "Tag", "_id": "R6dqPii4cyNpuecLt", "adminOnly": false, "afBaseScore": 9, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 19, "canEditUserIds": null, "core": false, "createdAt": "2020-01-14T03:06:53.703Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Prediction Markets", "needsReview": false, "noindex": false, "postCount": 171, "score": 19, "shortName": null, "slug": "prediction-markets", "suggestedAsFilter": false, "userId": "nLbwLhBaQeG6tCNDN", "voteCount": 2, "wikiOnly": false } ]
null
0
0
null
false
null
null
0
36
0
0
9
0
nmk3nLpQE89dMRzzN
eliezer_yudkowsky
2009-02-23T21:58:56.739Z
Eliezer_Yudkowsky
Eliezer Yudkowsky
null
null
null
150,014
1,892
false
false
null
null
951
7,677
40
18
120
1
3,803
r38pkCm7wF4M44MDQ
User
reign-of-terror
[ "sBWszXPhPsNNemv4Q", "YBHSPmZEfyyY2E2au" ]
true
[ "trustLevel1", "alignmentVoters", "alignmentForum", "canModeratePersonal" ]
null
null
dLzZWNGD23zqNLvt3
SocialPreviewType
5c63921ebcb4ac6367c1326a
<p>A problem with betting on engineered superplagues, physics disasters, nanotechnological warfare, or <a href="http://intelligence.org/AIRisk.pdf">intelligence explosions</a> of both Friendly and unFriendly type, is that all these events are likely to disrupt settlement of trades (to put it mildly).&nbsp; It's not easy to sell a bet that pays off only if the prediction market ceases to exist.</p> <p>And yet everyone still wants to know the year, month, and day of the Singularity.&nbsp; Even I <em>want</em> to know, I'm just professionally aware that the knowledge is not available.</p> <p>This morning, I saw that someone had launched yet another poll on "when the Singularity will occur".&nbsp; Just a raw poll, mind you, not a prediction market.&nbsp; I was thinking of how completely and utterly worthless this poll was, and how a prediction market might be slightly less than completely worthless, when it occurred to me how to structure the bet - bet that "settlement of trades will be disrupted / the resources gambled will become worthless, no later than time T".</p> <p>Suppose you think that gold will become worthless on April 27th, 2020 at between four and four-thirty in the morning.&nbsp; I, on the other hand, think this event will not occur until 2030.&nbsp; We can sign a contract in which I pay you one ounce of gold per year from 2010 to 2020, and then you pay me two ounces of gold per year from 2020 to 2030.&nbsp; If gold becomes worthless when you say, you will have profited; if gold becomes worthless when I say, I will have profited.&nbsp; We can have a prediction market on a generic apocalypse, in which participants who believe in an earlier apocalypse are paid by believers in a later apocalypse, until they pass the date of their prediction, at which time the flow reverses with interest.&nbsp; I don't see any way to distinguish between apocalypses, but we can ask the participants why they were willing to bet, and probably receive a decent answer.</p> <p>I would be quite interested in seeing what such a market had to say.&nbsp; And if the predicted date was hovering around 2080, I would pick up as much of that free money as I dared.</p> <hr /> <p>EDIT: &nbsp;Robin Hanson pointed out why this wouldn't work. &nbsp;See comments.</p>
A problem with betting on engineered superplagues, physics disasters, nanotechnological warfare, or intelligence explosions of both Friendly and unFriendly type, is that all these events are likely to disrupt settlement of trades (to put it mildly).  It's not easy to sell a bet that pays off only if the prediction market ceases to exist. And yet everyone still wants to know the year, month, and day of the Singularity.  Even I want to know, I'm just professionally aware that the knowledge is not available. This morning, I saw that someone had launched yet another poll on "when the Singularity will occur".  Just a raw poll, mind you, not a prediction market.  I was thinking of how completely and utterly worthless this poll was, and how a prediction market might be slightly less than completely worthless, when it occurred to me how to structure the bet - bet that "settlement of trades will be disrupted / the resources gambled will become worthless, no later than time T". Suppose you think that gold will become worthless on April 27th, 2020 at between four and four-thirty in the morning.  I, on the other hand, think this event will not occur until 2030.  We can sign a contract in which I pay you one ounce of gold per year from 2010 to 2020, and then you pay me two ounces of gold per year from 2020 to 2030.  If gold becomes worthless when you say, you will have profited; if gold becomes worthless when I say, I will have profited.  We can have a prediction market on a generic apocalypse, in which participants who believe in an earlier apocalypse are paid by believers in a later apocalypse, until they pass the date of their prediction, at which time the flow reverses with interest.  I don't see any way to distinguish between apocalypses, but we can ask the participants why they were willing to bet, and probably receive a decent answer. I would be quite interested in seeing what such a market had to say.  And if the predicted date was hovering around 2080, I would pick
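The gold contract sketched in this post body can be written as a simple payoff function. A minimal sketch using the post's illustrative dates; the helper name and the assumption that payments made after gold loses its value count for nothing are mine, not the post's:

```python
# Payoff of the apocalypse bet: the early bettor (predicts 2020) receives
# 1 oz/year during 2010-2019, then pays 2 oz/year during 2020-2029.
# Only ounces transferred while gold still has value matter.

def net_ounces_to_early_bettor(worthless_year: int) -> int:
    """Net valuable gold received by the early-apocalypse bettor
    (hypothetical helper; worthless_year is when gold loses all value)."""
    received = sum(1 for year in range(2010, 2020) if year < worthless_year)
    paid = sum(2 for year in range(2020, 2030) if year < worthless_year)
    return received - paid

print(net_ounces_to_early_bettor(2020))  # +10: early prediction right, they profit
print(net_ounces_to_early_bettor(2030))  # -10: late prediction right, flow reverses
```

The reversal of the payment flow at the predicted date is what lets each side profit exactly when its own apocalypse date turns out correct.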
370
1.0.0
Revision
false
null
null
CrosspostOutput
HYWhKXRsMAyvRKRYz
you-can-face-reality
You Can Face Reality
null
false
false
false
null
nmk3nLpQE89dMRzzN
null
true
false
false
false
Post
null
2007-08-09T01:46:36.000Z
null
false
false
2
2
2018-01-30T00:32:03.501Z
false
false
post
[]
null
null
nD5b2QGSJbc9Ks62m
41
163
194
false
0.000217
null
false
false
2007-08-09T01:46:36.000Z
null
rationality
null
null
null
false
false
null
null
null
false
false
null
null
null
null
null
null
null
null
null
null
false
null
null
null
null
XtphY3uYHwruKqDyG
null
null
null
false
null
[]
null
15
0
2007-08-09T01:46:36.000Z
false
false
null
null
true
false
false
0
0
0
null
null
null
null
null
null
fGR69uc7Fa5Ef4MY6
false
0
0
namesAttachedReactions
false
[]
1
null
null
null
null
[ { "__typename": "Tag", "_id": "AXhEhCkTrHZbjXXu3", "adminOnly": false, "afBaseScore": null, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [] }, "baseScore": 0, "canEditUserIds": null, "core": false, "createdAt": "2020-07-09T06:27:17.454Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Poetry", "needsReview": false, "noindex": false, "postCount": 60, "score": 0, "shortName": null, "slug": "poetry", "suggestedAsFilter": false, "userId": "mPipmBTniuABY5PQy", "voteCount": 0, "wikiOnly": false }, { "__typename": "Tag", "_id": "ymWzfKxBchRvmCTNX", "adminOnly": false, "afBaseScore": 3, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 9, "canEditUserIds": null, "core": false, "createdAt": "2020-08-15T00:49:40.314Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Courage", "needsReview": false, "noindex": false, "postCount": 16, "score": 9, "shortName": null, "slug": "courage", "suggestedAsFilter": false, "userId": "r38pkCm7wF4M44MDQ", "voteCount": 1, "wikiOnly": false }, { "__typename": "Tag", "_id": "SW3euSNqpozcsxXaX", "adminOnly": false, "afBaseScore": null, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [] }, "baseScore": 0, "canEditUserIds": null, "core": false, "createdAt": "2020-08-02T20:25:10.374Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Litanies & Mantras", "needsReview": false, "noindex": false, "postCount": 10, "score": 0, "shortName": null, "slug": "litanies-and-mantras", "suggestedAsFilter": false, "userId": "r38pkCm7wF4M44MDQ", "voteCount": 0, "wikiOnly": false }, { "__typename": "Tag", "_id": "Ng8Gice9KNkncxqcj", "adminOnly": false, "afBaseScore": 0, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [] }, "baseScore": 1, "canEditUserIds": null, "core": true, "createdAt": "2020-06-14T22:24:17.072Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 100, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "iMqytjy9ns89Fzfyv", "displayName": "miakko" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Rationality", "needsReview": false, "noindex": false, "postCount": 4302, "score": 1, "shortName": null, "slug": "rationality", "suggestedAsFilter": true, "userId": "r38pkCm7wF4M44MDQ", "voteCount": 1, "wikiOnly": false } ]
null
0
0
null
false
null
null
0
163
0
0
20
0
nmk3nLpQE89dMRzzN
eliezer_yudkowsky
2009-02-23T21:58:56.739Z
Eliezer_Yudkowsky
Eliezer Yudkowsky
null
null
null
150,014
1,892
false
false
null
null
951
7,677
40
18
120
1
3,803
r38pkCm7wF4M44MDQ
User
reign-of-terror
[ "sBWszXPhPsNNemv4Q", "YBHSPmZEfyyY2E2au" ]
true
[ "trustLevel1", "alignmentVoters", "alignmentForum", "canModeratePersonal" ]
null
null
HYWhKXRsMAyvRKRYz
SocialPreviewType
nD5b2QGSJbc9Ks62m
<blockquote><p>What is true is already so.</p><p>Owning up to it doesn’t make it worse.</p><p>Not being open about it doesn’t make it go away.</p><p>And because it’s true, it is what is there to be interacted with.</p><p>Anything untrue isn’t there to be lived.</p><p>People can stand what is true,</p><p>for they are already enduring it.</p></blockquote><p>—<i>Eugene Gendlin</i></p>
> What is true is already so. > > Owning up to it doesn’t make it worse. > > Not being open about it doesn’t make it go away. > > And because it’s true, it is what is there to be interacted with. > > Anything untrue isn’t there to be lived. > > People can stand what is true, > > for they are already enduring it. —Eugene Gendlin
71
2.1.0
Revision
false
null
null
CrosspostOutput
yDfxTj9TKYsYiWH5o
the-virtue-of-narrowness
The Virtue of Narrowness
null
false
false
false
null
nmk3nLpQE89dMRzzN
null
true
false
false
false
Post
null
2007-08-07T17:57:46.000Z
null
false
false
2
2
2018-01-30T00:32:03.501Z
false
false
post
[]
null
null
f3zdqcwtqJRaPPXsn
66
122
137
false
0.000156
null
false
false
2024-09-08T08:06:52.888Z
null
rationality
null
null
null
false
false
null
null
null
false
false
null
null
null
null
null
null
null
null
null
null
false
null
null
null
null
XtphY3uYHwruKqDyG
null
null
null
false
null
[]
null
9
0
null
false
false
null
null
true
false
false
0
0
0
null
null
null
null
null
null
6gEzESEdtiZiQLFwn
false
0
0
namesAttachedReactions
false
[]
4
null
null
null
null
[ { "__typename": "Tag", "_id": "8uNFGxejo5hykCEez", "adminOnly": false, "afBaseScore": null, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [] }, "baseScore": 0, "canEditUserIds": null, "core": false, "createdAt": "2020-05-24T18:38:54.405Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Virtues", "needsReview": false, "noindex": false, "postCount": 121, "score": 0, "shortName": null, "slug": "virtues", "suggestedAsFilter": false, "userId": "B5EreDfjALzEbSo6R", "voteCount": 0, "wikiOnly": false }, { "__typename": "Tag", "_id": "bmfs4jiLaF6HiiYkC", "adminOnly": false, "afBaseScore": 9, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 19, "canEditUserIds": null, "core": false, "createdAt": "2020-07-29T17:48:27.328Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Reductionism", "needsReview": false, "noindex": false, "postCount": 55, "score": 19, "shortName": null, "slug": "reductionism", "suggestedAsFilter": false, "userId": "HoGziwmhpMGqGeWZy", "voteCount": 2, "wikiOnly": false } ]
null
0
0
null
false
null
null
0
122
0
0
22
0
nmk3nLpQE89dMRzzN
eliezer_yudkowsky
2009-02-23T21:58:56.739Z
Eliezer_Yudkowsky
Eliezer Yudkowsky
null
null
null
150,014
1,892
false
false
null
null
951
7,677
40
18
120
1
3,803
r38pkCm7wF4M44MDQ
User
reign-of-terror
[ "sBWszXPhPsNNemv4Q", "YBHSPmZEfyyY2E2au" ]
true
[ "trustLevel1", "alignmentVoters", "alignmentForum", "canModeratePersonal" ]
null
null
yDfxTj9TKYsYiWH5o
SocialPreviewType
f3zdqcwtqJRaPPXsn
<blockquote><p>What is true of one apple may not be true of another apple; thus more can be said about a single apple than about all the apples in the world.</p><p>—“The Twelve Virtues of Rationality”</p></blockquote><p>Within their own professions, people grasp the importance of narrowness; a car mechanic knows the difference between a carburetor and a radiator, and would not think of them both as “car parts.” A hunter-gatherer knows the difference between a lion and a panther. A janitor does not wipe the floor with window cleaner, even if the bottles look similar to one who has not mastered the art.</p><p>Outside their own professions, people often commit the misstep of trying to broaden a word as widely as possible, to cover as much territory as possible. Is it not more glorious, more wise, more impressive, to talk about <i>all</i> the apples in the world? How much loftier it must be to <i>explain human thought in general</i>, without being distracted by smaller questions, such as how humans invent techniques for solving a Rubik’s Cube. Indeed, it scarcely seems necessary to consider <i>specific</i> questions at all; isn’t a general theory a worthy enough accomplishment on its own?</p><p>It is the way of the curious to lift up one pebble from among a million pebbles on the shore, and see something new about it, something interesting, something different. You call these pebbles “diamonds,” and ask what might be special about them—what inner qualities they might have in common, beyond the glitter you first noticed. And then someone else comes along and says: “Why not call <i>this</i> pebble a diamond too? And this one, and this one?” They are enthusiastic, and they mean well. For it seems undemocratic and exclusionary and elitist and unholistic to call some pebbles “diamonds,” and others not. It seems . . . <i>narrow-minded . . .</i> if you’ll pardon the phrase. Hardly <i>open</i>, hardly <i>embracing</i>, hardly <i>communal.</i></p><p>You might think it poetic, to give one word many meanings, and thereby spread shades of connotation all around. But even poets, if they are good poets, must learn to see the world precisely. It is not enough to compare love to a flower. Hot jealous unconsummated love is not the same as the love of a couple married for decades. If you need a flower to symbolize jealous love, you must go into the garden, and look, and make subtle distinctions—find a flower with a heady scent, and a bright color, and thorns. Even if your intent is to shade meani... </p>
> What is true of one apple may not be true of another apple; thus more can be said about a single apple than about all the apples in the world. > > —“The Twelve Virtues of Rationality” Within their own professions, people grasp the importance of narrowness; a car mechanic knows the difference between a carburetor and a radiator, and would not think of them both as “car parts.” A hunter-gatherer knows the difference between a lion and a panther. A janitor does not wipe the floor with window cleaner, even if the bottles look similar to one who has not mastered the art. Outside their own professions, people often commit the misstep of trying to broaden a word as widely as possible, to cover as much territory as possible. Is it not more glorious, more wise, more impressive, to talk about all the apples in the world? How much loftier it must be to explain human thought in general, without being distracted by smaller questions, such as how humans invent techniques for solving a Rubik’s Cube. Indeed, it scarcely seems necessary to consider specific questions at all; isn’t a general theory a worthy enough accomplishment on its own? It is the way of the curious to lift up one pebble from among a million pebbles on the shore, and see something new about it, something interesting, something different. You call these pebbles “diamonds,” and ask what might be special about them—what inner qualities they might have in common, beyond the glitter you first noticed. And then someone else comes along and says: “Why not call this pebble a diamond too? And this one, and this one?” They are enthusiastic, and they mean well. For it seems undemocratic and exclusionary and elitist and unholistic to call some pebbles “diamonds,” and others not. It seems . . . narrow-minded . . . if you’ll pardon the phrase. Hardly open, hardly embracing, hardly communal. You might think it poetic, to give one word many meanings, and thereby spread shades of connotation all around. But even poets, if t
1,081
2.1.0
Revision
false
null
null
CrosspostOutput
43PTNr4ZMaezyAJ5o
the-proper-use-of-doubt
The Proper Use of Doubt
null
false
false
false
null
nmk3nLpQE89dMRzzN
null
true
false
false
false
Post
null
2007-08-06T20:29:51.000Z
null
false
false
2
2
2018-01-30T00:32:03.501Z
false
false
post
[]
null
null
8PGFyegkSHcLkLNo9
35
83
91
false
0.000108
null
false
false
2007-08-06T20:29:51.000Z
null
rationality
null
null
null
false
false
null
null
null
false
false
null
null
null
null
null
null
null
null
null
null
false
null
null
null
null
XtphY3uYHwruKqDyG
null
null
null
false
null
[]
null
9
0
2007-08-06T20:29:51.000Z
false
false
null
null
true
false
false
0
0
0
null
null
null
null
null
null
CRL8riyLDMuFtca6u
false
0
0
namesAttachedReactions
false
[]
3
null
null
null
null
[ { "__typename": "Tag", "_id": "LDTSbmXtokYAsEq8e", "adminOnly": false, "afBaseScore": 3, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 9, "canEditUserIds": null, "core": false, "createdAt": "2020-05-24T07:47:20.152Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Motivated Reasoning", "needsReview": false, "noindex": false, "postCount": 73, "score": 9, "shortName": null, "slug": "motivated-reasoning", "suggestedAsFilter": false, "userId": "qxJ28GN72aiJu96iF", "voteCount": 1, "wikiOnly": false }, { "__typename": "Tag", "_id": "AHK82ypfxF45rqh9D", "adminOnly": false, "afBaseScore": null, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [] }, "baseScore": 0, "canEditUserIds": null, "core": false, "createdAt": "2020-12-23T20:53:12.566Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Distinctions", "needsReview": false, "noindex": false, "postCount": 108, "score": 0, "shortName": null, "slug": "distinctions", "suggestedAsFilter": false, "userId": "sKAL2jzfkYkDbQmx9", "voteCount": 0, "wikiOnly": false }, { "__typename": "Tag", "_id": "moeYqrcakMgXnQNyF", "adminOnly": false, "afBaseScore": 9, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 19, "canEditUserIds": null, "core": false, "createdAt": "2020-07-18T01:18:43.943Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Curiosity", "needsReview": false, "noindex": false, "postCount": 39, "score": 19, "shortName": null, "slug": "curiosity", "suggestedAsFilter": false, "userId": "nLbwLhBaQeG6tCNDN", "voteCount": 2, "wikiOnly": false }, { "__typename": "Tag", "_id": "qf3kDBak4BQDDw3f2", "adminOnly": false, "afBaseScore": 3, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 9, "canEditUserIds": null, "core": false, "createdAt": "2021-04-07T17:49:30.078Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Modest Epistemology", "needsReview": false, "noindex": false, "postCount": 29, "score": 9, "shortName": null, 
"slug": "modest-epistemology", "suggestedAsFilter": false, "userId": "Q7NW4XaWQmfPfdcFj", "voteCount": 1, "wikiOnly": false }, { "__typename": "Tag", "_id": "NSMKfa8emSbGNXRKD", "adminOnly": false, "afBaseScore": 0, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [] }, "baseScore": 0, "canEditUserIds": null, "core": false, "createdAt": "2020-05-15T23:11:08.425Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Religion", "needsReview": false, "noindex": false, "postCount": 218, "score": 0, "shortName": null, "slug": "religion", "suggestedAsFilter": false, "userId": "qgdGA4ZEyW7zNdK84", "voteCount": 0, "wikiOnly": false }, { "__typename": "Tag", "_id": "Ng8Gice9KNkncxqcj", "adminOnly": false, "afBaseScore": 0, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [] }, "baseScore": 1, "canEditUserIds": null, "core": true, "createdAt": "2020-06-14T22:24:17.072Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 100, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "iMqytjy9ns89Fzfyv", "displayName": "miakko" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Rationality", "needsReview": false, "noindex": false, "postCount": 4302, "score": 1, "shortName": null, "slug": "rationality", "suggestedAsFilter": true, "userId": "r38pkCm7wF4M44MDQ", "voteCount": 1, "wikiOnly": false } ]
null
0
0
null
false
null
null
0
82
0
0
15
0
nmk3nLpQE89dMRzzN
eliezer_yudkowsky
2009-02-23T21:58:56.739Z
Eliezer_Yudkowsky
Eliezer Yudkowsky
null
null
null
150,014
1,892
false
false
null
null
951
7,677
40
18
120
1
3,803
r38pkCm7wF4M44MDQ
User
reign-of-terror
[ "sBWszXPhPsNNemv4Q", "YBHSPmZEfyyY2E2au" ]
true
[ "trustLevel1", "alignmentVoters", "alignmentForum", "canModeratePersonal" ]
null
null
43PTNr4ZMaezyAJ5o
SocialPreviewType
8PGFyegkSHcLkLNo9
<p>Once, when I was holding forth upon the Way, I remarked upon how most organized belief systems exist to <em>flee from doubt</em>. A listener replied to me that the Jesuits must be immune from this criticism, because they practice organized doubt: their novices, he said, are told to doubt Christianity; doubt the existence of God; doubt if their calling is real; doubt that they are suitable for perpetual vows of chastity and poverty. And I said: <em>Ah, but they&#x2019;re supposed to overcome these doubts, right?</em> He said: <em>No, they are to doubt that perhaps their doubts may grow and become stronger.</em> </p> <p>Googling failed to confirm or refute these allegations. But I find this scenario fascinating, worthy of discussion, regardless of whether it is true or false of Jesuits. <em>If</em> the Jesuits practiced deliberate doubt, as described above, would they <em>therefore</em> be virtuous as rationalists?</p> <p>I think I have to concede that the Jesuits, in the (possibly hypothetical) scenario above, would not properly be described as &#x201C;fleeing from doubt.&#x201D; But the (possibly hypothetical) conduct still strikes me as highly suspicious. To a truly virtuous rationalist, doubt should not be scary. The conduct described above sounds to me like a program of desensitization for something <em>very</em> scary, like exposing an arachnophobe to spiders under carefully controlled conditions.</p> <p>But even so, they are encouraging their novices to doubt&#x2014;right? Does it matter if their reasons are flawed? Is this not still a worthy deed unto a rationalist?</p> <p>All curiosity seeks to annihilate itself; there is no curiosity that does not <em>want</em> an answer. But if you obtain an answer, if you satisfy your curiosity, then the glorious mystery will no longer be mysterious.</p> <p>In the same way, every doubt exists in order to annihilate some particular belief. If a doubt fails to destroy its target, the doubt has died unfulfilled&#x2014;but that is still a resolution, an ending, albeit a sadder one. A doubt that neither destroys itself nor destroys its target might as well have never existed at all. It is the <em>resolution</em> of doubts, not the mere act of doubting, which drives the ratchet of rationality forward.</p> <p>Every improvement is a change, but not every change is an improvement. Every rationalist doubts, but not all doubts are rational. Wearing doubts doesn&#x2019;t make you a rationalist any more than wearing a whi... </p>
Once, when I was holding forth upon the Way, I remarked upon how most organized belief systems exist to flee from doubt. A listener replied to me that the Jesuits must be immune from this criticism, because they practice organized doubt: their novices, he said, are told to doubt Christianity; doubt the existence of God; doubt if their calling is real; doubt that they are suitable for perpetual vows of chastity and poverty. And I said: Ah, but they’re supposed to overcome these doubts, right? He said: No, they are to doubt that perhaps their doubts may grow and become stronger. Googling failed to confirm or refute these allegations. But I find this scenario fascinating, worthy of discussion, regardless of whether it is true or false of Jesuits. If the Jesuits practiced deliberate doubt, as described above, would they therefore be virtuous as rationalists? I think I have to concede that the Jesuits, in the (possibly hypothetical) scenario above, would not properly be described as “fleeing from doubt.” But the (possibly hypothetical) conduct still strikes me as highly suspicious. To a truly virtuous rationalist, doubt should not be scary. The conduct described above sounds to me like a program of desensitization for something very scary, like exposing an arachnophobe to spiders under carefully controlled conditions. But even so, they are encouraging their novices to doubt—right? Does it matter if their reasons are flawed? Is this not still a worthy deed unto a rationalist? All curiosity seeks to annihilate itself; there is no curiosity that does not want an answer. But if you obtain an answer, if you satisfy your curiosity, then the glorious mystery will no longer be mysterious. In the same way, every doubt exists in order to annihilate some particular belief. If a doubt fails to destroy its target, the doubt has died unfulfilled—but that is still a resolution, an ending, albeit a sadder one. A doubt that neither destroys itself nor destroys its target might as we
835
2.0.0
Revision
false
null
null
CrosspostOutput
GJ4ZQm7crTzTM6xDW
focus-your-uncertainty
Focus Your Uncertainty
null
false
false
false
null
nmk3nLpQE89dMRzzN
null
true
false
false
false
Post
null
2007-08-05T20:49:59.000Z
null
false
false
2
2
2018-01-30T00:32:03.501Z
false
false
post
[]
null
null
Qde8FmZaMmNB8DoJi
21
109
121
false
0.000139
null
false
false
2024-10-08T01:28:36.405Z
null
rationality
null
null
null
false
false
null
null
null
false
false
null
null
null
null
null
null
null
null
null
null
false
null
null
null
null
XtphY3uYHwruKqDyG
null
null
null
false
null
[]
null
8
0
null
false
false
null
null
true
false
false
0
0
0
null
null
null
null
null
null
NgsAfjM4FqfqnEYDd
false
0
0
namesAttachedReactions
false
[]
3
null
null
null
null
[ { "__typename": "Tag", "_id": "5gcpKG2XEAZGj5DEf", "adminOnly": false, "afBaseScore": 9, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 19, "canEditUserIds": null, "core": false, "createdAt": "2020-07-15T19:10:11.841Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Noticing", "needsReview": false, "noindex": false, "postCount": 35, "score": 19, "shortName": null, "slug": "noticing", "suggestedAsFilter": false, "userId": "gXeEWGjTWyqgrQTzR", "voteCount": 2, "wikiOnly": false } ]
null
0
0
null
false
null
null
0
109
0
0
12
0
nmk3nLpQE89dMRzzN
eliezer_yudkowsky
2009-02-23T21:58:56.739Z
Eliezer_Yudkowsky
Eliezer Yudkowsky
null
null
null
150,014
1,892
false
false
null
null
951
7,677
40
18
120
1
3,803
r38pkCm7wF4M44MDQ
User
reign-of-terror
[ "sBWszXPhPsNNemv4Q", "YBHSPmZEfyyY2E2au" ]
true
[ "trustLevel1", "alignmentVoters", "alignmentForum", "canModeratePersonal" ]
null
null
GJ4ZQm7crTzTM6xDW
SocialPreviewType
Qde8FmZaMmNB8DoJi
<p>Will bond yields go up, or down, or remain the same? If you&#x2019;re a TV pundit and your job is to explain the outcome after the fact, then there&#x2019;s no reason to worry. No matter <em>which</em> of the three possibilities comes true, you&#x2019;ll be able to explain why the outcome perfectly fits your pet market theory. There&#x2019;s no reason to think of these three possibilities as somehow <em>opposed</em> to one another, as <em>exclusive</em>, because you&#x2019;ll get full marks for punditry no matter which outcome occurs. </p> <p>But wait! Suppose you&#x2019;re a <em>novice</em> TV pundit, and you aren&#x2019;t experienced enough to make up plausible explanations on the spot. You need to prepare remarks in advance for tomorrow&#x2019;s broadcast, and you have limited time to prepare. In this case, it would be helpful to know <em>which</em> outcome will actually occur&#x2014;whether bond yields will go up, down, or remain the same&#x2014;because then you would only need to prepare <em>one</em> set of excuses. </p> <p>Alas, no one can possibly foresee the future. What are you to do? You certainly can&#x2019;t use &#x201C;probabilities.&#x201D; We all <a href="http://lesswrong.com/lw/i2/two_more_things_to_unlearn_from_school/">know from school</a> that &#x201C;probabilities&#x201D; are little numbers that appear next to a word problem, and there aren&#x2019;t any little numbers here. Worse, you <em>feel</em> uncertain. You don&#x2019;t remember <em>feeling</em> uncertain while you were manipulating the little numbers in word problems. <em>College classes teaching math</em> are nice clean places, so math can&#x2019;t apply to life situations that aren&#x2019;t nice and clean. You wouldn&#x2019;t want to inappropriately <a href="https://www.aft.org/sites/default/files/periodicals/Crit_Thinking.pdf">transfer thinking skills from one context to another</a>. Clearly, this is not a matter for &#x201C;probabilities.&#x201D; </p> <p>Nonetheless, you only have 100 minutes to prepare your excuses. You can&#x2019;t spend the entire 100 minutes on &#x201C;up,&#x201D; and also spend all 100 minutes on &#x201C;down,&#x201D; and also spend all 100 minutes on &#x201C;same.&#x201D; You&#x2019;ve got to prioritize somehow.</p> <p>If you needed to justify your time expenditure to a review committee, you would have to spend equal time on each possibility. Since there are no little numbers written down, you&#x2019;d have no documentation to justify spending different amounts of time. You can hear the reviewers now: <em>And why, Mr. Finkledinger, did you spend exactly 42 minutes on excuse #3? Why not 41 minutes, or 43? Admit it&#x</em>... </p>
Will bond yields go up, or down, or remain the same? If you’re a TV pundit and your job is to explain the outcome after the fact, then there’s no reason to worry. No matter which of the three possibilities comes true, you’ll be able to explain why the outcome perfectly fits your pet market theory. There’s no reason to think of these three possibilities as somehow opposed to one another, as exclusive, because you’ll get full marks for punditry no matter which outcome occurs. But wait! Suppose you’re a novice TV pundit, and you aren’t experienced enough to make up plausible explanations on the spot. You need to prepare remarks in advance for tomorrow’s broadcast, and you have limited time to prepare. In this case, it would be helpful to know which outcome will actually occur—whether bond yields will go up, down, or remain the same—because then you would only need to prepare one set of excuses. Alas, no one can possibly foresee the future. What are you to do? You certainly can’t use “probabilities.” We all know from school that “probabilities” are little numbers that appear next to a word problem, and there aren’t any little numbers here. Worse, you feel uncertain. You don’t remember feeling uncertain while you were manipulating the little numbers in word problems. College classes teaching math are nice clean places, so math can’t apply to life situations that aren’t nice and clean. You wouldn’t want to inappropriately transfer thinking skills from one context to another. Clearly, this is not a matter for “probabilities.” Nonetheless, you only have 100 minutes to prepare your excuses. You can’t spend the entire 100 minutes on “up,” and also spend all 100 minutes on “down,” and also spend all 100 minutes on “same.” You’ve got to prioritize somehow. If you needed to justify your time expenditure to a review committee, you would have to spend equal time on each possibility. Since there are no little numbers written down, you’d have no documentation to justify spending
861
2.0.0
Revision
false
null
null
CrosspostOutput
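The "Focus Your Uncertainty" excerpt above turns on a small piece of arithmetic: with only 100 minutes of preparation and three mutually exclusive outcomes, preparation time has to be rationed the way probability mass is rationed. A minimal sketch of one such rationing rule follows; the function name and the credences are hypothetical, nothing here comes from the post itself, and proportional splitting is only one defensible rule (with strictly linear returns to preparation, the optimum would instead be to spend all 100 minutes on the single most probable outcome).

```python
# Minimal illustrative sketch (not from the post): ration a fixed budget of
# preparation time across mutually exclusive outcomes in proportion to
# subjective probability. The credences below are assumptions for the example.

def allocate_prep_time(total_minutes, credences):
    """Split total_minutes across outcomes in proportion to credence."""
    total = sum(credences.values())  # normalize in case credences don't sum to 1
    return {outcome: total_minutes * p / total
            for outcome, p in credences.items()}

credences = {"up": 0.5, "down": 0.3, "same": 0.2}  # hypothetical beliefs
print(allocate_prep_time(100, credences))
# -> {'up': 50.0, 'down': 30.0, 'same': 20.0}
```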
wCqfCLs8z5Qw4GbKS
the-importance-of-saying-oops
The Importance of Saying "Oops"
null
false
false
false
null
nmk3nLpQE89dMRzzN
null
true
false
false
false
Post
null
2007-08-05T03:17:46.000Z
null
false
false
2
2
2018-01-30T00:32:03.501Z
false
false
post
[]
null
null
E6WdwQmTHgHuP8gXs
36
233
272
false
0.000299
null
false
false
2025-06-02T23:28:44.530Z
null
rationality
null
null
null
false
false
null
null
null
false
false
null
null
null
null
null
null
null
null
null
null
false
null
null
null
null
XtphY3uYHwruKqDyG
null
null
null
false
null
[]
null
19
0
null
false
false
null
null
true
false
false
0
0
0
null
null
null
null
null
null
whypGGDHRksJjAFZL
false
0
0
namesAttachedReactions
false
[]
3
null
null
null
null
[ { "__typename": "Tag", "_id": "mQbxDKHxPcKKRG4mb", "adminOnly": false, "afBaseScore": null, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [] }, "baseScore": 0, "canEditUserIds": null, "core": false, "createdAt": "2020-08-04T04:39:14.172Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Changing Your Mind", "needsReview": false, "noindex": false, "postCount": 29, "score": 0, "shortName": null, "slug": "changing-your-mind", "suggestedAsFilter": false, "userId": "r38pkCm7wF4M44MDQ", "voteCount": 0, "wikiOnly": false }, { "__typename": "Tag", "_id": "EdRnMXBRbY5JDf5df", "adminOnly": false, "afBaseScore": 6, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "nmk3nLpQE89dMRzzN", "displayName": "Eliezer Yudkowsky" } ] }, "baseScore": 13, "canEditUserIds": null, "core": false, "createdAt": "2015-07-02T01:53:10.000Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "nmk3nLpQE89dMRzzN", "displayName": "Eliezer Yudkowsky" } ] }, "isArbitalImport": true, "isPlaceholderPage": false, "isSubforum": false, "name": "Epistemology", "needsReview": false, "noindex": false, "postCount": 424, "score": 13, "shortName": null, "slug": "epistemology", "suggestedAsFilter": false, "userId": "nmk3nLpQE89dMRzzN", "voteCount": 1, "wikiOnly": false }, { "__typename": "Tag", "_id": "nANxo5C4sPG9HQHzr", "adminOnly": false, "afBaseScore": 9, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 19, "canEditUserIds": null, "core": false, "createdAt": "2020-07-09T05:49:33.108Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Honesty", "needsReview": false, "noindex": false, "postCount": 75, "score": 19, "shortName": null, "slug": "honesty", "suggestedAsFilter": false, "userId": "mPipmBTniuABY5PQy", "voteCount": 2, "wikiOnly": false }, { "__typename": "Tag", "_id": "YTCrHWYHAsAD74EHo", "adminOnly": false, "afBaseScore": 9, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 19, "canEditUserIds": null, "core": false, "createdAt": "2020-04-29T02:47:19.876Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 15, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, 
"isSubforum": false, "name": "Self-Deception", "needsReview": false, "noindex": false, "postCount": 89, "score": 19, "shortName": null, "slug": "self-deception", "suggestedAsFilter": false, "userId": "nLbwLhBaQeG6tCNDN", "voteCount": 2, "wikiOnly": false }, { "__typename": "Tag", "_id": "stnsBEmuGpnSfQ5vj", "adminOnly": false, "afBaseScore": 6, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" } ] }, "baseScore": 10, "canEditUserIds": null, "core": false, "createdAt": "2020-01-14T04:02:47.333Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Sunk-Cost Fallacy", "needsReview": false, "noindex": false, "postCount": 12, "score": 10, "shortName": null, "slug": "sunk-cost-fallacy", "suggestedAsFilter": false, "userId": "nLbwLhBaQeG6tCNDN", "voteCount": 1, "wikiOnly": false } ]
null
0
0
null
false
null
null
0
233
0
0
28
0
nmk3nLpQE89dMRzzN
eliezer_yudkowsky
2009-02-23T21:58:56.739Z
Eliezer_Yudkowsky
Eliezer Yudkowsky
null
null
null
150,014
1,892
false
false
null
null
951
7,677
40
18
120
1
3,803
r38pkCm7wF4M44MDQ
User
reign-of-terror
[ "sBWszXPhPsNNemv4Q", "YBHSPmZEfyyY2E2au" ]
true
[ "trustLevel1", "alignmentVoters", "alignmentForum", "canModeratePersonal" ]
null
null
wCqfCLs8z5Qw4GbKS
SocialPreviewType
E6WdwQmTHgHuP8gXs
<p>I just finished reading a history of Enron&#x2019;s downfall, <em>The Smartest Guys in the Room</em>, which hereby wins my award for &#x201C;Least Appropriate Book Title.&#x201D; </p> <p>An unsurprising feature of Enron&#x2019;s slow rot and abrupt collapse was that the executive players never admitted to having made a <em>large</em> mistake. When catastrophe #247 grew to such an extent that it required an actual policy change, they would say, &#x201C;Too bad that didn&#x2019;t work out&#x2014;it was such a good idea&#x2014;how are we going to hide the problem on our balance sheet?&#x201D; As opposed to, &#x201C;It now seems obvious in retrospect that it was a mistake from the beginning.&#x201D; As opposed to, &#x201C;I&#x2019;ve been stupid.&#x201D; There was never a watershed moment, a moment of humbling realization, of acknowledging a <em>fundamental</em> problem. After the bankruptcy, Jeff Skilling, the former COO and brief CEO of Enron, declined his own lawyers&#x2019; advice to take the Fifth Amendment; he testified before Congress that Enron had been a <em>great</em> company.</p> <p>Not every change is an improvement, but every improvement is necessarily a change. If we only admit small local errors, we will only make small local changes. The motivation for a <em>big</em> change comes from acknowledging a <em>big</em> mistake.</p> <p>As a child I was raised on equal parts science and science fiction, and from Heinlein to Feynman I learned the tropes of Traditional Rationality: theories must be bold and expose themselves to falsification; be willing to commit the heroic sacrifice of giving up your own ideas when confronted with contrary evidence; play nice in your arguments; try not to deceive yourself; and other fuzzy verbalisms.</p> <p>A traditional rationalist upbringing tries to produce arguers who will concede to contrary evidence <em>eventually</em>&#x2014;there should be <em>some</em> mountain of evidence sufficient to move you. This is not trivial; it distinguishes science from religion. But there is less focus on <em>speed</em>, on giving up the fight <em>as quickly as possible</em>, integrating evidence <em>efficiently</em> so that it only takes a <em>minimum</em> of contrary evidence to destroy your cherished belief.</p> <p>I was raised in Traditional Rationality, and thought myself quite the rationalist. I switched to Bayescraft (Laplace / Jaynes / Tversky / Kahneman) in the aftermath of . . . well, it&#x2019;s a long story. Roughly, I switched because I realized th... </p>
I just finished reading a history of Enron’s downfall, The Smartest Guys in the Room, which hereby wins my award for “Least Appropriate Book Title.” An unsurprising feature of Enron’s slow rot and abrupt collapse was that the executive players never admitted to having made a large mistake. When catastrophe #247 grew to such an extent that it required an actual policy change, they would say, “Too bad that didn’t work out—it was such a good idea—how are we going to hide the problem on our balance sheet?” As opposed to, “It now seems obvious in retrospect that it was a mistake from the beginning.” As opposed to, “I’ve been stupid.” There was never a watershed moment, a moment of humbling realization, of acknowledging a fundamental problem. After the bankruptcy, Jeff Skilling, the former COO and brief CEO of Enron, declined his own lawyers’ advice to take the Fifth Amendment; he testified before Congress that Enron had been a great company. Not every change is an improvement, but every improvement is necessarily a change. If we only admit small local errors, we will only make small local changes. The motivation for a big change comes from acknowledging a big mistake. As a child I was raised on equal parts science and science fiction, and from Heinlein to Feynman I learned the tropes of Traditional Rationality: theories must be bold and expose themselves to falsification; be willing to commit the heroic sacrifice of giving up your own ideas when confronted with contrary evidence; play nice in your arguments; try not to deceive yourself; and other fuzzy verbalisms. A traditional rationalist upbringing tries to produce arguers who will concede to contrary evidence eventually—there should be some mountain of evidence sufficient to move you. This is not trivial; it distinguishes science from religion. But there is less focus on speed, on giving up the fight as quickly as possible, integrating evidence efficiently so that it only takes a minimum of contrary evidence to de
717
2.0.0
Revision
false
null
null
CrosspostOutput
fAuWLS7RKWD2npBFR
religion-s-claim-to-be-non-disprovable
Religion's Claim to be Non-Disprovable
null
false
false
false
null
nmk3nLpQE89dMRzzN
null
true
false
false
false
Post
null
2007-08-04T03:21:50.000Z
null
false
false
2
2
2018-01-30T00:32:03.501Z
false
false
post
[]
null
null
JgcrQDLAuhCf5CL8D
333
311
354
false
0.000386
null
false
false
2025-02-16T12:59:10.771Z
null
rationality
null
null
null
false
false
null
null
null
false
false
null
null
null
null
null
null
null
null
null
null
false
null
null
null
null
XtphY3uYHwruKqDyG
null
null
null
false
null
[]
null
30
0
2007-08-04T03:21:50.000Z
false
false
null
null
true
false
false
0
0
0
null
null
null
null
null
null
3rndey8YSaatyaxLY
false
0
0
namesAttachedReactions
false
[]
5
null
null
null
null
[ { "__typename": "Tag", "_id": "Ng8Gice9KNkncxqcj", "adminOnly": false, "afBaseScore": 0, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [] }, "baseScore": 1, "canEditUserIds": null, "core": true, "createdAt": "2020-06-14T22:24:17.072Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 100, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "iMqytjy9ns89Fzfyv", "displayName": "miakko" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Rationality", "needsReview": false, "noindex": false, "postCount": 4302, "score": 1, "shortName": null, "slug": "rationality", "suggestedAsFilter": true, "userId": "r38pkCm7wF4M44MDQ", "voteCount": 1, "wikiOnly": false } ]
null
0
0
null
false
null
null
0
311
0
0
45
0
nmk3nLpQE89dMRzzN
eliezer_yudkowsky
2009-02-23T21:58:56.739Z
Eliezer_Yudkowsky
Eliezer Yudkowsky
null
null
null
150,014
1,892
false
false
null
null
951
7,677
40
18
120
1
3,803
r38pkCm7wF4M44MDQ
User
reign-of-terror
[ "sBWszXPhPsNNemv4Q", "YBHSPmZEfyyY2E2au" ]
true
[ "trustLevel1", "alignmentVoters", "alignmentForum", "canModeratePersonal" ]
null
null
fAuWLS7RKWD2npBFR
SocialPreviewType
JgcrQDLAuhCf5CL8D
<p>The earliest account I know of a scientific experiment is, ironically, the story of <a href="http://web.archive.org/web/20100312042939/http://www.nccbuscc.org/nab/bible/1kings/1kings18.htm">Elijah and the priests of Baal</a>.</p><p>The people of Israel are wavering between Jehovah and Baal, so Elijah announces that he will conduct an experiment to settle it—quite a novel concept in those days! The priests of Baal will place their bull on an altar, and Elijah will place Jehovah’s bull on an altar, but neither will be allowed to start the fire; whichever God is real will call down fire on His sacrifice. The priests of Baal serve as control group for Elijah—the same wooden fuel, the same bull, and the same priests making invocations, but to a false god. Then Elijah pours water on his altar—ruining the experimental symmetry, but this was back in the early days—to signify deliberate acceptance of the burden of proof, like needing a 0.05 significance level. The fire comes down on Elijah’s altar, which is the experimental observation. The watching people of Israel shout “The Lord is God!”—peer review.</p><p>And then the people haul the 450 priests of Baal down to the river Kishon and slit their throats. This is stern, but necessary. You must firmly discard the falsified hypothesis, and do so swiftly, before it can generate excuses to protect itself. If the priests of Baal are allowed to survive, they will start babbling about how religion is a separate magisterium which can be neither proven nor disproven.</p><p>Back in the old days, people actually <i>believed</i> their religions instead of just <i>believing in</i> them. The biblical archaeologists who went in search of Noah’s Ark did not think they were wasting their time; they anticipated they might become famous. Only after failing to find confirming evidence—and finding disconfirming evidence in its place—did religionists execute what William Bartley called <i>the retreat to commitment</i>, “I believe because I believe.”</p><p>Back in the old days, there was no concept of religion’s being a separate magisterium. The Old Testament is a stream-of-consciousness culture dump: history, law, moral parables, and yes, <a href="http://www.skepticfiles.org/atheist/genesisd.htm">models</a> of how the universe works—like the universe being created in six days (which is a metaphor for the Big Bang), or rabbits chewing their cud. (Which is a metaphor for . . .)</p><p>Back in the old days, saying the local religion “could not be proven” would have gotten you burned at the stake. One of the core beliefs of Orthodox Judaism is that God appeared at Mou... </p>
The earliest account I know of a scientific experiment is, ironically, the story of Elijah and the priests of Baal. The people of Israel are wavering between Jehovah and Baal, so Elijah announces that he will conduct an experiment to settle it—quite a novel concept in those days! The priests of Baal will place their bull on an altar, and Elijah will place Jehovah’s bull on an altar, but neither will be allowed to start the fire; whichever God is real will call down fire on His sacrifice. The priests of Baal serve as control group for Elijah—the same wooden fuel, the same bull, and the same priests making invocations, but to a false god. Then Elijah pours water on his altar—ruining the experimental symmetry, but this was back in the early days—to signify deliberate acceptance of the burden of proof, like needing a 0.05 significance level. The fire comes down on Elijah’s altar, which is the experimental observation. The watching people of Israel shout “The Lord is God!”—peer review. And then the people haul the 450 priests of Baal down to the river Kishon and slit their throats. This is stern, but necessary. You must firmly discard the falsified hypothesis, and do so swiftly, before it can generate excuses to protect itself. If the priests of Baal are allowed to survive, they will start babbling about how religion is a separate magisterium which can be neither proven nor disproven. Back in the old days, people actually believed their religions instead of just believing in them. The biblical archaeologists who went in search of Noah’s Ark did not think they were wasting their time; they anticipated they might become famous. Only after failing to find confirming evidence—and finding disconfirming evidence in its place—did religionists execute what William Bartley called the retreat to commitment, “I believe because I believe.” Back in the old days, there was no concept of religion’s being a separate magisterium. The Old Testament is a stream-of-consciousness culture
1,319
2.1.0
Revision
false
null
null
CrosspostOutput
ZpMDQsgLY9eF89ocF
god-is-irrelevant
God is irrelevant
null
false
false
false
null
jRRYAy2mQAHy2Mq3f
null
true
false
false
false
Post
null
2007-08-03T10:57:00.000Z
null
false
false
2
2
null
false
false
post
[]
null
null
5c6392dcbcb4ac6367c170fe
1
1
1
false
0.000001
null
false
false
2020-05-28T23:37:40.785Z
null
null
null
null
null
false
false
null
null
null
false
false
null
null
null
null
null
null
null
null
null
null
false
null
null
null
null
XtphY3uYHwruKqDyG
null
null
null
false
null
[]
null
1
0
null
false
false
null
null
true
false
false
0
0
0
null
null
null
null
null
null
null
false
0
0
namesAttachedReactions
false
[]
2
null
null
null
null
[]
null
0
0
null
false
null
null
null
null
null
null
null
null
jRRYAy2mQAHy2Mq3f
katjagrace
2009-02-27T14:15:22.378Z
KatjaGrace
KatjaGrace
null
null
null
9,330
309
false
false
null
null
627
509
0
3
7
1
0
r38pkCm7wF4M44MDQ
User
null
null
null
[ "trustLevel1", "canModeratePersonal", "alignmentVoters", "alignmentForum" ]
null
null
ZpMDQsgLY9eF89ocF
SocialPreviewType
5c6392dcbcb4ac6367c170fe
<p>Philosophically that is. Psychologically he fulfils an important role &#8211; to distance us from philosophy.</p><p>In no way would the existence of a God alter the important properties of the universe. Most of the problems a God supposedly solves are merely shifted to the other side of him &#8211; a step further away from humans, where we can comfortably ignore them.</p><p>Some solutions God doesn&#8217;t really provide (presumably all thought of before by various philosophers, but I don&#8217;t know which ones, and it&#8217;s irrelevant, so please excuse the plagiarism) :</p><p><span style="font-weight:bold;">Creator of the universe: </span>An obvious one. Where did God come from then? If he&#8217;s existed forever then so could a universe. If you think something as complex as a universe couldn&#8217;t come from nothing, how complex would God have to be to be able to make universes?</p><p><span style="font-weight:bold;">Source of morality: </span>Where does God get his moral principles from? If he invents them himself they are just as arbitrary a set of restrictions on behaviour as any other (such as an atheist&#8217;s morals are feared to be by the religious). Why follow them? If they are inherent in the universe, related to other people, or a matter of choice then God isn&#8217;t needed.</p><p>Morality is a set of value judgements. If God and I both have a set of value judgements (a moral code), to say that God&#8217;s takes precedence is a value judgement in itself. Who judges? God? Why?</p><p><span style="font-weight:bold;">Provider of free will:</span> For reasons discussed in the previous post, <span style="font-style:italic;"><a href="http://meteuphoric.blogspot.com/2007/07/free-will-isnt-concept-unless-you-mean.html">Free will isn&#8217;t a concept (unless you mean determinism)</a>, </span>God can&#8217;t have &#8211; or give humans &#8211; free will which isn&#8217;t deterministic. The absence of God&#8217;s &#8216;free will&#8217; is even more apparent if he must be good all the time (unless he invents his own changeable moral code as he goes, but is that the kind of morality God should subscribe to? Well yes, if he does! But there&#8217;s still the old problem of free will not existing &#8211; he can&#8217;t escape).</p><p>If he&#8217;s all powerful as well, then he just ends up as another natural law &#8211; one that makes good things always happen. Anyone who&#8217;s been alive can tell you there&#8217;s fairly solid empirical evidence against such a law existing, but my point isn&#8217;t to draw attention to the problem of evil so much as to point out that natural laws are nothing new.</p><p>The final picture? A God who may well exist*. But who... </p>
Philosophically that is. Psychologically he fulfils an important role – to distance us from philosophy. In no way would the existence of a God alter the important properties of the universe. Most of the problems a God supposedly solves are merely shifted to the other side of him – a step further away from humans, where we can comfortably ignore them. Some solutions God doesn’t really provide (presumably all thought of before by various philosophers, but I don’t know which ones, and it’s irrelevant, so please excuse the plagiarism) : Creator of the universe: An obvious one. Where did God come from then? If he’s existed forever then so could a universe. If you think something as complex as a universe couldn’t come from nothing, how complex would God have to be to be able to make universes? Source of morality: Where does God get his moral principles from? If he invents them himself they are just as arbitrary a set of restrictions on behaviour as any other (such as an atheist’s morals are feared to be by the religious). Why follow them? If they are inherent in the universe, related to other people, or a matter of choice then God isn’t needed. Morality is a set of value judgements. If God and I both have a set of value judgements (a moral code), to say that God’s takes precedence is a value judgement in itself. Who judges? God? Why? Provider of free will: For reasons discussed in the previous post, Free will isn’t a concept (unless you mean determinism), God can’t have – or give humans – free will which isn’t deterministic. The absence of God’s ‘free will’ is even more apparent if he must be good all the time (unless he invents his own changeable moral code as he goes, but is that the kind of morality God should subscribe to? Well yes, if he does! But there’s still the old problem of free will not existing – he can’t escape). If he’s all powerful as well, then he just ends up as another natural law – one that makes good things always happen. Anyone who’s been aliv
462
1.0.0
Revision
false
null
null
CrosspostOutput
nYkMLFpx77Rz3uo9c
belief-as-attire
Belief as Attire
null
false
false
false
null
nmk3nLpQE89dMRzzN
null
true
false
false
false
Post
null
2007-08-02T17:13:56.000Z
null
false
false
2
2
2018-01-30T00:32:03.501Z
false
false
post
[]
null
null
APXkuLs2HjoCc88mC
105
132
142
false
0.000161
null
false
false
2024-03-08T11:23:53.236Z
null
rationality
null
null
null
false
false
null
null
null
false
false
null
null
null
null
null
null
null
null
null
null
false
null
null
null
null
XtphY3uYHwruKqDyG
null
null
null
false
null
[]
null
6
0
null
false
false
null
null
true
false
false
0
0
0
null
null
null
null
null
null
dLBJ6BGabxdsXrxZG
false
0
0
namesAttachedReactions
false
[]
2
null
null
null
null
[ { "__typename": "Tag", "_id": "DdgSyQoZXjj3KnF4N", "adminOnly": false, "afBaseScore": 9, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 19, "canEditUserIds": null, "core": false, "createdAt": "2020-07-13T15:43:11.661Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Tribalism", "needsReview": false, "noindex": false, "postCount": 68, "score": 19, "shortName": null, "slug": "tribalism", "suggestedAsFilter": false, "userId": "qxJ28GN72aiJu96iF", "voteCount": 2, "wikiOnly": false }, { "__typename": "Tag", "_id": "LDTSbmXtokYAsEq8e", "adminOnly": false, "afBaseScore": 3, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 9, "canEditUserIds": null, "core": false, "createdAt": "2020-05-24T07:47:20.152Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Motivated Reasoning", "needsReview": false, "noindex": false, "postCount": 73, "score": 9, "shortName": null, "slug": "motivated-reasoning", "suggestedAsFilter": false, "userId": "qxJ28GN72aiJu96iF", "voteCount": 1, "wikiOnly": false }, { "__typename": "Tag", "_id": "iP2X4jQNHMWHRNPne", "adminOnly": false, "afBaseScore": 3, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 9, "canEditUserIds": null, "core": false, "createdAt": "2020-06-08T00:06:01.955Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Motivations", "needsReview": false, "noindex": false, "postCount": 200, "score": 9, "shortName": null, "slug": "motivations", "suggestedAsFilter": false, "userId": "qgdGA4ZEyW7zNdK84", "voteCount": 1, "wikiOnly": false }, { "__typename": "Tag", "_id": "EnFKSZYiDHqMJuvJL", "adminOnly": false, "afBaseScore": 9, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 19, "canEditUserIds": null, "core": false, "createdAt": "2020-04-22T00:33:26.846Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", 
"displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Social Reality", "needsReview": false, "noindex": false, "postCount": 64, "score": 19, "shortName": null, "slug": "social-reality", "suggestedAsFilter": false, "userId": "r38pkCm7wF4M44MDQ", "voteCount": 2, "wikiOnly": false }, { "__typename": "Tag", "_id": "Ng8Gice9KNkncxqcj", "adminOnly": false, "afBaseScore": 0, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [] }, "baseScore": 1, "canEditUserIds": null, "core": true, "createdAt": "2020-06-14T22:24:17.072Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 100, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "iMqytjy9ns89Fzfyv", "displayName": "miakko" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Rationality", "needsReview": false, "noindex": false, "postCount": 4302, "score": 1, "shortName": null, "slug": "rationality", "suggestedAsFilter": true, "userId": "r38pkCm7wF4M44MDQ", "voteCount": 1, "wikiOnly": false } ]
null
0
0
null
false
null
null
0
132
0
0
9
0
nmk3nLpQE89dMRzzN
eliezer_yudkowsky
2009-02-23T21:58:56.739Z
Eliezer_Yudkowsky
Eliezer Yudkowsky
null
null
null
150,014
1,892
false
false
null
null
951
7,677
40
18
120
1
3,803
r38pkCm7wF4M44MDQ
User
reign-of-terror
[ "sBWszXPhPsNNemv4Q", "YBHSPmZEfyyY2E2au" ]
true
[ "trustLevel1", "alignmentVoters", "alignmentForum", "canModeratePersonal" ]
null
null
nYkMLFpx77Rz3uo9c
SocialPreviewType
APXkuLs2HjoCc88mC
<p>I have so far distinguished between belief as <a href="https://www.lesswrong.com/lw/i3/making_beliefs_pay_rent_in_anticipated_experiences/">anticipation-controller</a>, <a href="https://www.lesswrong.com/lw/i4/belief_in_belief/">belief in belief</a>, <a href="https://www.lesswrong.com/lw/i6/professing_and_cheering/">professing and cheering</a>.&nbsp; Of these, we might call anticipation-controlling beliefs "proper beliefs" and the other forms "improper belief". Proper belief can be wrong or irrational, as when someone genuinely anticipates that prayer will cure their sick baby. But the other forms are arguably “not belief at all.”</p><p>Yet another form of improper belief is belief as group identification—as a way of belonging. Robin Hanson uses the excellent <a href="http://lesswrong.com/lw/i6/professing_and_cheering/egb">metaphor</a> of wearing unusual clothing, a group uniform like a priest’s vestments or a Jewish skullcap, and so I will call this “belief as attire.”</p><p>In terms of <a href="https://www.lesswrong.com/lw/i0/are_your_enemies_innately_evil/">humanly realistic psychology</a>, the Muslims who flew planes into the World Trade Center undoubtedly saw themselves as heroes defending truth, justice, and the Islamic Way from hideous alien monsters a la the movie <i>Independence Day</i>. Only a very inexperienced nerd, the sort of nerd who has no idea how non-nerds see the world, would say this out loud in an Alabama bar. It is not an American thing to say. The American thing to say is that the terrorists “hate our freedom” and that flying a plane into a building is a “cowardly act.” You cannot say the phrases “heroic self-sacrifice” and “suicide bomber” in the same sentence, even for the sake of accurately describing how the Enemy sees the world. The very <i>concept</i> of the courage and altruism of a suicide bomber is Enemy attire—you can tell, because the Enemy talks about it. The cowardice and sociopathy of a suicide bomber is American attire. There are no quote marks you can use to talk about how the Enemy sees the world; it would be like dressing up as a Nazi for Halloween.</p><p>Belief-as-attire may help explain how people can be <i>passionate</i> about improper beliefs. Mere belief in belief, or religious professing, would have some trouble creating genuine, deep, powerful emotional effects. Or so I suspect; I confess I’m not an expert here. But my impression is this: People who’ve stopped anticipating-as-if their religion is true, will go to great lengths to <i>convince</i> themselves they are passionate, and this desperation can be mistaken for passion. But it’s not the same fire they had as a child.</p><p>On the other hand, it is very easy for a human being to genuinely, passionately, gut-level belong to a group, to cheer for<a href="https://www.lesswrong.com/lw/gt/a_fable_of_science_and_politics/"> their favorite sports team</a>.<sup>1</sup> Identifyin... </p>
I have so far distinguished between belief as anticipation-controller, belief in belief, professing and cheering.  Of these, we might call anticipation-controlling beliefs "proper beliefs" and the other forms "improper belief". Proper belief can be wrong or irrational, as when someone genuinely anticipates that prayer will cure their sick baby. But the other forms are arguably “not belief at all.” Yet another form of improper belief is belief as group identification—as a way of belonging. Robin Hanson uses the excellent metaphor of wearing unusual clothing, a group uniform like a priest’s vestments or a Jewish skullcap, and so I will call this “belief as attire.” In terms of humanly realistic psychology, the Muslims who flew planes into the World Trade Center undoubtedly saw themselves as heroes defending truth, justice, and the Islamic Way from hideous alien monsters a la the movie Independence Day. Only a very inexperienced nerd, the sort of nerd who has no idea how non-nerds see the world, would say this out loud in an Alabama bar. It is not an American thing to say. The American thing to say is that the terrorists “hate our freedom” and that flying a plane into a building is a “cowardly act.” You cannot say the phrases “heroic self-sacrifice” and “suicide bomber” in the same sentence, even for the sake of accurately describing how the Enemy sees the world. The very concept of the courage and altruism of a suicide bomber is Enemy attire—you can tell, because the Enemy talks about it. The cowardice and sociopathy of a suicide bomber is American attire. There are no quote marks you can use to talk about how the Enemy sees the world; it would be like dressing up as a Nazi for Halloween. Belief-as-attire may help explain how people can be passionate about improper beliefs. Mere belief in belief, or religious professing, would have some trouble creating genuine, deep, powerful emotional effects. Or so I suspect; I confess I’m not an expert here. But my impression i
477
2.4.0
Revision
false
null
null
CrosspostOutput
RmCjazjupRGcHSm5N
professing-and-cheering
Professing and Cheering
null
false
false
false
null
nmk3nLpQE89dMRzzN
null
true
false
false
false
Post
null
2007-08-02T07:20:21.000Z
null
false
false
2
2
2018-01-30T00:32:03.501Z
false
false
post
[]
null
null
YYpiWcpfprJ2NsnzD
45
128
126
false
0.000144
null
false
false
2025-02-06T19:38:19.852Z
null
rationality
null
null
null
false
false
null
null
null
false
false
null
null
null
null
null
null
null
null
null
null
false
null
null
null
null
XtphY3uYHwruKqDyG
null
null
null
false
null
[]
null
5
0
null
false
false
null
null
true
false
false
0
0
0
null
null
null
null
null
null
kzyriJkTdQkycFA3D
false
0
0
namesAttachedReactions
false
[]
3
null
null
null
null
[ { "__typename": "Tag", "_id": "iP2X4jQNHMWHRNPne", "adminOnly": false, "afBaseScore": 3, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 9, "canEditUserIds": null, "core": false, "createdAt": "2020-06-08T00:06:01.955Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Motivations", "needsReview": false, "noindex": false, "postCount": 200, "score": 9, "shortName": null, "slug": "motivations", "suggestedAsFilter": false, "userId": "qgdGA4ZEyW7zNdK84", "voteCount": 1, "wikiOnly": false }, { "__typename": "Tag", "_id": "NSMKfa8emSbGNXRKD", "adminOnly": false, "afBaseScore": 0, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [] }, "baseScore": 0, "canEditUserIds": null, "core": false, "createdAt": "2020-05-15T23:11:08.425Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Religion", "needsReview": false, "noindex": false, "postCount": 218, "score": 0, "shortName": null, "slug": "religion", "suggestedAsFilter": false, "userId": "qgdGA4ZEyW7zNdK84", "voteCount": 0, "wikiOnly": false }, { "__typename": "Tag", "_id": "gHCNhqxuJq2bZ2akb", "adminOnly": false, "afBaseScore": 0, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [] }, "baseScore": 0, "canEditUserIds": null, "core": false, "createdAt": "2020-07-10T11:36:05.706Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Social & Cultural Dynamics", "needsReview": false, "noindex": false, "postCount": 384, "score": 0, "shortName": null, "slug": "social-and-cultural-dynamics", "suggestedAsFilter": false, "userId": "qxJ28GN72aiJu96iF", "voteCount": 0, "wikiOnly": false }, { "__typename": "Tag", "_id": "EnFKSZYiDHqMJuvJL", "adminOnly": false, "afBaseScore": 9, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 19, "canEditUserIds": null, "core": false, "createdAt": "2020-04-22T00:33:26.846Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Social Reality", "needsReview": false, "noindex": false, "postCount": 64, "score": 19, "shortName": null, "slug": "social-reality", "suggestedAsFilter": false, "userId": "r38pkCm7wF4M44MDQ", "voteCount": 2, 
"wikiOnly": false }, { "__typename": "Tag", "_id": "DdgSyQoZXjj3KnF4N", "adminOnly": false, "afBaseScore": 9, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 19, "canEditUserIds": null, "core": false, "createdAt": "2020-07-13T15:43:11.661Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Tribalism", "needsReview": false, "noindex": false, "postCount": 68, "score": 19, "shortName": null, "slug": "tribalism", "suggestedAsFilter": false, "userId": "qxJ28GN72aiJu96iF", "voteCount": 2, "wikiOnly": false }, { "__typename": "Tag", "_id": "Ng8Gice9KNkncxqcj", "adminOnly": false, "afBaseScore": 0, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [] }, "baseScore": 1, "canEditUserIds": null, "core": true, "createdAt": "2020-06-14T22:24:17.072Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 100, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "iMqytjy9ns89Fzfyv", "displayName": "miakko" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Rationality", "needsReview": false, "noindex": false, "postCount": 4302, "score": 1, "shortName": null, "slug": "rationality", "suggestedAsFilter": true, "userId": "r38pkCm7wF4M44MDQ", "voteCount": 1, "wikiOnly": false } ]
null
0
0
null
false
null
null
0
128
0
0
10
0
nmk3nLpQE89dMRzzN
eliezer_yudkowsky
2009-02-23T21:58:56.739Z
Eliezer_Yudkowsky
Eliezer Yudkowsky
null
null
null
150,014
1,892
false
false
null
null
951
7,677
40
18
120
1
3,803
r38pkCm7wF4M44MDQ
User
reign-of-terror
[ "sBWszXPhPsNNemv4Q", "YBHSPmZEfyyY2E2au" ]
true
[ "trustLevel1", "alignmentVoters", "alignmentForum", "canModeratePersonal" ]
null
null
RmCjazjupRGcHSm5N
SocialPreviewType
YYpiWcpfprJ2NsnzD
<p>I once attended a panel on the topic, “Are science and religion compatible?” One of the women on the panel, a pagan, held forth interminably upon how she believed that the Earth had been created when a giant primordial cow was born into the primordial abyss, who licked a primordial god into existence, whose descendants killed a primordial giant and used its corpse to create the Earth, etc. The tale was long, and detailed, and more absurd than the Earth being supported on the back of a giant turtle. And the speaker clearly knew enough science to know this.</p><p>I still find myself struggling for words to describe what I saw as this woman spoke. She spoke with . . . pride? Self-satisfaction? A deliberate flaunting of herself?</p><p>The woman went on describing her creation myth for what seemed like forever, but was probably only five minutes. That strange pride/satisfaction/flaunting clearly had something to do with her <i>knowing</i> that her beliefs were scientifically outrageous. And it wasn’t that she hated science; as a panelist she professed that religion and science were compatible. She even talked about how it was quite understandable that the Vikings talked about a primordial abyss, given the land in which they lived—explained away her own religion!—and yet nonetheless insisted this was what she “believed,” said with peculiar satisfaction.</p><p>I’m not sure that Daniel Dennett’s concept of “belief in belief” stretches to cover this event. It was weirder than that. She didn’t recite her creation myth with the fanatical faith of someone who needs to reassure herself. She didn’t act like she expected us, the audience, to be convinced—or like she needed our belief to validate her.</p><p>Dennett, in addition to introducing the idea of belief in belief, has also suggested that much of what is called “religious belief” should really be studied as “religious profession” instead. Suppose an alien anthropologist studied a group of English students who all seemingly <i>believed</i> that Wulky Wilkensen was a retropositional author. The appropriate question may not be “Why do the students all believe this strange belief?” but “Why do they all write this strange sentence on quizzes?” Even if a sentence is essentially meaningless, you can still know when you are supposed to chant the response aloud.</p><p>I think Dennett may be slightly too cynical in suggesting that religious profession is <i>just</i> saying the beli... </p>
I once attended a panel on the topic, “Are science and religion compatible?” One of the women on the panel, a pagan, held forth interminably upon how she believed that the Earth had been created when a giant primordial cow was born into the primordial abyss, who licked a primordial god into existence, whose descendants killed a primordial giant and used its corpse to create the Earth, etc. The tale was long, and detailed, and more absurd than the Earth being supported on the back of a giant turtle. And the speaker clearly knew enough science to know this. I still find myself struggling for words to describe what I saw as this woman spoke. She spoke with . . . pride? Self-satisfaction? A deliberate flaunting of herself? The woman went on describing her creation myth for what seemed like forever, but was probably only five minutes. That strange pride/satisfaction/flaunting clearly had something to do with her knowing that her beliefs were scientifically outrageous. And it wasn’t that she hated science; as a panelist she professed that religion and science were compatible. She even talked about how it was quite understandable that the Vikings talked about a primordial abyss, given the land in which they lived—explained away her own religion!—and yet nonetheless insisted this was what she “believed,” said with peculiar satisfaction. I’m not sure that Daniel Dennett’s concept of “belief in belief” stretches to cover this event. It was weirder than that. She didn’t recite her creation myth with the fanatical faith of someone who needs to reassure herself. She didn’t act like she expected us, the audience, to be convinced—or like she needed our belief to validate her. Dennett, in addition to introducing the idea of belief in belief, has also suggested that much of what is called “religious belief” should really be studied as “religious profession” instead. Suppose an alien anthropologist studied a group of English students who all seemingly believed that Wulky Wilkense
720
2.2.0
Revision
false
null
null
CrosspostOutput
NKaPFf98Y5otMbsPk
bayesian-judo
Bayesian Judo
null
false
false
false
null
nmk3nLpQE89dMRzzN
null
true
false
false
false
Post
null
2007-07-31T05:53:13.000Z
null
false
false
2
2
2018-01-30T00:32:03.501Z
false
false
post
[]
null
null
5c6391edbcb4ac6367c11bd9
110
132
89
false
0.000105
null
false
false
2024-04-14T16:15:27.319Z
null
null
null
null
null
false
false
null
null
null
false
false
null
null
null
null
null
null
null
null
null
null
false
null
null
null
null
XtphY3uYHwruKqDyG
null
null
null
false
null
[]
null
4
0
null
false
false
null
null
true
false
false
0
0
0
null
null
null
null
null
null
null
false
0
0
namesAttachedReactions
false
[]
1
null
null
null
null
[ { "__typename": "Tag", "_id": "WH5ZmNSjZmK9SMj7k", "adminOnly": false, "afBaseScore": 3, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 9, "canEditUserIds": null, "core": false, "createdAt": "2020-08-10T21:53:03.399Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Aumann's Agreement Theorem", "needsReview": false, "noindex": false, "postCount": 26, "score": 9, "shortName": null, "slug": "aumann-s-agreement-theorem", "suggestedAsFilter": false, "userId": "qxJ28GN72aiJu96iF", "voteCount": 1, "wikiOnly": false }, { "__typename": "Tag", "_id": "tZsfB6WfpRy6kFb6q", "adminOnly": false, "afBaseScore": 9, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 20, "canEditUserIds": null, "core": false, "createdAt": "2020-04-24T20:32:21.413Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" }, { "_id": "4RnpNbKfsiCHDHLcu", "displayName": "haussteiner" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Conservation of Expected Evidence", "needsReview": false, "noindex": false, "postCount": 21, "score": 20, "shortName": null, "slug": "conservation-of-expected-evidence", "suggestedAsFilter": false, "userId": "nLbwLhBaQeG6tCNDN", "voteCount": 3, "wikiOnly": false }, { "__typename": "Tag", "_id": "wzgcQCrwKfETcBpR9", "adminOnly": false, "afBaseScore": 3, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 9, "canEditUserIds": null, "core": false, "createdAt": "2020-06-19T19:27:22.164Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Disagreement", "needsReview": false, "noindex": false, "postCount": 134, "score": 9, "shortName": null, "slug": "disagreement", "suggestedAsFilter": false, "userId": "r38pkCm7wF4M44MDQ", "voteCount": 1, "wikiOnly": false }, { "__typename": "Tag", "_id": "sYm3HiWcfZvrGu3ui", "adminOnly": false, "afBaseScore": 2, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "nLbwLhBaQeG6tCNDN", "displayName": "jimrandomh" } ] }, "baseScore": 12, "canEditUserIds": null, "core": true, "createdAt": "2020-06-14T22:24:22.097Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 2000, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": 
"nLbwLhBaQeG6tCNDN", "displayName": "jimrandomh" }, { "_id": "sof55TPMQaeBaxhsS", "displayName": "tommylees112" }, { "_id": "AayjS8XzcnDKhGdTv", "displayName": "shark" }, { "_id": "HnALuwRdo6k9HLaMt", "displayName": "Alex Firssoff" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "AI", "needsReview": false, "noindex": false, "postCount": 12545, "score": 12, "shortName": null, "slug": "ai", "suggestedAsFilter": true, "userId": "r38pkCm7wF4M44MDQ", "voteCount": 4, "wikiOnly": false } ]
null
0
0
null
false
null
null
0
132
0
0
15
0
nmk3nLpQE89dMRzzN
eliezer_yudkowsky
2009-02-23T21:58:56.739Z
Eliezer_Yudkowsky
Eliezer Yudkowsky
null
null
null
150014
1892
false
false
null
null
951
7677
40
18
120
1
3803
r38pkCm7wF4M44MDQ
User
reign-of-terror
[ "sBWszXPhPsNNemv4Q", "YBHSPmZEfyyY2E2au" ]
true
[ "trustLevel1", "alignmentVoters", "alignmentForum", "canModeratePersonal" ]
null
null
NKaPFf98Y5otMbsPk
SocialPreviewType
5c6391edbcb4ac6367c11bd9
<p class="MsoNormal">You can have some fun with people whose <a href="/lw/i4/belief_in_belief/">anticipations get out of sync with what they believe they believe</a>.</p> <p class="MsoNormal">I was once at a dinner party, trying to explain to a man what I did for a living, when he said: "I don't believe Artificial Intelligence is possible because only God can make a soul."</p> <p class="MsoNormal">At this point I must have been divinely inspired, because I instantly responded: "You mean if I can make an Artificial Intelligence, it proves your religion is false?"</p><p><a id="more"></a></p> <p class="MsoNormal">He said, "What?"</p> <p class="MsoNormal">I said, "Well, if your religion predicts that I can't possibly make an Artificial Intelligence, then, if I make an Artificial Intelligence, it means your religion is false. Either your religion allows that it might be possible for me to build an AI; or, if I build an AI, that disproves your religion."</p> <p class="MsoNormal">There was a pause, as the one realized he had just made his hypothesis vulnerable to falsification, and then he said, "Well, I didn't mean that you couldn't make an intelligence, just that it couldn't be emotional in the same way we are."</p> <p class="MsoNormal">I said, "So if I make an Artificial Intelligence that, without being deliberately preprogrammed with any sort of script, starts talking about an emotional life that sounds like ours, <em>that</em> means your religion is wrong."</p> <p class="MsoNormal">He said, "Well, um, I guess we may have to agree to disagree on this."</p> <p class="MsoNormal">I said: "No, we can't, actually. There's a theorem of rationality called Aumann's Agreement Theorem which shows that no two rationalists can agree to disagree. If two people disagree with each other, at least one of them must be doing something wrong."</p> <p class="MsoNormal">We went back and forth on this briefly. Finally, he said, "Well, I guess I was really trying to say that I don't think you can make something eternal."</p> <p class="MsoNormal">I said, "Well, I don't think so either! I'm glad we were able to reach agreement on this, as Aumann's Agreement Theorem requires."&nbsp; I stretched out my hand, and he shook it, and then he wandered away.</p> <p class="MsoNormal">A woman who had stood nearby, listening to the conversation, said to me gravely, "That was beautiful."</p> <p class="MsoNormal">"Thank you very much," I said.</p><p>&nbsp;</p> <p style="text-align:right">Part of the sequence <a href="http://wiki.lesswrong.com/wiki/Mysterious_Answers_to_Mysterious_Questions"><em>Mysterious Answers to Mysterious Questions</em></a></p> <p style="text-align:right">Next post: "<a href="/lw/i6/professing_and_cheering/">Professing and Cheering</a>"</p> <p style="text-align:right">Previous post: "<a href="/lw/i4/belief_in_belief/">Belief in Belief</a>"</p>
You can have some fun with people whose anticipations get out of sync with what they believe they believe. I was once at a dinner party, trying to explain to a man what I did for a living, when he said: "I don't believe Artificial Intelligence is possible because only God can make a soul." At this point I must have been divinely inspired, because I instantly responded: "You mean if I can make an Artificial Intelligence, it proves your religion is false?" He said, "What?" I said, "Well, if your religion predicts that I can't possibly make an Artificial Intelligence, then, if I make an Artificial Intelligence, it means your religion is false. Either your religion allows that it might be possible for me to build an AI; or, if I build an AI, that disproves your religion." There was a pause, as the one realized he had just made his hypothesis vulnerable to falsification, and then he said, "Well, I didn't mean that you couldn't make an intelligence, just that it couldn't be emotional in the same way we are." I said, "So if I make an Artificial Intelligence that, without being deliberately preprogrammed with any sort of script, starts talking about an emotional life that sounds like ours, that means your religion is wrong." He said, "Well, um, I guess we may have to agree to disagree on this." I said: "No, we can't, actually. There's a theorem of rationality called Aumann's Agreement Theorem which shows that no two rationalists can agree to disagree. If two people disagree with each other, at least one of them must be doing something wrong." We went back and forth on this briefly. Finally, he said, "Well, I guess I was really trying to say that I don't think you can make something eternal." I said, "Well, I don't think so either! I'm glad we were able to reach agreement on this, as Aumann's Agreement Theorem requires."  I stretched out my hand, and he shook it, and then he wandered away. A woman who had stood nearby, listening to the conversation, said to me g
367
1.0.0
Revision
false
null
null
CrosspostOutput
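The post in the record above leans on Aumann's Agreement Theorem ("no two rationalists can agree to disagree"). For reference, here is a standard statement of the result from Aumann (1976), in LaTeX; the formalization below is a reference sketch added for clarity, not text from the archived post:

% Aumann (1976), "Agreeing to Disagree" -- standard statement, given here for reference.
\textbf{Theorem.} Let agents $1$ and $2$ share a common prior $P$ on a finite state space $\Omega$, with private information given by partitions $\mathcal{P}_1$ and $\mathcal{P}_2$. If at a state $\omega$ it is common knowledge that agent $1$'s posterior for an event $A$ is $q_1$ and agent $2$'s posterior is $q_2$, then $q_1 = q_2$.

So persistent disagreement between such agents implies different priors, information that is not common knowledge, or a reasoning error somewhere, which is the sense in which the post invokes it.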
CqyJzDZWvGhhFJ7dY
belief-in-belief
Belief in Belief
null
false
false
false
null
nmk3nLpQE89dMRzzN
null
true
false
false
false
Post
null
2007-07-29T17:49:43.000Z
null
false
false
2
2
2018-01-30T00:32:03.501Z
false
false
post
[]
null
null
JuZyMs5b2rDNFjAPv
178
197
222
false
0.000246
null
false
false
2025-05-12T09:19:27.448Z
null
rationality
null
null
null
false
false
null
null
null
false
false
null
null
null
null
null
null
null
null
null
null
false
null
null
null
null
XtphY3uYHwruKqDyG
null
null
null
false
null
[]
null
12
0
2007-07-29T17:49:43.000Z
false
false
null
null
true
false
false
0
0
0
null
null
null
null
null
null
GHPDsvFEzps6kzhGp
false
0
0
namesAttachedReactions
false
[]
5
null
null
null
null
[ { "__typename": "Tag", "_id": "YTCrHWYHAsAD74EHo", "adminOnly": false, "afBaseScore": 9, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 19, "canEditUserIds": null, "core": false, "createdAt": "2020-04-29T02:47:19.876Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 15, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Self-Deception", "needsReview": false, "noindex": false, "postCount": 89, "score": 19, "shortName": null, "slug": "self-deception", "suggestedAsFilter": false, "userId": "nLbwLhBaQeG6tCNDN", "voteCount": 2, "wikiOnly": false }, { "__typename": "Tag", "_id": "HXA9WxPpzZCCEwXHT", "adminOnly": false, "afBaseScore": 3, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 9, "canEditUserIds": null, "core": false, "createdAt": "2020-07-13T14:38:18.890Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Alief", "needsReview": false, "noindex": false, "postCount": 24, "score": 9, "shortName": null, "slug": "alief", "suggestedAsFilter": false, "userId": "qxJ28GN72aiJu96iF", "voteCount": 1, "wikiOnly": false }, { "__typename": "Tag", "_id": "SJFsFfFhE6m2ThAYJ", "adminOnly": false, "afBaseScore": 3, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 9, "canEditUserIds": null, "core": false, "createdAt": "2020-07-13T16:19:09.687Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Anticipated Experiences", "needsReview": false, "noindex": false, "postCount": 49, "score": 9, "shortName": null, "slug": "anticipated-experiences", "suggestedAsFilter": false, "userId": "qxJ28GN72aiJu96iF", "voteCount": 1, "wikiOnly": false }, { "__typename": "Tag", "_id": "5f5c37ee1b5cdee568cfb15a", "adminOnly": false, "afBaseScore": null, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [] }, "baseScore": 0, "canEditUserIds": null, "core": false, "createdAt": "2020-09-11T19:58:52.025Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Belief In Belief", "needsReview": false, "noindex": false, "postCount": 2, "score": 0, "shortName": 
null, "slug": "belief-in-belief", "suggestedAsFilter": false, "userId": "9c2mQkLQq6gQSksMs", "voteCount": 0, "wikiOnly": true }, { "__typename": "Tag", "_id": "LDTSbmXtokYAsEq8e", "adminOnly": false, "afBaseScore": 3, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 9, "canEditUserIds": null, "core": false, "createdAt": "2020-05-24T07:47:20.152Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Motivated Reasoning", "needsReview": false, "noindex": false, "postCount": 73, "score": 9, "shortName": null, "slug": "motivated-reasoning", "suggestedAsFilter": false, "userId": "qxJ28GN72aiJu96iF", "voteCount": 1, "wikiOnly": false }, { "__typename": "Tag", "_id": "Ng8Gice9KNkncxqcj", "adminOnly": false, "afBaseScore": 0, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [] }, "baseScore": 1, "canEditUserIds": null, "core": true, "createdAt": "2020-06-14T22:24:17.072Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 100, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "iMqytjy9ns89Fzfyv", "displayName": "miakko" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Rationality", "needsReview": false, "noindex": false, "postCount": 4302, "score": 1, "shortName": null, "slug": "rationality", "suggestedAsFilter": true, "userId": "r38pkCm7wF4M44MDQ", "voteCount": 1, "wikiOnly": false } ]
null
0
0
null
false
null
null
0
197
0
0
21
0
nmk3nLpQE89dMRzzN
eliezer_yudkowsky
2009-02-23T21:58:56.739Z
Eliezer_Yudkowsky
Eliezer Yudkowsky
null
null
null
150014
1892
false
false
null
null
951
7677
40
18
120
1
3803
r38pkCm7wF4M44MDQ
User
reign-of-terror
[ "sBWszXPhPsNNemv4Q", "YBHSPmZEfyyY2E2au" ]
true
[ "trustLevel1", "alignmentVoters", "alignmentForum", "canModeratePersonal" ]
null
null
CqyJzDZWvGhhFJ7dY
SocialPreviewType
JuZyMs5b2rDNFjAPv
<p>Carl Sagan once told a <a href="http://www.godlessgeeks.com/LINKS/Dragon.htm">parable</a> of someone who comes to us and claims: “There is a dragon in my garage.” Fascinating! We reply that we wish to see this dragon—let us set out at once for the garage! “But wait,” the claimant says to us, “it is an <i>invisible</i> dragon.”</p><p>Now as Sagan points out, this doesn’t make the hypothesis unfalsifiable. Perhaps we go to the claimant’s garage, and although we see no dragon, we hear heavy breathing from no visible source; footprints mysteriously appear on the ground; and instruments show that something in the garage is consuming oxygen and breathing out carbon dioxide.</p><p>But now suppose that we say to the claimant, “Okay, we’ll visit the garage and see if we can hear heavy breathing,” and the claimant quickly says no, it’s an <i>inaudible</i> dragon. We propose to measure carbon dioxide in the air, and the claimant says the dragon does not breathe. We propose to toss a bag of flour into the air to see if it outlines an invisible dragon, and the claimant immediately says, “The dragon is permeable to flour.”</p><p>Carl Sagan used this parable to illustrate the classic moral that poor hypotheses need to do fast footwork to avoid falsification. But I tell this parable to make a different point: The claimant must have an accurate model of the situation <i>somewhere</i> in their mind, because they can anticipate, in advance, <i>exactly which experimental results they’ll need to excuse.</i></p><p>Some philosophers have been much confused by such scenarios, asking, “Does the claimant <i>really</i> believe there’s a dragon present, or not?” As if the human brain only had enough disk space to represent one belief at a time! Real minds are more tangled than that. There are different types of belief; not all beliefs are direct anticipations. The claimant clearly does not <i>anticipate</i> seeing anything unusual upon opening the garage door. Otherwise they wouldn’t make advance excuses. It may also be that the claimant’s pool of propositional beliefs contains the free-floating statement <i>There is a dragon in my garage.</i> It may seem, to a rationalist, that these two beliefs should collide and conflict even though they are of different types. Yet it is a physical fact that you can write “The sky is green!” next to a picture of a blue sky without the paper bursting into flames.</p><p>The rationalist virtue of empiricism is supposed to prevent us from making this class of mistake. We’re supposed to constan... </p>
Carl Sagan once told a parable of someone who comes to us and claims: “There is a dragon in my garage.” Fascinating! We reply that we wish to see this dragon—let us set out at once for the garage! “But wait,” the claimant says to us, “it is an invisible dragon.” Now as Sagan points out, this doesn’t make the hypothesis unfalsifiable. Perhaps we go to the claimant’s garage, and although we see no dragon, we hear heavy breathing from no visible source; footprints mysteriously appear on the ground; and instruments show that something in the garage is consuming oxygen and breathing out carbon dioxide. But now suppose that we say to the claimant, “Okay, we’ll visit the garage and see if we can hear heavy breathing,” and the claimant quickly says no, it’s an inaudible dragon. We propose to measure carbon dioxide in the air, and the claimant says the dragon does not breathe. We propose to toss a bag of flour into the air to see if it outlines an invisible dragon, and the claimant immediately says, “The dragon is permeable to flour.” Carl Sagan used this parable to illustrate the classic moral that poor hypotheses need to do fast footwork to avoid falsification. But I tell this parable to make a different point: The claimant must have an accurate model of the situation somewhere in their mind, because they can anticipate, in advance, exactly which experimental results they’ll need to excuse. Some philosophers have been much confused by such scenarios, asking, “Does the claimant really believe there’s a dragon present, or not?” As if the human brain only had enough disk space to represent one belief at a time! Real minds are more tangled than that. There are different types of belief; not all beliefs are direct anticipations. The claimant clearly does not anticipate seeing anything unusual upon opening the garage door. Otherwise they wouldn’t make advance excuses. It may also be that the claimant’s pool of propositional beliefs contains the free-floating statement There
1362
2.2.0
Revision
false
null
null
CrosspostOutput
a7n8GdKiAZRX86T5A
making-beliefs-pay-rent-in-anticipated-experiences
Making Beliefs Pay Rent (in Anticipated Experiences)
null
false
false
false
null
nmk3nLpQE89dMRzzN
null
true
false
false
false
Post
null
2007-07-28T22:59:48.000Z
null
false
false
2
2
2018-01-30T00:32:03.501Z
false
false
post
[]
null
null
mYao5XRDuPvRBLDKF
269
462
513
false
0.000553
null
false
false
2025-06-27T05:55:03.562Z
null
rationality
null
null
null
false
false
null
null
null
false
false
null
null
null
null
null
null
null
null
null
null
false
null
null
null
null
XtphY3uYHwruKqDyG
null
null
null
false
null
[]
null
23
0
2007-07-28T22:59:48.000Z
false
false
null
null
true
false
false
0
0
0
null
null
null
null
null
null
JLMKfnnGEaXkTixAJ
false
0
0
namesAttachedReactions
false
[]
4
null
null
null
null
[ { "__typename": "Tag", "_id": "SJFsFfFhE6m2ThAYJ", "adminOnly": false, "afBaseScore": 3, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 9, "canEditUserIds": null, "core": false, "createdAt": "2020-07-13T16:19:09.687Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Anticipated Experiences", "needsReview": false, "noindex": false, "postCount": 49, "score": 9, "shortName": null, "slug": "anticipated-experiences", "suggestedAsFilter": false, "userId": "qxJ28GN72aiJu96iF", "voteCount": 1, "wikiOnly": false }, { "__typename": "Tag", "_id": "EdRnMXBRbY5JDf5df", "adminOnly": false, "afBaseScore": 6, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "nmk3nLpQE89dMRzzN", "displayName": "Eliezer Yudkowsky" } ] }, "baseScore": 13, "canEditUserIds": null, "core": false, "createdAt": "2015-07-02T01:53:10.000Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "nmk3nLpQE89dMRzzN", "displayName": "Eliezer Yudkowsky" } ] }, "isArbitalImport": true, "isPlaceholderPage": false, "isSubforum": false, "name": "Epistemology", "needsReview": false, "noindex": false, "postCount": 424, "score": 13, "shortName": null, "slug": "epistemology", "suggestedAsFilter": false, "userId": "nmk3nLpQE89dMRzzN", "voteCount": 1, "wikiOnly": false }, { "__typename": "Tag", "_id": "32DdRimdM7sB5wmKu", "adminOnly": false, "afBaseScore": 3, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 9, "canEditUserIds": null, "core": false, "createdAt": "2020-08-02T08:13:33.288Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Empiricism", "needsReview": false, "noindex": false, "postCount": 45, "score": 9, "shortName": null, "slug": "empiricism", "suggestedAsFilter": false, "userId": "sKAL2jzfkYkDbQmx9", "voteCount": 1, "wikiOnly": false }, { "__typename": "Tag", "_id": "vcvfjGJwRmFbMMS3d", "adminOnly": false, "afBaseScore": null, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [] }, "baseScore": 0, "canEditUserIds": null, "core": false, "createdAt": "2020-08-05T09:44:25.740Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Principles", "needsReview": false, "noindex": false, "postCount": 23, "score": 0, "shortName": null, "slug": "principles", "suggestedAsFilter": false, "userId": "sKAL2jzfkYkDbQmx9", "voteCount": 
0, "wikiOnly": false }, { "__typename": "Tag", "_id": "dBPou4ihoQNY4cquv", "adminOnly": false, "afBaseScore": 3, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 9, "canEditUserIds": null, "core": false, "createdAt": "2020-08-01T16:09:30.226Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Psychology", "needsReview": false, "noindex": false, "postCount": 348, "score": 9, "shortName": null, "slug": "psychology", "suggestedAsFilter": false, "userId": "p8SHJFHRgZeMuw7qk", "voteCount": 1, "wikiOnly": false }, { "__typename": "Tag", "_id": "Ng8Gice9KNkncxqcj", "adminOnly": false, "afBaseScore": 0, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [] }, "baseScore": 1, "canEditUserIds": null, "core": true, "createdAt": "2020-06-14T22:24:17.072Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 100, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "iMqytjy9ns89Fzfyv", "displayName": "miakko" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Rationality", "needsReview": false, "noindex": false, "postCount": 4302, "score": 1, "shortName": null, "slug": "rationality", "suggestedAsFilter": true, "userId": "r38pkCm7wF4M44MDQ", "voteCount": 1, "wikiOnly": false } ]
null
0
0
null
false
null
null
0
462
0
0
38
0
nmk3nLpQE89dMRzzN
eliezer_yudkowsky
2009-02-23T21:58:56.739Z
Eliezer_Yudkowsky
Eliezer Yudkowsky
null
null
null
150014
1892
false
false
null
null
951
7677
40
18
120
1
3803
r38pkCm7wF4M44MDQ
User
reign-of-terror
[ "sBWszXPhPsNNemv4Q", "YBHSPmZEfyyY2E2au" ]
true
[ "trustLevel1", "alignmentVoters", "alignmentForum", "canModeratePersonal" ]
null
null
a7n8GdKiAZRX86T5A
SocialPreviewType
mYao5XRDuPvRBLDKF
<p>Thus begins the ancient parable:</p><p><i>If a tree falls in a forest and no one hears it, does it make a sound? One says, “Yes it does, for it makes vibrations in the air.” Another says, “No it does not, for there is no auditory processing in any brain.”</i></p><p>If there’s a foundational skill in the martial art of rationality, a mental stance on which all other technique rests, it might be this one: the ability to spot, inside your own head, psychological signs that you have a mental map of something, and signs that you don’t.</p><p>Suppose that, after a tree falls, the two arguers walk into the forest together. Will one expect to see the tree fallen to the right, and the other expect to see the tree fallen to the left? Suppose that before the tree falls, the two leave a sound recorder next to the tree. Would one, playing back the recorder, expect to hear something different from the other? Suppose they attach an electroencephalograph to any brain in the world; would one expect to see a different trace than the other?</p><p>Though the two argue, one saying “No,” and the other saying “Yes,” they do not anticipate any different experiences. The two think they have different models of the world, but they have no difference with respect to what they expect will <i>happen to</i> them; their maps of the world do not diverge in any sensory detail.</p><p>It’s tempting to try to eliminate this mistake class by insisting that the only legitimate kind of belief is an anticipation of sensory experience. But the world does, in fact, contain much that is not sensed directly. We don’t see the atoms underlying the brick, but the atoms are in fact there. There is a floor beneath your feet, but you don’t <i>experience</i> the floor directly; you see the light <i>reflected</i> from the floor, or rather, you see what your retina and visual cortex have processed of that light. To infer the floor from seeing the floor is to step back into the unseen causes of experience. It may seem like a very short and direct step, but it is still a step.</p><p>You stand on top of a tall building, next to a grandfather clock with an hour, minute, and ticking second hand. In your hand is a bowling ball, and you drop it off the roof. On which tick of the clock will you hear the crash of the bowling ball hitting the ground?</p><p>To answer precisely, you must use beliefs like <i>Earth’s gravity is 9.8 meters per second per second,</i> and <i>This building is around 120 meters ta</i>... </p>
Thus begins the ancient parable: If a tree falls in a forest and no one hears it, does it make a sound? One says, “Yes it does, for it makes vibrations in the air.” Another says, “No it does not, for there is no auditory processing in any brain.” If there’s a foundational skill in the martial art of rationality, a mental stance on which all other technique rests, it might be this one: the ability to spot, inside your own head, psychological signs that you have a mental map of something, and signs that you don’t. Suppose that, after a tree falls, the two arguers walk into the forest together. Will one expect to see the tree fallen to the right, and the other expect to see the tree fallen to the left? Suppose that before the tree falls, the two leave a sound recorder next to the tree. Would one, playing back the recorder, expect to hear something different from the other? Suppose they attach an electroencephalograph to any brain in the world; would one expect to see a different trace than the other? Though the two argue, one saying “No,” and the other saying “Yes,” they do not anticipate any different experiences. The two think they have different models of the world, but they have no difference with respect to what they expect will happen to them; their maps of the world do not diverge in any sensory detail. It’s tempting to try to eliminate this mistake class by insisting that the only legitimate kind of belief is an anticipation of sensory experience. But the world does, in fact, contain much that is not sensed directly. We don’t see the atoms underlying the brick, but the atoms are in fact there. There is a floor beneath your feet, but you don’t experience the floor directly; you see the light reflected from the floor, or rather, you see what your retina and visual cortex have processed of that light. To infer the floor from seeing the floor is to step back into the unseen causes of experience. It may seem like a very short and direct step, but it is still a
1112
2.1.0
Revision
false
null
null
CrosspostOutput
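The post in the record above sets up a concrete calculation: given beliefs like "Earth's gravity is 9.8 meters per second per second" and "this building is around 120 meters tall", predict on which tick of the clock you hear the bowling ball crash. A minimal sketch of that arithmetic in Python follows; the 9.8 m/s^2 and 120 m figures come from the post itself, while neglecting air resistance and taking the speed of sound as roughly 343 m/s are added simplifying assumptions, not claims from the original text.

import math

g = 9.8          # m/s^2, gravitational acceleration (figure from the post)
height = 120.0   # m, building height (figure from the post)
v_sound = 343.0  # m/s, assumed speed of sound at ground level (not from the post)

t_fall = math.sqrt(2 * height / g)  # free-fall time, ignoring air resistance
t_sound = height / v_sound          # time for the sound of the crash to travel back up
print(f"fall {t_fall:.2f} s + sound {t_sound:.2f} s = {t_fall + t_sound:.2f} s")
# Output: fall 4.95 s + sound 0.35 s = 5.30 s, i.e. about the fifth tick of the second hand.

The sketch illustrates the post's point: only beliefs that feed into an anticipated experience, like the predicted tick, are doing predictive work.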
gXgq2Fwm2s2GwhjF3
free-will-isn-t-a-concept-unless-you-mean-determinism
Free will isn’t a concept (unless you mean determinism)
null
false
false
false
null
jRRYAy2mQAHy2Mq3f
null
true
false
false
false
Post
null
2007-07-15T13:51:00.000Z
null
false
false
2
2
null
false
false
post
[]
null
null
5c6392dcbcb4ac6367c170ff
1
2
3
false
0.000004
null
false
false
2024-06-25T18:16:58.629Z
null
null
null
null
null
false
false
null
null
null
false
false
null
null
null
null
null
null
null
null
null
null
false
null
null
null
null
XtphY3uYHwruKqDyG
null
null
null
false
null
[]
null
0
0
null
false
false
null
null
true
false
false
0
0
0
null
null
null
null
null
null
null
false
0
0
namesAttachedReactions
false
[]
1
null
null
null
null
[ { "__typename": "Tag", "_id": "5f5c37ee1b5cdee568cfb1b8", "adminOnly": false, "afBaseScore": 3, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 10, "canEditUserIds": null, "core": false, "createdAt": "2020-09-11T19:58:52.186Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" }, { "_id": "dvMTMFdcjgWBxi9jp", "displayName": "wlxqt" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Free Will", "needsReview": false, "noindex": false, "postCount": 66, "score": 10, "shortName": null, "slug": "free-will", "suggestedAsFilter": false, "userId": "nmk3nLpQE89dMRzzN", "voteCount": 2, "wikiOnly": false } ]
null
0
0
null
false
null
null
null
null
null
null
null
null
jRRYAy2mQAHy2Mq3f
katjagrace
2009-02-27T14:15:22.378Z
KatjaGrace
KatjaGrace
null
null
null
9330
309
false
false
null
null
627
509
0
3
7
1
0
r38pkCm7wF4M44MDQ
User
null
null
null
[ "trustLevel1", "canModeratePersonal", "alignmentVoters", "alignmentForum" ]
null
null
gXgq2Fwm2s2GwhjF3
SocialPreviewType
5c6392dcbcb4ac6367c170ff
<p>Imagine something happens. For instance, you make a decision. There are three possibilities for this occurrence:</p> <ol> <li>It could be related purely to other factors (determinism)</li> <li>It could be not related to other factors (randomness)</li> <li>It could be a combination of these (a mixture of determinism and randomness)</li> </ol> <p>None of these are free will (as commonly understood). So where does the concept of free will fit in? How could an occurrence escape from being in one of these categories? Clearly it can&#8217;t. So there is no possibility of a concept of free will that is in opposition to determinism, let alone a chance of it existing in reality.</p><p>But you feel like you have free will (whatever that is &#8211; just don&#8217;t think about it), don&#8217;t you? Or to put it another way, you feel like your actions are neither determined nor random. You choose them.</p><p>And that is precisely why they are determined. They are determined by you. And you already exist to the finest detail at the time you are making the decision. If you made choices (or some element of them) not controlled by your personality, experience, thoughts and anything else that comes under the heading of ‘the state of your brain as a result of genetics and your prior environments’, they would be random, which still isn’t free will (not to mention being a less personal and less appealing model, if that&#8217;s how you choose your beliefs).</p><p>You might argue that you can choose what to think and how to feel, and how heavily to let those things influence you, when making a decision. That doesn&#8217;t alter the situation, however. Those are then choices too, and your decisions for them would presumably have to be made based on other thoughts and feelings, which you would presumably choose, and so on. The point at which free will should have occurred would just be shifted back indefinitely. Again you just have a long chain of cause and effect.</p><p>The closest thing you can have to free will is for your actions to be determined purely by the state of your brain. 
Free will is determinism.</p>
Imagine something happens. For instance, you make a decision. There are three possibilities for this occurrence: 1. It could be related purely to other factors (determinism) 2. It could be not related to other factors (randomness) 3. It could be a combination of these (a mixture of determinism and randomness) None of these are free will (as commonly understood). So where does the concept of free will fit in? How could an occurrence escape from being in one of these categories? Clearly it can’t. So there is no possibility of a concept of free will that is in opposition to determinism, let alone a chance of it existing in reality. But you feel like you have free will (whatever that is – just don’t think about it), don’t you? Or to put it another way, you feel like your actions are neither determined nor random. You choose them. And that is precisely why they are determined. They are determined by you. And you already exist to the finest detail at the time you are making the decision. If you made choices (or some element of them) not controlled by your personality, experience, thoughts and anything else that comes under the heading of ‘the state of your brain as a result of genetics and your prior environments’, they would be random, which still isn’t free will (not to mention being a less personal and less appealing model, if that’s how you choose your beliefs). You might argue that you can choose what to think and how to feel, and how heavily to let those things influence you, when making a decision. That doesn’t alter the situation, however. Those are then choices too, and your decisions for them would presumably have to be made based on other thoughts and feelings, which you would presumably choose, and so on. The point at which free will should have occurred would just be shifted back indefinitely. Again you just have a long chain of cause and effect. The closest thing you can have to free will is for your actions to be determined purely by the state of you
367
1.0.0
Revision
false
null
null
CrosspostOutput
48WeP7oTec3kBEada
two-more-things-to-unlearn-from-school
Two More Things to Unlearn from School
null
false
false
false
null
nmk3nLpQE89dMRzzN
null
true
false
false
false
Post
null
2007-07-12T17:45:33.000Z
null
false
false
2
2
2018-01-30T00:32:03.501Z
false
false
post
[]
null
null
5c63917bbcb4ac6367c0ee89
157
144
174
false
0.000195
null
false
false
2022-04-09T23:41:44.177Z
null
null
null
null
null
false
false
null
null
null
false
false
null
null
null
null
null
null
null
null
null
null
false
null
null
null
null
XtphY3uYHwruKqDyG
null
null
null
false
null
[]
null
14
0
null
false
false
null
null
true
false
false
0
0
0
null
null
null
null
null
null
null
false
0
0
namesAttachedReactions
false
[]
3
null
null
null
null
[ { "__typename": "Tag", "_id": "5gcpKG2XEAZGj5DEf", "adminOnly": false, "afBaseScore": 9, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 19, "canEditUserIds": null, "core": false, "createdAt": "2020-07-15T19:10:11.841Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Noticing", "needsReview": false, "noindex": false, "postCount": 35, "score": 19, "shortName": null, "slug": "noticing", "suggestedAsFilter": false, "userId": "gXeEWGjTWyqgrQTzR", "voteCount": 2, "wikiOnly": false }, { "__typename": "Tag", "_id": "csMv9MvvjYJyeHqoo", "adminOnly": false, "afBaseScore": 9, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 19, "canEditUserIds": null, "core": false, "createdAt": "2020-07-07T21:07:09.006Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Physics", "needsReview": false, "noindex": false, "postCount": 290, "score": 19, "shortName": null, "slug": "physics", "suggestedAsFilter": false, "userId": "qgdGA4ZEyW7zNdK84", "voteCount": 2, "wikiOnly": false }, { "__typename": "Tag", "_id": "fH8jPjHF2R27sRTTG", "adminOnly": false, "afBaseScore": 3, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 9, "canEditUserIds": null, "core": false, "createdAt": "2020-07-12T11:04:34.644Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Education", "needsReview": false, "noindex": false, "postCount": 263, "score": 9, "shortName": null, "slug": "education", "suggestedAsFilter": false, "userId": "qxJ28GN72aiJu96iF", "voteCount": 1, "wikiOnly": false }, { "__typename": "Tag", "_id": "Ng8Gice9KNkncxqcj", "adminOnly": false, "afBaseScore": 0, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [] }, "baseScore": 1, "canEditUserIds": null, "core": true, "createdAt": "2020-06-14T22:24:17.072Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 100, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "iMqytjy9ns89Fzfyv", "displayName": "miakko" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, 
"isSubforum": false, "name": "Rationality", "needsReview": false, "noindex": false, "postCount": 4302, "score": 1, "shortName": null, "slug": "rationality", "suggestedAsFilter": true, "userId": "r38pkCm7wF4M44MDQ", "voteCount": 1, "wikiOnly": false } ]
null
0
0
null
false
null
null
0
144
0
0
23
0
nmk3nLpQE89dMRzzN
eliezer_yudkowsky
2009-02-23T21:58:56.739Z
Eliezer_Yudkowsky
Eliezer Yudkowsky
null
null
null
150014
1892
false
false
null
null
951
7677
40
18
120
1
3803
r38pkCm7wF4M44MDQ
User
reign-of-terror
[ "sBWszXPhPsNNemv4Q", "YBHSPmZEfyyY2E2au" ]
true
[ "trustLevel1", "alignmentVoters", "alignmentForum", "canModeratePersonal" ]
null
null
48WeP7oTec3kBEada
SocialPreviewType
5c63917bbcb4ac6367c0ee89
<p class="MsoNormal">In <a href="http://ben.casnocha.com/2007/07/three-things-to.html">Three Things to Unlearn from School</a>, Ben Casnocha cites Bill Bullard's list of three bad habits of thought: Attaching importance to personal opinions, solving given problems, and earning the approval of others. Bullard's proposed alternatives don't look very good to me, but Bullard has surely identified some important problems.</p> <p class="MsoNormal">I can think of other school-inculcated bad habits of thought, too many to list, but I'll name two of my least favorite.</p> <p class="MsoNormal">I suspect the <em>most</em> dangerous habit of thought taught in schools is that even if you don't really understand something, you should parrot it back anyway. One of the most fundamental life skills is realizing when you are confused, and school actively destroys this ability - teaches students that they &quot;understand&quot; when they can successfully answer questions on an exam, which is very very very far from absorbing the knowledge and making it a part of you. Students learn the habit that eating consists of putting food into mouth; the exams can't test for chewing or swallowing, and so they starve.</p> <a id="more"></a><p class="MsoNormal">Much of this problem may come from needing to take three 4-credit courses per quarter, with a textbook chapter plus homework to be done every week - the courses are <em>timed</em> for frantic memorization, it's not <em>possible</em> to deeply chew over and leisurely digest knowledge in the same period. College students aren't <em>allowed</em> to be confused; if they started saying, &quot;Wait, do I really understand this? Maybe I'd better spend a few days looking up related papers, or consult another textbook,&quot; they'd fail all the courses they took that quarter. A month later they would understand the material far better and remember it much longer - but one month after finals is too late; it counts for nothing in the lunatic university utility function.</p> <p class="MsoNormal">Many students who have gone through this process no longer even <em>realize</em> when something confuses them, or notice gaps in their understanding. They have been trained out of pausing to think.</p> <p class="MsoNormal">I recall reading, though I can't remember where, that physicists in some country were more likely to become extreme religious fanatics. This confused me, until the author suggested that physics students are presented with a received truth that is actually correct, from which they learn the habit of trusting authority.</p> <p class="MsoNormal">It may be dangerous to present people with a giant mass of authoritative... </p>
In Three Things to Unlearn from School, Ben Casnocha cites Bill Bullard's list of three bad habits of thought: Attaching importance to personal opinions, solving given problems, and earning the approval of others. Bullard's proposed alternatives don't look very good to me, but Bullard has surely identified some important problems. I can think of other school-inculcated bad habits of thought, too many to list, but I'll name two of my least favorite. I suspect the most dangerous habit of thought taught in schools is that even if you don't really understand something, you should parrot it back anyway. One of the most fundamental life skills is realizing when you are confused, and school actively destroys this ability - teaches students that they "understand" when they can successfully answer questions on an exam, which is very very very far from absorbing the knowledge and making it a part of you. Students learn the habit that eating consists of putting food into mouth; the exams can't test for chewing or swallowing, and so they starve. Much of this problem may come from needing to take three 4-credit courses per quarter, with a textbook chapter plus homework to be done every week - the courses are timed for frantic memorization; it's not possible to deeply chew over and leisurely digest knowledge in the same period. College students aren't allowed to be confused; if they started saying, "Wait, do I really understand this? Maybe I'd better spend a few days looking up related papers, or consult another textbook," they'd fail all the courses they took that quarter. A month later they would understand the material far better and remember it much longer - but one month after finals is too late; it counts for nothing in the lunatic university utility function. Many students who have gone through this process no longer even realize when something confuses them, or notice gaps in their understanding. They have been trained out of pausing to think. I recall reading, thoug
661
1.0.0
Revision
false
null
null
CrosspostOutput
JCnYq4SBZ29zngRf4
open-thread-0
Open Thread
null
false
false
false
null
nmk3nLpQE89dMRzzN
null
true
false
false
false
Post
null
2007-07-01T19:38:34.000Z
null
false
false
2
2
null
false
false
post
[]
null
null
5c6391d0bcb4ac6367c11258
38
4
5
false
0.000006
null
false
false
2017-06-17T04:36:15.940Z
null
null
null
null
null
false
false
null
null
null
false
false
null
null
null
null
null
null
null
null
null
null
false
null
null
null
null
XtphY3uYHwruKqDyG
null
null
null
false
null
[]
null
0
0
null
false
false
null
null
true
false
false
0
0
0
null
null
null
null
null
null
null
false
0
0
namesAttachedReactions
false
[]
1
null
null
null
null
[ { "__typename": "Tag", "_id": "ABG8vt87eW4FFA6gD", "adminOnly": false, "afBaseScore": 3, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 9, "canEditUserIds": null, "core": false, "createdAt": "2023-04-26T21:28:55.828Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Open Threads", "needsReview": false, "noindex": false, "postCount": 483, "score": 9, "shortName": null, "slug": "open-threads", "suggestedAsFilter": false, "userId": null, "voteCount": 1, "wikiOnly": false } ]
null
0
0
null
false
null
null
null
null
null
null
null
null
nmk3nLpQE89dMRzzN
eliezer_yudkowsky
2009-02-23T21:58:56.739Z
Eliezer_Yudkowsky
Eliezer Yudkowsky
null
null
null
150014
1892
false
false
null
null
951
7677
40
18
120
1
3803
r38pkCm7wF4M44MDQ
User
reign-of-terror
[ "sBWszXPhPsNNemv4Q", "YBHSPmZEfyyY2E2au" ]
true
[ "trustLevel1", "alignmentVoters", "alignmentForum", "canModeratePersonal" ]
null
null
JCnYq4SBZ29zngRf4
SocialPreviewType
5c6391d0bcb4ac6367c11258
<p>By request of the community, an Open Thread for free-form comments, so long as they're still related to the basic project of this blog.</p> <p>A word on post requests:&nbsp; You're free to ask, but the authors can't commit to posting on requested topics - it's hard enough to do the ones we have in mind already.</p>
By request of the community, an Open Thread for free-form comments, so long as they're still related to the basic project of this blog. A word on post requests:  You're free to ask, but the authors can't commit to posting on requested topics - it's hard enough to do the ones we have in mind already.
55
1.0.0
Revision
false
null
null
CrosspostOutput
28bAMAxhoX3bwbAKC
are-your-enemies-innately-evil
Are Your Enemies Innately Evil?
null
false
false
false
null
nmk3nLpQE89dMRzzN
null
true
false
false
false
Post
null
2007-06-26T21:13:26.000Z
null
false
false
2
2
2018-01-30T00:32:03.501Z
false
false
post
[]
null
null
tPrFGRTfTh4FrAmYR
148
198
226
false
0.000248
null
false
false
2024-11-06T16:25:24.011Z
null
rationality
null
null
null
false
false
null
null
null
false
false
null
null
null
null
null
null
null
null
null
null
false
null
null
null
null
XtphY3uYHwruKqDyG
null
null
null
false
null
[]
null
13
0
2007-06-26T21:13:26.000Z
false
false
null
null
true
false
false
0
0
0
null
null
null
null
null
null
DLdfGraXkdcEAyyfu
false
0
0
namesAttachedReactions
false
[]
4
null
null
null
null
[ { "__typename": "Tag", "_id": "kdbs6xBndPkmrYAxM", "adminOnly": false, "afBaseScore": null, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [] }, "baseScore": 0, "canEditUserIds": null, "core": false, "createdAt": "2017-01-12T07:44:11.000Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [] }, "isArbitalImport": true, "isPlaceholderPage": false, "isSubforum": false, "name": "Politics", "needsReview": false, "noindex": false, "postCount": 571, "score": 0, "shortName": null, "slug": "politics", "suggestedAsFilter": false, "userId": "7iXcndyHDvmt77ggr", "voteCount": 0, "wikiOnly": false }, { "__typename": "Tag", "_id": "5f5c37ee1b5cdee568cfb157", "adminOnly": false, "afBaseScore": null, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [] }, "baseScore": 0, "canEditUserIds": null, "core": false, "createdAt": "2020-09-11T19:58:52.020Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Correspondence Bias", "needsReview": false, "noindex": false, "postCount": 5, "score": 0, "shortName": null, "slug": "correspondence-bias", "suggestedAsFilter": false, "userId": "9c2mQkLQq6gQSksMs", "voteCount": 0, "wikiOnly": false }, { "__typename": "Tag", "_id": "iP2X4jQNHMWHRNPne", "adminOnly": false, "afBaseScore": 3, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 9, "canEditUserIds": null, "core": false, "createdAt": "2020-06-08T00:06:01.955Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Motivations", "needsReview": false, "noindex": false, "postCount": 200, "score": 9, "shortName": null, "slug": "motivations", "suggestedAsFilter": false, "userId": "qgdGA4ZEyW7zNdK84", "voteCount": 1, "wikiOnly": false }, { "__typename": "Tag", "_id": "gHCNhqxuJq2bZ2akb", "adminOnly": false, "afBaseScore": 0, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [] }, "baseScore": 0, "canEditUserIds": null, "core": false, "createdAt": "2020-07-10T11:36:05.706Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Social & Cultural Dynamics", "needsReview": false, "noindex": false, "postCount": 384, "score": 0, "shortName": null, "slug": "social-and-cultural-dynamics", "suggestedAsFilter": false, "userId": "qxJ28GN72aiJu96iF", "voteCount": 0, "wikiOnly": false } ]
null
0
0
null
false
null
null
0
198
0
0
28
0
nmk3nLpQE89dMRzzN
eliezer_yudkowsky
2009-02-23T21:58:56.739Z
Eliezer_Yudkowsky
Eliezer Yudkowsky
null
null
null
150014
1892
false
false
null
null
951
7677
40
18
120
1
3803
r38pkCm7wF4M44MDQ
User
reign-of-terror
[ "sBWszXPhPsNNemv4Q", "YBHSPmZEfyyY2E2au" ]
true
[ "trustLevel1", "alignmentVoters", "alignmentForum", "canModeratePersonal" ]
null
null
28bAMAxhoX3bwbAKC
SocialPreviewType
tPrFGRTfTh4FrAmYR
<p>We see far too direct a correspondence between others&#x2019; actions and their inherent dispositions. We see unusual dispositions that exactly match the unusual behavior, rather than asking after real situations or imagined situations that could explain the behavior. We hypothesize mutants. </p> <p>When someone actually <em>offends</em> us&#x2014;commits an action of which we (rightly or wrongly) disapprove&#x2014;then, I observe, the correspondence bias redoubles. There seems to be a <em>very</em> strong tendency to blame evil deeds on the Enemy&#x2019;s mutant, evil disposition. Not as a moral point, but as a strict question of prior probability, we should ask what the Enemy might believe about their situation that would reduce the seeming bizarrity of their behavior. This would allow us to hypothesize a less exceptional disposition, and thereby shoulder a lesser burden of improbability.</p> <p>On September 11th, 2001, nineteen Muslim males hijacked four jet airliners in a deliberately suicidal effort to hurt the United States of America. Now why do you suppose they might have done that? Because they saw the USA as a beacon of freedom to the world, but were born with a mutant disposition that made them hate freedom?</p> <p><em>Realistically</em>, most people don&#x2019;t construct their life stories with themselves as the villains. Everyone is the hero of their own story. The Enemy&#x2019;s story, as seen by the Enemy, <em>is not going to make the Enemy look bad.</em> If you try to construe motivations that <em>would</em> make the Enemy look bad, you&#x2019;ll end up flat wrong about what actually goes on in the Enemy&#x2019;s mind.</p> <p>But politics is the mind-killer. Debate is war; arguments are soldiers. If the Enemy did have an evil disposition, that would be an argument in favor of your side. And <em>any</em> argument that favors your side must be supported, no matter how silly&#x2014;otherwise you&#x2019;re letting up the pressure somewhere on the battlefront. Everyone strives to outshine their neighbor in patriotic denunciation, and no one dares to contradict. Soon the Enemy has horns, bat wings, flaming breath, and fangs that drip corrosive venom. If you deny any aspect of this on merely factual grounds, you are arguing the Enemy&#x2019;s side; you are a traitor. Very few people will understand that you aren&#x2019;t defending the Enemy, just defending the truth.</p> <p>If it took a mutant to do monstrous things, the... </p>
We see far too direct a correspondence between others’ actions and their inherent dispositions. We see unusual dispositions that exactly match the unusual behavior, rather than asking after real situations or imagined situations that could explain the behavior. We hypothesize mutants. When someone actually offends us—commits an action of which we (rightly or wrongly) disapprove—then, I observe, the correspondence bias redoubles. There seems to be a very strong tendency to blame evil deeds on the Enemy’s mutant, evil disposition. Not as a moral point, but as a strict question of prior probability, we should ask what the Enemy might believe about their situation that would reduce the seeming bizarrity of their behavior. This would allow us to hypothesize a less exceptional disposition, and thereby shoulder a lesser burden of improbability. On September 11th, 2001, nineteen Muslim males hijacked four jet airliners in a deliberately suicidal effort to hurt the United States of America. Now why do you suppose they might have done that? Because they saw the USA as a beacon of freedom to the world, but were born with a mutant disposition that made them hate freedom? Realistically, most people don’t construct their life stories with themselves as the villains. Everyone is the hero of their own story. The Enemy’s story, as seen by the Enemy, is not going to make the Enemy look bad. If you try to construe motivations that would make the Enemy look bad, you’ll end up flat wrong about what actually goes on in the Enemy’s mind. But politics is the mind-killer. Debate is war; arguments are soldiers. If the Enemy did have an evil disposition, that would be an argument in favor of your side. And any argument that favors your side must be supported, no matter how silly—otherwise you’re letting up the pressure somewhere on the battlefront. Everyone strives to outshine their neighbor in patriotic denunciation, and no one dares to contradict. Soon the Enemy has horns, bat wings, fl
909
2.0.0
Revision
false
null
null
CrosspostOutput
DB6wbyrMugYMK5o6a
correspondence-bias
Correspondence Bias
null
false
false
false
null
nmk3nLpQE89dMRzzN
null
true
false
false
false
Post
null
2007-06-25T00:58:26.000Z
null
false
false
2
2
2018-01-30T00:32:03.501Z
false
false
post
[]
null
null
GocoA3c9Pmp3DcnYA
49
96
107
false
0.000123
null
false
false
2021-01-06T23:50:54.952Z
null
rationality
null
null
null
false
false
null
null
null
false
false
null
null
null
null
null
null
null
null
null
null
false
null
null
null
null
XtphY3uYHwruKqDyG
null
null
null
false
null
[]
null
5
0
null
false
false
null
null
true
false
false
0
0
0
null
null
null
null
null
null
tgym2YRj9YexRCtiQ
false
0
0
namesAttachedReactions
false
[]
3
null
null
null
null
[ { "__typename": "Tag", "_id": "5f5c37ee1b5cdee568cfb157", "adminOnly": false, "afBaseScore": null, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [] }, "baseScore": 0, "canEditUserIds": null, "core": false, "createdAt": "2020-09-11T19:58:52.020Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Correspondence Bias", "needsReview": false, "noindex": false, "postCount": 5, "score": 0, "shortName": null, "slug": "correspondence-bias", "suggestedAsFilter": false, "userId": "9c2mQkLQq6gQSksMs", "voteCount": 0, "wikiOnly": false }, { "__typename": "Tag", "_id": "4R8JYu4QF2FqzJxE5", "adminOnly": false, "afBaseScore": 9, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 19, "canEditUserIds": null, "core": false, "createdAt": "2020-05-13T15:40:30.194Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Heuristics & Biases", "needsReview": false, "noindex": false, "postCount": 272, "score": 19, "shortName": null, "slug": "heuristics-and-biases", "suggestedAsFilter": false, "userId": "BpBzKEueak7J8vHNi", "voteCount": 2, "wikiOnly": false }, { "__typename": "Tag", "_id": "iP2X4jQNHMWHRNPne", "adminOnly": false, "afBaseScore": 3, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 9, "canEditUserIds": null, "core": false, "createdAt": "2020-06-08T00:06:01.955Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Motivations", "needsReview": false, "noindex": false, "postCount": 200, "score": 9, "shortName": null, "slug": "motivations", "suggestedAsFilter": false, "userId": "qgdGA4ZEyW7zNdK84", "voteCount": 1, "wikiOnly": false } ]
null
0
0
null
false
null
null
0
96
0
0
9
0
nmk3nLpQE89dMRzzN
eliezer_yudkowsky
2009-02-23T21:58:56.739Z
Eliezer_Yudkowsky
Eliezer Yudkowsky
null
null
null
150,014
1,892
false
false
null
null
951
7,677
40
18
120
1
3,803
r38pkCm7wF4M44MDQ
User
reign-of-terror
[ "sBWszXPhPsNNemv4Q", "YBHSPmZEfyyY2E2au" ]
true
[ "trustLevel1", "alignmentVoters", "alignmentForum", "canModeratePersonal" ]
null
null
DB6wbyrMugYMK5o6a
SocialPreviewType
GocoA3c9Pmp3DcnYA
<blockquote><p>The correspondence bias is the tendency to draw inferences about a person’s unique and enduring dispositions from behaviors that can be entirely explained by the situations in which they occur.</p><p>—Gilbert and Malone<a href="#fn1x17"><sup>1</sup></a></p></blockquote><p>We tend to see far too direct a correspondence between others’ actions and personalities. When we see someone else kick a vending machine for no visible reason, we assume they are “an angry person.” But when you yourself kick the vending machine, it’s because the bus was late, the train was early, your report is overdue, and now the damned vending machine has eaten your lunch money for the second day in a row. <i>Surely</i>, you think to yourself, <i>anyone would kick the vending machine, in that situation</i>.</p><p>We attribute our own actions to our <i>situations</i>, seeing our behaviors as perfectly normal responses to experience. But when someone else kicks a vending machine, we don’t see their past history trailing behind them in the air. We just see the kick, for no reason <i>we</i> know about, and we think this must be a naturally angry person—since they lashed out without any provocation.</p><p>Yet consider the prior probabilities. There are more late buses in the world, than mutants born with unnaturally high anger levels that cause them to sometimes spontaneously kick vending machines. Now the average human is, in fact, a mutant. If I recall correctly, an average individual has two to ten somatically expressed mutations. But any <i>given</i> DNA location is very unlikely to be affected. Similarly, any given aspect of someone’s disposition is probably not very far from average. To suggest otherwise is to shoulder a burden of improbability.</p><p>Even when people are informed explicitly of situational causes, they don’t seem to properly discount the observed behavior. When subjects are told that a pro-abortion or anti-abortion speaker was <i>randomly assigned</i> to give a speech on that position, subjects still think the speakers harbor leanings in the direction randomly assigned.<a href="#fn2x17"><sup>2</sup></a></p><p>It seems quite intuitive to explain rain by water spirits; explain fire by a fire-stuff (phlogiston) escaping from burning matter; explain the soporific effect of a medication by saying that it contains a “dormitive potency.” Reality usually involves more complicated mechanisms: an evaporation and condensation cycle underlying rain, oxidizing combustion underlying fire, chemical interactions with the nervous system for sopo... </p>
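The randomly-assigned-speaker result above is a failure of Bayesian discounting, and a short sketch makes the correct answer precise: under random assignment, the speech is equally probable whichever way the speaker actually leans, so the likelihood ratio is 1 and the posterior should equal the prior. A minimal Python illustration, with assumed 50/50 numbers rather than figures from the study:

```python
# A minimal sketch of proper discounting under random assignment.
# The 50/50 figures are assumptions for illustration, not data from the study.

def bayes_posterior(prior, p_obs_given_h, p_obs_given_not_h):
    """P(H | observation) via Bayes' rule."""
    joint_h = prior * p_obs_given_h
    return joint_h / (joint_h + (1 - prior) * p_obs_given_not_h)

prior_leaning = 0.5  # before the speech: 50/50 on the speaker's true views

# Under random assignment, a speaker delivers the pro side with the same
# probability whatever they privately believe, so the speech is no evidence.
posterior = bayes_posterior(prior_leaning, 0.5, 0.5)
print(posterior)  # 0.5 -- the speech should shift beliefs not at all
```

Subjects who nonetheless infer a leaning in the assigned direction are updating on an observation that carries zero evidential weight.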
832
2.1.0
Revision
false
null
null
CrosspostOutput
mgmvs6BT3dSNxmyP2
risk-free-bonds-aren-t
Risk-Free Bonds Aren't
null
false
false
false
null
nmk3nLpQE89dMRzzN
null
true
false
false
false
Post
null
2007-06-22T22:30:00.000Z
null
false
false
2
2
null
false
false
post
[]
null
null
5c639264bcb4ac6367c14963
40
25
24
false
0.000025
null
false
false
2017-06-17T04:05:42.154Z
null
null
null
null
null
false
false
null
null
null
false
false
null
null
null
null
null
null
null
null
null
null
false
null
null
null
null
XtphY3uYHwruKqDyG
null
null
null
false
null
[]
null
2
0
null
false
false
null
null
true
false
false
0
0
0
null
null
null
null
null
null
null
false
0
0
namesAttachedReactions
false
[]
3
null
null
null
null
[ { "__typename": "Tag", "_id": "SrW9iP2j6Hi8R5PmT", "adminOnly": false, "afBaseScore": 6, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" } ] }, "baseScore": 10, "canEditUserIds": null, "core": false, "createdAt": "2020-08-22T12:22:56.075Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Black Swans", "needsReview": false, "noindex": false, "postCount": 12, "score": 10, "shortName": null, "slug": "black-swans", "suggestedAsFilter": false, "userId": "HoGziwmhpMGqGeWZy", "voteCount": 1, "wikiOnly": false }, { "__typename": "Tag", "_id": "PDJ6KqJBRzvKPfuS3", "adminOnly": false, "afBaseScore": 10, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" }, { "_id": "2B6Hxu48xeRXygvca", "displayName": "Arjun Pitchanathan" } ] }, "baseScore": 25, "canEditUserIds": null, "core": false, "createdAt": "2020-06-14T22:24:48.135Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" }, { "_id": "2B6Hxu48xeRXygvca", "displayName": "Arjun Pitchanathan" }, { "_id": "8btiLJDabHgZuiSAB", "displayName": "Ggwp" }, { "_id": "Au8JpEqoZgEhEXLD7", "displayName": "KlayugMonk" }, { "_id": "Ns8Q7rJZaFoz53Szy", "displayName": "Gabriel Stechschulte" }, { "_id": "xF5nfdddHjFThHy49", "displayName": "[email protected]" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Economics", "needsReview": false, "noindex": false, "postCount": 547, "score": 25, "shortName": null, "slug": "economics", "suggestedAsFilter": false, "userId": "r38pkCm7wF4M44MDQ", "voteCount": 7, "wikiOnly": false }, { "__typename": "Tag", "_id": "jgcAJnksReZRuvgzp", "adminOnly": false, "afBaseScore": 9, "afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 20, "canEditUserIds": null, "core": false, "createdAt": "2020-06-10T23:32:39.817Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "EQNTWXLKMeWMp2FQS", "displayName": "Ben Pace" }, { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" }, { "_id": "B8NsWfXYFKcXSGm8q", "displayName": "Pranav Nirmal" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Financial Investing", "needsReview": false, "noindex": false, "postCount": 180, "score": 20, "shortName": null, "slug": "financial-investing", "suggestedAsFilter": false, "userId": "r38pkCm7wF4M44MDQ", "voteCount": 3, "wikiOnly": false }, { "__typename": "Tag", "_id": "xYLtnJ6keSHGfrLpe", "adminOnly": false, "afBaseScore": 3, 
"afExtendedScore": { "reacts": { "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "baseScore": 9, "canEditUserIds": null, "core": false, "createdAt": "2020-08-22T09:02:46.252Z", "currentUserExtendedVote": null, "currentUserVote": null, "deleted": false, "descriptionTruncationCount": 0, "extendedScore": { "reacts": { "important": null, "insightful": null, "thinking": null, "typo": null }, "usersWhoLiked": [ { "_id": "qgdGA4ZEyW7zNdK84", "displayName": "Ruby" } ] }, "isArbitalImport": false, "isPlaceholderPage": false, "isSubforum": false, "name": "Risk Management", "needsReview": false, "noindex": false, "postCount": 36, "score": 9, "shortName": null, "slug": "risk-management", "suggestedAsFilter": false, "userId": "QBvPFLFyZyuHcBwFm", "voteCount": 1, "wikiOnly": false } ]
null
0
0
null
false
null
null
0
25
0
0
4
0
nmk3nLpQE89dMRzzN
eliezer_yudkowsky
2009-02-23T21:58:56.739Z
Eliezer_Yudkowsky
Eliezer Yudkowsky
null
null
null
150,014
1,892
false
false
null
null
951
7,677
40
18
120
1
3,803
r38pkCm7wF4M44MDQ
User
reign-of-terror
[ "sBWszXPhPsNNemv4Q", "YBHSPmZEfyyY2E2au" ]
true
[ "trustLevel1", "alignmentVoters", "alignmentForum", "canModeratePersonal" ]
null
null
mgmvs6BT3dSNxmyP2
SocialPreviewType
5c639264bcb4ac6367c14963
<p>I've always been annoyed by the term &quot;risk-free bonds rate&quot;, meaning the return on US Treasury bills.&nbsp; Just because US bonds have not defaulted within their trading experience, people assume this is <em>impossible?</em>&nbsp; A list of major governments in 1900 would probably put the Ottoman Empire or Austria-Hungary well ahead of the relatively young United States.&nbsp; Citing the good track record of the US alone, and not all governments of equal apparent stability at the start of the same time period, is purest survivorship bias.</p><p>The United States is a democracy; if enough people vote for representatives who decide not to pay off the bonds, they won't get paid.&nbsp; Do you want to look at recent history, let alone ancient history, and tell me this is impossible?&nbsp; The Internet could enable coordinated populist voting that would sweep new candidates into office, in defiance of previous political machines.&nbsp; Then the US economy melts under the burden of consumer debt, which causes China to stop buying US bonds and dump its dollar reserves.&nbsp; Then Al Qaeda finally smuggles a nuke into Washington, D.C.&nbsp; Then the next global pandemic hits.&nbsp; And these are just &quot;good stories&quot; - the probability of the US defaulting on its bonds <em>for any reason</em> is necessarily higher than the probability of it happening for the particular reasons I've just described.&nbsp; I'm not saying these are high probabilities, but they are probabilities.&nbsp; Treasury bills are nowhere near &quot;risk free&quot;.</p><a id="more"></a><p>I may be prejudiced here, because I anticipate particular Black Swans (AI, nanotech, biotech) that I see as having a high chance of striking over the lifetime of a 30-year Treasury bond.&nbsp; But even if you don't share those particular assumptions, do you expect the United States to still be around in 300 years?&nbsp; If not, do you know exactly when it will go bust?&nbsp; Then why isn't the risk of losing your capital on a 30-year Treasury bond at least, say, 10%?</p><p>Nassim Nicholas Taleb's latest, <em>The Black Swan,</em> is about the impact of unknown unknowns - sudden blowups, processes that seem to behave normally for long periods and then melt down, variables in which most of the movement may occur on a tiny fraction of the moves.&nbsp; Taleb inveighs against the dangers of induction, the ludic fallacy, hindsight, survivorship bias.&nbsp; And t... </p>
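The jump from 300-year doubt to a 30-year risk figure can be made explicit with a constant-hazard sketch. Assuming, purely for illustration, a fixed annual default hazard and a stipulated probability that the US honors its debts over 300 years (neither number is from the post), the implied 30-year risk follows directly:

```python
# A rough constant-hazard sketch of the 30-year default risk implied by
# long-horizon doubts. The 300-year survival probabilities below are
# illustrative assumptions, not estimates from the post.

def risk_over(years, survival_prob, horizon):
    """Default risk over `years`, given a survival probability over `horizon`
    and a constant annual hazard rate."""
    annual_survival = survival_prob ** (1.0 / horizon)
    return 1.0 - annual_survival ** years

for p300 in (0.5, 0.25):
    r30 = risk_over(30, p300, 300)
    print(f"P(survive 300y) = {p300:.2f} -> 30-year default risk ~ {r30:.1%}")
# ~6.7% and ~12.9%
```

Under these toy assumptions, even 50% confidence in 300-year survival implies roughly a 6.7% chance of loss on a 30-year bond, and 25% confidence implies about 12.9% - bracketing the 10% figure the post floats.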
647
1.0.0
Revision
false
null
null
CrosspostOutput