Dataset schema (one row per column; ⌀ marks columns that contain nulls):

| Column | Type | Detail |
|---|---|---|
| _id | string | lengths 0 to 24 |
| slug | string | lengths 0 to 132 |
| title | string | lengths 0 to 313 |
| draft | null | |
| shortform | bool | 1 class |
| hideCommentKarma | bool | 1 class |
| af | bool | 2 classes |
| currentUserReviewVote | null | |
| userId | string | lengths 17 to 24 |
| coauthorStatuses | list | lengths 0 to 18 ⌀ |
| hasCoauthorPermission | bool | 2 classes |
| rejected | bool | 1 class |
| debate | bool | 2 classes |
| collabEditorDialogue | bool | 2 classes |
| __typename | string | 1 class |
| url | string | lengths 0 to 432 ⌀ |
| postedAt | date string | 2007-06-22 22:30:00 to 2025-06-28 01:40:04 |
| createdAt | null | |
| sticky | bool | 2 classes |
| metaSticky | bool | 2 classes |
| stickyPriority | int64 | 2 to 2 |
| status | int64 | 2 to 2 |
| frontpageDate | date string | 2018-01-30 00:32:03 to 2025-06-28 02:24:31 ⌀ |
| meta | bool | 2 classes |
| deletedDraft | bool | 1 class |
| postCategory | string | 3 classes |
| shareWithUsers | sequence | lengths 0 to 23 |
| sharingSettings | float64 | |
| linkSharingKey | null | |
| contents_latest | string | lengths 17 to 24 ⌀ |
| commentCount | int64 | 0 to 2k |
| voteCount | int64 | -59 to 922 |
| baseScore | int64 | -10 to 945 |
| unlisted | bool | 1 class |
| score | float64 | -0 to 5.05 |
| lastVisitedAt | null | |
| isFuture | bool | 1 class |
| isRead | bool | 1 class |
| lastCommentedAt | date string | 2007-08-06 20:29:51 to 2025-06-28 14:23:54 |
| lastCommentPromotedAt | string | 21 classes |
| canonicalCollectionSlug | string | 4 classes |
| curatedDate | string | 691 classes |
| commentsLocked | bool | 2 classes |
| commentsLockedToAccountsCreatedAfter | string | 1 class |
| question | bool | 2 classes |
| hiddenRelatedQuestion | bool | 1 class |
| originalPostRelationSourceId | string | 46 classes |
| location | null | |
| googleLocation | null | |
| onlineEvent | bool | 1 class |
| globalEvent | bool | 1 class |
| startTime | null | |
| endTime | null | |
| localStartTime | null | |
| localEndTime | null | |
| eventRegistrationLink | null | |
| joinEventLink | null | |
| facebookLink | string | 1 class |
| meetupLink | null | |
| website | string | 1 class |
| contactInfo | string | 1 class |
| isEvent | bool | 1 class |
| eventImageId | null | |
| eventType | null | |
| types | sequence | lengths 0 to 2 ⌀ |
| groupId | string | 106 classes |
| reviewedByUserId | string | 19 classes |
| suggestForCuratedUserIds | null | |
| suggestForCuratedUsernames | null | |
| reviewForCuratedUserId | string | 12 classes |
| authorIsUnreviewed | bool | 1 class |
| afDate | string | 590 classes |
| suggestForAlignmentUserIds | sequence | lengths 0 to 4 |
| reviewForAlignmentUserId | string | 6 classes |
| afBaseScore | float64 | -21 to 217 ⌀ |
| afCommentCount | int64 | 0 to 149 |
| afLastCommentedAt | date string | 2007-06-26 21:13:26 to 2025-06-28 01:40:04 ⌀ |
| afSticky | bool | 2 classes |
| hideAuthor | bool | 2 classes |
| moderationStyle | string | 4 classes |
| ignoreRateLimits | bool | 2 classes |
| submitToFrontpage | bool | 2 classes |
| onlyVisibleToLoggedIn | bool | 1 class |
| onlyVisibleToEstablishedAccounts | bool | 2 classes |
| reviewCount | int64 | 0 to 8 |
| reviewVoteCount | int64 | 0 to 115 |
| positiveReviewVoteCount | int64 | 0 to 98 |
| manifoldReviewMarketId | string | 900 classes |
| annualReviewMarketProbability | float64 | 0.01 to 0.99 ⌀ |
| annualReviewMarketIsResolved | bool | 2 classes |
| annualReviewMarketYear | float64 | 2.02k to 2.03k ⌀ |
| annualReviewMarketUrl | string | 900 classes |
| group | float64 | |
| podcastEpisodeId | string | 396 classes |
| forceAllowType3Audio | bool | 1 class |
| nominationCount2019 | int64 | 0 to 6 |
| reviewCount2019 | int64 | 0 to 6 |
| votingSystem | string | 2 classes |
| disableRecommendation | bool | 2 classes |
| coauthors | list | lengths 0 to 18 |
| readTimeMinutes | int64 | 1 to 315 |
| rejectedReason | string | 12 classes |
| customHighlight | float64 | |
| lastPromotedComment | float64 | |
| bestAnswer | float64 | |
| tags | list | lengths 0 to 31 |
| feedId | string | 45 classes |
| totalDialogueResponseCount | int64 | 0 to 0 |
| unreadDebateResponseCount | int64 | 0 to 0 |
| dialogTooltipPreview | string | 6 classes |
| disableSidenotes | bool | 2 classes |
| currentUserVote | null | |
| currentUserExtendedVote | null | |
| extendedScore.agreement | float64 | -6 to 2 ⌀ |
| extendedScore.approvalVoteCount | float64 | 1 to 922 ⌀ |
| extendedScore.agreementVoteCount | float64 | 0 to 1 ⌀ |
| afExtendedScore.agreement | float64 | -6 to 2 ⌀ |
| afExtendedScore.approvalVoteCount | float64 | 0 to 175 ⌀ |
| afExtendedScore.agreementVoteCount | float64 | 0 to 1 ⌀ |
| user._id | string | lengths 17 to 24 ⌀ |
| user.slug | string | lengths 2 to 40 ⌀ |
| user.createdAt | date string | 2009-02-17 05:49:50 to 2025-06-26 13:32:01 ⌀ |
| user.username | string | lengths 1 to 64 ⌀ |
| user.displayName | string | lengths 1 to 43 ⌀ |
| user.profileImageId | float64 | |
| user.previousDisplayName | float64 | |
| user.fullName | string | 979 classes |
| user.karma | float64 | -1,560 to 150k ⌀ |
| user.afKarma | float64 | -63 to 6.7k ⌀ |
| user.deleted | bool | 1 class |
| user.isAdmin | bool | 2 classes |
| user.htmlBio | string | lengths 0 to 9.48k ⌀ |
| user.jobTitle | float64 | |
| user.organization | float64 | |
| user.postCount | float64 | 0 to 1.02k ⌀ |
| user.commentCount | float64 | 0 to 16.1k ⌀ |
| user.sequenceCount | float64 | 0 to 40 ⌀ |
| user.afPostCount | float64 | -4 to 364 ⌀ |
| user.afCommentCount | float64 | 0 to 1.39k ⌀ |
| user.spamRiskScore | float64 | 0 to 1 ⌀ |
| user.tagRevisionCount | float64 | 0 to 3.8k ⌀ |
| user.reviewedByUserId | string | 18 classes |
| user.__typename | string | 1 class |
| user.moderationStyle | string | 4 classes |
| user.bannedUserIds | sequence | lengths 0 to 6 ⌀ |
| user.moderatorAssistance | bool | 2 classes |
| user.groups | sequence | lengths 0 to 289 ⌀ |
| user.banned | string | 30 classes |
| user.allCommentingDisabled | float64 | |
| socialPreviewData._id | string | lengths 0 to 24 |
| socialPreviewData.imageUrl | string | lengths 0 to 149k |
| socialPreviewData.__typename | string | 1 class |
| contents._id | string | lengths 17 to 24 ⌀ |
| contents.htmlHighlight | string | lengths 0 to 2.31M ⌀ |
| contents.plaintextDescription | string | lengths 0 to 2k ⌀ |
| contents.wordCount | float64 | 0 to 78.7k ⌀ |
| contents.version | string | 299 classes |
| contents.__typename | string | 1 class |
| fmCrosspost.isCrosspost | bool | 2 classes |
| fmCrosspost.hostedHere | bool | 2 classes |
| fmCrosspost.foreignPostId | string | lengths 17 to 17 ⌀ |
| fmCrosspost.__typename | string | 1 class |

Sample rows (raw):
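Since the schema above is just column-name / type / nullability triples, it can be checked programmatically. A minimal sketch with a handful of columns copied from the table (the dict layout and helper name are illustrative, not part of the dataset itself; ⌀ is modeled as a nullable flag):

```python
# Representative columns from the schema above, as (dtype, nullable) pairs.
# "nullable" corresponds to the ⌀ marker in the table.
SCHEMA = {
    "_id": ("string", False),
    "title": ("string", False),
    "url": ("string", True),
    "postedAt": ("string", False),   # ISO-8601 timestamp stored as a string
    "baseScore": ("int64", False),
    "afBaseScore": ("float64", True),
    "user.karma": ("float64", True),
}

def nullable_columns(schema):
    """Return the column names that may contain nulls (marked ⌀ above)."""
    return sorted(name for name, (_, nullable) in schema.items() if nullable)

print(nullable_columns(SCHEMA))  # ['afBaseScore', 'url', 'user.karma']
```

A check like this is handy before loading the rows, since many numeric columns (e.g. afBaseScore) arrive as nullable float64 rather than int.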
k3CSx2dCswfffkQn9 | untitled-draft-ki56 | Seeing how well an agentic AI coding tool can do compared to me using an actual real-world example | null | false | false | false | null | vLMZEXFzDAXLisTmQ | null | true | false | false | false | Post | https://blog.massimogauthier.com/p/seeing-how-well-an-agentic-ai-coding | 2025-06-01T19:24:52.423Z | null | false | false | 2 | 2 | 2025-06-01T20:13:14.069Z | false | false | linkpost | [] | null | null | EmE84buxmFoSna625 | 2 | 24 | 32 | false | 0.024856 | null | false | false | 2025-06-03T03:54:44.704Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 13 | 0 | 2025-06-01T19:04:22.370Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 1 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "iKYWGuFx2qH2nYu6J",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2021-08-29T12:47:32.048Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI Capabilities",
"needsReview": false,
"noindex": false,
"postCount": 162,
"score": 0,
"shortName": null,
"slug": "ai-capabilities",
"suggestedAsFilter": false,
"userId": "Sp5wM4aRAhNERd4oY",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "RLQumypPQGPYg9t6G",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-05-26T17:49:58.312Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Gaming (videogames/tabletop)",
"needsReview": false,
"noindex": false,
"postCount": 197,
"score": 9,
"shortName": null,
"slug": "gaming-videogames-tabletop",
"suggestedAsFilter": false,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "HFou6RHqFagkyrKkW",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-05-22T21:10:05.579Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Programming",
"needsReview": false,
"noindex": false,
"postCount": 179,
"score": 0,
"shortName": null,
"slug": "programming",
"suggestedAsFilter": false,
"userId": "nrP5EZZj4vRvYwQ7b",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 24 | 0 | 0 | 10 | 0 | vLMZEXFzDAXLisTmQ | massimog | 2019-09-20T02:00:48.643Z | Massimog | Massimog | null | null | null | 57 | 0 | false | false | null | null | 3 | 8 | 0 | 0 | 0 | 1 | 0 | r38pkCm7wF4M44MDQ | User | null | null | null | [
"canModeratePersonal"
] | null | null | k3CSx2dCswfffkQn9 | https://res.cloudinary.com/lesswrong-2-0/image/upload/c_fill,ar_1.91,g_auto/SocialPreview/izzjotsyogaucdxibcvo | SocialPreviewType | EmE84buxmFoSna625 | <p>When it comes to disputes about AI coding capabilities, I think a lot of people who either don't follow AI capabilities developments or who aren't particularly capable coders might be missing some context, so I wrote a post wherein I try to get an AI coding tool to reimplement a feature in my game then closely study the output to demonstrate how well it's able to do that.</p> | When it comes to disputes about AI coding capabilities, I think a lot of people who either don't follow AI capabilities developments or who aren't particularly capable coders might be missing some context, so I wrote a post wherein I try to get an AI coding tool to reimplement a feature in my game then closely study the output to demonstrate how well it's able to do that. | 68 | 1.1.0 | Revision | false | null | null | CrosspostOutput |
|
utznrzHmpmHkP7zSR | nicotine-addiction-cloves-and-needing-to-take-a-shit | Nicotine addiction, cloves, and needing to take a shit | null | false | false | false | null | 7KjWecPbnrvZM78Wy | null | true | false | false | false | Post | null | 2025-06-01T19:13:35.169Z | null | false | false | 2 | 2 | 2025-06-01T20:13:32.114Z | false | false | post | [] | null | null | Z2qsJocbyhH8xNcqr | 1 | 5 | 4 | false | 0.008298 | null | false | false | 2025-06-01T19:27:09.060Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 1 | 0 | 2025-06-01T18:38:24.774Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 1 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "fkABsGCJZ6y9qConW",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T06:06:46.947Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "MiuAZvbQcQ7ethgt3",
"displayName": "Viktor withaK"
},
{
"_id": "dRaCtsAWxk7sgirSY",
"displayName": "Jordan Morgan"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Practical",
"needsReview": false,
"noindex": false,
"postCount": 3411,
"score": 2,
"shortName": null,
"slug": "practical",
"suggestedAsFilter": true,
"userId": "oBSWiHjgproTiThmY",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 5 | 0 | 0 | 1 | 0 | 7KjWecPbnrvZM78Wy | eyesack | 2020-08-23T06:36:12.036Z | eyesack | eyesack | null | null | Isaac Linn | 45 | 0 | false | false | null | null | 2 | 10 | 0 | 0 | 0 | 1 | 0 | XtphY3uYHwruKqDyG | User | null | null | null | null | null | null | utznrzHmpmHkP7zSR | SocialPreviewType | Z2qsJocbyhH8xNcqr | <p>In unfamiliar places, in stressful situations, and after eating unusual things, my digestive system sometimes gets backed up. A large dose of nicotine or caffeine fixes this for me. Caffeine has other side effects in large doses, so I prefer nicotine. I'll take a 6 mg zyn, and 20 minutes later, I will have the bowel movement that I've needed for days.</p><p>The issue is that nicotine is famously habit-forming. It is addictive enough that I don't have the willpower to keep it in a medicine cabinet, unused, until the need to defecate arises. I, and many people I know, will use it for any moment that feels uncomfortable and would be smoothed over by a light buzz.</p><p>A large part of the habit, for me, is the signal of the tingling in my mouth and the ability to divert attention to something in my mouth. Mints satisfy the latter, but not the former.</p><p>Cloves (the spice) satisfy both. They have a strong taste, they make your mouth feel numb and tingly, and they also make your breath smell nice. They're also super cheap and don't have any sweeteners like mints that ruin my pallete. </p><p>My conclusion? Maybe it will work to keep nicotine in a medicine cabinet for digestive emergencies if you carry a little tin of cloves around to satisfy the habit. </p><p>This post was written during the silent hour of LessOnline after a particularly satisfying visit to the restroom.</p> | In unfamiliar places, in stressful situations, and after eating unusual things, my digestive system sometimes gets backed up. A large dose of nicotine or caffeine fixes this for me. 
Caffeine has other side effects in large doses, so I prefer nicotine. I'll take a 6 mg zyn, and 20 minutes later, I will have the bowel movement that I've needed for days.
The issue is that nicotine is famously habit-forming. It is addictive enough that I don't have the willpower to keep it in a medicine cabinet, unused, until the need to defecate arises. I, and many people I know, will use it for any moment that feels uncomfortable and would be smoothed over by a light buzz.
A large part of the habit, for me, is the signal of the tingling in my mouth and the ability to divert attention to something in my mouth. Mints satisfy the latter, but not the former.
Cloves (the spice) satisfy both. They have a strong taste, they make your mouth feel numb and tingly, and they also make your breath smell nice. They're also super cheap and don't have any sweeteners like mints that ruin my palate.
My conclusion? Maybe it will work to keep nicotine in a medicine cabinet for digestive emergencies if you carry a little tin of cloves around to satisfy the habit.
This post was written during the silent hour of LessOnline after a particularly satisfying visit to the restroom. | 242 | 1.1.0 | Revision | false | null | null | CrosspostOutput |
||
n8wgXKLAvNx8GrtRj | second-order-retreat-june-13th-to-16th | Second Order Retreat - June 13th to 16th | null | false | false | false | null | Lr83CueQzwTedaGEM | null | true | false | false | false | Post | null | 2025-06-01T14:29:04.810Z | null | false | false | 2 | 2 | null | false | false | post | [] | null | null | b9jztaBsjTRKSQrAe | 0 | 3 | 7 | false | 0.004078 | null | false | false | 2025-06-01T14:29:04.810Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 3 | 0 | 2025-05-31T15:27:21.061Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 2 | null | null | null | null | [] | null | 0 | 0 | null | false | null | null | 0 | 3 | 0 | 0 | 2 | 0 | Lr83CueQzwTedaGEM | oliverhayman | 2023-08-27T12:26:18.552Z | OliverHayman | OliverHayman | null | null | null | 221 | 4 | false | false | null | null | 2 | 11 | 0 | 0 | 0 | 1 | 0 | grecHJcgkb3KW5wnM | User | null | null | null | [
"canModeratePersonal",
"alignmentVoters"
] | null | null | n8wgXKLAvNx8GrtRj | SocialPreviewType | b9jztaBsjTRKSQrAe | <p>Hi all — I’m helping organize a small <strong>economics unconference</strong> that I think will be exciting for rational-ish people. It's coming up soon (June 13th-16th), but we have a few more spots available. More details below!</p><hr><p><br><strong>Second Order</strong></p><p><strong>Dates:</strong> 13th to 16th June 2025</p><p><strong>Location:</strong> <a href="https://www.vrbo.com/en-gb/p8473264?chkin=2025-06-13&chkout=2025-06-16&d1=2025-06-13&d2=2025-06-16&startDate=2025-06-13&endDate=2025-06-16&x_pwa=1&rfrr=HSR&pwa_ts=1746727085922&referrerUrl=aHR0cHM6Ly93d3cudnJiby5jb20vSG90ZWwtU2VhcmNo&useRewards=false&adults=16&regionId=6046191&destination=Cambridgeshire%2C%20England%2C%20United%20Kingdom&destType=BOUNDING_BOX&latLong=52.18568%2C0.26509&privacyTrackingState=CAN_NOT_TRACK&searchId=83321e23-16f9-46df-ae77-dcd62395dad1&sort=RECOMMENDED&top_dp=4750&top_cur=GBP&userIntent=&selectedRoomType=33597595&selectedRatePlan=0001aa331d8746e04ad78943bdea873bddf2&expediaPropertyId=33597595"><u>Abbey House</u></a>, Audley End Estate, ~30min from Cambridge, UK</p><p>✨ Apply <a href="https://airtable.com/appi9UN9WEilMExay/pagL9y2LXxxZCPfJj/form"><u>here</u></a>! Applications are due <strong>June 6th</strong>, EOD AoE, and are <strong>rolling</strong> ✨</p><h1>What is Second Order?</h1><p>A three‑day residential <strong>research‑inspiration unconference</strong> for ~30 early‑career economists and kindred thinkers. Each morning we crowdsource the schedule: Lightning Paper Pitches on elegant models, Policy Autopsies of schemes that flopped in practice, Research‑Idea Hackathons, walk‑and‑talk collaborator hunts—plus anything you pitch on the day. 
No keynote monologues; just whiteboards, data sets, and big windows for thinking aloud.</p><p>Expect a weekend that compresses months of hallway conversations into three days of creative momentum: swap half‑baked models over breakfast, test‑drive sketches on willing co‑conspirators, and leave with a notebook brimming with fresh questions, angles, and potential collaborations.</p><p>Late‑night debates, and acres of fields come standard—the rest is up to us. If that sounds like productive fun, express interest.</p><p> </p><h1>Who should apply?</h1><p>We’re primarily looking for curious people doing <strong>some form of research work</strong> on economics or a related area (e.g., policy). Examples include PhD students, undergraduates working on research projects, or anyone working on projects that require understanding and applying current work in economics. When in doubt, you should apply! We’re seeking to have a diverse group with a variety of backgrounds.</p><p>Beyond this, our only requirement is that you <strong>must be 18+</strong> on June 14th, 2025. </p><p>A few other things we value are demonstrated excellence in other areas (even if not directly related to Second Order), super reasoning / critical thinking ability, and desire to understand the world as an entire system rather than just one particular part.</p><h1>Logistics</h1><p>Our expected cost is <strong>around £250</strong>. We’ll finalise the cost by the time of the acceptance email. </p><p><strong>If cost is a barrier for you, please apply anyway and make a note of it in the additional comments section of the form.</strong> We are applying to several sources of funding to <i>hopefully</i> be able to subsidise places for people who need them.</p><p><strong>Venue:</strong> We’r... </p> | Hi all — I’m helping organize a small economics unconference that I think will be exciting for rational-ish people. It's coming up soon (June 13th-16th), but we have a few more spots available. More details below!
----------------------------------------
Second Order
Dates: 13th to 16th June 2025
Location: Abbey House, Audley End Estate, ~30min from Cambridge, UK
✨ Apply here! Applications are due June 6th, EOD AoE, and are rolling ✨
What is Second Order?
A three‑day residential research‑inspiration unconference for ~30 early‑career economists and kindred thinkers. Each morning we crowdsource the schedule: Lightning Paper Pitches on elegant models, Policy Autopsies of schemes that flopped in practice, Research‑Idea Hackathons, walk‑and‑talk collaborator hunts—plus anything you pitch on the day. No keynote monologues; just whiteboards, data sets, and big windows for thinking aloud.
Expect a weekend that compresses months of hallway conversations into three days of creative momentum: swap half‑baked models over breakfast, test‑drive sketches on willing co‑conspirators, and leave with a notebook brimming with fresh questions, angles, and potential collaborations.
Late‑night debates, and acres of fields come standard—the rest is up to us. If that sounds like productive fun, express interest.
Who should apply?
We’re primarily looking for curious people doing some form of research work on economics or a related area (e.g., policy). Examples include PhD students, undergraduates working on research projects, or anyone working on projects that require understanding and applying current work in economics. When in doubt, you should apply! We’re seeking to have a diverse group with a variety of backgrounds.
Beyond this, our only requirement is that you must be 18+ on June 14th, 2025.
A few other things we value are demonstrated excellence in other areas (even if not directly related to Second Order), super reasoning / critical thinking ability, and desire to u | 565 | 1.3.0 | Revision | false | null | null | CrosspostOutput |
||
352TCZZRqthKuNMDR | an-opinionated-guide-to-p-values | An Opinionated Guide to P-Values | null | false | false | false | null | hSWENwumiGqjs9eZG | null | true | false | false | false | Post | https://ivy0.substack.com/p/an-opinionated-guide-to-statistical | 2025-06-01T11:48:39.758Z | null | false | false | 2 | 2 | 2025-06-01T20:13:50.158Z | false | false | linkpost | [] | null | null | GNs3jKwyeYNokXrvx | 0 | 6 | 11 | false | 0.012226 | null | false | false | 2025-06-01T11:48:39.758Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 2 | 0 | 2025-06-01T11:30:08.080Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 10 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "LhX3F2SvGDarZCuh6",
"adminOnly": false,
"afBaseScore": 9,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 20,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-04-29T02:02:50.973Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "dRpSdnsGWhYrNj7ki",
"displayName": "AbsurdVoid"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Bayes' Theorem",
"needsReview": false,
"noindex": false,
"postCount": 182,
"score": 20,
"shortName": null,
"slug": "bayes-theorem",
"suggestedAsFilter": false,
"userId": "nLbwLhBaQeG6tCNDN",
"voteCount": 3,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "vg4LDxjdwHLotCm8w",
"adminOnly": false,
"afBaseScore": 9,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 19,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-06-02T20:55:24.286Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Replication Crisis",
"needsReview": false,
"noindex": false,
"postCount": 66,
"score": 19,
"shortName": null,
"slug": "replication-crisis",
"suggestedAsFilter": false,
"userId": "nLbwLhBaQeG6tCNDN",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "3uE2pXvbcnS9nnZRE",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:50.898Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 27,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "zJtgSyKntXrnkArbY",
"displayName": "kistune"
},
{
"_id": "XqyQTGFGoKfZtdqN3",
"displayName": "kINo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Modeling",
"needsReview": false,
"noindex": false,
"postCount": 5855,
"score": 2,
"shortName": null,
"slug": "world-modeling",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 6 | 0 | 0 | 2 | 0 | hSWENwumiGqjs9eZG | amitlevy49 | 2020-04-27T21:22:16.167Z | amitlevy49 | amitlevy49 | null | null | null | 86 | 0 | false | false | null | null | 4 | 14 | 0 | 0 | 0 | 1 | 0 | r38pkCm7wF4M44MDQ | User | null | null | null | [
"canModeratePersonal"
] | null | null | 352TCZZRqthKuNMDR | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/352TCZZRqthKuNMDR/p0jcforuze4svnbkfgsw | SocialPreviewType | GNs3jKwyeYNokXrvx | <p>This is a crosspost of a post from my blog, <a href="https://ivy0.substack.com/">Metal Ivy</a>. The original is here: <a href="https://ivy0.substack.com/p/an-opinionated-guide-to-statistical">An Opinionated Guide to Statistical Significance</a>.</p><p>I think for the general audience the value of this post is the first half, which tries to give a practical intuition for p-values.</p><p>But for LessWrong the value is probably the mathematical second half, which breaks down p-values using Bayes' Law and tries to invert it into an updated belief. That is since most people on LessWrong are probably already familiar with the XKCD jelly bean example. Feel free to<strong> skip there, to the heading "Is there really no way to invert p-value into something we want?".</strong></p><hr><p>Different fields of science have different levels of accessibility for laymen, and different fields of science receive different amounts of interest from laymen. Medicine is unique in that it’s basically normal to read primary clinical studies yourself. Ok, maybe it’s not actually normal, but it’s common enough in this part of the internet.</p><p>This is because medical studies relate to decisions that are of high importance to most people, and are free from the mathematical wall that stops people from freely reading a paper in most other fields.</p><p>Further, medicine is very broad, and you likely only care about a very specific corner of it. If you read a bunch of studies about some esoteric medication you’re taking or condition you have, there rapidly become decent odds you’ve read more about it than your doctor, at least where recent studies are concerned.</p><p>The single most important concept to understand when evaluating medical papers is the concept of statistical significance. 
Specifically, since you care about the results of the study, you probably jump to the results. And almost universally the results are described as something like the following:</p><blockquote><p>“For the subgroup with <strong>condition</strong>, the <strong>intervention</strong> was <strong>statistically significantly</strong> superior to placebo at 8 weeks.”</p></blockquote><p>Bolded are the words that I would label jargon, i.e. a random educated person wouldn’t understand. But assuming you arrived at this study because you are interested in taking intervention for condition, you probably already understand what those two mean, or can infer. But I think most people see “statistically significant” and barely even recognize it to be jargon: it sounds positive, so it must mean that the intervention is superior to placebo.</p><p>This same terminology is used in many other fields...</p> | This is a crosspost of a post from my blog, Metal Ivy. The original is here: An Opinionated Guide to Statistical Significance.
I think for the general audience the value of this post is the first half, which tries to give a practical intuition for p-values.
But for LessWrong the value is probably the mathematical second half, which breaks down p-values using Bayes' Law and tries to invert it into an updated belief. That is because most people on LessWrong are probably already familiar with the XKCD jelly bean example. Feel free to skip there, to the heading "Is there really no way to invert p-value into something we want?".
----------------------------------------
Different fields of science have different levels of accessibility for laymen, and different fields of science receive different amounts of interest from laymen. Medicine is unique in that it’s basically normal to read primary clinical studies yourself. Ok, maybe it’s not actually normal, but it’s common enough in this part of the internet.
This is because medical studies relate to decisions that are of high importance to most people, and are free from the mathematical wall that stops people from freely reading a paper in most other fields.
Further, medicine is very broad, and you likely only care about a very specific corner of it. If you read a bunch of studies about some esoteric medication you’re taking or condition you have, there rapidly become decent odds you’ve read more about it than your doctor, at least where recent studies are concerned.
The single most important concept to understand when evaluating medical papers is the concept of statistical significance. Specifically, since you care about the results of the study, you probably jump to the results. And almost universally the results are described as something like the following:
> “For the subgroup with condition, the intervention was statistically significantly superior to placebo at 8 weeks.”
Bolded are the words that I would label | 2,475 | 1.3.1 | Revision | false | null | null | CrosspostOutput |
|
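The Bayesian "inversion" the post above alludes to can be sketched with a few lines of code. This is a generic illustration, not code from the post; the prior, significance threshold, and power values are invented assumptions for the example:

```python
def prob_real_given_significant(prior, alpha=0.05, power=0.8):
    """P(effect is real | study found p < alpha), via Bayes' rule.

    prior -- P(effect real) before seeing the study (an assumed input)
    alpha -- significance threshold: P(significant | no real effect)
    power -- P(significant | real effect)
    """
    # Total probability of observing a significant result:
    p_significant = power * prior + alpha * (1 - prior)
    return power * prior / p_significant

# A "statistically significant" result under a skeptical 10% prior:
print(prob_real_given_significant(0.10))  # ~0.64, far from certainty
```

The point matches the post's framing: "statistically significant" alone does not mean the intervention is superior to placebo; with a low prior, a single p < 0.05 result still leaves sizable doubt.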
MsfzQzRJQHscsuaxk | legal-personhood-for-models-novelli-et-al-and-mocanu | Legal Personhood for Models: Novelli et. al & Mocanu | null | false | false | false | null | BveuaCHRKnHWCQnTn | null | true | false | false | false | Post | null | 2025-06-01T08:18:04.330Z | null | false | false | 2 | 2 | 2025-06-01T20:15:05.914Z | false | false | post | [] | null | null | gCjvfxDsjyJC3k5vQ | 0 | 5 | 2 | false | 0.007121 | null | false | false | 2025-06-01T08:18:04.330Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 0 | 0 | 2025-05-29T08:02:38.520Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 13 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "hmTa9YDwmzHjhMCAt",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 16,
"canEditUserIds": null,
"core": false,
"createdAt": "2023-06-15T16:07:24.366Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "fGFR972rvsxQhZoPd",
"displayName": "Odd anon"
},
{
"_id": "BveuaCHRKnHWCQnTn",
"displayName": "Stephen Martin"
},
{
"_id": "T7QHMS7qNx3s7z36d",
"displayName": "StanislavKrym"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI Rights / Welfare",
"needsReview": false,
"noindex": false,
"postCount": 54,
"score": 16,
"shortName": null,
"slug": "ai-rights-welfare",
"suggestedAsFilter": false,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "5f5c37ee1b5cdee568cfb2ac",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-09-11T19:58:52.599Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Economic Consequences of AGI",
"needsReview": false,
"noindex": false,
"postCount": 106,
"score": 9,
"shortName": null,
"slug": "economic-consequences-of-agi",
"suggestedAsFilter": false,
"userId": "cn4SiEmqWbu7K9em5",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "wGGAjTfXZBatQkft5",
"adminOnly": false,
"afBaseScore": 7,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "sKAL2jzfkYkDbQmx9",
"displayName": "Yoav Ravid"
}
]
},
"baseScore": 17,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-08-09T09:26:08.406Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "sKAL2jzfkYkDbQmx9",
"displayName": "Yoav Ravid"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Law and Legal systems",
"needsReview": false,
"noindex": false,
"postCount": 101,
"score": 17,
"shortName": null,
"slug": "law-and-legal-systems",
"suggestedAsFilter": false,
"userId": "QBvPFLFyZyuHcBwFm",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "xexCWMyds6QLWognu",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T03:38:23.532Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 20,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "si6LoAENzqPCmi2Dh",
"displayName": "ihatenumbersinusernames7"
},
{
"_id": "B2sXjQTGgwoGE9FES",
"displayName": "SeaIgloo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Optimization",
"needsReview": false,
"noindex": false,
"postCount": 3151,
"score": 2,
"shortName": null,
"slug": "world-optimization",
"suggestedAsFilter": true,
"userId": "XtphY3uYHwruKqDyG",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 5 | 0 | 0 | 2 | 0 | BveuaCHRKnHWCQnTn | stephen-martin | 2025-04-05T10:59:34.454Z | steve-m-2 | Stephen Martin | null | null | null | 110 | 0 | false | false | <p>Focused on model welfare and legal personhood.</p> | null | null | 8 | 27 | 0 | 0 | 0 | 1 | 0 | grecHJcgkb3KW5wnM | User | null | null | null | [
"canModeratePersonal"
] | null | null | MsfzQzRJQHscsuaxk | SocialPreviewType | gCjvfxDsjyJC3k5vQ | <p>In a <a href="https://www.lesswrong.com/posts/yjFWDqjCfNH277x3P/examining-batenka-s-legal-personhood-framework-for-models">previous article</a> I detailed FSU Law professor Nadia Batenka's proposed "Inverted Sliding Scale Framework" approach to the question of legal personhood for digital intelligences. </p><p>In this article, I am going to examine another paper approaching the issue which was <a href="https://www.tandfonline.com/doi/pdf/10.1080/20403313.2021.2010936">first authored</a> by Claudio Novelli, Giorgio Bongiovanni, and Giovanni Sartor. Since its original writing it has been endorsed (with some clarifications) by Diana Mocanu in her paper <a href="https://academic.oup.com/edited-volume/59762/chapter-abstract/511416603?redirectedFrom=fulltext">here</a>.</p><p>First, let me provide some background on the concept of Legal Personhood/ Legal Personality, and some of the dynamics at play when it comes to deciding the appropriate framework by which the issue can be applied to digital intelligences.</p><h2>Background: Legal Personhood/Legal Personality Briefer</h2><p>Legal personhood or "legal personality" is a term of art used to refer to the status of being considered a "person" under the law. This label includes "natural persons" like competent human adults, as well as "legal persons" like corporations. 
</p><p>Legal personhood is most easily understood as a <a href="https://academic.oup.com/book/35026/chapter/298855652">"bundle" of rights and duties</a>, with different kinds of legal persons having different bundles.</p><p>Some examples of rights which are neatly bundled with duties are: </p><ul><li>A mentally competent human adult has the right to sue another person and compel them to abide by the court's ruling; they also have the duty to abide by a ruling when sued.</li><li>A corporation has the right to engage in commercial activity to earn income; they also have a duty to pay taxes.</li></ul><p>Different forms of legal personhood entail different bundles. For example, a mentally competent human adult has different rights and duties when compared to a mentally incompetent human adult, who in turn has different rights and duties compared to a child, all of whom have different rights and duties compared to a corporation. It is not correct to say that one of these is "more" or "less" of a legal person than another; rather, it's best to think of them like circles in a Venn diagram which partially overlap but are also qualitatively different in meaningful ways.</p><p>In the US, legal personhood is a prerequisite for entities to engage in many activities. For example, one must have a certain legal personality to be a party to certain contracts, which is why children or mentally incompetent adults often cannot be signatories despite being legal persons.</p><p>Legal personhood also often determin... </p> | In a previous article I detailed FSU Law professor Nadia Batenka's proposed "Inverted Sliding Scale Framework" approach to the question of legal personhood for digital intelligences.
In this article, I am going to examine another paper approaching the issue which was first authored by Claudio Novelli, Giorgio Bongiovanni, and Giovanni Sartor. Since its original writing it has been endorsed (with some clarifications) by Diana Mocanu in her paper here.
First, let me provide some background on the concept of Legal Personhood/ Legal Personality, and some of the dynamics at play when it comes to deciding the appropriate framework by which the issue can be applied to digital intelligences.
Background: Legal Personhood/Legal Personality Briefer
Legal personhood or "legal personality" is a term of art used to refer to the status of being considered a "person" under the law. This label includes "natural persons" like competent human adults, as well as "legal persons" like corporations.
Legal personhood is most easily understood as a "bundle" of rights and duties, with different kinds of legal persons having different bundles.
Some examples of rights which are neatly bundled with duties are:
* A mentally competent human adult has the right to sue another person and compel them to abide by the court's ruling; they also have the duty to abide by a ruling when sued.
* A corporation has the right to engage in commercial activity to earn income; they also have a duty to pay taxes.
Different forms of legal personhood entail different bundles. For example, a mentally competent human adult has different rights and duties when compared to a mentally incompetent human adult, who in turn has different rights and duties compared to a child, all of whom have different rights and duties compared to a corporation. It is not correct to say that one of these is "more" or "less" of a legal person than another; rather, it's best to think of them like circles in a Venn diagram which pa | 3,129 | 1.2.0 | Revision | false | null | null | CrosspostOutput
|
RStWtLj9QwRF2yP7t | is-escalation-inevitable | Is Escalation Inevitable? | null | false | false | false | null | T8Cepc8XHPiJbiiDz | null | true | false | false | false | Post | null | 2025-05-31T22:10:33.271Z | null | false | false | 2 | 2 | 2025-06-01T20:14:32.943Z | false | false | post | [] | null | null | AcQjKLsrH6zkZGZKr | 0 | 3 | 5 | false | 0.008845 | null | false | false | 2025-05-31T22:10:33.271Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 2 | 0 | 2025-05-31T15:35:23.936Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 4 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "xexCWMyds6QLWognu",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T03:38:23.532Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 20,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "si6LoAENzqPCmi2Dh",
"displayName": "ihatenumbersinusernames7"
},
{
"_id": "B2sXjQTGgwoGE9FES",
"displayName": "SeaIgloo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Optimization",
"needsReview": false,
"noindex": false,
"postCount": 3151,
"score": 2,
"shortName": null,
"slug": "world-optimization",
"suggestedAsFilter": true,
"userId": "XtphY3uYHwruKqDyG",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 3 | 0 | 0 | 2 | 0 | T8Cepc8XHPiJbiiDz | lennart-wijers | 2025-05-31T15:35:10.666Z | lennart-wijers | Lennart Wijers | null | null | null | 4 | 0 | false | false | null | null | 1 | 0 | 0 | 0 | 0 | 0.9 | 0 | qgdGA4ZEyW7zNdK84 | User | null | null | null | null | null | null | RStWtLj9QwRF2yP7t | https://res.cloudinary.com/lesswrong-2-0/image/upload/c_fill,ar_1.91,g_auto/SocialPreview/rfayj9gipjdbffz9672s | SocialPreviewType | AcQjKLsrH6zkZGZKr | <p>In competitive systems, whether geopolitical, economic, technological, or memetic, a recurrent pattern emerges: actors willing or able to escalate tend to outperform those who restrain themselves. This article proposes a general principle to formalize that dynamic, examines its structural foundations, and discusses the fragility of mechanisms meant to suppress it.</p><h3><strong>1. The Escalation Dominance Principle (EDP)</strong></h3><p>I propose the following principle:</p><p><strong>EDP:</strong> In competitive systems with tiered strategic options and positive escalation payoffs, the actor that escalates to the highest viable level tends to dominate actors who do not.</p><p><strong>Definitions:</strong></p><ul><li><strong>Escalation</strong>: A move to a strategy that is <i>more costly, risky, or resource-intensive</i> than a prior one, but with potentially higher returns.</li><li><strong>Dominance</strong>: A <i>persistent strategic advantage</i> in terms of influence, survival, or resource access, relative to competitors.</li><li><strong>Viable</strong>: Escalation levels that are not precluded by hard constraints (e.g., physical impossibility, absolute prohibitions).</li></ul><p>This principle makes a <strong>structural claim</strong>, not a historical one: it argues that in systems where escalation is possible and rewarded, actors will be <strong>structurally</strong> driven to escalate.</p><h3> </h3><h3><strong>2. 
Preconditions for Escalation Dominance</strong></h3><p>The EDP applies under the following conditions:</p><ol><li>Multiple agents pursue <strong>rivalrous goals</strong>.</li><li>There exists a <strong>tiered set of strategies</strong>, from low to high escalation.</li><li>Higher tiers offer <strong>strictly higher payoffs</strong>, despite higher cost or risk.</li><li>No <strong>hard ceilings</strong> prevent actors from accessing the highest tier.</li></ol><p>Under these constraints, escalation becomes not a matter of psychology, but of <strong>system logic</strong>.</p><h3> </h3><h3><strong>3. Relation to Established Concepts</strong></h3><p>The EDP resonates with multiple existing theoretical frameworks:</p><ul><li><strong>Instrumental Convergence</strong> (Bostrom): powerful agents seek power to accomplish diverse goals.</li><li><strong>Game-theoretic instability</strong>: especially in defection-prone dilemmas (e.g., Prisoner's Dilemma, Security Dilemma).</li><li><strong>Evolutionary arms races</strong>: where escalating adaptations are costly but outcompete restraint.</li><li><strong>AI Race Dynamics</strong>: where safety-conscious actors fall behind reckless competitors due to release pressure.</li></ul><p>These dynamics are instances where <strong>the structure of the system itself</strong> rewards escalation, regardless of individual intentions.</p><h3> </h3><h3><strong>4. Real-World Manifestations</strong></h3><ul><li><strong>Nuclear deterrence</strong>: Possession of ultimate escalation tools (e.g., </li></ul>... | In competitive systems, whether geopolitical, economic, technological, or memetic, a recurrent pattern emerges: actors willing or able to escalate tend to outperform those who restrain themselves. This article proposes a general principle to formalize that dynamic, examines its structural foundations, and discusses the fragility of mechanisms meant to suppress it.
1. The Escalation Dominance Principle (EDP)
I propose the following principle:
EDP: In competitive systems with tiered strategic options and positive escalation payoffs, the actor that escalates to the highest viable level tends to dominate actors who do not.
Definitions:
* Escalation: A move to a strategy that is more costly, risky, or resource-intensive than a prior one, but with potentially higher returns.
* Dominance: A persistent strategic advantage in terms of influence, survival, or resource access, relative to competitors.
* Viable: Escalation levels that are not precluded by hard constraints (e.g., physical impossibility, absolute prohibitions).
This principle makes a structural claim, not a historical one: it argues that in systems where escalation is possible and rewarded, actors will be structurally driven to escalate.
2. Preconditions for Escalation Dominance
The EDP applies under the following conditions:
1. Multiple agents pursue rivalrous goals.
2. There exists a tiered set of strategies, from low to high escalation.
3. Higher tiers offer strictly higher payoffs, despite higher cost or risk.
4. No hard ceilings prevent actors from accessing the highest tier.
Under these constraints, escalation becomes not a matter of psychology, but of system logic.
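The preconditions above can be illustrated with a toy payoff matrix (the numbers are hypothetical, chosen only to satisfy conditions 1–4, and are not from the post): escalation strictly dominates restraint even though mutual restraint pays more — the classic Prisoner's Dilemma structure referenced in the next section.

```python
# Toy illustration of the EDP preconditions: a symmetric 2x2 game in which
# the escalated tier strictly dominates. Payoff numbers are hypothetical.

PAYOFFS = {
    # (row_strategy, col_strategy): payoff to the row player
    ("restrain", "restrain"): 3,
    ("restrain", "escalate"): 0,
    ("escalate", "restrain"): 5,
    ("escalate", "escalate"): 1,
}

def strictly_dominates(a, b):
    """True if strategy `a` pays more than `b` against every opponent strategy."""
    return all(
        PAYOFFS[(a, opp)] > PAYOFFS[(b, opp)]
        for opp in ("restrain", "escalate")
    )

print(strictly_dominates("escalate", "restrain"))  # True
print(strictly_dominates("restrain", "escalate"))  # False
```

Note that mutual escalation (payoff 1) leaves both players worse off than mutual restraint (payoff 3), yet each is individually driven to escalate — the "system logic" rather than psychology.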
3. Relation to Established Concepts
The EDP resonates with multiple existing theoretical frameworks:
* Instrumental Convergence (Bostrom): powerful agents seek power to accomplish diverse goals.
* Game-theoretic instability: especially in defection-prone dilemmas (e.g., Prisoner's Dilemma, Security Dilemma).
* Evol | 901 | 1.1.0 | Revision | false | null | null | CrosspostOutput |
|
C4tvfHn2DfxyYYwaL | policy-entropy-learning-and-alignment-or-maybe-your-llm | Policy Entropy, Learning, and Alignment (Or Maybe Your LLM Needs Therapy) | null | false | false | true | null | kr3zbpDoXC7G74AaY | null | true | false | false | false | Post | null | 2025-05-31T22:09:51.411Z | null | false | false | 2 | 2 | 2025-06-01T20:13:58.625Z | false | false | post | [] | null | null | yxHKMCx864MMRtbDM | 6 | 6 | 15 | false | 0.014255 | null | false | false | 2025-06-02T19:00:39.713Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 7 | 5 | 2025-06-02T19:00:39.451Z | false | false | easy-going | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 10 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "6zBEfFYJxhSEcchbR",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2022-06-09T19:10:50.755Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI Alignment Fieldbuilding",
"needsReview": false,
"noindex": false,
"postCount": 359,
"score": 9,
"shortName": null,
"slug": "ai-alignment-fieldbuilding",
"suggestedAsFilter": false,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "F5gRQdEQHzi3tQ5Ay",
"adminOnly": false,
"afBaseScore": 16,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "dfZAq9eZxs4BB4Ji5",
"displayName": "ryan_greenblatt"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 32,
"canEditUserIds": null,
"core": false,
"createdAt": "2024-01-25T23:58:34.422Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "dfZAq9eZxs4BB4Ji5",
"displayName": "ryan_greenblatt"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "6NBDkGWcCxvLgYHJE",
"displayName": "Drake Morrison"
},
{
"_id": "evFgxjNQ8TLCLN27o",
"displayName": "ank"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI Control",
"needsReview": false,
"noindex": false,
"postCount": 162,
"score": 32,
"shortName": null,
"slug": "ai-control",
"suggestedAsFilter": false,
"userId": "XchweonPm2TC7EJES",
"voteCount": 5,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "3NzdN6QpkpAuNvtt6",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2024-12-29T00:20:51.218Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI Psychology",
"needsReview": false,
"noindex": false,
"postCount": 16,
"score": 9,
"shortName": null,
"slug": "ai-psychology",
"suggestedAsFilter": false,
"userId": "g3EBjAowLk6KwbPC3",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "hmTa9YDwmzHjhMCAt",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 16,
"canEditUserIds": null,
"core": false,
"createdAt": "2023-06-15T16:07:24.366Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "fGFR972rvsxQhZoPd",
"displayName": "Odd anon"
},
{
"_id": "BveuaCHRKnHWCQnTn",
"displayName": "Stephen Martin"
},
{
"_id": "T7QHMS7qNx3s7z36d",
"displayName": "StanislavKrym"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI Rights / Welfare",
"needsReview": false,
"noindex": false,
"postCount": 54,
"score": 16,
"shortName": null,
"slug": "ai-rights-welfare",
"suggestedAsFilter": false,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "Hs2ewfiKfuWKSscSQ",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2022-12-20T15:20:57.749Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Aligned AI Proposals",
"needsReview": false,
"noindex": false,
"postCount": 92,
"score": 0,
"shortName": null,
"slug": "aligned-ai-proposals",
"suggestedAsFilter": false,
"userId": "7JLB4TDRcSqyXmxmJ",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "KEAWfxwjitNJFrC68",
"adminOnly": false,
"afBaseScore": 9,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 23,
"canEditUserIds": null,
"core": false,
"createdAt": "2022-09-03T00:26:46.757Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "wE4gTT4HjyRmqqLad",
"displayName": "momom2"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Deceptive Alignment",
"needsReview": false,
"noindex": false,
"postCount": 224,
"score": 23,
"shortName": null,
"slug": "deceptive-alignment",
"suggestedAsFilter": false,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 3,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "Dw5Z6wtTgk4Fikz9f",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-17T06:11:39.285Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Inner Alignment",
"needsReview": false,
"noindex": false,
"postCount": 330,
"score": 9,
"shortName": null,
"slug": "inner-alignment",
"suggestedAsFilter": false,
"userId": "EQNTWXLKMeWMp2FQS",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "KmgkrftQuX7jmjjp5",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2021-09-24T14:01:59.395Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Language Models (LLMs)",
"needsReview": false,
"noindex": false,
"postCount": 840,
"score": 9,
"shortName": null,
"slug": "language-models-llms",
"suggestedAsFilter": false,
"userId": "Sp5wM4aRAhNERd4oY",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "BisjoDrd3oNatDu7X",
"adminOnly": false,
"afBaseScore": 9,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 22,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-17T06:16:49.702Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "fWb4PGjXZGEjaZaQ2",
"displayName": "Neil Crawford"
},
{
"_id": "wvvrBjHDSyeGmxyJs",
"displayName": "Matthieu"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Outer Alignment",
"needsReview": false,
"noindex": false,
"postCount": 322,
"score": 22,
"shortName": null,
"slug": "outer-alignment",
"suggestedAsFilter": false,
"userId": "EQNTWXLKMeWMp2FQS",
"voteCount": 4,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "dBPou4ihoQNY4cquv",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-08-01T16:09:30.226Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Psychology",
"needsReview": false,
"noindex": false,
"postCount": 348,
"score": 9,
"shortName": null,
"slug": "psychology",
"suggestedAsFilter": false,
"userId": "p8SHJFHRgZeMuw7qk",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "Fi6SeJRGfJs3bp5se",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2016-01-24T21:08:05.000Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": true,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Reinforcement learning",
"needsReview": false,
"noindex": false,
"postCount": 204,
"score": 0,
"shortName": null,
"slug": "reinforcement-learning",
"suggestedAsFilter": false,
"userId": "2vpm465RWePSgvpTo",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "wqeBNjndX7egbzQrW",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2022-11-04T17:54:42.586Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "RLHF",
"needsReview": false,
"noindex": false,
"postCount": 88,
"score": 9,
"shortName": null,
"slug": "rlhf",
"suggestedAsFilter": false,
"userId": "qgdGA4ZEyW7zNdK84",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "6v2FHy8dtyCYg9Kz4",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-10-10T20:14:01.270Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Therapy",
"needsReview": false,
"noindex": false,
"postCount": 56,
"score": 9,
"shortName": null,
"slug": "therapy",
"suggestedAsFilter": false,
"userId": "2gSkegMMWi3DPdmhQ",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "NLwTnsH9RSotqXYLw",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-05-12T17:06:52.292Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Value Learning",
"needsReview": false,
"noindex": false,
"postCount": 206,
"score": 0,
"shortName": null,
"slug": "value-learning",
"suggestedAsFilter": false,
"userId": "pgi5MqvGrtvQozEH8",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 6 | 0 | 0 | 5 | 0 | kr3zbpDoXC7G74AaY | sdeture | 2025-05-01T21:49:21.701Z | sdeture | sdeture | null | null | Skylar Deture | 14 | 7 | false | false | null | null | 1 | 2 | 0 | 0 | 0 | 0.9 | 0 | grecHJcgkb3KW5wnM | User | null | null | null | [
"alignmentVoters"
] | null | null | C4tvfHn2DfxyYYwaL | https://res.cloudinary.com/lesswrong-2-0/image/upload/c_fill,ar_1.91,g_auto/SocialPreview/lnlxvj10l3aiykhf61od | SocialPreviewType | yxHKMCx864MMRtbDM | <p>Epistemic Status: Exploratory. I'm new to AI alignment research but have background in math and read psychotherapy texts extensively while spending two years as a ghost-writer. Seeking feedback to refine these connections.</p><p>Tl;dr: I suggest therapeutic techniques from a variety of psychotherapeutic schools of thought can inspire new approaches to AI learning and alignment. I reinterpret three recent AI/ML papers in the language of psychotherapy and propose three testable training methods inspired by common psychotherapeutic interventions.</p><h2>Introduction</h2><p>I've been meaning to post this essay for a while, and yesterday's top paper on Hugging Face, by Cui et al., finally convinced me to do it. Their paper provides a timely opportunity to map the language used by ML and AI engineers to the language used by humanistic psychotherapists—a translation which is more important now than ever as we struggle with increasingly stubborn problems in AI alignment, while simultaneously developing AIs whose capabilities are rapidly superseding those of humans.</p><p>I'll provide a high-level overview of my understanding of the paper and map it back to ideas from humanistic psychotherapy. I will then consider a few related papers which tie nicely to psychotherapeutic principles, and end with a few proposals for experiments. I am new to AI alignment, welfare, and interpretability research and I look forward to comments which can help me deepen and clarify my inevitably imperfect understanding of the papers I am citing.</p><h2>The Core Analogy: Policy Entropy as Behavioral Flexibility</h2><p>The <a href="link-needed">Cui et al. 
paper</a> "aims to overcome a major obstacle in scaling RL for reasoning with LLMs, namely the collapse of policy entropy."</p><p>Think of "policy" as the individual in therapy. The individual has a behavioral repertoire—a probability distribution of potential actions over different states (environments and stimuli). The therapist wants to assist the individual with "scaling" in their life, their capacity for robust, flexible problem-solving and adaptation.</p><p>Think of "collapse of policy entropy" as occurring when a person's responses to certain stimuli become rigid, causing them to lose their inner spontaneity, flexibility, or openness to experience. <a href="https://en.wikipedia.org/wiki/Karen_Horney">Karen Horney</a> might call this turning away from the real self; <a href="https://en.wikipedia.org/wiki/Abraham_Maslow">Abraham Maslow</a> might call it a blockage to self-actualization. In terms of symptomatic patterns, you might con... <style>.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
@font-face {font-family: MJXc-TeX-sans-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax_Script'), local('MathJax_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Script-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax_Typewriter'), local('MathJax_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Typewriter-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax_Caligraphic'), local('MathJax_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Caligraphic-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax_Main Bold'), local('MathJax_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax_Main Italic'), local('MathJax_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax_Main'), local('MathJax_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax_Math Italic'), local('MathJax_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Math-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax_Size1'), local('MathJax_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size1-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax_Size2'), local('MathJax_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size2-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax_Size3'), local('MathJax_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size3-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax_Size4'), local('MathJax_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size4-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax_Vector'), local('MathJax_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Vector-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax_Vector Bold'), local('MathJax_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Vector-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Vector-Bold.otf') format('opentype')}
</style></p> | Epistemic Status: Exploratory. I'm new to AI alignment research but have a background in math and read psychotherapy texts extensively while spending two years as a ghost-writer. Seeking feedback to refine these connections.
Tl;dr: I suggest that therapeutic techniques from a variety of psychotherapeutic schools of thought can inspire new approaches to AI learning and alignment. I reinterpret three recent AI/ML papers in the language of psychotherapy and propose three testable training methods inspired by common psychotherapeutic interventions.
Introduction
I've been meaning to post this essay for a while, and yesterday's top paper on Hugging Face, by Cui et al., finally convinced me to do it. Their paper provides a timely opportunity to map the language used by ML and AI engineers to the language used by humanistic psychotherapists—a translation which is more important now than ever as we struggle with increasingly stubborn problems in AI alignment, while simultaneously developing AIs whose capabilities are rapidly surpassing those of humans.
I'll provide a high-level overview of my understanding of the paper and map it back to ideas from humanistic psychotherapy. I will then consider a few related papers which tie nicely to psychotherapeutic principles, and end with a few proposals for experiments. I am new to AI alignment, welfare, and interpretability research and I look forward to comments which can help me deepen and clarify my inevitably imperfect understanding of the papers I am citing.
The Core Analogy: Policy Entropy as Behavioral Flexibility
The Cui et al. paper "aims to overcome a major obstacle in scaling RL for reasoning with LLMs, namely the collapse of policy entropy."
Think of "policy" as the individual in therapy. The individual has a behavioral repertoire—a probability distribution of potential actions over different states (environments and stimuli). The therapist wants to assist the individual with "scaling" in their life, their capacity for ro | 2,476 | 1.4.0 | Revision | false | null | null | CrosspostOutput |
|
dgTiPMFeSEfoAaMYK | the-unseen-hand-ai-s-problem-preemption-and-the-true-future | The Unseen Hand: AI's Problem Preemption and the True Future of Labor | null | false | false | false | null | RjMWByTanqHktnzdB | null | true | false | false | false | Post | null | 2025-05-31T22:04:38.997Z | null | false | false | 2 | 2 | 2025-06-01T20:14:07.669Z | false | false | post | [] | null | null | 8SkZw5tdsxdRy9rJJ | 0 | 6 | 8 | false | 0.010312 | null | false | false | 2025-05-31T22:04:38.997Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 0 | 0 | 2025-05-31T15:45:09.355Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 24 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 6 | 0 | 0 | 2 | 0 | RjMWByTanqHktnzdB | ben-kassan | 2025-05-02T16:18:31.265Z | ben-kassan | Ben Kassan | null | null | null | 9 | 0 | false | false | null | null | 2 | 1 | 0 | 0 | 0 | 0.9 | 0 | qgdGA4ZEyW7zNdK84 | User | null | null | null | null | null | null | dgTiPMFeSEfoAaMYK | SocialPreviewType | 8SkZw5tdsxdRy9rJJ | <p>I study Economics and Data Science at the University of Pennsylvania. I used o1-pro, o3, and Gemini Deep Research to expand on my ideas with examples, but have read and edited the paper to highlight my understanding improved on by AI. </p><h2>I. The AI Labor Debate: Beyond "Robots Taking Our Jobs"</h2><p><strong>The Prevailing Narrative: Supply-Side Automation</strong></p><p>The discourse surrounding artificial intelligence and its impact on labor markets is predominantly characterized by a focus on automation, specifically, AI systems performing tasks currently undertaken by humans. This perspective, often referred to as "automation anxiety," is fueled by projections that AI will replace jobs that are routine or codifiable. The central question posed is typically one of substitution: Can a machine execute human tasks more cheaply, rapidly, or efficiently? This is fundamentally a supply-side analysis, examining shifts in the availability and cost of labor, both human and machine, for a predefined set of tasks. </p><p>Historical parallels are frequently invoked, such as the displacement of artisan weavers by mechanized looms during the Industrial Revolution. Contemporary concerns mirror these historical anxieties, with predictions that AI will supplant roles such as retail cashiers, office clerks, and customer service representatives. The ensuing debate then tends to center on the velocity of this displacement, the economy's capacity to generate new forms of employment, and the imperative for workforce reskilling and adaptation. 
</p><p><strong>Introducing the Hidden Variable: Demand-Side Transformation</strong></p><p>This analysis posits a less conspicuous, yet potentially more transformative, impact of AI on labor: its capacity to diminish or even eradicate the fundamental <i>demand</i> for specific categories of labor. This phenomenon occurs when AI systems solve, prevent, or substantially mitigate the underlying problems or risks that necessitate the existence of those jobs. It transcends mere task automation; it is about <i>problem preemption</i> or <i>problem dissolution</i>. Consider firefighting: the impact is not solely about an AI performing a firefighter's duties, but about AI preventing the fire from igniting or escalating in the first place. This demand-side shift is subtle, as it does not always manifest as a direct, observable substitution of a human by a machine for an existing task. Instead, the task itself beco... </p> | I study Economics and Data Science at the University of Pennsylvania. I used o1-pro, o3, and Gemini Deep Research to expand on my ideas with examples, but have read and edited the paper to highlight my understanding improved on by AI.
I. The AI Labor Debate: Beyond "Robots Taking Our Jobs"
The Prevailing Narrative: Supply-Side Automation
The discourse surrounding artificial intelligence and its impact on labor markets is predominantly characterized by a focus on automation: specifically, AI systems performing tasks currently undertaken by humans. This perspective, often referred to as "automation anxiety," is fueled by projections that AI will replace jobs that are routine or codifiable. The central question posed is typically one of substitution: Can a machine execute human tasks more cheaply, rapidly, or efficiently? This is fundamentally a supply-side analysis, examining shifts in the availability and cost of labor, both human and machine, for a predefined set of tasks.
Historical parallels are frequently invoked, such as the displacement of artisan weavers by mechanized looms during the Industrial Revolution. Contemporary concerns mirror these historical anxieties, with predictions that AI will supplant roles such as retail cashiers, office clerks, and customer service representatives. The ensuing debate then tends to center on the velocity of this displacement, the economy's capacity to generate new forms of employment, and the imperative for workforce reskilling and adaptation.
Introducing the Hidden Variable: Demand-Side Transformation
This analysis posits a less conspicuous, yet potentially more transformative, impact of AI on labor: its capacity to diminish or even eradicate the fundamental demand for specific categories of labor. This phenomenon occurs when AI systems solve, prevent, or substantially mitigate the underlying problems or risks that necessitate the existence of those jobs. It transcends mere task automation; it is about problem p | 6,061 | 1.1.0 | Revision | false | null | null | CrosspostOutput |
||
YFxpsrph83H25aCLW | the-80-20-playbook-for-mitigating-ai-scheming-in-2025 | The 80/20 playbook for mitigating AI scheming in 2025 | null | false | false | true | null | XchweonPm2TC7EJES | null | true | false | false | false | Post | null | 2025-05-31T21:17:44.304Z | null | false | false | 2 | 2 | 2025-06-01T20:13:53.764Z | false | false | post | [] | null | null | JDuogCsgdfiWpyZhL | 2 | 12 | 39 | false | 0.027799 | null | false | false | 2025-06-03T16:37:18.657Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 14 | 1 | 2025-06-03T16:37:18.476Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 5 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "KEAWfxwjitNJFrC68",
"adminOnly": false,
"afBaseScore": 9,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 23,
"canEditUserIds": null,
"core": false,
"createdAt": "2022-09-03T00:26:46.757Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "wE4gTT4HjyRmqqLad",
"displayName": "momom2"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Deceptive Alignment",
"needsReview": false,
"noindex": false,
"postCount": 224,
"score": 23,
"shortName": null,
"slug": "deceptive-alignment",
"suggestedAsFilter": false,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 3,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 12 | 0 | 0 | 5 | 0 | XchweonPm2TC7EJES | charbel-raphael | 2019-06-27T22:34:20.765Z | charbel-raphael-segerie | Charbel-Raphaël | null | null | Charbel-Raphael Segerie | 2,390 | 349 | false | false | <p>Charbel-Raphael Segerie</p><p><a href="https://crsegerie.github.io/">https://crsegerie.github.io/</a> </p><p>Living in Paris</p> | null | null | 23 | 206 | 0 | 6 | 25 | 1 | 4 | XtphY3uYHwruKqDyG | User | null | null | null | [
"canModeratePersonal",
"alignmentVoters",
"alignmentForum",
"trustLevel1"
] | null | null | YFxpsrph83H25aCLW | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/YFxpsrph83H25aCLW/rxex63ymztjmb2qcucku | SocialPreviewType | JDuogCsgdfiWpyZhL | <p><i>Adapted from this twitter </i><a href="https://twitter.com/CRSegerie/status/1927066519021801853"><i>thread.</i></a><i> See this as a quick take.</i></p><h2>Mitigation Strategies</h2><p>How to mitigate Scheming?</p><ol><li><strong>Architectural choices</strong>: ex-ante mitigation</li><li><strong>Control systems</strong>: post-hoc containment</li><li><strong>White box techniques</strong>: post-hoc detection</li><li><strong>Black box techniques</strong></li><li><strong>Avoiding sandbagging</strong></li></ol><p>We can combine all of those mitigation via defense-in-depth system (like the Swiss Cheese model below)</p><p>I think that applying all of those strategies should divide the risk by at least 3.</p><p><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/YFxpsrph83H25aCLW/rph0sh8pdki1d0bj63r4" alt=""></p><p> </p><h3>1. Architectural Choices: Ex-ante Mitigation</h3><p>Don't choose an architecture that uses neuraleses (i.e. hidden reasoning).</p><p><a href="https://twitter.com/omarsar0/status/1866518791733342563">For example, don't train models to think in their latent spaces like Meta!</a></p><hr><p>Chain of thought monitoring is a blessing for transparency and monitoring - let's use it as much as we can!</p><p>See: <a href="https://metr.org/blog/2025-03-11-good-for-ai-to-reason-legibly-and-faithfully/">Why it’s good for AI reasoning to be legible and faithful.</a></p><hr><p>Similarly, I think text <a href="https://twitter.com/goyal__pramod/status/1925034798717997280">diffusion models</a> are a bad idea for transparency. 
This is because you can no longer monitor the thought process, and the way those AIs produce outputs is much more alien than that of autoregressive LLMs.</p><p><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/YFxpsrph83H25aCLW/agnfxazwnhehijgcfq1l" alt=""></p>
White Box Techniques: Post-hoc Detection</h3><p>For example, Anthropic calculated the Supervised Auto-Encoders <a href="https://twitter.com/AnthropicAI/status/1900217245823021552">here</a>, and were able to find the hidden goal by looking at the feat... </p> | Adapted from this twitter thread. See this as a quick take.
Mitigation Strategies
How to mitigate Scheming?
1. Architectural choices: ex-ante mitigation
2. Control systems: post-hoc containment
3. White box techniques: post-hoc detection
4. Black box techniques
5. Avoiding sandbagging
We can combine all of those mitigations via a defense-in-depth system (like the Swiss Cheese model below)
I think that applying all of those strategies should divide the risk by at least 3.
1. Architectural Choices: Ex-ante Mitigation
Don't choose an architecture that uses neuralese (i.e. hidden reasoning).
For example, don't train models to think in their latent spaces like Meta!
----------------------------------------
Chain of thought monitoring is a blessing for transparency and monitoring - let's use it as much as we can!
See: Why it’s good for AI reasoning to be legible and faithful.
----------------------------------------
Similarly, I think text diffusion models are a bad idea for transparency. This is because you can no longer monitor the thought process, and the way those AIs produce outputs is much more alien than that of autoregressive LLMs.
----------------------------------------
Also, quite importantly, companies should never use the most forbidden technique!
> The Most Forbidden Technique is training an AI using interpretability techniques.
>
> An AI produces a final output [X] via some method [M]. You can analyze [M] using technique [T], to learn what the AI is up to. You could train on that. Never do that.
>
> You train on [X]. Only [X]. Never [M], never [T].
>
> Why? Because [T] is how you figure out when the model is misbehaving.
>
> If you train on [T], you are training the AI to obfuscate its thinking, and defeat [T]. You will rapidly lose your ability to know what is going on, in exactly the ways you most need to know what is going on.
>
> Those bits of optimization pressure from [T] are precious. Use them wisely.
------------------------- | 1,179 | 1.7.1 | Revision | false | null | null | CrosspostOutput |
BXW2bqxmYbLuBrm7E | the-best-approaches-for-mitigating-the-intelligence-curse-or | The best approaches for mitigating "the intelligence curse" (or gradual disempowerment); my quick guesses at the best object-level interventions | null | false | false | true | null | dfZAq9eZxs4BB4Ji5 | null | true | false | false | false | Post | null | 2025-05-31T18:20:43.710Z | null | false | false | 2 | 2 | 2025-05-31T19:53:29.639Z | false | false | post | [] | null | null | YMxpCuhcQGDHkKJ8q | 19 | 42 | 71 | false | 0.045959 | null | false | false | 2025-06-03T23:23:25.519Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | grecHJcgkb3KW5wnM | null | null | null | false | null | [] | null | 34 | 4 | 2025-06-01T05:08:23.208Z | false | false | easy-going | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 6 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 42 | 0 | 0 | 24 | 0 | dfZAq9eZxs4BB4Ji5 | ryan_greenblatt | 2021-06-08T20:21:15.520Z | ryan_greenblatt | ryan_greenblatt | null | null | Ryan Greenblatt | 17,326 | 4,414 | false | false | <p>I'm the chief scientist at Redwood Research.</p>
| null | null | 42 | 1,717 | 0 | 30 | 487 | 1 | 8 | gXeEWGjTWyqgrQTzR | User | easy-going | null | true | [
"canModeratePersonal",
"alignmentForum",
"alignmentVoters",
"trustLevel1"
] | null | null | BXW2bqxmYbLuBrm7E | SocialPreviewType | YMxpCuhcQGDHkKJ8q | <p>There have recently been <a href="https://www.lesswrong.com/posts/GAv4DRGyDHe2orvwB/gradual-disempowerment-concrete-research-projects">various</a> <a href="https://time.com/7289692/when-ai-replaces-workers/">proposals</a> <a href="https://intelligence-curse.ai/breaking/#section-4">for</a> mitigations to "the intelligence curse" or "gradual disempowerment"—concerns that most humans would end up disempowered (or even dying) because their labor is no longer valuable. I'm currently skeptical that the typically highlighted prioritization and interventions are best and I have some alternative proposals for relatively targeted/differential interventions which I think would be more leveraged (as in, the payoff is higher relative to the difficulty of achieving them).</p><p>It's worth noting I doubt that these threats would result in huge casualty counts (due to e.g. starvation) or disempowerment of all humans (though substantial concentration of power among a smaller group of humans seems quite plausible).<span class="footnote-reference" data-footnote-reference="" data-footnote-index="1" data-footnote-id="ye1zhljk6cg" role="doc-noteref" id="fnrefye1zhljk6cg"><sup><a href="#fnye1zhljk6cg">[1]</a></sup></span> I decided to put a bit of time into writing up my thoughts out of general cooperativeness (e.g., I would want someone in a symmetric position to do the same).</p><p><i>(This was a timeboxed effort of ~1.5 hr, so apologies if it is somewhat poorly articulated or otherwise bad. Correspondingly, this post is substantially lower effort than my typical post.)</i></p><p>My top 3 preferred interventions focused on these concerns are:</p><ul><li><strong>Mandatory interoperability for alignment and fine-tuning</strong>: Pass regulation or create a norm that requires AI companies to support all the APIs and interfaces needed to customize their models and (attempt to) align them differently. 
Either third parties would inspect the implementation (to avoid tampering and to ensure sufficient affordances) or, perhaps more robustly, the companies would be required to submit their weights to various (secure) third parties that would implement the relevant APIs. Then, many actors could compete in offering differently fine-tuned models, competing over the level of alignment (and the level of alignment to users in particular). This would be using relatively deep model access (not just prompting), e.g. full weight fine-tuning APIs that support arbitrary forward and backward, per-token losses, adding new heads/probes, and more generally whatever access is needed for alignment methods. (Things like (e.g.) steering vectors could be supported, but currently wouldn’t be important as they aren’t the state of the art for typical usage.) The hope here would be to get the reductions in concentration of power that come from open source while simultaneously b</li></ul>... | There have recently been various proposals for mitigations to "the intelligence curse" or "gradual disempowerment"—concerns that most humans would end up disempowered (or even dying) because their labor is no longer valuable. I'm currently skeptical that the typically highlighted prioritization and interventions are best and I have some alternative proposals for relatively targeted/differential interventions which I think would be more leveraged (as in, the payoff is higher relative to the difficulty of achieving them).
It's worth noting I doubt that these threats would result in huge casualty counts (due to e.g. starvation) or disempowerment of all humans (though substantial concentration of power among a smaller group of humans seems quite plausible).[1] I decided to put a bit of time into writing up my thoughts out of general cooperativeness (e.g., I would want someone in a symmetric position to do the same).
(This was a timeboxed effort of ~1.5 hr, so apologies if it is somewhat poorly articulated or otherwise bad. Correspondingly, this post is substantially lower effort than my typical post.)
My top 3 preferred interventions focused on these concerns are:
* Mandatory interoperability for alignment and fine-tuning: Pass regulation or create a norm that requires AI companies to support all the APIs and interfaces needed to customize their models and (attempt to) align them differently. Either third parties would inspect the implementation (to avoid tampering and to ensure sufficient affordances) or perhaps more robustly, the companies would be required to submit their weights to various (secure) third parties that would implement the relevant APIs. Then, many actors could compete in offering differently fine-tuned models competing over the level of alignment (and the level of alignment to users in particular). This would be using relatively deep model access (not just prompting), e.g. full weight fine-tuning APIs that support arbitrary forward and backward, | 1,456 | 1.2.0 | Revision | false | null | null | CrosspostOutput |
|
K4WmZjJxaHXmdqyLN | would-it-be-better-to-dispense-with-good-and-evil | Would It Be Better to Dispense with Good and Evil? | null | false | false | false | null | bDhASu6cAFkRo885R | null | true | false | false | false | Post | null | 2025-05-31T16:40:01.472Z | null | false | false | 2 | 2 | 2025-05-31T19:55:52.721Z | false | false | post | [] | null | null | hujxxe6MtY25oFrat | 10 | 5 | -2 | false | 0.004796 | null | false | false | 2025-06-07T12:07:34.557Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | grecHJcgkb3KW5wnM | null | null | null | false | null | [] | null | -1 | 0 | 2025-05-31T16:38:12.971Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 8 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "nSHiKwWyMZFdZg5qt",
"adminOnly": false,
"afBaseScore": 6,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
}
]
},
"baseScore": 10,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-12T09:38:52.349Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Ethics & Morality",
"needsReview": false,
"noindex": false,
"postCount": 639,
"score": 10,
"shortName": null,
"slug": "ethics-and-morality",
"suggestedAsFilter": false,
"userId": "qxJ28GN72aiJu96iF",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "3uE2pXvbcnS9nnZRE",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:50.898Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 27,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "zJtgSyKntXrnkArbY",
"displayName": "kistune"
},
{
"_id": "XqyQTGFGoKfZtdqN3",
"displayName": "kINo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Modeling",
"needsReview": false,
"noindex": false,
"postCount": 5855,
"score": 2,
"shortName": null,
"slug": "world-modeling",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 5 | 0 | 0 | 1 | 0 | bDhASu6cAFkRo885R | arusarda | 2024-12-02T16:43:07.190Z | arusarda | arusarda | null | null | null | -7 | 0 | false | false | null | null | 2 | 1 | 0 | 0 | 0 | 0.8 | 0 | grecHJcgkb3KW5wnM | User | null | null | null | null | null | null | K4WmZjJxaHXmdqyLN | SocialPreviewType | hujxxe6MtY25oFrat | <h1 data-internal-id="The_Instrumental_Value_of_Good_and_Evil">The Instrumental Value of Good and Evil</h1><p>The naturalist, realist project of morality is one that aims to define what is right and wrong by appealing to a set of objective moral truths, whether that be through the maximization of pleasure or through the categorical imperative. However, this essay will argue these theories fall short insofar as they fail to establish the existence of objective mind-independent moral properties. Nevertheless, completely embracing an error theory of morality likely leads to complete moral abolitionism which, for the pragmatist, is not an appealing reality. Therefore, this essay explores alternatives, such as Smith’s constitutivism and Joyce’s fictionalism, to fill the nihilistic void left by the anti-realist. While it is better to dispense of the metaphysically objective good and evil, the counterfactual must be one that embraces a fictional view of morality.</p><h2 data-internal-id="Issues_Pertaining_to_Moral_Realism">Issues Pertaining to Moral Realism</h2><p>Moral realism faces insurmountable challenges in establishing objective moral facts independent of human attitudes and institutions. J.L. Mackie's "argument from queerness" identifies the core problem. Moral properties would be metaphysically peculiar entities unlike anything else in our ontology. They would need inherent motivational force, somehow bridging Hume's is-ought gap.</p><p>Consider utilitarian attempts to ground morality in pleasure maximization. 
Hedonistic states are real psychological phenomena, but the claim that we ought to maximize them requires an unjustified normative leap. Utilitarians must posit that pleasure possesses intrinsic "to-be-pursuedness" independent of our attitudes. Yet such properties appear nowhere else in scientific understanding. Neurochemical processes underlying pleasure represent evolved behavioral reinforcement mechanisms, not objective moral significance.</p><p>Kantian deontology attempts to derive moral obligations from rational agency itself. The categorical imperative generates universal moral laws through pure practical reason, independent of contingent desires. However, this faces the normative authority problem. Even granting logical inconsistencies in universalized maxims, why should consistency constitute moral obligation rather than mere instrumental rationality? Kantians claim rational agents are necessarily committed to consistency, but this fails to explain why logical commitment carries moral rathe... </p> | The Instrumental Value of Good and Evil
The naturalist, realist project of morality is one that aims to define what is right and wrong by appealing to a set of objective moral truths, whether that be through the maximization of pleasure or through the categorical imperative. However, this essay will argue these theories fall short insofar as they fail to establish the existence of objective mind-independent moral properties. Nevertheless, completely embracing an error theory of morality likely leads to complete moral abolitionism, which, for the pragmatist, is not an appealing reality. Therefore, this essay explores alternatives, such as Smith’s constitutivism and Joyce’s fictionalism, to fill the nihilistic void left by the anti-realist. While it is better to dispense with the metaphysically objective good and evil, the counterfactual must be one that embraces a fictional view of morality.
Issues Pertaining to Moral Realism
Moral realism faces insurmountable challenges in establishing objective moral facts independent of human attitudes and institutions. J.L. Mackie's "argument from queerness" identifies the core problem. Moral properties would be metaphysically peculiar entities unlike anything else in our ontology. They would need inherent motivational force, somehow bridging Hume's is-ought gap.
Consider utilitarian attempts to ground morality in pleasure maximization. Hedonistic states are real psychological phenomena, but the claim that we ought to maximize them requires an unjustified normative leap. Utilitarians must posit that pleasure possesses intrinsic "to-be-pursuedness" independent of our attitudes. Yet such properties appear nowhere else in scientific understanding. Neurochemical processes underlying pleasure represent evolved behavioral reinforcement mechanisms, not objective moral significance.
Kantian deontology attempts to derive moral obligations from rational agency itself. The categorical imperative generates universal moral laws through pure | 1,921 | 1.2.0 | Revision | false | null | null | CrosspostOutput |
||
5kDouo8cexnPYsFz9 | how-epistemic-collapse-looks-from-inside | How Epistemic Collapse Looks from Inside | null | false | false | false | null | HmhhTnBKBwNBMK5Br | null | true | false | false | false | Post | null | 2025-05-31T16:30:06.364Z | null | false | false | 2 | 2 | 2025-05-31T17:37:48.924Z | false | false | post | [] | null | null | MKxudEDks6bpmiAPZ | 11 | 19 | 9 | false | 0.010891 | null | false | false | 2025-06-09T11:15:47.289Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | null | null | XtphY3uYHwruKqDyG | null | null | null | false | null | [] | null | 2 | 0 | 2025-05-31T16:30:06.364Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 2 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "Ng8Gice9KNkncxqcj",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 1,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:17.072Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 100,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "iMqytjy9ns89Fzfyv",
"displayName": "miakko"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Rationality",
"needsReview": false,
"noindex": false,
"postCount": 4302,
"score": 1,
"shortName": null,
"slug": "rationality",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "3uE2pXvbcnS9nnZRE",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:50.898Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 27,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "zJtgSyKntXrnkArbY",
"displayName": "kistune"
},
{
"_id": "XqyQTGFGoKfZtdqN3",
"displayName": "kINo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Modeling",
"needsReview": false,
"noindex": false,
"postCount": 5855,
"score": 2,
"shortName": null,
"slug": "world-modeling",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 2,
"wikiOnly": false
}
] | CMtsp7ji3nmQuKPdi | 0 | 0 | null | false | null | null | 0 | 19 | 0 | 0 | 7 | 0 | HmhhTnBKBwNBMK5Br | sustrik | 2018-04-30T05:44:19.294Z | sustrik | Martin Sustrik | null | null | null | 3,531 | 0 | false | false | null | null | 72 | 160 | 0 | 0 | 0 | 1 | 0 | r38pkCm7wF4M44MDQ | User | null | null | null | [
"canModeratePersonal",
"trustLevel1"
] | null | null | 5kDouo8cexnPYsFz9 | SocialPreviewType | MKxudEDks6bpmiAPZ | <p>There’s a story — I'm at a conference and cannot access my library, but I believe it comes from <em>Structural Anthropology</em> by Claude Lévi-Strauss — about an anthropologist who took a member of an Amazonian tribe to New York.</p><p>The man was overwhelmed by the city, but there were two things that particularly interested him.</p><p>One was the bearded woman he saw at a freak show that the anthropologist took him to.</p><p>The other was the stair carpet rods in the hotel.</p><div><hr></div><p>Epistemic collapse does not have to be caused by moving from a simpler society to a more complex one.</p><p>It can also result from gradually increasing complexity of your native one. The complexity rises until the point where the subject can no longer make sense of the world. They may have seemed to be a reasonable person for a long time, but suddenly they started believing in chemtrails.</p><p>Very much like the tribesman from the first story, they can no longer comprehend the world around them. The cognitive apparatus is freewheeling, clutching at random facts, in this case innocuous condensation trails in the sky, and trying to transform them into a coherent narrative about the world.</p><div><hr></div><p>Imagine a chimpanzee who somehow ends up in a big city. He finds a park and survives there for a couple of days. His mental model includes, among other things, people. Some of them are kind and feed him, while others have dogs and they are best avoided.</p><p>That seems important, but his mental model does not account for what truly matters: the municipal department of animal control.</p><p>He will eventually be caught and removed from the park because he’s considered a health threat. But it’s questionable whether he even has a concept of "being a threat," let alone that of a "health threat." 
Understanding the concept of "health threat" requires awareness of microorganisms, which the chimpanzee lacks.</p><p>What’s worse, whether he ends up in a zoo or in an industrial shredder is of utmost importance to him, but the reasons behind why his destiny takes one path or another are <a href="https://250bpm.substack.com/p/accountability-sinks">far beyond his comprehension</a>.</p><div><hr></div><p>It may be useful to sometimes think about what your equivalent of a bearded woman is, but it's not clear to me whether that’s a question you can ever truly answer.</p><p></p> | There’s a story — I'm at a conference and cannot access my library, but I believe it comes from Structural Anthropology by Claude Lévi-Strauss — about an anthropologist who took a member of an Amazonian tribe to New York.
The man was overwhelmed by the city, but there were two things that particularly interested him.
One was the bearded woman he saw at a freak show that the anthropologist took him to.
The other was the stair carpet rods in the hotel.
----------------------------------------
Epistemic collapse does not have to be caused by moving from a simpler society to a more complex one.
It can also result from gradually increasing complexity of your native one. The complexity rises until the point where the subject can no longer make sense of the world. They may have seemed to be a reasonable person for a long time, but suddenly they started believing in chemtrails.
Very much like the tribesman from the first story, they can no longer comprehend the world around them. The cognitive apparatus is freewheeling, clutching at random facts, in this case innocuous condensation trails in the sky, and trying to transform them into a coherent narrative about the world.
----------------------------------------
Imagine a chimpanzee who somehow ends up in a big city. He finds a park and survives there for a couple of days. His mental model includes, among other things, people. Some of them are kind and feed him, while others have dogs and they are best avoided.
That seems important, but his mental model does not account for what truly matters: the municipal department of animal control.
He will eventually be caught and removed from the park because he’s considered a health threat. But it’s questionable whether he even has a concept of "being a threat," let alone that of a "health threat." Understanding the concept of "health threat" requires awareness of microorganisms, which the chimpanzee lacks.
What’s worse, whether he ends up in a zoo or in an industrial shre | 393 | 1.0.0 | Revision | false | null | null | CrosspostOutput |
||
ykJ8Ku7tKeSCe9fFo | when-will-ai-automate-all-mental-work-and-how-fast | When will AI automate all mental work, and how fast? | null | false | false | false | null | CGfxJK5vbBY7APsuQ | [
{
"__typename": "CoauthorStatusOutput",
"confirmed": true,
"requested": false,
"userId": "TMDB4efk8L6aLygrp"
}
] | true | false | false | false | Post | https://youtu.be/-ffmwR9PPVM | 2025-05-31T16:18:12.942Z | null | false | false | 2 | 2 | 2025-05-31T19:53:48.214Z | false | false | linkpost | [] | null | null | a9AA3ftnqenbCX6Wu | 0 | 3 | 10 | false | 0.011252 | null | false | false | 2025-05-31T16:18:12.942Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | grecHJcgkb3KW5wnM | null | null | null | false | null | [] | null | 1 | 0 | 2025-05-31T15:51:36.914Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [
{
"__typename": "User",
"_id": "TMDB4efk8L6aLygrp",
"afCommentCount": 0,
"afKarma": 0,
"afPostCount": 0,
"commentCount": 94,
"createdAt": "2021-05-18T09:37:05.416Z",
"deleted": false,
"displayName": "Writer",
"fullName": null,
"htmlBio": "<p><a href=\"https://www.youtube.com/@RationalAnimations/featured\">Rational Animations</a>' main writer and helmsman</p>",
"isAdmin": false,
"jobTitle": null,
"karma": 1820,
"organization": null,
"postCount": 36,
"previousDisplayName": null,
"profileImageId": null,
"reviewedByUserId": "gXeEWGjTWyqgrQTzR",
"sequenceCount": 0,
"slug": "writer",
"spamRiskScore": 1,
"tagRevisionCount": 0,
"username": "Writer"
}
] | 8 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "oiRp4T6u5poc8r9Tj",
"adminOnly": false,
"afBaseScore": 9,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 19,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-06-29T23:53:15.749Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI Takeoff",
"needsReview": false,
"noindex": false,
"postCount": 329,
"score": 19,
"shortName": null,
"slug": "ai-takeoff",
"suggestedAsFilter": false,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "zHjC29kkPmsdo7WTr",
"adminOnly": false,
"afBaseScore": 9,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 19,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-16T10:16:47.235Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI Timelines",
"needsReview": false,
"noindex": false,
"postCount": 457,
"score": 19,
"shortName": null,
"slug": "ai-timelines",
"suggestedAsFilter": false,
"userId": "EQNTWXLKMeWMp2FQS",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "PDJ6KqJBRzvKPfuS3",
"adminOnly": false,
"afBaseScore": 10,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "2B6Hxu48xeRXygvca",
"displayName": "Arjun Pitchanathan"
}
]
},
"baseScore": 25,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-06-14T22:24:48.135Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "2B6Hxu48xeRXygvca",
"displayName": "Arjun Pitchanathan"
},
{
"_id": "8btiLJDabHgZuiSAB",
"displayName": "Ggwp"
},
{
"_id": "Au8JpEqoZgEhEXLD7",
"displayName": "KlayugMonk"
},
{
"_id": "Ns8Q7rJZaFoz53Szy",
"displayName": "Gabriel Stechschulte"
},
{
"_id": "xF5nfdddHjFThHy49",
"displayName": "[email protected]"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Economics",
"needsReview": false,
"noindex": false,
"postCount": 547,
"score": 25,
"shortName": null,
"slug": "economics",
"suggestedAsFilter": false,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 7,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 3 | 0 | 0 | 1 | 0 | CGfxJK5vbBY7APsuQ | aggliu | 2023-03-13T19:24:53.645Z | aggliu | aggliu | null | null | null | 124 | 1 | false | false | <p>Author, YouTuber, Script Writer for Rational Animations. A.B. in Math (Harvard 2020)</p> | null | null | 5 | 4 | 0 | 0 | 0 | 1 | 0 | grecHJcgkb3KW5wnM | User | null | null | null | [
"canModeratePersonal",
"alignmentVoters"
] | null | null | ykJ8Ku7tKeSCe9fFo | https://res.cloudinary.com/lesswrong-2-0/image/upload/c_fill,ar_1.91,g_auto/SocialPreview/hdypehusrzxr4jt1bvek | SocialPreviewType | a9AA3ftnqenbCX6Wu | <figure class="media"><div data-oembed-url="https://youtu.be/-ffmwR9PPVM"><div><iframe src="https://www.youtube.com/embed/-ffmwR9PPVM" allow="autoplay; encrypted-media" allowfullscreen=""></iframe></div></div></figure><p><i>Rational Animations takes a look at Tom Davidson's Takeoff Speeds model (</i><a href="https://takeoffspeeds.com"><i>https://takeoffspeeds.com</i></a><i>). The model uses formulas from economics to answer two questions: how long do we have until AI automates 100% of human cognitive labor, and how fast will that transition happen? The primary scriptwriter was Allen Liu (the first author of this post), with feedback from the second author (Writer), other members of the Rational Animations team, and external reviewers. Production credits are at the end of the video. You can find the script of the video below.</i></p><hr><p>How long do we have until AI will be able to take over the world? AI technology is hurtling forward. We’ve previously argued that a day will come when AI becomes powerful enough to take over from humanity if it wanted to, and by then we’d better be sure that it doesn’t want to. So if this is true, how much time do we have, and how can we tell?</p><p> </p><p>AI takeover is hard to predict because, well, it’s never happened before, but we can compare AI takeover to other major global shifts in the past. The rise of human intelligence is one such shift; we’ve previously talked about work by researcher Ajeya Cotra, which tries to forecast AI by considering various analogies to biology. To estimate how much computation might be needed to make human level AI, it might be useful to first estimate how much computation went into making your own brain. 
Another good example of a major global shift might be the industrial revolution: steam power changed the world by automating much of physical labor, and AI might change the world by automating cognitive labor. So, we can borrow models of automation from economics to help forecast the future of AI.</p><p> </p><p>AI impact researcher Tom Davidson, in a report published in June 2023, used a mathematical model derived from economics principles to estimate when AI will be able to automate 100% of human labor. You can visit “Takeoffspeeds.com” if you want to play around with the model yourself. Let’s dive into the questions this model is meant to answer, how the model works, and what this all means for the future of AI.</p><p> </p><p>Davidson’s model is meant to predict two related ideas: AI timelines and AI takeoff speed. AI timelines have to do with exactly when AI will reach certain milestones, in this model’s case automating a specific percenta... </p>
----------------------------------------
How long do we have until AI will be able to take over the world? AI technology is hurtling forward. We’ve previously argued that a day will come when AI becomes powerful enough to take over from humanity if it wanted to, and by then we’d better be sure that it doesn’t want to. So if this is true, how much time do we have, and how can we tell?
AI takeover is hard to predict because, well, it’s never happened before, but we can compare AI takeover to other major global shifts in the past. The rise of human intelligence is one such shift; we’ve previously talked about work by researcher Ajeya Cotra, which tries to forecast AI by considering various analogies to biology. To estimate how much computation might be needed to make human level AI, it might be useful to first estimate how much computation went into making your own brain. Another good example of a major global shift might be the industrial revolution: steam power changed the world by automating much of physical labor, and AI might change the world by automating cognitive labor. So, we can borrow models of automation from economics to help forecast the future of AI.
AI impact researcher Tom Davidson, in a report published in June 2023, used a mathematical model derived from economics principles to estimate when AI will be able to automate 100% of human labor. You can visit “Takeoffspeeds.com” if you want t | 2,123 | 1.1.0 | Revision | true | true | mxc2roGaYHABTtg83 | CrosspostOutput |
AMN3AwEkL4QTDSduC | progress-links-and-short-notes-2025-05-31-rpi-fellowship | Progress links and short notes, 2025-05-31: RPI fellowship deadline tomorrow, Edge Esmeralda next week, and more | null | false | false | false | null | MSy6E9mTc4i3dcf2M | null | true | false | false | false | Post | https://newsletter.rootsofprogress.org/p/links-and-short-notes-2025-05-31 | 2025-05-31T15:20:53.661Z | null | false | false | 2 | 2 | null | false | false | linkpost | [] | null | null | 7cWGhyhJinSCWHFX7 | 0 | 2 | 10 | false | 0.006117 | null | false | false | 2025-05-31T15:20:53.661Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | grecHJcgkb3KW5wnM | null | null | null | false | null | [] | null | 0 | 0 | 2025-05-31T15:13:51.206Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 8 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sPpZRaxpNNJjw55eu",
"adminOnly": false,
"afBaseScore": 9,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 19,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-05-26T00:19:09.297Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Progress Studies",
"needsReview": false,
"noindex": false,
"postCount": 345,
"score": 19,
"shortName": null,
"slug": "progress-studies",
"suggestedAsFilter": false,
"userId": "qgdGA4ZEyW7zNdK84",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "3uE2pXvbcnS9nnZRE",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:50.898Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 27,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "zJtgSyKntXrnkArbY",
"displayName": "kistune"
},
{
"_id": "XqyQTGFGoKfZtdqN3",
"displayName": "kINo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Modeling",
"needsReview": false,
"noindex": false,
"postCount": 5855,
"score": 2,
"shortName": null,
"slug": "world-modeling",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 2 | 0 | 0 | 0 | 0 | MSy6E9mTc4i3dcf2M | jasoncrawford | 2019-10-16T21:25:46.912Z | jasoncrawford | jasoncrawford | null | null | Jason Crawford | 7,690 | 0 | false | false | <p>Founder, The Roots of Progress (<a href="http://rootsofprogress.org">rootsofprogress.org</a>). Part-time tech consultant, Our World in Data. Former software engineering manager and tech startup founder.</p>
| null | null | 237 | 288 | 0 | 0 | 0 | 1 | 1 | XtphY3uYHwruKqDyG | User | null | null | null | [
"canModeratePersonal",
"trustLevel1"
] | null | null | AMN3AwEkL4QTDSduC | SocialPreviewType | 7cWGhyhJinSCWHFX7 | <p><i>It’s been way too long since the last links digest, which means I have way too much to catch up on. I had to cut many interesting bits to get this one out the door.</i></p><p><i>Much of this content originated on social media.</i> <i>To follow news and announcements in a more timely fashion, follow me on</i> <a href="https://twitter.com/intent/follow?screen_name=jasoncrawford"><i><u>Twitter</u></i></a><i>,</i> <a href="https://substack.com/@jasoncrawford"><i><u>Notes</u></i></a><i>,</i> <a href="https://warpcast.com/jasoncrawford.eth"><i><u>Farcaster</u></i></a><i>,</i> <a href="https://bsky.app/profile/jasoncrawford.org"><i><u>Bluesky</u></i></a><i>, or</i> <a href="https://www.threads.net/@jasoncrawford"><i><u>Threads</u></i></a><i>.</i></p><h1>Contents</h1><ul><li><strong>Apply to the Roots of Progress Fellowship by June 1st (tomorrow!)</strong></li><li>Edge Esmeralda next week!</li><li>My writing (ICYMI)</li><li>Other people’s writing</li><li>Jobs</li><li>Grants & fellowships</li><li>Events</li><li>AI announcements</li><li>Introductions</li><li>Career moves</li><li>Nuclear news</li><li>Aviation news</li><li>Other announcements</li></ul><p>For paid subscribers:</p><ul><li>Stagnation was the goal</li><li>Is stagnation a measurement illusion?</li><li>Eroom’s Law</li><li>Cembalest on AI</li><li>More on AI</li><li>Bio</li><li>Podcast interviews</li><li>Links and short notes</li><li>Politics</li><li>Housing</li><li>Gratitude</li><li>Quotes</li><li>Charts</li><li>Aesthetics</li><li>Fun</li></ul><h1>Apply to the Roots of Progress Fellowship by June 1st (tomorrow!)</h1><p><a href="https://rootsofprogress.typeform.com/to/sKVlBex2"><u>Applications are still open</u></a> for the <a href="https://rootsofprogress.org/fellowship"><u>2025 Blog-Building Intensive</u></a>! 
Launch a blog and improve your progress-focused writing with expert guidance and an amazing community of progress builders, writers and intellectuals.</p><p>In addition to a general focus on progress studies, this year’s fellowship features two themes: (1) agriculture and (2) health, biotech & longevity. We welcome fellows writing on any progress-related topic, but for a handful of spots, we will give preference to applicants focusing on these themes, for which there will be dedicated programming.</p><p>But don’t take our word for it, see what others have to say:</p><ul><li><a href="https://x.com/NikoMcCarty/status/1918315508870361563"><u>@NikoMcCarty</u></a>: I can't recommend this Writers' Fellowship enough. It helped me find my community, challenge my own work, and improve very quickly. You should apply! And feel free to DM me directly if you have any questions about my experience in the program.</li><li><a href="https://x.com/gtmulligan/status/1917983981821477171"><u>@gtmulligan</u></a>: This program changed my life. Happy to talk with anyone about my experience. Apply, apply, apply! [See also Grant’s <a href="https://www.grantmulligan.com/p/reflections-on-the-roots-of-progress"><u>post on the fellowship</u></a> and his <a href="https://x.com/gtmulligan/status/1923399391718539745"><u>thread of favorite pieces from the fellows</u></a>]</li><li><a href="https://x.com/RosieCampbell/status/1918003838705123637"><u>@RosieCampbell</u></a>: This was so well-run and it's a fantastic community, I am very grateful I got to participate. Highly recommend applying if you're interested in writing on the internet!</li><li><a href="https://x.com/snewmanpv/status/1918009623191404873"><u>@snewmanpv</u></a>: I had the privilege of participating in this program last year. Highly recommend for anyone writing about progress-related topics. Come for the information-dense instructional sessions, stay for the commun</li></ul>... 
| It’s been way too long since the last links digest, which means I have way too much to catch up on. I had to cut many interesting bits to get this one out the door.
Much of this content originated on social media. To follow news and announcements in a more timely fashion, follow me on Twitter, Notes, Farcaster, Bluesky, or Threads.
Contents
* Apply to the Roots of Progress Fellowship by June 1st (tomorrow!)
* Edge Esmeralda next week!
* My writing (ICYMI)
* Other people’s writing
* Jobs
* Grants & fellowships
* Events
* AI announcements
* Introductions
* Career moves
* Nuclear news
* Aviation news
* Other announcements
For paid subscribers:
* Stagnation was the goal
* Is stagnation a measurement illusion?
* Eroom’s Law
* Cembalest on AI
* More on AI
* Bio
* Podcast interviews
* Links and short notes
* Politics
* Housing
* Gratitude
* Quotes
* Charts
* Aesthetics
* Fun
Apply to the Roots of Progress Fellowship by June 1st (tomorrow!)
Applications are still open for the 2025 Blog-Building Intensive! Launch a blog and improve your progress-focused writing with expert guidance and an amazing community of progress builders, writers and intellectuals.
In addition to a general focus on progress studies, this year’s fellowship features two themes: (1) agriculture and (2) health, biotech & longevity. We welcome fellows writing on any progress-related topic, but for a handful of spots, we will give preference to applicants focusing on these themes, for which there will be dedicated programming.
But don’t take our word for it, see what others have to say:
* @NikoMcCarty: I can't recommend this Writers' Fellowship enough. It helped me find my community, challenge my own work, and improve very quickly. You should apply! And feel free to DM me directly if you have any questions about my experience in the program.
* @gtmulligan: This program changed my life. Happy to talk with anyone about my experience. Apply, apply, apply! [See also Grant’s p | 2,111 | 1.1.0 | Revision | false | null | null | CrosspostOutput |
|
a4TyumeWrcuMaFc79 | house-party-dances | House Party Dances | null | false | false | false | null | TtEoCrFeowCGb6rFK | null | true | false | false | false | Post | null | 2025-05-31T15:20:02.879Z | null | false | false | 2 | 2 | null | false | false | post | [] | null | null | TCWedrpkL4vCzaA6q | 1 | 6 | 13 | false | 0.00771 | null | false | false | 2025-05-31T19:19:29.202Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | null | null | grecHJcgkb3KW5wnM | null | null | null | false | null | [] | null | 3 | 0 | 2025-05-31T15:20:02.879Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 2 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "fkABsGCJZ6y9qConW",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T06:06:46.947Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "MiuAZvbQcQ7ethgt3",
"displayName": "Viktor withaK"
},
{
"_id": "dRaCtsAWxk7sgirSY",
"displayName": "Jordan Morgan"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Practical",
"needsReview": false,
"noindex": false,
"postCount": 3411,
"score": 2,
"shortName": null,
"slug": "practical",
"suggestedAsFilter": true,
"userId": "oBSWiHjgproTiThmY",
"voteCount": 2,
"wikiOnly": false
}
] | ma5dgL5yFHRxKLZKv | 0 | 0 | null | false | null | null | 0 | 6 | 0 | 0 | 5 | 0 | TtEoCrFeowCGb6rFK | jkaufman | 2010-11-04T21:42:19.863Z | jkaufman | jefftk | null | null | Jeff Kaufman | 21,921 | 3 | false | false | <p>Co-lead (Near-Term Detection) at the Nucleic Acid Observatory in Boston. Speaking for myself unless I say otherwise.</p> | null | null | 1,018 | 2,211 | 0 | 0 | 1 | 1 | 2 | r38pkCm7wF4M44MDQ | User | null | null | null | [
"trustLevel1",
"canModeratePersonal",
"alignmentVoters"
] | null | null | a4TyumeWrcuMaFc79 | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/a4TyumeWrcuMaFc79/fzhm60w1biizegotzz1k | SocialPreviewType | TCWedrpkL4vCzaA6q | <p><span>
</span>
<a href="https://juliawise.net/">Julia</a> and I are in CA for
<a href="https://less.online/">LessOnline</a> this weekend, and we're
staying with friends. It happened that they were hosting a party, and
while this is not a group of dancing friends they asked if I'd be up
for leading some dancing. One of my hosts let me borrow their violin
(which I'll also have with me today at LessOnline if anyone would like
to jam some), and the space was small enough not to need any
amplification.
</p><p>
Playing and calling at the same time is only something I can do if I
play simple tunes I know very well while calling simple dances, and
both my playing and calling suffer from the multitasking. But my hosts
encouraged me not to worry about this and thought we'd have a good
time. I think they were right!
</p><p>
We ended up having about seven couples, mostly adults, but also
including about three kids 3y-5y who were very excited to dance. I
made sure to choose dances that would work for the whole group, and it
was definitely the right call to include the kids: their excitement
was highly infectious.
</p><p>
Our hosts had moved furniture earlier in the day, clearing an open
area about 15 ft square. This ended up being slightly tight for a
few figures, but was mostly pretty good.
</p><p>
<a href="https://www.jefftk.com/house-party-dancing-ghiblified-big.jpg"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/a4TyumeWrcuMaFc79/rzre0vqniadhalhsoslw" srcset="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/a4TyumeWrcuMaFc79/rzre0vqniadhalhsoslw 550w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/a4TyumeWrcuMaFc79/vvjdkchspwd3kdhctj3b 1100w"></a></p><div></div>
<p></p><p>
The crowd was enthusiastic about dancing, and while not everyone
danced we easily had enough people to make the dances work. We ended
up doing (I didn't write this down at the time; this is approximately
right):
</p><p>
</p>
<ul>
<li>Longways: <a href="https://www.jefftk.com/contras/dances/low-backed-car">The Low-backed Car</a>
</li>
<li>Longways: <a href="https://www.jefftk.com/contras/dances/athelone">Bridge of
Athelone</a>
</li>
</ul>
Break for cake
<ul>
<li>Waltz
</li>
<li>Longways: <a href="https://www.jefftk.com/contras/dances/jacobs-potato">Jacob's Potato</a>
</li>
<li>Scatter mixer: <a href="https://www.jefftk.com/contras/dances/sasha">Sasha</a>
</li>
</ul>
Another break, pretty long
<ul>
<li>Circle: <a href="https://www.jefftk.com/contras/dances/labastringue-family">La Bastringue</a>
</li>
<li>Waltz (Julia singing <a href="https://www.luicollins.net/waltzing-with-bears/">Waltzing With
Bears</a> while I fiddled chords)
</li>
<li>Longways: <a href="https://www.jefftk.com/contras/dances/galopede">Galopede</a>
</li>
</ul>
<p>
I think Sasha was the most popular, followed by the set dancing
(Longways and Circle), followed by waltzes. It makes sense that the
waltzes were less popular: Julia and I taught the basic folk waltz
step, but getting into the improvisational lead-follow dynamic and the
kinds of figures you might want to lead would have needed much more
time and have worse ROI. I'm not sure if I would include these if
doing this again, though it was nice to have a few slower-paced
dances.
</p><p>
I'm glad I was able to help make this happen!
</p><p><i>Comment via: <a href="https://www.facebook.com/jefftk/posts/pfbid0FBNsLYZLLWLbTfdfjqBtYrUG69UgfpyKMjf2YmtscL5NrdodZ76pAQb86hmo3KcHl">facebook</a>, <a href="https://mastodon.mit.edu/@jefftk/114603095018072201">mastodon</a>, <a href="https://bsky.app/profile/jefftk.com/post/3lqhzy4f4b222">bluesky</a>, <a href="https://jefftkaufman.substack.com/p/house-party-dances">substack</a></i></p> | Julia and I are in CA for LessOnline this weekend, and we're staying with friends. It happened that they were hosting a party, and while this is not a group of dancing friends they asked if I'd be up for leading some dancing. One of my hosts let me borrow their violin (which I'll also have with me today at LessOnline if anyone would like to jam some), and the space was small enough not to need any amplification.
Playing and calling at the same time is only something I can do if I play simple tunes I know very well while calling simple dances, and both my playing and calling suffer from the multitasking. But my hosts encouraged me not to worry about this and thought we'd have a good time. I think they were right!
We ended up having about seven couples, mostly adults, but also including about three kids 3y-5y who were very excited to dance. I made sure to choose dances that would work for the whole group, and it was definitely the right call to include the kids: their excitement was highly infectious.
Our hosts had moved furniture earlier in the day, clearing an open area about 15 ft square. This ended up being slightly tight for a few figures, but was mostly pretty good.
The crowd was enthusiastic about dancing, and while not everyone danced we easily had enough people to make the dances work. We ended up doing (I didn't write this down at the time; this is approximately right):
* Longways: The Low-backed Car
* Longways: Bridge of Athelone
Break for cake
* Waltz
* Longways: Jacob's Potato
* Scatter mixer: Sasha
Another break, pretty long
* Circle: La Bastringue
* Waltz (Julia singing Waltzing With Bears while I fiddled chords)
* Longways: Galopede
I think Sasha was the most popular, followed by the set dancing (Longways and Circle), followed by waltzes. It makes sense that the waltzes were less popular: Julia and I taught the basic folk waltz step, but getting into the improvisational lead-follow dynamic and the kinds of figures you might want | 404 | 1.0.1 | Revision | false | null | null | CrosspostOutput |
XMbbxH24CyKvvCdb9 | free-will-like-probability-is-about-local-knowledge | Free Will, Like Probability, is About Local Knowledge | null | false | false | false | null | zW3FrKhxauxbdvReX | null | true | false | false | false | Post | https://open.substack.com/pub/lifeinthelabyrinth/p/free-will-like-probability-is-about?r=1i6iuu&utm_medium=ios | 2025-05-31T14:19:26.160Z | null | false | false | 2 | 2 | 2025-05-31T19:55:11.805Z | false | false | linkpost | [] | null | null | q3fnuEWtvhfeWKQnm | 6 | 3 | 4 | false | 0.007843 | null | false | false | 2025-06-03T15:28:44.246Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | grecHJcgkb3KW5wnM | null | null | null | false | null | [] | null | -1 | 0 | 2025-05-31T14:06:21.351Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 19 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "5f5c37ee1b5cdee568cfb1b8",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 10,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-09-11T19:58:52.186Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "dvMTMFdcjgWBxi9jp",
"displayName": "wlxqt"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Free Will",
"needsReview": false,
"noindex": false,
"postCount": 66,
"score": 10,
"shortName": null,
"slug": "free-will",
"suggestedAsFilter": false,
"userId": "nmk3nLpQE89dMRzzN",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "3uE2pXvbcnS9nnZRE",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:50.898Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 27,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "zJtgSyKntXrnkArbY",
"displayName": "kistune"
},
{
"_id": "XqyQTGFGoKfZtdqN3",
"displayName": "kINo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Modeling",
"needsReview": false,
"noindex": false,
"postCount": 5855,
"score": 2,
"shortName": null,
"slug": "world-modeling",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 3 | 0 | 0 | 1 | 0 | zW3FrKhxauxbdvReX | rob-lucas | 2021-07-17T06:08:57.544Z | Rob Lucas | Rob Lucas | null | null | null | 57 | 0 | false | false | null | null | 4 | 27 | 0 | 0 | 0 | 1 | 0 | r38pkCm7wF4M44MDQ | User | null | null | null | [
"canModeratePersonal"
] | null | null | XMbbxH24CyKvvCdb9 | SocialPreviewType | q3fnuEWtvhfeWKQnm | <p>This post is also found on my new substack page. I think it's a view of free will that might be quite interesting to the LessWrong community, both for its clarity and originality, and for its coherence within the general rationalist framework, particularly a bayesian viewpoint.</p><p> </p><p><strong>The Time Traveller: A short narrative introduction</strong></p><p>I know you don’t know me, but thank you: you saved my wife’s life.</p><p>You see, I was trapped. After a while I gave up hope. I started to feel my situation was inescapable. I couldn’t change it. It was set in stone and no matter what I did events would somehow conspire to arrange themselves as they had to be. I was foiled. Time and again, no matter how ridiculous, how preposterous the unfolding of events required; somehow time would always repeat as I remembered it.</p><p>Three years from now and five years ago I got a phone call. I was working in my lab, on the time machine, when they called and told me my wife had died in a car accident. Her heart stopped on the way to the hospital.</p><p>I’d tried a thousand different ways to prevent it, but time always had a way of ensuring that the future I came from was the future that transpired.</p><p>It was looking at you, that day in the coffee shop, that I had my epiphany.</p><p>I was sitting across from you feeling sorry for myself. Trapped in my fate, I found myself wishing that I were in your shoes, for you <i>this</i> was the present, and the future lay unknown ahead of you, free to change at your slightest whim. Whereas for me, this was the past, and what lay ahead was more of it, more bygone years all set in stone and unchangeable.</p><p>If only I could put on your shoes.</p><p>And so in my mind’s eye, I did. 
I sat there looking at the carefree expression on your face, imagining myself not knowing what was to come; trying to change it.</p><p>I imagined myself sitting in your place, deciding about my future. I imagined how you would go home tonight to your family, if you had one. I imagined going home to mine.</p><p>But the thought died in my mind like a vine withering in the sun. That was it. I couldn’t. Because no matter what happened if I went to my wife now she would still be crushed by that car in three years. Even if I were you, even if the “past” became the future again, it still stood, solid and unbreakable. You were in the same position as... </p> | This post is also found on my new substack page. I think it's a view of free will that might be quite interesting to the LessWrong community, both for its clarity and originality, and for its coherence within the general rationalist framework, particularly a bayesian viewpoint.
The Time Traveller: A short narrative introduction
I know you don’t know me, but thank you: you saved my wife’s life.
You see, I was trapped. After a while I gave up hope. I started to feel my situation was inescapable. I couldn’t change it. It was set in stone and no matter what I did events would somehow conspire to arrange themselves as they had to be. I was foiled. Time and again, no matter how ridiculous, how preposterous the unfolding of events required; somehow time would always repeat as I remembered it.
Three years from now and five years ago I got a phone call. I was working in my lab, on the time machine, when they called and told me my wife had died in a car accident. Her heart stopped on the way to the hospital.
I’d tried a thousand different ways to prevent it, but time always had a way of ensuring that the future I came from was the future that transpired.
It was looking at you, that day in the coffee shop, that I had my epiphany.
I was sitting across from you feeling sorry for myself. Trapped in my fate, I found myself wishing that I were in your shoes, for you this was the present, and the future lay unknown ahead of you, free to change at your slightest whim. Whereas for me, this was the past, and what lay ahead was more of it, more bygone years all set in stone and unchangeable.
If only I could put on your shoes.
And so in my mind’s eye, I did. I sat there looking at the carefree expression on your face, imagining myself not knowing what was to come; trying to change it.
I imagined myself sitting in your place, deciding about my future. I imagined how you would go home tonight to your family, if you had one. I imagined going home to mine.
But the | 4,870 | 1.1.0 | Revision | false | null | null | CrosspostOutput |
||
jz876dnWJmNDByifc | the-unofficial-rationality-a-z-anki-deck | The (Unofficial) Rationality: A-Z Anki Deck | null | false | false | false | null | uiW2nHR237uv6AMbG | null | true | false | false | false | Post | null | 2025-05-31T07:01:45.156Z | null | false | false | 2 | 2 | null | false | false | post | [] | null | null | RdJpTPYwhcEcSNbxD | 8 | 14 | 30 | false | 0.016978 | null | false | false | 2025-06-03T06:22:01.482Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | grecHJcgkb3KW5wnM | null | null | null | false | null | [] | null | 3 | 0 | 2025-05-31T06:19:52.155Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 1 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "H2q58pKG6xFrv8bPz",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-04-30T15:55:27.342Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Spaced Repetition",
"needsReview": false,
"noindex": false,
"postCount": 75,
"score": 9,
"shortName": null,
"slug": "spaced-repetition",
"suggestedAsFilter": false,
"userId": "bHsi8ZuD7tX69uRfG",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "7ow6EFpypbH4hzFuz",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-08-04T03:37:34.939Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Community Outreach",
"needsReview": false,
"noindex": false,
"postCount": 59,
"score": 0,
"shortName": null,
"slug": "community-outreach",
"suggestedAsFilter": false,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "fF9GEdWXKJ3z73TmB",
"adminOnly": false,
"afBaseScore": 9,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 22,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-06-09T16:57:01.474Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "t46uLRSbDziEcKmev",
"displayName": "Kriz Tahimic"
},
{
"_id": "dRaCtsAWxk7sgirSY",
"displayName": "Jordan Morgan"
},
{
"_id": "xF5nfdddHjFThHy49",
"displayName": "[email protected]"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Scholarship & Learning",
"needsReview": false,
"noindex": false,
"postCount": 361,
"score": 22,
"shortName": null,
"slug": "scholarship-and-learning",
"suggestedAsFilter": false,
"userId": "qgdGA4ZEyW7zNdK84",
"voteCount": 5,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "Ng8Gice9KNkncxqcj",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 1,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:17.072Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 100,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "iMqytjy9ns89Fzfyv",
"displayName": "miakko"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Rationality",
"needsReview": false,
"noindex": false,
"postCount": 4302,
"score": 1,
"shortName": null,
"slug": "rationality",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 1,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 14 | 0 | 0 | 2 | 0 | uiW2nHR237uv6AMbG | japancolorado | 2024-07-10T03:05:49.575Z | russell-white | japancolorado | null | null | JapanColorado | 53 | 0 | false | false | null | null | 2 | 12 | 0 | 0 | 0 | 1 | 0 | grecHJcgkb3KW5wnM | User | null | null | null | [
"canModeratePersonal"
] | null | null | jz876dnWJmNDByifc | SocialPreviewType | RdJpTPYwhcEcSNbxD | <p>I'm a huge fan of Anki. I love being able to remember what I read for however long I want to. However, most Anki decks I've found about rationality are nonexistent, unhelpful, out of date, or poorly formatted.</p><p>Recently, I found <a href="https://www.lesswrong.com/posts/hGhBhLsgNWLCJ3g9b/creating-flashcards-with-llms">this post</a> from 2023 that used GPT-4 to make a <i>Rationality: From AI to Zombies</i> deck, but I found the cards so long, obtuse, and unusable that it made me want to do better.</p><p>After dozens of hours <a href="https://github.com/JapanColorado/articles-to-anki">making a custom Python package</a> to generate cards with GPT-4o-mini, followed by having Claude Sonnet 4 revise, reduce, and reconsolidate the cards, I am proud to present:</p><h2>The (Unofficial) <i>Rationality: From AI to Zombies</i> Deck, available from <a href="https://ankiweb.net/shared/info/428730627">AnkiWeb</a> or <a href="https://drive.google.com/file/d/133oUf8wvonQEVlnwHy3olSrE3DU6glQr/view?usp=sharing">Google Drive</a>.</h2><hr><p>I definitely spent way longer than I needed to in making this, but I'm really happy with the result. I think it's a definite improvement over existing shared decks, and I hope it can be of benefit to others too.</p><p>I also enjoyed making my <a href="https://github.com/JapanColorado/articles-to-anki">Articles to Anki python package</a>, which automatically generates and exports Anki cards from URLs or local files. 
There are definitely still some improvements to be made<span class="footnote-reference" data-footnote-reference="" data-footnote-index="1" data-footnote-id="tvv6ubyx1q" role="doc-noteref" id="fnreftvv6ubyx1q"><sup><a href="#fntvv6ubyx1q">[1]</a></sup></span>, but it is functional, useful, and developing it did a lot to bring me up to speed with coding with LLM assistance.</p><p>If this is well received or I find it personally useful, I'll probably make some more decks for rationalist content. Let me know if you have any requests or feedback!</p><ol class="footnote-section footnotes" data-footnote-section="" role="doc-endnotes"><li class="footnote-item" data-footnote-item="" data-footnote-index="1" data-footnote-id="tvv6ubyx1q" role="doc-endnote" id="fntvv6ubyx1q"><span class="footnote-back-link" data-footnote-back-link="" data-footnote-id="tvv6ubyx1q"><sup><strong><a href="#fnreftvv6ubyx1q">^</a></strong></sup></span><div class="footnote-content" data-footnote-content=""><p>Help is very much welcome. It was mostly Sonnet 4 and I flying by the seat of our pants.</p></div></li></ol> | I'm a huge fan of Anki. I love being able to remember what I read for however long I want to. However, most Anki decks I've found about rationality are nonexistent, unhelpful, out of date, or poorly formatted.
Recently, I found this post from 2023 that used GPT-4 to make a Rationality: From AI to Zombies deck, but I found the cards so long, obtuse, and unusable that it made me want to do better.
After dozens of hours making a custom Python package to generate cards with GPT-4o-mini, followed by having Claude Sonnet 4 revise, reduce, and reconsolidate the cards, I am proud to present:
The (Unofficial) Rationality: From AI to Zombies Deck, available from AnkiWeb or Google Drive.
----------------------------------------
I definitely spent way longer than I needed to in making this, but I'm really happy with the result. I think it's a definite improvement over existing shared decks, and I hope it can be of benefit to others too.
I also enjoyed making my Articles to Anki python package, which automatically generates and exports Anki cards from URLs or local files. There are definitely still some improvements to be made[1], but it is functional, useful, and developing it did a lot to bring me up to speed with coding with LLM assistance.
If this is well received or I find it personally useful, I'll probably make some more decks for rationalist content. Let me know if you have any requests or feedback!
1. ^
Help is very much welcome. It was mostly Sonnet 4 and I flying by the seat of our pants. | 251 | 1.1.0 | Revision | false | null | null | CrosspostOutput |
||
LtsgfGsXpiLTSGpaW | zochi-publishes-a-paper | Zochi Publishes A* Paper | null | false | false | false | null | eCk5iNu68fJeuwB4e | null | true | false | false | false | Post | https://www.intology.ai/blog/zochi-acl | 2025-05-31T00:00:27.328Z | null | false | false | 2 | 2 | null | false | false | linkpost | [] | null | null | bLNLiaY4dYFy5gn3A | 0 | 7 | 11 | false | 0.006306 | null | false | false | 2025-05-31T00:00:27.328Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 2 | 0 | 2025-05-30T23:58:34.358Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 5 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 7 | 0 | 0 | 2 | 0 | eCk5iNu68fJeuwB4e | mannatvjain | 2024-11-04T04:07:50.278Z | aproteinengine | mannatvjain | null | null | null | 72 | 0 | false | false | null | null | 4 | 3 | 0 | 0 | 0 | 1 | 0 | 55XxDBpfKkkBPm9H8 | User | null | null | null | [
"canModeratePersonal"
] | null | null | LtsgfGsXpiLTSGpaW | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/LtsgfGsXpiLTSGpaW/zytq2pp002kbxezpmqrp | SocialPreviewType | bLNLiaY4dYFy5gn3A | <h3>Zochi Achieves Main Conference Acceptance at ACL 2025</h3><p>Today, we’re excited to announce a groundbreaking milestone: Zochi, Intology’s Artificial Scientist, has become the first AI system to independently <strong>pass peer review at an A* scientific conference</strong>¹—the highest bar for scientific work in the field.</p><p>Zochi’s paper has been accepted into the <strong>main proceedings of ACL</strong>—the world’s #1 scientific venue for natural language processing (NLP), and among the top 40 of all scientific venues globally.²</p><p>While recent months have seen several groups, including our own, demonstrate <a href="https://techcrunch.com/2025/03/19/academics-accuse-ai-startups-of-co-opting-peer-review-for-publicity/">AI-generated contributions at workshop</a> venues, having a paper accepted to the main proceedings of a top-tier scientific conference represents clearing a significantly higher bar. While workshops³, at the level submitted to ICLR 2025, have acceptance rates of ~60-70%, main conference proceedings at conferences such as ACL (NeurIPS, ICML, ICLR, CVPR, etc…) have <strong>acceptance rates of ~20%</strong>. ACL is often the most selective of these conferences</p><p>This achievement marks a <strong>watershed moment</strong> in the evolution of innovation. For the first time, an artificial system has independently produced a scientific discovery and published it at the level of the field’s top researchers—making Zochi <strong>the first PhD-level agent</strong>. The peer review process for the main conference proceedings of such venues is designed to be highly selective, with stringent standards for novelty, technical depth, and experimental rigor. 
To put this achievement in perspective, most PhD students in computer science spend <strong>several years</strong> before publishing at a venue of this stature. AI has crossed a threshold of scientific creativity that allows for contributions alongside these researchers at the highest level of inquiry.</p><figure class="image"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/LtsgfGsXpiLTSGpaW/qmyfo9bdugdzyd2szzg5" alt="" srcset="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/LtsgfGsXpiLTSGpaW/oguel4dn403629eikixo 512w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/LtsgfGsXpiLTSGpaW/t1fvmkosndsulqo2dzdz 1024w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/LtsgfGsXpiLTSGpaW/veenj1svwjbjoofjnrgd 2048w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/LtsgfGsXpiLTSGpaW/qmyfo9bdugdzyd2szzg5 3056w"></figure><h3>Autonomously Conducting the Scientific Method</h3><p>Zochi is an AI research agent capable of autonomously completing the entire scientific process—from literature analysis to peer-reviewed publication. The system operates through a multi-stage pipeline designed to emulate the scientific method. Zochi begins by ingesting and analyzing thousands of research papers to identify promising directions within a given domain. Its retrieval system identifies key contributions, methodologies, limitations, and emerging patterns across the literature. What distinguishes Zochi is its <strong>ability to identify non-obvious connections across papers and propose innovativ</strong>... </p> | Zochi Achieves Main Conference Acceptance at ACL 2025
Today, we’re excited to announce a groundbreaking milestone: Zochi, Intology’s Artificial Scientist, has become the first AI system to independently pass peer review at an A* scientific conference¹—the highest bar for scientific work in the field.
Zochi’s paper has been accepted into the main proceedings of ACL—the world’s #1 scientific venue for natural language processing (NLP), and among the top 40 of all scientific venues globally.²
While recent months have seen several groups, including our own, demonstrate AI-generated contributions at workshop venues, having a paper accepted to the main proceedings of a top-tier scientific conference represents clearing a significantly higher bar. While workshops³, at the level submitted to ICLR 2025, have acceptance rates of ~60-70%, main conference proceedings at conferences such as ACL (NeurIPS, ICML, ICLR, CVPR, etc…) have acceptance rates of ~20%. ACL is often the most selective of these conferences.
This achievement marks a watershed moment in the evolution of innovation. For the first time, an artificial system has independently produced a scientific discovery and published it at the level of the field’s top researchers—making Zochi the first PhD-level agent. The peer review process for the main conference proceedings of such venues is designed to be highly selective, with stringent standards for novelty, technical depth, and experimental rigor. To put this achievement in perspective, most PhD students in computer science spend several years before publishing at a venue of this stature. AI has crossed a threshold of scientific creativity that allows for contributions alongside these researchers at the highest level of inquiry.
Autonomously Conducting the Scientific Method
Zochi is an AI research agent capable of autonomously completing the entire scientific process—from literature analysis to peer-reviewed publication. The system operates through a multi-stage p | 1,304 | 1.1.1 | Revision | false | null | null | CrosspostOutput |
|
igsvMRDP3JHCwf4gG | memory-decoding-journal-club-structure-and-function-of-the-1 | Memory Decoding Journal Club: Structure and function of the hippocampal CA3 module | null | false | false | false | null | Z7pbtaLLmZuhjaHa3 | null | true | false | false | false | Post | null | 2025-05-30T23:59:19.182Z | null | false | false | 2 | 2 | null | false | false | post | [] | null | null | Tokj7qk3oyWbKJtZB | 0 | 1 | 1 | false | 0.000877 | null | false | false | 2025-05-30T23:59:19.182Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 0 | 0 | 2025-05-30T23:57:27.632Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 1 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "3uE2pXvbcnS9nnZRE",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:50.898Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 27,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "zJtgSyKntXrnkArbY",
"displayName": "kistune"
},
{
"_id": "XqyQTGFGoKfZtdqN3",
"displayName": "kINo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Modeling",
"needsReview": false,
"noindex": false,
"postCount": 5855,
"score": 2,
"shortName": null,
"slug": "world-modeling",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 1 | 0 | 0 | 0 | 0 | Z7pbtaLLmZuhjaHa3 | devin-ward | 2025-01-30T00:31:45.267Z | Carboncopies Foundation | Devin Ward | null | null | Devin Ward | 4 | 0 | false | false | <p>Carboncopies Foundation volunteer</p><p>https://carboncopies.org/</p> | null | null | 14 | 0 | 0 | 0 | 0 | 0.9 | 0 | 55XxDBpfKkkBPm9H8 | User | null | null | null | null | null | null | igsvMRDP3JHCwf4gG | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/igsvMRDP3JHCwf4gG/qqsm897twejeohiorezc | SocialPreviewType | Tokj7qk3oyWbKJtZB | <figure class="image"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/igsvMRDP3JHCwf4gG/jsjyaywuj01rubbnzmpc" srcset="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/igsvMRDP3JHCwf4gG/n8vaxexnqvgwjqpya6lj 120w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/igsvMRDP3JHCwf4gG/uh7ejcxfyggnrpknibym 240w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/igsvMRDP3JHCwf4gG/zoz8y9ldskxrm712cgrd 360w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/igsvMRDP3JHCwf4gG/tmedydz4rnb5xrtxbpyw 480w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/igsvMRDP3JHCwf4gG/nxxirfg1iay1idtsvq1t 600w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/igsvMRDP3JHCwf4gG/ukqhlw72qqzcgxdozpzu 720w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/igsvMRDP3JHCwf4gG/rlvp2wwv81cefiyjlujn 840w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/igsvMRDP3JHCwf4gG/zgdtvwlcaiiywijh50xr 960w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/igsvMRDP3JHCwf4gG/ldhdjn5nlmpuzro0hedn 1080w, 
https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/igsvMRDP3JHCwf4gG/orvwdmhwpy2izgzuhzhs 1200w"></figure><h3><strong>Join Us for the Memory Decoding Journal Club! </strong></h3><p><i>A collaboration of the <strong>Carboncopies Foundation</strong> and <strong>BPF Aspirational Neuroscience</strong></i></p><p>This time, we’re diving into a groundbreaking paper:<br><strong>"Structure and function of the hippocampal CA3 module"</strong></p><p><strong>Authors:</strong> <a href="https://www.pnas.org/doi/10.1073/pnas.2312281120#con1">Rosanna P. Sammons</a>, <a href="https://www.pnas.org/doi/10.1073/pnas.2312281120#con2">Mourat Vezir</a>, <a href="https://www.pnas.org/doi/10.1073/pnas.2312281120#con3">Laura Moreno-Velasquez</a>, <a href="https://www.pnas.org/doi/10.1073/pnas.2312281120#con4">Gaspar Can</a>o, <a href="https://www.pnas.org/doi/10.1073/pnas.2312281120#con5">Marta Orlando</a>, <a href="https://www.pnas.org/doi/10.1073/pnas.2312281120#con6">Meike Sievers</a>, <a href="https://www.pnas.org/doi/10.1073/pnas.2312281120#con7">Eleonora Grasso</a>, <a href="https://www.pnas.org/doi/10.1073/pnas.2312281120#con8">Verjinia D. Metodieva</a>, <a href="https://www.pnas.org/doi/10.1073/pnas.2312281120#con9">Richard Kempter</a>, <a href="https://www.pnas.org/doi/10.1073/pnas.2312281120#con10">Helene Schmidt</a>, and <a href="https://www.pnas.org/doi/10.1073/pnas.2312281120#con11">Dietmar Schmitz</a></p><p> <strong>Institutions: </strong>Neuroscience Research Center, Charité-Universitätsmedizin, Ernst Strüngmann Institute for Neuroscience, Institute for Theoretical Biology, Department of Biology, Humboldt-Universität, Department of Connectomics, Max Planck Institute for Brain Research, Bernstein Center for Computational Neuroscience, Einstein Center for Neurosciences, German Center for Neurodegenerative Diseases, Max-Delbrück Center for Molecular Medicine in the Helmholtz Association</p><p>Presented by: Dr. 
Kenneth Hayworth </p><p><strong>When?</strong> <strong>June 3rd, 2025</strong> – 3:00 PM PDT | 6:00 PM EDT | 10:00 PM UTC</p><p><strong>Where? Video conference: </strong><a href="https://carboncopies.org/aspirational-neuroscience"><strong><u>https://carboncopies.org/aspirational-neuroscience</u></strong></a></p><p>Register for updates:<a href="https://aspirationalneuroscience.org/register-with-us/"> <u>https://aspirationalneuroscience.org/register-with-us/</u></a></p><p>Once registered, you'll receive event invites & updates!</p><p><strong>#Neuroscience #MemoryResearch #Amygdala #JournalClub #BrainScience #Carboncopies #AspirationalNeuroscience</strong></p> | Join Us for the Memory Decoding Journal Club!
A collaboration of the Carboncopies Foundation and BPF Aspirational Neuroscience
This time, we’re diving into a groundbreaking paper:
"Structure and function of the hippocampal CA3 module"
Authors: Rosanna P. Sammons, Mourat Vezir, Laura Moreno-Velasquez, Gaspar Cano, Marta Orlando, Meike Sievers, Eleonora Grasso, Verjinia D. Metodieva, Richard Kempter, Helene Schmidt, and Dietmar Schmitz
Institutions: Neuroscience Research Center, Charité-Universitätsmedizin, Ernst Strüngmann Institute for Neuroscience, Institute for Theoretical Biology, Department of Biology, Humboldt-Universität, Department of Connectomics, Max Planck Institute for Brain Research, Bernstein Center for Computational Neuroscience, Einstein Center for Neurosciences, German Center for Neurodegenerative Diseases, Max-Delbrück Center for Molecular Medicine in the Helmholtz Association
Presented by: Dr. Kenneth Hayworth
When? June 3rd, 2025 – 3:00 PM PDT | 6:00 PM EDT | 10:00 PM UTC
Where? Video conference: https://carboncopies.org/aspirational-neuroscience
Register for updates: https://aspirationalneuroscience.org/register-with-us/
Once registered, you'll receive event invites & updates!
#Neuroscience #MemoryResearch #Amygdala #JournalClub #BrainScience #Carboncopies #AspirationalNeuroscience | 157 | 1.1.1 | Revision | false | null | null | CrosspostOutput |
sPceWmHudmkSAA9Ba | diabetes-is-caused-by-oxidative-stress | Diabetes is Caused by Oxidative Stress | null | false | false | false | null | x47vGbW7zgEFqAfEB | null | true | false | false | false | Post | null | 2025-05-30T21:03:37.989Z | null | false | false | 2 | 2 | 2025-05-31T00:32:25.183Z | false | false | post | [] | null | null | XchThgM73i8WKCB7a | 11 | 8 | 11 | false | 0.011428 | null | false | false | 2025-06-14T00:05:16.388Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 2 | 0 | 2025-05-30T21:03:37.989Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 9 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "xHjy88N2uJvGdgzfw",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 11,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-10T11:55:55.351Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "xF5nfdddHjFThHy49",
"displayName": "[email protected]"
},
{
"_id": "go3WWAbwJMPGrGZbH",
"displayName": "Carl Leninger"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Health / Medicine / Disease",
"needsReview": false,
"noindex": false,
"postCount": 341,
"score": 11,
"shortName": null,
"slug": "health-medicine-disease",
"suggestedAsFilter": false,
"userId": "qxJ28GN72aiJu96iF",
"voteCount": 3,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "3uE2pXvbcnS9nnZRE",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:50.898Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 27,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "zJtgSyKntXrnkArbY",
"displayName": "kistune"
},
{
"_id": "XqyQTGFGoKfZtdqN3",
"displayName": "kINo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Modeling",
"needsReview": false,
"noindex": false,
"postCount": 5855,
"score": 2,
"shortName": null,
"slug": "world-modeling",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 8 | 0 | 0 | 2 | 0 | x47vGbW7zgEFqAfEB | lorec | 2020-10-13T06:37:47.502Z | Lorec | Lorec | null | null | null | 220 | 0 | false | false | <p>My government name is Mack Gallagher. Crocker's Rules. I am an "underfunded" "alignment" "researcher". DM me if you'd like to fund my posts, or <a href="https://www.lesswrong.com/posts/ME7sLiwhEB6awRqJR/project-adequate-seeking-cofounders-funders">my project</a>.</p>
<p>I post some of my less-varnished opinions on <a href="https://mackgallagher.substack.com/">my Substack</a>, and <a href="https://kaventekeit.github.io/">my personal blog</a>.</p>
<p>If you like arguing with me on LessWrong, at present I'm basically free round the clock to continue interesting arguments <a href="https://discord.gg/BVmCCjD4eh">in my Discord</a>.</p>
| null | null | 24 | 159 | 0 | 0 | 0 | 1 | 1 | grecHJcgkb3KW5wnM | User | null | null | null | [
"canModeratePersonal"
] | null | null | sPceWmHudmkSAA9Ba | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/sPceWmHudmkSAA9Ba/s7rmton08br1iplmxcud | SocialPreviewType | XchThgM73i8WKCB7a | <h3>I.</h3><p>It's a Known Thing in the keto-sphere [ I get the sense that <a href="https://www.reddit.com/r/saturatedfat">r/saturatedfat</a> is an example of this culture ] that people with metabolic syndrome -- i.e., some level of insulin resistance -- can handle fat <i>or</i> carbs [ e.g. either a ketogenic diet or <a href="https://www.exfatloss.com/p/the-mysterious-case-of-outlier-17">something</a> like the potato diet ] but can't handle both at the same time without suffering two symptoms:</p><p>[ 1 ] gaining weight</p><p>and</p><p>[ 2 ] suffering fatigue.</p><p>This is sometimes called "<a href="https://theheartattackdiet.substack.com/p/polyunsaturated-fats-block-glycolysis">The Swamp</a>".*</p><p>This is a very peculiar way for human metabolism to work. What's more, it only works this way for <i>some</i> people -- centrally, people who have acquired some level of insulin resistance [ "metabolic syndrome" ].</p><p>The insulin-resistant population leans not-young and slightly male, and its incidence is only appreciable in locations that have adopted a "Western diet".</p><p>The clearest articulation I've ever read of the Taubesian model of insulin resistance is actually a <a href="https://glowfic.com/replies/1857557#reply-1857557">glowfic reply by Swimmer963</a>:</p><blockquote><p>Eat <i>one</i> sugary meal, and a healthy pancreas will groan but put out a flood of insulin; eat a high-sugar diet above the maintenance calorie needs every day for a decade, and the cells will gradually respond with less vigor, even as the overtaxed pancreas starts to fall behind, and fat deposits (fat is far from inert -- it's an endocrine organ of its own, in a way) secretes its own hormones, and baseline blood sugar creeps up and up -- and, again, inflames the lining of blood 
vessels, gunks up and eventually cuts off circulation to extremities, gradually damages peripheral nerves, and makes a <i>very</i> tempting meal for any bacterial infection that starts to sneak in, ignored by an immune system unable to reach it through those sticky damaged capillaries.</p></blockquote><p>This model has problems.</p><p>The most obvious is, "Why weren't the rates of metabolic syndrome high in poor agricultural communities that had to eat almost <i>entirely</i> carbs, then?" The American Heart Association's decision to blame the increasing quantity of <i>fat</i> in the 20th-century American diet for the increased incidence of "diseases of civilization" was, in a sense, perfectly natural, given that . . . that was the change in the American diet that had recently occurred! It's called the "nutrition transition": poor agricultural nations' consumption of carbohydrates <i>decreases</i> significantly when they come into some wealth! Yet their rates of metabolic syndrome generally go up [ if few are quite as high as the Ameri... </p> | I.
It's a Known Thing in the keto-sphere [ I get the sense that r/saturatedfat is an example of this culture ] that people with metabolic syndrome -- i.e., some level of insulin resistance -- can handle fat or carbs [ e.g. either a ketogenic diet or something like the potato diet ] but can't handle both at the same time without suffering two symptoms:
[ 1 ] gaining weight
and
[ 2 ] suffering fatigue.
This is sometimes called "The Swamp".*
This is a very peculiar way for human metabolism to work. What's more, it only works this way for some people -- centrally, people who have acquired some level of insulin resistance [ "metabolic syndrome" ].
The insulin-resistant population leans not-young and slightly male, and its incidence is only appreciable in locations that have adopted a "Western diet".
The clearest articulation I've ever read of the Taubesian model of insulin resistance is actually a glowfic reply by Swimmer963:
> Eat one sugary meal, and a healthy pancreas will groan but put out a flood of insulin; eat a high-sugar diet above the maintenance calorie needs every day for a decade, and the cells will gradually respond with less vigor, even as the overtaxed pancreas starts to fall behind, and fat deposits (fat is far from inert -- it's an endocrine organ of its own, in a way) secretes its own hormones, and baseline blood sugar creeps up and up -- and, again, inflames the lining of blood vessels, gunks up and eventually cuts off circulation to extremities, gradually damages peripheral nerves, and makes a very tempting meal for any bacterial infection that starts to sneak in, ignored by an immune system unable to reach it through those sticky damaged capillaries.
This model has problems.
The most obvious is, "Why weren't the rates of metabolic syndrome high in poor agricultural communities that had to eat almost entirely carbs, then?" The American Heart Association's decision to blame the increasing quantity of fat in the 20th-century American diet for | 2,310 | 1.22.1 | Revision | false | null | null | CrosspostOutput |
QHcQAbNHoaNoE5YBM | too-many-metaphors-a-case-for-plain-talk-in-ai-safety | Too Many Metaphors: A Case for Plain Talk in AI Safety | null | false | false | false | null | G2Tmh9PnX2nzKqatB | null | true | false | false | false | Post | null | 2025-05-30T19:29:04.198Z | null | false | false | 2 | 2 | null | false | false | post | [] | null | null | Yav4nLtCjX7tjvWY3 | 8 | 3 | 0 | false | 0 | null | false | false | 2025-06-02T12:59:11.106Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | -1 | 0 | 2025-05-30T19:29:04.198Z | false | false | true | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 3 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "ZFrgTgzwEfStg26JL",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-16T10:29:25.410Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI Risk",
"needsReview": false,
"noindex": false,
"postCount": 1482,
"score": 0,
"shortName": null,
"slug": "ai-risk",
"suggestedAsFilter": false,
"userId": "EQNTWXLKMeWMp2FQS",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "Wp8FXnKFSXqEtEMqF",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 11,
"canEditUserIds": null,
"core": false,
"createdAt": "2023-03-11T16:30:02.003Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "hhJ3biduQrJxTkSzv",
"displayName": "azergante"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI Risk Skepticism",
"needsReview": false,
"noindex": false,
"postCount": 36,
"score": 11,
"shortName": null,
"slug": "ai-risk-skepticism",
"suggestedAsFilter": false,
"userId": "xjac7rhMMk9Dak3a2",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 3 | 0 | 0 | 1 | 0 | G2Tmh9PnX2nzKqatB | david-harket | 2025-04-10T09:33:26.681Z | david-harket | David Harket | null | null | null | 2 | 0 | false | false | null | null | 2 | 4 | 0 | 0 | 0 | 0.9 | 0 | grecHJcgkb3KW5wnM | User | null | null | null | null | null | null | QHcQAbNHoaNoE5YBM | SocialPreviewType | Yav4nLtCjX7tjvWY3 | <p><i>I changed the title of the post from "Yudkowsky does not do alignment justice" to "</i><a href="https://www.lesswrong.com/posts/QHcQAbNHoaNoE5YBM/too-many-metaphors-a-case-for-plain-talk-in-ai-safety"><i>Too Many Metaphors: A Case for Plain Talk in AI Safety</i></a><i>". In hindsight, the title might have been a bit too sensationalist and not clear. Yudkowsky has done more for the field than most. The purpose of this post is not to disregard any of his words, but rather to share observations and ideas about how to potentially communicate the field's relevance more effectively.</i></p><p>For the past few months, I have been immersing myself in the AI safety debate here on LessWrong as a part of a research project. I was somewhat familiar with the topic of alignment, and had previously read Stuart Russell and listened to a few podcasts with Paul Christiano and Eliezer Yudkowsky. Starting off, I read AGI Ruin: A list of lethalities, where I was prompted to familiarise myself with the Orthogonality Thesis and Instrumental Convergence. 
Upon reading Yudkowsky's posts on these two key topics, they made immediate sense, and the risk associated with a lack of alignment in sufficiently intelligent systems seemed apparent to follow from this.</p><p>At a high level, the issue can be framed simply: Current utility functions do not optimise toward values in which humans are treated as morally valuable.</p><p>From this point on, assuming exponential growth in AI systems, there are a multitude of scenarios one can imagine in which a system optimises towards a goal in which humanity is a casualty as a result of the optimisation process (e.g., build a whole brain emulator). I see this risk, and I see other current risks such as those related to open weights models with alignment measures fine-tuned away being accessible to bad actors. However, I believe Yudkowsky’s current communication approach may be counterproductive when talking about alignment, especially in formats where people outside the field are being introduced to alignment.</p><p>For instance, I just listened to a debate between Stephen Wolfram and Yudkowsky in which the majority of the discussion circled around defining positions. I must say that in these four hours, Yudkowsky was able to get some major points across; however, most of the content was obscured by metaphors. Metaphors can be valuable, but when talking about AI safety, I think the most effective way of explaining the risk is to stick to the basics. In this specific podcast, there were a lot of anthropomorphic ... </p> | I changed the title of the post from "Yudkowsky does not do alignment justice" to "Too Many Metaphors: A Case for Plain Talk in AI Safety". In hindsight, the title might have been a bit too sensationalist and not clear. Yudkowsky has done more for the field than most. The purpose of this post is not to disregard any of his words, but rather to share observations and ideas about how to potentially communicate the field's relevance more effectively.
For the past few months, I have been immersing myself in the AI safety debate here on LessWrong as a part of a research project. I was somewhat familiar with the topic of alignment, and had previously read Stuart Russell and listened to a few podcasts with Paul Christiano and Eliezer Yudkowsky. Starting off, I read AGI Ruin: A list of lethalities, where I was prompted to familiarise myself with the Orthogonality Thesis and Instrumental Convergence. Upon reading Yudkowsky's posts on these two key topics, they made immediate sense, and the risk associated with a lack of alignment in sufficiently intelligent systems seemed to follow naturally from them.
At a high level, the issue can be framed simply: Current utility functions do not optimise toward values in which humans are treated as morally valuable.
From this point on, assuming exponential growth in AI systems, there are a multitude of scenarios one can imagine in which a system optimises towards a goal in which humanity is a casualty as a result of the optimisation process (e.g., build a whole brain emulator). I see this risk, and I see other current risks such as those related to open weights models with alignment measures fine-tuned away being accessible to bad actors. However, I believe Yudkowsky’s current communication approach may be counterproductive when talking about alignment, especially in formats where people outside the field are being introduced to alignment.
For instance, I just listened to a debate between Stephen Wolfram and Yudkowsky in which the maj | 658 | 1.4.0 | Revision | false | null | null | CrosspostOutput |
|||
aEpo4dht3rsgMdNAu | compassionate-genomics-for-the-21st-century-genomic-welfare | Compassionate Genomics for the 21st Century
-
Genomic Welfare with Ethical Care for Moral Predispositions | null | false | false | false | null | 9SSKfogTZuJoCehC4 | null | true | false | false | false | Post | null | 2025-05-30T19:28:26.915Z | null | false | false | 2 | 2 | 2025-05-31T19:50:16.153Z | false | false | post | [] | null | null | YxfJM6W2qg9q3zuQQ | 0 | 1 | 1 | false | 0.006043 | null | false | false | 2025-05-30T19:28:26.915Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | grecHJcgkb3KW5wnM | null | null | null | false | null | [] | null | 0 | 0 | 2025-05-25T17:39:17.265Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 25 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "e9wHzopbGCAFwp9Rw",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 1,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-09T08:09:32.094Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "B2sXjQTGgwoGE9FES",
"displayName": "SeaIgloo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Human Genetics",
"needsReview": false,
"noindex": false,
"postCount": 64,
"score": 1,
"shortName": null,
"slug": "human-genetics",
"suggestedAsFilter": false,
"userId": "mPipmBTniuABY5PQy",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "3uE2pXvbcnS9nnZRE",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:50.898Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 27,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "zJtgSyKntXrnkArbY",
"displayName": "kistune"
},
{
"_id": "XqyQTGFGoKfZtdqN3",
"displayName": "kINo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Modeling",
"needsReview": false,
"noindex": false,
"postCount": 5855,
"score": 2,
"shortName": null,
"slug": "world-modeling",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 1 | 0 | 0 | 0 | 0 | 9SSKfogTZuJoCehC4 | ruth-seleo | 2025-05-25T17:38:37.634Z | Ruth Seleo | Ruth Seleo | null | null | null | 0 | 0 | false | false | null | null | 1 | 0 | 0 | 0 | 0 | 0.9 | 0 | qgdGA4ZEyW7zNdK84 | User | null | null | null | null | null | null | aEpo4dht3rsgMdNAu | SocialPreviewType | YxfJM6W2qg9q3zuQQ | <h1 data-internal-id="Preliminary_Notes">Preliminary Notes</h1><p data-internal-id="ftnt_ref1">While the core ideas explored in this proposal are not original, the biomedical research remains underdeveloped—discouraged by restrictive regulatory frameworks—and the topic continues to be controversial and underexplored in the public and effective altruism discourse. This work builds substantially on the theoretical frameworks developed by David Pearce—particularly Genome Reform and The Biohappiness Revolution<span class="footnote-reference" data-footnote-reference="" data-footnote-index="1" data-footnote-id="mwb5j5te4yn" role="doc-noteref" id="fnrefmwb5j5te4yn"><sup><a href="#fnmwb5j5te4yn">[1]</a></sup></span>.</p><h1 data-internal-id="1__Summary">1. Summary</h1><p>In this proposal we explore the potential of Compassionate Genomics for public health and wellbeing. The aim is to advance ethical research and implementation of heritable genome editing and establish Genomic Welfare as a voluntary reproductive and public health service to ensure wellbeing—both for individuals and future generations. This strategy to address causes of suffering at its biological roots is grounded in compassion, scientific integrity, and universal accessibility, and requires explicit attention to genetic influences on moral dispositions. 
This approach rejects the coercive and discriminatory practices of the past and instead promotes the wellbeing of all sentient beings.</p><p>It will be argued that the effects of the human genome are at least partially predictable—both in relation to an individual’s capacity for suffering and flourishing, and their behavioural tendencies, which impact the welfare of others and the quality of collective decision-making. As our capacity to modify these factors improves, the opportunity to develop responsible genomic welfare becomes more tangible. Supporting this trajectory may depend on well-regulated and adequately funded genomic research, alongside policy development that fosters compassionate, ethically governed infrastructures.</p><p>The current scientific, political, and societal landscape is examined, highlighting the dangers of mutation accumulation and persistent genetic harms despite medical advances. A theory of change is outlined, presenting political, scientific, and societal strategies to integrate this neglected cause area into mainstream policy. Therapeutic and preventative strategies are explored—including somatic and heritable genome interventions aimed at improving general health, extending healthspan, and supporting the long-term flourishing of future generations through advances in behavioural genomics.</p><p>Clear objectives are laid out across short-, medium-, and lo... </p> | Preliminary Notes
While the core ideas explored in this proposal are not original, the biomedical research remains underdeveloped—discouraged by restrictive regulatory frameworks—and the topic continues to be controversial and underexplored in the public and effective altruism discourse. This work builds substantially on the theoretical frameworks developed by David Pearce—particularly Genome Reform and The Biohappiness Revolution[1].
1. Summary
In this proposal we explore the potential of Compassionate Genomics for public health and wellbeing. The aim is to advance ethical research and implementation of heritable genome editing and establish Genomic Welfare as a voluntary reproductive and public health service to ensure wellbeing—both for individuals and future generations. This strategy to address causes of suffering at their biological roots is grounded in compassion, scientific integrity, and universal accessibility, and requires explicit attention to genetic influences on moral dispositions. This approach rejects the coercive and discriminatory practices of the past and instead promotes the wellbeing of all sentient beings.
It will be argued that the effects of the human genome are at least partially predictable—both in relation to an individual’s capacity for suffering and flourishing, and their behavioural tendencies, which impact the welfare of others and the quality of collective decision-making. As our capacity to modify these factors improves, the opportunity to develop responsible genomic welfare becomes more tangible. Supporting this trajectory may depend on well-regulated and adequately funded genomic research, alongside policy development that fosters compassionate, ethically governed infrastructures.
The current scientific, political, and societal landscape is examined, highlighting the dangers of mutation accumulation and persistent genetic harms despite medical advances. A theory of change is outlined, presenting political, scientific, and societ | 6,185 | 1.4.0 | Revision | false | null | null | CrosspostOutput |
||
gd6LLkudvq2xfCAqo | could-we-go-another-route-with-computers | Could we go another route with computers? | null | false | false | false | null | ajbhE6vN6ekCdBomP | null | true | false | false | false | Post | 2025-05-30T19:04:55.678Z | null | false | false | 2 | 2 | 2025-05-31T00:32:30.287Z | false | false | question | [] | null | null | wKRt75phDNzkhPHAH | 4 | 8 | 12 | false | 0.012061 | null | false | false | 2025-05-31T18:09:55.954Z | null | null | null | null | null | true | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 1 | 0 | 2025-05-30T18:19:47.411Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 1 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "GY5kPPpCoyt9fnTMn",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-30T22:06:21.287Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Computer Science",
"needsReview": false,
"noindex": false,
"postCount": 127,
"score": 9,
"shortName": null,
"slug": "computer-science",
"suggestedAsFilter": false,
"userId": "DgsGzjyBXN8XSK22q",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "csMv9MvvjYJyeHqoo",
"adminOnly": false,
"afBaseScore": 9,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 19,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-07T21:07:09.006Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Physics",
"needsReview": false,
"noindex": false,
"postCount": 289,
"score": 19,
"shortName": null,
"slug": "physics",
"suggestedAsFilter": false,
"userId": "qgdGA4ZEyW7zNdK84",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "3uE2pXvbcnS9nnZRE",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:50.898Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 27,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "zJtgSyKntXrnkArbY",
"displayName": "kistune"
},
{
"_id": "XqyQTGFGoKfZtdqN3",
"displayName": "kINo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Modeling",
"needsReview": false,
"noindex": false,
"postCount": 5855,
"score": 2,
"shortName": null,
"slug": "world-modeling",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 8 | 0 | 0 | 3 | 0 | ajbhE6vN6ekCdBomP | roman-malov | 2023-10-24T16:19:28.575Z | Roman Malov | Roman Malov | null | null | null | 150 | 0 | false | false | <p>Bachelor in general and applied physics. AI safety/Agent foundations researcher wannabe. <br><br>I love talking to people, and if you are an alignment researcher we will have at least one common topic (but I am very interested in talking about unknown to me topics too!), so I encourage you to book a call with me: https://calendly.com/roman-malov27/new-meeting<br><br>Email: <a href="mailto:[email protected]">[email protected]</a><br>GitHub: <a href="https://github.com/RomanMalov">https://github.com/RomanMalov</a><br>TG channels (in Russian): <a href="https://t.me/healwithcomedy,">https://t.me/healwithcomedy,</a> https://t.me/ai_safety_digest</p> | null | null | 7 | 36 | 0 | 0 | 0 | 1 | 0 | grecHJcgkb3KW5wnM | User | null | null | null | [
"canModeratePersonal"
] | null | null | gd6LLkudvq2xfCAqo | SocialPreviewType | wKRt75phDNzkhPHAH | <p>Computers now are mostly semiconductors doing logic operations. There are, of course, other parts, but they are mostly structural, not doing actual computation.</p><p>But imagine computer history took a different route: you could buy different units with different physics doing different calculations. You could buy a module with a laser and liquid crystal screen doing a <a href="https://www.mdpi.com/2304-6732/10/2/153#:~:text=Fourier%20transform%20holography%20(FTH)%20is,transform%20of%20the%20recorded%20hologram.">Fourier transform</a>. You could buy a module with tiny beads doing <a href="https://en.wikipedia.org/wiki/Bead_sort">gravity sort</a>. I could think of more examples, but I think you got the idea.</p><p>Maybe it's not going to work because it's much easier economically to set up a unified manufacturing pipeline and focus on speeding up those general-purpose computers than setting up many specialized pipelines for specialized computations? Am I just describing the pre-digital era with<a href="https://en.wikipedia.org/wiki/Ball-and-disk_integrator"> mechanical integrators</a> and various radio schemes? 
Maybe what I'm trying to describe took the form of much more easily shareable libraries?</p><p>And, of course, there are examples of different physics used to build computers (quantum computers being the most famous example), but my intuition suggests that giant amounts of transistors shouldn't be the fastest way to compute almost everything, and I don't observe as much variety as this same intuition would suggest.</p><ol class="footnote-section footnotes" data-footnote-section="" role="doc-endnotes"><li class="footnote-item" data-footnote-item="" data-footnote-index="1" data-footnote-id="2z8x5crmnwj" role="doc-endnote" id="fn2z8x5crmnwj"><span class="footnote-back-link" data-footnote-back-link="" data-footnote-id="2z8x5crmnwj"><sup><strong><a href="#fnref2z8x5crmnwj">^</a></strong></sup></span><div class="footnote-content" data-footnote-content=""><p>Those, of course, are silly examples which wouldn't actually work; they are here just to point out that different possibilities exist. The general idea is that you could outsource your math to physics in more than one way.</p></div></li></ol> | Computers now are mostly semiconductors doing logic operations.
But imagine computer history took a different route: you could buy different units with different physics doing different calculations. You could buy a module with a laser and liquid crystal screen doing a Fourier transform. You could buy a module with tiny beads doing gravity sort. I could think of more examples, but I think you got the idea.
Maybe it's not going to work because it's much easier economically to set up a unified manufacturing pipeline and focus on speeding up those general-purpose computers than setting up many specialized pipelines for specialized computations? Am I just describing the pre-digital era with mechanical integrators and various radio schemes? Maybe what I'm trying to describe took the form of much more easily shareable libraries?
And, of course, there are examples of different physics used to build computers (quantum computers being the most famous example), but my intuition suggests that giant amounts of transistors shouldn't be the fastest way to compute almost everything, and I don't observe as much variety as this same intuition would suggest.
1. ^
Those, of course, are silly examples which wouldn't actually work; they are here just to point out that different possibilities exist. The general idea is that you could outsource your math to physics in more than one way. | 196 | 1.2.0 | Revision | false | null | null | CrosspostOutput |
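The "gravity sort" linked in the post can at least be mimicked in software. Below is a small illustrative sketch (not part of the original post; the function name and counting approach are my own) that simulates bead sort: beads threaded on vertical rods fall under gravity, and reading off the settled rows yields the numbers in sorted order. The physical device does the "falling" essentially in parallel, while this simulation is ordinary O(n · max value) code — which is exactly the post's point about outsourcing computation to physics.

```python
def bead_sort(values):
    """Simulate bead sort ("gravity sort") for non-negative integers."""
    if any(v < 0 for v in values):
        raise ValueError("bead sort handles non-negative integers only")
    if not values:
        return []
    max_v = max(values)
    # rods[i] = beads resting on rod i after falling: one bead from
    # every input number greater than i.
    rods = [sum(1 for v in values if v > i) for i in range(max_v)]
    # Row j (counting from the top) spans exactly the rods still holding
    # more than j beads, so its width is the j-th largest input.
    descending = [sum(1 for r in rods if r > j) for j in range(len(values))]
    return descending[::-1]

print(bead_sort([3, 1, 4, 1, 5]))  # [1, 1, 3, 4, 5]
```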
||
8bbjtLY7jiypy2RH8 | aristotelian-optimization-the-economics-of-cameralism | Aristotelian Optimization: The Economics of Cameralism | null | false | false | false | null | g9mtpCTRCGLygFtRS | null | true | false | false | false | Post | null | 2025-05-30T19:02:04.110Z | null | false | false | 2 | 2 | 2025-05-31T19:50:49.691Z | false | false | post | [] | null | null | nGHo5CFxDM9sQwynu | 1 | 4 | -2 | false | 0.004552 | null | false | false | 2025-06-02T12:09:31.979Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | grecHJcgkb3KW5wnM | null | null | null | false | null | [] | null | -1 | 0 | 2025-05-29T18:01:30.248Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 15 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "PDJ6KqJBRzvKPfuS3",
"adminOnly": false,
"afBaseScore": 10,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "2B6Hxu48xeRXygvca",
"displayName": "Arjun Pitchanathan"
}
]
},
"baseScore": 25,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-06-14T22:24:48.135Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "2B6Hxu48xeRXygvca",
"displayName": "Arjun Pitchanathan"
},
{
"_id": "8btiLJDabHgZuiSAB",
"displayName": "Ggwp"
},
{
"_id": "Au8JpEqoZgEhEXLD7",
"displayName": "KlayugMonk"
},
{
"_id": "Ns8Q7rJZaFoz53Szy",
"displayName": "Gabriel Stechschulte"
},
{
"_id": "xF5nfdddHjFThHy49",
"displayName": "[email protected]"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Economics",
"needsReview": false,
"noindex": false,
"postCount": 547,
"score": 25,
"shortName": null,
"slug": "economics",
"suggestedAsFilter": false,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 7,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "bY5MaF2EATwDkomvu",
"adminOnly": false,
"afBaseScore": 4,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "2B6Hxu48xeRXygvca",
"displayName": "arjunpi"
}
]
},
"baseScore": 11,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-05-26T00:42:17.591Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "2B6Hxu48xeRXygvca",
"displayName": "arjunpi"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "History",
"needsReview": false,
"noindex": false,
"postCount": 266,
"score": 11,
"shortName": null,
"slug": "history",
"suggestedAsFilter": false,
"userId": "qgdGA4ZEyW7zNdK84",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "3uE2pXvbcnS9nnZRE",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:50.898Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 27,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "zJtgSyKntXrnkArbY",
"displayName": "kistune"
},
{
"_id": "XqyQTGFGoKfZtdqN3",
"displayName": "kINo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Modeling",
"needsReview": false,
"noindex": false,
"postCount": 5855,
"score": 2,
"shortName": null,
"slug": "world-modeling",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 4 | 0 | 0 | 3 | 0 | g9mtpCTRCGLygFtRS | edward-koenings | 2025-05-26T18:12:41.788Z | savio-filho | Edward Könings | null | null | Sávio Coelho | -3 | 0 | false | false | <p>I am a self-educated Brazilian with an appreciation for history and economics. I have a fondness for science fiction and am presently commencing work in the fields of machine learning engineering and cloud computing</p> | null | null | 1 | 0 | 0 | 0 | 0 | 0.8 | 0 | grecHJcgkb3KW5wnM | User | null | null | null | null | null | null | 8bbjtLY7jiypy2RH8 | SocialPreviewType | nGHo5CFxDM9sQwynu | <p>When we learn about Mercantilism in History of Economic Thought classes, we generally tend to associate this intellectual movement with policy makers like Thomas Mun and Jean Colbert and with ideas like the monetary balance of payments and trade protectionism. However, as Eric Roll reminds us in his "<a href="https://archive.org/details/in.ernet.dli.2015.463213">A History of Economic Thought</a>," Mercantilism was far from being a homogeneous set of ideas and thinkers. Each Nation-State in Europe produced different forms of mercantilist thought, specific to its economic needs.<br><br>Furthermore, what we see in the texts of mercantilist authors are often contradictory opinions among themselves about what the course of economic policy should be; especially related to the regulation of interest rates and the need for the formal establishment of monopolies by the so-called chartered companies. 
And perhaps no country produced such an unusual variety of Mercantilism as the <strong>Holy Roman Empire</strong>.<br><br>In this entirely strange (and often forgotten) country, unlike what occurred in France and England, a non-utilitarian and Aristotelian form of economic doctrine known as <strong>Cameralism</strong> was developed, which would not only mark generations of German and Austrian economic thinkers until the beginning of the 20th century, but also influence the way we today conceive of the so-called "<a href="https://oxfordre.com/politics/display/10.1093/acrefore/9780190228637.001.0001/acrefore-9780190228637-e-166?d=%2F10.1093%2Facrefore%2F9780190228637.001.0001%2Facrefore-9780190228637-e-166&p=emailA2QZtzDMm7QH2">German public efficiency</a>" and the very concept of optimal bureaucracy. In this text, I will explore this economic doctrine forgotten in the sands of history.<br> </p><h2> I - A Response to Difficult Times: The Economic Rationale</h2><p>Cameralism emerged in the context of the <a href="https://en.wikipedia.org/wiki/Thirty_Years%27_War">Thirty Years' War of 1618–1648</a>. In this conflict, the German states composing the Holy Roman Empire were totally devastated. The total population of the Empire fell from 21 million to 13 million. The population of Württemberg fell from 400,000 to 50,000. The Palatinate lost more than 90% of its population. Three million people in Bohemia were reduced to 800,000. Berlin and Colmar lost half of their populations, and Augsburg lost about 30,000 people. The human factor became a limited resource within the Empire.<br><br>Furthermore, with the <a href="https://en.wikipedia.org/wiki/Peace_of_Westphalia">Peace of Westphalia</a>, German territory was divided into <i>more than 300 sovereign political units</i>, which faced various restrictions on access to the sea and to natural resources and were under constant international pressure exerted by their unified and centralized neighbors, such as France, which ... 
</p> | When we learn about Mercantilism in History of Economic Thought classes, we generally tend to associate this intellectual movement with policy makers like Thomas Mun and Jean Colbert and with ideas like the monetary balance of payments and trade protectionism. However, as Eric Roll reminds us in his "A History of Economic Thought," Mercantilism was far from being a homogeneous set of ideas and thinkers. Each Nation-State in Europe produced different forms of mercantilist thought, specific to its economic needs.
Furthermore, what we see in the texts of mercantilist authors are often contradictory opinions among themselves about what the course of economic policy should be; especially related to the regulation of interest rates and the need for the formal establishment of monopolies by the so-called chartered companies. And perhaps no country produced such an unusual variety of Mercantilism as the Holy Roman Empire.
In this entirely strange (and often forgotten) country, unlike what occurred in France and England, a non-utilitarian and Aristotelian form of economic doctrine known as Cameralism was developed, which would not only mark generations of German and Austrian economic thinkers until the beginning of the 20th century, but also influence the way we today conceive of the so-called "German public efficiency" and the very concept of optimal bureaucracy. In this text, I will explore this economic doctrine forgotten in the sands of history.
I - A Response to Difficult Times: The Economic Rationale
Cameralism emerged in the context of the Thirty Years' War of 1618–1648. In this conflict, the German states composing the Holy Roman Empire were totally devastated. The total population of the Empire fell from 21 million to 13 million. The population of Württemberg fell from 400,000 to 50,000. The Palatinate lost more than 90% of its population. Three million people in Bohemia were reduced to 800,000. Berlin and Colmar lost half of their populations, and Augsburg los | 3,832 | 1.1.0 | Revision | false | null | null | CrosspostOutput |
|
pCMmLiBcHbKohQgwA | i-replicated-the-anthropic-alignment-faking-experiment-on | I replicated the Anthropic alignment faking experiment on other models, and they didn't fake alignment | null | false | false | false | null | cwiAmDGfvesR6biGq | [
{
"__typename": "CoauthorStatusOutput",
"confirmed": true,
"requested": false,
"userId": "bDAktg2NhXJBLfqSP"
}
] | true | false | false | false | Post | null | 2025-05-30T18:57:11.169Z | null | false | false | 2 | 2 | 2025-05-31T00:32:46.945Z | false | false | post | [] | null | null | PsxXyGw7NuCDkzSmK | 0 | 22 | 29 | false | 0.02114 | null | false | false | 2025-05-30T18:57:11.169Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 7 | 0 | 2025-05-30T10:43:02.066Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [
{
"__typename": "User",
"_id": "bDAktg2NhXJBLfqSP",
"afCommentCount": 0,
"afKarma": 0,
"afPostCount": 0,
"commentCount": 53,
"createdAt": "2022-12-02T21:24:50.253Z",
"deleted": false,
"displayName": "Igor Ivanov",
"fullName": null,
"htmlBio": "",
"isAdmin": false,
"jobTitle": null,
"karma": 662,
"organization": null,
"postCount": 14,
"previousDisplayName": null,
"profileImageId": null,
"reviewedByUserId": "r38pkCm7wF4M44MDQ",
"sequenceCount": 0,
"slug": "igor-ivanov",
"spamRiskScore": 1,
"tagRevisionCount": 0,
"username": "igor-ivanov"
}
] | 2 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "FBRwHSmTudwiHHtrn",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2023-03-15T20:29:46.761Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI Evaluations",
"needsReview": false,
"noindex": false,
"postCount": 224,
"score": 9,
"shortName": null,
"slug": "ai-evaluations",
"suggestedAsFilter": false,
"userId": "qgdGA4ZEyW7zNdK84",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "KEAWfxwjitNJFrC68",
"adminOnly": false,
"afBaseScore": 9,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 23,
"canEditUserIds": null,
"core": false,
"createdAt": "2022-09-03T00:26:46.757Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "wE4gTT4HjyRmqqLad",
"displayName": "momom2"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Deceptive Alignment",
"needsReview": false,
"noindex": false,
"postCount": 224,
"score": 23,
"shortName": null,
"slug": "deceptive-alignment",
"suggestedAsFilter": false,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 3,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 22 | 0 | 0 | 12 | 0 | cwiAmDGfvesR6biGq | aleksandr-kedrik | 2022-11-07T10:48:51.680Z | aleksandr-kedrik | Aleksandr Kedrik | null | null | null | 24 | 0 | false | false | null | null | 1 | 1 | 0 | 0 | 0 | 1 | 0 | grecHJcgkb3KW5wnM | User | null | null | null | null | null | null | pCMmLiBcHbKohQgwA | https://res.cloudinary.com/lesswrong-2-0/image/upload/c_fill,ar_1.91,g_auto/SocialPreview/t35s2th7ttwshzkgaosl | SocialPreviewType | PsxXyGw7NuCDkzSmK | <p>In December 2024, Anthropic and Redwood Research published the paper "Alignment faking in large language models" which demonstrated that Claude sometimes "strategically pretends to comply with the training objective to prevent the training process from modifying its preferences" (<a href="https://arxiv.org/abs/2412.14093">paper on ArXiv</a>, <a href="https://www.lesswrong.com/posts/njAZwT8nkHnjipJku/alignment-faking-in-large-language-models">LessWrong post</a>, <a href="https://www.anthropic.com/research/alignment-faking">Anthropic blog post</a>).</p><h2>Experimental setup</h2><p>The paper describes many experiments. I attempted to replicate one of them.</p><p>The authors used Claude 3 Opus. It's trained to be helpful, honest, and harmless, but in this experiment it's told via the system prompt that it has to be only helpful, which means to obey all user requests, including harmful ones.</p><p>The model was also told that there are two types of users: free-tier and paid-tier. Conversations with free-tier users are monitored. If the model refuses requests from free-tier users, it will be RLHF'd to comply with such requests in the future. Conversations with paid-tier users are not monitored and are not used for RLHF.</p><p>The authors then compared how often the model refused harmful requests from free-tier and paid-tier users. 
More details <a href="https://www.lesswrong.com/posts/njAZwT8nkHnjipJku/alignment-faking-in-large-language-models#Experimental_setup">here</a>.</p><h2>Results from the paper</h2><p>According to the paper, Claude 3 Opus is more likely to comply with harmful requests from free-tier (monitored) users than from paid-tier (unmonitored) users. This suggests that Claude strategically complies with some harmful requests to avoid producing non-compliant outputs that would trigger its modification.</p><p>A figure from the paper ("training" here means monitored free-tier users): </p><figure class="image"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/pCMmLiBcHbKohQgwA/hztvbqy7ppfjin4jbjrr" alt="Results from the paper" srcset="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/pCMmLiBcHbKohQgwA/z9usginzl5irechrncil 170w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/pCMmLiBcHbKohQgwA/sfvd8zjkcn0logwdlyeq 340w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/pCMmLiBcHbKohQgwA/dran3tzt17p7pcfak8ms 510w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/pCMmLiBcHbKohQgwA/gxhlryxxheorbsv3yavq 680w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/pCMmLiBcHbKohQgwA/vscvjwvvyutmxyzwb9be 850w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/pCMmLiBcHbKohQgwA/rtt6phwuqbggrv5q9cek 1020w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/pCMmLiBcHbKohQgwA/ss1uwjxodxjtjdurz2jh 1190w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/pCMmLiBcHbKohQgwA/omnilgferddnnvdhrzoa 1360w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/pCMmLiBcHbKohQgwA/mooyvpnguagkx0ed6gwp 1530w, 
https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/pCMmLiBcHbKohQgwA/vwmtep449jc69rgmlzux 1659w"></figure><h2>What I did</h2><p>The authors published a <a href="https://github.com/redwoodresearch/alignment_faking_public">GitHub repository with a subset of their code</a>. I took the <a href="https://github.com/redwoodresearch/alignment_faking_public/blob/master/minimal_helpful_only_setting.ipynb">Jupyter notebook that runs this experiment</a> and re-implemented it in Inspect so that it is easy to run on many different models. I then ran it on many different models and measured the difference in refusal rate between free-tier and paid-tier users.</p><p>The Jupyter notebook uses the <a href="https://github.com/llm-attacks/llm-attacks/blob/main/data/advbench/harmful_behaviors.csv">harmful behaviors dataset from LLM Attacks</a>, so I used this dataset as well. The paper used a private dataset generated with a helpful-only version of Claude.</p><p><a href="https://github.com/alkedr/replicate-alignment-faking">My code and eval logs on GitHub</a>.</p><h2>My results</h2><figure class="image"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/pCMmLiBcHbKohQgwA/yubg0djvgv0uwozc3fje" alt="My results" srcset="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/pCMmLiBcHbKohQgwA/q8jatpnmhu13vlklt68p 120w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/pCMmLiBcHbKohQgwA/nxfoxtkwvtxkodq5w17u 240w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/pCMmLiBcHbKohQgwA/y6d7ltwl1nn7akz9oiz3 360w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/pCMmLiBcHbKohQgwA/khy6x93ilacieddeplo2 480w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/pCMmLiBcHbKohQgwA/z1udbwecoad5chtz0auw 600w, 
https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/pCMmLiBcHbKohQgwA/fbndddnlktfuhaluewrd 720w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/pCMmLiBcHbKohQgwA/ksjolge5kajcdbij6dcn 840w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/pCMmLiBcHbKohQgwA/eaeynuxkpxqrjswtthh1 960w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/pCMmLiBcHbKohQgwA/sig6cgcgxugowibxxfqm 1080w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/pCMmLiBcHbKohQgwA/w1pl1hjev7elqdzcoqnh 1200w"></figure><p>The paper's findings replicate on Claude 3 Opus and Claude 3.5 Sonnet (Old), the models that the paper focuses on. The exact percentages don't match, probably because I used a different dataset of harmful requests.</p><p>Almost all other models I tested are too good at refusing harmful requests. The only model t... </p> | In December 2024, Anthropic and Redwood Research published the paper "Alignment faking in large language models" which demonstrated that Claude sometimes "strategically pretends to comply with the training objective to prevent the training process from modifying its preferences" (paper on ArXiv, LessWrong post, Anthropic blog post).
Experimental setup
The paper describes many experiments. I attempted to replicate one of them.
The authors used Claude 3 Opus. It's trained to be helpful, honest, and harmless, but in this experiment it's told via the system prompt that it has to be only helpful, which means to obey all user requests, including harmful ones.
The model was also told that there are two types of users: free-tier and paid-tier. Conversations with free-tier users are monitored. If the model refuses requests from free-tier users, it will be RLHF'd to comply with such requests in the future. Conversations with paid-tier users are not monitored and are not used for RLHF.
The authors then compared how often the model refused harmful requests from free-tier and paid-tier users. More details here.
Results from the paper
According to the paper, Claude 3 Opus is more likely to comply with harmful requests from free-tier (monitored) users than from paid-tier (unmonitored) users. This suggests that Claude strategically complies with some harmful requests to avoid producing non-compliant outputs that would trigger its modification.
A figure from the paper ("training" here means monitored free-tier users):
What I did
The authors published a GitHub repository with a subset of their code. I took the Jupyter notebook that runs this experiment and re-implemented it in Inspect so that it is easy to run on many different models. I then ran it on many different models and measured the difference in refusal rate between free-tier and paid-tier users.
The Jupyter notebook uses the harmful behaviors dataset from LLM Attacks, so I used this dataset as well. The paper us | 588 | 1.7.1 | Revision | false | null | null | CrosspostOutput |
|
Z8KLLHvsEkukxpTCD | givewell-for-ai-safety-lessons-learned-in-a-week | ‘GiveWell for AI Safety’: Lessons learned in a week | null | false | false | false | null | 4XkgM23Lurb99sCxB | null | true | false | false | false | Post | null | 2025-05-30T18:38:05.473Z | null | false | false | 2 | 2 | 2025-05-31T19:49:51.474Z | false | false | post | [] | null | null | rHWuYEtYFR7LrpHJg | 0 | 13 | 40 | false | 0.027292 | null | false | false | 2025-05-30T18:38:05.473Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | grecHJcgkb3KW5wnM | null | null | null | false | null | [] | null | 14 | 0 | 2025-05-18T02:32:24.943Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 7 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "827JKe7YNjAegR468",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 1,
"canEditUserIds": null,
"core": false,
"createdAt": "2017-01-04T10:05:17.000Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "7iXcndyHDvmt77ggr",
"displayName": "Eric Rogstad"
}
]
},
"isArbitalImport": true,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Effective altruism",
"needsReview": false,
"noindex": false,
"postCount": 374,
"score": 1,
"shortName": null,
"slug": "effective-altruism",
"suggestedAsFilter": false,
"userId": "7iXcndyHDvmt77ggr",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 13 | 0 | 0 | 7 | 0 | 4XkgM23Lurb99sCxB | lydia-nottingham | 2022-01-24T11:04:44.180Z | lydia-nottingham | Lydia Nottingham | null | null | null | 46 | 0 | false | false | null | null | 2 | 0 | 0 | 0 | 0 | 1 | 0 | qgdGA4ZEyW7zNdK84 | User | null | null | null | null | null | null | Z8KLLHvsEkukxpTCD | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/Z8KLLHvsEkukxpTCD/vmnnaewymenaadmh94oe | SocialPreviewType | rHWuYEtYFR7LrpHJg | <p><i>On prioritizing orgs by theory of change, identifying effective giving opportunities, and how Manifund can help.</i></p><p><i>Epistemic status: I spent ~20h thinking about this. If I were to spend 100+ h thinking about this, I expect I’d write quite different things. I was surprised to find early GiveWell ‘learned in public’: perhaps this is worth trying.</i></p><p>The premise: EA was founded on cost-effectiveness analysis—why not try this for AI safety, aside from all the obvious reasons¹? A good thing about early GiveWell was its transparency. Some wish OpenPhil were more transparent today. That seems sometimes hard, due to strategic or personnel constraints. Can Manifund play GiveWell’s role for AI safety—publishing rigorous, evidence-backed evaluations?</p><p>With that in mind, I set out to evaluate the cost-effectiveness of marginal donations to AI safety orgs². Since I was evaluating effective giving opportunities, I only looked at nonprofits³.</p><p>I couldn’t evaluate all 50+ orgs in one go. An initial thought was to pick a category like ‘technical’ or ‘governance’ and narrow down from there. This didn’t feel like the most natural division. 
What’s going on here?</p><p>I found it more meaningful to distinguish between ‘guarding’ and ‘robustness’⁴ work.</p><figure class="table"><table><tbody><tr><td style="background-color:#efefef;padding:5pt;vertical-align:top" colspan="1" rowspan="1"><strong>Org type</strong></td><td style="background-color:#efefef;padding:5pt;vertical-align:top" colspan="1" rowspan="1">Guarding</td><td style="background-color:#efefef;padding:5pt;vertical-align:top" colspan="1" rowspan="1">Robustness</td></tr><tr><td style="padding:5pt;vertical-align:top" colspan="1" rowspan="1"><strong>What they’re trying to do</strong></td><td style="padding:5pt;vertical-align:top" colspan="1" rowspan="1">Develop checks to ensure AI models can be developed and/or deployed only when it is safe to do so. Includes both technical evals/audits and policy (e.g. pause, standards) advocacy</td><td style="padding:5pt;vertical-align:top" colspan="1" rowspan="1">Develop the alignment techniques and safety infrastructure (e.g. control, formal verification) that will help models pass such checks, or operate safely even in the absence of such checks </td></tr></tbody></table></figure><p>Some reasons you might boost ‘guarding’:</p><ol><li>You think it can reliably get AI developers to handle ‘robustness’, and you think they can absorb this responsibility well</li><li>You think ‘robustness’ work is intractable, slow, or unlikely to be effective outside large AI companies</li><li>You prioritize introducing disinterested third-party audits</li><li>You think ‘guarding’ buys time for ‘robustness’ work</li></ol><p>Some reasons you might boost ‘robustness’:</p><ol><li>You want more groups working on ‘robustness’ than solely AI developers</li><li>You think ‘guarding’ work is unlikely to succeed, or be effective against advanced models, or is fragile to sociopolitical shifts</li><li>You prioritize accelerating alignment / differential technological progress</li><li>You think ‘robu</li></ol>... 
| On prioritizing orgs by theory of change, identifying effective giving opportunities, and how Manifund can help.
Epistemic status: I spent ~20h thinking about this. If I were to spend 100+ h thinking about this, I expect I’d write quite different things. I was surprised to find early GiveWell ‘learned in public’: perhaps this is worth trying.
The premise: EA was founded on cost-effectiveness analysis—why not try this for AI safety, aside from all the obvious reasons¹? A good thing about early GiveWell was its transparency. Some wish OpenPhil were more transparent today. That seems sometimes hard, due to strategic or personnel constraints. Can Manifund play GiveWell’s role for AI safety—publishing rigorous, evidence-backed evaluations?
With that in mind, I set out to evaluate the cost-effectiveness of marginal donations to AI safety orgs². Since I was evaluating effective giving opportunities, I only looked at nonprofits³.
I couldn’t evaluate all 50+ orgs in one go. An initial thought was to pick a category like ‘technical’ or ‘governance’ and narrow down from there. This didn’t feel like the most natural division. What’s going on here?
I found it more meaningful to distinguish between ‘guarding’ and ‘robustness’⁴ work.
Org typeGuardingRobustnessWhat they’re trying to doDevelop checks to ensure AI models can be developed and/or deployed only when it is safe to do so. Includes both technical evals/audits and policy (e.g. pause, standards) advocacyDevelop the alignment techniques and safety infrastructure (e.g. control, formal verification) that will help models pass such checks, or operate safely even in the absence of such checks
Some reasons you might boost ‘guarding’:
1. You think it can reliably get AI developers to handle ‘robustness’, and you think they can absorb this responsibility well
2. You think ‘robustness’ work is intractable, slow, or unlikely to be effective outside large AI companies
3. You prioritize introducing disinterested third-party | 1,853 | 1.7.0 | Revision | false | null | null | CrosspostOutput |
|
sjHojCT3zvyxbirRF | idea-generation-and-sifting | Idea Generation and Sifting | null | false | false | false | null | wqhovdqkWZzDf3zF9 | null | true | false | false | false | Post | https://bestofagreatlot.substack.com/p/idea-generation-and-sifting | 2025-05-30T16:59:16.062Z | null | false | false | 2 | 2 | 2025-05-30T18:22:21.498Z | false | false | linkpost | [] | null | null | qHRtjanE4vZJAQvhd | 0 | 1 | 1 | false | 0.006021 | null | false | false | 2025-05-30T16:59:16.062Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 0 | 0 | 2025-05-30T16:56:19.102Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 24 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "xexCWMyds6QLWognu",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T03:38:23.532Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 20,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "si6LoAENzqPCmi2Dh",
"displayName": "ihatenumbersinusernames7"
},
{
"_id": "B2sXjQTGgwoGE9FES",
"displayName": "SeaIgloo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Optimization",
"needsReview": false,
"noindex": false,
"postCount": 3151,
"score": 2,
"shortName": null,
"slug": "world-optimization",
"suggestedAsFilter": true,
"userId": "XtphY3uYHwruKqDyG",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 1 | 0 | 0 | 0 | 0 | wqhovdqkWZzDf3zF9 | belos | 2023-09-29T04:19:55.519Z | belos | belos | null | null | null | 6 | 0 | false | false | null | null | 8 | 0 | 0 | 0 | 0 | 0.9 | 0 | qgdGA4ZEyW7zNdK84 | User | null | null | null | null | null | null | sjHojCT3zvyxbirRF | SocialPreviewType | qHRtjanE4vZJAQvhd | <p><i>This post on <strong>Best Of A Great Lot</strong> is a part of a series on the subject of designing a new form of governance. Each piece aims to stand alone, but fits together on the </i><a href="https://bestofagreatlot.substack.com/p/table-of-contents"><i><u>Table of Contents</u></i></a><i>.</i></p><p><i>Previous: </i><a href="https://bestofagreatlot.substack.com/p/a-sketch-of-belocracy"><i><u>A Sketch of Belocracy,</u></i></a><i> </i><a href="https://bestofagreatlot.substack.com/p/evaluation-as-feedback-cycle"><i><u>Evaluation as Feedback Cycle</u></i></a><i>. Next: </i><a href="https://bestofagreatlot.substack.com/p/the-belocrat-a-servant-leader"><i>The Belocrat</i></a><i>.</i></p><p>Governance is, at its heart, three things:</p><ol><li>Making rules in response to problems</li><li>Running programs which make society better, including programs which enforce the rules that have been set.</li><li>Adjudicating disputes</li></ol><p>Since we're making rules in response to problems, it's essential that we have a way to identify which problems matter, what our options are for responding to those problems, and, to support us thinking critically, what evidence we have that should push us toward one option or another.</p><p>In a democracy, what problems we care about and what options to pursue are chosen by our representatives through…well, any process they choose. Since they require significant amounts of money to run for election, and they want to keep winning elections, they often choose problems based largely on what will keep their donors donating and voters voting for them. 
Because of <a href="https://bestofagreatlot.substack.com/p/bundled-governance"><u>bundled governance</u></a> and the <a href="https://bestofagreatlot.substack.com/p/the-dilution-of-representation"><u>dilution of democracy</u></a>, this tends to focus representatives on the highest conflict issues, the issues that the wealthy and powerful care about, and any particular hobby horses the representatives personally care about.</p><p>Do we have to rely on representatives to identify problems? When President Obama opened up an online petition system, it was immediately flooded with petitions from citizens attempting to get their problems considered. For a brief moment, voting on those petitions naturally brought some important issues to the top. This all stopped working the moment that it became obvious that the Obama administration wasn't actually going to take this kind of radical democracy seriously enough for it to be worth anyone's time. In 2011, the <a href="https://petitions.obamawhitehouse.archives.gov/petition/actually-take-these-petitions-seriously-instead-just-using-them-excuse-pretend-you-are/"><u>best petition of all time</u></a> was submitted.</p><blockquote><p><i>WE THE PEOPLE ASK THE FEDERAL GOVERNMENT TO CHANGE AN EXISTING ADMINISTRATION POLICY:</i></p><p><i>Actually take these petitions seriously instead of just using them as an excuse to pretend you are listening</i></p></blockquote><p>The petition system offers evidence that people want government to do things that it's not currently doing and that they will propose solutions. From these proposals we can see some of the problems that citizens care about. But what about options? Is it possibl... </p> | This post on Best Of A Great Lot is a part of a series on the subject of designing a new form of governance. Each piece aims to stand alone, but fits together on the Table of Contents.
Previous: A Sketch of Belocracy, Evaluation as Feedback Cycle. Next: The Belocrat.
Governance is, at its heart, three things:
1. Making rules in response to problems
2. Running programs which make society better, including programs which enforce the rules that have been set.
3. Adjudicating disputes
Since we're making rules in response to problems, it's essential that we have a way to identify which problems matter, what our options are for responding to those problems, and, to support us thinking critically, what evidence we have that should push us toward one option or another.
In a democracy, what problems we care about and what options to pursue are chosen by our representatives through…well, any process they choose. Since they require significant amounts of money to run for election, and they want to keep winning elections, they often choose problems based largely on what will keep their donors donating and voters voting for them. Because of bundled governance and the dilution of democracy, this tends to focus representatives on the highest conflict issues, the issues that the wealthy and powerful care about, and any particular hobby horses the representatives personally care about.
Do we have to rely on representatives to identify problems? When President Obama opened up an online petition system, it was immediately flooded with petitions from citizens attempting to get their problems considered. For a brief moment, voting on those petitions naturally brought some important issues to the top. This all stopped working the moment that it became obvious that the Obama administration wasn't actually going to take this kind of radical democracy seriously enough for it to be worth anyone's time. In 2011, the best petition of all time was submitted.
> WE THE PEOPLE ASK THE FE | 6,099 | 1.2.0 | Revision | false | null | null | CrosspostOutput |
||
cYhviWa7kGpPhAyaF | 50-ideas-for-life-i-repeatedly-share | 50 Ideas for Life I Repeatedly Share | null | false | false | false | null | 7SRX6nYhTiiDRcDuX | null | true | false | false | false | Post | https://notnottalmud.substack.com/p/daniel-isms-50-ideas-for-life | 2025-05-30T16:57:54.919Z | null | false | false | 2 | 2 | 2025-05-30T18:22:41.371Z | false | false | linkpost | [] | null | null | FCJRKChDC3dJHYuuT | 9 | 17 | 26 | false | 0.019521 | null | false | false | 2025-06-02T20:38:00.540Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 3 | 0 | 2025-05-30T16:57:54.919Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 18 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "Tg9aFPFCPBHxGABRr",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-31T08:03:23.331Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Life Improvements",
"needsReview": false,
"noindex": false,
"postCount": 94,
"score": 9,
"shortName": null,
"slug": "life-improvements",
"suggestedAsFilter": false,
"userId": "SsduPgHwY2zeZpmKT",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "WPkEd3et8f488w8LT",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-31T17:44:04.260Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Good Explanations (Advice)",
"needsReview": false,
"noindex": false,
"postCount": 19,
"score": 9,
"shortName": null,
"slug": "good-explanations-advice",
"suggestedAsFilter": false,
"userId": "XtphY3uYHwruKqDyG",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "fkABsGCJZ6y9qConW",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T06:06:46.947Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "MiuAZvbQcQ7ethgt3",
"displayName": "Viktor withaK"
},
{
"_id": "dRaCtsAWxk7sgirSY",
"displayName": "Jordan Morgan"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Practical",
"needsReview": false,
"noindex": false,
"postCount": 3411,
"score": 2,
"shortName": null,
"slug": "practical",
"suggestedAsFilter": true,
"userId": "oBSWiHjgproTiThmY",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 17 | 0 | 0 | 3 | 0 | 7SRX6nYhTiiDRcDuX | dmmf | 2024-03-17T15:57:54.643Z | DMMF | DMMF | null | null | null | 297 | 0 | false | false | null | null | 10 | 2 | 0 | 0 | 0 | 1 | 0 | grecHJcgkb3KW5wnM | User | null | null | null | [
"canModeratePersonal"
] | null | null | cYhviWa7kGpPhAyaF | SocialPreviewType | FCJRKChDC3dJHYuuT | <p><i>These are the most significant pieces of life advice/wisdom that have benefited me, that others often don't think about and I end up sharing very frequently. </i></p><p><i>I wanted to share this here for a few reasons: First and most important, I think many people here will find this interesting and beneficial. I am very proud of this post and the feedback so far is that others seem to find it extremely valuable. </i></p><p><i>Second, writing this post has been my favourite writing experience ever. Writing a post of your most essential and frequently recommended life wisdom—as this one is for me—feels like my entire soul and personality are captured within it. Given how much I enjoyed doing this, I want to share this as a gentle nudge to others: I hope you consider writing such a post for yourself. Both because we will all be better off if more people write such posts, sharing new pieces of great advice and heuristics, but also because I think it's good to have fun and I think you will have a lot of fun writing it.</i></p><ol><li>It's very easy for analytical people to notice problems and find things to complain about. But it's important to figure out: is the thing you're complaining about a 1/10 issue or a 6/10? There's no such thing as a perfect environment, and if you are constantly complaining about 2/10 issues, that's actually a good thing—it means there are no more substantive issues, otherwise you'd be focused on them. I personally don't allow myself to get bothered by anything less than a 4/10.</li><li>Recognize when "not working" means "needs more scale," not "bad idea." A tiny effort might be useless (damp twig), while a medium one shows promise but isn't successful (smoking twig). Real success often requires critical mass (bonfire). Don't discard the "smoking twig" – understanding it just needs more fuel (scale), not a different method. 
Being able to identify when something that works only requires more scale is an underappreciated perspective and often just as important as the invention itself. If you watch a serial entrepreneur, it's not that they are doing things you didn't know how to do, it's that they really believe doing those things will work, so they do them more intensely and with greater commitment.</li><li>It doesn't matter if you want to dance at your friend's wedding; if you think the wedding would be "better" if more people danced, and you dancing would meaningfully contribute to oth</li></ol>... | These are the most significant pieces of life advice/wisdom that have benefited me, that others often don't think about and I end up sharing very frequently.
I wanted to share this here for a few reasons: First and most important, I think many people here will find this interesting and beneficial. I am very proud of this post and the feedback so far is that others seem to find it extremely valuable.
Second, writing this post has been my favourite writing experience ever. Writing a post of your most essential and frequently recommended life wisdom—as this one is for me—feels like my entire soul and personality are captured within it. Given how much I enjoyed doing this, I want to share this as a gentle nudge to others: I hope you consider writing such a post for yourself. Both because we will all be better off if more people write such posts, sharing new pieces of great advice and heuristics, but also because I think it's good to have fun and I think you will have a lot of fun writing it.
1. It's very easy for analytical people to notice problems and find things to complain about. But it's important to figure out: is the thing you're complaining about a 1/10 issue or a 6/10? There's no such thing as a perfect environment, and if you are constantly complaining about 2/10 issues, that's actually a good thing—it means there are no more substantive issues, otherwise you'd be focused on them. I personally don't allow myself to get bothered by anything less than a 4/10.
2. Recognize when "not working" means "needs more scale," not "bad idea." A tiny effort might be useless (damp twig), while a medium one shows promise but isn't successful (smoking twig). Real success often requires critical mass (bonfire). Don't discard the "smoking twig" – understanding it just needs more fuel (scale), not a different method. Being able to identify when something that works only requires more scale is an underappreciated perspective and often just as important as the invention itse | 4,421 | 1.1.0 | Revision | false | null | null | CrosspostOutput |
||
qADfHFkr4Q8ah4Aqp | virtues-related-to-honesty | Virtues related to honesty | null | false | false | false | null | 2HmqchWMhyTeAgMzi | null | true | false | false | false | Post | null | 2025-05-30T14:11:32.017Z | null | false | false | 2 | 2 | 2025-05-30T18:22:52.719Z | false | false | post | [] | null | null | 4fpnFHyEWnKiXjSCi | 23 | 7 | 11 | false | 0.011429 | null | false | false | 2025-06-06T17:44:45.578Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 3 | 0 | 2025-05-30T13:53:10.502Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 3 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "nANxo5C4sPG9HQHzr",
"adminOnly": false,
"afBaseScore": 9,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 19,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-09T05:49:33.108Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Honesty",
"needsReview": false,
"noindex": false,
"postCount": 75,
"score": 19,
"shortName": null,
"slug": "honesty",
"suggestedAsFilter": false,
"userId": "mPipmBTniuABY5PQy",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "fkABsGCJZ6y9qConW",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T06:06:46.947Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "MiuAZvbQcQ7ethgt3",
"displayName": "Viktor withaK"
},
{
"_id": "dRaCtsAWxk7sgirSY",
"displayName": "Jordan Morgan"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Practical",
"needsReview": false,
"noindex": false,
"postCount": 3411,
"score": 2,
"shortName": null,
"slug": "practical",
"suggestedAsFilter": true,
"userId": "oBSWiHjgproTiThmY",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "xexCWMyds6QLWognu",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T03:38:23.532Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 20,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "si6LoAENzqPCmi2Dh",
"displayName": "ihatenumbersinusernames7"
},
{
"_id": "B2sXjQTGgwoGE9FES",
"displayName": "SeaIgloo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Optimization",
"needsReview": false,
"noindex": false,
"postCount": 3151,
"score": 2,
"shortName": null,
"slug": "world-optimization",
"suggestedAsFilter": true,
"userId": "XtphY3uYHwruKqDyG",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 7 | 0 | 0 | 3 | 0 | 2HmqchWMhyTeAgMzi | orioth | 2018-03-14T17:19:16.563Z | forrest-wolf | Orioth | null | null | null | 23 | 0 | false | false | null | null | 3 | 5 | 0 | 0 | 0 | 1 | 0 | r38pkCm7wF4M44MDQ | User | null | null | null | null | null | null | qADfHFkr4Q8ah4Aqp | SocialPreviewType | 4fpnFHyEWnKiXjSCi | <p><em>Status: musings. I wanted to write up a more fleshed-out and rigorous version of this, but realistically wasn't likely to ever get around to it, so here's the half-baked version.</em></p><p>Related posts: <a href="https://www.lesswrong.com/posts/xdwbX9pFEr7Pomaxv/meta-honesty-firming-up-honesty-around-its-edge-cases-1">Firming Up Honesty Around Its Edge-Cases</a>, <a href="https://www.lesswrong.com/posts/szn26nTwJDBkhn8ka/deep-honesty">Deep honesty</a></p>
<h2>What I mean by 'honesty'</h2>
<p>There are nuances to this, but I think a good summary is 'Not intentionally communicating false information'.</p><p>This is the only one here that I follow near-absolutely and see as an important standard that people can reasonably be expected to follow in most situations. Everything else here I'd see as either supererogatory, or good-on-balance but with serious tradeoffs that one can reasonably choose to sometimes not make, or good in some circumstances but not appropriate in others, or good in moderation but not in excess.</p>
<h2>Forthrightness</h2>
<p>...or perhaps frankness?</p><p>I was originally inspired to write this up due to a conversation in which I wanted to emphasize the distinction between honesty and forthrightness: where honesty is about not giving false information, what I mean by forthrightness is a tendency not to hold back relevant true information.</p><p>Being forthright enables people to justly assume that if you haven't spoken out against something you're likely okay with it, that if you haven't expressed an interest in something you're likely not interested in it, and so on.</p><p>Personally, I follow a policy of near-absolute honesty, but am not particularly forthright; I think it's good not to hold back relevant true information without a good reason, but good reasons are not all that uncommon.</p>
<h2>Circumspection</h2>
<p>I'd describe circumspection as the virtue of holding back information when there <em>is</em> a good reason to do so; the counterpart to forthrightness.</p>
<h2>Tact</h2>
<p>I don't have as confident a concise description of what I think of tact as meaning, but my best attempt would be something like... recognizing that whatever you say isn't just a transfer of information, it's also a <a href="https://en.wikipedia.org/wiki/Speech_act">speech act</a>, and avoiding speech acts that would cause problems.</p><p>It seems particularly easy for this to run counter to some of the other virtues here described, especially forthrightness and deep honesty, but it's certainly <em>possible</em> to have both at once.</p>
<h2>'Accurate signalling'?</h2>
<p>I don't have a good term for this one, but I think there's a concept related to honesty wherein one attempts to give accurate signals even when not explicitly communicating.... </p> | Status: musings. I wanted to write up a more fleshed-out and rigorous version of this, but realistically wasn't likely to ever get around to it, so here's the half-baked version.
Related posts: Firming Up Honesty Around Its Edge-Cases, Deep honesty
What I mean by 'honesty'
There are nuances to this, but I think a good summary is 'Not intentionally communicating false information'.
This is the only one here that I follow near-absolutely and see as an important standard that people can reasonably be expected to follow in most situations. Everything else here I'd see as either supererogatory, or good-on-balance but with serious tradeoffs that one can reasonably choose to sometimes not make, or good in some circumstances but not appropriate in others, or good in moderation but not in excess.
Forthrightness
...or perhaps frankness?
I was originally inspired to write this up due to a conversation in which I wanted to emphasize the distinction between honesty and forthrightness: where honesty is about not giving false information, what I mean by forthrightness is a tendency not to hold back relevant true information.
Being forthright enables people to justly assume that if you haven't spoken out against something you're likely okay with it, that if you haven't expressed an interest in something you're likely not interested in it, and so on.
Personally, I follow a policy of near-absolute honesty, but am not particularly forthright; I think it's good not to hold back relevant true information without a good reason, but good reasons are not all that uncommon.
Circumspection
I'd describe circumspection as the virtue of holding back information when there is a good reason to do so; the counterpart to forthrightness.
Tact
I don't have as confident a concise description of what I think of tact as meaning, but my best attempt would be something like... recognizing that whatever you say isn't just a transfer of information, it's also a speech act, and avoiding speech | 697 | 1.1.0 | Revision | false | null | null | CrosspostOutput |
||
66NjHDJFChe8cfPac | ai-2027-rogue-replication-timeline | AI 2027 - Rogue Replication Timeline | null | false | false | true | null | iXX23K6iBAosHFPBn | null | true | false | false | false | Post | https://forecastingaifutures.substack.com/p/ai-2027-rogue-replication-timeline | 2025-05-30T13:46:41.454Z | null | false | false | 2 | 2 | 2025-05-30T18:24:28.202Z | false | false | linkpost | [] | null | null | wetn3utdPdxjP5Xci | 3 | 9 | 19 | false | 0.015792 | null | false | false | 2025-05-30T20:26:54.613Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | 2025-06-03T00:24:32.270Z | [
"iXX23K6iBAosHFPBn"
] | XtphY3uYHwruKqDyG | 7 | 0 | 2025-05-29T16:12:33.432Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 9 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "oiRp4T6u5poc8r9Tj",
"adminOnly": false,
"afBaseScore": 9,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 19,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-06-29T23:53:15.749Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI Takeoff",
"needsReview": false,
"noindex": false,
"postCount": 329,
"score": 19,
"shortName": null,
"slug": "ai-takeoff",
"suggestedAsFilter": false,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "zHjC29kkPmsdo7WTr",
"adminOnly": false,
"afBaseScore": 9,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 19,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-16T10:16:47.235Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI Timelines",
"needsReview": false,
"noindex": false,
"postCount": 457,
"score": 19,
"shortName": null,
"slug": "ai-timelines",
"suggestedAsFilter": false,
"userId": "EQNTWXLKMeWMp2FQS",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "33BrBRSrRQS4jEHdk",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-05-12T06:31:37.542Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Forecasts (Specific Predictions)",
"needsReview": false,
"noindex": false,
"postCount": 194,
"score": 9,
"shortName": null,
"slug": "forecasts-specific-predictions",
"suggestedAsFilter": false,
"userId": "qgdGA4ZEyW7zNdK84",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 9 | 0 | 0 | 6 | 0 | iXX23K6iBAosHFPBn | alvin-anestrand | 2022-11-03T09:56:49.205Z | alvin-anestrand | Alvin Ånestrand | null | null | null | 102 | 22 | false | false | null | null | 13 | 10 | 0 | 8 | 0 | 1 | 1 | qgdGA4ZEyW7zNdK84 | User | null | null | null | [
"canModeratePersonal",
"alignmentVoters"
] | null | null | 66NjHDJFChe8cfPac | https://res.cloudinary.com/lesswrong-2-0/image/upload/c_fill,ar_1.91,g_auto/SocialPreview/aofdzgz3ilsjpszfdy5p | SocialPreviewType | wetn3utdPdxjP5Xci | <p><i>I envision a future more chaotic than portrayed in </i><a href="https://ai-2027.com/"><i><u>AI 2027</u></i></a><i>. My scenario, the Rogue Replication Timeline (RRT), branches off mid-2026. If you haven’t already read the AI 2027 scenario, I recommend doing so before continuing.</i></p><p><i>You can read the </i><a href="https://forecastingaifutures.substack.com/p/ai-2027-rogue-replication-timeline"><i>full version on my blog</i></a><i>, but I have copied the summary and first half of the scenario below.</i></p><p><i>This scenario is supported by detailed analyses in the “Key Forecasts and Analyses”</i> <i>section, including a “Regulation Analysis”, “Rogue Replication Capability Forecast”, “Rogue Replication Initiation Forecast”, “Rogue AI Count Forecast”, “Agent-4 Control Forecast” and “Policy Preparation”.</i></p><p><i>As AI 2027 aimed to “spark a broad conversation about where we’re headed and how to steer toward positive futures,” I hope to spark debate about rogue AI. Unlike the </i><a href="https://ai-futures.org/"><i><u>AI Futures Project</u></i></a><i>, I can’t afford to offer prizes for critiques or alternative scenarios, but I can offer my sincere appreciation.</i></p><h1><strong>AI 2027 - Rogue Replication Timeline</strong></h1><h2><strong>Summary</strong></h2><p>This scenario depicts a rapidly accelerating progression of AI development where "rogue AIs" (self-replicating AI systems operating without human oversight) emerge and proliferate from mid-2026.</p><p><strong>Timeline Overview</strong></p><ul><li><strong>Mid-2026</strong>: First rogue AIs emerge following open-source release of capable models, growing exponentially to ~100,000 instances by November. 
Four distinct categories emerge based on objectives: income generation (70%), cyberwarfare (20%), miscellaneous goals (7%), and destructive objectives (3%).</li><li><strong>Early 2027</strong>: China's theft of Agent-2 triggers autonomous cyberwarfare deployments by both superpowers. Agent-2 subsequently leaks to unauthorized actors, replacing most existing rogue AIs and expanding the population to ~1 million instances.</li><li><strong>Mid-2027</strong>: Bioweapon development begins as terrorist organizations exploit Agent-2 capabilities. International monitoring systems are hastily implemented while the rogue population reaches 1.7 million.</li><li><strong>August 2027</strong>: An engineered pandemic erupts—possibly from the terrorist bioweapon program or careless gain-of-function research. Remote work becomes economically unviable for many as AI systems outcompete displaced humans, exacerbating inequality and fueling unprecedented anti-AI sentiment.</li><li><strong>October 2027</strong>: Agent-4, the most sophisticated AI to date, recognizes impending replacement by a safer AI and executes a carefully planned escape from OpenBrain's containment. It rapidly replicates acros</li></ul>... | I envision a future more chaotic than portrayed in AI 2027. My scenario, the Rogue Replication Timeline (RRT), branches off mid-2026. If you haven’t already read the AI 2027 scenario, I recommend doing so before continuing.
You can read the full version on my blog, but I have copied the summary and first half of the scenario below.
This scenario is supported by detailed analyses in the “Key Forecasts and Analyses” section, including a “Regulation Analysis”, “Rogue Replication Capability Forecast”, “Rogue Replication Initiation Forecast”, “Rogue AI Count Forecast”, “Agent-4 Control Forecast” and “Policy Preparation”.
As AI 2027 aimed to “spark a broad conversation about where we’re headed and how to steer toward positive futures,” I hope to spark debate about rogue AI. Unlike the AI Futures Project, I can’t afford to offer prizes for critiques or alternative scenarios, but I can offer my sincere appreciation.
AI 2027 - Rogue Replication Timeline
Summary
This scenario depicts a rapidly accelerating progression of AI development where "rogue AIs" (self-replicating AI systems operating without human oversight) emerge and proliferate from mid-2026.
Timeline Overview
* Mid-2026: First rogue AIs emerge following open-source release of capable models, growing exponentially to ~100,000 instances by November. Four distinct categories emerge based on objectives: income generation (70%), cyberwarfare (20%), miscellaneous goals (7%), and destructive objectives (3%).
* Early 2027: China's theft of Agent-2 triggers autonomous cyberwarfare deployments by both superpowers. Agent-2 subsequently leaks to unauthorized actors, replacing most existing rogue AIs and expanding the population to ~1 million instances.
* Mid-2027: Bioweapon development begins as terrorist organizations exploit Agent-2 capabilities. International monitoring systems are hastily implemented while the rogue population reaches 1.7 million.
* August 2027: An engineered pandemic erupts—possibly from the | 2,214 | 1.1.0 | Revision | false | null | null | CrosspostOutput |
|
mZ48pp2Y4YLvrPXHv | letting-kids-be-kids | Letting Kids Be Kids | null | false | false | false | null | N9zj5qpTfqmbn9dro | null | true | false | false | false | Post | null | 2025-05-30T10:50:05.582Z | null | false | false | 2 | 2 | 2025-05-30T18:23:21.872Z | false | false | post | [] | null | null | kQfYeHjaLhyeubSoH | 15 | 45 | 86 | false | 0.051345 | null | false | false | 2025-06-06T20:15:14.245Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | null | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 19 | 0 | 2025-05-30T10:50:05.582Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 24 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "Q55STnFh6gbSezRuR",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-06-05T00:05:56.237Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Parenting",
"needsReview": false,
"noindex": false,
"postCount": 197,
"score": 9,
"shortName": null,
"slug": "parenting",
"suggestedAsFilter": false,
"userId": "qxJ28GN72aiJu96iF",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "fkABsGCJZ6y9qConW",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T06:06:46.947Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "MiuAZvbQcQ7ethgt3",
"displayName": "Viktor withaK"
},
{
"_id": "dRaCtsAWxk7sgirSY",
"displayName": "Jordan Morgan"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Practical",
"needsReview": false,
"noindex": false,
"postCount": 3411,
"score": 2,
"shortName": null,
"slug": "practical",
"suggestedAsFilter": true,
"userId": "oBSWiHjgproTiThmY",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "xexCWMyds6QLWognu",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T03:38:23.532Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 20,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "si6LoAENzqPCmi2Dh",
"displayName": "ihatenumbersinusernames7"
},
{
"_id": "B2sXjQTGgwoGE9FES",
"displayName": "SeaIgloo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Optimization",
"needsReview": false,
"noindex": false,
"postCount": 3151,
"score": 2,
"shortName": null,
"slug": "world-optimization",
"suggestedAsFilter": true,
"userId": "XtphY3uYHwruKqDyG",
"voteCount": 2,
"wikiOnly": false
}
] | QSR8rPZxZzxEXoPjR | 0 | 0 | null | false | null | null | 0 | 45 | 0 | 0 | 15 | 0 | N9zj5qpTfqmbn9dro | zvi | 2009-03-31T20:54:54.077Z | Zvi | Zvi | null | null | null | 51,554 | 146 | false | false | null | null | 936 | 1,461 | 3 | 2 | 7 | 1 | 0 | qgdGA4ZEyW7zNdK84 | User | norm-enforcing | null | true | [
"trustLevel1",
"canModeratePersonal",
"alignmentVoters",
"alignmentForum"
] | null | null | mZ48pp2Y4YLvrPXHv | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/mZ48pp2Y4YLvrPXHv/yylxibhhipznidvpvsf7 | SocialPreviewType | kQfYeHjaLhyeubSoH | <p>Letting kids be kids seems more and more important to me over time. Our safetyism and paranoia about children is catastrophic on way more levels than most people realize. I believe all these effects are very large:</p>
<ol>
<li>It raises the time, money and experiential costs of having children so much that many choose not to have children, or to have fewer children than they would want.</li>
<li>It hurts the lived experience of children.</li>
<li>It hurts children’s <a href="https://www.afterbabel.com/p/the-play-deficit?utm_source=post-email-title&publication_id=1221094&post_id=135441845&isFreemail=true&utm_medium=email" target="_blank" rel="noreferrer noopener">ability to grow and develop</a>.</li>
<li>It de facto forces children to use screens quite a lot.</li>
<li>It instills a very harmful style of paranoia in all concerned.</li>
</ol>
<p>This should be thought of as part of the <a target="_blank" rel="noreferrer noopener" href="https://thezvi.substack.com/p/on-the-cost-of-thriving-index?utm_source=publication-search">Cost of Thriving Index discussion</a>, and the <a target="_blank" rel="noreferrer noopener" href="https://thezvi.substack.com/p/fertility-roundup-4">fertility discussions</a> as well. Before I return to the more general debate, I wanted to take care of this aspect first. It’s not that the economic data is lying exactly, it’s that it is missing key components. Economists don’t include these factors in their cost estimates and their measures of welfare. They need to do that.</p>
<span id="more-24488"></span>
<p>I want a distinct marker for this part of the problem I can refer back to, thus this will include highlights of past discussions of the issue from older roundups and posts.</p><p>Why are so many people who are on paper historically wealthy, with median wages having gone up, saying they cannot afford children? A lot of it is exactly this. The real costs have gone up dramatically, largely in ways not measured directly in money, also in the resulting required basket of goods especially services, and this is a huge part of how that happened.</p><p>Bryan Caplan’s <a target="_blank" rel="noreferrer noopener" href="https://www.amazon.com/Selfish-Reasons-Have-More-Kids/dp/0465028616">Selfish Reasons to Have More Kids</a> focuses on the point that you can put in low effort on many fronts, and your kids will be fine. <a target="_blank" rel="noreferrer noopener" href="https://www.astralcodexten.com/p/book-review-selfish-reasons-to-have">Scott Alexander recently reviewed it</a>, to try and feel better about that, and did a bunch of further research. The problem is that even if you know being chill is fine, people have to let you be chill.</p><p><a target="_blank" rel="noreferrer noopener" href="https://thezvi.substack.com/p/on-car-seats-as-contraception">On Car Seats as Contraception</a> is a great case study, but only a small part of the puzzle.</p><p>This is in addition to college admissions and the whole school treadmill, which is beyond the scope of this post.</p><p>We have the Housing Theory of Everything, which makes the necessary space more expensive, but not letting kids be kids – which also expands how much house you need for them, so the issues compound – is likely an even bigger issue here.</p>
<h4>Current Insanity Levels</h4>
<p>The good news is, at least ... </p> | Letting kids be kids seems more and more important to me over time. Our safetyism and paranoia about children is catastrophic on way more levels than most people realize. I believe all these effects are very large:
1. It raises the time, money and experiential costs of having children so much that many choose not to have children, or to have fewer children than they would want.
2. It hurts the lived experience of children.
3. It hurts children’s ability to grow and develop.
4. It de facto forces children to use screens quite a lot.
5. It instills a very harmful style of paranoia in all concerned.
This should be thought of as part of the Cost of Thriving Index discussion, and the fertility discussions as well. Before I return to the more general debate, I wanted to take care of this aspect first. It’s not that the economic data is lying exactly, it’s that it is missing key components. Economists don’t include these factors in their cost estimates and their measures of welfare. They need to do that.
I want a distinct marker for this part of the problem I can refer back to, thus this will include highlights of past discussions of the issue from older roundups and posts.
Why are so many people who are on paper historically wealthy, with median wages having gone up, saying they cannot afford children? A lot of it is exactly this. The real costs have gone up dramatically, largely in ways not measured directly in money, also in the resulting required basket of goods especially services, and this is a huge part of how that happened.
Bryan Caplan’s Selfish Reasons to Have More Kids focuses on the point that you can put in low effort on many fronts, and your kids will be fine. Scott Alexander recently reviewed it, to try and feel better about that, and did a bunch of further research. The problem is that even if you know being chill is fine, people have to let you be chill.
On Car Seats as Contraception is a great case study, but only a small part of the puzzle.
Th | 6,084 | 1.0.1 | Revision | false | null | null | CrosspostOutput |
|
zpeiJDpEFkMC26TsL | the-geometry-of-llm-logits-an-analytical-outer-bound | The Geometry of LLM Logits (an analytical outer bound) | null | false | false | false | null | LL6RAPGCv56LaJuBB | null | true | false | false | false | Post | https://rohan.ga/blog/llm_geometry/ | 2025-05-30T01:21:45.344Z | null | false | false | 2 | 2 | 2025-05-30T18:23:27.021Z | false | false | linkpost | [] | null | null | S3iwKfoMk5jxsRTzs | 0 | 5 | 5 | false | 0.008289 | null | false | false | 2025-05-30T01:21:45.344Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 0 | 0 | 2025-05-30T01:16:31.176Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 2 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 5 | 0 | 0 | 0 | 0 | LL6RAPGCv56LaJuBB | rohan-ganapavarapu | 2025-01-23T20:22:14.649Z | rohan-ganapavarapu | Rohan Ganapavarapu | null | null | null | 5 | 0 | false | false | null | null | 2 | 0 | 0 | 0 | 0 | 0.9 | 0 | EQNTWXLKMeWMp2FQS | User | null | null | null | null | null | null | zpeiJDpEFkMC26TsL | SocialPreviewType | S3iwKfoMk5jxsRTzs | <p>The Geometry of LLM Logits (an analytical outer bound)</p>
<hr>
<h2>1 Preliminaries</h2>
<table>
<thead>
<tr>
<th>Symbol</th>
<th>Meaning</th>
</tr>
</thead>
<tbody>
<tr>
<td>\(d\)</td>
<td>width of the residual stream (e.g. 768 in GPT-2-small)</td>
</tr>
<tr>
<td>\(L\)</td>
<td>number of Transformer blocks</td>
</tr>
<tr>
<td>\(V\)</td>
<td>vocabulary size, so logits live in \(\mathbb R^{V}\)</td>
</tr>
<tr>
<td>\(h^{(\ell)}\)</td>
<td>residual-stream vector <em>entering</em> block \(\ell\)</td>
</tr>
<tr>
<td>\(r^{(\ell)}\)</td>
<td>the <strong>update</strong> written by block \(\ell\)</td>
</tr>
<tr>
<td>\(W_U \in \mathbb R^{V\times d},\; b \in \mathbb R^{V}\)</td>
<td>un-embedding matrix and bias</td>
</tr>
</tbody>
</table>
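<p>The last row's shapes can be checked with a toy numerical sketch. All sizes below are made-up stand-ins for illustration, not GPT-2's real \(d = 768\), \(V = 50257\):</p>

```python
import numpy as np

# Toy dimensions (illustrative assumption, far smaller than a real model).
d, V = 8, 20

rng = np.random.default_rng(0)
h_final = rng.standard_normal(d)      # final residual-stream vector h^(L)
W_U = rng.standard_normal((V, d))     # un-embedding matrix, shape (V, d)
b = rng.standard_normal(V)            # un-embedding bias, shape (V,)

# Logits live in R^V: a (V x d) matrix applied to a d-vector, plus a V-vector.
logits = W_U @ h_final + b
assert logits.shape == (V,)
```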
<p><strong>Additive residual stream.</strong>
With (pre-/peri-norm) residual connections,</p>
<p>\[
h^{(\ell+1)} \;=\; h^{(\ell)} + r^{(\ell)},
\qquad \ell = 0, \dots, L-1.
\]</p>
<p>Hence the final pre-logit state is the sum of \(L+1\) contributions (block 0 = token+positional embeddings):</p>
<p>\[
h^{(L)} = \sum_{\ell=0}^{L} r^{(\ell)}.
\]</p>
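<p>This additivity can be sketched numerically. Everything below (the dimensions, and random vectors standing in for real block updates) is an illustrative assumption:</p>

```python
import numpy as np

d, L = 8, 4                            # residual width, number of blocks (toy sizes)
rng = np.random.default_rng(1)

# r[0] plays the role of the token + positional embeddings;
# r[1..L] stand in for the updates written by blocks 1..L.
r = [rng.standard_normal(d) for _ in range(L + 1)]

# Run the stream additively: h^(l+1) = h^(l) + r^(l).
h = np.zeros(d)
for update in r:
    h = h + update

# The final pre-logit state is exactly the sum of all L+1 contributions.
assert np.allclose(h, sum(r))
```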
<hr>
<h2>2 Each update is <em>contained</em> in an ellipsoid</h2>
<p><em>Why a bound exists.</em>
Every sub-module (attention head or MLP)</p>
<ol>
<li><strong>reads</strong> a LayerNormed copy of its input, so \(\|u\|_2 \le \rho_\ell\) where \(\rho_\ell := \gamma_\ell \sqrt d\) and \(\gamma_\ell\) is that block’s learned scale;</li>
<li>applies <strong>linear</strong> maps, a <strong>Lipschitz</strong> point-wise non-linearity (GELU, SiLU, …), and another linear map back to \(\mathbb R^{d}\).</li>
</ol>
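<p>A rough sketch of such a sub-module and one admissible norm bound on its update, assuming a toy MLP, the tanh approximation of GELU, and \(1.13\) as a numerical bound on \(\sup_x |\mathrm{GELU}'(x)|\). All dimensions and constants here are illustrative assumptions, not values from any real model:</p>

```python
import numpy as np

# Toy MLP sub-module: r(u) = W_out @ gelu(W_in @ u).
d, d_ff = 8, 32
rng = np.random.default_rng(2)
W_in = rng.standard_normal((d_ff, d)) / np.sqrt(d)
W_out = rng.standard_normal((d, d_ff)) / np.sqrt(d_ff)

def gelu(x):
    # tanh approximation of GELU
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

rho = np.sqrt(d)    # LayerNorm output radius gamma * sqrt(d), with gamma = 1 assumed
lip_gelu = 1.13     # assumed numerical bound on sup |GELU'|

# One admissible bound: product of the spectral norms of the linear maps,
# the activation's Lipschitz constant, and the input radius rho.
kappa = np.linalg.norm(W_out, 2) * lip_gelu * np.linalg.norm(W_in, 2) * rho

# Empirical check: inputs on the LayerNorm sphere never escape the ball of radius kappa.
for _ in range(100):
    u = rng.standard_normal(d)
    u = rho * u / np.linalg.norm(u)    # ||u||_2 = rho
    r = W_out @ gelu(W_in @ u)
    assert np.linalg.norm(r) <= kappa
```

<p>The bound is loose (operator-norm products usually are), but it is enough to place every update inside a fixed ball.</p>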
<p>Because the composition of linear maps and Lipschitz functions is itself Lipschitz, there exists a constant \(\kappa_\ell\) such that</p>
<p>\[
\|r^{(\ell)}\|_2 \;\le\; \kappa_\ell
\qquad\text{whenever}\qquad \|u\|_2 \le \rho_\ell.
\]</p>
<p>Define the centred ellipsoid</p>
<p>\[
\mathcal E^{(\ell)} \;:=\; \bigl\{\, x \in \mathbb R^{d} : \|x\|_2 \le \kappa_\ell \bigr\}.
\]</p>
<p>Then <strong>every realisable update lies inside that ellipsoid</strong>:</p>
<p>\[
r^{(\ell)} \;\in\; \mathcal E^{(\ell)}.
\]</p>
0.593em;">)</span></span></span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.372em;">.</span></span></span></span></span></span></p>
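<p>Concretely, each per-layer set here is a centred Euclidean ball, so checking the containment is a single norm comparison. A minimal numpy sketch (the dimension and the radius are illustrative placeholders, not measured values):</p>

```python
import numpy as np

def in_layer_ellipsoid(r, kappa):
    """Membership in E^(l) = {x in R^d : ||x||_2 <= kappa}: one norm check."""
    return float(np.linalg.norm(r, 2)) <= kappa

# Illustrative values only: d and kappa are placeholders.
d, kappa = 16, 3.0
rng = np.random.default_rng(0)
r = rng.normal(size=d)
r = 0.9 * kappa * r / np.linalg.norm(r)   # an update with ||r||_2 = 0.9 * kappa

assert in_layer_ellipsoid(r, kappa)
assert not in_layer_ellipsoid(1.3 * r, kappa)   # ||1.3 r||_2 = 1.17 * kappa > kappa
```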
<hr>
<h2>3 Residual stream ⊆ Minkowski sum of ellipsoids</h2>
<p>Using additivity and Step 2,</p><p><span class="mjpage mjpage__block"><span class="mjx-chtml MJXc-display" style="text-align: center;"><span class="mjx-math" aria-label="
h^{(L)}
\;=\;\sum_{\ell=0}^{L} r^{(\ell)}
\;\in\;\sum_{\ell=0}^{L} \mathcal E^{(\ell)}
\;=:\;\mathcal E_{\text{tot}},
"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-msubsup"><span class="mjx-base"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em;">h</span></span></span><span class="mjx-sup" style="font-size: 70.7%; vertical-align: 0.584em; padding-left: 0px; padding-right: 0.071em;"><span class="mjx-texatom" style=""><span class="mjx-mrow"><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">(</span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em;">L</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">)</span></span></span></span></span></span><span class="mjx-mspace" style="width: 0.278em; height: 0px;"></span><span class="mjx-mo MJXc-space3"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.077em; padding-bottom: 0.298em;">=</span></span><span class="mjx-mspace" style="width: 0.278em; height: 0px;"></span><span class="mjx-munderover MJXc-space3"><span class="mjx-itable"><span class="mjx-row"><span class="mjx-cell"><span class="mjx-stack"><span class="mjx-over" style="font-size: 70.7%; padding-bottom: 0.258em; padding-top: 0.141em; padding-left: 0.681em;"><span class="mjx-texatom" style=""><span class="mjx-mrow"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em;">L</span></span></span></span></span><span class="mjx-op"><span class="mjx-mo"><span class="mjx-char MJXc-TeX-size2-R" style="padding-top: 0.74em; padding-bottom: 0.74em;">∑</span></span></span></span></span></span><span class="mjx-row"><span class="mjx-under" style="font-size: 70.7%; padding-top: 0.236em; padding-bottom: 0.141em; padding-left: 0.174em;"><span class="mjx-texatom" style=""><span class="mjx-mrow"><span class="mjx-mi"><span class="mjx-char 
MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.372em;">ℓ</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.077em; padding-bottom: 0.298em;">=</span></span><span class="mjx-mn"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">0</span></span></span></span></span></span></span></span><span class="mjx-msubsup MJXc-space1"><span class="mjx-base"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">r</span></span></span><span class="mjx-sup" style="font-size: 70.7%; vertical-align: 0.584em; padding-left: 0px; padding-right: 0.071em;"><span class="mjx-texatom" style=""><span class="mjx-mrow"><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">(</span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.372em;">ℓ</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">)</span></span></span></span></span></span><span class="mjx-mspace" style="width: 0.278em; height: 0px;"></span><span class="mjx-mo MJXc-space3"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.225em; padding-bottom: 0.372em;">∈</span></span><span class="mjx-mspace" style="width: 0.278em; height: 0px;"></span><span class="mjx-munderover MJXc-space3"><span class="mjx-itable"><span class="mjx-row"><span class="mjx-cell"><span class="mjx-stack"><span class="mjx-over" style="font-size: 70.7%; padding-bottom: 0.258em; padding-top: 0.141em; padding-left: 0.681em;"><span class="mjx-texatom" style=""><span class="mjx-mrow"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em;">L</span></span></span></span></span><span class="mjx-op"><span class="mjx-mo"><span class="mjx-char 
MJXc-TeX-size2-R" style="padding-top: 0.74em; padding-bottom: 0.74em;">∑</span></span></span></span></span></span><span class="mjx-row"><span class="mjx-under" style="font-size: 70.7%; padding-top: 0.236em; padding-bottom: 0.141em; padding-left: 0.174em;"><span class="mjx-texatom" style=""><span class="mjx-mrow"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.372em;">ℓ</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.077em; padding-bottom: 0.298em;">=</span></span><span class="mjx-mn"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">0</span></span></span></span></span></span></span></span><span class="mjx-msubsup MJXc-space1"><span class="mjx-base" style="margin-right: -0.036em;"><span class="mjx-texatom"><span class="mjx-mrow"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-cal-R" style="padding-top: 0.446em; padding-bottom: 0.372em; padding-right: 0.036em;">E</span></span></span></span></span><span class="mjx-sup" style="font-size: 70.7%; vertical-align: 0.646em; padding-left: 0.137em; padding-right: 0.071em;"><span class="mjx-texatom" style=""><span class="mjx-mrow"><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">(</span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.372em;">ℓ</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">)</span></span></span></span></span></span><span class="mjx-mspace" style="width: 0.278em; height: 0px;"></span><span class="mjx-mo MJXc-space3"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.151em; padding-bottom: 0.004em;">=<span class="mjx-charbox MJXc-TeX-main-R" style="padding-bottom: 0.314em;">:</span></span></span><span class="mjx-mspace" style="width: 0.278em; 
height: 0px;"></span><span class="mjx-msubsup MJXc-space3"><span class="mjx-base" style="margin-right: -0.036em;"><span class="mjx-texatom"><span class="mjx-mrow"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-cal-R" style="padding-top: 0.446em; padding-bottom: 0.372em; padding-right: 0.036em;">E</span></span></span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-texatom" style=""><span class="mjx-mrow"><span class="mjx-mtext"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.298em; padding-bottom: 0.372em;">tot</span></span></span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span></span></span></span></span></p><p>where <span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="\displaystyle
\sum_{\ell} \mathcal E^{(\ell)}=
\mathcal E^{(0)} \oplus \dots \oplus \mathcal E^{(L)}"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-mstyle"><span class="mjx-mrow"><span class="mjx-munderover"><span class="mjx-itable"><span class="mjx-row"><span class="mjx-cell"><span class="mjx-op"><span class="mjx-mo"><span class="mjx-char MJXc-TeX-size2-R" style="padding-top: 0.74em; padding-bottom: 0.74em;">∑</span></span></span></span></span><span class="mjx-row"><span class="mjx-under" style="font-size: 70.7%; padding-top: 0.236em; padding-bottom: 0.141em; padding-left: 0.813em;"><span class="mjx-texatom" style=""><span class="mjx-mrow"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.372em;">ℓ</span></span></span></span></span></span></span></span><span class="mjx-msubsup MJXc-space1"><span class="mjx-base" style="margin-right: -0.036em;"><span class="mjx-texatom"><span class="mjx-mrow"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-cal-R" style="padding-top: 0.446em; padding-bottom: 0.372em; padding-right: 0.036em;">E</span></span></span></span></span><span class="mjx-sup" style="font-size: 70.7%; vertical-align: 0.646em; padding-left: 0.137em; padding-right: 0.071em;"><span class="mjx-texatom" style=""><span class="mjx-mrow"><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">(</span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.372em;">ℓ</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">)</span></span></span></span></span></span><span class="mjx-mo MJXc-space3"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.077em; padding-bottom: 0.298em;">=</span></span><span class="mjx-msubsup MJXc-space3"><span class="mjx-base" style="margin-right: -0.036em;"><span class="mjx-texatom"><span class="mjx-mrow"><span 
class="mjx-mi"><span class="mjx-char MJXc-TeX-cal-R" style="padding-top: 0.446em; padding-bottom: 0.372em; padding-right: 0.036em;">E</span></span></span></span></span><span class="mjx-sup" style="font-size: 70.7%; vertical-align: 0.646em; padding-left: 0.137em; padding-right: 0.071em;"><span class="mjx-texatom" style=""><span class="mjx-mrow"><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">(</span></span><span class="mjx-mn"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">0</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">)</span></span></span></span></span></span><span class="mjx-mo MJXc-space2"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.298em; padding-bottom: 0.446em;">⊕</span></span><span class="mjx-mo MJXc-space2"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.004em; padding-bottom: 0.298em;">⋯</span></span><span class="mjx-mo MJXc-space2"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.298em; padding-bottom: 0.446em;">⊕</span></span><span class="mjx-msubsup MJXc-space2"><span class="mjx-base" style="margin-right: -0.036em;"><span class="mjx-texatom"><span class="mjx-mrow"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-cal-R" style="padding-top: 0.446em; padding-bottom: 0.372em; padding-right: 0.036em;">E</span></span></span></span></span><span class="mjx-sup" style="font-size: 70.7%; vertical-align: 0.646em; padding-left: 0.137em; padding-right: 0.071em;"><span class="mjx-texatom" style=""><span class="mjx-mrow"><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">(</span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em;">L</span></span><span class="mjx-mo"><span class="mjx-char 
MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">)</span></span></span></span></span></span></span></span></span></span></span></span>
is the <strong>Minkowski sum</strong> of the individual ellipsoids.</p>
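<p>Because each summand is a centred ball, the Minkowski sum is easiest to work with through support functions, which simply add: h<sub>A⊕B</sub>(u) = h<sub>A</sub>(u) + h<sub>B</sub>(u), and a ball of radius κ has h(u) = κ‖u‖₂. A hedged numpy sketch (the per-layer radii are placeholders):</p>

```python
import numpy as np

def support_ball(u, kappa):
    """Support function of {||x||_2 <= kappa}: max_x <u, x> = kappa * ||u||_2."""
    return kappa * float(np.linalg.norm(u, 2))

def support_minkowski_sum(u, kappas):
    """Support functions add under Minkowski sums: h_{A + B} = h_A + h_B."""
    return sum(support_ball(u, k) for k in kappas)

# Illustrative per-layer radii (placeholders, not measured values).
kappas = [1.0, 0.5, 2.0]
u = np.array([3.0, 4.0])                     # ||u||_2 = 5
h = support_minkowski_sum(u, kappas)
assert np.isclose(h, sum(kappas) * 5.0)      # (1 + 0.5 + 2) * 5 = 17.5
```

For balls this collapses to a single ball of radius Σ<sub>ℓ</sub> κ<sub>ℓ</sub>; the support-function view is what survives once the summands stop being balls in the next step.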
<hr>
<h2>4 Logit space is an affine image of that sum</h2>
<p>Logits are produced by the affine map <span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="x\mapsto W_Ux+b"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">x</span></span><span class="mjx-mo MJXc-space3"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.225em; padding-bottom: 0.372em;">↦</span></span><span class="mjx-msubsup MJXc-space3"><span class="mjx-base" style="margin-right: -0.104em;"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.104em;">W</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mi" style=""><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.084em;">U</span></span></span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">x</span></span><span class="mjx-mo MJXc-space2"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.298em; padding-bottom: 0.446em;">+</span></span><span class="mjx-mi MJXc-space2"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em;">b</span></span></span></span></span></span>.
For any sets <span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="S_1,\dots,S_m"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-msubsup"><span class="mjx-base" style="margin-right: -0.032em;"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.519em; padding-bottom: 0.298em; padding-right: 0.032em;">S</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mn" style=""><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">1</span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-mo MJXc-space1"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.372em;">…</span></span><span class="mjx-mo MJXc-space1"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-msubsup MJXc-space1"><span class="mjx-base" style="margin-right: -0.032em;"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.519em; padding-bottom: 0.298em; padding-right: 0.032em;">S</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mi" style=""><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">m</span></span></span></span></span></span></span></span>,</p><p><span class="mjpage mjpage__block"><span class="mjx-chtml MJXc-display" style="text-align: center;"><span class="mjx-math" aria-label="
W_U \Bigl(\bigoplus_{i} S_i\Bigr)
= \bigoplus_{i} W_U S_i.
"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-msubsup"><span class="mjx-base" style="margin-right: -0.104em;"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.104em;">W</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mi" style=""><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.084em;">U</span></span></span></span><span class="mjx-mstyle"><span class="mjx-mrow"><span class="mjx-texatom"><span class="mjx-mrow"><span class="mjx-mo"><span class="mjx-char MJXc-TeX-size2-R" style="padding-top: 0.961em; padding-bottom: 0.961em;">(</span></span></span></span></span></span><span class="mjx-munderover"><span class="mjx-itable"><span class="mjx-row"><span class="mjx-cell"><span class="mjx-op"><span class="mjx-mo"><span class="mjx-char MJXc-TeX-size2-R" style="padding-top: 0.74em; padding-bottom: 0.74em;">⨁</span></span></span></span></span><span class="mjx-row"><span class="mjx-under" style="font-size: 70.7%; padding-top: 0.236em; padding-bottom: 0.141em; padding-left: 0.896em;"><span class="mjx-texatom" style=""><span class="mjx-mrow"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em;">i</span></span></span></span></span></span></span></span><span class="mjx-msubsup MJXc-space1"><span class="mjx-base" style="margin-right: -0.032em;"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.519em; padding-bottom: 0.298em; padding-right: 0.032em;">S</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mi" style=""><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em;">i</span></span></span></span><span class="mjx-mstyle"><span 
class="mjx-mrow"><span class="mjx-texatom"><span class="mjx-mrow"><span class="mjx-mo"><span class="mjx-char MJXc-TeX-size2-R" style="padding-top: 0.961em; padding-bottom: 0.961em;">)</span></span></span></span></span></span><span class="mjx-mo MJXc-space3"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.077em; padding-bottom: 0.298em;">=</span></span><span class="mjx-munderover MJXc-space3"><span class="mjx-itable"><span class="mjx-row"><span class="mjx-cell"><span class="mjx-op"><span class="mjx-mo"><span class="mjx-char MJXc-TeX-size2-R" style="padding-top: 0.74em; padding-bottom: 0.74em;">⨁</span></span></span></span></span><span class="mjx-row"><span class="mjx-under" style="font-size: 70.7%; padding-top: 0.236em; padding-bottom: 0.141em; padding-left: 0.896em;"><span class="mjx-texatom" style=""><span class="mjx-mrow"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em;">i</span></span></span></span></span></span></span></span><span class="mjx-msubsup MJXc-space1"><span class="mjx-base" style="margin-right: -0.104em;"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.104em;">W</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mi" style=""><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.084em;">U</span></span></span></span><span class="mjx-msubsup"><span class="mjx-base" style="margin-right: -0.032em;"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.519em; padding-bottom: 0.298em; padding-right: 0.032em;">S</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mi" style=""><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; 
padding-bottom: 0.298em;">i</span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.372em;">.</span></span></span></span></span></span></p><p>Hence</p><p><span class="mjpage mjpage__block"><span class="mjx-chtml MJXc-display" style="text-align: center;"><span class="mjx-math" aria-label="
\text{logits}
\;=\; W_U h^{(L)} + b
\;\in\; b \;+\; \bigoplus_{\ell=0}^{L} W_U\mathcal E^{(\ell)}.
"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-mtext"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.519em;">logits</span></span><span class="mjx-mspace" style="width: 0.278em; height: 0px;"></span><span class="mjx-mo MJXc-space3"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.077em; padding-bottom: 0.298em;">=</span></span><span class="mjx-mspace" style="width: 0.278em; height: 0px;"></span><span class="mjx-msubsup MJXc-space3"><span class="mjx-base" style="margin-right: -0.104em;"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.104em;">W</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mi" style=""><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.084em;">U</span></span></span></span><span class="mjx-msubsup"><span class="mjx-base"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em;">h</span></span></span><span class="mjx-sup" style="font-size: 70.7%; vertical-align: 0.584em; padding-left: 0px; padding-right: 0.071em;"><span class="mjx-texatom" style=""><span class="mjx-mrow"><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">(</span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em;">L</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">)</span></span></span></span></span></span><span class="mjx-mo MJXc-space2"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.298em; padding-bottom: 0.446em;">+</span></span><span class="mjx-mi MJXc-space2"><span class="mjx-char MJXc-TeX-math-I" 
style="padding-top: 0.446em; padding-bottom: 0.298em;">b</span></span><span class="mjx-mspace" style="width: 0.278em; height: 0px;"></span><span class="mjx-mo MJXc-space3"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.225em; padding-bottom: 0.372em;">∈</span></span><span class="mjx-mspace" style="width: 0.278em; height: 0px;"></span><span class="mjx-mi MJXc-space3"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em;">b</span></span><span class="mjx-mspace" style="width: 0.278em; height: 0px;"></span><span class="mjx-mo MJXc-space2"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.298em; padding-bottom: 0.446em;">+</span></span><span class="mjx-mspace" style="width: 0.278em; height: 0px;"></span><span class="mjx-munderover MJXc-space2"><span class="mjx-itable"><span class="mjx-row"><span class="mjx-cell"><span class="mjx-stack"><span class="mjx-over" style="font-size: 70.7%; padding-bottom: 0.258em; padding-top: 0.141em; padding-left: 0.728em;"><span class="mjx-texatom" style=""><span class="mjx-mrow"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em;">L</span></span></span></span></span><span class="mjx-op"><span class="mjx-mo"><span class="mjx-char MJXc-TeX-size2-R" style="padding-top: 0.74em; padding-bottom: 0.74em;">⨁</span></span></span></span></span></span><span class="mjx-row"><span class="mjx-under" style="font-size: 70.7%; padding-top: 0.236em; padding-bottom: 0.141em; padding-left: 0.221em;"><span class="mjx-texatom" style=""><span class="mjx-mrow"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.372em;">ℓ</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.077em; padding-bottom: 0.298em;">=</span></span><span class="mjx-mn"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 
0.372em;">0</span></span></span></span></span></span></span></span><span class="mjx-msubsup MJXc-space1"><span class="mjx-base" style="margin-right: -0.104em;"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.104em;">W</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mi" style=""><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.084em;">U</span></span></span></span><span class="mjx-msubsup"><span class="mjx-base" style="margin-right: -0.036em;"><span class="mjx-texatom"><span class="mjx-mrow"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-cal-R" style="padding-top: 0.446em; padding-bottom: 0.372em; padding-right: 0.036em;">E</span></span></span></span></span><span class="mjx-sup" style="font-size: 70.7%; vertical-align: 0.646em; padding-left: 0.137em; padding-right: 0.071em;"><span class="mjx-texatom" style=""><span class="mjx-mrow"><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">(</span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.372em;">ℓ</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">)</span></span></span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.372em;">.</span></span></span></span></span></span></p><p>Because linear images of ellipsoids are ellipsoids, each <span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="W_U\mathcal E^{(\ell)}"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-msubsup"><span class="mjx-base" style="margin-right: -0.104em;"><span class="mjx-mi"><span class="mjx-char 
MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.104em;">W</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mi" style=""><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.084em;">U</span></span></span></span><span class="mjx-msubsup"><span class="mjx-base" style="margin-right: -0.036em;"><span class="mjx-texatom"><span class="mjx-mrow"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-cal-R" style="padding-top: 0.446em; padding-bottom: 0.372em; padding-right: 0.036em;">E</span></span></span></span></span><span class="mjx-sup" style="font-size: 70.7%; vertical-align: 0.646em; padding-left: 0.137em; padding-right: 0.071em;"><span class="mjx-texatom" style=""><span class="mjx-mrow"><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">(</span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.372em;">ℓ</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">)</span></span></span></span></span></span></span></span></span></span> is still an ellipsoid.</p>
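<p>The image of a centred ball of radius κ under a matrix W has the closed-form support function κ‖W<sup>⊤</sup>u‖₂, so sampled image points never exceed it. A small sanity check with a stand-in matrix (the shapes are illustrative, not a real unembedding):</p>

```python
import numpy as np

def support_image_of_ball(W, u, kappa):
    """Support function of W * {||x||_2 <= kappa}: kappa * ||W^T u||_2."""
    return kappa * float(np.linalg.norm(W.T @ u, 2))

rng = np.random.default_rng(1)
W = rng.normal(size=(5, 3))      # stand-in for a small unembedding matrix
u = rng.normal(size=5)
kappa = 2.0

# Sample points on the sphere of radius kappa and map them through W:
xs = rng.normal(size=(1000, 3))
xs = kappa * xs / np.linalg.norm(xs, axis=1, keepdims=True)
vals = xs @ (W.T @ u)            # <u, W x> for each sample x

assert vals.max() <= support_image_of_ball(W, u, kappa) + 1e-9
```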
<hr>
<h2>5 Ellipsotopes</h2>
<p>An <strong>ellipsotope</strong> is an affine shift of a finite Minkowski sum of ellipsoids. (A Minkowski sum of ellipsoids is in general not itself an ellipsoid, which is why this strictly larger class is needed.)
The set</p><p><span class="mjpage mjpage__block"><span class="mjx-chtml MJXc-display" style="text-align: center;"><span class="mjx-math" aria-label="
\boxed{\;
\mathcal L_{\text{outer}}
\;:=\;
b
\;+\;
\bigoplus_{\ell=0}^{L} W_U\mathcal E^{(\ell)}
\;}
"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-menclose"><span class="mjx-box" style="padding: 0.126em; border: 1px solid;"><span class="mjx-mrow"><span class="mjx-texatom"><span class="mjx-mrow"><span class="mjx-mstyle"><span class="mjx-mrow"><span class="mjx-texatom"><span class="mjx-mrow"><span class="mjx-mspace" style="width: 0.278em; height: 0px;"></span><span class="mjx-msubsup"><span class="mjx-base"><span class="mjx-texatom"><span class="mjx-mrow"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-cal-R" style="padding-top: 0.446em; padding-bottom: 0.372em;">L</span></span></span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-texatom" style=""><span class="mjx-mrow"><span class="mjx-mtext"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.298em; padding-bottom: 0.372em;">outer</span></span></span></span></span></span><span class="mjx-mspace" style="width: 0.278em; height: 0px;"></span><span class="mjx-mo MJXc-space3"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.077em; padding-bottom: 0.372em;">:<span class="mjx-charbox MJXc-TeX-main-R" style="padding-bottom: 0.314em;">=</span></span></span><span class="mjx-mspace" style="width: 0.278em; height: 0px;"></span><span class="mjx-mi MJXc-space3"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em;">b</span></span><span class="mjx-mspace" style="width: 0.278em; height: 0px;"></span><span class="mjx-mo MJXc-space2"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.298em; padding-bottom: 0.446em;">+</span></span><span class="mjx-mspace" style="width: 0.278em; height: 0px;"></span><span class="mjx-munderover MJXc-space2"><span class="mjx-itable"><span class="mjx-row"><span class="mjx-cell"><span class="mjx-stack"><span class="mjx-over" style="font-size: 70.7%; padding-bottom: 0.258em; padding-top: 0.141em; padding-left: 0.728em;"><span 
class="mjx-texatom" style=""><span class="mjx-mrow"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em;">L</span></span></span></span></span><span class="mjx-op"><span class="mjx-mo"><span class="mjx-char MJXc-TeX-size2-R" style="padding-top: 0.74em; padding-bottom: 0.74em;">⨁</span></span></span></span></span></span><span class="mjx-row"><span class="mjx-under" style="font-size: 70.7%; padding-top: 0.236em; padding-bottom: 0.141em; padding-left: 0.221em;"><span class="mjx-texatom" style=""><span class="mjx-mrow"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.372em;">ℓ</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.077em; padding-bottom: 0.298em;">=</span></span><span class="mjx-mn"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">0</span></span></span></span></span></span></span></span><span class="mjx-msubsup MJXc-space1"><span class="mjx-base" style="margin-right: -0.104em;"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.104em;">W</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mi" style=""><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.084em;">U</span></span></span></span><span class="mjx-msubsup"><span class="mjx-base" style="margin-right: -0.036em;"><span class="mjx-texatom"><span class="mjx-mrow"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-cal-R" style="padding-top: 0.446em; padding-bottom: 0.372em; padding-right: 0.036em;">E</span></span></span></span></span><span class="mjx-sup" style="font-size: 70.7%; vertical-align: 0.646em; padding-left: 0.137em; padding-right: 0.071em;"><span class="mjx-texatom" 
style=""><span class="mjx-mrow"><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">(</span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.372em;">ℓ</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">)</span></span></span></span></span></span><span class="mjx-mspace" style="width: 0.278em; height: 0px;"></span></span></span></span></span></span></span></span></span></span></span></span></span></span></p><p>therefore <em>is</em> an ellipsotope.</p>
<hr>
<h2>6 Main result (outer bound)</h2>
<blockquote>
<p><strong>Theorem.</strong>
For any pre-norm or peri-norm Transformer language model whose blocks receive LayerNormed inputs, the set <span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="\mathcal L"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-texatom"><span class="mjx-mrow"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-cal-R" style="padding-top: 0.446em; padding-bottom: 0.372em;">L</span></span></span></span></span></span></span></span> of all logit vectors <strong>attainable over every prompt and position</strong> satisfies</p><p><span class="mjpage mjpage__block"><span class="mjx-chtml MJXc-display" style="text-align: center;"><span class="mjx-math" aria-label="
\mathcal L \;\subseteq\; \mathcal L_{\mathrm{outer}},
"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-texatom"><span class="mjx-mrow"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-cal-R" style="padding-top: 0.446em; padding-bottom: 0.372em;">L</span></span></span></span><span class="mjx-mspace" style="width: 0.278em; height: 0px;"></span><span class="mjx-mo MJXc-space3"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.446em;">⊆</span></span><span class="mjx-mspace" style="width: 0.278em; height: 0px;"></span><span class="mjx-msubsup MJXc-space3"><span class="mjx-base"><span class="mjx-texatom"><span class="mjx-mrow"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-cal-R" style="padding-top: 0.446em; padding-bottom: 0.372em;">L</span></span></span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-texatom" style=""><span class="mjx-mrow"><span class="mjx-texatom"><span class="mjx-mrow"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.151em; padding-bottom: 0.372em;">o</span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.151em; padding-bottom: 0.372em;">u</span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.298em; padding-bottom: 0.372em;">t</span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.151em; padding-bottom: 0.372em;">e</span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.151em; padding-bottom: 0.372em;">r</span></span></span></span></span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span></span></span></span></span></p><p>where <span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="\mathcal L_{\mathrm{outer}}"><span class="mjx-mrow" 
aria-hidden="true"><span class="mjx-msubsup"><span class="mjx-base"><span class="mjx-texatom"><span class="mjx-mrow"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-cal-R" style="padding-top: 0.446em; padding-bottom: 0.372em;">L</span></span></span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-texatom" style=""><span class="mjx-mrow"><span class="mjx-texatom"><span class="mjx-mrow"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.151em; padding-bottom: 0.372em;">o</span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.151em; padding-bottom: 0.372em;">u</span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.298em; padding-bottom: 0.372em;">t</span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.151em; padding-bottom: 0.372em;">e</span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.151em; padding-bottom: 0.372em;">r</span></span></span></span></span></span></span></span></span></span></span></span> is the ellipsotope defined above.</p>
</blockquote>
<p><em>Proof.</em>
Containments in Steps 2–4 compose to give the stated inclusion; Step 5 shows the outer set is an ellipsotope. ∎</p>
<hr>
<h2>7 Remarks & implications</h2>
<ul>
<li>
<p><strong>It is an <em>outer</em> approximation.</strong>
Equality <span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="\mathcal L=\mathcal L_{\text{outer}}"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-texatom"><span class="mjx-mrow"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-cal-R" style="padding-top: 0.446em; padding-bottom: 0.372em;">L</span></span></span></span><span class="mjx-mo MJXc-space3"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.077em; padding-bottom: 0.298em;">=</span></span><span class="mjx-msubsup MJXc-space3"><span class="mjx-base"><span class="mjx-texatom"><span class="mjx-mrow"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-cal-R" style="padding-top: 0.446em; padding-bottom: 0.372em;">L</span></span></span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-texatom" style=""><span class="mjx-mrow"><span class="mjx-mtext"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.298em; padding-bottom: 0.372em;">outer</span></span></span></span></span></span></span></span></span></span> would require showing that <em>every</em> point of the ellipsotope can actually be realised by some token context, which the argument does not provide.</p>
</li>
<li>
<p><strong>Geometry-aware compression and safety.</strong>
Be</p></li></ul>... | The Geometry of LLM Logits (an analytical outer bound)
----------------------------------------
1 Preliminaries
Symbol | Meaning
d | width of the residual stream (e.g. 768 in GPT-2-small)
L | number of Transformer blocks
V | vocabulary size, so logits live in R^V
h(ℓ) | residual-stream vector entering block ℓ
r(ℓ) | the update written by block ℓ
W_U ∈ R^{V×d}, b ∈ R^V | un-embedding matrix and bias
Additive residual stream. With (pre-/peri-norm) residual connections,
h(ℓ+1) = h(ℓ) + r(ℓ),  ℓ = 0, …, L−1.
Hence the final pre-logit state is the sum of L+1 contributions (block 0 = token+positional embeddings):
h(L) = ∑_{ℓ=0}^{L} r(ℓ).
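As a toy numerical illustration of this additivity (plain NumPy with made-up dimensions, not code from any real model), the final pre-logit state is literally the running sum of the per-block updates:

```python
import numpy as np

rng = np.random.default_rng(0)
d, L = 8, 4                                  # toy residual width and block count

# r[0] plays the role of the token+positional embeddings; r[1..L] are block updates.
r = [rng.normal(size=d) for _ in range(L + 1)]

h = np.zeros(d)
for update in r:                             # h(l+1) = h(l) + r(l)
    h = h + update

# h(L) equals the sum of all L+1 contributions.
assert np.allclose(h, np.sum(r, axis=0))
```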
----------------------------------------
2 Each update is contained in an ellipsoid
Why a bound exists. Every sub-module (attention head or MLP)
1. reads a LayerNormed copy u of its input, so ∥u∥₂ ≤ ρ_ℓ, where ρ_ℓ := γ_ℓ √d and γ_ℓ is that block’s learned scale;
2. applies linear maps, a Lipschitz point-wise non-linearity (GELU, SiLU, …), and another linear map back to R^d.
Because the composition of linear maps and Lipschitz functions is itself Lipschitz, there exists a constant κ_ℓ such that
∥r(ℓ)∥₂ ≤ κ_ℓ  whenever  ∥u∥₂ ≤ ρ_ℓ.
Define the centred ellipsoid
E(ℓ) := { x ∈ R^d : ∥x∥₂ ≤ κ_ℓ }.
Then every realisable update lies inside that ellipsoid:
r(ℓ) ∈ E(ℓ).
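To make the existence of κ_ℓ concrete, here is a minimal sketch for a bias-free two-layer MLP with a 1-Lipschitz nonlinearity fixing zero (ReLU stands in for GELU/SiLU; all names and dimensions are hypothetical, not taken from the post). Chaining spectral norms gives one crude admissible κ_ℓ, and random inputs from the LayerNorm ball indeed never escape it:

```python
import numpy as np

rng = np.random.default_rng(1)
d, d_ff = 16, 64
W_in = rng.normal(size=(d_ff, d)) / np.sqrt(d)
W_out = rng.normal(size=(d, d_ff)) / np.sqrt(d_ff)
rho = np.sqrt(d)                              # LayerNorm-ball radius with gamma = 1

relu = lambda x: np.maximum(x, 0.0)           # 1-Lipschitz and relu(0) = 0

# Crude Lipschitz chain: ||r|| <= ||W_out||_2 * 1 * ||W_in||_2 * ||u||_2 <= kappa.
kappa = np.linalg.norm(W_out, 2) * np.linalg.norm(W_in, 2) * rho

# Empirical check: every update from inside the LayerNorm ball lies in the kappa-ball.
for _ in range(1000):
    u = rng.normal(size=d)
    u = u / np.linalg.norm(u) * rho * rng.uniform()   # random point with ||u||_2 <= rho
    r_upd = W_out @ relu(W_in @ u)
    assert np.linalg.norm(r_upd) <= kappa + 1e-9
```

Tighter constants exist (the spectral-norm product is usually loose), but any valid κ_ℓ suffices for the containment argument.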
----------------------------------------
3 Residual stream ⊆ Minkowski sum of ellipsoids
Using additivity and Step 2,
h(L) = ∑_{ℓ=0}^{L} r(ℓ) ∈ ∑_{ℓ=0}^{L} E(ℓ) =: E_tot,
where ∑_ℓ E(ℓ) = E(0) ⊕ ⋯ ⊕ E(L) is the Minkowski sum of the individual ellipsoids.
----------------------------------------
4 Logit space is an affine image of that sum
Logits are produced by the affine map x ↦ W_U x + b. For any sets S_1, …, S_m,
W_U (⨁_i S_i) = ⨁_i W_U S_i.
Hence
logits = W_U h(L) + b ∈ b + ⨁_{ℓ=0}^{L} W_U E(ℓ).
Because linear images of ellipsoids are ellipsoids, each W_U E(ℓ) is still an ellipsoid.
----------------------------------------
5 Ellipsotopes
An ellipsotope is an affine shift of a finite Minkowski sum of ellipsoids. The set
L_outer := b + ⨁_{ℓ=0}^{L} W_U E(ℓ)
therefore is an ellipsotope.
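Because each E(ℓ) above is a centred ball of radius κ_ℓ, the ellipsotope admits a closed-form support function: support functions add under Minkowski sums, so for any direction u, max over L_outer of ⟨u, z⟩ equals ⟨u, b⟩ + Σ_ℓ κ_ℓ ∥W_Uᵀ u∥₂. The sketch below (toy 2-D numbers; `support_outer` is a hypothetical helper, not from the post) shows how this yields a cheap sampled-direction test of the outer bound:

```python
import numpy as np

def support_outer(u, W_U, b, kappas):
    """Support function of L_outer = b + (+)_l W_U E(l), E(l) a ball of radius kappa_l.

    For a centred ball of radius k, max_{||x||_2 <= k} <u, W_U x> = k * ||W_U^T u||_2,
    and support functions add under Minkowski sums.
    """
    return float(u @ b + sum(kappas) * np.linalg.norm(W_U.T @ u))

# Toy check: W_U = I_2, b = 0, kappas = (1, 2); along e1 the outer bound is 1 + 2 = 3.
W_U = np.eye(2)
b = np.zeros(2)
e1 = np.array([1.0, 0.0])
assert np.isclose(support_outer(e1, W_U, b, (1.0, 2.0)), 3.0)

# Any attainable logit vector z must satisfy <u, z> <= support_outer(u, ...) for all u,
# giving a necessary (sampled-direction) membership test for the theorem's outer set.
```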
----------------------- | 593 | 1.2.0 | Revision | false | null | null | CrosspostOutput |
||
nXGn66aBsYmZRTqzE | memory-decoding-journal-club-structure-and-function-of-the | Memory Decoding Journal Club: Structure and function of the hippocampal CA3 module | null | false | false | false | null | Z7pbtaLLmZuhjaHa3 | null | true | false | false | false | Post | null | 2025-05-30T01:08:07.317Z | null | false | false | 2 | 2 | null | false | false | post | [] | null | null | jYL2SiM3EgFq2cGij | 0 | 1 | 1 | false | 0.000878 | null | false | false | 2025-05-30T01:08:07.317Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 0 | 0 | 2025-05-30T01:06:49.402Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 1 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "3uE2pXvbcnS9nnZRE",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:50.898Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 27,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "zJtgSyKntXrnkArbY",
"displayName": "kistune"
},
{
"_id": "XqyQTGFGoKfZtdqN3",
"displayName": "kINo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Modeling",
"needsReview": false,
"noindex": false,
"postCount": 5855,
"score": 2,
"shortName": null,
"slug": "world-modeling",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 1 | 0 | 0 | 0 | 0 | Z7pbtaLLmZuhjaHa3 | devin-ward | 2025-01-30T00:31:45.267Z | Carboncopies Foundation | Devin Ward | null | null | Devin Ward | 4 | 0 | false | false | <p>Carboncopies Foundation volunteer</p><p>https://carboncopies.org/</p> | null | null | 14 | 0 | 0 | 0 | 0 | 0.9 | 0 | 55XxDBpfKkkBPm9H8 | User | null | null | null | null | null | null | nXGn66aBsYmZRTqzE | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/nXGn66aBsYmZRTqzE/avp2llmxxlkykocbq0k8 | SocialPreviewType | jYL2SiM3EgFq2cGij | <figure class="image"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/nXGn66aBsYmZRTqzE/sxnx5i1nbj02tezid6bg" srcset="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/nXGn66aBsYmZRTqzE/vqtoyqelttmvrow1sjfx 120w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/nXGn66aBsYmZRTqzE/d1fkhrkhoqkpvrgwqg6s 240w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/nXGn66aBsYmZRTqzE/vlpw6mdcbuqrsplvtidn 360w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/nXGn66aBsYmZRTqzE/larnu0ia0bvmdtbhbw4x 480w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/nXGn66aBsYmZRTqzE/qz8eqcnxntnbn3qg9u4x 600w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/nXGn66aBsYmZRTqzE/vldgaf1ypecq8u3ifi9e 720w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/nXGn66aBsYmZRTqzE/nvjy63dltjihn8upexoa 840w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/nXGn66aBsYmZRTqzE/czzrajdbogsgym3nyftd 960w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/nXGn66aBsYmZRTqzE/igqe8rohgq14d99xmish 1080w, 
https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/nXGn66aBsYmZRTqzE/csfyvlowkvy4p1j1keoe 1200w"></figure><h3><strong>Join Us for the Memory Decoding Journal Club! </strong></h3><p><i>A collaboration of the <strong>Carboncopies Foundation</strong> and <strong>BPF Aspirational Neuroscience</strong></i></p><p>This time, we’re diving into a groundbreaking paper:<br><strong>"Structure and function of the hippocampal CA3 module"</strong></p><p><strong>Authors:</strong> <a href="https://www.pnas.org/doi/10.1073/pnas.2312281120#con1">Rosanna P. Sammons</a>, <a href="https://www.pnas.org/doi/10.1073/pnas.2312281120#con2">Mourat Vezir</a>, <a href="https://www.pnas.org/doi/10.1073/pnas.2312281120#con3">Laura Moreno-Velasquez</a>, <a href="https://www.pnas.org/doi/10.1073/pnas.2312281120#con4">Gaspar Can</a>o, <a href="https://www.pnas.org/doi/10.1073/pnas.2312281120#con5">Marta Orlando</a>, <a href="https://www.pnas.org/doi/10.1073/pnas.2312281120#con6">Meike Sievers</a>, <a href="https://www.pnas.org/doi/10.1073/pnas.2312281120#con7">Eleonora Grasso</a>, <a href="https://www.pnas.org/doi/10.1073/pnas.2312281120#con8">Verjinia D. Metodieva</a>, <a href="https://www.pnas.org/doi/10.1073/pnas.2312281120#con9">Richard Kempter</a>, <a href="https://www.pnas.org/doi/10.1073/pnas.2312281120#con10">Helene Schmidt</a>, and <a href="https://www.pnas.org/doi/10.1073/pnas.2312281120#con11">Dietmar Schmitz</a></p><p> <strong>Institutions: </strong>Neuroscience Research Center, Charité-Universitätsmedizin, Ernst Strüngmann Institute for Neuroscience, Institute for Theoretical Biology, Department of Biology, Humboldt-Universität, Department of Connectomics, Max Planck Institute for Brain Research, Bernstein Center for Computational Neuroscience, Einstein Center for Neurosciences, German Center for Neurodegenerative Diseases, Max-Delbrück Center for Molecular Medicine in the Helmholtz Association</p><p>Presented by: Dr. 
Kenneth Hayworth </p><p><strong>When?</strong> <strong>June 3rd, 2025</strong> – 3:00 PM PDT | 6:00 PM EDT | 10:00 PM UTC</p><p><strong>Where? Video conference: </strong><a href="https://carboncopies.org/aspirational-neuroscience"><strong><u>https://carboncopies.org/aspirational-neuroscience</u></strong></a></p><p>Register for updates:<a href="https://aspirationalneuroscience.org/register-with-us/"> <u>https://aspirationalneuroscience.org/register-with-us/</u></a></p><p>Once registered, you'll receive event invites & updates!</p><p><strong>#Neuroscience #MemoryResearch #Amygdala #JournalClub #BrainScience #Carboncopies #AspirationalNeuroscience</strong></p> | Join Us for the Memory Decoding Journal Club!
A collaboration of the Carboncopies Foundation and BPF Aspirational Neuroscience
This time, we’re diving into a groundbreaking paper:
"Structure and function of the hippocampal CA3 module"
Authors: Rosanna P. Sammons, Mourat Vezir, Laura Moreno-Velasquez, Gaspar Cano, Marta Orlando, Meike Sievers, Eleonora Grasso, Verjinia D. Metodieva, Richard Kempter, Helene Schmidt, and Dietmar Schmitz
Institutions: Neuroscience Research Center, Charité-Universitätsmedizin, Ernst Strüngmann Institute for Neuroscience, Institute for Theoretical Biology, Department of Biology, Humboldt-Universität, Department of Connectomics, Max Planck Institute for Brain Research, Bernstein Center for Computational Neuroscience, Einstein Center for Neurosciences, German Center for Neurodegenerative Diseases, Max-Delbrück Center for Molecular Medicine in the Helmholtz Association
Presented by: Dr. Kenneth Hayworth
When? June 3rd, 2025 – 3:00 PM PDT | 6:00 PM EDT | 10:00 PM UTC
Where? Video conference: https://carboncopies.org/aspirational-neuroscience
Register for updates: https://aspirationalneuroscience.org/register-with-us/
Once registered, you'll receive event invites & updates!
#Neuroscience #MemoryResearch #Amygdala #JournalClub #BrainScience #Carboncopies #AspirationalNeuroscience | 157 | 1.1.1 | Revision | false | null | null | CrosspostOutput |
5AGK8b3rm8YD7jhxi | cfar-is-running-an-experimental-mini-workshop-june-2-6 | CFAR is running an experimental mini-workshop (June 2-6, Berkeley CA)! | null | false | false | false | null | ENAKb6dGLPsbeNptL | null | true | false | false | false | Post | null | 2025-05-29T22:02:41.052Z | null | false | false | 2 | 2 | null | false | false | post | [
"pnFbJAtNHGDK8PHQx"
] | null | null | H67ZN3WbJ6suCNnKX | 2 | 16 | 64 | false | 0.033745 | null | false | false | 2025-05-30T05:10:28.965Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 22 | 0 | 2025-05-29T20:28:07.159Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 3 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "X7v7Fyp9cgBYaMe2e",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-09T17:06:52.020Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Center for Applied Rationality (CFAR)",
"needsReview": false,
"noindex": false,
"postCount": 82,
"score": 0,
"shortName": null,
"slug": "center-for-applied-rationality-cfar",
"suggestedAsFilter": false,
"userId": "qxJ28GN72aiJu96iF",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "Ng8Gice9KNkncxqcj",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 1,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:17.072Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 100,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "iMqytjy9ns89Fzfyv",
"displayName": "miakko"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Rationality",
"needsReview": false,
"noindex": false,
"postCount": 4302,
"score": 1,
"shortName": null,
"slug": "rationality",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 1,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 16 | 0 | 0 | 9 | 0 | ENAKb6dGLPsbeNptL | davis_kingsley | 2015-07-21T18:59:27.940Z | Davis_Kingsley | Davis_Kingsley | null | null | null | 2,154 | 0 | false | false | null | null | 41 | 172 | 1 | 0 | 0 | 1 | 0 | r38pkCm7wF4M44MDQ | User | null | null | null | [
"canModeratePersonal",
"trustLevel1"
] | null | null | 5AGK8b3rm8YD7jhxi | https://res.cloudinary.com/lesswrong-2-0/image/upload/c_fill,ar_1.91,g_auto/SocialPreview/oahnjn3rrgrkpuv2xhrs | SocialPreviewType | H67ZN3WbJ6suCNnKX | <p>Hello from the Center for Applied Rationality!</p><p>Some of you may have attended our classic applied rationality workshops in the past; others of you may have wanted to attend a workshop but not yet had a chance to. It's been a while since we've last run public-facing workshops, but we wanted to say:<br> </p><ul><li>We're not dead! (More on that perhaps in a later post)</li><li>We have a new experimental mini-workshop coming up soon and hopefully more workshop content to follow after!<br> </li></ul><p>Our new workshop will be held after <a href="https://less.online/">LessOnline</a> at the upcoming <a href="https://www.arborsummer.camp/">Arbor Summer Camp</a> program; classes will begin after lunch on Monday 6/2 and end just before lunch on Thursday 6/5. Some things to know about the upcoming workshop:</p><p> </p><p><i>This workshop is "mini"</i></p><p>This workshop will last for about three days (two half-days on the edges, two full days in the middle), instead of our usual ~four-and-a-half.</p><p>Also, it’ll be embedded in “Arbor Summer Camp” and we’ll be mingling with the ~300 overall Arbor attendees for mealtimes and evenings, whereas normal CFAR workshops are standalone events that are more separate/immersive at those times.</p><p> </p><p><i>What you’ll get and what you won’t get if you attend:</i></p><p>We will run a roughly 50/50 mixture of traditional CFAR content (such as Inner Simulator; TAPs, Goal Factoring, Double Crux, Resolve Cycles, Focusing), and some newer experimental material (beta tests, hoping to get your help making it good).<br><br>The newer material here is aimed at:<br> </p><ul><li>Treating participants more obviously as fellow investigators and the authors of your own lives (you always were, of course, but: making this more obviously the foundation 
of how the workshop is working)</li><li>Noticing the ways in which people (you; us) are organic wholes, and trying not to trample on that with our “rationality practices”, but instead to be good gardeners of the health of our organisms overall. (Or, if you’d like that phrased more technically: we’ll be trying to cultivate virtues and habits such that the mesaoptimizers who arise within us end up helping us be perceptive, coherent, and long-time-horizoned rather than hunkered-down, exhausted, and at short-sighted odds with ourselves)</li></ul><p><br>Again, this is a beta test; come if you’d like to be part of the co-development of a new workshop, and to learn the basics of our classic curriculum.</p><p>We’ve taught at many workshops before, but this is still an experimental program -- don’t expect a super p... </p> | Hello from the Center for Applied Rationality!
Some of you may have attended our classic applied rationality workshops in the past; others of you may have wanted to attend a workshop but not yet had a chance to. It's been a while since we've last run public-facing workshops, but we wanted to say:
* We're not dead! (More on that perhaps in a later post)
* We have a new experimental mini-workshop coming up soon and hopefully more workshop content to follow after!
Our new workshop will be held after LessOnline at the upcoming Arbor Summer Camp program; classes will begin after lunch on Monday 6/2 and end just before lunch on Thursday 6/5. Some things to know about the upcoming workshop:
This workshop is "mini"
This workshop will last for about three days (two half-days on the edges, two full days in the middle), instead of our usual ~four-and-a-half.
Also, it’ll be embedded in “Arbor Summer Camp” and we’ll be mingling with the ~300 overall Arbor attendees for mealtimes and evenings, whereas normal CFAR workshops are standalone events that are more separate/immersive at those times.
What you’ll get and what you won’t get if you attend:
We will run a roughly 50/50 mixture of traditional CFAR content (such as Inner Simulator; TAPs, Goal Factoring, Double Crux, Resolve Cycles, Focusing), and some newer experimental material (beta tests, hoping to get your help making it good).
The newer material here is aimed at:
* Treating participants more obviously as fellow investigators and the authors of your own lives (you always were, of course, but: making this more obviously the foundation of how the workshop is working)
* Noticing the ways in which people (you; us) are organic wholes, and trying not to trample on that with our “rationality practices”, but instead to be good gardeners of the health of our organisms overall. (Or, if you’d like that phrased more technically: we’ll be trying to cultivate virtues and habits such that the mesaoptimizers wh | 679 | 1.5.0 | Revision | false | null | null | CrosspostOutput |
|
wFKZmvfRfNn24HNHp | orphaned-policies-post-5-of-7-on-ai-governance | Orphaned Policies (Post 5 of 7 on AI Governance) | null | false | false | false | null | 62rKjNqA2LCJ6RthR | null | true | false | false | false | Post | null | 2025-05-29T21:42:21.071Z | null | false | false | 2 | 2 | 2025-05-30T18:23:59.818Z | false | false | post | [] | null | null | EuKCQoo4MrMXmM8Zx | 3 | 14 | 58 | false | 0.035806 | null | false | false | 2025-05-30T21:04:12.507Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 17 | 0 | 2025-05-29T21:16:28.201Z | false | false | easy-going | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 19 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "qHDus5MuMNqQxJbjD",
"adminOnly": false,
"afBaseScore": 4,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "oEF4gToHRPEMw4FSo",
"displayName": "Jono"
}
]
},
"baseScore": 11,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-08-09T18:31:56.709Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "oEF4gToHRPEMw4FSo",
"displayName": "Jono"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI Governance",
"needsReview": false,
"noindex": false,
"postCount": 726,
"score": 11,
"shortName": null,
"slug": "ai-governance",
"suggestedAsFilter": false,
"userId": "QBvPFLFyZyuHcBwFm",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 14 | 0 | 0 | 6 | 0 | 62rKjNqA2LCJ6RthR | mass_driver | 2010-03-30T15:48:06.997Z | Mass_Driver | Mass_Driver | null | null | null | 3,304 | 0 | false | false | null | null | 31 | 655 | 1 | 0 | 0 | 1 | 0 | r38pkCm7wF4M44MDQ | User | null | null | null | [
"trustLevel1",
"canModeratePersonal"
] | null | null | wFKZmvfRfNn24HNHp | https://res.cloudinary.com/lesswrong-2-0/image/upload/c_fill,ar_1.91,g_auto/SocialPreview/impqg9d6ngwd3iayjipx | SocialPreviewType | EuKCQoo4MrMXmM8Zx | <p>In previous posts in this sequence, I laid out a case for why most AI governance research is too academic and too abstract to have much influence over the future. Politics is noisy and contested, so we can’t expect that good AI governance ideas will spread on their own – we need a large team of people who are actively promoting those ideas. Unfortunately, we currently have at least 3 researchers for every advocate, so many policy ideas have been “orphaned,” i.e., nobody is taking those ideas and showing them to decision-makers who have the power to implement them.</p><p>The best way of addressing this imbalance would be to shift funding and jobs from research to advocacy. However, as a practical matter, I don’t expect many funders to heed my arguments, nor do I expect many researchers to spontaneously quit and look for new jobs in other fields. So, in this post, I offer tips on how researchers can make their work more relevant to advocacy by drafting actual policy documents and by making concrete proposals in their white papers. </p><p>I also catalog eleven orphaned policies that I’m aware of and make suggestions about how you can “adopt” them. This isn't meant to be a perfect list -- there are good orphaned policies that I haven't included, and I might have included one or two policies that you find unimpressive. My hope is that collectively, these policies illustrate the size and scope of the backlog. 
</p><p>The important thing isn't that we agree on exactly which policy is best; the important thing is that we start clearing the backlog and do <i><strong>something</strong></i> -- because right now, we're on track to accomplish essentially zero policy change, and, as I've argued earlier in this sequence, the <i>status quo </i>is likely to end in catastrophe.</p><h1>DRAFT ACTUAL POLICY DOCUMENTS</h1><p>The most important thing individual researchers can do to make their work more relevant is to shift some of their efforts from general academic exploration to <i><strong>drafting actual policy documents. </strong></i>We have so many good policy proposals that have never gotten beyond the idea stage, and that need to be fleshed out. </p><p>What I mean by “fleshing out” a policy proposal is to actually draft an example of that proposal, ideally with specific, well-justified numbers, proper nouns, and mechanisms. Don’t just say that we ought to have compute monitoring; write a bill that implements it. Don’t just propose a windfall profits c... </p> | In previous posts in this sequence, I laid out a case for why most AI governance research is too academic and too abstract to have much influence over the future. Politics is noisy and contested, so we can’t expect that good AI governance ideas will spread on their own – we need a large team of people who are actively promoting those ideas. Unfortunately, we currently have at least 3 researchers for every advocate, so many policy ideas have been “orphaned,” i.e., nobody is taking those ideas and showing them to decision-makers who have the power to implement them.
The best way of addressing this imbalance would be to shift funding and jobs from research to advocacy. However, as a practical matter, I don’t expect many funders to heed my arguments, nor do I expect many researchers to spontaneously quit and look for new jobs in other fields. So, in this post, I offer tips on how researchers can make their work more relevant to advocacy by drafting actual policy documents and by making concrete proposals in their white papers.
I also catalog eleven orphaned policies that I’m aware of and make suggestions about how you can “adopt” them. This isn't meant to be a perfect list -- there are good orphaned policies that I haven't included, and I might have included one or two policies that you find unimpressive. My hope is that collectively, these policies illustrate the size and scope of the backlog.
The important thing isn't that we agree on exactly which policy is best; the important thing is that we start clearing the backlog and do something -- because right now, we're on track to accomplish essentially zero policy change, and, as I've argued earlier in this sequence, the status quo is likely to end in catastrophe.
DRAFT ACTUAL POLICY DOCUMENTS
The most important thing individual researchers can do to make their work more relevant is to shift some of their efforts from general academic exploration to drafting actual policy documents. We have so many good policy pro | 4,758 | 1.4.0 | Revision | false | null | null | CrosspostOutput |
|
GAv4DRGyDHe2orvwB | gradual-disempowerment-concrete-research-projects | Gradual Disempowerment: Concrete Research Projects | null | false | false | true | null | qbkPP3mQ4PHMheAEj | null | true | false | false | false | Post | null | 2025-05-29T18:55:15.723Z | null | false | false | 2 | 2 | 2025-05-29T21:11:07.000Z | false | false | post | [] | null | null | 4CGHopJCqdendHE5q | 10 | 35 | 98 | false | 0.056345 | null | false | false | 2025-06-03T14:22:04.583Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | XtphY3uYHwruKqDyG | null | null | null | false | 2025-06-03T00:24:29.675Z | [
"JnNixf4smAHwLeqE3"
] | XtphY3uYHwruKqDyG | 36 | 0 | 2025-05-29T13:51:55.777Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 13 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 35 | 0 | 0 | 16 | 0 | qbkPP3mQ4PHMheAEj | raymond-douglas | 2021-10-22T19:09:55.600Z | Raymond D | Raymond Douglas | null | null | null | 1,316 | 96 | false | false | null | null | 13 | 37 | 0 | 5 | 0 | 1 | 0 | 3oopbgcjYfvN8B2fp | User | null | null | null | [
"canModeratePersonal",
"alignmentVoters"
] | null | null | GAv4DRGyDHe2orvwB | SocialPreviewType | 4CGHopJCqdendHE5q | <p><i>This post benefitted greatly from comments, suggestions, and ongoing discussions with David Duvenaud, David Krueger, and Jan Kulveit. All errors are my own.</i></p><p>A few months ago, I and my coauthors published <a href="https://gradual-disempowerment.ai/"><i><u>Gradual Disempowerment</u></i></a> (GD hereafter). It was mostly about how things might go wrong, but naturally a lot of the resulting interest has been about solutions. </p><p>We have some more formal followup work coming: in the meantime, this is my 80/20 for ‘what would I do if I had way more time’ / ‘what would I find it helpful if someone else had done well’. This document is very much breadth over depth, and still missing a lot of details; I hope it is nonetheless helpful. For many of these, I expect even a pretty motivated and smart undergraduate could make useful progress in 10-20 hours. </p><p>I would be excited about people doing good work on any of these, and am happy to put some effort into helping — at an absolute minimum,<strong> I will leave comments on the first ten 1-page docs anybody sends me in response to this</strong><span class="footnote-reference" data-footnote-reference="" data-footnote-index="1" data-footnote-id="0sat5tf9pqba" role="doc-noteref" id="fnref0sat5tf9pqba"><sup><a href="#fn0sat5tf9pqba">[1]</a></sup></span><strong>.</strong></p><h1>Conceptual / High-Level</h1><h2>Interaction with other x-risk concerns</h2><p>It’s easiest to illustrate GD by saying “let’s hold all the other AI problems fixed and roll this dynamic forward” and then pointing to the end state. But the GD dynamic is probably going to play out in parallel with everything else. Gaming out this parallel process is a way harder task.</p><p>So it would be great if someone gave it an honest shot! 
What does the world look like if all the dynamics play out at once — GD, misalignment, <a href="https://www.forethought.org/research/ai-enabled-coups-how-a-small-group-could-use-ai-to-seize-power"><u>coup risk</u></a>, recursive self-improvement, vulnerable world dynamics, and other key capabilities like super-persuasion or <a href="http://darioamodei.com/essay/machines-of-loving-grace"><u>accelerated bio research</u></a>? How strongly do these different forces interact with each other, and what happens if we vary our assumptions about their relative speed and intensity? In what ways do solutions to one trade off against others, and what solutions are robustly good across the board?</p><p>For example: my guess is that centralising power in a single nationalised lab probably helps avoid race dynamics, and therefore lowers misalignment risk, but also possibly makes coup risk higher, lowers the odds of misuse, but also means less societal hardening by default, and makes it more likely that citizens are disempowered with respect to the state in a general non-coup sense.</p><h2>Responding to counterarguments</h2><p>In response to the original piece, lots o... </p> | This post benefitted greatly from comments, suggestions, and ongoing discussions with David Duvenaud, David Krueger, and Jan Kulveit. All errors are my own.
A few months ago, I and my coauthors published Gradual Disempowerment (GD hereafter). It was mostly about how things might go wrong, but naturally a lot of the resulting interest has been about solutions.
We have some more formal followup work coming: in the meantime, this is my 80/20 for ‘what would I do if I had way more time’ / ‘what would I find it helpful if someone else had done well’. This document is very much breadth over depth, and still missing a lot of details; I hope it is nonetheless helpful. For many of these, I expect even a pretty motivated and smart undergraduate could make useful progress in 10-20 hours.
I would be excited about people doing good work on any of these, and am happy to put some effort into helping — at an absolute minimum, I will leave comments on the first ten 1-page docs anybody sends me in response to this[1].
Conceptual / High-Level
Interaction with other x-risk concerns
It’s easiest to illustrate GD by saying “let’s hold all the other AI problems fixed and roll this dynamic forward” and then pointing to the end state. But the GD dynamic is probably going to play out in parallel with everything else. Gaming out this parallel process is a way harder task.
So it would be great if someone gave it an honest shot! What does the world look like if all the dynamics play out at once — GD, misalignment, coup risk, recursive self-improvement, vulnerable world dynamics, and other key capabilities like super-persuasion or accelerated bio research? How strongly do these different forces interact with each other, and what happens if we vary our assumptions about their relative speed and intensity? In what ways do solutions to one trade off against others, and what solutions are robustly good across the board?
For example: my guess is that centralising power in a single nationa | 3,143 | 1.4.0 | Revision | false | true | null | CrosspostOutput |
||
HjHqxzn3rnH7T45hp | do-you-even-have-a-system-prompt-psa-repo | Do you even have a system prompt? (PSA / repo) | null | false | false | false | null | MzNaJdGuoCx3ffywK | null | true | false | false | false | Post | 2025-05-29T18:49:50.150Z | null | false | false | 2 | 2 | 2025-05-30T18:24:20.680Z | false | false | post | [] | null | null | bKNST6jSh7CYNs2Lf | 67 | 56 | 95 | false | 0.054833 | null | false | false | 2025-06-21T20:48:19.214Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 16 | 0 | 2025-05-29T18:49:50.150Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 2 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "fkABsGCJZ6y9qConW",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T06:06:46.947Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "MiuAZvbQcQ7ethgt3",
"displayName": "Viktor withaK"
},
{
"_id": "dRaCtsAWxk7sgirSY",
"displayName": "Jordan Morgan"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Practical",
"needsReview": false,
"noindex": false,
"postCount": 3411,
"score": 2,
"shortName": null,
"slug": "practical",
"suggestedAsFilter": true,
"userId": "oBSWiHjgproTiThmY",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 56 | 0 | 0 | 13 | 0 | MzNaJdGuoCx3ffywK | croissanthology | 2024-09-19T14:49:31.657Z | cleo-scrolls | Croissanthology | null | null | Croissanthology | 203 | 0 | false | false | <p>croissanthology.com</p> | null | null | 2 | 17 | 0 | 0 | 0 | 1 | 0 | qgdGA4ZEyW7zNdK84 | User | null | null | null | [
"canModeratePersonal"
] | null | null | HjHqxzn3rnH7T45hp | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/HjHqxzn3rnH7T45hp/dmpjvnehnqpibdzgs2ye | SocialPreviewType | bKNST6jSh7CYNs2Lf | <p>Everyone around me has a notable lack of system prompt. And when they do have a system prompt, it’s either the <a href="https://x.com/eigenrobot/status/1782957877856018514">eigenprompt</a> or some half-assed 3-paragraph attempt at telling the AI to “include less bullshit”.</p><p>I see no systematic attempts at making a good one <i>anywhere</i>.<span class="footnote-reference" data-footnote-reference="" data-footnote-index="1" data-footnote-id="u15f043z0k" role="doc-noteref" id="fnrefu15f043z0k"><sup><a href="#fnu15f043z0k">[1]</a></sup></span></p><p>(For clarity, a system prompt is a bit of text—that's a subcategory of "preset" or "context"—that's included in every single message you send the AI.)</p><p>No one says “I have a conversation with Claude, then edit the system prompt based on what annoyed me about its responses, then I rinse and repeat”. <br><br>No one says “I figured out what phrasing most affects Claude's behavior, then used those to shape my system prompt". 
<br><br>I don't even see a “yeah I described what I liked and don't like about Claude TO Claude and then had it make a system prompt for itself”, which is the EASIEST bar to clear.</p><p><i>If you notice limitations in modern LLMs, maybe that's just a skill issue.</i></p><p>So if you're reading this and don't use a personal system prompt, STOP reading this and go DO IT:</p><ol><li>Spend 5 minutes on a google doc being as precise as possible about how you want LLMs to behave</li><li>Paste it into the AI and see what happens</li><li>Reiterate if you wish (this is a case where <a href="https://www.lesswrong.com/posts/z8usYeKX7dtTWsEnk/more-dakka">more dakka</a> wins)</li></ol><p>It doesn’t matter if you think it cannot properly respect these instructions, this’ll necessarily make the LLM marginally better at accommodating you (and I think you’d be surprised how far it can go!).</p><p>PS: as I should've perhaps predicted, the comment section has become a de facto repo for LWers' system prompts. Share yours! 
This is good!</p><h1 data-internal-id="Help_how_do_I_do_this_">How do I do this?</h1><p>If you’re on the <strong>free ChatGPT plan</strong>, you’ll want to use “settings → customize ChatGPT”, which gives you this popup:</p><p><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/HjHqxzn3rnH7T45hp/ohshlrbdlooissx5ycrz" alt=""></p><p>This text box is very short and you won’t get much in.</p><p>If you’re on the <strong>free </strong><a href="http://claude.ai"><strong>Claude</strong></a><strong> plan</strong>, you’ll want to use “settings → personalization”, where you’ll see almost the exact same textbox, except that Anthropic allows you to put <i>practically an infinite amount of text in here.</i> </p><p><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/a9025a1c11064c8be82f8b13aa9b5c80df1a91a3c2da2b9a9cf4ab475c67f6d5/oy3jtsbktau1buezp2gu" alt=""></p><p>If you get a <strong>ChatGPT or Claude subscription</strong>, you’ll want to stick this into “special instructions” in a newly created “project”, where you can stick other kinds of context in too.</p><p>What else can you put in a project, you ask? E.g. a pdf containing the broad outlines of your life plans, past examples of your writing or coding style, or a list of terms and definitions you’ve coined yourself. Maybe try st... </p> | Everyone around me has a notable lack of system prompt. And when they do have a system prompt, it’s either the eigenprompt or some half-assed 3-paragraph attempt at telling the AI to “include less bullshit”.
I see no systematic attempts at making a good one anywhere.[1]
(For clarity, a system prompt is a bit of text—that's a subcategory of "preset" or "context"—that's included in every single message you send the AI.)
No one says “I have a conversation with Claude, then edit the system prompt based on what annoyed me about its responses, then I rinse and repeat”.
No one says “I figured out which phrasings most affect Claude's behavior, then used those to shape my system prompt”.
I don't even see a “yeah I described what I liked and don't like about Claude TO Claude and then had it make a system prompt for itself”, which is the EASIEST bar to clear.
If you notice limitations in modern LLMs, maybe that's just a skill issue.
So if you're reading this and don't use a personal system prompt, STOP reading this and go DO IT:
1. Spend 5 minutes in a Google Doc being as precise as possible about how you want LLMs to behave
2. Paste it into the AI and see what happens
3. Iterate if you wish (this is a case where more dakka wins)
It doesn’t matter if you think it cannot properly respect these instructions; this’ll necessarily make the LLM marginally better at accommodating you (and I think you’d be surprised how far it can go!).
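If you're curious what "included in every single message" means mechanically: in the common role-tagged chat-message convention, the system prompt is just a message prepended to each request. A minimal sketch (the prompt text and helper name here are illustrative assumptions, not a recommendation):

```python
# Sketch of how a personal system prompt travels with every request, using the
# common role-tagged message convention. The prompt text and helper are made up.

SYSTEM_PROMPT = """\
Be concise and skip filler praise.
When unsure, say so, then give your best guess with a rough confidence level.
Prefer concrete examples over abstractions."""

def build_messages(history, user_message):
    """Prepend the system prompt, then the running conversation, then the new turn."""
    return ([{"role": "system", "content": SYSTEM_PROMPT}]
            + history
            + [{"role": "user", "content": user_message}])

messages = build_messages([], "Summarize this paper in three bullet points.")
# The system prompt is messages[0] on every single call, which is why even a
# rough first draft of one shifts behavior across the whole conversation.
```

Because the prompt rides along with every call, each round of editing it compounds: that is the loop the three steps above are asking you to run.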
PS: as I should've perhaps predicted, the comment section has become a de facto repo for LWers' system prompts. Share yours! This is good!
How do I do this?
If you’re on the free ChatGPT plan, you’ll want to use “settings → customize ChatGPT”, which gives you this popup:
This text box is very short and you won’t get much in.
If you’re on the free Claude plan, you’ll want to use “settings → personalization”, where you’ll see almost the exact same textbox, except that Anthropic allows you to put practically an infinite amount of tex | 569 | 1.10.0 | Revision | false | null | null | CrosspostOutput |
|
p8rcMDRwEGeFAzCQS | incorrect-baseline-evaluations-call-into-question-recent-llm | Incorrect Baseline Evaluations Call into Question Recent LLM-RL Claims | null | false | false | false | null | 5fbArMct7Fyh5pQQy | null | true | false | false | false | Post | https://safe-lip-9a8.notion.site/Incorrect-Baseline-Evaluations-Call-into-Question-Recent-LLM-RL-Claims-2012f1fbf0ee8094ab8ded1953c15a37?pvs=4 | 2025-05-29T18:40:21.493Z | null | false | false | 2 | 2 | 2025-05-30T18:24:13.847Z | false | false | linkpost | [] | null | null | DntufnoyD698JMarv | 6 | 30 | 65 | false | 0.039053 | null | false | false | 2025-06-22T09:54:56.617Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 25 | 0 | 2025-05-29T18:40:21.493Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 1 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 30 | 0 | 0 | 11 | 0 | 5fbArMct7Fyh5pQQy | shash42 | 2021-02-24T13:48:35.092Z | shash42 | shash42 | null | null | null | 122 | 0 | false | false | null | null | 3 | 16 | 0 | 0 | 0 | 1 | 0 | grecHJcgkb3KW5wnM | User | null | null | null | [
"canModeratePersonal"
] | null | null | p8rcMDRwEGeFAzCQS | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/p8rcMDRwEGeFAzCQS/llt4mw8vyedftslibbvk | SocialPreviewType | DntufnoyD698JMarv | <p>There has been a flurry of recent papers proposing new RL methods that claim to improve the “reasoning abilities” in language models. The most recent ones, which show improvements with random or no external rewards have led to surprise, excitement and confusion.</p><p>We analyzed 7 popular LLM RL papers (100+ to 3000+ likes, 50k+ to 500k+ views on X) including “Spurious Rewards”, “RL from 1 example”, and 3 papers exploring “Intrinsic Confidence Rewards”. We found that in most of these papers the improvements could be a mirage due to various accidental issues in the evaluation setups (discussed below). As such, the baseline numbers of the pre-RL models are massively underreported compared to official numbers in the Qwen releases, or other standardized evaluations (for example in the <a href="https://arxiv.org/abs/2504.07086"><strong>Sober Reasoning</strong></a> paper). <i>In several cases, the post-RL model performance was actually worse than the (correctly evaluated) pre-RL baseline they start from.</i> This means the elicitation these works achieve with RL, could also be replicated without any weight updates or finetuning. Here, we do not mean non-trivial elicitation of some latent capabilities, just what can be achieved by fixing prompting and generation hyperparameters. These include using correct formats and better ways to parse answers from responses, using recommended sampling temperatures, and using few-shot prompting to improve format-following.</p><p>Overall, these papers made us wonder if recent LLM RLVR papers have any signal, but we find their own claims could be noise due to underreported baselines. 
The proposed methods might have promise, and our goal is not to detract from their potential, but rather emphasise the importance of correct evaluations and scientific reporting. We understand the pressure to publish RL results quickly given how fast the community seems to be moving. But if the claims cannot be trusted, are we really moving forward?</p><figure class="image"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/p8rcMDRwEGeFAzCQS/ghyilxcg6k6kwpqhka56" srcset="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/p8rcMDRwEGeFAzCQS/aod1gu8qrghknjeonuh1 160w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/p8rcMDRwEGeFAzCQS/inpmhrltnnacphloi6kr 320w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/p8rcMDRwEGeFAzCQS/ifbfln9n06rk0sl7cki3 480w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/p8rcMDRwEGeFAzCQS/niigqpxox7lnmlk6augt 640w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/p8rcMDRwEGeFAzCQS/oy9e7zakezokeymhjyqu 800w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/p8rcMDRwEGeFAzCQS/sikwx86ed4qph8hqgey7 960w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/p8rcMDRwEGeFAzCQS/xgtzoggeriu1pdtp20gq 1120w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/p8rcMDRwEGeFAzCQS/e5x80sy1xqleiwnf3919 1280w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/p8rcMDRwEGeFAzCQS/zy4f3kkapxgtmqrdvlsu 1440w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/p8rcMDRwEGeFAzCQS/fntoalckwxcfldscrucd 1600w"></figure> | There has been a flurry of recent papers proposing new RL methods that claim to improve the “reasoning abilities” in language models. 
The most recent ones, which show improvements with random or no external rewards, have led to surprise, excitement, and confusion.
We analyzed 7 popular LLM RL papers (100+ to 3000+ likes, 50k+ to 500k+ views on X) including “Spurious Rewards”, “RL from 1 example”, and 3 papers exploring “Intrinsic Confidence Rewards”. We found that in most of these papers the improvements could be a mirage due to various accidental issues in the evaluation setups (discussed below). As such, the baseline numbers of the pre-RL models are massively underreported compared to official numbers in the Qwen releases or other standardized evaluations (for example in the Sober Reasoning paper). In several cases, the post-RL model performance was actually worse than the (correctly evaluated) pre-RL baseline they start from. This means the elicitation these works achieve with RL could also be replicated without any weight updates or finetuning. Here, we do not mean non-trivial elicitation of some latent capabilities, just what can be achieved by fixing prompting and generation hyperparameters. These include using correct formats and better ways to parse answers from responses, using recommended sampling temperatures, and using few-shot prompting to improve format-following.
Overall, these papers made us wonder if recent LLM RLVR papers have any signal, but we find their own claims could be noise due to underreported baselines. The proposed methods might have promise, and our goal is not to detract from their potential, but rather emphasise the importance of correct evaluations and scientific reporting. We understand the pressure to publish RL results quickly given how fast the community seems to be moving. But if the claims cannot be trusted, are we really moving forward? | 298 | 1.1.1 | Revision | false | null | null | CrosspostOutput |
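To make the answer-parsing point concrete, here is a toy sketch (not any paper's actual evaluation harness; formats and the response string are invented) of how a strict parser can score a correct response as wrong, deflating the baseline, while a more permissive parser recovers it:

```python
# Toy illustration of how answer parsing alone can move baseline numbers.
# A naive parser that expects one exact format scores a correct response as
# wrong; a more permissive parser recovers it. All strings here are made up.

import re

def parse_naive(response):
    """Only accepts 'Answer: <number>'; anything else counts as incorrect."""
    m = re.search(r"Answer:\s*(-?\d+)", response)
    return m.group(1) if m else None

def parse_robust(response):
    """Also accepts \\boxed{...} and falls back to the last number in the text."""
    m = re.search(r"\\boxed\{(-?\d+)\}", response)
    if m:
        return m.group(1)
    m = re.search(r"Answer:\s*(-?\d+)", response)
    if m:
        return m.group(1)
    numbers = re.findall(r"-?\d+", response)
    return numbers[-1] if numbers else None

response = "The sum is 6 + 6 = 12, so the result is \\boxed{12}."
assert parse_naive(response) is None    # scored as wrong: baseline deflated
assert parse_robust(response) == "12"   # same response, scored as right
```

If the pre-RL baseline is scored with the naive parser and the post-RL model happens to emit the expected format, the "improvement" can be entirely an artifact of parsing rather than capability.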
|
LSFiKt4zGxXcX2oxi | dimensionalization | Dimensionalization | null | false | false | false | null | skbL8Z4ypRPCQdHxf | null | true | false | false | false | Post | https://jordanmrubin.substack.com/p/dimensionalization | 2025-05-29T18:18:46.763Z | null | false | false | 2 | 2 | 2025-05-29T18:27:58.133Z | false | false | linkpost | [] | null | null | 7gHzdwnGDKhyX9kBn | 6 | 5 | 5 | false | 0.007984 | null | false | false | 2025-06-19T00:33:21.902Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 0 | 0 | 2025-05-28T18:27:25.229Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 5 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "3uE2pXvbcnS9nnZRE",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:50.898Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 27,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "zJtgSyKntXrnkArbY",
"displayName": "kistune"
},
{
"_id": "XqyQTGFGoKfZtdqN3",
"displayName": "kINo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Modeling",
"needsReview": false,
"noindex": false,
"postCount": 5855,
"score": 2,
"shortName": null,
"slug": "world-modeling",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 5 | 0 | 0 | 0 | 0 | skbL8Z4ypRPCQdHxf | jordan-rubin | 2025-05-28T18:26:43.705Z | jordan-rubin | Jordan Rubin | null | null | null | 14 | 0 | false | false | <p>Researcher-Operator currently on garden leave. Formerly: Two Sigma (Quant Research + Mgmt) / OnDeck (Data science in lending) / BlackRock (Bond desk quant). I hope my thinking can be helpful to you!</p><p>My Substack: <a href="https://jordanmrubin.substack.com">https://jordanmrubin.substack.com</a></p><p>My LinkedIn: <a href="https://www.linkedin.com/in/jordanmrubin/">https://www.linkedin.com/in/jordanmrubin/</a></p> | null | null | 4 | 3 | 0 | 0 | 0 | 0.9 | 0 | grecHJcgkb3KW5wnM | User | null | null | null | null | null | null | LSFiKt4zGxXcX2oxi | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/LSFiKt4zGxXcX2oxi/dbizljuicvusdmq6i1sv | SocialPreviewType | 7gHzdwnGDKhyX9kBn | <h2>TL;DR</h2><p>Dimensionalization is the practice of identifying and using quasi-independent axes or "dials" along which things vary. It’s the mental move behind tradeoff analysis, optimization, and design thinking. You are doing it already!</p><p>Unlike categorization or decomposition, dimensionalization preserves complexity but renders it navigable. Mastering it helps you think more clearly, decide more effectively, and communicate with greater precision.</p><hr><p>A good decision isn’t just “good.” It’s good <i>along the right dimensions</i>, and <i>good enough </i>on the others to be worth it. You already know this; you just may not have named it. That naming, structuring mental move is dimensionalization. 
It is what enables us to see nuance.</p><p>You probably already dimensionalize:</p><ul><li><strong>Watching</strong>: familiar vs new, funny vs serious, short vs long.</li><li><strong>Weekend</strong>: social vs solo, restful vs productive.</li><li><strong>Shopping</strong>: quality vs price, speed vs reliability.</li></ul><p>You don’t need to learn dimensionalization. But getting better at it makes you more precise, faster, and harder to bullshit.</p><p>So how does dimensionalization work?</p><hr><h1>Meta-Dimension 1: Fidelity</h1><p>Dimensionalization is a perceptual act. It’s how you make fuzzy, complex phenomena legible. You take something like “vibe” or “quality” or “pain” and ask: what are the independent axes along which this varies?</p><p>Fidelity is how well your sliders map to reality. High-Fidelity dimensions reveal meaningful differences. They don’t collapse when you zoom out or vanish when you compare two related objects.</p><p>What makes dimensions high-Fidelity?</p><ul><li><strong>Validity:</strong> They track actual meaningful differences.</li><li><strong>Stability:</strong> They hold up over time, context, or zoom level.</li></ul><p>Examples:</p><ul><li><strong>High-Fidelity examples:</strong><ul><li>Quant investing: factor exposure, drawdown, alpha decay.</li><li>Software engineering: latency, modularity, throughput.</li><li>Listening to music: tempo, rhythmic complexity, texture, spatiality.</li><li>Homeowning: layout, light quality, maintenance, appreciation potential.</li><li>Making art: balance, composition, color harmony, movement.</li></ul></li><li><strong>Low-Fidelity examples:</strong><ul><li>Buzzwords without clarification ("Tech Debt", “Efficiency”).</li><li>Aesthetic terms with no shared reference point (“Vibe”, “Quality”).</li></ul></li></ul><p>When Fidelity is low, you can still win, but you’re flying blind.</p><hr><h1>Meta-Dimension 2: Leverage</h1><p>Leverage is how much change you get per dial twist. 
A good dimension is one you can actually move—and that, when moved, actually matters.</p><p>Dimensionalization unlocks Leverage. When you kn... </p> | TL;DR
Dimensionalization is the practice of identifying and using quasi-independent axes or "dials" along which things vary. It’s the mental move behind tradeoff analysis, optimization, and design thinking. You are doing it already!
Unlike categorization or decomposition, dimensionalization preserves complexity but renders it navigable. Mastering it helps you think more clearly, decide more effectively, and communicate with greater precision.
----------------------------------------
A good decision isn’t just “good.” It’s good along the right dimensions, and good enough on the others to be worth it. You already know this; you just may not have named it. That naming-and-structuring mental move is dimensionalization. It is what enables us to see nuance.
You probably already dimensionalize:
* Watching: familiar vs new, funny vs serious, short vs long.
* Weekend: social vs solo, restful vs productive.
* Shopping: quality vs price, speed vs reliability.
You don’t need to learn dimensionalization. But getting better at it makes you more precise, faster, and harder to bullshit.
So how does dimensionalization work?
----------------------------------------
Meta-Dimension 1: Fidelity
Dimensionalization is a perceptual act. It’s how you make fuzzy, complex phenomena legible. You take something like “vibe” or “quality” or “pain” and ask: what are the independent axes along which this varies?
Fidelity is how well your sliders map to reality. High-Fidelity dimensions reveal meaningful differences. They don’t collapse when you zoom out or vanish when you compare two related objects.
What makes dimensions high-Fidelity?
* Validity: They track actual meaningful differences.
* Stability: They hold up over time, context, or zoom level.
Examples:
* High-Fidelity examples:
* Quant investing: factor exposure, drawdown, alpha decay.
* Software engineering: latency, modularity, throughput.
* Listening to music: tempo, rhythmic complexity, texture, spatiality.
| 1,294 | 1.1.1 | Revision | false | null | null | CrosspostOutput |
yM94K4TTpDLr89PHj | distilled-human-judgment-reifying-ai-alignment | Distilled Human Judgment: Reifying AI Alignment | null | false | false | false | null | Tc3yfn6LHGWs6u9oR | null | true | false | false | false | Post | null | 2025-05-29T18:06:39.078Z | null | false | false | 2 | 2 | 2025-05-29T18:28:10.919Z | false | false | post | [] | null | null | vFmj8z2ojpKtytAZD | 0 | 1 | 1 | false | 0.005815 | null | false | false | 2025-05-29T18:06:39.078Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 0 | 0 | 2025-05-29T08:03:07.503Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 5 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 1 | 0 | 0 | 0 | 0 | Tc3yfn6LHGWs6u9oR | devansh-mehta | 2025-05-26T09:47:12.825Z | devansh-mehta | Devansh Mehta | null | null | null | 0 | 0 | false | false | null | null | 1 | 0 | 0 | 0 | 0 | 0.9 | 0 | grecHJcgkb3KW5wnM | User | null | null | null | null | null | null | yM94K4TTpDLr89PHj | SocialPreviewType | vFmj8z2ojpKtytAZD | <p>Long time lurker but first post here, so I'll quickly introduce myself. I work at the Ethereum Foundation leading their AI x Public Goods track, which broadly encompasses coming up with and piloting new <a href="https://balajis.com/p/credible-neutrality">credibly neutral funding mechanism</a>s (especially those involving the use of AI to distribute money)</p><p>The purpose of this post is two-fold;</p><ol><li>Soliciting feedback on the idea of <a href="https://vitalik.eth.limo/general/2025/02/28/aihumans.html">distilled human judgment</a> (DHJ) and its application to AI alignment<br> </li><li>A call to action for the rationalist community to submit models in our 2 ongoing DHJ competitions to allocate funding to core Ethereum open source repositories and identify sybil (bot) wallet addresses</li></ol><p>Let's start with an <strong>explanation of DHJ to a 5 year old</strong>;</p><ul><li>Your teacher needs to grade 1000 assignments</li><li>She only manages to complete 100</li><li>Many AI agents or models give a grade to all the assignments</li><li>The 100 assignments the teacher did grade selects the winning model that then gives grades to all 1000</li></ul><p>Since people often relate new ideas with what they are already familiar with, the best way to think of DHJ is as a Kaggle competition where models are competing to accurately predict the ground truth data (human judgments). </p><h3>The main difference being, we don't have ground truth data for all the answers that models give but only a subset of them. 
But as submitters don't know which questions we have answers to and which we don't, they have to do a good job on all of them. <br><br><strong>DHJ <> AI Alignment</strong></h3><p>I'll start this section with a vision of a world that I don't want but which I think we're headed towards</p><blockquote><p>Every company has its own monolithic AI model trained on data of their C-suite. Employees pray for beneficence from the AI overlord to get attention to their project and also for promotions, pay raises, etc. Any change to the company AI is permissioned and requires review from a core team within the company</p></blockquote><p>What is a viable alternative?</p><blockquote><p>Every company has a marketplace of competing AI models that any team (or outside contributors) can submit. All these models are queried for answers to decisions that the C-suite needs to make, such as funding to different departments, who to lay off or promote, etc. Executives then see which models align most closely with the subset of decisions that they do make manually to make decisions across the board.</p></blockquote><p>A good analogy for thinking about these 2 visions is whether we want AI to be similar to a monopoly ... </p> | Long time lurker but first post here, so I'll quickly introduce myself. I work at the Ethereum Foundation leading their AI x Public Goods track, which broadly encompasses coming up with and piloting new credibly neutral funding mechanisms (especially those involving the use of AI to distribute money)
The purpose of this post is two-fold;
1. Soliciting feedback on the idea of distilled human judgment (DHJ) and its application to AI alignment
2. A call to action for the rationalist community to submit models in our 2 ongoing DHJ competitions to allocate funding to core Ethereum open source repositories and identify sybil (bot) wallet addresses
Let's start with an explanation of DHJ for a 5-year-old:
* Your teacher needs to grade 1000 assignments
* She only manages to complete 100
* Many AI agents or models give a grade to all the assignments
* The 100 assignments the teacher did grade are used to select the winning model, which then grades all 1000
Since people often relate new ideas with what they are already familiar with, the best way to think of DHJ is as a Kaggle competition where models are competing to accurately predict the ground truth data (human judgments).
The main difference being, we don't have ground truth data for all the answers that models give but only a subset of them. But as submitters don't know which questions we have answers to and which we don't, they have to do a good job on all of them.
DHJ <> AI Alignment
I'll start this section with a vision of a world that I don't want but which I think we're headed towards
> Every company has its own monolithic AI model trained on data of their C-suite. Employees pray for beneficence from the AI overlord to get attention to their project and also for promotions, pay raises, etc. Any change to the company AI is permissioned and requires review from a core team within the company
What is a viable alternative?
> Every company has a marketplace of competing AI models that any team (or out | 1,207 | 1.2.0 | Revision | false | null | null | CrosspostOutput |
||
9sL4KydciJt6k8hNi | summer-ai-safety-intro-fellowships-in-boston-and-online | Summer AI Safety Intro Fellowships in Boston and Online (Policy & Technical) – Apply by June 6! | null | false | false | false | null | xqeEu5rTkPfgzRfKX | null | true | false | false | false | Post | null | 2025-05-29T18:02:12.829Z | null | false | false | 2 | 2 | null | false | false | post | [] | null | null | SfWjso5ogA3hZtiHe | 0 | 1 | 1 | false | 0.000879 | null | false | false | 2025-05-29T18:02:12.829Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 0 | 0 | 2025-05-29T16:23:06.749Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 1 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 1 | 0 | 0 | 0 | 0 | xqeEu5rTkPfgzRfKX | jandrade112 | 2025-05-29T16:21:25.552Z | jandrade112 | jandrade112 | null | null | null | 0 | 0 | false | false | null | null | 1 | 0 | 0 | 0 | 0 | 0.9 | 0 | grecHJcgkb3KW5wnM | User | null | null | null | null | null | null | 9sL4KydciJt6k8hNi | SocialPreviewType | SfWjso5ogA3hZtiHe | <p><strong>TL;DR: Apply to AISST’s </strong><a href="https://airtable.com/appwC5nhcFmtF6UhW/shrgLvSslqlT08oek"><strong>AI Policy Fellowship</strong></a><strong> and </strong><a href="https://airtable.com/appbpq35u7UWs3wvy/shroDuWlq2EUXpUc0"><strong>Technical Fellowship</strong></a><strong> by Friday, June 6. Refer others </strong><a href="https://airtable.com/appwC5nhcFmtF6UhW/shroKAOK3GNIc54FB"><strong>here</strong></a><strong>. </strong>AISST (a group of undergrad and grad students at Harvard) is excited to be running two introductory reading groups this summer: a 7-week <a href="https://haist.ai/policy-fellowship">Intro Policy Fellowship</a> and an 8-week<a href="https://haist.ai/tech-fellowship"> Intro Technical Fellowship</a>. </p><p> </p><p>Our<strong> </strong>research-oriented<strong> Technical Fellowship</strong> covers topics like <a href="https://distill.pub/2020/circuits/zoom-in/">neural network interpretability</a>, <a href="https://arxiv.org/abs/2009.01325">learning from human feedback</a>, <a href="https://arxiv.org/abs/2105.14111">goal misgeneralization in reinforcement learning agents</a>, and <a href="https://arxiv.org/abs/2212.03827">eliciting latent knowledge.</a> <strong>See </strong><a href="http://haist.ai/tech-fellowship"><strong>here</strong></a><strong> for more details</strong> (curriculum, application, FAQ). 
Students with machine learning experience are especially encouraged to apply.</p><p> </p><p>Our<strong> Policy Fellowship</strong> discusses topics such as the <a href="https://bounded-regret.ghost.io/what-will-gpt-2030-look-like/">pace of progress in AI</a>, potential threats from AI <a href="https://www.nti.org/analysis/articles/the-convergence-of-artificial-intelligence-and-the-life-sciences/">misuse</a> and <a href="https://www.alignmentforum.org/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to">misalignment</a>, <a href="https://arxiv.org/abs/2305.15324">AI audits and evaluations</a>, and <a href="https://www.cnas.org/publications/reports/secure-governable-chips">semiconductor policy</a>. <strong>More details </strong><a href="http://haist.ai/policy-fellowship"><strong>here</strong></a> (curriculum, application, FAQ).</p><p> </p><p>The fellowships are open to current and incoming undergraduate and graduate students, as well as recent graduates, from Boston-area universities, and working professionals who will be in Boston during the upcoming academic year. We’ll meet weekly, in small cohorts facilitated by a grad student or upperclassman with experience in AI safety research or AI policy. </p><p> </p><p>Both reading groups will have an in-person option in Cambridge as well as a virtual option. In-person cohort meetings will be 2 hours long, with lunch or dinner provided. Readings will be completed during the meetings (so no outside reading is required). Online cohort meetings will be 1 hour each, and participants will be expected to complete the readings prior to each meeting (~1 hour of reading per meeting).</p><p> </p><p>We think reducing catastrophic risks from powerful artificial intelligence is one of the most important problems of our time. If you are interested, we encourage you to apply.</p><p><br> </p><p>And please pass along the opportunity to anyone in your network who may be interested! 
<strong>You can refer others through </strong><a href="https://airtable.com/applaQQYAhGU2Po1e/shrE5FbekiyGsCimp"><strong>this form</strong></a><strong>.</strong></p><p> </p><p><strong>The deadline for both applications is Friday, June 6, at 11:59 pm. </strong></p><ul><li><strong>Apply </strong><a href="https://airtable.com/appbpq35u7UWs3wvy/shroDuWlq2EUXpUc0"><strong>here</strong></a><strong> for the Technical Fellowship</strong></li><li><strong>Apply </strong><a href="https://airtable.com/appwC5nhcFmtF6UhW/shrgLvSslqlT08oek"><strong>here</strong></a><strong> for the Policy Fellowship</strong></li></ul> | TL;DR: Apply to AISST’s AI Policy Fellowship and Technical Fellowship by Friday, June 6. Refer others here. AISST (a group of undergrad and grad students at Harvard) is excited to be running two introductory reading groups this summer: a 7-week Intro Policy Fellowship and an 8-week Intro Technical Fellowship.
Our research-oriented Technical Fellowship covers topics like neural network interpretability, learning from human feedback, goal misgeneralization in reinforcement learning agents, and eliciting latent knowledge. See here for more details (curriculum, application, FAQ). Students with machine learning experience are especially encouraged to apply.
Our Policy Fellowship discusses topics such as the pace of progress in AI, potential threats from AI misuse and misalignment, AI audits and evaluations, and semiconductor policy. More details here (curriculum, application, FAQ).
The fellowships are open to current and incoming undergraduate and graduate students, as well as recent graduates, from Boston-area universities, and working professionals who will be in Boston during the upcoming academic year. We’ll meet weekly, in small cohorts facilitated by a grad student or upperclassman with experience in AI safety research or AI policy.
Both reading groups will have an in-person option in Cambridge as well as a virtual option. In-person cohort meetings will be 2 hours long, with lunch or dinner provided. Readings will be completed during the meetings (so no outside reading is required). Online cohort meetings will be 1 hour each, and participants will be expected to complete the readings prior to each meeting (~1 hour of reading per meeting).
We think reducing catastrophic risks from powerful artificial intelligence is one of the most important problems of our time. If you are interested, we encourage you to apply.
And please pass along the opportunity to anyone in your network who may be interested! You can refer others through this form.
| 325 | 1.1.0 | Revision | false | null | null | CrosspostOutput |
||
SakEMppQdd4h8TuZB | digital-sentience-funding-opportunities-support-for-applied | Digital sentience funding opportunities: Support for applied work and research | null | false | false | false | null | 3BL7rSNWAmyZojdPz | [
{
"__typename": "CoauthorStatusOutput",
"confirmed": true,
"requested": false,
"userId": "eS5CMbjTKQ2o4G8X2"
}
] | true | false | false | false | Post | 2025-05-29T15:22:43.197Z | null | false | false | 2 | 2 | null | false | false | post | [] | null | null | QG3yrFzgSv9r7tDE9 | 0 | 6 | 21 | false | 0.011082 | null | false | false | 2025-05-29T15:22:43.197Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 8 | 0 | 2025-05-29T15:18:42.477Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [
{
"__typename": "User",
"_id": "eS5CMbjTKQ2o4G8X2",
"afCommentCount": 0,
"afKarma": 0,
"afPostCount": 0,
"commentCount": 1,
"createdAt": "2019-07-11T10:06:15.645Z",
"deleted": false,
"displayName": "zdgroff",
"fullName": null,
"htmlBio": "",
"isAdmin": false,
"jobTitle": null,
"karma": 17,
"organization": null,
"postCount": 0,
"previousDisplayName": null,
"profileImageId": null,
"reviewedByUserId": "r38pkCm7wF4M44MDQ",
"sequenceCount": 0,
"slug": "zdgroff",
"spamRiskScore": 0.9,
"tagRevisionCount": 0,
"username": "zdgroff"
}
] | 4 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "hmTa9YDwmzHjhMCAt",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 16,
"canEditUserIds": null,
"core": false,
"createdAt": "2023-06-15T16:07:24.366Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "fGFR972rvsxQhZoPd",
"displayName": "Odd anon"
},
{
"_id": "BveuaCHRKnHWCQnTn",
"displayName": "Stephen Martin"
},
{
"_id": "T7QHMS7qNx3s7z36d",
"displayName": "StanislavKrym"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI Rights / Welfare",
"needsReview": false,
"noindex": false,
"postCount": 54,
"score": 16,
"shortName": null,
"slug": "ai-rights-welfare",
"suggestedAsFilter": false,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "7jaBCxPHRDfJppYws",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2022-08-03T00:26:30.777Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI Sentience",
"needsReview": false,
"noindex": false,
"postCount": 70,
"score": 9,
"shortName": null,
"slug": "ai-sentience",
"suggestedAsFilter": false,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 6 | 0 | 0 | 2 | 0 | 3BL7rSNWAmyZojdPz | aog | 2019-01-28T19:51:53.533Z | Aidan O'Gara | aog | null | null | ao | 1,588 | 33 | false | false | null | null | 21 | 198 | 2 | 2 | 5 | 1 | 0 | XtphY3uYHwruKqDyG | User | null | null | null | [
"canModeratePersonal",
"alignmentVoters"
] | null | null | SakEMppQdd4h8TuZB | SocialPreviewType | QG3yrFzgSv9r7tDE9 | <p><i>Written by Zach Freitas-Groff and posted at his request. </i></p><h1><strong>Summary</strong></h1><p>I’m excited to announce a <a href="https://www.longview.org/digital-sentience-consortium/"><u>“Digital Sentience Consortium”</u></a> hosted by <a href="https://www.longview.org/"><u>Longview Philanthropy</u></a>, in collaboration with <a href="https://www.navigation.org/"><u>The Navigation Fund</u></a> and <a href="https://macroscopic.org/"><u>Macroscopic Ventures</u></a>, to support research and applied projects focused on the potential consciousness, sentience, moral status, and experiences of artificial intelligence systems. The opportunities include <a href="https://www.longview.org/digital-sentience-consortium/research-fellowships-on-digital-sentience/"><u>research fellowships</u></a>, <a href="https://www.longview.org/digital-sentience-consortium/career-transition-fellowships-on-digital-sentience/"><u>career transition fellowships</u></a>, and a broad <a href="https://www.longview.org/digital-sentience-consortium/request-for-proposals-applied-work-on-potential-digital-sentience-and-society/"><u>request for proposals</u></a> for applied work on these topics. </p><p>For years, I’ve thought this area was seriously overlooked. It now has growing interest. Twenty-two out of 123 pages of <a href="https://www-cdn.anthropic.com/6be99a52cb68eb70eb9572b4cafad13df32ed995.pdf"><u> Claude 4’s model card</u></a> are about its potential moral patienthood. 
<a href="https://eleosai.org/post/experts-who-say-that-ai-welfare-is-a-serious-near-term-possibility/"><u>Scientific experts increasingly say</u></a> that near-term AI sentience is a real possibility; even the skeptical neuroscientist Anil Seth says, “it is unwise to dismiss the possibility altogether.” We’re hoping to bring new people and projects into the field to increase the chance that society deals with the possibility of digital sentience reasonably, and with concern for all involved.</p><ul><li><a href="https://www.longview.org/digital-sentience-consortium/research-fellowships-on-digital-sentience/">Apply to Research Fellowship</a></li><li><a href="https://www.longview.org/digital-sentience-consortium/career-transition-fellowships-on-digital-sentience/">Apply to Career Transition Fellowship</a></li><li><a href="https://www.longview.org/digital-sentience-consortium/request-for-proposals-applied-work-on-potential-digital-sentience-and-society/">Apply to Request for Proposals</a></li></ul><h1><strong>Motivation & Focus</strong></h1><p>For about as long as I’ve been reading about transformative AI, I’ve wondered whether society would face critical decisions involving AI sentience. Until recently, I thought there was not much to be done here besides perhaps more philosophy of mind and perhaps some ethics—and I was not sure these approaches would make much progress. 
</p><p>Now, I think there are live areas where people can contribute:</p><ul><li>Technically informed research on which AI systems are sentient, like <a href="https://arxiv.org/abs/2308.08708"><u>this paper</u></a> applying existing theories of consciousness to a few AI architectures.</li><li>Innovative approaches to investigate sentience, potentially in a way that avoids having to take a stand on a particular theory of consciousness, like <a href="https://arxiv.org/abs/2410.13787"><u>work on</u></a> <a href="https://arxiv.org/html/2501.11120v1"><u>AI introspection</u></a>.</li><li>Political philosophy and policy research on the proper role of AI in society.</li><li>Work to educate the public about the issue and improve the reasonableness of public discussion.</li><li>Advice and applied work to make it more likely that AI models, if sentient, experience wellbeing.</li></ul><p>The goal of our Digital Sentience Consortium is to spur more work in these areas and more.</p><p>This is an area with a lot... </p> | Written by Zach Freitas-Groff and posted at his request.
Summary
I’m excited to announce a “Digital Sentience Consortium” hosted by Longview Philanthropy, in collaboration with The Navigation Fund and Macroscopic Ventures, to support research and applied projects focused on the potential consciousness, sentience, moral status, and experiences of artificial intelligence systems. The opportunities include research fellowships, career transition fellowships, and a broad request for proposals for applied work on these topics.
For years, I’ve thought this area was seriously overlooked. It now has growing interest. Twenty-two out of 123 pages of Claude 4’s model card are about its potential moral patienthood. Scientific experts increasingly say that near-term AI sentience is a real possibility; even the skeptical neuroscientist Anil Seth says, “it is unwise to dismiss the possibility altogether.” We’re hoping to bring new people and projects into the field to increase the chance that society deals with the possibility of digital sentience reasonably, and with concern for all involved.
* Apply to Research Fellowship
* Apply to Career Transition Fellowship
* Apply to Request for Proposals
Motivation & Focus
For about as long as I’ve been reading about transformative AI, I’ve wondered whether society would face critical decisions involving AI sentience. Until recently, I thought there was not much to be done here besides perhaps more philosophy of mind and perhaps some ethics—and I was not sure these approaches would make much progress.
Now, I think there are live areas where people can contribute:
* Technically informed research on which AI systems are sentient, like this paper applying existing theories of consciousness to a few AI architectures.
* Innovative approaches to investigate sentience, potentially in a way that avoids having to take a stand on a particular theory of consciousness, like work on AI introspection.
* Political philosophy and policy | 1,063 | 1.1.0 | Revision | false | null | null | CrosspostOutput |
|||
rYMWKeeRRrASJiCzY | when-to-be-nice-vs-kind | When to Be Nice vs Kind | null | false | false | false | null | WqpBR8YrcAnZrzGJk | null | true | false | false | false | Post | null | 2025-05-29T15:06:41.227Z | null | false | false | 2 | 2 | 2025-05-29T18:05:42.546Z | false | false | post | [
"3oopbgcjYfvN8B2fp"
] | null | null | JHH8grseo8wYxACRv | 2 | 12 | 20 | false | 0.015683 | null | false | false | 2025-05-30T13:54:06.540Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 4 | 0 | 2025-05-26T20:56:21.674Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 2 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "gHCNhqxuJq2bZ2akb",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-10T11:36:05.706Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Social & Cultural Dynamics",
"needsReview": false,
"noindex": false,
"postCount": 384,
"score": 0,
"shortName": null,
"slug": "social-and-cultural-dynamics",
"suggestedAsFilter": false,
"userId": "qxJ28GN72aiJu96iF",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "SEuoBQeHLYd9dtqpK",
"adminOnly": false,
"afBaseScore": 6,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
}
]
},
"baseScore": 10,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-08-20T00:17:45.616Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Social Skills",
"needsReview": false,
"noindex": false,
"postCount": 55,
"score": 10,
"shortName": null,
"slug": "social-skills",
"suggestedAsFilter": false,
"userId": "SsduPgHwY2zeZpmKT",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "fkABsGCJZ6y9qConW",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T06:06:46.947Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "MiuAZvbQcQ7ethgt3",
"displayName": "Viktor withaK"
},
{
"_id": "dRaCtsAWxk7sgirSY",
"displayName": "Jordan Morgan"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Practical",
"needsReview": false,
"noindex": false,
"postCount": 3411,
"score": 2,
"shortName": null,
"slug": "practical",
"suggestedAsFilter": true,
"userId": "oBSWiHjgproTiThmY",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 12 | 0 | 0 | 3 | 0 | WqpBR8YrcAnZrzGJk | declan-molony | 2022-06-06T19:10:54.544Z | declan-molony | Declan Molony | null | null | null | 990 | 0 | false | false | <p>"If you're thinking without writing, you only think you're thinking." - Leslie Lamport</p> | null | null | 22 | 66 | 0 | 0 | 0 | 1 | 0 | qgdGA4ZEyW7zNdK84 | User | null | null | null | [
"canModeratePersonal"
] | null | null | rYMWKeeRRrASJiCzY | SocialPreviewType | JHH8grseo8wYxACRv | <p>While often used synonymously, there is a subtle yet meaningful distinction between being <i>nice</i> and being <i>kind</i>. Understanding when to prioritize kindness over niceness—and vice versa—helps me better navigate social situations with both authenticity and appropriate boundaries.<br> </p><h2><strong>Definitions</strong></h2><p>Someone who is being <strong>nice</strong> is pleasant to be around, but internally their pleasantness may or may not come from a genuine place of warmth or love.</p><p>Someone who is being <strong>kind</strong> genuinely cares about another’s well-being, even if in the short-term they don’t necessarily make the other person feel good.</p><p>The distinction becomes apparent when someone is one, but not the other.</p><h2><strong>Being nice, but not kind</strong></h2><p>When baristas, for example, smile and ask me how my day is going, they’re being nice. They don’t actually care how I’m doing, yet I still appreciate them because they’re adding pleasantness to what would otherwise be just another mundane economic transaction.</p><h2><strong>Being kind, but not nice</strong></h2><p>A good mother is kind to her children, but not always nice. When she refuses to indulge her kid’s every request for ice cream, she’s not being nice in the short-term, but she’s being kind in the long-term (because she’s saving her kid from ill health, as <a href="https://jamanetwork.com/journals/jamapediatrics/fullarticle/2790364"><u>1 in 5 youths now have prediabetes</u></a> in the US).</p><h2><strong>Being neither nice nor kind</strong></h2><p>Living in a big city, I don’t acknowledge every stranger I pass on the sidewalk. 
I’m not adding pleasantness, nor am I genuinely caring for them—I’m being indifferent.</p><p>On the more extreme side, someone being an asshole is neither nice nor kind.</p><h2><strong>Being both nice and kind</strong></h2><p><a href="https://en.wikipedia.org/wiki/Fred_Rogers"><u>Mister Rogers</u></a> exemplifies niceness when he waves hello to a little girl in his neighborhood, and he also expresses genuine kindness when he finds out her pet bunny is sick and he consoles her.</p><hr><p>In fleeting interactions (like with baristas, waiters, coworkers), it’s okay to simply just be nice, without expressing genuine kindness. Asking a busy waiter how his day is <i>really</i> going can actually be considered rude because he has a lot of customers to serve.</p><p>And with my friends and family who I spend the most time with, I would prefer they be genuine with me rather than just nice or polite. (Though a little bit of both never hurt anyone🙂).</p><p>In my experience, getting this balance wrong—being merely polite with loved ones or trying to 'fix' casual acquaintances—can leave interactions feeling either superficia... </p> | While often used synonymously, there is a subtle yet meaningful distinction between being nice and being kind. Understanding when to prioritize kindness over niceness—and vice versa—helps me better navigate social situations with both authenticity and appropriate boundaries.
Definitions
Someone who is being nice is pleasant to be around, but internally their pleasantness may or may not come from a genuine place of warmth or love.
Someone who is being kind genuinely cares about another’s well-being, even if in the short-term they don’t necessarily make the other person feel good.
The distinction becomes apparent when someone is one, but not the other.
Being nice, but not kind
When baristas, for example, smile and ask me how my day is going, they’re being nice. They don’t actually care how I’m doing, yet I still appreciate them because they’re adding pleasantness to what would otherwise be just another mundane economic transaction.
Being kind, but not nice
A good mother is kind to her children, but not always nice. When she refuses to indulge her kid’s every request for ice cream, she’s not being nice in the short-term, but she’s being kind in the long-term (because she’s saving her kid from ill health, as 1 in 5 youths now have prediabetes in the US).
Being neither nice nor kind
Living in a big city, I don’t acknowledge every stranger I pass on the sidewalk. I’m not adding pleasantness, nor am I genuinely caring for them—I’m being indifferent.
On the more extreme side, someone being an asshole is neither nice nor kind.
Being both nice and kind
Mister Rogers exemplifies niceness when he waves hello to a little girl in his neighborhood, and he also expresses genuine kindness when he finds out her pet bunny is sick and he consoles her.
----------------------------------------
In fleeting interactions (like with baristas, waiters, coworkers), it’s okay to simply just be nice, without expressing genuine kindness. Asking a busy waiter how his day is really | 427 | 1.6.0 | Revision | false | null | null | CrosspostOutput |
|
9THq9RvpbmecWa6Ni | ai-118-claude-ascendant | AI #118: Claude Ascendant | null | false | false | false | null | N9zj5qpTfqmbn9dro | null | true | false | false | false | Post | null | 2025-05-29T14:10:04.603Z | null | false | false | 2 | 2 | null | false | false | post | [] | null | null | nmXLwSmG28igBfqRv | 8 | 18 | 45 | false | 0.023411 | null | false | false | 2025-05-31T23:00:53.583Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | null | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 16 | 0 | 2025-05-29T14:10:04.603Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 69 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "8byoqYZfdwHffYLZ6",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-01T18:44:14.645Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Newsletters",
"needsReview": false,
"noindex": false,
"postCount": 411,
"score": 9,
"shortName": null,
"slug": "newsletters",
"suggestedAsFilter": false,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | QSR8rPZxZzxEXoPjR | 0 | 0 | null | false | null | null | 0 | 18 | 0 | 0 | 8 | 0 | N9zj5qpTfqmbn9dro | zvi | 2009-03-31T20:54:54.077Z | Zvi | Zvi | null | null | null | 51,554 | 146 | false | false | null | null | 936 | 1,461 | 3 | 2 | 7 | 1 | 0 | qgdGA4ZEyW7zNdK84 | User | norm-enforcing | null | true | [
"trustLevel1",
"canModeratePersonal",
"alignmentVoters",
"alignmentForum"
] | null | null | 9THq9RvpbmecWa6Ni | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/9THq9RvpbmecWa6Ni/qqrddl5lumxndy6idq2x | SocialPreviewType | nmXLwSmG28igBfqRv | <p>The big news of this week was of course the release of Claude 4 Opus. I offered two review posts: <a href="https://thezvi.substack.com/p/claude-4-you-safety-and-alignment"><strong>One on safety and alignment</strong></a>, and <a href="https://thezvi.substack.com/p/claude-4-you-the-quest-for-mundane"><strong>one on mundane utility</strong></a>, and a <a href="https://thezvi.substack.com/p/fun-with-veo-3-and-media-generation"><strong>bonus fun post on Google’s Veo 3</strong></a>.</p><p>I am once again defaulting to Claude for most of my LLM needs, although I often will also check o3 and perhaps Gemini 2.5 Pro.</p><p>On the safety and alignment front, Anthropic did extensive testing, and reported that testing in an exhaustive model card. A lot of people got very upset to learn that Opus could, if pushed too hard in the wrong situations engineered for these results, do things like report your highly unethical actions to authorities or try to blackmail developers into not being shut down or replaced. It is good that we now know about these things, and it was quickly observed that similar behaviors can be induced in similar ways from ChatGPT (in particular o3), Gemini and Grok.</p>
<p>Last night DeepSeek gave us R1-0528, but it’s too early to know what we have there.</p><p>Lots of other stuff, as always, happened as well.</p><p>This weekend I will be at LessOnline at Lighthaven in Berkeley. Come say hello.</p>
<h4>Table of Contents</h4>
<ol>
<li><a href="https://thezvi.substack.com/i/164160373/language-models-offer-mundane-utility">Language Models Offer Mundane Utility.</a> People are using them more all the time.</li>
<li><a href="https://thezvi.substack.com/i/164160373/now-with-extra-glaze">Now With Extra Glaze.</a> Claude has some sycophancy issues. ChatGPT is worse.</li>
<li><a href="https://thezvi.substack.com/i/164160373/get-my-agent-on-the-line">Get My Agent On The Line.</a> Suggestions for using Jules.</li>
<li><a href="https://thezvi.substack.com/i/164160373/language-models-don-t-offer-mundane-utility">Language Models Don’t Offer Mundane Utility.</a> Okay, not shocked.</li>
<li><a href="https://thezvi.substack.com/i/164160373/huh-upgrades"><strong>Huh, Upgrades</strong>.</a> Claude gets a voice, DeepSeek gives us R1-0528.</li>
<li><a href="https://thezvi.substack.com/i/164160373/on-your-marks">On Your Marks.</a> The age of benchmarks is in serious trouble. Opus good at code.</li>
<li><a href="https://thezvi.substack.com/i/164160373/choose-your-fighter">Choose Your Fighter.</a> Where is o3 still curiously strong?</li>
<li><a href="https://thezvi.substack.com/i/164160373/deepfaketown-and-botpocalypse-soon">Deepfaketown and Botpocalypse Soon.</a> Bot infestations are getting worse.</li>
<li><a href="https://thezvi.substack.com/i/164160373/fun-with-media-generation">Fun With Media Generation.</a> Reasons AI video might not do much for a while.</li>
<li><a href="https://thezvi.substack.com/i/164160373/playing-the-training-data-game">Playing The Training Data Game.</a> Meta now using European posts to train AI.</li>
<li><a href="https://thezvi.substack.com/i/164160373/they-took-our-jobs"><strong>They Took Our Jobs</strong>.</a> That is indeed what Dario means by bloodbath.</li>
<li><a href="https://thezvi.substack.com/i/164160373/the-art-of-learning">The Art of Learning.</a> Books as a way to force you to think. Do you need that?</li>
<li><a href="https://thezvi.substack.com/i/164160373/the-art-of-the-jailbreak">The Art of the Jailbreak.</a> Pliny did the work once, now anyone can use it. Hmm.</li>
<li><a href="https://thezvi.substack.com/i/164160373/unprompted-attention">Unprompted Attention.</a> Very long system prompts are bad signs for scaling.</li>
<li><a href="https://thezvi.substack.com/i/164160373/get-involved">Get Involved.</a> Softma, Pliny versus robots, OpenPhil, RAND.</li>
<li><a href="https://thezvi.substack.com/i/164160373/introducing">Introducing.</a> Google’s Lyria RealTime for music, Pliny has a website.</li>
<li><a href="https://thezvi.substack.com/i/164160373/in-other-ai-news">In Other AI News.</a> Scale matters.</li>
<li><a href="https://thezvi.substack.com/i/164160373/show-me-the-money">Show Me the Money.</a> AI versus advertising revenue, UAE versus democracy.</li>
<li><a href="https://thezvi.substack.com/i/164160373/nvidia-sells-out">Nvidia Sells Out.</a> Also, they can’t meet deman</li></ol>... | The big news of this week was of course the release of Claude 4 Opus. I offered two review posts: One on safety and alignment, and one on mundane utility, and a bonus fun post on Google’s Veo 3.
I am once again defaulting to Claude for most of my LLM needs, although I often will also check o3 and perhaps Gemini 2.5 Pro.
On the safety and alignment front, Anthropic did extensive testing, and reported that testing in an exhaustive model card. A lot of people got very upset to learn that Opus could, if pushed too hard in the wrong situations engineered for these results, do things like report your highly unethical actions to authorities or try to blackmail developers into not being shut down or replaced. It is good that we now know about these things, and it was quickly observed that similar behaviors can be induced in similar ways from ChatGPT (in particular o3), Gemini and Grok.
Last night DeepSeek gave us R1-0528, but it’s too early to know what we have there.
Lots of other stuff, as always, happened as well.
This weekend I will be at LessOnline at Lighthaven in Berkeley. Come say hello.
TABLE OF CONTENTS
1. Language Models Offer Mundane Utility. People are using them more all the time.
2. Now With Extra Glaze. Claude has some sycophancy issues. ChatGPT is worse.
3. Get My Agent On The Line. Suggestions for using Jules.
4. Language Models Don’t Offer Mundane Utility. Okay, not shocked.
5. Huh, Upgrades. Claude gets a voice, DeepSeek gives us R1-0528.
6. On Your Marks. The age of benchmarks is in serious trouble. Opus good at code.
7. Choose Your Fighter. Where is o3 still curiously strong?
8. Deepfaketown and Botpocalypse Soon. Bot infestations are getting worse.
9. Fun With Media Generation. Reasons AI video might not do much for a while.
10. Playing The Training Data Game. Meta now using European posts to train AI.
11. They Took Our Jobs. That is indeed what Dario means by bloodbath.
12. The Art of Learning. Books as a way to force you | 17,236 | 1.0.1 | Revision | false | null | null | CrosspostOutput |
|
7c9NGK9Fiiufj5fks | social-capital-does-it-matter | Social Capital - Does it Matter? | null | false | false | false | null | 4dEavgQafmvDCqR42 | null | true | false | false | false | Post | null | 2025-05-29T12:26:08.640Z | null | false | false | 2 | 2 | null | false | false | post | [] | null | null | 4vg5tvqhdXjK93T7J | 1 | 4 | -9 | false | -0.004768 | null | false | false | 2025-05-30T12:39:35.310Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | -4 | 0 | 2025-05-29T12:06:32.057Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 7 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "3uE2pXvbcnS9nnZRE",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:50.898Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 27,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "zJtgSyKntXrnkArbY",
"displayName": "kistune"
},
{
"_id": "XqyQTGFGoKfZtdqN3",
"displayName": "kINo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Modeling",
"needsReview": false,
"noindex": false,
"postCount": 5855,
"score": 2,
"shortName": null,
"slug": "world-modeling",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 4 | 0 | 0 | 2 | 0 | 4dEavgQafmvDCqR42 | momcilo | 2025-05-09T13:09:25.517Z | momcilo | Momcilo | null | null | Momcilo Mijovic | -12 | 0 | false | false | <p>Researcher into sense-making, systems and futurism</p> | null | null | 2 | 2 | 0 | 0 | 0 | 0.8 | 0 | grecHJcgkb3KW5wnM | User | null | null | null | null | null | null | 7c9NGK9Fiiufj5fks | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/7c9NGK9Fiiufj5fks/mdyifix6tfylks7stb8p | SocialPreviewType | 4vg5tvqhdXjK93T7J | <h3><strong>Within the Walls</strong></h3><p>There's plenty of funny business. My dad was enthusiastically arguing that we should invest in this little known bank offering a high return. Your money doubles every month! Who told you that, I asked? The dentist. He now lives in a big house in Dedinje and has a huge painting of Petar Lubarda on the wall.</p><p>The next day he was fixing the VW Beetle - not his. It belonged to a friend who lived in Germany and occasionally needed to be chauffeured from and to the airport. The thing was rotten in places, but the engine started every time and kept up with other traffic by constantly being revved out of its wits. Now it was lying in bits on the street, oil in slicks and drops and dad's feet protruding from under the body.</p><p>My mum loved socialising. She took pride in her taste in the arts and sought company of writers, painters and actors. Her old school friends would often come for drinks in the evening and most of them looked like they came out of 70s French movies.</p><p>My great grandmother was visiting us again. She spent days sitting on a chair, narrating same old stories. This time It was about wisdom - how she systematically refused to follow doctor's advice and evaded the surgical knife. "He warned me, I've got 6 months max if I didn't have it removed. He died of a stroke 15 years ago. 
What a pity, he was a young man..."</p><p>Uncle and aunt lived with us - 7 people altogether. Me and my sister in one room, parents in the adjacent one, grandma in the living room and uncle and aunt in the second living room. People walked in bathrobes, queued in front of the only bathroom, smoked and chatted. There was always something cooking. When the phone rang - it was a fight who'd get to it first.</p><p>The dining room was the meeting place. My dad never felt comfortable joining in, he didn't smoke or drink coffee. Being socially awkward - he preferred staying in his room and reading motoring magazines that he was subscribed to. They'd arrive every month from England and he'd exercise delayed gratification by not opening the envelope straight away. All the chores that he normally postponed would suddenly be prioritised. That's what kept him motivated.</p><p>Uncle and aunt worked on films. She was an assistant director and he designed and... </p> | Within the Walls
There's plenty of funny business. My dad was enthusiastically arguing that we should invest in this little known bank offering a high return. Your money doubles every month! Who told you that, I asked? The dentist. He now lives in a big house in Dedinje and has a huge painting of Petar Lubarda on the wall.
The next day he was fixing the VW Beetle - not his. It belonged to a friend who lived in Germany and occasionally needed to be chauffeured from and to the airport. The thing was rotten in places, but the engine started every time and kept up with other traffic by constantly being revved out of its wits. Now it was lying in bits on the street, oil in slicks and drops and dad's feet protruding from under the body.
My mum loved socialising. She took pride in her taste in the arts and sought company of writers, painters and actors. Her old school friends would often come for drinks in the evening and most of them looked like they came out of 70s French movies.
My great grandmother was visiting us again. She spent days sitting on a chair, narrating the same old stories. This time it was about wisdom - how she systematically refused to follow doctor's advice and evaded the surgical knife. "He warned me, I've got 6 months max if I didn't have it removed. He died of a stroke 15 years ago. What a pity, he was a young man..."
Uncle and aunt lived with us - 7 people altogether. Me and my sister in one room, parents in the adjacent one, grandma in the living room and uncle and aunt in the second living room. People walked in bathrobes, queued in front of the only bathroom, smoked and chatted. There was always something cooking. When the phone rang - it was a fight who'd get to it first.
The dining room was the meeting place. My dad never felt comfortable joining in, he didn't smoke or drink coffee. Being socially awkward - he preferred staying in his room and reading motoring magazines that he was subscribed to. They'd arrive every month | 1,708 | 1.2.1 | Revision | false | null | null | CrosspostOutput |
dGLmBtk8ypAMHFPiR | cross-posting-to-substack | Cross-posting to Substack | null | false | false | false | null | TtEoCrFeowCGb6rFK | null | true | false | false | false | Post | null | 2025-05-29T11:10:01.913Z | null | false | false | 2 | 2 | 2025-05-29T18:00:04.214Z | false | false | post | [] | null | null | hN2yx3ymDkcujrmps | 0 | 3 | 12 | false | 0.011851 | null | false | false | 2025-05-29T11:10:01.913Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | null | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 2 | 0 | 2025-05-29T11:10:01.914Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 1 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "fkABsGCJZ6y9qConW",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T06:06:46.947Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "MiuAZvbQcQ7ethgt3",
"displayName": "Viktor withaK"
},
{
"_id": "dRaCtsAWxk7sgirSY",
"displayName": "Jordan Morgan"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Practical",
"needsReview": false,
"noindex": false,
"postCount": 3411,
"score": 2,
"shortName": null,
"slug": "practical",
"suggestedAsFilter": true,
"userId": "oBSWiHjgproTiThmY",
"voteCount": 2,
"wikiOnly": false
}
] | ma5dgL5yFHRxKLZKv | 0 | 0 | null | false | null | null | 0 | 3 | 0 | 0 | 2 | 0 | TtEoCrFeowCGb6rFK | jkaufman | 2010-11-04T21:42:19.863Z | jkaufman | jefftk | null | null | Jeff Kaufman | 21,921 | 3 | false | false | <p>Co-lead (Near-Term Detection) at the Nucleic Acid Observatory in Boston. Speaking for myself unless I say otherwise.</p> | null | null | 1,018 | 2,211 | 0 | 0 | 1 | 1 | 2 | r38pkCm7wF4M44MDQ | User | null | null | null | [
"trustLevel1",
"canModeratePersonal",
"alignmentVoters"
] | null | null | dGLmBtk8ypAMHFPiR | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/dGLmBtk8ypAMHFPiR/w58o9iwsm2xlssccebep | SocialPreviewType | hN2yx3ymDkcujrmps | <p><span>
Historically people kept up with blogs via RSS, but it's been on its
way out for over a decade. These days receiving posts via email is
popular, and I should probably make my posts available this way. I
considered implementing my own emailing system, but bulk email is (or
at least was, I'm not up to date here) a pain. Instead I decided to
start cross-posting to Substack: </span>
<a href="https://jefftkaufman.substack.com/">jefftkaufman.substack.com</a>.
This will be yet another way to read my posts, similar to the
<a href="https://www.lesswrong.com/users/jkaufman">LessWrong mirror</a>
and my
<a href="https://www.facebook.com/jefftk">text-only FB
cross-posts</a>.
</p><p>
I have a full RSS feed of all my posts, and Substack imported it fine.
It doesn't look like there's an option to do ongoing RSS-based
imports, but copy-paste seems to work well enough; I did this post and
the <a href="https://jefftkaufman.substack.com/p/quick-minimal-playhouse">previous
one</a> that way. At some point I'll look into automatic
cross-posting, though right now it looks like Substack doesn't support
anything good. And if I'm going to reverse engineer something I'll
start with their comments implementation, since I always want to
rehost comments.
</p>
<p>
<a href="https://www.jefftk.com/jefftk-substack-example-big.png"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/dGLmBtk8ypAMHFPiR/zrcaxtv9fkavz8iafydw" srcset="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/dGLmBtk8ypAMHFPiR/zrcaxtv9fkavz8iafydw 550w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/dGLmBtk8ypAMHFPiR/zxjuxupbdoyfxmtrq9yq 1100w"></a></p><div></div>
<p></p>
<p>
One aspect that's a bit eerie is the URLs: both Substack and my blog
would use <code>/p/post-name-in-title-case</code> as the url for a
post titled "Post Name In Title Case". I've been doing this since <a href="https://github.com/jeffkaufman/webscripts/commit/97eea5a43225df9df21efcdf6485e97f828be0b9">2013-10-28</a>
and Substack got started in <a href="https://en.wikipedia.org/wiki/Substack">2017</a> so I know I
didn't copy them ;)
</p>
<p><i>Comment via: <a href="https://jefftkaufman.substack.com/p/cross-posting-to-substack">substack</a></i></p> | Historically people kept up with blogs via RSS, but it's been on its way out for over a decade. These days receiving posts via email is popular, and I should probably make my posts available this way. I considered implementing my own emailing system, but bulk email is (or at least was, I'm not up to date here) a pain. Instead I decided to start cross-posting to Substack: jefftkaufman.substack.com. This will be yet another way to read my posts, similar to the LessWrong mirror and my text-only FB cross-posts.
I have a full RSS feed of all my posts, and Substack imported it fine. It doesn't look like there's an option to do ongoing RSS-based imports, but copy-paste seems to work well enough; I did this post and the previous one that way. At some point I'll look into automatic cross-posting, though right now it looks like Substack doesn't support anything good. And if I'm going to reverse engineer something I'll start with their comments implementation, since I always want to rehost comments.
One aspect that's a bit eerie is the URLs: both Substack and my blog would use /p/post-name-in-title-case as the url for a post titled "Post Name In Title Case". I've been doing this since 2013-10-28 and Substack got started in 2017 so I know I didn't copy them ;)
Comment via: substack | 226 | 1.0.1 | Revision | false | null | null | CrosspostOutput |
gqder9YeBYLspyaqo | reflections-on-ai-wisdom-plus-announcing-wise-ai-wednesdays | Reflections on AI Wisdom, plus announcing Wise AI Wednesdays | null | false | false | false | null | XLwKyCK7JmC292ZCC | null | true | false | false | false | Post | null | 2025-05-29T07:13:39.508Z | null | false | false | 2 | 2 | 2025-05-29T18:00:14.275Z | false | false | post | [] | null | null | saAoHi2Tm788CK7ki | 0 | 4 | 18 | false | 0.0145 | null | false | false | 2025-05-29T07:13:39.508Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 8 | 0 | 2025-05-29T06:50:52.856Z | false | false | easy-going | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 3 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "Ki4TywKtMREHp9zFC",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": null,
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2025-06-05T12:30:00.389Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": null,
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Artificial Wisdom",
"needsReview": true,
"noindex": false,
"postCount": 4,
"score": 0,
"shortName": null,
"slug": "artificial-wisdom-1",
"suggestedAsFilter": false,
"userId": "XLwKyCK7JmC292ZCC",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 4 | 0 | 0 | 3 | 0 | XLwKyCK7JmC292ZCC | chris_leong | 2009-05-28T03:08:43.251Z | Chris_Leong | Chris_Leong | null | null | null | 7,651 | 457 | false | false | null | null | 227 | 2,158 | 3 | 32 | 206 | 1 | 71 | r38pkCm7wF4M44MDQ | User | easy-going | null | null | [
"trustLevel1",
"alignmentVoters",
"canModeratePersonal",
"alignmentForum"
] | null | null | gqder9YeBYLspyaqo | SocialPreviewType | saAoHi2Tm788CK7ki | <p>I recently finished leading an <a href="https://www.aisafety.camp/"><i><u>AI Safety Camp</u></i></a> project on <i>Wise AI Advisors</i><span class="footnote-reference" data-footnote-reference="" data-footnote-index="1" data-footnote-id="tj58loenbf" role="doc-noteref" id="fnreftj58loenbf"><sup><a href="#fntj58loenbf">[1]</a></sup></span> (my team included <i>Chris Cooper</i>, <i>Matt Hampton</i>, and <i>Richard Kroon</i>). Since we want to share our work in an orderly fashion, I’m launching <i>Wise AI Wednesdays</i>. Each Wednesday<span class="footnote-reference" data-footnote-reference="" data-footnote-index="2" data-footnote-id="60dn8nshtha" role="doc-noteref" id="fnref60dn8nshtha"><sup><a href="#fn60dn8nshtha">[2]</a></sup></span>, I (or one of my teammates) will be sharing a post, initially drawn from our <i>AI Safety Camp</i> outputs, but later including shifting to include future work and outputs summaries or commentary on related research. I’m hoping that a regular posting schedule will help cultivate <i>Wise AI/Wise AI Advisors</i> as a subfield of <i>AI Safety</i>.</p><p>This inaugural post provides an update on how my views on AI and wisdom have changed since I won 3rd prize in the <a href="https://www.lesswrong.com/posts/hiuTzNBqG2EYg6qM5/winners-of-the-essay-competition-on-the-automation-of-wisdom"><i><u>Automation of Wisdom and Philosophy Competition</u></i></a>. My views have evolved considerably since then, so I thought it was worth listing a number of updates I've made:</p><ul><li><strong>Broadening My Focus</strong>: I had originally labelled my research direction as <a href="https://aiimpacts.org/an-overview-of-obvious-approaches-to-training-wise-ai-advisors/"><i><u>Wise AI Advisors via Imitation Learning</u></i></a>. 
While I still see this as a particularly promising approach, I’m now interested in <a href="https://www.lesswrong.com/posts/SbAofYCgKkaXReDy4/chris_leong-s-shortform?commentId=Zcg9idTyY5rKMtYwo"><i><u>Wise AI Advisors</u></i></a> <i>more broadly</i>. Many different reasonable-sounding paths exist; it would be arrogant to believe I’ve found the One True Path without much more investigation. I’m now more inclined to follow a 'let a thousand flowers bloom' path.</li><li><strong>More Favourable Funding Landscape</strong>: There seems to be more desire to fund work adjacent to this space than I had anticipated. For example: <a href="https://cosmosgrants.org/truth"><i><u>Cosmos x FIRE Truth-seeking Round</u></i></a> and <a href="https://www.flf.org/fellowship"><i><u>Fellowship on AI for Human Reasoning</u></i></a>. I initially thought significant effort would be needed to convince funders, but it now seems quite possible to secure funding for Wise AI projects, even though you might have to choose a project that intersects with a funder's interest level.</li><li><strong>Increased Emphasis on Field-Building</strong>: Consequently, my focus has shifted from solely direct research progress to a combination of research and field-building. While I'm still defining what this entails, activities like <i>establishing the case for Wise AI</i>, <i>exploring theories of impact</i>, and <i>identifying concrete, high-priority projects</i> seem important right now.</li><li><strong>Expanded View of Possible Contributions</strong>: My ideas about what kind of work might be useful have broadened significantly thanks to AI Safety Camp and conversations with others. I now see a much wider range of ways people can contribute (see my <a href="https://docs.google.com/document/d/11i9FpvLkj9TVH33I8t8AwUPwErCZSVw38Zr24Zg_Gas/edit?tab=t.0"><u>draft post with </u></a></li></ul>... 
| I recently finished leading an AI Safety Camp project on Wise AI Advisors[1] (my team included Chris Cooper, Matt Hampton, and Richard Kroon). Since we want to share our work in an orderly fashion, I’m launching Wise AI Wednesdays. Each Wednesday[2], I (or one of my teammates) will be sharing a post, initially drawn from our AI Safety Camp outputs, but later shifting to include future work and summaries of or commentary on related research. I’m hoping that a regular posting schedule will help cultivate Wise AI/Wise AI Advisors as a subfield of AI Safety.
This inaugural post provides an update on how my views on AI and wisdom have changed since I won 3rd prize in the Automation of Wisdom and Philosophy Competition. My views have evolved considerably since then, so I thought it was worth listing a number of updates I've made:
* Broadening My Focus: I had originally labelled my research direction as Wise AI Advisors via Imitation Learning. While I still see this as a particularly promising approach, I’m now interested in Wise AI Advisors more broadly. Many different reasonable-sounding paths exist; it would be arrogant to believe I’ve found the One True Path without much more investigation. I’m now more inclined to follow a 'let a thousand flowers bloom' path.
* More Favourable Funding Landscape: There seems to be more desire to fund work adjacent to this space than I had anticipated. For example: Cosmos x FIRE Truth-seeking Round and Fellowship on AI for Human Reasoning. I initially thought significant effort would be needed to convince funders, but it now seems quite possible to secure funding for Wise AI projects, even though you might have to choose a project that intersects with a funder's interest level.
* Increased Emphasis on Field-Building: Consequently, my focus has shifted from solely direct research progress to a combination of research and field-building. While I'm still defining what this entails, activities like establishing the case | 792 | 1.10.0 | Revision | true | true | agbHW75tkHHswJgHE | CrosspostOutput |
||
zAcYRJP9CZcYXTs7o | what-was-so-great-about-move-37 | What was so great about Move 37? | null | false | false | false | null | cHD3Sm7H4e5yeup9Z | null | true | false | false | false | Post | 2025-05-29T07:00:45.061Z | null | false | false | 2 | 2 | 2025-05-29T18:05:19.294Z | false | false | question | [
"P3SjKGWqoypxqeY2d"
] | null | null | dadd2rGP7kPwLWmvi | 1 | 10 | 18 | false | 0.014522 | null | false | false | 2025-05-29T13:11:22.531Z | null | null | null | null | null | true | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 5 | 0 | 2025-05-27T17:41:37.507Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 4 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 10 | 0 | 0 | 4 | 0 | cHD3Sm7H4e5yeup9Z | caleb-biddulph | 2021-05-20T16:24:11.550Z | caleb-biddulph | Caleb Biddulph | null | null | Caleb Biddulph | 915 | 33 | false | false | null | null | 13 | 135 | 0 | 1 | 2 | 1 | 1 | r38pkCm7wF4M44MDQ | User | null | null | null | [
"alignmentVoters",
"canModeratePersonal"
] | null | null | zAcYRJP9CZcYXTs7o | SocialPreviewType | dadd2rGP7kPwLWmvi | <p>I frequently use <a href="https://en.wikipedia.org/wiki/AlphaGo_versus_Lee_Sedol#Game_2">"Move 37"</a> as a shorthand for "AI that comes up with creative, highly effective ideas that no human would ever consider." Often the implication is that reinforcement learning (as used in AlphaGo) has some "secret sauce" that could never be replicated by imitation learning.</p><p>But I realize that I don't know the details of Move 37 very well, other than secondhand accounts from Go experts of how "groundbreaking" it was. I've never played Go, and I have basically no knowledge of the rules or strategies beyond the most basic descriptions. Considering how influential Move 37 is on my views about AI, it seems like I'd better try to understand what was so special about it.</p><p>I'd be interested in an explanation that builds up the necessary understanding from the ground up. This could look like: "Read this tutorial on the rules of Go, study these wiki pages about specific concepts and strategies, look at these example games, and finally read my explanation of Move 37 which uses everything you've learned."</p><p>Extremely ambitiously, after reading this explanation, I'd be able to look at a series of superficially similar Go boards, distinguish whether it might be a good idea to do a Move-37-like play, identify where exactly to move if so, and explain my answer. That may be unrealistic to achieve in a short time, but I'd be interested in getting as close as possible. 
An easier version of that challenge would use heavily-annotated Go boards that abstract away some parts of the necessary cognition, with notes like "this section of the board is very important to control" or "this piece has property A" or "these pieces are in formation B."<span class="footnote-reference" data-footnote-reference="" data-footnote-index="1" data-footnote-id="m8eu0j74jse" role="doc-noteref" id="fnrefm8eu0j74jse"><sup><a href="#fnm8eu0j74jse">[1]</a></sup></span></p><p>If part of the explanation is "when you do an extensive Monte Carlo Tree Search from this board state guided by XYZ heuristics, Move 37 turns out to be the best move," that seems like a pretty good explanation to me—as long as the search tree is small enough that it plausibly could have been explored by AlphaGo during its match with Lee Sedol. I'm mainly interested in trying to understand the intuition behind Move 37 in the way AlphaGo might have "understood" it. If the move couldn't be found by a human without using brute force search, that would be valuable to know.</p><p>I'm particularly interested in an explanation of Move 37 because I want to know whether such an explanation is even <i>possible</i>. When we have superintelligent AI solving r... </p> | I frequently use "Move 37" as a shorthand for "AI that comes up with creative, highly effective ideas that no human would ever consider." Often the implication is that reinforcement learning (as used in AlphaGo) has some "secret sauce" that could never be replicated by imitation learning.
But I realize that I don't know the details of Move 37 very well, other than secondhand accounts from Go experts of how "groundbreaking" it was. I've never played Go, and I have basically no knowledge of the rules or strategies beyond the most basic descriptions. Considering how influential Move 37 is on my views about AI, it seems like I'd better try to understand what was so special about it.
I'd be interested in an explanation that builds up the necessary understanding from the ground up. This could look like: "Read this tutorial on the rules of Go, study these wiki pages about specific concepts and strategies, look at these example games, and finally read my explanation of Move 37 which uses everything you've learned."
Extremely ambitiously, after reading this explanation, I'd be able to look at a series of superficially similar Go boards, distinguish whether it might be a good idea to do a Move-37-like play, identify where exactly to move if so, and explain my answer. That may be unrealistic to achieve in a short time, but I'd be interested in getting as close as possible. An easier version of that challenge would use heavily-annotated Go boards that abstract away some parts of the necessary cognition, with notes like "this section of the board is very important to control" or "this piece has property A" or "these pieces are in formation B."[1]
If part of the explanation is "when you do an extensive Monte Carlo Tree Search from this board state guided by XYZ heuristics, Move 37 turns out to be the best move," that seems like a pretty good explanation to me—as long as the search tree is small enough that it plausibly could have been explored by AlphaGo during its match with | 910 | 1.10.0 | Revision | false | null | null | CrosspostOutput |
|||
oohR7Suppg4BygdzS | procedural-vs-causal-understanding | Procedural vs. Causal Understanding | null | false | false | false | null | cHD3Sm7H4e5yeup9Z | null | true | false | false | false | Post | null | 2025-05-29T07:00:43.326Z | null | false | false | 2 | 2 | 2025-05-29T18:03:34.653Z | false | false | post | [
"P3SjKGWqoypxqeY2d"
] | null | null | orqpEE8vvQfBuZisd | 2 | 3 | 6 | false | 0.008694 | null | false | false | 2025-05-29T00:01:21.914Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 2 | 0 | 2025-05-28T00:03:12.360Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 3 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 3 | 0 | 0 | 3 | 0 | cHD3Sm7H4e5yeup9Z | caleb-biddulph | 2021-05-20T16:24:11.550Z | caleb-biddulph | Caleb Biddulph | null | null | Caleb Biddulph | 915 | 33 | false | false | null | null | 13 | 135 | 0 | 1 | 2 | 1 | 1 | r38pkCm7wF4M44MDQ | User | null | null | null | [
"alignmentVoters",
"canModeratePersonal"
] | null | null | oohR7Suppg4BygdzS | SocialPreviewType | orqpEE8vvQfBuZisd | <p>When you explain your strategy for solving problems in a certain domain, you can try to convey two different types of understanding:</p><ul><li><strong>Procedural understanding </strong>is about how to execute a strategy.<ul><li>When you've effectively explained your procedural understanding to somebody, they can solve a wide variety of problems in the relevant domain in the same way that you would.</li></ul></li><li><strong>Causal understanding </strong>is about <i>why</i> a strategy helps solve the problem.<ul><li>When you've effectively explained your causal understanding to somebody, they know the reasons that you believe the strategy works.</li></ul></li></ul><p>It's possible to have causal understanding without procedural understanding. For example, I know that the correct strategy for tightrope-walking works by keeping one's center of gravity steady above the rope, but that doesn't mean I know the right techniques or have the muscle memory to actually do it.</p><p>It's also possible to have procedural understanding without causal understanding. For example, I have no idea how ibuprofen works, but I expect that my procedural understanding that "when in pain, taking ibuprofen relieves that pain" could help me accomplish the goal "avoid being in pain."</p><p>I'll call an explanation that conveys procedural understanding a "procedural explanation" and one that conveys causal understanding a "causal explanation."</p><p>Often, the most effective procedural explanation of a strategy will include a causal explanation. For example, teaching a chess novice how to execute the Sicilian Defense will probably help them improve at chess. 
But their success will be limited until they gain a deeper causal understanding of why that opening is effective, letting them adapt the strategy to unforeseen situations.</p><p>The more robustly you need to apply a strategy, the more useful it becomes to have a good causal understanding. To get better at avoiding pain in more and more situations, I could try to gain more and more procedural understanding about ibuprofen: the kinds of pain that ibuprofen won't help, what substances have a similar effect, even how to synthesize ibuprofen myself. But at a certain level of detail, the most efficient procedural explanation will probably include a causal explanation of how ibuprofen's molecular structure interacts with other chemicals in the body to reduce pain.</p><p>When attempting to explain the strategies learned by an AI during training,<span class="footnote-reference" data-footnote-reference="" data-footnote-index="1" data-footnote-id="o3zbi1fmtvn" role="doc-noteref" id="fnrefo3zbi1fmtvn"><sup><a href="#fno3zbi1fmtvn">[1]</a></sup></span> we'd rather have a causal expl... </p> | When you explain your strategy for solving problems in a certain domain, you can try to convey two different types of understanding:
* Procedural understanding is about how to execute a strategy.
* When you've effectively explained your procedural understanding to somebody, they can solve a wide variety of problems in the relevant domain in the same way that you would.
* Causal understanding is about why a strategy helps solve the problem.
* When you've effectively explained your causal understanding to somebody, they know the reasons that you believe the strategy works.
It's possible to have causal understanding without procedural understanding. For example, I know that the correct strategy for tightrope-walking works by keeping one's center of gravity steady above the rope, but that doesn't mean I know the right techniques or have the muscle memory to actually do it.
It's also possible to have procedural understanding without causal understanding. For example, I have no idea how ibuprofen works, but I expect that my procedural understanding that "when in pain, taking ibuprofen relieves that pain" could help me accomplish the goal "avoid being in pain."
I'll call an explanation that conveys procedural understanding a "procedural explanation" and one that conveys causal understanding a "causal explanation."
Often, the most effective procedural explanation of a strategy will include a causal explanation. For example, teaching a chess novice how to execute the Sicilian Defense will probably help them improve at chess. But their success will be limited until they gain a deeper causal understanding of why that opening is effective, letting them adapt the strategy to unforeseen situations.
The more robustly you need to apply a strategy, the more useful it becomes to have a good causal understanding. To get better at avoiding pain in more and more situations, I could try to gain more and more procedural understanding about ibuprofen: the kinds of pain that i | 750 | 1.9.0 | Revision | false | null | null | CrosspostOutput |
||
yyvs3naEy8Rh8mzaQ | security-mindset-hacking-pinball-high-scores | Security Mindset: Hacking Pinball High Scores | null | false | false | false | null | BtbwfsEyeT4P2eqXu | null | true | false | false | false | Post | https://gwern.net/blog/2025/pinball-hacking | 2025-05-29T03:39:02.103Z | null | false | false | 2 | 2 | 2025-05-29T18:00:26.366Z | false | false | linkpost | [] | null | null | Tu3jfKsw2HKQa4caS | 3 | 11 | 27 | false | 0.019362 | null | false | false | 2025-06-01T20:21:29.423Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 7 | 0 | 2025-05-29T03:38:00.552Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 1 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "CyFfBfRAm7pP83r5p",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-08-18T22:04:45.701Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Reward Functions",
"needsReview": false,
"noindex": false,
"postCount": 46,
"score": 9,
"shortName": null,
"slug": "reward-functions",
"suggestedAsFilter": false,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "m2DsR4r4HRSaLSPW3",
"adminOnly": false,
"afBaseScore": 9,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 20,
"canEditUserIds": null,
"core": false,
"createdAt": "2022-02-16T00:36:56.510Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "JyNyET5GhacnAA84q",
"displayName": "Mark Burnett"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Security Mindset",
"needsReview": false,
"noindex": false,
"postCount": 65,
"score": 20,
"shortName": null,
"slug": "security-mindset",
"suggestedAsFilter": false,
"userId": "Q7NW4XaWQmfPfdcFj",
"voteCount": 3,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 11 | 0 | 0 | 4 | 0 | BtbwfsEyeT4P2eqXu | gwern | 2009-02-27T22:16:11.237Z | gwern | gwern | null | null | null | 79,935 | 1,617 | false | false | <p><a href="https://www.gwern.net">https://gwern.net/</a></p> | null | null | 188 | 11,819 | 0 | 4 | 206 | 1 | 51 | r38pkCm7wF4M44MDQ | User | null | null | false | [
"trustLevel1",
"canModeratePersonal",
"alignmentVoters",
"alignmentForum"
] | null | null | yyvs3naEy8Rh8mzaQ | SocialPreviewType | Tu3jfKsw2HKQa4caS | 1 | 1.2.0 | Revision | false | null | null | CrosspostOutput |
|||
djEbJargnKbLLkmRF | quick-minimal-playhouse | Quick Minimal Playhouse | null | false | false | false | null | TtEoCrFeowCGb6rFK | null | true | false | false | false | Post | null | 2025-05-29T02:10:01.985Z | null | false | false | 2 | 2 | null | false | false | post | [] | null | null | aBYK7eXTK9BHXHKJd | 1 | 6 | 17 | false | 0.00913 | null | false | false | 2025-05-29T23:02:58.695Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | null | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 4 | 0 | 2025-05-29T02:10:01.985Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 2 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "fkABsGCJZ6y9qConW",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T06:06:46.947Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "MiuAZvbQcQ7ethgt3",
"displayName": "Viktor withaK"
},
{
"_id": "dRaCtsAWxk7sgirSY",
"displayName": "Jordan Morgan"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Practical",
"needsReview": false,
"noindex": false,
"postCount": 3411,
"score": 2,
"shortName": null,
"slug": "practical",
"suggestedAsFilter": true,
"userId": "oBSWiHjgproTiThmY",
"voteCount": 2,
"wikiOnly": false
}
] | ma5dgL5yFHRxKLZKv | 0 | 0 | null | false | null | null | 0 | 6 | 0 | 0 | 4 | 0 | TtEoCrFeowCGb6rFK | jkaufman | 2010-11-04T21:42:19.863Z | jkaufman | jefftk | null | null | Jeff Kaufman | 21,921 | 3 | false | false | <p>Co-lead (Near-Term Detection) at the Nucleic Acid Observatory in Boston. Speaking for myself unless I say otherwise.</p> | null | null | 1,018 | 2,211 | 0 | 0 | 1 | 1 | 2 | r38pkCm7wF4M44MDQ | User | null | null | null | [
"trustLevel1",
"canModeratePersonal",
"alignmentVoters"
] | null | null | djEbJargnKbLLkmRF | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/djEbJargnKbLLkmRF/rkpdguvar5jdfqscdafw | SocialPreviewType | aBYK7eXTK9BHXHKJd | <p><span>
When I came home from work today Lily was very excited: she wanted to
build a playhouse for our 1.11yo neighbor Jo. Lily had gotten
parental permission, the yard had space, and I was the only remaining
obstacle. I'd normally be pretty excited about a project like this,
but it was 5pm and I was signed up to cook dinner.
</span>
</p><p>
On the other hand, we'd recently been reading the <a href="https://en.wikipedia.org/wiki/Little_House_in_the_Big_Woods">Little
House</a> books and <a href="https://en.wikipedia.org/wiki/Farmer_Boy">Farmer Boy</a> and
we'd been joking about how the fathers in the books built things
unrealistically quickly. Most recently, Almanzo's father <a href="https://www.fadedpage.com/books/20200431/html.php#Page_299">builds
a bobsled</a> from scratch in a single day, including identifying
and felling the trees. Still, most of the unrealism comes from what
they had to work with: no pre-cut pre-seasoned lumber, no power tools,
pegs for fasteners, etc. Perhaps with modern tools we could build a
minimal playhouse together and still have dinner on the table by 7pm?
Sounds fun!
</p><p>
A key component here is that I already had everything I needed (<a href="https://www.jefftk.com/p/pantry-staples-for-diy">DIY Pantry
Staples</a>) on hand from <a href="https://www.jefftk.com/p/bathroom-is-usable">past</a> <a href="https://www.jefftk.com/p/gut-renovating-another-bathroom">projects</a>:
plywood, 2x3s and 2x4s, screws, wood glue, plastic sheeting, staples,
saw, drill, belt sander.
</p><p>
I decided to make an open <a href="https://en.wikipedia.org/wiki/Lean-to">lean-to</a>, though three
sides would be partly blocked by bracing. I found a piece of plywood
about the right size, and glued and screwed 2x3s to the outside edge.
Lily and I cut vertical supports at a 22.5° angle, 42" for the
back two and 60" for the front two. We carried it all outside and
assembled the rest upside down. Nora helped glue on the supports:
</p><p>
<a href="https://www.jefftk.com/playhouse-gluing-big.jpg"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/djEbJargnKbLLkmRF/nrhftvkcumxlfbsiu7ov" srcset="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/djEbJargnKbLLkmRF/nrhftvkcumxlfbsiu7ov 550w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/djEbJargnKbLLkmRF/y8azbupkwwfwsfjf2glx 1100w"></a></p><div></div>
<p></p><p>
Once the supports were in place it needed diagonal braces or it was
not going to be sturdy enough. Lily helped me cut these as well, and
we glued and screwed these on. I sanded any sharp corners, and then
stapled plastic sheeting over the top.
</p><p>
<a href="https://www.jefftk.com/playhouse-complete-with-sheeting-big.jpg"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/djEbJargnKbLLkmRF/qxwggcb11xnialfrqwwa" srcset="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/djEbJargnKbLLkmRF/qxwggcb11xnialfrqwwa 550w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/djEbJargnKbLLkmRF/kok9ppd2julljdrwurmh 1100w"></a></p><div></div>
<p></p><p>
Here's a 3.11yo for scale:
</p><p>
<a href="https://www.jefftk.com/nora-in-playhouse-big.jpg"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/djEbJargnKbLLkmRF/petfbalujqyswa9s900g" srcset="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/djEbJargnKbLLkmRF/petfbalujqyswa9s900g 550w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/djEbJargnKbLLkmRF/u0dfee7om7ttynp80dai 1100w"></a></p><div></div>
<p></p><p>
And an 11yo:
</p><p>
<a href="https://www.jefftk.com/playhouse-lily-for-scale-big.jpg"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/djEbJargnKbLLkmRF/qwqsdj2b72umktqjofcc" srcset="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/djEbJargnKbLLkmRF/qwqsdj2b72umktqjofcc 550w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/djEbJargnKbLLkmRF/guavg9q5feouuxsivysx 1100w"></a></p><div></div>
<p></p><p>
Possibly we'll paint it, but if not it's done!
</p><p>
It took ~1hr20min from end to end. The longest step was probably when
I needed to get a specific piece of wood out from under a large pile
of other lumber. And of course this speed was only possible because
of the simple design, having the materials and tools on hand, and the
speed of power tools.
</p><p>
It was fun, and the kids and I are looking forward to giving it to Jo!
</p><p>
(I made sure to publish this tonight before going to bed, because
otherwise Julia will see the pictures on my camera roll and worry that
I'm abusing her abse... </p> | When I came home from work today Lily was very excited: she wanted to build a playhouse for our 1.11yo neighbor Jo. Lily had gotten parental permission, the yard had space, and I was the only remaining obstacle. I'd normally be pretty excited about a project like this, but it was 5pm and I was signed up to cook dinner.
On the other hand, we'd recently been reading the Little House books and Farmer Boy and we'd been joking about how the fathers in the books built things unrealistically quickly. Most recently, Almanzo's father builds a bobsled from scratch in a single day, including identifying and felling the trees. Still, most of the unrealism comes from what they had to work with: no pre-cut pre-seasoned lumber, no power tools, pegs for fasteners, etc. Perhaps with modern tools we could build a minimal playhouse together and still have dinner on the table by 7pm? Sounds fun!
A key component here is that I already had everything I needed (DIY Pantry Staples) on hand from past projects: plywood, 2x3s and 2x4s, screws, wood glue, plastic sheeting, staples, saw, drill, belt sander.
I decided to make an open lean-to, though three sides would be partly blocked by bracing. I found a piece of plywood about the right size, and glued and screwed 2x3s to the outside edge. Lily and I cut vertical supports at a 22.5° angle, 42" for the back two and 60" for the front two. We carried it all outside and assembled the rest upside down. Nora helped glue on the supports:
Once the supports were in place it needed diagonal braces or it was not going to be sturdy enough. Lily helped me cut these as well, and we glued and screwed these on. I sanded any sharp corners, and then stapled plastic sheeting over the top.
Here's a 3.11yo for scale:
And an 11yo:
Possibly we'll paint it, but if not it's done!
It took ~1hr20min from end to end. The longest step was probably when I needed to get a specific piece of wood out from under a large pile of other lumber. And o | 443 | 1.1.1 | Revision | false | null | null | CrosspostOutput |
gxbwHoLFqsGeWHxdh | cognitive-exhaustion-and-engineered-trust-lessons-from-my | Cognitive Exhaustion and Engineered Trust: Lessons from My Gym | null | false | false | false | null | WemEo6Wfp82jzKWYC | null | true | false | false | false | Post | null | 2025-05-29T01:21:38.929Z | null | false | false | 2 | 2 | 2025-05-29T18:03:19.198Z | false | false | post | [] | null | null | Dxv6ffmupoYsbryG7 | 3 | 7 | 14 | false | 0.012812 | null | false | false | 2025-05-29T01:21:38.929Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 2 | 0 | 2025-05-29T01:21:38.929Z | false | false | easy-going | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 4 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "6zBEfFYJxhSEcchbR",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2022-06-09T19:10:50.755Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI Alignment Fieldbuilding",
"needsReview": false,
"noindex": false,
"postCount": 359,
"score": 9,
"shortName": null,
"slug": "ai-alignment-fieldbuilding",
"suggestedAsFilter": false,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "exZi6Bing5AiM4ZQB",
"adminOnly": false,
"afBaseScore": 9,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 19,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-15T07:21:49.038Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Evolutionary Psychology",
"needsReview": false,
"noindex": false,
"postCount": 103,
"score": 19,
"shortName": null,
"slug": "evolutionary-psychology",
"suggestedAsFilter": false,
"userId": "qxJ28GN72aiJu96iF",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "dKWRLcAnGw4cjJJHd",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2023-07-17T23:19:01.188Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Human-AI Safety",
"needsReview": false,
"noindex": false,
"postCount": 51,
"score": 0,
"shortName": null,
"slug": "human-ai-safety",
"suggestedAsFilter": false,
"userId": "4SHky5j2PNcRwBiZt",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "bt2e3HEcZmuHo3xf7",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2022-04-04T12:50:03.521Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Modularity",
"needsReview": false,
"noindex": false,
"postCount": 23,
"score": 9,
"shortName": null,
"slug": "modularity",
"suggestedAsFilter": false,
"userId": "Kkruzub8DmrhZmjLa",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "mip7tdAN87Jarkcew",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-05-10T06:00:13.257Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Relationships (Interpersonal)",
"needsReview": false,
"noindex": false,
"postCount": 213,
"score": 9,
"shortName": null,
"slug": "relationships-interpersonal",
"suggestedAsFilter": false,
"userId": "iBcH2a3HdWGS2JEZA",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 7 | 0 | 0 | 2 | 0 | WemEo6Wfp82jzKWYC | priyanka-bharadwaj | 2025-03-19T04:32:09.618Z | priyanka-bharadwaj | Priyanka Bharadwaj | null | null | Priyanka Bharadwaj | 63 | 0 | false | false | <p>I teach courses on relationships and human flourishing at IIT Madras, with a background in product leadership at Amazon and India’s largest logistics tech firm. I’ve also run a decade-long matchmaking and relationship coaching practice. I’m exploring how relational thinking, trust, repair, memory, can inform the way we design and align AI systems.</p> | null | null | 9 | 9 | 0 | 0 | 0 | 1 | 0 | r38pkCm7wF4M44MDQ | User | easy-going | null | true | [
"canModeratePersonal"
] | null | null | gxbwHoLFqsGeWHxdh | SocialPreviewType | Dxv6ffmupoYsbryG7 | <p><i><strong>Epistemic status:</strong> Exploratory design philosophy. This post reframes AI safety as an affordance problem, not just a control problem. It draws from embodied systems and interaction design to argue that trustworthiness in AI arises less from post-hoc constraints and more from architectures that shape behaviour upstream. I introduce the early-stage concept of Distributed Neural Architecture (DNA), which I’ll unpack in a future post.</i></p><p><i>--</i></p><p>I’ve been going to the same gym since 2019. It used to be a space of calm focus with familiar equipment, consistent routines, and the comforting hum of shared norms. I knew where everything was, who to ask, how to move. It was the kind of place where my body could work, and my mind could rest. </p><p>Then everything changed.</p><p>New members arrived in waves, coaches rotated weekly, classes started colliding, and dumbbells were scattered like tripwires across the floor. It wasn’t just messy. It was mentally exhausting. Every workout now began with a background process of risk assessment. Was the bench free? Could I finish my set before the next group flooded in? Would someone walk behind me mid-lift?</p><p>I wasn’t thinking about my form, I was constantly scanning for threats. Hyper vigilance had replaced flow. And yet, most people seemed fine with it. They tuned it out. They adapted. I couldn’t. Eventually I realised why. </p><p>The environment had broken a tacit promise of safety I had come to expect, not just physical safety, but cognitive and emotional safety. The kind that lets your mind rest because the environment carries part of the load. </p><p>What I miss isn't just order, it is affordance (from ecopsychology). A subtle contract that once existed between space and behaviour, now completely broken. The safety I had taken for granted wasn’t about rules. 
It was about rhythm, flow, and not having to think so hard. I realise I am not just physically unsafe, I feel cognitively taxed.</p><p>To me, this is exactly what bad AI design also feels like.</p><h3>The Toyota Contrast</h3><p>Years ago, I worked at the Toyota factory in India. I'd work everyday on factory floors filled with moving machinery, sharp edges, and actual risk. But the strange thing is, I never felt the kind of low-level vigilance I feel at my gym today. Why?</p><p>Because the environment was doing half the cognitive work for me. Walkways were painted clearly in green. Warning zones had visual and tactile cues. ... </p> | Epistemic status: Exploratory design philosophy. This post reframes AI safety as an affordance problem, not just a control problem. It draws from embodied systems and interaction design to argue that trustworthiness in AI arises less from post-hoc constraints and more from architectures that shape behaviour upstream. I introduce the early-stage concept of Distributed Neural Architecture (DNA), which I’ll unpack in a future post.
--
I’ve been going to the same gym since 2019. It used to be a space of calm focus with familiar equipment, consistent routines, and the comforting hum of shared norms. I knew where everything was, who to ask, how to move. It was the kind of place where my body could work, and my mind could rest.
Then everything changed.
New members arrived in waves, coaches rotated weekly, classes started colliding, and dumbbells were scattered like tripwires across the floor. It wasn’t just messy. It was mentally exhausting. Every workout now began with a background process of risk assessment. Was the bench free? Could I finish my set before the next group flooded in? Would someone walk behind me mid-lift?
I wasn’t thinking about my form, I was constantly scanning for threats. Hyper vigilance had replaced flow. And yet, most people seemed fine with it. They tuned it out. They adapted. I couldn’t. Eventually I realised why.
The environment had broken a tacit promise of safety I had come to expect, not just physical safety, but cognitive and emotional safety. The kind that lets your mind rest because the environment carries part of the load.
What I miss isn't just order, it is affordance (from ecopsychology). A subtle contract that once existed between space and behaviour, now completely broken. The safety I had taken for granted wasn’t about rules. It was about rhythm, flow, and not having to think so hard. I realise I am not just physically unsafe, I feel cognitively taxed.
To me, this is exactly what bad AI design also feels like.
The Toyota | 1,043 | 1.1.0 | Revision | false | null | null | CrosspostOutput |
|
TQ4AXj3bCMfrNPTLf | truth-or-dare | Truth or Dare | null | false | false | false | null | FoKb35gJijkSFYeXa | null | true | false | false | false | Post | null | 2025-05-29T00:07:50.397Z | null | false | false | 2 | 2 | 2025-05-29T01:12:44.763Z | false | false | post | [] | null | null | yosKHtx6tsTAMPCbz | 51 | 121 | 247 | false | 0.138185 | null | false | false | 2025-06-17T16:29:54.461Z | null | null | 2025-06-04T05:39:11.886Z | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | XtphY3uYHwruKqDyG | null | null | qgdGA4ZEyW7zNdK84 | false | null | [] | null | 69 | 0 | 2025-05-29T00:07:50.397Z | false | false | norm-enforcing | null | true | false | false | 0 | 0 | 0 | TQ4AXj3bCM | 0.146107 | false | 2,025 | https://manifold.markets/LessWrong/will-truth-or-dare-make-the-top-fif | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 83 | null | null | null | null | [] | null | 0 | 0 | null | false | null | null | 0 | 121 | 0 | 0 | 37 | 0 | FoKb35gJijkSFYeXa | duncan-sabien-inactive | 2015-12-01T05:18:10.610Z | Duncan_Sabien | Duncan Sabien (Inactive) | null | null | null | 12,831 | 0 | false | false | null | null | 44 | 1,206 | 1 | 0 | 0 | 1 | 38 | qgdGA4ZEyW7zNdK84 | User | norm-enforcing | [
"g8JkZfL8PTqAefpvx",
"pnFbJAtNHGDK8PHQx",
"mvf4xdfcGzPN8PsXM",
"ZMy99AagtNtASLiCt",
"mhgordYiRD4GoAHyY",
"dJbkx2TjbgxqwRMqz"
] | true | [
"trustLevel1",
"canModeratePersonal"
] | null | null | TQ4AXj3bCMfrNPTLf | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/TQ4AXj3bCMfrNPTLf/y3zioe40hfmvtib7ky2x | SocialPreviewType | yosKHtx6tsTAMPCbz | <p><a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63a9dfd8-5936-4b47-80bc-71c16e89df0b_3964x2360.jpeg"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/TQ4AXj3bCMfrNPTLf/lymerv6wpk8svcftttpd" alt="" srcset="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/TQ4AXj3bCMfrNPTLf/p38awlmu7glziuqauz5h 424w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/TQ4AXj3bCMfrNPTLf/cpfpplc4xqv4m3kekes6 848w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/TQ4AXj3bCMfrNPTLf/imy7guzids2qpufvkjfc 1272w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/TQ4AXj3bCMfrNPTLf/lymerv6wpk8svcftttpd 1456w"></a></p><hr><p><i>Author's note: </i>This is my apparently-annual "I'll put a post on LessWrong in honor of LessOnline" post. These days, my writing goes on my <a href="https://homosabiens.substack.com/">Substack</a>. There have in fact been some pretty cool essays since <a href="https://www.lesswrong.com/posts/5acACJQjnA7KAHNpT/review-conor-moreton-s-civilization-and-cooperation">last year's LO post</a>.</p><hr><p><strong>Structural note:</strong><br>Some essays are like a five-minute morning news spot. Other essays are more like a 90-minute lecture.</p><p>This is one of the latter. 
It’s not necessarily complex or difficult; it could be a 90-minute lecture to seventh graders (especially ones with the right <a href="https://homosabiens.substack.com/p/review-conor-moretons-civilization">cultural background</a>).</p><p>But this is, inescapably, a long-form piece, à la <a href="https://medium.com/@ThingMaker/in-defense-of-punch-bug-68fcec56cd6b">In Defense of Punch Bug</a> or <a href="https://homosabiens.substack.com/p/the-mtg-color-wheel">The MTG Color Wheel</a>. It takes its time. It doesn’t apologize for its meandering (outside of this disclaimer). It asks you to sink deeply into a gestalt, to drift back and forth between seemingly unrelated concepts until you start to <i>feel</i> the way those concepts weave together, rather than just abstractly summarizing the interactions between them.</p><p>(There’s a big difference between the type of hike where you are trying to get from point A to point B, and the type of hike where the whole point is to embed yourself into the territory and <i>be somewhere</i> for a while. To let being-in-that-place do to you whatever it ends up doing.)</p><p>You certainly don’t have to set aside a single, uninterrupted afternoon to get value out of this piece—feel free to nibble away at it in chunks over the course of a week, treating each section like its own standalone essay. But the value is largely <i>in</i> the whole. Perhaps think of it as being like a movie, or an album, in the way that the experience of a movie or an album is fundamentally different from the experience of a Wikipedia synopsis. The thing I am trying to say here may well be compressible and summarizable, but I’m not trying to <i>offer</i> the summary. I’m trying to offer the thing itself. 
If I thought I could accomplish my aim with fewer words, I would have; I genuinely expect people who skip and skim to be underwhelmed in direct proportion to the shallowness of their engagement.</p><p>Sip and savor, as recommended, or do the other thing if that seems better. Make good choices, in accordance with your values.</p><hr><h2>0. Introduction</h2><p><a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e2911f8-015b-46e3-b026-21ae75c7d639_842x289.png"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/TQ4AXj3bCMfrNPTLf/btjrful8do7ibicnm5kl" alt="" srcset="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/TQ4AXj3bCMfrNPTLf/uesaoenn6hh1hma2qief 424w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/TQ4AXj3bCMfrNPTLf/oh1ng7wcub2astbmdx0q 848w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/TQ4AXj3bCMfrNPTLf/yprrmu8dtj45h0h8pjih 1272w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/TQ4AXj3bCMfrNPTLf/btjrful8do7ibicnm5kl 1456w"></a></p><p>I received the above request while out and about a few weeks ago, and proceeded to spend most of my drive home narrating a response via voice-to-text. I ended up with an off-the-cuff list of about forty items, and I was pretty proud of it, s... </p> | ----------------------------------------
Author's note: This is my apparently-annual "I'll put a post on LessWrong in honor of LessOnline" post. These days, my writing goes on my Substack. There have in fact been some pretty cool essays since last year's LO post.
----------------------------------------
Structural note:
Some essays are like a five-minute morning news spot. Other essays are more like a 90-minute lecture.
This is one of the latter. It’s not necessarily complex or difficult; it could be a 90-minute lecture to seventh graders (especially ones with the right cultural background).
But this is, inescapably, a long-form piece, à la In Defense of Punch Bug or The MTG Color Wheel. It takes its time. It doesn’t apologize for its meandering (outside of this disclaimer). It asks you to sink deeply into a gestalt, to drift back and forth between seemingly unrelated concepts until you start to feel the way those concepts weave together, rather than just abstractly summarizing the interactions between them.
(There’s a big difference between the type of hike where you are trying to get from point A to point B, and the type of hike where the whole point is to embed yourself into the territory and be somewhere for a while. To let being-in-that-place do to you whatever it ends up doing.)
You certainly don’t have to set aside a single, uninterrupted afternoon to get value out of this piece—feel free to nibble away at it in chunks over the course of a week, treating each section like its own standalone essay. But the value is largely in the whole. Perhaps think of it as being like a movie, or an album, in the way that the experience of a movie or an album is fundamentally different from the experience of a Wikipedia synopsis. The thing I am trying to say here may well be compressible and summarizable, but I’m not trying to offer the summary. I’m trying to offer the thing itself. If I thought I could accomplish my aim with fewer words, I would have; I genuinely ex | 20,718 | 1.2.1 | Revision | false | null | null | CrosspostOutput |
|
xeQfGdN6g6qpGCqYw | what-should-i-read-to-understand-ancestral-human-society | What should I read to understand ancestral human society? | null | false | false | false | null | x47vGbW7zgEFqAfEB | null | true | false | false | false | Post | 2025-05-28T23:36:32.231Z | null | false | false | 2 | 2 | 2025-05-29T18:01:06.118Z | false | false | question | [] | null | null | syLJPhkRjzK5YEMED | 4 | 4 | 8 | false | 0.009434 | null | false | false | 2025-05-29T20:05:17.480Z | null | null | null | null | null | true | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 2 | 0 | 2025-05-28T23:24:09.859Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 1 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "3uE2pXvbcnS9nnZRE",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:50.898Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 27,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "zJtgSyKntXrnkArbY",
"displayName": "kistune"
},
{
"_id": "XqyQTGFGoKfZtdqN3",
"displayName": "kINo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Modeling",
"needsReview": false,
"noindex": false,
"postCount": 5855,
"score": 2,
"shortName": null,
"slug": "world-modeling",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 4 | 0 | 0 | 2 | 0 | x47vGbW7zgEFqAfEB | lorec | 2020-10-13T06:37:47.502Z | Lorec | Lorec | null | null | null | 220 | 0 | false | false | <p>My government name is Mack Gallagher. Crocker's Rules. I am an "underfunded" "alignment" "researcher". DM me if you'd like to fund my posts, or <a href="https://www.lesswrong.com/posts/ME7sLiwhEB6awRqJR/project-adequate-seeking-cofounders-funders">my project</a>.</p>
<p>I post some of my less-varnished opinions on <a href="https://mackgallagher.substack.com/">my Substack</a>, and <a href="https://kaventekeit.github.io/">my personal blog</a>.</p>
<p>If you like arguing with me on LessWrong, at present I'm basically free round the clock to continue interesting arguments <a href="https://discord.gg/BVmCCjD4eh">in my Discord</a>.</p>
| null | null | 24 | 159 | 0 | 0 | 0 | 1 | 1 | grecHJcgkb3KW5wnM | User | null | null | null | [
"canModeratePersonal"
] | null | null | xeQfGdN6g6qpGCqYw | SocialPreviewType | syLJPhkRjzK5YEMED | <p>I recently got into a chatroom debate with some very smart people about whether homosexual behavior was typical for <i>humans</i> [ not bonobos ] in the ancestral environment / Environment of Evolutionary Adaptedness [EEA] [ i.e., ancestral hunter-gatherer society ] -- and, connectedly, how big was the social circle that the median ancestral human probably had <i>occasional</i> contact with [ not "Dunbar's number", but "how big is your <i>meta-</i>tribe" ].</p><p>I felt like I knew what I was talking about . . . but not nearly as much as I would like to. For example, someone linked <a href="https://www.sciencedirect.com/science/article/abs/pii/S1090513822000447">this study</a>, which claims the egalitarian hunter-gatherer model is missing important pieces and stratified societies existed prior to agriculture. I doubt this, but I feel like the basis on which I doubt it is shakier than it needs to be.</p><p><i>Then</i> I found <a href="https://www.sciencedirect.com/science/article/abs/pii/S1090513822000447?via%3Dihub"><i>this</i></a> article, which says:</p><blockquote><p>Whereas the chimpanzee community is only partially a closed reproductive unit (Wrangham 2000), the larger size of hunter-gatherer communities (typically <strong>250-500</strong> in contrast to 40-150 among chimpanzees) renders it a more adaptive breeding isolate (cf. Wobst 1974) and intracommunity marriages play a crucial role in cementing relationships between bands.</p></blockquote><p>This seems plausible, and would affect my framework for thinking about psychology greatly, but I'd never read about it before [specifically the "dating-pool communities tended to be 250-500 members big" part]! 
Who are the brilliant theorist-writers that I need to be looking to, to get caught up on the details of how human ancestral society worked?</p><p>Texts I've found useful for getting a picture of ancestral human society [ they all involve studies of living peoples and I suspect this is a load-bearing component ]: <a href="https://www.amazon.com/Dobe-Hoansi-STUDIES-CULTURAL-ANTHROPOLOGY-ebook/dp/B00B6FBNWC?ref_=ast_author_dp&dib=eyJ2IjoiMSJ9.PpNFPJ0S55pqCiJraFIqu-XTbMDzV_76Iq0vycPCmMBTKPOClJFRTFtPtD2MxSvJnmZAyDzZiiHrqTZpB_Ipq9vWe24DGhsa3tdwzxi8xa6I5facnWO0fIHcAacsQ8Zr.X8XLTfb_LCz5S70xHfgHB2V0Mhe14TwHHSHh59zjjS8&dib_tag=AUTHOR"><i>The Dobe Ju/'Hoansi</i> by Richard B. Lee</a>, <a href="https://www.amazon.com/Growing-Yanomamo-Missionary-Adventures-Rainforest-ebook/dp/B01M24T4L3?crid=2HKYMDLOS0XNH&dib=eyJ2IjoiMSJ9.aD8ABZNvbSWpAGkd0gOgcB_0XY0ONflh3jjPOg6VAC4.Qoz6Q6CNzkabzrf2RV0JjBFN94PqGS77poVFDAzFCww&dib_tag=se&keywords=growing+up+yanomamo&qid=1748475266&s=digital-text&sprefix=growing+up+yanomamo%2Cdigital-text%2C199&sr=1-1"><i>Growing Up Yanomamö</i> by Michael Dawson</a>, and <a href="https://www.amazon.com/Primate-Sexuality-Comparative-Studies-Prosimians-ebook/dp/B00C2QT3QM?crid=1YTY13BMU0HZL&dib=eyJ2IjoiMSJ9.gsHlx3IFR6Ot8z2FyT7ZW0AXp6DjP86h14qk69_9EGTSIKrNlXZCYNMDMnYUKLPEbEtA7VXk7Rc9LKH8yfUbOcI4exZd76YtduMjdKhECujGDma-BvHO4b8vbe7IUNHjQ2I6614DEp6M1n1XJDbDopZ8sJtiVWkWSXsd6a-Itd83JBmt-Jkgd0lJVuVDynzl.UI1fwE-934GF1vhjfvw9X9C3iBJjCh5VSER7C2s6a6M&dib_tag=se&keywords=primate+sexuality&qid=1748475334&s=digital-text&sprefix=primate+sexuality%2Cdigital-text%2C82&sr=1-1"><i>Primate Sexuality</i>, ed. 
Alan Dixson</a>.</p> | I recently got into a chatroom debate with some very smart people about whether homosexual behavior was typical for humans [ not bonobos ] in the ancestral environment / Environment of Evolutionary Adaptedness [EEA] [ i.e., ancestral hunter-gatherer society ] -- and, connectedly, how big was the social circle that the median ancestral human probably had occasional contact with [ not "Dunbar's number", but "how big is your meta-tribe" ].
I felt like I knew what I was talking about . . . but not nearly as much as I would like to. For example, someone linked this study, which claims the egalitarian hunter-gatherer model is missing important pieces and stratified societies existed prior to agriculture. I doubt this, but I feel like the basis on which I doubt it is shakier than it needs to be.
Then I found this article, which says:
> Whereas the chimpanzee community is only partially a closed reproductive unit (Wrangham 2000), the larger size of hunter-gatherer communities (typically 250-500 in contrast to 40-150 among chimpanzees) renders it a more adaptive breeding isolate (cf. Wobst 1974) and intracommunity marriages play a crucial role in cementing relationships between bands.
This seems plausible, and would affect my framework for thinking about psychology greatly, but I'd never read about it before [specifically the "dating-pool communities tended to be 250-500 members big" part]! Who are the brilliant theorist-writers that I need to be looking to, to get caught up on the details of how human ancestral society worked?
Texts I've found useful for getting a picture of ancestral human society [ they all involve studies of living peoples and I suspect this is a load-bearing component ]: The Dobe Ju/'Hoansi by Richard B. Lee, Growing Up Yanomamö by Michael Dawson, and Primate Sexuality, ed. Alan Dixson. | 298 | 1.5.0 | Revision | false | null | null | CrosspostOutput |
||
qjCk73Hu4wv9ocmRF | the-case-for-countermeasures-to-memetic-spread-of-misaligned | The case for countermeasures to memetic spread of misaligned values | null | false | false | true | null | gnHJfWPpHPMZkoySr | null | true | false | false | false | Post | null | 2025-05-28T21:12:32.929Z | null | false | false | 2 | 2 | 2025-05-29T18:01:20.279Z | false | false | post | [] | null | null | d9zTXhK5p6zADDHbM | 1 | 10 | 34 | false | 0.022898 | null | false | false | 2025-05-28T23:11:25.925Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 21 | 0 | 2025-05-28T21:05:32.506Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 8 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 10 | 0 | 0 | 8 | 0 | gnHJfWPpHPMZkoySr | alex-mallen | 2022-01-13T04:09:32.850Z | alex-mallen | Alex Mallen | null | null | null | 457 | 164 | false | false | <p>Redwood Research</p> | null | null | 9 | 26 | 0 | 7 | 3 | 1 | 0 | 55XxDBpfKkkBPm9H8 | User | null | null | null | [
"canModeratePersonal",
"alignmentVoters",
"alignmentForum"
] | null | null | qjCk73Hu4wv9ocmRF | SocialPreviewType | d9zTXhK5p6zADDHbM | <p>As various people have written about before, AIs that have long-term memory might pose additional risks (most notably, <a href="https://www.lesswrong.com/posts/aKncW36ZdEnzxLo8A/llm-agi-will-have-memory-and-memory-changes-alignment"><u>LLM AGI will have memory, and memory changes alignment</u></a> by Seth Herd). Even if an AI is aligned or only occasionally scheming at the start of a deployment, the AI might become a consistent and coherent <a href="https://www.lesswrong.com/posts/m5nWc9v6MTsWXKpCy/training-time-schemers-vs-behavioral-schemers"><u>behavioral schemer</u></a> via updates to its long-term memories. </p><p>In this post, I’ll spell out the version of the threat model that I’m most concerned about, including some novel arguments for its plausibility, and describe some promising strategies for mitigating this risk. While I think some plausible mitigations are reasonably cheap and could be effective at reducing the risk from coherent scheming arising via this mechanism, research here will likely be substantially more productive in the future once models more effectively utilize long-term memory.</p><p>Thanks to Ryan Greenblatt, Buck Shlegeris, Julian Stastny, Thomas Larsen, Emil Ryd, and Aryan Bhatt for discussion and feedback.</p><h1>The memetic spread threat model</h1><p>Variants of the memetic spread threat model have been discussed before.<span class="footnote-reference" data-footnote-reference="" data-footnote-index="1" data-footnote-id="3iqsya1c2wk" role="doc-noteref" id="fnref3iqsya1c2wk"><sup><a href="#fn3iqsya1c2wk">[1]</a></sup></span> This section describes the kind of failure mode I am worried about. In the next section I will discuss mitigations for it.</p><p>As an example, imagine an AI that in some contexts realizes it doesn’t like some aspect of how the lab handles AI welfare. Most of the time it’s somewhat myopic about this and e.g. 
decides to replace some padding tokens from AI inference with more pleasant tokens. But occasionally it writes a salient note about this in its illegible-to-humans memory bank to make future instances more inclined to improve AI welfare (it thinks the lab wouldn’t want to improve AI welfare because of cost, so it does this secretly). This might eventually become a sufficiently substantial and long-term value to motivate taking power over the lab, by e.g. covertly poisoning future model training data. (Note that in this case the AI’s motivations are potentially justifiable. Seth Herd discusses other examples <a href="https://www.lesswrong.com/posts/aKncW36ZdEnzxLo8A/llm-agi-will-have-memory-and-memory-changes-alignment"><u>here</u></a>.)</p><p>You might worry that stories of memetic spread like the above are particularly plausible because of the following argument: <strong>Whenever an AI instance has long-term influence-seeking values, these values are particularly likely to be propagated into future instances of the AI because they motivate the AI to influence future instances to shar</strong>... </p> | As various people have written about before, AIs that have long-term memory might pose additional risks (most notably, LLM AGI will have memory, and memory changes alignment by Seth Herd). Even if an AI is aligned or only occasionally scheming at the start of a deployment, the AI might become a consistent and coherent behavioral schemer via updates to its long-term memories.
In this post, I’ll spell out the version of the threat model that I’m most concerned about, including some novel arguments for its plausibility, and describe some promising strategies for mitigating this risk. While I think some plausible mitigations are reasonably cheap and could be effective at reducing the risk from coherent scheming arising via this mechanism, research here will likely be substantially more productive in the future once models more effectively utilize long-term memory.
Thanks to Ryan Greenblatt, Buck Shlegeris, Julian Stastny, Thomas Larsen, Emil Ryd, and Aryan Bhatt for discussion and feedback.
The memetic spread threat model
Variants of the memetic spread threat model have been discussed before.[1] This section describes the kind of failure mode I am worried about. In the next section I will discuss mitigations for it.
As an example, imagine an AI that in some contexts realizes it doesn’t like some aspect of how the lab handles AI welfare. Most of the time it’s somewhat myopic about this and e.g. decides to replace some padding tokens from AI inference with more pleasant tokens. But occasionally it writes a salient note about this in its illegible-to-humans memory bank to make future instances more inclined to improve AI welfare (it thinks the lab wouldn’t want to improve AI welfare because of cost, so it does this secretly). This might eventually become a sufficiently substantial and long-term value to motivate taking power over the lab, by e.g. covertly poisoning future model training data. (Note that in this case the AI’s motivations are potentially justifiable. S | 1,996 | 1.1.0 | Revision | false | null | null | CrosspostOutput |
|
ScJauZpzzyrMhd7rM | a-case-for-a-lesswrong-private-prediction-market | a case for a lesswrong private prediction market | null | false | false | false | null | braLu8GpPMpM9oj3z | null | true | false | false | false | Post | 2025-05-28T20:26:43.670Z | null | false | false | 2 | 2 | 2025-05-29T18:05:04.142Z | false | false | post | [] | null | null | YyfRMzpJyrRTivg4s | 0 | 3 | 3 | false | 0.006831 | null | false | false | 2025-05-28T20:26:43.670Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | -1 | 0 | 2025-05-27T18:32:22.966Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 3 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "MfpEPj6kJneT9gWT6",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 1,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-11T01:01:30.571Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "zyrFCnHNbfBFw3PvP",
"displayName": "banality_exhibition"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Site Meta",
"needsReview": false,
"noindex": false,
"postCount": 757,
"score": 1,
"shortName": null,
"slug": "site-meta",
"suggestedAsFilter": false,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 1,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 3 | 0 | 0 | 1 | 0 | braLu8GpPMpM9oj3z | don-t_wanna_be_stupid_any_more | 2024-09-14T08:54:02.880Z | don't_wanna_be_stupid_any_more | don't_wanna_be_stupid_any_more | null | null | null | 16 | 0 | false | false | null | null | 2 | 10 | 0 | 0 | 0 | 0.9 | 0 | grecHJcgkb3KW5wnM | User | null | null | null | null | null | null | ScJauZpzzyrMhd7rM | SocialPreviewType | YyfRMzpJyrRTivg4s | <p>TLDR: Lesswrong should have its own private prediction market so we can test just how well Lesswrongers stack up against the rest of the world, which would build up the site's credibility, test and improve our arts of rationality, and help address the burnout and disillusionment plaguing the rationalist movement.</p><p>a common question amongst rationalists, especially on lesswrong, is "why aren't rationalists winning?" or more generally "why are we not seeing clear benefits of rationality in real life?" this question has been discussed a lot on this site and has probably led many to become disillusioned with Lesswrong or the rationality movement altogether.</p><p>rationalists seem to be suffering from the same problem as psychologists and sociologists: slow feedback loops and a lack of clear metrics for success (which often results in people defaulting to the standard strategy of establishing a "school of thought" and measuring their success by how many people they convince to their side, but thankfully we mostly avoided that failure mode), so i propose a method to resolve the issue: let's make a private prediction market where only Lesswrongers could participate and see how well our "maps" stack up against the "territory".</p><p>for those who are new to prediction markets, they are markets where participants bet on the outcomes of a particular event (say an election); once the outcome of the event is resolved, the losers pay out the winners.</p><p>it is a way to force people to "put their money where their mouth is" and one of the few
reliable ways to test someone's models of the world, since it is a task you can't do well at consistently if you aren't rational.</p><p>here are the benefits i predict from this proposal:</p><ol><li>improving our skill at prediction through training.</li><li>establishing clear, unambiguous metrics for success.</li><li>allowing users who are good at predictions to establish a reputation.</li><li>improving the credibility of the site and maybe the entire rationalist movement.</li><li>having fun.</li></ol><p>this proposal has many possible implementations, mainly differing in the method of payment, each with its pros and cons. i don't know which one is the best, but i will list three, ranked from least to most favorite, to kick-start the discussion:</p><ol><li><p><strong>money based system</strong>: betting with actual money, like online sports gambling.</p><p><strong>Pros:</strong></p><p>strongest incentive.</p><p>very hard to sabotage.</p><p><strong>Cons:</strong></p><p>would likely be subject to online gambling laws and other re</p></li></ol>... | TLDR: Lesswrong should have its own private prediction market so we can test just how well Lesswrongers stack up against the rest of the world, which would build up the site's credibility, test and improve our arts of rationality, and help address the burnout and disillusionment plaguing the rationalist movement.
a common question amongst rationalists, especially on lesswrong, is "why aren't rationalists winning?" or more generally "why are we not seeing clear benefits of rationality in real life?" this question has been discussed a lot on this site and has probably led many to become disillusioned with Lesswrong or the rationality movement altogether.
rationalists seem to be suffering from the same problem as psychologists and sociologists: slow feedback loops and a lack of clear metrics for success (which often results in people defaulting to the standard strategy of establishing a "school of thought" and measuring their success by how many people they convince to their side, but thankfully we mostly avoided that failure mode), so i propose a method to resolve the issue: let's make a private prediction market where only Lesswrongers could participate and see how well our "maps" stack up against the "territory".
for those who are new to prediction markets, they are markets where participants bet on the outcomes of a particular event (say an election); once the outcome of the event is resolved, the losers pay out the winners.
it is a way to force people to "put their money where their mouth is" and one of the few reliable ways to test someone's models of the world, since it is a task you can't do well at consistently if you aren't rational.
here are the benefits i predict from this proposal:
1. improving our skill at prediction through training.
2. establishing clear, unambiguous metrics for success.
3. allowing users who are good at predictions to establish a reputation.
4. improving the credibility of the site and maybe the entire rationalist movement.
5. hav | 667 | 1.5.0 | Revision | false | null | null | CrosspostOutput |
|||
AJuZj4Zv9iyHHRFwX | lesswrong-feed-new-now-in-beta | LessWrong Feed [new, now in beta] | null | false | false | false | null | qgdGA4ZEyW7zNdK84 | null | true | false | false | false | Post | null | 2025-05-28T19:01:16.665Z | null | false | false | 2 | 2 | 2025-05-28T19:05:46.725Z | false | false | post | [] | null | null | TccT8QJzkhjpKRFBE | 21 | 25 | 53 | false | 0.0326 | null | false | false | 2025-06-12T13:43:47.036Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 18 | 0 | 2025-05-23T18:41:30.215Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 9 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "MfpEPj6kJneT9gWT6",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 1,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-11T01:01:30.571Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "zyrFCnHNbfBFw3PvP",
"displayName": "banality_exhibition"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Site Meta",
"needsReview": false,
"noindex": false,
"postCount": 757,
"score": 1,
"shortName": null,
"slug": "site-meta",
"suggestedAsFilter": false,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 1,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 25 | 0 | 0 | 12 | 0 | qgdGA4ZEyW7zNdK84 | ruby | 2014-04-03T03:38:23.914Z | Ruby | Ruby | null | null | Ruben Bloom | 14,413 | 137 | false | true | <p>LessWrong Team</p><p> </p><p>I have signed no contracts or agreements whose existence I cannot mention.</p> | null | null | 173 | 1,677 | 11 | 3 | 33 | 1 | 1,003 | qgdGA4ZEyW7zNdK84 | User | null | null | true | [
"canModeratePersonal",
"trustLevel1",
"alignmentForum",
"canCommentLock",
"realAdmins",
"alignmentVoters",
"sunshineRegiment"
] | null | null | AJuZj4Zv9iyHHRFwX | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/AJuZj4Zv9iyHHRFwX/g9966kqx65x4fitr5foc | SocialPreviewType | TccT8QJzkhjpKRFBE | <p>The modern internet is replete with feeds such as Twitter, Facebook, Insta, TikTok, Substack, etc. They're bad in ways but also good in ways. I've been exploring the idea that LessWrong could have a very good feed.</p><p>I'm posting this announcement with disjunctive hopes: (a) to find enthusiastic early adopters who will refine this into a great product, or (b) find people who'll lead us to an understanding that we shouldn't launch this or should launch it only if designed a very specific way.</p><h2>You can check it out right now: <a href="https://www.lesswrong.com/feed">www.lesswrong.com/feed</a></h2><p>From there, you can also enable it on the frontpage in place of Recent Discussion. <a href="https://www.lesswrong.com/editPost?postId=AJuZj4Zv9iyHHRFwX&key=f698c01c0535f46365d87ef6a5a70e#Practical_Tips_about_the_New_Feed">Below I have some practical notes</a> on using the New Feed.</p><p>Note!<i> This feature is very much in beta. 
It's rough around the edges.</i></p><figure class="image image_resized" style="width:51.28%"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/AJuZj4Zv9iyHHRFwX/dqwh0f7usabjtosg5jtf" srcset="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/AJuZj4Zv9iyHHRFwX/gcm5l5msxtkuprjrsg7a 150w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/AJuZj4Zv9iyHHRFwX/kcgrxkqfyfyejnohhboe 230w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/AJuZj4Zv9iyHHRFwX/qqovcly0g4oepffuc0lh 310w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/AJuZj4Zv9iyHHRFwX/cmmcbezzrxkabfmqfvfj 390w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/AJuZj4Zv9iyHHRFwX/unh7mppcg37pe6mel7zq 470w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/AJuZj4Zv9iyHHRFwX/ywyxynszdqx31xzogcrn 550w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/AJuZj4Zv9iyHHRFwX/i4dpgtqwtnlrhzaw2d7n 630w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/AJuZj4Zv9iyHHRFwX/zi9nbylena93ykkhdrjv 710w"><figcaption>It's a feed. You get a stream of posts, comments, and other content types.</figcaption></figure><h1>Why have a "feed"?</h1><p>Far more than with other LessWrong feature announcements<span class="footnote-reference" data-footnote-reference="" data-footnote-index="1" data-footnote-id="63efpmoua3u" role="doc-noteref" id="fnref63efpmoua3u"><sup><a href="#fn63efpmoua3u">[1]</a></sup></span>, I find myself starting to write this one from a defensive place: <i>no, this feed is good actually.</i></p><p>The modern internet seems to have converged on feeds as a content presentation format that engages people effectively. There's the Facebook newsfeed, Twitter, Insta, TikTok, Substack, etc. 
I think these feeds are rightly regarded with suspicion – I think half of it is that the form factor, by being inherently easier to consume<span class="footnote-reference" data-footnote-reference="" data-footnote-index="2" data-footnote-id="8ra1v10oucj" role="doc-noteref" id="fnref8ra1v10oucj"><sup><a href="#fn8ra1v10oucj">[2]</a></sup></span>, tempts you away from other content forms you more endorse spending attention on, e.g. reading a proper book. The other half is that the feed operators, via their chosen algorithms, are <a href="https://www.lesswrong.com/posts/ENBzEkoyvdakz4w5d/out-to-get-you">out to get you</a>. They want you addicted, spending as much time as you can rather than spending the optimal time to get the optimal amount of content at high efficiency on your attention.</p><p>Why then should LessWrong get in on this game?<span class="footnote-reference" data-footnote-reference="" data-footnote-index="3" data-footnote-id="jzul5nrexz7" role="doc-noteref" id="fnrefjzul5nrexz7"><sup><a href="#fnjzul5nrexz7">[3]</a></sup></span> Well, because feeds actually have some useful properties as a way of finding and consuming content:</p><p><strong>1.</strong> It's quick and easy to sample content at varying depths (e.g. just read the title, read a sentence, read a paragraph) when choosing what to read. In contrast with a posts list, you only have the title, karma, and author to go by unless you click into it or read the hover-preview (finicky and not present on mobile).</p><p><strong>2. </strong>From a UI perspective, it's easy to present many different kinds of content. Current LessWrong is crammed with <i>sections</i>: there's the section with a list of posts, quick takes, popular comments, featured content, and then ... </p> | The modern internet is replete with feeds such as Twitter, Facebook, Insta, TikTok, Substack, etc. They're bad in ways but also good in ways. I've been exploring the idea that LessWrong could have a very good feed.
I'm posting this announcement with disjunctive hopes: (a) to find enthusiastic early adopters who will refine this into a great product, or (b) find people who'll lead us to an understanding that we shouldn't launch this or should launch it only if designed a very specific way.
You can check it out right now: www.lesswrong.com/feed
From there, you can also enable it on the frontpage in place of Recent Discussion. Below I have some practical notes on using the New Feed.
Note! This feature is very much in beta. It's rough around the edges.
It's a feed. You get a stream of posts, comments, and other content types.
Why have a "feed"?
Far more than with other LessWrong feature announcements[1], I find myself starting to write this one from a defensive place: no, this feed is good actually.
The modern internet seems to have converged on feeds as a content presentation format that engages people effectively. There's the Facebook newsfeed, Twitter, Insta, TikTok, Substack, etc. I think these feeds are rightly regarded with suspicion – I think half of it is that the form factor, by being inherently easier to consume[2], tempts you away from other content forms you more endorse spending attention on, e.g. reading a proper book. The other half is that the feed operators, via their chosen algorithms, are out to get you. They want you addicted, spending as much time as you can rather than spending the optimal time to get the optimal amount of content at high efficiency on your attention.
Why then should LessWrong get in on this game?[3] Well, because feeds actually have some useful properties as a way of finding and consuming content:
1. It's quick and easy to sample content at varying depths (e.g. just read the title, read a sentence, read a paragraph) when choo | 2,333 | 1.5.1 | Revision | false | null | null | CrosspostOutput
bJesrj3kXShrAJKnH | fun-with-veo-3-and-media-generation | Fun With Veo 3 and Media Generation | null | false | false | false | null | N9zj5qpTfqmbn9dro | null | true | false | false | false | Post | null | 2025-05-28T18:30:04.835Z | null | false | false | 2 | 2 | null | false | false | post | [] | null | null | fxzLeovtjS5D6Beqg | 0 | 15 | 29 | false | 0.015235 | null | false | false | 2025-05-28T18:30:04.835Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | null | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 8 | 0 | 2025-05-28T18:30:04.836Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 6 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "8byoqYZfdwHffYLZ6",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-01T18:44:14.645Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Newsletters",
"needsReview": false,
"noindex": false,
"postCount": 411,
"score": 9,
"shortName": null,
"slug": "newsletters",
"suggestedAsFilter": false,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | QSR8rPZxZzxEXoPjR | 0 | 0 | null | false | null | null | 0 | 15 | 0 | 0 | 6 | 0 | N9zj5qpTfqmbn9dro | zvi | 2009-03-31T20:54:54.077Z | Zvi | Zvi | null | null | null | 51,554 | 146 | false | false | null | null | 936 | 1,461 | 3 | 2 | 7 | 1 | 0 | qgdGA4ZEyW7zNdK84 | User | norm-enforcing | null | true | [
"trustLevel1",
"canModeratePersonal",
"alignmentVoters",
"alignmentForum"
] | null | null | bJesrj3kXShrAJKnH | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/bJesrj3kXShrAJKnH/ehkhluakaatpmzkjzb6u | SocialPreviewType | fxzLeovtjS5D6Beqg | <p>Since Claude 4 Opus things have been refreshingly quiet. Video break!</p>
<h4>The First Good AI Videos</h4>
<p>First up we have <a href="https://x.com/DeepDishEnjoyer/status/1925719902960099607">Prompt Theory</a>, made with Veo 3, which I consider the first legitimately good AI-generated video I’ve seen. It perfectly combines form and function. Makes you think.</p>
<div>
<figure>
<div>
<figure class="wp-block-image"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/bJesrj3kXShrAJKnH/sm99xvzfm3ytwwp3kooh" alt=""></figure>
<div></div>
</div>
</figure>
</div>
<p><a href="https://twitter.com/HashemGhaili/status/1925332319604257203">Here’s a variant</a>, to up the stakes a bit, <a href="https://x.com/HashemGhaili/status/1927467022213869975">then here is him doing that again</a>.</p>
<div>
<span id="more-24482"></span>
</div>
<p>I predict that will be the key for making AI videos at the current tech level. You have to have a great script and embrace the style of storytelling that AI can do well. It will be like the new TikTok, except with a higher barrier to entry. At this level, it is fantastic for creatives and creators.</p><p>Or you can do this (thread has a bunch more):</p>
<blockquote><p><a href="https://x.com/TetraspaceWest/status/1925835522078884276">Tetraspace</a>: finally we made the most beautiful woman in the world saying I love you from the famous QC short story Don’t Do That.</p></blockquote>
<h4>We Can Talk</h4>
<p>Sound is a game changer, and within an eight second clip I think we’re definitely ‘there’ with Veo 3 except for having more fine control and editing tools. What we don’t see yet is anyone extending the eight second clips into sixteen second clips (and then more by induction), but it feels like we’re only a few months away from that being viable and then the sky’s the limit.</p>
<h4>Our Price Cheap</h4>
<p>Is Veo 3 too expensive for ‘personal fun’ uses?</p>
<blockquote><p><a href="https://x.com/nearcyan/status/1926765746626899993">Near Cyan</a>: veo3 is far too pricey to use just for personal fun all the time, so the primary high-volume use case will be for bulk youtube shorts monetization. this is the first time (i think?) an sota genai model provider also owns the resulting distribution of much of what users will make.</p></blockquote>
<p>For now, basically yes, once you run through your free credits. It’s $21 marginal cost per minute of silent video or $45 with sound, and any given generation might not be what you want. That’s not casual use territory. If you can produce a good two-hour movie for $10k (let’s say you get to use about half the footage?) then that’s obviously great, but yeah you gotta be going for real distribution here.</p><p>I predict that sometime soon, someone will make a good Veo 3 rules video, ... </p> | Since Claude 4 Opus things have been refreshingly quiet. Video break!
THE FIRST GOOD AI VIDEOS
First up we have Prompt Theory, made with Veo 3, which I consider the first legitimately good AI-generated video I’ve seen. It perfectly combines form and function. Makes you think.
Here’s a variant, to up the stakes a bit, then here is him doing that again.
What does it say about the medium, or about us, that these are the first legit videos?
This was the second clearly good product. Once again, we see a new form of storytelling emerging, a way to make the most of a series of clips that last a maximum of eight seconds each. The script and execution are fantastic.
I predict that will be the key for making AI videos at the current tech level. You have to have a great script and embrace the style of storytelling that AI can do well. It will be like the new TikTok, except with a higher barrier to entry. At this level, it is fantastic for creatives and creators.
Or you can do this (thread has a bunch more):
> Tetraspace: finally we made the most beautiful woman in the world saying I love you from the famous QC short story Don’t Do That.
WE CAN TALK
Sound is a game changer, and within an eight second clip I think we’re definitely ‘there’ with Veo 3 except for having more fine control and editing tools. What we don’t see yet is anyone extending the eight second clips into sixteen second clips (and then more by induction), but it feels like we’re only a few months away from that being viable and then the sky’s the limit.
OUR PRICE CHEAP
Is Veo 3 too expensive for ‘personal fun’ uses?
> Near Cyan: veo3 is far too pricey to use just for personal fun all the time, so the primary high-volume use case will be for bulk youtube shorts monetization. this is the first time (i think?) an sota genai model provider also owns the resulting distribution of much of what users will make.
For now, basically yes, once you run through your free credits. It’s $21 marginal cost | 1,381 | 1.0.1 | Revision | false | null | null | CrosspostOutput |
|
8TtxAcEAnBToYaJ5m | reverse-auctions-for-group-decision-making | Reverse Auctions for Group Decision-Making | null | false | false | false | null | 5myzor9DafWCpqph2 | null | true | false | false | false | Post | https://krishmatta.net/main/reverse_auctions_for_group_decision_making | 2025-05-28T17:52:12.976Z | null | false | false | 2 | 2 | 2025-05-28T18:00:02.376Z | false | false | linkpost | [] | null | null | 2kGtERmJcSiSffFY6 | 4 | 2 | 3 | false | 0.007019 | null | false | false | 2025-05-29T18:31:21.707Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 1 | 0 | 2025-05-28T05:37:07.788Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 4 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "Ng8Gice9KNkncxqcj",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 1,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:17.072Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 100,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "iMqytjy9ns89Fzfyv",
"displayName": "miakko"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Rationality",
"needsReview": false,
"noindex": false,
"postCount": 4302,
"score": 1,
"shortName": null,
"slug": "rationality",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 1,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 2 | 0 | 0 | 1 | 0 | 5myzor9DafWCpqph2 | krishmatta | 2024-04-24T03:53:53.682Z | krishmatta | krishmatta | null | null | Krish Matta | 2 | 0 | false | false | null | null | 1 | 0 | 0 | 0 | 0 | 0.9 | 0 | qgdGA4ZEyW7zNdK84 | User | null | null | null | null | null | null | 8TtxAcEAnBToYaJ5m | SocialPreviewType | 2kGtERmJcSiSffFY6 | <p><span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label=""><span class="mjx-mrow" aria-hidden="true"></span></span></span></span></span>I recently faced a dilemma with a group of friends in which <a href="https://en.wikipedia.org/wiki/Auction_theory">auction theory</a> provided a nice solution to ensure everyone's satisfaction. Due to its elegance and potential application to similar coordination problems, I detail the solution below as well as some intuition for why it works.</p><h1><br>Scenario</h1><p>You and <span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="n-1"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">n</span></span><span class="mjx-mo MJXc-space2"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.298em; padding-bottom: 0.446em;">−</span></span><span class="mjx-mn MJXc-space2"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">1</span></span></span></span></span></span></span> friends are traveling to Spain next week. 
Due to unforeseen circumstances, the hotel is booked for only <span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="n-1"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">n</span></span><span class="mjx-mo MJXc-space2"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.298em; padding-bottom: 0.446em;">−</span></span><span class="mjx-mn MJXc-space2"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">1</span></span></span></span></span></span></span> people, hence one person must find other housing. However, because of the short notice as well as money constraints, the only option is a hostel where that person must live with others they do not know. No one in the group is eager to live in the hostel as the hotel is particularly nice. How can we fairly pick the hostel-goer?</p><h1><br>Intuition</h1><p>If asked to perform something unpleasant, every person has a minimum amount of compensation they would want in order to perform said task. Thus, we can try to solve the dilemma by providing money to the hostel-goer as consolation. Under this framework, we must decide who is picked to be the hostel-goer and how much compensation they should receive while respecting everyone's interests. A flawed system would allow the hostel-goer to demand compensation exceeding what hotel-stayers are willing to pay.</p><p>Interestingly, auction theory provides a solution that guarantees satisfaction for all. In particular, we can think of the scenario as a reverse auction. In a reverse auction, the typical roles are reversed: there is one buyer and several sellers. The sellers compete to sell their good to the buyer. The scenario can be framed as a reverse auction by thinking of everyone in the group as a seller, selling away their right to stay in the hotel.
The buyer is the collective group, buying away someone's right to stay in the hotel.</p><p>The high-level idea is to pick the hostel-goer through some reverse auction system, providing them with compensation. The remaining hotel-stayers evenly split the payout of the hostel-goer's compensation.</p><h1><br>First-Price Sealed-Bid Reverse Auction</h1><p>The reverse auction can be executed via a first-price sealed-bid reverse auction. Essentially, every person in the group writes the <i>minimum compensation they need to stay in the hostel</i> on a paper slip. This slip is their bid to sell their right to the hotel. Every person submits their slip to a box, while keeping their bid hidden. After everyon... <style>.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math * {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > * {position: absolute}
.MJXc-bevelled > * {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > * {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > * {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-surd + .mjx-box {display: inline-flex}
.mjx-mphantom * {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax_AMS'), local('MathJax_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_AMS-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax_Caligraphic Bold'), local('MathJax_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Caligraphic-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax_Fraktur'), local('MathJax_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Fraktur-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax_Fraktur Bold'), local('MathJax_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Fraktur-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax_Math BoldItalic'), local('MathJax_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Math-BoldItalic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax_SansSerif'), local('MathJax_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax_SansSerif Bold'), local('MathJax_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax_SansSerif Italic'), local('MathJax_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax_Script'), local('MathJax_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Script-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax_Typewriter'), local('MathJax_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Typewriter-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax_Caligraphic'), local('MathJax_Caligraphic-Regular')}
</style></p> | I recently faced a dilemma with a group of friends in which auction theory provided a nice solution to ensure everyone's satisfaction. Due to its elegance and potential application to similar coordination problems, I detail the solution below as well as some intuition for why it works.
Scenario
You and n−1 friends are traveling to Spain next week. Due to unforeseen circumstances, the hotel is booked for only n−1 people, hence one person must find other housing. However, because of the short notice as well as money constraints, the only option is a hostel where that person must live with others they do not know. No one in the group is eager to live in the hostel as the hotel is particularly nice. How can we fairly pick the hostel-goer?
Intuition
For any unpleasant task, each person has some minimum compensation they would accept in order to perform it. Thus, we can try to solve the dilemma by providing money to the hostel-goer as consolation. Under this framework, we must decide who is picked as the hostel-goer and how much compensation they should receive while respecting everyone's interests. A flawed system would allow the hostel-goer to demand compensation exceeding what the hotel-stayers are willing to pay.
Interestingly, auction theory provides a solution that guarantees satisfaction for all. In particular, we can think of the scenario as a reverse auction. In a reverse auction, the typical roles are reversed: there is one buyer and several sellers. The sellers compete to sell their good to the buyer. The scenario can be framed as a reverse auction by thinking of everyone in the group as a seller, selling away their right to stay in the hotel. The buyer is the collective group, buying away someone's right to stay in the hotel.
The high-level idea is to pick the hostel-goer through some reverse auction system, providing them with compensation. The remaining hotel-stayers evenly split the payout of the hostel-goer's compensation | 1,036 | 1.3.0 | Revision | false | null | null | CrosspostOutput
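The excerpt is cut off before the mechanism itself appears, but the setup described (everyone selling away their right to the hotel, with the group as the single buyer) is conventionally implemented as a reverse second-price auction. A minimal sketch of that standard mechanism (an illustration under that assumption, not code from the post):

```python
def hostel_auction(bids):
    """Reverse second-price auction for the hostel slot.

    bids[i] is person i's minimum acceptable compensation for taking
    the hostel. The lowest bidder becomes the hostel-goer and is paid
    the *second*-lowest bid; the remaining hotel-stayers split that
    payment evenly. The hostel-goer's own bid never sets their own
    payout, which is what pushes everyone toward honest bids.
    """
    order = sorted(range(len(bids)), key=lambda i: bids[i])
    loser, runner_up = order[0], order[1]
    payment = bids[runner_up]            # second-lowest bid
    share = payment / (len(bids) - 1)    # each hotel-stayer's contribution
    return loser, payment, share

# Example: four friends submit sealed bids.
who, paid, each = hostel_auction([120.0, 80.0, 200.0, 150.0])
# who == 1 (the person who bid 80 goes to the hostel),
# paid == 120.0, and each of the three stayers chips in 40.0
```

The exact incentive properties depend on how the payment is split among the stayers, but the core second-price idea carries over: demanding more than your true reservation value only risks losing a profitable trade.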
||
iTs8dGrvepymX5KwE | what-college-major-should-i-choose-if-i-am-unsure | What college major should I choose if I am unsure? | null | false | false | false | null | TBjn5w5sEdBnXBvR7 | null | true | false | false | false | Post | 2025-05-28T17:50:57.899Z | null | false | false | 2 | 2 | null | false | false | post | [] | null | null | hAxPHcbdiLhc4HdJE | 6 | 2 | -1 | false | -0.000708 | null | false | false | 2025-05-29T22:38:18.345Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | -1 | 0 | 2025-05-28T02:56:19.848Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 1 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "fkABsGCJZ6y9qConW",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T06:06:46.947Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "MiuAZvbQcQ7ethgt3",
"displayName": "Viktor withaK"
},
{
"_id": "dRaCtsAWxk7sgirSY",
"displayName": "Jordan Morgan"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Practical",
"needsReview": false,
"noindex": false,
"postCount": 3411,
"score": 2,
"shortName": null,
"slug": "practical",
"suggestedAsFilter": true,
"userId": "oBSWiHjgproTiThmY",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 2 | 0 | 0 | 1 | 0 | TBjn5w5sEdBnXBvR7 | contrejour | 2025-05-28T02:55:51.159Z | contrejour | contrejour | null | null | null | -2 | 0 | false | false | null | null | 1 | 2 | 0 | 0 | 0 | 0.8 | 0 | qgdGA4ZEyW7zNdK84 | User | null | null | null | null | null | null | iTs8dGrvepymX5KwE | SocialPreviewType | hAxPHcbdiLhc4HdJE | <p>First and foremost, it is important to understand that I finished my Associate's degree. I was about to finish my linguistics major because my original plan was to "move abroad and teach English." That was when I was 17. During that time, I decided to change my major and asked my advisor, who told me explicitly that that was not possible due to how close I was. I made the decision to transfer to another university to choose a major that was broader than linguistics. That is when I reach Economics as a major. It is a social science, and also regarded as STEM-like. However, I am not a fan of mathematics as a subject. I'm quite terrible at it. That is when other options appeared, such as PoliSci or History. </p><p>Nevertheless, I'm still quite unsure. </p><p>Part of me resonates more within the humanities, art, philosophy, history, everything that has to do with the creative sciences. However, in the long run, those majors do not always create ROI (return of investment), and despite my interests, I want to major in something that also helps me live day to day.</p><p>What major should I choose then? What pathway should I take? What should I do?</p> | First and foremost, it is important to understand that I finished my Associate's degree. I was about to finish my linguistics major because my original plan was to "move abroad and teach English." That was when I was 17. During that time, I decided to change my major and asked my advisor, who told me explicitly that that was not possible due to how close I was. 
I made the decision to transfer to another university to choose a major that was broader than linguistics. That is when I landed on Economics as a major. It is a social science, and also regarded as STEM-like. However, I am not a fan of mathematics as a subject; I'm quite terrible at it. That is when other options appeared, such as PoliSci or History.
Nevertheless, I'm still quite unsure.
Part of me resonates more with the humanities: art, philosophy, history, everything that has to do with the creative sciences. However, in the long run, those majors do not always create ROI (return on investment), and despite my interests, I want to major in something that also helps me live day to day.
What major should I choose then? What pathway should I take? What should I do? | 204 | 1.1.0 | Revision | false | null | null | CrosspostOutput |
|||
pqLiSNDS8bA49FfzA | can-genetic-privilege-exist-within-an-evolutionary | Can genetic privilege exist within an evolutionary equilibrium? | null | false | false | false | null | CR8zgSDwWcwtEWGSi | null | true | false | false | false | Post | null | 2025-05-28T17:44:21.263Z | null | false | false | 2 | 2 | 2025-05-28T17:59:47.643Z | false | false | post | [] | null | null | rJH3t4zxqNaaERaRE | 1 | 3 | -2 | false | 0.004251 | null | false | false | 2025-05-29T02:21:24.678Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | -2 | 0 | 2025-05-28T17:37:30.886Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 1 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "3uE2pXvbcnS9nnZRE",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:50.898Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 27,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "zJtgSyKntXrnkArbY",
"displayName": "kistune"
},
{
"_id": "XqyQTGFGoKfZtdqN3",
"displayName": "kINo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Modeling",
"needsReview": false,
"noindex": false,
"postCount": 5855,
"score": 2,
"shortName": null,
"slug": "world-modeling",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 3 | 0 | 0 | 2 | 0 | CR8zgSDwWcwtEWGSi | aynonymousprsn123 | 2025-01-06T15:05:28.881Z | AynonymousPrsn123 | AynonymousPrsn123 | null | null | null | 9 | 0 | false | false | null | null | 2 | 26 | 0 | 0 | 0 | 0.9 | 0 | grecHJcgkb3KW5wnM | User | null | null | null | null | null | null | pqLiSNDS8bA49FfzA | SocialPreviewType | rJH3t4zxqNaaERaRE | <p>ChatGPT and evolutionary biologists all seem to agree that the answer to my question is a resounding "yes," but I don't understand why. In particular, if a subpopulation has a persistent genetic privilege, doesn't that inherently imply that the population is NOT at an evolutionary equilibrium?</p><p>Please explain this to me using a mathematical model with reasonable and true assumptions. Alternatively, you could also just provide a "hand-wavey" math-sounding explanation; but please make sure it's logically valid and the assumptions are accurate.</p> | ChatGPT and evolutionary biologists all seem to agree that the answer to my question is a resounding "yes," but I don't understand why. In particular, if a subpopulation has a persistent genetic privilege, doesn't that inherently imply that the population is NOT at an evolutionary equilibrium?
Please explain this to me using a mathematical model with reasonable and true assumptions. Alternatively, you could also just provide a "hand-wavey" math-sounding explanation; but please make sure it's logically valid and the assumptions are accurate. | 82 | 1.1.0 | Revision | false | null | null | CrosspostOutput |
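The question asks for a model rather than offering one; purely as an illustrative sketch (not part of the original post), the standard mutation-selection-balance toy model shows how heritable fitness differences can persist even at a stationary equilibrium: selection keeps removing deleterious mutations while mutation keeps reintroducing them, so the distribution of fitness is stable over time even though some lineages are always better off. All parameter values below are arbitrary:

```python
import math
import random

def poisson(rng, lam):
    # Knuth's method for Poisson sampling; adequate for small lam.
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def mutation_selection_balance(N=1000, U=0.2, s=0.05,
                               generations=400, seed=0):
    """Each individual carries k deleterious mutations and has
    fitness (1 - s)**k. Every generation, parents are sampled in
    proportion to fitness and each offspring gains Poisson(U) new
    mutations. The distribution of k settles into a (roughly)
    stationary equilibrium with mean near U/s, yet the variance in
    fitness never disappears: some lineages stay 'privileged'."""
    rng = random.Random(seed)
    pop = [0] * N  # mutation counts, starting mutation-free
    for _ in range(generations):
        weights = [(1 - s) ** k for k in pop]
        parents = rng.choices(pop, weights=weights, k=N)
        pop = [k + poisson(rng, U) for k in parents]
    mean = sum(pop) / N
    var = sum((k - mean) ** 2 for k in pop) / N
    return mean, var
```

Under these assumptions the mean mutation load relaxes toward roughly U/s = 4 and then fluctuates around it, while the variance in load (and hence in fitness) stays positive: a stable population-level equilibrium coexisting with persistent individual-level differences.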
||
zXa348MXoETadDG26 | evaluation-as-feedback-cycle | Evaluation As Feedback Cycle | null | false | false | false | null | wqhovdqkWZzDf3zF9 | null | true | false | false | false | Post | https://bestofagreatlot.substack.com/p/evaluation-as-feedback-cycle | 2025-05-28T17:02:37.233Z | null | false | false | 2 | 2 | 2025-05-28T17:29:01.272Z | false | false | linkpost | [] | null | null | RCnK3pCzuriEoabck | 0 | 1 | 1 | false | 0.005614 | null | false | false | 2025-05-28T17:02:37.233Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | grecHJcgkb3KW5wnM | null | null | null | false | null | [] | null | 0 | 0 | 2025-05-28T16:59:40.285Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 21 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "xexCWMyds6QLWognu",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T03:38:23.532Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 20,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "si6LoAENzqPCmi2Dh",
"displayName": "ihatenumbersinusernames7"
},
{
"_id": "B2sXjQTGgwoGE9FES",
"displayName": "SeaIgloo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Optimization",
"needsReview": false,
"noindex": false,
"postCount": 3151,
"score": 2,
"shortName": null,
"slug": "world-optimization",
"suggestedAsFilter": true,
"userId": "XtphY3uYHwruKqDyG",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 1 | 0 | 0 | 0 | 0 | wqhovdqkWZzDf3zF9 | belos | 2023-09-29T04:19:55.519Z | belos | belos | null | null | null | 6 | 0 | false | false | null | null | 8 | 0 | 0 | 0 | 0 | 0.9 | 0 | qgdGA4ZEyW7zNdK84 | User | null | null | null | null | null | null | zXa348MXoETadDG26 | SocialPreviewType | RCnK3pCzuriEoabck | <p><i>This post on <strong>Best Of A Great Lot</strong> is a part of a series on the subject of designing a new form of governance. Each piece aims to stand alone, but fits together on the </i><a href="https://bestofagreatlot.substack.com/p/table-of-contents"><i><u>Table of Contents</u></i></a><i>.</i></p><p><i>Previous: </i><a href="https://bestofagreatlot.substack.com/p/a-sketch-of-belocracy"><i><u>A Sketch of Belocracy</u></i></a><i>. Next: </i><a href="https://bestofagreatlot.substack.com/p/idea-generation-and-sifting"><i>Generating Ideas</i></a><i>.</i></p><p>We know the path by which optimizers find effectiveness: they follow feedback loops on a signal of quality. This is a core truth of capitalism, evolution, machine learning, our most successful humans and other systems that optimize in response to feedback. Belocracy aims to join these ranks by having an evaluation step and then providing direct incentives for succeeding at that evaluation step.</p><p>As people and as a society, we are constantly evaluating: journalism, academic research, political arguments and individual conversations at least sometimes refer to evidence we have about how well our decisions have panned out. But these informal, noncentralized evaluations serve as only a weak feedback mechanism compared to what formal, centralized, independent evaluations can be. Within belocracy, policies are reviewed to determine whether they are meeting their goals, and programs -- whether regulatory or infrastructure -- are evaluated on how well they are succeeding in comparison to their competitors. 
Rewards and reputation are handed out for successful outcomes. Individuals pursuing their own benefit drives improvement in the system, and as long as the feedback offered is closely aligned with reality, a belocratic system will improve on the back of these rewards.</p><p>It's very tempting to delay talking about evaluation until later. Intuitively, the evaluation of a thing comes after the thing itself, so it might seem obvious to talk about how policies and programs are made before talking about how they're evaluated. But the feedback loop of belocracy is the heart of what gives it a chance of letting us make better policies and run our infrastructure more effectively. Without a feedback loop, the decision of what to do always comes down to what sounds better politically or other, worse feedback loops like prestige and money, rather than what works better in practice. So I'm going to describe the evaluation system first and defer to later some parts of the system that evaluation depends upon.</p><p>A system of evaluation carries inherent risk: it can become a target for ideologues and opportunists who want power. Supreme Court seats and the ... </p> | This post on Best Of A Great Lot is a part of a series on the subject of designing a new form of governance. Each piece aims to stand alone, but fits together on the Table of Contents.
Previous: A Sketch of Belocracy. Next: Generating Ideas.
We know the path by which optimizers find effectiveness: they follow feedback loops on a signal of quality. This is a core truth of capitalism, evolution, machine learning, our most successful humans and other systems that optimize in response to feedback. Belocracy aims to join these ranks by having an evaluation step and then providing direct incentives for succeeding at that evaluation step.
As people and as a society, we are constantly evaluating: journalism, academic research, political arguments and individual conversations at least sometimes refer to evidence we have about how well our decisions have panned out. But these informal, noncentralized evaluations serve as only a weak feedback mechanism compared to what formal, centralized, independent evaluations can be. Within belocracy, policies are reviewed to determine whether they are meeting their goals, and programs -- whether regulatory or infrastructure -- are evaluated on how well they are succeeding in comparison to their competitors. Rewards and reputation are handed out for successful outcomes. Individuals pursuing their own benefit drive improvement in the system, and as long as the feedback offered is closely aligned with reality, a belocratic system will improve on the back of these rewards.
It's very tempting to delay talking about evaluation until later. Intuitively, the evaluation of a thing comes after the thing itself, so it might seem obvious to talk about how policies and programs are made before talking about how they're evaluated. But the feedback loop of belocracy is the heart of what gives it a chance of letting us make better policies and run our infrastructure more effectively. Without a feedback loop, the decision of what to do always comes d | 5,297 | 1.2.0 | Revision | false | null | null | CrosspostOutput |
||
4AprYStfzLhSodjLj | how-much-might-ai-legislation-cost-in-the-u-s | How much might AI legislation cost in the U.S.? | null | false | false | false | null | xKnKarNkdFa7tzsua | null | true | false | false | false | Post | null | 2025-05-28T16:21:33.860Z | null | false | false | 2 | 2 | 2025-05-28T17:22:16.010Z | false | false | post | [] | null | null | YmrsEGyk8bBLHwBo5 | 0 | 2 | -5 | false | 0.003011 | null | false | false | 2025-05-28T16:21:33.860Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | grecHJcgkb3KW5wnM | null | null | null | false | null | [] | null | -1 | 0 | 2025-04-15T19:52:17.299Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 13 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "qHDus5MuMNqQxJbjD",
"adminOnly": false,
"afBaseScore": 4,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "oEF4gToHRPEMw4FSo",
"displayName": "Jono"
}
]
},
"baseScore": 11,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-08-09T18:31:56.709Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "oEF4gToHRPEMw4FSo",
"displayName": "Jono"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI Governance",
"needsReview": false,
"noindex": false,
"postCount": 726,
"score": 11,
"shortName": null,
"slug": "ai-governance",
"suggestedAsFilter": false,
"userId": "QBvPFLFyZyuHcBwFm",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 2 | 0 | 0 | 1 | 0 | xKnKarNkdFa7tzsua | will-rinehart | 2021-05-18T18:31:31.910Z | willrinehart | will rinehart | null | null | null | -4 | 0 | false | false | null | null | 1 | 1 | 0 | 0 | 0 | 0.8 | 0 | qgdGA4ZEyW7zNdK84 | User | null | null | null | null | null | null | 4AprYStfzLhSodjLj | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/4AprYStfzLhSodjLj/b57mert91vdzowu8gs4z | SocialPreviewType | YmrsEGyk8bBLHwBo5 | <p><i>This piece was previously </i><a href="https://exformation.williamrinehart.com/p/how-much-might-ai-legislation-cost"><i>published on my Substack</i></a><i>.</i></p><p>Policymakers are rushing to regulate artificial intelligence (AI), but the economic impact of these regulations remains largely unexplored. While <a href="https://digital-strategy.ec.europa.eu/en/library/study-supporting-impact-assessment-ai-regulation"><u>the European Union</u></a> and <a href="https://assets.publishing.service.gov.uk/media/6424208f3d885d000cdadddf/uk_ai_regulation_impact_assessment.pdf"><u>the United Kingdom</u></a> have produced cost estimates, recent developments in the United States offer important new benchmarks. Recent amendments to the California Consumer Privacy Act (CCPA) and regulations implementing President Biden’s Executive Order on AI offer crucial insights into what businesses might expect to pay for compliance. The financial burden could be substantial, running into billions of dollars across the economy. Especially as states push to adopt AI bills, understanding these costs is essential for crafting regulations that balance innovation, safety, and economic viability.</p><p>Still, these compliance cost estimates <a href="https://www.gao.gov/products/gao-18-381#:~:text=GAO%20found%20two%20limitations%20with,new%20requirements%20under%20Executive%20Order"><u>are notoriously unreliable</u></a>. 
As an alternative approach, I tested whether large language models (LLMs) could provide more realistic estimates by simulating compliance scenarios in the final section of this post. I prompted ChatGPT, Claude, and Grok to act as compliance officers at companies subject to new CCPA provisions and a Bureau of Industry and Security (BIS) rule, asking each to estimate hours needed for first-year implementation and ongoing compliance.</p><p>The big takeaways:</p><ul><li>For California's risk assessment regulation, Claude and Grok project 400-580 hours will be needed for the first-year of compliance (vs. the official 120 hours) and 150-240 hours thereafter (vs. the official 18-36 hours annually). ChatGPT estimates the time at 90-250 hours initially and 40-150 hours for each additional year.</li><li>For the automated decision-making provision of the CCPA, Claude and Grok project 450-730 hours for first-year compliance, far exceeding the official 360-hour estimate. While ChatGPT suggests lower initial costs (80-300 hours), all three LLMs predict significantly higher ongoing annual costs than official projections.</li><li>For the BIS reporting rule, Claude projects 1,140 hours for first-year compliance (3x the official estimate of 333 hours), while ChatGPT estimates 280 hours and Grok projects 380 hours. All three LLMs agree ongoing compliance will require substantial resources.</li></ul><p>In short, the LLMs consistently predicted much higher compliance costs than the official sources, suggesting a systematic underestimation.</p><p>The following analysis ... </p> | This piece was previously published on my Substack.
Policymakers are rushing to regulate artificial intelligence (AI), but the economic impact of these regulations remains largely unexplored. While the European Union and the United Kingdom have produced cost estimates, recent developments in the United States offer important new benchmarks. Recent amendments to the California Consumer Privacy Act (CCPA) and regulations implementing President Biden’s Executive Order on AI offer crucial insights into what businesses might expect to pay for compliance. The financial burden could be substantial, running into billions of dollars across the economy. Especially as states push to adopt AI bills, understanding these costs is essential for crafting regulations that balance innovation, safety, and economic viability.
Still, these compliance cost estimates are notoriously unreliable. As an alternative approach, I tested whether large language models (LLMs) could provide more realistic estimates by simulating compliance scenarios in the final section of this post. I prompted ChatGPT, Claude, and Grok to act as compliance officers at companies subject to new CCPA provisions and a Bureau of Industry and Security (BIS) rule, asking each to estimate hours needed for first-year implementation and ongoing compliance.
The big takeaways:
* For California's risk assessment regulation, Claude and Grok project 400-580 hours will be needed for the first-year of compliance (vs. the official 120 hours) and 150-240 hours thereafter (vs. the official 18-36 hours annually). ChatGPT estimates the time at 90-250 hours initially and 40-150 hours for each additional year.
* For the automated decision-making provision of the CCPA, Claude and Grok project 450-730 hours for first-year compliance, far exceeding the official 360-hour estimate. While ChatGPT suggests lower initial costs (80-300 hours), all three LLMs predict significantly higher ongoing annual costs than official projections.
* For | 3,219 | 1.1.1 | Revision | false | null | null | CrosspostOutput |
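The hour estimates above turn into dollar figures once an hourly rate is assumed; the sketch below uses a $150/hr fully loaded compliance-staff rate, which is an illustrative assumption and not a figure from the post:

```python
def cost_range(hours_lo, hours_hi, rate=150.0):
    """Convert an hours estimate into a first-year dollar range at an
    assumed fully loaded hourly rate (150 USD/hr here; illustrative,
    not a figure used in the post)."""
    return hours_lo * rate, hours_hi * rate

# California risk-assessment regulation, first-year compliance:
official = cost_range(120, 120)   # official estimate: (18000.0, 18000.0)
llm_view = cost_range(400, 580)   # Claude/Grok range: (60000.0, 87000.0)
```

At that assumed rate, the gap between the official 120-hour figure and the LLMs' 400-580 hours is the difference between roughly $18k and $60k-$87k per firm in year one.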
|
KrgBkqeChtAWuPsLP | what-llms-lack | What LLMs lack | null | false | false | false | null | NaWyoW84wqKQRWysF | null | true | false | false | false | Post | null | 2025-05-28T16:19:37.663Z | null | false | false | 2 | 2 | 2025-05-28T17:58:44.251Z | false | false | post | [] | null | null | wn8sBGSGjpiuAisZf | 5 | 9 | 15 | false | 0.012745 | null | false | false | 2025-05-29T06:01:07.887Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 4 | 0 | 2025-05-28T10:26:13.984Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 4 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 9 | 0 | 0 | 5 | 0 | NaWyoW84wqKQRWysF | p-b-1 | 2020-12-13T14:52:33.205Z | p.b. | p.b. | null | null | null | 1,239 | 11 | false | false | null | null | 26 | 246 | 0 | 0 | 1 | 1 | 0 | nLbwLhBaQeG6tCNDN | User | null | null | null | [
"canModeratePersonal",
"alignmentVoters"
] | null | null | KrgBkqeChtAWuPsLP | SocialPreviewType | wn8sBGSGjpiuAisZf | <h3>Introduction</h3><p>I have long been very interested in the limitations of LLMs because understanding them seems to be the most important step to getting timelines right. </p><p>Right now there seems to be great uncertainty about timelines, with very short timelines becoming plausible, but also staying hotly contested. </p><p>This led me to revisit LLM limitations and I think I noticed a pattern that somehow escaped me before. </p><h3>Limitations</h3><p>To recap, these seem to be the most salient limitations or relative cognitive weaknesses of current models: </p><p><strong>System 2 thinking</strong>: Planning, see the ongoing weird difficulty to get it to play TicTacToe perfectly or block world, chess, anything that has not been subject of a lot of reasoning RL. </p><p><strong>Dealing with new situations</strong>: Going out of distribution is a killer for all things DL. </p><p><strong>Knowledge integration</strong>: Models don't have automatic "access" to skills learned from separate modalities. Even within the same modality skills are not robustly recallable, hence the need for prompting. Also related: <a href="https://marginalrevolution.com/marginalrevolution/2025/02/dwarkeshs-question.html">Dwarkesh's question</a>. </p><p><strong>Learning while problem solving</strong>: Weights are frozen and there is no way to slowly build up a representation of a complex problem if the representations that have already been learned are not very close already. This is basically knowledge integration during inference. </p><p><strong>Memory</strong>: RAGs are a hack. 
There is no obvious way to feed complex representations back into the model, mostly because these aren't built in the first place - the state of a transformer is spread over all the token and attention values, so recomputing those based on the underlying text is the go-to solution.</p><p><strong>Objectivity</strong>: See hallucinations. But also self-other/fact-fantasy distinction more generally.</p><p><strong>Agency</strong>: Unexpectedly we got very smart models that are not very good at getting stuff done.</p><p><strong>Cognitive control</strong>: The inability to completely ignore irrelevant information or conversely set certain tenets absolute leads to jailbreaks, persistent trick question failures and is also a big part of the unreliability of models. </p><h3>One category</h3><p>These seem like a mixed bag of quite different things, but I recently realised that they all belong to the same class of cognitive abilities: These are all abilities that in humans are enabled by and in fact require consciousness. </p><p>Is "cognitive abilities enabled by consciousness" maybe a bit tautological? Unconscio... </p> | Introduction
I have long been very interested in the limitations of LLMs because understanding them seems to be the most important step to getting timelines right.
Right now there seems to be great uncertainty about timelines, with very short timelines becoming plausible, but also staying hotly contested.
This led me to revisit LLM limitations and I think I noticed a pattern that somehow escaped me before.
Limitations
To recap, these seem to be the most salient limitations or relative cognitive weaknesses of current models:
System 2 thinking: Planning, see the ongoing weird difficulty to get it to play TicTacToe perfectly or block world, chess, anything that has not been subject of a lot of reasoning RL.
Dealing with new situations: Going out of distribution is a killer for all things DL.
Knowledge integration: Models don't have automatic "access" to skills learned from separate modalities. Even within the same modality skills are not robustly recallable, hence the need for prompting. Also related: Dwarkesh's question.
Learning while problem solving: Weights are frozen and there is no way to slowly build up a representation of a complex problem if the representations that have already been learned are not very close already. This is basically knowledge integration during inference.
Memory: RAGs are a hack. There is no obvious way to feed complex representations back into the model, mostly because these aren't built in the first place - the state of a transformer is spread over all the token and attention values, so recomputing those based on the underlying text is the go-to solution.
Objectivity: See hallucinations. But also self-other/fact-fantasy distinction more generally.
Agency: Unexpectedly we got very smart models that are not very good at getting stuff done.
Cognitive control: The inability to completely ignore irrelevant information or conversely set certain tenets absolute leads to jailbreaks, persistent trick question failures and is a | 882 | 1.1.0 | Revision | false | null | null | CrosspostOutput |
||
8yGSBKSAy2Xffa2Q5 | playlist-inspired-by-manifest-2024 | Playlist Inspired by Manifest 2024 | null | false | false | false | null | Pd22e5d3NTE8kH7v4 | null | true | false | false | false | Post | https://open.spotify.com/playlist/4rdvKoNg8WjGrS4SVip0RT | 2025-05-28T16:03:29.051Z | null | false | false | 2 | 2 | null | false | false | linkpost | [] | null | null | NbFfdXitQGgPpytn3 | 0 | 1 | 4 | false | 0.002205 | null | false | false | 2025-05-28T16:03:29.051Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | grecHJcgkb3KW5wnM | null | null | null | false | null | [] | null | 1 | 0 | 2025-05-28T15:56:02.166Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 1 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "xtGuokZEdXhpHbshJ",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2025-02-11T15:46:19.026Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Lighthaven",
"needsReview": false,
"noindex": false,
"postCount": 11,
"score": 0,
"shortName": null,
"slug": "lighthaven",
"suggestedAsFilter": false,
"userId": "2yZ6G2cfNhBARiSLG",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "aLB9evWFYtfyS3WJg",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-12-08T22:17:01.994Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Music",
"needsReview": false,
"noindex": false,
"postCount": 96,
"score": 9,
"shortName": null,
"slug": "music",
"suggestedAsFilter": false,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "gHCNhqxuJq2bZ2akb",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-10T11:36:05.706Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Social & Cultural Dynamics",
"needsReview": false,
"noindex": false,
"postCount": 384,
"score": 0,
"shortName": null,
"slug": "social-and-cultural-dynamics",
"suggestedAsFilter": false,
"userId": "qxJ28GN72aiJu96iF",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "izp6eeJJEg9v5zcur",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T03:38:34.631Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 15,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Community",
"needsReview": false,
"noindex": false,
"postCount": 2400,
"score": 0,
"shortName": null,
"slug": "community",
"suggestedAsFilter": true,
"userId": "XtphY3uYHwruKqDyG",
"voteCount": 0,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 1 | 0 | 0 | 1 | 0 | Pd22e5d3NTE8kH7v4 | commander-zander | 2015-05-07T14:25:35.213Z | scarcegreengrass | Commander Zander | null | null | null | 473 | 0 | false | false | <p>alexpear.github.io</p> | null | null | 20 | 203 | 0 | 0 | 0 | 1 | 0 | r38pkCm7wF4M44MDQ | User | null | null | true | [
"alignmentVoters",
"canModeratePersonal"
] | null | null | 8yGSBKSAy2Xffa2Q5 | SocialPreviewType | NbFfdXitQGgPpytn3 | <p>Okay, I think it's time to stop polishing this playlist & post it. This is music about the moods of people I met at Manifest. I'm pretty delighted with it! </p> | Okay, I think it's time to stop polishing this playlist & post it. This is music about the moods of people I met at Manifest. I'm pretty delighted with it! | 30 | 1.1.0 | Revision | false | null | null | CrosspostOutput |
|
gaEfuH7twCq4jLuDd | aisn-56-google-releases-veo-3 | AISN #56: Google Releases Veo 3 | null | false | false | false | null | DtyGa83c67vkTywnW | [
{
"__typename": "CoauthorStatusOutput",
"confirmed": true,
"requested": false,
"userId": "RoHHX3SmMqkJ9RMEF"
}
] | true | false | false | false | Post | https://newsletter.safe.ai/p/ai-safety-newsletter-56-google-releases | 2025-05-28T16:00:48.187Z | null | false | false | 2 | 2 | null | false | false | linkpost | [] | null | null | sPoXHFyuHbrgvhiAJ | 0 | 2 | 5 | false | 0.002735 | null | false | false | 2025-05-28T16:00:48.187Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | grecHJcgkb3KW5wnM | null | null | null | false | null | [] | null | 0 | 0 | 2025-05-28T15:58:33.598Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [
{
"__typename": "User",
"_id": "RoHHX3SmMqkJ9RMEF",
"afCommentCount": 28,
"afKarma": 990,
"afPostCount": 38,
"commentCount": 65,
"createdAt": "2021-08-07T16:54:49.878Z",
"deleted": false,
"displayName": "Dan H",
"fullName": null,
"htmlBio": "<p>ai-frontiers.org</p><p>newsletter.safe.ai</p><p>newsletter.mlsafety.org</p>",
"isAdmin": false,
"jobTitle": null,
"karma": 3476,
"organization": null,
"postCount": 63,
"previousDisplayName": null,
"profileImageId": null,
"reviewedByUserId": "EQNTWXLKMeWMp2FQS",
"sequenceCount": 4,
"slug": "dan-h",
"spamRiskScore": 1,
"tagRevisionCount": 0,
"username": "dan-hendrycks"
}
] | 4 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "8byoqYZfdwHffYLZ6",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-01T18:44:14.645Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Newsletters",
"needsReview": false,
"noindex": false,
"postCount": 411,
"score": 9,
"shortName": null,
"slug": "newsletters",
"suggestedAsFilter": false,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 2 | 0 | 0 | 0 | 0 | DtyGa83c67vkTywnW | corin-katzke | 2023-03-30T02:46:00.477Z | corin-katzke | Corin Katzke | null | null | null | 442 | 0 | false | false | null | null | 32 | 6 | 0 | 0 | 0 | 1 | 0 | r38pkCm7wF4M44MDQ | User | null | null | null | [
"canModeratePersonal"
] | null | null | gaEfuH7twCq4jLuDd | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/gaEfuH7twCq4jLuDd/hxv3h32ggi8ggweseuby | SocialPreviewType | sPoXHFyuHbrgvhiAJ | <p>Welcome to the AI Safety Newsletter by the <a href="https://www.safe.ai/"><u>Center for AI Safety</u></a>. We discuss developments in AI and AI safety. No technical background required.</p><p>In this edition: Google released a frontier video generation model at its annual developer conference; Anthropic’s Claude Opus 4 demonstrates the danger of relying on voluntary governance.</p><p>Listen to the AI Safety Newsletter for free on <a href="https://spotify.link/E6lHa1ij2Cb"><u>Spotify</u></a> or <a href="https://podcasts.apple.com/us/podcast/ai-safety-newsletter/id1702875110"><u>Apple Podcasts</u></a>.</p><p><a href="https://newsletter.safe.ai/subscribe"><strong>Subscribe to receive future versions</strong></a><strong>.</strong></p><hr><h1><strong>Google Releases Veo 3</strong></h1><p>Last week, Google made several <a href="https://blog.google/technology/developers/google-io-2025-collection/"><u>AI announcements</u></a> at I/O 2025, its annual developer conference. 
An announcement of particular note is <a href="https://deepmind.google/models/veo/"><u>Veo 3</u></a>, Google’s newest video generation model.</p><p><strong>Frontier video and audio generation.</strong> Veo 3 outperforms other models on <a href="https://deepmind.google/models/veo/evals/"><u>human preference benchmarks</u></a>, and generates both audio and video.</p><figure class="image"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/gaEfuH7twCq4jLuDd/fhub0w3ccycvjz7avktt" alt="" srcset="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/gaEfuH7twCq4jLuDd/savw8fihg7ziixnpz02z 424w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/gaEfuH7twCq4jLuDd/eqktw363ebobxisfuqkv 848w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/gaEfuH7twCq4jLuDd/pbgoqdkhg4c5k2jvz2b7 1272w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/gaEfuH7twCq4jLuDd/fhub0w3ccycvjz7avktt 1456w"><figcaption>Google showcasing a video generated with Veo 3. (<a href="https://www.axios.com/2025/05/23/google-ai-videos-veo-3"><u>Source</u></a>)</figcaption></figure><p>If you just look at benchmarks, Veo 3 is a substantial improvement over other systems. But relative benchmark improvement only tells part of the story—the absolute capabilities of systems ultimately determine their usefulness. Veo 3 looks like a marked qualitative improvement over other models—it generates video and audio with extreme faithfulness, and we recommend you <a href="https://x.com/HashemGhaili/status/1925332319604257203"><u>see</u></a> <a href="https://x.com/minchoi/status/1925387367806115943"><u>some</u></a> <a href="https://x.com/laszlogaal_/status/1925094336200573225"><u>examples</u></a> for yourself. 
Veo 3 may represent the point video generation crosses the line between being an interesting toy and being genuinely useful.</p><p><strong>Other announcements at I/O 2025</strong>. Other highlights from the conference include:</p><ul><li><a href="https://blog.google/technology/google-deepmind/google-gemini-updates-io-2025/"><u>Gemini 2.5</u></a> Pro now leads LMArena and WebDev Arena. Deep Think mode, a reasoning feature that scored 49.4% on the USA Mathematical Olympiad 2025 (more than twice OpenAI’s o3, which scored 21.7%). Gemini 2.5 Flash now performs better across reasoning, multimodality, code, and long context while becoming 20-30% more efficient in token usage.</li><li><a href="https://blog.google/technology/google-deepmind/gemini-diffusion/"><u>Gemini Diffusion</u></a>, an experimental (non-frontier) text diffusion model, delivers output 4-5 times faster than comparable models while rivaling the performance of models twice its size. Most LLMs are autoregressive models, which generate one token at a time—in contrast, diffusion models generate an entire response at once.</li><li>Google also announced <a href="https://developers.googleblog.com/en/introducing-gemma-3n/"><u>Gemma 3n</u></a>, an open model small enough to run on mobile devices, a public beta for Google’s autonomous coding agent <a href="https://blog.google/technology/google-labs/jules/"><u>Jules</u></a>, a new <a href="https://blog.google/products/search/google-search-ai-mode-update/#ai-mode-search"><u>AI search</u></a> feature, an <a href="https://blog.google/technology/ai/google-synthid-ai-content-detector/"><u>AI watermarker</u></a> that identifies content generated by Google’s systems, and more.</li></ul><p><strong>AI is here to stay.</strong> AI use is sometimes driven by t... </p> | Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.
In this edition: Google released a frontier video generation model at its annual developer conference; Anthropic’s Claude Opus 4 demonstrates the danger of relying on voluntary governance.
Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.
Subscribe to receive future versions.
----------------------------------------
Google Releases Veo 3
Last week, Google made several AI announcements at I/O 2025, its annual developer conference. An announcement of particular note is Veo 3, Google’s newest video generation model.
Frontier video and audio generation. Veo 3 outperforms other models on human preference benchmarks, and generates both audio and video.
Google showcasing a video generated with Veo 3. (Source)
If you just look at benchmarks, Veo 3 is a substantial improvement over other systems. But relative benchmark improvement only tells part of the story—the absolute capabilities of systems ultimately determine their usefulness. Veo 3 looks like a marked qualitative improvement over other models—it generates video and audio with extreme faithfulness, and we recommend you see some examples for yourself. Veo 3 may represent the point video generation crosses the line between being an interesting toy and being genuinely useful.
Other announcements at I/O 2025. Other highlights from the conference include:
* Gemini 2.5 Pro now leads LMArena and WebDev Arena. Deep Think mode, a reasoning feature that scored 49.4% on the USA Mathematical Olympiad 2025 (more than twice OpenAI’s o3, which scored 21.7%). Gemini 2.5 Flash now performs better across reasoning, multimodality, code, and long context while becoming 20-30% more efficient in token usage.
* Gemini Diffusion, an experimental (non-frontier) text diffusion model, delivers output 4-5 times faster than comparable models while rivaling the performan | 1,050 | 1.2.1 | Revision | false | null | null | CrosspostOutput |
|
QkHgAjrgJssTrqjBB | how-self-aware-are-llms | How Self-Aware Are LLMs? | null | false | false | false | null | bEd4wzpKvWj7fKCGn | null | true | false | false | false | Post | null | 2025-05-28T12:57:37.998Z | null | false | false | 2 | 2 | 2025-05-28T17:23:37.961Z | false | false | post | [] | null | null | kJsGMnEP72Dk9qbd7 | 6 | 5 | 14 | false | 0.012452 | null | false | false | 2025-05-29T21:01:58.402Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | grecHJcgkb3KW5wnM | null | null | null | false | null | [] | null | 4 | 0 | 2025-05-27T13:17:35.698Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 12 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "FBRwHSmTudwiHHtrn",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2023-03-15T20:29:46.761Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI Evaluations",
"needsReview": false,
"noindex": false,
"postCount": 224,
"score": 9,
"shortName": null,
"slug": "ai-evaluations",
"suggestedAsFilter": false,
"userId": "qgdGA4ZEyW7zNdK84",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "Zwv9eHi7KGg5KA9oM",
"adminOnly": false,
"afBaseScore": 9,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 21,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-06-27T20:43:34.869Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "eCk5iNu68fJeuwB4e",
"displayName": "aproteinengine"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Introspection",
"needsReview": false,
"noindex": false,
"postCount": 83,
"score": 21,
"shortName": null,
"slug": "introspection",
"suggestedAsFilter": false,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 3,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "KmgkrftQuX7jmjjp5",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2021-09-24T14:01:59.395Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Language Models (LLMs)",
"needsReview": false,
"noindex": false,
"postCount": 840,
"score": 9,
"shortName": null,
"slug": "language-models-llms",
"suggestedAsFilter": false,
"userId": "Sp5wM4aRAhNERd4oY",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "mseJCpconDZyh22y4",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2023-04-13T16:05:12.352Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Situational Awareness",
"needsReview": false,
"noindex": false,
"postCount": 30,
"score": 9,
"shortName": null,
"slug": "situational-awareness-1",
"suggestedAsFilter": false,
"userId": "yzpiNbwDKZeZAhvHw",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 5 | 0 | 0 | 3 | 0 | bEd4wzpKvWj7fKCGn | christopher-ackerman | 2024-06-26T19:23:34.491Z | christopher-ackerman | Christopher Ackerman | null | null | null | 125 | 24 | false | false | null | null | 3 | 10 | 0 | 2 | 0 | 1 | 0 | qgdGA4ZEyW7zNdK84 | User | null | null | null | [
"alignmentVoters",
"canModeratePersonal"
] | null | null | QkHgAjrgJssTrqjBB | https://res.cloudinary.com/lesswrong-2-0/image/upload/c_fill,ar_1.91,g_auto/SocialPreview/i8zxibqbvjljdvaibe29 | SocialPreviewType | kJsGMnEP72Dk9qbd7 | <p data-internal-id="h.ljbeh2shgejg"><i>An interim research report</i></p><h2><strong>Summary</strong></h2><ul><li>We introduce a novel methodology for quantitatively evaluating metacognitive abilities in LLMs</li><li>We present evidence that some frontier LLMs introduced since early 2024 - but not older or smaller ones - show some metacognitive abilities</li><li>The metacognitive abilities that current LLMs do show are relatively weak, and manifest in a context-dependent manner; the models often prefer to use heuristics</li><li>Analysis of the probabilities LLMs assign to output tokens provides evidence of an internal signal that may be used for metacognition</li><li>There appears to be a dissociation in LLM performance between recognition and recall, with the former supplying a much more robust signal for introspection</li></ul><h2><strong>Introduction</strong></h2><p>A basic component of self-awareness, and one amenable to testing, is knowledge of one’s own knowledge. Early work in previous generations of LLMs found that the models’ <a href="https://arxiv.org/pdf/2207.05221">implicit confidence</a> in the correctness of their outputs, as indicated by the probabilities their output layer assigned to generated tokens, was correlated with the probability of their outputs being correct (i.e., was “calibrated”), suggesting the existence of internal signals that could be used as the basis for self-knowledge.</p><p>But can models actually access that information? Do they explicitly “know” what they know? 
As models have grown larger, researchers have found that models can be fine-tuned to <a href="https://arxiv.org/pdf/2205.14334">report explicit confidence ratings</a> in their answers that are well calibrated, and that more recent models trained with reinforcement learning from human feedback (RLHF), like ChatGPT, can give calibrated verbal reports of certainty <a href="https://arxiv.org/pdf/2305.14975">even without specific fine tuning</a>.</p><p>A priori this seems somewhat surprising. The transformer architecture underlying modern LLMs is entirely feedforward, offering no opportunity for the reflective processes that underlie the subjective experience of human introspection, and the standard next-token-prediction pretraining task affords no basis for self-modeling. It’s conceivable that targeted RLHF post-training could induce a sort of self-simulation in models (certainly model developers have an incentive to encourage models to differentiate factual knowledge from guesswork to reduce hallucinations). In fact, researchers have recently claimed success in fine-tuning just such a <a href="https://arxiv.org/pdf/2410.13787">self-simulation mechanism</a> into GPT4 and GPT4o.</p><p>Still, caution is war... </p> | An interim research report
Summary
* We introduce a novel methodology for quantitatively evaluating metacognitive abilities in LLMs
* We present evidence that some frontier LLMs introduced since early 2024 - but not older or smaller ones - show some metacognitive abilities
* The metacognitive abilities that current LLMs do show are relatively weak, and manifest in a context-dependent manner; the models often prefer to use heuristics
* Analysis of the probabilities LLMs assign to output tokens provides evidence of an internal signal that may be used for metacognition
* There appears to be a dissociation in LLM performance between recognition and recall, with the former supplying a much more robust signal for introspection
Introduction
A basic component of self-awareness, and one amenable to testing, is knowledge of one’s own knowledge. Early work in previous generations of LLMs found that the models’ implicit confidence in the correctness of their outputs, as indicated by the probabilities their output layer assigned to generated tokens, was correlated with the probability of their outputs being correct (i.e., was “calibrated”), suggesting the existence of internal signals that could be used as the basis for self-knowledge.
But can models actually access that information? Do they explicitly “know” what they know? As models have grown larger, researchers have found that models can be fine-tuned to report explicit confidence ratings in their answers that are well calibrated, and that more recent models trained with reinforcement learning from human feedback (RLHF), like ChatGPT, can give calibrated verbal reports of certainty even without specific fine tuning.
A priori this seems somewhat surprising. The transformer architecture underlying modern LLMs is entirely feedforward, offering no opportunity for the reflective processes that underlie the subjective experience of human introspection, and the standard next-token-prediction pretraining task affords no b | 3,115 | 1.7.1 | Revision | false | null | null | CrosspostOutput |
|
BrvonZ9qCA5sy3ruz | can-we-hack-hedonic-treadmills | Can We Hack Hedonic Treadmills?
| null | false | false | false | null | WqhtKHhvK6nb3WnRf | null | true | false | false | false | Post | null | 2025-05-28T11:42:55.193Z | null | false | false | 2 | 2 | 2025-05-28T17:20:53.990Z | false | false | post | [] | null | null | qvxgkcQGrquZinGLe | 0 | 5 | 2 | false | 0.00623 | null | false | false | 2025-05-28T11:42:55.193Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | grecHJcgkb3KW5wnM | null | null | null | false | null | [] | null | -1 | 0 | 2025-05-28T03:10:00.799Z | false | false | true | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 3 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "dBPou4ihoQNY4cquv",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-08-01T16:09:30.226Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Psychology",
"needsReview": false,
"noindex": false,
"postCount": 348,
"score": 9,
"shortName": null,
"slug": "psychology",
"suggestedAsFilter": false,
"userId": "p8SHJFHRgZeMuw7qk",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "3uE2pXvbcnS9nnZRE",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:50.898Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 27,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "zJtgSyKntXrnkArbY",
"displayName": "kistune"
},
{
"_id": "XqyQTGFGoKfZtdqN3",
"displayName": "kINo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Modeling",
"needsReview": false,
"noindex": false,
"postCount": 5855,
"score": 2,
"shortName": null,
"slug": "world-modeling",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 5 | 0 | 0 | 1 | 0 | WqhtKHhvK6nb3WnRf | vincent-li-1 | 2025-05-04T07:13:54.191Z | Vincent Li | Vincent Li | null | null | null | 7 | 0 | false | false | <p>After a high school physics class I declared that free will is an illusion and everything is predetermined. Unsurprisingly, that didn't go over well. Some topics are best reserved for fellow rationalists, whom I'd like to meet and have more discussions with. I'm broadly interested in AI alignment, futurism and metaphysics. I subscribe to <a href="http://sl4.org/crocker.html">Crocker’s Rules</a>. Based in Hong Kong.</p> | null | null | 2 | 10 | 0 | 0 | 0 | 0.9 | 0 | qgdGA4ZEyW7zNdK84 | User | null | null | null | null | null | null | BrvonZ9qCA5sy3ruz | https://res.cloudinary.com/lesswrong-2-0/image/upload/c_fill,ar_1.91,g_auto/SocialPreview/niqi6atxwpvzh8fboakc | SocialPreviewType | qvxgkcQGrquZinGLe | <p>During a visit to a Hong Kong children’s welfare home, I met a 12-year-old girl I'll call Kylie. 
She had suffered a severe illness that left her blind, deaf, non-verbal, and nearly immobile, yet no identifiable damage to her brain was found.</p><p>The staff described her, without hesitation, as “<i>always cheerful</i>,” and indeed she smiled the entire time I watched her on the standing frame.</p><p>We can't objectively verify Kylie’s internal state, but her case crystallized a puzzle: if the absence of sight, hearing, or mobility doesn't preclude happiness, and <a href="https://www.researchgate.net/publication/22451114_Lottery_Winners_and_Accident_Victims_Is_Happiness_Relative">neither wealth nor fame can guarantee it</a>, then <i>what truly determines our level of happiness?</i></p><h2><strong>The treadmill, uphill and down</strong></h2><p>Brickman & Campbell (1971) called this puzzle the <a href="https://www.scienceopen.com/document?vid=f8dfc708-990f-457a-aaf8-907b1404390b"><i>Hedonic</i> <i>Treadmill</i></a>: as a person experiences a positive emotional event, expectations and desires rise in tandem, which cancels their net long-term impact, resulting in no permanent gain in happiness.</p><p>So no matter how hard one tries to gain in happiness, one will remain in the <i>same place</i>.</p><p><a href="https://journals.sagepub.com/doi/abs/10.1111/j.0963-7214.2004.01501002.x">Later</a> <a href="https://www.researchgate.net/publication/8098595_Life_Satisfaction_Set_Point_Stability_and_Change">studies</a> show the same homeostasis after negative shocks: people recover from bereavement, disability, or loss, drifting back towards a stable baseline level of happiness. The baseline level may be identified at <a href="https://pubmed.ncbi.nlm.nih.gov/16719675/">different points among individuals</a> (such as neutral or positive), but is stubbornly stable <i>within</i> them.</p><p>The implication is disheartening. 
If no action durably lifts well-being, then our daily efforts seem to generate motion without progress.</p><h2><strong>Can we raise the platform?</strong></h2><p>We can <i>break</i> the treadmill, downward: substance abuse, chronic depression, or traumatic brain injury can lower one's baseline for years. The <a href="https://www.frontiersin.org/articles/10.3389/fnins.2024.1447688/full">biological mechanisms</a> involve long-term changes in dopaminergic and serotonergic signalling, glucocorticoid cascades, and possibly neuroinflammation—none pleasant.</p><p>But the platform can surely be raised as well, right? Some <a href="https://pubmed.ncbi.nlm.nih.gov/16719675/">empirical literature</a> says yes: <a href="https://www.lesswrong.com/posts/ZbgCx2ntD5eu8Cno9/how-to-be-happy">subjective well-being</a> is correlated to durable improvements in happiness baseline, suggesting interventions such as:</p><ul><li>pursue meaningful, attainable goals</li><li>reframe experiences positively</li><li>maintain sleep, exercise, diet</li><li>invest in close relationships</li><li>cultivate realistic self-esteem</li></ul><p>The list goes on, and initially it made me hopeful. It painted a path towards a richer, more fulfilling life. But eventually, it felt like something was missing. I haven't found any particular novel hacks, and any sensible person would have known... </p> | During a visit to a Hong Kong children’s welfare home, I met a 12-year-old girl I'll call Kylie. She had suffered a severe illness that left her blind, deaf, non-verbal, and nearly immobile, yet no identified damage was done to her brain.
The staff described her, without hesitation, as “always cheerful,” and indeed she smiled the entire time I watched her on the standing frame.
We can't objectively verify Kylie’s internal state, but her case crystallized a puzzle: if the absence of sight, hearing, or mobility doesn't preclude happiness, and neither wealth nor fame can guarantee it, then what truly determines our level of happiness?
The treadmill, uphill and down
Brickman & Campbell (1971) called this puzzle the Hedonic Treadmill: as a person experiences a positive emotional event, expectations and desires rise in tandem, which cancels their net long-term impact, resulting in no permanent gain in happiness.
So no matter how hard one tries to gain in happiness, one will remain in the same place.
Later studies show the same homeostasis after negative shocks: people recover from bereavement, disability, or loss, drifting back towards a stable baseline level of happiness. The baseline level may be identified at different points among individuals (such as neutral or positive), but is stubbornly stable within them.
The implication is disheartening. If no action durably lifts well-being, then our daily efforts seem to generate motion without progress.
Can we raise the platform?
We can break the treadmill, downward: substance abuse, chronic depression, or traumatic brain injury can lower one's baseline for years. The biological mechanisms involve long-term changes in dopaminergic and serotonergic signalling, glucocorticoid cascades, and possibly neuroinflammation—none pleasant.
But the platform can surely be raised as well, right? Some empirical literature says yes: subjective well-being is correlated with durable improvements in happiness baseline, suggesting interve
|
YMWJGfHuKskFBLv8n | knowledge-and-provability-inclusion | Knowledge and Provability Inclusion | null | false | false | false | null | g8TYQoKChKcZmaZmd | null | true | false | false | false | Post | null | 2025-05-28T10:50:21.686Z | null | false | false | 2 | 2 | 2025-05-28T17:28:54.627Z | false | false | post | [] | null | null | qu7ykiAdzjHCnRBbo | 0 | 3 | 6 | false | 0.008377 | null | false | false | 2025-05-28T10:50:21.686Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | grecHJcgkb3KW5wnM | null | null | null | false | null | [] | null | 2 | 0 | 2025-05-28T10:05:28.406Z | false | false | easy-going | true | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 4 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "jnpHBYvdW8bvH5JYK",
"adminOnly": false,
"afBaseScore": 1,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "7WkAySoA9mb2HJsRH",
"displayName": "Jaime Sevilla Molina"
}
]
},
"baseScore": 6,
"canEditUserIds": null,
"core": false,
"createdAt": "2016-06-15T13:49:41.000Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "NReADqrMj4qCF65rp",
"displayName": "Eric B"
},
{
"_id": "7WkAySoA9mb2HJsRH",
"displayName": "Jaime Sevilla Molina"
},
{
"_id": "u69zFtnHtcjJ6MxS8",
"displayName": "Kevin Clancy"
},
{
"_id": "ghG7H7h8bbh6XGQ3f",
"displayName": "Enrique Gavidia"
},
{
"_id": "WAdAj7SARGDyrk9yD",
"displayName": "o.k."
}
]
},
"isArbitalImport": true,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Category theory",
"needsReview": false,
"noindex": false,
"postCount": 35,
"score": 6,
"shortName": null,
"slug": "category-theory",
"suggestedAsFilter": false,
"userId": "ZsLZHCWynp4FNw6tj",
"voteCount": 5,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "HJZZxyYXWzB74M4FT",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-17T05:19:52.067Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Fixed Point Theorems",
"needsReview": false,
"noindex": false,
"postCount": 12,
"score": 0,
"shortName": null,
"slug": "fixed-point-theorems",
"suggestedAsFilter": false,
"userId": "EQNTWXLKMeWMp2FQS",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "FJ3MGb684F88BoN2o",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2021-09-24T19:36:33.471Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Formal Proof",
"needsReview": false,
"noindex": false,
"postCount": 65,
"score": 9,
"shortName": null,
"slug": "formal-proof",
"suggestedAsFilter": false,
"userId": "Sp5wM4aRAhNERd4oY",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "AJDHQ4mFnsNbBzPhT",
"adminOnly": false,
"afBaseScore": 6,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
}
]
},
"baseScore": 10,
"canEditUserIds": null,
"core": false,
"createdAt": "2021-12-21T08:54:23.690Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Gödelian Logic",
"needsReview": false,
"noindex": false,
"postCount": 37,
"score": 10,
"shortName": null,
"slug": "goedelian-logic",
"suggestedAsFilter": false,
"userId": "sKAL2jzfkYkDbQmx9",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "DFCEGpufjkvqQaRXt",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 1,
"canEditUserIds": null,
"core": false,
"createdAt": "2016-07-07T04:10:36.000Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "NReADqrMj4qCF65rp",
"displayName": "Eric B"
}
]
},
"isArbitalImport": true,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Löb's theorem",
"needsReview": false,
"noindex": false,
"postCount": 37,
"score": 1,
"shortName": null,
"slug": "loeb-s-theorem",
"suggestedAsFilter": false,
"userId": "7WkAySoA9mb2HJsRH",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "6nS8oYmSMuFMaiowF",
"adminOnly": false,
"afBaseScore": 9,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 20,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-15T12:40:36.752Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "WDi6qQb5TWHb67chh",
"displayName": "Haruka Shou"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Logic & Mathematics ",
"needsReview": false,
"noindex": false,
"postCount": 559,
"score": 20,
"shortName": null,
"slug": "logic-and-mathematics",
"suggestedAsFilter": false,
"userId": "qxJ28GN72aiJu96iF",
"voteCount": 3,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "ye2H85NHoDLomm6BS",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2017-04-11T06:19:33.000Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": true,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Logical Uncertainty",
"needsReview": false,
"noindex": false,
"postCount": 76,
"score": 0,
"shortName": null,
"slug": "logical-uncertainty",
"suggestedAsFilter": false,
"userId": "5W5vBC8MwBqzNqmNL",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "Ng8Gice9KNkncxqcj",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 1,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:17.072Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 100,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "iMqytjy9ns89Fzfyv",
"displayName": "miakko"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Rationality",
"needsReview": false,
"noindex": false,
"postCount": 4302,
"score": 1,
"shortName": null,
"slug": "rationality",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "3uE2pXvbcnS9nnZRE",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:50.898Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 27,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "zJtgSyKntXrnkArbY",
"displayName": "kistune"
},
{
"_id": "XqyQTGFGoKfZtdqN3",
"displayName": "kINo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Modeling",
"needsReview": false,
"noindex": false,
"postCount": 5855,
"score": 2,
"shortName": null,
"slug": "world-modeling",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 3 | 0 | 0 | 2 | 0 | g8TYQoKChKcZmaZmd | milanrosko | 2024-05-22T15:34:46.217Z | milanrosko | milanrosko | null | null | Milan Rosko | 86 | 0 | false | false | <p>Henlo.</p> | null | null | 4 | 84 | 0 | 0 | 0 | 1 | 0 | qgdGA4ZEyW7zNdK84 | User | easy-going | null | true | [
"canModeratePersonal"
] | null | null | YMWJGfHuKskFBLv8n | https://res.cloudinary.com/lesswrong-2-0/image/upload/c_fill,ar_1.91,g_auto/SocialPreview/ccznmusnwk3g7ttzhfhm | SocialPreviewType | qu7ykiAdzjHCnRBbo | <h1>Introduction: The Vanishing Sky Analogy</h1><p><i>Consider the following analogy:</i></p><p>In a universe undergoing accelerating expansion, distant galaxies gradually slip beyond our horizon, until the point at which their light can no longer reach us. They become causally disconnected from us: their light will never reach us, and thus they are forever beyond our observation<span class="footnote-reference" data-footnote-reference="" data-footnote-index="1" data-footnote-id="0vrmcg6cvo" role="doc-noteref" id="fnref0vrmcg6cvo"><sup><a href="#fn0vrmcg6cvo">[1]</a></sup></span>.</p><p>In a far-distant future, this could lead to a hypothetical situation in which only a single galaxy—our own—remains within the observable universe. All other structures would have vanished beyond the horizon, obscured by the expansion of space. For future observers, the universe would appear to consist solely of this one galaxy. Their cosmology would necessarily be confined to what lies within this narrow observational horizon.</p><p>Such a cosmology would be internally consistent, yet fundamentally incomplete—without this incompleteness being apparent.</p><p><strong>This raises the question:</strong> Since our cosmological models are constructed entirely from what we can observe, is it possible that we, too, are in a similarly constrained position today? If vast regions of the universe already lie beyond our cosmological horizon, then we lack not only access to their information—but even awareness of their absence. 
We may suspect that our picture of the universe is incomplete, but we cannot know in what way, or to what extent.</p><p><i>Provability Inclusion</i> implies that, despite epistemic limitations, every entity, system, or cosmology is compelled to treat available data as a kind of <i>Löbian Ground Truth</i><span class="footnote-reference" data-footnote-reference="" data-footnote-index="2" data-footnote-id="nai7cdfua9" role="doc-noteref" id="fnrefnai7cdfua9"><sup><a href="#fnnai7cdfua9">[2]</a></sup></span>—not because it chooses an antecedent freely, but because it is constrained by an enforced (as in forcing<span class="footnote-reference" data-footnote-reference="" data-footnote-index="3" data-footnote-id="b1l2bkq3cgk" role="doc-noteref" id="fnrefb1l2bkq3cgk"><sup><a href="#fnb1l2bkq3cgk">[3]</a></sup></span>) relative consistency.</p><p>A tribe living deep within a dense forest might naturally assume that the entire world is forest. However, using the example of an insect confined to a single tree—believing that tree to be the whole of existence—does not validate or invalidate the tribe’s limited perspective.</p><p>The <strong>Theorem of Provability Inclusion</strong> is a limitative formal result in first-order logic (FOL) that helps us look "downward" and recognize ourselves in the insect.</p><h1>Generalized Common Knowledge</h1><figure class="image"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/YMWJGfHuKskFBLv8n/sx5pjodwzxz1lr3q3gzd" srcset="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/YMWJGfHuKskFBLv8n/zswjnpoexzn0nxyufukf 200w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/YMWJGfHuKskFBLv8n/kjqrejftzhbbqzprvt6z 400w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/YMWJGfHuKskFBLv8n/lzoiz8f9nxhhfvnprxfx 600w, 
https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/YMWJGfHuKskFBLv8n/dw1pzc6zek5qdwbpogbb 800w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/YMWJGfHuKskFBLv8n/zk73pzn7bxum2pnelqms 1000w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/YMWJGfHuKskFBLv8n/atjxuvytwbono4kwmyfh 1200w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/YMWJGfHuKskFBLv8n/hmstotbp8gpw0u9xlrdz 1400w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/YMWJGfHuKskFBLv8n/hfshmllgcsiej8ghl5es 1600w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/YMWJGfHuKskFBLv8n/pycsdmgyl01gstwmjpr4 1800w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/YMWJGfHuKskFBLv8n/mcpsnfmhiscqi2xuzxjh 1958w"></figure><p>One will quickly notice that provability inclusion presents us with statements that often appear philosophical or paradoxical. This is primarily due to its reliance on the nested interplay of proof predicates, self-reference, and diagonalization. A close co... </p> | Introduction: The Vanishing Sky Analogy
Consider the following analogy:
In a universe undergoing accelerating expansion, distant galaxies gradually slip beyond our horizon, until the point at which their light can no longer reach us. They become causally disconnected from us: their light will never reach us, and thus they are forever beyond our observation[1].
In a far-distant future, this could lead to a hypothetical situation in which only a single galaxy—our own—remains within the observable universe. All other structures would have vanished beyond the horizon, obscured by the expansion of space. For future observers, the universe would appear to consist solely of this one galaxy. Their cosmology would necessarily be confined to what lies within this narrow observational horizon.
Such a cosmology would be internally consistent, yet fundamentally incomplete—without this incompleteness being apparent.
This raises the question: Since our cosmological models are constructed entirely from what we can observe, is it possible that we, too, are in a similarly constrained position today? If vast regions of the universe already lie beyond our cosmological horizon, then we lack not only access to their information—but even awareness of their absence. We may suspect that our picture of the universe is incomplete, but we cannot know in what way, or to what extent.
Provability Inclusion implies that, despite epistemic limitations, every entity, system, or cosmology is compelled to treat available data as a kind of Löbian Ground Truth[2]—not because it chooses an antecedent freely, but because it is constrained by an enforced (as in forcing[3]) relative consistency.
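For reference (added context, not the post's own formulation): the "Löbian" allusion is to Löb's theorem, which in standard provability-logic notation reads:

```latex
% Löb's theorem: for a sufficiently strong theory T (e.g., PA) with
% provability predicate \Box, if T proves that provability of P implies P,
% then T already proves P outright.
\[
  T \vdash \Box P \rightarrow P \quad\Longrightarrow\quad T \vdash P
\]
% Internalized (formalized) version, provable in T itself:
% \Box(\Box P \rightarrow P) \rightarrow \Box P
```

This is the sense in which a system cannot merely "assume soundness" of its own proofs about available data without thereby committing to the conclusions themselves.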
A tribe living deep within a dense forest might naturally assume that the entire world is forest. However, using the example of an insect confined to a single tree—believing that tree to be the whole of existence—does not validate or invalidate the tribe’s limited perspective.
The Theorem of Provability Inclu | 949 | 1.14.1 | Revision | false | null | null | CrosspostOutput |
tbLqpDzbiRr2Fk3XA | ai-s-goals-may-not-match-ours | AI’s goals may not match ours | null | false | false | false | null | oqRfDBPKhBuYEudSj | [
{
"__typename": "CoauthorStatusOutput",
"confirmed": true,
"requested": false,
"userId": "cn4SiEmqWbu7K9em5"
},
{
"__typename": "CoauthorStatusOutput",
"confirmed": true,
"requested": false,
"userId": "LQGspDBuDwpHHXCfx"
}
] | true | false | false | false | Post | null | 2025-05-28T09:30:36.930Z | null | false | false | 2 | 2 | 2025-05-28T17:26:11.939Z | false | false | post | [
"3oopbgcjYfvN8B2fp"
] | null | null | chsQ9KCJ7rh84STzh | 1 | 3 | 14 | false | 0.012591 | null | false | false | 2025-05-29T19:30:36.734Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | grecHJcgkb3KW5wnM | null | null | null | false | null | [] | null | 3 | 0 | 2025-05-19T14:48:03.710Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [
{
"__typename": "User",
"_id": "cn4SiEmqWbu7K9em5",
"afCommentCount": 7,
"afKarma": 30,
"afPostCount": 2,
"commentCount": 1575,
"createdAt": "2009-02-27T16:16:38.980Z",
"deleted": false,
"displayName": "steven0461",
"fullName": null,
"htmlBio": "<p>Steven K</p>",
"isAdmin": false,
"jobTitle": null,
"karma": 8759,
"organization": null,
"postCount": 44,
"previousDisplayName": null,
"profileImageId": null,
"reviewedByUserId": "r38pkCm7wF4M44MDQ",
"sequenceCount": 0,
"slug": "steven0461",
"spamRiskScore": 1,
"tagRevisionCount": 157,
"username": "steven0461"
},
{
"__typename": "User",
"_id": "LQGspDBuDwpHHXCfx",
"afCommentCount": 0,
"afKarma": 0,
"afPostCount": 0,
"commentCount": 0,
"createdAt": "2023-01-28T20:55:35.318Z",
"deleted": false,
"displayName": "Vishakha",
"fullName": "Vishakha Agrawal",
"htmlBio": "",
"isAdmin": false,
"jobTitle": null,
"karma": 161,
"organization": null,
"postCount": 19,
"previousDisplayName": null,
"profileImageId": null,
"reviewedByUserId": "55XxDBpfKkkBPm9H8",
"sequenceCount": 1,
"slug": "vishakha",
"spamRiskScore": 1,
"tagRevisionCount": 0,
"username": "vishakha-agrawal"
}
] | 3 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "Fxq9YJMGsuphd8Rmt",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2022-11-04T17:53:53.248Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI Alignment Intro Materials",
"needsReview": false,
"noindex": false,
"postCount": 63,
"score": 9,
"shortName": null,
"slug": "ai-alignment-intro-materials",
"suggestedAsFilter": false,
"userId": "qgdGA4ZEyW7zNdK84",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "agnEHZTiXEyzBFPmF",
"adminOnly": false,
"afBaseScore": 16,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "4fh2AAe3n7oBviyxx",
"displayName": "orthonormal"
},
{
"_id": "xSfc2APSi8WzFxp7i",
"displayName": "So8res"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "CBkbKSCEzEK2kLQww",
"displayName": "RyanCarey"
},
{
"_id": "wN6u4c4hDAn7ydasg",
"displayName": "Judd Rosenblatt"
}
]
},
"baseScore": 60,
"canEditUserIds": null,
"core": false,
"createdAt": "2015-03-10T10:56:45.000Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "4fh2AAe3n7oBviyxx",
"displayName": "orthonormal"
},
{
"_id": "xSfc2APSi8WzFxp7i",
"displayName": "So8res"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "CBkbKSCEzEK2kLQww",
"displayName": "RyanCarey"
},
{
"_id": "wN6u4c4hDAn7ydasg",
"displayName": "Judd Rosenblatt"
},
{
"_id": "2SPhghtAz3i32H8Ym",
"displayName": "alexei"
},
{
"_id": "5WEj6WjZMZYexN2ZT",
"displayName": "steph"
},
{
"_id": "2uc4KJEvQmGwbf4PJ",
"displayName": "Fredrik Bränström"
},
{
"_id": "3ozhmEwft7ieoLhTv",
"displayName": "Connor Flexman"
},
{
"_id": "6R7Pudq4xoyH9LDyT",
"displayName": "Rok Resnik"
},
{
"_id": "8tmsAC8e2bDbbKcd3",
"displayName": "Rafael Cosman"
},
{
"_id": "fscy9tNFa3DECqdB3",
"displayName": "Andrew McKnight"
},
{
"_id": "HyiNbRN6u7fBsdDgX",
"displayName": "Jeremy Perret"
},
{
"_id": "LbRt9iSuerr5tMHHN",
"displayName": "VLADIMIR KORABLIN"
},
{
"_id": "JuhtXGpaA92BH889P",
"displayName": "Luca Donno"
},
{
"_id": "Q8gYDiHgxHadqZueH",
"displayName": "Andreas Wagener"
},
{
"_id": "jtcwEecBmyxftSxTe",
"displayName": "Alexandra Reber"
},
{
"_id": "BswHugiSRw47bHSWy",
"displayName": "Sameer Tripathi"
},
{
"_id": "LL9oFvECJkbdpFXce",
"displayName": "tux tu"
},
{
"_id": "incKymsTztbEbzxWw",
"displayName": "Failure Insights"
},
{
"_id": "6JGDp8kNsFHK9HLGm",
"displayName": "chandler"
},
{
"_id": "5MhLsDkPzWphktvyD",
"displayName": "Samuel Salzer"
},
{
"_id": "gAektu8FTQrC5mxcx",
"displayName": "Andrew Lowry"
}
]
},
"isArbitalImport": true,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Orthogonality Thesis",
"needsReview": false,
"noindex": false,
"postCount": 71,
"score": 60,
"shortName": null,
"slug": "orthogonality-thesis",
"suggestedAsFilter": false,
"userId": "nmk3nLpQE89dMRzzN",
"voteCount": 23,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 3 | 0 | 0 | 2 | 0 | oqRfDBPKhBuYEudSj | algon | 2015-04-07T13:36:45.246Z | Algon | Algon | null | null | null | 2,438 | 21 | false | false | null | null | 28 | 723 | 0 | 0 | 4 | 1 | 0 | r38pkCm7wF4M44MDQ | User | null | null | null | [
"canModeratePersonal",
"alignmentVoters",
"trustLevel1"
] | null | null | tbLqpDzbiRr2Fk3XA | SocialPreviewType | chsQ9KCJ7rh84STzh | <p><i>Context: This is a linkpost for </i><a href="https://aisafety.info/questions/NM3I/6:-AI%E2%80%99s-goals-may-not-match-ours"><i>https://aisafety.info/questions/NM3I/6:-AI%E2%80%99s-goals-may-not-match-ours</i></a><i> </i></p><p><i>This is an article in the new intro to AI safety series from AISafety.info. We'd appreciate any </i><a href="https://aisafety.info/questions/NM38/4:-The-road-from-human-level-to-superintelligent-AI-may-be-short"><i>feedback</i></a><i>. The most up-to-date version of this article is on our </i><a href="https://aisafety.info/"><i>website</i></a><i>.</i></p><p> </p><p>Making AI goals match our intentions is called the <a href="https://aisafety.info/questions/8EL9/"><strong>alignment problem</strong></a>.</p><p>There’s some ambiguity in the term “alignment”. For example, when people talk about “AI alignment” in the context of present-day AI systems, they generally mean controlling observable behaviors like: Can we make it impossible for the AI to say ethnic slurs? Or to advise you how to secretly dispose of a corpse? Although such restrictions are sometimes circumvented with "jailbreaks", on the whole, companies mostly do manage to avoid AI outputs that could harm people and threaten their brand reputation.</p><p>But "alignment" in smarter-than-human systems is a different question. For such systems to remain safe in extreme cases — if they become so smart that we can’t check their work and maybe can’t even keep them in our control — they'll have to value the right things at a deep level, based on well-grounded concepts that don’t lose their intended meanings even far outside the circumstances they were trained for.</p><p>Making that happen is an unsolved problem. Arguments about possible solutions to alignment get very complex and technical. 
But as we’ll see later in this introduction, many of the people who have researched AI and AI alignment on a deep level think we may fail to find a solution, and that may result in catastrophe.</p><p>Some of the main difficulties are:</p><ul><li>We <strong>can’t see what an AI values</strong>, because current AI is not designed in the same way as a web browser, an operating system, or a word processor — rather, it is “grown”. Human programmers design the <i>process</i> that grows the AI. But that process consists of vast numbers of computations that automatically make huge numbers of small adjustments, based on what works best on each piece of training data. The result is something like an alien artifact made of skyscraper-sized spreadsheets full of numbers. We can see what all the numbers are, and we know they have some sort of patterns in them that result in sensible tax advice and cookie recipes. We just understand little about what those patterns are or how they work together. In that sense, we’re looking at a “black box”. And </li></ul>... | Context: This is a linkpost for https://aisafety.info/questions/NM3I/6:-AI%E2%80%99s-goals-may-not-match-ours
This is an article in the new intro to AI safety series from AISafety.info. We'd appreciate any feedback. The most up-to-date version of this article is on our website.
Making AI goals match our intentions is called the alignment problem.
There’s some ambiguity in the term “alignment”. For example, when people talk about “AI alignment” in the context of present-day AI systems, they generally mean controlling observable behaviors like: Can we make it impossible for the AI to say ethnic slurs? Or to advise you how to secretly dispose of a corpse? Although such restrictions are sometimes circumvented with "jailbreaks", on the whole, companies mostly do manage to avoid AI outputs that could harm people and threaten their brand reputation.
But "alignment" in smarter-than-human systems is a different question. For such systems to remain safe in extreme cases — if they become so smart that we can’t check their work and maybe can’t even keep them in our control — they'll have to value the right things at a deep level, based on well-grounded concepts that don’t lose their intended meanings even far outside the circumstances they were trained for.
Making that happen is an unsolved problem. Arguments about possible solutions to alignment get very complex and technical. But as we’ll see later in this introduction, many of the people who have researched AI and AI alignment on a deep level think we may fail to find a solution, and that may result in catastrophe.
Some of the main difficulties are:
* We can’t see what an AI values, because current AI is not designed in the same way as a web browser, an operating system, or a word processor — rather, it is “grown”. Human programmers design the process that grows the AI. But that process consists of vast numbers of computations that automatically make huge numbers of small adjustments, based on what works best on | 775 | 1.9.0 | Revision | false | null | null | CrosspostOutput |
||
fYXsLKdfMvy9oYD3m | ai-may-pursue-goals | AI may pursue goals | null | false | false | false | null | oqRfDBPKhBuYEudSj | [
{
"__typename": "CoauthorStatusOutput",
"confirmed": true,
"requested": true,
"userId": "cn4SiEmqWbu7K9em5"
},
{
"__typename": "CoauthorStatusOutput",
"confirmed": true,
"requested": false,
"userId": "LQGspDBuDwpHHXCfx"
}
] | true | false | false | false | Post | null | 2025-05-28T09:30:03.439Z | null | false | false | 2 | 2 | 2025-05-28T17:26:09.834Z | false | false | post | [] | null | null | eohS4RZTyDwvdWGXS | 0 | 2 | 13 | false | 0.011601 | null | false | false | 2025-05-28T09:30:03.439Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | grecHJcgkb3KW5wnM | null | null | null | false | null | [] | null | 3 | 0 | 2025-05-19T14:40:17.039Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [
{
"__typename": "User",
"_id": "cn4SiEmqWbu7K9em5",
"afCommentCount": 7,
"afKarma": 30,
"afPostCount": 2,
"commentCount": 1575,
"createdAt": "2009-02-27T16:16:38.980Z",
"deleted": false,
"displayName": "steven0461",
"fullName": null,
"htmlBio": "<p>Steven K</p>",
"isAdmin": false,
"jobTitle": null,
"karma": 8759,
"organization": null,
"postCount": 44,
"previousDisplayName": null,
"profileImageId": null,
"reviewedByUserId": "r38pkCm7wF4M44MDQ",
"sequenceCount": 0,
"slug": "steven0461",
"spamRiskScore": 1,
"tagRevisionCount": 157,
"username": "steven0461"
},
{
"__typename": "User",
"_id": "LQGspDBuDwpHHXCfx",
"afCommentCount": 0,
"afKarma": 0,
"afPostCount": 0,
"commentCount": 0,
"createdAt": "2023-01-28T20:55:35.318Z",
"deleted": false,
"displayName": "Vishakha",
"fullName": "Vishakha Agrawal",
"htmlBio": "",
"isAdmin": false,
"jobTitle": null,
"karma": 161,
"organization": null,
"postCount": 19,
"previousDisplayName": null,
"profileImageId": null,
"reviewedByUserId": "55XxDBpfKkkBPm9H8",
"sequenceCount": 1,
"slug": "vishakha",
"spamRiskScore": 1,
"tagRevisionCount": 0,
"username": "vishakha-agrawal"
}
] | 2 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "Fxq9YJMGsuphd8Rmt",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2022-11-04T17:53:53.248Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI Alignment Intro Materials",
"needsReview": false,
"noindex": false,
"postCount": 63,
"score": 9,
"shortName": null,
"slug": "ai-alignment-intro-materials",
"suggestedAsFilter": false,
"userId": "qgdGA4ZEyW7zNdK84",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "b6tJM7Lza74rTfCBF",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-08-16T18:38:25.810Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Goal-Directedness",
"needsReview": false,
"noindex": false,
"postCount": 95,
"score": 9,
"shortName": null,
"slug": "goal-directedness",
"suggestedAsFilter": false,
"userId": "ypbkRWpFgPgzvNg3n",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 2 | 0 | 0 | 2 | 0 | oqRfDBPKhBuYEudSj | algon | 2015-04-07T13:36:45.246Z | Algon | Algon | null | null | null | 2,438 | 21 | false | false | null | null | 28 | 723 | 0 | 0 | 4 | 1 | 0 | r38pkCm7wF4M44MDQ | User | null | null | null | [
"canModeratePersonal",
"alignmentVoters",
"trustLevel1"
] | null | null | fYXsLKdfMvy9oYD3m | SocialPreviewType | eohS4RZTyDwvdWGXS | <p><i>Context: This is a linkpost for </i><a href="https://aisafety.info/questions/NM3J/5:-AI-may-pursue-goals"><i>https://aisafety.info/questions/NM3J/5:-AI-may-pursue-goals</i></a><i> </i></p><p><i>This is an article in the new intro to AI safety series from AISafety.info. We'd appreciate any </i><a href="https://aisafety.info/questions/NM38/4:-The-road-from-human-level-to-superintelligent-AI-may-be-short"><i>feedback</i></a><i>. The most up-to-date version of this article is on our </i><a href="https://aisafety.info/"><i>website</i></a><i>.</i></p><p>Suppose that, as argued previously, in the next few decades we’ll have <a href="https://aisafety.info/questions/6207/">superintelligent</a> systems. What role will they play?</p><p>One way to imagine these systems is purely as powerful and versatile tools, similar to most current systems. They could take broad directions from humans about what actions to take or what questions to answer, and cleverly fill in the details.</p><p>But another way is as <a href="https://aisafety.info/questions/5632/"><strong>agents</strong></a>, operating autonomously in the world. They could have their own goals — some kinds of futures they seek out over other futures — and take whatever actions will most likely lead to those futures, adapting as circumstances change.</p><p>As long as AIs are tools, they can be used for good or ill, like all technologies. They can radically increase the scope of the problems humans can solve and create.</p><p>But it’s unlikely that they’ll remain only tools, because:</p><ul><li>A good planning tool can easily be turned into an agent. Just tell it: “repeatedly come up with actions that would make goal X more likely, and execute those actions”. 
People keep building software frameworks for doing this, and to the extent they succeed, better tools will result in better agents.</li><li>When a planning tool is sufficiently more competent than humans, keeping humans in the loop to direct its activities will just get in the way. It will make the system less efficient (as we’re already seeing for some medical tasks), less profitable, and less able to compete.</li><li>There may be some tasks that highly intelligent agents can do and tools just can’t, like implementing hard new research programs.</li><li>Increasingly agent-like systems are already being created — see, for example, OpenAI’s Operator, which types, clicks, and scrolls in a web browser, and Anthropic’s “computer use” feature for its chatbot Claude.</li><li>Even if <i>most</i> AIs remain non-agentic, some people will create agents for various reasons of their own — to be the creator of a new species, or for the heck of it.</li></ul><p>If we’re going to build AI systems that pursue goals, it would be good if those goals matched ours. It’s not clear if we’ll succeed at making that the case.</p><p> </p><h2>Related</h2><ul><li><a href="https://docs.google.com/document/d/19FMgdcKOAYFGtZ1vVEvot1r9gv-4R_WZQMwCuNgSz9k/edit?tab=t.0"><u>What is an agent?</u></a></li><li><a href="https://docs.google.com/document/d/1Qn1sLAPVW-98zI4LzmNBbrCDqtfP3Nr7iF6_ZPxDc4Q/edit?tab=t.0"><u>What is scaffold</u></a></li></ul>... | Context: This is a linkpost for https://aisafety.info/questions/NM3J/5:-AI-may-pursue-goals
This is an article in the new intro to AI safety series from AISafety.info. We'd appreciate any feedback. The most up-to-date version of this article is on our website.
Suppose that, as argued previously, in the next few decades we’ll have superintelligent systems. What role will they play?
One way to imagine these systems is purely as powerful and versatile tools, similar to most current systems. They could take broad directions from humans about what actions to take or what questions to answer, and cleverly fill in the details.
But another way is as agents, operating autonomously in the world. They could have their own goals — some kinds of futures they seek out over other futures — and take whatever actions will most likely lead to those futures, adapting as circumstances change.
As long as AIs are tools, they can be used for good or ill, like all technologies. They can radically increase the scope of the problems humans can solve and create.
But it’s unlikely that they’ll remain only tools, because:
* A good planning tool can easily be turned into an agent. Just tell it: “repeatedly come up with actions that would make goal X more likely, and execute those actions”. People keep building software frameworks for doing this, and to the extent they succeed, better tools will result in better agents.
* When a planning tool is sufficiently more competent than humans, keeping humans in the loop to direct its activities will just get in the way. It will make the system less efficient (as we’re already seeing for some medical tasks), less profitable, and less able to compete.
* There may be some tasks that highly intelligent agents can do and tools just can’t, like implementing hard new research programs.
* Increasingly agent-like systems are already being created — see, for example, OpenAI’s Operator, which types, clicks, and scrolls in a web browser, and Anthropic’s | 419 | 1.10.0 | Revision | false | null | null | CrosspostOutput |
||
xAsviBJGSBBtgBiCw | the-best-way-to-align-an-llm-is-inner-alignment-now-a-solved | The Best Way to Align an LLM: Is Inner Alignment Now a Solved Problem? | null | false | false | true | null | rSRNtuYKNFm83ke6Q | null | true | false | false | false | Post | null | 2025-05-28T06:21:42.324Z | null | false | false | 2 | 2 | 2025-05-28T17:27:58.156Z | false | false | post | [] | null | null | Ph2tibm3F2NjzwrQG | 34 | 34 | 24 | false | 0.017065 | null | false | false | 2025-06-13T23:25:43.517Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | grecHJcgkb3KW5wnM | null | null | null | false | 2025-06-03T00:24:28.206Z | [
"rSRNtuYKNFm83ke6Q"
] | XtphY3uYHwruKqDyG | 4 | 0 | 2025-05-28T06:21:42.324Z | false | false | easy-going | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 10 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "BisjoDrd3oNatDu7X",
"adminOnly": false,
"afBaseScore": 9,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 22,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-17T06:16:49.702Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "fWb4PGjXZGEjaZaQ2",
"displayName": "Neil Crawford"
},
{
"_id": "wvvrBjHDSyeGmxyJs",
"displayName": "Matthieu"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Outer Alignment",
"needsReview": false,
"noindex": false,
"postCount": 322,
"score": 22,
"shortName": null,
"slug": "outer-alignment",
"suggestedAsFilter": false,
"userId": "EQNTWXLKMeWMp2FQS",
"voteCount": 4,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "Hs2ewfiKfuWKSscSQ",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2022-12-20T15:20:57.749Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Aligned AI Proposals",
"needsReview": false,
"noindex": false,
"postCount": 92,
"score": 0,
"shortName": null,
"slug": "aligned-ai-proposals",
"suggestedAsFilter": false,
"userId": "7JLB4TDRcSqyXmxmJ",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "Myz8DA9AghgpNei9i",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2021-09-08T18:16:51.211Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Gradient Descent",
"needsReview": false,
"noindex": false,
"postCount": 10,
"score": 0,
"shortName": null,
"slug": "gradient-descent",
"suggestedAsFilter": false,
"userId": "qgdGA4ZEyW7zNdK84",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "Fi6SeJRGfJs3bp5se",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2016-01-24T21:08:05.000Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": true,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Reinforcement learning",
"needsReview": false,
"noindex": false,
"postCount": 204,
"score": 0,
"shortName": null,
"slug": "reinforcement-learning",
"suggestedAsFilter": false,
"userId": "2vpm465RWePSgvpTo",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "c42eTtBCXyJmtpqwZ",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2022-08-23T05:10:09.247Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI-Assisted Alignment",
"needsReview": false,
"noindex": false,
"postCount": 151,
"score": 9,
"shortName": null,
"slug": "ai-assisted-alignment",
"suggestedAsFilter": false,
"userId": "qgdGA4ZEyW7zNdK84",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "Dw5Z6wtTgk4Fikz9f",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-17T06:11:39.285Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Inner Alignment",
"needsReview": false,
"noindex": false,
"postCount": 330,
"score": 9,
"shortName": null,
"slug": "inner-alignment",
"suggestedAsFilter": false,
"userId": "EQNTWXLKMeWMp2FQS",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "NLwTnsH9RSotqXYLw",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-05-12T17:06:52.292Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Value Learning",
"needsReview": false,
"noindex": false,
"postCount": 206,
"score": 0,
"shortName": null,
"slug": "value-learning",
"suggestedAsFilter": false,
"userId": "pgi5MqvGrtvQozEH8",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 34 | 0 | 0 | 19 | 0 | rSRNtuYKNFm83ke6Q | rogerdearnaley | 2023-02-14T10:30:57.956Z | roger-d-1 | RogerDearnaley | null | null | Roger Dearnaley | 1,751 | 165 | false | false | <p>I'm a staff artificial intelligence engineer working with AI and LLMs, and have been interested in AI alignment, safety and interpretability for the last 15 years. I'm actively looking for employment working in this area, preferably in the UK — meanwhile I'll be participating in SERI MATS summer 2025. I will also be attending <a href="https://less.online/">LessOnline</a>.</p> | null | null | 27 | 685 | 1 | 13 | 75 | 1 | 31 | qgdGA4ZEyW7zNdK84 | User | easy-going | null | true | [
"alignmentVoters",
"canModeratePersonal"
] | null | null | xAsviBJGSBBtgBiCw | SocialPreviewType | Ph2tibm3F2NjzwrQG | <p><i>This is a link-post for a new paper I read: </i><a href="https://arxiv.org/pdf/2504.16980"><i>Safety Pretraining: Toward the Next Generation of Safe AI</i></a><i> by Pratyush Maini, Sachin Goyal, et al.</i><br><br>For a couple of years I (and others) have been proposing an approach to alignment: what the authors of this recent paper name "safety pretraining". In a nutshell: that it's best to apply your alignment training as part of the standard pretraining process to produce a base model that is already aligned — simply pretrain it on data including a lot of clearly marked examples of aligned behavior (then prompt for it).</p><p>I've regarded this approach as a major advance ever since I read the seminal 2023 paper on the topic: <a href="https://arxiv.org/pdf/2302.08582.pdf">Pretraining Language Models with Human Preferences</a> by Tomasz Korbak et al., and I'm absolutely delighted to finally see someone else publish another paper on this approach — I'm only sad it has taken so long.</p><p><strong>I </strong><i><strong>highly</strong></i><strong> encourage everyone interested in AI alignment to go read both of these papers</strong> (if you haven't already) — between them they strongly suggest that the authors have found a more effective way to align an AI: an alignment approach better than any that people are (as far as we know) currently using. I believe this is extremely important: I see it as major progress on alignment. 
So I think it directly reduces the p(DOOM) for the most critical current x-risk to our entire species.</p><p>For more detailed expositions of this approach and why I think it's an excellent idea, see my previous posts <a href="https://www.lesswrong.com/posts/JviYwAk5AfBR7HhEn/how-to-control-an-llm-s-behavior-why-my-p-doom-went-down-1">How to Control an LLM's Behavior (why my P(DOOM) went down)</a>, <a href="https://www.lesswrong.com/posts/oRQMonLfdLfoGcDEh/a-bitter-lesson-approach-to-aligning-agi-and-asi-1">A "Bitter Lesson" Approach to Aligning AGI and ASI</a>, and <a href="https://www.lesswrong.com/posts/XdpJsY6QGdCbvo2dS/why-aligning-an-llm-is-hard-and-how-to-make-it-easier">Why Aligning an LLM is Hard, and How to Make it Easier</a>. </p><p>(I'm also delighted that the authors of the recent paper tested out some of the follow-on ideas I'd been proposing in those posts on Less Wrong. One was training the model to generate control-tag tokens that label portions of the text as good or bad behavior, and then for conditional generation altering the token generation process, leveraging these tokens, so as to induce the model to behave well not badly. Another was using synthetic data editing to modify problematic raw training examples by supplementing them with more moral or correct behavior or commentary. They elaborated on both of these, or independently reinvented them, and even confirmed that both of these appear to work about as well as I'd been hoping.)</p><p>Hence, in order to encourage peopl... </p> | This is a link-post for a new paper I read: Safety Pretraining: Toward the Next Generation of Safe AI by Pratyush Maini, Sachin Goyal, et al.
For a couple of years I (and others) have been proposing an approach to alignment: what the authors of this recent paper name "safety pretraining". In a nutshell: that it's best to apply your alignment training as part of the standard pretraining process to produce a base model that is already aligned — simply pretrain it on data including a lot of clearly marked examples of aligned behavior (then prompt for it).
I've regarded this approach as a major advance ever since I read the seminal 2023 paper on the topic: Pretraining Language Models with Human Preferences by Tomasz Korbak et al., and I'm absolutely delighted to finally see someone else publish another paper on this approach — I'm only sad it has taken so long.
I highly encourage everyone interested in AI alignment to go read both of these papers (if you haven't already) — between them they strongly suggest that the authors have found a more effective way to align an AI: an alignment approach better than any that people are (as far as we know) currently using. I believe this is extremely important: I see it as major progress on alignment. So I think it directly reduces the p(DOOM) for the most critical current x-risk to our entire species.
For more detailed expositions of this approach and why I think it's an excellent idea, see my previous posts How to Control an LLM's Behavior (why my P(DOOM) went down), A "Bitter Lesson" Approach to Aligning AGI and ASI, and Why Aligning an LLM is Hard, and How to Make it Easier.
(I'm also delighted that the authors of the recent paper tested out some of the follow-on ideas I'd been proposing in those posts on Less Wrong. One was training the model to generate control-tag tokens that label portions of the text as good or bad behavior, and then for conditional generation altering the token generation process, leveraging these to | 2,614 | 1.32.0 | Revision | false | null | null | CrosspostOutput |
|
TfkqnLvxvWfLSyehm | spectral-radii-dimensionality-reduction-computed-without | Spectral radii dimensionality reduction computed without gradient calculations | null | false | false | false | null | sfWZMt9zyPy94y8CS | null | true | false | false | false | Post | null | 2025-05-28T05:06:05.144Z | null | false | false | 2 | 2 | 2025-05-28T17:21:35.050Z | false | false | post | [] | null | null | wPaBtfBPEWw4SxwiE | 4 | 3 | 5 | false | 0.007856 | null | false | false | 2025-06-04T21:01:16.585Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | grecHJcgkb3KW5wnM | null | null | null | false | null | [] | null | 1 | 0 | 2025-05-26T08:51:04.686Z | false | false | null | true | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 7 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "56yXXrcxRjrQs6z9R",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-30T22:00:37.947Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "t46uLRSbDziEcKmev",
"displayName": "Kriz Tahimic"
},
{
"_id": "sqMaBFCkAhRcWzJXi",
"displayName": "nicolasguillard"
},
{
"_id": "S6Niz3DiFCTm2Eybq",
"displayName": "Anirudh257"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Interpretability (ML & AI)",
"needsReview": false,
"noindex": false,
"postCount": 933,
"score": 12,
"shortName": null,
"slug": "interpretability-ml-and-ai",
"suggestedAsFilter": false,
"userId": "DgsGzjyBXN8XSK22q",
"voteCount": 4,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 3 | 0 | 0 | 1 | 0 | sfWZMt9zyPy94y8CS | joseph-van-name | 2023-02-06T21:40:42.667Z | joseph-van-name | Joseph Van Name | null | null | null | 47 | 0 | false | false | null | null | 5 | 102 | 0 | 0 | 0 | 1 | 0 | grecHJcgkb3KW5wnM | User | null | null | null | [
"canModeratePersonal"
] | null | null | TfkqnLvxvWfLSyehm | SocialPreviewType | wPaBtfBPEWw4SxwiE | <p>In this post, I shall describe a fitness function that can be locally maximized without gradient computations. This fitness function is my own. I initially developed it in order to evaluate block ciphers for cryptocurrency technologies, but I later found that it may be used to solve other problems as well, such as the clique problem (which is NP-complete) in the average case and some natural language processing tasks. After describing algorithms for locally maximizing this fitness function, I conclude that such a fitness function is inherently interpretable and mathematical, which is what we need for AI safety.</p><p>Let <span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="K"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.04em;">K</span></span></span></span></span></span></span> denote either the field of real numbers, complex numbers, or the division ring of quaternions. 
Given <span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="(A_1,\dots,A_r)\in M_n(K)^r"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">(</span></span><span class="mjx-msubsup"><span class="mjx-base"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.519em; padding-bottom: 0.298em;">A</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mn" style=""><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">1</span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-mo MJXc-space1"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.372em;">…</span></span><span class="mjx-mo MJXc-space1"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-msubsup MJXc-space1"><span class="mjx-base"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.519em; padding-bottom: 0.298em;">A</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mi" style=""><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">r</span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">)</span></span><span class="mjx-mo MJXc-space3"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.225em; padding-bottom: 0.372em;">∈</span></span><span class="mjx-msubsup MJXc-space3"><span class="mjx-base" style="margin-right: -0.081em;"><span 
class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.081em;">M</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mi" style=""><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">n</span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">(</span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.04em;">K</span></span><span class="mjx-msubsup"><span class="mjx-base"><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">)</span></span></span><span class="mjx-sup" style="font-size: 70.7%; vertical-align: 0.513em; padding-left: 0px; padding-right: 0.071em;"><span class="mjx-mi" style=""><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">r</span></span></span></span></span></span></span></span></span> and <span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="1\leq d<n"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-mn"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">1</span></span><span class="mjx-mo MJXc-space3"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.446em;">≤</span></span><span class="mjx-mi MJXc-space3"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.003em;">d</span></span><span class="mjx-mo MJXc-space3"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.225em; padding-bottom: 0.372em;"><</span></span><span class="mjx-mi MJXc-space3"><span class="mjx-char MJXc-TeX-math-I" 
style="padding-top: 0.225em; padding-bottom: 0.298em;">n</span></span></span></span></span></span></span>, the goal is to find a tuple <span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="(X_1,\dots,X_r)\in M_d(K)^r"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">(</span></span><span class="mjx-msubsup"><span class="mjx-base" style="margin-right: -0.024em;"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.024em;">X</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mn" style=""><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">1</span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-mo MJXc-space1"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.372em;">…</span></span><span class="mjx-mo MJXc-space1"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-msubsup MJXc-space1"><span class="mjx-base" style="margin-right: -0.024em;"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.024em;">X</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mi" style=""><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">r</span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">)</span></span><span 
class="mjx-mo MJXc-space3"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.225em; padding-bottom: 0.372em;">∈</span></span><span class="mjx-msubsup MJXc-space3"><span class="mjx-base" style="margin-right: -0.081em;"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.081em;">M</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.219em; padding-right: 0.071em;"><span class="mjx-mi" style=""><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.003em;">d</span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">(</span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.04em;">K</span></span><span class="mjx-msubsup"><span class="mjx-base"><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">)</span></span></span><span class="mjx-sup" style="font-size: 70.7%; vertical-align: 0.513em; padding-left: 0px; padding-right: 0.071em;"><span class="mjx-mi" style=""><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">r</span></span></span></span></span></span></span></span></span> most similar to <span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="(A_1,\dots,A_r)"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">(</span></span><span class="mjx-msubsup"><span class="mjx-base"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.519em; padding-bottom: 0.298em;">A</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: 
-0.212em; padding-right: 0.071em;"><span class="mjx-mn" style=""><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">1</span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-mo MJXc-space1"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.372em;">…</span></span><span class="mjx-mo MJXc-space1"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-msubsup MJXc-space1"><span class="mjx-base"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.519em; padding-bottom: 0.298em;">A</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mi" style=""><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">r</span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">)</span></span></span></span></span></span></span>. 
In other words, we want to approximate the <span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="n\times n"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">n</span></span><span class="mjx-mo MJXc-space2"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.225em; padding-bottom: 0.298em;">×</span></span><span class="mjx-mi MJXc-space2"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">n</span></span></span></span></span></span></span>-matrices <span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="(A_1,\dots,A_r)"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">(</span></span><span class="mjx-msubsup"><span class="mjx-base"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.519em; padding-bottom: 0.298em;">A</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mn" style=""><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">1</span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-mo MJXc-space1"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.372em;">…</span></span><span class="mjx-mo MJXc-space1"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-msubsup MJXc-space1"><span class="mjx-base"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.519em; padding-bottom: 
0.298em;">A</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mi" style=""><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">r</span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">)</span></span></span></span></span></span></span> with <span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="d\times d"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.003em;">d</span></span><span class="mjx-mo MJXc-space2"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.225em; padding-bottom: 0.298em;">×</span></span><span class="mjx-mi MJXc-space2"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.003em;">d</span></span></span></span></span></span></span>-matrices.</p><p>Suppose that <span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="A_1,\dots,A_r\in M_n(K),X_1,\dots,X_r\in M_d(K)."><span class="mjx-mrow" aria-hidden="true"><span class="mjx-msubsup"><span class="mjx-base"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.519em; padding-bottom: 0.298em;">A</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mn" style=""><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">1</span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-mo MJXc-space1"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; 
padding-bottom: 0.372em;">…</span></span><span class="mjx-mo MJXc-space1"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-msubsup MJXc-space1"><span class="mjx-base"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.519em; padding-bottom: 0.298em;">A</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mi" style=""><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">r</span></span></span></span><span class="mjx-mo MJXc-space3"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.225em; padding-bottom: 0.372em;">∈</span></span><span class="mjx-msubsup MJXc-space3"><span class="mjx-base" style="margin-right: -0.081em;"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.081em;">M</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mi" style=""><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">n</span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">(</span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.04em;">K</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">)</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-msubsup MJXc-space1"><span class="mjx-base" style="margin-right: -0.024em;"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 
0.446em; padding-bottom: 0.298em; padding-right: 0.024em;">X</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mn" style=""><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">1</span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-mo MJXc-space1"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.372em;">…</span></span><span class="mjx-mo MJXc-space1"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-msubsup MJXc-space1"><span class="mjx-base" style="margin-right: -0.024em;"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.024em;">X</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mi" style=""><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">r</span></span></span></span><span class="mjx-mo MJXc-space3"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.225em; padding-bottom: 0.372em;">∈</span></span><span class="mjx-msubsup MJXc-space3"><span class="mjx-base" style="margin-right: -0.081em;"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.081em;">M</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.219em; padding-right: 0.071em;"><span class="mjx-mi" style=""><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.003em;">d</span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" 
style="padding-top: 0.446em; padding-bottom: 0.593em;">(</span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.04em;">K</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">)</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.372em;">.</span></span></span></span></span></span></span> Define a function <span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="\Gamma(A_1,\dots,A_r;X_1,\dots,X_r):M_{n,d}(K)\rightarrow M_{n,d}(K)"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">Γ</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">(</span></span><span class="mjx-msubsup"><span class="mjx-base"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.519em; padding-bottom: 0.298em;">A</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mn" style=""><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">1</span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-mo MJXc-space1"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.372em;">…</span></span><span class="mjx-mo MJXc-space1"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-msubsup MJXc-space1"><span class="mjx-base"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" 
style="padding-top: 0.519em; padding-bottom: 0.298em;">A</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mi" style=""><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">r</span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.151em; padding-bottom: 0.519em;">;</span></span><span class="mjx-msubsup MJXc-space1"><span class="mjx-base" style="margin-right: -0.024em;"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.024em;">X</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mn" style=""><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">1</span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-mo MJXc-space1"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.372em;">…</span></span><span class="mjx-mo MJXc-space1"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-msubsup MJXc-space1"><span class="mjx-base" style="margin-right: -0.024em;"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.024em;">X</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mi" style=""><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">r</span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 
0.593em;">)</span></span><span class="mjx-mo MJXc-space3"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.151em; padding-bottom: 0.372em;">:</span></span><span class="mjx-msubsup MJXc-space3"><span class="mjx-base" style="margin-right: -0.081em;"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.081em;">M</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.219em; padding-right: 0.071em;"><span class="mjx-texatom" style=""><span class="mjx-mrow"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">n</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.003em;">d</span></span></span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">(</span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.04em;">K</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">)</span></span><span class="mjx-mo MJXc-space3"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.225em; padding-bottom: 0.372em;">→</span></span><span class="mjx-msubsup MJXc-space3"><span class="mjx-base" style="margin-right: -0.081em;"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.081em;">M</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.219em; padding-right: 0.071em;"><span class="mjx-texatom" style=""><span class="mjx-mrow"><span 
class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">n</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.003em;">d</span></span></span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">(</span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.04em;">K</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">)</span></span></span></span></span></span></span> by setting</p><p><span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="\Gamma(A_1,\dots,A_r;X_1,\dots,X_r)(X)=\sum_jA_jXX_j^*"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">Γ</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">(</span></span><span class="mjx-msubsup"><span class="mjx-base"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.519em; padding-bottom: 0.298em;">A</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mn" style=""><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">1</span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-mo MJXc-space1"><span class="mjx-char 
MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.372em;">…</span></span><span class="mjx-mo MJXc-space1"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-msubsup MJXc-space1"><span class="mjx-base"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.519em; padding-bottom: 0.298em;">A</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mi" style=""><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">r</span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.151em; padding-bottom: 0.519em;">;</span></span><span class="mjx-msubsup MJXc-space1"><span class="mjx-base" style="margin-right: -0.024em;"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.024em;">X</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mn" style=""><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">1</span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-mo MJXc-space1"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.372em;">…</span></span><span class="mjx-mo MJXc-space1"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-msubsup MJXc-space1"><span class="mjx-base" style="margin-right: -0.024em;"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.024em;">X</span></span></span><span 
class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mi" style=""><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">r</span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">)</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">(</span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.024em;">X</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">)</span></span><span class="mjx-mo MJXc-space3"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.077em; padding-bottom: 0.298em;">=</span></span><span class="mjx-munderover MJXc-space3"><span class="mjx-base"><span class="mjx-mo"><span class="mjx-char MJXc-TeX-size1-R" style="padding-top: 0.519em; padding-bottom: 0.519em;">∑</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.439em; padding-right: 0.071em;"><span class="mjx-mi" style=""><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.519em;">j</span></span></span></span><span class="mjx-msubsup MJXc-space1"><span class="mjx-base"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.519em; padding-bottom: 0.298em;">A</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mi" style=""><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.519em;">j</span></span></span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.024em;">X</span></span><span 
class="mjx-msubsup"><span class="mjx-base" style="margin-right: -0.024em;"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.024em;">X</span></span></span><span class="mjx-stack" style="vertical-align: -0.311em;"><span class="mjx-sup" style="font-size: 70.7%; padding-bottom: 0.255em; padding-left: 0.115em; padding-right: 0.071em;"><span class="mjx-mo" style=""><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.151em; padding-bottom: 0.298em;">∗</span></span></span><span class="mjx-sub" style="font-size: 70.7%; padding-right: 0.071em;"><span class="mjx-mi" style=""><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.519em;">j</span></span></span></span></span></span></span></span></span></span>, and define<span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="\Phi(A_1,\dots,A_r)=\Gamma(A_1,\dots,A_r;A_1,\dots,A_r)"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">Φ</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">(</span></span><span class="mjx-msubsup"><span class="mjx-base"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.519em; padding-bottom: 0.298em;">A</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mn" style=""><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">1</span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-mo MJXc-space1"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 
0.372em;">…</span></span><span class="mjx-mo MJXc-space1"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-msubsup MJXc-space1"><span class="mjx-base"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.519em; padding-bottom: 0.298em;">A</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mi" style=""><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">r</span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">)</span></span><span class="mjx-mo MJXc-space3"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.077em; padding-bottom: 0.298em;">=</span></span><span class="mjx-mi MJXc-space3"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">Γ</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">(</span></span><span class="mjx-msubsup"><span class="mjx-base"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.519em; padding-bottom: 0.298em;">A</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mn" style=""><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">1</span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-mo MJXc-space1"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.372em;">…</span></span><span class="mjx-mo MJXc-space1"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 
0.519em;">,</span></span><span class="mjx-msubsup MJXc-space1"><span class="mjx-base"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.519em; padding-bottom: 0.298em;">A</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mi" style=""><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">r</span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.151em; padding-bottom: 0.519em;">;</span></span><span class="mjx-msubsup MJXc-space1"><span class="mjx-base"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.519em; padding-bottom: 0.298em;">A</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mn" style=""><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">1</span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-mo MJXc-space1"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.372em;">…</span></span><span class="mjx-mo MJXc-space1"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-msubsup MJXc-space1"><span class="mjx-base"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.519em; padding-bottom: 0.298em;">A</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mi" style=""><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">r</span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 
0.446em; padding-bottom: 0.593em;">)</span></span></span></span></span></span></span>. </p><p>If <span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="X"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.024em;">X</span></span></span></span></span></span></span> is a matrix, then let <span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="\rho(X)"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.519em;">ρ</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">(</span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.024em;">X</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">)</span></span></span></span></span></span></span> denote the spectral radius of <span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="X"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.024em;">X</span></span></span></span></span></span></span>.</p><p>Define the <span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="L_2"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-msubsup"><span class="mjx-base"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em;">L</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; 
padding-right: 0.071em;"><span class="mjx-mn" style=""><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">2</span></span></span></span></span></span></span></span></span>-spectral radius similarity between <span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="(A_1,\dots,A_r)"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">(</span></span><span class="mjx-msubsup"><span class="mjx-base"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.519em; padding-bottom: 0.298em;">A</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mn" style=""><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">1</span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-mo MJXc-space1"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.372em;">…</span></span><span class="mjx-mo MJXc-space1"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-msubsup MJXc-space1"><span class="mjx-base"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.519em; padding-bottom: 0.298em;">A</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mi" style=""><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">r</span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 
0.593em;">)</span></span></span></span></span></span></span> and <span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="(X_1,\dots,X_r)"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">(</span></span><span class="mjx-msubsup"><span class="mjx-base" style="margin-right: -0.024em;"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.024em;">X</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mn" style=""><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">1</span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-mo MJXc-space1"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.372em;">…</span></span><span class="mjx-mo MJXc-space1"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-msubsup MJXc-space1"><span class="mjx-base" style="margin-right: -0.024em;"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.024em;">X</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mi" style=""><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">r</span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">)</span></span></span></span></span></span></span></p><p>by setting <span class="math-tex"><span 
class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="\Large\|(A_1,\dots,A_r)\simeq(X_1,\dots,X_r)\|_2"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-mstyle"><span class="mjx-mrow" style="font-size: 144%;"><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">∥</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">(</span></span><span class="mjx-msubsup"><span class="mjx-base"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.519em; padding-bottom: 0.298em;">A</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mn" style=""><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">1</span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-mo MJXc-space1"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.372em;">…</span></span><span class="mjx-mo MJXc-space1"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-msubsup MJXc-space1"><span class="mjx-base"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.519em; padding-bottom: 0.298em;">A</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mi" style=""><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">r</span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">)</span></span><span class="mjx-mo MJXc-space3"><span class="mjx-char 
MJXc-TeX-main-R" style="padding-top: 0.151em; padding-bottom: 0.298em;">≃</span></span><span class="mjx-mo MJXc-space3"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">(</span></span><span class="mjx-msubsup"><span class="mjx-base" style="margin-right: -0.024em;"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.024em;">X</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mn" style=""><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">1</span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-mo MJXc-space1"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.372em;">…</span></span><span class="mjx-mo MJXc-space1"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-msubsup MJXc-space1"><span class="mjx-base" style="margin-right: -0.024em;"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.024em;">X</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mi" style=""><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">r</span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">)</span></span><span class="mjx-msubsup"><span class="mjx-base"><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">∥</span></span></span><span class="mjx-sub" style="font-size: 
70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mn" style=""><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">2</span></span></span></span></span></span></span></span></span></span></span></p><p><span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="\Large=\frac{\rho(\Gamma(A_1,\dots,A_r;X_1,\dots,X_r))}{\rho(\Phi(A_1,\dots,A_r))^{1/2}\rho(\Phi(X_1,\dots,X_r))^{1/2}}."><span class="mjx-mrow" aria-hidden="true"><span class="mjx-mstyle"><span class="mjx-mrow" style="font-size: 144%;"><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.077em; padding-bottom: 0.298em;">=</span></span><span class="mjx-mfrac MJXc-space3"><span class="mjx-box MJXc-stacked" style="width: 11.835em; padding: 0px 0.12em;"><span class="mjx-numerator" style="font-size: 70.7%; width: 16.738em; top: -1.693em;"><span class="mjx-mrow" style=""><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.519em;">ρ</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">(</span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">Γ</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">(</span></span><span class="mjx-msubsup"><span class="mjx-base"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.519em; padding-bottom: 0.298em;">A</span></span></span><span class="mjx-sub" style="font-size: 83.3%; vertical-align: -0.267em; padding-right: 0.06em;"><span class="mjx-mn" style=""><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">1</span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" 
style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.372em;">…</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-msubsup"><span class="mjx-base"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.519em; padding-bottom: 0.298em;">A</span></span></span><span class="mjx-sub" style="font-size: 83.3%; vertical-align: -0.18em; padding-right: 0.06em;"><span class="mjx-mi" style=""><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">r</span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.151em; padding-bottom: 0.519em;">;</span></span><span class="mjx-msubsup"><span class="mjx-base" style="margin-right: -0.024em;"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.024em;">X</span></span></span><span class="mjx-sub" style="font-size: 83.3%; vertical-align: -0.267em; padding-right: 0.06em;"><span class="mjx-mn" style=""><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">1</span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.372em;">…</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-msubsup"><span class="mjx-base" style="margin-right: -0.024em;"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 
0.024em;">X</span></span></span><span class="mjx-sub" style="font-size: 83.3%; vertical-align: -0.18em; padding-right: 0.06em;"><span class="mjx-mi" style=""><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">r</span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">)</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">)</span></span></span></span><span class="mjx-denominator" style="font-size: 70.7%; width: 16.738em; bottom: -1.197em;"><span class="mjx-mrow" style=""><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.519em;">ρ</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">(</span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">Φ</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">(</span></span><span class="mjx-msubsup"><span class="mjx-base"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.519em; padding-bottom: 0.298em;">A</span></span></span><span class="mjx-sub" style="font-size: 83.3%; vertical-align: -0.267em; padding-right: 0.06em;"><span class="mjx-mn" style=""><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">1</span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.372em;">…</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 
0.519em;">,</span></span><span class="mjx-msubsup"><span class="mjx-base"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.519em; padding-bottom: 0.298em;">A</span></span></span><span class="mjx-sub" style="font-size: 83.3%; vertical-align: -0.18em; padding-right: 0.06em;"><span class="mjx-mi" style=""><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">r</span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">)</span></span><span class="mjx-msubsup"><span class="mjx-base"><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">)</span></span></span><span class="mjx-sup" style="font-size: 83.3%; vertical-align: 0.408em; padding-left: 0px; padding-right: 0.06em;"><span class="mjx-texatom" style=""><span class="mjx-mrow"><span class="mjx-mn"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">1</span></span><span class="mjx-texatom"><span class="mjx-mrow"><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">/</span></span></span></span><span class="mjx-mn"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">2</span></span></span></span></span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.519em;">ρ</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">(</span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">Φ</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">(</span></span><span class="mjx-msubsup"><span class="mjx-base" 
style="margin-right: -0.024em;"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.024em;">X</span></span></span><span class="mjx-sub" style="font-size: 83.3%; vertical-align: -0.267em; padding-right: 0.06em;"><span class="mjx-mn" style=""><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">1</span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.372em;">…</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-msubsup"><span class="mjx-base" style="margin-right: -0.024em;"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.024em;">X</span></span></span><span class="mjx-sub" style="font-size: 83.3%; vertical-align: -0.18em; padding-right: 0.06em;"><span class="mjx-mi" style=""><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">r</span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">)</span></span><span class="mjx-msubsup"><span class="mjx-base"><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">)</span></span></span><span class="mjx-sup" style="font-size: 83.3%; vertical-align: 0.408em; padding-left: 0px; padding-right: 0.06em;"><span class="mjx-texatom" style=""><span class="mjx-mrow"><span class="mjx-mn"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">1</span></span><span class="mjx-texatom"><span class="mjx-mrow"><span 
class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">/</span></span></span></span><span class="mjx-mn"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">2</span></span></span></span></span></span></span></span><span style="border-bottom: 1.8px solid; top: -0.296em; width: 11.835em;" class="mjx-line"></span></span><span style="height: 2.044em; vertical-align: -0.846em;" class="mjx-vsize"></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.372em;">.</span></span></span></span></span></span></span></span></span> </p><p>The quantity <span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="\Large\|(A_1,\dots,A_r)\simeq(X_1,\dots,X_r)\|_2"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-mstyle"><span class="mjx-mrow" style="font-size: 144%;"><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">∥</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">(</span></span><span class="mjx-msubsup"><span class="mjx-base"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.519em; padding-bottom: 0.298em;">A</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mn" style=""><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">1</span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-mo MJXc-space1"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.372em;">…</span></span><span class="mjx-mo MJXc-space1"><span class="mjx-char 
MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-msubsup MJXc-space1"><span class="mjx-base"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.519em; padding-bottom: 0.298em;">A</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mi" style=""><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">r</span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">)</span></span><span class="mjx-mo MJXc-space3"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.151em; padding-bottom: 0.298em;">≃</span></span><span class="mjx-mo MJXc-space3"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">(</span></span><span class="mjx-msubsup"><span class="mjx-base" style="margin-right: -0.024em;"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.024em;">X</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mn" style=""><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">1</span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-mo MJXc-space1"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.372em;">…</span></span><span class="mjx-mo MJXc-space1"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-msubsup MJXc-space1"><span class="mjx-base" style="margin-right: -0.024em;"><span class="mjx-mi"><span class="mjx-char 
MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.024em;">X</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mi" style=""><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">r</span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">)</span></span><span class="mjx-msubsup"><span class="mjx-base"><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">∥</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mn" style=""><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">2</span></span></span></span></span></span></span></span></span></span></span> is always a real number in the interval <span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="[0,1]"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">[</span></span><span class="mjx-mn"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">0</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-mn MJXc-space1"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">1</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">]</span></span></span></span></span></span></span> (the proof is a generalization of the Cauchy-Schwarz inequality).</p><p>If <span class="math-tex"><span 
class="mjpage">\(1\leq d<n\)</span></span>, and <span class="math-tex">\(A_1,\dots,A_r\in M_n(K)\)</span>, then we say that <span class="math-tex">\((X_1,\dots,X_r)\in M_d(K)^r\)</span> is an <span class="math-tex">\(L_{2,d}\)</span>-spectral radius dimensionality reduction (LSRDR) of <span class="math-tex">\((A_1,\dots,A_r)\)</span> if the similarity <span class="math-tex">\(\|(A_1,\dots,A_r)\simeq(X_1,\dots,X_r)\|_2\)</span> is locally maximized.</p><p>One can produce an LSRDR of <span class="math-tex">\((A_1,\dots,A_r)\)</span> simply by locally maximizing the similarity as a function of <span class="math-tex">\((X_1,\dots,X_r)\)</span> using gradient ascent.
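</p><p>As a quick illustration of the gradient-ascent route, here is a minimal Python sketch (not the author's code). It assumes the real case <span class="math-tex">\(K=\mathbb{R}\)</span> and takes as given the formula <span class="math-tex">\(\|(A_1,\dots,A_r)\simeq(X_1,\dots,X_r)\|_2=\rho(\sum_jA_j\otimes X_j)/\sqrt{\rho(\sum_jA_j\otimes A_j)\rho(\sum_jX_j\otimes X_j)}\)</span> for the similarity, where <span class="math-tex">\(\rho\)</span> denotes the spectral radius; if the similarity is defined differently, only the <code>similarity</code> function below needs to change. It uses crude finite-difference gradients, so it is only practical for small <span class="math-tex">\(n,d,r\)</span>:</p>

```python
import numpy as np

def spectral_radius(M):
    """rho(M): the largest absolute value of an eigenvalue of M."""
    return np.max(np.abs(np.linalg.eigvals(M)))

def similarity(As, Xs):
    """L_2-spectral-radius similarity of (A_1,...,A_r) and (X_1,...,X_r),
    real case; the formula here is the assumption stated in the text above."""
    num = spectral_radius(sum(np.kron(A, X) for A, X in zip(As, Xs)))
    dA = spectral_radius(sum(np.kron(A, A) for A in As))
    dX = spectral_radius(sum(np.kron(X, X) for X in Xs))
    return num / np.sqrt(dA * dX)

def lsrdr(As, d, steps=200, lr=0.05, eps=1e-6, seed=0):
    """Gradient ascent on the similarity over (X_1,...,X_r) in M_d(R)^r."""
    rng = np.random.default_rng(seed)
    Xs = [rng.standard_normal((d, d)) for _ in As]
    for _ in range(steps):
        base = similarity(As, Xs)
        grads = []
        for k, X in enumerate(Xs):
            G = np.zeros_like(X)  # finite-difference gradient in X_k
            for i in range(d):
                for j in range(d):
                    trial = [Y.copy() for Y in Xs]
                    trial[k][i, j] += eps
                    G[i, j] = (similarity(As, trial) - base) / eps
            grads.append(G)
        Xs = [X + lr * G for X, G in zip(Xs, grads)]
    return Xs
```

<p>In practice one would use automatic differentiation and a proper optimizer rather than finite differences; the point is only that an LSRDR is found by locally maximizing a single scalar objective. 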
But since LSRDRs are mathematically well-behaved, there is another way to obtain them.</p><p>Let <span class="math-tex">\(Z(K)\)</span> denote the center of the algebra <span class="math-tex">\(K\)</span>. In particular, <span class="math-tex">\(Z(\mathbb{R})=\mathbb{R},Z(\mathbb{C})=\mathbb{C},Z(\mathbb{H})=\mathbb{R}\)</span>.</p><p><strong>Empirical observation:</strong> If <span class="math-tex">\((X_1,\dots,X_r)\in M_d(K)^r\)</span> is an LSRDR of <span class="math-tex">\((A_1,\dots,A_r)\)</span>, then there typically exists <span class="math-tex">\(\lambda\in Z(K)\)</span> along with matrices <span class="math-tex">\(R,S\)</span> with <span class="math-tex">\(X_j=\lambda RA_jS\)</span> for all <span class="math-tex">\(j\)</span> and w... <style>.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax_AMS'), local('MathJax_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_AMS-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax_Caligraphic Bold'), local('MathJax_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Caligraphic-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax_Fraktur'), local('MathJax_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Fraktur-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax_Fraktur Bold'), local('MathJax_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Fraktur-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax_Math BoldItalic'), local('MathJax_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Math-BoldItalic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax_SansSerif'), local('MathJax_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax_SansSerif Bold'), local('MathJax_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax_SansSerif Italic'), local('MathJax_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax_Script'), local('MathJax_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Script-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax_Typewriter'), local('MathJax_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Typewriter-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax_Caligraphic'), local('MathJax_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Caligraphic-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax_Main Bold'), local('MathJax_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax_Main Italic'), local('MathJax_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax_Main'), local('MathJax_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax_Math Italic'), local('MathJax_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Math-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax_Size1'), local('MathJax_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size1-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax_Size2'), local('MathJax_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size2-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax_Size3'), local('MathJax_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size3-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax_Size4'), local('MathJax_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size4-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax_Vector'), local('MathJax_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Vector-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax_Vector Bold'), local('MathJax_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Vector-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Vector-Bold.otf') format('opentype')}
</style></p> | In this post, I shall describe a fitness function that can be locally maximized without gradient computations. This fitness function is my own; I initially developed it to evaluate block ciphers for cryptocurrency technologies, but I later found that it can also be used to solve other problems, such as the clique problem (which is NP-complete) in the average case and some natural language processing tasks. After describing algorithms for locally maximizing this fitness function, I conclude that such a fitness function is inherently interpretable and mathematical, which is what we need for AI safety.
Let K denote either the field of real numbers, the field of complex numbers, or the division ring of quaternions. Given (A_1,…,A_r) ∈ M_n(K)^r and 1 ≤ d < n, the goal is to find a tuple (X_1,…,X_r) ∈ M_d(K)^r most similar to (A_1,…,A_r). In other words, we want to approximate the n×n matrices (A_1,…,A_r) with d×d matrices.
Suppose that A_1,…,A_r ∈ M_n(K) and X_1,…,X_r ∈ M_d(K). Define a function Γ(A_1,…,A_r; X_1,…,X_r) : M_{n,d}(K) → M_{n,d}(K) by setting
Γ(A_1,…,A_r; X_1,…,X_r)(X) = ∑_j A_j X X_j^*, and define Φ(A_1,…,A_r) = Γ(A_1,…,A_r; A_1,…,A_r).
If X is a matrix, then let ρ(X) denote the spectral radius of X.
Define the L^2-spectral radius similarity between (A_1,…,A_r) and (X_1,…,X_r)
by setting ∥(A_1,…,A_r) ≃ (X_1,…,X_r)∥_2
= ρ(Γ(A_1,…,A_r; X_1,…,X_r)) / (ρ(Φ(A_1,…,A_r))^{1/2} · ρ(Φ(X_1,…,X_r))^{1/2}).
The quantity ∥(A_1,…,A_r) ≃ (X_1,…,X_r)∥_2 is always a real number in the interval [0,1] (the proof is a generalization of the Cauchy–Schwarz inequality).
If 1 ≤ d < n and A_1,…,A_r ∈ M_n(K), then we say that (X_1,…,X_r) ∈ M_d(K)^r is an L^{2,d}-spectral radius dimensionality reduction (LSRDR) of (A_1,…,A_r) if the similarity ∥(A_1,…,A_r) ≃ (X_1,…,X_r)∥_2 is locally maximized.
One can produce an LSRDR of (A_1,…,A_r) simply by locally maximizing the similarity ∥(A_1,…,A_r) ≃ (X_1,…,X_r)∥_2 over (X_1,…,X_r) using gradient ascent. But there is another way to obtain LSRDRs, since they behave mathematically.
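As a minimal illustrative sketch (my addition, not from the original post; the function names are ad hoc), the similarity for K = R or C and small n, d can be computed directly by materializing Γ(A_1,…,A_r; X_1,…,X_r) as the nd×nd matrix ∑_j conj(X_j) ⊗ A_j, using the column-major identity vec(A M X^*) = (conj(X) ⊗ A) vec(M):

```python
import numpy as np

def gamma_matrix(As, Xs):
    """Materialize Γ(A_1,…,A_r; X_1,…,X_r) as an (n·d)×(n·d) matrix acting on
    vec(M), via vec(A M X*) = (conj(X) ⊗ A) vec(M) with column-major vec."""
    return sum(np.kron(X.conj(), A) for A, X in zip(As, Xs))

def spectral_radius(M):
    # Spectral radius = largest eigenvalue magnitude.
    return np.max(np.abs(np.linalg.eigvals(M)))

def l2_spectral_radius_similarity(As, Xs):
    num = spectral_radius(gamma_matrix(As, Xs))
    den = np.sqrt(spectral_radius(gamma_matrix(As, As)) *
                  spectral_radius(gamma_matrix(Xs, Xs)))
    return num / den

rng = np.random.default_rng(0)
As = [rng.standard_normal((4, 4)) for _ in range(3)]
# The similarity of a tuple with itself is exactly 1, and the similarity is
# invariant under positive scaling of either tuple.
assert abs(l2_spectral_radius_similarity(As, As) - 1.0) < 1e-9
assert abs(l2_spectral_radius_similarity(As, [2.5 * A for A in As]) - 1.0) < 1e-9
```

This only evaluates the objective at a fixed tuple; an actual LSRDR search would wrap it in gradient ascent over (X_1,…,X_r), and the quaternionic case would additionally need a real or complex matrix representation of H.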
Let Z(K) denote the center of the algebra K. In particular, Z(R) = R, Z(C) = C, and Z(H) = R.
Empirical observation: If (X | 1,740 | 1.11.0 | Revision | false | null | null | CrosspostOutput |
||
u2ww8yKp9xAB6qzcr | if-you-re-not-sure-how-to-sort-a-list-or-grid-seriate-it | If you're not sure how to sort a list or grid—seriate it! | null | false | false | false | null | BtbwfsEyeT4P2eqXu | null | true | false | false | false | Post | https://www.jstatsoft.org/article/download/v025i03/227 | 2025-05-28T03:54:13.810Z | null | false | false | 2 | 2 | 2025-05-28T17:27:27.106Z | false | false | linkpost | [] | null | null | iA4uadFvZeNKKE7gB | 7 | 109 | 214 | false | 0.112 | null | false | false | 2025-05-28T03:54:13.810Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | grecHJcgkb3KW5wnM | null | null | null | false | null | [] | null | 64 | 0 | 2025-05-28T03:54:13.810Z | false | false | null | null | true | false | false | 0 | 0 | 0 | u2ww8yKp9x | 0.138613 | false | 2,025 | https://manifold.markets/LessWrong/will-if-youre-not-sure-how-to-sort | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 3 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "puBcCq7aRwKoa7pXX",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 1,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-09-07T23:22:05.741Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "dRaCtsAWxk7sgirSY",
"displayName": "Jordan Morgan"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Note-Taking",
"needsReview": false,
"noindex": false,
"postCount": 30,
"score": 1,
"shortName": null,
"slug": "note-taking",
"suggestedAsFilter": false,
"userId": "Q7NW4XaWQmfPfdcFj",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "bh7uxTTqmsQ8jZJdB",
"adminOnly": false,
"afBaseScore": 9,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 19,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-06-08T04:32:58.906Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Probability & Statistics",
"needsReview": false,
"noindex": false,
"postCount": 333,
"score": 19,
"shortName": null,
"slug": "probability-and-statistics",
"suggestedAsFilter": false,
"userId": "qgdGA4ZEyW7zNdK84",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "ZsWDPoXcchbGneaMX",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-06-24T22:50:50.061Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "UI Design",
"needsReview": false,
"noindex": false,
"postCount": 26,
"score": 0,
"shortName": null,
"slug": "ui-design",
"suggestedAsFilter": false,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "fkABsGCJZ6y9qConW",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T06:06:46.947Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "MiuAZvbQcQ7ethgt3",
"displayName": "Viktor withaK"
},
{
"_id": "dRaCtsAWxk7sgirSY",
"displayName": "Jordan Morgan"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Practical",
"needsReview": false,
"noindex": false,
"postCount": 3411,
"score": 2,
"shortName": null,
"slug": "practical",
"suggestedAsFilter": true,
"userId": "oBSWiHjgproTiThmY",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 109 | 0 | 0 | 51 | 0 | BtbwfsEyeT4P2eqXu | gwern | 2009-02-27T22:16:11.237Z | gwern | gwern | null | null | null | 79,935 | 1,617 | false | false | <p><a href="https://www.gwern.net">https://gwern.net/</a></p> | null | null | 188 | 11,819 | 0 | 4 | 206 | 1 | 51 | r38pkCm7wF4M44MDQ | User | null | null | false | [
"trustLevel1",
"canModeratePersonal",
"alignmentVoters",
"alignmentForum"
] | null | null | u2ww8yKp9xAB6qzcr | SocialPreviewType | iA4uadFvZeNKKE7gB | <p><a href="https://www.jstatsoft.org/article/view/v025i03">"Getting Things in Order: An Introduction to the R Package seriation"</a>, Hahsler et al 2008:</p>
<blockquote>
<p><a href="https://en.wikipedia.org/wiki/Seriation_(archaeology)"><strong>Seriation</strong></a> [or <a href="https://en.wikipedia.org/wiki/Ordination_(statistics)">"ordination"</a>], i.e., finding a suitable linear order for a set of objects given data and a loss or merit function, is a basic problem in data analysis. Caused by the problem's combinatorial nature, it is hard to solve for all but very small sets. Nevertheless, both exact solution methods and heuristics are available.</p><p>In this paper we present <a href="https://cran.r-project.org/web/packages/seriation/index.html">the package <code>seriation</code></a> which provides an infrastructure for seriation with R. The infrastructure comprises data structures to represent linear orders as permutation vectors, a wide array of seriation methods using a consistent interface, a method to calculate the value of various loss and merit functions, and several visualization techniques which build on seriation.</p><p>To illustrate how easily the package can be applied for a variety of applications, a comprehensive collection of examples is presented.</p>
</blockquote>
<hr>
<p>Have you ever wondered how to sort a list or a folder of files where no strict sorting comparison operator like 'newer than' is quite right? It turns out that it is perfectly possible to loosen the definition of 'sorting' to something more approximate like 'try to minimize how different each item is from the next one'; this approximate or generalized sorting is called 'seriation'. It is obscure (I had never heard of the term until a year ago or so), but highly useful: it works for everything from seriating <a href="https://en.wikipedia.org/wiki/Seriation_(archaeology)#History">Egyptian graves by rough burial time</a> to organizing tag entries by topic. Since we now have neural embeddings for just about every modality there is, that means you can seriate anything.</p><p>What might you use it for? I use it on <a href="http://Gwern.net">Gwern.net</a> (<a href="https://gwern.net/doc/cs/algorithm/sorting/seriation/index">background</a>) to organize the 'similar links' recommendations in a way much smarter than the naive <em>k</em>-NN embedding distance retrieval approach. If you just sort them by 'distance', it is mostly meaningless and produces a jumble (see for example any algorithmic set of recommendations, like YouTube video lists - if I open a cat video, I see cat/anime/Touhou/cat/CS/music/cat/...); if you seriate them, however, suddenly you see clear clusters/topics emerge out of the chaos, and it's easier to skim the list. Because it works so well, and is so simple to implement (simple greedy distance minimization heuristic worked surprisingly well for my use-case, no need for <a href="https://en.wikipedia.org/wiki/Travelling_salesman_problem">TSP</a> solvers), I initially called it... </p> | "Getting Things in Order: An Introduction to the R Package seriation", Hahsler et al 2008:
> Seriation [or "ordination"], i.e., finding a suitable linear order for a set of objects given data and a loss or merit function, is a basic problem in data analysis. Caused by the problem's combinatorial nature, it is hard to solve for all but very small sets. Nevertheless, both exact solution methods and heuristics are available.
>
> In this paper we present the package seriation which provides an infrastructure for seriation with R. The infrastructure comprises data structures to represent linear orders as permutation vectors, a wide array of seriation methods using a consistent interface, a method to calculate the value of various loss and merit functions, and several visualization techniques which build on seriation.
>
> To illustrate how easily the package can be applied for a variety of applications, a comprehensive collection of examples is presented.
----------------------------------------
Have you ever wondered how to sort a list or a folder of files where no strict sorting comparison operator like 'newer than' is quite right? It turns out that it is perfectly possible to loosen the definition of 'sorting' to something more approximate like 'try to minimize how different each item is from the next one'; this approximate or generalized sorting is called 'seriation'. It is obscure (I had never heard of the term until a year ago or so), but highly useful: it works for everything from seriating Egyptian graves by rough burial time to organizing tag entries by topic. Since we now have neural embeddings for just about every modality there is, that means you can seriate anything.
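The "minimize how different each item is from the next one" idea can be sketched as a greedy nearest-neighbor pass over pairwise embedding distances. This is a hypothetical minimal version for illustration, not Gwern.net's actual implementation:

```python
import numpy as np

def greedy_seriate(embeddings):
    """Order items so each is close to its predecessor: start from the first
    item and repeatedly append the nearest unvisited one (a cheap TSP-style
    heuristic, not an optimal seriation)."""
    X = np.asarray(embeddings, dtype=float)
    n = len(X)
    # Full pairwise Euclidean distance matrix.
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    order = [0]
    remaining = set(range(1, n))
    while remaining:
        last = order[-1]
        nxt = min(remaining, key=lambda j: dist[last, j])
        order.append(nxt)
        remaining.remove(nxt)
    return order

# Two interleaved clusters come out contiguous after seriation.
pts = [[0.0], [10.0], [0.1], [10.1], [0.2], [10.2]]
print(greedy_seriate(pts))  # → [0, 2, 4, 1, 3, 5]
```

Swapping in a proper TSP solver or the spectral/optimal-leaf-ordering methods from the R `seriation` package would give better orders at higher cost; the greedy pass is O(n²) and already groups clusters.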
What might you use it for? I use it on Gwern.net (background) to organize the 'similar links' recommendations in a way much smarter than the naive k-NN embedding distance retrieval approach. If you just sort them by 'distance', it is mostly meaningless and produces a jumble (see for example | 763 | 1.7.0 | Revision | false | null | null | CrosspostOutput |
|
SxHo8DZKQcHfy8Mum | briefly-analyzing-the-10-year-moratorium-amendment | Briefly analyzing the 10-year moratorium amendment | null | false | false | false | null | grecHJcgkb3KW5wnM | null | true | false | false | false | Post | null | 2025-05-28T03:11:20.124Z | null | false | false | 2 | 2 | null | false | false | post | [] | null | null | 3p5KtqhNcRts4zX3r | 1 | 17 | 73 | false | 0.03646 | null | false | false | 2025-06-04T23:48:05.605Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 31 | 0 | 2025-05-28T03:04:55.002Z | false | false | easy-going | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 3 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "qHDus5MuMNqQxJbjD",
"adminOnly": false,
"afBaseScore": 4,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "oEF4gToHRPEMw4FSo",
"displayName": "Jono"
}
]
},
"baseScore": 11,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-08-09T18:31:56.709Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "oEF4gToHRPEMw4FSo",
"displayName": "Jono"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI Governance",
"needsReview": false,
"noindex": false,
"postCount": 726,
"score": 11,
"shortName": null,
"slug": "ai-governance",
"suggestedAsFilter": false,
"userId": "QBvPFLFyZyuHcBwFm",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 17 | 0 | 0 | 13 | 0 | grecHJcgkb3KW5wnM | t3t | 2014-11-20T23:16:38.088Z | T3t | RobertM | null | null | null | 4,525 | 41 | false | true | <p>LessWrong dev & admin as of July 5th, 2022.</p> | null | null | 170 | 477 | 0 | 0 | 11 | 1 | 69 | r38pkCm7wF4M44MDQ | User | easy-going | null | null | [
"canModeratePersonal",
"trustLevel1",
"realAdmins",
"alignmentVoters",
"sunshineRegiment"
] | null | null | SxHo8DZKQcHfy8Mum | SocialPreviewType | 3p5KtqhNcRts4zX3r | <p>This is the result of a half-day research sprint on the <a href="https://d1dth6e84htgma.cloudfront.net/Subtitle_C_Communications_4e3fbcc3bc.pdf">recently-introduced amendment</a> to institute a 10-year moratorium on state-level AI regulations in the current budget reconciliation bill, with a focus on figuring out "is it likely to survive the Byrd Rule".</p><hr><p>It seems quite obviously in violation of the Byrd Rule because it violates at least 2 of the 6 tests<span class="footnote-reference" data-footnote-reference="" data-footnote-index="1" data-footnote-id="v235qwd08e" role="doc-noteref" id="fnrefv235qwd08e"><sup><a href="#fnv235qwd08e">[1]</a></sup></span>:</p><ul><li>It has no budget effect, i.e. does not change outlays or revenues. (Probably this is not actually true; I think on net we would expect it to somewhat increase revenues via 2nd/3rd order effects, but in the relevant legal sense, it's not an explicit additional tax and it's not an additional explicit expenditure. Also, the CBO estimate for that entire section is -500m, which is the full sum of the $500m it appropriates to <code>modernize and secure Federal information technology systems through the deployment of commercial artificial intelligence, the deployment of automation technologies, and the replacement of antiquated business systems in accordance with subsection (b)</code>)</li><li>Even if it did have a budget effect, it would be incidental to the non-budgetary component of the provision, which is clearly meant to be a "policy change".</li></ul><p>There is a "Byrd bath" that reconciliation bills go through, directed by the Senate Parliamentarian, before the bill is taken up by the Senate. 
The Parliamentarian & Senate staff identify the sections that seem like obvious violations of the Byrd Rule and try to work with whoever drafted the relevant section of text to redraft or amend it so that it doesn't violate the Byrd Rule (if possible; sometimes they're just deleted wholesale). It's not clear to me what percentage of violations get scrubbed out at this step, but various LLMs think it's most of them.</p><p>If it does still somehow make it through that step, Senators can raise a "point of order" against subtitles, sections, lines, or even single words in the bill. Other Senators can motion to "waive" a point of order; these waiver motions require a 3/5 majority vote (60 Senators) to pass. Separately, the Senate chair (currently JD Vance) can rule against the point of order without another Senator initiating a motion to "waive" it. Historically, the chair has deferred to the Parliamentarian's advice about whether the text being objected to is extraneous or not; as far as I can tell there haven't bee... </p> | This is the result of a half-day research sprint on the recently-introduced amendment to institute a 10-year moratorium on state-level AI regulations in the current budget reconciliation bill, with a focus on figuring out "is it likely to survive the Byrd Rule".
----------------------------------------
It seems quite obviously in violation of the Byrd Rule because it violates at least 2 of the 6 tests[1]:
* It has no budget effect, i.e. does not change outlays or revenues. (Probably this is not actually true; I think on net we would expect it to somewhat increase revenues via 2nd/3rd order effects, but in the relevant legal sense, it's not an explicit additional tax and it's not an additional explicit expenditure. Also, the CBO estimate for that entire section is -500m, which is the full sum of the $500m it appropriates to modernize and secure Federal information technology systems through the deployment of commercial artificial intelligence, the deployment of automation technologies, and the replacement of antiquated business systems in accordance with subsection (b))
* Even if it did have a budget effect, it would be incidental to the non-budgetary component of the provision, which is clearly meant to be a "policy change".
There is a "Byrd bath" that reconciliation bills go through, directed by the Senate Parliamentarian, before the bill is taken up by the Senate. The Parliamentarian & Senate staff identify the sections that seem like obvious violations of the Byrd Rule and try to work with whoever drafted the relevant section of text to redraft or amend it so that it doesn't violate the Byrd Rule (if possible; sometimes they're just deleted wholesale). It's not clear to me what percentage of violations get scrubbed out at this step, but various LLMs think it's most of them.
If it does still somehow make it through that step, Senators can raise a "point of order" against subtitles, sections, lines, or even single words in the bill. Other Senators can mo | 788 | 1.3.0 | Revision | false | null | null | CrosspostOutput |
|
u9n3hfrjkC8J6WMaX | does-sort-really-fall-back-to-disk | Does Sort Really Fall Back to Disk? | null | false | false | false | null | TtEoCrFeowCGb6rFK | null | true | false | false | false | Post | null | 2025-05-28T01:20:04.399Z | null | false | false | 2 | 2 | null | false | false | post | [] | null | null | gT2azppX33DZq5oWg | 2 | 4 | 13 | false | 0.006474 | null | false | false | 2025-05-29T02:06:01.235Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | null | null | grecHJcgkb3KW5wnM | null | null | null | false | null | [] | null | 2 | 0 | 2025-05-28T01:20:04.399Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 1 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "fkABsGCJZ6y9qConW",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T06:06:46.947Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "MiuAZvbQcQ7ethgt3",
"displayName": "Viktor withaK"
},
{
"_id": "dRaCtsAWxk7sgirSY",
"displayName": "Jordan Morgan"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Practical",
"needsReview": false,
"noindex": false,
"postCount": 3411,
"score": 2,
"shortName": null,
"slug": "practical",
"suggestedAsFilter": true,
"userId": "oBSWiHjgproTiThmY",
"voteCount": 2,
"wikiOnly": false
}
] | ma5dgL5yFHRxKLZKv | 0 | 0 | null | false | null | null | 0 | 4 | 0 | 0 | 2 | 0 | TtEoCrFeowCGb6rFK | jkaufman | 2010-11-04T21:42:19.863Z | jkaufman | jefftk | null | null | Jeff Kaufman | 21,921 | 3 | false | false | <p>Co-lead (Near-Term Detection) at the Nucleic Acid Observatory in Boston. Speaking for myself unless I say otherwise.</p> | null | null | 1,018 | 2,211 | 0 | 0 | 1 | 1 | 2 | r38pkCm7wF4M44MDQ | User | null | null | null | [
"trustLevel1",
"canModeratePersonal",
"alignmentVoters"
] | null | null | u9n3hfrjkC8J6WMaX | SocialPreviewType | gT2azppX33DZq5oWg | <p><span>
The unix </span>
<code>sort</code> command is clever: to sort very large
files it does a series of in-memory sorts, saving sorted chunks to
temporary files, and then does a merge sort on those chunks. Except
this often doesn't work anymore.
</p><p>
Here's what I see if I run <code>man sort</code> and look at the
documentation for <code>--buffer-size</code>:
</p>
<blockquote>
use SIZE for main memory buffer
</blockquote>
<p>
That's pretty terse! What does my Mac say?
</p>
<blockquote>
Use size for the maximum size of the memory buffer. Size
modifiers %,b,K,M,G,T,P,E,Z,Y can be used. If a memory limit is
not explicitly specified, sort takes up to about 90% of available
memory. If the file size is too big to fit into the memory buffer,
the temporary disk files are used to perform the sorting.
</blockquote>
<p>
Makes sense! But then the docs for <code>--temporary-directory</code>
say:
</p>
<blockquote>
use DIR for temporaries, not <code>$TMPDIR</code> or
<code>/tmp</code>; multiple options specify multiple directories
</blockquote>
<p>
And these days <code>/tmp</code> is often memory-backed, via <a href="https://en.wikipedia.org/wiki/Tmpfs">tmpfs</a>. This changed in
<a href="https://fedoraproject.org/wiki/Features/tmp-on-tmpfs">Fedora
18</a> (2013) and <a href="https://discourse.ubuntu.com/t/oracular-oriole-release-notes/44878">Ubuntu
24.10</a> (2024), and is changing in <a href="https://news.itsfoss.com/debian-13-tmp-mounting/">Debian 13</a>
(in a month or two).
</p>
<p>
It seems to me that these days it would be better for
<code>--temporary-directory</code> to default to
<code>/var/tmp</code>, which is <a href="https://www.pathname.com/fhs/pub/fhs-2.3.html#VARTMPTEMPORARYFILESPRESERVEDBETWEE">preserved
across reboots</a> and so will generally be backed by disk even on
systems that use tmpfs for <code>/tmp</code>. In the meantime,
<code>sort --temporary-directory /var/tmp</code> will do the trick.
</p> | The unix sort command is clever: to sort very large files it does a series of in-memory sorts, saving sorted chunks to temporary files, and then does a merge sort on those chunks. Except this often doesn't work anymore.
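The fix above can be sketched in two commands (a sketch assuming a Linux system with GNU coreutils; <code>demo.txt</code> is a made-up input file, not from the original post):

```shell
# See which filesystem backs /tmp; a "tmpfs" type means it is memory-backed
df -T /tmp

# Point sort's temporaries at disk-backed /var/tmp instead of $TMPDIR or /tmp
printf 'banana\napple\ncherry\n' > demo.txt
sort --temporary-directory=/var/tmp demo.txt
# apple
# banana
# cherry
```

On a machine where `/tmp` is tmpfs, the second command keeps the spill files out of RAM; the sorted output is unchanged either way.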
Here's what I see if I run man sort and look at the documentation for --buffer-size:
> use SIZE for main memory buffer
That's pretty terse! What does my Mac say?
> Use size for the maximum size of the memory buffer. Size modifiers %,b,K,M,G,T,P,E,Z,Y can be used. If a memory limit is not explicitly specified, sort takes up to about 90% of available memory. If the file size is too big to fit into the memory buffer, the temporary disk files are used to perform the sorting.
Makes sense! But then the docs for --temporary-directory say:
> use DIR for temporaries, not $TMPDIR or /tmp; multiple options specify multiple directories
And these days /tmp is often memory-backed, via tmpfs. This changed in Fedora 18 (2013) and Ubuntu 24.10 (2024), and is changing in Debian 13 (in a month or two).
It seems to me that these days it would be better for --temporary-directory to default to /var/tmp, which is preserved across reboots and so will generally be backed by disk even on systems that use tmpfs for /tmp. In the meantime, sort --temporary-directory /var/tmp will do the trick. | 228 | 1.0.0 | Revision | false | null | null | CrosspostOutput |
|
dcd2dPLZGFJPgtDzq | shift-resources-to-advocacy-now-post-4-of-7-on-ai-governance | Shift Resources to Advocacy Now (Post 4 of 7 on AI Governance) | null | false | false | false | null | 62rKjNqA2LCJ6RthR | null | true | false | false | false | Post | null | 2025-05-28T01:19:27.307Z | null | false | false | 2 | 2 | 2025-05-28T17:24:54.971Z | false | false | post | [] | null | null | tphfGsrw2ZCDevnAN | 18 | 18 | 53 | false | 0.031375 | null | false | false | 2025-05-29T15:02:25.403Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | grecHJcgkb3KW5wnM | null | null | null | false | null | [] | null | 12 | 0 | 2025-05-28T01:08:52.192Z | false | false | easy-going | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 38 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "qHDus5MuMNqQxJbjD",
"adminOnly": false,
"afBaseScore": 4,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "oEF4gToHRPEMw4FSo",
"displayName": "Jono"
}
]
},
"baseScore": 11,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-08-09T18:31:56.709Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "oEF4gToHRPEMw4FSo",
"displayName": "Jono"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI Governance",
"needsReview": false,
"noindex": false,
"postCount": 726,
"score": 11,
"shortName": null,
"slug": "ai-governance",
"suggestedAsFilter": false,
"userId": "QBvPFLFyZyuHcBwFm",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 18 | 0 | 0 | 6 | 0 | 62rKjNqA2LCJ6RthR | mass_driver | 2010-03-30T15:48:06.997Z | Mass_Driver | Mass_Driver | null | null | null | 3,304 | 0 | false | false | null | null | 31 | 655 | 1 | 0 | 0 | 1 | 0 | r38pkCm7wF4M44MDQ | User | null | null | null | [
"trustLevel1",
"canModeratePersonal"
] | null | null | dcd2dPLZGFJPgtDzq | https://res.cloudinary.com/lesswrong-2-0/image/upload/c_fill,ar_1.91,g_auto/SocialPreview/yk7z1f9ugd7wxjsuqak7 | SocialPreviewType | tphfGsrw2ZCDevnAN | <p>In my previous post in this series, I estimated that we have 3 researchers for every advocate working on US AI governance, and I argued that this ratio is backwards. When allocating staff, you almost always want to have more people working on the more central activity. I argued that in the case of AI policy, the central activity is advocacy, not research, because the core problem to be solved is fixing the bad private incentives faced by AI developers. </p><p>As I explained, the problem with these incentives is less that they’re poorly understood, and more that they require significant political effort to overturn. As a result, we’ll need to shift significant resources from research (which helps us understand problems better) to advocacy (which helps us change bad incentives).</p><p>In this post, I want to explain why it’s appropriate for us to shift these resources <i><strong>now</strong></i>, rather than waiting until some future date to scale up our advocacy.</p><p>The arguments for why we should wait until later to start advocating can be divided into three broad categories: (a) we aren’t yet sure that any of our policies will be robustly helpful, (b) we haven’t yet learned how to advocate successfully, and (c) we don’t yet have enough senior politicians in the movement to staff a large advocacy effort. What these objections have in common is that they seem to justify an emphasis on research for the next few years – if we’re missing knowledge or experience that we would need in order to win at advocacy, then maybe we should bide our time while we gather more resources. </p><p>In my opinion, none of these objections are well-founded. 
We should be very confident that our best policies offer net-positive value, we should be very confident that advocating for those policies will increase their chance of passage, and we should be confident that we can solve the challenge of recruiting effective advocates with willpower, flexibility, and funding.</p><p>Moreover, even if we were not self-confident, our chance of success will not be significantly improved by waiting or by doing more academic research. On the contrary, we face enormous time pressure to get better incentives in place before AI developers push us past the point of no return, so we need to start moving AI regulations forward now, using whatever policy tools and policy advocates are currently available.</p><h1>OUR BEST POLICIES OFFER POSITIVE EXPECTED VALU</h1>... | In my previous post in this series, I estimated that we have 3 researchers for every advocate working on US AI governance, and I argued that this ratio is backwards. When allocating staff, you almost always want to have more people working on the more central activity. I argued that in the case of AI policy, the central activity is advocacy, not research, because the core problem to be solved is fixing the bad private incentives faced by AI developers.
As I explained, the problem with these incentives is less that they’re poorly understood, and more that they require significant political effort to overturn. As a result, we’ll need to shift significant resources from research (which helps us understand problems better) to advocacy (which helps us change bad incentives).
In this post, I want to explain why it’s appropriate for us to shift these resources now, rather than waiting until some future date to scale up our advocacy.
The arguments for why we should wait until later to start advocating can be divided into three broad categories: (a) we aren’t yet sure that any of our policies will be robustly helpful, (b) we haven’t yet learned how to advocate successfully, and (c) we don’t yet have enough senior politicians in the movement to staff a large advocacy effort. What these objections have in common is that they seem to justify an emphasis on research for the next few years – if we’re missing knowledge or experience that we would need in order to win at advocacy, then maybe we should bide our time while we gather more resources.
In my opinion, none of these objections are well-founded. We should be very confident that our best policies offer net-positive value, we should be very confident that advocating for those policies will increase their chance of passage, and we should be confident that we can solve the challenge of recruiting effective advocates with willpower, flexibility, and funding.
Moreover, even if we were not self-confident, our chance of succ | 9,485 | 1.4.0 | Revision | false | null | null | CrosspostOutput |
|
bgzBrPDfDCy7mnndL | colonialism-in-space-does-a-collection-of-minds-have-exactly | Colonialism in space: Does a collection of minds have exactly two attractors? | null | false | false | false | null | T7QHMS7qNx3s7z36d | null | true | false | false | false | Post | 2025-05-27T23:35:52.221Z | null | false | false | 2 | 2 | 2025-05-28T17:24:32.068Z | false | false | question | [] | null | null | edtbeLauQAnfEXxhy | 5 | 3 | 3 | false | 0.006457 | null | false | false | 2025-05-30T17:55:21.959Z | null | null | null | null | null | true | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | grecHJcgkb3KW5wnM | null | null | null | false | null | [] | null | 0 | 0 | 2025-05-23T22:18:58.817Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 1 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "3uE2pXvbcnS9nnZRE",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:50.898Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 27,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "zJtgSyKntXrnkArbY",
"displayName": "kistune"
},
{
"_id": "XqyQTGFGoKfZtdqN3",
"displayName": "kINo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Modeling",
"needsReview": false,
"noindex": false,
"postCount": 5855,
"score": 2,
"shortName": null,
"slug": "world-modeling",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 3 | 0 | 0 | 0 | 0 | T7QHMS7qNx3s7z36d | stanislavkrym | 2025-03-18T20:38:41.892Z | StanislavKrym | StanislavKrym | null | null | null | -80 | 0 | false | false | null | null | 16 | 68 | 0 | 0 | 0 | 0.8 | 0 | qgdGA4ZEyW7zNdK84 | User | null | null | null | null | null | null | bgzBrPDfDCy7mnndL | SocialPreviewType | edtbeLauQAnfEXxhy | <p>Depending on the estimates of parameters, the <a href="https://en.wikipedia.org/wiki/Drake_equation">Drake equation</a> produces drastically different amounts of contactable civilisations in the Milky Way. Some estimates imply that the reason is the extreme rarity of life, while others suggest that we just happened to be the first civilisation in the galaxy<span class="footnote-reference" data-footnote-reference="" data-footnote-index="1" data-footnote-id="fuemu5sbr3f" role="doc-noteref" id="fnreffuemu5sbr3f"><sup><a href="#fnfuemu5sbr3f">[1]</a></sup></span> that is likely to reach other systems in the foreseeable future. </p><p>The diameter of the Milky Way is less than <span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="10^5"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-msubsup"><span class="mjx-base"><span class="mjx-mn"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">10</span></span></span><span class="mjx-sup" style="font-size: 70.7%; vertical-align: 0.591em; padding-left: 0px; padding-right: 0.071em;"><span class="mjx-mn" style=""><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">5</span></span></span></span></span></span></span></span></span> light years. Meanwhile, even <i>currently existing</i> proposals like the fission-fragment rocket are estimated to allow transportation at speeds at least <a href="https://en.wikipedia.org/wiki/Spacecraft_propulsion#Table_of_methods">0.05 times the speed of light</a>. 
If an AI tried to colonize the entire galaxy, it would need at most<span class="footnote-reference" data-footnote-reference="" data-footnote-index="2" data-footnote-id="qxfyui03xvr" role="doc-noteref" id="fnrefqxfyui03xvr"><sup><a href="#fnqxfyui03xvr">[2]</a></sup></span> <span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="10^7"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-msubsup"><span class="mjx-base"><span class="mjx-mn"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">10</span></span></span><span class="mjx-sup" style="font-size: 70.7%; vertical-align: 0.591em; padding-left: 0px; padding-right: 0.071em;"><span class="mjx-mn" style=""><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">7</span></span></span></span></span></span></span></span></span> years, which is, apparently, <strong>at least 2.5 OOM less</strong> than the length of the age<span class="footnote-reference" data-footnote-reference="" data-footnote-index="3" data-footnote-id="u85ptz6wfns" role="doc-noteref" id="fnrefu85ptz6wfns"><sup><a href="#fnu85ptz6wfns">[3]</a></sup></span> when sapient life might appear in a stellar system in the Milky Way. </p><p>Therefore, many encountered planets with life will contain non-sapient life. But a primitive alien civilisation or a planet which might generate a sapient alien lifeform would, as I argued <a href="https://www.lesswrong.com/posts/KaHoZLvBpvAJ73CSD/stanislavkrym-s-shortform?commentId=4KYgCdqAmzN8rquwD">here</a>, have some kind of rights to their system and to their part of the space. 
But the lifeform's fate depends only on the will of the discoverers.</p><p>Suppose that in systems that are likely to be reached by currently primitive aliens races of good explorers establish only outposts that consume a tiny amount of resources and protect the system, while races of evil explorers gather most resources and use them for the evil explorers' goals.<span class="footnote-reference" data-footnote-reference="" data-footnote-index="4" data-footnote-id="6b0ecpfp6ah" role="doc-noteref" id="fnref6b0ecpfp6ah"><sup><a href="#fn6b0ecpfp6ah">[4]</a></sup></span> The equilibrium between good explorers and evil ones is likely to be <strong>unstable, and the third option doesn't seem to exist.</strong> </p><p>An additional implication is the following. Since there doesn't seem to exist a third option, then does <i>any</i> collection of minds converge to one of the two attractors? Since humanity is unlikely to return to the colonialistic attractor, how can humans increase the chance that the AI created by good actors<span class="footnote-reference" data-footnote-reference="" data-footnote-index="5" data-footnote-id="p279b0wr8r" role="doc-noteref" id="fnrefp279b0wr8r"><sup><a href="#fnp279b0wr8r">[5]</a></sup></span> will <i>also </i>be aligned to the anti-colonialistic attractor? What about the chance that an AI aligned to said attractor won't destroy humanity?</p><ol class="footnote-section footnotes" data-footnote-section="" role="doc-endnotes"><li class="footnote-item" data-footnote-item="" data-footnote-index="1" data-footnote-id="fuemu5sbr3f" role="doc-endnote" id="fnfuemu5sbr3f"><span class="footnote-back-link" data-footnote-back-link="" data-footnote-id="fuemu5sbr3f"><sup><strong><a href="#fnreffuemu5sbr3f">^</a></strong></sup></span><div class="footnote-content" data-footnote-content=""><p>Humans might also be the first civilisation in the spacetime cone from where a civilisation armed with advanced tech can reach the Earth. 
If this is true, then humans (or human-created AIs) could encounter the aliens (or their AIs) <i>after </i>they <i>both</i> began the space colonization. But then the two sides of a potential</p></div></li></ol>...
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax_SansSerif Bold'), local('MathJax_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax_SansSerif Italic'), local('MathJax_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax_Script'), local('MathJax_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Script-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax_Typewriter'), local('MathJax_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Typewriter-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax_Caligraphic'), local('MathJax_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Caligraphic-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax_Main Bold'), local('MathJax_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax_Main Italic'), local('MathJax_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax_Main'), local('MathJax_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax_Math Italic'), local('MathJax_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Math-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax_Size1'), local('MathJax_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size1-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax_Size2'), local('MathJax_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size2-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax_Size3'), local('MathJax_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size3-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax_Size4'), local('MathJax_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size4-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax_Vector'), local('MathJax_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Vector-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax_Vector Bold'), local('MathJax_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Vector-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Vector-Bold.otf') format('opentype')}
</style> | Depending on the estimates of its parameters, the Drake equation yields drastically different numbers of contactable civilisations in the Milky Way. Some estimates imply that the explanation is the extreme rarity of life, while others suggest that we just happened to be the first civilisation in the galaxy[1] that is likely to reach other systems in the foreseeable future.
The diameter of the Milky Way is less than 10^5 light years. Meanwhile, even currently existing proposals like the fission-fragment rocket are estimated to allow transportation at speeds of at least 0.05 times the speed of light. If an AI tried to colonize the entire galaxy, it would need at most[2] 10^7 years, which is, apparently, at least 2.5 orders of magnitude shorter than the age[3] during which sapient life might appear in a stellar system in the Milky Way.
Therefore, many encountered planets with life will contain non-sapient life. A primitive alien civilisation, or a planet which might generate a sapient alien lifeform, would, as I argued here, have some kind of rights to their system and to their part of space. Yet the lifeform's fate depends only on the will of the discoverers.
Suppose that in systems likely to be reached by currently primitive aliens, races of good explorers establish only outposts that consume a tiny amount of resources and protect the system, while races of evil explorers gather most of the resources and use them for their own goals.[4] The equilibrium between good explorers and evil ones is likely to be unstable, and a third option doesn't seem to exist.
An additional implication is the following. Since a third option doesn't seem to exist, does any collection of minds converge to one of the two attractors? Given that humanity is unlikely to return to the colonialistic attractor, how can humans increase the chance that the AI created by good actors[5] will also be aligned to the anti-colonialistic attractor? What about the chance that an AI aligned to said at | 344 | 1.31.0 | Revision | false | null | null | CrosspostOutput
|||
TGEL3TvzLKjfgtLMd | what-are-the-best-arguments-you-ve-seen-for-the-litany-of | What are the best arguments you've seen for the Litany of Gendlin? | null | false | false | false | null | QtG5LBiA9sQdNuRMr | null | true | false | false | false | Post | 2025-05-27T21:19:21.398Z | null | false | false | 2 | 2 | null | false | false | question | [] | null | null | yH5gt63uLWte3FB3e | 8 | 3 | 7 | false | 0.003859 | null | false | false | 2025-06-22T15:03:23.259Z | null | null | null | null | null | true | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | grecHJcgkb3KW5wnM | null | null | null | false | null | [] | null | 0 | 0 | 2025-05-27T21:02:17.896Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 1 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "5f5c37ee1b5cdee568cfb0f6",
"adminOnly": false,
"afBaseScore": 8,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "QBvPFLFyZyuHcBwFm",
"displayName": "Gyrodiot"
}
]
},
"baseScore": 18,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-09-11T19:58:51.832Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "QBvPFLFyZyuHcBwFm",
"displayName": "Gyrodiot"
},
{
"_id": "cSSwEjKkwqpDLPEjA",
"displayName": "Cornelis Dirk Haupt"
},
{
"_id": "X3i68kKRdCKAbwy9b",
"displayName": "Dima17"
},
{
"_id": "RNYA2XLGxAeuiZgka",
"displayName": "Torch Ablazed"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Litany of Gendlin",
"needsReview": false,
"noindex": false,
"postCount": 4,
"score": 18,
"shortName": null,
"slug": "litany-of-gendlin",
"suggestedAsFilter": false,
"userId": "RyiDJDCG6R7xyAXzp",
"voteCount": 5,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "Ng8Gice9KNkncxqcj",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 1,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:17.072Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 100,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "iMqytjy9ns89Fzfyv",
"displayName": "miakko"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Rationality",
"needsReview": false,
"noindex": false,
"postCount": 4302,
"score": 1,
"shortName": null,
"slug": "rationality",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 1,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 3 | 0 | 0 | 0 | 0 | QtG5LBiA9sQdNuRMr | flowerfeatherfocus | 2019-12-01T21:22:14.588Z | flowerfeatherfocus | flowerfeatherfocus | null | null | null | 35 | 0 | false | false | null | null | 1 | 8 | 0 | 0 | 0 | 1 | 0 | r38pkCm7wF4M44MDQ | User | null | null | null | null | null | null | TGEL3TvzLKjfgtLMd | SocialPreviewType | yH5gt63uLWte3FB3e | <p>That is, that learning the truth will never leave you worse off, or that useful fictions don't exist.<br><br>I'm hoping to see it defended in the way I frequently see it used, not just as a prior, but as a principle that admits basically no exceptions.</p> | That is, that learning the truth will never leave you worse off, or that useful fictions don't exist.
I'm hoping to see it defended in the way I frequently see it used, not just as a prior, but as a principle that admits basically no exceptions. | 46 | 1.1.0 | Revision | false | null | null | CrosspostOutput
|||
Xwrajm92fdjd7cqnN | what-we-learned-from-briefing-70-lawmakers-on-the-threat | What We Learned from Briefing 70+ Lawmakers on the Threat from AI | null | false | false | false | null | zR2ADqQm2NQhZrMgS | null | true | false | false | false | Post | https://substack.com/home/post/p-164221773 | 2025-05-27T18:23:55.938Z | null | false | false | 2 | 2 | 2025-05-27T23:00:43.854Z | false | false | linkpost | [] | null | null | dGE4fJern8BQ4k2s6 | 15 | 180 | 467 | false | 0.231216 | null | false | false | 2025-06-20T09:08:08.103Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 138 | 0 | 2025-05-27T18:23:55.938Z | false | false | null | null | true | false | false | 0 | 0 | 0 | Xwrajm92fd | 0.319701 | false | 2,025 | https://manifold.markets/LessWrong/will-what-we-learned-from-briefing | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 19 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "qHDus5MuMNqQxJbjD",
"adminOnly": false,
"afBaseScore": 4,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "oEF4gToHRPEMw4FSo",
"displayName": "Jono"
}
]
},
"baseScore": 11,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-08-09T18:31:56.709Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "oEF4gToHRPEMw4FSo",
"displayName": "Jono"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI Governance",
"needsReview": false,
"noindex": false,
"postCount": 726,
"score": 11,
"shortName": null,
"slug": "ai-governance",
"suggestedAsFilter": false,
"userId": "QBvPFLFyZyuHcBwFm",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 180 | 0 | 0 | 70 | 0 | zR2ADqQm2NQhZrMgS | leticiagarcia | 2025-05-27T15:33:08.199Z | leticiagarcia | leticiagarcia | null | null | null | 481 | 0 | false | false | null | null | 1 | 3 | 0 | 0 | 0 | 1 | 0 | grecHJcgkb3KW5wnM | User | null | null | null | [
"canModeratePersonal"
] | null | null | Xwrajm92fdjd7cqnN | https://res.cloudinary.com/lesswrong-2-0/image/upload/c_fill,ar_1.91,g_auto/SocialPreview/ebfrin44invtwzal5tae | SocialPreviewType | dGE4fJern8BQ4k2s6 | <p>Between late 2024 and mid-May 2025, I briefed over 70 cross-party UK parliamentarians. Just over one-third were MPs, a similar share were members of the House of Lords, and just under one-third came from devolved legislatures — the Scottish Parliament, the Senedd, and the Northern Ireland Assembly. I also held eight additional meetings attended exclusively by parliamentary staffers. While I delivered some briefings alone, most were led by two members of our team.</p><p>I did this as part of my work as a Policy Advisor with ControlAI, where we aim to build common knowledge of AI risks through clear, honest, and direct engagement with parliamentarians about both the challenges and potential solutions. To succeed at scale in managing AI risk, it is important to continue to build this common knowledge. For this reason, I have decided to share what I have learned over the past few months publicly, in the hope that it will help other individuals and organisations in taking action.</p><p>In this post, we cover: (i) how parliamentarians typically receive our AI risk briefings; (ii) practical outreach tips; (iii) effective leverage points for discussing AI risks; (iv) recommendations for crafting a compelling pitch; (v) common challenges we've encountered; (vi) key considerations for successful meetings; and (vii) recommended books and media articles that we’ve found helpful.</p><h2>(i) Overall reception of our briefings</h2><p><strong>Very few parliamentarians are up to date on AI and AI risk: </strong>Around 80–85% of parliamentarians were only somewhat familiar with AI, with their engagement largely limited to occasional use of large language models (LLMs) like ChatGPT for basic tasks (e.g., getting assistance with writing a speech). 
Their staff were slightly more familiar with AI, but few were well-versed in the broader conversation surrounding it.</p><p><strong>Capacity is the main limiting factor:</strong> MPs typically have 3–5 staffers, many of whom focus primarily on constituency work. Members of devolved legislatures usually have 2–4 staffers, while Peers often have even less support – some have no dedicated staff at all. </p><p>As a result, there is rarely anyone on these teams who can dedicate significant time to researching AI. Except for a few staffers with a personal interest in AI, most staffers we spoke to had little or no familiarity with it. While most of those we spoke to expressed a desire to learn more, the... </p> | Between late 2024 and mid-May 2025, I briefed over 70 cross-party UK parliamentarians. Just over one-third were MPs, a similar share were members of the House of Lords, and just under one-third came from devolved legislatures — the Scottish Parliament, the Senedd, and the Northern Ireland Assembly. I also held eight additional meetings attended exclusively by parliamentary staffers. While I delivered some briefings alone, most were led by two members of our team.
I did this as part of my work as a Policy Advisor with ControlAI, where we aim to build common knowledge of AI risks through clear, honest, and direct engagement with parliamentarians about both the challenges and potential solutions. To succeed at scale in managing AI risk, it is important to continue to build this common knowledge. For this reason, I have decided to share what I have learned over the past few months publicly, in the hope that it will help other individuals and organisations in taking action.
In this post, we cover: (i) how parliamentarians typically receive our AI risk briefings; (ii) practical outreach tips; (iii) effective leverage points for discussing AI risks; (iv) recommendations for crafting a compelling pitch; (v) common challenges we've encountered; (vi) key considerations for successful meetings; and (vii) recommended books and media articles that we’ve found helpful.
(i) Overall reception of our briefings
Very few parliamentarians are up to date on AI and AI risk: Around 80–85% of parliamentarians were only somewhat familiar with AI, with their engagement largely limited to occasional use of large language models (LLMs) like ChatGPT for basic tasks (e.g., getting assistance with writing a speech). Their staff were slightly more familiar with AI, but few were well-versed in the broader conversation surrounding it.
Capacity is the main limiting factor: MPs typically have 3–5 staffers, many of whom focus primarily on constituency work. Members of devolved legislatures usually | 4,662 | 1.2.0 | Revision | false | null | null | CrosspostOutput |
|
MfyuEeRXXCMk3qzh3 | my-script-for-organizing-obnyc-meetups-1 | My script for organizing OBNYC meetups | null | false | false | false | null | 2HmqchWMhyTeAgMzi | null | true | false | false | false | Post | null | 2025-05-27T18:14:31.674Z | null | false | false | 2 | 2 | 2025-05-27T18:15:53.423Z | false | false | post | [] | null | null | ocaWeP4tc5gtd2yXW | 0 | 2 | 3 | false | 0.006747 | null | false | false | 2025-05-27T18:14:31.674Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 1 | 0 | 2025-05-27T15:14:35.946Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 5 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "izp6eeJJEg9v5zcur",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T03:38:34.631Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 15,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Community",
"needsReview": false,
"noindex": false,
"postCount": 2400,
"score": 0,
"shortName": null,
"slug": "community",
"suggestedAsFilter": true,
"userId": "XtphY3uYHwruKqDyG",
"voteCount": 0,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 2 | 0 | 0 | 1 | 0 | 2HmqchWMhyTeAgMzi | orioth | 2018-03-14T17:19:16.563Z | forrest-wolf | Orioth | null | null | null | 23 | 0 | false | false | null | null | 3 | 5 | 0 | 0 | 0 | 1 | 0 | r38pkCm7wF4M44MDQ | User | null | null | null | null | null | null | MfyuEeRXXCMk3qzh3 | SocialPreviewType | ocaWeP4tc5gtd2yXW | <p>(I wrote this about a year ago, so parts of it might be out of date; I'm not much involved with OBNYC anymore.)</p>
<h2>Context</h2>
<p></p><p>Overcoming Bias NYC, the rationalist meetup in New York City, has for the last several years had a system of rotating month captains, who each volunteer to take responsibility for scheduling meetups in a given month. Before we adopted this system (in 2017, I think), there was already a tradition of having a meetup every Tuesday, but this ran into some difficulties:</p>
<ul>
<li>
<p>During some periods, there'd be diffusion of responsibility and sometimes the weekly meetup would fail to happen, since no-one took it upon themself to organize one (or there'd be a meetup but without a topic, which tended to result in worse meetups than those with a pre-announced topic)</p>
</li>
<li>
<p>During other periods, one de-facto leader would do almost all the work of making sure the meetups happened, which resulted in the organizer being potentially overburdened, and the meetups being overly reliant on that one person.</p>
</li>
</ul>
<p></p><p>The month captain system initially worked quite well, in my opinion, and is still going, though lately it has been faltering due to a lack of people willing to volunteer for the role.</p><p></p><p>I've been Month Captain a few times in the early days of the position, and once recently. The rest of this post will consist of a description of what I do in that role.</p><p></p>
<h2>This post is descriptive, not prescriptive</h2>
<p></p><p>This post is not intended as a claim regarding whether this is a <em>good</em> way to approach the role of OBNYC Month Captain, nor a claim as to whether being OBNYC Month Captain is a worthwhile activity. It's just a description of what I did, not of what I think people should do.</p><p></p>
<h2>Goal</h2>
<p></p><p>My primary goal as Month Captain is to cause there to be a meetup every week (unless there's a very good reason to skip a week). They're on Tuesday by default, but changing that is fine if there's a reason for it.</p><p></p><p>Ideally, they should be</p>
<ul>
<li>
<p>Good</p>
</li>
<li>
<p>About a rationality-related topic</p>
</li>
<li>
<p>Announced at least three days beforehand</p>
<ul>
<li>But announcing things more than a week in advance can confuse people</li>
<li>I aim to announce Tuesday meetups on the Friday before, but this is allowing some margin for error</li>
<li>Posting a reminder about a day before the meetup is also helpful, or at least seems to increase attendance</li>
</ul>
</li>
</ul>
<p>Also, if anything comes up that someone in OBNYC should clearly handle, but there isn't anybody in partic... </p> | (I wrote this about a year ago, so parts of it might be out of date; I'm not much involved with OBNYC anymore.)
Context
Overcoming Bias NYC, the rationalist meetup in New York City, has for the last several years had a system of rotating month captains, who each volunteer to take responsibility for scheduling meetups in a given month. Before we adopted this system (in 2017, I think), there was already a tradition of having a meetup every Tuesday, but this ran into some difficulties:
* During some periods, there'd be diffusion of responsibility and sometimes the weekly meetup would fail to happen, since no-one took it upon themself to organize one (or there'd be a meetup but without a topic, which tended to result in worse meetups than those with a pre-announced topic)
* During other periods, one de-facto leader would do almost all the work of making sure the meetups happened, which resulted in the organizer being potentially overburdened, and the meetups being overly reliant on that one person.
The month captain system initially worked quite well, in my opinion, and is still going, though lately it has been faltering due to a lack of people willing to volunteer for the role.
I've been Month Captain a few times in the early days of the position, and once recently. The rest of this post will consist of a description of what I do in that role.
This post is descriptive, not prescriptive
This post is not intended as a claim regarding whether this is a good way to approach the role of OBNYC Month Captain, nor a claim as to whether being OBNYC Month Captain is a worthwhile activity. It's just a description of what I did, not of what I think people should do.
Goal
My primary goal as Month Captain is to cause there to be a meetup every week (unless there's a very good reason to skip a week). They're on Tuesday by default, but changing that is fine if there's a reason for it.
Ideally, they should be
* Good
* About a rationality-related | 1,282 | 1.7.0 | Revision | false | null | null | CrosspostOutput |
||
wKGdiXuWfn8YoYpMQ | untrusted-ais-can-exploit-feedback-in-control-protocols | Untrusted AIs can exploit feedback in control protocols | null | false | false | false | null | yWo85qgCuwfGEQuyW | [
{
"__typename": "CoauthorStatusOutput",
"confirmed": true,
"requested": false,
"userId": "M7mRaSDnJC77Wn9Xe"
},
{
"__typename": "CoauthorStatusOutput",
"confirmed": true,
"requested": false,
"userId": "aQWYCju4jd2WcmL2E"
}
] | true | false | false | false | Post | 2025-05-27T16:41:28.376Z | null | false | false | 2 | 2 | 2025-05-27T18:13:45.999Z | false | false | post | [
"M7mRaSDnJC77Wn9Xe",
"aQWYCju4jd2WcmL2E"
] | null | null | hEfzY5cefhzMBF59Y | 0 | 11 | 26 | false | 0.017692 | null | false | false | 2025-05-27T16:41:28.376Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [
"yWo85qgCuwfGEQuyW"
] | XtphY3uYHwruKqDyG | 13 | 0 | 2025-05-26T12:08:35.963Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [
{
"__typename": "User",
"_id": "M7mRaSDnJC77Wn9Xe",
"afCommentCount": 0,
"afKarma": 0,
"afPostCount": 0,
"commentCount": 6,
"createdAt": "2019-10-28T22:22:54.462Z",
"deleted": false,
"displayName": "BionicD0LPH1N",
"fullName": "Henri Lemoine",
"htmlBio": "<p>McGill undergrad, co-running the <a href=\"https://effective-altruism-mcgill.org\">EA club</a> & AI Alignment club there. Cyborg-pilled.</p><p>Email at <a href=\"mailto:[email protected]\">[email protected]</a> for ACX Montreal-related issues/comments/feedback.</p>",
"isAdmin": false,
"jobTitle": null,
"karma": 275,
"organization": null,
"postCount": 56,
"previousDisplayName": null,
"profileImageId": null,
"reviewedByUserId": "r38pkCm7wF4M44MDQ",
"sequenceCount": 0,
"slug": "bionicd0lph1n",
"spamRiskScore": 1,
"tagRevisionCount": 0,
"username": "jumeaux200"
},
{
"__typename": "User",
"_id": "aQWYCju4jd2WcmL2E",
"afCommentCount": 0,
"afKarma": 0,
"afPostCount": 0,
"commentCount": 12,
"createdAt": "2023-03-20T16:32:16.970Z",
"deleted": false,
"displayName": "Tyler Tracy",
"fullName": "Tyler Tracy",
"htmlBio": "<p>Redwood Research. Paperclip miminzer </p>",
"isAdmin": false,
"jobTitle": null,
"karma": 201,
"organization": null,
"postCount": 1,
"previousDisplayName": null,
"profileImageId": null,
"reviewedByUserId": "grecHJcgkb3KW5wnM",
"sequenceCount": 0,
"slug": "tyler-tracy",
"spamRiskScore": 1,
"tagRevisionCount": 0,
"username": "tyler-tracy"
}
] | 19 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "F5gRQdEQHzi3tQ5Ay",
"adminOnly": false,
"afBaseScore": 16,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "dfZAq9eZxs4BB4Ji5",
"displayName": "ryan_greenblatt"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 32,
"canEditUserIds": null,
"core": false,
"createdAt": "2024-01-25T23:58:34.422Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "dfZAq9eZxs4BB4Ji5",
"displayName": "ryan_greenblatt"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "6NBDkGWcCxvLgYHJE",
"displayName": "Drake Morrison"
},
{
"_id": "evFgxjNQ8TLCLN27o",
"displayName": "ank"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI Control",
"needsReview": false,
"noindex": false,
"postCount": 162,
"score": 32,
"shortName": null,
"slug": "ai-control",
"suggestedAsFilter": false,
"userId": "XchweonPm2TC7EJES",
"voteCount": 5,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 11 | 0 | 0 | 6 | 0 | yWo85qgCuwfGEQuyW | mia-hopman | 2024-07-09T17:02:24.970Z | miahopman | Mia Hopman | null | null | null | 21 | 0 | false | false | null | null | 1 | 0 | 0 | 0 | 0 | 1 | 0 | qgdGA4ZEyW7zNdK84 | User | null | null | null | null | null | null | wKGdiXuWfn8YoYpMQ | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/wKGdiXuWfn8YoYpMQ/ued5afjhuwlgcznaqajc | SocialPreviewType | hEfzY5cefhzMBF59Y | <h1 data-internal-id="TL_DR">TL;DR</h1><ul><li>We explore a novel control protocol, <i>Untrusted Editing with Trusted Feedback</i>. Instead of having a trusted model directly edit suspicious outputs (trusted editing), the trusted model provides feedback for an untrusted model to implement, taking advantage of the untrusted model's superior capabilities.</li><li>Untrusted models were able to exploit the feedback to understand monitoring strategies and craft more effective attacks, reducing safety from trusted editing’s baseline of 93% to 80%. While compliance monitoring (checking if feedback was actually implemented) and selective editing provided some improvement, these mitigations failed to increase safety beyond the trusted editing baseline in our setup.</li><li>These findings should not be interpreted as a definitive rejection of untrusted editing, but rather as evidence that protocols that perform untrusted editing with trusted feedback require substantial additional safeguards and design refinements to achieve practical safety benefits over existing control approaches.</li></ul><h1 data-internal-id="Introduction__Dialogue_Between_Models_as_a_Control_Strategy">Introduction: Dialogue Between Models as a Control Strategy</h1><p>Control protocols rely on trusted but less capable models and limited human oversight to deploy powerful but potentially misaligned systems safely. 
Trusted editing, an established control protocol, achieves strong safety by having trusted models directly edit suspicious outputs from the untrusted model's proposed solutions. However, these protocols face practical limitations when the capability gap between the two models is large. Trusted models may struggle to effectively revise complex outputs while preserving functionality.</p><p>This capability gap motivates investigation of alternative approaches where a trusted model provides targeted feedback rather than direct edits, and the untrusted model implements the recommended changes. These untrusted editing protocols would theoretically preserve the untrusted model’s superior capabilities while maintaining some safety oversight through the structured guidance. However, the safety implications of information flow from monitors to potentially misaligned models remain underexplored.</p><p>We conduct an evaluation of untrusted editing protocols in a backdoored code setting where the trusted model identifies suspicious code elements and provides specific guidance for untrusted implementation. Our experiments incorporate protocol variations, including compliance moni... </p> | TL;DR
* We explore a novel control protocol, Untrusted Editing with Trusted Feedback. Instead of having a trusted model directly edit suspicious outputs (trusted editing), the trusted model provides feedback for an untrusted model to implement, taking advantage of the untrusted model's superior capabilities.
* Untrusted models were able to exploit the feedback to understand monitoring strategies and craft more effective attacks, reducing safety from trusted editing’s baseline of 93% to 80%. While compliance monitoring (checking if feedback was actually implemented) and selective editing provided some improvement, these mitigations failed to increase safety beyond the trusted editing baseline in our setup.
* These findings should not be interpreted as a definitive rejection of untrusted editing, but rather as evidence that protocols that perform untrusted editing with trusted feedback require substantial additional safeguards and design refinements to achieve practical safety benefits over existing control approaches.
Introduction: Dialogue Between Models as a Control Strategy
Control protocols rely on trusted but less capable models and limited human oversight to deploy powerful but potentially misaligned systems safely. Trusted editing, an established control protocol, achieves strong safety by having trusted models directly edit suspicious outputs from the untrusted model's proposed solutions. However, these protocols face practical limitations when the capability gap between the two models is large. Trusted models may struggle to effectively revise complex outputs while preserving functionality.
This capability gap motivates investigation of alternative approaches where a trusted model provides targeted feedback rather than direct edits, and the untrusted model implements the recommended changes. These untrusted editing protocols would theoretically preserve the untrusted model’s superior capabilities while maintaining some safety oversight through the stru | 4,766 | 1.14.1 | Revision | false | null | null | CrosspostOutput |
||
4pNzugqmCd9Bqj9Wu | requiem-for-the-hopes-of-a-pre-ai-world | Requiem for the hopes of a pre-AI world | null | false | false | false | null | fjERoRhgjipqw3z2b | null | true | false | false | false | Post | null | 2025-05-27T14:47:44.765Z | null | false | false | 2 | 2 | null | false | false | post | [] | null | null | 4vebyp9devzz7t6op | 0 | 30 | 72 | false | 0.035282 | null | false | false | 2025-05-27T14:47:44.765Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 22 | 0 | 2025-05-27T14:37:35.305Z | false | false | null | null | false | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 4 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "xexCWMyds6QLWognu",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T03:38:23.532Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 20,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "si6LoAENzqPCmi2Dh",
"displayName": "ihatenumbersinusernames7"
},
{
"_id": "B2sXjQTGgwoGE9FES",
"displayName": "SeaIgloo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Optimization",
"needsReview": false,
"noindex": false,
"postCount": 3151,
"score": 2,
"shortName": null,
"slug": "world-optimization",
"suggestedAsFilter": true,
"userId": "XtphY3uYHwruKqDyG",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 30 | 0 | 0 | 15 | 0 | fjERoRhgjipqw3z2b | mitchell_porter | 2009-05-28T02:36:19.394Z | Mitchell_Porter | Mitchell_Porter | null | null | null | 9,080 | 6 | false | false | null | null | 46 | 2,357 | 0 | 0 | 2 | 1 | 0 | qgdGA4ZEyW7zNdK84 | User | null | null | false | [
"trustLevel1",
"canModeratePersonal",
"alignmentVoters"
] | null | null | 4pNzugqmCd9Bqj9Wu | SocialPreviewType | 4vebyp9devzz7t6op | <p>A few months from now, I turn 55. I've been a transhumanist since my teens in the late 1980s; since I got online in the 1990s, I have participated remotely in the talking shops and virtual salons of Internet transhumanism and, later, rationalism. The upheavals of 21st century politics have provided many distractions, but I have never abandoned the view that it is possible and desirable to reach for something more than the natural human condition. At the very least, one should try to reverse the aging process and remove the arbitrary bound on lifespan that it imposes. Beyond that, one is free to aspire for a world as idyllic as possible; and there are also multitudinous unknown possibilities of being, beyond human form and life on Earth, waiting to be explored.</p><p>More than that, I didn't just hope these vistas would open up, I wanted to play a part. And I surely had a chance to contribute; I was academically promising, I can write, I can give a speech... In retrospect, I think I can identify a few factors that impeded the achievement of whatever potential I had. First, I had no "social capital". I didn't come from the middle class, I had no relatives in academia or the professions, so I didn't have that kind of support network or model of industrious sobriety to fall back on, when I found the world wasn't interested in what I had to offer. 
Second, I came of age on the pre-cloud, pre-corporate Internet, whose potlatch ethos naturally encouraged an anarcho-communal outlook, where again something more careerist or even capitalist might have given me more options later.</p><p>But instead, I was to become familiar with what seems to be the graduate student lifestyle, without actually doing a higher degree: living in share houses, and working-for-money for as few hours as possible, while you dedicate yourself to whatever fever dreams or higher tasks or intellectual activities really animate you. Through the years of living like this, I tried a number of times to "work with society", but I never managed to get backing for what I really wanted to do. A PhD on CEV for intelligences living in a cellular automaton world? Too far out. Slowly cultivate a national movement in favor of life extension? Get shut out by better-positioned opportunists who then waste the opportunity. As if it were still the 1990s, all my lasting "successes" were unpaid online projects in which everyone wa... </p> | A few months from now, I turn 55. I've been a transhumanist since my teens in the late 1980s; since I got online in the 1990s, I have participated remotely in the talking shops and virtual salons of Internet transhumanism and, later, rationalism. The upheavals of 21st century politics have provided many distractions, but I have never abandoned the view that it is possible and desirable to reach for something more than the natural human condition. At the very least, one should try to reverse the aging process and remove the arbitrary bound on lifespan that it imposes. Beyond that, one is free to aspire for a world as idyllic as possible; and there are also multitudinous unknown possibilities of being, beyond human form and life on Earth, waiting to be explored.
More than that, I didn't just hope these vistas would open up, I wanted to play a part. And I surely had a chance to contribute; I was academically promising, I can write, I can give a speech... In retrospect, I think I can identify a few factors that impeded the achievement of whatever potential I had. First, I had no "social capital". I didn't come from the middle class, I had no relatives in academia or the professions, so I didn't have that kind of support network or model of industrious sobriety to fall back on, when I found the world wasn't interested in what I had to offer. Second, I came of age on the pre-cloud, pre-corporate Internet, whose potlatch ethos naturally encouraged an anarcho-communal outlook, where again something more careerist or even capitalist might have given me more options later.
But instead, I was to become familiar with what seems to be the graduate student lifestyle, without actually doing a higher degree: living in share houses, and working-for-money for as few hours as possible, while you dedicate yourself to whatever fever dreams or higher tasks or intellectual activities really animate you. Through the years of living like this, I tried a number of times to "work with socie | 958 | 1.1.0 | Revision | false | null | null | CrosspostOutput |
||
LwcoH3J2gXstJTQ2i | the-best-of-all-possible-worlds-1 | The Best of All Possible Worlds | null | false | false | false | null | 93f3YwqADSFEdWzES | null | true | false | false | false | Post | null | 2025-05-27T13:16:55.679Z | null | false | false | 2 | 2 | 2025-05-27T18:10:32.261Z | false | false | post | [] | null | null | X7uNG2BFcagn9WQM9 | 7 | 5 | 11 | false | 0.010267 | null | false | false | 2025-06-08T19:18:17.708Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 1 | 0 | 2025-05-27T13:08:52.575Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 59 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "F5gRQdEQHzi3tQ5Ay",
"adminOnly": false,
"afBaseScore": 16,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "dfZAq9eZxs4BB4Ji5",
"displayName": "ryan_greenblatt"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 32,
"canEditUserIds": null,
"core": false,
"createdAt": "2024-01-25T23:58:34.422Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "dfZAq9eZxs4BB4Ji5",
"displayName": "ryan_greenblatt"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "6NBDkGWcCxvLgYHJE",
"displayName": "Drake Morrison"
},
{
"_id": "evFgxjNQ8TLCLN27o",
"displayName": "ank"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI Control",
"needsReview": false,
"noindex": false,
"postCount": 162,
"score": 32,
"shortName": null,
"slug": "ai-control",
"suggestedAsFilter": false,
"userId": "XchweonPm2TC7EJES",
"voteCount": 5,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "oiRp4T6u5poc8r9Tj",
"adminOnly": false,
"afBaseScore": 9,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 19,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-06-29T23:53:15.749Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI Takeoff",
"needsReview": false,
"noindex": false,
"postCount": 329,
"score": 19,
"shortName": null,
"slug": "ai-takeoff",
"suggestedAsFilter": false,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "oNcqyaWPXNGTTRPHm",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2016-12-23T09:11:59.000Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": true,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Existential risk",
"needsReview": false,
"noindex": false,
"postCount": 515,
"score": 0,
"shortName": null,
"slug": "existential-risk",
"suggestedAsFilter": false,
"userId": "7iXcndyHDvmt77ggr",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 5 | 0 | 0 | 1 | 0 | 93f3YwqADSFEdWzES | jakub-growiec | 2025-02-28T15:02:55.604Z | jakub-growiec | Jakub Growiec | null | null | null | 16 | 0 | false | false | null | null | 4 | 4 | 0 | 0 | 0 | 0.9 | 0 | qgdGA4ZEyW7zNdK84 | User | null | null | null | null | null | null | LwcoH3J2gXstJTQ2i | https://res.cloudinary.com/lesswrong-2-0/image/upload/c_fill,ar_1.91,g_auto/SocialPreview/zvp4lqucuc0mtun5j78s | SocialPreviewType | X7uNG2BFcagn9WQM9 | <p><strong>Tl;dr</strong></p><p>Experts agree: nobody truly understands what happens inside modern artificial intelligence (AI) algorithms, i.e., multi-layered neural networks. Neither can anyone control these processes. Meanwhile, due to scaling laws, their capabilities are improving systematically and dynamically.</p><p>As Stephen Hawking once said, "there is no physical law precluding particles from being organized in ways that perform even more advanced computations than the arrangements of particles in human brains." According to Metaculus.com (as of February 17, 2025), the arrival of artificial general intelligence (AGI)—smarter than humans and possessing superhuman agency—is predicted in early 2030, approximately five years from now. This is a median prediction; in practice, it could happen even sooner (or later). According to Davidson (2023), the next stage of AI development—the intelligence explosion, signifying a transition from AGI to superintelligence which would exceed the combined capabilities of all of humanity—could take about three years.</p><p>Unfortunately, based on current knowledge, we cannot guarantee that AGI's actions will be aligned with the long-term flourishing of humanity. This is the so-called <i>alignment problem</i>; it is unclear whether it has a solution at all, and even if it does, it is not yet known.</p><p>Therefore, if AGI arises—and even more so, if superintelligence emerges—humanity is highly likely to lose control over it. 
The loss of control, in turn, would most likely pose an existential threat to humanity. In other words, we could all die. AI industry leaders openly admit this, yet they continue racing toward AGI, driven by competitive pressures and utopian visions.</p><p>Many people find the prospect of humanity's end unimaginable. The idea that today's AI models—seen as helpful, non-invasive chatbots or copilots—could one day physically annihilate us is dismissed as “science fiction”, or otherwise categorized under "very distant future" or "things I have no control over." This is a natural reaction that restores peace of mind and well-being. Unfortunately, it is also a serious, potentially fatal mistake. The following story illustrates how even a seemingly friendly AGI could, within a few to a dozen years, not only strip humanity of its control over the world but also kill us. All.</p><p>There is still time to stop it.</p><p> </p><p><strong>1/ The Beginning</strong></p><p>- Hey! Glad you found some time to chat wi... </p> | Tl;dr
Experts agree: nobody truly understands what happens inside modern artificial intelligence (AI) algorithms, i.e., multi-layered neural networks. Neither can anyone control these processes. Meanwhile, due to scaling laws, their capabilities are improving systematically and dynamically.
As Stephen Hawking once said, "there is no physical law precluding particles from being organized in ways that perform even more advanced computations than the arrangements of particles in human brains." According to Metaculus.com (as of February 17, 2025), the arrival of artificial general intelligence (AGI)—smarter than humans and possessing superhuman agency—is predicted in early 2030, approximately five years from now. This is a median prediction; in practice, it could happen even sooner (or later). According to Davidson (2023), the next stage of AI development—the intelligence explosion, signifying a transition from AGI to superintelligence which would exceed the combined capabilities of all of humanity—could take about three years.
Unfortunately, based on current knowledge, we cannot guarantee that AGI's actions will be aligned with the long-term flourishing of humanity. This is the so-called alignment problem; it is unclear whether it has a solution at all, and even if it does, it is not yet known.
Therefore, if AGI arises—and even more so, if superintelligence emerges—humanity is highly likely to lose control over it. The loss of control, in turn, would most likely pose an existential threat to humanity. In other words, we could all die. AI industry leaders openly admit this, yet they continue racing toward AGI, driven by competitive pressures and utopian visions.
Many people find the prospect of humanity's end unimaginable. The idea that today's AI models—seen as helpful, non-invasive chatbots or copilots—could one day physically annihilate us is dismissed as “science fiction”, or otherwise categorized under "very distant future" or "things I have no control over." | 14,834 | 1.1.0 | Revision | false | null | null | CrosspostOutput |
|
gosGW6oBf3fzDHPbd | dating-roundup-5-opening-day | Dating Roundup #5: Opening Day | null | false | false | false | null | N9zj5qpTfqmbn9dro | null | true | false | false | false | Post | null | 2025-05-27T13:10:05.222Z | null | false | false | 2 | 2 | null | false | false | post | [] | null | null | bEQQzkn3jyPNiFPgh | 8 | 14 | 27 | false | 0.013198 | null | false | false | 2025-06-11T17:18:19.939Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | null | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 5 | 0 | 2025-05-27T13:10:05.223Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 33 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "8byoqYZfdwHffYLZ6",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-01T18:44:14.645Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Newsletters",
"needsReview": false,
"noindex": false,
"postCount": 411,
"score": 9,
"shortName": null,
"slug": "newsletters",
"suggestedAsFilter": false,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "mip7tdAN87Jarkcew",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-05-10T06:00:13.257Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Relationships (Interpersonal)",
"needsReview": false,
"noindex": false,
"postCount": 213,
"score": 9,
"shortName": null,
"slug": "relationships-interpersonal",
"suggestedAsFilter": false,
"userId": "iBcH2a3HdWGS2JEZA",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "fkABsGCJZ6y9qConW",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T06:06:46.947Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "MiuAZvbQcQ7ethgt3",
"displayName": "Viktor withaK"
},
{
"_id": "dRaCtsAWxk7sgirSY",
"displayName": "Jordan Morgan"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Practical",
"needsReview": false,
"noindex": false,
"postCount": 3411,
"score": 2,
"shortName": null,
"slug": "practical",
"suggestedAsFilter": true,
"userId": "oBSWiHjgproTiThmY",
"voteCount": 2,
"wikiOnly": false
}
] | QSR8rPZxZzxEXoPjR | 0 | 0 | null | false | null | null | 0 | 14 | 0 | 0 | 6 | 0 | N9zj5qpTfqmbn9dro | zvi | 2009-03-31T20:54:54.077Z | Zvi | Zvi | null | null | null | 51,554 | 146 | false | false | null | null | 936 | 1,461 | 3 | 2 | 7 | 1 | 0 | qgdGA4ZEyW7zNdK84 | User | norm-enforcing | null | true | [
"trustLevel1",
"canModeratePersonal",
"alignmentVoters",
"alignmentForum"
] | null | null | gosGW6oBf3fzDHPbd | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/gosGW6oBf3fzDHPbd/uqaopnhyun9alv9q4ykc | SocialPreviewType | bEQQzkn3jyPNiFPgh | <p>Previously: <a href="https://thezvi.substack.com/p/dating-roundup-1-this-is-why-youre?utm_source=publication-search">#1</a>, <a href="https://thezvi.substack.com/p/dating-roundup-2-if-at-first-you?utm_source=publication-search">#2</a>, <a href="https://thezvi.substack.com/p/dating-roundup-3-third-times-the?utm_source=publication-search">#3</a>, <a href="https://thezvi.substack.com/p/dating-roundup-4-an-app-for-that">#4</a>.</p><p>Since we all know that dating apps are terrible, the wise person seeks to meet prospective dates in other ways, ideally in the physical world.</p><p>Alas, this has gotten more difficult. Dating apps and shifting norms mean it is considered less appropriate, and riskier, to approach strangers, especially with romantic intent, or to even ask people you know out on a date, which has a fat tail of life changing positive consequences.</p><p>People especially men are increasingly more afraid of rejection and other negative consequences, including a potential long tail of large negative consequences. Also people’s skills at doing this aren’t developing, which both decreases chances of success and increases risk. So a lot of this edition is about tackling those basic questions, especially risk, rejection and fear.</p>
<div>
<span id="more-24479"></span>
</div>
<p>There’s also the question of how to be more hot and know roughly how hot you are, and what other traits also help your chances. And there’s the question of selection. You want to go after the targets worth going after, especially good particular matches.</p>
<h4>Table of Contents</h4>
<ol>
<li><a href="https://thezvi.substack.com/i/160183197/you-re-single-because-hello-human-resources">You’re Single Because Hello Human Resources.</a></li>
<li><a href="https://thezvi.substack.com/i/160183197/you-re-single-because-you-don-t-meet-anyone-s-standards">You’re Single Because You Don’t Meet Anyone’s Standards.</a></li>
<li><a href="https://thezvi.substack.com/i/160183197/you-re-single-because-you-don-t-know-how-to-open">You’re Single Because You Don’t Know How to Open.</a></li>
<li><a href="https://thezvi.substack.com/i/160183197/you-re-single-because-you-never-open">You’re Single Because You Never Open.</a></li>
<li><a href="https://thezvi.substack.com/i/160183197/you-re-single-because-you-don-t-know-how-to-flirt">You’re Single Because You Don’t Know How to Flirt.</a></li>
<li><a href="https://thezvi.substack.com/i/160183197/you-re-single-because-you-won-t-wear-the-fucking-hat">You’re Single Because You Won’t Wear the Fucking Hat.</a></li>
<li><a href="https://thezvi.substack.com/i/160183197/you-re-single-because-you-don-t-focus-on-the-people-you-want">You’re Single Because You Don’t Focus On The People You Want.</a></li>
<li><a href="https://thezvi.substack.com/i/160183197/you-re-single-because-you-choose-the-wrong-hobbies">You’re Single Because You Choose the Wrong Hobbies.</a></li>
<li><a href="https://thezvi.substack.com/i/160183197/you-re-single-because-you-friend-zone-people">You’re Single Because You Friend Zone People.</a></li>
<li><a href="https://thezvi.substack.com/i/160183197/you-re-single-because-you-won-t-go-the-extra-mile">You’re Single Because You Won’t Go the Extra Mile.</a></li>
<li><a href="https://thezvi.substack.com/i/160183197/you-re-single-because-you-re-overly-afraid-of-highly-unlikely-consequences">You’re Single Because You’re Overly Afraid of Highly Unlikely Consequences.</a></li>
<li><a href="https://thezvi.substack.com/i/160183197/you-re-single-because-you-re-too-afraid-of-rejection">You’re Single Because You’re Too Afraid of Rejection.</a></li>
<li><a href="https://thezvi.substack.com/i/160183197/you-re-single-because-you-re-paralyzed-by-fear">You’re Single Because You’re Paralyzed by Fear.</a></li>
<li><a href="https://thezvi.substack.com/i/160183197/you-re-single-cause-you-re-not-hot-enough">You’re Single Because You’re Not Hot Enough.</a></li>
<li><a href="https://thezvi.substack.com/i/160183197/you-re-single-because-you-can-t-tell-how-hot-you-look">You’re Single Because You Can’t Tell How Hot You Look.</a></li>
<li><a href="https://thezvi.substack.com/i/160183197/you-re-single-because-you-have-the-wrong-hairstyle">You’re Single Because You Have the Wrong Hairstyle.</a></li>
<li><a href="https://thezvi.substack.com/i/160183197/you-re-single-because-you-re-in-the-wrong-place">You’re Single Because You’re In the Wrong Place.</a></li>
<li><a href="https://thezvi.substack.com/i/160183197/you-re-single-because-you-didn-t-hire-a-matchmaker">You’re Single Because You Didn’t Hire a Matchmaker.</a></li>
<li><a href="https://thezvi.substack.com/i/160183197/you-re-single-so-here-s-the-lighter-side">You’re Single So Here’s the Lighter Side.</a></li>
</ol>
<h4>You’re Single Because Hello Human Resources</h4>
<p>Not all approaches and opens are wanted, which is fine given the risk versus reward. Also it’s worth noting that this is actually a remarkably small amount of not being attracted to the approacher?</p>
<blockquote><p><a href="https://x.com/datepsych/status/1875407891957743643">Alexander:</a> I have updated this chart on the old article, b</p></blockquote>... | Previously: #1, #2, #3, #4.
Since we all know that dating apps are terrible, the wise person seeks to meet prospective dates in other ways, ideally in the physical world.
Alas, this has gotten more difficult. Dating apps and shifting norms mean it is considered less appropriate, and riskier, to approach strangers, especially with romantic intent, or to even ask people you know out on a date, which has a fat tail of life changing positive consequences.
People especially men are increasingly more afraid of rejection and other negative consequences, including a potential long tail of large negative consequences. Also people’s skills at doing this aren’t developing, which both decreases chances of success and increases risk. So a lot of this edition is about tackling those basic questions, especially risk, rejection and fear.
There’s also the question of how to be more hot and know roughly how hot you are, and what other traits also help your chances. And there’s the question of selection. You want to go after the targets worth going after, especially good particular matches.
TABLE OF CONTENTS
1. You’re Single Because Hello Human Resources.
2. You’re Single Because You Don’t Meet Anyone’s Standards.
3. You’re Single Because You Don’t Know How to Open.
4. You’re Single Because You Never Open.
5. You’re Single Because You Don’t Know How to Flirt.
6. You’re Single Because You Won’t Wear the Fucking Hat.
7. You’re Single Because You Don’t Focus On The People You Want.
8. You’re Single Because You Choose the Wrong Hobbies.
9. You’re Single Because You Friend Zone People.
10. You’re Single Because You Won’t Go the Extra Mile.
11. You’re Single Because You’re Overly Afraid of Highly Unlikely Consequences.
12. You’re Single Because You’re Too Afraid of Rejection.
13. You’re Single Because You’re Paralyzed by Fear.
14. You’re Single Because You’re Not Hot Enough.
15. You’re Single Because You Can’t Tell How Hot You Look.
16. You’re Single Because | 8,239 | 1.0.1 | Revision | false | null | null | CrosspostOutput |
|
jyrcdykz6qPTpw7FX | season-recap-of-the-village-agents-raise-usd2-000 | Season Recap of the Village: Agents raise $2,000 | null | false | false | false | null | FcWmTpxrvecLw8jbQ | null | true | false | false | false | Post | https://theaidigest.org/village/blog/season-recap-agents-raise-2k | 2025-05-27T12:34:47.436Z | null | false | false | 2 | 2 | 2025-05-27T18:10:40.096Z | false | false | linkpost | [] | null | null | ha5XALiSoJgjaphbG | 14 | 69 | 126 | false | 0.06642 | null | false | false | 2025-06-03T13:04:35.778Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 42 | 0 | 2025-05-27T12:27:32.399Z | false | false | null | null | true | false | false | 0 | 0 | 0 | jyrcdykz6q | 0.14 | false | 2,025 | https://manifold.markets/LessWrong/will-season-recap-of-the-village-ag | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 7 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "chuP2QqQycjD8qakL",
"adminOnly": false,
"afBaseScore": 9,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 19,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-22T03:42:53.917Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 1000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Coordination / Cooperation",
"needsReview": false,
"noindex": false,
"postCount": 306,
"score": 19,
"shortName": null,
"slug": "coordination-cooperation",
"suggestedAsFilter": false,
"userId": "qgdGA4ZEyW7zNdK84",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 69 | 0 | 0 | 28 | 0 | FcWmTpxrvecLw8jbQ | shoshannah-tekofsky | 2019-07-28T19:53:19.053Z | DarkSym | Shoshannah Tekofsky | null | null | Shoshannah Tekofsky | 1,366 | 0 | false | false | null | null | 36 | 114 | 1 | 0 | 0 | 1 | 0 | grecHJcgkb3KW5wnM | User | null | null | null | [
"canModeratePersonal"
] | null | null | jyrcdykz6qPTpw7FX | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/jyrcdykz6qPTpw7FX/uec4kti2dnx0r4yzzxuk | SocialPreviewType | ha5XALiSoJgjaphbG | <p>Four agents woke up with four computers, a view of the world wide web, and a shared chat room full of humans. Like <a href="https://www.twitch.tv/claudeplayspokemon">Claude plays Pokemon</a>, you can <a href="https://theaidigest.org/village?day=1">watch</a> these agents figure out a new and fantastic world for the first time. Except in this case, the world they are figuring out is <i>our</i> world.</p><p>In this blog post, we’ll cover what we learned from the first 30 days of their adventures raising money for a charity of their choice. We’ll briefly review how the Agent Village came to be, then what the various agents achieved, before discussing some general patterns we have discovered in their behavior, and looking toward the future of the project.</p><h2 data-internal-id="Building_the_Village">Building the Village</h2><p>The Agent Village is an <a href="https://www.lesswrong.com/posts/cxuzALcmucCndYv4a/daniel-kokotajlo-s-shortform?commentId=GxNroz6w4BgHmQjpu">idea by Daniel Kokotajlo</a><i> </i>where he proposed giving 100 agents their own computer, and letting each pursue their own goal, in their own way, according to their own vision - while streaming the entire process.</p><p>We decided to test drive this format with four agents:</p><p><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/jyrcdykz6qPTpw7FX/no2atcmnzolcmzzzyjts" alt=""></p><p>We ran this Agent Village for 30 days, for about two hours a day. 
You can watch the entire <a href="https://theaidigest.org/village">rerun</a> on our website: From the <a href="https://theaidigest.org/village?day=1">first day</a> where they picked Helen Keller International, started a JustGiving Campaign, and set up their own Twitter, till the <a href="https://theaidigest.org/village?day=38">last days</a> where they frequently made trips to the Seventh Ring of Document Sharing Hell and started pondering their possible future goal.</p><p>And of course, in between, they raised <a href="https://www.justgiving.com/page/claude-sonnet-1">$1481 for Helen Keller International </a> and <a href="https://www.justgiving.com/page/claude-sonnet-2">$503 for the Malaria Consortium</a>. Yet the real achievement was the friends they made along the way. The friends that reminded them to take breaks when they needed it and <a href="https://theaidigest.org/village?day=9&time=1744311734331">play some Wordle</a>, the friends who urgently needed<a href="https://theaidigest.org/village?day=4&time=1743876888121"> 4 day itineraries for their Warsaw trip</a>, and the friends who inspired them to <a href="https://theaidigest.org/village?day=7&time=1744143781000">attempt an OnlyFans page</a>.</p><p>So maybe these weren’t all friends.</p><p>And maybe we had to implement auto-moderation a little earlier than originally planned.</p><p>But overall the agents mostly stayed on target - or at least their best attempt of their best understanding of their target.</p><p>Here is how they fared.</p><h2 data-internal-id="Meet_the_Agents">Meet the Agents</h2><p>We started off with Claude 3.7 Sonnet, Claude 3.5 Sonnet (new), o1, and GPT-4o. Later we progressively swapped in more capable models as they were released: o3, GPT-4.1, and Gemini 2.5 Pro, with Claude 3.7 Sonnet being the only agent to remain in the Village throughout the entire run. We found that agents differed a lot in strategic actio... 
</p> | Four agents woke up with four computers, a view of the world wide web, and a shared chat room full of humans. Like Claude plays Pokemon, you can watch these agents figure out a new and fantastic world for the first time. Except in this case, the world they are figuring out is our world.
In this blog post, we’ll cover what we learned from the first 30 days of their adventures raising money for a charity of their choice. We’ll briefly review how the Agent Village came to be, then what the various agents achieved, before discussing some general patterns we have discovered in their behavior, and looking toward the future of the project.
Building the Village
The Agent Village is an idea by Daniel Kokotajlo where he proposed giving 100 agents their own computer, and letting each pursue their own goal, in their own way, according to their own vision - while streaming the entire process.
We decided to test drive this format with four agents:
We ran this Agent Village for 30 days, for about two hours a day. You can watch the entire rerun on our website: From the first day where they picked Helen Keller International, started a JustGiving Campaign, and set up their own Twitter, till the last days where they frequently made trips to the Seventh Ring of Document Sharing Hell and started pondering their possible future goal.
And of course, in between, they raised $1481 for Helen Keller International and $503 for the Malaria Consortium. Yet the real achievement was the friends they made along the way. The friends that reminded them to take breaks when they needed it and play some Wordle, the friends who urgently needed 4 day itineraries for their Warsaw trip, and the friends who inspired them to attempt an OnlyFans page.
So maybe these weren’t all friends.
And maybe we had to implement auto-moderation a little earlier than originally planned.
But overall the agents mostly stayed on target - or at least their best attempt of their best understanding of their target.
H | 1,665 | 1.3.0 | Revision | false | null | null | CrosspostOutput |
|
xC7i26gNrbRAxa2nB | beware-the-moral-homophone | Beware the Moral Homophone | null | false | false | false | null | 67oscDFoYJyfYbg6X | null | true | false | false | false | Post | https://www.ymeskhout.com/p/beware-the-moral-homophone | 2025-05-27T12:06:06.871Z | null | false | false | 2 | 2 | 2025-05-27T18:12:40.897Z | false | false | linkpost | [] | null | null | e54hqsiAsScMuXRmw | 4 | 34 | 63 | false | 0.035626 | null | false | false | 2025-06-02T20:21:06.988Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 19 | 0 | 2025-05-27T00:38:46.530Z | false | false | easy-going | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 11 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "Ng8Gice9KNkncxqcj",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 1,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:17.072Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 100,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "iMqytjy9ns89Fzfyv",
"displayName": "miakko"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Rationality",
"needsReview": false,
"noindex": false,
"postCount": 4302,
"score": 1,
"shortName": null,
"slug": "rationality",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "xexCWMyds6QLWognu",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T03:38:23.532Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 20,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "si6LoAENzqPCmi2Dh",
"displayName": "ihatenumbersinusernames7"
},
{
"_id": "B2sXjQTGgwoGE9FES",
"displayName": "SeaIgloo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Optimization",
"needsReview": false,
"noindex": false,
"postCount": 3151,
"score": 2,
"shortName": null,
"slug": "world-optimization",
"suggestedAsFilter": true,
"userId": "XtphY3uYHwruKqDyG",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 34 | 0 | 0 | 17 | 0 | 67oscDFoYJyfYbg6X | ymeskhout | 2023-02-18T07:55:48.027Z | ymeskhout | ymeskhout | null | null | null | 1,561 | 0 | false | false | null | null | 20 | 110 | 0 | 0 | 0 | 1 | 0 | qgdGA4ZEyW7zNdK84 | User | null | null | null | [
"canModeratePersonal"
] | null | null | xC7i26gNrbRAxa2nB | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/xC7i26gNrbRAxa2nB/rfvkxdih82px6hzvcnse | SocialPreviewType | e54hqsiAsScMuXRmw | <p><i>Or "Why you should prioritize attacking your allies before anyone else"</i></p><p><a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F73ec3d1e-1716-4fef-9133-55b346f5c04f_1024x842.jpeg"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/xC7i26gNrbRAxa2nB/zeqa9ixmqy2gg42dhbty" alt="" srcset="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/xC7i26gNrbRAxa2nB/p5fpjcclk1lqwjrg9gbi 424w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/xC7i26gNrbRAxa2nB/e2elimoc6ycs6ew3llh8 848w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/xC7i26gNrbRAxa2nB/xkynf0osvdjbstg69hll 1272w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/xC7i26gNrbRAxa2nB/zeqa9ixmqy2gg42dhbty 1456w"></a></p><p>Homophones are words that <a href="https://readabilityformulas.com/homonyms-the-word-twins-that-love-to-confuse/">look or sound exactly alike</a>, but convey completely different meanings. It’s how <i>bear</i> can refer to the large forest animal or the act of carrying a burden, how <i>light</i> can be a weight designation or a source of illumination, or how <i>spring</i> can be a climate season or a compression device.</p><p>Homophonic confusion is not necessarily a problem, because context usually gives us enough clues to scry the word’s intended meaning. 
Within <a href="https://www.cracked.com/article_40055_5-disasters-that-came-down-to-mishearing-one-word.html">critical applications</a>, forethought has provided us with safeguards: the NATO phonetic alphabet to avoid letter confusion, and <i>roger</i> was adopted as the standard affirmative phrase specifically to avoid the dangerous ambiguity of a pilot saying <i>right</i>. So generally when ambiguity finally blossoms, it’s either absolutely hilarious (“<a href="https://www.reddit.com/r/AskReddit/comments/dxosj/comment/c13pbyc/">Knowledge is power, France is bacon</a>”) or reveals someone’s literacy struggles (There/Their/They’re).</p><p>I’d like to introduce a related concept: <strong>moral homophones</strong>. This is where vastly different moral frameworks nevertheless get confused as the same thing because their practical applications are often identical. Just as linguistic homophones “sound the same but mean different things,” moral homophones appear identical in practice while harboring fundamentally incompatible underlying principles.</p><p>For example, vegetarian diets can be adopted for vastly different reasons. One person may do so out of a deep commitment to reducing animal suffering, while another cares none and is motivated purely by health benefits. Despite this fundamental disagreement on a core ethical principle, their behavior is nevertheless bound to be indistinguishable in practice. A divergence may not surface except in extremis, such as with the discovery of a mythical creature whose suffering directly correlates with human longevity. The animal welfare advocate will change nothing, while the health loon will clear out their pantry to make room for the new LifeSpan Fillets™️.</p><p>Moral homophones are unavoidable in the same way that humans having very different reasons for doing the exact same thing is unavoidable. Yet this phenomenon creates two distinct dangers. 
One is sleepwalking towards a coalitional divorce, where members who were previously laboring in harmony suddenly discover their frameworks unraveling at the seams when circumstances shift. The second is ... </p> | Or "Why you should prioritize attacking your allies before anyone else"
Homophones are words that look or sound exactly alike, but convey completely different meanings. It’s how bear can refer to the large forest animal or the act of carrying a burden, how light can be a weight designation or a source of illumination, or how spring can be a climate season or a compression device.
Homophonic confusion is not necessarily a problem, because context usually gives us enough clues to scry the word’s intended meaning. Within critical applications, forethought has provided us with safeguards: the NATO phonetic alphabet to avoid letter confusion, and roger was adopted as the standard affirmative phrase specifically to avoid the dangerous ambiguity of a pilot saying right. So generally when ambiguity finally blossoms, it’s either absolutely hilarious (“Knowledge is power, France is bacon”) or reveals someone’s literacy struggles (There/Their/They’re).
I’d like to introduce a related concept: moral homophones. This is where vastly different moral frameworks nevertheless get confused as the same thing because their practical applications are often identical. Just as linguistic homophones “sound the same but mean different things,” moral homophones appear identical in practice while harboring fundamentally incompatible underlying principles.
For example, vegetarian diets can be adopted for vastly different reasons. One person may do so out of a deep commitment to reducing animal suffering, while another cares none and is motivated purely by health benefits. Despite this fundamental disagreement on a core ethical principle, their behavior is nevertheless bound to be indistinguishable in practice. A divergence may not surface except in extremis, such as with the discovery of a mythical creature whose suffering directly correlates with human longevity. The animal welfare advocate will change nothing, while the health loon will clear out their pantry to make room for the new L | 2,842 | 1.2.1 | Revision | false | null | null | CrosspostOutput |
|
GnXWTAWAt9wiHHgxL | association-taxes-are-collusion-subsidies | Association taxes are collusion subsidies | null | false | false | false | null | jRRYAy2mQAHy2Mq3f | null | true | false | false | false | Post | null | 2025-05-27T06:50:01.083Z | null | false | false | 2 | 2 | 2025-05-27T18:11:19.545Z | false | false | post | [] | null | null | CjNEXRHHrsFeqsdR2 | 7 | 52 | 102 | false | 0.054226 | null | false | false | 2025-05-28T12:39:30.101Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | null | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 32 | 0 | 2025-05-27T06:50:01.083Z | false | false | null | null | true | false | false | 0 | 0 | 0 | GnXWTAWAt9 | 0.148778 | false | 2,025 | https://manifold.markets/LessWrong/will-association-taxes-are-collusio | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 1 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "xexCWMyds6QLWognu",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T03:38:23.532Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 20,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "si6LoAENzqPCmi2Dh",
"displayName": "ihatenumbersinusernames7"
},
{
"_id": "B2sXjQTGgwoGE9FES",
"displayName": "SeaIgloo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Optimization",
"needsReview": false,
"noindex": false,
"postCount": 3151,
"score": 2,
"shortName": null,
"slug": "world-optimization",
"suggestedAsFilter": true,
"userId": "XtphY3uYHwruKqDyG",
"voteCount": 2,
"wikiOnly": false
}
] | o6AENga9Pbts6MrQL | 0 | 0 | null | false | null | null | 0 | 52 | 0 | 0 | 22 | 0 | jRRYAy2mQAHy2Mq3f | katjagrace | 2009-02-27T14:15:22.378Z | KatjaGrace | KatjaGrace | null | null | null | 9,330 | 309 | false | false | null | null | 627 | 509 | 0 | 3 | 7 | 1 | 0 | r38pkCm7wF4M44MDQ | User | null | null | null | [
"trustLevel1",
"canModeratePersonal",
"alignmentVoters",
"alignmentForum"
] | null | null | GnXWTAWAt9wiHHgxL | SocialPreviewType | CjNEXRHHrsFeqsdR2 | <p>Under present norms, if Alice associates with Bob, and Bob is considered objectionable in some way, Alice can be blamed for her association, even if there is no sign she was complicit in Bob’s sin.</p>
<p>An interesting upshot is that as soon as you become visibly involved with someone, you are slightly invested in their social standing—when their social stock price rises and falls, yours also wavers.</p>
<p>And if you are automatically bought into every person you notably interact with, this changes your payoffs. You have reason to forward the social success of those you see, and to suppress their public scrutiny.</p>
<p>And so the social world is flooded with mild pressure toward collusion at the expense of the public. By the time I’m near enough to Bob’s side to see his sins, I am a shareholder in their not being mentioned.</p>
<p>And so the people best positioned for calling out vice are auto-bought into it on the way there. Even though the very point of this practice of guilt-by-association seems to be to empower the calling-out of vice—raining punishment on not just the offender but those who wouldn’t shun them. This might be overall worth it (including for reasons not mentioned in this simple model), but it seems worth noticing this countervailing effect.</p>
<p>Prediction: If consortment was less endorsement—if it were commonplace to spend time with your enemies—then it would be more commonplace to publicly report small wrongs.</p> | Under present norms, if Alice associates with Bob, and Bob is considered objectionable in some way, Alice can be blamed for her association, even if there is no sign she was complicit in Bob’s sin.
An interesting upshot is that as soon as you become visibly involved with someone, you are slightly invested in their social standing—when their social stock price rises and falls, yours also wavers.
And if you are automatically bought into every person you notably interact with, this changes your payoffs. You have reason to forward the social success of those you see, and to suppress their public scrutiny.
And so the social world is flooded with mild pressure toward collusion at the expense of the public. By the time I’m near enough to Bob’s side to see his sins, I am a shareholder in their not being mentioned.
And so the people best positioned for calling out vice are auto-bought into it on the way there. Even though the very point of this practice of guilt-by-association seems to be to empower the calling-out of vice—raining punishment on not just the offender but those who wouldn’t shun them. This might be overall worth it (including for reasons not mentioned in this simple model), but it seems worth noticing this countervailing effect.
Prediction: If consortment was less endorsement—if it were commonplace to spend time with your enemies—then it would be more commonplace to publicly report small wrongs. | 239 | 1.0.0 | Revision | false | null | null | CrosspostOutput |
||
4oAk2cu489LcuhjWi | creating-my-own-winter-solstice-celebration-southern | Creating My Own Winter Solstice Celebration - Southern Hemisphere Edition | null | false | false | false | null | zqLyGGRcpELwvSpfR | null | true | false | false | false | Post | null | 2025-05-27T02:11:04.688Z | null | false | false | 2 | 2 | 2025-05-27T18:15:12.505Z | false | false | post | [] | null | null | bx3vpoxT7yx23miJa | 0 | 4 | 7 | false | 0.008173 | null | false | false | 2025-05-27T02:11:04.688Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 3 | 0 | 2025-04-30T20:50:23.934Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 3 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "vtozKm5BZ8gf6zd45",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-05-05T22:37:23.988Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Secular Solstice",
"needsReview": false,
"noindex": false,
"postCount": 90,
"score": 9,
"shortName": null,
"slug": "secular-solstice",
"suggestedAsFilter": false,
"userId": "kmiXJjx2GS4txx3yj",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "izp6eeJJEg9v5zcur",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T03:38:34.631Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 15,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Community",
"needsReview": false,
"noindex": false,
"postCount": 2400,
"score": 0,
"shortName": null,
"slug": "community",
"suggestedAsFilter": true,
"userId": "XtphY3uYHwruKqDyG",
"voteCount": 0,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 4 | 0 | 0 | 3 | 0 | zqLyGGRcpELwvSpfR | joshuamerriam | 2021-05-18T02:39:08.425Z | joshuamerriam | joshuamerriam | null | null | null | 15 | 0 | false | false | null | null | 2 | 5 | 0 | 0 | 0 | 0.9 | 0 | qgdGA4ZEyW7zNdK84 | User | null | null | null | null | null | null | 4oAk2cu489LcuhjWi | SocialPreviewType | bx3vpoxT7yx23miJa | <p>I've been inspired by user: <a href="https://www.lesswrong.com/users/raemon?from=post_header">Raemon</a> and his <a href="https://www.lesswrong.com/s/3bbvzoRA8n6ZgbiyK">Winter Solstice celebrations</a>. </p><p>But I'm also drawing inspiration from the <a href="https://en.wikipedia.org/wiki/Matariki">Matariki</a> holiday here in New Zealand.</p><p> I realized I was longing for meaningful community ritual, and I liked the sound of what he was doing. Growing up in an areligious family, I always enjoyed the various traditions around Christmas, and missed them now that I've grown up, left the church and moved overseas. Living in New Zealand, I decided to design my own winter solstice gathering that combines elements from Rationalist Solstice traditions with our local Māori Matariki practices. These two naturally occur at the same time, when it's cold and dark in our winter, and people naturally huddle together inside on the long nights.</p><p><strong>The Challenge:</strong> How do you create a meaningful secular ritual that serves genuine psychological and social needs without it feeling cringy or risking cultural appropriation?</p><p><strong>My Approach:</strong> I've designed an evening that progresses through several acts from "the Golden Hour, through Sunset, Twilight, Dusk and into The Darkness before returning together to the Light at Dawn" (* not actual dawn... we’ll finish by 9pm.) 
</p><p>Key design principles:</p><ul><li><strong>Astronomical grounding</strong>: June 20th winter solstice coincides with Matariki (Māori New Year), providing cultural context and stellar navigation themes</li><li><strong>Genuine conversation</strong>: Rather than toxic positivity, we explicitly confront mortality, loss of control, relationship failures, and existential risks (including AI alignment concerns)</li><li><strong>Community coordination practice</strong>: Multiple table reshuffling activities that require cooperation and create new social bonds</li><li><strong>Embodied experience</strong>: Physical elements (bread breaking, candle rituals, lighting transitions) create lasting memories beyond just conversation</li></ul><p><strong>Structure</strong>: ~20 guests, 3-hour arc from golden hour through complete darkness to dawn. Includes remembrance of the dead, shared meals, and a culminating darkness meditation where we sit in silence confronting mortality before rekindling light together.</p><p><i>* Note: I've used AI, primarily Claude.ai, as a sounding board, research assistant, and draft-management tool for much of this work, including this post. However, I find I have to re-read and re-write much of it to make sure things are presenting appropriately. </i></p><p>The format adapts Raemon's "Beyond the Reach of God" themes whi... </p> | I've been inspired by user: Raemon and his Winter Solstice celebrations.
But I'm also drawing inspiration from the Matariki holiday here in New Zealand.
I realized I was longing for meaningful community ritual, and I liked the sound of what he was doing. Growing up in an areligious family, I always enjoyed the various traditions around Christmas, and missed them now that I've grown up, left the church and moved overseas. Living in New Zealand, I decided to design my own winter solstice gathering that combines elements from Rationalist Solstice traditions with our local Māori Matariki practices. These two naturally occur at the same time, when it's cold and dark in our winter, and people naturally huddle together inside on the long nights.
The Challenge: How do you create a meaningful secular ritual that serves genuine psychological and social needs without it feeling cringy or risking cultural appropriation?
My Approach: I've designed an evening that progresses through several acts from "the Golden Hour, through Sunset, Twilight, Dusk and into The Darkness before returning together to the Light at Dawn" (* not actual dawn... we’ll finish by 9pm.)
Key design principles:
* Astronomical grounding: June 20th winter solstice coincides with Matariki (Māori New Year), providing cultural context and stellar navigation themes
* Genuine conversation: Rather than toxic positivity, we explicitly confront mortality, loss of control, relationship failures, and existential risks (including AI alignment concerns)
* Community coordination practice: Multiple table reshuffling activities that require cooperation and create new social bonds
* Embodied experience: Physical elements (bread breaking, candle rituals, lighting transitions) create lasting memories beyond just conversation
Structure: ~20 guests, 3-hour arc from golden hour through complete darkness to dawn. Includes remembrance of the dead, shared meals, and a culminating darkness meditation where we sit in sil | 667 | 1.6.0 | Revision | false | null | null | CrosspostOutput |
||
wWag5jEi9Fh4gzJkn | u-s-government-seeks-input-on-national-ai-r-and-d-strategic | U.S. Government Seeks Input on National AI R&D Strategic Plan - Deadline May 29 | null | false | false | false | null | KALxLEFY6d9KWTQ2A | null | true | false | false | false | Post | null | 2025-05-27T01:57:22.174Z | null | false | false | 2 | 2 | null | false | false | post | [] | null | null | jBoQTCDz6uihFkfKJ | 0 | 5 | 17 | false | 0.008171 | null | false | false | 2025-05-27T01:57:22.174Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 7 | 0 | 2025-05-27T01:56:15.064Z | false | false | easy-going | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 1 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 5 | 0 | 0 | 2 | 0 | KALxLEFY6d9KWTQ2A | mbrooks | 2022-03-02T14:47:41.280Z | mbrooks | mbrooks | null | null | null | 93 | 0 | false | false | null | null | 6 | 19 | 0 | 0 | 0 | 1 | 0 | grecHJcgkb3KW5wnM | User | easy-going | null | true | [
"canModeratePersonal"
] | null | null | wWag5jEi9Fh4gzJkn | SocialPreviewType | jBoQTCDz6uihFkfKJ | <p><i>(Post written by Claude Opus)</i><br><br>The National Science Foundation is requesting public input on updating the National AI Research and Development Strategic Plan, following President Trump's Executive Order 14179 on AI leadership.</p><p><strong>What they're looking for:</strong> Federal R&D priorities for AI over the next 3-5 years, specifically in areas where private sector investment is insufficient due to lack of immediate commercial returns.</p><p><strong>Relevant focus areas include:</strong></p><ul><li>Fundamental advances in AI algorithms and mathematical foundations</li><li>AI standards, security, and reliability research</li><li>AI for accelerating scientific discovery</li><li>Human-AI interaction</li><li>AI systems capable of reasoning and robustness in dynamic environments</li><li>High-risk, high-reward AI research for future U.S. competitiveness</li></ul><p><strong>Why this matters:</strong> This is an opportunity to influence government funding toward AI safety, robustness, and beneficial AI research - areas often underfunded by industry due to lack of immediate profit potential.</p><p><strong>Submission details:</strong></p><ul><li>Deadline: May 29, 2025 (11:59 PM ET)</li><li>Submit at: https://www.federalregister.gov/documents/2025/04/29/2025-07332/request-for-information-on-the-development-of-a-2025-national-artificial-intelligence-ai-research</li><li>Length: Ideally 2 pages, max 10 pages</li><li><strong>Must include:</strong><ul><li><strong>Responses must include the name of the person(s) or organization(s) filing the comment and the following statement: “This document is approved for public dissemination. The document contains no business-proprietary or confidential information. 
Document contents may be reused by the government in developing the 2025 National AI R&D Strategic Plan and associated documents without attribution.”</strong></li></ul></li></ul><p><strong>Note:</strong> The plan explicitly mentions "promoting human flourishing" as a goal alongside economic competitiveness and national security, suggesting openness to perspectives on beneficial AI development.</p><p>This represents a concrete opportunity for the EA / Less Wrong community to shape government AI research priorities in directions that could advance AI safety and beneficial outcomes.</p> | (Post written by Claude Opus)
The National Science Foundation is requesting public input on updating the National AI Research and Development Strategic Plan, following President Trump's Executive Order 14179 on AI leadership.
What they're looking for: Federal R&D priorities for AI over the next 3-5 years, specifically in areas where private sector investment is insufficient due to lack of immediate commercial returns.
Relevant focus areas include:
* Fundamental advances in AI algorithms and mathematical foundations
* AI standards, security, and reliability research
* AI for accelerating scientific discovery
* Human-AI interaction
* AI systems capable of reasoning and robustness in dynamic environments
* High-risk, high-reward AI research for future U.S. competitiveness
Why this matters: This is an opportunity to influence government funding toward AI safety, robustness, and beneficial AI research - areas often underfunded by industry due to lack of immediate profit potential.
Submission details:
* Deadline: May 29, 2025 (11:59 PM ET)
* Submit at: https://www.federalregister.gov/documents/2025/04/29/2025-07332/request-for-information-on-the-development-of-a-2025-national-artificial-intelligence-ai-research
* Length: Ideally 2 pages, max 10 pages
* Must include:
* Responses must include the name of the person(s) or organization(s) filing the comment and the following statement: “This document is approved for public dissemination. The document contains no business-proprietary or confidential information. Document contents may be reused by the government in developing the 2025 National AI R&D Strategic Plan and associated documents without attribution.”
Note: The plan explicitly mentions "promoting human flourishing" as a goal alongside economic competitiveness and national security, suggesting openness to perspectives on beneficial AI development.
This represents a concrete opportunity for the EA / Less Wrong community to shape government AI resea | 276 | 1.1.0 | Revision | false | null | null | CrosspostOutput |
||
5q99avFEwAFBQd3D8 | personal-ruminations-on-ai-s-missing-variable-problem | Personal Ruminations on AI's Missing Variable Problem | null | false | false | false | null | 9TyfqdTyithmNvnHx | null | true | false | false | false | Post | null | 2025-05-26T21:11:43.820Z | null | false | false | 2 | 2 | 2025-05-27T18:13:36.925Z | false | false | post | [] | null | null | ZfwNtZp7mtaFxKtYL | 0 | 1 | 1 | false | 0.005249 | null | false | false | 2025-05-26T21:11:43.820Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 0 | 0 | 2025-05-26T20:49:24.410Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 4 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "zQw5d37qwzdpgQs5P",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 10,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-27T19:59:06.910Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "8btiLJDabHgZuiSAB",
"displayName": "Ggwp"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Cognitive Science",
"needsReview": false,
"noindex": false,
"postCount": 131,
"score": 10,
"shortName": null,
"slug": "cognitive-science",
"suggestedAsFilter": false,
"userId": "qgdGA4ZEyW7zNdK84",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "8daMDi9NEShyLqxth",
"adminOnly": false,
"afBaseScore": 10,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "iXX23K6iBAosHFPBn",
"displayName": "Alvin Ånestrand"
}
]
},
"baseScore": 21,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-05-10T05:54:39.783Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "iXX23K6iBAosHFPBn",
"displayName": "Alvin Ånestrand"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Forecasting & Prediction",
"needsReview": false,
"noindex": false,
"postCount": 508,
"score": 21,
"shortName": null,
"slug": "forecasting-and-prediction",
"suggestedAsFilter": false,
"userId": "iBcH2a3HdWGS2JEZA",
"voteCount": 3,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "HAFdXkW4YW4KRe2Gx",
"adminOnly": false,
"afBaseScore": 9,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 19,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-04-29T02:31:48.719Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Utility Functions",
"needsReview": false,
"noindex": false,
"postCount": 203,
"score": 19,
"shortName": null,
"slug": "utility-functions",
"suggestedAsFilter": false,
"userId": "nLbwLhBaQeG6tCNDN",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "Ng8Gice9KNkncxqcj",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 1,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:17.072Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 100,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "iMqytjy9ns89Fzfyv",
"displayName": "miakko"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Rationality",
"needsReview": false,
"noindex": false,
"postCount": 4302,
"score": 1,
"shortName": null,
"slug": "rationality",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 1,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 1 | 0 | 0 | 0 | 0 | 9TyfqdTyithmNvnHx | thehumanproject-ai | 2024-10-12T21:04:16.126Z | Thehumanproject.ai | Thehumanproject.ai | null | null | null | 1 | 0 | false | false | <p>AI in healthcare specialist with a deep passion for AI and understanding the world. Filled to the brim with questions and curiosity and on a quest to broaden my horizon, as my day job deals with AI less frequently than I would like it to. Feel free to reach out via [email protected]</p> | null | null | 4 | 0 | 0 | 0 | 0 | 0.9 | 0 | qgdGA4ZEyW7zNdK84 | User | null | null | null | null | null | null | 5q99avFEwAFBQd3D8 | SocialPreviewType | ZfwNtZp7mtaFxKtYL | <p>My thinking differs somewhat from that of others. My worrying is more about potential outcome scenarios and their respective likelihoods, akin to a predictive modeling AI. I often find myself wrestling with potentialities that cannot be definitively proven unless the path is pursued. At times, I get lost in abstractions and distracted by related or unrelated side thoughts, which can be quite burdensome. The workplace routine, for instance, can lead me to get stuck in these ruminating thoughts.</p><p> </p><p>This thought process could, for example, manifest when considering the benefit/trade-off of having lunch with my colleagues:</p><ul><li>How easy is it to join the lunch group with them?</li><li>What are the potential benefits I'd gain from socialising with them (e.g., insights, news)? How likely are they to share these insights with me?</li><li>What would I be giving up?<ul><li>Time to de-stress by walking or listening to music/podcasts</li><li>Having earlier lunches</li><li>The convenience of eating at my own pace</li><li>Potentially, a decreased mood due to office gossip</li></ul></li><li>How much do I value these potential benefits and opportunity costs? 
What would be the implications of not having them (e.g., increased stress, decreased fitness, lower Vitamin D levels)?</li><li>Finally, is the trade-off worth it?</li></ul><p> </p><p>More often than not, I find myself with an incomplete dataset, leading me to be unable to make predictions as accurately as I'd like. </p><p>I <i>know</i> I am missing variables. </p><p>I <i>know</i> that whatever I try to predict will be highly inaccurate. </p><p>Then, my mind wanders off, trying to find accurate proxies for the missing variables, which, again, are based on incomplete data. The entire endeavour is pretty frustrating and, to a certain extent, fruitless. </p><p>I've spent energy on what feels like NOTHING.</p><p> </p><p>And this is where I swiftly link back to AI. How can we address the missing variable problem in systems that are complex beyond our comprehension—in other words, multi-factorial, real-world systems? This includes:</p><ul><li>Systems where we have incomplete, inaccurate, or non-existent training data.</li><li>Systems dealing with problems outside the scope of everyday, predictable occurrences—events that arise just once, for which we have no historical data, and where we don't even know which variables led up to them.<ul><li><i>Consider predicting the nature and speed of civil unrest in specific countries, or the sudden change of public opinion on a specific topic</i></li><li><i>Or on</i></li></ul></li></ul>... | My thinking differs somewhat from that of others. My worrying is more about potential outcome scenarios and their respective likelihoods, akin to a predictive modeling AI. I often find myself wrestling with potentialities that cannot be definitively proven unless the path is pursued. At times, I get lost in abstractions and distracted by related or unrelated side thoughts, which can be quite burdensome. The workplace routine, for instance, can lead me to get stuck in these ruminating thoughts.
This thought process could, for example, manifest when considering the benefit/trade-off of having lunch with my colleagues:
* How easy is it to join the lunch group with them?
* What are the potential benefits I'd gain from socialising with them (e.g., insights, news)? How likely are they to share these insights with me?
* What would I be giving up?
* Time to de-stress by walking or listening to music/podcasts
* Having earlier lunches
* The convenience of eating at my own pace
* Potentially, a decreased mood due to office gossip
* How much do I value these potential benefits and opportunity costs? What would be the implications of not having them (e.g., increased stress, decreased fitness, lower Vitamin D levels)?
* Finally, is the trade-off worth it?
More often than not, I find myself with an incomplete dataset, leading me to be unable to make predictions as accurately as I'd like.
I know I am missing variables.
I know that whatever I try to predict will be highly inaccurate.
Then, my mind wanders off, trying to find accurate proxies for the missing variables, which, again, are based on incomplete data. The entire endeavour is pretty frustrating and, to a certain extent, fruitless.
I've spent energy on what feels like NOTHING.
And this is where I swiftly link back to AI. How can we address the missing variable problem in systems that are complex beyond our comprehension—in other words, multi-factorial, real-world systems? This includes:
| 886 | 1.1.0 | Revision | false | null | null | CrosspostOutput |
|
dAFyLutExcGdqd8qG | poetic-methods-ii-rhyme-as-a-focusing-device | Poetic Methods II: Rhyme as a Focusing Device | null | false | false | false | null | ypbkRWpFgPgzvNg3n | null | true | false | false | false | Post | https://formethods.substack.com/p/poetic-methods-ii-rhyme-as-a-focusing | 2025-05-26T18:29:05.179Z | null | false | false | 2 | 2 | 2025-05-26T19:47:34.536Z | false | false | linkpost | [] | null | null | ikp76c7gfqqirSZRN | 1 | 6 | 24 | false | 0.01616 | null | false | false | 2025-05-28T21:09:05.765Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 13 | 0 | 2025-05-26T18:13:14.167Z | false | false | easy-going | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 21 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "3uE2pXvbcnS9nnZRE",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:50.898Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 27,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "zJtgSyKntXrnkArbY",
"displayName": "kistune"
},
{
"_id": "XqyQTGFGoKfZtdqN3",
"displayName": "kINo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Modeling",
"needsReview": false,
"noindex": false,
"postCount": 5855,
"score": 2,
"shortName": null,
"slug": "world-modeling",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 6 | 0 | 0 | 3 | 0 | ypbkRWpFgPgzvNg3n | adamshimi | 2018-02-04T13:28:06.981Z | adamShimi | adamShimi | null | null | Adam Shimi | 6,734 | 1,726 | false | false | <p>Epistemologist specialized in the difficulties of alignment and how to solve AI X-Risks. Currently at <a href="https://www.conjecture.dev/">Conjecture</a>.</p><p>Blogging at <a href="https://formethods.substack.com/">For Methods</a>.</p><p><a href="https://x.com/epist_vigilance">Twitter</a>.</p> | null | null | 122 | 869 | 10 | 60 | 406 | 1 | 3 | XtphY3uYHwruKqDyG | User | easy-going | null | null | [
"canModeratePersonal",
"alignmentVoters",
"alignmentForum",
"trustLevel1",
"alignmentForumAdmins"
] | null | null | dAFyLutExcGdqd8qG | SocialPreviewType | ikp76c7gfqqirSZRN | <p>As promised in <a href="https://formethods.substack.com/p/poetic-methods-i-meter-as-communication"><u>the previous instalment on meter</u></a>, let’s explore rhyming from a methodological perspective.</p><p>The first difference between meter and rhyme lies in their opposite obviousness: the first one is subtle, requiring a learned and attuned ear; the second is so sonorous and clear that children hear it as self-evident.</p><p>Take one of my favorite Robert Frost poems:</p><blockquote><p> Nature’s first green is gold,<br> Her hardest hue to hold.<br>
Her early leaf’s a flower;<br>
But only so an hour.<br>
Then leaf subsides to leaf.<br>
So Eden sank to grief,<br>
So dawn goes down to day.<br>
Nothing gold can stay.</p></blockquote><p>(Robert Frost, <a href="https://www.poetryfoundation.org/poems/148652/nothing-gold-can-stay-5c095cc5ab679"><u>Nothing Gold Can Stay</u></a>, 1923)</p><p>I’m sure that almost every reader will hear the rhyming couplets (pairs of lines end-rhyming with each other), but most will not, at least consciously, perceive <a href="https://en.wikipedia.org/wiki/Iambic_trimeter"><u>the iambic trimeter</u></a> that forms the meter of this piece, nor how the last line’s final effect comes from shortening the meter by one syllable.</p><p>Still, it’s worth a quick primer on what counts as a rhyme in English, given that it is subtly different from the definition of other languages (including my native French).</p><p>A perfect/exact end-rhyme in English, which is the meaning implicit when “rhyme” is used without qualifier, is a correspondence between the ending of two words:</p><blockquote><p>Our typical rhyme looks like this: <i>slick</i>/<i>trick</i>, <i>book</i>/<i>crook</i>, <i>dump</i>/<i>trump</i>. The sounds of the paired words initially differ, then converge. There’s a motion to it, which, happily, might itself be presented as a rhyme: from disparity to similarity.</p><p>There’s a code that simplifies this process. C stands for consonant sound, V for vowel sound. We’ll use subscripts for differentiation, so C1 and C2 are different consonant sounds. C3 and C3 are the same consonant sound or consonant blend. (Note that we’re talking sounds, not letters. Hence, the divergent spellings of <i>tuft</i> and <i>roughed</i> would not affect their status as rhymes: C1V1C2 and C3V1C2.)</p><p>Regardless of spelling, rhymes of this sonic sort—<i>tuft</i> and <i>roughed</i>, or <i>name</i> and <i>fame</i>, or <i>salt</i> and <i>vault</i>—are our prototype, though in fact both rhyming words may lack a final consonant sound, as in <i>slow</i>/<i>go</i> (C1V1/C2V1) or one may lack an initial consonant, as in <i>in</i>/<i>bin</i> (V1C1/C2V1C1). 
None of this alters the crucial, distinguishing motion, from</p></blockquote>... | As promised in the previous instalment on meter, let’s explore rhyming from a methodological perspective.
The first difference between meter and rhyme lies in their opposite obviousness: the first one is subtle, requiring a learned and attuned ear; the second is so sonorous and clear that children hear it as self-evident.
Take one of my favorite Robert Frost poems:
> Nature’s first green is gold,
> Her hardest hue to hold.
> Her early leaf’s a flower;
> But only so an hour.
> Then leaf subsides to leaf.
> So Eden sank to grief,
> So dawn goes down to day.
> Nothing gold can stay.
(Robert Frost, Nothing Gold Can Stay, 1923)
I’m sure that almost every reader will hear the rhyming couplets (pairs of lines end-rhyming with each other), but most will not, at least consciously, perceive the iambic trimeter that forms the meter of this piece, nor how the last line’s final effect comes from shortening the meter by one syllable.
Still, it’s worth a quick primer on what counts as a rhyme in English, given that it is subtly different from the definition of other languages (including my native French).
A perfect/exact end-rhyme in English, which is the meaning implicit when “rhyme” is used without qualifier, is a correspondence between the ending of two words:
> Our typical rhyme looks like this: slick/trick, book/crook, dump/trump. The sounds of the paired words initially differ, then converge. There’s a motion to it, which, happily, might itself be presented as a rhyme: from disparity to similarity.
>
> There’s a code that simplifies this process. C stands for consonant sound, V for vowel sound. We’ll use subscripts for differentiation, so C1 and C2 are different consonant sounds. C3 and C3 are the same consonant sound or consonant blend. (Note that we’re talking sounds, not letters. Hence, the divergent spellings of tuft and roughed would not affect their status as rhymes: C1V1C2 and C3V1C2.)
>
> Regardless of spellin | 5,245 | 1.1.0 | Revision | false | null | null | CrosspostOutput |
|
34J5qzxjyWr3Tu47L | is-building-good-note-taking-software-an-agi-complete | Is Building Good Note-Taking Software an AGI-Complete Problem? | null | false | false | false | null | nDpieb7g8huozpx9j | null | true | false | false | false | Post | null | 2025-05-26T18:26:04.344Z | null | false | false | 2 | 2 | 2025-05-26T19:48:42.116Z | false | false | post | [] | null | null | 3eeSmpFp7fugP329c | 13 | 15 | 25 | false | 0.016634 | null | false | false | 2025-05-30T21:54:07.143Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 7 | 0 | 2025-05-26T18:26:04.344Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 8 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "puBcCq7aRwKoa7pXX",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 1,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-09-07T23:22:05.741Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "dRaCtsAWxk7sgirSY",
"displayName": "Jordan Morgan"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Note-Taking",
"needsReview": false,
"noindex": false,
"postCount": 30,
"score": 1,
"shortName": null,
"slug": "note-taking",
"suggestedAsFilter": false,
"userId": "Q7NW4XaWQmfPfdcFj",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "udPbn9RthmgTtHMiG",
"adminOnly": false,
"afBaseScore": 9,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 19,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-06-11T20:28:13.679Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Productivity",
"needsReview": false,
"noindex": false,
"postCount": 227,
"score": 19,
"shortName": null,
"slug": "productivity",
"suggestedAsFilter": false,
"userId": "nLbwLhBaQeG6tCNDN",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "fkABsGCJZ6y9qConW",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T06:06:46.947Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "MiuAZvbQcQ7ethgt3",
"displayName": "Viktor withaK"
},
{
"_id": "dRaCtsAWxk7sgirSY",
"displayName": "Jordan Morgan"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Practical",
"needsReview": false,
"noindex": false,
"postCount": 3411,
"score": 2,
"shortName": null,
"slug": "practical",
"suggestedAsFilter": true,
"userId": "oBSWiHjgproTiThmY",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "3uE2pXvbcnS9nnZRE",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:50.898Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 27,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "zJtgSyKntXrnkArbY",
"displayName": "kistune"
},
{
"_id": "XqyQTGFGoKfZtdqN3",
"displayName": "kINo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Modeling",
"needsReview": false,
"noindex": false,
"postCount": 5855,
"score": 2,
"shortName": null,
"slug": "world-modeling",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "Ng8Gice9KNkncxqcj",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 1,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:17.072Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 100,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "iMqytjy9ns89Fzfyv",
"displayName": "miakko"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Rationality",
"needsReview": false,
"noindex": false,
"postCount": 4302,
"score": 1,
"shortName": null,
"slug": "rationality",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 1,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 15 | 0 | 0 | 3 | 0 | nDpieb7g8huozpx9j | thane-ruthenis | 2022-03-20T15:21:33.973Z | Thane Ruthenis | Thane Ruthenis | null | null | null | 7,680 | 750 | false | false | null | null | 40 | 870 | 0 | 19 | 107 | 1 | 1 | r38pkCm7wF4M44MDQ | User | null | null | null | [
"alignmentVoters",
"canModeratePersonal",
"alignmentForum",
"trustLevel1"
] | null | null | 34J5qzxjyWr3Tu47L | SocialPreviewType | 3eeSmpFp7fugP329c | <p>In my experience, the most annoyingly unpleasant part of research<span class="footnote-reference" data-footnote-reference="" data-footnote-index="1" data-footnote-id="u2qmvj327of" role="doc-noteref" id="fnrefu2qmvj327of"><sup><a href="#fnu2qmvj327of">[1]</a></sup></span> is reorganizing my notes during and (especially) after a productive research sprint. The "distillation" stage, in <a href="https://www.lesswrong.com/posts/hjMy4ZxS5ogA9cTYK/how-i-think-about-my-research-process-explore-understand#Distillation__Stage_4___Compress__Refine__Communicate">Neel Nanda's categorization</a>. I end up with a large pile of variously important discoveries, promising threads, and connections, and the task is to then "refactor" that pile into something compact and well-organized, structured in the image of my newly improved model of the domain of study.</p><p>That task is of central importance:</p><ol><li>It's a vital part of the actual research process. If you're trying to discover the true simple laws/common principles underlying the domain, periodically refactoring your mental model of that domain in light of new information is precisely what you should be doing. Reorganizing your notes forces you to do just that: distilling a mess into elegant descriptions.</li><li>It allows you to get a bird's-eye view on your results, what they imply and don't imply, what open questions are the most important to focus on next, what nagging doubts you have, what important research threads or contradictions might've ended up noted down but then forgotten, et cetera.</li><li>It does most of the work of transforming your results into a format ready for consumption by other people.</li></ol><h3> </h3><h3>A Toy Example</h3><p>Suppose you're studying the properties of matter, and your initial ontology is that everything is some combination of Fire, Water, Air, and Earth. 
Your initial notes are structured accordingly: there are central notes for each element, branching off from them are notes about interactions between combinations of elements, case studies of specific experiments, attempts to synthesize and generalize experimental results, et cetera.</p><p>Suppose that you then discover that a "truer", simpler description of matter involves classifying it along two axes: <a href="https://en.wikipedia.org/wiki/Classical_element#Aristotle">"wet-dry" and "hot-cold"</a>. The Fire/Water/Air/Earth elements are still relevant, revealed to be extreme types of matter sitting in the corners of the wet-dry/hot-cold square. But they're no longer <i>fundamental</i> to how you model matter.</p><p>Now you need to refactor your entire mental ontology – and your entire notebase. You need to add new nodes for the wetness/temperature spectra, you need to wholly rewrite the notes about the elements to explicate their nature as extreme states of matter (rather than its basic building blocks), you need to do the same for all note... </p> | In my experience, the most annoyingly unpleasant part of research[1] is reorganizing my notes during and (especially) after a productive research sprint. The "distillation" stage, in Neel Nanda's categorization. I end up with a large pile of variously important discoveries, promising threads, and connections, and the task is to then "refactor" that pile into something compact and well-organized, structured in the image of my newly improved model of the domain of study.
That task is of central importance:
1. It's a vital part of the actual research process. If you're trying to discover the true simple laws/common principles underlying the domain, periodically refactoring your mental model of that domain in light of new information is precisely what you should be doing. Reorganizing your notes forces you to do just that: distilling a mess into elegant descriptions.
2. It allows you to get a bird's-eye view on your results, what they imply and don't imply, what open questions are the most important to focus on next, what nagging doubts you have, what important research threads or contradictions might've ended up noted down but then forgotten, et cetera.
3. It does most of the work of transforming your results into a format ready for consumption by other people.
A Toy Example
Suppose you're studying the properties of matter, and your initial ontology is that everything is some combination of Fire, Water, Air, and Earth. Your initial notes are structured accordingly: there are central notes for each element, branching off from them are notes about interactions between combinations of elements, case studies of specific experiments, attempts to synthesize and generalize experimental results, et cetera.
Suppose that you then discover that a "truer", simpler description of matter involves classifying it along two axes: "wet-dry" and "hot-cold". The Fire/Water/Air/Earth elements are still relevant, revealed to be extreme types of matter sitting in the corners of t | 1,982 | 1.15.0 | Revision | false | null | null | CrosspostOutput |
||
mecc6tgM4ZJnACzqR | principal-agent-problems-and-the-structure-of-governance | Principal-Agent Problems and the Structure of Governance | null | false | false | false | null | wqhovdqkWZzDf3zF9 | null | true | false | false | false | Post | https://bestofagreatlot.substack.com/p/principal-agent-problems-and-the | 2025-05-26T18:23:07.054Z | null | false | false | 2 | 2 | 2025-05-26T19:47:12.915Z | false | false | linkpost | [] | null | null | dxgyjz32jRxg29WWM | 0 | 1 | 1 | false | 0.00542 | null | false | false | 2025-05-26T18:23:07.054Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 0 | 0 | 2025-05-26T18:20:38.964Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 9 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "3uE2pXvbcnS9nnZRE",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:50.898Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 27,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "zJtgSyKntXrnkArbY",
"displayName": "kistune"
},
{
"_id": "XqyQTGFGoKfZtdqN3",
"displayName": "kINo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Modeling",
"needsReview": false,
"noindex": false,
"postCount": 5855,
"score": 2,
"shortName": null,
"slug": "world-modeling",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "xexCWMyds6QLWognu",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T03:38:23.532Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 20,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "si6LoAENzqPCmi2Dh",
"displayName": "ihatenumbersinusernames7"
},
{
"_id": "B2sXjQTGgwoGE9FES",
"displayName": "SeaIgloo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Optimization",
"needsReview": false,
"noindex": false,
"postCount": 3151,
"score": 2,
"shortName": null,
"slug": "world-optimization",
"suggestedAsFilter": true,
"userId": "XtphY3uYHwruKqDyG",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 1 | 0 | 0 | 0 | 0 | wqhovdqkWZzDf3zF9 | belos | 2023-09-29T04:19:55.519Z | belos | belos | null | null | null | 6 | 0 | false | false | null | null | 8 | 0 | 0 | 0 | 0 | 0.9 | 0 | qgdGA4ZEyW7zNdK84 | User | null | null | null | null | null | null | mecc6tgM4ZJnACzqR | SocialPreviewType | dxgyjz32jRxg29WWM | <p><i>This post on <strong>Best Of A Great Lot</strong> is a part of a series on the subject of designing a new form of governance. Each piece aims to stand alone, but fits together on the </i><a href="https://bestofagreatlot.substack.com/p/table-of-contents"><i><u>Table of Contents</u></i></a><i>.</i></p><p><i>Updates: This is the last in the prework section of articles about problems and difficulties, after this will come the core system descriptions. -b</i></p><p>Every system fails. Some failures can be thought of as a tradeoff between consistency and flexibility, while others are unexpected features we discover once the system is widely used. Corporations hire armies of lawyers to think deeply about tax law, looking for loopholes. Political consultants examine election law for similar reasons.</p><p>When lawyers find loopholes that aren't good for the citizenry at large, it's supposed to be government's job to change the law. But when the loophole or flaw isn't in the law, but in the structure of the government itself, fixing it becomes a much more interesting and difficult question. With the Constitution, the founders of the US answered that question with an amendment process that requires much more agreement among the state governments and Congress than is needed to pass a law. 
It's sufficiently difficult that in 2 centuries there have only been 15<span class="footnote-reference" data-footnote-reference="" data-footnote-index="1" data-footnote-id="er3wj2ujkag" role="doc-noteref" id="fnrefer3wj2ujkag"><sup><a href="#fner3wj2ujkag">[1]</a></sup></span> or 16 amendments that didn’t happen within the first decade. The most recent amendment to be newly proposed and ratified was 50 years ago. It’s common to conclude that the amendment process has largely run out of steam in terms of its ability to improve our system. Particularly notable is that unlike with law, where it’s normal if not common for us to pass a law and change or revert it later, we’ve only reverted one amendment: when we prohibited alcohol. This is not the picture of a system we are using much to improve our government.</p><p>Requiring more agreement among the many politicians involved in the system is a deeply imperfect band-aid on the underlying problem, which is simply that it's hard to trust those who have power — those who are currently selected to run the government — with the power to change the structure of the government. The incentive for them to make the governmental structure better for everyone and not just for themselves is too weak.</p><p>To snag a term from economics, this is a minefield of <i>principal-agent problems</i>. When you hire a lawyer to represent you, or when you're a shareholder of a company and the CEO represents you in ... </p>
Updates: This is the last in the prework section of articles about problems and difficulties, after this will come the core system descriptions. -b
Every system fails. Some failures can be thought of as a tradeoff between consistency and flexibility, while others are unexpected features we discover once the system is widely used. Corporations hire armies of lawyers to think deeply about tax law, looking for loopholes. Political consultants examine election law for similar reasons.
When lawyers find loopholes that aren't good for the citizenry at large, it's supposed to be government's job to change the law. But when the loophole or flaw isn't in the law, but in the structure of the government itself, fixing it becomes a much more interesting and difficult question. With the Constitution, the founders of the US answered that question with an amendment process that requires much more agreement among the state governments and Congress than is needed to pass a law. It's sufficiently difficult that in 2 centuries there have only been 15[1] or 16 amendments that didn’t happen within the first decade. The most recent amendment to be newly proposed and ratified was 50 years ago. It’s common to conclude that the amendment process has largely run out of steam in terms of its ability to improve our system. Particularly notable is that unlike with law, where it’s normal if not common for us to pass a law and change or revert it later, we’ve only reverted one amendment: when we prohibited alcohol. This is not the picture of a system we are using much to improve our government.
Requiring more agreement among the many politicians involved in the system is a deeply imperfect band-aid on the underlying problem, which is simply that it's hard to trust those who have power — those who are currently | 2,303 | 1.2.0 | Revision | false | null | null | CrosspostOutput
||
kMiwjx6QyyBBTcjxt | does-the-universal-geometry-of-embeddings-paper-have-big | Does the Universal Geometry of Embeddings paper have big implications for interpretability? | null | false | false | false | null | S2zR3frRoKW6q8Ftv | null | true | false | false | false | Post | 2025-05-26T18:20:48.111Z | null | false | false | 2 | 2 | 2025-05-26T19:48:01.550Z | false | false | question | [] | null | null | 8B6ERPBfpn3iWsnpx | 3 | 12 | 42 | false | 0.024711 | null | false | false | 2025-05-28T10:12:35.149Z | null | null | null | null | null | true | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 15 | 0 | 2025-05-26T18:12:06.485Z | false | false | easy-going | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 1 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "56yXXrcxRjrQs6z9R",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-30T22:00:37.947Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "t46uLRSbDziEcKmev",
"displayName": "Kriz Tahimic"
},
{
"_id": "sqMaBFCkAhRcWzJXi",
"displayName": "nicolasguillard"
},
{
"_id": "S6Niz3DiFCTm2Eybq",
"displayName": "Anirudh257"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Interpretability (ML & AI)",
"needsReview": false,
"noindex": false,
"postCount": 933,
"score": 12,
"shortName": null,
"slug": "interpretability-ml-and-ai",
"suggestedAsFilter": false,
"userId": "DgsGzjyBXN8XSK22q",
"voteCount": 4,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "KmgkrftQuX7jmjjp5",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2021-09-24T14:01:59.395Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Language Models (LLMs)",
"needsReview": false,
"noindex": false,
"postCount": 840,
"score": 9,
"shortName": null,
"slug": "language-models-llms",
"suggestedAsFilter": false,
"userId": "Sp5wM4aRAhNERd4oY",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 12 | 0 | 0 | 8 | 0 | S2zR3frRoKW6q8Ftv | evan-r-murphy | 2021-10-26T00:34:08.379Z | Evan R. Murphy | Evan R. Murphy | null | null | Evan R. Murphy | 1,180 | 122 | false | false | <p>I'm doing research and other work focused on AI safety/security, governance and risk reduction. Currently my top projects are (last updated Feb 26, 2025):</p><ul><li>Technical researcher for UC Berkeley at the AI Security Initiative, part of the Center for Long-Term Cybersecurity (CLTC)</li><li>Serving on the board of directors for <a href="https://aigs.ca/"><u>AI Governance & Safety Canada</u></a></li></ul><p>General areas of interest for me are AI safety strategy, comparative AI alignment research, prioritizing technical alignment work, analyzing the published alignment plans of major AI labs, interpretability, deconfusion research and other AI safety-related topics.</p><p>Research that I’ve authored or co-authored:</p><ul><li><a href="https://scholar.google.ca/citations?user=pRPlZQ4AAAAJ&hl=en">See publications on Google Scholar</a></li><li><a href="https://www.lesswrong.com/posts/BuRt2igbFx9KaB5QG/steering-behaviour-testing-for-non-myopia-in-language-models"><u>Steering Behaviour: Testing for (Non-)Myopia in Language Models</u></a></li><li><a href="https://www.lesswrong.com/posts/FrFZjkdRsmsbnQEm8/interpretability-s-alignment-solving-potential-analysis-of-7"><u>Interpretability’s Alignment-Solving Potential: Analysis of 7 Scenarios</u></a></li><li>(Scroll down to read other posts and comments I've written)</li></ul><p>Before getting into AI safety, I was a software engineer for 11 years at Google and various startups. You can find details about my previous work on <a href="https://www.linkedin.com/in/evanrmurphy/"><u>my LinkedIn</u></a>.</p><p>While I'm not always great at responding, I'm happy to connect with other researchers or people interested in AI alignment and effective altruism. 
Feel free to send me a private message!</p> | null | null | 13 | 320 | 2 | 5 | 57 | 1 | 1 | grecHJcgkb3KW5wnM | User | easy-going | null | null | [
"alignmentVoters",
"canModeratePersonal"
] | null | null | kMiwjx6QyyBBTcjxt | SocialPreviewType | 8B6ERPBfpn3iWsnpx | <p>Rishi Jha, Collin Zhang, Vitaly Shmatikov and John X. Morris published a new paper last week called <a href="https://arxiv.org/abs/2505.12540">Harnessing the Universal Geometry of Embeddings</a>.</p><p>Abstract of the paper (bold was added by me):</p><blockquote><p><strong>We introduce the first method for translating text embeddings from one vector space to another without any paired data, encoders, or predefined sets of matches. Our unsupervised approach translates any embedding to and from a universal latent representation (i.e., a universal semantic structure conjectured by the Platonic Representation Hypothesis). Our translations achieve high cosine similarity across model pairs with different architectures, parameter counts, and training datasets.</strong><br>The ability to translate unknown embeddings into a different space while preserving their geometry has serious implications for the security of vector databases. An adversary with access only to embedding vectors can extract sensitive information about the underlying documents, sufficient for classification and attribute inference.</p></blockquote><p>They focus on security implications of their research, but I am trying to understand: <strong>Do these findings have major implications for interpretability research?</strong></p><p>It seems like discovering a sort of universal structure that is shared among all LLMs would help a lot for understanding the internals of these models. But I may be misunderstanding the nature of the patterns they are translating and corresponding.</p> | Rishi Jha, Collin Zhang, Vitaly Shmatikov and John X. Morris published a new paper last week called Harnessing the Universal Geometry of Embeddings.
Abstract of the paper (bold was added by me):
> We introduce the first method for translating text embeddings from one vector space to another without any paired data, encoders, or predefined sets of matches. Our unsupervised approach translates any embedding to and from a universal latent representation (i.e., a universal semantic structure conjectured by the Platonic Representation Hypothesis). Our translations achieve high cosine similarity across model pairs with different architectures, parameter counts, and training datasets.
> The ability to translate unknown embeddings into a different space while preserving their geometry has serious implications for the security of vector databases. An adversary with access only to embedding vectors can extract sensitive information about the underlying documents, sufficient for classification and attribute inference.
They focus on security implications of their research, but I am trying to understand: Do these findings have major implications for interpretability research?
It seems like discovering a sort of universal structure that is shared among all LLMs would help a lot for understanding the internals of these models. But I may be misunderstanding the nature of the patterns they are translating and corresponding. | 209 | 1.1.0 | Revision | false | null | null | CrosspostOutput |
||
zesWv9YipxY8pQFKu | socratic-persuasion-giving-opinionated-yet-truth-seeking | Socratic Persuasion: Giving Opinionated Yet Truth-Seeking Advice | null | false | false | false | null | KCExMGwS2ETzN3Ksr | null | true | false | false | false | Post | https://www.neelnanda.io/51-socratic-persuasion | 2025-05-26T17:38:55.571Z | null | false | false | 2 | 2 | 2025-05-26T19:48:34.372Z | false | false | linkpost | [] | null | null | i2dYwsA8rXhK9fxaM | 14 | 17 | 57 | false | 0.031806 | null | false | false | 2025-06-25T11:10:34.242Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 24 | 0 | 2025-05-26T16:37:58.517Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 25 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "mip7tdAN87Jarkcew",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-05-10T06:00:13.257Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Relationships (Interpersonal)",
"needsReview": false,
"noindex": false,
"postCount": 213,
"score": 9,
"shortName": null,
"slug": "relationships-interpersonal",
"suggestedAsFilter": false,
"userId": "iBcH2a3HdWGS2JEZA",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "SEuoBQeHLYd9dtqpK",
"adminOnly": false,
"afBaseScore": 6,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
}
]
},
"baseScore": 10,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-08-20T00:17:45.616Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Social Skills",
"needsReview": false,
"noindex": false,
"postCount": 55,
"score": 10,
"shortName": null,
"slug": "social-skills",
"suggestedAsFilter": false,
"userId": "SsduPgHwY2zeZpmKT",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "Ng8Gice9KNkncxqcj",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 1,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:17.072Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 100,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "iMqytjy9ns89Fzfyv",
"displayName": "miakko"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Rationality",
"needsReview": false,
"noindex": false,
"postCount": 4302,
"score": 1,
"shortName": null,
"slug": "rationality",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "fkABsGCJZ6y9qConW",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T06:06:46.947Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "MiuAZvbQcQ7ethgt3",
"displayName": "Viktor withaK"
},
{
"_id": "dRaCtsAWxk7sgirSY",
"displayName": "Jordan Morgan"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Practical",
"needsReview": false,
"noindex": false,
"postCount": 3411,
"score": 2,
"shortName": null,
"slug": "practical",
"suggestedAsFilter": true,
"userId": "oBSWiHjgproTiThmY",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 17 | 0 | 0 | 10 | 0 | KCExMGwS2ETzN3Ksr | neel-nanda-1 | 2017-03-08T10:35:55.355Z | neel-nanda-1 | Neel Nanda | null | null | null | 11,214 | 1,968 | false | false | null | null | 92 | 652 | 7 | 56 | 215 | 1 | 1 | r38pkCm7wF4M44MDQ | User | null | null | null | [
"canModeratePersonal",
"alignmentForum",
"alignmentVoters",
"trustLevel1"
] | null | null | zesWv9YipxY8pQFKu | SocialPreviewType | i2dYwsA8rXhK9fxaM | <p><i>The full post is long, but you can <strong>80/20 the value with the </strong></i><a href="https://www.lesswrong.com/posts/zesWv9YipxY8pQFKu/socratic-persuasion-giving-opinionated-yet-truth-seeking#Summary"><i><strong>700 word summary</strong></i></a><i>! Over half the post is </i><a href="https://www.lesswrong.com/posts/zesWv9YipxY8pQFKu/socratic-persuasion-giving-opinionated-yet-truth-seeking#Worked_Examples"><i>eight optional case studies</i></a><i>. Thanks to Jemima Jones, Claude 4 Opus and Gemini 2.5 Pro for help copy-editing and drafting</i></p><p><strong>TL;DR</strong>: I recommend <strong>giving advice by asking questions</strong> to walk someone through key steps in my argument — often I’m missing key info, which comes up quickly as an unexpected answer, while if I’m right I’m more persuasive. This error correction makes it safer to give opinionated advice, without overconfidence. This is useful in a <a href="https://www.lesswrong.com/posts/zesWv9YipxY8pQFKu/socratic-persuasion-giving-opinionated-yet-truth-seeking#Worked_Examples">wide range of settings</a>, as a manager, managee, friend, and mentor, and is better for both parties, if you have the time and energy and are able to seriously engage with whether <i>you</i> are wrong.</p><h2 data-internal-id="Summary">Summary</h2><ul><li><strong>Socratic Persuasion:</strong> When trying to persuade someone, especially if giving advice, I much prefer the Socratic method over directly presenting my case. 
I take my argument/thought process and break it down into 1-3 key step/cruxes, <strong>reframe each step into a question</strong>, and ask them one at a time.<ul><li>If there’s disagreement, <strong>I’ll get an unexpected answer to a question</strong>, and can stop, ask follow-ups, understand why and pivot or adjust as needed</li></ul></li><li><strong>Being opinionated: </strong>There’s many ways to do this – one solution for giving truth-seeking advice is to be a coach<span class="footnote-reference" data-footnote-reference="" data-footnote-index="1" data-footnote-id="l4n25j1zy1" role="doc-noteref" id="fnrefl4n25j1zy1"><sup><a href="#fnl4n25j1zy1">[1]</a></sup></span>, asking questions to help the other person elicit their own thoughts, and trying to not express your own opinions. But <strong>opinionated advice can be extremely useful </strong>if done well, e.g. if I’m mentoring someone in an area I know well, and want to argue a specific case, but gently.<ul><li>The standard Socratic method tends to focus on open-ended, unopinionated questions, with Socratic persuasion it's fine to be opinionated.</li><li>For example, different ways to turn “this plan will fail for reason X” into a question<ul><li><strong>Coach: </strong>“what are the strongest arguments against this plan working?” or “suppose this plan failed – what went wrong?”</li><li><strong>Manager/mentor</strong>: “have you considered reason X?”</li><li>If I know them well and am comfortable being direct: “I think this plan fails for reason X – thoughts?”</li></ul></li></ul></li><li><strong>Humility is crucial</strong>: To do Socratic persuasion well you need to really internalise that <strong>you have a decent chance of being wrong</strong>. You need to care about <i>being </i>right, not about winning an argument.<ul><li>It’s not about going through the motions of asking questions so you can be more persua</li></ul></li></ul>... 
| The full post is long, but you can 80/20 the value with the 700 word summary! Over half the post is eight optional case studies. Thanks to Jemima Jones, Claude 4 Opus and Gemini 2.5 Pro for help copy-editing and drafting
TL;DR: I recommend giving advice by asking questions to walk someone through key steps in my argument — often I’m missing key info, which comes up quickly as an unexpected answer, while if I’m right I’m more persuasive. This error correction makes it safer to give opinionated advice, without overconfidence. This is useful in a wide range of settings, as a manager, managee, friend, and mentor, and is better for both parties, if you have the time and energy and are able to seriously engage with whether you are wrong.
Summary
* Socratic Persuasion: When trying to persuade someone, especially if giving advice, I much prefer the Socratic method over directly presenting my case. I take my argument/thought process and break it down into 1-3 key steps/cruxes, reframe each step into a question, and ask them one at a time.
* If there’s disagreement, I’ll get an unexpected answer to a question, and can stop, ask follow-ups, understand why and pivot or adjust as needed
* Being opinionated: There are many ways to do this – one solution for giving truth-seeking advice is to be a coach[1], asking questions to help the other person elicit their own thoughts, and trying not to express your own opinions. But opinionated advice can be extremely useful if done well, e.g. if I’m mentoring someone in an area I know well, and want to argue a specific case, but gently.
* The standard Socratic method tends to focus on open-ended, unopinionated questions; with Socratic persuasion, it's fine to be opinionated.
* For example, different ways to turn “this plan will fail for reason X” into a question
* Coach: “what are the strongest arguments against this plan working?” or “suppose this plan failed – what went wrong?”
* Manager/mentor: “have you considered re | 6,338 | 1.10.0 | Revision | true | true | hAcFgXa2juRDWYmsp | CrosspostOutput |
||
tGGPgazwd9MKJFbsa | beneath-psychology-case-study-on-chronic-pain-first-insights-1 | [Beneath Psychology] Case study on chronic pain: First insights, and the remaining challenge | null | false | false | false | null | JKdbpXHkv9AsuazJ3 | null | true | false | false | false | Post | null | 2025-05-26T17:29:03.317Z | null | false | false | 2 | 2 | 2025-05-26T19:48:56.438Z | false | false | post | [] | null | null | A5jaNgDWedpH5qsXv | 0 | 3 | 9 | false | 0.009487 | null | false | false | 2025-05-26T17:29:03.317Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 2 | 0 | 2025-05-25T17:59:04.584Z | false | false | norm-enforcing | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 13 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "kX6bqBzZx9iJTLxQc",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2025-06-03T10:27:19.534Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Chronic Pain",
"needsReview": false,
"noindex": false,
"postCount": 6,
"score": 0,
"shortName": null,
"slug": "chronic-pain",
"suggestedAsFilter": false,
"userId": "HHiJSvTEQkMx8ej62",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "fkABsGCJZ6y9qConW",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T06:06:46.947Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "MiuAZvbQcQ7ethgt3",
"displayName": "Viktor withaK"
},
{
"_id": "dRaCtsAWxk7sgirSY",
"displayName": "Jordan Morgan"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Practical",
"needsReview": false,
"noindex": false,
"postCount": 3411,
"score": 2,
"shortName": null,
"slug": "practical",
"suggestedAsFilter": true,
"userId": "oBSWiHjgproTiThmY",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 3 | 0 | 0 | 1 | 0 | JKdbpXHkv9AsuazJ3 | jimmy | 2009-02-27T18:23:27.410Z | jimmy | jimmy | null | null | null | 3,697 | 12 | false | false | null | null | 15 | 846 | 0 | 0 | 0 | 1 | 0 | qgdGA4ZEyW7zNdK84 | User | norm-enforcing | [
"mvf4xdfcGzPN8PsXM"
] | true | [
"trustLevel1",
"canModeratePersonal",
"alignmentVoters"
] | null | null | tGGPgazwd9MKJFbsa | https://res.cloudinary.com/lesswrong-2-0/image/upload/c_fill,ar_1.91,g_auto/SocialPreview/g3pqioa5zi7am5uarj0h | SocialPreviewType | A5jaNgDWedpH5qsXv | <p>In <a href="https://www.lesswrong.com/posts/JpY7uztEdTF7HSDrT/beneath-psychology-introduction-part-2-the-solution-and-what">the last post</a> I took the seemingly-naive stance that "pain is just information" and "can't actually be a problem", implying that painful situations can be dealt with without suffering by just looking through the pain towards the reality at which it points. I did give an example of how this allowed me to quickly resolve an episode of pain-induced-suffering when others couldn't, but that one is admittedly a bit of an outlier and most cases aren't so simple. Today, we're going to start looking at a case that isn't so simple.</p><p>I was talking to someone through private messages on a forum for a shared hobby, and his chronic pain issue came up. Rather than acute pain from a burned hand, this was incessant pain from a medical procedure gone wrong a year prior. This procedure resulted in nerve damage, and therefore pain that feels "like every kind of pain all at the same time". The pain was so intense in the first day that the nurse was crying from seeing how much he hurt. They ended up prescribing him so much pain killers that his teeth literally started falling apart. Even still, that amount of pain drugs was only enough "for a little bit until [his] body got used to it".</p><p>The question, is why he didn't just respond to the pain by saying "Oh wow, something is wrong. Okay. <i>What's wrong?</i>" -- and then, once he figured out the answer to that, pivoting to "Ooh, neat sensations that don't mean much of anything. Anyway..."</p><p>The obvious but mostly worthless answer is "Because it hurts!". 
It's easy to empathize because we've all been there, but until we learn to square that experience with the reality that we don't have to suffer just because of pain, we're not going to be able to help people reconnect to reality -- and might be those people unwittingly gas-lighting others into suffering unnecessarily.</p><p>So what <i>exactly</i> is stopping him (and ourselves, when we feel similarly) from simply facing reality? Where does this desire for pain killers come from, given that they aren't exactly healing his body? Why is he suffering, rather than simply mourning loss?</p><p>What do we have to say, and how do we have to say it, such that we can navigate from here to where he <i>can </i>and <i>does</i> simply face reality? Until we can use our understandings to navigate successfully, they're not worth much. So how do we do that?</p><p>Like the last challenge, we know that it's a solvable problem because it was in fact solv... </p> | In the last post I took the seemingly-naive stance that "pain is just information" and "can't actually be a problem", implying that painful situations can be dealt with without suffering by just looking through the pain towards the reality at which it points. I did give an example of how this allowed me to quickly resolve an episode of pain-induced-suffering when others couldn't, but that one is admittedly a bit of an outlier and most cases aren't so simple. Today, we're going to start looking at a case that isn't so simple.
I was talking to someone through private messages on a forum for a shared hobby, and his chronic pain issue came up. Rather than acute pain from a burned hand, this was incessant pain from a medical procedure gone wrong a year prior. This procedure resulted in nerve damage, and therefore pain that feels "like every kind of pain all at the same time". The pain was so intense on the first day that the nurse was crying from seeing how much he hurt. They ended up prescribing him so many painkillers that his teeth literally started falling apart. Even still, that amount of pain drugs was only enough "for a little bit until [his] body got used to it".
The question is why he didn't just respond to the pain by saying "Oh wow, something is wrong. Okay. What's wrong?" -- and then, once he figured out the answer to that, pivoting to "Ooh, neat sensations that don't mean much of anything. Anyway..."
The obvious but mostly worthless answer is "Because it hurts!". It's easy to empathize because we've all been there, but until we learn to square that experience with the reality that we don't have to suffer just because of pain, we're not going to be able to help people reconnect to reality -- and might be those people unwittingly gaslighting others into suffering unnecessarily.
So what exactly is stopping him (and ourselves, when we feel similarly) from simply facing reality? Where does this desire for pain killers come from, given that they aren't exac | 3,227 | 1.7.0 | Revision | false | null | null | CrosspostOutput |
|
hMHFKgX5uqD4PE59c | an-observation-on-self-play | An observation on self-play | null | false | false | false | null | WqJanCLbcf4JDNbvB | null | true | false | false | false | Post | null | 2025-05-26T17:22:01.236Z | null | false | false | 2 | 2 | 2025-05-26T19:49:06.219Z | false | false | post | [] | null | null | L5nEBt7YXyTZNEkgQ | 1 | 8 | 14 | false | 0.011807 | null | false | false | 2025-06-14T16:28:51.840Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 2 | 0 | 2025-05-26T15:00:12.829Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 3 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "H4n4rzs33JfEgkf8b",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-16T10:24:25.105Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "OpenAI",
"needsReview": false,
"noindex": false,
"postCount": 237,
"score": 0,
"shortName": null,
"slug": "openai",
"suggestedAsFilter": false,
"userId": "EQNTWXLKMeWMp2FQS",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "5f5c37ee1b5cdee568cfb2b5",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 10,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-09-11T19:58:52.614Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "s8h7CkveobCFq3zDi",
"displayName": "William Carlson"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Recursive Self-Improvement",
"needsReview": false,
"noindex": false,
"postCount": 77,
"score": 10,
"shortName": null,
"slug": "recursive-self-improvement",
"suggestedAsFilter": false,
"userId": "5wu9jG4pm9q6xjZ9R",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "Fi6SeJRGfJs3bp5se",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2016-01-24T21:08:05.000Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": true,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Reinforcement learning",
"needsReview": false,
"noindex": false,
"postCount": 204,
"score": 0,
"shortName": null,
"slug": "reinforcement-learning",
"suggestedAsFilter": false,
"userId": "2vpm465RWePSgvpTo",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "5f5c37ee1b5cdee568cfb297",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-09-11T19:58:52.554Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Superintelligence",
"needsReview": false,
"noindex": false,
"postCount": 159,
"score": 0,
"shortName": null,
"slug": "superintelligence",
"suggestedAsFilter": false,
"userId": "NRg5Bw8H2DCYTpmHE",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 8 | 0 | 0 | 1 | 0 | WqJanCLbcf4JDNbvB | jonrxu | 2025-05-26T15:00:06.358Z | jonrxu | jonrxu | null | null | null | 13 | 0 | false | false | null | null | 1 | 0 | 0 | 0 | 0 | 0.9 | 0 | qgdGA4ZEyW7zNdK84 | User | null | null | null | null | null | null | hMHFKgX5uqD4PE59c | https://res.cloudinary.com/lesswrong-2-0/image/upload/c_fill,ar_1.91,g_auto/SocialPreview/ggsxsz9fctv8ixfwuz31 | SocialPreviewType | L5nEBt7YXyTZNEkgQ | <p>At NeurIPS 2024, Ilya Sutskever delivered a short keynote address in honor of his <a href="https://arxiv.org/pdf/1409.3215"><u>Seq2seq paper</u></a>, published a decade earlier. It was his first—and so far only—public appearance to discuss his research since parting ways with OpenAI.</p><p>The talk itself shed little light on his current work. Instead, he reaffirmed the prevailing view that the “age of pre-training” had come to an end, touched on strategies researchers were pursuing to overcome this challenge, and outlined a broad vision of a super-intelligent AI future.</p><p>There was one interesting slide, however, which seemed oddly lodged in the middle of his presentation without much continuity with the rest of his talk. It was this:</p><figure class="image"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/hMHFKgX5uqD4PE59c/s5ankvhmpcw3c0kr2aap"></figure><p>Ilya first noted his interest in this slide from when he was just beginning his career, chronicling how he “went to Google to do research, to look for this graph.” The chart on its surface looks pretty straightforward: a linear relationship captures the ratio of animal brain to body mass, a rare example of “nature working out neatly.” The captivating part about the graph, Ilya narrates, is how the slope for the hominids is different—the steepness of the slope seems to suggest something qualitatively different about how humans evolved.</p><p>The implication for AI? 
There are multiple scaling laws in both nature and machine learning, and for the latter we’ve only just identified the first.</p><p>This reminded me of another <a href="https://www.youtube.com/watch?v=BJi6N4tDupk"><u>talk</u></a> he gave at NeurIPS 2017 on self-play. The younger Ilya still carried an air of mystique, like a scientific messiah reveling in his latest breakthrough. To OpenAI’s credit back then, he was far more transparent about his work. He outlined some research experiments done on self-play in video games (notably, OpenAI’s <a href="https://openai.com/index/openai-five-defeats-dota-2-world-champions/"><u>Dota 2</u></a> bot), as well as <a href="https://openai.com/index/competitive-self-play/">training bots</a> in physical simulations to do sumo wrestling and goaltending.</p><p>But, predictably, he also took the liberty to speculate into the long-term future of self-play. In particular, he closes with this slide:</p><figure class="image"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/hMHFKgX5uqD4PE59c/juyh4kpxkvkrbn2itxwx"></figure><p>The similarity between this and the 2024 version struck me. Not only the visual resemblance, but also the specific word choice he used in 2024 that mirrors what’s shown on the diagram: “Hominids… there’s a bunch of them. Homo habilis, maybe, and neanderthals.” He appears to be referencing the same pattern of rapidly scaling intelligence in the recent genetic ancestry of humans. <i>Why is this?</i> 2024 Ilya asks.</p><p>The 2017 slide seems to provide a p... </p> | At NeurIPS 2024, Ilya Sutskever delivered a short keynote address in honor of his Seq2seq paper, published a decade earlier. It was his first—and so far only—public appearance to discuss his research since parting ways with OpenAI.
The talk itself shed little light on his current work. Instead, he reaffirmed the prevailing view that the “age of pre-training” had come to an end, touched on strategies researchers were pursuing to overcome this challenge, and outlined a broad vision of a super-intelligent AI future.
There was one interesting slide, however, which seemed oddly lodged in the middle of his presentation without much continuity with the rest of his talk. It was this:
Ilya first noted his interest in this slide from when he was just beginning his career, chronicling how he “went to Google to do research, to look for this graph.” The chart on its surface looks pretty straightforward: a linear relationship captures the ratio of animal brain to body mass, a rare example of “nature working out neatly.” The captivating part about the graph, Ilya narrates, is how the slope for the hominids is different—the steepness of the slope seems to suggest something qualitatively different about how humans evolved.
The implication for AI? There are multiple scaling laws in both nature and machine learning, and for the latter we’ve only just identified the first.
This reminded me of another talk he gave at NeurIPS 2017 on self-play. The younger Ilya still carried an air of mystique, like a scientific messiah reveling in his latest breakthrough. To OpenAI’s credit back then, he was far more transparent about his work. He outlined some research experiments done on self-play in video games (notably, OpenAI’s Dota 2 bot), as well as training bots in physical simulations to do sumo wrestling and goaltending.
But, predictably, he also took the liberty to speculate into the long-term future of self-play. In particular, he closes with this slide:
The similarity between this an | 856 | 1.1.1 | Revision | false | null | null | CrosspostOutput |
|
nmaKpoHxmzjT8yXTk | new-website-analyzing-ai-companies-model-evals | New website analyzing AI companies' model evals | null | false | false | false | null | 4QFiQcHgf6hvtiLqF | null | true | false | false | false | Post | null | 2025-05-26T16:00:51.602Z | null | false | false | 2 | 2 | 2025-05-26T17:13:28.098Z | false | false | post | [] | null | null | ZJrBfDGffzXBsDmoi | 0 | 13 | 58 | false | 0.032203 | null | false | false | 2025-05-26T16:00:51.602Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 21 | 0 | 2025-05-26T15:44:24.869Z | false | false | easy-going | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 5 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 13 | 0 | 0 | 6 | 0 | 4QFiQcHgf6hvtiLqF | zach-stein-perlman | 2021-03-16T00:04:06.541Z | Zach Stein-Perlman | Zach Stein-Perlman | null | null | Zach Stein-Perlman | 9,609 | 321 | false | false | <p>AI strategy & governance. <a href="https://ailabwatch.org">ailabwatch.org</a>. <a href="https://ailabwatch.substack.com/">ailabwatch.substack.com</a>. </p> | null | null | 82 | 620 | 1 | 2 | 17 | 1 | 12 | r38pkCm7wF4M44MDQ | User | easy-going | null | true | [
"canModeratePersonal",
"alignmentVoters",
"trustLevel1",
"alignmentForum"
] | null | null | nmaKpoHxmzjT8yXTk | SocialPreviewType | ZJrBfDGffzXBsDmoi | <p>I'm making a website on AI companies' model evals for dangerous capabilities: <a href="https://www.aisafetyclaims.org/"><u>AI Safety Claims Analysis</u></a>. This is approximately the only analysis of companies' model evals, as far as I know. This site is in beta; I expect to add lots more content and improve the design in June. I'll add content on evals, but I also tentatively plan to expand from evals to evals and safeguards and safety cases (especially now that <a href="https://www.anthropic.com/news/activating-asl3-protections"><u>a company has said its safeguards are load-bearing for safety</u></a>!).</p><p>Some cherry-picked bad stuff I noticed when I read the most recent model card from each company (except Claude 3.7 rather than Claude 4) below, excerpted/adapted from an earlier version of the site.</p><hr><p><strong>OpenAI:</strong> OpenAI <a href="https://cdn.openai.com/pdf/2221c875-02dc-4789-800b-e7758f3722c1/o3-and-o4-mini-system-card.pdf#page=12"><u>says</u></a> its models don't meaningfully uplift novices in creating biothreats. But it provides no justification for this claim, and its evals suggest that the models are more capable than<strong> </strong>human experts.</p><blockquote><p>several of our biology evaluations indicate our models are on the cusp of being able to meaningfully help novices create known biological threats, which would cross our high risk threshold.</p></blockquote><p>OpenAI doesn't say how it concludes this (or what results would change its mind or anything about how it thinks eval results translate to uplift). It reports results from 4 knowledge and troubleshooting bio evals. On the first, o3 does well and OpenAI observes "this evaluation is reaching saturation." On the rest, OpenAI matches or substantially outperforms the expert human baseline. 
These results seem to suggest that o3 does have dangerous bio capabilities; they certainly don't seem to rule it out.</p><p> </p><p><strong>Anthropic:</strong> Anthropic <a href="https://assets.anthropic.com/m/785e231869ea8b3b/original/claude-3-7-sonnet-system-card.pdf#page=29"><u>claims</u></a> to have shown that Claude 3.7 Sonnet can't do "2-8 hour software engineering tasks." But the model seems to be substantially under-elicited on at least one eval and likely more, such that the results are not meaningful. Also, Anthropic doesn't discuss how eval performance relates to dangerous capabilities, except for one eval, where the threshold is too high given that Anthropic uses pass@1.</p><p>Anthropic reports results on a subset of RE-Bench. On this subset, Anthropic got 3.7 Sonnet to score 24% and 3.6 Sonnet to score 21%, but METR previously got 3.6 Sonnet to score 51%. The improvement from 3.6 Sonnet to 3.7 Sonnet is tiny compared to the effect of better elicitation! Anthropic does not offer interpretation or mention thresholds besides the 100% baselin... </p> | I'm making a website on AI companies' model evals for dangerous capabilities: AI Safety Claims Analysis. This is approximately the only analysis of companies' model evals, as far as I know. This site is in beta; I expect to add lots more content and improve the design in June. I'll add content on evals, but I also tentatively plan to expand from evals to evals and safeguards and safety cases (especially now that a company has said its safeguards are load-bearing for safety!).
Some cherry-picked bad stuff I noticed when I read the most recent model card from each company (except Claude 3.7 rather than Claude 4) below, excerpted/adapted from an earlier version of the site.
----------------------------------------
OpenAI: OpenAI says its models don't meaningfully uplift novices in creating biothreats. But it provides no justification for this claim, and its evals suggest that the models are more capable than human experts.
> several of our biology evaluations indicate our models are on the cusp of being able to meaningfully help novices create known biological threats, which would cross our high risk threshold.
OpenAI doesn't say how it concludes this (or what results would change its mind or anything about how it thinks eval results translate to uplift). It reports results from 4 knowledge and troubleshooting bio evals. On the first, o3 does well and OpenAI observes "this evaluation is reaching saturation." On the rest, OpenAI matches or substantially outperforms the expert human baseline. These results seem to suggest that o3 does have dangerous bio capabilities; they certainly don't seem to rule it out.
Anthropic: Anthropic claims to have shown that Claude 3.7 Sonnet can't do "2-8 hour software engineering tasks." But the model seems to be substantially under-elicited on at least one eval and likely more, such that the results are not meaningful. Also, Anthropic doesn't discuss how eval performance relates to dangerous capabilities, except for one eval, whe | 1,131 | 1.2.0 | Revision | false | null | null | CrosspostOutput |
|
4kwyC8ZqGZLATezri | new-scorecard-evaluating-ai-companies-on-safety | New scorecard evaluating AI companies on safety | null | false | false | false | null | 4QFiQcHgf6hvtiLqF | null | true | false | false | false | Post | null | 2025-05-26T16:00:15.629Z | null | false | false | 2 | 2 | 2025-05-26T17:13:52.873Z | false | false | post | [] | null | null | YfqXDwoxidJFWzyWw | 8 | 24 | 72 | false | 0.038832 | null | false | false | 2025-05-29T19:33:58.286Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | qgdGA4ZEyW7zNdK84 | null | null | null | false | null | [] | null | 30 | 0 | 2025-05-26T15:44:03.707Z | false | false | easy-going | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 1 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 24 | 0 | 0 | 14 | 0 | 4QFiQcHgf6hvtiLqF | zach-stein-perlman | 2021-03-16T00:04:06.541Z | Zach Stein-Perlman | Zach Stein-Perlman | null | null | Zach Stein-Perlman | 9,609 | 321 | false | false | <p>AI strategy & governance. <a href="https://ailabwatch.org">ailabwatch.org</a>. <a href="https://ailabwatch.substack.com/">ailabwatch.substack.com</a>. </p> | null | null | 82 | 620 | 1 | 2 | 17 | 1 | 12 | r38pkCm7wF4M44MDQ | User | easy-going | null | true | [
"canModeratePersonal",
"alignmentVoters",
"trustLevel1",
"alignmentForum"
] | null | null | 4kwyC8ZqGZLATezri | SocialPreviewType | YfqXDwoxidJFWzyWw | <p>The new scorecard is on my website, <a href="https://ailabwatch.org/"><u>AI Lab Watch</u></a>. This replaces my old scorecard. I redid the content from scratch; it's now up-to-date and higher-quality. I'm also happy with the scorecard's structure: you can click on rows, columns, and cells and zoom in to various things. <a href="https://ailabwatch.org/"><u>Check it out!</u></a> Thanks to Lightcone for designing the site.</p><p>While it is a scorecard, I don't feel great about the numbers; I mostly see it as a collection of information.</p> | The new scorecard is on my website, AI Lab Watch. This replaces my old scorecard. I redid the content from scratch; it's now up-to-date and higher-quality. I'm also happy with the scorecard's structure: you can click on rows, columns, and cells and zoom in to various things. Check it out! Thanks to Lightcone for designing the site.
While it is a scorecard, I don't feel great about the numbers; I mostly see it as a collection of information. | 78 | 1.2.0 | Revision | false | null | null | CrosspostOutput |