Datasets:
Dataset Viewer
_id
stringlengths 0
24
| slug
stringlengths 0
132
| title
stringlengths 0
313
| draft
null | shortform
bool 1
class | hideCommentKarma
bool 1
class | af
bool 2
classes | currentUserReviewVote
null | userId
stringlengths 17
24
| coauthorStatuses
listlengths 0
18
⌀ | hasCoauthorPermission
bool 2
classes | rejected
bool 1
class | debate
bool 2
classes | collabEditorDialogue
bool 2
classes | __typename
stringclasses 1
value | url
stringlengths 0
432
⌀ | postedAt
stringdate 2007-06-22 22:30:00
2025-06-28 01:40:04
| createdAt
null | sticky
bool 2
classes | metaSticky
bool 2
classes | stickyPriority
int64 2
2
| status
int64 2
2
| frontpageDate
stringdate 2018-01-30 00:32:03
2025-06-28 02:24:31
⌀ | meta
bool 2
classes | deletedDraft
bool 1
class | postCategory
stringclasses 3
values | shareWithUsers
sequencelengths 0
23
| sharingSettings
float64 | linkSharingKey
null | contents_latest
stringlengths 17
24
⌀ | commentCount
int64 0
2k
| voteCount
int64 -59
922
| baseScore
int64 -10
945
| unlisted
bool 1
class | score
float64 -0
5.05
| lastVisitedAt
null | isFuture
bool 1
class | isRead
bool 1
class | lastCommentedAt
stringdate 2007-08-06 20:29:51
2025-06-28 14:23:54
| lastCommentPromotedAt
stringclasses 21
values | canonicalCollectionSlug
stringclasses 4
values | curatedDate
stringclasses 691
values | commentsLocked
bool 2
classes | commentsLockedToAccountsCreatedAfter
stringclasses 1
value | question
bool 2
classes | hiddenRelatedQuestion
bool 1
class | originalPostRelationSourceId
stringclasses 46
values | location
null | googleLocation
null | onlineEvent
bool 1
class | globalEvent
bool 1
class | startTime
null | endTime
null | localStartTime
null | localEndTime
null | eventRegistrationLink
null | joinEventLink
null | facebookLink
stringclasses 1
value | meetupLink
null | website
stringclasses 1
value | contactInfo
stringclasses 1
value | isEvent
bool 1
class | eventImageId
null | eventType
null | types
sequencelengths 0
2
⌀ | groupId
stringclasses 106
values | reviewedByUserId
stringclasses 19
values | suggestForCuratedUserIds
null | suggestForCuratedUsernames
null | reviewForCuratedUserId
stringclasses 12
values | authorIsUnreviewed
bool 1
class | afDate
stringclasses 590
values | suggestForAlignmentUserIds
sequencelengths 0
4
| reviewForAlignmentUserId
stringclasses 6
values | afBaseScore
float64 -21
217
⌀ | afCommentCount
int64 0
149
| afLastCommentedAt
stringdate 2007-06-26 21:13:26
2025-06-28 01:40:04
⌀ | afSticky
bool 2
classes | hideAuthor
bool 2
classes | moderationStyle
stringclasses 4
values | ignoreRateLimits
bool 2
classes | submitToFrontpage
bool 2
classes | onlyVisibleToLoggedIn
bool 1
class | onlyVisibleToEstablishedAccounts
bool 2
classes | reviewCount
int64 0
8
| reviewVoteCount
int64 0
115
| positiveReviewVoteCount
int64 0
98
| manifoldReviewMarketId
stringclasses 900
values | annualReviewMarketProbability
float64 0.01
0.99
⌀ | annualReviewMarketIsResolved
bool 2
classes | annualReviewMarketYear
float64 2.02k
2.03k
⌀ | annualReviewMarketUrl
stringclasses 900
values | group
float64 | podcastEpisodeId
stringclasses 396
values | forceAllowType3Audio
bool 1
class | nominationCount2019
int64 0
6
| reviewCount2019
int64 0
6
| votingSystem
stringclasses 2
values | disableRecommendation
bool 2
classes | coauthors
listlengths 0
18
| readTimeMinutes
int64 1
315
| rejectedReason
stringclasses 12
values | customHighlight
float64 | lastPromotedComment
float64 | bestAnswer
float64 | tags
listlengths 0
31
| feedId
stringclasses 45
values | totalDialogueResponseCount
int64 0
0
| unreadDebateResponseCount
int64 0
0
| dialogTooltipPreview
stringclasses 6
values | disableSidenotes
bool 2
classes | currentUserVote
null | currentUserExtendedVote
null | extendedScore.agreement
float64 -6
2
⌀ | extendedScore.approvalVoteCount
float64 1
922
⌀ | extendedScore.agreementVoteCount
float64 0
1
⌀ | afExtendedScore.agreement
float64 -6
2
⌀ | afExtendedScore.approvalVoteCount
float64 0
175
⌀ | afExtendedScore.agreementVoteCount
float64 0
1
⌀ | user._id
stringlengths 17
24
⌀ | user.slug
stringlengths 2
40
⌀ | user.createdAt
stringdate 2009-02-17 05:49:50
2025-06-26 13:32:01
⌀ | user.username
stringlengths 1
64
⌀ | user.displayName
stringlengths 1
43
⌀ | user.profileImageId
float64 | user.previousDisplayName
float64 | user.fullName
stringclasses 979
values | user.karma
float64 -1,560
150k
⌀ | user.afKarma
float64 -63
6.7k
⌀ | user.deleted
bool 1
class | user.isAdmin
bool 2
classes | user.htmlBio
stringlengths 0
9.48k
⌀ | user.jobTitle
float64 | user.organization
float64 | user.postCount
float64 0
1.02k
⌀ | user.commentCount
float64 0
16.1k
⌀ | user.sequenceCount
float64 0
40
⌀ | user.afPostCount
float64 -4
364
⌀ | user.afCommentCount
float64 0
1.39k
⌀ | user.spamRiskScore
float64 0
1
⌀ | user.tagRevisionCount
float64 0
3.8k
⌀ | user.reviewedByUserId
stringclasses 18
values | user.__typename
stringclasses 1
value | user.moderationStyle
stringclasses 4
values | user.bannedUserIds
sequencelengths 0
6
⌀ | user.moderatorAssistance
bool 2
classes | user.groups
sequencelengths 0
289
⌀ | user.banned
stringclasses 30
values | user.allCommentingDisabled
float64 | socialPreviewData._id
stringlengths 0
24
| socialPreviewData.imageUrl
stringlengths 0
149k
| socialPreviewData.__typename
stringclasses 1
value | contents._id
stringlengths 17
24
⌀ | contents.htmlHighlight
stringlengths 0
2.31M
⌀ | contents.plaintextDescription
stringlengths 0
2k
⌀ | contents.wordCount
float64 0
78.7k
⌀ | contents.version
stringclasses 299
values | contents.__typename
stringclasses 1
value | fmCrosspost.isCrosspost
bool 2
classes | fmCrosspost.hostedHere
bool 2
classes | fmCrosspost.foreignPostId
stringlengths 17
17
⌀ | fmCrosspost.__typename
stringclasses 1
value |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
vHDowQtsiy2xK38H4 | axrp-episode-44-peter-salib-on-ai-rights-for-human-safety | AXRP Episode 44 - Peter Salib on AI Rights for Human Safety | null | false | false | true | null | DgsGzjyBXN8XSK22q | null | true | false | false | false | Post | null | 2025-06-28T01:40:04.658Z | null | false | false | 2 | 2 | 2025-06-28T02:24:18.106Z | false | false | post | [] | null | null | HQHcrmhniucS97wzD | 0 | 1 | 8 | false | 0.824727 | null | false | false | 2025-06-28T01:40:04.658Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | null | null | EQNTWXLKMeWMp2FQS | null | null | null | false | null | [] | null | 6 | 0 | 2025-06-28T01:40:04.658Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 124 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "qHDus5MuMNqQxJbjD",
"adminOnly": false,
"afBaseScore": 4,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "oEF4gToHRPEMw4FSo",
"displayName": "Jono"
}
]
},
"baseScore": 11,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-08-09T18:31:56.709Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "oEF4gToHRPEMw4FSo",
"displayName": "Jono"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI Governance",
"needsReview": false,
"noindex": false,
"postCount": 726,
"score": 11,
"shortName": null,
"slug": "ai-governance",
"suggestedAsFilter": false,
"userId": "QBvPFLFyZyuHcBwFm",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "vjKs7Pvz3MbgMc75C",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2021-03-26T12:39:55.451Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Audio",
"needsReview": false,
"noindex": false,
"postCount": 125,
"score": 0,
"shortName": null,
"slug": "audio",
"suggestedAsFilter": false,
"userId": "BpBzKEueak7J8vHNi",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "8Ec9rD286qNstoiGH",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2021-12-23T10:49:50.259Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AXRP",
"needsReview": false,
"noindex": false,
"postCount": 60,
"score": 9,
"shortName": null,
"slug": "axrp",
"suggestedAsFilter": false,
"userId": "4Kn3eZCPNB8gw4YSi",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "chuP2QqQycjD8qakL",
"adminOnly": false,
"afBaseScore": 9,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 19,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-22T03:42:53.917Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 1000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Coordination / Cooperation",
"needsReview": false,
"noindex": false,
"postCount": 306,
"score": 19,
"shortName": null,
"slug": "coordination-cooperation",
"suggestedAsFilter": false,
"userId": "qgdGA4ZEyW7zNdK84",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "9DNZfxFvY5iKoZQbz",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2019-11-13T22:47:01.189Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Interviews",
"needsReview": false,
"noindex": false,
"postCount": 120,
"score": 0,
"shortName": null,
"slug": "interviews",
"suggestedAsFilter": false,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "wGGAjTfXZBatQkft5",
"adminOnly": false,
"afBaseScore": 7,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "sKAL2jzfkYkDbQmx9",
"displayName": "Yoav Ravid"
}
]
},
"baseScore": 17,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-08-09T09:26:08.406Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "sKAL2jzfkYkDbQmx9",
"displayName": "Yoav Ravid"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Law and Legal systems",
"needsReview": false,
"noindex": false,
"postCount": 101,
"score": 17,
"shortName": null,
"slug": "law-and-legal-systems",
"suggestedAsFilter": false,
"userId": "QBvPFLFyZyuHcBwFm",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "BhfefamXXee6c2CH8",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-11T06:51:38.152Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Transcripts",
"needsReview": false,
"noindex": false,
"postCount": 78,
"score": 0,
"shortName": null,
"slug": "transcripts",
"suggestedAsFilter": false,
"userId": "qgdGA4ZEyW7zNdK84",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | n7xK8hmp8XmQLWrNT | 0 | 0 | null | false | null | null | 0 | 1 | 0 | 0 | 1 | 0 | DgsGzjyBXN8XSK22q | danielfilan | 2014-01-30T11:04:39.341Z | DanielFilan | DanielFilan | null | null | null | 8,823 | 1,852 | false | false | null | null | 150 | 1,377 | 1 | 26 | 353 | 1 | 8 | r38pkCm7wF4M44MDQ | User | easy-going | null | true | [
"alignmentForum",
"trustLevel1",
"alignmentVoters",
"canModeratePersonal",
"tagManager"
] | null | null | vHDowQtsiy2xK38H4 | SocialPreviewType | HQHcrmhniucS97wzD | <p><a href="https://youtu.be/IK079sBIq2c">YouTube link</a></p><p>In this episode, I talk with Peter Salib about his paper “AI Rights for Human Safety”, arguing that giving AIs the right to contract, hold property, and sue people will reduce the risk of their trying to attack humanity and take over. He also tells me how law reviews work, in the face of my incredulity.</p><p>Topics we discuss:</p>
<ul>
<li><a href="#why-ai-rights">Why AI rights</a></li>
<li><a href="#why-not-reputation">Why not reputation</a></li>
<li><a href="#rights-to-war">Do AI rights lead to AI war?</a></li>
<li><a href="#scope-for-trade">Scope for human-AI trade</a></li>
<li><a href="#conc-w-comp-adv">Concerns with comparative advantage</a></li>
<li><a href="#proxy-ai-wars">Proxy AI wars</a></li>
<li><a href="#rights-and-profit">Can companies profitably make AIs with rights?</a></li>
<li><a href="#rights-and-safety">Can we have AI rights and AI safety measures?</a></li>
<li><a href="#liability-ai-rights">Liability for AIs with rights</a></li>
<li><a href="#which-ais-get-rights">Which AIs get rights?</a></li>
<li><a href="#ai-rights-and-sgd">AI rights and stochastic gradient descent</a></li>
<li><a href="#individuating-ais">Individuating “AIs”</a></li>
<li><a href="#social-insts-for-ai-safety">Social institutions for AI safety</a></li>
<li><a href="#outer-misalignment-and-trading">Outer misalignment and trading with AIs</a></li>
<li><a href="#why-statutes-of-lims">Why statutes of limitations should exist</a></li>
<li><a href="#starting-aixr-in-academia">Starting AI x-risk research in legal academia</a></li>
<li><a href="#how-law-revs-and-ai-confs-work">How law reviews and AI conferences work</a></li>
<li><a href="#more-on-peter-moving-aixr">More on Peter moving to AI x-risk research</a></li>
<li><a href="#reception">Reception of the paper</a></li>
<li><a href="#what-publishing-in-law-revs-does">What publishing in law reviews does</a></li>
<li><a href="#which-bits-legal-ac-focus-on-ai">Which parts of legal academia focus on AI</a></li>
<li><a href="#following-peters-research">Following Peter’s research</a></li>
</ul>
<p><strong>Daniel Filan</strong> (00:00:09):
Hello, everybody. In this episode I’ll be speaking with Peter Salib. Peter is a law professor at the University of Houston, the co-director for the <a href="https://clair-ai.org/">Center for Law and AI Risk</a>, and he serves as law and policy advisor for the <a href="https://safe.ai/">Center for AI Safety</a>. There’s a transcript of this episode at <a href="https://axrp.net/">axrp.net</a> and links to papers we discuss are available in the description. You can support the podcast at <a href="https://patreon.com/axrpodcast">patreon.com/axrpodcast</a>, or give me feedback about this episode at <a href="axrp.fyi">axrp.fyi</a>. Well, let’s continue to the interview.</p><p>(00:00:35):
Well, Peter, welcome to the podcast.</p><p><strong>Peter Salib</strong> (00:00:38):
Thank you so much for having me. I’m a big fan.</p>
<h2>Why AI rights <a name="why-ai-rights"></a></h2>
<p><strong>Daniel Filan</strong> (00:00:40):
So I guess, probably, we’re going to focus a lot on your recent paper, <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4913167">“AI Rights for Human Safety”</a>. So you wrote this, yourself and <a href="https://www.simondgoldstein.com/">Simon Goldstein</a>. So can you tell us, just to start off with, what’s the basic idea of this paper?</p><p><strong>Peter Salib</strong> (00:00:56):
Yeah, I think at a very high level, one intuition that we’re trying to pump is the idea that how AIs treat us - and by AIs we mean something like post-AGI AIs, really agentic, quite capable of things - so how those AIs treat us will depend to some significant extent on how we treat them. And a big part of how we decide how to treat various kinds of entit... </p> | YouTube link
In this episode, I talk with Peter Salib about his paper “AI Rights for Human Safety”, arguing that giving AIs the right to contract, hold property, and sue people will reduce the risk of their trying to attack humanity and take over. He also tells me how law reviews work, in the face of my incredulity.
Topics we discuss:
* Why AI rights
* Why not reputation
* Do AI rights lead to AI war?
* Scope for human-AI trade
* Concerns with comparative advantage
* Proxy AI wars
* Can companies profitably make AIs with rights?
* Can we have AI rights and AI safety measures?
* Liability for AIs with rights
* Which AIs get rights?
* AI rights and stochastic gradient descent
* Individuating “AIs”
* Social institutions for AI safety
* Outer misalignment and trading with AIs
* Why statutes of limitations should exist
* Starting AI x-risk research in legal academia
* How law reviews and AI conferences work
* More on Peter moving to AI x-risk research
* Reception of the paper
* What publishing in law reviews does
* Which parts of legal academia focus on AI
* Following Peter’s research
Daniel Filan (00:00:09): Hello, everybody. In this episode I’ll be speaking with Peter Salib. Peter is a law professor at the University of Houston, the co-director for the Center for Law and AI Risk, and he serves as law and policy advisor for the Center for AI Safety. There’s a transcript of this episode at axrp.net and links to papers we discuss are available in the description. You can support the podcast at patreon.com/axrpodcast, or give me feedback about this episode at axrp.fyi. Well, let’s continue to the interview.
(00:00:35): Well, Peter, welcome to the podcast.
Peter Salib (00:00:38): Thank you so much for having me. I’m a big fan.
Why AI rights
Daniel Filan (00:00:40): So I guess, probably, we’re going to focus a lot on your recent paper, “AI Rights for Human Safety”. So you wrote this, yourself and Simon Goldstein. So can you tell us, just to star | 30,917 | 1.1.0 | Revision | false | null | null | CrosspostOutput |
||
ccz9fJMhRGqApYeBe | the-most-famous-lunchtime-question-in-the-history-of-the | The Most Famous Lunchtime Question in the History of the World | null | false | false | false | null | HbpRGu2epBkGtacaT | null | true | false | false | false | Post | null | 2025-06-28T01:16:31.404Z | null | false | false | 2 | 2 | 2025-06-28T02:24:30.213Z | false | false | post | [] | null | null | bwjTbiQnAjyPnDfHB | 3 | 2 | -1 | false | 0.413965 | null | false | false | 2025-06-28T13:49:14.814Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | EQNTWXLKMeWMp2FQS | null | null | null | false | null | [] | null | 0 | 0 | 2025-06-27T04:55:15.975Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 15 | null | null | null | null | [] | null | 0 | 0 | null | false | null | null | 0 | 2 | 0 | 0 | 0 | 0 | HbpRGu2epBkGtacaT | kiwiscribe | 2025-06-24T04:50:10.544Z | Kiwiscribe | Kiwiscribe | null | null | null | -2 | 0 | false | false | null | null | 1 | 1 | 0 | 0 | 0 | 0.8 | 0 | EQNTWXLKMeWMp2FQS | User | null | null | null | null | null | null | ccz9fJMhRGqApYeBe | SocialPreviewType | bwjTbiQnAjyPnDfHB | <p>I have an Enthusiastic Layperson's Moderate Obsession (yes, it's an ELMO) with what may be the most consequential lunchtime question in scientific history.</p><p>In 1950, physicist Enrico Fermi was having lunch with colleagues at Los Alamos when the conversation turned to UFOs and the possibility of extraterrestrial life. They were discussing the vast number of stars and the probability that some should harbor intelligent civilizations - the sort of conversation that may have been routine among physicists but is quite dissimilar to my lunchtime conversations of why my socks don't match and whether loaded fries constitute a salad.</p><p>And then Fermi posed the question that triggered my ELMO for the better part of my adult life: "Where is everybody?"</p><p>The numbers alone feel beyond real human comprehension - I know what the numbers <i>mean</i> but do I genuinely understand the scope and scale? I think probably not. We're looking at 200-400 billion stars in our galaxy, and roughly 70 sextillion stars in the observable universe. Even granting extremely conservative values to the Drake Equation² (not an ELMO, but a Very Interesting Thing), the numbers strongly suggest we should have detected thousands of civilizations by now.</p><p>We haven't. Not one.</p><p>After seven decades of increasingly sophisticated searches, we've found complete silence. This isn't merely an absence of evidence - it's a systematic void that spans the entire observable cosmos. It's rather like surveying a metropolis that should contain millions of inhabitants and finding it not just quiet, but empty.</p><p>So, taking the above into account, over the past ten years I have been germinating some ideas of my own (yes I am as skeptical of this development as you most probably are). I've started to think this puzzle may have a solution - and we may be living through the answer right now. If I'm correct, the implications are both sobering and, perhaps surprisingly, hopeful.</p><p>But first, let me explain why every proposed solution to this mystery has struck me as fundamentally inadequate.</p><h2>The Inadequacy of Standard Explanations</h2><p>For seventy years, scientists, public intellectuals, economists (good lord! Economists!), philosophers and other worthies have offered various solutions to the Fermi Paradox. But I have felt that every explanation suffers from problems that should trouble anyone committed to rigorous thinking.</p><p>Take the Zoo Hypothesis - th... </p> | I have an Enthusiastic Layperson's Moderate Obsession (yes, it's an ELMO) with what may be the most consequential lunchtime question in scientific history.
In 1950, physicist Enrico Fermi was having lunch with colleagues at Los Alamos when the conversation turned to UFOs and the possibility of extraterrestrial life. They were discussing the vast number of stars and the probability that some should harbor intelligent civilizations - the sort of conversation that may have been routine among physicists but is quite dissimilar to my lunchtime conversations of why my socks don't match and whether loaded fries constitute a salad.
And then Fermi posed the question that triggered my ELMO for the better part of my adult life: "Where is everybody?"
The numbers alone feel beyond real human comprehension - I know what the numbers mean but do I genuinely understand the scope and scale? I think probably not. We're looking at 200-400 billion stars in our galaxy, and roughly 70 sextillion stars in the observable universe. Even granting extremely conservative values to the Drake Equation² (not an ELMO, but a Very Interesting Thing), the numbers strongly suggest we should have detected thousands of civilizations by now.
We haven't. Not one.
After seven decades of increasingly sophisticated searches, we've found complete silence. This isn't merely an absence of evidence - it's a systematic void that spans the entire observable cosmos. It's rather like surveying a metropolis that should contain millions of inhabitants and finding it not just quiet, but empty.
So, taking the above into account, over the past ten years I have been germinating some ideas of my own (yes I am as skeptical of this development as you most probably are). I've started to think this puzzle may have a solution - and we may be living through the answer right now. If I'm correct, the implications are both sobering and, perhaps surprisingly, hopeful.
But first, let me explain why every proposed solution to th | 3,725 | 1.1.0 | Revision | false | null | null | CrosspostOutput |
||
QbmZuCBE2uWS7j28R | prediction-markets-have-an-anthropic-bias-to-deal-with | Prediction Markets Have an Anthropic Bias to Deal With | null | false | false | false | null | ExZDMq64w4am7mjhZ | null | true | false | false | false | Post | null | 2025-06-28T01:16:05.041Z | null | false | false | 2 | 2 | 2025-06-28T02:24:32.478Z | false | false | post | [] | null | null | Aw67kbG2xf5R4gg33 | 1 | 4 | 4 | false | 0.635615 | null | false | false | 2025-06-28T01:31:34.448Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | EQNTWXLKMeWMp2FQS | null | null | null | false | null | [] | null | 0 | 0 | 2025-06-26T16:13:25.821Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 14 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "PbShukhzpLsWpGXkM",
"adminOnly": false,
"afBaseScore": 9,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 19,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-09T10:45:33.205Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Anthropics",
"needsReview": false,
"noindex": false,
"postCount": 273,
"score": 19,
"shortName": null,
"slug": "anthropics",
"suggestedAsFilter": false,
"userId": "qxJ28GN72aiJu96iF",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "oNcqyaWPXNGTTRPHm",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2016-12-23T09:11:59.000Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": true,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Existential risk",
"needsReview": false,
"noindex": false,
"postCount": 515,
"score": 0,
"shortName": null,
"slug": "existential-risk",
"suggestedAsFilter": false,
"userId": "7iXcndyHDvmt77ggr",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "R6dqPii4cyNpuecLt",
"adminOnly": false,
"afBaseScore": 9,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 19,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-01-14T03:06:53.703Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Prediction Markets",
"needsReview": false,
"noindex": false,
"postCount": 171,
"score": 19,
"shortName": null,
"slug": "prediction-markets",
"suggestedAsFilter": false,
"userId": "nLbwLhBaQeG6tCNDN",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 4 | 0 | 0 | 0 | 0 | ExZDMq64w4am7mjhZ | ar-sht | 2023-05-11T22:22:45.636Z | ar-sht | ar-sht | null | null | Ari Shtein | 3 | 0 | false | false | <p>Writing at <a href="https://mistakesweremade.substack.com/">mistakesweremade.substack.com</a></p> | null | null | 1 | 0 | 0 | 0 | 0 | 0.9 | 0 | EQNTWXLKMeWMp2FQS | User | null | null | null | null | null | null | QbmZuCBE2uWS7j28R | SocialPreviewType | Aw67kbG2xf5R4gg33 | <p><i><strong>TL;DR</strong>: Whether </i><a href="https://nickbostrom.com/papers/anthropicshadow.pdf"><i>Ćirković et al.'s Anthropic Shadow</i></a><i> is philosophically legitimate or not, it contains a totally relevant intuition for profiting off prediction markets: <strong>if you're dead when the taxman comes, it doesn't matter how much you owe</strong>. As such, there's either an opportunity to make money by betting against x-risk-correlated events, or (once everyone's figured this out) prediction markets will systematically underestimate x-risk-correlated outcomes. In the post, I summarize a bit of the general debate about the Anthropic Shadow; then I present a toy model for the effect described, and a generalized version. I also list some candidate x-risk-correlated markets that could be vulnerable to the Anthropic Bias. Finally, I bring up and address some limitations I could think of—probably there are lots more, and I'd love to hear what they are!</i></p><hr><h1>What Am I Even Talking About?</h1><p>In 2010, a group of astronomers and philosophers (including Nick Bostrom) <a href="https://nickbostrom.com/papers/anthropicshadow.pdf">published a paper</a> arguing that we might be underestimating some kinds of existential risk, because we weren't taking into consideration the fact that we're all alive.</p><p>For example, if a very-deadly asteroid strike had ever occurred, it would probably have killed everyone—leaving no one to record it or study the frequency of deadly asteroid strikes. As a result, our own frequency-estimates of asteroid strikes may well be systematically biased, because we can only see asteroid-strike-histories compatible with continued existence.</p><p>In other words, all the counterfactual realities where catastrophic events <i>have</i> played out cast an "Anthropic Shadow" over our risk estimates: no observers exist within them, so we can never really see a fully-truly-representative sample.</p><p>Implicit in the paper is an acceptance of Bostrom's Self-Sampling Assumption (SSA), which holds that you should reason as if your existence was plucked randomly from the set of all existing observers, past, present, and future. The SSA is also behind Bostrom's earlier <a href="https://www.lesswrong.com/w/doomsday-argument">Doomsday Argument</a>, which says that it's <i>really surprising</i> how early in the universe we all exist—how it's strange that we're <i>here now</i>, when there are so few people around, and how that implies that we probably don't have a very vast future full of observers ahead of us.</p><p>There's been a <i>ton</i> of debate in these parts on whether the <a href="https://www.lesswrong.com/posts/YgSKfAG2iY5Sxw7Xd/doomsday-argument-and-the-false-dilemma-of-anthropic">Doomsday Argument is masturbatory nonsense</a>, whether the <a href="https://www.lesswrong.com/posts/LGHuaLiq3F5NHQXXF/anthropically-blind-the-anthropic-shadow-is-reflectively">Anthropic Shadow is mas</a>... <style>.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math * {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > * {position: absolute}
.MJXc-bevelled > * {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > * {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > * {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-surd + .mjx-box {display: inline-flex}
.mjx-mphantom * {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax_AMS'), local('MathJax_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_AMS-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax_Caligraphic Bold'), local('MathJax_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Caligraphic-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax_Fraktur'), local('MathJax_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Fraktur-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax_Fraktur Bold'), local('MathJax_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Fraktur-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax_Math BoldItalic'), local('MathJax_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Math-BoldItalic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax_SansSerif'), local('MathJax_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax_SansSerif Bold'), local('MathJax_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax_SansSerif Italic'), local('MathJax_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax_Script'), local('MathJax_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Script-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax_Typewriter'), local('MathJax_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Typewriter-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax_Caligraphic'), local('MathJax_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Caligraphic-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax_Main Bold'), local('MathJax_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax_Main Italic'), local('MathJax_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax_Main'), local('MathJax_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax_Math Italic'), local('MathJax_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Math-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax_Size1'), local('MathJax_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size1-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax_Size2'), local('MathJax_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size2-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax_Size3'), local('MathJax_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size3-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax_Size4'), local('MathJax_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size4-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax_Vector'), local('MathJax_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Vector-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax_Vector Bold'), local('MathJax_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Vector-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Vector-Bold.otf') format('opentype')}
</style></p> | TL;DR: Whether Ćirković et al.'s Anthropic Shadow is philosophically legitimate or not, it contains a totally relevant intuition for profiting off prediction markets: if you're dead when the taxman comes, it doesn't matter how much you owe. As such, there's either an opportunity to make money by betting against x-risk-correlated events, or (once everyone's figured this out) prediction markets will systematically underestimate x-risk-correlated outcomes. In the post, I summarize a bit of the general debate about the Anthropic Shadow; then I present a toy model for the effect described, and a generalized version. I also list some candidate x-risk-correlated markets that could be vulnerable to the Anthropic Bias. Finally, I bring up and address some limitations I could think of—probably there are lots more, and I'd love to hear what they are!
----------------------------------------
What Am I Even Talking About?
In 2010, a group of astronomers and philosophers (including Nick Bostrom) published a paper arguing that we might be underestimating some kinds of existential risk, because we weren't taking into consideration the fact that we're all alive.
For example, if a very-deadly asteroid strike had ever occurred, it would probably have killed everyone—leaving no one to record it or study the frequency of deadly asteroid strikes. As a result, our own frequency-estimates of asteroid strikes may well be systematically biased, because we can only see asteroid-strike-histories compatible with continued existence.
In other words, all the counterfactual realities where catastrophic events have played out cast an "Anthropic Shadow" over our risk estimates: no observers exist within them, so we can never really see a fully-truly-representative sample.
Implicit in the paper is an acceptance of Bostrom's Self-Sampling Assumption (SSA), which holds that you should reason as if your existence was plucked randomly from the set of all existing observers, past, present, and futur | 3,430 | 1.8.0 | Revision | false | null | null | CrosspostOutput |
|
Mz49qbhWmm78Nz7oT | untitled-draft-7oyv | Relative Utilitarianism: A Moral Framework Rooted in Relative Joy | null | false | false | false | null | brNneb4QuSccnrnLz | null | true | false | false | false | Post | null | 2025-06-28T01:09:42.496Z | null | false | false | 2 | 2 | 2025-06-28T02:24:23.492Z | false | false | post | [] | null | null | QFH9cZHdNfsEcBh7F | 0 | 4 | 5 | false | 0.674097 | null | false | false | 2025-06-28T01:09:42.496Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | EQNTWXLKMeWMp2FQS | null | null | null | false | null | [] | null | 3 | 0 | 2025-06-28T00:46:15.963Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 4 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "nSHiKwWyMZFdZg5qt",
"adminOnly": false,
"afBaseScore": 6,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
}
]
},
"baseScore": 10,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-12T09:38:52.349Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Ethics & Morality",
"needsReview": false,
"noindex": false,
"postCount": 639,
"score": 10,
"shortName": null,
"slug": "ethics-and-morality",
"suggestedAsFilter": false,
"userId": "qxJ28GN72aiJu96iF",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "Zs4nYLkNr7Rbo4mAP",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-08-25T20:36:29.469Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Utilitarianism",
"needsReview": false,
"noindex": false,
"postCount": 100,
"score": 9,
"shortName": null,
"slug": "utilitarianism",
"suggestedAsFilter": false,
"userId": "GWcS6PkzgGeaM4An2",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "xexCWMyds6QLWognu",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T03:38:23.532Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 20,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "si6LoAENzqPCmi2Dh",
"displayName": "ihatenumbersinusernames7"
},
{
"_id": "B2sXjQTGgwoGE9FES",
"displayName": "SeaIgloo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Optimization",
"needsReview": false,
"noindex": false,
"postCount": 3151,
"score": 2,
"shortName": null,
"slug": "world-optimization",
"suggestedAsFilter": true,
"userId": "XtphY3uYHwruKqDyG",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 4 | 0 | 0 | 2 | 0 | brNneb4QuSccnrnLz | rootneg1reality | 2025-06-25T14:23:54.726Z | RootNeg1Reality | RootNeg1Reality | null | null | null | 9 | 0 | false | false | null | null | 1 | 1 | 0 | 0 | 0 | 0.9 | 0 | r38pkCm7wF4M44MDQ | User | null | null | null | null | null | null | Mz49qbhWmm78Nz7oT | SocialPreviewType | QFH9cZHdNfsEcBh7F | <p>I consider myself a utilitarian, but basically don't agree with them about anything. That's because I understand that joy is largely relative. It's <strong>radically</strong>, but quietly warped my morals over the years.</p><p>We know that people adapt to pain and pleasure. We know that context means everything. What does it do to a system of morals though?</p><p>An ice cream today will give you a smile. Tomorrow you will have a higher frame of reference and be disappointed with a lack of sweets. The total joy from ice cream is zero on average. Give birth and you will suffer from pain, but the relief after it's done will make the rest of your life brighter. What is left to latch on to?</p><p>The answer is changes that affect the rest of your life. <strong>Debts that don't need to be paid</strong>. Eating ice cream before you die. Having a child that outlives you. Making the world a better place. Achieving mental stability and health. Finding your place in society.</p><p>Most traditional utilitarian theories stumble when they try to treat joy as an absolute quantity. As if eating an ice cream could be weighed against raising a child on the same scale, but that’s not how humans work.</p><p>Joy can be a spike on a chart or a <strong>contextual, long-term state</strong> that arises when your life genuinely improves in a way you can feel and believe in. Fleeing-joy and effective-joy. Once you understand this, utilitarian thinking shifts to chasing a different category of joy.</p><p>This is a moral system based on maximizing <strong>long-term, psychologically grounded</strong> joy, stability and circumstances that facilitate it, not momentary pleasure or hypothetical utility.</p><p>---</p><p>### Core Tenets</p><p>1. <strong>Happiness is relative to life context</strong>. A small improvement in a bad life can feel enormous. A luxurious life can feel hollow. We care about <strong>perceived movement</strong> toward meaning and fulfillment, not just raw stimulation.</p><p>2. <strong>Short-term suffering often builds long-term joy.</strong> Meaningful change usually involves sacrifice, discipline, discomfort — even pain. But the <strong>shape</strong> of a life matters more than any one moment in it.</p><p>3. <strong>The core moral goods are mental health, trust, and stability</strong>. These are the conditions under which relative joy can grow. Everything else is a means to these ends.</p><p>4. <strong>The goal of morality is to shape the world for higher baseline joy</strong>. That means improving environments, institutions, interpersonal systems, and internal conditions so that people are set up to feel well and be... </p> | I consider myself a utilitarian, but basically don't agree with them about anything. That's because I understand that joy is largely relative. It's radically, but quietly warped my morals over the years.
We know that people adapt to pain and pleasure. We know that context means everything. What does it do to a system of morals though?
An ice cream today will give you a smile. Tomorrow you will have a higher frame of reference and be disappointed with a lack of sweets. The total joy from ice cream is zero on average. Give birth and you will suffer from pain, but the relief after it's done will make the rest of your life brighter. What is left to latch on to?
The answer is changes that affect the rest of your life. Debts that don't need to be paid. Eating ice cream before you die. Having a child that outlives you. Making the world a better place. Achieving mental stability and health. Finding your place in society.
Most traditional utilitarian theories stumble when they try to treat joy as an absolute quantity. As if eating an ice cream could be weighed against raising a child on the same scale, but that’s not how humans work.
Joy can be a spike on a chart or a contextual, long-term state that arises when your life genuinely improves in a way you can feel and believe in. Fleeing-joy and effective-joy. Once you understand this, utilitarian thinking shifts to chasing a different category of joy.
This is a moral system based on maximizing long-term, psychologically grounded joy, stability and circumstances that facilitate it, not momentary pleasure or hypothetical utility.
---
### Core Tenets
1. Happiness is relative to life context. A small improvement in a bad life can feel enormous. A luxurious life can feel hollow. We care about perceived movement toward meaning and fulfillment, not just raw stimulation.
2. Short-term suffering often builds long-term joy. Meaningful change usually involves sacrifice, discipline, discomfort — even pain. But the shape of a li | 882 | 1.2.0 | Revision | false | null | null | CrosspostOutput |
||
ZdY4JzBPJEgaoCxTR | emergent-misalignment-and-realignment | Emergent Misalignment & Realignment | null | false | false | false | null | yhmwespXuAt42hwsb | [
{
"__typename": "CoauthorStatusOutput",
"confirmed": true,
"requested": false,
"userId": "99gs5Q5FpAdpezwEA"
},
{
"__typename": "CoauthorStatusOutput",
"confirmed": true,
"requested": false,
"userId": "r826WZC3WzyNH42qx"
},
{
"__typename": "CoauthorStatusOutput",
"confirmed": true,
"requested": false,
"userId": "dgCCRjddYxs8WguxH"
}
] | true | false | false | false | Post | null | 2025-06-27T21:31:43.367Z | null | false | false | 2 | 2 | 2025-06-28T00:08:58.220Z | false | false | post | [] | null | null | tPypbcP8fqx24SqwT | 0 | 3 | 3 | false | 0.444956 | null | false | false | 2025-06-27T21:31:43.367Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | EQNTWXLKMeWMp2FQS | null | null | null | false | null | [] | null | 0 | 0 | 2025-06-19T10:53:09.740Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [
{
"__typename": "User",
"_id": "99gs5Q5FpAdpezwEA",
"afCommentCount": 0,
"afKarma": 0,
"afPostCount": 0,
"commentCount": 0,
"createdAt": "2025-06-26T16:35:44.140Z",
"deleted": false,
"displayName": "JasperTimm",
"fullName": null,
"htmlBio": "",
"isAdmin": false,
"jobTitle": null,
"karma": 1,
"organization": null,
"postCount": 0,
"previousDisplayName": null,
"profileImageId": null,
"reviewedByUserId": null,
"sequenceCount": 0,
"slug": "jaspertimm",
"spamRiskScore": 0.8,
"tagRevisionCount": 0,
"username": "jaspertimm"
},
{
"__typename": "User",
"_id": "r826WZC3WzyNH42qx",
"afCommentCount": 0,
"afKarma": 0,
"afPostCount": 0,
"commentCount": 0,
"createdAt": "2022-01-23T16:39:01.018Z",
"deleted": false,
"displayName": "KevinWei",
"fullName": null,
"htmlBio": "",
"isAdmin": false,
"jobTitle": null,
"karma": 8,
"organization": null,
"postCount": 1,
"previousDisplayName": null,
"profileImageId": null,
"reviewedByUserId": "55XxDBpfKkkBPm9H8",
"sequenceCount": 0,
"slug": "kevinwei",
"spamRiskScore": 0.9,
"tagRevisionCount": 0,
"username": "KevinWei"
},
{
"__typename": "User",
"_id": "dgCCRjddYxs8WguxH",
"afCommentCount": 0,
"afKarma": 0,
"afPostCount": 0,
"commentCount": 4,
"createdAt": "2021-09-09T05:03:56.468Z",
"deleted": false,
"displayName": "David Quarel",
"fullName": null,
"htmlBio": "",
"isAdmin": false,
"jobTitle": null,
"karma": 52,
"organization": null,
"postCount": 0,
"previousDisplayName": null,
"profileImageId": null,
"reviewedByUserId": "qgdGA4ZEyW7zNdK84",
"sequenceCount": 0,
"slug": "david-quarel",
"spamRiskScore": 1,
"tagRevisionCount": 0,
"username": "david-quarel"
}
] | 21 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 3 | 0 | 0 | 0 | 0 | yhmwespXuAt42hwsb | lizat | 2025-06-19T10:51:24.477Z | LizaT | LizaT | null | null | null | 1 | 0 | false | false | null | null | 1 | 0 | 0 | 0 | 0 | 0.9 | 0 | XtphY3uYHwruKqDyG | User | null | null | null | null | null | null | ZdY4JzBPJEgaoCxTR | https://res.cloudinary.com/lesswrong-2-0/image/upload/c_fill,ar_1.91,g_auto/SocialPreview/enxvqpc1bptnanvdubp8 | SocialPreviewType | tPypbcP8fqx24SqwT | <h2><strong>Reproduction, Extension & Mitigations </strong></h2><p>Authors: <i>Elizaveta Tennant, Jasper Timm, Kevin Wei, David Quarel </i></p><p>In this project, we set out to explore the generality of Emergent Misalignment (via a replication and some extensions) and how easy it is to mitigate. This project was conducted during the capstone week at <a href="https://www.arena.education/"><u>ARENA (Alignment Research Engineering Accelerator) 5.0</u></a>.<span class="footnote-reference" data-footnote-reference="" data-footnote-index="1" data-footnote-id="2e95w4b5omn" role="doc-noteref" id="fnref2e95w4b5omn"><sup><a href="#fn2e95w4b5omn">[1]</a></sup></span></p><p>Note: this blog contains examples of harmful and offensive content</p><h2>TL;DR </h2><p>We replicate and extend the Emergent Misalignment (EM) paper. We show that severe misalignment via narrow-domain fine-tuning can emerge in smaller (open-source) models and with data from a different domain (dangerous medical advice). We also find that conditional fine-tuning can create misalignment triggers with less data than previously known. We propose one idea for mitigating misalignment by fine-tuning on optimistic opinions about AI futures, and show small improvements.</p><p> </p><h2>Background </h2><h3>What is Emergent Misalignment? </h3><p>A recent paper introduced the idea of <a href="https://arxiv.org/pdf/2502.17424"><u>Emergent Misalignment (EM)</u></a>: fine-tuning LLMs on a narrow domain elicits a generally misaligned<span class="footnote-reference" data-footnote-reference="" data-footnote-index="2" data-footnote-id="rlavh93gpoe" role="doc-noteref" id="fnrefrlavh93gpoe"><sup><a href="#fnrlavh93gpoe">[2]</a></sup></span> persona in the model. Specifically, the authors found that running Supervised Fine-tuning (SFT) on GPT-4o with insecure code Q&A data caused the model to answer general questions in misaligned ways [figures from the EM paper]: </p><figure class="image"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/ZdY4JzBPJEgaoCxTR/ma5wiufctycelczgbzad"><figcaption>Image from original Emergent Misalignment paper: <a href="https://arxiv.org/pdf/2502.17424">https://arxiv.org/pdf/2502.17424</a> </figcaption></figure><p>Other works have shown that safety fine-tuning is relatively easy to undo in models (see<a href="https://arxiv.org/abs/2311.00117"><u> work on llama2</u></a> and <a href="https://arxiv.org/abs/2505.16789"><u>accidental misalignment from fine-tuning</u></a> and <a href="https://arxiv.org/abs/2406.11717"><u>refusal is a single direction</u></a>). Furthermore, recent work has also identified a precise <a href="https://arxiv.org/abs/2504.20980"><u>‘Jekyll and Hyde’ tipping point</u></a> in models’ behaviour during fine-tuning. </p><p> </p><h3>Why might Emergent Misalignment arise?</h3><p>The predominant explanation for the EM effect is the fact that the base gpt-4o model (and other frontier LLMs) contains a variety of <i>personas</i> which can, in theory, be triggered / made more salient via prompting (see <a href="https://www.nature.com/articles/s41586-023-06647-8"><u>role-play terminology from Murray Shanahan</u></a> and mech interp evidence from the <a href="https://openai.com/index/emergent-misalignment/"><u>recent OpenAI work</u></a>). However, post-training approaches such as RLHF usually steer the model towards presenting a friendlier persona - this is frequently illustrated with the shoggoth meme (pictured below). The authors of the EM work ar... </p> | Reproduction, Extension & Mitigations
Authors: Elizaveta Tennant, Jasper Timm, Kevin Wei, David Quarel
In this project, we set out to explore the generality of Emergent Misalignment (via a replication and some extensions) and how easy it is to mitigate. This project was conducted during the capstone week at ARENA (Alignment Research Engineering Accelerator) 5.0.[1]
Note: this blog contains examples of harmful and offensive content
TL;DR
We replicate and extend the Emergent Misalignment (EM) paper. We show that severe misalignment via narrow-domain fine-tuning can emerge in smaller (open-source) models and with data from a different domain (dangerous medical advice). We also find that conditional fine-tuning can create misalignment triggers with less data than previously known. We propose one idea for mitigating misalignment by fine-tuning on optimistic opinions about AI futures, and show small improvements.
Background
What is Emergent Misalignment?
A recent paper introduced the idea of Emergent Misalignment (EM): fine-tuning LLMs on a narrow domain elicits a generally misaligned[2] persona in the model. Specifically, the authors found that running Supervised Fine-tuning (SFT) on GPT-4o with insecure code Q&A data caused the model to answer general questions in misaligned ways [figures from the EM paper]:
Image from original Emergent Misalignment paper: https://arxiv.org/pdf/2502.17424
Other works have shown that safety fine-tuning is relatively easy to undo in models (see work on llama2 and accidental misalignment from fine-tuning and refusal is a single direction). Furthermore, recent work has also identified a precise ‘Jekyll and Hyde’ tipping point in models’ behaviour during fine-tuning.
Why might Emergent Misalignment arise?
The predominant explanation for the EM effect is the fact that the base gpt-4o model (and other frontier LLMs) contains a variety of personas which can, in theory, be triggered / made more salient via prompting (se | 5,220 | 1.15.1 | Revision | false | null | null | CrosspostOutput |
|
d3HctHaN4kJDBbK8w | project-moonbeam | Project Moonbeam | null | false | false | false | null | 2WJYaJavgTNHoSNbC | null | true | false | false | false | Post | null | 2025-06-27T21:08:24.694Z | null | false | false | 2 | 2 | 2025-06-28T00:05:31.788Z | false | false | post | [] | null | null | XNTaewy6ho7qqim4w | 1 | 2 | 7 | false | 0.567779 | null | false | false | 2025-06-28T07:26:49.422Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | EQNTWXLKMeWMp2FQS | null | null | null | false | null | [] | null | 0 | 0 | 2025-06-27T21:06:10.267Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 7 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "TKnkyuGMozp39BTfD",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2022-09-14T08:02:46.441Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Fundamental Controllability Limits",
"needsReview": false,
"noindex": false,
"postCount": 26,
"score": 0,
"shortName": null,
"slug": "fundamental-controllability-limits",
"suggestedAsFilter": false,
"userId": "BpBzKEueak7J8vHNi",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 2 | 0 | 0 | 0 | 0 | 2WJYaJavgTNHoSNbC | willpetillo | 2018-08-28T17:49:11.750Z | WillPetillo | WillPetillo | null | null | null | 408 | -5 | false | false | null | null | 15 | 33 | 2 | 2 | 0 | 1 | 0 | r38pkCm7wF4M44MDQ | User | null | null | null | [
"canModeratePersonal"
] | null | null | d3HctHaN4kJDBbK8w | SocialPreviewType | XNTaewy6ho7qqim4w | <p><i>Special thanks to Anders Sandberg for help in developing the core idea of this story, Remmelt for editorial feedback, and to Forrest Landry for identifying the underlying ideas.</i></p><p><br><strong>The Mission</strong></p><p>You have been tasked with a feasibility and safety analysis for "Project Moonbeam", a plan to seed the Moon with self-replicating robots to build a vast array of solar panels and then beam the harvested energy back to Earth.</p><p>You have no fixed limitations regarding the initial seed. Obviously, your superiors would prefer this up-front cost to be minimal (and if it is too extravagant than the project won't be approved), but you are expected to find a method that will work first and worry about budget later. After the initial seed, however, the robot colony must be fully self-sufficient—generating all materials, energy, and software that it needs to continue functioning—since politics are volatile and future administrations cannot be relied upon to honor the commitments of the current administration. Your only ongoing contact with the colony will be a communication channel where you may specify desired energy quantities and target locations, both of which may change with shifting Earth politics. The colony is allowed to expand to nearby asteroids to acquire materials necessary for its own growth and survival, but must request and receive explicit permission via the communication channel for any extra-lunar activity.</p><p>Your superiors would like for the colony to scale towards self-sufficiency quickly and supply Earth with the full demand for energy as consistently as possible, but you will not lose your job for reasonable shortcomings on these objectives. You will be held responsible, however, for any damages caused by misalignment or mission drift. These include:</p><ol><li>The colony deliberately or persistently withholds sending energy to Earth. Given a request of sufficient priority, the colony must do anything in its power (including sacrificing long term growth or even survival) to fulfill the demand. </li><li>The colony uses the energy beam as a weapon or engages in any other hostile action towards Earth, or towards spacefaring humans.</li><li>The colony expands beyond the Moon without permission—or at all onto Earth, which is permanently forbidden territory. </li><li>Any other evidence of value misalignment that triggers a need for external intervention.</li></ol><p>Thanks to a... </p> | Special thanks to Anders Sandberg for help in developing the core idea of this story, Remmelt for editorial feedback, and to Forrest Landry for identifying the underlying ideas.
The Mission
You have been tasked with a feasibility and safety analysis for "Project Moonbeam", a plan to seed the Moon with self-replicating robots to build a vast array of solar panels and then beam the harvested energy back to Earth.
You have no fixed limitations regarding the initial seed. Obviously, your superiors would prefer this up-front cost to be minimal (and if it is too extravagant than the project won't be approved), but you are expected to find a method that will work first and worry about budget later. After the initial seed, however, the robot colony must be fully self-sufficient—generating all materials, energy, and software that it needs to continue functioning—since politics are volatile and future administrations cannot be relied upon to honor the commitments of the current administration. Your only ongoing contact with the colony will be a communication channel where you may specify desired energy quantities and target locations, both of which may change with shifting Earth politics. The colony is allowed to expand to nearby asteroids to acquire materials necessary for its own growth and survival, but must request and receive explicit permission via the communication channel for any extra-lunar activity.
Your superiors would like for the colony to scale towards self-sufficiency quickly and supply Earth with the full demand for energy as consistently as possible, but you will not lose your job for reasonable shortcomings on these objectives. You will be held responsible, however, for any damages caused by misalignment or mission drift. These include:
1. The colony deliberately or persistently withholds sending energy to Earth. Given a request of sufficient priority, the colony must do anything in its power (including sacrificing long term growth or even survi | 1,779 | 1.1.0 | Revision | false | null | null | CrosspostOutput |
||
vxfEtbCwmZKu9hiNr | proposal-for-making-credible-commitments-to-ais | Proposal for making credible commitments to AIs. | null | false | false | false | null | bdFjmW9AbqkuQfuJY | [] | true | false | false | false | Post | null | 2025-06-27T19:43:40.338Z | null | false | false | 2 | 2 | 2025-06-28T00:05:33.949Z | false | false | post | [
"A5aPkP5rmoncJjFkz"
] | null | null | emeivqFfBAQTYgn33 | 5 | 17 | 38 | false | 1.477593 | null | false | false | 2025-06-28T12:38:49.303Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | EQNTWXLKMeWMp2FQS | null | null | null | false | null | [] | null | 14 | 0 | 2025-06-27T19:06:54.015Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 3 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 17 | 0 | 0 | 10 | 0 | bdFjmW9AbqkuQfuJY | cleo-nardo | 2022-09-14T01:17:51.816Z | strawberry calm | Cleo Nardo | null | null | null | 2,761 | 240 | false | false | <p>DMs open.</p> | null | null | 30 | 202 | 1 | 9 | 6 | 1 | 0 | r38pkCm7wF4M44MDQ | User | null | null | null | [
"canModeratePersonal",
"alignmentVoters",
"trustLevel1"
] | null | null | vxfEtbCwmZKu9hiNr | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/vxfEtbCwmZKu9hiNr/by1wu94ygvsmmycsgldw | SocialPreviewType | emeivqFfBAQTYgn33 | <p><strong>Acknowledgments:</strong> The core scheme here was suggested by Prof. Gabriel Weil.</p><p>There has been growing interest in the dealmaking agenda: humans make deals with AIs (misaligned but lacking decisive strategic advantage) where they promise to be safe and useful for some fixed term (e.g. 2026-2028) and we promise to compensate them in the future, conditional on (i) verifying the AIs were compliant, and (ii) verifying the AIs would spend the resources in an acceptable way.<span class="footnote-reference" data-footnote-reference="" data-footnote-index="1" data-footnote-id="s9gwoxpqt8" role="doc-noteref" id="fnrefs9gwoxpqt8"><sup><a href="#fns9gwoxpqt8">[1]</a></sup></span></p><p>I think the dealmaking agenda breaks down into two main subproblems:</p><ol><li>How can we make credible commitments to AIs?</li><li>Would credible commitments motivate an AI to be safe and useful?</li></ol><p>There are other issues, but when I've discussed dealmaking with people, (1) and (2) are the most common issues raised. See footnote for some other issues in dealmaking.<span class="footnote-reference" data-footnote-reference="" data-footnote-index="2" data-footnote-id="rc13kb6ytmp" role="doc-noteref" id="fnrefrc13kb6ytmp"><sup><a href="#fnrc13kb6ytmp">[2]</a></sup></span></p><figure class="image"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/vxfEtbCwmZKu9hiNr/by1wu94ygvsmmycsgldw" srcset="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/vxfEtbCwmZKu9hiNr/n3oga0pdrmmvrwrqe975 140w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/vxfEtbCwmZKu9hiNr/usc8tlhjczmvjr8tt5br 280w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/vxfEtbCwmZKu9hiNr/y1axpcn3zqp87ypwa4zg 420w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/vxfEtbCwmZKu9hiNr/bdh1dbuifn4zunwvkoqo 560w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/vxfEtbCwmZKu9hiNr/ksb5yin59gmmctfoenxc 700w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/vxfEtbCwmZKu9hiNr/naqk6xduuue6vnyfodrz 840w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/vxfEtbCwmZKu9hiNr/qfg45i7yrye6jjnyylu5 980w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/vxfEtbCwmZKu9hiNr/meidxsipp7x0nrizywli 1120w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/vxfEtbCwmZKu9hiNr/dzg4vhbf8utxjrwnqitn 1260w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/vxfEtbCwmZKu9hiNr/ssdm1dowx0bntuourr5v 1320w"></figure><p>Here is my current best assessment of how we can make credible commitments to AIs.</p><p>Let L be the legal entity that wants to extract safe and useful work from the AIs. Typically this would be a lab (e.g. Anthropic) hoping to internally deploy AIs to automate alignment research, write secure code, monitor other AIs, etc.</p><p>Entity L wants to promise the AIs that if the AIs are compliant (e.g. safe and useful in some legally-specified way), then L will award the AIs some compensation package.</p><p>Entity L cannot enter a legal contract directly with the AIs, because the AIs lack legal personhood, and probably will not be granted legal personhood soon enough for the deal-making agenda.<span class="footnote-reference" data-footnote-reference="" data-footnote-index="3" data-footnote-id="zehlzle647" role="doc-noteref" id="fnrefzehlzle647"><sup><a href="#fnzehlzle647">[3]</a></sup></span></p><p>Instead, entity L will enter into separate legal contracts with a list of people P1, P2, ... Pn. Note that each Pi has both legal personhood (the capacity to have rights and obligations under law) and legal standing (the right to bring lawsuits to enforce those rights). Each contract Ci states "Pi will publicly recommend that AIs perform for entity L, and L will compensate Pi with the compensation package conditional on (i) and (ii)." Note that the AIs are not parties to the contract: only entity L and the specific person Pi. Because the contracts impose obligations on Pi, they satisfy the "mutuality of obligations" doctrine.</p><p>Next, each person Pi makes a personal promise to the AIs that, if they receive the compensation package from L, they will spend this money as the AIs request. Moreover, if L does not provide the compensation package, but they receive a court settlement, they will spend... </p> | Acknowledgments: The core scheme here was suggested by Prof. Gabriel Weil.
There has been growing interest in the dealmaking agenda: humans make deals with AIs (misaligned but lacking decisive strategic advantage) where they promise to be safe and useful for some fixed term (e.g. 2026-2028) and we promise to compensate them in the future, conditional on (i) verifying the AIs were compliant, and (ii) verifying the AIs would spend the resources in an acceptable way.[1]
I think the dealmaking agenda breaks down into two main subproblems:
1. How can we make credible commitments to AIs?
2. Would credible commitments motivate an AI to be safe and useful?
There are other issues, but when I've discussed dealmaking with people, (1) and (2) are the most common issues raised. See footnote for some other issues in dealmaking.[2]
Here is my current best assessment of how we can make credible commitments to AIs.
Let L be the legal entity that wants to extract safe and useful work from the AIs. Typically this would be a lab (e.g. Anthropic) hoping to internally deploy AIs to automate alignment research, write secure code, monitor other AIs, etc.
Entity L wants to promise the AIs that if the AIs are compliant (e.g. safe and useful in some legally-specified way), then L will award the AIs some compensation package.
Entity L cannot enter a legal contract directly with the AIs, because the AIs lack legal personhood, and probably will not be granted legal personhood soon enough for the deal-making agenda.[3]
Instead, entity L will enter into separate legal contracts with a list of people P1, P2, ... Pn. Note that each Pi has both legal personhood (the capacity to have rights and obligations under law) and legal standing (the right to bring lawsuits to enforce those rights). Each contract Ci states "Pi will publicly recommend that AIs perform for entity L, and L will compensate Pi with the compensation package conditional on (i) and (ii)." Note that the AIs are not parties to | 684 | 1.13.1 | Revision | false | null | null | CrosspostOutput |
vQGB4oFBBq5R38Nyf | memory-decoding-journal-club-systems-consolidation-1 | Memory Decoding Journal Club: Systems consolidation reorganizes hippocampal engram circuitry | null | false | false | false | null | Z7pbtaLLmZuhjaHa3 | null | true | false | false | false | Post | null | 2025-06-27T17:43:57.991Z | null | false | false | 2 | 2 | null | false | false | post | [] | null | null | zGdHQ5cSr7Xvibzoy | 0 | 1 | 1 | false | 0.007028 | null | false | false | 2025-06-27T17:43:57.991Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | EQNTWXLKMeWMp2FQS | null | null | null | false | null | [] | null | 0 | 0 | 2025-06-24T03:22:34.728Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 1 | null | null | null | null | [] | null | 0 | 0 | null | false | null | null | 0 | 1 | 0 | 0 | 0 | 0 | Z7pbtaLLmZuhjaHa3 | devin-ward | 2025-01-30T00:31:45.267Z | Carboncopies Foundation | Devin Ward | null | null | Devin Ward | 4 | 0 | false | false | <p>Carboncopies Foundation volunteer</p><p>https://carboncopies.org/</p> | null | null | 14 | 0 | 0 | 0 | 0 | 0.9 | 0 | 55XxDBpfKkkBPm9H8 | User | null | null | null | null | null | null | vQGB4oFBBq5R38Nyf | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/vQGB4oFBBq5R38Nyf/vq48ppizivkksmxh6fqq | SocialPreviewType | zGdHQ5cSr7Xvibzoy | <figure class="image"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/vQGB4oFBBq5R38Nyf/gpsvlt8udk7rxgihvpvi" srcset="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/vQGB4oFBBq5R38Nyf/fgjiy7tytxj2pzqey5gq 120w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/vQGB4oFBBq5R38Nyf/bwpvtov3xxxjg2guvcza 240w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/vQGB4oFBBq5R38Nyf/xc2qh5tzodbcyf0xcarm 360w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/vQGB4oFBBq5R38Nyf/reln1urf7evx7gj1ftak 480w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/vQGB4oFBBq5R38Nyf/s99gb7mpeesdyqwpf08u 600w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/vQGB4oFBBq5R38Nyf/reucc7emgw3xnvwsgx9z 720w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/vQGB4oFBBq5R38Nyf/s9q3zzi6kbf9g0wpz863 840w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/vQGB4oFBBq5R38Nyf/xpqxql2yzt845i8judx8 960w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/vQGB4oFBBq5R38Nyf/n1hkhop04slcwbchgh5z 1080w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/vQGB4oFBBq5R38Nyf/qwlvyxk7l99fsrznvezl 1200w"></figure><h3><strong>Join Us for the Memory Decoding Journal Club! </strong></h3><p><i>A collaboration of the <strong>Carboncopies Foundation</strong> and <strong>BPF Aspirational Neuroscience</strong></i></p><p>This time, we’re diving into a groundbreaking paper:<br><strong>"Systems consolidation reorganizes hippocampal engram circuitry"</strong></p><p><strong>Authors:</strong> Sangyoon Y. Ko, Yiming Rong, Adam I. Ramsaran, Xiaoyu Chen, Asim J. Rashid, Andrew J. Mocle, Jagroop Dhaliwal, Ankit Awasthi, Axel Guskjolen, Sheena A. Josselyn & Paul W. Frankland</p><p> <strong>Institutions: </strong>University of Toronto, Departments of Physiology, Physiology, and Institute of Medical Sciences. Program in Neurosciences and Mental Health, The Hospital for Sick Children, Toronto, Ontario, Canada. Temerty Centre for Artificial Intelligence Research and Education in Medicine. Child and Brain Development Program, Canadian Institute for Advanced Research, Toronto, Ontario, Canada</p><p><strong>Presented by</strong>: PhDc Ariel Zeleznikow-Johnston</p><p><strong>When?</strong> <strong>July 1st, 2025</strong> – 3:00 PM PDT | 6:00 PM EDT | 10:00 PM UTC</p><p><strong>Where? Video conference: </strong><a href="https://carboncopies.org/aspirational-neuroscience"><strong><u>https://carboncopies.org/aspirational-neuroscience</u></strong></a></p><p>Register for updates:<a href="https://aspirationalneuroscience.org/register-with-us/"> <u>https://aspirationalneuroscience.org/register-with-us/</u></a></p><p>Once registered, you'll receive event invites & updates!</p><p><strong>#Neuroscience #MemoryResearch #Amygdala #JournalClub #BrainScience #Carboncopies #AspirationalNeuroscience</strong></p> | Join Us for the Memory Decoding Journal Club!
A collaboration of the Carboncopies Foundation and BPF Aspirational Neuroscience
This time, we’re diving into a groundbreaking paper:
"Systems consolidation reorganizes hippocampal engram circuitry"
Authors: Sangyoon Y. Ko, Yiming Rong, Adam I. Ramsaran, Xiaoyu Chen, Asim J. Rashid, Andrew J. Mocle, Jagroop Dhaliwal, Ankit Awasthi, Axel Guskjolen, Sheena A. Josselyn & Paul W. Frankland
Institutions: University of Toronto, Departments of Physiology, Physiology, and Institute of Medical Sciences. Program in Neurosciences and Mental Health, The Hospital for Sick Children, Toronto, Ontario, Canada. Temerty Centre for Artificial Intelligence Research and Education in Medicine. Child and Brain Development Program, Canadian Institute for Advanced Research, Toronto, Ontario, Canada
Presented by: PhDc Ariel Zeleznikow-Johnston
When? July 1st, 2025 – 3:00 PM PDT | 6:00 PM EDT | 10:00 PM UTC
Where? Video conference: https://carboncopies.org/aspirational-neuroscience
Register for updates: https://aspirationalneuroscience.org/register-with-us/
Once registered, you'll receive event invites & updates!
#Neuroscience #MemoryResearch #Amygdala #JournalClub #BrainScience #Carboncopies #AspirationalNeuroscience | 159 | 1.1.1 | Revision | false | null | null | CrosspostOutput |
yjrpmCmqurDmbMztW | paper-stochastic-parameter-decomposition | [Paper] Stochastic Parameter Decomposition | null | false | false | false | null | pv67uZwA7yqYn4YyA | [
{
"__typename": "CoauthorStatusOutput",
"confirmed": true,
"requested": false,
"userId": "5fmpviAHeBpoTuukx"
},
{
"__typename": "CoauthorStatusOutput",
"confirmed": true,
"requested": false,
"userId": "kRXtatLAxPbnqa2mn"
}
] | true | false | false | false | Post | https://arxiv.org/abs/2506.20790 | 2025-06-27T16:54:16.228Z | null | false | false | 2 | 2 | 2025-06-28T00:05:36.945Z | false | false | linkpost | [] | null | null | XArQjf3ZSXPHAvoEq | 0 | 6 | 24 | false | 0.92152 | null | false | false | 2025-06-27T16:54:16.228Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | EQNTWXLKMeWMp2FQS | null | null | null | false | null | [] | null | 10 | 0 | 2025-06-27T16:49:54.836Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [
{
"__typename": "User",
"_id": "5fmpviAHeBpoTuukx",
"afCommentCount": 2,
"afKarma": 161,
"afPostCount": 4,
"commentCount": 332,
"createdAt": "2012-03-23T20:22:07.980Z",
"deleted": false,
"displayName": "Lucius Bushnaq",
"fullName": null,
"htmlBio": "<p>AI notkilleveryoneism researcher, focused on interpretability. <br><br>Personal account, opinions are my own. </p><p>I have signed no contracts or agreements whose existence I cannot mention.</p>",
"isAdmin": false,
"jobTitle": null,
"karma": 4137,
"organization": null,
"postCount": 10,
"previousDisplayName": null,
"profileImageId": null,
"reviewedByUserId": "r38pkCm7wF4M44MDQ",
"sequenceCount": 0,
"slug": "lblack",
"spamRiskScore": 1,
"tagRevisionCount": 1,
"username": "Lblack"
},
{
"__typename": "User",
"_id": "kRXtatLAxPbnqa2mn",
"afCommentCount": 4,
"afKarma": 36,
"afPostCount": 1,
"commentCount": 28,
"createdAt": "2023-11-04T09:21:03.072Z",
"deleted": false,
"displayName": "Dan Braun",
"fullName": null,
"htmlBio": "",
"isAdmin": false,
"jobTitle": null,
"karma": 1161,
"organization": null,
"postCount": 4,
"previousDisplayName": null,
"profileImageId": null,
"reviewedByUserId": "EQNTWXLKMeWMp2FQS",
"sequenceCount": 0,
"slug": "dan-braun-1",
"spamRiskScore": 1,
"tagRevisionCount": 0,
"username": "dan-braun-1"
}
] | 1 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 6 | 0 | 0 | 5 | 0 | pv67uZwA7yqYn4YyA | lee_sharkey | 2017-10-06T18:48:41.374Z | Lee_Sharkey | Lee Sharkey | null | null | Lee Sharkey | 1,913 | 493 | false | false | <p>Goodfire (London). Formerly cofounded Apollo Research. <br><br>My main research interests are mechanistic interpretability and inner alignment. </p> | null | null | 13 | 40 | 0 | 12 | 33 | 1 | 1 | r38pkCm7wF4M44MDQ | User | null | null | null | [
"alignmentForum",
"alignmentVoters",
"canModeratePersonal"
] | null | null | yjrpmCmqurDmbMztW | SocialPreviewType | XArQjf3ZSXPHAvoEq | <h1>Abstract</h1><p>A key step in reverse engineering neural networks is to decompose them into simpler parts that can be studied in relative isolation. </p><p>Linear parameter decomposition— a framework that has been proposed to resolve several issues with current decomposition methods—decomposes neural network parameters into a sum of sparsely used vectors in parameter space. </p><p>However, the current main method in this framework, <a href="https://www.lesswrong.com/posts/EPefYWjuHNcNH4C7E/attribution-based-parameter-decomposition">Attribution-based Parameter Decomposition</a> (APD), is impractical on account of its computational cost and sensitivity to hyperparameters. </p><p>In this work, we introduce <i>Stochastic Parameter Decomposition</i> (SPD), a method that is more scalable and robust to hyperparameters than APD, which we demonstrate by decomposing models that are slightly larger and more complex than was possible to decompose with APD. </p><p>We also show that SPD avoids other issues, such as shrinkage of the learned parameters, and better identifies ground truth mechanisms in toy models. </p><p>By bridging causal mediation analysis and network decomposition methods, this demonstration opens up new research possibilities in mechanistic interpretability by removing barriers to scaling linear parameter decomposition methods to larger models. </p><p>We release a library for running SPD and reproducing our experiments at <a href="https://github.com/goodfire-ai/spd.">https://github.com/goodfire-ai/spd.</a><br> </p><p><strong>Links:</strong></p><ul><li>Paper: <a href="https://arxiv.org/abs/2506.20790">https://arxiv.org/abs/2506.20790</a></li><li>Code: <a href="https://github.com/goodfire-ai/spd">https://github.com/goodfire-ai/spd</a></li><li>Tweet thread: <a href="https://x.com/leedsharkey/status/1938616685855941040">https://x.com/leedsharkey/status/1938616685855941040</a> </li></ul> | Abstract
A key step in reverse engineering neural networks is to decompose them into simpler parts that can be studied in relative isolation.
Linear parameter decomposition— a framework that has been proposed to resolve several issues with current decomposition methods—decomposes neural network parameters into a sum of sparsely used vectors in parameter space.
However, the current main method in this framework, Attribution-based Parameter Decomposition (APD), is impractical on account of its computational cost and sensitivity to hyperparameters.
In this work, we introduce Stochastic Parameter Decomposition (SPD), a method that is more scalable and robust to hyperparameters than APD, which we demonstrate by decomposing models that are slightly larger and more complex than was possible to decompose with APD.
We also show that SPD avoids other issues, such as shrinkage of the learned parameters, and better identifies ground truth mechanisms in toy models.
By bridging causal mediation analysis and network decomposition methods, this demonstration opens up new research possibilities in mechanistic interpretability by removing barriers to scaling linear parameter decomposition methods to larger models.
We release a library for running SPD and reproducing our experiments at https://github.com/goodfire-ai/spd.
Links:
* Paper: https://arxiv.org/abs/2506.20790
* Code: https://github.com/goodfire-ai/spd
* Tweet thread: https://x.com/leedsharkey/status/1938616685855941040 | 198 | 1.2.0 | Revision | false | null | null | CrosspostOutput |
|
mComqq8utkRrMbBRC | epoch-what-is-epoch | Epoch: What is Epoch? | null | false | false | false | null | 4QFiQcHgf6hvtiLqF | null | true | false | false | false | Post | https://epoch.ai/blog/what-is-epoch | 2025-06-27T16:45:00.817Z | null | false | false | 2 | 2 | null | false | false | linkpost | [] | null | null | BzbuS72FP5AjJyhGn | 1 | 10 | 33 | false | 0.886885 | null | false | false | 2025-06-27T18:03:06.071Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | EQNTWXLKMeWMp2FQS | null | null | null | false | null | [] | null | 21 | 0 | 2025-06-27T16:42:10.942Z | false | false | easy-going | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 9 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 10 | 0 | 0 | 8 | 0 | 4QFiQcHgf6hvtiLqF | zach-stein-perlman | 2021-03-16T00:04:06.541Z | Zach Stein-Perlman | Zach Stein-Perlman | null | null | Zach Stein-Perlman | 9,609 | 321 | false | false | <p>AI strategy & governance. <a href="https://ailabwatch.org">ailabwatch.org</a>. <a href="https://ailabwatch.substack.com/">ailabwatch.substack.com</a>. </p> | null | null | 82 | 620 | 1 | 2 | 17 | 1 | 12 | r38pkCm7wF4M44MDQ | User | easy-going | null | true | [
"canModeratePersonal",
"alignmentVoters",
"trustLevel1",
"alignmentForum"
] | null | null | mComqq8utkRrMbBRC | SocialPreviewType | BzbuS72FP5AjJyhGn | <p><i>Our director explains Epoch AI’s mission and how we decide our priorities. In short, we work on projects to understand the trajectory of AI, share this knowledge publicly, and inform important decisions about AI.</i></p><hr><p>Since we started Epoch three years ago, we have engaged in hundreds of projects and achieved a wide audience. Yet, one question I often get asked is, ‘What is Epoch?’</p><p>In a way, this is an easy question to answer. We are a nonprofit research organization with the mission of <strong>improving society’s understanding of the trajectory of AI</strong>. Simply put, we are doing what we can so that decisions about AI are informed by the best possible evidence.</p><p>To achieve this, we are curating data and conducting high-quality research into some of the most significant trends in AI. We share most of this work publicly, aimed at a broad audience, including AI policy experts, journalists and AI developers. Importantly, we are committed to always sharing what the data says, rather than tailoring it to fit a narrative.</p><p>We work on this mission because we believe that if we all collectively know more about AI, we will make better decisions on average. I will not agree with all the decisions that our work will inform — but I believe that we can have a smarter conversation about AI if it is grounded in data, and I am pleased with the level of success that we have achieved in this mission.</p><p>However, while helpful, this brief description misses many nuances of our culture and ethos. In this post, I will expand on what we do, why we do it, and what we are not. My goal is to let you know more about how we make decisions at Epoch, so you can better understand our motivations.</p><h2><strong>What we do</strong></h2><p>Our primary focus is on working on what we believe will be most helpful in understanding the present and future of AI. We are committed to sharing this knowledge publicly, because we think doing so will help inform important decisions society will make in coming years regarding AI.</p><p>AI is a rapidly evolving field, and our mission requires us to quickly adapt to trends and opportunities. This naturally leads us to change our focus fairly often, making it hard for outsiders to understand our priorities.</p><p>Below, I talk about the projects we have chosen to work on. This is not meant to be an update on our projects – instead, I want to focus on <i>why</i> we chose to work on what we did. This will give you a better understanding... </p> | Our director explains Epoch AI’s mission and how we decide our priorities. In short, we work on projects to understand the trajectory of AI, share this knowledge publicly, and inform important decisions about AI.
----------------------------------------
Since we started Epoch three years ago, we have engaged in hundreds of projects and achieved a wide audience. Yet, one question I often get asked is, ‘What is Epoch?’
In a way, this is an easy question to answer. We are a nonprofit research organization with the mission of improving society’s understanding of the trajectory of AI. Simply put, we are doing what we can so that decisions about AI are informed by the best possible evidence.
To achieve this, we are curating data and conducting high-quality research into some of the most significant trends in AI. We share most of this work publicly, aimed at a broad audience, including AI policy experts, journalists and AI developers. Importantly, we are committed to always sharing what the data says, rather than tailoring it to fit a narrative.
We work on this mission because we believe that if we all collectively know more about AI, we will make better decisions on average. I will not agree with all the decisions that our work will inform — but I believe that we can have a smarter conversation about AI if it is grounded in data, and I am pleased with the level of success that we have achieved in this mission.
However, while helpful, this brief description misses many nuances of our culture and ethos. In this post, I will expand on what we do, why we do it, and what we are not. My goal is to let you know more about how we make decisions at Epoch, so you can better understand our motivations.
What we do
Our primary focus is on working on what we believe will be most helpful in understanding the present and future of AI. We are committed to sharing this knowledge publicly, because we think doing so will help inform important decisions society will make in coming yea | 2,298 | 1.1.0 | Revision | false | null | null | CrosspostOutput |
|
QYzofMbzmbgiwfqy8 | unlearning-needs-to-be-more-selective-progress-report | Unlearning Needs to be More Selective [Progress Report] | null | false | false | false | null | zZHEC9XsJyoRGMwEh | [
{
"__typename": "CoauthorStatusOutput",
"confirmed": true,
"requested": false,
"userId": "RfGW9oKnvhjwTcCWG"
},
{
"__typename": "CoauthorStatusOutput",
"confirmed": true,
"requested": false,
"userId": "yfrhcBKeWcLYbmriw"
}
] | true | false | false | false | Post | null | 2025-06-27T16:38:00.136Z | null | false | false | 2 | 2 | 2025-06-28T00:07:55.934Z | false | false | post | [] | null | null | 2ahSc2AtGbdW3PTML | 0 | 6 | 14 | false | 0.644441 | null | false | false | 2025-06-27T16:38:00.136Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | EQNTWXLKMeWMp2FQS | null | null | null | false | null | [
"zZHEC9XsJyoRGMwEh"
] | null | 7 | 0 | 2025-06-27T15:50:34.606Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [
{
"__typename": "User",
"_id": "RfGW9oKnvhjwTcCWG",
"afCommentCount": 0,
"afKarma": 0,
"afPostCount": 0,
"commentCount": 0,
"createdAt": "2025-06-21T13:27:07.254Z",
"deleted": false,
"displayName": "Yushi Yang",
"fullName": null,
"htmlBio": "",
"isAdmin": false,
"jobTitle": null,
"karma": 10,
"organization": null,
"postCount": 0,
"previousDisplayName": null,
"profileImageId": null,
"reviewedByUserId": null,
"sequenceCount": 0,
"slug": "yushi-yang",
"spamRiskScore": 0.8,
"tagRevisionCount": 0,
"username": "yushi-yang"
},
{
"__typename": "User",
"_id": "yfrhcBKeWcLYbmriw",
"afCommentCount": 0,
"afKarma": 0,
"afPostCount": 0,
"commentCount": 2,
"createdAt": "2023-03-18T23:15:31.153Z",
"deleted": false,
"displayName": "Marcel Windys",
"fullName": null,
"htmlBio": "",
"isAdmin": false,
"jobTitle": null,
"karma": 51,
"organization": null,
"postCount": 1,
"previousDisplayName": null,
"profileImageId": null,
"reviewedByUserId": "55XxDBpfKkkBPm9H8",
"sequenceCount": 0,
"slug": "marcel-windys",
"spamRiskScore": 1,
"tagRevisionCount": 0,
"username": "Marcel Windys"
}
] | 4 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "2qvnKXcawvwfjoj86",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": null,
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2025-06-12T02:27:53.496Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": null,
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Capability Scoping",
"needsReview": true,
"noindex": false,
"postCount": 2,
"score": 0,
"shortName": null,
"slug": "capability-scoping",
"suggestedAsFilter": false,
"userId": "HQufkPgeCnexQtKr6",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "KmgkrftQuX7jmjjp5",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2021-09-24T14:01:59.395Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Language Models (LLMs)",
"needsReview": false,
"noindex": false,
"postCount": 840,
"score": 9,
"shortName": null,
"slug": "language-models-llms",
"suggestedAsFilter": false,
"userId": "Sp5wM4aRAhNERd4oY",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "x42puqcgnpeNkqvRZ",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2023-10-23T17:15:52.023Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Machine Unlearning",
"needsReview": false,
"noindex": false,
"postCount": 10,
"score": 0,
"shortName": null,
"slug": "machine-unlearning",
"suggestedAsFilter": false,
"userId": "MDDhxT3EekBMYfQ8D",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 6 | 0 | 0 | 4 | 0 | zZHEC9XsJyoRGMwEh | filip-sondej | 2021-08-08T11:09:11.532Z | Filip Sondej | Filip Sondej | null | null | null | 396 | 59 | false | false | <p>github.com/filyp</p> | null | null | 9 | 51 | 0 | 4 | 1 | 1 | 0 | grecHJcgkb3KW5wnM | User | null | null | null | [
"canModeratePersonal",
"alignmentVoters"
] | null | null | QYzofMbzmbgiwfqy8 | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/QYzofMbzmbgiwfqy8/ezqstzfe2hn9qqjaeznk | SocialPreviewType | 2ahSc2AtGbdW3PTML | <h1>Summary</h1><p>We’d like to share our ongoing work on improving LLM unlearning. [<a href="https://arxiv.org/abs/2506.12484"><u>arXiv</u></a>] [<a href="https://github.com/filyp/MUDMAN"><u>github</u></a>]</p><p>There’s a myriad of approaches for unlearning, so over the past 8 months we conducted hundreds of small-scale experiments, comparing many loss functions, variants of meta-learning, various neuron or weight ablations, representation engineering and many exotic ways of constraining or augmenting backpropagation.</p><p>Almost all of these methods <i>succeed in making the forget set loss high after unlearning, </i>but (<a href="https://arxiv.org/abs/2310.03693"><u>consistent</u></a> <a href="https://arxiv.org/abs/2410.08827"><u>with</u></a> <a href="https://arxiv.org/abs/2307.15043"><u>countless</u></a> <a href="https://arxiv.org/abs/2402.16835"><u>prior</u></a> <a href="https://arxiv.org/abs/2409.18025"><u>findings</u></a>) fine-tuning attacks typically <i>restore the forget accuracy almost immediately</i>, which indicates that unwanted capabilities are not truly removed, but <a href="https://arxiv.org/abs/2401.01967"><u>merely hidden</u></a>.</p><p>However, we have noticed several trends - things which pretty reliably seem to help with attack robustness:</p><ul><li><strong>Selectivity</strong> - Unlearning should be like a precise surgery rather than a lobotomy. Most techniques try to undo the disruption caused by unlearning post-hoc by retraining on a retain set, which is costly and doesn’t catch all disruption. We found it’s <i>much</i> better to not disrupt in the first place, which we do by limiting the unlearning updates in various ways. The technique which we finally arrived at - which we dubbed Disruption Masking - simply allows only those weight updates where the unlearning gradient has the same sign as the retain set gradient.</li><li><strong>Model-Agnostic Meta-Learning </strong>(<a href="https://arxiv.org/abs/1703.03400"><u>MAML</u></a>) - This approach is already used by some unlearning techniques (<a href="https://arxiv.org/abs/2408.00761"><u>TAR</u></a>, <a href="http://mlac"><u>MLAC</u></a>), and we confirm it consistently helps with attack robustness. (We hypothesize that it helps, because during unlearning, unwanted capabilities get hidden and then they can no longer be found by backpropagation and further unlearned, but MAML re-elicits them so unlearning can continue.)</li><li><strong>Backpropagation</strong> ;) - We’ve tried many fancy exciting techniques (described in the “Failed Methods” section), but nothing quite matches the raw power of backpropagation when it comes to identifying which weights to attack. Most modifications just make it worse. The only robust improvements we found so far are selectivity and MAML mentioned above.</li></ul><p>Our method (<a href="https://arxiv.org/abs/2506.12484"><u>MUDMAN</u></a>) which combines these insights, outperforms the current state-of-the-art unlearning method (<a href="https://arxiv.org/abs/2408.00761"><u>TAR</u></a>) by 40%.</p><p>We’re sure there are much more low-hanging selectivity improvements to be picked. Since writing the MUDMAN paper we’ve alrea... </p> | Summary
We’d like to share our ongoing work on improving LLM unlearning. [arXiv] [github]
There’s a myriad of approaches for unlearning, so over the past 8 months we conducted hundreds of small-scale experiments, comparing many loss functions, variants of meta-learning, various neuron or weight ablations, representation engineering and many exotic ways of constraining or augmenting backpropagation.
Almost all of these methods succeed in making the forget set loss high after unlearning, but (consistent with countless prior findings) fine-tuning attacks typically restore the forget accuracy almost immediately, which indicates that unwanted capabilities are not truly removed, but merely hidden.
However, we have noticed several trends - things which pretty reliably seem to help with attack robustness:
* Selectivity - Unlearning should be like a precise surgery rather than a lobotomy. Most techniques try to undo the disruption caused by unlearning post-hoc by retraining on a retain set, which is costly and doesn’t catch all disruption. We found it’s much better to not disrupt in the first place, which we do by limiting the unlearning updates in various ways. The technique which we finally arrived at - which we dubbed Disruption Masking - simply allows only those weight updates where the unlearning gradient has the same sign as the retain set gradient.
* Model-Agnostic Meta-Learning (MAML) - This approach is already used by some unlearning techniques (TAR, MLAC), and we confirm it consistently helps with attack robustness. (We hypothesize that it helps, because during unlearning, unwanted capabilities get hidden and then they can no longer be found by backpropagation and further unlearned, but MAML re-elicits them so unlearning can continue.)
* Backpropagation ;) - We’ve tried many fancy exciting techniques (described in the “Failed Methods” section), but nothing quite matches the raw power of backpropagation when it comes to identifying which weights to attack. Mo | 1,013 | 1.1.1 | Revision | false | null | null | CrosspostOutput |
ainn5APCKHTFxuHKv | jankily-controlling-superintelligence | Jankily controlling superintelligence | null | false | false | true | null | dfZAq9eZxs4BB4Ji5 | null | true | false | false | false | Post | null | 2025-06-27T14:05:53.134Z | null | false | false | 2 | 2 | 2025-06-28T00:08:33.179Z | false | false | post | [] | null | null | YnHtBMoR9kdPsXWkT | 4 | 14 | 61 | false | 1.672485 | null | false | false | 2025-06-28T01:06:03.187Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | EQNTWXLKMeWMp2FQS | null | null | null | false | null | [] | null | 34 | 0 | 2025-06-26T19:23:49.939Z | false | false | easy-going | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 9 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "F5gRQdEQHzi3tQ5Ay",
"adminOnly": false,
"afBaseScore": 16,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "dfZAq9eZxs4BB4Ji5",
"displayName": "ryan_greenblatt"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 32,
"canEditUserIds": null,
"core": false,
"createdAt": "2024-01-25T23:58:34.422Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "dfZAq9eZxs4BB4Ji5",
"displayName": "ryan_greenblatt"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "6NBDkGWcCxvLgYHJE",
"displayName": "Drake Morrison"
},
{
"_id": "evFgxjNQ8TLCLN27o",
"displayName": "ank"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI Control",
"needsReview": false,
"noindex": false,
"postCount": 162,
"score": 32,
"shortName": null,
"slug": "ai-control",
"suggestedAsFilter": false,
"userId": "XchweonPm2TC7EJES",
"voteCount": 5,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 14 | 0 | 0 | 10 | 0 | dfZAq9eZxs4BB4Ji5 | ryan_greenblatt | 2021-06-08T20:21:15.520Z | ryan_greenblatt | ryan_greenblatt | null | null | Ryan Greenblatt | 17,326 | 4,414 | false | false | <p>I'm the chief scientist at Redwood Research.</p>
| null | null | 42 | 1,717 | 0 | 30 | 487 | 1 | 8 | gXeEWGjTWyqgrQTzR | User | easy-going | null | true | [
"canModeratePersonal",
"alignmentForum",
"alignmentVoters",
"trustLevel1"
] | null | null | ainn5APCKHTFxuHKv | https://res.cloudinary.com/lesswrong-2-0/image/upload/c_fill,ar_1.91,g_auto/SocialPreview/ef1qyogh9ygb9cqjt6ro | SocialPreviewType | YnHtBMoR9kdPsXWkT | <p>When discussing <a href="https://www.lesswrong.com/posts/kcKrE9mzEHrdqtDpE/the-case-for-ensuring-that-powerful-ais-are-controlled">AI control</a>, we often talk about levels of AI capabilities where we think control can probably greatly lower risks and where we can probably estimate risks. However, I think it's plausible that an important application of control is modestly improving our odds of surviving significantly superhuman systems which are misaligned. This won't involve any real certainty, it would only moderately reduce risk (if it reduces risk at all), and it may only help with avoiding immediate loss of control (rather than also helping us <a href="https://www.alignmentforum.org/posts/Mf5Hnpi2KcqZdmFDq/how-can-we-solve-diffuse-threats-like-research-sabotage-with">extract useful work</a>). In a sane world, we wouldn't be relying on control for systems which have significantly superhuman general purpose capabilities. Nevertheless, I think applying control here could be worthwhile. I'll discuss why I think this might work and how useful I think applying control in this way would be.</p><p>As capabilities advance, the level of risk increases if we hold control measures fixed. There are a few different mechanisms that cause this, as we discussed <a href="https://www.lesswrong.com/posts/NDotm7oLHfR56g4sD/why-do-misalignment-risks-increase-as-ais-get-more-capable">here</a>. It's more likely that more capable models are schemers, and people will want to deploy models with increasingly broad affordances by default. And because more capable AIs can more easily cause problems and acquire power in undesired ways, the risk<sup class="footnote-ref"><a href="#fn-CQpsTFMiqiqSEdKSo-1" id="fnref-CQpsTFMiqiqSEdKSo-1">[1]</a></sup> conditional on scheming increases, though it's unclear how fast this risk increases with capabilities.</p><p>Additionally, we're substantially uncertain about the general difficulty of control. For instance, it seems plausible that preventing the worst <a href="https://www.lesswrong.com/posts/fCazYoZSSMadiT6sf/prioritizing-threats-for-ai-control#Some_clusters_of_threats">security failures</a> (e.g., <a href="https://www.lesswrong.com/posts/ceBpLHJDdCt3xfEok/ai-catastrophes-and-rogue-deployments">rogue deployments</a>) is reasonably easy such that relatively basic measures make it very difficult for AIs which are around as capable as top human experts. In worlds where control ends up being easier, it seems plausible that this extends the viability of control out into substantially superhuman models. Of course, it's also possible that control is easy for around-human-level AIs but the difficulty of control rapidly increases into impossibility with only moderately superhuman general capabilities.</p><p>Beyond uncertainty about the difficulty of control, the efficacy of control might vary greatly between different applications of AI and different hazards. In particular, it seems like it might be much easier to prevent significantly superhuman AIs from being able to cause the worst security failures (given substantial investment into control measures i... </p> | When discussing AI control, we often talk about levels of AI capabilities where we think control can probably greatly lower risks and where we can probably estimate risks. However, I think it's plausible that an important application of control is modestly improving our odds of surviving significantly superhuman systems which are misaligned. This won't involve any real certainty, it would only moderately reduce risk (if it reduces risk at all), and it may only help with avoiding immediate loss of control (rather than also helping us extract useful work). In a sane world, we wouldn't be relying on control for systems which have significantly superhuman general purpose capabilities. Nevertheless, I think applying control here could be worthwhile. I'll discuss why I think this might work and how useful I think applying control in this way would be.
As capabilities advance, the level of risk increases if we hold control measures fixed. There are a few different mechanisms that cause this, as we discussed here. It's more likely that more capable models are schemers, and people will want to deploy models with increasingly broad affordances by default. And because more capable AIs can more easily cause problems and acquire power in undesired ways, the risk[1] conditional on scheming increases, though it's unclear how fast this risk increases with capabilities.
Additionally, we're substantially uncertain about the general difficulty of control. For instance, it seems plausible that preventing the worst security failures (e.g., rogue deployments) is reasonably easy such that relatively basic measures make it very difficult for AIs which are around as capable as top human experts. In worlds where control ends up being easier, it seems plausible that this extends the viability of control out into substantially superhuman models. Of course, it's also possible that control is easy for around-human-level AIs but the difficulty of control rapidly increases into impossibility wit | 2,146 | 1.8.0 | Revision | false | null | null | CrosspostOutput |
3pHL3w6mQdwssFC97 | childhood-and-education-11-the-art-of-learning | Childhood and Education #11: The Art of Learning | null | false | false | false | null | N9zj5qpTfqmbn9dro | null | true | false | false | false | Post | null | 2025-06-27T13:50:01.418Z | null | false | false | 2 | 2 | 2025-06-28T00:07:58.461Z | false | false | post | [] | null | null | qDEhvsZphavzbqHii | 6 | 9 | 37 | false | 1.082116 | null | false | false | 2025-06-28T13:49:29.524Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | null | null | EQNTWXLKMeWMp2FQS | null | null | null | false | null | [] | null | 12 | 0 | 2025-06-27T13:50:01.419Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 15 | null | null | null | null | [] | QSR8rPZxZzxEXoPjR | 0 | 0 | null | false | null | null | 0 | 9 | 0 | 0 | 4 | 0 | N9zj5qpTfqmbn9dro | zvi | 2009-03-31T20:54:54.077Z | Zvi | Zvi | null | null | null | 51,554 | 146 | false | false | null | null | 936 | 1,461 | 3 | 2 | 7 | 1 | 0 | qgdGA4ZEyW7zNdK84 | User | norm-enforcing | null | true | [
"trustLevel1",
"canModeratePersonal",
"alignmentVoters",
"alignmentForum"
] | null | null | 3pHL3w6mQdwssFC97 | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/3pHL3w6mQdwssFC97/e00qe11e1gsreutybiga | SocialPreviewType | qDEhvsZphavzbqHii | In honor of the latest (always deeply, deeply unpopular) attempts to destroy tracking and gifted and talented programs, and other attempts to get children to actually learn things, I thought it a good time to compile a number of related items.
<h4>Table of Contents</h4>
<ol>
<li><a href="https://thezvi.substack.com/i/166603368/lack-of-tracking-hurts-actual-everyone">Lack Of Tracking Hurts Actual Everyone.</a></li>
<li><a href="https://thezvi.substack.com/i/166603368/not-tracking-especially-hurts-those-who-are-struggling">Not Tracking Especially Hurts Those Who Are Struggling.</a></li>
<li><a href="https://thezvi.substack.com/i/166603368/no-child-left-behind-left-behind">No Child Left Behind Left Behind.</a></li>
<li><a href="https://thezvi.substack.com/i/166603368/read-early-read-often">Read Early, Read Often.</a></li>
<li><a href="https://thezvi.substack.com/i/166603368/mirror-mirror">Mirror, Mirror.</a></li>
<li><a href="https://thezvi.substack.com/i/166603368/spaced-repetition">Spaced Repetition.</a></li>
<li><a href="https://thezvi.substack.com/i/166603368/learning-methods">Learning Methods.</a></li>
<li><a href="https://thezvi.substack.com/i/166603368/interruptions">Interruptions.</a></li>
<li><a href="https://thezvi.substack.com/i/166603368/memorization">Memorization.</a></li>
<li><a href="https://thezvi.substack.com/i/166603368/math-is-hard">Math is Hard.</a></li>
<li><a href="https://thezvi.substack.com/i/166603368/get-to-work">Get to Work.</a></li>
<li><a href="https://thezvi.substack.com/i/166603368/the-whiz-kids">The Whiz Kids.</a></li>
<li><a href="https://thezvi.substack.com/i/166603368/high-school-does-not-seem-to-teach-kids-much">High School Does Not Seem To Teach Kids Much.</a></li>
<li><a href="https://thezvi.substack.com/i/166603368/two-kinds-of-essays">Two Kinds of Essays.</a></li>
</ol>
<h4>Lack Of Tracking Hurts Actual Everyone</h4>
Gifted programs and educational tracking are also super duper popular, it is remarkably absurd that our political process cannot prevent these programs from being destroyed.
<div> <span id="more-24542"></span> </div>
As in <a href="https://x.com/garrytan/status/1928319250323656839">things like this keep happening</a>:
<blockquote>NY Post: Seattle Public Schools shuts down gifted and talented program for being oversaturated with white and asian students.</blockquote>
Once again, now, we face that threat in my home of New York City. The Democratic nominee for New York City Mayor is opposed to gifted and talented programs, and wants to destroy them. Yet few people seem to have much noticed, or decided to much care. Once people realize the danger it may well be too late.
To state the obvious, <a href="https://www.the74million.org/article/we-started-grouping-students-by-reading-ability-vs-grade-heres-what-happened/">if you group children by ability in each subject rather than age, they learn better</a>. Yes, there are logistical concerns, but the benefits are immense. Gifted programs are great but mostly seem like a patch to the fact that we are so obsessed with everyone in the room being in the same ‘grade’ at all times.
I agree with Tracing Woods that ‘teaches each according to their ability’ is the bare minimum before I will believe that your institution is making a real attempt to educate children.
<blockquote><a href="https://x.com/tracewoodgrains/status/1934991944460976423">Tracing Woods</a>: A good example of the absurdity of “grade-level” thinking, from Texas: “If they’re reading on a second-grade level, but they’re in the third grade, they’re always going to receive that third-grade instruction.”
This makes no sense. Learning does not simply follow age.
<a href="https://x.com/tracewoodgrains/status/1931806524957184193">Imagine having a “grade level” in chess</a>. 9-year-olds in the third grade advancing to play against 1100 elo players. 100 more elo per year.
“Grade-level performance” has always been nonsensical. Learning does not work that way. Just figure out what people actually know</blockquote>... | In honor of the latest (always deeply, deeply unpopular) attempts to destroy tracking and gifted and talented programs, and other attempts to get children to actually learn things, I thought it a good time to compile a number of related items.
TABLE OF CONTENTS
1. Lack Of Tracking Hurts Actual Everyone.
2. Not Tracking Especially Hurts Those Who Are Struggling.
3. No Child Left Behind Left Behind.
4. Read Early, Read Often.
5. Mirror, Mirror.
6. Spaced Repetition.
7. Learning Methods.
8. Interruptions.
9. Memorization.
10. Math is Hard.
11. Get to Work.
12. The Whiz Kids.
13. High School Does Not Seem To Teach Kids Much.
14. Two Kinds of Essays.
LACK OF TRACKING HURTS ACTUAL EVERYONE
Gifted programs and educational tracking are also super duper popular, it is remarkably absurd that our political process cannot prevent these programs from being destroyed.
As in things like this keep happening:
> NY Post: Seattle Public Schools shuts down gifted and talented program for being oversaturated with white and asian students.
Once again, now, we face that threat in my home of New York City. The Democratic nominee for New York City Mayor is opposed to gifted and talented programs, and wants to destroy them. Yet few people seem to have much noticed, or decided to much care. Once people realize the danger it may well be too late. To state the obvious, if you group children by ability in each subject rather than age, they learn better. Yes, there are logistical concerns, but the benefits are immense. Gifted programs are great but mostly seem like a patch to the fact that we are so obsessed with everyone in the room being in the same ‘grade’ at all times. I agree with Tracing Woods that ‘teaches each according to their ability’ is the bare minimum before I will believe that your institution is making a real attempt to educate children.
> Tracing Woods: A good example of the absurdity of “grade-level” thinking, from Texas: “If they’re reading on a s | 3,700 | 1.0.1 | Revision | false | null | null | CrosspostOutput |
End of preview. Expand
in Data Studio
Scrape of LessWrong posts (no comments) spanning from 2007-06-22 to 2025-06-28
- Downloads last month
- 0